\section{Introduction} Quantitative modeling and simulation in systems biology is often hampered by the lack of reaction rates, whose determination requires repetitive, time-resolved measurements in the wet lab. Wet-lab experiments are extremely costly with respect to facilities and time, and one estimated rate often corresponds to an entire doctoral project \cite{Takahashi:2008}. Thus, it is not surprising that the idea of calculating reaction rates from ``first principles'', and hence to forego the need for wet-lab experiments, is quite attractive. \\ Every reaction consists of a diffusion-controlled binding event and the subsequent chemical reaction, accompanied by conformational changes. In the case of diffusion-controlled reactions, i.e.,\xspace reactions in which the reaction process is faster than the diffusional encounter of the particles, the association is the rate-limiting step. Therefore, the estimation of the diffusional association (DA) rate is the first and most important step in determining the rate of a chemical reaction. \\ A large portfolio of stochastic \cite{Northrup:1984, Wade:1993, Berry:2002} and continuum models \cite{Chan:1984, Smart:1998} has been developed and implemented in several software packages \cite{Briggs:1995, Gabdoulline:1997, Song:2004}. However, in these approaches model, simulator, and experiment are closely intertwined, as they have been developed with an emphasis on execution efficiency rather than separation of concerns. As a consequence, existing methods can hardly be related or even compared to each other. This hampers reuse and extensions. It has been shown that the efficiency of simulators depends on the model \cite{Jeschke2008}. Consequently, the development of different simulators for the same model and their configuration on demand is one source of efficiency \cite{Ewald:2009b}, which in the above approaches cannot be exploited easily. Furthermore, an impartial validation of the models is impossible, since the lack of separation between model and simulator does not allow experiments with different simulation engines in order to identify the valid ones.\\ The scope of this work is to present the software architecture FADA\xspace (Flexible Architecture for Diffusional Association), implemented in JAMES II (JAva-based Multipurpose Environment for Simulation) \cite{Himmelspach2007}. FADA\xspace unites different models and simulation engines for the computation of diffusional association processes and is further structured in terms of modeling, simulation, and experiment settings. By exploiting the plug-in architecture of JAMES II, we can incorporate independent implementations of those components. Our aim is to include independent model components for the representation of particles, interparticle forces, and motion functions, which allows the flexible parametrization of the model for a repeatable and comparable execution in the context of an experiment. At the same time, the architecture supports simulation engines that are independent of the parametrization of the model. The simulation engines themselves may also contain independent components, like random number generators, parallelizations, etc., which can be exchanged or extended. FADA\xspace further allows experimentation with the created models by exploiting existing methods as well as allowing the implementation of new methods for experiment design and analysis.
\\ Based on the simple stochastic NAM (Northrup-Allison-McCammon) method \cite{Northrup:1984}, we present a formal description of a basic model for stochastic modeling of association processes. Thereby, the SpacePi-Calculus \cite{John2008} serves as description language. Using the presented toy model, we show an exemplary workflow for a validation experiment and discuss its most important steps. Finally, we present first results and give an outlook to our future experiments. \section{Brownian Dynamics in Diffusional Association} Bimolecular binding plays a central role in numerous biological processes, ranging from ligand binding and protein-protein encounter to signal transmission at synaptic junctions. The bimolecular binding process can be divided into two parts: the diffusional association to form an encounter complex, and the subsequent formation of the fully bound complex by nondiffusional rearrangements. The encounter complex is considered to be an ensemble of possible conformations rather than a well-defined (transition) state \cite{Berg:1985}. Due to the relatively slow diffusion of macromolecules in solution, the potential reaction partners stay in close vicinity once they have met and thus undergo several microcollisions before diffusing apart. By this means the molecules' reactive groups can be properly aligned, such that subsequent conformational changes and eventually the desired chemical reaction may occur. As the rate of diffusional association marks an upper limit for the binding process, we consider the diffusional encounter of two particles, i.e.,\xspace the estimation of bimolecular association rates, in the first place.\\ Brownian dynamics (BD) simulations are the most common approach to simulate the diffusional encounter. BD describes the diffusive motion of particles in solution whose mass is significantly larger than that of the surrounding solvent molecules. Thereby, the molecules may still be represented on an accurate atomic level (including interparticle forces), but the internal dynamics of the molecule are disregarded and the surrounding solvent is only implicitly taken into account by stochastic forces and friction. As a result of these simplifications, the resolution of macromolecular motion is reduced to $\sim 1$~{\AA}. This allows BD to perform larger time steps ($\sim$1 picosecond for proteins under normal viscosity conditions) compared to molecular dynamics (MD) simulations, while the resolution is still detailed enough for the simulation of molecular binding processes. \\ In order to compute the actual rate of diffusional association, the Smoluchowski diffusion equation \cite{Smoluchowski:1917} needs to be solved. This equation describes the time evolution of the probability density function of a particle undergoing Brownian movement in a force field. Due to the large variety of simultaneous interactions, the diffusion equation can usually only be solved analytically for systems with simple geometry and rudimentary modeling of interparticle forces. This is why most DA methods are based on the work of Northrup, Allison and McCammon \cite{Northrup:1984}, who established the connection between BD simulations and the estimation of association rates. \\ With increasing computational capabilities, there have recently also been successful attempts to use continuum models for the estimation of association rates \cite{Song:2004}. However, the accurate modeling of diffusion-controlled processes in the context of realistic biomolecular systems is still challenging.
The incorporation of the effects of ionic strength, protein charges, and steric repulsions has proven to be the rate-limiting step for most of the existing approaches. Therefore, it is important to find simple but accurate model descriptions for these interparticle forces and effective algorithms for their evaluation. This applies to both continuum and particle-based methods. \subsection{State of the art} \label{sec:stateoftheart} One of the most common setups for the simulation of biomolecular diffusional association is incorporated in the NAM (Northrup-Allison-McCammon) method \cite{Northrup:1984}. The basic setup for the NAM method is rather simple, as illustrated in figure \ref{fig:NAMmethod}. We consider two particles of arbitrary shape in three-dimensional space. For the sake of simplicity, the movement of both particles is modeled as relative motion. This means that one of the particles (named particle A) is fixed in the center of the coordinate system, while the other particle (named particle B) moves relative to it. \begin{figure}[htb!] \centering \includegraphics[width=9cm]{figs/NAMmethod.pdf} \caption{Schematic representation of the NAM method. Particle B associates with particle A at the active site while particle B' diffuses into infinite separation.} \label{fig:NAMmethod} \end{figure} The space around the fixed particle is divided by a spherical surface of radius $b$ into an outer region $r > b$ and an inner region $r < b$. \\ In order to estimate the rate of the diffusional association \textit{k}, one usually simulates several thousands of BD trajectories and measures the fraction of ``reactive'' trajectories. The rate of association is then computed as the product of the analytically computed rate $k_D(b)$ and the probability $\beta^\infty$ estimated from simulations: \begin{align} k = k_D (b) \beta^\infty . \end{align} The rate $k_D(b)$ is the steady-state rate at which particles with distance $r>b$ collide with the spherical surface at $r=b$. It is defined as \begin{align} k_D(b) = 4\pi \left[ \int_b^\infty \frac{\exp(E(r)/k_BT)}{r^2D} \,\mathrm{d}r \right]^{-1} \text{,} \end{align} where $E(r)$ is the centrosymmetric interaction energy between the particles depending on their separation distance $r$. If the particles are noninteracting spheres, i.e., no forces or hydrodynamic interactions exist for $r>b$, $k_D(b)$ is given by the Smoluchowski result: \begin{align} k_D(b) = 4\pi D_0 b, \label{eq:smoluchowski} \end{align} with $D_0$ being the standard Smoluchowski mutual diffusion coefficient. The quantity $\beta^\infty$, on the other hand, is the probability that particles reaching a separation $b$ will subsequently have at least one collision with the fixed particle rather than escape to infinity. $\beta^\infty$ is usually given by \begin{align*} \beta^\infty = \frac{\beta}{1-(1-\beta)\Omega}, \quad \text{with} \quad \Omega = \frac{k_D(b)}{k_D(q)}, \end{align*} where $\beta$ is the fraction of simulated trajectories, started at $r=b$, that lead to a collision before reaching the truncation distance $r=q$. Although the development of the NAM method dates back to the 1980s, most methods in use today are still derived from this simple method. However, large improvements have been made in adequately modeling the interparticle forces and in optimizing, i.e., steering, the simulation of the BD trajectories.
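For noninteracting spheres, the complete rate computation reduces to a few lines. The following Python sketch (our own illustration, not part of FADA\xspace; the function names are hypothetical) evaluates $k = k_D(b)\beta^\infty$ using the Smoluchowski result of Equation~(\ref{eq:smoluchowski}):
\begin{verbatim}
import math

def k_D(D0, r):
    # Smoluchowski steady-state rate for noninteracting spheres
    return 4.0 * math.pi * D0 * r

def nam_rate(D0, b, q, beta):
    # beta: measured fraction of reactive BD trajectories started at r = b
    omega = k_D(D0, b) / k_D(D0, q)               # reduces to b / q
    beta_inf = beta / (1.0 - (1.0 - beta) * omega)
    return k_D(D0, b) * beta_inf
\end{verbatim}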
\paragraph{Modeling of Force Fields} Molecules carry partial charges which generate an electrostatic potential at the molecule's surface. As soon as two molecules come closer than a certain distance, their diffusional movement becomes biased by the forces that their electric charges exert on each other. \\ Such intermolecular forces strongly influence the speed of association. In diffusional association these forces are typically given by the sum of the electrostatic and exclusion forces. Electrostatic interactions and hydrophobic desolvation typically speed up the association, whereas charge desolvation (i.e.,\xspace a penalty for bringing a charge close to a low dielectric molecular cavity) and steric repulsion terms lower the speed of association. As a result, the association rate of different pairs of molecules may range between $10^3$ and $10^9~\mathrm{M}^{-1}\mathrm{s}^{-1}$. \\ Electrostatic interactions are typically computed via the Poisson-Boltzmann equation (PBE). Several numerical solvers exist that tackle the PBE even for larger systems \cite{Davis:1989}. However, such numerical methods are still too time consuming to be applied in each of the millions of steps of a BD trajectory. The search for a simple but accurate approximation of intermolecular forces is therefore one of the most challenging problems in the modeling and simulation of diffusional association. \\ A variety of different electrostatic models have been used in BD simulations. They mainly vary in the level of detail employed in the modeling of the charge distribution and the approximation used to represent the electrostatic interactions of the system. \\ One of the most basic methods is the test charge method. Here the electrostatic potential is computed for the fixed protein (particle A) on a grid. The second molecule (particle B) is considered as a collection of point charges in the solvent dielectric that moves on the potential grid of the fixed protein. The low dielectric and ion exclusion volume of the second molecule are ignored. Although the method is rather coarse, it has been the standard method for a long time. \\ More accurate electrostatic models, with almost the same computational effort, have been proposed by Gabdoulline and Wade \cite{Gabdoulline:1996} and by Beard and Schlick \cite{Beard:2001}. Gabdoulline and Wade extended the standard ``test-charge'' approach to the effective charge model. Here the electrostatic potential is first calculated by a numerical solution of the PBE for each molecule separately. Afterwards, the molecules are parameterized by fitting effective charges on the molecules such that the previously calculated external potential is fully represented. This has the advantage of yielding a simple but rather realistic electrostatic model of the molecules, while solving the PBE numerically only once. Beard and Schlick have developed a similar approach to reproduce the electric field of the PBE. They parameterize the molecule by effective surface charges and apply this approximation in a Debye-H\"uckel model.\\ The introduction of a charge desolvation term brought further progress in the development of modeling techniques for DA. The charge desolvation term describes the penalty due to bringing a charge close to a low dielectric molecular cavity. A combination of the effective charges approach, the charge desolvation term, and an appropriate reaction criterion has been used as a single protocol in order to simulate the association of several protein pairs with the same model \cite{Gabdoulline:2001}. \paragraph{Sampling} Many molecular systems contain energy barriers that have to be crossed before the particles can associate.
Such energy barriers are, e.g., one or several unfavorable states with respect to steric repulsion or electrostatic forces. In these cases the fraction of reactive trajectories is very small. Consequently, one has to perform an infeasible number of BD trajectories until a converged association rate is reached. \\ With weighted-ensemble Brownian dynamics (WEB) \cite{Huber:1996}, Huber and Kim adapted the NAM method to overcome reaction barriers. Instead of modeling the association of a single pair of particles, the WEB method replaces the moving particle of the NAM method by an ensemble of weighted pseudo-particles and divides the configuration space into a number of bins along the reaction coordinate. \\ Each bin is assigned a probability based on a Boltzmann distribution and initially filled with a number of weighted particles. During simulation the pseudo-particles are split and combined in order to sample the bins, i.e., all parts of the configuration space, equally. WEB produces results consistent with the NAM approach. For fast associating processes, WEB needs approximately the same computational time as the NAM method. However, for slow associating processes, WEB gains a 3-fold speedup in comparison to the NAM method \cite{Rojnuckarin:2000}. \section{The structure of FADA} In this section, we will introduce the Flexible Architecture for Diffusional Association (FADA\xspace), which has been implemented in Java. To this end, we will summarize the requirements for FADA\xspace resulting from the state of the art of Brownian dynamics modeling described in the previous section. Based on that, we will explain the interplay of the architecture's basic components, while the following two sections will go into more detail about the components themselves. \subsection{Requirements} In order to create a flexible architecture that allows the implementation of a variety of DA methods, we need a basic model capturing the biological setting of association processes for all possible implementations. Such a model needs to describe two or more molecules of some shape and certain properties in solution. These molecules move under constraints. Thus, their motion has to be modeled by a certain movement function, which defines velocity and direction of the movements with respect to the given constraints. For molecular systems these constraints mainly refer to the potential of mean force, which incorporates the ionic strength of the solvent as well as intrinsic interactions between the particles. \\ To summarize, the general model of DA consists of three main components: the Particle Model (Individual), the Movement Model, and the Potential of Mean Force Model. These components have to be implemented by every DA approach and thus form the basis of our DA model. \\ For each DA approach, these components are refined by specifying the mathematical model in terms of rules and parameters. A language should be available to specify models in a user-friendly way. A specific simulator will then be responsible for the execution. It should be possible to apply different simulator components that allow different execution strategies. This is of great use for model validation (since biased components can be identified \cite{Leye2010}) and it also provides the possibility of automatically choosing the best algorithm (simulator) for the given simulation model \cite{Ewald:2009b}.
Consequently, a flexible experimentation workflow is required that allows the configuration of model and simulator, as well as of experiment design and analysis methods, in order to exploit, extend, and compare existing techniques. \subsection{Basic components of FADA\xspace} The key point of FADA\xspace's design is the clear distinction between model, simulator, and experiment. It follows the philosophy of JAMES II\xspace \cite{Himmelspach2007, Himmelspach2008}, whose plug-in structure is the foundation for FADA\xspace. Hence, for each of the three steps different components and methods can be created, allowing a flexible combination and application of them (see Figure \ref{fig:FADA}). \begin{figure}[htb] \centering \includegraphics[width=10cm]{figs/FADA.pdf} \caption{The structure of FADA\xspace.} \label{fig:FADA} \end{figure} However, since the area of application for FADA\xspace is the DA process, some of these components have to follow a specific structure. A model can be specified with the SpacePi-Calculus (see Section \ref{modeling}). It comprises two basic components: Individual, to describe particles, and MotionModel, to describe their movement. A PMFModel component, describing additional properties of the particles, is under development (see Section \ref{outlook}). Accordingly, simulation engines executing those models can be created. The simulator components may be exchanged and extended by exploiting the plug-in architecture. So far, a DA simulator comprises components for the generation of random numbers, for collision detection, and for stepsize control if the collision detection works in a time-stepped manner. Validation experiments can be configured and executed using the validation environment FAMVal\xspace \cite{Leye2010}. Those experiments follow six basic steps (see Section \ref{famval}). For each step, methods and techniques may be created, extended, or exchanged (again by exploiting the plug-in architecture) in order to allow a flexible configuration of experiments. \section{Modeling and simulation with FADA} \label{modeling} In the following we will specify the NAM method in SpacePi \cite{John2008}, thereby showing the exemplary definition of a simulation model and illustrating the functioning of FADA. \\ SpacePi is an extension of the $\pi$-Calculus \cite{Milner:1999} that incorporates time and space. The $\pi$-Calculus allows the definition of multiple parallel processes and their interactions. However, in contrast to standard $\pi$, SpacePi allows a position (in two- or three-dimensional space) to be assigned to a process. Furthermore, a movement function can be assigned to each process, describing its spatial motion. Constraints can be defined such that the possible interactions of the processes depend on the spatial configuration of the system, i.e.,\xspace the positions, or more precisely the relative distance, of the two particles. SpacePi thus provides all important features that we need to specify the NAM model. \subsection{Model Specification for Association Processes} \label{sec:model} Before the NAM model can be specified in SpacePi, it is necessary to introduce the basic notations of the SpacePi formalism. SpacePi extends $\pi$ processes with a notion of space to allow free movement of the processes. Each process $P$ is associated with a certain position $\overrightarrow{p} \in V$, where $V \subseteq \mathbb{R}^d$ is a vector space with norm $\|\cdot \|_V$. The modeling of movements demands the inclusion of time, as the speed of the motion cannot be expressed otherwise.
In SpacePi, time is expressed by intervals ($\delta_t$) and communication takes place during these intervals. Therefore, a single rule is introduced to ensure the progress from one time interval to the next, while all other rules define the activities taking place during each interval. During a time interval the velocity and direction of motion are constant. Thus, the movement function reads $m: V \times X \rightarrow V$; it takes the current position of the process and a set of additional parameters $X$ and calculates the new position with respect to $X$. Note that $X$, as well as a function for selecting parameters $\chi \in X$, needs to be provided by the model. The movement function generates a target vector of the process. The target vector is added to the current position in order to obtain the target position $\overrightarrow{t} = \overrightarrow{p} + m(\overrightarrow{p}, \chi)$, which is the position reached after the time interval $\delta_t$. \\ Based on the previous definitions, a process can be associated with its current position and a movement function $m$. We thus write a process in SpacePi as $P_m^{\overrightarrow p}$. The formal description of a SpacePi process is as follows: \begin{align*} P_m^{\overrightarrow p}~::=~\sum_{i\in I}~ \pi_i~ .~ P_{i,~m_i}^{\overrightarrow p_i}~\mid~P_{1,~m_1}^{\overrightarrow p_1}~ \mid ~P_{2,~ m_2}^{\overrightarrow p_2} ~ \mid ~ (new~x) P_{m}^{\overrightarrow p} ~ \mid ~ nil \end{align*} Nil is a shorthand for $nil_{m(\overrightarrow 0, \chi)}^{\overrightarrow 0}$ and describes an empty process without movement and a position at the point of origin of $V$. \\ The action $\pi$ refers to a channel through which the concurrent processes may communicate. Sending and receiving $x$ over the channel $ch$ with radius $r$ is denoted by $ch !(x,r)$ and $ch ? (x,r)$, respectively. Two processes may only communicate if their distance is smaller than the given action radius $r$. A communication action with $r=\infty$ resembles the semantics of the $\pi$-Calculus, i.e., the channel has no spatial restriction. Based on the given notations of the SpacePi formalism, we can now specify the NAM model. Our system consists of two Brownian particles moving in $\mathbb R^3$. We define the movement function as $mov(P_i, pos, t) = pos^t$ with $pos^t(P_i) = pos^{t-1}(P_i) + S$, with $S$ being an $\mathbb{R}^3$-valued random variable drawn from a normal distribution with mean $\langle S \rangle = 0$ and variance $\langle S^2 \rangle = 2D\delta_t$. $D$ is the diffusion coefficient, which has to be provided by the parametrization of the model. The movement function resembles the displacement of a Brownian particle based on the Ermak-McCammon equation \cite{Ermak:1978} in the absence of interparticle forces. There are two events that we want to monitor: either the two particles react, i.e., collide, or they diffuse into infinite separation.\\ The specification of the model in SpacePi is shown in figure \ref{fig:example}. Please note that no potential of mean force model is specified. The model consists of three different processes $FixedParticle$, $MovingParticle$ and $ExitParticle$. The first two processes describe the two particles under consideration. Since we model the movement of both particles as relative motion, only the $MovingParticle$ is associated with a (Brownian) movement function ($bMove$). The process $FixedParticle$ is fixed in the center of the coordinate system and does not move.
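To illustrate the movement function just defined, the following Python sketch (our own illustration, not FADA\xspace code; names are hypothetical) draws one Brownian displacement per time interval and already includes the jump to the exit position that realizes the escape event discussed below:
\begin{verbatim}
import numpy as np

def b_move(pos, D, dt, q, pos_exit, rng):
    # One Brownian step of MovingParticle in relative coordinates.
    # Each component of the displacement S is drawn from a normal
    # distribution with mean 0 and variance 2*D*dt (Ermak-McCammon
    # without interparticle forces).
    step = rng.normal(0.0, np.sqrt(2.0 * D * dt), size=3)
    new_pos = pos + step
    if np.linalg.norm(new_pos) >= q:
        # escape: place the particle at the position of ExitParticle
        return np.array(pos_exit)
    return new_pos
\end{verbatim}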
The expression $coll?(\sim, r)$ describes the collision event, which occurs as soon as the two particles reach a distance smaller than the given action radius $r$. Thereby the channel's name, the radius of the complementary channel actions, and the location of the processes are taken into account. After the collision, i.e., the reaction event, the processes terminate. The third process $ExitParticle$ is an auxiliary process that allows us to realize the escape event. \begin{figure}[h!] \small \ovalbox{ \cornersize{0} \begin{minipage}{\textwidth} \vspace{4pt} \textbf{Position declarations}\\ $pos_F ~:=~ x = 0 ~ \wedge ~ y = 0 ~ \wedge ~ z = 0$ \\ $pos_M ~:=~ rand(x,y,z) ~ s.t. ~ (x^2 + y^2 + z^2) = b$ \\ $pos_E ~:=~ rand(x,y,z) ~ s.t. ~ (x^2 + y^2 + z^2) = q$ \\ \newline \textbf{Radius declarations}\\ $r_{react}$ = 10 \\ \newline \textbf{Potential of mean force declarations} \\ $f_{pmf}: \text{not defined}$ \\ \newline \textbf{Motion declarations}\\ bMove(): $ \dot{x}^2 + \dot{y}^2 + \dot{z}^2 < q, \quad x = pos_E(x) ~ \wedge ~ y = pos_E(y) ~ \wedge ~ z = pos_E(z)\ \text{otherwise} $\\ \newline \textbf{Process definitions} \\ $ FixedParticle = coll? (\sim, r).0 $ \\ $ MovingParticle[bMove] = coll! (\sim, r).0$ \\ $ ExitParticle = coll? (\sim, r).0 $\\ \newline \textbf{Initial process} \\ $ FixedParticle ~ | ~ MovingParticle ~ | ~ ExitParticle$ \newline \end{minipage} } \caption{\ A simplistic SpacePi model based on the NAM model for diffusional association of two particles. An expression $ch?(\sim, r)$ denotes that an empty message is to be received on channel ch with radius r.} \label{fig:example} \end{figure} In order to model the escape of the moving particle, the NAM model defines a separation distance $q$ at which the two particles have diffused into infinite separation. As neither the $\pi$- nor the SpacePi-Calculus offers a way to represent boundaries, we incorporate the distance constraint into the movement function. The idea is to place the $ExitParticle$ outside of the q-surface and change the movement function of the $MovingParticle$ such that $MovingParticle$ will collide with $ExitParticle$ as soon as it reaches the separation distance $q$. We thus extend the movement of the process $MovingParticle$ as follows: $mov(P_M, pos, \delta_t) = pos^t$ with $pos^t(P_M) = pos^{t-1}(P_M) + S$ if $\|pos^{t-1}(P_M) + S\| < q$, and $pos^t(P_M) = pos(P_E)$ otherwise. Thereby $P_M$ stands for $MovingParticle$ and $P_E$ for $ExitParticle$. Due to the direct displacement of $MovingParticle$ to the position of $ExitParticle$, the two particles collide instantaneously when $MovingParticle$ has passed the distance $q$. The collision of both processes $coll?(\sim, r)$ is similar to the collision action of $FixedParticle$ and $MovingParticle$. After this collision, i.e., the escape event, the processes also terminate. Although the specified model is far from representing a realistic biological association process (for this, a potential of mean force model has to be included), it serves as a good example to present the general idea of how association processes can be expressed in a modeling formalism, evaluated by a simulator, and finally validated through validation experiments. Furthermore, the simplicity of the model allows us to calculate an accurate analytical solution against which to validate our approach. \subsection{Simulation of the model} The simulator evaluates a parameterized instance of the described simulation model.
For the simulation of the trajectory we use a simple step simulator, which calculates the displacement of the moving particle in each time step and checks for collision events. Therefore, the simulator needs to implement a Random Number Generator (RNG) and a collision detection algorithm, which are realized as plug-ins. A separate implementation of these components (RNG and collision detection) allows different simulator configurations to be used by applying different algorithms for each component. So far, JAMES II\xspace offers over 10 different RNG implementations, all of which may be used by FADA\xspace. For collision detection, we implemented two approaches, a time-stepped and an event-triggered one. The time-stepped one measures the distance between the particles at distinct time steps and reports a collision when the distance is $0$. To minimize the number of undetected collisions (i.e.,\xspace the particles have a distance of $0$ in between two time steps but $> 0$ at the steps), a stepsize control is required, for which we implemented two versions as well. The first one uses a constant stepsize over the whole simulation run. The second version determines the stepsize depending on the distance of the moving particle to the reaction or escape sphere (the smaller the distance, the smaller the time step). The event-triggered collision detection precalculates the time point at which the particles should collide, depending on the actual movement and position. If no movement update happens before the calculated time point, the reaction happens. This variant requires more complicated calculations (compared to the time-stepped version) but produces an exact simulation. Another benefit of the given separation of model and simulation is the chance to optimize each component separately (e.g.,\xspace by advanced step-size adaptations, heuristics, parallelization, etc.). This allows the integration of specialized solutions created by experts. Furthermore, additional features of the model can be reflected by new components for the simulator (e.g.,\xspace a component handling the PMF model).
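The following Python sketch (our own simplified illustration; the names and the stepsize heuristic are ours, not FADA\xspace's) outlines a time-stepped simulator loop with a distance-dependent stepsize control of the kind described above:
\begin{verbatim}
import numpy as np

def random_point_on_sphere(radius, rng):
    # uniformly distributed direction, scaled to the given radius
    v = rng.normal(size=3)
    return radius * v / np.linalg.norm(v)

def run_trajectory(D, a, b, q, dt_max, rng):
    # Simulate one NAM trajectory; True = reaction, False = escape.
    # a: reaction distance, b: start distance, q: escape distance.
    pos = random_point_on_sphere(b, rng)
    while True:
        r = np.linalg.norm(pos)
        gap = min(r - a, q - r)            # distance to nearest sphere
        # heuristic: keep the RMS step well below the remaining gap
        dt = min(dt_max, max(gap, 1e-3) ** 2 / (6.0 * D))
        pos = pos + rng.normal(0.0, np.sqrt(2.0 * D * dt), size=3)
        r = np.linalg.norm(pos)
        if r <= a:
            return True                    # collision / reaction event
        if r >= q:
            return False                   # escape event
\end{verbatim}
A call such as \texttt{run\_trajectory(D=1.0, a=10, b=50, q=100, dt\_max=0.1, rng=np.random.default\_rng())} yields the end state of a single replication.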
\section{Experimentation with FADA} \label{famval} To conduct experiments with the BD model and simulator, FADA\xspace exploits the validation environment FAMVal\xspace. FAMVal\xspace is based on six steps that have been identified as being crucial for structuring validation experiments \cite{Leye2009, Leye2010}. In this section, we describe those steps, explain how each of them has been adapted in order to test the reaction rate of the BD model from Section \ref{sec:model} against the results of the state of the art, and present some results. \subsection{Important steps of a validation experiment} The first and most important step is the specification of requirements for the model to be validated. Thereby, the parameters for the whole experiment are defined, influencing each of the following steps. The second step is the configuration of the model, where interesting points in the model's parameter space are selected in order to obtain the required information about the validity of the model. The third step is the execution of the model. Valid simulation engines and components are required for a reliable simulation result. Therefore, simulation runs have to be repeated with different simulator settings in order to identify those that bias the simulation output due to approximation issues, bugs, or side effects. The fourth step is the observation of the model execution to produce the required simulation output. In the ideal case, only those properties of the model are observed that are required to decide whether the model fulfills the requirements or not. More observations would produce overhead, fewer would hamper the analysis of the results. The analysis of the observed data is the fifth step. It comprises two parts: the analysis of a single simulation run and the analysis of a set of replications. In general, the second part is based on the results of the first one, but occasions exist where just one of them is necessary (e.g.,\xspace the comparison of trajectories over a set of replications to calculate the Monte Carlo variability). To execute a proper analysis, the required simulation end times and the required number of replications have to be determined. Finally, the last step is the evaluation. It takes the result of the analysis as well as the requirements into account. Feedback for the configuration step may be produced, leading to iterative experiments, which can be exploited for parameter estimation and optimization tasks in general. \subsection{An example experiment} The comparison of the simulation results of a newly developed model to those of a well-established model is a typical example of a model validation experiment \cite{Sargent2008}. We describe such an experiment to validate the model of Section \ref{sec:model}, following the described structure of FADA\xspace. \\ The basic demand for the model is that its estimated rate constant $r_{s} = k_D(b) \beta^\infty$ matches the analytical solution $r_{a} = 4\pi Da$, where $a$ is the reaction criterion, i.e.,\xspace the minimum separation required for the two particles to react. In this case, the probability $\beta$ that is required to estimate the rate $r_{s}$ can be directly computed by: \begin{align*} \beta_{a} = \frac{a}{b}\cdot\frac{q-b}{q-a} \end{align*} We can thus use the probability $\beta_{s}$ as the requirement instead of the reaction rate constants. Hence, for a valid model, the fraction $\beta_{s}$ of simulation runs terminating with a reaction of the two particles needs to match $\beta_{a}$. For our setting, the analytical rate $\beta_{a}$ has a value of 0.111. Besides the given rate, an error tolerance $e$ is required, denoting the maximum distance between $\beta_{s}$ and $\beta_{a}$ for the model to be considered valid with a given confidence $c$. In our experiment we set $e$ to 0.05 and $c$ to 0.99. \begin{table}[hb!] \small \centering \begin{tabular}{|c|c|} \hline \textbf{Validation steps} & \textbf{Realization} \\ \hline Specification & determine $\beta_{a}$, $e$, $c$\\ & configure the following steps\\ \hline & diffusion coefficient D\\ Configuration & initial particle distance b\\ & minimal collision distance a\\ & minimal exit distance q\\ \hline & Mersenne Twister/Default RNG \\ Execution & time stepped/event triggered \\ & fixed/adaptive steps \\ \hline Observation & end state \\ \hline Analysis & calculate $\beta_{s}$ by taking the average over all runs \\ & calculate the required replications \\ \hline Evaluation & compare $\beta_{s}$ to $\beta_{a}$ \\ \hline \end{tabular} \caption{Overview of the steps in a validation experiment and their realization to validate the BD model.} \label{fig:SimpleResults} \end{table} Since the reaction rate constant only depends on the reaction criterion, the parameters $b$, $q$, and $D$ may be chosen arbitrarily. Typically, $q$ is chosen sufficiently larger than $b$ in order to allow a significant fraction of trajectories to react.
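The analytical requirement is a one-line computation; the following sketch (using the parameter values of the configuration described next) reproduces $\beta_{a} \approx 0.111$:
\begin{verbatim}
def beta_analytical(a, b, q):
    # beta_a = (a / b) * (q - b) / (q - a)
    return (a / b) * (q - b) / (q - a)

print(beta_analytical(a=10.0, b=50.0, q=100.0))   # 0.1111...
\end{verbatim}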
For the configuration we used a fixed minimum reaction criterion $a = 10$~{\AA} and fixed distances $b = 50$~{\AA} and $q = 100$~{\AA}. The radius of the particles has been set to $4$~{\AA}. This choice of parameters is based on the experiment settings used in \cite{Rojnuckarin:2000}. Since we selected the parameters by hand (in contrast to using an algorithm, like a parameter sweep), they have to be reflected in the definition of the requirements. \\ For the execution of the model we used two different RNGs, the standard Java RNG and the Mersenne Twister, to investigate whether a change of the RNG has an influence on the simulation results. Furthermore, for executing the model we tried both implemented collision detection variants and, for the time-stepped one, both stepsize controls. The default stepsize was 0.1 milliseconds and the adapted one was 0.01 milliseconds. To calculate the association rate of two molecules, just one observation is of importance: whether the two particles have reacted or not. Therefore, the end state of each simulation run has to be collected. The end of a simulation run is reached either when the two particles collide or when the distance between them reaches the given threshold. No additional effort had to be put into the analysis of the single simulation runs. The information about the end state of a run does not have to be post-processed in order to calculate $\beta_{s}$. This value is created during the analysis of the configuration (i.e.,\xspace the replications with the same model parameter configuration). Thereby, a list holding the end state of each simulation is maintained, and after the required replications have finished, the number of runs in which the particles reacted is summed up and divided by the count of all replications. It is not sufficient to omit the list and accumulate the sum on the fly, since the intermediate results are used to calculate the standard error, which in turn is required to determine the number of necessary replications ensuring that the resulting rate lies within the given error range $e$ with confidence $c$ \cite{Chung2004}.
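A minimal sketch of such a replication criterion (our own illustration, using a normal approximation of the confidence interval; \cite{Chung2004} describes the full procedure):
\begin{verbatim}
import math
from statistics import stdev

Z = {0.95: 1.960, 0.99: 2.576}   # two-sided normal quantiles

def enough_replications(end_states, e, c):
    # end_states: list of 0/1 outcomes (1 = particles reacted).
    # True once the half-width of the c-confidence interval around
    # the estimated rate beta_s drops below the tolerance e.
    n = len(end_states)
    if n < 2:
        return False
    se = stdev(end_states) / math.sqrt(n)   # standard error of the mean
    return Z[c] * se <= e
\end{verbatim}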
\\ The evaluation basically comprises the comparison of the given rate to the one produced by the simulation. If the difference lies within the given tolerance $e$, the model can be considered valid for the used parameter configuration. Since the configurations are selected by hand at the beginning of the experiment, no loop and therefore no feedback from the evaluation to the configuration is required. \paragraph{Experiment Results} Figure \ref{fig:results} shows the results of the described experiment. The exchange of the random number generator did not make any significant difference in the results. Therefore, we only present the results for the Mersenne Twister. \begin{figure}[htb] \centering \includegraphics[width=10cm]{figs/Results.pdf} \caption{Results of the validation experiment.} \label{fig:results} \end{figure} The simulations with the event-triggered collision detection produced association rates that lay inside the error tolerance. However, a significant difference exists between the results produced by the time-stepped collision detection and the analytical result. As the diffusion coefficient is increased, the association rate produced by the simulation decreases, while the reference rate stays constant. The differences are higher with the constant stepsize than with the adaptive stepsize. They can be explained by the less precise simulation with the time-stepped collision detection. If the collision happens between two steps and the two particles do not overlap in the second step, the collision cannot be detected. If the particles have a lower diffusion coefficient and therefore move more slowly, i.e.,\xspace cover a smaller distance between two time steps, the probability of missing a collision is lower. Similarly, if the time step is adapted when a particle approaches the other one (and therefore, again, the distance covered between two time steps is decreased), fewer collisions are missed. \\ The experiment shows the importance of testing different simulation engine configurations during a validation experiment. The type of the collision detection as well as the stepsize control had a high impact on the experiment results. If, for instance, only the time-stepped detection with the constant stepsize had been used, the biased results would have led to the conclusion that the given model configuration is invalid, which it was not. \section{Outlook} \label{outlook} The model presented and used here for testing purposes is simplistic. In order to include more realistic dynamics, e.g.,\xspace for interparticle forces and reaction criteria, it is necessary to extend the model described in Section \ref{sec:model}. Since SpacePi allows an arbitrary definition of the movement function, different approaches for the modeling of the potential of mean force can be integrated. Therefore, one has to define the corresponding functions, i.e.,\xspace declare the potential of mean force model (see figure \ref{fig:example}), which calculate the impact of the interparticle forces while taking into account the positions of the two particles. The value of this force field evaluation is then used to parametrize the movement function of the moving particle. However, in addition it is necessary to parameterize the processes in terms of (local) charges of the particles and also to introduce a parameter for the ionic strength of the surrounding solvent. \\ Another interesting but challenging step is the integration of more complex reaction criteria. For this, a more detailed parametrization of the processes in terms of shape and size is necessary. A promising approach to solve this problem has been reported in \cite{Schaefer:2009}, which introduces labels into the SpacePi formalism. In our validation experiment we estimated the association rate of the presented model. It could easily be validated, as the value is analytically known. However, in any realistic application of this procedure this will not be the case. Thus, the validity of the used model has to be ensured. Therefore, it is necessary to gain confidence in the model with adequate methods. One way to achieve this confidence is to check the plausibility of the model's behavior. In the following we want to give an overview of methods that might be applied in order to check the plausibility of the BD model. \\ During the specification step, the requirements the model has to fulfill have to be defined. Those requirements have to be stated with respect to the overall goal of the model: the calculation of association rates. Therefore it is important to identify those properties of the model that influence the association rates and that can be checked for their plausibility. For instance, the end states of the simulation runs do not give good insights into the validity of the model, since a reference association rate to check the plausibility of their occurrence is not available.
The plausibility of a model does not just depend on the behavior of one single configuration. If neighboring configurations show unexpected behavior, problems in the model structure might exist that could influence the results of the configuration under investigation. Hence, experiments with different configurations can give new insights about the model structure. Furthermore, the reusability of the model is increased if different parameter settings have been checked for their plausibility. If new particles with different properties have to be investigated, it might be sufficient to adapt the model parameters instead of creating an entirely new model. However, those parameters have to be covered during the validation process. For this, an algorithm exploring the parameter space \cite{Calvez2005} is required. Future parameters of the model (e.g.,\xspace force field, rotational diffusion, ionic strength of the solvent) have to be considered in this phase. \\ For the execution of the model, different simulation engines and components have to be provided to investigate bias produced by different algorithms. To get decent information about the behaviour of the modeled particles, more complex observations are required. As mentioned before, the end state of the simulation is not sufficient anymore. However, more detailed information about the behaviour of the particles, e.g.,\xspace their movement trajectories, could give hints about whether issues exist that could bias the potential reactions.\\ To analyze the simulation output, techniques for analyzing simulation traces \cite{Kleijnen2000} may be used. Furthermore, techniques from Runtime Verification \cite{Bensalem2009} and simulation-based model-checking \cite{Fages2007} could be well suited. Thereby, expected paths of the particles would be encoded into LTL formulae, and the model checker could check whether the movement trajectories produced by the simulation runs follow these paths. This analysis technique could be used in two ways for validation experiments. On the one hand, it could be used directly to validate the reaction events, e.g.,\xspace something must be wrong if the trajectories of the particles should have led to a reaction, but no reaction occurred. On the other hand, it could be used to validate certain parts of the model, e.g.,\xspace something must be wrong if the trajectories of the particles contradict their movement functions. \\ The evaluation of the analysis results is not constrained. A simple comparison of the results to given specifications is imaginable, which, however, would not offer hints about the source of the problem if the comparison turns out negative. One technique giving such hints and supporting model debugging is face validation. The analysis results are post-processed to create figures that support the modeler during the process of finding errors in the model. Those figures might facilitate browsing through model structure and experiment results \cite{Unger2009} or display interactions between the model's components \cite{Kemper2006}. \section{Conclusion} We introduced the basic concept of FADA\xspace, an architecture for the modeling, simulation, and experimentation of diffusional association processes. FADA\xspace uses SpacePi-Calculus expressions for the definition of models. Those models comprise two particles, whose reaction behaviour is under investigation.
So far, this behaviour is solely based on the movement functions of the particles, but additional features (e.g.,\xspace interparticle forces, detailed molecule representations, refined reaction criteria) are under development. We proposed a simple simulation engine relying on three components: collision detection, random number generation, and stepsize control. As additional model features are introduced, the simulator will be extended with new components in the future. For the experimentation, FADA\xspace distinguishes six steps in order to conduct a validation experiment. As an example, we described the realization of those steps for the validation of the association rate of a BD model. The results of this experiment showed that the simulation engine has an impact on the behaviour of the simulated model and that different simulator configurations have to be used in order to sort out invalid ones. With new features of the model, the realized steps of the experimentation process have to be adapted. This is especially important if the model has to be validated with no reference association rate at hand, which is the case if the model is to be used to estimate association rates. FADA\xspace can be used for both tasks: the validation of DA models as well as the estimation of reaction rates based on them. \bibliographystyle{eptcs}
\section{Introduction}\label{sec:Intro} Previously, most of the work on quantum error-correcting codes was done under the assumption that the channel is symmetric. That is, the various error types were taken to be equiprobable. For brevity, the term \textit{quantum codes} or QECC is henceforth used to refer to quantum error-correcting codes. Recently, it has been established that, in many quantum mechanical systems, phase-flip errors happen more frequently than bit-flip errors or combined bit-phase flip errors. For more details,~\cite{SRK09} can be consulted. There is a need to design quantum codes that take advantage of this asymmetry in quantum channels. We call such codes \textit{asymmetric quantum codes}. We require the codes to correct many phase-flip errors but not necessarily the same number of bit-flip errors. In this paper we extend the construction of asymmetric quantum codes in~\cite{WFLX09} to include codes derived from classical additive codes under the trace Hermitian inner product. This work is organized as follows. In Section~\ref{sec:Prelims}, we state some basic definitions and properties of linear and additive codes. Section~\ref{sec:QuantumCodes} provides an introduction to quantum error-correcting codes in general, differentiating the symmetric and the asymmetric cases. In Section~\ref{sec:AsymQECC}, a construction of asymmetric QECC based on additive codes is presented. The rest of the paper focuses on additive codes over ${\mathbb F}_{4}$. Section~\ref{sec:AdditiveF4} briefly recalls important known facts regarding these codes. A construction of asymmetric QECC from extremal or optimal self-dual additive codes is given in Section~\ref{sec:ExtremalSD}. A construction from Hermitian self-orthogonal ${\mathbb F}_{4}$-linear codes is the topic of Section~\ref{sec:HSelfOrtho}. Sections~\ref{sec:Cyclic} and~\ref{sec:BCHCodes} use nested ${\mathbb F}_{4}$-linear cyclic codes for lengths $n \leq 25$ and nested BCH codes for lengths $27 \leq n \leq 51$, respectively, in the construction. New or better asymmetric quantum codes constructed from nested additive codes over ${\mathbb F}_{4}$ are presented in Section~\ref{sec:nestedadditive}, exhibiting the gain of extending the construction to include additive codes. Section~\ref{sec:Conclusion} provides conclusions and some open problems. \section{Preliminaries}\label{sec:Prelims} Let $p$ be a prime and $q = p^f$ for some positive integer $f$. An $[n,k,d]_q$-linear code $C$ of length $n$, dimension $k$, and minimum distance $d$ is a subspace of dimension $k$ of the vector space ${\mathbb F}_q^n$ over the finite field ${\mathbb F}_q=GF(q)$ with $q$ elements. For a general, not necessarily linear, code $C$, the notation $(n,M=|C|,d)_q$ is commonly used. The \textit{Hamming weight} of a vector or a codeword ${\mathbf{v}}$ in a code $C$, denoted by $\mathop{{\rm wt}}_H({\mathbf{v}})$, is the number of its nonzero entries. Given two elements ${\mathbf{u}},{\mathbf{v}} \in C$, the number of positions where their respective entries disagree, written as $\mathop{{\rm dist}}_H({\mathbf{u}},{\mathbf{v}})$, is called the \textit{Hamming distance} of ${\mathbf{u}}$ and ${\mathbf{v}}$. For any code $C$, the \textit{minimum distance} $d = d(C)$ is given by $d = d(C) = \mathop{{\rm min}}\left\lbrace \mathop{{\rm dist}}_H({\mathbf{u}},{\mathbf{v}}): {\mathbf{u}},{\mathbf{v}} \in C,{\mathbf{u}} \neq {\mathbf{v}}\right\rbrace$.
If $C$ is linear, then its closure property implies that $d(C)$ is given by the minimum Hamming weight of the nonzero vectors in $C$. We follow~\cite{NRS06} in defining the following three families of codes according to their duality types. \begin{defn}\label{def1.1} Let $q=r^2=p^f$ be an even power of an arbitrary prime $p$ with $\overline{x}=x^{r}$ for $x \in {\mathbb F}_{q}$. Let $n$ be a positive integer and ${\mathbf{u}} = (u_1,\ldots,u_n), {\mathbf{v}} = (v_1,\ldots,v_n) \in {\mathbb F}_{q}^n$. \begin{enumerate} \item $\mathbf{q^{\mathop{{\rm H}}}}$ is the family of ${\mathbb F}_{q}$-linear codes of length $n$ with the \textit{Hermitian inner product} \begin{equation}\label{eq:1.1} \left\langle {\mathbf{u}},{\mathbf{v}}\right\rangle _{\mathop{{\rm H}}} := \sum_{i=1}^{n} u_i \cdot v_i^{\sqrt{q}} \text{.} \end{equation} \item $\mathbf{q^{\mathop{{\rm H}}+}}$ \textbf{(even)} is the family of trace Hermitian codes over ${\mathbb F}_{q}$ of length $n$ which are ${\mathbb F}_{r}$-linear, where $r^2=q$ is even. The duality is defined according to the \textit{trace Hermitian inner product} \begin{equation}\label{eq:1.2} \left\langle {\mathbf{u}},{\mathbf{v}}\right\rangle _{\mathop{{\rm tr}}} := \sum_{i=1}^{n} (u_i\cdot v_i^{\sqrt{q}} + u_i^{\sqrt{q}} \cdot v_i) \text{.} \end{equation} \item $\mathbf{q^{\mathop{{\rm H}}+}}$ \textbf{(odd)} is the family of trace Hermitian codes over ${\mathbb F}_{q}$ of length $n$ which are ${\mathbb F}_{r}$-linear, where $r^2=q$ is odd. The duality is defined according to the following inner product, which we will still call the \textit{trace Hermitian inner product}, \begin{equation}\label{eq:1.2a} \left\langle {\mathbf{u}},{\mathbf{v}}\right\rangle _{\mathop{{\rm tr}}} := \alpha \cdot \sum_{i=1}^{n} (u_i\cdot v_i^{\sqrt{q}} - u_i^{\sqrt{q}} \cdot v_i) \text{,} \end{equation} where $\alpha \in {\mathbb F}_{q} \setminus \left\lbrace 0 \right\rbrace$ with $\alpha^{r}= -\alpha$. \end{enumerate} \end{defn} \begin{defn}\label{def:1.1a} A code $C$ of length $n$ is said to be a \textit{(classical) additive code} if $C$ belongs to either the family $q^{\mathop{{\rm H}}+}$ (even) or to the family $q^{\mathop{{\rm H}}+}$ (odd). \end{defn} Let $C$ be a code. Under a chosen inner product $*$, the \textit{dual code} $C^{\perp_{*}}$ of $C$ is given by \begin{equation*} C^{\perp_{*}} := \left\lbrace {\mathbf{u}} \in {\mathbb F}_q^n : \left\langle {\mathbf{u}},{\mathbf{v}}\right\rangle _{*} = 0 \text{ for all } {\mathbf{v}} \in C \right\rbrace \text{.} \end{equation*} Accordingly, for a code $C$ in the family $(q^{\mathop{{\rm H}}})$, \begin{equation*} C^{\perp_{\mathop{{\rm H}}}} := \left\lbrace {\mathbf{u}} \in {\mathbb F}_q^n : \left\langle {\mathbf{u}},{\mathbf{v}}\right\rangle _{\mathop{{\rm H}}} = 0 \text{ for all } {\mathbf{v}} \in C \right\rbrace \text{,} \end{equation*} and, for a code $C$ in the family $(q^{\mathop{{\rm H}}+})$ (even) or $(q^{\mathop{{\rm H}}+})$ (odd), \begin{equation*} C^{\perp_{\mathop{{\rm tr}}}} := \left\lbrace {\mathbf{u}} \in {\mathbb F}_q^n : \left\langle {\mathbf{u}},{\mathbf{v}}\right\rangle _{\mathop{{\rm tr}}} = 0 \text{ for all } {\mathbf{v}} \in C \right\rbrace \text{.} \end{equation*} A code is said to be \textit{self-orthogonal} if it is contained in its dual and is said to be \textit{self-dual} if it is equal to its dual. We say that a family of codes is \textit{closed} if $(C^{\perp_{*}})^{\perp_{*}} = C$ for each $C$ in that family. It has been established~\cite[Ch.~3]{NRS06} that the three families of codes in Definition~\ref{def1.1} are closed.
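To illustrate the trace Hermitian inner product of Definition~\ref{def1.1}, take $q=4$, $r=2$, and ${\mathbb F}_{4}=\left\lbrace 0,1,\omega,\omega^{2}\right\rbrace$ with $\omega^{2}=\omega+1$. For ${\mathbf{u}}=(1,\omega)$ and ${\mathbf{v}}=(\omega,\omega)$, Equation (\ref{eq:1.2}) gives \begin{align*} \left\langle {\mathbf{u}},{\mathbf{v}}\right\rangle _{\mathop{{\rm tr}}} = (1\cdot\omega^{2}+1^{2}\cdot\omega)+(\omega\cdot\omega^{2}+\omega^{2}\cdot\omega) = (\omega^{2}+\omega)+(1+1) = 1 \text{.} \end{align*} Note that each summand has the form $w+w^{2}$ with $w=u_{i}v_{i}^{2}$, so the trace Hermitian inner product takes values in ${\mathbb F}_{2}$ even though the codewords have entries in ${\mathbb F}_{4}$.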
The weight distribution of a code and that of its dual are important in the study of their properties. \begin{defn}\label{def1.2} The \textit{weight enumerator} $W_C(X,Y)$ of an $(n,M=|C|,d)_q$-code $C$ is the polynomial \begin{equation}\label{WE} W_C(X,Y)=\sum_{i=0}^n A_{i} X^{n-i}Y^{i} \text{,} \end{equation} where $A_{i}$ is the number of codewords of weight $i$ in the code $C$. \end{defn} The weight enumerator of the Hermitian dual code $C^{\perp_{\mathop{{\rm H}}}}$ of an $[n,k,d]_q$-code $C$ is connected to the weight enumerator of the code $C$ via the MacWilliams Equation \begin{equation}\label{eq:MacW} W_{C^{\perp_{\mathop{{\rm H}}}}} (X,Y)= \frac{1}{|C|} W_C(X+(q-1)Y,X-Y) \text{.} \end{equation} In the case of nonlinear codes, we can define a similar notion called the \textit{distance distribution}. The MacWilliams Equation can be generalized to the nonlinear cases as well (see~\cite[Ch. 5]{MS77}). From~\cite[Sect. 2.3]{NRS06} we know that the families $q^{\mathop{{\rm H}}+}$ (even) and $q^{\mathop{{\rm H}}+}$ (odd) have the same MacWilliams Equation as the family $q^{\mathop{{\rm H}}}$. Thus, Equation (\ref{eq:MacW}) applies to all three families. Classical codes are connected to many other combinatorial structures. One such structure is the orthogonal array. \begin{defn}\label{def1.3} Let $S$ be a set of $q$ symbols or levels. An orthogonal array $A$ with $M$ runs, $n$ factors, $q$ levels and strength $t$ with index $\lambda$, denoted by $OA(M,n,q,t)$, is an $M \times n$ array $A$ with entries from $S$ such that every $M \times t$ subarray of $A$ contains each $t$-tuple of $S^t$ exactly $\lambda = \frac{M}{q^t}$ times as a row. \end{defn} The parameter $\lambda$ is usually not written explicitly in the notation since its value depends on $M$, $q$, and $t$. The rows of an orthogonal array are distinct since the purpose of its construction is to minimize the number of runs in the experiment while keeping some required conditions satisfied. There is a natural correspondence between codes and orthogonal arrays. The codewords in a code $C$ can be seen as the rows of an orthogonal array $A$ and vice versa. The following proposition due to Delsarte (see~\cite[Th. 4.5]{Del73}) will be useful in the sequel. Note that the code $C$ in the proposition is a general code. No linearity is required. The duality here is defined with respect to any inner product. For more on how the dual distance is defined for nonlinear codes, we refer to~\cite[Sec. 4.4]{HSS99}. \begin{prop}\cite[Th. 4.9]{HSS99}\label{OA} If $C$ is an $(n,M=|C|,d)_q$ code with dual distance $d^{\perp}$, then the corresponding orthogonal array is an $OA(M,n,q,d^{\perp}-1)$. Conversely, the code corresponding to an $OA(M,n,q,t)$ is an $(n,M,d)_q$ code with dual distance $d^{\perp} \geq t+1$. If the orthogonal array has strength $t$ but not $t+1$, then $d^{\perp}$ is precisely $t+1$. \end{prop}
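The MacWilliams transform of Equation (\ref{eq:MacW}) is straightforward to evaluate mechanically. The following Python sketch (our own illustration; the function names are hypothetical) expands $W_C(X+(q-1)Y,X-Y)/|C|$ and returns the weight distribution of the dual code:
\begin{verbatim}
def poly_mul(p, r):
    # multiply two polynomials given as coefficient lists in Y
    res = [0] * (len(p) + len(r) - 1)
    for i, x in enumerate(p):
        for j, y in enumerate(r):
            res[i + j] += x * y
    return res

def dual_weight_distribution(A, q):
    # A = [A_0, ..., A_n]: weight distribution of C (Definition 1.2).
    # Sets X = 1 and tracks powers of Y; the coefficient of Y^j then
    # corresponds to X^(n-j) Y^j in the homogeneous enumerator.
    n, M = len(A) - 1, sum(A)                   # M = |C|
    out = [0] * (n + 1)
    for i, Ai in enumerate(A):
        poly = [1]
        for _ in range(n - i):
            poly = poly_mul(poly, [1, q - 1])   # factor (X + (q-1)Y)
        for _ in range(i):
            poly = poly_mul(poly, [1, -1])      # factor (X - Y)
        for j, cf in enumerate(poly):
            out[j] += Ai * cf
    return [cf // M for cf in out]

# Binary repetition code of length 3: A = [1, 0, 0, 1]; its dual is
# the even-weight code with distribution [1, 0, 3, 0].
print(dual_weight_distribution([1, 0, 0, 1], q=2))
\end{verbatim}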
For a positive integer $n$, let $V_{n}=(\mathbb{C}^{q})^{\otimes n }$ be the $n$-fold tensor product of $\mathbb{C}^{q}$. Then $V_{n}$ has the following orthonormal basis \begin{equation}\label{basis} \left\{|{\mathbf{c}}\rangle =|c_{1}c_{2}\ldots c_{n}\rangle : {\mathbf{c}}=(c_{1},\ldots,c_{n}) \in \mathbb{F}_{q}^n\right\} \text{,} \end{equation} where $|c_{1}c_{2}\ldots c_{n}\rangle$ abbreviates $|c_{1}\rangle\otimes|c_{2}\rangle\otimes\cdots \otimes |c_{n}\rangle$. For two quantum states $|{\mathbf{u}}\rangle$ and $|{\mathbf{v}}\rangle$ in $V_{n}$ with $$|{\mathbf{u}}\rangle=\sum\limits_{{\mathbf{c}}\in \mathbb{F}_{q}^{n}}\alpha({\mathbf{c}})|{\mathbf{c}}\rangle,\quad |{\mathbf{v}}\rangle =\sum\limits_{{\mathbf{c}}\in \mathbb{F}_{q}^{n}}\beta({\mathbf{c}})|{\mathbf{c}}\rangle \quad (\alpha({\mathbf{c}}),\beta({\mathbf{c}})\in \mathbb{C}),$$ the Hermitian inner product of $|{\mathbf{u}}\rangle$ and $|{\mathbf{v}}\rangle$ is $$\langle {\mathbf{u}}|{\mathbf{v}}\rangle=\sum\limits_{{\mathbf{c}}\in \mathbb{F}_{q}^{n}} \overline{\alpha({\mathbf{c}})}\beta({\mathbf{c}})\in \mathbb{C},$$ where $\overline{\alpha({\mathbf{c}})}$ is the complex conjugate of $\alpha({\mathbf{c}})$. We say $|{\mathbf{u}}\rangle$ and $|{\mathbf{v}}\rangle$ are \textit{orthogonal} if $\langle {\mathbf{u}}|{\mathbf{v}}\rangle=0$. A quantum error acting on $V_{n}$ is a unitary linear operator on $V_{n}$ and has the following form $$e=X({\mathbf{a}})Z({\mathbf{b}})$$ with ${\mathbf{a}}=(a_{1},\ldots,a_{n}),{\mathbf{b}}=(b_{1}, \ldots,b_{n})\in \mathbb{F}_{q}^{n}$. The action of $e$ on the basis (\ref{basis}) of $V_{n}$ is $$e|{\mathbf{c}}\rangle=X(a_{1})Z(b_{1})|c_{1}\rangle \otimes \ldots \otimes X(a_{n})Z(b_{n})|c_{n}\rangle, $$ where $$X(a_{i})|c_{i}\rangle=|a_{i}+c_{i}\rangle, \quad Z(b_{i})|c_{i}\rangle=\eta^{T(b_{i}c_{i})}|c_{i}\rangle$$ with $T:\;\mathbb{F}_{q}\to\mathbb{F}_{p}$ being the trace mapping $$T(\alpha)=\alpha+\alpha^{p}+\alpha^{p^{2}}+\ldots+\alpha^{p^{m-1}},$$ for $q=p^{m}$. Therefore, $$e|{\mathbf{c}}\rangle=\eta^{T({\mathbf{b}}\cdot {\mathbf{c}})}|{\mathbf{a}}+{\mathbf{c}}\rangle,$$ where ${\mathbf{b}}\cdot {\mathbf{c}}=\sum\limits_{i=1}^{n}b_{i}c_{i}\in \mathbb{F}_{q}$ is the usual inner product in $\mathbb{F}_{q}^n$. For $e=X({\mathbf{a}})Z({\mathbf{b}})$ and $e^{'}=X({\mathbf{a}}^{'})Z({\mathbf{b}}^{'})$ with ${\mathbf{a}},{\mathbf{b}}$, and ${\mathbf{a}}^{'},{\mathbf{b}}^{'}\in \mathbb{F}_{q}^{n}$, $$e e^{'}=\eta^{T({\mathbf{a}}\cdot {\mathbf{b}}^{'}-{\mathbf{a}}^{'}\cdot {\mathbf{b}})}e^{'} e.$$ Hence, the set $$E_{n}=\left\{\eta^{\lambda}X({\mathbf{a}})Z({\mathbf{b}}) | 0\leq \lambda\leq p-1, {\mathbf{a}},{\mathbf{b}}\in \mathbb{F}_{q}^{n} \right\}$$ forms a (nonabelian) group, called the \textit{error group} on $V_{n}$. \begin{defn}\label{def2.1} For a quantum error $e=\eta^{\lambda}X({\mathbf{a}})Z({\mathbf{b}})\in E_{n}$, we define the {\it quantum weight} $w_{Q}(e)$, the {\it $X$-weight} $w_{X}(e)$ and the {\it $Z$-weight} $w_{Z}(e)$ of $e$ by \[ \begin{array}{rcl} w_{Q}(e)&=&|\{i: 1\leq i\leq n, (a_{i},b_{i})\neq(0,0)\}| \text{,}\\ w_{X}(e)&=&|\{i: 1\leq i\leq n, a_{i}\neq0\}| \text{,}\\ w_{Z}(e)&=&|\{i: 1\leq i\leq n, b_{i}\neq0\}| \text{.} \end{array} \] \end{defn} Thus, $w_{Q}(e)$ is the number of qudits where the action of $e$ is nontrivial by $X(a_{i})Z(b_{i})\not=I$ (identity) while $w_{X}(e)$ and $w_{Z}(e)$ are, respectively, the numbers of qudits where the $X$-action and the $Z$-action of $e$ are nontrivial. 
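To make Definition~\ref{def2.1} concrete, the following minimal Python sketch computes the three weights directly from the pair $({\mathbf{a}},{\mathbf{b}})$ labeling the error $e=X({\mathbf{a}})Z({\mathbf{b}})$; the encoding of field elements as integers, with $0$ standing for the zero of ${\mathbb F}_{q}$, is an illustrative assumption and not part of any formalism used later.
\begin{verbatim}
# Minimal sketch: the three weights of a quantum error e = X(a)Z(b),
# with a, b in F_q^n given as integer tuples and 0 denoting the zero
# element of F_q.
def error_weights(a, b):
    """Return (w_Q(e), w_X(e), w_Z(e))."""
    w_q = sum(1 for ai, bi in zip(a, b) if (ai, bi) != (0, 0))
    w_x = sum(1 for ai in a if ai != 0)
    w_z = sum(1 for bi in b if bi != 0)
    return w_q, w_x, w_z

# n = 4 qudits: the X- and Z-actions overlap only on the third qudit.
print(error_weights((1, 0, 2, 0), (0, 0, 3, 1)))  # (3, 2, 2)
\end{verbatim}
In particular, $w_{Q}(e)$ may strictly exceed both $w_{X}(e)$ and $w_{Z}(e)$, which is precisely the phenomenon noted in Remark~\ref{rem2.3} below.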
We are now ready to define symmetric and asymmetric quantum codes and the distinction between them.

\begin{defn}\label{def2.2} A \textit{$q$-ary quantum code} of length $n$ is a subspace $Q$ of $V_{n}$ with dimension $K\geq1$. A quantum code $Q$ of dimension $K\geq2$ is said to detect $d-1$ qudits of errors for $d\geq1$ if, for every pair $|{\mathbf{u}}\rangle$, $|{\mathbf{v}}\rangle$ in $Q$ with $\langle {\mathbf{u}}|{\mathbf{v}} \rangle=0$ and every $e \in E_{n}$ with $w_{Q}(e)\leq d-1$, $|{\mathbf{u}}\rangle$ and $e|{\mathbf{v}}\rangle$ are orthogonal. In this case, we call $Q$ a \textit{symmetric} quantum code with parameters $((n,K,d))_{q}$ or $[[n,k,d]]_{q}$, where $k=\log_{q}K$. Such a quantum code is called \textit{pure} if $\langle {\mathbf{u}}|e|{\mathbf{v}} \rangle=0$ for any $|{\mathbf{u}}\rangle$ and $|{\mathbf{v}}\rangle$ in $Q$ and any $e \in E_{n}$ with $1\leq w_{Q}(e)\leq d-1$. A quantum code $Q$ with $K=1$ is assumed to be pure.

Let $d_{x}$ and $d_{z}$ be positive integers. A quantum code $Q$ in $V_{n}$ with dimension $K\geq2$ is called an \textit{asymmetric quantum code} with parameters $((n,K,d_{z}/d_{x}))_{q}$ or $[[n,k,d_{z}/d_{x}]]_{q}$, where $k=\log_{q}K$, if $Q$ detects $d_{x}-1$ qudits of $X$-errors and, at the same time, $d_{z}-1$ qudits of $Z$-errors. That is, if $\langle {\mathbf{u}}|{\mathbf{v}} \rangle=0$ for $|{\mathbf{u}}\rangle,|{\mathbf{v}}\rangle\in Q$, then $\langle {\mathbf{u}}|e|{\mathbf{v}} \rangle=0$ for any $e \in E_{n}$ such that $w_{X}(e)\leq d_{x}-1$ and $w_{Z}(e)\leq d_{z}-1$. Such an asymmetric quantum code $Q$ is called \textit{pure} if $\langle {\mathbf{u}}|e|{\mathbf{v}} \rangle=0$ for any $|{\mathbf{u}}\rangle,|{\mathbf{v}}\rangle\in Q$ and $e \in E_{n}$ such that $1 \leq w_{X}(e)\leq d_{x}-1$ and $1 \leq w_{Z}(e)\leq d_{z}-1$. An asymmetric quantum code $Q$ with $K=1$ is assumed to be pure.
\end{defn}

\begin{rem}\label{rem2.3} An asymmetric quantum code with parameters $((n,K,d/d))_{q}$ is a symmetric quantum code with parameters $((n,K,d))_{q}$, but the converse is not true since, for $e\in E_{n}$ with $w_{X}(e)\leq d-1$ and $w_{Z}(e)\leq d-1$, the weight $w_{Q}(e)$ may be larger than $d-1$.
\end{rem}

Given any two codes $C$ and $D$, let $\mathop{{\rm wt}}(C \setminus D)$ denote $\mathop{{\rm min}}\left\lbrace \mathop{{\rm wt}}_{H}({\mathbf{u}}) : {\mathbf{u}}\in(C \setminus D), {\mathbf{u}} \neq {\mathbf{0}}\right\rbrace$. The analogue of the well-known CSS construction (see~\cite{CRSS98}) for the asymmetric case is known.

\begin{prop}\cite[Lemma 3.1]{SRK09}\label{prop2.4} Let $C_x$ and $C_z$ be linear codes in ${\mathbb F}_q^n$ with parameters $[n,k_x]_q$ and $[n,k_z]_q$, respectively, such that $C_x^\perp \subseteq C_z$. Then there exists an $[[n,k_x +k_z-n,d_{z} /d_{x}]]_q$ asymmetric quantum code, where $d_x =\mathop{{\rm wt}}(C_x \setminus C_z^\perp)$ and $d_z =\mathop{{\rm wt}}(C_z \setminus C_x^\perp)$.
\end{prop}

The resulting code is said to be \textit{pure} if, in the above construction, $d_x=d(C_x)$ and $d_z=d(C_z)$.

\section{Asymmetric QECC from Additive Codes}\label{sec:AsymQECC}
The following result has been established recently:

\begin{thm}\cite[Th.
3.1]{WFLX09}\label{thm:3.1} \begin{enumerate} \item There exists an asymmetric quantum code with parameters $((n,K,d_z/d_x))_q$ with $K \geq 2$ if and only if there exist $K$ nonzero mappings \begin{equation}\label{eq:3.1} \varphi_i : {\mathbb F}_q^n \rightarrow \mathbb{C} \text{ for } 1\leq i \leq K \end{equation} satisfying the following conditions: for each $d$ such that $1 \leq d \leq \mathop{{\rm min}}\left\lbrace d_x,d_z\right\rbrace $ and partition of $\left\lbrace 1,2,\ldots,n\right\rbrace $, \begin{equation}\label{eq:3.2} \begin{cases} \left\lbrace 1,2,\ldots,n \right\rbrace = A \cup X \cup Z \cup B \text{,} \\ |A| = d-1,\quad |B| = n+d-d_x-d_z+1 \text{,}\\ |X|=d_x - d,\quad |Z| = d_z - d \text{,} \end{cases} \end{equation} and each ${\mathbf{c}}_A,{\mathbf{c}}_A' \in {\mathbb F}_q^{|A|}$, ${\mathbf{c}}_Z \in {\mathbb F}_q^{|Z|}$ and ${\mathbf{a}}_X \in {\mathbb F}_q^{|X|}$, we have the equality \begin{multline}\label{eq:3.3} \sum_{\substack{ {\mathbf{c}}_X \in {\mathbb F}_q^{|X|} \text{,}\\ {\mathbf{c}}_B \in {\mathbb F}_q^{|B|}}} \overline{\varphi_i ({\mathbf{c}}_A,{\mathbf{c}}_X,{\mathbf{c}}_Z,{\mathbf{c}}_B)} \varphi_j({\mathbf{c}}_A',{\mathbf{c}}_X - {\mathbf{a}}_X,{\mathbf{c}}_Z,{\mathbf{c}}_B) \\ = \begin{cases} 0 &\text{for $i \neq j$,} \\ I({\mathbf{c}}_A,{\mathbf{c}}_A',{\mathbf{c}}_Z,{\mathbf{a}}_X) &\text{for $i = j$,} \end{cases} \end{multline} where $I({\mathbf{c}}_A,{\mathbf{c}}_A',{\mathbf{c}}_Z,{\mathbf{a}}_X)$ is an element of $\mathbb{C}$ which is independent of $i$. The notation $({\mathbf{c}}_A,{\mathbf{c}}_X,{\mathbf{c}}_Z,{\mathbf{c}}_B)$ represents the rearrangement of the entries of the vector ${\mathbf{c}} \in {\mathbb F}_q^n$ according to the partition of $\left\lbrace 1,2,\ldots,n\right\rbrace $ given in Equation (\ref{eq:3.2}). \item Let $(\varphi_i,\varphi_j)$ stand for $\sum_{{\mathbf{c}}\in {\mathbb F}_q^n} \overline{\varphi_i({\mathbf{c}})}\varphi_j({\mathbf{c}})$. There exists a pure asymmetric quantum code with parameters $((n,K\geq1,d_z/d_x))_q$ if and only if there exist $K$ nonzero mappings $\varphi_i$ as shown in Equation (\ref{eq:3.1}) such that \begin{itemize} \item $\varphi_i$ are linearly independent for $1\leq i\leq K$, i.e., the rank of the $K \times q^n$ matrix $(\varphi_i ({\mathbf{c}}))_{1\leq i\leq K, {\mathbf{c}} \in {\mathbb F}_q^n}$ is $K$; and \item for each $d$ with $1 \leq d \leq \mathop{{\rm min}}\left\lbrace d_x,d_z\right\rbrace $, a partition in Equation (\ref{eq:3.2}) and ${\mathbf{c}}_A,{\mathbf{a}}_A \in {\mathbb F}_q^{|A|}, {\mathbf{c}}_Z \in {\mathbb F}_q^{|Z|}$ and ${\mathbf{a}}_X \in {\mathbb F}_q^{|X|}$, we have the equality \end{itemize} \end{enumerate} \begin{multline}\label{eq:3.4} \sum_{\substack{ {\mathbf{c}}_X \in {\mathbb F}_q^{|X|}, \\ {\mathbf{c}}_B \in {\mathbb F}_q^{|B|}}} \overline{\varphi_i ({\mathbf{c}}_A,{\mathbf{c}}_X,{\mathbf{c}}_Z,{\mathbf{c}}_B)} \varphi_j({\mathbf{c}}_A+{\mathbf{a}}_A,{\mathbf{c}}_X + {\mathbf{a}}_X,{\mathbf{c}}_Z,{\mathbf{c}}_B) \\ = \begin{cases} 0 &\text{for $({\mathbf{a}}_A,{\mathbf{a}}_X) \neq ({\mathbf{0}},{\mathbf{0}})$,} \\ \frac{(\varphi_i,\varphi_j)}{q^{d_z-1}} &\text{for $({\mathbf{a}}_A,{\mathbf{a}}_X) = ({\mathbf{0}},{\mathbf{0}})$.} \end{cases} \end{multline} \end{thm} The following result is due to Keqin Feng and Long Wang. It has, however, never appeared formally in a published form before. Since it will be needed in the sequel, we present it here with a proof. \begin{prop}(K.~Feng and L.~Wang)\label{prop:3.2} Let $a,b$ be positive integers. 
There exists an asymmetric quantum code $Q$ with parameters $((n,K,a/b))_q$ if and only if there exists an asymmetric quantum code $Q'$ with parameters $((n,K,b/a))_q$. $Q'$ is pure if and only if $Q$ is pure. \end{prop} \begin{proof} We begin by assuming the existence of an $((n,K,a/b))_q$ asymmetric quantum code $Q$. Let $\varphi_{i}$ with $1 \leq i \leq K$ be the $K$ mappings given in Theorem~\ref{thm:3.1}. Define the following $K$ mappings \begin{equation}\label{eq:3.5} \begin{aligned} \Phi_i :\quad &{\mathbb F}_q^n \rightarrow \mathbb{C} \text{ for } 1\leq i \leq K \\ &{\mathbf{v}} \mapsto \sum_{{\mathbf{c}} \in {\mathbb F}_{q}^{n}} \varphi_{i}({\mathbf{c}}) \eta^{T({\mathbf{c}} \cdot {\mathbf{v}})}. \end{aligned} \end{equation} Let ${\mathbf{v}}_{A},{\mathbf{b}}_{A} \in {\mathbb F}_{q}^{|A|}, {\mathbf{v}}_{X} \in {\mathbb F}_{q}^{|X|}$, and ${\mathbf{b}}_{Z} \in {\mathbb F}_{q}^{|Z|}$. For each $d$ such that $1 \leq d \leq \mathop{{\rm min}}\left\lbrace d_x,d_z\right\rbrace $ and a partition of $\left\lbrace 1,2,\ldots,n \right\rbrace $ given in Equation (\ref{eq:3.2}), we show that \begin{multline}\label{eq:3.6} S = \sum_{\substack{ {\mathbf{v}}_Z \in {\mathbb F}_q^{|Z|} \text{,}\\ {\mathbf{v}}_B \in {\mathbb F}_q^{|B|}}} \overline{\Phi_i ({\mathbf{v}})} \Phi_j({\mathbf{v}}_A+{\mathbf{b}}_A,{\mathbf{v}}_X,{\mathbf{v}}_Z+{\mathbf{b}}_Z,{\mathbf{v}}_B) \\ = \begin{cases} 0 &\text{for $i \neq j$,} \\ I'({\mathbf{v}}_A,{\mathbf{b}}_A,{\mathbf{b}}_Z,{\mathbf{v}}_X) &\text{for $i = j$,} \end{cases} \end{multline} where $I'({\mathbf{v}}_A,{\mathbf{b}}_A,{\mathbf{b}}_Z,{\mathbf{v}}_X)$ is an element of $\mathbb{C}$ which is independent of $i$. Let ${\mathbf{t}}=({\mathbf{v}}_A+{\mathbf{b}}_A,{\mathbf{v}}_X,{\mathbf{v}}_Z+{\mathbf{b}}_Z,{\mathbf{v}}_B)$. 
Applying Equation (\ref{eq:3.5}) yields \begin{equation}\label{eq:3.7} S = \sum_{\substack{ {\mathbf{v}}_Z \in {\mathbb F}_q^{|Z|} \text{,}\\ {\mathbf{v}}_B \in {\mathbb F}_q^{|B|}}} \sum_{{\mathbf{c}},{\mathbf{d}} \in {\mathbb F}_{q}^n} \overline{\varphi_i ({\mathbf{c}})} \varphi_j ({\mathbf{d}}) \eta^{T((-{\mathbf{c}} \cdot {\mathbf{v}})+({\mathbf{d}} \cdot {\mathbf{t}}))} \text{.} \end{equation} By carefully rearranging the summations and grouping the terms, we get \begin{equation}\label{eq:3.8} S = \sum_{{\mathbf{c}},{\mathbf{d}} \in {\mathbb F}_{q}^n} \overline{\varphi_i ({\mathbf{c}})} \varphi_j ({\mathbf{d}}) \cdot \kappa \cdot \lambda \text{,} \end{equation} where \begin{align*} \kappa &= \eta^{T({\mathbf{v}}_A \cdot({\mathbf{d}}_A-{\mathbf{c}}_A)+{\mathbf{v}}_X \cdot({\mathbf{d}}_X-{\mathbf{c}}_X)+ {\mathbf{d}}_A \cdot {\mathbf{b}}_A+{\mathbf{d}}_Z \cdot {\mathbf{b}}_Z)} \text{,} \\ \lambda &= \sum_{\substack{ {\mathbf{v}}_Z \in {\mathbb F}_q^{|Z|} \text{,}\\ {\mathbf{v}}_B \in {\mathbb F}_q^{|B|}}} \eta^{T({\mathbf{v}}_B \cdot({\mathbf{d}}_B-{\mathbf{c}}_B)+{\mathbf{v}}_Z \cdot({\mathbf{d}}_Z-{\mathbf{c}}_Z))} \text{.} \end{align*} By orthogonality of characters, \begin{equation*} \lambda = \begin{cases} q^{|Z|+|B|} & \text{if }{\mathbf{d}}_B = {\mathbf{c}}_B \text{ and } {\mathbf{d}}_Z= {\mathbf{c}}_Z \text{,} \\ 0 & \text{otherwise.} \end{cases} \end{equation*} Therefore, \begin{equation}\label{eq:3.9} S=\sum_{\substack{{\mathbf{c}} \in {\mathbb F}_{q}^n \\ {\mathbf{d}}_A \in {\mathbb F}_q^{|A|} \text{,}{\mathbf{d}}_X \in {\mathbb F}_q^{|X|}}} q^{|Z|+|B|} \cdot \overline{\varphi_i ({\mathbf{c}})} \varphi_j ({\mathbf{d}}_A,{\mathbf{d}}_X,{\mathbf{c}}_Z,{\mathbf{c}}_B) \cdot \pi \text{,} \end{equation} where \begin{equation*} \pi = \eta^{T({\mathbf{v}}_A \cdot({\mathbf{d}}_A-{\mathbf{c}}_A)+{\mathbf{v}}_X \cdot({\mathbf{d}}_X-{\mathbf{c}}_X)+{\mathbf{d}}_A \cdot {\mathbf{b}}_A+{\mathbf{c}}_Z \cdot {\mathbf{b}}_Z)}. \end{equation*} Now, we let $k=n-d_x+1$, ${\mathbf{a}}_A={\mathbf{d}}_A-{\mathbf{c}}_A$, and ${\mathbf{a}}_X={\mathbf{d}}_X-{\mathbf{c}}_X$. Splitting up the summation once again yields \begin{multline}\label{eq:3.10} S=q^k \sum_{\substack{{\mathbf{c}}_A,{\mathbf{a}}_A \in {\mathbb F}_{q}^{|A|} \\ {\mathbf{c}}_Z \in {\mathbb F}_q^{|Z|} \text{,}{\mathbf{a}}_X \in {\mathbb F}_q^{|X|}}} \eta^{T({\mathbf{v}}_A \cdot {\mathbf{a}}_A+{\mathbf{v}}_X \cdot{\mathbf{a}}_X+{\mathbf{b}}_A \cdot ({\mathbf{c}}_A + {\mathbf{a}}_A) +{\mathbf{c}}_Z \cdot {\mathbf{b}}_Z)} \\ \cdot \sum_{\substack{ {\mathbf{c}}_X \in {\mathbb F}_q^{|X|} \text{,}\\ {\mathbf{c}}_B \in {\mathbb F}_q^{|B|}}} \overline{\varphi_i ({\mathbf{c}}_A,{\mathbf{c}}_X,{\mathbf{c}}_Z,{\mathbf{c}}_B)} \varphi_j({\mathbf{c}}_A+{\mathbf{a}}_A,{\mathbf{c}}_X + {\mathbf{a}}_X,{\mathbf{c}}_Z,{\mathbf{c}}_B) \text{.} \end{multline} Invoking Equation (\ref{eq:3.3}) concludes the proof for the first part with $I'$ given by \begin{equation}\label{eq:3.11} I'=q^k I \sum_{\substack{{\mathbf{c}}_A,{\mathbf{a}}_A \in {\mathbb F}_{q}^{|A|} \\ {\mathbf{c}}_Z \in {\mathbb F}_q^{|Z|} \text{,}{\mathbf{a}}_X \in {\mathbb F}_q^{|X|}}} \eta^{T({\mathbf{v}}_A \cdot {\mathbf{a}}_A+{\mathbf{v}}_X \cdot{\mathbf{a}}_X+{\mathbf{b}}_A \cdot ({\mathbf{c}}_A + {\mathbf{a}}_A) +{\mathbf{c}}_Z \cdot {\mathbf{b}}_Z)} \text{.} \end{equation} For the second part, let us assume the existence of a pure $((n,K,a/b))_q$ asymmetric quantum code $Q$. Note that the Fourier transformations $\Phi_i$ for $1 \leq i\leq K$ are linearly independent. 
We use Equations (\ref{eq:3.10}) and (\ref{eq:3.4}) to establish the equality
\begin{multline}\label{eq:3.12}
S = \sum_{\substack{ {\mathbf{v}}_Z \in {\mathbb F}_q^{|Z|} \text{,}\\ {\mathbf{v}}_B \in {\mathbb F}_q^{|B|}}} \overline{\Phi_i ({\mathbf{v}})} \Phi_j({\mathbf{v}}_A+{\mathbf{b}}_A,{\mathbf{v}}_X,{\mathbf{v}}_Z+{\mathbf{b}}_Z,{\mathbf{v}}_B) \\
= \begin{cases} 0 &\text{for $({\mathbf{b}}_A,{\mathbf{b}}_Z) \neq ({\mathbf{0}},{\mathbf{0}})$,} \\ q^{n} \frac{(\varphi_i,\varphi_j)}{q^{d_x-1}} &\text{for $({\mathbf{b}}_A,{\mathbf{b}}_Z) = ({\mathbf{0}},{\mathbf{0}})$.} \end{cases}
\end{multline}
Consider the term
\begin{equation*}
M:=\sum_{\substack{ {\mathbf{c}}_X \in {\mathbb F}_q^{|X|} \text{,}\\ {\mathbf{c}}_B \in {\mathbb F}_q^{|B|}}} \overline{\varphi_i ({\mathbf{c}})} \varphi_j({\mathbf{c}}_A+{\mathbf{a}}_A,{\mathbf{c}}_X + {\mathbf{a}}_X,{\mathbf{c}}_Z,{\mathbf{c}}_B)
\end{equation*}
in Equation (\ref{eq:3.10}). By the purity assumption, for $({\mathbf{a}}_A,{\mathbf{a}}_X) \neq ({\mathbf{0}},{\mathbf{0}})$, $M=0$. For $({\mathbf{a}}_A,{\mathbf{a}}_X)=({\mathbf{0}},{\mathbf{0}})$, $M=\frac{(\varphi_i,\varphi_j)}{q^{d_{z}-1}}$. Hence,
\begin{equation}\label{eq:3.13}
S=q^k \sum_{\substack{{\mathbf{c}}_A \in {\mathbb F}_{q}^{|A|} \text{,}\\ {\mathbf{c}}_Z \in {\mathbb F}_q^{|Z|}}} \eta^{T({\mathbf{b}}_A \cdot {\mathbf{c}}_A + {\mathbf{b}}_Z \cdot {\mathbf{c}}_Z)} \cdot \frac{(\varphi_i,\varphi_j)}{q^{d_z-1}} \text{.}
\end{equation}
By orthogonality of characters, if $({\mathbf{b}}_A,{\mathbf{b}}_Z) \neq ({\mathbf{0}},{\mathbf{0}})$, then
\begin{equation*}
\sum_{\substack{{\mathbf{c}}_A \in {\mathbb F}_{q}^{|A|} \text{,}\\ {\mathbf{c}}_Z \in {\mathbb F}_q^{|Z|}}} \eta^{T({\mathbf{b}}_A \cdot {\mathbf{c}}_A + {\mathbf{b}}_Z \cdot {\mathbf{c}}_Z)} = 0 \text{,}
\end{equation*}
making $S=0$. If $({\mathbf{b}}_A,{\mathbf{b}}_Z)=({\mathbf{0}},{\mathbf{0}})$, then
\begin{equation*}
S = q^k \cdot q^{|A|+|Z|} \cdot \frac{(\varphi_i,\varphi_j)}{q^{d_z-1}} \text{.}
\end{equation*}
This completes the proof of the second part.
\end{proof}

With this result, we may assume $d_z \geq d_x$ from here on without loss of generality.

\begin{rem}\label{rem3.3} A close examination of the proof of Theorem~\ref{thm:3.1} above, as presented in Theorem 3.1 of~\cite{WFLX09}, shows that only the additive property (rather than full linearity) is used. We will show that the conclusion of the theorem, with an adjusted value for $K$, still follows if we use \underline{classical additive codes} instead of linear codes.
\end{rem}

\begin{thm}\label{thm:3.4} Let $d_x$ and $d_z$ be positive integers. Let $C$ be a classical additive code in ${\mathbb F}_q^n$. Assume that $d^{\perp_{\mathop{{\rm tr}}}}= d(C^{\perp_{\mathop{{\rm tr}}}})$ is the minimum distance of the dual code $C^{\perp_{\mathop{{\rm tr}}}}$ of $C$ under the trace Hermitian inner product. For a set $V:=\left\lbrace {\mathbf{v}}_i : 1\leq i \leq K\right\rbrace $ of $K$ distinct vectors in ${\mathbb F}_q^n$, let $d_v:=\mathop{{\rm min}}\left\lbrace \mathop{{\rm wt}}_{H}({\mathbf{v}}_i - {\mathbf{v}}_j + {\mathbf{c}}) : 1 \leq i \neq j \leq K, {\mathbf{c}} \in C\right\rbrace$. If $d^{\perp_{\mathop{{\rm tr}}}} \geq d_z$ and $d_v \geq d_x$, then there exists an asymmetric quantum code $Q$ with parameters $((n,K,d_z/d_x))_q$.
\end{thm} \begin{proof} Define the following functions \begin{equation}\label{eq:3.14} \begin{aligned} \varphi_i :\quad &{\mathbb F}_q^n \rightarrow \mathbb{C} \text{ for } 1\leq i \leq K \\ &{\mathbf{u}} \mapsto \begin{cases} 1 &\text{if ${\mathbf{u}} \in {\mathbf{v}}_i+C$,} \\ 0 &\text{if ${\mathbf{u}} \not \in {\mathbf{v}}_i+C$.} \end{cases} \end{aligned} \end{equation} For each $d$ such that $1 \leq d \leq \mathop{{\rm min}}\left\lbrace d_x,d_z\right\rbrace $ and a partition of $\left\lbrace 1,2,\ldots,n \right\rbrace $ given in Equation (\ref{eq:3.2}), \begin{equation*} \overline{\varphi_i ({\mathbf{c}}_A,{\mathbf{c}}_X,{\mathbf{c}}_Z,{\mathbf{c}}_B)} \varphi_j({\mathbf{c}}_A+{\mathbf{a}}_A,{\mathbf{c}}_X + {\mathbf{a}}_X,{\mathbf{c}}_Z,{\mathbf{c}}_B) \neq 0 \end{equation*} if and only if \begin{equation*} \begin{cases} ({\mathbf{c}}_A,{\mathbf{c}}_X,{\mathbf{c}}_Z,{\mathbf{c}}_B) &\in {\mathbf{v}}_i+C \text{,} \\ ({\mathbf{c}}_A+{\mathbf{a}}_A,{\mathbf{c}}_X +{\mathbf{a}}_X,{\mathbf{c}}_Z,{\mathbf{c}}_B) &\in {\mathbf{v}}_j+C \text{,} \end{cases} \end{equation*} which, in turn, is equivalent to \begin{equation}\label{eq:3.15} \begin{cases} ({\mathbf{c}}_A,{\mathbf{c}}_X,{\mathbf{c}}_Z,{\mathbf{c}}_B) &\in {\mathbf{v}}_i+C \text{,} \\ ({\mathbf{a}}_A,{\mathbf{a}}_X,{\mathbf{0}}_Z,{\mathbf{0}}_B) &\in {\mathbf{v}}_j-{\mathbf{v}}_i+C \text{.} \end{cases} \end{equation} Note that since $\textnormal{{\rm wt}}_{H}({\mathbf{a}}_A,{\mathbf{a}}_X,{\mathbf{0}}_Z,{\mathbf{0}}_B) \leq |A|+|X| = d_x-1$, we know that $({\mathbf{a}}_A,{\mathbf{a}}_X,{\mathbf{0}}_Z,{\mathbf{0}}_B) \in {\mathbf{v}}_j-{\mathbf{v}}_i+C$ means $i=j$ by the definition of $d_v$ above. Thus, if $i \neq j$, \begin{equation}\label{eq:3.16} \sum_{\substack{ {\mathbf{c}}_X \in {\mathbb F}_q^{|X|} \\ {\mathbf{c}}_B \in {\mathbb F}_q^{|B|}}} \overline{\varphi_i ({\mathbf{c}}_A,{\mathbf{c}}_X,{\mathbf{c}}_Z,{\mathbf{c}}_B)} \varphi_j({\mathbf{c}}_A+{\mathbf{a}}_A,{\mathbf{c}}_X + {\mathbf{a}}_X,{\mathbf{c}}_Z,{\mathbf{c}}_B) = 0 \text{.} \end{equation} Now, consider the case of $i = j$. By Equation (\ref{eq:3.15}), if $({\mathbf{a}}_A,{\mathbf{a}}_X,{\mathbf{0}}_Z,{\mathbf{0}}_B) \not \in C$, then it has no contribution to the sum we are interested in. If $({\mathbf{a}}_A,{\mathbf{a}}_X,{\mathbf{0}}_Z,{\mathbf{0}}_B) \in C$, then \begin{multline}\label{eq:3.17} \sum_{\substack{ {\mathbf{c}}_X \in {\mathbb F}_q^{|X|} \\ {\mathbf{c}}_B \in {\mathbb F}_q^{|B|}}} \overline{\varphi_i ({\mathbf{c}}_A,{\mathbf{c}}_X,{\mathbf{c}}_Z,{\mathbf{c}}_B)} \varphi_i({\mathbf{c}}_A+{\mathbf{a}}_A,{\mathbf{c}}_X + {\mathbf{a}}_X,{\mathbf{c}}_Z,{\mathbf{c}}_B) \\ = \sum_{\begin{subarray}{c} {\mathbf{c}}_X \in {\mathbb F}_q^{|X|}, {\mathbf{c}}_B \in {\mathbb F}_q^{|B|} \\ ({\mathbf{c}}_A,{\mathbf{c}}_X,{\mathbf{c}}_Z,{\mathbf{c}}_B) \in {\mathbf{v}}_i+C \end{subarray}} 1 \text{.} \end{multline} Proposition~\ref{OA} above tells us that, if $C$ is any classical $q$-ary code of length $n$ and size $M$ such that the minimum distance $d^{\perp}$ of its dual is greater than or equal to $d_z$, then any coset of $C$ is an orthogonal array of level $q$ and of strength exactly $d_z-1$. In other words, there are exactly $\frac{|C|}{q^{d_z-1}}$ vectors $({\mathbf{c}}_A,{\mathbf{c}}_X,{\mathbf{c}}_Z,{\mathbf{c}}_B) \in {\mathbf{v}}_i+C$ for any fixed $({\mathbf{c}}_A,{\mathbf{c}}_Z) \in {\mathbb F}_q^{d_z-1}$. Thus, for $i = j$, the sum on the right hand side of Equation (\ref{eq:3.17}) is $\frac{|C|}{q^{d_z-1}}$, which is independent of $i$. 
By Theorem~\ref{thm:3.1}, we have an asymmetric quantum code $Q$ with parameters $((n,K,d_z/d_x))_q$.
\end{proof}

\begin{thm}\label{thm:3.5} Let $q=r^2$ be an even power of a prime $p$. For $i = 1,2$, let $C_i$ be a classical additive code with parameters $(n,K_i,d_i)_q$. If $C_1^{\perp_{\mathop{{\rm tr}}}} \subseteq C_2$, then there exists an asymmetric quantum code $Q$ with parameters $((n,\frac{|C_2|}{|C_1^{\perp_{\mathop{{\rm tr}}}}|},d_z/d_x))_q$ where $\left\lbrace d_z,d_x\right\rbrace = \left\lbrace d_1,d_2\right\rbrace$.
\end{thm}

\begin{proof} We take $C = C_1^{\perp_{\mathop{{\rm tr}}}}$ in Theorem~\ref{thm:3.4} above. Since $C_1^{\perp_{\mathop{{\rm tr}}}} \subseteq C_2$, we have $C_2 = C_1^{\perp_{\mathop{{\rm tr}}}} \oplus C'$, where $C'$ is an ${\mathbb F}_r$-submodule of $C_2$ and $\oplus$ is the direct sum so that $|C'|=\frac{|C_2|}{|C_1^{\perp_{\mathop{{\rm tr}}}}|}$. Let $C' = \left\lbrace {\mathbf{v}}_1,\ldots,{\mathbf{v}}_K\right\rbrace$, where $K = \frac{|C_2|}{|C_1^{\perp_{\mathop{{\rm tr}}}}|}$. Then
\begin{align*}
d^{\perp_{\mathop{{\rm tr}}}} &= d(C^{\perp_{\mathop{{\rm tr}}}}) = d(C_1) = d_1 \text{ and} \\
d_v &= \mathop{{\rm min}}\left\lbrace \textnormal{{\rm wt}}_{H}({\mathbf{v}}_i - {\mathbf{v}}_j + {\mathbf{c}}) : 1 \leq i \neq j \leq K, {\mathbf{c}} \in C \right\rbrace \\
&= \mathop{{\rm min}} \left\lbrace \textnormal{{\rm wt}}_{H}({\mathbf{v}} + {\mathbf{c}}) : {\mathbf{0}} \neq {\mathbf{v}} \in C', {\mathbf{c}} \in C_1^{\perp_{\mathop{{\rm tr}}}} \right\rbrace \geq d_2 \text{.}
\end{align*}
\end{proof}

Theorem~\ref{thm:3.5} can now be used to construct quantum codes. In this paper, all computations are done in MAGMA~\cite{BCP97} version V2.16-5. The construction method of Theorem~\ref{thm:3.5} falls into what some have labelled the \textit{CSS-type construction}. It is noted in~\cite[Lemma 3.3]{SRK09} that any CSS-type ${\mathbb F}_{q}$-linear $[[n,k,d_{z}/d_{x}]]_{q}$-code satisfies the quantum version of the Singleton bound
\begin{equation*}
k \leq n-d_{x}-d_{z}+2 \text{.}
\end{equation*}
This bound is conjectured to hold for all asymmetric quantum codes. Some of our codes in later sections attain $k = n-d_{x}-d_{z}+2$. They are printed in boldface throughout the tables and examples.

\section{Additive Codes over ${\mathbb F}_4$}\label{sec:AdditiveF4}
Let ${\mathbb F}_4:=\left\lbrace 0,1,\omega,\omega^{2}=\overline{\omega}\right\rbrace $. For $x \in {\mathbb F}_{4}$, the conjugate of $x$ is $\overline{x}=x^{2}$. By definition, an additive code $C$ of length $n$ over ${\mathbb F}_{4}$ is a free ${\mathbb F}_{2}$-module. It has size $2^{l}$ for some $0 \leq l \leq 2n$. As an ${\mathbb F}_{2}$-module, $C$ has a basis consisting of $l$ vectors. A \textit{generator matrix} of $C$ is an $l \times n$ matrix whose entries are elements of ${\mathbb F}_{4}$ and whose rows form a basis of $C$. Additive codes over ${\mathbb F}_4$ equipped with the trace Hermitian inner product have been studied primarily in connection to designs (e.g.~\cite{KP03}) and to stabilizer quantum codes (e.g.~\cite{GHKP01} and~\cite[Sec. 9.10]{HP03}). It is well known that if $C$ is an additive $(n,2^l)_4$-code, then $C^{\perp_{\mathop{{\rm tr}}}}$ is an additive $(n,2^{2n-l})_4$-code.
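Since the constructions below repeatedly use the trace Hermitian inner product of Equation (\ref{eq:1.2}) with $q=4$, we illustrate the arithmetic with a minimal Python sketch. The integer encoding of ${\mathbb F}_{4}$ and the helper names are our own illustrative conventions; the actual computations in this paper are done in MAGMA.
\begin{verbatim}
# Minimal sketch of F_4 arithmetic and the trace Hermitian inner product.
# Encode F_4 = {0, 1, w, w^2} as {0, 1, 2, 3}; addition is then XOR, and
# multiplication of nonzero elements uses discrete logarithms base w.
LOG = {1: 0, 2: 1, 3: 2}
EXP = {0: 1, 1: 2, 2: 3}

def mul(x, y):
    return 0 if 0 in (x, y) else EXP[(LOG[x] + LOG[y]) % 3]

def conj(x):          # conjugation x -> x^2 on F_4
    return mul(x, x)

def tr_hermitian(u, v):
    """Trace Hermitian product: sum of u_i*conj(v_i) + conj(u_i)*v_i."""
    s = 0
    for ui, vi in zip(u, v):
        s ^= mul(ui, conj(vi)) ^ mul(conj(ui), vi)
    return s

# The rows of the identity matrix I_n generate a self-dual (n, 2^n, 1)_4
# code: the generators are pairwise orthogonal and self-orthogonal.
n = 4
gens = [tuple(1 if j == i else 0 for j in range(n)) for i in range(n)]
print(all(tr_hermitian(g, h) == 0 for g in gens for h in gens))  # True
print(tr_hermitian((2,), (1,)))  # 1, so the form is not identically zero
\end{verbatim}
The same encoding is reused in the sketches below.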
To compute the weight enumerator of $C^{\perp_{\mathop{{\rm tr}}}}$ we use Equation (\ref{eq:MacW}) with $q=4$:
\begin{equation}\label{eq:4.1}
W_{C^{\perp_{\mathop{{\rm tr}}}}}(X,Y) = \frac{1}{|C|} W_C(X+3Y,X-Y) \text{.}
\end{equation}

\begin{rem}\label{rem:4.1} If the code $C$ is ${\mathbb F}_4$-linear with parameters $[n,k,d]_4$, then $C^{\perp_{\mathop{{\rm H}}}} = C^{\perp_{\mathop{{\rm tr}}}}$. This is because Hermitian orthogonality implies trace Hermitian orthogonality, so that $C^{\perp_{\mathop{{\rm H}}}} \subseteq C^{\perp_{\mathop{{\rm tr}}}}$, and $C^{\perp_{\mathop{{\rm H}}}}$ is of size $4^{n-k} = 2^{2n-2k}$, which is also the size of $C^{\perp_{\mathop{{\rm tr}}}}$. Alternatively, one can invoke~\cite[Th. 3]{CRSS98}.
\end{rem}

From here on, we assume the trace Hermitian inner product whenever additive ${\mathbb F}_{4}$ codes are discussed and the Hermitian inner product whenever ${\mathbb F}_{4}$-linear codes are used.

Two additive codes $C_1$ and $C_2$ over ${\mathbb F}_{4}$ are said to be \textit{equivalent} if there is a map sending the codewords of one code onto the codewords of the other, where the map consists of a permutation of coordinates, followed by a scaling of coordinates by elements of ${\mathbb F}_{4}$, followed by a conjugation of the entries of some of the coordinates.

\section{Construction from Extremal or Optimal Additive Self-Dual Codes over ${\mathbb F}_4$} \label{sec:ExtremalSD}
As a direct consequence of Theorem~\ref{thm:3.5}, we have the following result.

\begin{thm}\label{thm:5.1} If $C$ is an additive self-dual code of parameters $(n,2^n,d)_4$, then there exists an $[[n,0,d_z/d_x]]_4$ asymmetric quantum code $Q$ with $d_z = d_x = d(C^{\perp_{\mathop{{\rm tr}}}})$.
\end{thm}

Additive self-dual codes over ${\mathbb F}_{4}$ exist for any length $n$ since the identity matrix $I_{n}$ clearly generates a self-dual $(n,2^{n},1)_{4}$-code. Any linear self-dual $[n,n/2,d]_{4}$-code is also an additive self-dual $(n,2^{n},d)_{4}$-code.

\begin{defn}\label{defn:5.2} A self-dual $(n,2^n,d)_4$-code $C$ is \textit{Type II} if all of its codewords have even weight. If $C$ has a codeword of odd weight, then $C$ is \textit{Type I}.
\end{defn}

It is known (see~\cite[Sec. 4.2]{RS98}) that Type II codes of length $n$ exist only if $n$ is even and that a Type I code is not ${\mathbb F}_4$-linear. There is a bound in~\cite[Th. 33]{RS98} on the minimum weight of an additive self-dual code. If $d_{I}$ and $d_{II}$ are the minimum weights of Type I and Type II codes of length $n$, respectively, then
\begin{equation}
\begin{aligned}
d_{I} &\leq \begin{cases} 2 \lfloor \frac{n}{6} \rfloor + 1 \text{,} \hfil \text{ if } n \equiv 0 \pmod{6} \\ 2 \lfloor \frac{n}{6} \rfloor + 3 \text{,} \hfil \text{ if } n \equiv 5 \pmod{6} \\ 2 \lfloor \frac{n}{6} \rfloor + 2 \text{,} \hfil \text{otherwise} \end{cases} \text{,}\\
d_{II} &\leq 2 \lfloor \frac{n}{6} \rfloor + 2 \text{.}
\end{aligned}
\end{equation}
A code that meets the appropriate bound is called \textit{extremal}. If a code is not extremal yet no code of the given type can exist with a larger minimum weight, then we call the code \textit{optimal}.

The complete classification, up to equivalence, of additive self-dual codes over ${\mathbb F}_{4}$ up to $n=12$ can be found in~\cite{DP06}. The classification of extremal codes of lengths $n=13$ and $n=14$ is presented in~\cite{Var07}. Many examples of good additive codes for larger values of $n$ are presented in~\cite{GK04},~\cite{Var07}, and~\cite{Var09}. Table~\ref{table:SDCodes} summarizes the results thus far and lists the resulting asymmetric quantum codes for lengths up to $n=30$.
The subscripts $_{I}$ and $_{II}$ indicate the types of the codes. The superscripts $^{e},^{o},^{b}$ indicate that the minimum distance $d$ is extremal, optimal, or best-known (not necessarily extremal or optimal), respectively. The number of codes for each set of given parameters is listed in the column under the heading \textbf{num}.

\begin{table*}
\caption{Best-Known Additive Self-Dual Codes over ${\mathbb F}_{4}$ for $n \leq 30$ and the Resulting Asymmetric Quantum Codes}
\label{table:SDCodes}
\centering
\begin{tabular}{|| c | c | c | c | l | c | c | c | l || c | c | c | c | l ||}
\hline
\textbf{$n$} & \textbf{$d_{I}$} &\textbf{num$_{I}$} & \textbf{Ref.} & \textbf{Code $Q$} & \textbf{$d_{II}$} &\textbf{num$_{II}$} & \textbf{Ref.} & \textbf{Code $Q$} & \textbf{$n$} & \textbf{$d_{I}$} &\textbf{num$_{I}$} & \textbf{Ref.} & \textbf{Code $Q$} \\
\hline
$2$ & $1^{o}$ & 1 & \cite{DP06} & $[[2,0,1/1]]_{4}$ & $2^{e}$ & 1 & \cite{DP06} & $\mathbf{[[2,0,2/2]]_{4}}$ & $3$ & $2^{e}$ & 1 & \cite{DP06} & $[[3,0,2/2]]_{4}$ \\
$4$ & $2^{e}$ & 1 & \cite{DP06} & $[[4,0,2/2]]_{4}$ & $2^{e}$ & 2 & \cite{DP06} & $[[4,0,2/2]]_{4}$ & $5$ & $3^{e}$ & 1 & \cite{DP06} & $[[5,0,3/3]]_{4}$ \\
$6$ & $3^{e}$ & 1 & \cite{DP06} & $[[6,0,3/3]]_{4}$ & $4^{e}$ & 1 & \cite{DP06} & $\mathbf{[[6,0,4/4]]_{4}}$ & $7$ & $3^{o}$ & 4 & \cite{DP06} & $[[7,0,3/3]]_{4}$ \\
$8$ & $4^{e}$ & 2 & \cite{DP06} & $[[8,0,4/4]]_{4}$ & $4^{e}$ & 3 & \cite{DP06} & $[[8,0,4/4]]_{4}$ & $9$ & $4^{e}$ & 8 & \cite{DP06} & $[[9,0,4/4]]_{4}$ \\
$10$ & $4^{e}$ & 101 & \cite{DP06} & $[[10,0,4/4]]_{4}$ & $4^{e}$ & 19 & \cite{DP06} & $[[10,0,4/4]]_{4}$ & $11$ & $5^{e}$ & 1 & \cite{DP06} & $[[11,0,5/5]]_{4}$\\
$12$ & $5^{e}$ & 63 & \cite{DP06} & $[[12,0,5/5]]_{4}$ & $6^{e}$ & 1 & \cite{DP06} & $[[12,0,6/6]]_{4}$ & $13$ & $5^{o}$ & 85845 & \cite{Var07} & $[[13,0,5/5]]_{4}$ \\
$14$ & $6^{e}$ & 2 & \cite{Var07} & $[[14,0,6/6]]_{4}$ & $6^{e}$ & 1020 & \cite{DP06} & $[[14,0,6/6]]_{4}$ & $15$ & $6^{e}$ & $\geq 2118$ & \cite{Var07} & $[[15,0,6/6]]_{4}$ \\
$16$ & $6^{e}$ & $\geq 8369$ & \cite{Var07} & $[[16,0,6/6]]_{4}$ & $6^{e}$ & $\geq 112$ & \cite{Var07} & $[[16,0,6/6]]_{4}$ & $17$ & $7^{e}$ & $\geq 2$ & \cite{Var07} & $[[17,0,7/7]]_{4}$ \\
$18$ & $7^{e}$ & $\geq 2$ & \cite{Var07} & $[[18,0,7/7]]_{4}$ & $8^{e}$ & $\geq 1$ & \cite{Var07} & $[[18,0,8/8]]_{4}$ & $19$ & $7^{b}$ & $\geq 17$ & \cite{Var07} & $[[19,0,7/7]]_{4}$ \\
$20$ & $8^{e}$ & $\geq 3$ & \cite{GK04} & $[[20,0,8/8]]_{4}$ & $8^{e}$ & $\geq 5$ & \cite{GK04} & $[[20,0,8/8]]_{4}$ & $21$ & $8^{e}$ & $\geq 2$ & \cite{Var07} & $[[21,0,8/8]]_{4}$ \\
$22$ & $8^{e}$ & $\geq 1$ & \cite{GK04} & $[[22,0,8/8]]_{4}$ & $8^{e}$ & $\geq 67$ & \cite{GK04} & $[[22,0,8/8]]_{4}$ & $23$ & $8^{b}$ & $\geq 2$ & \cite{GK04} & $[[23,0,8/8]]_{4}$ \\
$24$ & $8^{b}$ & $\geq 5$ & \cite{Var09} & $[[24,0,8/8]]_{4}$ & $8^{b}$ & $\geq 51$ & \cite{GK04} & $[[24,0,8/8]]_{4}$ & $25$ & $8^{b}$ & $\geq 30$ & \cite{GK04} & $[[25,0,8/8]]_{4}$ \\
$26$ & $8^{b}$ & $\geq 49$ & \cite{Var09} & $[[26,0,8/8]]_{4}$ & $8^{b}$ & $\geq 161$ & \cite{Var09} & $[[26,0,8/8]]_{4}$ & $27$ & $8^{b}$ & $\geq 15$ & \cite{GK04} & $[[27,0,9/9]]_{4}$ \\
$28$ & $10$ & ? & \cite{GK04} & & $10^{e}$ & $\geq 1$ & \cite{GK04} & $[[28,0,10/10]]_{4}$ & $29$ & $11^{e}$ & $\geq 1$ & \cite{GK04} & $[[29,0,11/11]]_{4}$ \\
$30$ & $11$ & ?
& \cite{GK04} & & $12^{e}$ & $\geq 1$ & \cite{GK04} & $[[30,0,12/12]]_{4}$ & & & & & \\
\hline
\end{tabular}
\end{table*}

\begin{rem}\label{rem:5.3}
\begin{enumerate}
\item The unique additive $(12,2^{12},6)_4$-code is also known as the \textit{dodecacode}. It is well known that the best Hermitian self-dual linear code of this length has parameters $[12,6,4]_4$.
\item In~\cite{Var09}, four so-called \textit{additive circulant graph codes} of parameters $(30,2^{30},12)_4$ are constructed without a complete classification. It is not yet known whether any of these four codes is inequivalent to the one listed in~\cite{GK04}.
\end{enumerate}
\end{rem}

\section{Construction from Self-Orthogonal Linear Codes} \label{sec:HSelfOrtho}
It is well known (see~\cite[Th. 1.4.10]{HP03}) that a linear code $C$ with parameters $[n,k,d]_4$ is Hermitian self-orthogonal if and only if the weights of its codewords are all even.

\begin{thm}\label{thm:6.1} If $C$ is a Hermitian self-orthogonal code of parameters $[n,k,d]_4$, then there exists an asymmetric quantum code $Q$ with parameters $[[n,n-2k,d_z/d_x]]_4$, where
\begin{equation}\label{eq:4.2}
d_x = d_z = d(C^{\perp_{\mathop{{\rm H}}}})\text{.}
\end{equation}
\end{thm}

\begin{proof} Seen as an additive code, $C$ is of parameters $(n,2^{2k},d)_4$ with $C^{\perp_{\mathop{{\rm tr}}}}$ being the code $C^{\perp_{\mathop{{\rm H}}}}$ seen as an $(n,2^{2n-2k},d^{\perp_{\mathop{{\rm tr}}}})$ additive code (see Remark~\ref{rem:4.1}). Applying Theorem~\ref{thm:3.5} by taking $C_1 = C^{\perp_{\mathop{{\rm tr}}}} = C_2$ to satisfy $C_1^{\perp_{\mathop{{\rm tr}}}} \subseteq C_2$ completes the proof.
\end{proof}

\begin{ex}\label{ex:4.3} Let $n$ be an even positive integer. Consider the repetition code $[n,1,n]_4$ with weight enumerator $X^{n}+3Y^{n}$. Since the weights are all even, this ${\mathbb F}_{4}$-linear code is Hermitian self-orthogonal. We then have a quantum code $Q$ with parameters $\mathbf{[[n,n-2,2/2]]_{4}}$.
\end{ex}

Table~\ref{table:ClassSO} below presents the resulting asymmetric quantum codes based on the classification of self-orthogonal ${\mathbb F}_{4}$-linear codes of lengths up to 29 and dimensions 3 up to 6, as presented in~\cite{BO05}. I.~Bouyukliev~\cite{Bou09} shared with us the original data used in that classification, plus some additional results for lengths 30 and 31. Given a fixed length $n$ and dimension $k$, we only consider $[n,k,d]_4$-codes $C$ with the maximal possible value for the minimum distance of their duals. For example, among the 12 self-orthogonal $[10,4,4]_4$-codes, there are 4 distinct codes with $d^{\perp_{\mathop{{\rm H}}}}=3$ while the remaining 8 codes have $d^{\perp_{\mathop{{\rm H}}}}=2$. We take only those 4 codes. The number of distinct codes that can be used for the construction of the asymmetric quantum codes for each set of given parameters is listed in the fourth column of the table.
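The even-weight criterion above and Example~\ref{ex:4.3} are easy to check computationally; the following minimal Python sketch (the integer encoding of ${\mathbb F}_{4}$ and the helper names are our illustrative conventions, as before) verifies Hermitian self-orthogonality of the repetition code for an even length and shows that it fails for an odd one.
\begin{verbatim}
# Minimal sketch: for even n, the [n,1,n]_4 repetition code is Hermitian
# self-orthogonal. F_4 = {0,1,w,w^2} is encoded as {0,1,2,3}; addition
# is XOR, multiplication uses discrete logarithms base w.
LOG = {1: 0, 2: 1, 3: 2}
EXP = {0: 1, 1: 2, 2: 3}

def mul(x, y):
    return 0 if 0 in (x, y) else EXP[(LOG[x] + LOG[y]) % 3]

def hermitian(u, v):
    """Hermitian product over F_4: sum of u_i * v_i^2."""
    s = 0
    for ui, vi in zip(u, v):
        s ^= mul(ui, mul(vi, vi))
    return s

n = 6                                      # any even length works
rep = [tuple([c] * n) for c in range(4)]   # the four codewords
print(all(hermitian(u, v) == 0 for u in rep for v in rep))  # True
print(hermitian((1,) * 5, (1,) * 5))       # 1: the odd length n = 5 fails
\end{verbatim}
By Theorem~\ref{thm:6.1}, the even-length check corresponds to the $\mathbf{[[n,n-2,2/2]]_{4}}$ codes of Example~\ref{ex:4.3}.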
\begin{table}
\caption{Asymmetric QECC from Classified Hermitian Self-Orthogonal ${\mathbb F}_{4}$-Linear Codes in~\cite{BO05}}
\label{table:ClassSO}
\centering
\begin{tabular}{| c | l | l | c |}
\hline
\textbf{No.} & \textbf{Code $C$} &\textbf{Code $Q$} & \textbf{num} \\
\hline
1 & $[6,3,4]_4$ & $[[6,0,3/3]]_4$ & $1$ \\
2 & $[7,3,4]_4$ & $[[7,1,3/3]]_4$ & $1$ \\
3 & $[8,3,4]_4$ & $[[8,2,3/3]]_4$ & $1$ \\
4 & $[8,4,4]_4$ & $[[8,0,4/4]]_4$ & $1$ \\
5 & $[9,3,6]_4$ & $[[9,3,3/3]]_4$ & $1$ \\
6 & $[9,4,4]_4$ & $[[9,1,3/3]]_4$ & $2$ \\
7 & $[10,3,6]_4$ & $[[10,4,3/3]]_4$ & $1$ \\
8 & $[10,4,4]_4$ & $[[10,2,3/3]]_4$ & $3$ \\
9 & $[10,5,4]_4$ & $[[10,0,4/4]]_4$ & $2$ \\
10 & $[11,3,6]_4$ & $[[11,5,3/3]]_4$ & $1$ \\
11 & $[11,4,4]_4$ & $[[11,3,3/3]]_4$ & $3$ \\
12 & $[11,4,6]_4$ & $[[11,3,3/3]]_4$ & $1$ \\
13 & $[11,5,4]_4$ & $[[11,1,3/3]]_4$ & $6$ \\
14 & $[12,3,8]_4$ & $[[12,6,3/3]]_4$ & $1$ \\
15 & $[12,4,6]_4$ & $[[12,4,4/4]]_4$ & $1$ \\
16 & $[12,5,6]_4$ & $[[12,2,4/4]]_4$ & $1$ \\
17 & $[12,6,4]_4$ & $[[12,0,4/4]]_4$ & $5$ \\
18 & $[13,3,8]_4$ & $[[13,7,3/3]]_4$ & $1$ \\
19 & $[13,4,8]_4$ & $[[13,5,3/3]]_4$ & $5$ \\
20 & $[13,5,6]_4$ & $[[13,3,4/4]]_4$ & $1$ \\
21 & $[13,6,6]_4$ & $[[13,1,5/5]]_4$ & $1$ \\
22 & $[14,3,10]_4$ & $[[14,8,3/3]]_4$ & $1$ \\
23 & $[14,4,8]_4$ & $[[14,6,4/4]]_4$ & $1$ \\
24 & $[14,5,8]_4$ & $[[14,4,4/4]]_4$ & $4$ \\
25 & $[14,6,6]_4$ & $[[14,2,5/5]]_4$ & $1$ \\
26 & $[14,7,6]_4$ & $[[14,0,6/6]]_4$ & $1$ \\
27 & $[15,3,10]_4$ & $[[15,9,3/3]]_4$ & $1$ \\
28 & $[15,4,8]_4$ & $[[15,7,3/3]]_4$ & $189$ \\
29 & $[15,5,8]_4$ & $[[15,5,4/4]]_4$ & $26$ \\
30 & $[15,6,8]_4$ & $[[15,3,5/5]]_4$ & $3$ \\
31 & $[16,3,12]_4$ & $[[16,10,3/3]]_4$ & $1$ \\
32 & $[16,4,10]_4$ & $[[16,8,3/3]]_4$ & $38$ \\
33 & $[16,5,8]_4$ & $[[16,6,4/4]]_4$ & $519$ \\
34 & $[16,6,8]_4$ & $[[16,4,4/4]]_4$ & $697$ \\
35 & $[17,3,12]_4$ & $[[17,11,2/2]]_4$ & $4$ \\
36 & $[17,4,12]_4$ & $[[17,9,4/4]]_4$ & $1$ \\
37 & $[17,5,10]_4$ & $[[17,7,4/4]]_4$ & $27$ \\
38 & $[18,3,12]_4$ & $[[18,12,2/2]]_4$ & $45$ \\
39 & $[18,4,12]_4$ & $[[18,10,3/3]]_4$ & $11$ \\
40 & $[18,6,10]_4$ & $[[18,6,5/5]]_4$ & $2$ \\
41 & $[19,3,12]_4$ & $[[19,13,2/2]]_4$ & $185$ \\
42 & $[19,4,12]_4$ & $[[19,11,3/3]]_4$ & $2570$ \\
43 & $[20,3,14]_4$ & $[[20,14,2/2]]_4$ & $10$ \\
44 & $[20,5,12]_4$ & $[[20,10,3/3]]_4$ & $4$ \\
45 & $[21,3,16]_4$ & $[[21,15,3/3]]_4$ & $1$ \\
46 & $[21,4,14]_4$ & $[[21,13,3/3]]_4$ & $212$ \\
47 & $[21,5,12]_4$ & $[[21,11,3/3]]_4$ & $3$ \\
48 & $[22,3,16]_4$ & $[[22,16,3/3]]_4$ & $4$ \\
49 & $[22,5,14]_4$ & $[[22,12,4/4]]_4$ & $67$ \\
50 & $[23,3,16]_4$ & $[[23,17,2/2]]_4$ & $46$ \\
51 & $[23,4,16]_4$ & $[[23,15,3/3]]_4$ & $1$ \\
52 & $[24,3,16]_4$ & $[[24,18,2/2]]_4$ & $614$ \\
53 & $[24,4,16]_4$ & $[[24,16,3/3]]_4$ & $20456$ \\
54 & $[25,3,18]_4$ & $[[25,19,2/2]]_4$ & $6$ \\
55 & $[25,4,16]_4$ & $[[25,17,3/3]]_4$ & $19$ \\
56 & $[26,3,18]_4$ & $[[26,20,2/2]]_4$ & $185$ \\
57 & $[26,4,18]_4$ & $[[26,18,3/3]]_4$ & $14$ \\
58 & $[27,3,20]_4$ & $[[27,21,2/2]]_4$ & $2$ \\
59 & $[28,3,20]_4$ & $[[28,22,2/2]]_4$ & $46$ \\
60 & $[28,4,20]_4$ & $[[28,20,3/3]]_4$ & $1$ \\
61 & $[29,3,20]_4$ & $[[29,23,2/2]]_4$ & $850$ \\
62 & $[29,4,20]_4$ & $[[29,21,3/3]]_4$ & $11365$ \\
63 & $[30,5,20]_4$ & $[[30,20,3/3]]_4$ & $\geq90$ \\
64 & $[31,4,22]_4$ & $[[31,23,3/3]]_4$ & $1$ \\
\hline
\end{tabular}
\end{table}

Comparing some entries in Table~\ref{table:ClassSO}, say, numbers 5 and 6, we notice that the $[[9,3,3/3]]_4$-code has better parameters than the $[[9,1,3/3]]_4$-code.
Both codes are included in the table in the interest of preserving the information on precisely how many such codes there are from the classification result.

In~\cite[Table 7]{GG09}, examples of ${\mathbb F}_{4}$-linear self-dual codes for even lengths $2 \leq n \leq 80$ are presented. Table~\ref{table:ClassSD} lists the resulting asymmetric quantum codes for $32 \leq n \leq 80$.

\begin{table}
\caption{Asymmetric QECC from Hermitian Self-Dual ${\mathbb F}_{4}$-Linear Codes based on~\cite[Table 7]{GG09} for $32 \leq n \leq 80$}
\label{table:ClassSD}
\centering
\begin{tabular}{| l | l || l | l |}
\hline
\textbf{Code $C$} &\textbf{Code $Q$} & \textbf{Code $C$} &\textbf{Code $Q$}\\
\hline
$[32,16,10]_4$ & $[[32,0,10/10]]_4$ & $[58,29,14]_4$ & $[[58,0,14/14]]_4$\\
$[34,17,10]_4$ & $[[34,0,10/10]]_4$ & $[60,30,16]_4$ & $[[60,0,16/16]]_4$\\
$[36,18,12]_4$ & $[[36,0,12/12]]_4$ & $[62,31,18]_4$ & $[[62,0,18/18]]_4$\\
$[38,19,12]_4$ & $[[38,0,12/12]]_4$ & $[64,32,16]_4$ & $[[64,0,16/16]]_4$\\
$[40,20,12]_4$ & $[[40,0,12/12]]_4$ & $[66,33,16]_4$ & $[[66,0,16/16]]_4$\\
$[42,21,12]_4$ & $[[42,0,12/12]]_4$ & $[68,34,18]_4$ & $[[68,0,18/18]]_4$\\
$[44,22,12]_4$ & $[[44,0,12/12]]_4$ & $[70,35,18]_4$ & $[[70,0,18/18]]_4$\\
$[46,23,14]_4$ & $[[46,0,14/14]]_4$ & $[72,36,18]_4$ & $[[72,0,18/18]]_4$\\
$[48,24,14]_4$ & $[[48,0,14/14]]_4$ & $[74,37,18]_4$ & $[[74,0,18/18]]_4$\\
$[50,25,14]_4$ & $[[50,0,14/14]]_4$ & $[76,38,18]_4$ & $[[76,0,18/18]]_4$\\
$[52,26,14]_4$ & $[[52,0,14/14]]_4$ & $[78,39,18]_4$ & $[[78,0,18/18]]_4$\\
$[54,27,16]_4$ & $[[54,0,16/16]]_4$ & $[80,40,20]_4$ & $[[80,0,20/20]]_4$\\
$[56,28,14]_4$ & $[[56,0,14/14]]_4$ & & \\
\hline
\end{tabular}
\end{table}

For parameters other than those listed in Table~\ref{table:ClassSO}, we do not yet have a complete classification. The Q-extension program described in~\cite{Bou07} can be used to extend the classification effort given sufficient resources. Some classifications based on the optimality of the minimum distances of the codes can be found in~\cite{BGV04} and in~\cite{ZLL09}, although, when used in the construction of asymmetric quantum codes within our framework, they do not yield good $d_{z} = d_{x}$ relative to the length $n$.

Many other ${\mathbb F}_{4}$-linear self-orthogonal codes are known. Examples can be found in~\cite[Table II]{CRSS98}, \cite{MLZF07}, as well as from the list of best known linear codes (BKLC) over ${\mathbb F}_4$ as explained in~\cite{Gr09}. Table~\ref{table:RandomSO} presents more examples of asymmetric quantum codes constructed based on known self-orthogonal linear codes up to length $n=40$. The list of codes in Table~\ref{table:RandomSO} is by no means exhaustive. It may be possible to find asymmetric codes with better parameters.
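The passage from a classical self-orthogonal code to the quantum parameters in these tables is mechanical. The following hypothetical Python helper (the function name and the equality check are ours) reproduces the bookkeeping of Theorem~\ref{thm:6.1} and flags codes meeting the quantum Singleton bound $k = n-d_{x}-d_{z}+2$ discussed earlier.
\begin{verbatim}
# Hypothetical helper: a Hermitian self-orthogonal [n,k]_4 code whose
# Hermitian dual has minimum distance dperp yields an
# [[n, n-2k, dperp/dperp]]_4 code by the construction above; we also
# flag codes attaining k_Q = n - d_x - d_z + 2 (boldface in the tables).
def qecc_from_self_orthogonal(n, k, dperp):
    k_q = n - 2 * k
    meets_singleton = (k_q == n - 2 * dperp + 2)
    return (n, k_q, dperp), meets_singleton

print(qecc_from_self_orthogonal(5, 2, 3))   # ((5, 1, 3), True): [[5,1,3/3]]_4
print(qecc_from_self_orthogonal(14, 7, 6))  # ((14, 0, 6), False)
\end{verbatim}
The first call reproduces the boldface $\mathbf{[[5,1,3/3]]_4}$ entry of Table~\ref{table:RandomSO}; the second reproduces its $[[14,0,6/6]]_4$ entry, which does not meet the bound.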
\begin{table} \caption{Asymmetric QECC from Hermitian Self-orthogonal ${\mathbb F}_{4}$-Linear Codes for $n \leq 40$} \label{table:RandomSO} \centering \begin{tabular}{| c | l | l | l |} \hline \textbf{No.} & \textbf{Code $C$} & \textbf{Code $Q$} & \textbf{Ref.} \\ \hline 1 & $[5,2,4]_4$ & $\mathbf{[[5,1,3/3]]_4}$ & \cite[BKLC]{Gr09}\\ 2 & $[6,2,2]_4$ & $[[6,2,2/2]]_4$ & \cite{Bou09}\\ 3 & $[8,2,6]_4$ & $[[8,4,2/2]]_4$ & \cite[BKLC]{Gr09}\\ 4 & $[10,2,8]_4$ & $[[10,6,2/2]]_4$ & \cite[BKLC]{Gr09}\\ 5 & $[14,7,6]_4$ & $[[14,0,6/6]]_4$ & \cite[Table 7]{GG09}\\ 6 & $[15,2,12]_4$ & $[[15,11,2/2]]_4$ & \cite[BKLC]{Gr09}\\ 7 & $[16,8,6]_4$ & $[[16,0,6/6]]_4$ & \cite[Table 7]{GG09}\\ 8 & $[20,2,16]_4$ & $[[20,16,2/2]]_4$ & \cite[BKLC]{Gr09}\\ 9 & $[20,5,12]_4$ & $[[20,10,4/4]]_4$ & \cite[Table II]{CRSS98}\\ 10 & $[20,9,8]_4$ & $[[20,2,6/6]]_4$ & \cite[p.~788]{MLZF07}\\ 11 & $[20,10,8]_4$ & $[[20,0,8/8]]_4$ & \cite[Table 7]{GG09}\\ 12 & $[22,8,10]_4$ & $[[22,6,5/5]]_4$ & \cite{LM09}\\ 13 & $[22,10,8]_4$ & $[[22,2,6/6]]_4$ & \cite{LM09}\\ 14 & $[22,11,8]_4$ & $[[22,0,8/8]]_4$ & \cite[Table 7]{GG09}\\ 15 & $[23,8,10]_4$ & $[[23,7,5/5]]_4$ & \cite{LM09}\\ 16 & $[23,8,12]_4$ & $[[23,7,5/5]]_4$ & \cite[BKLC]{Gr09}\\ 17 & $[23,10,8]_4$ & $[[23,3,6/6]]_4$ & \cite{LM09}\\ 18 & $[24,5,16]_4$ & $[[24,14,3/3]]_4$ & \cite[BKLC]{Gr09}\\ 19 & $[24,8,10]_4$ & $[[24,8,5/5]]_4$ & \cite{LM09}\\ 20 & $[24,9,12]_4$ & $[[24,6,6/6]]_4$ & \cite[BKLC]{Gr09}\\ 21 & $[24,12,8]_4$ & $[[24,0,8/8]]_4$ & \cite[Table 7]{GG09}\\ 22 & $[25,2,20]_4$ & $[[25,21,2/2]]_4$ & \cite[BKLC]{Gr09}\\ 23 & $[25,5,16]_4$ & $[[25,15,4/4]]_4$ & \cite[Table II]{CRSS98}\\ 24 & $[25,8,10]_4$ & $[[25,9,5/5]]_4$ & \cite{LM09}\\ 25 & $[25,10,12]_4$ & $[[25,5,7/7]]_4$ & \cite{LM09}\\ 26 & $[26,2,20]_4$ & $[[26,22,2/2]]_4$ & \cite[BKLC]{Gr09}\\ 27 & $[26,6,16]_4$ & $[[26,14,4/4]]_4$ & \cite[BKLC]{Gr09}\\ 28 & $[26,9,10]_4$ & $[[26,8,5/5]]_4$ & \cite{LM09}\\ 29 & $[26,10,10]_4$ & $[[26,6,6/6]]_4$ & \cite{LM09}\\ 30 & $[26,11,12]_4$ & $[[26,4,8/8]]_4$ & \cite[BKLC]{Gr09}\\ 31 & $[27,6,16]_4$ & $[[27,15,3/3]]_4$ & \cite[BKLC]{Gr09}\\ 32 & $[27,9,10]_4$ & $[[27,9,5/5]]_4$ & \cite{LM09}\\ 33 & $[27,10,10]_4$ & $[[27,7,6/6]]_4$ & \cite{LM09}\\ 34 & $[27,12,12]_4$ & $[[27,3,9/9]]_4$ & \cite[BKLC]{Gr09}\\ 35 & $[28,7,16]_4$ & $[[28,14,5/5]]_4$ & \cite[Table II]{CRSS98}\\ 36 & $[28,8,12]_4$ & $[[28,12,5/5]]_4$ & \cite{LM09}\\ 37 & $[28,10,10]_4$ & $[[28,8,6/6]]_4$ & \cite{LM09}\\ 38 & $[28,13,12]_4$ & $[[28,2,10/10]]_4$ & \cite[BKLC]{Gr09}\\ 39 & $[29,8,16]_4$ & $[[29,13,5/5]]_4$ & \cite[BKLC]{Gr09}\\ 40 & $[29,11,10]_4$ & $[[29,7,6/6]]_4$ & \cite{LM09}\\ 41 & $[29,14,12]_4$ & $[[29,1,11/11]]_4$ & \cite[BKLC]{Gr09}\\ 42 & $[30,2,24]_4$ & $[[30,26,2/2]]_4$ & \cite[BKLC]{Gr09}\\ 43 & $[30,5,20]_4$ & $[[30,20,4/4]]_4$ & \cite[Table 13.2]{NRS06}\\ 44 & $[30,9,12]_4$ & $[[30,12,5/5]]_4$ & \cite{LM09}\\ 45 & $[30,11,10]_4$ & $[[30,8,6/6]]_4$ & \cite{LM09}\\ 46 & $[30,15,12]_4$ & $[[30,0,12/12]]_4$ & \cite[Table 7]{GG09}\\ 47 & $[31,9,16]_4$ & $[[31,13,6/6]]_4$ & \cite[BKLC]{Gr09}\\ 48 & $[32,6,20]_4$ & $[[32,20,4/4]]_4$ & \cite[Table 3]{GO98}\\ 49 & $[32,9,14]_4$ & $[[32,14,5/5]]_4$ & \cite{LM09}\\ 50 & $[32,11,12]_4$ & $[[32,10,6/6]]_4$ & \cite{LM09}\\ 51 & $[33,7,20]_4$ & $[[33,19,4/4]]_4$ & \cite[BKLC]{Gr09}\\ 52 & $[33,9,14]_4$ & $[[33,15,5/5]]_4$ & \cite{LM09}\\ 53 & $[33,10,16]_4$ & $[[33,13,6/6]]_4$ & \cite[BKLC]{Gr09}\\ 54 & $[33,12,14]_4$ & $[[33,9,7/7]]_4$ & \cite[BKLC]{Gr09}\\ 55 & $[33,15,12]_4$ & $[[33,3,9/9]]_4$ & \cite[BKLC]{Gr09}\\ 56 & $[34,9,18]_4$ & 
$[[34,16,6/6]]_4$ & \cite{LM09}\\
57 & $[34,13,14]_4$ & $[[34,8,8/8]]_4$ & \cite[BKLC]{Gr09}\\
58 & $[34,16,12]_4$ & $[[34,2,10/10]]_4$ & \cite[BKLC]{Gr09}\\
59 & $[35,5,24]_4$ & $[[35,25,3/3]]_4$ & \cite[BKLC]{Gr09}\\
60 & $[35,8,20]_4$ & $[[35,19,5/5]]_4$ & \cite[BKLC]{Gr09}\\
61 & $[35,11,14]_4$ & $[[35,13,6/6]]_4$ & \cite{LM09}\\
62 & $[35,17,12]_4$ & $[[35,1,11/11]]_4$ & \cite[BKLC]{Gr09}\\
63 & $[36,9,16]_4$ & $[[36,18,4/4]]_4$ & \cite{LM09}\\
64 & $[36,11,14]_4$ & $[[36,14,6/6]]_4$ & \cite{LM09}\\
65 & $[37,9,20]_4$ & $[[37,19,5/5]]_4$ & \cite[BKLC]{Gr09}\\
66 & $[37,18,12]_4$ & $[[37,1,11/11]]_4$ & \cite[BKLC]{Gr09}\\
67 & $[38,6,24]_4$ & $[[38,26,3/3]]_4$ & \cite[BKLC]{Gr09}\\
68 & $[38,11,18]_4$ & $[[38,16,6/6]]_4$ & \cite[BKLC]{Gr09}\\
69 & $[39,12,18]_4$ & $[[39,15,7/7]]_4$ & \cite[BKLC]{Gr09}\\
70 & $[39,4,28]_4$ & $[[39,31,3/3]]_4$ & \cite[BKLC]{Gr09}\\
71 & $[39,7,24]_4$ & $[[39,25,4/4]]_4$ & \cite[BKLC]{Gr09}\\
72 & $[40,5,28]_4$ & $[[40,30,4/4]]_4$ & \cite[Table II]{CRSS98}\\
73 & $[40,15,16]_4$ & $[[40,10,7/7]]_4$ & \cite[BKLC]{Gr09}\\
\hline
\end{tabular}
\end{table}

For lengths larger than $n=40$,~\cite{GO98} provides some known ${\mathbb F}_{4}$-linear codes of dimension 6 that belong to the class of \textit{quasi-twisted codes}. Based on the weight distribution of these codes~\cite[Table 3]{GO98}, we know which of them are self-orthogonal. Applying Theorem~\ref{thm:3.5} to them yields the 12 quantum codes listed in Table~\ref{table:ClassQT}.

\begin{table}
\caption{Asymmetric QECC from Hermitian Self-orthogonal Quasi-Twisted Codes Found in~\cite{GO98}}
\label{table:ClassQT}
\centering
\begin{tabular}{| l | l || l | l |}
\hline
\textbf{Code $C$} & \textbf{Code $Q$} & \textbf{Code $C$} & \textbf{Code $Q$}\\
\hline
$[48,6,32]_4$ & $[[48,36,3/3]]_4$ & $[144,6,104]_4$ & $[[144,132,3/3]]_4$\\
$[78,6,56]_4$ & $[[78,66,4/4]]_4$ & $[150,6,108]_4$ & $[[150,138,3/3]]_4$\\
$[102,6,72]_4$ & $[[102,90,3/3]]_4$ & $[160,6,116]_4$ & $[[160,148,3/3]]_4$\\
$[112,6,80]_4$ & $[[112,100,3/3]]_4$ & $[182,6,132]_4$ & $[[182,170,3/3]]_4$\\
$[120,6,86]_4$ & $[[120,108,4/4]]_4$ & $[192,6,138]_4$ & $[[192,180,3/3]]_4$\\
$[132,6,94]_4$ & $[[132,120,3/3]]_4$ & $[200,6,144]_4$ & $[[200,188,3/3]]_4$\\
\hline
\end{tabular}
\end{table}

Another family of codes that we can use is the \textit{MacDonald codes}, commonly denoted by $C_{k,u}$ with $k>u>0$. The MacDonald codes are linear codes with parameters $[(q^{k}-q^{u})/(q-1),k,q^{k-1}-q^{u-1}]_{q}$. Some historical background and a construction of their generator matrices can be found in~\cite{BD03}. It is known that these codes are \textit{two-weight codes}. That is, they have nonzero codewords of only two possible weights. In~\cite[Figures 1a and 2a]{CaKa86}, the MacDonald codes are labeled SU1. There are $q^{k}-q^{k-u}$ codewords of weight $q^{k-1}-q^{u-1}$ and $q^{k-u}-1$ codewords of weight $q^{k-1}$. The MacDonald codes meet, with equality, the \textit{Griesmer bound}, which says that, for any $[n,k\geq1,d]_q$-code,
\begin{equation}\label{eq:4.3}
n \geq \sum_{i=0}^{k-1}\left \lceil \frac{d}{q^{i}} \right \rceil \text{.}
\end{equation}

\begin{ex}\label{ex:4.4} For $q=4,k>u>1$, the MacDonald codes are self-orthogonal since both $4^{k-1}-4^{u-1}$ and $4^{k-1}$ are even. For a $[(4^{k}-4^{u})/3,k,4^{k-1}-4^{u-1}]_{4}$-code $C_{k,u}$, we know (see~\cite[Lemma 4]{BD03}) that $d(C_{k,u}^{\perp_{\mathop{{\rm H}}}}) \geq 3$.
Using $C_{k,u}$ and applying Theorem~\ref{thm:6.1}, we get an asymmetric quantum code $Q$ with parameters
\begin{equation*}
[[(4^{k}-4^{u})/3,((4^{k}-4^{u})/3)-2k,(\geq 3)/(\geq 3)]]_4 \text{.}
\end{equation*}
For $k\leq 5$ we have the following more explicit examples. The weight enumerator is written in an abbreviated form. For instance, $(0,1),(12,60),(16,3)$ means that the corresponding code has 1 codeword of weight 0, 60 codewords of weight 12, and 3 codewords of weight 16.
\begin{enumerate}
\item For $k=3,u=2$, we have the $[16,3,12]_4$-code with weight enumerator $(0,1),(12,60),(16,3)$. The resulting asymmetric QECC is a $[[16,10,3/3]]_4$-code. This code is listed as number 31 in Table~\ref{table:ClassSO}.
\item For $k=4,u=3$, we have the $[64,4,48]_4$-code with weight enumerator $(0,1),(48,252),(64,3)$. The resulting asymmetric QECC is a $[[64,56,3/3]]_4$-code.
\item For $k=4,u=2$, we have the $[80,4,60]_4$-code with weight enumerator $(0,1),(60,240),(64,15)$. The resulting asymmetric QECC is an $[[80,72,3/3]]_4$-code.
\item For $k=5,u=4$, we have the $[256,5,192]_4$-code with weight enumerator $(0,1),(192,1020),(256,3)$. The resulting asymmetric QECC is a $[[256,246,3/3]]_4$-code.
\item For $k=5,u=3$, we have the $[320,5,240]_4$-code with weight enumerator $(0,1),(240,1008),(256,15)$. The resulting asymmetric QECC is a $[[320,310,3/3]]_4$-code.
\item For $k=5,u=2$, we have the $[336,5,252]_4$-code with weight enumerator $(0,1),(252,960),(256,63)$. The resulting asymmetric QECC is a $[[336,326,3/3]]_4$-code.
\end{enumerate}
\end{ex}

\section{Construction from Nested Linear Cyclic Codes}\label{sec:Cyclic}
The asymmetric quantum codes that we have constructed so far have $d_{z} = d_{x}$. From this section onward, we construct asymmetric quantum codes with $d_{z} \geq d_{x}$. In most cases, $d_{z} > d_{x}$.

It is well established that, under the natural correspondence of vectors and polynomials, the study of cyclic codes in ${\mathbb F}_q^n$ is equivalent to the study of ideals in the residue class ring
\begin{equation*}
\mathcal{R}_n = {\mathbb F}_q[x]/(x^n-1) \text{.}
\end{equation*}
The study of ideals in $\mathcal{R}_n$ depends on factoring $x^n-1$. Basic results concerning cyclic codes and their properties can be found in~\cite[Ch. 4]{HP03} or~\cite[Ch. 7]{MS77}.

A cyclic code $C$ is a subcode of a cyclic code $D$ of equal length over ${\mathbb F}_q$ if and only if the generator polynomial of $D$ divides the generator polynomial of $C$. Both polynomials divide $x^n-1$. Once the factorization of $x^n-1$ into irreducible polynomials is known, the nestedness property becomes apparent. We further require that $n$ be relatively prime to $4$ to exclude the so-called repeated-root cases, since the cyclic codes that result when $n$ is not relatively prime to $4$ have inferior parameters. See~\cite[p. 976]{Cha98} for comments and references regarding this matter.

\begin{thm}\label{thm:7.1} Let $C$ and $D$ be cyclic codes of parameters $[n,k_1,d_1]_4$ and $[n,k_2,d_2]_4$, respectively, with $C \subseteq D$. Then there exists an asymmetric quantum code $Q$ with parameters $[[n,k_2-k_1,d_z/d_x]]_4$, where
\begin{equation}\label{eq:7.1}
\left\lbrace d_z,d_x \right\rbrace = \left\lbrace d(C^{\perp_{\mathop{{\rm H}}}}),d_2 \right\rbrace \text{.}
\end{equation}
\end{thm}

\begin{proof} Apply Theorem~\ref{thm:3.5} by taking $C_1 = C^{\perp_{\mathop{{\rm tr}}}}$ and $C_2 = D$. Since $C$ is an $[n,k_1,d_1]_4$ code, $C$ is an additive code of parameters $(n,2^{2k_1},d_1)_4$.
Similarly, $D$ is an additive code of parameters $(n,2^{2k_2},d_2)_4$. The values for $d_z$ and $d_x$ can be verified by simple calculations. \end{proof} \begin{ex}\label{ex:7.2} Let $C$ be the repetition $[n,1,n]_4$-code generated by the polynomial $(x^{n}+1)/(x+1)$. If we take $C=D$ in Theorem~\ref{thm:7.1}, then we get a quantum code $Q$ with parameters $\mathbf{[[n,0,n/2]]_4}$. \end{ex} Tables~\ref{table:Cyclic} and~\ref{table:Cyclic2} list examples of asymmetric quantum codes constructed from nested cyclic codes up to $n=25$. We exclude the case $C=D$ since the parameters of the resulting quantum code $Q$ are $[[n,0,d(C)/2]]_{4}$ which are never better than those of the code $Q$ in Example~\ref{ex:7.2}. Among the resulting codes $Q$ of equal length and dimension, we choose one with the largest $d_z,d_x$ values. For codes $Q$ with equal length and distances, we choose one with the largest dimension. \begin{table*} \caption{Asymmetric QECC from Nested Cyclic Codes} \label{table:Cyclic} \centering \begin{tabular}{| c | l | l | l |} \hline \textbf{No.} & \textbf{Codes $C$ and $D$} & \textbf{Generator Polynomials of $C$ and of $D$} & \textbf{Code $Q$} \\ \hline 1 & $[3,1,3]_4$ & $(x+1)(x+\omega)$ & $\mathbf{[[3,1,2/2]]_4}$ \\ & $[3,2,2]_4$ & $(x+1)$ & \\ 2 & $[5,1,5]_4$ & $(x^2+\omega x+1)(x^2+\omega^2 x+1)$ & $\mathbf{[[5,2,3/2]]_{4}}$ \\ & $[5,3,3]_4$ & $(x^2+\omega^2 x+1)$ & \\ 3 & $[7,1,7]_4$ & $(x^3+x+1)(x^3+x^2+1)$ & $[[7,3,3/2]]_4$ \\ & $[7,4,3]_4$ & $(x^3+x+1)$ & \\ 4 & $[7,3,4]_4$ & $(x^3+x+1)(x+1)$ & $[[7,1,3/3]]_4$ \\ & $[7,4,3]_4$ & $(x^3+x+1)$ & \\ 5 & $[9,1,9]_4$ & $(x+\omega)(x+\omega^2)(x^3+\omega)(x^3+\omega^2)$ & $[[9,1,6/2]]_4$ \\ & $[9,2,6]_4$ & $(x+\omega^2)(x^3+\omega)(x^3+\omega^2)$ & \\ 6 & $[9,1,9]_4$ & $(x+\omega)(x+\omega^2)(x^3+\omega)(x^3+\omega^2)$ & $[[9,4,3/2]]_4$ \\ & $[9,5,3]_4$ & $(x+\omega)(x^3+\omega)$ & \\ 7 & $[9,1,9]_4$ & $(x+\omega)(x+\omega^2)(x^3+\omega)(x^3+\omega^2)$ & $\mathbf{[[9,7,2/2]]_4}$ \\ & $[9,8,2]_4$ & $(x+\omega)$ & \\ 8 & $[11,5,6]_4$ & $(x+1)(x^5+\omega^2 x^4+x^3+x^2+\omega x+1)$ & $[[11,1,5/5]]_4$ \\ & $[11,6,5]_4$ & $(x^5+\omega^2 x^4+x^3+x^2+\omega x+1)$ & \\ 9 & $[11,1,11]_4$ & $(x^{11}+1)/(x+1)$ & $[[11,5,5/2]]_4$ \\ & $[11,6,5]_4$ & $(x^5+\omega x^4+x^3+x^2+\omega^2 x+1)$ & \\ 10 & $[13,6,6]_4$ & $(x+1)(x^6 +\omega x^5 +\omega^2 x^3 +\omega x +1)$ & $[[13,1,5/5]]_4$ \\ & $[13,7,5]_4$ & $(x^6 +\omega x^5 +\omega^2 x^3 +\omega x +1)$ & \\ 11 & $[13,1,13]_4$ & $(x^{13}+1)/(x+1)$ & $[[13,6,5/2]]_4$ \\ & $[13,7,5]_4$ & $(x^6 +\omega x^5 +\omega^2 x^3 +\omega x +1)$ & \\ 12 & $[15,3,11]_4$ & $(x^{15}+1)/((x+1)(x^2 + \omega^2 x + \omega^2))$ & $[[15,1,9/3]]_4$ \\ & $[15,4,9]_4$ & $(x^{15}+1)/((x+1)(x+\omega)(x^2 + \omega^2 x + \omega^2))$ & \\ 13 & $[15,6,8]_4$ & $(x^9 +\omega x^8 +x^7+x^5 +\omega x^4 + \omega^2 x^2 +\omega^2x + 1)$ & $[[15,1,7/5]]_4$ \\ & $[15,7,7]_4$ & $(x^8 +\omega^2 x^7 +\omega x^6 +\omega x^5 +\omega^2 x^4+x^3+x^2+\omega x + 1)$ & \\ 14 & $[15,7,7]_4$ & $(x^8 +\omega^2 x^7 +\omega x^6 +\omega x^5 +\omega^2 x^4+x^3+x^2+\omega x + 1)$ & $[[15,1,6/6]]_4$ \\ & $[15,8,6]_4$ & $(x^7 + x^6+\omega x^4+x^2+\omega^2 x +\omega^2)$ & \\ 15 & $[15,1,15]_4$ & $(x^{15}+1)/(x+1)$ & $[[15,2,11/2]]_4$ \\ & $[15,3,11]_4$ & $(x^{15}+1)/((x+1)(x^2+\omega^2 x+\omega^2))$ & \\ 16 & $[15,3,11]_4$ & $(x^{15}+1)/((x+1)(x^2+\omega^2 x+\omega^2))$ & $[[15,2,8/3]]_4$ \\ & $[15,5,8]_4$ & $(x^{15}+1)/((x+1)(x^2+\omega^2 x+\omega^2)(x^2+\omega^2 x+1))$ & \\ 17 & $[15,6,8]_4$ & $(x^{15}+1)/((x^2+\omega x+\omega)(x^2+\omega^2 
x+\omega^2)(x^2+\omega^2 x+1))$ & $[[15,2,6/5]]_4$ \\ & $[15,8,6]_4$ & $(x^7 + x^6+\omega x^4+x^2+\omega^2 x +\omega^2)$ & \\ 18 & $[15,1,15]_4$ & $(x^{15}+1)/(x+1)$ & $[[15,3,9/2]]_4$ \\ & $[15,4,9]_4$ & $(x^{15}+1)/((x+1)(x+\omega)(x^2+\omega^2 x+\omega^2))$ & \\ 19 &$[15,8,6]_4$ & $x^{7} + \omega x^{6} + \omega^2 x^{4} + \omega^2 x^{2} + \omega x + \omega^2$ & $[[15,4,7/3]]_4$ \\ &$[15,12,3]_4$& $x^{3} + x^{2} + \omega^{2}$ & \\ 20 & $[15,1,15]_4$ & $(x^{15}+1)/(x+1)$ & $[[15,4,8/2]]_4$ \\ & $[15,5,8]_4$ & $(x^{15}+1)/((x+1)(x^2+\omega^2 x+\omega^2)(x^2+\omega^2 x+1))$ & \\ 21 &$[15,4,10]_4$& $(x^{15}+1)/(x^4 + \omega^2 x^3 + \omega x^2 + \omega x + w)$ & $[[15,5,5/4]]_4$ \\ &$[15,9,5]_4$& $x^6 + \omega^2 x^5 + \omega^2 x^4 + x^3 + x^2 + \omega x + 1$ & \\ 22 & $[15,1,15]_4$ & $(x^{15}+1)/(x+1)$ & $[[15,6,7/2]]_4$ \\ & $[15,7,7]_4$ & $(x^8 +\omega^2 x^7 +\omega x^6 +\omega x^5 +\omega^2 x^4+x^3+x^2+\omega x + 1)$ & \\ 23 & $[15,3,11]_4$ & $(x^{15}+1)/((x+1)(x^2 + \omega^2 x + \omega^2))$ & $[[15,6,5/3]]_4$ \\ & $[15,9,5]_4$ & $(x^6 + \omega x^5 + x^4 + x^3 + \omega^2 x^2 + \omega^2 x + 1)$ & \\ 24 & $[15,1,15]_4$ & $(x^{15}+1)/(x+1)$ & $[[15,7,6/2]]_4$ \\ & $[15,8,6]_4$ & $(x^7 + x^6+\omega x^4+x^2+\omega^2 x +\omega^2)$ & \\ 25 & $[15,1,15]_4$ & $(x^{15}+1)/(x+1)$ & $[[15,8,5/2]]_4$ \\ & $[15,9,5]_4$ & $(x^6 + \omega x^5 + x^4 + x^3 + \omega^2 x^2 + \omega^2 x + 1)$ & \\ 26 &$[15,3,11]_4$& $(x^{15}+1)/(x^3 + \omega^2 x^2 + \omega^2)$ & $[[15,9,3/3]]_4$ \\ &$[15,12,3]_4$& $x^3 + x^2 + \omega^2$ & \\ 27 &$[15,1,15]_4$& $(x^{15}+1)/(x+1)$ & $[[15,11,3/2]]_4$ \\ &$[15,12,3]_4$& $x^3 + x^2 + \omega^2$ & \\ 28 & $[15,1,15]_4$ & $(x^{15}+1)/(x+1)$ & $\mathbf{[[15,13,2/2]]_4}$ \\ & $[15,14,2]_4$ & $(x+ \omega)$ & \\ 29 & $[17,12,4]_4$ & $(x^5 + \omega x^3 + \omega x^2 + 1)$ & $[[17,1,9/4]]_4$ \\ & $[17,13,4]_4$ & $(x^4 + x^3 + \omega^2 x^2 + x + 1)$ & \\ 30 & $[17,8,8]_4$ & $(x^9 +\omega x^8 +\omega^2 x^7 +\omega^2 x^6 +\omega^2 x^3 +\omega^2 x^2 +\omega x + 1)$ & $[[17,1,7/7]]_4$ \\ & $[17,9,7]_4$ & $(x^8 +\omega^2 x^7 +\omega^2 x^5 +\omega^2 x^4 +\omega^2 x^3 +\omega^2 x + 1)$ & \\ 31 & $[17,1,17]_4$ & $(x^{17}+1)/(x+1)$ & $[[17,4,9/2]]_4$ \\ & $[17,5,9]_4$ & $(x^{17}+1)/(x^5 +\omega^2 x^4 +\omega^2 x^3 +\omega^2 x^2 +\omega^2 x + 1)$ & \\ 32 & $[17,4,12]_4$ & $(x^{17}+1)/(x^4+x^3+\omega x^2+x+1)$ & $[[17,4,8/4]]_4$ \\ & $[17,8,8]_4$ & $(x^9 +\omega x^8 +\omega^2 x^7 +\omega^2 x^6 +\omega^2 x^3 +\omega^2 x^2 +\omega x + 1)$ & \\ 33 & $[17,4,12]_4$ & $(x^{17}+1)/(x^4+x^3+\omega x^2+x+1)$ & $[[17,5,7/4]]_4$ \\ & $[17,9,7]_4$ & $(x^8 +\omega^2 x^7 +\omega^2 x^5 +\omega^2 x^4 +\omega^2 x^3 +\omega^2 x + 1)$ & \\ 34 & $[17,1,17]_4$ & $(x^{17}+1)/(x+1)$ & $[[17,8,7/2]]_4$ \\ & $[17,9,7]_4$ & $(x^8 +\omega x^7 +\omega x^5 +\omega x^4 +\omega x^3 +\omega x + 1)$ & \\ 35 & $[17,1,17]_4$ & $(x^{17}+1)/(x+1)$ & $[[17,12,4/2]]_4$ \\ & $[17,13,4]_4$ & $(x^4 + x^3 + \omega^2 x^2 + x + 1)$ & \\ \hline \end{tabular} \end{table*} \begin{table*} \caption{Asymmetric QECC from Nested Cyclic Codes Continued} \label{table:Cyclic2} \centering \begin{tabular}{| c | l | l | l |} \hline \textbf{No.} & \textbf{Codes $C$ and $D$} & \textbf{Generator Polynomials of $C$ and $D$} & \textbf{Code $Q$} \\ \hline 36 & $[19,9,8]_4$ & $(x+1)(x^9 +\omega^2 x^8 +\omega^2 x^6 +\omega^2 x^5 +\omega x^4 +\omega x^3 +\omega x + 1)$ & $[[19,1,7/7]]_4$\\ & $[19,10,7]_4$ & $(x^9 + \omega^2 x^8 + \omega^2 x^6 + \omega^2 x^5 + \omega x^4 + \omega x^3 + \omega x + 1)$ & \\ 37 & $[19,1,19]_4$ & $(x^{19}+1)/(x+1)$ & $[[19,9,7/2]]_4$ \\ & 
$[19,10,7]_4$ & $(x^9 + \omega x^8 + \omega x^6 + \omega x^5 + \omega^2 x^4 + \omega^2 x^3 + \omega^2 x + 1)$ & \\ 38 & $[21,1,21]_4$ & $(x^{21}+1)/(x+1)$ & $[[21,1,14/2]]_4$ \\ & $[21,2,14]_4$ & $(x^{21}+1)/(x^2 + \omega^2 x + \omega)$ & \\ 39 &$[21,4,12]_4$& $(x^{21}+1)/(x^4 + \omega x^3 + \omega^2 x^2 + x + 1)$ & $[[21,3,11/3]]_4$ \\ &$[21,7,11]_4$& $x^{14} + \omega x^{13} + \omega^2 x^{12} + \omega^2 x^{10} + x^8 +\omega^2 x^7 + x^6 + \omega^2 x^4 + \omega^2 x^2 + \omega x + 1$ & \\ 40 &$[21,4,12]_4$& $(x^{21}+1)/(x^4 + \omega^2 x^3 + \omega x^2 + x + 1)$ & $[[21,4,9/3]]_4$ \\ &$[21,8,9]_4$& $(x^{21}+1)/(x^8 + x^7 + \omega x^6 + \omega x^5 + x^4 + \omega^2 x^3 + x^2 + x + \omega)$ &\\ 41 & $[21,1,21]_4$ & $(x^{21}+1)/(x+1)$ & $[[21,3,12/2]]_4$ \\ & $[21,4,12]_4$ & $(x^{21}+1)/((x+1)(x^3+\omega^2 x^2+1))$ & \\ 42 & $[21,7,11]_4$ & $(x^{21}+1)/(x^7 +\omega^2 x^6 + x^4 + x^3 +\omega^2 x + 1)$ & $[[21,3,8/5]]_4$ \\ & $[21,10,8]_4$ & $(x^{11}+\omega x^{10}+ x^8 + x^7 +x^6 + x^5 +\omega x^4 +\omega x^3 +\omega^2 x^2 +\omega^2 x + 1)$ & \\ 43 & $[21,4,12]_4$ & $(x^{21}+1)/(x^4 + x^3 + \omega x^2 +\omega^2 x + 1)$ & $[[21,4,9/3]]_4$ \\ & $[21,8,9]_4$ & $(x^{21}+1)/(x^8 +\omega x^7 +\omega^2 x^6 + x^5 +\omega^2 x^4 +\omega x^3 +\omega x^2 +\omega)$ & \\ 44 & $[21,7,11]_4$ & $(x^{21}+1)/(x^7 +\omega^2 x^6 + x^4 + x^3 +\omega^2 x + 1)$ & $[[21,4,6/5]]_4$ \\ & $[21,11,6]_4$ & $(x^{21}+1)/(x^{11} + x^8 +\omega^2 x^7 + x^2 +\omega)$ & \\ 45 & $[21,1,21]_4$ & $(x^{21}+1)/(x+1)$ & $[[21,6,11/2]]_4$ \\ & $[21,7,11]_4$ & $(x^{21}+1)/(x^7 +\omega^2 x^6 + x^4 + x^3 +\omega^2 x + 1)$ & \\ 46 & $[21,4,12]_4$ & $(x^{21}+1)/(x^4 + x^3 +\omega x^2 +\omega^2 x + 1)$ & $[[21,6,8/3]]_4$ \\ & $[21,10,8]_4$ & $(x^{11}+\omega x^{10}+ x^8 + x^7 +x^6 + x^5 +\omega x^4 +\omega x^3 +\omega^2 x^2 +\omega^2 x + 1)$ & \\ 47 &$[21,7,11]_4$& $(x^{21}+1)/(x^7 + \omega^2 x^6 + x^4 + x^3 + \omega ^2 x + 1)$ & $[[21,7,5/5]]_4$ \\ &$[21,14,5]_4$& $x^7 + x^6 + x^4 + \omega x^3 + \omega^2x + \omega$ & \\ 48 & $[21,1,21]_4$ & $(x^{21}+1)/(x+1)$ & $[[21,7,9/2]]_4$ \\ & $[21,8,9]_4$ & $(x^{21}+1)/(x^8 +\omega x^7 +\omega^2 x^6 + x^5 +\omega^2 x^4 +\omega x^3 +\omega x^2 +\omega)$ & \\ 49 & $[21,4,12]_4$ & $(x^{21}+1)/(x^4 + x^3 +\omega x^2 +\omega^2 x + 1)$ & $[[21,7,6/3]]_4$ \\ & $[21,11,6]_4$ & $(x^{10} +\omega x^9 + x^8 +\omega x^7 + x^6 + x^5 + x^4 +\omega^2 x^2 +\omega^2)$ & \\ 50 & $[21,1,21]_4$ & $(x^{21}+1)/(x+1)$ & $[[21,9,8/2]]_4$ \\ & $[21,10,8]_4$ & $(x^{11}+\omega x^{10}+x^8+x^7+x^6+x^5+\omega x^4 +\omega x^3+\omega^2 x^2+\omega^2 x+ 1)$ & \\ 51 & $[21,1,21]_4$ & $(x^{21}+1)/(x+1)$ & $[[21,10,6/2]]_4$ \\ & $[21,11,6]_4$ & $(x^{10} + x^7 +\omega^2 x^6 + x^4 +\omega x^2 +\omega^2)$ & \\ 52 & $[21,4,12]_4$ & $(x^{21}+1)/(x^4 + x^3 +\omega x^2 + \omega^2 x + 1)$ & $[[21,10,5/3]]_4$ \\ & $[21,14,5]_4$ & $(x^7 + x^6 + x^4 +\omega x^3 +\omega^2 x +\omega)$ & \\ 53 & $[21,1,21]_4$ & $(x^{21}+1)/(x+1)$ & $[[21,13,5/2]]_4$ \\ & $[21,14,5]_4$ & $(x^7 + x^6 + x^4 +\omega x^3 +\omega^2 x +\omega)$ & \\ 54 & $[21,1,21]_4$ & $(x^{21}+1)/(x+1)$ & $[[21,16,3/2]]_4$ \\ & $[21,17,3]_4$ & $(x+\omega)(x^3+\omega^2 x^2 +1)$ & \\ 55 & $[21,1,21]_4$ & $(x^{21}+1)/(x+1)$ & $\mathbf{[[21,19,2/2]]_4}$ \\ & $[21,20,2]_4$ & $(x+\omega)$ & \\ 56 & $[23,11,8]_4$ & $(x+1)(x^{11}+x^{10}+x^6+x^5+x^4+x^2+1)$ & $[[23,1,7/7]]_4$ \\ & $[23,12,7]_4$ & $(x^{11}+x^{10}+x^6+x^5+x^4+x^2+1)$ & \\ 57 & $[23,1,23]_4$ & $(x^{23}+1)/(x+1)$ & $[[23,11,7/2]]_4$ \\ & $[23,12,7]_4$ & $(x^{11}+x^9+x^7+x^6+x^5+x+1)$ & \\ 58 & $[25,1,25]_4$ & $(x^{25}+1)/(x+1)$ & 
$[[25,2,15/2]]_4$ \\ & $[25,3,15]_4$ & $(x^{25}+1)/(x^3 + \omega x^2 + \omega x + 1)$ & \\ 59 & $[25,12,4]_4$ & $(x^{13}+\omega^2 x^{12}+\omega^2 x^{11}+x^{10}+\omega x^8+x^7+x^6+\omega x^5 + x^3+\omega^2 x^2+\omega^2 x+ 1)$ & $[[25,2,4/4]]_4$ \\ & $[25,14,4]_4$ & $(x^{11}+ x^{10}+\omega x^6 +\omega x^5 + x + 1)$ & \\ 60 & $[25,1,25]_4$ & $(x^{25}+1)/(x+1)$ & $[[25,4,5/2]]_4$ \\ & $[25,5,5]_4$ & $(x^{25}+1)/(x^5 + 1)$ & \\ 61 & $[25,10,4]_4$ & $(x^{15}+\omega^2 x^{10} +\omega^2 x^5 + 1)$ & $[[25,4,4/3]]_4$ \\ & $[25,14,4]_4$ & $(x^{11} + x^{10} +\omega x^6 +\omega x^5 + x + 1)$ & \\ 62 & $[25,10,4]_4$ & $(x^{15}+\omega^2 x^{10} +\omega^2 x^5 + 1)$ & $[[25,5,3/3]]_4$ \\ & $[25,15,3]_4$ & $(x^{10}+\omega x^5 +1)$ & \\ 63 & $[25,1,25]_4$ & $(x^{25}+1)/(x+1)$ & $[[25,12,4/2]]_4$ \\ & $[25,13,4]_4$ & $(x^{12}+\omega x^{11}+ x^{10} +\omega^2 x^7 + x^6 +\omega^2 x^5 + x^2 +\omega x + 1)$ & \\ 64 & $[25,1,25]_4$ & $(x^{25}+1)/(x+1)$ & $[[25,14,3/2]]_4$ \\ & $[25,15,3]_4$ & $(x^{10}+\omega x^5 +1)$ & \\ 65 & $[25,1,25]_4$ & $(x^{25}+1)/(x+1)$ & $[[25,22,2/2]]_4$ \\ & $[25,23,2]_4$ & $(x^2 +\omega x +1)$ & \\ \hline \end{tabular} \end{table*} \section{Construction from Nested Linear BCH Codes}\label{sec:BCHCodes} It is well known (see~\cite[Sec. 3]{Cha98}) that finding the minimum distance or even finding a good lower bound on the minimum distance of a cyclic code is not a trivial problem. One important family of cyclic codes is the family of BCH codes. Their importance lies in the fact that their designed distance provides a reasonably good lower bound on the minimum distance. For more on BCH codes,~\cite[Ch. 5]{HP03} can be consulted. The BCH Code constructor in MAGMA can be used to find nested codes to produce more asymmetric quantum codes. Table~\ref{table:BCH} lists the BCH codes over ${\mathbb F}_{4}$ for $n=27$ to $n=51$ with $n$ coprime to $4$. For a fixed length $n$, the codes are nested, i.e., a code $C$ with dimension $k_{1}$ is a subcode of a code $D$ with dimension $k_{2} > k_{1}$. The construction process can be done for larger values of $n$ if so desired. The range of the designed distances that can be supplied to MAGMA to produce the code $C$ and the actual minimum distance of $C$ are denoted by $\delta(C)$ and $d(C)$, respectively. The minimum distance of $C^{\perp_{\mathop{{\rm tr}}}}$, which is needed in the computation of $d_z,d_x$, is denoted by $d(C^{\perp_{\mathop{{\rm tr}}}})$. To save space, the BCH $[n,1,n]_{4}$ repetition code generated by the all-one vector ${\mathbf{1}}$ is not listed in the table, although this code is used in the construction of many asymmetric quantum codes presented in Table~\ref{table:BCH_QECC}.
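Entries of this kind can also be checked independently of MAGMA, at least for very small parameters. The following Python sketch (our illustration only, not the workflow used to generate the tables) brute-forces small cyclic codes over ${\mathbb F}_4=\{0,1,\omega,\omega^2\}$, with the field elements encoded as the integers $0,\ldots,3$, and verifies the nestedness and the minimum distances of Entry 1 of Table~\ref{table:Cyclic}. Since it enumerates all $4^k$ codewords, the approach is feasible only for small $n$ and $k$.
\begin{verbatim}
# Illustrative brute-force check of small cyclic codes over
# GF(4) = {0, 1, w, w^2}, encoded as integers 0..3 (w = 2, w^2 = 3).
# Addition is XOR of the 2-bit representations; multiplication
# follows from w^2 = w + 1 and w^3 = 1.
from itertools import product

MUL = [[0, 0, 0, 0],
       [0, 1, 2, 3],
       [0, 2, 3, 1],
       [0, 3, 1, 2]]

def poly_mul_mod(a, b, n):
    """Product of polynomials a, b over GF(4), reduced mod x^n - 1."""
    out = [0] * n
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[(i + j) % n] ^= MUL[ai][bj]   # XOR is GF(4) addition
    return tuple(out)

def cyclic_code(gen, n):
    """All codewords m(x)g(x), deg m < k = n - deg g (constant first)."""
    k = n - (len(gen) - 1)
    return {poly_mul_mod(m, gen, n) for m in product(range(4), repeat=k)}

def min_dist(code):
    return min(sum(c != 0 for c in w) for w in code if any(w))

# Entry 1: C = [3,1,3]_4 with g_C = (x+1)(x+w) = w + w^2 x + x^2,
# nested in D = [3,2,2]_4 with g_D = 1 + x.
C = cyclic_code((2, 3, 1), 3)
D = cyclic_code((1, 1), 3)
assert C <= D                     # nestedness: C is a subcode of D
print(len(C), min_dist(C))        # 4 3   ->  [3,1,3]_4
print(len(D), min_dist(D))        # 16 2  ->  [3,2,2]_4
\end{verbatim}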
\begin{table} \caption{BCH Codes over ${\mathbb F}_{4}$ with $2 \leq k < n$ for $27\leq n \leq51$} \label{table:BCH} \centering \begin{tabular}{| c | c | c | c | l | c |} \hline \textbf{No.} & \textbf{$n$} & \textbf{$\delta(C)$} & \textbf{$d(C)$} & \textbf{Code $C$} & \textbf{$d(C^{\perp_{\mathop{{\rm tr}}}})$} \\ \hline 1 & $27$ & $2$ & $2$ & $[27,18,2]_4$ & $3$\\ 2 & & $3$ & $3$ & $[27,9,3]_4$ & $2$\\ 3 & & $4-6$ & $6$ & $[27,6,6]_4$ & $2$\\ 4 & & $7-9$ & $9$ & $[27,3,9]_4$ & $2$\\ 5 & & $10-18$ & $18$ & $[27,2,18]_4$ & $2$\\ 6 & $29$ & $2$ & $11$ & $[29,15,11]_4$ & $12$\\ 7 & $31$ & $2-3$ & $3$ & $[31,26,3]_4$ & $16$\\ 8 & & $4-5$ & $5$ & $[31,21,5]_4$ & $12$\\ 9 & & $6-7$ & $7$ & $[31,16,7]_4$ & $8$\\ 10 & & $8-11$ & $11$ & $[31,11,11]_4$ & $6$\\ 11 & & $12-15$ & $15$ & $[31,6,15]_4$ & $4$\\ 12 & $33$ & $2$ & $2$ & $[33,28,2]_4$ & $18$\\ 13 & & $3$ & $3$ & $[33,23,3]_4$ & $12$\\ 14 & & $4-5$ & $8$ & $[33,18,8]_4$ & $11$\\ 15 & & $6$ & $10$ & $[33,13,10]_4$ & $6$\\ 16 & & $7$ & $11$ & $[33,8,11]_4$ & $4$\\ 17 & & $8-11$ & $11$ & $[33,3,11]_4$ & $2$\\ 18 & & $12-22$ & $22$ & $[33,2,22]_4$ & $2$\\ 19 & $35$ & $2$ & $3$ & $[35,29,3]_4$ & $16$\\ 20 & & $3$ & $3$ & $[35,23,3]_4$ & $8$\\ 21 & & $4-5$ & $5$ & $[35,17,5]_4$ & $8$\\ 22 & & $6$ & $7$ & $[35,14,7]_4$ & $8$\\ 23 & & $7$ & $7$ & $[35,8,7]_4$ & $4$\\ 24 & & $8-14$ & $15$ & $[35,6,15]_4$ & $4$\\ 25 & & $15$ & $15$ & $[35,4,15]_4$ & $2$\\ 26 & $37$ & $2$ & $11$ & $[37,19,11]_4$ & $12$\\ 27 & $39$ & $2$ & $2$ & $[39,33,2]_4$ & $18$\\ 28 & & $3$ & $3$ & $[39,27,3]_4$ & $12$\\ 29 & & $4-6$ & $9$ & $[39,21,9]_4$ & $12$\\ 30 & & $7$ & $10$ & $[39,15,10]_4$ & $6$\\ 31 & & $8-13$ & $13$ & $[39,9,13]_4$ & $4$\\ 32 & & $14$ & $15$ & $[39,8,15]_4$ & $3$\\ 33 & & $15-26$ & $26$ & $[39,2,26]_4$ & $2$\\ 34 & $41$ & $2$ & $6$ & $[41,31,6]_4$ & $20$\\ 35 & & $3$ & $9$ & $[41,21,9]_4$ & $10$\\ 36 & & $4-6$ & $20$ & $[41,11,20]_4$ & $7$\\ 37 & $43$ & $2$ & $5$ & $[43,36,5]_4$ & $27$\\ 38 & & $3$ & $6$ & $[43,29,6]_4$ & $14$\\ 39 & & $4-6$ & $11$ & $[43,22,11]_4$ & $12$\\ 40 & & $7$ & $13$ & $[43,15,13]_4$ & $6$\\ 41 & & $8-9$ & $26$ & $[43,8,26]_4$ & $5$\\ 42 & $45$ & $2$ & $2$ & $[45,39,2]_4$ & $12$\\ 43 & & $3$ & $3$ & $[45,33,3]_4$ & $8$\\ 44 & & $4-5$ & $5$ & $[45,31,5]_4$ & $8$\\ 45 & & $6$ & $6$ & $[45,28,6]_4$ & $8$\\ 46 & & $7$ & $7$ & $[45,26,7]_4$ & $8$\\ 47 & & $8-9$ & $9$ & $[45,20,9]_4$ & $6$\\ 48 & & $10$ & $10$ & $[45,18,10]_4$ & $6$\\ 49 & & $11$ & $11$ & $[45,15,11]_4$ & $3$\\ 50 & & $12-15$ & $15$ & $[45,9,15]_4$ & $2$\\ 51 & & $16-18$ & $18$ & $[45,8,18]_4$ & $2$\\ 52 & & $19-21$ & $21$ & $[45,6,21]_4$ & $2$\\ 53 & & $22-30$ & $30$ & $[45,4,30]_4$ & $2$\\ 54 & & $31-33$ & $33$ & $[45,3,33]_4$ & $2$\\ 55 & $47$ & $2-5$ & $11$ & $[47,24,11]_4$ & $12$\\ 56 & $49$ & $2-3$ & $3$ & $[49,28,3]_4$ & $4$\\ 57 & & $4-7$ & $7$ & $[49,7,7]_4$ & $2$\\ 58 & & $8-21$ & $21$ & $[49,4,21]_4$ & $2$\\ 59 & $51$ & $2$ & $2$ & $[51,47,2]_4$ & $36$\\ 60 & & $3$ & $3$ & $[51,43,3]_4$ & $24$\\ 61 & & $4-5$ & $5$ & $[51,39,5]_4$ & $24$\\ 62 & & $6$ & $9$ & $[51,35,9]_4$ & $22$\\ 63 & & $7$ & $9$ & $[51,31,9]_4$ & $14$\\ 64 & & $8-9$ & $9$ & $[51,27,9]_4$ & $10$\\ 65 & & $10-11$ & $14$ & $[51,23,14]_4$ & $10$\\ 66 & & $12-17$ & $17$ & $[51,19,17]_4$ & $8$\\ 67 & & $18$ & $18$ & $[51,18,18]_4$ & $8$\\ 68 & & $19$ & $19$ & $[51,14,19]_4$ & $6$\\ 69 & & $20-22$ & $27$ & $[51,10,27]_4$ & $6$\\ 70 & & $23-34$ & $34$ & $[51,6,34]_4$ & $4$\\ 71 & & $35$ & $35$ & $[51,5,35]_4$ & $3$\\ \hline \end{tabular} \end{table} Table~\ref{table:BCH_QECC} presents the resulting 
asymmetric quantum codes from nested BCH Codes based on Theorem~\ref{thm:3.5}. The inner codes are listed in the column denoted by Code $C_{1}^{\perp_{\mathop{{\rm tr}}}}$ while the corresponding larger codes are put in the column denoted by Code $C_{2}$. The values for $d_z,d_x$ are derived from the last column of Table~\ref{table:BCH} while keeping Proposition~\ref{prop:3.2} in mind. \begin{table*} \caption{Asymmetric QECC from BCH Codes} \label{table:BCH_QECC} \centering \begin{tabular}{| c | c | l | l | l || c | c | l | l | l |} \hline \textbf{No.} & \textbf{$n$} & \textbf{Code $C_{1}^{\perp_{\mathop{{\rm tr}}}}$} & \textbf{Code $C_{2}$} & \textbf{Code $Q$} & \textbf{No.} & \textbf{$n$} & \textbf{Code $C_{1}^{\perp_{\mathop{{\rm tr}}}}$} & \textbf{Code $C_{2}$} & \textbf{Code $Q$}\\ \hline 1 & $27$ & $[27,1,27]_4$ & $[27,2,18]_4$ & $[[27,1,18/2]]_4$ & 76 & $43$ & $[43,8,26]_4$ & $[43,29,6]_4$ & $[[43,21,6/5]]_4$\\ 2 & & $[27,1,27]_4$ & $[27,3,9]_4$ & $[[27,2,9/2]]_4$ & 77 & & $[43,1,43]_4$ & $[43,29,6]_4$ & $[[43,28,6/2]]_4$\\ 3 & & $[27,1,27]_4$ & $[27,6,6]_4$ & $[[27,5,6/2]]_4$ & 78 & & $[43,8,26]_4$ & $[43,36,5]_4$ & $[[43,28,5/5]]_4$\\ 4 & & $[27,1,27]_4$ & $[27,9,3]_4$ & $[[27,8,3/2]]_4$ & 79 & & $[43,1,43]_4$ & $[43,36,5]_4$ & $[[43,35,5/2]]_4$\\ 5 & & $[27,1,27]_4$ & $[27,18,2]_4$ & $[[27,17,2/2]]_4$ & 80 & $45$ & $[45,1,45]_4$ & $[45,3,33]_4$ & $[[45,2,33/2]]_4$\\ 6 & $29$ & $[29,1,29]_4$ & $[29,15,11]_4$ & $[[29,14,11/2]]_4$ & 81 & & $[45,18,10]_4$ & $[45,20,9]_4$ & $[[45,2,9/6]]_4$\\ 7 & $31$ & $[31,1,31]_4$ & $[31,6,15]_4$ & $[[31,5,15/2]]_4$ & 82 & & $[45,1,45]_4$ & $[45,4,30]_4$ & $[[45,3,30/2]]_4$\\ 8 & & $[31,21,5]_4$ & $[31,26,3]_4$ & $[[31,5,12/3]]_4$ & 83 & & $[45,15,11]_4$ & $[45,18,10]_4$ & $[[45,3,10/3]]_4$\\ 9 & & $[31,6,15]_4$ & $[31,11,11]_4$ & $[[31,5,11/4]]_4$ & 84 & & $[45,1,45]_4$ & $[45,6,21]_4$ & $[[45,5,21/2]]_4$\\ 10 & & $[31,16,7]_4$ & $[31,21,5]_4$ & $[[31,5,8/5]]_4$ & 85 & & $[45,15,11]_4$ & $[45,20,9]_4$ & $[[45,5,9/3]]_4$\\ 11 & & $[31,11,11]_4$ & $[31,16,7]_4$ & $[[31,5,7/6]]_4$ & 86 & & $[45,26,7]_4$ & $[45,31,5]_4$ & $[[45,5,8/5]]_4$\\ 12 & & $[31,1,31]_4$ & $[31,11,11]_4$ & $[[31,10,11/2]]_4$ & 87 & & $[45,1,45]_4$ & $[45,8,18]_4$ & $[[45,7,18/2]]_4$\\ 13 & & $[31,16,7]_4$ & $[31,26,3]_4$ & $[[31,10,8/3]]_4$ & 88 & & $[45,26,7]_4$ & $[45,33,3]_4$ & $[[45,7,8/3]]_4$\\ 14 & & $[31,6,15]_4$ & $[31,16,7]_4$ & $[[31,10,7/4]]_4$ & 89 & & $[45,1,45]_4$ & $[45,9,15]_4$ & $[[45,8,15/2]]_4$\\ 15 & & $[31,11,11]_4$ & $[31,21,5]_4$ & $[[31,10,6/5]]_4$ & 90 & & $[45,18,10]_4$ & $[45,26,7]_4$ & $[[45,8,7/6]]_4$\\ 16 & & $[31,1,31]_4$ & $[31,16,7]_4$ & $[[31,15,7/2]]_4$ & 91 & & $[45,18,10]_4$ & $[45,28,6]_4$ & $[[45,10,6/6]]_4$\\ 17 & & $[31,11,11]_4$ & $[31,26,3]_4$ & $[[31,15,6/3]]_4$ & 92 & & $[45,15,11]_4$ & $[45,26,7]_4$ & $[[45,11,7/3]]_4$\\ 18 & & $[31,6,15]_4$ & $[31,21,5]_4$ & $[[31,15,5/4]]_4$ & 93 & & $[45,18,10]_4$ & $[45,31,5]_4$ & $[[45,13,6/5]]_4$\\ 19 & & $[31,1,31]_4$ & $[31,21,5]_4$ & $[[31,20,5/2]]_4$ & 94 & & $[45,1,45]_4$ & $[45,15,11]_4$ & $[[45,14,11/2]]_4$\\ 20 & & $[31,6,15]_4$ & $[31,26,3]_4$ & $[[31,20,4/3]]_4$ & 95 & & $[45,18,10]_4$ & $[45,33,3]_4$ & $[[45,15,6/3]]_4$\\ 21 & & $[31,1,31]_4$ & $[31,26,3]_4$ & $[[31,25,3/2]]_4$ & 96 & & $[45,15,11]_4$ & $[45,31,5]_4$ & $[[45,16,5/3]]_4$\\ 22 & $33$ & $[33,1,33]_4$ & $[33,2,22]_4$ & $[[33,1,22/2]]_4$ & 97 & & $[45,1,45]_4$ & $[45,18,10]_4$ & $[[45,17,10/2]]_4$\\ 23 & & $[33,23,3]_4$ & $[33,28,2]_4$ & $[[33,5,12/2]]_4$ & 98 & & $[45,15,11]_4$ & $[45,33,3]_4$ & $[[45,18,3/3]]_4$\\ 24 & 
& $[33,18,8]_4$ & $[33,23,3]_4$ & $[[33,5,11/3]]_4$ & 99 & & $[45,1,45]_4$ & $[45,20,9]_4$ & $[[45,19,9/2]]_4$\\ 25 & & $[33,8,11]_4$ & $[33,13,10]_4$ & $[[33,5,10/4]]_4$ & 100 & & $[45,1,45]_4$ & $[45,26,7]_4$ & $[[45,25,7/2]]_4$\\ 26 & & $[33,13,10]_4$ & $[33,18,8]_4$ & $[[33,5,8/6]]_4$ & 101 & & $[45,1,45]_4$ & $[45,28,6]_4$ & $[[45,27,6/2]]_4$\\ 27 & & $[33,18,8]_4$ & $[33,28,2]_4$ & $[[33,10,11/2]]_4$ & 102 & & $[45,1,45]_4$ & $[45,31,5]_4$ & $[[45,30,5/2]]_4$\\ 28 & & $[33,8,11]_4$ & $[33,18,8]_4$ & $[[33,10,8/4]]_4$ & 103 & & $[45,1,45]_4$ & $[45,33,3]_4$ & $[[45,32,3/2]]_4$\\ 29 & & $[33,1,33]_4$ & $[33,13,10]_4$ & $[[33,12,10/2]]_4$ & 104 & & $[45,1,45]_4$ & $[45,39,2]_4$ & $[[45,38,2/2]]_4$\\ 30 & & $[33,8,11]_4$ & $[33,23,3]_4$ & $[[33,15,4/3]]_4$ & 105 & $47$ & $[47,1,47]_4$ & $[47,24,11]_4$ & $[[47,23,11/2]]_4$\\ 31 & & $[33,1,33]_4$ & $[33,18,8]_4$ & $[[33,17,8/2]]_4$ & 106 & $49$ & $[49,1,49]_4$ & $[49,4,21]_4$ & $[[49,3,21/2]]_4$\\ 32 & & $[33,8,11]_4$ & $[33,28,2]_4$ & $[[33,20,4/2]]_4$ & 107 & & $[49,1,49]_4$ & $[49,7,7]_4$ & $[[49,6,7/2]]_4$\\ 33 & & $[33,1,33]_4$ & $[33,23,3]_4$ & $[[33,22,3/2]]_4$ & 108 & & $[49,1,49]_4$ & $[49,28,3]_4$ & $[[49,27,3/2]]_4$\\ 34 & & $[33,1,33]_4$ & $[33,28,2]_4$ & $[[33,27,2/2]]_4$ & 109 & $51$ & $[51,5,35]_4$ & $[51,6,34]_4$ & $[[51,1,34/3]]_4$\\ 35 & $35$ & $[35,14,7]_4$ & $[35,17,5]_4$ & $[[35,3,8/5]]_4$ & 110 & & $[51,18,18]_4$ & $[51,19,17]_4$ & $[[51,1,17/8]]_4$\\ 36 & & $[35,1,35]_4$ & $[35,6,15]_4$ & $[[35,5,15/2]]_4$ & 111 & & $[51,1,51]_4$ & $[51,5,35]_4$ & $[[51,4,35/2]]_4$\\ 37 & & $[35,6,15]_4$ & $[35,14,7]_4$ & $[[35,8,7/4]]_4$ & 112 & & $[51,6,34]_4$ & $[51,10,27]_4$ & $[[51,4,27/4]]_4$\\ 38 & & $[35,6,15]_4$ & $[35,17,5]_4$ & $[[35,11,5/4]]_4$ & 113 & & $[51,10,27]_4$ & $[51,14,19]_4$ & $[[51,4,19/6]]_4$\\ 39 & & $[35,14,7]_4$ & $[35,29,3]_4$ & $[[35,15,8/3]]_4$ & 114 & & $[51,1,51]_4$ & $[51,6,34]_4$ & $[[51,5,34/2]]_4$\\ 40 & & $[35,1,35]_4$ & $[35,17,5]_4$ & $[[35,16,5/2]]_4$ & 115 & & $[51,5,35]_4$ & $[51,10,27]_4$ & $[[51,5,27/3]]_4$\\ 41 & & $[35,6,15]_4$ & $[35,29,3]_4$ & $[[35,23,4/3]]_4$ & 116 & & $[51,18,18]_4$ & $[51,23,14]_4$ & $[[51,5,14/8]]_4$\\ 42 & & $[35,1,35]_4$ & $[35,29,3]_4$ & $[[35,28,3/2]]_4$ & 117 & & $[51,39,5]_4$ & $[51,47,2]_4$ & $[[51,8,24/2]]_4$\\ 43 & $37$ & $[37,1,37]_4$ & $[37,19,11]_4$ & $[[37,18,11/2]]_4$ & 118 & & $[51,6,34]_4$ & $[51,14,19]_4$ & $[[51,8,19/4]]_4$\\ 44 & $39$ & $[39,1,39]_4$ & $[39,2,26]_4$ & $[[39,1,26/2]]_4$ & 119 & & $[51,10,27]_4$ & $[51,18,18]_4$ & $[[51,8,18/6]]_4$\\ 45 & & $[39,1,39]_4$ & $[39,2,26]_4$ & $[[39,1,26/2]]_4$ & 120 & & $[51,1,51]_4$ & $[51,10,27]_4$ & $[[51,9,27/2]]_4$\\ 46 & & $[39,8,15]_4$ & $[39,9,13]_4$ & $[[39,1,13/3]]_4$ & 121 & & $[51,5,35]_4$ & $[51,14,19]_4$ & $[[51,9,19/3]]_4$\\ 47 & & $[39,21,9]_4$ & $[39,27,3]_4$ & $[[39,6,12/3]]_4$ & 122 & & $[51,10,27]_4$ & $[51,19,17]_4$ & $[[51,9,17/6]]_4$\\ 48 & & $[39,9,13]_4$ & $[39,15,10]_4$ & $[[39,6,10/4]]_4$ & 123 & & $[51,6,34]_4$ & $[51,18,18]_4$ & $[[51,12,18/4]]_4$\\ 49 & & $[39,15,10]_4$ & $[39,21,9]_4$ & $[[39,6,9/6]]_4$ & 124 & & $[51,1,51]_4$ & $[51,14,19]_4$ & $[[51,13,19/2]]_4$\\ 50 & & $[39,1,39]_4$ & $[39,8,15]_4$ & $[[39,7,15/2]]_4$ & 125 & & $[51,5,35]_4$ & $[51,18,18]_4$ & $[[51,13,18/3]]_4$\\ 51 & & $[39,8,15]_4$ & $[39,15,10]_4$ & $[[39,7,10/3]]_4$ & 126 & & $[51,6,34]_4$ & $[51,19,17]_4$ & $[[51,13,17/4]]_4$\\ 52 & & $[39,1,39]_4$ & $[39,9,13]_4$ & $[[39,8,13/2]]_4$ & 127 & & $[51,10,27]_4$ & $[51,23,14]_4$ & $[[51,13,14/6]]_4$\\ 53 & & $[39,21,9]_4$ & $[39,33,2]_4$ &
$[[39,12,12/2]]_4$ & 128 & & $[51,5,35]_4$ & $[51,19,17]_4$ & $[[51,14,17/3]]_4$\\ 54 & & $[39,9,13]_4$ & $[39,21,9]_4$ & $[[39,12,9/4]]_4$ & 129 & & $[51,1,51]_4$ & $[51,18,18]_4$ & $[[51,17,18/2]]_4$\\ 55 & & $[39,8,15]_4$ & $[39,21,9]_4$ & $[[39,13,9/3]]_4$ & 130 & & $[51,6,34]_4$ & $[51,23,14]_4$ & $[[51,17,14/4]]_4$\\ 56 & & $[39,1,39]_4$ & $[39,15,10]_4$ & $[[39,14,10/2]]_4$ & 131 & & $[51,18,18]_4$ & $[51,35,9]_4$ & $[[51,17,9/8]]_4$\\ 57 & & $[39,9,13]_4$ & $[39,27,3]_4$ & $[[39,18,4/3]]_4$ & 132 & & $[51,1,51]_4$ & $[51,19,17]_4$ & $[[51,18,17/2]]_4$\\ 58 & & $[39,8,15]_4$ & $[39,27,3]_4$ & $[[39,19,3/3]]_4$ & 133 & & $[51,5,35]_4$ & $[51,23,14]_4$ & $[[51,18,14/3]]_4$\\ 59 & & $[39,1,39]_4$ & $[39,21,9]_4$ & $[[39,20,9/2]]_4$ & 134 & & $[51,1,51]_4$ & $[51,23,14]_4$ & $[[51,22,14/2]]_4$\\ 60 & & $[39,9,13]_4$ & $[39,33,2]_4$ & $[[39,24,4/2]]_4$ & 135 & & $[51,10,27]_4$ & $[51,35,9]_4$ & $[[51,25,9/6]]_4$\\ 61 & & $[39,1,39]_4$ & $[39,27,3]_4$ & $[[39,26,3/2]]_4$ & 136 & & $[51,6,34]_4$ & $[51,35,9]_4$ & $[[51,29,9/4]]_4$\\ 62 & & $[39,1,39]_4$ & $[39,33,2]_4$ & $[[39,32,2/2]]_4$ & 137 & & $[51,10,27]_4$ & $[51,39,5]_4$ & $[[51,29,6/5]]_4$\\ 63 & $41$ & $[41,1,41]_4$ & $[41,11,20]_4$ & $[[41,10,20/2]]_4$ & 138 & & $[51,5,35]_4$ & $[51,35,9]_4$ & $[[51,30,9/3]]_4$\\ 64 & & $[41,21,9]_4$ & $[41,31,6]_4$ & $[[41,10,10/6]]_4$ & 139 & & $[51,10,27]_4$ & $[51,43,3]_4$ & $[[51,33,6/3]]_4$\\ 65 & & $[41,11,20]_4$ & $[41,21,9]_4$ & $[[41,10,9/7]]_4$ & 140 & & $[51,6,34]_4$ & $[51,39,5]_4$ & $[[51,33,5/4]]_4$\\ 66 & & $[41,1,41]_4$ & $[41,21,9]_4$ & $[[41,20,9/2]]_4$ & 141 & & $[51,1,51]_4$ & $[51,35,9]_4$ & $[[51,34,9/2]]_4$\\ 67 & & $[41,11,20]_4$ & $[41,31,6]_4$ & $[[41,20,7/6]]_4$ & 142 & & $[51,5,35]_4$ & $[51,39,5]_4$ & $[[51,34,5/3]]_4$\\ 68 & & $[41,1,41]_4$ & $[41,31,6]_4$ & $[[41,30,6/2]]_4$ & 143 & & $[51,10,27]_4$ & $[51,47,2]_4$ & $[[51,37,6/2]]_4$\\ 69 & $43$ & $[43,1,43]_4$ & $[43,8,26]_4$ & $[[43,7,26/2]]_4$ & 144 & & $[51,6,34]_4$ & $[51,43,3]_4$ & $[[51,37,4/3]]_4$\\ 70 & & $[43,29,6]_4$ & $[43,36,5]_4$ & $[[43,7,14/5]]_4$ & 145 & & $[51,1,51]_4$ & $[51,39,5]_4$ & $[[51,38,5/2]]_4$\\ 71 & & $[43,22,11]_4$ & $[43,29,6]_4$ & $[[43,7,12/6]]_4$ & 146 & & $[51,5,35]_4$ & $[51,43,3]_4$ & $[[51,38,3/3]]_4$\\ 72 & $43$ & $[43,1,43]_4$ & $[43,15,13]_4$ & $[[43,14,13/2]]_4$ & 147 & & $[51,6,34]_4$ & $[51,47,2]_4$ & $[[51,41,4/2]]_4$\\ 73 & & $[43,22,11]_4$ & $[43,36,5]_4$ & $[[43,14,12/5]]_4$ & 148 & & $[51,5,35]_4$ & $[51,47,2]_4$ & $[[51,42,3/2]]_4$\\ 74 & & $[43,15,13]_4$ & $[43,29,6]_4$ & $[[43,14,6/6]]_4$ & 149 & & $[51,1,51]_4$ & $[51,47,2]_4$ & $[[51,46,2/2]]_4$\\ 75 & & $[43,1,43]_4$ & $[43,22,11]_4$ & $[[43,21,11/2]]_4$ & & & & & \\ \hline \end{tabular} \end{table*} \section{Asymmetric Quantum Codes from Nested Additive Codes over ${\mathbb F}_4$}\label{sec:nestedadditive} To show the gain that we can get from Theorem~\ref{thm:3.5} over the construction which is based solely on ${\mathbb F}_{4}$-linear codes, we exhibit asymmetric quantum codes which are derived from nested additive codes. An example of asymmetric quantum code with $k>0$ can be derived from a self-orthogonal additive cyclic code listed as Entry 3 in~\cite[Table I]{CRSS98}. The code is of parameters $(21,2^{16},9)_4$ yielding a $[[21,5,6/6]]_4$ quantum code $Q$ by Theorem~\ref{thm:3.5}. In a similar manner, a $[[23,12,4/4]]_4$ quantum code can be derived from Entry 5 of the same table. Another very interesting example is the $(12,2^{12},6)_4$ dodecacode $C$ mentioned in Remark~\ref{rem:5.3}. 
Its generator matrix $G$ is given in Equation (\ref{eq:9.1}). Let $G_{D},G_{E}$ be matrices formed, respectively, by deleting the last 4 and 8 rows of $G$. Construct two additive codes $D,E \subset C$ with generator matrices $G_{D}$ and $G_{E}$, respectively. Applying Theorem~\ref{thm:3.5} with $C_1=D^{\perp_{\mathop{{\rm tr}}}}$ and $C_2 = C$ yields an asymmetric quantum code $Q$ with parameters $[[12,2,6/3]]_4$. Performing the same process to $E \subset C$ results in a $[[12,4,6/2]]_4$-code. \begin{equation}\label{eq:9.1} G=\left( \begin{array}{*{12}{l}} 0 & 0 & 0 & 0 & 0 & 0 & 1 & 1 & 1 & 1 & 1 & 1\\ 0 & 0 & 0 & 0 & 0 & 0 & \omega & \omega & \omega & \omega & \omega & \omega\\ 1 & 1 & 1 & 1 & 1 & 1 & 0 & 0 & 0 & 0 & 0 & 0 \\ \omega & \omega & \omega & \omega & \omega & \omega & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1 & \omega & \overline{\omega} & 0 & 0 & 0 & 1 & \omega & \overline{\omega}\\ 0 & 0 & 0 & \omega & \overline{\omega} & 1 & 0 & 0 & 0 & \omega & \overline{\omega} & 1\\ 1 & \overline{\omega} & \omega & 0 & 0 & 0 & 1 & \overline{\omega} & \omega & 0 & 0 & 0\\ \omega & 1 & \overline{\omega} & 0 & 0 & 0 & \omega & 1 & \overline{\omega} & 0 & 0 & 0\\ 0 & 0 & 0 & 1 & \overline{\omega} & \omega & \omega & \overline{\omega} & 1 & 0 & 0 & 0\\ 0 & 0 & 0 & \omega & 1 & \overline{\omega} & 1 & \omega & \overline{\omega} & 0 & 0 & 0\\ 1 & \omega & \overline{\omega} & 0 & 0 & 0 & 0 & 0 & 0 & \overline{\omega} & \omega & 1\\ \overline{\omega} & 1 & \omega & 0 & 0 & 0 & 0 & 0 & 0 & 1 & \overline{\omega} & \omega \end{array} \right)\text{.} \end{equation} The next three subsections present more systematic approaches to finding good asymmetric quantum codes based on nested additive codes. \subsection{Construction from circulant codes}\label{subsec:circulant} As is the case with linear codes, an additive code $C$ is said to be cyclic if, given a codeword ${\mathbf{v}} \in C$, the cyclic shift of ${\mathbf{v}}$ is also in $C$. It is known (see~\cite[Th. 14]{CRSS98}) that any additive cyclic $(n,2^{k})_{4}$-code $C$ has at most two generators. A more detailed study of additive cyclic codes over ${\mathbb F}_{4}$ is given in~\cite{Huff07}. Instead of using additive cyclic codes, a subfamily which is called \textit{additive circulant $(n,2^{n})_4$-code} in~\cite{GK04} is used for ease of computation. An additive circulant code $C$ has as a generator matrix $G$ the complete cyclic shifts of just one codeword ${\mathbf{v}}=(v_{1},v_{2},\ldots,v_{n})$. We call $G$ the \textit{cyclic development of ${\mathbf{v}}$}. More explicitly, $G$ is given by \begin{equation}\label{eq:circ} G=\left( \begin{array}{*{12}{l}} v_{1} & v_{2} & v_{3} & \ldots & v_{n-1} & v_{n}\\ v_{n} & v_{1} & v_{2} & \ldots & v_{n-2} & v_{n-1}\\ \vdots & \vdots & \vdots & \ddots & \vdots & \vdots \\ v_{2} & v_{3} & v_{4} & \ldots & v_{n} & v_{1} \end{array} \right)\text{.} \end{equation} To generate a subcode of a circulant extremal self-dual code $C$ we delete the rows of its generator matrix $G$ starting from the last row, the first row being the generating codeword ${\mathbf{v}}$. We record the best possible combinations of the size of the resulting code $Q$ and $\left\lbrace d_z,d_x \right\rbrace$. To save space, only new codes or codes with better parameters than those previously constructed are presented. Table~\ref{table:Circu} summarizes the finding for $n \leq 30$. Zeros on the right of each generating codeword are omitted. 
The number of last rows to be deleted to obtain the desired subcode is given in the column denoted by \textbf{del}. \begin{table*} \caption{Asymmetric Quantum Codes from Additive Circulant Codes for $n \leq 30$} \label{table:Circu} \centering \begin{tabular}{| c | c | l | c | l |} \hline \textbf{No.} & \textbf{$n$} & \textbf{Generator ${\mathbf{v}}$} & \textbf{del} & \textbf{Code $Q$} \\ \hline 1 & $8$ & $(\overline{\omega},1,\omega,0,1)$ & $1$ & $[[8,0.5,4/2]]_4$ \\ 2 & $10$ & $(\overline{\omega},1,\omega,\omega,0,0,1)$ & $2$ & $[[10,1,4/3]]_4$ \\ 3 & & & $3$ & $[[10,1.5,4/2]]_4$ \\ 4 & $12$ & $(1,0,\omega,1,\omega,0,1)$ & $2$ & $[[12,1,5/3]]_4$ \\ 5 & $14$ & $(\omega,1,1,\overline{\omega},0,\omega,0,1)$ & $1$ & $[[14,0.5,6/3]]_4$ \\ 6 & & & $5$ & $[[14,2.5,6/2]]_4$ \\ 7 & $16$ & $(\omega,1,\overline{\omega},\omega,0,0,0,\omega,1)$ & $2$ & $[[16,1,6/4]]_4$ \\ 8 & $16$ & $(\omega,1,1,0,0,\overline{\omega},\omega,0,1)$ & $6$ & $[[16,3,6/3]]_4$ \\ 9 & $16$ & $(\omega,1,\overline{\omega},\omega,0,0,0,\omega,1)$ & $7$ & $[[16,3.5,6/2]]_4$ \\ 10 & $19$ & $(1,0,\omega,\overline{\omega},1,\overline{\omega},\omega,0,1)$ & $4$ & $[[19,2,7/4]]_4$ \\ 11 & $20$ & $(\overline{\omega},\overline{\omega},\omega,\overline{\omega},\omega,1,0,0,0,1,1)$ & $2$ & $[[20,1,8/5]]_4$ \\ 12 & & & $3$ & $[[20,1.5,8/4]]_4$ \\ 13 & & & $7$ & $[[20,3.5,8/3]]_4$ \\ 14 & $22$ & $(\omega,\omega,\omega,\omega,1,1,\overline{\omega},\omega,0,\omega,\omega,0,\omega,\omega,1,1)$ & $4$&$[[22,2,8/5]]_4$\\ 15 & & & $6$ & $[[22,3,8/4]]_4$ \\ 16 & & & $10$ & $[[22,5,8/3]]_4$ \\ 17 & & & $11$ & $[[22,5.5,8/2]]_4$ \\ 18 & $23$ & $(1,\omega,\omega,1,\overline{\omega},1,\omega,\omega,1)$ & $2$ & $[[23,1,8/4]]_4$\\ 19 & & & $6$ & $[[23,3,8/3]]_4$ \\ 20 & $25$ & $(1,1,\omega,0,1,\overline{\omega},1,0,\omega,1,1)$ & $3$ & $[[25,1.5,8/5]]_4$\\ 21 & & & $6$ & $[[25,3,8/4]]_4$ \\ 22 & $26$ & $(1,0,\omega,\omega,\omega,\overline{\omega},\omega,\omega,\omega,0,1)$ & $5$ & $[[26,2.5,8/4]]_4$\\ 23 & & & $6$ & $[[26,3,8/3]]_4$ \\ 24 & $27$ & $(1,0,\omega,1,\omega,\overline{\omega},\omega,1,\omega,0,1)$ & $1$ & $[[27,0.5,8/5]]_4$\\ 25 & $27$ & $(1,0,\omega,\omega,1,\overline{\omega},1,\omega,\omega,0,1)$ & $5$ & $[[27,2.5,8/4]]_4$ \\ 26 & & & $6$ & $[[27,3,8/3]]_4$\\ 27 & $28$ & $(\overline{\omega},\omega,\overline{\omega},1,\overline{\omega},1,\omega,\omega,\overline{\omega},\overline{\omega},\omega,\omega,0,1,1)$ & $1$ & $[[28,0.5,10/7]]_4$\\ 28 & & & $2$ & $[[28,1,10/5]]_4$ \\ 29 & & & $9$ & $[[28,4.5,10/4]]_4$ \\ 30 & & & $11$ & $[[28,5.5,10/3]]_4$ \\ 31 & $29$ & $(1,\omega,0,\omega,\overline{\omega},1,\overline{\omega},\omega,\overline{\omega},1,\overline{\omega},\omega,0,\omega,1)$ & $1$ & $[[29,0.5,11/7]]_4$\\ 32 & & & $3$ & $[[29,1.5,11/6]]_4$ \\ 33 & & & $8$ & $[[29,4,11/4]]_4$ \\ 34 & & & $12$ & $[[29,6,11/3]]_4$ \\ 35 & $30$ & $(\overline{\omega},0,\overline{\omega},\omega,1,\omega,0,\overline{\omega},\omega,0,1,\omega,1,1,0,1)$ & $5$ & $[[30,2.5,12/6]]_4$\\ 36 & & & $6$ & $[[30,3,12/5]]_4$ \\ 37 & & & $10$ & $[[30,5,12/3]]_4$ \\ 38 & & & $11$ & $[[30,5.5,12/2]]_4$ \\ \hline \end{tabular} \end{table*} \subsection{Construction from $4$-circulant and bordered $4$-circulant codes}\label{subsec:4circulant} Following~\cite{GK04}, a \textit{$4$-circulant additive $(n,2^{n})_4$-code of even length $n$} has the following generator matrix: \begin{equation}\label{eq:4circ} G=\left( \begin{array}{*{12}{l}} I_{\frac{n}{2}} & A_{\frac{n}{2}}\\ B_{\frac{n}{2}} & I_{\frac{n}{2}} \end{array} \right) \end{equation} where $I_{\frac{n}{2}}$ is an identity 
matrix of size $n/2$ and $A_{\frac{n}{2}},B_{\frac{n}{2}}$ are circulant matrices of the form given in Equation (\ref{eq:circ}). Starting from a generator matrix $G_{C}$ of an additive $4$-circulant code $C$, a matrix $G_{D}$ is constructed by deleting the last $r$ rows of $G_{C}$ to derive an additive subcode $D$ of $C$. For $n \leq 30$ we found three asymmetric quantum codes which are either new or better than the ones previously constructed. Table~\ref{table:4Circu} presents the findings. Under the column denoted by \textbf{$A,B$} we list the generating codewords for the matrices $A$ and $B$, in that order. \begin{table} \caption{Asymmetric Quantum Codes from Additive 4-Circulant Codes for $n \leq 30$} \label{table:4Circu} \centering \begin{tabular}{| c | c | c | l |} \hline \textbf{$n$} & \textbf{$A,B$} & \textbf{del} & \textbf{Code $Q$} \\ \hline $14$ & $(1,\omega,\omega,\omega,1,0,0)$, & $2$ & $[[14,1,6/3]]_4$ \\ & $(1,0,0,1,\omega,\omega,\omega)$ & & \\ $20$ & $(\overline{\omega},\omega,\omega,\omega,\omega,\omega,\overline{\omega},0,\omega,0)$, & $8$ & $[[20,4,8/2]]_4$ \\ & $(\omega,0,\overline{\omega},0,\omega,\omega,\overline{\omega},\omega,\overline{\omega},\omega)$ & & \\ \hline \end{tabular} \end{table} Let ${\mathbf{d}}=(\omega,\ldots,\omega)$ and ${\mathbf{c}}$ be the transpose of ${\mathbf{d}}$. A \textit{bordered $4$-circulant additive $(n,2^{n})_4$-code of odd length $n$} has the following generator matrix: \begin{equation}\label{eq:bordered4circ} G=\left( \begin{array}{*{12}{l}} e & {\mathbf{1}} & {\mathbf{d}}\\ {\mathbf{1}} & I_{\frac{n-1}{2}} & A_{\frac{n-1}{2}}\\ {\mathbf{c}} & B_{\frac{n-1}{2}} & I_{\frac{n-1}{2}} \end{array} \right)\ \end{equation} where $e$ is one of $0,1,\omega$, or $\overline{\omega}$, and $A_{\frac{n-1}{2}},B_{\frac{n-1}{2}}$ are circulant matrices. We perform the procedure of constructing a subcode $D$ of $C$ by deleting the rows of $G_{C}$, starting from the last row. For $n \leq 30$, the five asymmetric quantum codes found, either new or with better parameters, are listed in Table~\ref{table:B4Circu}. As before, under the column denoted by \textbf{$A,B$} we list the generating codewords for the matrices $A$ and $B$, in that order.
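For concreteness, the block layout of Equation (\ref{eq:bordered4circ}) is easy to reproduce numerically. The short Python sketch below (an illustration of the matrix structure only, not the search code used here) assembles the bordered $4$-circulant generator matrix for the first $n=23$ entry of Table~\ref{table:B4Circu} below, with GF(4) symbols in the $0,\ldots,3$ integer encoding, and derives $G_{D}$ by deleting the last row.
\begin{verbatim}
# Layout of the bordered 4-circulant generator matrix of
# Eq. (bordered4circ); GF(4) symbols encoded as integers
# (0 = 0, 1 = 1, 2 = w, 3 = w^2). Illustration only.
import numpy as np

def circulant(v):
    """Cyclic development of v: row i is v shifted right i places."""
    return np.array([np.roll(v, i) for i in range(len(v))])

def bordered_four_circulant(e, a, b):
    """G = [[e, 1, d], [1, I, A], [c, B, I]], d = (w,...,w), c = d^T."""
    m = len(a)                       # m = (n-1)/2
    ones = np.ones((m, 1), dtype=int)
    dd = np.full((1, m), 2)          # the all-w border row d
    return np.block([[np.array([[e]]), ones.T, dd],
                     [ones, np.eye(m, dtype=int), circulant(a)],
                     [dd.T, circulant(b), np.eye(m, dtype=int)]])

# first n = 23 entry of the table (e = w):
A = [3, 1, 3, 2, 2, 1, 1, 0, 0, 0, 0]
B = [0, 3, 2, 2, 2, 2, 3, 1, 0, 1, 2]
G_C = bordered_four_circulant(2, A, B)
G_D = G_C[:-1]                       # delete the last row (del = 1)
print(G_C.shape, G_D.shape)          # (23, 23) (22, 23)
\end{verbatim}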
\begin{table} \caption{Asymmetric Quantum Codes from Additive Bordered 4-Circulant Codes for $n \leq 30$} \label{table:B4Circu} \centering \begin{tabular}{| c | c | c | c | l |} \hline \textbf{$n$} & \textbf{$e$} & \textbf{$A,B$} & \textbf{del} & \textbf{Code $Q$} \\ \hline $23$ & $\omega$ & $(\overline{\omega},1,\overline{\omega},\omega,\omega,1,1,0,0,0,0)$, & $1$ & $[[23,0.5,8/5]]_4$ \\ & & $(0,\overline{\omega},\omega,\omega,\omega,\omega,\overline{\omega},1,0,1,\omega)$ & & \\ $23$ & $\omega$ & $(\overline{\omega},1,\overline{\omega},\omega,\omega,1,1,0,0,0,0)$, & $4$ & $[[23,2,8/4]]_4$ \\ & & $(0,\overline{\omega},\omega,\omega,\omega,\omega,\overline{\omega},1,0,1,\omega)$ & & \\ $23$ & $\omega$ & $(1,1,\overline{\omega},1,\overline{\omega},1,1,0,0,0,0)$, & $9$ & $[[23,4.5,8/2]]_4$ \\ & &$(\overline{\omega},\overline{\omega},\omega,\omega,\overline{\omega},\overline{\omega},\omega,0,\omega,0,\omega)$ & & \\ $25$ & $\omega$ & $(\omega,1,1,\omega,\overline{\omega},1,0,1,0,0,0,0)$, & $4$ & $[[25,2,8/5]]_4$ \\ & & $(1,\overline{\omega},\overline{\omega},\omega,\omega,\overline{\omega},\omega,\overline{\omega},1,0,\overline{\omega},\omega)$ & & \\ $25$ & $\omega$ & $(\omega,1,\overline{\omega},1,1,\omega,0,1,0,0,0,0)$, & $10$ & $[[25,5,8/2]]_4$ \\ & & $(1,\overline{\omega},\overline{\omega},\overline{\omega},\omega,\overline{\omega},\omega,1,\omega,\overline{\omega},0,\omega)$ & & \\ \hline \end{tabular} \end{table} \begin{rem}\label{rem:9.1} A similar procedure has been done to the generator matrices of \textit{s-extremal additive codes} found in~\cite{BGKW07} and~\cite{Var09} as well as to the \textit{formally self-dual additive codes} of~\cite{HK08}. So far we have found no new or better asymmetric codes from these sources. \end{rem} Deleting the rows of $G_{C}$ in a more careful way than just doing so consecutively starting from the last row may yield new or better asymmetric quantum codes. The process, however, is more time consuming. Consider the following instructive example taken from bordered 4-circulant codes. Let \begin{equation*} P:=\left\lbrace 1,2,4,5,8,10,12,13,14,15,16\right\rbrace \text{.} \end{equation*} Let $C$ be a bordered 4-circulant code of length $n=23$ with generator matrix $G_{C}$ in the form given in Equation (\ref{eq:bordered4circ}) with $e=\omega$ and with the circulant matrices $A,B$ generated by, respectively, \begin{equation*} \begin{aligned} &(\overline{\omega},1,\overline{\omega},\omega,\omega,1,1,0,0,0,0)\text{, and} \\ &(0,\overline{\omega},\omega,\omega,\omega,\omega,\overline{\omega},1,0,1,\omega) \text{.} \end{aligned} \end{equation*} Use the rows of $G_{C}$ indexed by the set $P$ as the rows of $G_{D}$, the generator matrix of a subcode $D$ of $C$. Using $D \subset C$, a $[[23,6,8/2]]_4$ asymmetric quantum code $Q$ can be constructed. If we use the same code $C$ but $G_{D}$ is now $G_{C}$ with rows 3,6,7,9, and 11 deleted, then, in a similar manner, we get a $[[23,2.5,8/4]]_4$ code $Q$. \subsection{Construction from two proper subcodes}\label{subsec:proper} In the previous two subsections, the larger code $C$ is an additive self-dual code while the subcode $D$ of $C$ is constructed by deleting rows of $G_{C}$. New or better asymmetric quantum codes can be constructed from two nested proper subcodes of an additive self-dual code. The following two examples illustrate this fact. 
\begin{ex}\label{ex:9.1} Let $C$ be a self-dual Type II additive code of length 22 with generating vector \begin{equation*} {\mathbf{v}}=(\omega,\omega,\omega,\omega,1,1,\overline{\omega}, \omega,0,\omega,\omega,0,\omega,\omega,1,1,0,\ldots,0)\text{.} \end{equation*} Let $G_{C}$ be the generator matrix of $C$ from the cyclic development of ${\mathbf{v}}$. Derive the generator matrices $G_{D}$ of $D$ and $G_{E}$ of $E$ by deleting, respectively, the last $10$ and $11$ rows of $G_{C}$. Applying Theorem~\ref{thm:3.5} on $E \subset D$ yields an asymmetric $[[22,0.5,10/2]]_4$-code $Q$. \end{ex} \begin{ex}\label{ex:9.2} Let $C$ be a self-dual Type I additive code of length 25 labeled $C_{25,4}$ in~\cite{GK04} with generating vector \begin{equation*} {\mathbf{v}}=(1,1,\omega,0,1,\overline{\omega},1,0,\omega,1,1,0,0,\ldots,0)\text{.} \end{equation*} Let $G_{C}$ be the generator matrix of $C$ from the cyclic development of ${\mathbf{v}}$. Derive the generator matrices $G_{D}$ of $D$ and $G_{E}$ of $E$ by deleting, respectively, the last $5$ and $6$ rows of $G_{C}$. An asymmetric $[[25,0.5,9/4]]_4$-code $Q$ is hence constructed. \end{ex} \section{Conclusions and Open Problems}\label{sec:Conclusion} In this paper, we establish a new method of deriving asymmetric quantum codes from additive, not necessarily linear, codes over the field ${\mathbb F}_{q}$ with $q$ an even power of a prime $p$. Many asymmetric quantum codes over ${\mathbb F}_{4}$ are constructed. These codes are different from those listed in prior works (see~\cite[Ch. 17]{Aly08} and~\cite{SRK09}) on asymmetric quantum codes. There are several open directions to pursue. On ${\mathbb F}_{4}$-additive codes, exploring the notion of nestedness in tandem with the dual distance of the inner code is a natural continuation if we are to construct better asymmetric quantum codes. An immediate project is to understand such relation in the class of cyclic (not merely circulant) codes studied in~\cite{Huff07}. Extension to codes over ${\mathbb F}_9$ or ${\mathbb F}_{16}$ is another option worth considering. More generally, establishing propagation rules may help us find better bounds on the parameters of asymmetric quantum codes. \section*{Acknowledgment}\label{sec:Acknowledge} The first author thanks Keqin Feng for fruitful discussions. The authors thank Iliya Bouyukliev, Yuena Ma, and Ruihu Li for sharing their data on known Hermitian self-orthogonal ${\mathbb F}_{4}$-codes, and Somphong Jitman for helpful discussions.
\section{Introduction} \label{Introduction} The dirty boson problem has become a central and fascinating subject in condensed matter physics starting from the first theoretical investigations more than 20 years ago~\cite{Ma, Giamarchi, FWGF}. The interplay between quantum degeneracy, interactions and quenched disorder in a bosonic system gives rise to a rich scenario that exhibits new and peculiar features compared to the much older problem of the metal/insulator transition with electrons~\cite{Mott, Anderson, Belitz}. An important difference is that bosons cannot rely on the Pauli pressure and repulsive interactions are crucial to avoid collapse in the lowest localized single-particle state. As a result, perturbation schemes starting from the non-interacting model, which are most useful for fermions, are completely inappropriate in the case of bosons. Theoretical investigations, including quantum Monte Carlo simulations, have mainly addressed the problem of bosons on a lattice with bounded on-site disorder, the so-called disordered Bose-Hubbard model. In this case commensurability, {\it i.e.} the integer ratio of the number of particles to the number of lattice sites, plays a major role, allowing, even in the absence of disorder, for a superfluid/insulator transition (of the Mott type) that is purely driven by interaction effects. Furthermore, depending on the value of the interaction strength, disorder can act in favor of superfluidity, by randomizing the insulating state close to the Mott transition, or in opposition to it by localizing almost free particles into single-particle levels. The insulating phases occurring in the two regimes of strong and weak interactions are respectively often referred to as the Bose glass, when interactions suppress superfluidity, and the Anderson glass, when interactions compete with disorder, enhancing superfluidity~\cite{Scalettar, Gurarie}. The disorder-driven quantum phase transition occurring at $T=0$ has been investigated in a series of numerical studies both at incommensurate and commensurate densities and in various dimensionalities~\cite{Scalettar, Krauth, Sorensen, Makivic, Zhang, Prokofev, Hitchcock, Gurarie}. The picture emerging from these studies, together with the crucial role of interactions in stabilizing the system, is that superfluidity is lost for strong enough disorder, leading to a gapless normal phase different from the incompressible Mott insulator. Random potentials in continuous systems have been considered using perturbative approaches based on the Bogoliubov theory~\cite{Huang, GPS, Kobayashi, Lopatin, Falco}. These methods are reliable when both interactions and disorder are weak and allow for the determination of the effect of disorder on the thermodynamic properties, including the fraction of atoms in the condensate, the superfluid density and other thermodynamic functions. Exact numerical methods have also been applied both at zero~\cite{Astra} and at finite temperature~\cite{Boninsegni, Gordillo}. In particular, the path-integral Monte Carlo simulations carried out at finite $T$ addressed the problem of the elementary excitations~\cite{Boninsegni} and of the transition temperature~\cite{Gordillo} of a Bose fluid in a random environment. In the case of the continuous-space liquid phase, disorder always acts against superfluidity, whereas interaction helps make the superfluid state more robust.
For strong disorder and low temperatures one expects the system to enter an insulating phase (Bose glass) that is smoothly connected with the high-temperature normal phase existing when the disorder is weak. On the experimental side a large body of work was devoted to $^4$He adsorbed in porous media, such as Vycor glass and aerogels. These studies investigated the behavior of the heat capacity and of the superfluid response~\cite{Reppy1, Reppy2, Reppy3}, as well as the dynamic structure factor~\cite{Glyde1, Glyde2} as a function of temperature and filling. A suppression of the $\lambda$ transition is observed and the critical coverage for the onset of superfluidity is determined as a function of temperature, however, no clear evidence is found of a compressible Bose glass phase. More recently the dirty boson problem has been addressed using ultracold atoms, which offer unprecedented control and tunability of the disorder parameters and of the interaction strength. Transport and phase-coherence properties of an interacting gas in disordered optical potentials are investigated and an insulating state is reached by increasing the strength of disorder~\cite{Florence1, Fort, Paris1, Florence2, Hulet, DeMarco1, DeMarco2}. A large experimental effort has also been devoted to the suppression of diffusion for non-interacting particles (Anderson localization)~\cite{Florence3, Paris2}. In this work we report on a path-integral Monte Carlo (PIMC) study of an interacting Bose gas in the presence of correlated disorder produced by 3D optical speckles. This random potential is relevant for experiments and allows for an independent tuning of intensity and correlation length. By increasing the disorder strength, we find a sizable reduction of the superfluid transition temperature and the shift is larger for weaker interactions. We map out the normal to superfluid phase diagram, both in the chemical potential vs. disorder and in the density vs. disorder plane. For strong disorder and in the presence of small but finite interactions, the critical chemical potential varies linearly with the disorder intensity and is essentially independent of temperature and interaction strength, in agreement with the existence of a mobility edge separating localized from extended states. We also find that the critical chemical potential is much larger than the classical percolation threshold for the same random potential, implying that a major role is played by quantum localization effects. In the regime of strong disorder and for chemical potentials below the critical value, the equilibrium state is a highly degenerate normal gas. We investigate the thermodynamic properties of this phase, finding a $T^2$ dependence of the equation of state in agreement with the peculiar feature expected for the Bose glass phase. The effect of the disorder correlation length is discussed in detail and we show that a non-trivial behavior is obtained only when the correlation length is comparable with the mean interparticle distance. At $T=0$ we also carry out calculations using the Gross-Pitaevskii (GP) equation and at finite $T$ using a self-consistent mean-field approach based on Hartree-Fock theory and on the local density approximation. The results of the GP equation for the ground-state energy and the spatial distribution of particles are accurate even in the regime of strong disorder with short-range correlations. This conclusion might be useful in view of investigating the structural properties of the Bose glass phase. 
We consider a system of $N$ identical bosons of mass $m$, subject to the random field $V_{\rm{dis}}$ and interacting with a repulsive pairwise potential. The Hamiltonian of the gas takes then the form: \begin{equation} \hat{H}=\sum_{i=1}^N \left(-\frac{\hbar^2}{2m}\nabla_i^2+V_{\rm{dis}}({\bf r}_i)\right)+ \sum_{i<j}V(|{\bf r}_i-{\bf r}_j|) \;. \label{Intro1} \end{equation} Interatomic interactions are modeled using a hard-sphere potential: \begin{equation} V(r)=\left\{ \begin{array}{cc} +\infty & (r<a) \\ 0 & \;\; (r>a)\;, \end{array} \right. \label{Intro2} \end{equation} where the diameter $a$ coincides with the $s$-wave scattering length of the two-body scattering problem. Furthermore, the system is in a cubic box of volume $\Omega=L^3$ with periodic boundary conditions. The structure of the paper is as follows. In section~\ref{Section1} we introduce the random potential and its statistical properties. In section~\ref{Section2} we discuss classical percolation for the speckle potential and we estimate the percolation threshold in 3D. Some details of the PIMC numerical method are presented in section~\ref{Section3}. In section~\ref{Section4} we report our results on the superfluid transition: shift of the critical temperature, critical chemical potential and critical density. Most of these results were already presented in a previous publication of some of us~\cite{PRL}. In section~\ref{Section5} we introduce a mean-field approach based on the GP equation at $T=0$ and on a Hartree-Fock self-consistent theory at finite $T$ and for long-correlated disorder. The low temperature thermodynamics is discussed in section~\ref{Section6}, including the equation of state, the condensate and total density profiles and the behavior of superfluid density and condensate fraction as a function of temperature and interaction strength in the disordered phase. Finally, we draw our conclusions in section~\ref{Conclusions}. \begin{figure} \begin{center} \includegraphics[width=8cm]{figure1.eps} \caption{(color online). Radial dependence (in units of the inverse momentum cut-off $\Lambda$) of the disorder spatial autocorrelation function $\Gamma$. The solid (black) line refers to an average over many realizations of the random field; the (green) symbols correspond to a single realization. The (blue) line is a Gaussian fit.} \label{fig1} \end{center} \end{figure} \begin{figure} \begin{center} \includegraphics[width=8cm]{figure2.eps} \caption{(color online). Typical shape of the speckle potential $V_{\rm{dis}}$, with average value $V_0=\hbar^2/m\ell_c^2$, shown in the direction (0,0,1) of the simulation box.} \label{fig2} \end{center} \end{figure} \begin{figure} \begin{center} \includegraphics[width=8cm]{figure3.eps} \caption{(color online). Energy per particle of a classical non-interacting gas subject to the speckle potential. The homogeneous value $E/N=3k_BT/2$ is also shown as a reference.} \label{fig3} \end{center} \end{figure} \section{Speckle potential} \label{Section1} The random external field we consider is the one produced by 3D optical speckles.
The local intensity is obtained from the following expression~\cite{Huntley}: \begin{equation} V_{\rm{dis}}({\bf r})=V_0\biggl|\frac{1}{\Omega} \int d{\bf k} \tilde{\varphi}({\bf k})W({\bf k}) e^{i{\bf k}\cdot{\bf r}}\biggr|^2 \;, \label{speckle1} \end{equation} where $V_0$ is a positive constant and \begin{equation} \tilde{\varphi}({\bf k})=\int d{\bf r} \varphi({\bf r}) e^{-i{\bf k}\cdot{\bf r}} \label{speckle2} \end{equation} is the Fourier transform of the complex field $\varphi({\bf r})$, whose real and imaginary part are independent random variables sampled from a gaussian distribution with zero mean and unit variance. The function $W({\bf k})$ is a low-wavevector filter defined as \begin{equation} W({\bf k})=\left\{ \begin{array}{cc} 1 & (k<\pi\Lambda) \\ 0 & \;\; (k>\pi\Lambda)\;. \end{array} \right. \label{speckle3} \end{equation} The random potential in equation~(\ref{speckle1}) is positive definite and the probability distribution of intensities is given by the normalized exponential law~\cite{Goodman} \begin{equation} P(V_{\rm{dis}})=\frac{1}{V_0}e^{-V_{\rm{dis}}/V_0} \;. \label{speckle4} \end{equation} If the system's size is large enough the above defined disordered potential is self-averaging, {\it i.e.} spatial averages coincide with averages over different realizations. For a generic function $f(V_{\rm{dis}})$ of the disorder intensity one can thus write the following identity \begin{equation} \frac{1}{\Omega}\int d{\bf r} f[V_{\rm{dis}}({\bf r})]=\int_0^\infty dV_{\rm{dis}} P(V_{\rm{dis}}) f(V_{\rm{dis}}) \equiv \langle f(V_{\rm{dis}})\rangle \;. \label{speckle5} \end{equation} According to this property, the spatial average and the corresponding root-mean-square displacement of the speckle potential are both determined by the energy scale $V_0$: $\langle V_{\rm{dis}}\rangle=V_0$ and $\sqrt{\langle V_{\rm{dis}}^2\rangle-\langle V_{\rm{dis}}\rangle^2}=V_0$. The correlation length $\ell_c$ of the random field is defined from the spatial autocorrelation function, \begin{equation} \Gamma(r)=\langle V_{\rm{dis}}({\bf r}^\prime)V_{\rm{dis}}({\bf r}^\prime+{\bf r})\rangle-\langle V_{\rm{dis}}\rangle^2 \label{speckle6} \end{equation} as the length scale for which $\Gamma(\ell_c/2)=\Gamma(0)/2$. The above equations characterizing the speckle intensity field in 3D can be straightforwardly generalized to 2D and 1D. In particular, in 1D the autocorrelation function $\Gamma(x)$ takes the simple form~\cite{Goodman} \begin{equation} \Gamma(x)=\left(\frac{\sin(\pi\Lambda x)}{\pi\Lambda x}\right)^2 \;, \label{speckle7} \end{equation} and $\ell_c=0.88/\Lambda$. In 3D we find that the autocorrelation function $\Gamma(r)$ is well approximated by a gaussian (see figure~\ref{fig1}) and we obtain numerically $\ell_c=1.1/\Lambda$. It is important to notice that standard experimental realizations of optical speckles are 2D, {\it i.e.} the speckle pattern lies in the plane perpendicular to the propagation of the laser beam, featuring equal correlation lengths in the $x$ and $y$ planar directions and a much larger $\ell_c$ in the $z$ direction. We consider instead a 3D pattern, having the same correlation length in the three spatial directions. This random field can be realized, for example, by adding speckle patterns generated from different directions. The typical shape of the speckle potential $V_{\rm{dis}}$ is shown in figure~\ref{fig2}: typical wells have size $\ell_c$ and depth $V_0$. 
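The construction of equations (\ref{speckle1})--(\ref{speckle4}) is straightforward to implement numerically. The following Python sketch (a minimal illustration, assuming a cubic grid and fixing the normalization $\langle V_{\rm{dis}}\rangle=V_0$ a posteriori) generates a 3D speckle field by filtering a complex gaussian random field below the momentum cut-off and taking the squared modulus; the printed mean and standard deviation should both be close to $V_0$, as required by the exponential distribution (\ref{speckle4}).
\begin{verbatim}
# Minimal sketch of the speckle construction: filter a complex
# gaussian field below the cut-off pi*Lambda, transform back to
# real space and take the squared modulus.
import numpy as np

rng = np.random.default_rng(0)
M, L, V0 = 128, 32.0, 1.0       # grid points, box size, intensity
Lam = 1.1                        # cut-off for ell_c = 1.1/Lambda = 1

phi = rng.normal(size=(M, M, M)) + 1j * rng.normal(size=(M, M, M))
phik = np.fft.fftn(phi)

k = 2.0 * np.pi * np.fft.fftfreq(M, d=L / M)
kx, ky, kz = np.meshgrid(k, k, k, indexing="ij")
W = np.sqrt(kx**2 + ky**2 + kz**2) < np.pi * Lam   # low-k filter W(k)

V = np.abs(np.fft.ifftn(phik * W)) ** 2
V *= V0 / V.mean()               # fix the normalization <V_dis> = V0

print(V.mean(), V.std())         # both ~ V0 for exponential P(V_dis)
\end{verbatim}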
The energy $\hbar^2/m\ell_c^2$, associated with the correlation length $\ell_c$, and $V_0$ provide the two relevant energy scales for the disorder potential. In particular, if $V_0\gg\hbar^2/m\ell_c^2$ the random potential is classical in nature, with typical wells that are deep enough to sustain many single-particle bound states. The opposite regime, $V_0\ll\hbar^2/m\ell_c^2$, corresponds instead to quantum disorder, where typical wells of size $\ell_c$ do not have bound states and these can be supported only by rare wells of size much larger than $\ell_c$ or with depth much larger than $V_0$. The root-mean-square intensity $V_0$ and the correlation length $\ell_c$ are the relevant parameters characterizing in general the various types of disorder. For example, the delta-correlated disorder, which has been considered in many theoretical investigations~\cite{Huang,Lopatin,Falco,Yukalov}, is defined by the following autocorrelation function: \begin{equation} \langle\Delta V_{\rm{dis}}({\bf r})\Delta V_{\rm{dis}}({\bf r}^\prime)\rangle=\kappa\delta({\bf r}-{\bf r}^\prime) \;, \label{speckle8} \end{equation} where $\Delta V_{\rm{dis}}({\bf r})=V_{\rm{dis}}({\bf r})-\langle V_{\rm{dis}}\rangle$ is the displacement from the average. By approximating the speckle $\Gamma$ function~(\ref{speckle6}) using a gaussian function, $\Gamma(r)=V_0^2e^{-r^2/2\sigma^2}$, with $\sigma=\ell_c/\sqrt{8\log2}$ to recover the same half width at half maximum, one finds that the speckle field in the limit $\ell_c\to0$ reproduces a delta-correlated disorder with the strength $\kappa$ given by \begin{equation} \kappa=\left(\frac{\pi}{4\log2}\right)^{3/2}V_0^2\ell_c^3 \;. \label{speckle9} \end{equation} In our simulations the length scale $\ell_c$ is typically $\sim 100$ times larger than the hard-sphere diameter $a$, allowing for a wide range of disorder intensities where interaction effects are well described by the $s$-wave scattering length and the details of the interatomic potential are irrelevant. The typical box size used in the simulations ranges from $L\sim20\ell_c$ to $L\sim50\ell_c$. An indication of self-averaging of disorder for these values of $L$ is provided by figure~\ref{fig1}, where we show the comparison between the autocorrelation function $\Gamma$ averaged over many realizations of the random potential and the one corresponding to a single realization. The self-averaging property (\ref{speckle5}) allows one to calculate the thermodynamics of a classical non-interacting gas. For example, the average energy per particle obtained from the spatial average of the disordered potential over the Boltzmann factor \begin{equation} \frac{E}{N}=\frac{3}{2}k_BT+\frac{\int d{\bf r} V_{\rm{dis}}({\bf r}) e^{-V_{\rm{dis}}({\bf r})/k_BT}} {\int d{\bf r} e^{-V_{\rm{dis}}({\bf r})/k_BT}} \;, \label{speckle10} \end{equation} yields the following simple result \begin{equation} \frac{E}{N}=\frac{3}{2}k_BT+\frac{V_0}{1+V_0/k_BT} \;. \label{speckle11} \end{equation} In figure~\ref{fig3} we compare the above analytical prediction with the results obtained from a direct spatial integration using a typical size of the simulation box. The good agreement found shows that in our simulations the self-averaging property is well satisfied for non-trivial functions of the disorder intensity. \begin{figure} \begin{center} \includegraphics[width=8cm]{figure4.eps} \caption{(color online). 
Percolation in a 2D speckle potential: the left and right panels correspond respectively to an accessible volume (shown in red) below and above the percolation threshold.} \label{fig4} \end{center} \end{figure} \begin{figure} \begin{center} \includegraphics[width=8cm]{figure5.eps} \caption{(color online). Fraction of configurations of a 3D speckle pattern where percolation occurs in one of the spatial directions as a function of the accessible volume. Results are shown for two different system sizes. The estimated percolation threshold is $4(1)\cdot10^{-4}$ and is shown with the shaded area. } \label{fig5} \end{center} \end{figure} \section{Classical percolation} \label{Section2} In this Section we investigate the problem of the conducting/insulating transition in a speckle potential from the point of view of classical percolation. The relevant question is to determine the mobility edge of a classical particle subject to the random field~\cite{Zallen}. Given the disordered potential $V_{\rm{dis}}({\bf r})$, the fraction of space accessible to particles of energy $\epsilon$ is defined as the fractional volume where the reference energy exceeds the external field: \begin{equation} \Phi(\epsilon)=\frac{1}{\Omega}\int_{V_{\rm{dis}}({\bf r})<\epsilon}d{\bf r} \;. \label{percol1} \end{equation} The percolation threshold corresponds to the critical value $\Phi_c$ of the fractional volume such that, if $\Phi(\epsilon)>\Phi_c$, there are infinitely extended volumes of allowed space and particles having energy $\epsilon$ can move across the whole system. In terms of energy, the value $\epsilon_c$ determines the percolation threshold: $\Phi(\epsilon_c)=\Phi_c$. It corresponds to the classical mobility edge separating localized states with energy $\epsilon<\epsilon_c$ from delocalized ones with energy $\epsilon>\epsilon_c$. In the case of speckles the function $\Phi(\epsilon)$ can be simply expressed in terms of the disorder intensity $V_0$ using the property (\ref{speckle5}). One finds \begin{equation} \Phi(\epsilon)=1-e^{-\epsilon/V_0} \;. \label{percol2} \end{equation} The determination of the percolation threshold for lattice or continuum models is in general a difficult numerical task and the study of percolation has become a mature branch of statistical physics (see {\it e.g.} \cite{Stauffer}). Our discussion here is limited to the calculation of $\Phi_c$ and of the corresponding threshold energy $\epsilon_c$ for the speckle potential. The role of dimensionality is crucial for this problem. In 1D the unbound nature of the disordered field implies that, approaching the thermodynamic limit, potential barriers higher than any finite energy $\epsilon$ occur and consequently $\epsilon_c=+\infty$ and $\Phi_c=1$. In 2D the percolation threshold of laser speckles was investigated experimentally~\cite{Smith} using photolithography on a conducting film, obtaining the value $\Phi_c=0.407$. This result was later confirmed by a numerical study~\cite{Weinrib}. To our knowledge there is no determination of $\Phi_c$ for 3D speckle patterns. We estimate the percolation threshold in the continuum by mapping the accessible and inaccessible regions of space on a finite grid. This is done by simply comparing the local value of the external field $V_{\rm{dis}}({\bf r}_i)$ at the grid point ${\bf r}_i$ with the reference energy $\epsilon$. One then investigates the percolation of accessible grid points of the corresponding matrix.
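A minimal version of this grid-based test is sketched below in Python (an illustration only, assuming free rather than periodic boundaries and 6-connected neighbors): the accessible sites are flood-filled from one face of the box and percolation along $x$ is declared if the opposite face is reached. The reference energy for a target fractional volume follows from inverting Equation (\ref{percol2}).
\begin{verbatim}
# Flood-fill percolation test on the boolean grid of accessible
# sites (illustration: 6-connected neighbors, non-periodic faces).
import numpy as np
from collections import deque

def percolates_x(acc):
    """True if accessible sites connect the x=0 and x=M-1 faces."""
    M1, M2, M3 = acc.shape
    seen = np.zeros(acc.shape, dtype=bool)
    queue = deque()
    for j in range(M2):
        for k in range(M3):
            if acc[0, j, k]:
                seen[0, j, k] = True
                queue.append((0, j, k))
    while queue:
        x, y, z = queue.popleft()
        if x == M1 - 1:
            return True
        for dx, dy, dz in ((1, 0, 0), (-1, 0, 0), (0, 1, 0),
                           (0, -1, 0), (0, 0, 1), (0, 0, -1)):
            p = (x + dx, y + dy, z + dz)
            if (0 <= p[0] < M1 and 0 <= p[1] < M2 and 0 <= p[2] < M3
                    and acc[p] and not seen[p]):
                seen[p] = True
                queue.append(p)
    return False

# reference energy for a target volume, from Phi = 1 - exp(-eps/V0)
Phi_target = 4e-4
eps = -V0 * np.log(1.0 - Phi_target)
print(percolates_x(V < eps))     # V from the speckle sketch above
\end{verbatim}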
In figure~\ref{fig4} we show two typical configurations in the case of 2D speckles: panel a) corresponds to $\Phi=0.30$ well below the percolation threshold and panel b) to $\Phi=0.50$ where percolation, {\it i.e.} existence of an uninterrupted path of accessible points across the whole system in at least one of the spatial directions, has clearly occurred. Two issues have to be considered with care: i) the size $L$ in units of the correlation length $\ell_c$ must be increased in order to extrapolate to the thermodynamic limit and ii) for a given size $L$ the grid points must be dense enough. In figure~\ref{fig5} we plot the fraction of configurations exhibiting percolation in 3D speckle patterns as a function of the accessible volume $\Phi$ and for different system sizes. An estimate of the threshold gives $\Phi_c\simeq\epsilon_c/V_0=4(1)\cdot10^{-4}$. For comparison, we estimated the percolation threshold in 2D obtaining the value $\Phi_c\simeq0.4$ in agreement with previous determinations~\cite{Smith,Weinrib}. The value found for $\Phi_c$ in 3D is rather small, suggesting the presence of many long valleys where particles with a tiny fraction of the energy $V_0$ can freely move across the whole system. One should notice that a similar 3D continuum model, the percolation of voids between overlapping spheres (the so called ``Swiss cheese'' model), gives the much larger value $\Phi_c\simeq0.03$~\cite{Kertesz,Elam} for the percolation threshold. \section{PIMC method} \label{Section3} The partition function $Z$ of a bosonic system with inverse temperature $\beta=(k_BT)^{-1}$ is defined as the trace over all states of the density matrix $\hat{\rho}=e^{-\beta \hat{H}}$ properly symmetrized. The partition function satisfies the convolution equation \begin{eqnarray} Z &=& \frac{1}{N!}\sum_P \int d{\bf R} \rho({\bf R},P{\bf R},\beta) = \frac{1}{N!}\sum_P \int d{\bf R} \label{PIMC1}\\ \nonumber &\times& \int d{\bf R}_2 ... \int d{\bf R}_M \rho({\bf R},{\bf R}_2,\tau)... \rho({\bf R}_M,P{\bf R},\tau) \;, \end{eqnarray} where $\tau=\beta/M$, ${\bf R}$ collectively denotes the position vectors ${\bf R}=({\bf r}_1,{\bf r}_2,...,{\bf r}_N)$, $P{\bf R}$ denotes the position vectors with permuted labels $P{\bf R}=({\bf r}_{P(1)},{\bf r}_{P(2)},...,{\bf r}_{P(N)})$ and the sum extends over the $N!$ permutations of $N$ particles. The calculation of the partition function in equation~(\ref{PIMC1}) can be mapped to a classical-like simulation of polymeric chains with a number of beads $M$ equal to the number of terms of the convolution integral. In a PIMC calculation, one makes use of suitable approximations for the density matrix $\rho({\bf R},{\bf R}^\prime,\tau)$ at the higher temperature $1/\tau$ in equation~(\ref{PIMC1}) and performs the multidimensional integration over ${\bf R}$, ${\bf R}_2$,...,${\bf R}_M$ as well as the sum over permutations $P$ by Monte Carlo sampling~\cite{Ceperley}. The statistical expectation value of a given operator $O({\bf R})$, \begin{equation} \langle O\rangle = \frac{1}{Z}\frac{1}{N!}\sum_P \int d{\bf R} O({\bf R}) \rho({\bf R},P{\bf R},\beta) \;, \label{PIMC2} \end{equation} is calculated by generating stochastically a set of configurations $\{{\bf R}_i\}$, sampled from a probability density proportional to the symmetrized density matrix, and then by averaging over the set of values $\{O({\bf R}_i)\}$. 
An approximation for the high temperature density matrix, which is particularly well suited for studies of dilute gases, is based on the pair-product ansatz~\cite{Ceperley} \begin{equation} \rho({\bf R},{\bf R}^\prime,\tau)=\prod_{i=1}^N\rho_1({\bf r}_i,{\bf r}_i^\prime,\tau)\prod_{i<j} \frac{\rho_{rel}({\bf r}_{ij},{\bf r}_{ij}^\prime,\tau)} {\rho_{rel}^0({\bf r}_{ij},{\bf r}_{ij}^\prime,\tau)} \;. \label{PIMC3} \end{equation} In the above equation $\rho_1$ is the single-particle ideal-gas density matrix \begin{equation} \rho_1({\bf r}_i,{\bf r}_i^\prime,\tau)=\left(\frac{m}{2\pi\hbar^2\tau} \right)^{3/2} e^{-({\bf r}_i-{\bf r}_i^\prime)^2m/(2\hbar^2\tau)} \;, \label{PIMC4} \end{equation} and $\rho_{rel}$ is the two-body density matrix of the interacting system, which depends on the relative coordinates ${\bf r}_{ij}={\bf r}_i-{\bf r}_j$ and ${\bf r}_{ij}^\prime={\bf r}_i^\prime-{\bf r}_j^\prime$, divided by the corresponding ideal-gas term \begin{equation} \rho_{rel}^0({\bf r}_{ij},{\bf r}_{ij}^\prime,\tau)= \left( \frac{m}{4\pi\hbar^2\tau} \right)^{3/2} e^{-({\bf r}_{ij}-{\bf r}_{ij}^\prime)^2 m/(4\hbar^2\tau)} \;. \label{PIMC5} \end{equation} The two-body density matrix at the inverse temperature $\tau$, $\rho_{rel}({\bf r},{\bf r}^\prime,\tau)$, can be calculated for a given spherical potential $V(r)$ using the partial-wave decomposition \begin{eqnarray} \rho_{rel}({\bf r},{\bf r}^\prime,\tau)&=&\frac{1}{4\pi} \sum_{\ell=0}^\infty (2\ell +1)P_\ell(\cos\theta) \label{PIMC6} \\ \nonumber &\times&\int_0^\infty dk e^{-\tau\hbar^2k^2/m} R_{k,\ell}(r)R_{k,\ell}(r^\prime) \;, \end{eqnarray} where $P_\ell(x)$ is the Legendre polynomial of order $\ell$ and $\theta$ is the angle between ${\bf r}$ and ${\bf r}^\prime$. The functions $R_{k,\ell}(r)$ are solutions of the radial Schr\"odinger equation \begin{eqnarray} &-&\frac{\hbar^2}{m}\left( \frac{d^2R_{k,\ell}}{dr^2} +\frac{2}{r} \frac{dR_{k,\ell}}{dr} -\frac{\ell(\ell+1)}{r^2}R_{k,\ell}\right) \nonumber\\ &+& V(r)R_{k,\ell} = \frac{\hbar^2k^2}{m}R_{k,\ell} \;, \label{PIMC7} \end{eqnarray} with the asymptotic behavior \begin{equation} R_{k,\ell}(r)=\sqrt{\frac{2}{\pi}}\frac{\sin(kr-\ell\pi/2+\delta_\ell)}{r} \;, \label{PIMC8} \end{equation} holding for distances $r\gg R_0$, where $R_0$ is the range of the potential. The phase shift $\delta_\ell$ of the partial wave of order $\ell$ is determined from the solution of equation~(\ref{PIMC7}) for the given interatomic potential $V(r)$. For the hard-sphere potential (\ref{Intro2}) a simple analytical approximation of the high-temperature two-body density matrix due to Cao and Berne~\cite{Cao} has been proven to be highly accurate~\cite{PSBCG}. The Cao-Berne density matrix $\rho_{rel}^{CB}({\bf r},{\bf r}^\prime,\tau)$ is obtained using the large-momentum expansion of the hard-sphere phase shift $\delta_\ell\simeq-ka+\ell\pi/2$ and the large-momentum expansion of the solutions of the Schr\"odinger equation~(\ref{PIMC7}), $R_{k,\ell}(r)\simeq\sqrt{2/\pi}\sin[k(r-a)]/r$. This yields the result \begin{eqnarray} \frac{\rho_{rel}^{CB}({\bf r},{\bf r}^\prime,\tau)} {\rho_{rel}^0({\bf r},{\bf r}^\prime,\tau)}&=& 1 -\frac{a(r+r^\prime)-a^2}{rr^\prime} \label{PIMC9} \\ \nonumber &\times& e^{-[rr^\prime +a^2-a(r+r^\prime)](1+\cos\theta)m/(2\hbar^2\tau)} \;. \end{eqnarray} Simulations are based on the worm algorithm~\cite{BPS}, which allows for an efficient sampling of permutation cycles.
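As an aside, the Cao--Berne correction~(\ref{PIMC9}) is simple to evaluate numerically; the following self-contained check (units $\hbar=m=1$; the function name is ours) verifies that the ratio vanishes at contact, $r=a$, and tends to one at large separations:

\begin{verbatim}
import numpy as np

def cao_berne_ratio(r, rp, costheta, tau, a):
    """Hard-sphere correction rho_rel^CB / rho_rel^0 (hbar = m = 1).

    r, rp    -- moduli of the relative coordinates (both >= a)
    costheta -- cosine of the angle between r and r'
    tau      -- imaginary-time step
    a        -- hard-sphere radius
    """
    u = r * rp + a ** 2 - a * (r + rp)
    return 1.0 - (a * (r + rp) - a ** 2) / (r * rp) \
               * np.exp(-u * (1.0 + costheta) / (2.0 * tau))

print(cao_berne_ratio(1.0, 1.3, 0.2, 0.1, 1.0))  # r = a: exactly 0
print(cao_berne_ratio(6.0, 6.0, 0.2, 0.1, 1.0))  # large r: close to 1
\end{verbatim}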
In the worm-algorithm scheme one samples both diagonal configurations, contributing to averages of the type (\ref{PIMC2}) where all paths are closed, and off-diagonal configurations where one path is open. These latter configurations contribute to the one-body density matrix (OBDM) defined as \begin{equation} n_1({\bf r}_1,{\bf r}_1^\prime) = \frac{N}{Z}\frac{1}{N!}\sum_P \int d{\bf r}_2\cdots d{\bf r}_N \rho({\bf R},P{\bf R},\beta) \;, \label{PIMC10} \end{equation} where ${\bf r}_{P(1)}={\bf r}_1^\prime$. The long-range behavior of the OBDM determines the condensate density \begin{equation} n_0=\lim_{|{\bf r}-{\bf r}^\prime|\to\infty}n_1({\bf r},{\bf r}^\prime) \;. \label{PIMC11} \end{equation} In our simulations the largest displacement of the OBDM we consider is $|{\bf r}-{\bf r}^\prime|=L/2$. If the size $L$ is large enough the number $N_0$ of condensate particles can be written as \begin{equation} N_0=\int d{\bf r} n_1({\bf r},{\bf r}^\prime) \;, \label{PIMC12} \end{equation} where ${\bf r}^\prime$ is fixed by the constraint $|{\bf r}-{\bf r}^\prime|=L/2$ and we perform an average over the solid angle. The quantity under the integral corresponds to the local condensate density at position ${\bf r}$, which could be highly non-uniform in the presence of a random potential. Besides the condensate density $n_0$, in the present study we also consider the superfluid density $\rho_s$. The superfluid component is the part of the fluid that remains at rest when an infinitely slow movement is applied to the walls that contain the system. In the path-integral formalism, the superfluid fraction of a fluid contained in a box with periodic boundary conditions can be related~\cite{Ceperley} to the fluctuations of the \emph{winding number} via the equation \begin{equation} \label{rhos} \frac{\rho_s}{\rho} = \frac{m\langle{\bf W}^2 \rangle}{3\hbar^2 \beta N}. \end{equation} The \emph{winding number} ${\bf W}$ is defined as: \begin{equation} \label{windingnumber} {\bf W} = \sum_{i=1}^N \sum_{m=1}^M \left( {\bf r}_{m+1}^i - {\bf r}_m^i \right). \end{equation} It is a topological property of the configuration: it counts the net number of paths that cross any plane perpendicular to one axis. The worm algorithm is particularly suitable for performing ergodic random walks that span all possible winding numbers, since it extends the configuration space by including configurations with an open path. Only the Monte Carlo moves that modify the open path can efficiently change the winding number. We perform calculations both in the canonical (at fixed density $n$) and in the grand-canonical ensemble (at fixed chemical potential $\mu$)~\cite{BPS}. We supplement the worm algorithm with two additional Monte Carlo updates that change the particle number $N$. The first update adds one particle to the system by placing a closed path at a randomly selected position. The second update erases a randomly selected closed path. The acceptance probability of the first (second) update is fixed by the fugacity $e^{\beta \mu}$ (by its inverse), by the change in the interaction energy due to the path to be inserted (erased) and by the factor $\frac{\Omega C}{N+1}$ ($\frac{N}{\Omega C}$) that takes into account the density change and the normalization of the free particle propagator $C\equiv\left(2\pi\hbar^2\beta/m\right)^{-\frac{3}{2}}$. \begin{figure} \begin{center} \includegraphics[width=8cm]{figure6.eps} \caption{(color online). Superfluid density and condensate fraction as a function of the disorder intensity $V_0$.
The particle density is here $na^3=10^{-4}$ and the temperature is $T=0.5T_c^0$.} \label{fig6} \end{center} \end{figure} \begin{figure} \begin{center} \includegraphics[width=8cm]{figure7.eps} \caption{(color online). Superfluid transition temperature as a function of the disorder strength for two values of the gas parameter $na^3$. Open and solid symbols refer respectively to $T_c$ determined from the superfluid and from the condensate fraction. The dashed line is the prediction of Ref.~\cite{Lopatin} at $na^3=10^{-4}$ shifted by $(T_c-T_c^0)$ in the absence of disorder.} \label{fig7} \end{center} \end{figure} \begin{figure} \begin{center} \includegraphics[width=8cm]{figure8.eps} \caption{(color online). Scaling behavior of the superfluid density for different system sizes and different realizations of disorder. The value of the gas parameter is $na^3=10^{-4}$.} \label{fig8} \end{center}\end{figure} \begin{figure} \begin{center} \includegraphics[width=8cm]{figure9.eps} \caption{(color online). Critical chemical potential (shifted by $V_0$) as a function of the disorder strength for different values of the temperature (in units of $\hbar^2/m\ell_c^2$) and of the scattering length. The grey shaded area denotes the superfluid phase and the (pink) solid line corresponds to the classical percolation threshold.} \label{fig9} \end{center} \end{figure} \begin{figure} \begin{center} \includegraphics[width=8cm]{figure10.eps} \caption{(color online). Spatial dependence of the OBDM for two values of the chemical potential slightly below and above $\mu_c$. Here $k_BT=0.13\hbar^2/m\ell_c^2$ and $a/\ell_c=0.016$. Two different system sizes are used to check the role of finite-size effects.} \label{fig10} \end{center} \end{figure} \begin{figure} \begin{center} \includegraphics[width=8cm]{figure11.eps} \caption{(color online). Critical density as a function of the disorder strength for different values of the temperature (in units of $\hbar^2/m\ell_c^2$) and of the scattering length. The critical density separates the superfluid phase (above $n_c$) from the normal phase (below $n_c$). The horizontal arrows indicate the critical value $n_c^0$ of the non-interacting gas. The (blue) triangle corresponds to the same temperature but a larger scattering length compared to the (green) squares. Inset: Density dependence of $\rho_s/\rho$ [(pink) squares] and $n_0/n$ [(blue) circles] for the values $V_0=0$ and $V_0=6.4\hbar^2/m\ell_c^2$ of the disorder strength. Here $k_BT=0.13\hbar^2/m\ell_c^2$ and $a/\ell_c=0.016$. The vertical arrow indicates the corresponding value of the degenerate density $n_c^0$.} \label{fig11} \end{center} \end{figure} \section{Superfluid transition} \label{Section4} The effect of disorder is to suppress both the superfluid and the condensate density. In figure~\ref{fig6} we illustrate the behavior of these two quantities when the disorder strength is increased. The value of $\rho_s/\rho$ and $n_0/n$ in the absence of disorder is determined by the temperature $T$ and by the value of the gas parameter $na^3$. The figure shows a linear decrease of the superfluid and condensate components with increasing disorder in the classical regime $V_0>\hbar^2/m\ell_c^2$. An important question is to understand whether disorder also affects the superfluid transition temperature. The results are shown in figure~\ref{fig7}. 
The transition temperature $T_c$ is expressed in units of \begin{equation} T_c^0=\frac{2\pi\hbar^2}{mk_B}\left[\frac{n}{\zeta(3/2)}\right]^{2/3} \;, \label{SUPT1} \end{equation} the critical temperature of the non-interacting gas with $\zeta(3/2)\simeq2.612$. In these calculations the value of the scattering length and of the disorder correlation length are kept fixed and for the latter we choose the value $\ell_c=0.6n^{-1/3}$, such that there is typically one particle in each sphere of radius $\ell_c$: $n4\pi\ell_c^3/3\simeq1$. We show results corresponding to two different densities. The reported values of $T_c$ in the absence of disorder are taken from Ref.~\cite{PGP}. At the density corresponding to the gas parameter $na^3=10^{-4}$ we find no appreciable change of the transition temperature compared to clean systems by increasing the disorder strength up to $V_0\sim \hbar^2/m\ell_c^2$. For larger intensities we find a sizable shift that is well described again by a linear dependence in $V_0$. For a given strength $V_0$ the reduction of the transition temperature is enhanced for smaller values of the gas parameter, consistently with the instability of the ideal Bose gas in the presence of disorder~\cite{GPS}. The value of $T_c$ is extracted from the results of the superfluid fraction $\rho_s/\rho$ ($\rho=mn$ is the total mass density), corresponding to systems with different particle number $N$, using the scaling ansatz \begin{equation} N^{1/3}\rho_s(t,N)/\rho=f(tN^{1/(3\nu)})= f(0)+f^\prime(0)tN^{1/(3\nu)}+\cdots \;. \label{SUPT2} \end{equation} Here, $t=(T-T_c)/T_c$ is the reduced temperature, $\nu$ is the critical exponent of the correlation length $\xi(t)\sim t^{-\nu}$, and $f(x)$ is a universal analytic function, which allows for a linear expansion around $x=0$. The validity of the scaling behavior (\ref{SUPT2}) is shown in figure~\ref{fig8}, where the effect of different realizations of the random potential is also shown. The quantity $N^{(1+\eta)/3}n_0/n$, involving the condensate fraction $n_0/n$ and the correlation function critical exponent $\eta=0.038$ of the XY-model universality class, is also expected to obey a scaling relation of the form (\ref{SUPT2}). For all reported disorder strengths $V_0$, the extracted value of the critical exponent $\nu$ is compatible with the result $\nu=0.67$ corresponding to clean systems~\cite{PGP}. It is worth noting that the values of $T_c$, obtained from the scaling law of the superfluid $\rho_s/\rho$ and of the condensate fraction $n_0/n$, coincide within our statistical uncertainty (see figure~\ref{fig7}). The presence of disorder reduces the quantum delocalization of particles occupying the deepest wells of the potential and, consequently, their contribution to the superfluid behavior. Superfluidity takes place when the degeneracy condition is met for the effectively smaller density of ``delocalized'' particles, resulting in a suppressed value of $T_c$. In Ref.~\cite{Lopatin} the shift $\delta T_c=T_c-T_c^0$ of the superfluid transition temperature is calculated using a perturbative approach for the $\delta$-correlated disorder $\langle\Delta V_{\rm{dis}}({\bf r})\Delta V_{\rm{dis}}({\bf r}^\prime)\rangle=\kappa\delta({\bf r}-{\bf r}^\prime)$, where $\Delta V_{\rm{dis}}({\bf r})=V_{\rm{dis}}({\bf r})-\langle V_{\rm{dis}}\rangle$.
The $T_c$ shift is found to be quadratic in $\kappa$, implying for our speckle potential that $\delta T_c/T_c^0=\left[m^2V_0^2\ell_c^3/(\sqrt{na}\hbar^4)\right]^2/[2(12\log2)^3]$, where we used a Gaussian fit to the radial dependence of the autocorrelation function $\Gamma$ and considered the limit $\ell_c\to0$. We report this prediction in figure~\ref{fig7} (we also add the interaction contribution not accounted for by Ref.~\cite{Lopatin}, so that in the clean case an exact result is reproduced). Our data in the regime of very weak disorder do not have enough precision to allow for a quantitative comparison and diverge from the theory before $\delta T_c/T_c^0$ becomes appreciable. The effect of disorder on the critical temperature of a hard-sphere gas was also investigated using PIMC methods in Ref.~\cite{Gordillo} where, however, no significant reduction of $T_c$ was reported. For stronger intensities of disorder, the calculation of $T_c$ becomes increasingly difficult, since the dependence on the realization becomes more important (see figure~\ref{fig8}) and larger systems are needed in order to have a satisfactory self-averaging of the random potential. In figure~\ref{fig9} we report results for the critical chemical potential $\mu_c$ obtained from calculations carried out in the grand-canonical ensemble. A small change of $\mu$ around $\mu_c$ translates into a drastic change in the long-range behavior of the OBDM (see figure~\ref{fig10}): for $\mu<\mu_c$ the OBDM decays to zero and corresponds to a normal phase, while for $\mu>\mu_c$ the OBDM reaches a constant value characteristic of the superfluid state. If interactions are small but finite, we also find that the value of $\mu_c$ is essentially insensitive to a change of temperature and of interaction strength. For weak disorder, this result is accompanied by a very small critical density (see figure~\ref{fig11}) and corresponds to a renormalization of $\mu_c$ due to disorder in an extremely dilute gas. For strong disorder, it is instead consistent with the picture of a mobility edge, separating localized single-particle states from extended ones, that depends only on the parameters of the random potential. In this latter regime we find a linear dependence of $\mu_c$ as a function of $V_0$, in agreement with the qualitative $T=0$ prediction of Refs.~\cite{Shklovskii,Nattermann} in the case of classical disorder. The figure also shows the classical percolation threshold $\mu=\epsilon_c$, whose value for the speckle potential has been determined in section~\ref{Section2}. One should notice that in the whole range of disorder intensities the critical chemical potential is significantly larger than $\epsilon_c$ as a consequence of quantum localization effects. In fact, in terms of a mobility edge, classical particles with energy larger than $\epsilon_c$ can freely move across the entire system, while in the quantum world extended states appear only for significantly larger energies bound by the inequality $\epsilon>\mu_c$. To conclude the study of the critical behavior, we analyze the dependence of the critical density $n_c$ on the intensity of the random potential. The calculations are carried out in the canonical ensemble at fixed temperature and scattering length. The method used to determine $n_c$ is shown in the inset of figure~\ref{fig11}. For a given value of $V_0$ one increases the density and calculates the superfluid $\rho_s/\rho$ and the condensate fraction $n_0/n$.
The results are then fitted by a power-law dependence $\rho_s/\rho\sim(n-n_c)^\nu$ and $n_0/n\sim(n-n_c)^{\nu(1+\eta)}$ for $n>n_c$, where the proportionality coefficients are expected to be non-universal parameters. In the inset of figure~\ref{fig11} we show the results corresponding to a configuration without disorder ($V_0=0$) and with strong disorder ($V_0=6.4\hbar^2/m\ell_c^2$). The reported values are averaged over a few realizations of the random potential and their scatter gives an idea of the relevance of this effect. For the small value of the scattering length used here, the critical density at $V_0=0$ coincides with the non-interacting result $n_c^0=\zeta(3/2)(mk_BT/2\pi\hbar^2)^{3/2}$, while for the large value of $V_0$ one finds that $n_c$ is about a factor of eight greater than $n_c^0$. More comprehensive results are shown in figure~\ref{fig11} where $n_c$ is estimated from the superfluid fraction, which is less sensitive to finite-size effects. The results clearly show an increase of the critical density as a function of $V_0$, from the non-interacting degenerate density $n_c^0$ up to values $\sim 20$ times larger. It is also worth noticing that for strong disorder, an increase of the scattering length $a$ is accompanied by a decrease of $n_c$ resulting in a constant value of the critical chemical potential (see figure~\ref{fig9}). \section{Mean-field approach} \label{Section5} A simple description of the thermodynamic properties of disordered systems can be provided in terms of a mean-field approach. At $T=0$ this approach is based on the solution of the Gross-Pitaevskii (GP) equation for the order parameter in the random external field and it yields quantitatively reliable results for both the chemical potential and the density profiles. At finite temperature the mean-field theory can be efficiently applied in the case of random potentials with exceedingly long-range correlations, where the local density approximation is valid. \begin{figure} \begin{center} \includegraphics[width=8cm]{figure12.eps} \caption{(color online). Chemical potential at $T=0$ from the GP equation as a function of the disorder correlation length. The value of the gas parameter is $na^3=10^{-6}$ and the disorder strength is $V_0=10k_BT_c^0$. The limits of small and large $\ell_c$ are shown with horizontal lines.} \label{fig12} \end{center} \end{figure} \begin{figure} \begin{center} \includegraphics[width=8cm]{figure13.eps} \caption{(color online). Density dependence of the chemical potential at $T=0$ from the GP equation: (red) solid line. The (green) dashed line corresponds to the Thomas-Fermi limit $\mu=\sqrt{2gnV_0}$ and the (blue) dotted line to the opposite limit of short correlation length $\mu=gn+V_0$. The disorder strength is $V_0=3.3\times10^{-3}\hbar^2/ma^2$ and, at the density $na^3=10^{-6}$, corresponds to the value used in figure~\ref{fig12}.} \label{fig13} \end{center} \end{figure} \subsection{Zero temperature} \label{Section5a} The relevant equation is the stationary GP equation for the order parameter $\psi({\bf r})$ in the presence of the random potential \begin{equation} \left[ -\frac{\hbar^2}{2m}\nabla^2 + V_{\rm{dis}}({\bf r}) + g|\psi({\bf r})|^2\right] \psi({\bf r}) =\mu\psi({\bf r}) \;, \label{GP1} \end{equation} where $g=4\pi\hbar^2a/m$ is the coupling constant. Within the GP approach one does not distinguish between order parameter and particle density and the following identity holds: $n({\bf r})=|\psi({\bf r})|^2$.
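Before turning to the energy-functional formulation and the conjugate-gradient scheme actually employed in this work, we note that equation~(\ref{GP1}) can also be solved by imaginary-time propagation; a minimal 1D split-step sketch (with $\hbar=m=1$ and a smoothed exponential surrogate in place of a true speckle field):

\begin{verbatim}
import numpy as np

# Imaginary-time split-step solution of the 1D GP equation, hbar = m = 1.
rng = np.random.default_rng(1)
L, N = 50.0, 512
x, dx = np.linspace(0.0, L, N, endpoint=False, retstep=True)
k = 2 * np.pi * np.fft.fftfreq(N, d=dx)
g, n_mean, dtau, V0 = 0.5, 1.0, 1e-3, 2.0

V = rng.exponential(V0, N)                    # exponential surrogate field
V = np.fft.ifft(np.fft.fft(V) * np.exp(-0.5 * k ** 2)).real   # l_c ~ 1

psi = np.sqrt(n_mean) * np.ones(N, dtype=complex)
for _ in range(20000):
    psi *= np.exp(-0.5 * dtau * (V + g * np.abs(psi) ** 2))
    psi = np.fft.ifft(np.exp(-0.5 * dtau * k ** 2) * np.fft.fft(psi))
    psi *= np.exp(-0.5 * dtau * (V + g * np.abs(psi) ** 2))
    psi *= np.sqrt(n_mean * N / np.sum(np.abs(psi) ** 2))      # fix the norm

# mu = <psi|H_GP|psi> / <psi|psi> evaluated with the converged density
kin = np.sum(0.5 * k ** 2 * np.abs(np.fft.fft(psi)) ** 2) / N
pot = np.sum((V + g * np.abs(psi) ** 2) * np.abs(psi) ** 2)
print("chemical potential estimate:", (kin + pot) / np.sum(np.abs(psi) ** 2))
\end{verbatim}

The split-step factorization of $e^{-\Delta\tau\hat{H}}$ is accurate to second order in $\Delta\tau$, and the normalization is restored after every step so that the wave function relaxes to the lowest GP state at fixed average density.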
The GP equation can be obtained from the following energy functional \cite{Dalfovo} \begin{eqnarray} E[\psi]&=&\int d^{3}r \left\{ \psi^{*}\!({\bf r})\left[ -\frac{\hbar^{2}}{2m}\nabla^2 +V_{\rm{dis}}({\bf r})\right]\psi({\bf r}) \right.\nonumber\\ &+&\left. \frac{1}{2}g|\psi({\bf r})|^4 \right\} \label{eq:energy} \end{eqnarray} using the variational ansatz \begin{equation} \frac{\delta}{\delta \psi^{*}} \left(E[\psi] - \mu\int d^{3}r |\psi|^2\right)=0 \;, \label{eq:gp_ener} \end{equation} which corresponds to finding configurations that minimize the energy functional (\ref{eq:energy}) with the normalization constraint $\int d^{3}r|\psi({\bf r})|^2=N$. The solutions of the GP equation are obtained numerically by discretizing the wave function $\psi$ on a 3D box of size $L$ (the number of grid points ranges from $64^{3}$ to $128^{3}$). Then, the energy functional $E[\psi]$ is minimized by using a conjugate gradient algorithm as described in \cite{Castin, Press}, for a given realization of the potential and a given particle density $n=N/L^{3}$. The numerical solution yields the value of the chemical potential $\mu$ as well as the spatial particle distribution. Averages over disorder are obtained by repeating the calculation for various realizations of the random field. It is useful to rewrite the GP equation~(\ref{GP1}) in terms of the dimensionless variables $v_{\rm{dis}}({\bf r}) = V_{\rm{dis}}({\bf r})/V_0$, $\tilde{{\bf r}}={\bf r}/\ell_c$, $\tilde{V}_0=V_0/(\hbar^2/m\ell_c^2)$ and $\tilde{\mu}=\mu/(\hbar^2/m\ell_c^2)$. One finds \begin{equation} \left[ -\frac{1}{2}\tilde{\nabla}^2 + \tilde{V}_0v_{\rm{dis}}(\tilde{{\bf r}}) + \frac{\ell_c^2}{2\xi^2} |{\phi}(\tilde{{\bf r}})|^2\right] \phi(\tilde{{\bf r}}) =\tilde{\mu}\phi(\tilde{{\bf r}}) \;, \label{GP1_1} \end{equation} where we have used the wavefunction $\phi({\bf r})=\psi({\bf r})/\sqrt{n}$ rescaled in terms of the average density $n$ and the healing length $\xi = 1/\sqrt{8\pi na}$. From the above equation two regimes can be investigated analytically. On the one hand, if the correlation length $\ell_c$ is much smaller than the length $\sqrt{\hbar^2/mV_0}$ fixed by the disorder strength, or equivalently for weak enough disorder ($\tilde{V}_0\ll 1$), the order parameter is almost uniform, $|\phi(\tilde{\bf r})|^2\simeq1$, and the effect of the random potential on the chemical potential is just a shift of the mean-field energy \begin{equation} \mu=gn+V_0 \;. \label{GP2} \end{equation} On the other hand, if $\ell_c\gg\xi$, one can neglect the kinetic energy term in the GP equation~(\ref{GP1_1}) and use the Thomas-Fermi approximation \begin{equation} \mu=gn({\bf r}) + V_{\rm{dis}}({\bf r}) \;, \label{GP3} \end{equation} yielding the following result for the local particle density \begin{equation} n({\bf r}) = \frac{1}{g} [\mu- V_{\rm dis}({\bf r})] \Theta[\mu- V_{\rm{dis}}({\bf r})] \;. \label{GP4} \end{equation} Here $\Theta(x)$ is the Heaviside function: $\Theta(x)=1$ if $x>0$ and zero otherwise. The normalization condition $n=1/L^3\int d{\bf r} n({\bf r})$ determines the chemical potential $\mu$ in terms of the average density $n$. By using the self-averaging property (\ref{speckle5}) one obtains the equation \begin{equation} \frac{\mu}{V_0}+e^{-\mu/V_0}=1+\frac{gn}{V_0} \;, \label{GP5} \end{equation} relating the chemical potential to the disorder strength $V_0$. i) If $V_0\ll gn$ then $\mu=gn+V_0$ and the disorder acts as a small shift of the pure interaction term, similarly to the short-$\ell_c$ regime of equation~(\ref{GP2}).
ii) If $V_0\gg gn$ one finds to the lowest order \begin{equation} \mu=\sqrt{2gnV_0} \;, \label{GP6} \end{equation} corresponding to an energy per particle $E/N=2\mu/3$. The variation of $\mu$ with the disorder correlation length is shown in figure~\ref{fig12}, where we report the results of the GP equation for the fixed value $na^3=10^{-6}$ of the gas parameter and disorder strength $V_0=10k_BT_c^0$. In this case $\xi n^{1/3}\sim1$ and the figure clearly shows the two limiting regimes of short and long correlation length. For the same disorder strength, in figure~\ref{fig13}, we show instead the density dependence of the chemical potential for a fixed value of $\ell_c$. The equations of state corresponding to large and small correlation lengths are also shown as a reference. In the regime $\ell_c\gg\xi$ one can make use of the Thomas-Fermi approximation for the order parameter, which is expected to become more and more accurate as $\ell_c$ increases. Within this approximation one can make contact with the classical percolation problem of section~\ref{Section2} by noticing that, if $\mu$ is larger than the threshold energy $\epsilon_c$, the condensate density represented by equation~(\ref{GP4}) is different from zero on a percolating path, thus ensuring the superfluid behavior of the system. For large $V_0$ the chemical potential increases as $\sqrt{V_0}$ according to equation~(\ref{GP6}), while $\epsilon_c$ is proportional to $V_0$. The small value of the ratio $\epsilon_c/V_0$ implies, though, that the quantum phase transition to the insulating state takes place only at extremely large disorder intensities. \begin{figure} \begin{center} \includegraphics[width=8cm]{figure14.eps} \caption{(color online). Equation of state (energy vs.~temperature) of a gas with interaction parameter $na^3=10^{-6}$ and disorder strength $V_0=10k_BT_c^0$. Results corresponding to different values of $\ell_c$ are reported. The limiting cases of large correlation length (HF results within LDA, black line) and of small correlation length (clean-system results shifted by $V_0$, red symbols and line) are also shown. The open symbols at $T=0$ correspond to the results of the GP equation. The arrows indicate the superfluid transition temperature. The green line is a $T^2$ fit to the equation of state.} \label{fig14} \end{center} \end{figure} \subsection{Finite temperature} \label{Section5b} At $T\neq0$ one should combine the GP equation for the condensate with a proper description of the thermally excited states in the random potential. The theory becomes simple if the disorder is correlated over large distances, as one can apply the local density approximation (LDA) within standard mean-field techniques suitable for dilute gases. At low temperatures the validity condition requires $\ell_c$ to be much larger than the healing length $\xi$, while at higher temperatures the correlation length must exceed the thermal wavelength $\lambda_T$. We use the self-consistent Hartree-Fock (HF) scheme within LDA. This mean-field approach is based on the following expression for the elementary excitations of the system in terms of their momentum ${\bf p}$ and position ${\bf r}$~\cite{JLTP} \begin{equation} \epsilon({\bf p},{\bf r})=\frac{p^2}{2m}+V_{\rm{dis}}({\bf r})-\mu+2gn({\bf r}) \;.
\label{FT1} \end{equation} The thermal density of atoms $n_T({\bf r})$ is obtained from the momentum integral of the distribution of elementary excitations \begin{equation} n_T({\bf r})=\int\frac{d{\bf p}}{(2\pi\hbar)^3}\frac{1}{e^{\epsilon({\bf p},{\bf r})/k_BT}-1} \;. \label{FT2} \end{equation} The condensate density $n_0({\bf r})$ is determined by the finite-$T$ extension of the GP equation~(\ref{GP4}) in the Thomas-Fermi approximation \begin{eqnarray} n_0({\bf r}) &=& \frac{1}{g} [\mu- V_{\rm dis}({\bf r})-2gn_T({\bf r})] \nonumber \\ &\;&\;\;\;\;\; \times\;\; \Theta[\mu- V_{\rm{dis}}({\bf r})-2gn_T({\bf r})] \;. \label{FT3} \end{eqnarray} The sum of thermal and condensate density gives the total density $n({\bf r})$, which must satisfy the overall normalization \begin{equation} N=\int d{\bf r} n({\bf r})=\int d{\bf r} [n_0({\bf r})+n_T({\bf r})] \;, \label{FT4} \end{equation} providing the closure relation of the mean-field equations. The total energy $E$ of the system can be calculated from the following integral \begin{eqnarray} E&=&\int \frac{d{\bf p}\,d{\bf r}}{(2\pi\hbar)^3}\frac{p^2/2m}{e^{\epsilon({\bf p},{\bf r})/k_BT}-1} +\int d{\bf r} V_{\rm{dis}}({\bf r})n({\bf r}) \nonumber\\ &+& \frac{g}{2}\int d{\bf r}[2n^2({\bf r})-n_0^2({\bf r})] \;. \label{FT5} \end{eqnarray} The HF scheme defined by the above equations neglects the quantum depletion of the condensate and at $T=0$ yields $n_0({\bf r})=n({\bf r})$, in agreement with section~\ref{Section5a}. Furthermore, it neglects the contribution of collective modes (phonons) to thermodynamics, since all excitations are single-particle. This approximation is known to be very accurate in dilute non-uniform systems both at high and low temperatures~\cite{JLTP}. In particular, at low temperatures, one can neglect the thermal density contribution to the expression (\ref{FT1}) of the elementary excitations and one finds the simple spectrum $\epsilon({\bf p},{\bf r})=p^2/2m+|V_{\rm{dis}}({\bf r})-\mu(T=0)|$. The term in modulus vanishes at the condensate boundaries and thermal excitations accumulate at these minima of the effective potential. The single-particle excitations close to these minima are the dominant ones at low temperature, being more important than the phonons of the bulk condensate. In fact, one can show that at low energy the single-particle excitations have the following density of states \begin{equation} g(\epsilon)=\frac{4}{3}\frac{\Omega m^{3/2}}{\sqrt{2}\pi^2\hbar^3}e^{-\mu(T=0)/V_0}\frac{\epsilon^{3/2}}{V_0} \;, \label{FT6} \end{equation} proportional to $\epsilon^{3/2}$ in contrast to $g(\epsilon)\propto\epsilon^2$ of phononic excitations. A similar situation occurs in harmonically trapped condensates~\cite{JLTP}. The above semiclassical approach provides an estimate of the temperature at which Bose-Einstein condensation sets in locally in some deep well. This temperature is defined as the point where the local density $n({\bf r})$, corresponding to some deep well in the random field, reaches the critical value $n({\bf r})\lambda_T^3=\zeta(3/2)\simeq2.612$. By neglecting interactions one finds the following implicit equation for the temperature $T^\ast$ \begin{equation} n\left(\frac{2\pi\hbar^2}{mk_BT^\ast}\right)^{3/2}=\sum_{j=1}^\infty\frac{1}{j^{3/2}} \frac{1}{1+jV_0/k_BT^\ast} \;. \label{FT7} \end{equation} The temperature $T^\ast$ is always larger than the temperature $T_c^0$ of the occurrence of Bose-Einstein condensation in non-interacting clean systems.
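Equation~(\ref{FT7}) is easily solved numerically for $T^\ast$; a short sketch using a standard root finder (units $\hbar=m=k_B=1$, so that $T^\ast/T_c^0$ depends only on the ratio $V_0/k_BT_c^0$):

\begin{verbatim}
import numpy as np
from scipy.optimize import brentq
from scipy.special import zeta

# Solve n lambda_T^3 = sum_j j^(-3/2) / (1 + j V0 / T) for T = T*.
# Units: hbar = m = kB = 1, hence n lambda_T^3 = n (2 pi / T)^(3/2).
n = 1.0

def f(T, V0, jmax=100000):
    j = np.arange(1, jmax + 1)
    rhs = np.sum(j ** -1.5 / (1.0 + j * V0 / T))
    return n * (2.0 * np.pi / T) ** 1.5 - rhs

Tc0 = 2.0 * np.pi * (n / zeta(1.5)) ** (2.0 / 3.0)  # ideal-gas BEC temperature

# T*/Tc0 -> 1 as V0 -> 0 and grows like (V0 / kB Tc0)^(2/5) at large V0
for v in (0.5, 2.0, 10.0):
    Tstar = brentq(f, 1e-3 * Tc0, 1e3 * Tc0, args=(v * Tc0,))
    print(v, Tstar / Tc0)
\end{verbatim}

For $V_0=10k_BT_c^0$ this procedure reproduces the value $T^\ast\simeq3.6T_c^0$ quoted below.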
In particular, for large disorder strength equation~(\ref{FT7}) gives $T^\ast=T_c^0[\zeta(3/2)V_0/(\zeta(5/2)k_BT_c^0)]^{2/5}$. This effect comes from the reduced available volume and the corresponding higher local particle density. In the presence of weak interactions, local Bose-Einstein condensation sets in at a temperature slightly lower than $T^\ast$, because the density is reduced in the wells of the random field due to repulsion and a lower temperature is needed to reach the critical value. We would like to stress that the temperature scale $T^\ast$ corresponds to the appearance of local condensates at the minima of $V_{\rm{dis}}({\bf r})$ and should not be confused with the critical temperature $T_c$ at which extended superfluidity sets in. Within the above semiclassical approach this latter temperature corresponds to the chemical potential reaching the percolation threshold of the effective potential $V_{\rm{dis}}({\bf r})+2gn_T({\bf r})$ where, according to equation~(\ref{FT3}), the condensate density $n_0({\bf r})$ is different from zero on a percolating path. For weakly-interacting systems though, because of the small value of the percolating volume fraction $\Phi_c$, the temperatures $T_c$ and $T^\ast$ are very close, except for extremely large disorder intensities. As an example we consider the configuration shown in figure~\ref{fig14} corresponding to $na^3=10^{-6}$ and $V_0=10k_BT_c^0$ in the regime of extremely long-range correlation length $\ell_c$. The value of the temperature $T^\ast$ obtained from equation~(\ref{FT7}) is given by $T^\ast=3.63T_c^0$. The self-consistent solution of the HF equations yields a temperature $T_{\rm{BEC}}$ for the local onset of Bose-Einstein condensation in the range $3.5T_c^0<T_{\rm{BEC}}<T^\ast$. The transition temperature $T_c$ where the condensate density percolates is found to be in the range $3.4T_c^0<T_c<T_{\rm{BEC}}$. The HF equation of state and the corresponding transition temperature are shown in figure~\ref{fig14} for a fixed density of the gas and for large disorder strength. The figure also reports the equation of state corresponding to the regime of a very short correlation length $\ell_c$, where the energy is merely shifted by the average disorder intensity $V_0$ from the value of the clean system. This result is consistent with the $T=0$ prediction (\ref{GP2}) and the corresponding transition temperature coincides with $T_c$ in the absence of disorder. \begin{figure} \begin{center} \includegraphics[width=8cm]{figure15.eps} \caption{(color online). Particle density profiles in the $(x=0,y,z)$ plane at different temperatures and for a given realization of disorder characterized by $V_0=10k_BT_c^0$ and $\ell_c=0.6n^{-1/3}$. The $T=0$ profile is obtained using the GP equation. The average density corresponds to the value $na^3=10^{-6}$ of the gas parameter.} \label{fig15} \end{center} \end{figure} \begin{figure} \begin{center} \includegraphics[width=8cm]{figure16.eps} \caption{(color online). Particle and condensate density profiles in the $(x=0,y,z)$ plane at two different temperatures, above and below $T_c$. The configuration is the same as in figure~\ref{fig15}.} \label{fig16} \end{center} \end{figure} \begin{figure} \begin{center} \includegraphics[width=8cm]{figure17.eps} \caption{(color online).
Particle and condensate density profiles along the $(x=0,y=0,z)$ axis corresponding to the configuration of figure~\ref{fig16} at $T=0.1T_c^0$.} \label{fig17} \end{center} \end{figure} \begin{figure} \begin{center} \includegraphics[width=8cm]{figure18.eps} \caption{(color online). Particle density profiles in the $(x=0,y,z)$ plane for different values of the gas parameter and for a given realization of disorder characterized by $V_0=10k_BT_c^0$ and $\ell_c=0.6n^{-1/3}$. The temperature is $T=0.1T_c^0$.} \label{fig18} \end{center} \end{figure} \begin{figure} \begin{center} \includegraphics[width=8cm]{figure19.eps} \caption{(color online). Temperature dependence of the superfluid and condensate density without disorder and with strong disorder ($V_0=10k_BT_c^0$ and $n^{1/3}\ell_c=0.6$). The value of the gas parameter is $na^3=10^{-6}$. The grey shaded area shows the estimated range of temperatures where the superfluid transition takes place. Solid and open symbols refer to two different system sizes to check the role of finite-size effects. The solid and dashed lines in the absence of disorder are the predictions of a self-consistent mean-field calculation, while the dotted lines with disorder are guides to the eye.} \label{fig19} \end{center} \end{figure} \begin{figure} \begin{center} \includegraphics[width=8cm]{figure20.eps} \caption{(color online). Dependence on the gas parameter of the superfluid and condensate density in the presence of strong disorder ($V_0=10k_BT_c^0$ and $n^{1/3}\ell_c=0.6$). The temperature is $T=0.1T_c^0$. Solid and open symbols refer to two different system sizes to check the role of finite-size effects. The error bars displayed with the solid symbols corresponding to $\rho_s/\rho$ come from averages over a number of disorder realizations. The dashed lines are guides to the eye.} \label{fig20} \end{center} \end{figure} \section{Low temperature thermodynamics} \label{Section6} The energy per particle obtained from PIMC simulations is reported in figure~\ref{fig14} as a function of temperature and for random potentials with different correlation lengths in the range of the mean interparticle distance. In particular, for the value $\ell_c=0.6n^{-1/3}$ we also indicate the value of the superfluid transition temperature. The equation of state and the value of $T_c$ corresponding to the limiting regimes of exceedingly long and short correlation lengths are also shown as a reference. Three important remarks about this figure are worth stressing. a) At fixed particle density and for a given disorder strength the largest suppression of $T_c$ is achieved with $\ell_c$ on the order of the interparticle distance. b) At low temperature the energetics of the system is well described by the GP equation even when the disorder is strong and $n^{1/3}\ell_c\sim1$. c) For random potentials of large strength and with spatial correlations in the range $n^{1/3}\ell_c\sim1$, a window of temperatures opens up where the system is normal (i.e. not superfluid), though highly degenerate ($n\lambda_T^3>1$). The properties of the ``exotic'' normal phase displayed at low temperatures for strong disorder are worth investigating. We find that the equation of state is well fitted by a quadratic temperature dependence as shown in figure~\ref{fig14}. This $T^2$-law for the energy, and the consequent linear dependence of the specific heat, is a remarkable feature of the phase.
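The quadratic fit itself is elementary: since $E(T)=E_0+cT^2$ is linear in $T^2$, an ordinary least-squares fit suffices. A hedged sketch with synthetic energies standing in for the PIMC data (the numbers below are illustrative only):

\begin{verbatim}
import numpy as np

# Synthetic stand-in for low-temperature PIMC energies (arbitrary units)
T = np.array([0.05, 0.10, 0.15, 0.20, 0.25, 0.30])
E = np.array([1.001, 1.004, 1.009, 1.016, 1.025, 1.036])

# E(T) = E0 + c T^2 is linear in T^2, so a first-degree polyfit suffices
c, E0 = np.polyfit(T ** 2, E, 1)
print("E0 =", E0, " c =", c)

# Residuals against a T^4 law discriminate between the two behaviors
c4, E04 = np.polyfit(T ** 4, E, 1)
res2 = np.sum((E - (E0 + c * T ** 2)) ** 2)
res4 = np.sum((E - (E04 + c4 * T ** 4)) ** 2)
print("sum of squared residuals, T^2 vs T^4:", res2, res4)
\end{verbatim}

Comparing the residuals of the two linearized fits provides a simple discriminator between the quadratic and quartic low-temperature behaviors.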
By contrast, in a clean superfluid, or in a dirty one with short-range disorder correlations (see figure~\ref{fig14}), the energy at low temperatures exhibits the $T^4$-law typical of the thermal excitation of phonons. In the opposite regime of long-range disorder correlations one can apply the HF-LDA approach described in section~\ref{Section5b}, which is expected to become more and more accurate as $\ell_c$ increases. Within this approach the system consists of large condensate lakes that may or may not be connected through a percolating path; the relevant excitations at low temperature are of single-particle nature and are localized at the boundaries of the condensate lakes. These excitations contribute to the energy with a $T^{7/2}$-law\footnote{In the insulating Bose glass phase the asymptotic low-temperature law is expected to be $\sim T^2$ independent of the value of the correlation length. However, for increasing $\ell_c$, the range of temperatures where the asymptotic law applies is suppressed.}. A Bose glass is predicted to exhibit a non-vanishing density of states at zero energy~\cite{FWGF}, which results in a $T^2$-law for the low-temperature equation of state. We interpret the quadratic dependence found in the low-temperature normal phase as evidence of the Bose glass phase. Another important output of PIMC simulations with the random potential is the spatial particle distribution and the distribution of particles contributing to the condensate density. In figure~\ref{fig15} we show the results of the particle density $n({\bf r})$ for a given realization of the random potential. Starting from a high-temperature distribution spread over the entire system, as the temperature is decreased, the density becomes more and more peaked at the minima of the external potential. Around $T\simeq0.4T_c^0$ the system turns superfluid (see figure~\ref{fig19}) and the density changes only slightly down to the lowest temperatures. Remarkably, the comparison with the findings of the GP equation for the same realization of the random field is rather good both for the positions and the relative intensities of the peaks. The comparison between particle density and condensate density profiles is reported in figure~\ref{fig16} for two temperatures, above and below $T_c$. The result at the lowest temperature shows that the condensate density follows the particle distribution, but does not exhibit its pronounced peaks. This behavior is clearly represented in figure~\ref{fig17}, where we show the profiles along a cut through the plane of figure~\ref{fig16}. Notice that the particles contributing to the condensate are only a small fraction ($\sim20\%$) of the total number of particles (see figure~\ref{fig19}). Finally, in figure~\ref{fig18} we report the density profiles as a function of the interaction strength. The figure clearly shows the effect of particle localization in the deepest wells of the random field as the value of the gas parameter is reduced. The behavior of the superfluid and condensate fraction in the proximity of the phase transition with and without disorder is shown in figure~\ref{fig19}. It is interesting to notice the large depletion of $\rho_s$ and $n_0$ even at the lowest temperatures and the fact that, in the regime of strong disorder, the superfluid component becomes significantly smaller than the condensate one. Similar results are reported in figure~\ref{fig20} for a fixed temperature and for varying values of the interaction strength.
The figure clearly shows that by making interactions weaker the system will eventually turn normal and that the difference between superfluid and condensate fraction disappears with strong enough interactions. To complete the picture we would like to mention the limit of weak disorder $(V_0 m\ell_c^2/\hbar^2) \ll 1$ and weak interactions $na^3 \to 0$ considered in Ref.~\cite{Nattermann}. It explains how interactions stabilize the superfluid phase starting from the localized phase of the non-interacting gas. At zero temperature the boundary between the insulating (Bose glass) and superfluid phases is set (up to logarithmic corrections) by percolation at the Larkin energy scale $V_0^4 m^3 \ell_c^6/\hbar^6$, which has to be compared with the chemical potential of the weakly interacting gas $gn$. This leads to the relation for the critical density $(na)_c \sim V_0^4$, {\it i.e.} for small $V_0$ only a tiny density of particles is required to offset the localization effects. In practice, one observes extremely robust superfluidity of the weakly-interacting Bose gas even in the presence of relatively large disorder, see Figs.~\ref{fig7}, \ref{fig19}, \ref{fig20}; for $na \gg (na)_c$ the critical temperature is essentially the same as in the ideal Bose gas. \section{Conclusions} \label{Conclusions} We find that in a quantum degenerate bosonic gas a random potential is most efficient in suppressing superfluidity if it is correlated over length scales comparable with the mean interparticle distance. However, for the typical diluteness conditions of ultracold gases, disorder intensities significantly larger than the energy scale $k_BT_c^0$ set by the BEC transition of the ideal gas are required for a significant reduction of the superfluid critical temperature, and stronger interactions make the superfluid state more robust. In the regime of weak interactions and strong disorder, the superfluid transition turns out to be well characterized by the existence of a mobility edge, separating localized from extended states, that is largely independent of temperature and interaction strength. This picture is similar to the percolation scenario of classical particles, but we find that quantum localization effects drive the system normal in a large region of energies where classical states would be extended. Furthermore, most of the particles are localized in the deepest wells of the random potential and only a small proportion contributes to the extended condensate state. The effective density of these delocalized particles is much smoother, due to the screening of the external field by the other particles, and determines the critical density for superfluidity, which however sets in at significantly lower temperatures compared to clean systems. For larger disorder intensities an ``exotic'' normal phase appears in the degenerate regime, even though we cannot reach $T=0$. This phase is characterized by a peculiar $T^2$ dependence of the equation of state, which is markedly different from the $T^4$ law of homogeneous superfluids and from the $T^{7/2}$ law found for large condensate lakes within LDA, and is in agreement with the predictions for the Bose glass phase. Remarkably, some aspects of this phase, such as the $T=0$ equation of state and the spatial distribution of particles, can be correctly described using the mean-field GP theory. \section*{Acknowledgements} We acknowledge useful discussions with B. Svistunov and L.P. Pitaevskii.
This work, as part of the European Science Foundation EUROCORES Program ``EuroQUAM-FerMix'', was supported by funds from the CNR and the EC Sixth Framework Programme. NP acknowledges support from NSF grant PHY-0653183. Calculations have been performed on the HPC facility {\it Wiglaf} at the Physics Department of the University of Trento and on the BEN cluster at ECT$^{\ast}$ in Trento. \section*{References}
\section{Introduction} For elements $u, v, w$ in a free group, the equation of the form $u^\ell = v^n w^m$ $(\ell, n, m \ge 2)$ is known as the {\it Lyndon-Sch\"{u}tzenberger equation} (LS equation for short). Lyndon and Sch\"{u}tzenberger~\cite{LySc62} investigated the question of finding all possible solutions for this equation in a free group, and proved that if the equation holds, then $u$, $v$, and $w$ are all powers of a common element. This equation can also be considered on the semigroup of all finite words over a fixed alphabet $\Sigma$, and an analogous result holds. \begin{theorem}[see, e.g., \cite{HaNo04,LySc62,Man02}]\label{thm:original} For words $u, v, w \in \Sigma^+$ and integers $\ell, n, m \ge 2$, the equation $u^\ell = v^n w^m$ implies that $u, v, w$ are powers of a common word. \end{theorem} The Lyndon-Sch\"{u}tzenberger equation has been generalized in several ways; e.g., the equation of the form $x^k = z_1^{k_1} z_2^{k_2} \cdots z_n^{k_n}$ was investigated by Harju and Nowotka~\cite{HaNo05} and its special cases in~\cite{ApDj68,Len65}. Czeizler et al.~\cite{CCKS09} have recently proposed another extension, which was originally motivated by the information encoded as DNA strands for DNA computing. In this framework, a DNA strand is modeled by a word $w$ and encodes the same information as its Watson-Crick complement. In formal language theory, the Watson-Crick complementarity of DNA strands is modeled by an antimorphic involution $\theta$~\cite{KKT02,PRY01}, i.e., a function $\theta$ on $\Sigma^*$ that is {\em (a)} antimorphic, $\theta(x y) = \theta(y) \theta(x)$, $\forall x, y \in \Sigma^*$, and {\em (b)} an involution, $\theta^2 = id$, the identity. Thus, we can model the property whereby a DNA single strand binds to and is completely equivalent to its Watson-Crick complement, by considering a word $u$ and its image $\theta (u)$ equivalent, for a given antimorphic involution $\theta$. For words $u$, $v$, $w$, integers $\ell, n, m \ge 2$, and an antimorphic involution $\theta$, an extended Lyndon-Sch\"{u}tzenberger equation (ExLS equation) is of the form \begin{equation}\label{eq:exLS} u_1u_2 \cdots u_\ell = v_1 \cdots v_n w_1 \cdots w_m, \end{equation} with $u_1, \ldots, u_\ell \in \{u, \theta(u)\}$, $v_1, \ldots, v_n \in \{v, \theta(v)\}$, and $w_1, \ldots, w_m \in \{w, \theta(w)\}$. The question arises as to whether an equation of this form implies the existence of a word $t$ such that $u, v, w \in \{t, \theta(t)\}^+$. A given triple $(\ell, n, m)$ of integers is said to {\it impose pseudo-periodicity, with respect to $\theta$, on $u, v, w$}, or simply, to {\it impose $\theta$-periodicity on $u, v, w$} if (\ref{eq:exLS}) implies $u, v, w \in \{t, \theta(t)\}^+$ for some word $t$. Furthermore, we say that the triple $(\ell, n, m)$ {\it imposes $\theta$-periodicity} if it imposes $\theta$-periodicity on all $u, v, w$. The known results on ExLS equations \cite{CCKS09} are summarized in Table~\ref{tbl:exLS_summary}. \begin{table}[h] \label{tbl:exLS_summary} \tbl{Summary of the known results regarding the extended Lyndon-Sch\"{u}tzenberger equation.} {\begin{tabular}{r@{\hspace{8mm}}r@{\hspace{8mm}}r@{\hspace{10mm}}c} \toprule $\ell$ & $n$ & $m$ & $\theta$-periodicity \\ \colrule $\ge 5$ & $\ge 3$ & $\ge 3$ & YES \\ \colrule $3$ or $4$ & $\ge 3$ & $\ge 3$ & ? \\ $2$ & $\ge 2$ & $\ge 2$ & ?
\\ \colrule $\ge 3$ & $2$ & $\ge 2$ & NO \\ \botrule \end{tabular}} \end{table} This paper is a step towards solving the unsettled cases of Table~\ref{tbl:exLS_summary}, by using the following strategy. Concise proofs exist in the literature for Theorem~\ref{thm:original}, that make use of fundamental properties such as: \begin{enumerate}[(i)] \item The periodicity theorem of Fine and Wilf (FW theorem), \label{item:FW} \item The fact that a primitive word cannot be a proper infix of its square, and \label{item:overlap} \item The fact that the class of primitive words is closed under cyclic permutation. \label{item:cycle} \end{enumerate} (For details of each, see~\cite{ChKa97}.) In contrast, the proof given in~\cite{CCKS09} for the result about ExLS equations, stating that $(\ge 5, \ge 3, \ge 3)$ imposes $\theta$-periodicity, involves techniques designed for this specific purpose only. Should Properties (\ref{item:FW}), (\ref{item:overlap}), (\ref{item:cycle}) be generalized so as to take into account the informational equivalence between a word $u$ and $\theta(u)$, they could possibly form a basis for a concise proof of the solutions to the ExLS equation. The approach we use in this paper is thus to seek analog properties for this extended case, and use the results we obtain to approach the unsettled cases in Table~\ref{tbl:exLS_summary}. Czeizler, Kari, and Seki generalized Property~(\ref{item:FW}) in \cite{CKS08}. There, first the notion of primitive words was extended to that of pseudo-primitive words with respect to a given antimorphic involution $\theta$ (or simply, $\theta$-primitive words). A word $u$ is said to be {\it $\theta$-primitive} if there does not exist another word $t$ such that $u \in t\{t, \theta(t)\}^+$. For example, if $\theta$ is the mirror image over $\{a, b\}^*$ (the identity function on $\{a, b\}$ extended to an antimorphism on $\{a, b\}^*$), $aabb$ is $\theta$-primitive, while $abba$ is not because it can be written as $ab \theta(ab)$. Based on the $\theta$-primitivity of words, Property~(\ref{item:FW}) was generalized as follows: ``For words $u, v$, if a word in $u\{u, \theta(u)\}^*$ and a word in $v\{v, \theta(v)\}^*$ share a long enough prefix (for details, see Theorems~\ref{thm:exFWlcm} and \ref{thm:exFWgcd}), then $u, v \in t\{t, \theta(t)\}^*$ for some $\theta$-primitive word $t$.'' In contrast, little is known about Properties (\ref{item:overlap}) and (\ref{item:cycle}) except that they cannot be generalized as suggested in the previous example: non-trivial overlaps between two words in $\{t, \theta(t)\}^+$ are possible, and cyclic permutations do not in general preserve the $\theta$-primitivity of words. As a preliminary step towards an extension of Property (\ref{item:overlap}), Czeizler et al.~examined the non-trivial overlap of the form $v_1 \cdots v_m x = y v_{m+1} \cdots v_{2m}$, where $m \ge 1$, $v_i$ is either $v$ or $\theta(v)$ for some $\theta$-primitive word $v$ $(1 \le i \le 2m)$, and both $x$ and $y$ are properly shorter than $v$ \cite{CCKS09}. Some of the results obtained there will be employed in this paper. One purpose of this paper is to explore further the extendability of Properties (\ref{item:overlap}) and (\ref{item:cycle}). The main result here is Theorem~\ref{thm:overlap3}, which states that for a $\theta$-primitive word $x$, neither $x\theta(x)$ nor $\theta(x)x$ can be a proper infix of a word $x_1x_2x_3$, where $x_1, x_2, x_3 \in \{x, \theta(x)\}$. 
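To make the definition of $\theta$-primitivity concrete, the following brute-force Python sketch (function names are ours) tries every block length $d$ dividing $|u|$ and checks whether $u$ factors over $\{t, \theta(t)\}$ with $t$ the first block; $\theta$ is instantiated as the mirror image used in the example above.

\begin{verbatim}
def theta_mirror(w):
    """Mirror image: the identity on letters, extended antimorphically."""
    return w[::-1]

def is_theta_primitive(u, theta=theta_mirror):
    """Brute-force test: is u in t{t, theta(t)}^+ for some shorter t?"""
    n = len(u)
    for d in range(1, n):
        if n % d != 0:
            continue
        t = u[:d]
        blocks = [u[i:i + d] for i in range(0, n, d)]
        if all(b in (t, theta(t)) for b in blocks):
            return False      # u factors over {t, theta(t)}: not primitive
    return True

print(is_theta_primitive("aabb"))  # True, as in the example above
print(is_theta_primitive("abba"))  # False: abba = ab . theta(ab)
\end{verbatim}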
Based on Theorem~\ref{thm:overlap3}, we consider two problems: For a $\theta$-primitive word $x$, (1) does $v, yvz \in \{x, \theta(x)\}^+$ imply that $y$ and $z$ are in $\{x, \theta(x)\}^*$?; and (2) if the catenation of words $u, v$ is in $\{x, \theta(x)\}^+$, under what conditions does $u, v \in \{x, \theta(x)\}^*$ hold? In particular, our investigation into the second problem will reveal close relationships between primitive words, $\theta$-primitive words, and $\theta$-palindromes (fixed points of $\theta$). These relationships further yield several cyclic permutations under which the $\theta$-primitivity of words is preserved. The results thus obtained enable us to prove that the triple $(4, \ge 3, \ge 3)$ imposes $\theta$-periodicity (Theorem~\ref{thm:exLS4}) in a much simpler manner than the proof in \cite{CCKS09} for $(\ge 5, \ge 3, \ge 3)$. Even for $(3, n, m)$ ExLS equations, these results give some insight and narrow down the open cases. The paper is organized as follows: in the next section, we provide the required notions and notation. Section~\ref{sec:combinatorial} begins with the proof of some basic properties of $\theta$-primitive words, and then proves some consequences of overlaps between $\theta$-primitive words of a similar flavour to Properties~(\ref{item:overlap}) and (\ref{item:cycle}) (e.g., Theorem~\ref{thm:overlap3}, Corollary~\ref{cor:conj_pal}). These tools are used in Section~\ref{sec:ExLS}, where we prove that the $(4, \ge 3, \ge 3)$ ExLS equation has only $\theta$-periodic solutions (Theorem~\ref{thm:exLS4}), and study particular cases of $(3, n, m)$ ExLS equations. \section{Preliminaries}\label{sec:pre} An alphabet is a finite and non-empty set of symbols. In the sequel, we shall use a fixed non-singleton alphabet $\Sigma$. The set of all words over $\Sigma$ is denoted by $\Sigma^*$, which includes the empty word $\lambda$, and let $\Sigma^+ = \Sigma^* \setminus \{\lambda\}$. The length of a word $w \in \Sigma^*$ is denoted by $|w|$. A word $v$ is an {\it infix} (resp. {\it prefix}, {\it suffix}) of a word $w$ if $w = xvy$ (resp. $w = vy$, $w = xv$) for some $x, y \in \Sigma^*$; in any case, if $w \neq v$, then the infix (prefix, suffix) is said to be {\it proper}. For a word $w$, denote by $\ensuremath{\mathrm{Pref}}(w)$ the set of prefixes of $w$ and by $\ensuremath{\mathrm{Suff}}(w)$ the set of its suffixes. A language $L$ is a subset of $\Sigma^*$. For an integer $n \ge 0$, we write $L^n$ for the language consisting of all words of the form $w_1 \cdots w_n$ such that each $w_i$ is in $L$. We also write $L^{\ge n}$ for $L^n \cup L^{n+1} \cup L^{n+2} \cup \cdots$. Analogously, we can define $L^{\le n} = L^0 \cup L^1 \cup \cdots \cup L^n$. For $L^{\ge 0}$ and $L^{\ge 1}$, we employ the traditional notation $L^*$ and $L^+$. A mapping $\theta: \Sigma^* \to \Sigma^*$ is called an {\it antimorphic involution} of $\Sigma^*$ if $\theta(xy) = \theta(y)\theta(x)$ for any $x, y \in \Sigma^*$ (antimorphism), and $\theta^2$ is equal to the identity (involution). Throughout this paper, $\theta$ denotes an antimorphic involution. The {\it mirror image}, which maps a word to its reverse, is a typical example of an antimorphic involution. A word $w \in \Sigma^*$ is called a {\it $\theta$-palindrome} if $w = \theta(w)$. A word which is a $\theta$-palindrome with respect to a given but unspecified antimorphic involution $\theta$ is also called a {\it pseudo-palindrome} \cite{dLDL06a}.
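As a quick illustration (a hedged sketch; the DNA alphabet and the helper name are ours), the Watson-Crick complementarity mentioned in the introduction yields an antimorphic involution, and $\theta$-palindromes are exactly its fixed points:

\begin{verbatim}
DNA = {"a": "t", "t": "a", "c": "g", "g": "c"}

def theta_wk(w):
    """Watson-Crick involution: complement each letter, reverse the word."""
    return "".join(DNA[ch] for ch in reversed(w))

x, y = "acg", "ttgca"
assert theta_wk(x + y) == theta_wk(y) + theta_wk(x)   # antimorphism
assert theta_wk(theta_wk(x)) == x                     # involution
print(theta_wk("acgt") == "acgt")   # True: "acgt" is a theta-palindrome
\end{verbatim}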
A non-empty word $w \in \Sigma^+$ is said to be {\it primitive} if $w = v^n$ implies $n = 1$ for any word $v \in \Sigma^+$. It is known that any non-empty word $w \in \Sigma^+$ can be written as a power of a unique primitive word, which is called the {\it primitive root} of $w$, and denoted by $\rho(w)$. Two words which {\it commute} share a primitive root, that is, $uv = vu$ implies $\rho(u) = \rho(v)$ (see~\cite{ChKa97}). In the literature, it is said that $uv = vu$ causes a {\it defect effect} (for details of defect effects and defect theorems, see~\cite{ChKa97,Man02}). The LS equation also causes a defect effect, since $u^\ell = v^n w^m$ with $\ell, n, m \ge 2$ implies $\rho(u) = \rho(v) = \rho(w)$ (Theorem~\ref{thm:original}). The following results describe other relations causing a defect effect. \begin{lemma}[\cite{CKS08}]\label{lem:pq-qp} Let $u \in \Sigma^+$ such that $u = p q$ for some $\theta$-palindromes $p, q \in \Sigma^+$. If $q \in \ensuremath{\mathrm{Pref}}(u)$ and $|q| \geq |p|$, then $\rho(p) = \rho(q) = \rho(u)$. \end{lemma} \begin{theorem}[\cite{ChKa97}]\label{th:uv-expr} Let $u, v \in \Sigma^+$. If there exist $\alpha(u, v) \in u\{u, v\}^*$ and $\beta(u, v) \in v\{u, v\}^*$ which share a prefix of length at least $|u| + |v|$, then $\rho(u) = \rho(v)$. \end{theorem} The notion of primitive word was generalized into that of pseudo-primitive word by Czeizler, Kari, and Seki~\cite{CKS08}. For an antimorphic involution $\theta$, a non-empty word $w \in \Sigma^+$ is said to be {\it pseudo-primitive with respect to $\theta$}, or simply {\it $\theta$-primitive}, if $w \in \{v, \theta(v)\}^n$ implies $n = 1$ for any word $v \in \Sigma^+$. In \cite{CKS08} it was proved that for any non-empty word $w \in \Sigma^+$, there exists a unique $\theta$-primitive word $t$ satisfying $w \in t\{t, \theta(t)\}^*$. Such a word $t$ is called the {\it $\theta$-primitive root} of $w$. The next lemma describes a property of the $\theta$-primitive root of a $\theta$-palindrome of even length. \begin{lemma}\label{lem:pali_even} Let $x \in \Sigma^+$ be a $\theta$-primitive word and $p$ be a $\theta$-palindrome of even length. If $p = x_1 x_2 \cdots x_m$ for some $m \ge 1$ and $x_1, \ldots, x_m \in \{x, \theta(x)\}$, then $m$ has to be even. \end{lemma} \begin{proof} Suppose that the equality held for some odd $m$. Then $x$ must be of even length because $|p|$ is even. Since $p = \theta(p)$ and the factorization into blocks of length $|x|$ is preserved under $\theta$, the central factor $x_{(m+1)/2}$ is a $\theta$-palindrome; as $x_{(m+1)/2} \in \{x, \theta(x)\}$, this means that $x$ itself is a $\theta$-palindrome of even length. Thus $x = y\theta(y)$ for some $y \in \Sigma^+$. However, this contradicts the $\theta$-primitivity of $x$. \end{proof} The {\it theorem of Fine and Wilf} (FW theorem) is one of the fundamental results on periodicity \cite{FiWi65}. It states that for two words $u, v \in \Sigma^+$, if a power of $u$ and a power of $v$ share a prefix of length at least $|u|+|v|-\gcd(|u|, |v|)$, then $\rho(u) = \rho(v)$, where $\gcd(\cdot, \cdot)$ denotes the greatest common divisor of two arguments (for its proof, see, e.g.,~\cite{ChKa97}). This theorem has been generalized in~\cite{CKS08}, by taking into account the equivalence between a word and its image under $\theta$, in the following two forms. \begin{theorem}[\cite{CKS08}]\label{thm:exFWlcm} Let $u, v \in \Sigma^+$. If a word in $\{u, \theta(u)\}^*$ and a word in $\{v, \theta(v)\}^*$ share a prefix of length at least $\ensuremath{\mathrm{lcm}}(|u|, |v|)$, then $u, v \in \{t, \theta(t)\}^+$ for some $\theta$-primitive word $t \in \Sigma^+$, where $\ensuremath{\mathrm{lcm}}(\cdot, \cdot)$ denotes the least common multiple of two arguments.
\end{theorem} \begin{theorem}[\cite{CKS08}]\label{thm:exFWgcd} Let $u, v \in \Sigma^+$ with $|u| \ge |v|$. If a word in $\{u, \theta(u)\}^*$ and a word in $\{v, \theta(v)\}^*$ share a prefix of length at least $2|u|+|v|-\gcd(|u|, |v|)$, then $u, v \in \{t, \theta(t)\}^+$ for some $\theta$-primitive word $t \in \Sigma^+$. \end{theorem} In a way, we can say that these theorems describe relations causing a {\it weak defect effect} because they all imply that $u, v \in \{t, \theta(t)\}^+$ for some $\theta$-primitive word $t \in \Sigma^+$, which is strictly weaker than the usual defect effect $\rho(u) = \rho(v)$~\cite{CKS08}. Various relations causing such a weak defect effect were presented in~\cite{CKS08}. Besides, the commutativity $xy = yx$ was extended to the $\theta$-commutativity $xy = \theta(y)x$ in~\cite{KaMa08}. This is a special case of $xy = zx$, whose solutions are given as $x = r(tr)^i$, $y = (tr)^j$, and $z = (rt)^j$ for some $i \ge 0$, $j \ge 1$, and $r, t \in \Sigma^*$ such that $rt$ is primitive (see, e.g.,~\cite{ChKa97}). The next proposition immediately follows from this; note that the $\theta$-commutativity equation guarantees that both $r, t$ are $\theta$-palindromes. \begin{proposition}[\cite{KaMa08}]\label{prop:th-commute} For $x, y \in \Sigma^+$, the solutions of $xy = \theta(y)x$ are given by $x = r(tr)^i$ and $y = (tr)^j$ for some $i \ge 0$, $j \ge 1$, and $\theta$-palindromes $r, t$ such that $rt$ is primitive. \end{proposition} Although this equation does not cause even a weak defect effect, one encounters it often when considering word equations which involve $\theta$. Note that for words $u, v \in \Sigma^*$, it was proved in~\cite{CKS08} that the system $uv = \theta(uv)$ and $vu = \theta(vu)$ causes a weak defect effect: $u, v \in \{t, \theta(t)\}^*$ for some $t \in \Sigma^+$. Thus for words $x, y, z$ satisfying $xy = zx$, if both $y$ and $z$ are $\theta$-palindromes, then the representation of solutions of $xy = zx$ implies $tr = \theta(tr)$ and $rt = \theta(rt)$. Hence the next result holds. \begin{proposition}[\cite{CCKS09}]\label{prop:pali_conjugate} For a word $x \in \Sigma^+$ and two $\theta$-palindromes $y, z \in \Sigma^+$, the equation $xy = zx$ implies that $x, y, z \in \{t, \theta(t)\}^*$ for some $t \in \Sigma^+$. \end{proposition} \section{Properties of Pseudo-Primitive Words}\label{sec:combinatorial} The primitivity of words is one of the most essential notions in combinatorics on words. The past few decades saw a considerable number of studies on this topic (see e.g.,~\cite{ChKa97,Lot83,Shy01}). In contrast, research on the pseudo-primitivity of words has just been initiated in~\cite{CCKS09,CKS08}. For instance, although the class of pseudo-primitive words was proved to be properly included in that of primitive words~\cite{CKS08}, nothing else is known about the relation between these two classes. The purpose of this section is to prove various properties of pseudo-primitive words. Throughout this section, $\theta$ is assumed to be a given antimorphic involution. We begin this section with a simple extension of a known result on the primitive root (Lemma~\ref{lem:rt_rootshare}) to the $\theta$-primitive root (Lemma~\ref{lem:rt_th-rootshare}). \begin{lemma}[e.g.,~\cite{Shy01}]\label{lem:rt_rootshare} For words $u, v \in \Sigma^+$ and a primitive word $w \in \Sigma^+$, the following properties hold: \begin{arabiclist} \item $u^n \in w^+$ implies $u \in w^+$; \item $uv, u \in w^+$ or $uv, v \in w^+$ implies $u, v \in w^+$. 
\end{arabiclist} \end{lemma} \begin{lemma}\label{lem:rt_th-rootshare} For words $u, v \in \Sigma^+$ and a $\theta$-primitive word $x \in \Sigma^+$, the following properties hold: \begin{arabiclist} \item for some $n \ge 1$, $u^n \in \{x, \theta(x)\}^+$ implies $u \in \{x, \theta(x)\}^+$; \item $uv, u \in \{x, \theta(x)\}^+$, or $uv, v \in \{x, \theta(x)\}^+$ implies $u, v \in \{x, \theta(x)\}^+$; \item $\theta(u)v, u \in \{x, \theta(x)\}^+$, or $u\theta(v), v \in \{x, \theta(x)\}^+$ implies $u, v \in \{x, \theta(x)\}^+$. \end{arabiclist} \end{lemma} \begin{proof} The first property follows from Theorem~\ref{thm:exFWlcm}, while the others are immediately proved by comparing the length of words. \end{proof} As mentioned in the introduction, if a word $w$ is primitive, then the equation $w^2 = ywz$ implies either $y = \lambda$ or $z = \lambda$. Since a $\theta$-primitive word is primitive, this applies to $\theta$-primitive words, too; a $\theta$-primitive word $x$ cannot be a proper infix of $x^2$. However, due to the informational equivalence between $x$ and $\theta(x)$, we should consider equations like $x^2 = y\theta(x)z$ as well, and in fact this equation can hold with non-empty $y$ and $z$. Nevertheless, we can state an analogous theorem based on the next lemma. \begin{lemma}[\cite{CKS08}]\label{lem:overlap2} Let $x \in \Sigma^+$ be a $\theta$-primitive word, and $x_1$, $x_2$, $x_3$, $x_4 \in \{x, \theta(x)\}$. If $x_1 x_2 y = z x_3 x_4$ for some non-empty words $y, z \in \Sigma^+$ with $|y|, |z| < |x|$, then $x_2 \neq x_3$. \end{lemma} \begin{theorem}\label{thm:overlap3} For a $\theta$-primitive word $x \in \Sigma^+$, neither $x\theta(x)$ nor $\theta(x)x$ can be a proper infix of a word in $\{x, \theta(x)\}^3$. \end{theorem} \begin{proof} Let $x_1, x_2, x_3 \in \{x, \theta(x)\}$ and suppose that $x\theta(x)$ is a proper infix of $x_1x_2x_3$. That is to say, there exist words $y, z, y', z' \in \Sigma^+$, $0 < |y|, |z|, |y'|, |z'| < |x|$ such that $zx\theta(x) = x_1x_2y$ and $x\theta(x)y' = z'x_2x_3$. By Lemma~\ref{lem:overlap2}, the first equation implies that $x_2 \ne x$ and the second that $x_2 \ne \theta(x)$, this is in contradiction with $x_2 \in \{x, \theta(x)\}$. We prove similarly that $\theta(x)x$ cannot be a proper infix of $x_1x_2x_3$. \end{proof} This theorem will lead us to two propositions (Propositions~\ref{prop:pali_split} and~\ref{prop:clean_split}), as well as to several other results. The main usage of these propositions in this paper is the following ``splitting strategy,'' which shall prove useful in solving ExLS equations in Section~\ref{sec:ExLS}. Given ``complicated'' words in $\{x, \theta(x)\}^+$ for a $\theta$-primitive word $x$, these propositions make it possible to split such words into ``simple'' component words which are still in $\{x, \theta(x)\}^+$. Then, Lemmas~\ref{lem:rt_rootshare} and~\ref{lem:rt_th-rootshare} are often applicable to subdivide these simple components into smaller units in $\{x, \theta(x)\}^+$. Recall that a primitive word cannot be a proper infix of its square. It is hence evident that for a primitive word $w$, if a word $u$ in $w^+$ contains $w$ as its infix like $u = ywz$ for some $y, z \in \Sigma^*$, then $y, z \in w^*$. For such $w$, more generally, $v, yvz \in w^+$ implies $y, z \in w^*$. This raises a naturally extended question of whether for a $\theta$-primitive word $x$, if $v, yvz \in \{x, \theta(x)\}^+$, then $y, z \in \{x, \theta(x)\}^*$ holds or not. 
Although this is not always the case, we provide some positive cases based on the following lemma, which is a natural consequence of Theorem~\ref{thm:overlap3}. \begin{lemma}\label{lem:xtex_infix1} Let $x$ be a $\theta$-primitive word, and $v \in \Sigma^+$. For $y, z \in \Sigma^*$, either $yx\theta(x)z \in \{x, \theta(x)\}^*$ or $y\theta(x)xz \in \{x, \theta(x)\}^*$ implies $y, z \in \{x, \theta(x)\}^*$. \end{lemma} \begin{proof} We prove that $yx\theta(x)z \in \{x, \theta(x)\}^*$ implies $y, z \in \{x, \theta(x)\}^*$. Let $yx\theta(x)z = x_1 \cdots x_n$ for some $n \ge 2$ and $x_1, \ldots, x_n \in \{x, \theta(x)\}$. In light of Theorem \ref{thm:overlap3}, there must exist such $i$ that $y = x_1 \cdots x_{i-1}$, $x\theta(x) = x_ix_{i+1}$, and $z = x_{i+2} \cdots x_n$. \end{proof} \begin{lemma}\label{lem:xtex_infix2} Let $x$ be a $\theta$-primitive word, and $v \in \Sigma^+$. If $v, yvz \in \{x, \theta(x)\}^*$ for some $y, z \in \Sigma^*$ and either $x\theta(x)$ or $\theta(x)x$ is an infix of $v$, then $y, z \in \{x, \theta(x)\}^*$. \end{lemma} \begin{proof} Here we consider only the case when $x\theta(x)$ is an infix of $v$. Due to Lemma \ref{lem:xtex_infix1}, we can let $v = x'x\theta(x)x''$ for some $x', x'' \in \{x, \theta(x)\}^*$. Thus, $yvz = yx' x\theta(x) x''z \in \{x, \theta(x)\}^{\ge 2}$. From this, the same lemma derives $yx', x''z \in \{x, \theta(x)\}^*$. Based on Lemma \ref{lem:rt_th-rootshare}, we obtain $y, z \in \{x, \theta(x)\}^*$. \end{proof} Lemma~\ref{lem:xtex_infix2} is a generalization of Lemma~\ref{lem:xtex_infix1}, and makes it possible to prove the following two propositions. \begin{proposition} Let $x$ be a $\theta$-primitive word, and $v \in \Sigma^+$. If $v, yvz \in \{x, \theta(x)\}^{\ge 2}$ for some $y, z \in \Sigma^*$ and $v$ is primitive, then $y, z \in \{x, \theta(x)\}^*$. \end{proposition} \begin{proof} Let $v = x_1 \cdots x_m$ for some $m \ge 2$ and $x_1, \ldots, x_m \in \{x, \theta(x)\}$. Since $v$ is primitive, there exists $1 \le i \le m$ such that $x_ix_{i+1} \in \{x\theta(x), \theta(x)x\}$. Now we can employ Lemma \ref{lem:xtex_infix2} to get this result. \end{proof} \begin{proposition}\label{prop:pali_split} Let $x$ be a $\theta$-primitive word, and $v \in \Sigma^+$. If $v, yvz \in \{x, \theta(x)\}^+$ for some $y, z \in \Sigma^*$ and $v$ is a non-empty $\theta$-palindrome, then $y, z \in \{x, \theta(x)\}^*$. \end{proposition} \begin{proof} Let $v = x_1 \cdots x_n$ for some $n \ge 1$ and $x_1, \ldots, x_n \in \{x, \theta(x)\}$. If $n$ is odd, then $v = \theta(v)$ implies $x_{(n+1)/2} = \theta(x_{(n+1)/2})$ and this means $x = \theta(x)$. Thus we have $v, yvz \in x^+$, and hence $y, z \in x^*$. If $n$ is even, then $x_{n/2}x_{n/2+1} \in \{x\theta(x), \theta(x)x\}$ so that $y, z \in \{x, \theta(x)\}^*$ due to Lemma~\ref{lem:xtex_infix2}. \end{proof} >From now on, we address the following question: ``for a $\theta$-primitive word $x$ and two words $u, v \in \Sigma^*$ such that $uv \in \{x, \theta(x)\}^+$, under what conditions on $u, v$, we can say $u, v \in \{x, \theta(x)\}^*$?''. Here we provide several such conditions. Among them is Proposition~\ref{prop:clean_split}, which serves for the splitting strategy. As its corollary, we will obtain relationships between primitive words and $\theta$-primitive words (Corollaries~\ref{cor:prime-te-prime} and \ref{cor:pq2_te-prime}). 
\begin{proposition}\label{prop:pref_suff_split} Let $x$ be a $\theta$-primitive word, $u \in \ensuremath{\mathrm{Suff}}(\{x, \theta(x)\}^+)$, and $v \in \ensuremath{\mathrm{Pref}}(\{x, \theta(x)\}^+)$. If $uv = x_1 \cdots x_m$ for some integer $m \ge 2$ and $x_1, \ldots, x_m \in \{x, \theta(x)\}$, then either $u, v \in \{x, \theta(x)\}^+$ or $x_1 = \cdots = x_m$. \end{proposition} \begin{proof} Let us prove that when $u, v \not\in \{x, \theta(x)\}^+$, $x_1 = \cdots = x_m$ must hold. Let $u = z_s' x_{i-1}' \cdots x_1'$ for some $i \ge 1$, $x_i', \ldots, x_1' \in \{x, \theta(x)\}$, and some non-empty words $z_p', z_s' \in \Sigma^+$ such that $z_p'z_s' = x'_i$. We can also let $v = x_1'' \cdots x_{j-1}'' z_p''$ for some $j \ge 1$, $x_1'', \ldots, x_j'' \in \{x, \theta(x)\}$, and $z_p'', z_s'' \in \Sigma^+$ such that $z_p''z_s'' = x_j$. Now we have $x_i' \cdots x_1' x_1'' \cdots x_j'' = z_p' uv z_s'' = z_p' x_1 \cdots x_m z_s''$. Since $0 < |z_p'| < |x|$, Theorem~\ref{thm:overlap3} implies $x_1 = \cdots = x_m$. \end{proof} \begin{corollary} Let $x$ be a $\theta$-primitive word, and $u \in \ensuremath{\mathrm{Suff}}(\{x, \theta(x)\}^+)$, $v \in \ensuremath{\mathrm{Pref}}(\{x, \theta(x)\}^+)$. If $uv$ is in $\{x, \theta(x)\}^{\ge 2}$ and primitive, then $u, v \in \{x, \theta(x)\}^+$. \end{corollary} Proposition~\ref{prop:pref_suff_split} gives the following two propositions which play an important role in investigating the ExLS equation. \begin{proposition}\label{prop:conjugacy} Let $x$ be a $\theta$-primitive word, and $u, v \in \Sigma^+$. If $uv, vu \in \{x, \theta(x)\}^n$ for some $n \ge 2$, then one of the following statements holds: \begin{arabiclist} \item $u, v \in \{x, \theta(x)\}^+$; \item $uv = x^n$ and $vu = \theta(x)^n$; \item $uv = \theta(x)^n$ and $vu = x^n$. \end{arabiclist} \end{proposition} \begin{proof} We have $v \in \ensuremath{\mathrm{Pref}}(\{x, \theta(x)\}^+)$ and $u \in \ensuremath{\mathrm{Suff}}(\{x, \theta(x)\}^+)$ because $vu \in \{x, \theta(x)\}^n$. Proposition~\ref{prop:pref_suff_split} implies that either the first property holds or $uv \in \{x^n, \theta(x)^n\}$. Here we consider only the case when $uv = x^n$. Then $u = x^i x_p$ and $v = x_s x^{n-i-1}$ for some $1 \le i \le n$ and $x_p, x_s \in \Sigma^+$ with $x = x_p x_s$. Thus, we have $x_p vux_s = x^{n+1}$, from which can deduce $vu = \theta(x)^n$ with the aid of Theorem~\ref{thm:overlap3} and the fact that $x$ cannot be a proper infix of its square. \end{proof} \begin{proposition}\label{prop:clean_split} Let $x \in \Sigma^+$ be a $\theta$-primitive word, and $p, q \in \Sigma^+$ be $\theta$-palindromes. If $pq$ is primitive, and $pq = x_1 \cdots x_n$ for some $n \ge 2$ and $x_1, \ldots, x_n \in \{x, \theta(x)\}$, then there are integers $k, m \ge 1$ such that $n = 2m$, $p = x_1 \cdots x_{2k}$, and $q = x_{2k+1} \cdots x_{2m}$. \end{proposition} \begin{proof} It is clear from $pq = x_1 \cdots x_n$ that $p \in \ensuremath{\mathrm{Pref}}(\{x, \theta(x)\}^+)$ and $q \in \ensuremath{\mathrm{Suff}}(\{x, \theta(x)\}^+)$. Since both $p$ and $q$ are $\theta$-palindromes, these mean that $p \in \ensuremath{\mathrm{Suff}}(\{x, \theta(x)\}^+)$ and $q \in \ensuremath{\mathrm{Pref}}(\{x, \theta(x)\}^+)$. Hence we can apply Proposition~\ref{prop:pref_suff_split} to obtain $p = x_1 \cdots x_{i}$ and $q = x_{i+1} \cdots x_{n}$ for some $i$ (since $pq$ is primitive, the case $x_1 = \cdots = x_n$ is impossible). The integer $i$ has to be even ($i = 2k$ for some $k \ge 1$). 
Suppose not, then $p$ being a $\theta$-palindrome implies that $x_{(i+1)/2}$ is a $\theta$-palindrome, and hence so is $x$. As a result, $pq = x^n$ but this contradicts the assumption that $pq$ is primitive. Similarly, $n-i$ proves to be even, too, and we obtain $n=2m$. \end{proof} The next two corollaries follow from Proposition~\ref{prop:clean_split}. The first one provides us with a sufficient condition for a primitive word that is a catenation of two non-empty $\theta$-palindromes to be $\theta$-primitive. \begin{corollary}\label{cor:prime-te-prime} For non-empty $\theta$-palindromes $p, q$, if $pq$ is primitive but there does not exist any $x$ such that $p, q \in \{x, \theta(x)\}^+$, then $pq$ is $\theta$-primitive. \end{corollary} \begin{corollary}\label{cor:pq2_te-prime} Let $p, q$ be non-empty $\theta$-palindromes such that $pq$ is primitive. Then some word in $\{p, q\}^{+}$ is $\theta$-primitive if and only if $pq$ is $\theta$-primitive. \end{corollary} \begin{proof} The converse implication is trivial because $pq \in \{p, q\}^+$. The direct implication can be proved by considering its contrapositive, which is immediately given by Proposition~\ref{prop:clean_split}. \end{proof} Note that in the statement of Corollary \ref{cor:pq2_te-prime} we cannot replace the quantifier ``some'' with ``all''. A trivial example is $(pq)^2 \in \{p, q\}^+$, which is not even primitive. We can also provide a non-trivial example as follows: \begin{example} Let $\theta$ be the mirror image over $\{a, b\}^*$, $p = a$, and $q = baaab$. It is clear that $pq = abaaab$ is $\theta$-primitive. On the other hand, $qppp = (baaa)^2 \in \{p, q\}^+$ is not even primitive. \end{example} Corollary~\ref{cor:pq2_te-prime} gives a further corollary about the case in which a word obtained from a $\theta$-primitive word by cyclic permutation remains $\theta$-primitive. \begin{corollary}\label{cor:conj_pal} For two non-empty $\theta$-palindromes $p, q$, if $pq$ is $\theta$-primitive, then $qp$ is $\theta$-primitive. \end{corollary} \begin{proof} Since $pq$ is $\theta$-primitive, it is primitive and hence its conjugate $qp$ is also primitive. Applying Corollary~\ref{cor:pq2_te-prime} to $qp$ gives the result. \end{proof} Corollary~\ref{cor:conj_pal} gives a partial answer to one of our questions on the preservation of $\theta$-primitivity under cyclic permutation. Now let us examine the equation $pq = x_1 \cdots x_n$ from a different perspective to get some results useful in Section \ref{sec:ExLS}. Here we see that the assumptions considered in Proposition \ref{prop:clean_split}: $pq$ being primitive and both of $p, q$ being a $\theta$-palindrome are critical to obtain $p, q \in \{x, \theta(x)\}^+$. \begin{lemma}\label{lem:pal_x1-xn} For a $\theta$-primitive word $x \in \Sigma^+$ and $k \ge 2$, let $x_1, x_2, \ldots, x_k \in \{x, \theta(x)\}$. If $pz = x_1x_2 \cdots x_k$ for some $\theta$-palindrome $p$ and non-empty word $z \in \Sigma^+$ with $|z| < |x|$, then $x_1 = x_2 = \cdots = x_{k-1}$. Moreover, if $z$ is also a $\theta$-palindrome, then $x_k = x_{k-1}$. \end{lemma} \begin{proof} Due to the length condition on $z$, we can let $x_k = yz$ for some non-empty word $y \in \Sigma^+$. Hence we have $p = x_1x_2 \cdots x_{k-1}y$. Since $p$ is a $\theta$-palindrome, $p = \theta(y) \theta(x_{k-1}) \cdots \theta(x_1)$. 
This means that $\theta(x_{k-1}) \cdots \theta(x_1)$ is a proper infix of $x_1 \cdots x_k$, and we can say that $x_1 = \cdots = x_{k-1}$ using Theorem~\ref{thm:overlap3} (we can assume $k \ge 3$, since if $k = 2$ the consequence is trivial). Now we consider the additional result when $z = \theta(z)$. Without loss of generality, we can assume that $x_1 = x$. So we have $p = x^{k-1}y = \theta(y)\theta(x)^{k-1}$. Since $|y| < |\theta(x)|$, this equation gives $\theta(x) = qy$ for some non-empty word $q$. Actually $q$ is a $\theta$-palindrome. Indeed, we have $qy \in \ensuremath{\mathrm{Suff}}(p) = \ensuremath{\mathrm{Suff}}(x^{k-1}y)$, hence as $|q| < |x|$, $q \in \ensuremath{\mathrm{Suff}}(x)$. Moreover, by definition, $q \in \ensuremath{\mathrm{Pref}}(\theta(x))$, therefore $\theta(q) \in \ensuremath{\mathrm{Suff}}(x)$ and thus $q$ has to be a $\theta$-palindrome. Thus, if $x_k = \theta(x)$, then $\theta(x) = qy = yz$ and hence $\theta(x)$ could not be $\theta$-primitive due to Proposition~\ref{prop:pali_conjugate}, raising a contradiction. \end{proof} For two $\theta$-palindromes $p, q$, a $\theta$-primitive word $x$, and $x_1, \ldots, x_k \in \{x, \theta(x)\}$ ($k \ge 1$), if $|q| < |x|$, then the equation $pq = x_1 \cdots x_k$ turns into $pq = x^k$ due to Lemma \ref{lem:pal_x1-xn} and its solution is $x = p'q$ for some $\theta$-palindrome $p'$ such that $p = x^{k-1}p'$. If we replace $q$ in this equation with a word $z$, which is not assumed to be a $\theta$-palindrome, and if $k \ge 3$, then we can still find an intriguing non-trivial solution to the equation $pz = x^{k-1}\theta(x)$. \begin{example} Let $p$ be a $\theta$-palindrome, $x$ be a $\theta$-primitive word, and $z \in \Sigma^+$ with $|z| < |x|$. For some $i \ge 0$, $j \ge 1$, $k \ge 3$, and $\theta$-palindromes $r, t$ such that $rt$ is primitive, we can see that $x = [r(tr)^i]^2 (tr)^j$, $p = x^{k-1} r(tr)^i$, and $z = (tr)^j r(tr)^i$ satisfy $pz = x^{k-1}\theta(x)$. \end{example} Note that $r$ and $t$ in this example are given by Proposition \ref{prop:th-commute}. Further research on the properties of words in $\{r(tr)^i, (tr)^j\}^*$ may shed light on the properties of $\theta$-primitive words. In Section \ref{subsec:non-trivial_ExLS4}, we will provide some results along this line, such as the ones in Propositions~\ref{prop:rt2_prime_present1} and \ref{prop:rt2_prime_present2}. \section{Extended Lyndon-Sch\"{u}tzenberger equation}\label{sec:ExLS} As an application of the results obtained in Section~\ref{sec:combinatorial}, we address some open cases of the extended Lyndon-Sch\"{u}tzenberger equation in this section. For $u, v, w \in \Sigma^+$, the ExLS equation under consideration is of the form \[ u_1 \cdots u_\ell = v_1 \cdots v_n w_1 \cdots w_m, \] where $u_1, \ldots, u_\ell \in \{u, \theta(u)\}$, $v_1, \ldots, v_n \in \{v, \theta(v)\}$, and $w_1, \ldots, w_m \in \{w, \theta(w)\}$, for $\ell, n, m \ge 2$. The open cases are $\ell \in \{2, 3, 4\}$ and $m,n\geq 3$ (see Table \ref{tbl:exLS_summary}). It suffices to consider the case when both $v$ and $w$ are $\theta$-primitive; otherwise we simply replace them with their $\theta$-primitive roots and increase the parameters $n$ and $m$. The words $v_1 \cdots v_n$ and $w_1 \cdots w_m$ being symmetric with respect to their roles in the equation, it is also legitimate to assume that $|v_1 \cdots v_n| \ge |w_1 \cdots w_m|$. Throughout Subsections \ref{subsec:ExLS4_setting} to \ref{subsec:ExLS4_01}, we prove that the triple $(4, \ge 3, \ge 3)$ imposes $\theta$-periodicity. 
First of all, in Subsection~\ref{subsec:ExLS4_setting}, the problem which we actually work on is formalized as Problem~\ref{prob:main}, and we solve some special instances of ExLS equation to which the application of the generalized Fine and Wilf's theorem (Theorem~\ref{thm:exFWlcm}) immediately proves the existence of a word $t$ satisfying $u, v, w \in \{t, \theta(t)\}^+$. We call such instances {\it trivial ExLS equations}. In Subsection~\ref{subsec:non-trivial_ExLS4}, we provide additional conditions which can be assumed for non-trivial ExLS equations. Several lemmas and propositions are also proved there. They are interesting in their own and our proof techniques for them probably include various applications beyond the investigation on the non-trivial ExLS equations in Subsection \ref{subsec:ExLS4_00} (the case when $u_2 = u_1$) and Subsection \ref{subsec:ExLS4_01} (the case when $u_2 \neq u_1$). In each of these subsections, we analyze four cases depending on the values of $u_3$ and $u_4$ one at a time. All of these proofs merely consist of direct applications of the results obtained so far and in Subsection~\ref{subsec:non-trivial_ExLS4}. In Subsection~\ref{subsec:ExLS3}, we prove that for $n, m \ge 2$, the triple $(3, n, m)$ does not impose $\theta$-periodicity. We provide several (parametrized) examples which verify that for some specific values of $n, m$, the triple $(3, n, m)$ does not impose $\theta$-periodicity. Our survey will expose complex behaviors of $(3, n, m)$ ExLS equations. \subsection{Problem setting for the ExLS equation $\ell = 4$} \label{subsec:ExLS4_setting} Taking the assumptions mentioned above into consideration, the problem which we are addressing is described as follows: \begin{problem}\label{prob:main} Let $u, v, w \in \Sigma^+$ and integers $n, m \ge 3$. Let $u_1, u_2, u_3, u_4 \in \{u, \theta(u)\}$, $v_1, \ldots, v_n \in \{v, \theta(v)\}$, and $w_1, \ldots, w_m \in \{w, \theta(w)\}$. Does the equation $u_1u_2u_3u_4 = v_1 \cdots v_n w_1 \cdots w_m$ imply $u, v, w \in \{t, \theta(t)\}^+$ for some $t \in \Sigma^+$ under all of the following conditions? \begin{arabiclist} \item\label{cond:te-primitive} $v$ and $w$ are $\theta$-primitive, \item\label{cond:symmetry} $|v_1 \cdots v_n| \ge |w_1 \cdots w_m|$, \item\label{cond:fixed_word} $u_1 = u$, $v_1 = v$, and $w_m = w$, \item\label{cond:length} $|v|, |w| < |u|$. \end{arabiclist} The condition~\ref{cond:symmetry} means that $2|u| \le n|v|$. Besides, the condition~\ref{cond:length} follows from the conditions~\ref{cond:te-primitive} and~\ref{cond:symmetry} as shown in the next lemma. \end{problem} \begin{lemma}\label{lem:uv_length} Let $u, v, w \in \Sigma^+$ such that $v$, $w$ are $\theta$-primitive. If $u_1u_2u_3u_4 = v_1 \cdots v_n w_1 \cdots w_m$ for some $n, m \ge 3$, $u_1, u_2, u_3, u_4 \in \{u, \theta(u)\}$, $v_1, \ldots, v_n \in \{v, \theta(v)\}$, and $w_1, \ldots, w_m \in \{w, \theta(w)\}$, then $|v| < |u|$ and $|w| < |u|$. \end{lemma} \begin{proof} Due to Condition~\ref{cond:symmetry}, $|v_1 \cdots v_n| \ge |w_1 \cdots w_m|$. This means that $m|w| \leq 2|u|$, which in turn implies $|w| \le \frac{2}{3}|u|$ because $m \geq 3$. Thus $|w| < |u|$. Now suppose that the ExLS equation held with $|v| \ge |u|$. Then $v_1 \cdots v_n$ is a prefix of $u_1u_2u_3u_4$ of length at least $3|v| \geq 2|v|+|u|$, and hence $u, v \in \{t, \theta(t)\}^+$ for some $\theta$-primitive word $t \in \Sigma^+$ due to Theorem~\ref{thm:exFWgcd}. Unless $|v| = |u|$, we reach the contradiction that $v$ would not be $\theta$-primitive. 
Even if $|v| = |u|$, we have $u_4 = w_1 \cdots w_m$. Therefore $v_1 = u_1$ could not be $\theta$-primitive. \end{proof} The next lemma reduces the number of steps required to prove a positive answer to Problem~\ref{prob:main}. \begin{lemma}\label{lem:two_enough} Under the setting of Problem~\ref{prob:main}, if $u, v \in \{t, \theta(t)\}^+$ for some $t \in \Sigma^+$, then $w \in \{t, \theta(t)\}^+$. \end{lemma} In fact, we can say more strongly that if two of $u, v, w$ are proved to be in $\{t, \theta(t)\}^+$ for some $t$, then the other one is also in this set. First of all, we distinguish the case in which the existence of such $t$ that $u, v, w \in \{t, \theta(t)\}^+$ is trivial due to the generalized Fine and Wilf theorem (Theorem \ref{thm:exFWlcm}). \begin{theorem}\label{thm:trivial} Under the setting of Problem~\ref{prob:main}, if there exists an index $i$, $1 \le i \le n$, such that $u_1u_2 = v_1 \cdots v_i$, then $u, v, w \in \{t, \theta(t)\}^+$ for some word $t \in \Sigma^+$. \end{theorem} \begin{proof} Since $v$ is assumed to be $\theta$-primitive, Theorem~\ref{thm:exFWlcm} implies $u \in \{v, \theta(v)\}^+$. Then $w \in \{v, \theta(v)\}^+$ due to Lemma~\ref{lem:two_enough} (in fact, $w \in \{v, \theta(v)\}$ because $w$ is also assumed to be $\theta$-primitive). \end{proof} If a given $(4, n, m)$ ExLS equation satisfies the condition in Theorem~\ref{thm:trivial}, then we say that this equation is {\it trivial}. Before initiating our study on non-trivial ExLS equations, we provide one important condition which makes the equation trivial according to the generalized Fine and Wilf theorem (Theorem \ref{thm:exFWgcd}). \begin{proposition}\label{prop:longenough} Under the setting of Problem~\ref{prob:main}, if $n|v| \ge 2|u|+|v|$, then the equation is trivial. \end{proposition} \begin{proof} We can employ Theorem~\ref{thm:exFWgcd} to obtain $u, v \in \{t, \theta(t)\}^+$ for some $t \in \Sigma^+$. In fact, $t$ is either $v$ or $\theta(v)$ because $v$ is assumed to be $\theta$-primitive. Hence we can find such $i$ stated in Theorem~\ref{thm:trivial}, and by definition this equation is trivial. \end{proof} \subsection{Non-trivial $(4, \ge 3, \ge 3)$ ExLS equations and related combinatorial results} \label{subsec:non-trivial_ExLS4} Now we shift our attention to the non-trivial $(4, \ge 3, \ge 3)$ ExLS equation. What we will actually prove here is that under the setting of Problem~\ref{prob:main}, any {\it non-trivial} equation cannot hold. Along with Theorem~\ref{thm:trivial}, this implies that $(4, \ge 3, \ge 3)$ imposes $\theta$-periodicity. >From this theorem and Proposition~\ref{prop:longenough}, the equation is {\it non-trivial} if and only if $(n-1)|v| < 2|u| < n|v|$. Thus, the next proposition, which was proposed in~\cite{CCKS09} to decrease the amount of case analyses for the $(5, \ge 3, \ge 3)$ ExLS equation, is still available for the investigation of non-trivial $(4, \ge 3, \ge 3)$ ExLS equations. \begin{proposition}[\cite{CCKS09}]\label{prop:CCKS09-a} Let $u, v \in \Sigma^+$ such that $v$ is $\theta$-primitive, $u_2, u_3 \in \{u, \theta(u)\}$, and $v_2, \ldots, v_n \in \{v, \theta(v)\}$ for some integer $n \ge 3$. If $vv_2 \cdots v_n \in \ensuremath{\mathrm{Pref}}(uu_2u_3)$ and $(n-1)|v| < 2|u| < n|v|$, then there are only two possible cases. \begin{arabiclist} \item $u_2 = \theta(u)$: and $v_2 = \cdots = v_n = v$ with $u\theta(u) = (pq)^{n-1}p$ and $v = pq$ for some non-empty $\theta$-palindromes $p, q$. 
\item $u_2 = u$: $n$ is even, $v_2 = \cdots = v_{n/2} = v$, and $v_{n/2+1} = \cdots = v_n = \theta(v)$ with $v = r(tr)^i (rt)^{i+j}r$ and $u = v^{n/2-1} r(tr)^i (rt)^j$ for some $i \ge 0$, $j \ge 1$, and non-empty $\theta$-palindromes $r, t$ such that $rt$ is primitive. \end{arabiclist} \end{proposition} This proposition helps in proving that non-trivial $(4, \ge 3, \ge 3)$ ExLS equations verify the one more condition that $|v| \neq |w|$ as shown in the next proposition. \begin{proposition}\label{prop:different_length} Non-trivial ExLS equations under the setting of Problem~\ref{prob:main} imply $|v| \neq |w|$. \end{proposition} \begin{proof} Suppose that the equation were non-trivial with $|v| = |w|$. Combining $|v|=|w|$ and the non-trivial length condition together implies $m = n-1$ and furthermore the border between $u_2$ and $u_3$ splits $v_{n}$ into exactly halves. Hence if $u_3 = \theta(u_2)$, then $v_n = x\theta(x)$ for some $x \in \Sigma^+$, contradicting the $\theta$-primitivity of $v$. Besides, due to the condition~\ref{cond:length} of Problem~\ref{prob:main}, if $u_4 = \theta(u_1)$, then $w = \theta(v)$, and hence $u_1u_2u_3u_4 \in \{v, \theta(v)\}^+$. Taking $(n-1)|v| < 2|u| < n|v|$ into account, this implies that $v$ is not $\theta$-primitive, raising a contradiction. Therefore, the only possible solutions verify $u_3 = u_2$ and $u_4 = u_1 = u$. If $u_2 = u_3 = u$, then according to Proposition~\ref{prop:CCKS09-a}, $n$ is even, and by substituting the representations of $u$ and $v$ given there into $u^4 = v^{n/2}\theta(v)^{n/2} w_1 \cdots w_m$, we obtain that $w_1 \cdots w_m = (tr)^j [r(tr)^i r(tr)^{i+j}]^{n/2-1} [r(tr)^{i+j}r(tr)^i]^{n/2-1} (rt)^j$, which is a $\theta$-palindrome of even length. Since $w$ is $\theta$-primitive, $m$ has to be even (Lemma~\ref{lem:pali_even}). It is however impossible because $m = n-1$ and $n$ is even. If $u_2 = u_3 = \theta(u)$, then Proposition~\ref{prop:CCKS09-a} gives $v = pq$ and $u_1u_2 = u\theta(u) = (pq)^{n-1}p$ for some $\theta$-palindromes $p, q \in \Sigma^+$. Note that the left side of the ExLS equation is as long as its right side ($4|u| = n|v| + m|w| = (2n-1)|pq|$). Substituting $2|u| = (n-1)|pq| + |p|$ into this yields $|p| = |q|$ and it in turn implies that both $p$ and $q$ are of even length. Let $p = p'\theta(p')$ and $q = q'\theta(q')$ for some $p', q' \in \Sigma^+$ of the same length. Then $u_1 = u$ ends with either $\theta(p')qp'$ or $\theta(q')pq'$, and so $w_m$ is either of them. However, neither is $\theta$-primitive. This contradiction proves that the equation is trivial. \end{proof} Supposing that some non-trivial $(4, \ge 3, \ge 3)$ ExLS equation held, the next claim would follow from this proposition. Although our conclusion in this section will prove that this claim cannot hold, the equation proposed there, $u_3u_4 = q w_1 \cdots w_m$, or more generally the relation $q w_1 \cdots w_m \in \{u, \theta(u)\}^{\ge 2}$ provides in its own right challenging themes. \begin{claim}\label{claim:non-trivial} Under the setting of Problem \ref{prob:main}, if the ExLS equation were non-trivial, then we would have $u_3u_4 = q w_1 \cdots w_m$ for some non-empty $\theta$-palindrome $q$. \end{claim} \begin{proof} According to the presentations of $u$ and $v$ given in Proposition~\ref{prop:CCKS09-a}, if $u_2 = \theta(u)$, then $u\theta(u)q = v^n$ and hence $u_3u_4 = q w_1 \cdots w_m$; otherwise, $uu [r(tr)^i]^2 = v^{n/2} \theta(v)^{n/2}$ so that $u_3u_4 = [r(tr)^i]^2 w_1 \cdots w_m$. 
Since $q, r, t$ are $\theta$-palindromes, this claim holds. \end{proof} As we shall see soon in Claim~\ref{claim:u3nequ4}, the next lemma is of use when considering non-trivial ExLS equations with $u_3 \neq u_4$, that is, $u_3u_4$ being a $\theta$-palindrome. \begin{lemma}\label{lem:pali_pref_pali} Let $p, q$ be non-empty $\theta$-palindromes and let $w$ be a $\theta$-primitive word. For some $k \ge 1$ and words $w_1, \ldots, w_k \in \{w, \theta(w)\}$, if $p = q w_1 \cdots w_k$ holds, then either $p, q \in \{w, \theta(w)\}^+$ or $w_1 = \cdots = w_k$. \end{lemma} \begin{proof} First we prove that $q \in \ensuremath{\mathrm{Suff}}((w_1 \cdots w_k)^+)$. Since $w_1 \cdots w_k \in \ensuremath{\mathrm{Suff}}(p)$, $p$ being a $\theta$-palindrome implies $\theta(w_1 \cdots w_k) \in \ensuremath{\mathrm{Pref}}(p)$. Thus if $|q| \le k|w|$, then $q \in \ensuremath{\mathrm{Pref}}(\theta(w_1 \cdots w_k))$, that is, $q \in \ensuremath{\mathrm{Suff}}(w_1 \cdots w_k)$ and we are done. Otherwise, $w_1 \cdots w_k \in \ensuremath{\mathrm{Suff}}(q)$ so that $(w_1 \cdots w_k)^2 \in \ensuremath{\mathrm{Suff}}(p)$. By repeating this process, eventually we will find some integer $i \ge 1$ such that $q \in \ensuremath{\mathrm{Suff}}((w_1 \cdots w_k)^i)$. If $q \in \{w, \theta(w)\}^+$, then obviously $p \in \{w, \theta(w)\}^+$. Otherwise, let $q = w'w_{j+1}\cdots w_k (w_1 \cdots w_k)^i$ for some $1 \leq j \leq k$ and $i \geq 0$, where $w'$ is a non-empty proper suffix of $w_{j}$. Then, $p = w'w_{j+1}\cdots w_k (w_1 \cdots w_k)^{i+1}$ overlaps in a non-trivial way with $p = \theta(p) = (\theta(w_k)\cdots \theta(w_1))^{i+1}\theta(w_k)\cdots \theta(w_{j+1})\theta(w')$, and Theorem~\ref{thm:overlap3} implies that $w_1 = \cdots = w_k$. \end{proof} \begin{claim}\label{claim:u3nequ4} Under the setting of Problem \ref{prob:main}, if the ExLS equation were non-trivial and $u_3 \neq u_4$, then $w_1 = \cdots = w_m = w$ and $u_3u_4 \in \ensuremath{\mathrm{Suff}}(w^+)$. \end{claim} \begin{proof} We have $u_3u_4 = x w_1 \cdots w_m$ for some non-empty $\theta$-palindrome $x \in \Sigma^+$ due to Proposition~\ref{prop:CCKS09-a}. As suggested before, we can employ Lemma~\ref{lem:pali_pref_pali} to get either $x, u_3u_4 \in \{w, \theta(w)\}^+$ or $w_1 = \cdots = w_m$. In the first case, Theorem~\ref{thm:exFWlcm} implies $u \in \{w, \theta(w)\}^+$ because $w$ is assumed to be $\theta$-primitive. Then the ExLS equation in turn implies that $v_1 \cdots v_n \in \{w, \theta(w)\}^+$ and hence $v \in \{w, \theta(w)\}$ for the same reason. As a result the equation would be trivial. Consequently $w_1 = \cdots = w_m$. \end{proof} The main strategy used in the analyses of non-trivial ExLS equations is to split $w_1 \cdots w_m$ into smaller components which are still in $\{w, \theta(w)\}^+$, until we reach a contradiction. The split is mainly achieved by Propositions~\ref{prop:pali_split} and \ref{prop:clean_split}. Note that the word to which Proposition~\ref{prop:clean_split} is applied must be primitive. The next two lemmas work for this purpose in Subsection~\ref{subsec:ExLS4_00}, but we provide them in more general form. An interesting point is that Lyndon and Sch\"{u}tzenberger's original result (Theorem~\ref{thm:original}) plays an essential role in their proofs; hence for the ExLS equation. \begin{proposition}\label{prop:rt2_prime_present1} Let $r, t \in \Sigma^+$ such that $rt$ is primitive. For any $i \ge 0$, $j, k \ge 1$, and $n \ge 2$, $(tr)^j [(r(tr)^i)^n (tr)^j]^k$ is primitive. 
\end{proposition} \begin{proof} Suppose that the given word were not primitive; namely, for some $\ell \ge 2$ and a primitive word $x$, let $(tr)^j [(r(tr)^i)^n (tr)^j]^k = x^\ell$. Catenating $(r(tr)^i)^n$ to the left to the both sides of this equation gives $[(r(tr)^i)^n (tr)^j]^{k+1} = (r(tr)^i)^n x^\ell$. As $k \geq 1$ and $n, \ell \geq 2$, we can apply Theorem \ref{thm:original} to this equation to obtain $\rho((r(tr)^i)^n (tr)^j) = \rho(r(tr)^i) = x$. Using Lemma~\ref{lem:rt_rootshare}, one can obtain $\rho((tr)^j) = x$, and furthermore, $\rho(tr) = x$. Combining this with $\rho(r(tr)^i) = x$ gives us $\rho(r) = \rho(t)$ and hence $rt$ would not be primitive, which contradicts the hypotheses. \end{proof} \begin{proposition}\label{prop:rt2_prime_present2} Let $r, t \in \Sigma^+$ such that $rt$ is primitive. For any $i \ge 0$, $j, k, m \ge 1$, $(tr)^j [(r(tr)^i)^m (tr)^j]^{k-1} (r(tr)^i)^{m-1} (rt)^j$ is primitive. \end{proposition} \begin{proof} Suppose that we had $(tr)^j [(r(tr)^i)^m (tr)^j]^{k-1} (r(tr)^i)^{m-1} (rt)^j = x^\ell$ for some primitive word $x$ and $\ell \ge 2$. Catenating $(r(tr)^i)^{m+1}$ to the right to the both sides of this equation gives $[(tr)^j (r(tr)^i)^m]^{k+1} = x^\ell (r(tr)^i)^{m+1}$. Now as in the proof of Proposition~\ref{prop:rt2_prime_present1}, we reach the contradicting conclusion that $rt$ is not primitive. \end{proof} There are some results which can be used for the splitting strategy, once we apply Proposition~\ref{prop:CCKS09-a} to non-trivial ExLS equations with $u_1 \neq u_2$, which will be considered in Subsection \ref{subsec:ExLS4_01}. As before, they are provided in more general form than required for the purpose. \begin{lemma}\label{lem:zp-nonprime} Let $z, w \in \Sigma^+$ with $|z| < |w|$ and let $p$ be a $\theta$-palindrome. If $zp = w^n$ for some $n \ge 2$, then $z = \theta(z)$. \end{lemma} \begin{proof} Let $w = zy$ for some $y \in \Sigma^+$. Then $p = y(zy)^{n-1}$, from which we can obtain $y = \theta(y)$ and $z = \theta(z)$ because $p = \theta(p)$ and $n-1 \ge 1$. \end{proof} \begin{proposition}\label{prop:uteuqnu} Let $x$ be a $\theta$-primitive word, $u \in \Sigma^+$, and $q$ be a non-empty $\theta$-palindrome. If for some $n \ge 2$ and $\ell \ge 1$, $u[\theta(u)q^n u]^\ell \in \{x, \theta(x)\}^{\ge 2}$, then $u, q \in \{x, \theta(x)\}^+$. \end{proposition} \begin{proof} Let $u[\theta(u)q^n u]^\ell = x_1 \cdots x_m$ for some $m \ge 2$ and $x_1, \ldots, x_m \in \{x, \theta(x)\}$. Let $u = x_1 \cdots x_{k-1}z_1$ and $[\theta(u)q^n u]^\ell = z_2 x_{k+1} \cdots x_m$ for some $1 \le k \le m$ with $x_k = z_1 z_2$ and $z_1 \neq \lambda$, i.e. $|z_2| < |x|$. If $z_2 = \lambda$, then $u, [\theta(u)q^n u]^\ell \in \{x, \theta(x)\}^+$. Lemma~\ref{lem:rt_th-rootshare} implies $\theta(u)q^n u \in \{x, \theta(x)\}^+$ and the same lemma further gives $q^n \in \{x, \theta(x)\}^+$, that is, $q \in \{x, \theta(x)\}^+$. Now we prove that $z_2$ cannot be non-empty. Without loss of generality, we assume $x_m = x$. So suppose $z_2 \neq \lambda$ ($0 < |z_1| < |x|$). We can apply Lemma~\ref{lem:pal_x1-xn} to $z_1[\theta(u)q^nu]^\ell = x_k \cdots x_m$ to get $x_{k+1} = \cdots = x_m = x$ because $[\theta(u)q^n u]^\ell$ is a $\theta$-palindrome and $|z_1| < |x|$. Thus if $|x| \le |u|$, then $|z_2| < |u|$ and so $[\theta(u)q^n u]^\ell = z_2 x^{k-1}$ gives $x \in \ensuremath{\mathrm{Suff}}(u)$ and hence $\theta(x) \in \ensuremath{\mathrm{Pref}}(\theta(u))$. 
These further imply that $x \in \ensuremath{\mathrm{Suff}}(x_1 \cdots x_{k-1}z_1)$ and $\theta(x) \in \ensuremath{\mathrm{Pref}}(z_2 x_{k+1} \cdots x_m)$. Thus $x\theta(x)$ is a proper infix of $x_{k+1}x_kx_{k-1}$, which is in contradiction with the $\theta$-primitivity of $x$ by Theorem~\ref{thm:overlap3}. Therefore, $|x| > |u|$, which means $k = 1$, that is, we have $x_2 = \cdots = x_m = x$. Note that $x \neq \theta(x)$ must hold because of $z_2 x^{m-1}$ being a $\theta$-palindrome, $0 < |z_2| < |x|$ and $x$ is primitive (and cannot be a proper infix of its square). If $x_1 = \theta(x)$, then $u \in \ensuremath{\mathrm{Pref}}(\theta(x)) \cap \ensuremath{\mathrm{Suff}}(x)$ holds and so $u = \theta(u)$. Now Lemma~\ref{lem:pal_x1-xn} would imply $x_1 = x$, which contradicts $x \neq \theta(x)$. Otherwise ($x_1 = x$), $u[\theta(u)q^n u]^\ell = x^m$ and from this Lemma~\ref{lem:zp-nonprime} derives $u = \theta(u)$. Then we have $u(uq^nu)^\ell = x^m$; in other words, $(uq^nu)^{\ell+1}$ and $x^m$ share a suffix of length at least $\eta = \max(m|x|,\ell|uq^nu|)$. If $\ell \ge 2$, then $\eta \geq |x| + |uq^nu|$, and the Fine and Wilf theorem implies $\rho(uq^nu) = x$. With $u(uq^nu)^\ell = x^m$, this implies $\rho(u) = x$. However, this contradicts $|u| < |x|$. If $\ell = 1$, then $uuq^nu = x^m$. Using cyclic permutation, we obtain $u^3q^n = x'^m$, where $x'$ is a conjugate of $x$. This is of the form of LS equation, and Theorem~\ref{thm:original} concludes $\rho(u) = \rho(q) = x'$. Now we reached the same contradiction because $|x'| = |x|$. \end{proof} \begin{lemma}\label{lem:u2qwm2} Let $w$ be a $\theta$-primitive word, and $w_1, \ldots, w_m \in \{w, \theta(w)\}$ for some $m \ge 2$. Let $u, q \in \Sigma^+$ such that $q$ is a $\theta$-palindrome with $|q| < |u|$. If $u^2 = q w_1 \cdots w_m$, then either $u, q \in \{w, \theta(w)\}^+$ or $u = qr$ for some non-empty $\theta$-palindrome $r$. \end{lemma} \begin{proof} It is trivial that the case $u, q \in \{w, \theta(w)\}^+$ is possible. Hence assume that $u, q \not\in \{w, \theta(w)\}^+$. Without loss of generality, we can also assume that $w_m = w$. Let $u = qr$ for some $r \in \Sigma^+$. Then $rqr = w_1 \cdots w_m$. We prove that $r$ is a $\theta$-palindrome. Let $r = w_1 \cdots w_{k-1} z_1 = z_2 w_{m-k+2} \cdots w_m$ for some $k \ge 1$, where $z_1 \in \ensuremath{\mathrm{Pref}}(w_{k})$ and $z_2 \in \ensuremath{\mathrm{Suff}}(w_{m-k+1})$ with $|z_1| = |z_2| < |w|$. If $z_1 = \lambda$, then $r \in \{w, \theta(w)\}^+$ and then $rqr = w_m \cdots w_1$ implies $q \in \{w, \theta(w)\}^+$ by Lemma~\ref{lem:rt_th-rootshare}, but this contradicts the assumption. Thus $z_1 \neq \lambda$. Then we have two cases, $k \ge 2$ and $k = 1$. Lemma~\ref{lem:overlap2} (for $k = 2$) or Theorem~\ref{thm:overlap3} (for $k \ge 3$) works to give $w_1 = \cdots = w_{k-1} = \theta(w)$ and $w_{m-k+2} = \cdots = w_m = w$. Thus, $z_2 = \theta(z_1)$ and hence $r = \theta(r)$. Even for $k = 1$, if $w_1 \neq w_m$, then $r \in \ensuremath{\mathrm{Pref}}(\theta(w)) \cap \ensuremath{\mathrm{Suff}}(w)$ so that $r = \theta(r)$. Otherwise $w = r q_p = q_s r$ for some $q_p \in \ensuremath{\mathrm{Pref}}(q)$ and $q_s \in \ensuremath{\mathrm{Suff}}(q)$. Since $q = \theta(q)$, $q_s = \theta(q_p)$ so that we have $r q_p = \theta(q_p) r$. According to Proposition~\ref{prop:th-commute}, $r = \theta(r)$. \end{proof} \begin{proposition}\label{prop:u2qwm3odd} Let $w$ be a $\theta$-primitive word, and $w_1, \ldots, w_m \in \{w, \theta(w)\}$ for some odd integer $m \ge 3$. 
Let $u, q \in \Sigma^+$ such that $q$ is a $\theta$-palindrome with $|q| < |u|$. If $u^2 = q w_1 \cdots w_m$, then $w = \theta(w)$. If additionally $|u| \ge 2|q|$ holds, then $\rho(u) = \rho(q) = w$. \end{proposition} \begin{proof} Lemma~\ref{lem:u2qwm2} implies that either $q, u \in \{w, \theta(w)\}^+$ or $u = qr$ for some non-empty $\theta$-palindrome $r$. In the former case, let $u \in \{w, \theta(w)\}^k$ for some $k \ge 1$ and we can see $q \in \{w, \theta(w)\}^{2k-m}$ and $2k-m$ is odd because $m$ is odd. Then $q = \theta(q)$ implies $w = \theta(w)$, and hence $u, q \in w^+$. In the latter case, we have $rqr = w_1 \cdots w_m$. This implies $w_{(m+1)/2} = \theta(w_{(m+1)/2})$ (i.e, $w = \theta(w)$) because $rqr$ is a $\theta$-palindrome and $m$ is odd. Now we consider the additional hypothesis $|u| \ge 2|q|$. Since $2|u| = |q| + m|w|$, $|u| = (|q|+m|w|)/2 \geq 2|q|$, which leads to $|q| \leq \frac{1}{3}m|w|$. As seen above, $rqr = w^m$, hence $|r| = (m|w|-|q|)/2 \geq \frac{1}{3}m|w| \geq |w|$ as $m \ge 3$. With this, the equation $rqr = w^m$ gives $r = w^k w_p' = w_s' w^k$ for some $k \ge 1$, $w_p' \in \ensuremath{\mathrm{Pref}}(w)$, and $w_s' \in \ensuremath{\mathrm{Suff}}(w)$. Since $w$ is primitive, $w_p'$ and $w_s'$ have to be empty. Consequently $\rho(r) = \rho(q) = w$ and hence $\rho(u) = w$ by using Lemma~\ref{lem:rt_th-rootshare}. \end{proof} \subsection{ExLS equation of the form $u^2u_3u_4 = v_1 \cdots v_n w_1 \cdots w_m$} \label{subsec:ExLS4_00} In this subsection, we prove that an ExLS equation of the form $u^2u_3u_4 = v_1 \cdots v_n w_1 \cdots w_m$ implies that $u, v, w \in \{t, \theta(t)\}^+$ for some $t \in \Sigma^+$. We have already seen that for this purpose it suffices to show that any non-trivial equation of this form cannot hold. Recall that we assumed $u_1 = u$, $v_1 = v$, and $w_m = w$, and that Proposition~\ref{prop:different_length} allows us to assume $|v| \neq |w|$. We can apply Proposition~\ref{prop:CCKS09-a} to the non-trivial equation to obtain that $n$ is an even integer except 2, $v_1 = \cdots = v_{n/2} = v$ and $v_{n/2+1} = \cdots = v_n = \theta(v)$ (i.e., $v_1 \cdots v_n$ is a $\theta$-palindrome), $u = [r(tr)^i r(tr)^{i+j}]^{n/2-1} r(tr)^i (rt)^j$, and $v = r(tr)^i r(tr)^{i+j}$ for some $i \ge 0$, $j \ge 1$, and non-empty $\theta$-palindromes $r, t$ such that $rt$ is primitive. Actually $rt$ has to be $\theta$-primitive due to Corollary~\ref{cor:pq2_te-prime} because $v \in \{r, t\}^+$ is assumed to be $\theta$-primitive. Let us now study all possible values of $u_3u_4$. \begin{proposition} Under the setting of Problem \ref{prob:main}, if $u_1u_2u_3u_4 = u^4$, then $u, v, w \in \{t, \theta(t)\}^+$ for some $t \in \Sigma^+$. \end{proposition} \begin{proof} According to the representations of $u$ and $v$ in terms of $r$ and $t$, we obtain \[ w_1 \cdots w_m = (tr)^j [(r(tr)^i)^2 (tr)^j]^{n/2-1} [(rt)^j (r(tr)^i)^2]^{n/2-1} (rt)^j\enspace. \] This expression is a $\theta$-palindrome of even length and hence $m$ has to be even (Lemma~\ref{lem:pali_even}). Therefore, $w_1 \cdots w_{m/2} = [(tr)^j (r(tr)^i)^2]^{n/2-1} (tr)^j$, and this was proved to be primitive in Proposition~\ref{prop:rt2_prime_present1}. Moreover, its right hand side is the catenation of two $\theta$-palindromes $p_1 = (tr)^j [r(tr)^i r(tr)^{i+j}]^{n/2-2} r(tr)^i (rt)^j$ and $p_2 = r(tr)^i$. Proposition~\ref{prop:clean_split} gives $p_2 = r(tr)^i \in \{w, \theta(w)\}^+$. 
Furthermore, applying Proposition~\ref{prop:pali_split} to $p_1 p_2 = (tr)^j [r(tr)^i r(tr)^{i+j}]^{n/2-2} r(tr)^i \cdot p_2 \cdot (tr)^j$ gives $(tr)^j \in \{w, \theta(w)\}^+$. Finally Lemma~\ref{lem:rt_th-rootshare} derives $r, t \in \{w, \theta(w)\}^+$ from $r(tr)^i, (tr)^j \in \{w, \theta(w)\}^+$, but this contradicts the $\theta$-primitivity of $rt$. As a result, there are no solutions to the non-trivial equation. \end{proof} \begin{proposition}\label{prop:LS4_0001} Under the setting of Problem \ref{prob:main}, if $u_1u_2u_3u_4 = u^3\theta(u)$, then $u, v, w \in \{t, \theta(t)\}^+$ for some $t \in \Sigma^+$. \end{proposition} \begin{proof} Since $u_4$ is $\theta(u)$ instead of $u$, we have $w_1 \cdots w_m = x^2 (r(tr)^i)^2$, where $x = (tr)^j [(r(tr)^i)^2 (rt)^j]^{n/2-1} r(tr)^i (rt)^j$. Claim~\ref{claim:u3nequ4} gives that $w_1 = \cdots = w_m = w$, and hence $w^m = x^2 (r(tr)^i)^2$. This is a classical LS equation; thus Theorem~\ref{thm:original} is applicable to conclude that $\rho(x) = \rho(r(tr)^i)$. However, this contradicts the primitivity of $x$ obtained in Proposition~\ref{prop:rt2_prime_present2} because $|x| > |r(tr)^i|$. \end{proof} \begin{proposition} Under the setting of Problem \ref{prob:main}, if $u_1u_2u_3u_4 = u^2\theta(u)u$, then $u, v, w \in \{t, \theta(t)\}^+$ for some $t \in \Sigma^+$. \end{proposition} \begin{proof} Since $u_3 \neq u_4$, $w_1 = \cdots = w_m = w$ due to Claim~\ref{claim:u3nequ4}. Using the representations of $u$ and $v$ by $r$ and $t$, we can see that $u_3u_4 = \theta(u)u$ is equal to both sides of the following equation: \[ (tr)^j r(tr)^i [(rt)^j (r(tr)^i)^2]^{n/2-1} [(r(tr)^i)^2 (tr)^j]^{n/2-1} r(tr)^i (rt)^j = (r(tr)^i)^2 w^m\enspace. \] By catenating $(r(tr)^i)^4$ to the left of both sides, we get $(r(tr)^i)^6 w^m = x^2$, where $x = (r(tr)^i)^2 [(r(tr)^i)^2 (tr)^j]^{n/2-1} r(tr)^i (rt)^j$. Then, Theorem~\ref{thm:original} implies that $\rho(x) = \rho(r(tr)^i) = w$. Since $x$ contains $r(tr)^i$ as its infix, the share of primitive root between $x$ and $r(tr)^i$ gives $\rho(r(tr)^i) = \rho((rt)^j)$. We deduce from this using Lemma~\ref{lem:rt_rootshare} that $rt$ would not be primitive, which contradicts our hypothesis. \end{proof} \begin{proposition} Under the setting of Problem \ref{prob:main}, if $u_1u_2u_3u_4 = u^2\theta(u)^2$, then $u, v, w \in \{t, \theta(t)\}^+$ for some $t \in \Sigma^+$. \end{proposition} \begin{proof} Recall that $v_1 \cdots v_n$ is a $\theta$-palindrome. Since $u^2\theta(u)^2$ is a $\theta$-palindrome, $\theta(w_1 \cdots w_m)$ is one of its prefixes and the assumption $|w_1 \cdots w_m| < |v_1 \cdots v_n|$ implies that $\theta(w_1 \cdots w_m) \in \ensuremath{\mathrm{Pref}}(v_1 \cdots v_n)$. Hence $w_1 \cdots w_m \in \ensuremath{\mathrm{Suff}}(v_1 \cdots v_n)$ and now we have $(w_1 \cdots w_m)^2 \in \ensuremath{\mathrm{Suff}}(u^2\theta(u)^2)$. We prove that this suffix is long enough to apply the extended Fine and Wilf theorem. Since $(n-1)|v| < 2|u|$ and $n \geq 4$, we have $|v| < \frac{2}{3} |u|$ and, in turn, $n|v| < 2|u| + \frac{2}{3} |u| = \frac{8}{3} |u|$. From this we obtain $m|w| > \frac{4}{3}|u|$. Then, $2m|w| - (|w|+2|u|) > (2m-1)|w| - \frac{3}{2}m|w| = (\frac{1}{2}m-1)|w| > 0$ since $m \ge 3$. Thus, $u^2 \theta(u)^2$ and $(w_m \cdots w_1)^2$ share a suffix of length at least $2|u|+|w|$ and Theorem~\ref{thm:exFWgcd} concludes that $u \in \{w, \theta(w)\}^+$ because $w$ is $\theta$-primitive. 
Now it is clear that also $v \in \{w, \theta(w)\}^+$, but in fact $v \in \{w, \theta(w)\}$ must hold because $v$ is also $\theta$-primitive. However this contradicts the assumption that $|v| \neq |w|$. \end{proof} \subsection{ExLS equation of the form $u\theta(u)u_3u_4 = v_1 \cdots v_n w_1 \cdots w_m$} \label{subsec:ExLS4_01} Note that in the following propositions, we consider only the non-trivial equations; hence Proposition~\ref{prop:different_length} allows to assume $|v| \neq |w|$. Using Proposition~\ref{prop:CCKS09-a}, $u\theta(u) = (pq)^{n-1}p$ and $v_1 = \cdots = v_n = v = pq$ for some non-empty $\theta$-palindromes $p, q$. Unlike the case considered before, in the current case $n$ can be odd. In fact, if $n$ is odd, then $u = (pq)^{(n-1)/2}y$, where $p = y\theta(y)$ for some $y \in \Sigma^+$; while if $n$ is even, then $u = (pq)^{n/2-1}px$, where $q = x\theta(x)$ for some $x \in \Sigma^+$. Again, we consider the four cases associated with the four possible values of $u_3u_4$. The last two, $u_3 = u_4 = u$ and $u_3 = u_4 = \theta(u)$, are merged and studied in two separate propositions depending on the parity of $m$ instead. \begin{proposition} Under the setting of Problem \ref{prob:main}, if $u_1u_2u_3u_4 = u\theta(u)u\theta(u)$, then $u, v, w \in \{t, \theta(t)\}^+$ for some $t \in \Sigma^+$. \end{proposition} \begin{proof} In this setting, $u_3u_4 = u\theta(u) = q w_1 \cdots w_m$. Since both $u\theta(u)$ and $q$ are $\theta$-palindromes, we can employ Claim~\ref{claim:u3nequ4} to obtain $w_1 = \cdots = w_m = w$. Now the equation turns into the LS equation $(u\theta(u))^2 = v^n w^m$, and hence $\rho(v) = \rho(w)$ due to Theorem~\ref{thm:original}. Both $v$ and $w$ being primitive, this contradicts the assumption $|v| \neq |w|$ and consequently the existence of non-trivial solutions. \end{proof} \begin{proposition} Under the setting of Problem \ref{prob:main}, if $u_1u_2u_3u_4 = u\theta(u)\theta(u)u$, then $u, v, w \in \{t, \theta(t)\}^+$ for some $t \in \Sigma^+$. \end{proposition} \begin{proof} Recall that $u\theta(u) = (pq)^{n-1}p$. Claim~\ref{claim:u3nequ4} implies that $\theta(u)u = q w^m$ with $q = w' w^{k-1}$ for some $1 \le k \le m$ and a non-empty proper suffix $w'$ of $w$. {\bf Case 1 ($n$ is odd)}: Then we have $\theta(u)u = qw^m = x_s x$, where $x_s = \theta(y) q(pq)^{(n-1)/2-1} y$ and $x = \theta(y) (pq)^{(n-1)/2} y$; note that $x_s \in \ensuremath{\mathrm{Suff}}(x)$. One can easily calculate that $|w| = \frac{1}{m}[n|p| + (n-2)|q|]$ and $|x_s| = \frac{1}{2}(n-1)(|p| + |q|)$, and hence $|x_s|-|w| = \frac{(m-2)(n-1)-2}{2m}|p|+\frac{(m-2)(n-1)+2}{2m}|q|$, which is positive because $n, m \ge 3$. Thus we can say that $x^2$ and $w^{m+k}$ share a prefix of length at least $|x|+|w|$ so that by the Fine and Wilf theorem, $\rho(x) = \rho(w) = w$. Starting from $\theta(y)yqw^m = \theta(y)yx_sx = x^2$, we can verify that $2|x|- m|w| = |pq|$, that is, $|pq|$ is a multiple of $|w|$. The suffix of $x$ of length $|pq|$ is $\theta(y)qy$, which is $w^j$ for some $j \ge 2$ because $|pq| = |v| \neq |w|$. Therefore, this conjugate of $v$ is not primitive, either. This is a contradiction with the $\theta$-primitivity of $v$. {\bf Case 2 ($n$ is even)}: In this case, $u = (pq)^{n/2-1}px$ for some $x \in \Sigma^+$ such that $q = x\theta(x)$. Substituting this into $\theta(u)u = qw^m$ gives \begin{equation}\label{eq:0110-1} [\theta(x)px]^{n/2-1} \theta(x)p^2 x [\theta(x)px]^{n/2-1} = x\theta(x) w^m. 
\end{equation} From this equation, we can obtain $x = \theta(x)$ and hence $px = xz$ for some $z \in \Sigma^+$. If $|x| \ge |p|$, then Lemma \ref{lem:pq-qp} implies $\rho(x) = \rho(p)$ so that $v = pq = px^2$ would not be primitive. Hence $|x| < |p|$ must hold and under this condition, the solution of $px = xz$ is given by $p = xy$ and $z = yx$ for some $y \in \Sigma^+$. Since $p = \theta(p)$, we have $p = xy = \theta(y)x$. Proposition \ref{prop:th-commute} gives $x = r(tr)^i$ and $y = (tr)^j$ for some $i \ge 0$, $j \ge 1$, and $\theta$-palindromes $r, t$ such that $rt$ is primitive. Both of $r$ and $t$ should be non-empty; otherwise, $\rho(p) = \rho(x)$ and $v = pq = px^2$ would not be primitive. Substituting these into Eq.~(\ref{eq:0110-1}) yields the following equation. \[ \bigl[(tr)^j r(tr)^i [r(tr)^i r(tr)^{i+j} r(tr)^i]^{n/2-1} \bigr]^2 = w^m. \] Since $w$ is $\theta$-primitive, this equation means that $m$ has to be even. Then $w^{m/2} = (tr)^j r(tr)^i [r(tr)^i r(tr)^{i+j} r(tr)^i]^{n/2-1}$. By catenating $(r(tr)^i)^2$ from the left to the both sides of this equation, we obtain an LS equation $[r(tr)^i]^2 w^{m/2} = [r(tr)^i r(tr)^{i+j} r(tr)^i]^{n/2}$. Theorem \ref{thm:original} gives $\rho(r(tr)^i) = \rho(r(tr)^i r(tr)^{i+j} r(tr)^i)$ and Lemma \ref{lem:rt_rootshare} reduces it to $\rho(r) = \rho(t)$, but this contradicts the primitivity of $pq = r(tr)^{i+j} (r(tr)^i)^2$. \end{proof} \begin{proposition} Under the setting of Problem \ref{prob:main}, if $u_1u_2 = u\theta(u)$, $u_3 = u_4$, and $m$ is odd, then $u, v, w \in \{t, \theta(t)\}^+$ for some $t \in \Sigma^+$. \end{proposition} \begin{proof} We have $u_3 u_4 = q w_1 \cdots w_m$. Since $u_3 = u_4$ and $|q| < |u|$, we can employ Proposition~\ref{prop:u2qwm3odd} to obtain $w = \theta(w)$. Moreover, when $n \geq 5$, we have $|u| \ge 2|q|$ and the proposition also gives $\rho(u_3) = \rho(q) = w$. Since $w = \theta(w)$, we can see that $\rho(u) = w$. Then $\rho(p) = w$ because $\rho(u) = \rho(q) = w$ and $pq \in \ensuremath{\mathrm{Pref}}(u)$. However, $\rho(p) = \rho(q)$ means that $v = pq$ would not be even primitive. Therefore in the following let $n$ be either 3 or 4. First we consider the case when $u_3 = u$. Then we have either $(pqy)^2 = qw^m$ (when $n = 3$) where $p = y\theta(y)$, or $(pqpx)^2 = qw^m$ (when $n = 4$) where $q = x\theta(x)$, for some $x,y \in \Sigma^+$. In both cases, if $|p| \le |q|$, Lemma~\ref{lem:pq-qp} can be applied and we have $\rho(p) = \rho(q)$, so $v = pq$ would not be even primitive. Hence $|p| > |q|$ must hold, but then $|u| \ge 2|q|$ and then Proposition~\ref{prop:u2qwm3odd} implies $\rho(p) = \rho(q)$. Next we consider the case when $u_3 = \theta(u)$ and $n = 3$. Then $\theta(u) = \theta(y)qp$ so that $\theta(y)qp\theta(y)qp = qw^m$. Let $\theta(y)q = qz$ for some $z$ with $|y| = |z|$. Using $pq = y\theta(y)q = yqz$, from $\theta(y)qp\theta(y)qp = qw^m$ we can obtain $zyqzzy\theta(y) = w^m$. Since $w = \theta(w)$, this equation gives $z = y = \theta(y)$. Then $\theta(y)q = qz$ turns into $yq = qy$ and hence $\rho(y) = \rho(q)$ by Theorem~\ref{th:uv-expr}. This however implies that $v = y\theta(y)q$ would not be $\theta$-primitive. Finally we consider the case when $u_3 = \theta(u)$ and $n = 4$. Then we have $[\theta(x)pqp]^2 = qw^m$, which gives $x = \theta(x)$ because $q = x\theta(x)$. Then $\theta(u)^2 = x^2 w^m$, which is an LS equation and Theorem~\ref{thm:original} implies $\rho(\theta(u)) = \rho(x) = w$. 
However since $x^2p = qp \in \ensuremath{\mathrm{Suff}}(\theta(u))$, we also get $\rho(p) = w$ (otherwise $w$ would be a proper infix of its square in $x^2$). This leads to the usual contradiction that $v = px^2$ would not be primitive. \end{proof} \begin{proposition} Under the setting of Problem \ref{prob:main}, if $u_1u_2 = u\theta(u)$, $u_3 = u_4$, and $m$ is even, then $u, v, w \in \{t, \theta(t)\}^+$ for some $t \in \Sigma^+$. \end{proposition} \begin{proof} As before we consider only non-trivial equation so that we have $u_3 u_4 = q w_1 \cdots w_m$ and $|v| \neq |w|$. Lemma~\ref{lem:u2qwm2} gives two cases, but actually it suffices to consider the case when $u = qr$ for some non-empty $\theta$-palindrome $r$. First we consider the case when $u_3 = u$ and $n$ is even. Then $[(pq)^{n/2-1}px]^2 = q w_1 \cdots w_m$, where $q = x\theta(x)$ for some $x \in \Sigma^+$. If $|p| \le |q|$, then $pq = qp$ and $v$ would not be even primitive. Hence let $p = qz_1$ for some $z_1 \in \Sigma^+$. Then $r = z_1x\theta(x)(pq)^{n/2-2}x\theta(x)z_1x$. Since $r = \theta(r)$, this equation gives $z_1x = \theta(z_1x)$ and $x = \theta(x)$. Thus we have $z_1x = x\theta(z_1)$ and $p = x^2 z_1 = \theta(z_1) x^2$. Then $x^3 z_1 = x \theta(z_1) x^2 = z_1 x^3$ so that $\rho(x) = \rho(z_1)$ by Theorem~\ref{th:uv-expr}. However, this result contradicts the primitivity of $v = pq = x^2z_1x^2$. The second case is when $u_3 = u$ an $n$ is odd. We have $[(pq)^{(n-1)/2}y]^2 = q w_1 \cdots w_m$, where $p = y\theta(y)$. From this equation, $q$ is of even length so let $q = x\theta(x)$. If $|p| \le |q|$, then we can apply Lemma~\ref{lem:pq-qp} to the equation above to prove that $\rho(p) = \rho(q)$, which contradicts the primitivity of $v$. Thus we can let $y = xz_2$ for some $z_2 \in \Sigma^+$. Then $[(xz_2\theta(z_2)\theta(x)x\theta(x))^{(n-1)/2}xz_2]^2 = x\theta(x) w_1 \cdots w_m$. We can easily check that $w_{m/2+1} \cdots w_m = z_2 [\theta(z_2)\theta(x)x\theta(x)xz_2]^{(n-1)/2}$. According to Proposition~\ref{prop:uteuqnu}, we can deduce from this that $z_2, \theta(x)x \in \{w, \theta(w)\}^+$ and this further implies $x \in \{w, \theta(w)\}^+$. However then $v = pq = xz_2\theta(z_2)\theta(x)x\theta(x)$ would not be $\theta$-primitive. Thirdly we consider the case when $u_3 = \theta(u)$ and $n$ is even. We have $[\theta(x)p(qp)^{n/2-1}]^2 = x\theta(x) w_1 \cdots w_m$, and this equation immediately gives $x = \theta(x)$. Then $p(qp)^{n/2-1}xp(qp)^{n/2-1} = x w_1 \cdots w_m$. Since the left-hand side and $x$ are $\theta$-palindromes, we have either $x \in \{w, \theta(w)\}^+$ or $w_1 = \cdots = w_m = w$ by Lemma~\ref{lem:pali_pref_pali}. In the former case, $\theta(u)^2 = x^2 w_1 \cdots w_m \in \{w, \theta(w)\}^+$ and hence $\theta(u), u \in \{w, \theta(w)\}^+$ (Lemma~\ref{lem:rt_th-rootshare}). Then $v^n = u\theta(u)x\theta(x) \in \{w, \theta(w)\}^+$, and hence $v \in \{w, \theta(w)\}$ because of Lemma~\ref{lem:rt_th-rootshare} and the $\theta$-primitivity of $v, w$. However, this contradicts the assumption $|v| \neq |w|$. In the latter case, we have $\theta(u)^2 = x^2 w^m$ and hence $\rho(\theta(u)) = \rho(x) = w$ (Theorem~\ref{thm:original}). However since $qp = x^2p \in \ensuremath{\mathrm{Suff}}(\theta(u))$, we reach the contradictory result $\rho(p) = w$. The final case is when $u_3 = \theta(u)$ and $n$ is odd. Then $[\theta(y)(qp)^{(n-1)/2}]^2 = q w_1 \cdots w_m$, where $p = y\theta(y)$ for some $y \in \Sigma^+$. Let $\theta(y)q = qz_4$ for some $z_4$ with $|y| = |z_4|$. 
Then $r = z_4(y\theta(y)q)^{(n-1)/2}y\theta(y)$, which is a $\theta$-palindrome, so that $z_4 = y = \theta(y)$. Now we can transform $\theta(y)q = qz_4$ into $yq = qy$ and hence $\rho(y) = \rho(q)$ (Theorem~\ref{th:uv-expr}). However, then $v = y\theta(y)q$ would not be $\theta$-primitive.
\end{proof}

Combining the results obtained in this section, we can give a positive answer to Problem~\ref{prob:main}. Furthermore, together with the result proved in~\cite{CCKS09} (also see Table~\ref{tbl:exLS_summary}), this positive answer yields the following theorem, the strongest positive result we obtain on the ExLS equation.

\begin{theorem}\label{thm:exLS4}
Let $u, v, w \in \Sigma^+$ and let $u_1, \ldots, u_\ell \in \{u, \theta(u)\}$, $v_1, \ldots, v_n \in \{v, \theta(v)\}$, and $w_1, \ldots, w_m \in \{w, \theta(w)\}$. For $\ell \ge 4$ and $n, m \ge 3$, the equation $u_1 \cdots u_\ell = v_1 \cdots v_n w_1 \cdots w_m$ implies $u, v, w \in \{t, \theta(t)\}^+$ for some $t \in \Sigma^+$.
\end{theorem}

\subsection{The case $\ell \le 3$ of the ExLS equation}\label{subsec:ExLS3}

We conclude this section with some examples which prove that an extended Lyndon-Sch\"{u}tzenberger theorem cannot be stated for $\ell = 2$, and for some particular cases when $\ell = 3$.

\begin{example}\label{ex:ExLS2}
Let $\Sigma = \{a, b\}$ and $\theta$ be an antimorphic involution on $\Sigma^*$ defined as $\theta(a) = a$ and $\theta(b) = b$. Let $v = a^{2m} b^{2}$ and $w = aa$ (i.e., $w = \theta(w)$) for some $m \ge 1$. Then $v^n w^m = (a^{2m} b^{2})^n a^{2m}$. By letting either $u = (a^{2m} b^{2})^{n/2} a^m$ if $n$ is even or $u = (a^{2m} b^{2})^{(n-1)/2} a^{2m} b$ otherwise, we have $u\theta(u) = v^n w^m$. Nevertheless, there cannot exist a word $t$ such that $u, v, w \in \{t, \theta(t)\}^+$ because $v$ contains $b$, while $w$ does not. In conclusion, for arbitrary $n, m \ge 2$, $(2, n, m)$ does not impose $\theta$-periodicity.
\end{example}

Next we briefly examine the $(3, n, m)$ ExLS equation. The actual problem which we address is formalized as follows:

\begin{problem}\label{prob:ExLS3}
Let $u, v, w \in \Sigma^+$ and integers $n, m \ge 3$. Then, let $u_1, u_2, u_3 \in \{u, \theta(u)\}$, $v_1, \ldots, v_n \in \{v, \theta(v)\}$, and $w_1, \ldots, w_m \in \{w, \theta(w)\}$. Does the equation $u_1u_2u_3 = v_1 \cdots v_n w_1 \cdots w_m$ imply $u, v, w \in \{t, \theta(t)\}^+$ for some $t \in \Sigma^+$ under all of the following conditions?
\begin{arabiclist}
\item $v$ and $w$ are $\theta$-primitive,
\item $|v_1 \cdots v_n| \ge |w_1 \cdots w_m|$,
\item $u_1 = u$, $v_1 = v$, and $w_m = w$.
\end{arabiclist}
\end{problem}

As shown by the following examples, the general answer is ``No''. More significant is the fact that, depending on the values of the variables $u_2, u_3$ and on the lengths of $v_1 \cdots v_n$ and $w_1 \cdots w_m$, the $(3, n, m)$ ExLS equation exhibits very complicated behavior. First we present a parameterized example to show that for arbitrary $m \ge 2$, $(3, 3, m)$ does not impose $\theta$-periodicity.

\begin{example}\label{ex:ExLS33m}
Let $\Sigma = \{a, b\}$ and $\theta$ be the mirror image over $\Sigma^*$. For $u = (abb)^{2m-1}ab$, $v = (abb)^{m-1}ab$, and $w = (bba)^3$, we have $u^2 \theta(u) = v \theta(v)^2 w^m$ for any $m \ge 2$. Nevertheless, there does not exist a word $t \in \Sigma^+$ satisfying $u, v, w \in \{t, \theta(t)\}^+$.
\end{example}

In this example, the border between $v\theta(v)^2$ and $w^m$ is located at $u_2$.
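Both families of counterexamples can be checked mechanically. The following minimal sketch (in Python; $\theta$ is implemented as plain string reversal, which realizes the mirror image used above) verifies Examples \ref{ex:ExLS2} and \ref{ex:ExLS33m} for small parameter values; it is an illustration, not a substitute for the general arguments.

\begin{verbatim}
# theta is the mirror image (an antimorphic involution with
# theta(a) = a and theta(b) = b), i.e., plain string reversal.
def theta(x):
    return x[::-1]

# Example ExLS2: u theta(u) = v^n w^m, yet v contains b while w does not.
for n in range(2, 7):
    for m in range(1, 6):
        v, w = "a" * (2 * m) + "bb", "aa"
        if n % 2 == 0:
            u = v * (n // 2) + "a" * m
        else:
            u = v * ((n - 1) // 2) + "a" * (2 * m) + "b"
        assert u + theta(u) == v * n + w * m

# Example ExLS33m: u^2 theta(u) = v theta(v)^2 w^m for any m >= 2,
# with w = (bba)^3.
for m in range(2, 7):
    u = "abb" * (2 * m - 1) + "ab"
    v = "abb" * (m - 1) + "ab"
    w = "bba" * 3
    assert u * 2 + theta(u) == v + theta(v) * 2 + w * m
\end{verbatim}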
Intriguingly, as long as $u_1u_2u_3 = uu\theta(u)$, we cannot shift the border to $u_3$ without imposing $u, v, w \in \{t, \theta(t)\}^+$ for some $t \in \Sigma^+$.

\begin{proposition}
For any $n, m \ge 3$, if $uu\theta(u) = v_1 \cdots v_n w_1 \cdots w_m$ and $n|v| > 2|u|$, then $u, v, w \in \{t, \theta(t)\}^+$ for some $t \in \Sigma^+$.
\end{proposition}
\begin{proof}
It suffices to consider the case when $(n-1)|v| < 2|u| < n|v|$; otherwise Theorem~\ref{thm:exFWgcd} applies. As done in the analyses of the ExLS equation with $\ell = 4$, we can assume that both $v$ and $w$ are $\theta$-primitive. Then, using Proposition~\ref{prop:CCKS09-a}, we obtain that $n$ is even, $u = [r(tr)^i r(tr)^i (tr)^j]^{n/2-1} r(tr)^i (rt)^j$ and $v = r(tr)^i r(tr)^i (tr)^j$ for some $i \ge 0$, $j \ge 1$, and two non-empty $\theta$-palindromes $r, t$ such that $rt$ is primitive. Moreover, $\theta(u) = (tr)^j r(tr)^i [(rt)^j r(tr)^i r(tr)^i]^{n/2-1} = r(tr)^i r(tr)^i w_1 \cdots w_m$. Hence if $i \ge 1$, then $tr = rt$, which contradicts the primitivity of $rt$ (Theorem~\ref{th:uv-expr}). Thus we have
\begin{equation}\label{eq:exLS-001-3-1}
(tr)^j r [(rt)^j r^2]^{n/2-1} = r^2 w_1 \cdots w_m.
\end{equation}
If $|t| \leq |r|$, then $t \in \ensuremath{\mathrm{Pref}}(r)$, from which $rt \in \ensuremath{\mathrm{Pref}} (r^2 w_1 \cdots w_m)$, and finally $rt = tr$, contradicting the primitivity of $rt$ again. If $|r| < |t| \leq 2|r|$, then we can write $rrs = tr$ for some $s \in \Sigma^+$ such that $|r|+|s|=|t|$. Since $s \in \ensuremath{\mathrm{Suff}}(r)$ and $r$ is a $\theta$-palindrome, $\theta(s) \in \ensuremath{\mathrm{Pref}}(r)$, i.e., $r=\theta(s)r_1$ for some $r_1 \in \Sigma^+$. Then, $rrs = r\theta(s)r_1s = tr$, so $r\theta(s) = t$ because they have the same length. Since $\theta(s) \in \ensuremath{\mathrm{Suff}}(t)$ and $t$ is a $\theta$-palindrome, it holds that $s \in \ensuremath{\mathrm{Pref}}(t)$ and $rrs \in \ensuremath{\mathrm{Pref}}(rrt)$. Therefore, $rrt$ and $tr$ share a prefix of length $|t|+|r|$, so that Theorem~\ref{th:uv-expr} concludes that $\rho(r) = \rho(t)$, contradicting the primitivity of $rt$. Thus both $i = 0$ and $|t| > 2|r|$ must hold.

Eq.~(\ref{eq:exLS-001-3-1}) implies that $r^2 \in \ensuremath{\mathrm{Pref}}(t)$, and hence $r^2 \in \ensuremath{\mathrm{Suff}}(t)$ (as $t$ is a $\theta$-palindrome), so that $r^4 \in \ensuremath{\mathrm{Suff}}((rt)^j r^2)$. So we can let $r^4 = z_1 w_{k+1} \cdots w_m$ for some $k \ge 1$ and $z_1 \in \ensuremath{\mathrm{Suff}}(w_k)$. If $z_1 = \lambda$, then this equation gives $r \in \{w, \theta(w)\}^+$ because $w$ can be assumed to be $\theta$-primitive due to Theorem \ref{thm:exFWlcm}. Then Eq.~(\ref{eq:exLS-001-3-1}) means $(tr)^j r [(rt)^j r^2]^{n/2-1} \in \{w, \theta(w)\}^+$. Using Proposition~\ref{prop:pali_split}, we obtain $t \in \{w, \theta(w)\}^+$, but this contradicts the $\theta$-primitivity of $v$. Otherwise, catenating $r^2$ from the left to both sides of Eq.~(\ref{eq:exLS-001-3-1}) gives us $r[(rt)^j r^2]^{n/2} = z_1 w_{k+1} \cdots w_m w_1 \cdots w_m$. Note that the left-hand side of this equation is a $\theta$-palindrome, so that Lemma~\ref{lem:pal_x1-xn} implies $w_1 = \cdots = w_m = w$. Now catenating $r$ in the same way to Eq.~(\ref{eq:exLS-001-3-1}) gives $[(rt)^j r^2]^{n/2} = r^3 w^m$. This is in the form of an LS equation, and Theorem~\ref{thm:original} implies $\rho((rt)^j r^2) = \rho(r) = w$ because $w$ is primitive. From this we further deduce that $\rho(t) = w$. However, then $rt$ would not be primitive.
\end{proof}

Once we change $u_1u_2u_3$ from $u^2\theta(u)$ to $u\theta(u)^2$, it becomes possible to construct a parameterized example for $(3, 3, m)$ with the border between $v_1 \cdots v_n$ and $w_1 \cdots w_m$ on $u_3$, though it works only when $m$ is a multiple of 3.

\begin{example}
Let $\Sigma = \{a, b\}$ and $\theta$ be the mirror image over $\Sigma^*$. For $i, j \ge 0$, let $u = (ab)^{i+j+1} (ba)^{2i+2j+2} b(ab)^j$, $v = (ab)^{i+j+1} (ba)^{i+2j+1} b$, and $w = ab$. Then $u\theta(u)^2 = v^3 w^{2(i+j+1)} \theta(w)^{i+j+1}$, but we cannot find such a $t$ that $u, v, w \in \{t, \theta(t)\}^+$.
\end{example}

Next we increase $n$ to 4, and show that we can still construct a parameterized example of the $(3, 4, 2i)$ ExLS equation.

\begin{example}\label{ex:ExLS34even}
Let $\Sigma = \{a, b\}$ and $\theta$ be the mirror image over $\Sigma$. For $i \ge 1$, let $u = a^4(ba^3)^i(a^3b)^i$, $v = a^4(ba^3)^{i-1}ba^2$, and $w = ba^3$. Then we have $u^3 = v^2 \theta(v)^2 w^i \theta(w)^i$, but there does not exist a word $t \in \Sigma^+$ satisfying $u, v, w \in \{t, \theta(t)\}^+$.
\end{example}

The cases $(3,n,m)$ when $n=4$ and $m$ is odd, as well as when $m,n \geq 5$, remain open.

\begin{table}[h]
\tbl{Updated summary on the results regarding the extended Lyndon-Sch\"{u}tzenberger equation\label{tbl:new}}
{\begin{tabular}{r@{\hspace{8mm}}r@{\hspace{8mm}}r@{\hspace{10mm}}c@{\hspace{10mm}}l}
\toprule
$\ell$ & $n$ & $m$ & $\theta$-periodicity & \\
\colrule
$\ge 4$ & $\ge 3$ & $\ge 3$ & YES & Theorem \ref{thm:exLS4} \\
\colrule
$3$ & $\ge 5$ & $\ge 5$ & ? & \\
$3$ & $4$ & odd & ? & \\
\colrule
$3$ & $4$ & even & NO & Example \ref{ex:ExLS34even} \\
$3$ & $3$ & $\ge 3$ & NO & Example \ref{ex:ExLS33m} \\
\multicolumn{3}{c}{one of them is 2} & NO & Example \ref{ex:ExLS2} \\
\botrule
\end{tabular}}
\end{table}

\section{Conclusion}\label{sec:conclusion}

In this paper, we proved several consequences of the overlap between pseudo-primitive words. They made it possible to prove that, for a given antimorphic involution $\theta$ and words $u, v, w \in \Sigma^+$, if $\ell \ge 4$ and $n, m \ge 3$, then the ExLS equation $u_1 \cdots u_\ell = v_1 \cdots v_n w_1 \cdots w_m$ implies that $u, v, w \in \{t, \theta(t)\}^+$ for some $t$. This is the strongest result obtained so far on the ExLS equation. Our case analyses on $(3, \ge 3, \ge 3)$ ExLS equations demonstrated that these tools may not be sufficient to provide a complete characterization of ExLS equations. Further investigation of the overlaps of $\theta$-primitive words, of reduction schemes from ExLS equations to LS equations, and of the weak defect effect seems promising, and is required to fill the gaps in Table \ref{tbl:new}.

\section*{Acknowledgments}

This research was supported by Natural Sciences and Engineering Research Council of Canada Discovery Grant R2824A01, and Canada Research Chair Award to L.K.

\bibliographystyle{plain}
\section{Introduction}

Stellar ages are fundamental calibrators for our understanding of star formation, planet evolution, and the evolution of the Milky Way. Generally speaking, though, it is easiest to measure accurate and precise ages using ensembles of identically-aged stars, rather than lone field stars. In recent years, parallaxes and proper motions measured by the Gaia satellite \citep{gaia-collaboration2018} have enabled the identification of thousands of such stellar groups. In particular, \citet[hereafter, Paper I]{kounkel2019} and \citet[hereafter, Paper II]{kounkel2020} identified almost 1 million stars that are candidate members of more than 8,000 co-moving groups within $\sim$3 kpc. These structures include many previously known open clusters and moving groups, and they are contiguous in 3D position and 2D velocity space. The ages of these populations can be estimated through isochrone fitting.

However, over time, stellar groups lose their memory of the initial kinematics with which they formed. Fewer and fewer stars can be identified as members of the moving groups at older ages, and populations (especially the ones that originally had relatively few members) will eventually entirely dissolve into the Galaxy. After this happens, other methods are needed to determine their age. Ages of low mass stars up to several tens of million years (Myr) can be measured via comparisons to isochrones in photometric or spectroscopic HR diagrams while they are still pre-main sequence \citep{marigo2017,mcbride2021}. However, solar-type stars quickly reach the main sequence and are not amenable to isochrone dating after that point, until they near the end of their main-sequence lifetime.

Gyrochronology offers an alternative opportunity. As a star gets older, magnetic braking slows its rotation \citep{weber1967,skumanich1972}. If the star has inhomogeneities such as starspots or faculae on its surface, it can be possible to measure the star's rotation period and relate it to stellar age \citep{barnes2003,barnes2007,barnes2010}. With the advent of all-sky surveys that provide detailed light curves for a large number of stars, such as the Transiting Exoplanet Survey Satellite \citep[TESS;][]{ricker2015}, applying gyrochronology relations to a large number of stars to determine ages is becoming increasingly feasible. This technique is particularly effective for solar-type stars, because the evolution of rotation periods is most pronounced in this mass range.

However, a major limitation is the lack of sufficiently accurate gyrochrone models. While theoretical gyrochrone models have made substantial progress \citep[e.g.,][]{Matt2015,spada2020,gossage2021}, they still do not appear to fully reproduce the observed features of the empirical period--temperature relations across all ages and masses \citep{curtis2020}, and often require parameters that are difficult to measure empirically. The current state-of-the-art method of estimating stellar ages through gyrochronology is to compare rotation periods of a young population to the rotation periods of a handful of pristine open clusters to confirm similarity \citep[e.g.,][]{curtis2019,andrews2022,bouma2021}. However, this method does not allow precise interpolation between the calibrating clusters' ages, due to the limited number of clusters for which rotation periods have been measured.
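In its simplest form, the idea behind gyrochronology reduces to a Skumanich-type power law. The following sketch is purely illustrative: both the bare $P \propto t^{1/2}$ scaling and the solar normalization are assumptions of this sketch, not relations derived in this work.

\begin{verbatim}
# Illustrative only: classical Skumanich spin-down, v ~ t^(-1/2),
# i.e., P ~ t^(1/2), normalized to the Sun (P ~ 25 d at ~4.6 Gyr).
def skumanich_age_gyr(period_days, p_sun=25.0, t_sun=4.6):
    return t_sun * (period_days / p_sun) ** 2
\end{verbatim}

As the rest of this section makes clear, such simple scalings do not adequately describe young populations, motivating an empirical recalibration.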
\citet{angus2019} have created empirical isochrones that have been calibrated to Praesepe, but, although they perform well on FG-type stars in older populations \citep{angus2020}, they significantly underestimate ages for populations younger than $\sim$500 Myr \citep{curtis2020}. TESS is well positioned to provide rotation periods that could be used to anchor a new, better empirical gyrochronology model for stars in this age range, as the 28 day duration of its light curves provides the maximum sensitivity to periods shorter than $\sim$10 days (corresponding to the rotation periods of F, G, and M stars younger than Praesepe).

In this work we develop an empirical gyrochrone fit using the catalog of sources presented in \citetalias{kounkel2020}. This fit is based on populations that are near-continuous in their age distribution, and extends in age up to 1 Gyr. In Section \ref{sec:data} we assemble the catalog of rotation periods for the stars with known ages using TESS FFIs. In Section \ref{sec:results} we describe the properties of the stars with convective envelopes that are observed in these data, and develop an empirical relation for stellar angular momentum evolution. In Section \ref{sec:discussion} we discuss the potential of the gyrochrone relations to determine ages of the field stars. In Section \ref{sec:conclusion} we summarize and conclude this work.

\section{Data} \label{sec:data}

\subsection{Selection of stars with ages from Theia catalog}\label{sec:theia}

In \citetalias{kounkel2020} we identified stellar associations and moving groups in Gaia DR2 \citep{gaia-collaboration2018} using hierarchical clustering with HDBSCAN \citep{hdbscan1}. The clustering was performed in 5D phase space, including $l$, $b$, parallax, as well as tangential velocities. An average age for each of the moving groups was determined using Gaia and 2MASS photometry with Auriga, a data-driven tool that performs pseudo-isochrone fitting by interpolating the photometry of a variety of clusters with known ages. The final catalog (collectively referred to as Theia) consisted of 8293 moving groups within 3 kpc. These groups, ranging in age from $<$10 Myr to $>$1 Gyr, span a large variety of scales, containing from 40 to 10,000 stars, some concentrated in compact clusters, some spanning long strings extending upwards of 200~pc. The typical precision in age in this catalog is $\sim$0.1 dex, reaching $\sim$0.2 dex in some of the sparser and more evolved groups where the precise location of the main sequence turn-off is uncertain.

To extract a sample of nearby stars with good prospects for TESS rotation period measurements, we selected sources within 500~pc that have $T<16$ mag, which is the typical magnitude limit for $\sim$1\% precision in fluxes with TESS. We adopted $T$ band magnitudes from the TESS Input Catalog \citep{stassun2019}. Additionally, we included all the sources for which 2-minute cadence target pixel data were available (typically $T<13$ mag), regardless of their distance. Although the catalog does include sources as young as a few Myr, such sources exhibit complex, aperiodic variability \citep{cody2014} that will compete with the stellar rotation signals of primary interest in this work. To avoid this complication, we limit the sample to primarily focus on sources older than 10 Myr, based on the ages provided in the Theia catalog, and defer a more comprehensive discussion of the younger sample to a subsequent work.
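In practice, the cuts described above amount to the following minimal sketch (the file and column names are hypothetical; only the thresholds are taken from the text).

\begin{verbatim}
# Sketch of the sample selection; file/column names are hypothetical.
import numpy as np
from astropy.table import Table

theia = Table.read("theia_members_x_tic.fits")

near = np.asarray(theia["parallax"]) > 2.0        # within 500 pc (plx in mas)
bright = np.asarray(theia["Tmag"]) < 16.0         # ~1% TESS flux precision
has_tpf = np.asarray(theia["has_2min_data"], bool)  # 2-min cadence targets
old = np.asarray(theia["log_age"]) > 7.0          # older than ~10 Myr

sample = theia[((near & bright) | has_tpf) & old]
\end{verbatim}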
\subsection{Matching to TESS and processing of light curves}\label{sec:tess}

We matched the sources from the Theia catalog to the footprint of TESS in the first 35 sectors. This upper limit was set by the available data at the time of our analysis. The first 26 sectors covered 70\% of the sky, excluding a $\sim15^\circ$ gap around the ecliptic, and also excluding a $140^\circ \times 20^\circ$ region north of the galactic center that would have been heavily affected by scattered light from the Earth and Moon had it been observed. Sectors 27--35 roughly repeated the coverage of the first nine sectors.

We used the \texttt{eleanor} package \citep{feinstein2019} to download light curves for the 102,112 Theia sources that were observed in the TESS full frame images of sectors 1--35. As all of the sectors were processed separately, this resulted in 226,003 light curves\footnote{We have made them available at \url{https://doi.org/10.5281/zenodo.6757610}}. We also attempted to use other light curve generation techniques, such as those from \citet{huang2020} or those generated through TESS SIP \citep{hedges2020}. However, the former was not sensitive to rotation periods longer than 2 days due to the detrending routines used to generate those light curves, and the latter introduced noise, particularly in the form of an artificial $\sim$90 day period (which was particularly pronounced in stars in the continuous viewing zone due to the abundance of longer-baseline data) that made real periods of a few days difficult to detect. Of these, \texttt{eleanor} provided the most robust light curves for all periods $<$12 days. Unfortunately, longer periods cannot be effectively recovered with it at this time. As each 28 day sector consists of two TESS orbits, $\sim$14 days each, there are strong systematics that introduce periodicity comparable to the duration of an orbit. Correcting these systematics preferentially suppresses long periods of true variable stars as well. For the specific problem of detecting signals with periods longer than ten days, TESS is a systematics-limited instrument.

Data quality issues, most often related to scattered light from the Earth and Moon near the time of spacecraft perigee passes, can yield artificial spikes and drops in the light curves. To automatically exclude light curve cadences that are problematic, we first normalize the data in each sector by the median, and limit the flux to be within 50\% of the median (i.e., normalized flux in the range of 0.5--1.5). We then further limit the flux to be within three standard deviations of the flux found within the remaining epochs within the sector; this step is repeated four times. This excludes the vast majority of artificial data issues. If a source is an eclipsing binary, typically the eclipses are also deleted from the light curve, leaving only the continuum (from which the rotational signature is more cleanly measured). However, contact binaries may remain in the sample. Additionally, close binaries can have stellar rotation periods that are tidally synchronized to their orbital periods. Such sources would be retained in the sample.

\subsection{Measurement of periods from TESS light curves}\label{sec:periods}

We measure periodic signals using Lomb--Scargle periodograms for each source in each individual sector; a minimal sketch of the per-sector cleaning and period search is given below. Stitching multiple sectors together, unfortunately, is not effective in recovering periods $>$12 days.
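The following sketch (assuming numpy/astropy; the period grid and peak selection are simplified relative to the \texttt{GaussPy}-based procedure described below) illustrates the quality cuts of Section~\ref{sec:tess} and a single-sector period search.

\begin{verbatim}
# Simplified per-sector pipeline sketch; the actual peak selection
# uses Gaussian decomposition of the periodogram (see text).
import numpy as np
from astropy.timeseries import LombScargle

def clean_sector(flux):
    """Median-normalize, keep flux within 50% of the median, then
    3-sigma clip against the remaining epochs, repeated four times."""
    f = flux / np.nanmedian(flux)
    keep = (f > 0.5) & (f < 1.5)
    for _ in range(4):
        m, s = np.nanmean(f[keep]), np.nanstd(f[keep])
        keep &= np.abs(f - m) < 3 * s
    return f, keep

def sector_period(time, flux, min_p=4 / 24, max_p=33.0):
    """Return the highest periodogram peak within a single sector;
    time is in days, so frequency is in 1/day."""
    f, keep = clean_sector(flux)
    period = np.geomspace(min_p, max_p, 20000)  # log-spaced grid
    power = LombScargle(time[keep], f[keep]).power(1.0 / period)
    return period[np.argmax(power)], power.max()
\end{verbatim}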
Additionally, short periods that can be detected within a single sector can evolve over time, producing a slightly different period, or the periodic signal can completely disappear into the noise due to a decrease in the amplitude. This can happen due to the evolution of spots on the stellar surface.

\begin{figure*}
\epsscale{1.1}
\plotone{lcs_example.pdf}
\caption{Example of the periodic light curves and the corresponding Lomb--Scargle periodograms of the sources in this work. \label{fig:lcs_ex}}
\end{figure*}

The periodic signal can often be somewhat weak; furthermore, there may be multiple peaks in the periodogram. Due to the number of sources, it is difficult to visually examine all sources to select those with only a single dominant period. We use \texttt{GaussPy} to perform an Autonomous Gaussian Deconvolution \citep{lindner2015,lindner2019} to identify all possible signals. The periodogram is created using the \texttt{astropy} timeseries package, using an automatically generated list of periods from 4 hours to 33 days, with 10 samples per typical peak, such that the sampling is higher at higher frequencies. Separately, we also generate a periodogram from 40 minutes to 4 hours to look for rapid variability, such as that attributable to $\delta$ Scuti variables, without affecting the ability to recover long-period signals. We then fit the periodogram in pixel space, resulting in all peaks having an approximately similar width. We set the \texttt{GaussPy} $\alpha$ parameter (which controls the typical width of identified Gaussians in the data) to 0.5, and subtract an amplitude of 0.01 from the periodogram to ignore insignificant peaks. Following the identification of all the significant periodogram peaks, we interpolate their pixel mean locations back to period space, recording the period $P$ alongside the amplitude $A$ of each Gaussian.

\texttt{eleanor} allows light curves to be generated with different levels of processing and detrending. One of the outputs, CORR\_FLUX, is a light curve that is corrected for various known systematic effects on an orbit-by-orbit basis. These CORR\_FLUX outputs appear to be the most stable for the vast majority of the light curves of all intensities for periods $<$8 days (or, approximately, on the timescales of half of an orbit of TESS). However, the CORR\_FLUX processing completely suppresses longer period signals. As such, to compensate, we independently process periodograms for PCA\_FLUX (light curves generated via the subtraction of co-trending basis vectors derived via principal component analysis) and RAW\_FLUX (light curves without any processing applied). Comparing the derived periods to the periods measured by other surveys such as ASAS-SN \citep{jayasinghe2018}, we devised the following set of criteria to most faithfully recover the underlying periodic signal.

\begin{figure}[!ht]
\includegraphics[width=\linewidth]{powerspectrum.pdf}
\caption{An average periodogram of 7431 CORR, PCA, and RAW light curves for stars with \teff\ between 8000 and 9000 K that appear to be aperiodic. This highlights the artificial periods injected by TESS systematics, most notably at $\sim$7 days and 14 days, corresponding to 0.5 and 1 orbit of TESS around the Earth. RAW light curves are most susceptible to these systematics but can recover the real periods $>$7 days most faithfully compared to the literature. CORR light curves are least dominated by systematics, but they largely suppress longer periods in their corrections.
\label{fig:power}}
\end{figure}

\begin{enumerate}
\item We record a primary period from the CORR\_FLUX periodogram for a star if the primary amplitude $A_{1,CORR}$ is higher than 0.3. Additionally, we require a large contrast between the primary and secondary amplitudes, $A_{2,CORR}/A_{1,CORR}<0.7$, for sources with period $>$0.2 days. Sources with shorter ($<$0.2 day) periods tend to be multi-periodic---not only intrinsically, but also due to the minimum cadence of TESS FFIs introducing a beat frequency. We record their primary period as is if they meet the amplitude check, even if they fail the ratio check.
\item For the remaining sources we consider other alternatives. PCA\_FLUX can be somewhat more sensitive to the longer periods, but it can be more susceptible to noise (Figure \ref{fig:power}). As such, we require $A_{1,PCA}>0.45$ and $P_{1,PCA}<11$ days, as well as, similarly to the above, $A_{2,PCA}/A_{1,PCA}<0.7$ or $P_{1,PCA}<0.2$ days.
\item Raw light curves can faithfully preserve long periods. They are also prone to the failure mode of primarily detecting a signal comparable to the orbital period of TESS (Figure \ref{fig:power}). However, generally, when the signal is real, there is also sufficient power in the PCA periodogram, even though the period from the PCA light curve may be less consistent with the literature than the period from the raw light curve. As such, for the remaining sources we record $P_{1,RAW}$ if $A_{1,PCA}>0.3$ (not $A_{1,RAW}$). However, we similarly require $A_{2,RAW}/A_{1,RAW}<0.7$ or $P_{1,RAW}<0.2$ days. The maximum period we record is $P_{1,RAW}<12$ days.
\item Furthermore, we retain the primary period regardless of its amplitude in the periodogram if it is within 5\% in all three light curves for the same source in the same sector.
\end{enumerate}

\subsection{Evaluation of TESS period recovery}

TESS has a low spatial resolution, with each pixel having a size of 21$''$. Because of this, contamination from other stars along a similar line of sight, particularly along the Galactic plane and/or within dense clusters, can often be a concern for this instrument. \texttt{eleanor} has the ability to automatically select the most appropriate aperture for the star to integrate over, depending on its brightness and the density of stars around it, to attempt to minimize this contamination. This can make it difficult to determine what aperture was used, and how many other sources may contribute to the flux within that light curve. We compute the flux contamination ratio, defined as the total flux of all stars in Gaia within a given radius divided by the flux of the target star, to test the effect of potential contamination. We check 6 different radii, in steps of 10$''$, from 10$''$ (i.e., the radius of a pixel) to 60$''$ (exceeding the size of a typical aperture). We also record the renormalised unit weight error (RUWE) from Gaia to identify likely close binaries, which can also lead to contamination. We note that the results presented in Section~\ref{sec:results} are robust against any cuts in either RUWE or the flux ratio, but applying these cuts preferentially excludes lower mass stars; as such, in the interest of retaining maximum sensitivity to all masses, we do not apply any filtering.

Approximately half of all sources have been observed in multiple sectors, mostly due to the repetition of the Year 1 coverage in Year 3. About 23\% of the stars in the sample are detected in more than two sectors, and $\sim$1\% are found in the TESS continuous viewing zone.
As all of the sectors are treated independently, this provides an opportunity to check the internal quality of the measured periods from one sector to the next (see Figure~\ref{fig:pairmatch}a).

\begin{figure*}[!ht]
\includegraphics[width=0.5\linewidth]{pairmatchtess.pdf}
\includegraphics[width=0.5\linewidth]{pairmatchasas.pdf}
\includegraphics[width=0.5\linewidth]{pairmatchkepler.pdf}
\includegraphics[width=0.5\linewidth]{pairmatchgaia.pdf}
\caption{Top left: comparison of periods presented in this work for the same sources observed in multiple sectors. Top right: comparison of periods in this work relative to periods in the ASAS-SN survey \citep{jayasinghe2019}. Note the difference in beat frequency of 0.02 days in TESS and 1 day in ASAS-SN. Bottom left: comparison of periods to the \citet{mcquillan2014} sample for the Kepler field. Dashed lines correspond to 2x and 0.5x harmonics. Bottom right: comparison of periods to Gaia DR3 periods \citep{eyer2022} from the Short Timescale Variability and Rotational Modulation catalogs. \label{fig:pairmatch}}
\end{figure*}

Performing an internal cross-match, we find that 89\% of sources have self-consistent periods between any two pairs of light curves when both had a period recovered. 7\% of pairs have a harmonic period discrepancy of a factor of 2 (a common issue when searching for periodic signals), or a beat frequency of 0.02 days between the true period of a star and the minimum sampling of TESS FFIs (only affecting sources with periods $<$0.1 days in these data). Approximately 2\% of sources have one of the epochs in the pair dominated by the artificial period of $\sim$7 days, corresponding to half an orbit of TESS. The remaining 2\% have periods that are uncorrelated between epochs. Thus, although there are sources that do have inconsistent periods, the vast majority of them ($>$95\%) do appear to be well-behaved (i.e., providing periods that agree directly, or with well-understood harmonics).

However, of $\sim$150,000 pairs where periodicity was detected in at least one of the sectors, only 50\% of the sources have recovered periodicity in both sectors. Examining the light curves in the sectors for which the periodicity was not recovered shows that these stars usually continue to exhibit periodicity, but the recovery failed in most cases because of noise. Different sectors of TESS data can have different noise properties due to the different orientations of the spacecraft, Moon, and Earth. In addition, the position of any given star on the detectors changes between sectors due to the rotation of the spacecraft. This rotation of the focal plane can yield different ``optimal apertures'' between sectors. The light curves in which periods were not recovered often have a greater degree of scatter, thus they fail to meet the strict criteria outlined in Section~\ref{sec:periods}. An additional astrophysical effect that might be relevant is that the intrinsic amplitude of starspot-induced variability does evolve over time, in concert with the spot-covering fractions on the stellar surface \citep{basri2020}.

For each light curve we record $\sigma_{\rm inst}$, the typical uncertainty reported by \texttt{eleanor}, normalized relative to the flux. We also measure $\sigma_{\rm p2p}$, corresponding to the point-to-point scatter in the normalized light curve, which is not affected by the amplitude imposed by large scale (periodic) variability. We then compare these to $\sigma_{\rm var}$, the standard deviation of the light curve.
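For concreteness, these statistics can be computed per light curve as in the following minimal sketch (the exact estimators are our assumptions, chosen to be consistent with the definitions above and with Table~\ref{tab:data}).

\begin{verbatim}
# Scatter statistics per light curve; estimators are a sketch
# consistent with the definitions in the text.
import numpy as np

def scatter_stats(flux, flux_err):
    f = flux / np.nanmedian(flux)
    sigma_inst = np.nanmedian(flux_err / np.nanmedian(flux))  # reported error
    sigma_p2p = np.nanpercentile(np.abs(np.diff(f)), 68)      # point-to-point
    sigma_var = np.nanstd(f)                                  # full amplitude
    return sigma_var, sigma_p2p, sigma_inst
\end{verbatim}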
We find that in the sources in which periodicity was recovered, $\sigma_{\rm var}/\sigma_{\rm p2p}$ tends to be significantly higher than in the sources with partial period recovery in only one of the sectors. The sectors in which period recovery for a given source has occurred have higher $\sigma_{\rm var}/\sigma_{\rm p2p}$ than the sectors where it did not; however, they tend to have a more comparable $\sigma_{\rm var}$, as well as $\sigma_{\rm inst}$ (Figure \ref{fig:sigmavar}). This is driven by two factors. First, a higher amplitude of variability $\sigma_{\rm var}$ makes a signal more robust against both noise and intrinsic amplitude evolution. Second, period recovery is more likely to occur in sources with higher SNR. The higher mass stars tend to be recovered as periodic more reliably than lower mass stars (as such, excluding periodic sources with, e.g., $\sigma_{\rm var}/\sigma_{\rm p2p}>1$ or $\sigma_{\rm var}/\sigma_{\rm inst}>2$ would only preferentially exclude lower mass stars). Similarly, for the same star, a light curve with higher SNR in one sector is more likely to be recovered as periodic.

\begin{figure}[!ht]
\includegraphics[width=0.9\linewidth]{sigmavar.pdf}
\includegraphics[width=0.9\linewidth]{sigmavarp2p.pdf}
\includegraphics[width=0.9\linewidth]{sigmavarinst.pdf}
\caption{Distributions of the light curve standard deviations ($\sigma_{\rm var}$), standard deviation normalized by the point-to-point RMS ($\sigma_{\rm var}/\sigma_{\rm p2p}$), and standard deviation normalized by the reported mean flux uncertainty ($\sigma_{\rm var}/\sigma_{\rm inst}$). Sources for which periodicity was recovered in all sectors are shown in black, and sources that appear to be aperiodic in all sectors are in yellow. Sources that were detected as periodic in some but not all sectors are shown in blue and red, with blue corresponding to the specific sectors in which the source is recovered as periodic, and red corresponding to the sectors where the same sources are not recovered as periodic. \label{fig:sigmavar}}
\end{figure}

We compare the recovered periods to the periods measured by ASAS-SN \citep{jayasinghe2019}; see Figure~\ref{fig:pairmatch}b. There are 3,766 pairs of observations between the periods presented here and the sources in ASAS-SN. Of these, 90\% are classified as rotational variables or as YSOs, 8\% are classified as eclipsing binaries (3\% detached, 2\% semi-detached, 4\% contact), and 1\% are pulsating variables. Periods between ASAS-SN and TESS tend to be in good agreement, with 74\% of sources having identical periods and 15\% having a harmonic period offset by a factor of 2. Five percent of sources have a beat frequency of $\sim$1 day, driven by the cadence of ASAS-SN observations: unlike the beat frequency of TESS, it can affect periods up to 10 days \citep[for an example of a similar phenomenon see also][]{covey2016}.

Similarly, we compare periods to those measured in the Kepler field \citep{mcquillan2014}; see Figure~\ref{fig:pairmatch}c. Eighty percent of the 108 sources in common show good agreement. Unlike the internal match, or the match against ASAS-SN, almost no sources have a harmonic period offset, with the exception of those sources with periods $>$10 days in Kepler, which get aliased to the 2:1 harmonic.

Finally, we compare the recovered rotational periods in this work to the recently released periods from Gaia DR3 \citep{eyer2022}.
Gaia light curves are relatively sparse (often containing as few as 30 measurements). Given such sparsity, the quality of the periods recovered by Gaia is surprisingly good: there is good agreement between them and the periods measured in this work with TESS for sources with periods $>0.1$ days. For periods reported by Gaia of $<$0.1 days, there is little to no agreement; in almost all cases, a much more pronounced longer-period signal is recovered in TESS.

\begin{figure}[!t]
\includegraphics[width=\linewidth]{fraction.pdf}
\caption{Fraction of sources, as a function of Gaia color and age, in which a confident periodic signal (up to 12 days) is observed in TESS data. The dashed line shows the sample solely among the strings of stars defined in \citet{kounkel2019a}. Note that in almost all cases the stellar strings have a fraction of periodic variables similar to that of the full sample, and usually somewhat higher. \label{fig:fraction}}
\end{figure}

\begin{deluxetable*}{ccl}[!ht]
\tablecaption{Measured TESS periods \label{tab:data}}
\tabletypesize{\scriptsize}
\tablewidth{\linewidth}
\tablehead{ \colhead{Column} & \colhead{Unit} & \colhead{Description} }
\startdata
Gaia id & & Gaia DR2 source ID \\
TIC & & TESS input catalog ID \\
RA & deg & Right ascension in J2000 \\
Dec & deg & Declination in J2000 \\
Sector & & TESS sector \\
Theia ID & & Group ID from \citet{kounkel2020} \\
Age & log yr & Average age of the stellar group \\
e\_age & log yr & Uncertainty in age \\
c\_bp\_rp & mag & Extinction corrected Gaia BP-RP color \\
c\_absg & mag & Extinction corrected Gaia G magnitude \\
Tmag & mag & TESS magnitude \\
Teff & K & Effective temperature from TESS Input Catalog \\
e\_teff & K & Uncertainty in Teff \\
Mass & Msun & Mass from TESS Input Catalog \\
e\_mass & Msun & Uncertainty in mass \\
Radius & Rsun & Radius from TESS Input Catalog \\
e\_radius & Rsun & Uncertainty in radius \\
corr\_p\_1 & day & Primary period in CORR light curves \\
e\_corr\_p\_1 & day & Uncertainty in corr\_p\_1 \\
corr\_p\_2 & day & Secondary period in CORR light curves \\
e\_corr\_p\_2 & day & Uncertainty in corr\_p\_2 \\
corr\_a\_1 & & Primary amplitude in CORR light curves \\
corr\_a\_2 & & Secondary amplitude in CORR light curves \\
pca\_p\_1 & day & Primary period in PCA light curves \\
e\_pca\_p\_1 & day & Uncertainty in pca\_p\_1 \\
pca\_p\_2 & day & Secondary period in PCA light curves \\
e\_pca\_p\_2 & day & Uncertainty in pca\_p\_2 \\
pca\_a\_1 & & Primary amplitude in PCA light curves \\
pca\_a\_2 & & Secondary amplitude in PCA light curves \\
raw\_p\_1 & day & Primary period in RAW light curves \\
e\_raw\_p\_1 & day & Uncertainty in raw\_p\_1 \\
raw\_p\_2 & day & Secondary period in RAW light curves \\
e\_raw\_p\_2 & day & Uncertainty in raw\_p\_2 \\
raw\_a\_1 & & Primary amplitude in RAW light curves \\
raw\_a\_2 & & Secondary amplitude in RAW light curves \\
Period & day & Adopted period from the light curve \\
e\_period & day & Uncertainty in period \\
ptype & & Adopted period source \\
L & log kg m$^2$ s$^{-1}$ & Calculated angular momentum \\
e\_L & log kg m$^2$ s$^{-1}$ & Uncertainty in angular momentum \\
RUWE & & Renormalised unit weight error from Gaia EDR3 \\
sigma\_var & & Standard deviation of the normalized flux (full amplitude) \\
sigma\_inst & & Typical reported uncertainty in the flux of the normalized flux (instrumental) \\
sigma\_p2p & & 68th percentile point-to-point variability of the normalized flux (measured scatter) \\
c1 & &
Flux contamination ratio within 10$''$ \\
c2 & & Flux contamination ratio within 20$''$ \\
c3 & & Flux contamination ratio within 30$''$ \\
c4 & & Flux contamination ratio within 40$''$ \\
c5 & & Flux contamination ratio within 50$''$ \\
c6 & & Flux contamination ratio within 60$''$
\enddata
\end{deluxetable*}

Figure~\ref{fig:fraction} presents an overview of the rate of period detection as a function of stellar color (optical, Gaia passbands) and age (inherited from the moving group membership, using the isochrone fitting of the photometry of all the stars in the group). There is a clear dependence of the recovery of the periods on age, with the youngest stars being the most variable, and on mass, with solar-type stars being the most variable. The decrease towards lower mass stars may be partially attributable to the lower signal-to-noise due to the increased faintness. The decrease of periodicity with mass is not monotonic for stars older than 7.8 dex, developing a secondary local maximum at colors corresponding to M dwarfs. Periodicity also sharply decreases towards early-type stars as the dominant mode of variability switches from rotation to pulsation.

Finally, we compare the frequency of periodicity recovery in stellar strings, the extended comoving groups that were identified in \citetalias{kounkel2019a}. These strings are ubiquitous at ages $<100$ Myr, as they consist of populations that still retain their primordial morphology and have not yet fully dissolved into the field. We find that the stars found in strings tend to have either a comparable or a greater fraction of variable stars compared to the full sample. Thus, they tend to have a greater degree of variability compared to the more compact, isolated, and cluster-like groups lacking diffuse halos, even after accounting for the age/mass dependency. This could perhaps be due to a few factors---(i) the environment could impact the rotation period distributions to a considerable degree for stars of the same age \citep[however, see][]{Fritzewski2020}, (ii) the TESS light curves for stars in denser clusters are more strongly affected by source confusion and signal interference, (iii) or the sample of strings has a smaller degree of contamination from field stars than the more compact groups. We note that the level of contamination is expected to be at the few percent level, and that it is somewhat more pronounced at the edges of the diffuse halos in comparison to the kinematic ``core'' of a given population \citep{bouma2021}. We further discuss the physical significance of the trends in ages and masses in the next section.

\section{Results}\label{sec:results}

Figure~\ref{fig:periodogram} presents the fundamental result of this work: the empirical distribution of periods (determined from TESS light curves; see Sections~\ref{sec:tess} and \ref{sec:periods}) as a function of stellar color and age (determined from the Theia catalog; see Section~\ref{sec:theia}). All the colors have been corrected for the estimated extinction in \citet{kounkel2020}. At a $(G_{\rm BP} - G_{\rm RP})$ color of 0.4, there appears to be a gap in the distribution of periodic variables. This gap most likely corresponds to the well-known Kraft break \citep{kraft1967,avallone2022}, separating the stars with convective and radiative envelopes. One surprise, however, is that we see the gap at $\approx$6700 K, whereas the Kraft break is traditionally placed at $\approx$6300 K, based on a sharp transition in stellar equatorial velocity as a function of mass or temperature.
While we believe the same underlying physics to be at play, the general implication is likely that stars do in fact remain spotted up to mildly larger masses ($\approx$1.4\,\msun) than the traditional $\approx$1.25\,\msun\ cutoff.

\begin{figure*}
\epsscale{1.0}
\plotone{per00_labels1.pdf}
\caption{Recovered distribution of periods of the stars in the sample as a function of extinction-corrected Gaia color, color-coded by their age and annotated with the features of the parameter space. Sources redder than the labelled ``Kraft break'' have rotation as the dominant mode of variability. They can be separated into slow rotators (sources that show a clear evolution of their periods with age) and fast rotators, having typical periods $<$1 day. Sources bluer than the Kraft break are commonly pulsators. $\gamma$ Dor sources have pulsation periods comparable to the stellar rotation period, and consist only of main sequence stars. $\delta$ Scuti stars in the age-limited sample form a bimodal distribution, with the main sequence $\delta$ Scuti variables having periods shorter than 1.2 hours, and the subgiants having periods longer than 2 hours. Slowly pulsating B-type variables (SPB) are found among the early type stars. \label{fig:periodogram}}
\end{figure*}

Our primary purpose is to understand the evolution of stellar angular momentum and to refine relations for gyrochronology. Thus, for the remainder of our analysis we restrict our discussion to stars with convective envelopes, which experience wind-driven rotational evolution and for which gyrochronology can be applied. An overview of the periodic variables with radiative envelopes is provided in Appendix~\ref{sec:radiative}. The full sample of stars with rotation periods and ages resulting from our analysis is presented in Table~\ref{tab:data}.

\subsection{A catalog of rotation periods and ages in the field}

At the most basic level, for stars with $(G_{\rm BP} - G_{\rm RP})>$ 0.4, Figure~\ref{fig:periodogram} manifests a clear morphology and gradient that represent (a) the dependence of stellar rotation period on stellar color (i.e., stellar temperature or mass), and (b) the evolution of this mass--rotation relationship with stellar age (i.e., stars across this mass range spin down over time). We note that all of the quantities in Figure~\ref{fig:periodogram} are empirically determined and are moreover determined independently of one another. The colors are measured directly, the periods are determined by us here from light curves obtained by TESS, and the ages have been previously estimated from a spatial-kinematic clustering analysis of Gaia data \citep{kounkel2020}. This represents the largest collection to date of stars in the field with empirically and independently determined rotation periods and ages.

Because the spin-down of stars with age is well established observationally and theoretically, we can on the one hand regard the clear gradient in Figure~\ref{fig:periodogram} as a fundamental validation of the ages (or at least the relative time ordering) inferred by the Theia analysis of stellar associations. On the other hand, taking the Theia ages at face value, Figure~\ref{fig:periodogram} can be regarded as an affirmation of the soundness of gyrochronology as an approach to inferring stellar ages in the field.
In the subsections that follow, we make use of this unique sample (Table~\ref{tab:data}) to revisit and refine empirical gyrochronology relations and to characterize the evolution of magnetic features on the stellar surfaces.

\subsection{The slow sequence in the color--period diagram}

As noted above, Figure~\ref{fig:periodogram} shows a clear gradient of rotation periods as a function of age and color, especially at its upper envelope (i.e., slower rotators). Indeed, separating the stars into individual age bins in Figure~\ref{fig:periodogramage}, it becomes more apparent that the majority of stars of a given age populate a coherent, inverted horseshoe-shaped ``slow sequence'' \citep[or ``I-type'' sequence; see, e.g.,][]{barnes2010} in the color--period diagram that evolves smoothly with age.

\begin{figure*}
\epsscale{1.0}
\gridline{\fig{per01a.pdf}{0.33\textwidth}{} \fig{per02a.pdf}{0.33\textwidth}{} \fig{per03a.pdf}{0.33\textwidth}{} }\vspace{-0.5cm}
\gridline{\fig{per04a.pdf}{0.33\textwidth}{} \fig{per05a.pdf}{0.33\textwidth}{} \fig{per06a.pdf}{0.33\textwidth}{} }\vspace{-0.5cm}
\gridline{\fig{per07a.pdf}{0.33\textwidth}{} \fig{per08a.pdf}{0.33\textwidth}{} \fig{per09a.pdf}{0.33\textwidth}{} }\vspace{-0.5cm}
\gridline{\fig{per10a.pdf}{0.33\textwidth}{} \fig{per11a.pdf}{0.33\textwidth}{} \fig{per12a.pdf}{0.33\textwidth}{} }
\caption{Same as Figure \ref{fig:periodogram}, but as a function of temperature instead of color, zooming in on the stars with convective envelopes and splitting sources in different age bins into different panels. \teff\ is obtained from the TESS Input Catalog. The 2D histogram shows the distribution of periods in this work, color-coded by the number of sources in each bin; blue dots are from the literature: Upper Sco \citep{rebull2018}, Pleiades \citep{rebull2016}, NGC 3532 \citep{fritzewski2021a}, Praesepe \citep{douglas2019}, and NGC 6811 \citep{curtis2019a}. The black line is drawn at a period of 7 days, corresponding to half an orbital period of TESS. Note that the overdensity at that period that is apparent at older ages is artificial, most likely dominated by real rotators with longer periods aliased to a harmonic close to half an orbit of TESS. Additionally, the sample in TESS may be more incomplete than the samples from the literature due to the magnitude limits. \label{fig:periodogramage}}
\end{figure*}

Figure~\ref{fig:periodogramage} also compares the sequences we observed with those reported by previous authors for some well-studied clusters of known ages \citep{rebull2018,rebull2016,fritzewski2021a,douglas2019,curtis2019a}. In general there is excellent agreement at those ages where the previous samples exist, except for the slowest rotators, where our incompleteness for periods longer than $\sim$12~d becomes most apparent (see Section~\ref{sec:periods}).

At the youngest age bins, the distribution of rotation periods appears to be fairly complete (Figure \ref{fig:periodogramage}). There is an artifact in the catalog showing an excess of sources with a period of $\approx$7 days; as previously mentioned, this is due to systematics associated with the TESS orbit. This excess becomes particularly pronounced at older age bins, $>$100 Myr, but seems to be largely absent in the younger populations. It is most likely a signature of periodic stars whose dominant period is longer than what is recoverable, as our period search is truncated at 12 days (see Section~\ref{sec:periods}).
The TESS sample reported here for stars in the large set of stellar associations from the Theia catalog \citep[][see Section~\ref{sec:theia}]{kounkel2020} significantly enlarges the overall sample of stars with rotation periods and ages, and importantly it also smoothly fills in the entire age range from $\lesssim$10~Myr to $\gtrsim$1~Gyr. This smoothly evolving and well populated slow sequence over such a large age range forms the basis for empirical gyrochronology relations, as we discuss in Section~\ref{sec:gyro}.

\subsection{Rapid rotators in the color--period diagram}\label{sec:binaries}

Outside of the dominant slow sequence, Figure~\ref{fig:periodogramage} also shows at most ages a less coherent population of much more rapidly rotating stars with periods $<$2 days. These have been identified in earlier works as members of a so-called ``C-type sequence'' \citep[see, e.g.,][and references therein]{barnes2010}. In this section, we consider the nature of the rapid rotators and their evolution with stellar age.

\subsubsection{Rapid rotators as binaries}

Recent studies with Kepler and K2 have uncovered a strong tendency for rapid rotators to be binary stars, presumably undergoing tidal interactions. For example, \citet{douglas2016}, \citet{Douglas:2017}, and \citet{simonian2019} all find that binaries dominate among FGK-type stars with periods shorter than 2 days, and \citet{stauffer2018} find that binaries dominate among M-type stars with periods shorter than 1.5 days. Indeed, comparing the stars in our sample that sit on the main sequence versus those on the binary sequence (Figure~\ref{fig:binaries}), we find that the binary sequence is heavily dominated by fast rotators with periods of 2 days and shorter. Of course, not all true binaries sit on the binary sequence (i.e., binaries with small brightness ratios sit on the main sequence), and similarly not all of the unresolved binaries are tight enough to have experienced spin-up to short periods. We also compare RUWE, but there is no significant difference in RUWE between fast and slow rotators.

\begin{figure*}[t]
\centering
\epsscale{1.1}
\plottwo{binary1.pdf}{binary2.pdf}
\plottwo{fast.pdf}{fastcomp.pdf}
\caption{Top left: an HR diagram for the stars with ages between 8--8.3 dex; sources are color-coded by the recovered rotation period. Top right: a kernel density estimate showing the period distribution in the same age and color range for the sources located on the main sequence versus those found on the binary sequence. Fast rotators with periods shorter than 2 days are preferentially found on the binary sequence. Bottom left: fraction of fast rotators (sources with extinction corrected colors 0.8$<(G_{\rm BP,0} - G_{\rm RP,0})<$2.2, with period $<$1 day), relative to all the stars (periodic or not) in the same color range. Bottom right: contour plot showing periods and temperatures of fast rotators as a function of age; as a population ages, the fast rotator sequence tends to shift towards longer periods and cooler temperatures. \label{fig:binaries}}
\end{figure*}

\subsubsection{Evolution of the rapid rotators}

There is some evolution of the fastest rotators with age. For example, Figure~\ref{fig:binaries} examines their approximate fraction, selecting stars with colors of 0.8$<(G_{\rm BP} - G_{\rm RP})<$2.2 and periods $<$1 day, relative to all the stars (periodic or not) in the same color range. We find that these very fast rotators are most common at ages of 40--60~Myr.
There are approximately twice as many in this age bin compared to younger stars at $\sim$5~Myr, and very few are found at $\sim$3~Myr. This is consistent with what has been previously observed in various young open clusters \citep[see, e.g.,][]{bouvier2014}, and has been interpreted as being due to the initial release of ``disk-locked'' stars \citep[e.g.,][]{affer2013} and their subsequent spin-up caused by core--envelope decoupling \citep[e.g.,][]{moraux2013}. There is also a decline of the fastest rotators at ages $>$60~Myr, which is at least partially driven by the overall decrease in recovered periodicity at older ages (Figure \ref{fig:fraction}) due to smaller starspot sizes (see Section~\ref{sec:spots}).

It has been previously suggested that the evolution of fast rotators has some dependence on mass \citep{barnes2003}. Indeed, Figures \ref{fig:periodogramage} and \ref{fig:binaries} do show such a dependence. While the fast rotators tend to have a significant spread, there does appear to be a slight overdensity of stars that may resemble a fast ``sequence,'' which becomes pronounced at ages of 7.4 dex through 8.6 dex and persistently evolves towards cooler temperatures and longer periods for older stars. This fast sequence is significantly more diffuse than the slow sequence, and the relative density of sources along it is low; as such, it is difficult to fully fit its evolution. Eventually, it appears to merge into the slow sequence, leaving behind only a handful of outlying faster rotators. We return to a discussion of the rapid rotators in Section~\ref{sec:discussion}. For the purposes of exploring gyrochronology and angular momentum evolution of nominally single stars, in the following we restrict our analysis to ``slow sequence'' stars with rotation periods longer than 2 days.

\subsection{Angular momentum evolution and gyrochronology}\label{sec:gyro}

As noted above, the stars found in the slow/I-type sequence show a strong dependence of rotation periods on age, with F, G, and early K type stars rotating increasingly slower as they age. In contrast, following the turn-over in the color--period diagram at a color of $(G_{\rm BP} - G_{\rm RP})\gtrsim 2.3$ or $T_{\rm eff} \lesssim 3800$~K (Figure~\ref{fig:periodogramage}), the M dwarfs appear to follow a very steep relationship such that the stars overlap each other in the color--period diagram regardless of their age (Figure~\ref{fig:periodogram}), which makes it difficult to use periods alone to estimate their ages. Consequently, empirical gyrochronology relations have by necessity focused on relating rotation periods to age for mid-F to mid-K type stars.

With the advent of precise Gaia parallaxes for the large catalog of stars with TESS rotation periods and Theia ages, we now have the opportunity to consider the stars' masses and radii along with their rotation periods. Thus, it is possible to estimate the total angular momentum ($L$) carried by each star (Figure \ref{fig:angmom}), using the Gaia-based radii and masses from the TESS input catalog \citep{stassun2019}. For simplicity, we calculate $L$ as for a solid-body rotator: $L=(2/5)MR^2\Omega$, with $\Omega = 2\pi/P$. We do not attempt in this exercise to account for possible core--envelope decoupling or other possible effects (e.g., metallicity).\footnote{Metallicity is implicitly included in the radius calculation, as it is derived from the Stefan--Boltzmann Law, thus requiring only apparent $G$, parallax, a color-based \teff, and a bolometric correction.
Masses, however, were calculated solely from \teff, and so do not account for metallicity or stellar evolution \citep{stassun2019}.} Consequently, the quantitative $L$ values that we report here should be regarded as a proxy for the true angular momentum content that we will utilize for the purpose of relating the observables to age for gyrochronology applications.

\begin{figure}[!t]
\includegraphics[width=\linewidth]{angmomg2.pdf}
\caption{Angular momentum of the full sample for ``slow sequence'' stars with rotation periods longer than 2 days. Larger points are from the literature for clusters observed by Kepler. Note the significant change in $L$ as a function of age for the M dwarfs, despite the lack of a significant change in the rotation period (Figure \ref{fig:periodogram}). \label{fig:angmom}}
\end{figure}

We find that after excluding the fast rotators/binaries (i.e., excluding stars with periods shorter than 2 days; see Section~\ref{sec:binaries}), the age gradient is much more pronounced in $L$ space than it is in period space, indeed becoming flatter and monotonic with $T_{\rm eff}$ instead of inverse horseshoe-shaped (i.e., having a peak at $\sim$4000 K and decreasing towards hotter and cooler stars). In addition, whereas rotation periods of slow-sequence stars do not evolve much during the pre-main-sequence (see Figure~\ref{fig:periodogramage}), the stars are in fact rapidly shrinking in radius, such that their $L$ content is rapidly decreasing, manifesting as a greater spread in $L$ than in rotation period. Although the turnover still occurs, it is found at somewhat cooler temperatures than in period space, and the cool stars beyond the turnover are largely excluded by the cut of 2 days.

Using the angular momentum, $L$, we fit an empirical gyrochronology relation. We first find the average $L$ for a star of a given temperature and age. Each age bin is spaced by 0.1~dex, including sources within $\pm$0.1~dex of the bin center. We further bin \teff\ within each age bin, with \teff\ bins spaced by 50~K, and including stars within $\pm$200~K. We clean the sample in each such bin by finding the standard deviation in $L$ and excluding sources that deviate by more than 1$\sigma$ from the median. We then recompute the median $L$ of the bin and iterate to convergence. This process is applied using periods presented in this work up to ages of 8.5~dex. At older ages, our sample is highly incomplete due to missing longer periods (see Section~\ref{sec:periods}). As such, we tether our sample to the angular momentum of the 670-Myr-old Praesepe \citep{douglas2019,cantat-gaudin2020} and the 1-Gyr-old NGC~6811 \citep{curtis2019a}, thus extending the relations up to 1~Gyr.

We fit a simple polynomial relationship to the resulting data to estimate $L$ as a function of \teff\ and age as:
\begin{equation}\label{eqn1}
\begin{split}
\log L=a_0+a_1\logt+a_2(\logt)^2+a_3(\logt)^3+\\
b_0\log t +b_1\log t\logt+b_2\log t(\logt)^2
\end{split}
\end{equation}
\noindent where $\logt$ is the $\log_{10}$ of \teff\ in K, and $\log t$ is the $\log_{10}$ of age in years. This formalism is chosen to be reversible, such that, given a measurement of $L$, the age can be estimated via:
\begin{equation}
\begin{split}
\log t=(\log L-a_0-a_1\logt-a_2(\logt)^2-\\a_3(\logt)^3)/(b_0+b_1\logt+b_2(\logt)^2)
\end{split}
\end{equation}
The fitted coefficients are presented in Table~\ref{tab:coeff} and the fit itself is shown in Figure \ref{fig:fit}.
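For convenience, the following minimal sketch (assuming Python; the SI conversion constants and function names are ours) evaluates the solid-body proxy $L=(2/5)MR^2\Omega$ and inverts Eq.~(2) using the $L$ coefficients of Table~\ref{tab:coeff}.

\begin{verbatim}
# Sketch: solid-body angular momentum proxy and gyrochronological age.
import numpy as np

A = [-3506.5298, 2697.8409, -677.8322, 56.2650]  # a_0..a_3 (L column)
B = [113.3048, -62.9542, 8.7066]                 # b_0..b_2 (L column)
MSUN, RSUN, DAY = 1.989e30, 6.957e8, 86400.0     # SI conversions

def log_L(mass_msun, radius_rsun, period_days):
    """log10 of L = (2/5) M R^2 Omega, Omega = 2 pi / P [kg m^2 s^-1]."""
    omega = 2.0 * np.pi / (period_days * DAY)
    return np.log10(0.4 * mass_msun * MSUN
                    * (radius_rsun * RSUN) ** 2 * omega)

def gyro_log_age(logL, teff):
    """Invert Eq. (1) for log10(age/yr), following Eq. (2)."""
    x = np.log10(teff)
    num = logL - A[0] - A[1] * x - A[2] * x**2 - A[3] * x**3
    return num / (B[0] + B[1] * x + B[2] * x**2)
\end{verbatim}

For example, \texttt{gyro\_log\_age(log\_L(0.9, 0.85, 8.0), 5300)} returns the log age in years for a hypothetical star; any such estimate is meaningful only within the validity domain discussed below.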
The coefficients are strongly correlated; as such, we do not provide uncertainties for them. However, the number of coefficients cannot be reduced to minimize the correlation, as this leads to a poorer fit. Separately, we also repeat a similar exercise with the specific angular momentum $H \equiv L/M$. Note that the actual computation of $H$ does not invoke the stellar mass at all; we simply calculate $H$ using the stellar radius and rotation period alone. Interestingly, the scatter about the fitted relations is generally tighter in $L$ than in $H$. \begin{deluxetable}{ccc} \tablecaption{Fitted coefficients for gyrochronology relations \label{tab:coeff}} \tabletypesize{\footnotesize} \tablewidth{\linewidth} \tablehead{ \colhead{Coefficient} & \colhead{Value ($L$)} & \colhead{Value ($H$)} } \startdata $a_0$ & $-$3506.5298 & $-$3618.6016 \\ $a_1$ & 2697.8409 & 2712.4212 \\ $a_2$ & $-$677.8322 & $-$667.4448 \\ $a_3$ & 56.2650 & 54.0097 \\ $b_0$ & 113.3048 & 135.0571 \\ $b_1$ & $-$62.9542 & $-$74.6817 \\ $b_2$ & 8.7066 & 10.2867 \\ \enddata \end{deluxetable} \begin{figure*} \epsscale{1.1} \plotone{angmominterp1.pdf} \caption{A fit of the angular momentum $L$. Yellow dots show the data within each age bin; small dots are for sources with periods measured in this work, and larger dots are from the literature: Upper Sco \citep{rebull2018}, Pleiades \citep{rebull2016}, NGC 3532 \citep{fritzewski2021a}, Praesepe \citep{douglas2019}, and NGC 6811 \citep{curtis2019a}. The red line shows the median $L$ estimated within each age/\teff\ bin. The thick blue line shows the fit of $L$ for that age. Thinner blue lines are offset by $\pm$0.3 dex in age, and dotted lines are offset by $\pm$0.6 dex in age. $\sigma_L$ shows the typical scatter in $L$ relative to the fit in a given age bin. The range in $L$ corresponds to the valid domain of the fit. \label{fig:fit}} \end{figure*} The resulting relations are valid for nominally single pre-main-sequence and main-sequence stars (but not evolved stars) that follow the slow sequence, with 3000$<$\teff$<$6700~K, with rotation periods between 2 and 12 days, and with $\log L$(kg m$^2$ s$^{-1})>$41.8. This threshold restricts stars older than 300 Myr to $\gtrsim$4000 K. To estimate the precision with which our fit enables measurement of gyrochronological ages, we compared the ages it yields against the isochrone ages from the Theia catalog. The result is shown in Figure~\ref{fig:scatter}, which shows the distribution of age residuals (in the sense of gyrochronology-inferred age minus Theia nominal age, in dex) for the sample stars in broad age bins from $<$7.5~dex to $>$8.5~dex. The distributions have been ``deconvolved'' by subtracting in quadrature the systematic errors intrinsic to the Theia-based age scale \citep[see][]{kounkel2020}. Therefore, Figure~\ref{fig:scatter} represents the relative age precision, not the absolute age accuracy, obtainable from our angular-momentum based gyrochronology relations. For the youngest age bin ($<$7.5~dex), the rotation-derived ages are largely uninformative beyond confirming their overall youth. For the older age bins between 100 Myr and 1 Gyr, the distribution of the differences between the gyrochrone and the isochrone ages peaks at 0.1--0.2~dex, with a tail of significantly larger uncertainty. This is most likely because stars with ages $>$100 Myr have begun to fully converge onto the slow gyrochrone sequence, in contrast to their younger counterparts.
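To make the application of these relations concrete, the following minimal Python sketch (illustrative code, not the pipeline used in this work; variable and function names are ours) evaluates Equation~\ref{eqn1} and its inverse with the $L$ coefficients of Table~\ref{tab:coeff}, together with the solid-body $L=2/5MR^2\Omega$ defined above:
\begin{verbatim}
import numpy as np

# a0..a3 and b0..b2 for L from Table 2 (truncated precision)
A = [-3506.5298, 2697.8409, -677.8322, 56.2650]
B = [113.3048, -62.9542, 8.7066]

def log_L(log_teff, log_age):
    """Equation (1): log10 L given log10 Teff [K] and log10 age [yr]."""
    return (A[0] + A[1]*log_teff + A[2]*log_teff**2 + A[3]*log_teff**3
            + log_age*(B[0] + B[1]*log_teff + B[2]*log_teff**2))

def log_age(log_L_obs, log_teff):
    """Equation (2): invert Equation (1) for log10 age."""
    num = (log_L_obs - A[0] - A[1]*log_teff
           - A[2]*log_teff**2 - A[3]*log_teff**3)
    return num / (B[0] + B[1]*log_teff + B[2]*log_teff**2)

def angular_momentum(mass_kg, radius_m, period_days):
    """Solid-body L = (2/5) M R^2 Omega, with Omega = 2 pi / P (SI)."""
    return 0.4*mass_kg*radius_m**2 * 2.0*np.pi/(period_days*86400.0)

# e.g., a 5000 K star with log L = 42.4 yields log t of roughly 8.2
# (~150 Myr) with these truncated coefficients
print(log_age(42.4, np.log10(5000.0)))
\end{verbatim}
Such a sketch should only be trusted within the validity domain quoted above (3000--6700~K, periods of 2--12 days, and $\log L>41.8$).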
\begin{figure}[!ht] \centering \includegraphics[width=\linewidth]{scatter1.pdf} \includegraphics[width=\linewidth]{scatter.pdf} \caption{Top: a comparison of residuals between ages derived from angular momentum and the original ages derived from isochrone fitting. Bottom: the same as above, but with the uncertainty in the isochrone age subtracted in quadrature from the difference. Different lines show the typical scatter at different ages: it is most pronounced in the younger age bins. Sources older than 8.5 dex are from Kepler; younger sources are from TESS.} \label{fig:scatter} \end{figure} \subsection{Mapping the evolution of starspot properties}\label{sec:spots} Sources redder than the Kraft break have variability dominated by the rotation of starspots. The amplitude of variability decreases the closer a star's color is to the gap, as the size of the convective envelope shrinks: from typical variability of 0.5--1\% among young, low mass stars, to $<0.1$\% for stars near the Kraft break. There is also a significant dependence of variability on the age of the star. Among the sources younger than 10 Myr, as many as 60--80\% of stars have a strong dominant rotation period (Figure \ref{fig:fraction}). However, fewer than 10\% of stars with ages older than 1 Gyr appear to be periodic. The cause of this is twofold. First, many older stars may have a rotation period longer than 12 days, which, so far, cannot be recovered with TESS.\footnote{Kepler light curves towards the 1 Gyr old cluster NGC 6811 recovered periodicity for 171 stars out of 203, an 84\% fraction.} Second, the overall amplitude of variability also decreases with age as activity declines, potentially making periodicity harder to recover \citep{Morris2020}. A simplistic formula for translating the observed amplitude of variability into a spot coverage fraction $f_{\rm spot}$ uses the Stefan--Boltzmann relation, and can be written \begin{equation} f_{\rm spot}=2\sigma_{\rm var}/(1-T_{\rm spot}^4/T_{*}^4), \end{equation} where $T_*$ is the temperature of the unspotted photosphere, $T_{\rm spot}$ is the temperature of the spot, and $\sigma_{\rm var}$ is the standard deviation of the normalized light curve. This formula assumes that $2\sigma_{\rm var}$ corresponds to the total amplitude difference in flux between the times when a single spot faces toward us and away from us. Of course, there is no reason to expect a rotating star to have a single spot (group) on its surface, and the general problem of inferring a starspot map given only a light curve is inherently under-constrained \citep{Basri_2020,Luger_2021}. In reality, the observed amplitude of the spot-induced signal depends on whether any asymmetry is present in the longitudinal surface brightness distribution of the star. A star heavily covered with starspots shows little light curve variation if the observed ratio of dark to bright regions remains constant over each rotational cycle. Our approximation for the spot coverage fraction $f_{\rm spot}$ should therefore be interpreted as a suggestive number, but not one with a solid quantitative foundation. To use Equation 3 to infer spot filling fractions from the photometric amplitudes of the periodic variability in each light curve, we must adopt an assumed spot--photosphere temperature contrast (i.e., a value for $T_{\rm spot}/T_{*}$).
We follow \citet{berdyugina2005} and \citet{herbst2021} in making $T_{\rm spot}$ an explicit function of $T_{*}$: $T_{\rm spot}=(-3.58\times10^{-5}T_*^2+1.0188T_*-239.3)$ K. While we lack direct spectroscopic \teff\ measurements for the bulk of the sample, we adopt \teff\ from the TESS Input Catalog \citep{stassun2019} as $T_*$, and then use the equation above to predict the associated $T_{\rm spot}$. With this approach, we find $T_{\rm spot}$ = 4530 K and 2440 K for stars with $T_{*}=6000$~K and 3000~K, respectively. The resulting median spot fraction for the sample is shown in Figure \ref{fig:spotsize}. Indeed, we see a significant gradient in $f_{\rm spot}$ as a function of both age and temperature. The decrease in spot sizes is most pronounced at two evolutionary stages: after a pre-main-sequence star begins to develop a radiative core, and later, shortly after it settles onto the main sequence. For a solar-type star, we see a typical spot fraction of $\sim$5\% at the beginning of its life, decreasing by more than an order of magnitude beyond 1 Gyr. Such spot sizes are consistent with those observed by Kepler towards various young clusters \citep{Morris2020}. A mid-M dwarf initially has $f_{\rm spot}\sim$10\%, modestly decreasing to $\sim$7\% after 1 Gyr. As such stars remain fully convective throughout their life, they are able to sustain large spots even at a relatively advanced age (which is only a small fraction of their total lifetime). \begin{figure}[!t] \includegraphics[width=\linewidth]{spotfraction1.pdf} \includegraphics[width=\linewidth]{spotfraction2.pdf} \caption{Top: median spot fraction at different temperature and age bins. The white contours correspond to levels of 1.0, 2.6, 4.5, 5.9, and 7.3\%. Bottom: the same plot, but with the color corresponding to the standard deviation of the light curve ($\sigma_{\rm var}$), with \citet{baraffe2015} isochrones plotted on top in white. \label{fig:spotsize}} \end{figure} A young 4000 K star appears to have a typical spot size of $\sim$7\%. This is similar to the spectropolarimetric spot filling factor measurement of 4.5--6\% for the young T Tauri star DN~Tau \citep{donati2013}. However, it is significantly lower than the 80\% observed for LkCa 4 \citep{gully-santiago2017}, obtained through fitting a two-temperature model to the H and K band spectra. Similarly, it is inconsistent with the 50\% spot coverage of cool dwarfs in the Pleiades obtained through fitting TiO bands \citep{fang2016}. The difference may be systematic---it is possible that these spectroscopic studies preferentially track smaller spots that are more uniformly distributed across the photosphere. Such spots would not cause significant variability, as both sides of a star would be equally spotted. On the other hand, \citet{donati2013} reconstruct only a single large spot, which offers a better match to our underlying assumptions. But, while a large spot filling fraction is far from typical in our sample, we do observe some individual stars with inferred $f_{\rm spot}$ as high as $>30$\%. Among young 4000 K stars, only $\sim$3\% appear to have such large spots. Some of these sources with large $f_{\rm spot}$ may be attributable to the remaining eclipsing binary stars that cannot be easily filtered out from the data. However, known spectroscopic binaries \citep{price-whelan2020}, sources with large RUWE, and sources on the binary sequence do not appear to dominate this large $f_{\rm spot}$ tail.
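For reference, a minimal Python sketch (ours, illustrative only) of Equation 3 combined with the adopted $T_{\rm spot}$--$T_*$ relation:
\begin{verbatim}
def t_spot(t_star):
    """Spot temperature (K) from the relation adopted above."""
    return -3.58e-5*t_star**2 + 1.0188*t_star - 239.3

def f_spot(sigma_var, t_star):
    """Equation (3): spot filling fraction from the normalized
    light-curve scatter sigma_var and photospheric temperature."""
    return 2.0*sigma_var/(1.0 - (t_spot(t_star)/t_star)**4)

# e.g., a 4000 K star with 2% scatter gives f_spot of roughly 7%
print(f_spot(0.02, 4000.0))
\end{verbatim}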
\section{Discussion} \label{sec:discussion} \subsection{On the ability to use gyrochronology in the field} There have been many attempts to construct theoretical models and empirical relations describing the evolution of stellar rotation \citep[e.g.,][]{barnes2003, barnes2007, barnes2010, angus2019, spada2020} and angular momentum \citep[e.g.,][]{vanSaders2013, Matt2015, Garraffo2018}. The most popular parameterization for empirical gyrochronology, following \citet{barnes2007}, assumes that rotation's dependencies on mass (via $B-V$ color) and age are separable, such that stars already converged on the slowly rotating sequence spin down continuously with a common braking index. \citet{mamajek2008} report a typical age precision of 0.05 dex based on a fit to the rotation periods of FGK-type members of young canonical open clusters (e.g., $\alpha$~Per, Pleiades, M34, Hyades, spanning 80--700 Myr) to describe the mass/color dependence, with the Sun anchoring the age dependence (i.e., the braking index). One of the limitations of this approach is that young clusters show stars with the same age and mass spinning at a range of periods. For example, single K dwarfs in the Pleiades, excluding those with periods shorter than 2 days, span a factor of five in period (and therefore angular momentum). This degeneracy places a limit, not yet well quantified, on the age precision attainable with gyrochronology at particular masses and ages. This is presumably a relic of the initial angular momentum distribution, which some approaches account for \citep[e.g.,][]{barnes2010, Matt2015}; however, our relation instead tracks the broad trend defined by our large sample of stars and structures. In addition to this fundamental limitation due to the convergence process and timescale, all models proposed to date have also failed to describe the detailed spin-down behavior for FGK stars on the slowly rotating sequence. Magnetic braking appears to effectively shut down late in the main-sequence phase as stars approach Rossby numbers of $Ro \approx 2$ \citep[e.g., occurring at $\sim$2 Gyr and $\sim$4 Gyr for F and G dwarfs, respectively]{vanSaders2016, vanSaders2019, Hall2021}; this causes similar stars to pile up in rotation at a range of old ages \citep{David2022}. Earlier in their main sequence evolution, K~dwarfs appear to temporarily stall for an extended period of time at ages of 0.6--1.5~Gyr, which also piles up stars over this age range before spin-down resumes \citep[e.g.,][]{Agueros2018, curtis2019, curtis2020}. These deviations from a pure Skumanich-style $P_{\rm rot} \propto t^n$ relation \citep[e.g.,][]{skumanich1972, barnes2007, angus2019} greatly reduce the true age precision attainable with rotation at certain masses and ages. They have also impacted the accuracy of most calibrations. For example, \citet{curtis2020} showed that this class of model predicts ages for the three components of 36~Oph that vary by a factor of two, while the gyrochronal ages of 61~Cyg A and B are half that of their interferometry-constrained isochronal ages. In other words, although \citet{mamajek2008}, \citet{barnes2007}, and others have achieved seemingly high gyrochronal age precisions with their models based on the calibration data available at the time, newer data have proven the underlying assumptions of these models to be invalid, and the resulting ages to be quite inaccurate for low-mass stars in particular.
While the sample studied here is younger than the age when K dwarf stars appear to temporarily stall ($<$1~Gyr), \citet{curtis2020} suggested that G stars might also spin down very little between 100 and 200 Myr (8.1 and 8.3~dex in log age). This is on par with the typical 0.1--0.2~dex age uncertainties found for our empirical fit. Comparing our $L$--age relation with data for the Pleiades \citep{rebull2016a}, we find a rather poor fit across 4000--6000~K (see the $\log t = 8.1$~dex panel of Figure~\ref{fig:fit}): although the average rotation age for the ensemble is approximately correct, the warmer stars appear too young ($\sim$7.5~dex or 30~Myr) and the cooler stars appear too old ($\sim$8.6~dex or 400~Myr). It is possible that our empirical fit is simply not flexible enough to describe the evolution of $L$ across time and \teff. By 300--400~Myr, our relation appears to accurately describe the shape of the $L$--\teff\ distribution. However, we found that ages for the $\sim$350~Myr clusters NGC~3532 and NGC~7092 come out at 800--900~Myr ($\sim$2.5 times too high). This is, perhaps, partially due to inaccurate interstellar reddening corrections affecting the calculation of \teff, mass, and radius, but might also signify a systematic bias in the ages derived with our fit. We note that inaccuracies in interstellar reddening will most strongly affect the ages for the warmer late-F and early-G dwarfs in the steeper portion of the $L$--\teff--age space. For example, perturbing $A_V$ by $\pm$0.1 mag would cause 6000~K members of the $\sim$700~Myr Praesepe cluster to appear to be 100~Myr or over 1~Gyr if $A_V$ is under- or overestimated. Such issues ubiquitously plague all current approaches to age-date stars with rotation. Despite these challenges, Figure~\ref{fig:angmom} does show a continuous evolution of $L$ versus \teff\ over time that allows for ages to be estimated with a typical uncertainty of 0.1--0.2~dex.\footnote{It will be necessary in the future to cross-examine our empirical fit with other approaches in the literature \citep[e.g.,][]{spada2020} to determine if there is real improvement in the ages attained \citep[e.g., following][instead of taking each model's claimed uncertainties at face value]{Otani2022}, but this is beyond the scope of our present study.} Thus it is now possible to estimate a prior on individual stellar ages inferred from angular momenta, which can be measured across nearly the entire sky. At the moment, few rotation periods have been reported for field stars in TESS; however, rotation periods from Kepler \citep[e.g.,][]{mcquillan2014, Santos2021} and K2 \citep{reinhold2020, Gordon2021} are now available for large numbers of stars. We apply the aforementioned cuts on period and fitted age, and we also require radii $<$1.2~\rsun\ to exclude subgiants, as well as exclude sources on the binary sequence. This significantly limits the catalogs to only 7\% of their original size, as most of the remaining stars have longer periods. We examine the resulting age distribution of the subset where the age fit can be considered valid in Figure~\ref{fig:reinhold2020}. \begin{figure*}[!ht] \includegraphics[width=\linewidth]{gyro2.pdf} \caption{Sky map of sources in the \citet{mcquillan2014} and \citet{reinhold2020} catalogs in Galactic coordinates that satisfy the fit criteria. All the stars are color-coded by their estimated age, as inferred from the empirical gyrochrone relations given in Equation 2.
The large panels zoom in on certain fields containing notable clusters and the Kepler field (not drawn to scale). \label{fig:reinhold2020}} \end{figure*} We find that the stars that can be associated with various young open clusters included in these catalogs stand out from the field, and that their average age matches well the true age of the cluster. As such, with a large enough catalog of rotation periods, in the future it may be possible to identify groups of stars that have formed together even past the point where a population remains kinematically coherent as a moving group. Additionally, combined with completeness information as a function of age (e.g., Figure \ref{fig:fraction}), it may help to better derive the star formation history of the solar neighborhood. Furthermore, combining priors on age derived via gyrochronology with the priors derived from kinematics \citep{angus2020, LucyLu2021} or from chemical clocks \citep[e.g.,][]{daSilva2012,Spina2018, moya2022} may allow the derivation of more precise ages even for individual stars. \subsection{Implications for models of angular momentum evolution} The ability to measure angular momenta empirically for large numbers of stars with age estimates from a few Myr to a Gyr offers a new opportunity to inform theoretical models of angular momentum evolution. Indeed, long-standing questions persist regarding the respective roles of the different mechanisms by which stars of different masses/structures can shed angular momentum, and how those roles may change as the stars evolve. As a first application of our large catalog, we examine the {\it rate of change} in both $L$ and $H$ as a function of age and \teff\ by taking the derivative of our empirical relations (Equation~\ref{eqn1}) with respect to time. The results are shown in Figure~\ref{fig:dlh}. \begin{figure}[!ht] \centering \includegraphics[width=\linewidth]{dl.pdf} \includegraphics[width=\linewidth]{dh.pdf} \caption{Rate of loss of angular momentum $L$ (top) and specific angular momentum $H$ (bottom), as a function of temperature and age, assuming the empirical relation in Figure~\ref{fig:fit}. The black line tracks the temperature at which the greatest loss rate occurs for a given age.} \label{fig:dlh} \end{figure} We make the following observations. First, angular momentum loss is a ubiquitous feature of mid-F to mid-M type stars at all ages (up to the 1~Gyr age limit of the sample investigated in this work). Second, the rate of angular momentum loss changes precipitously during the first Gyr, decreasing by up to 3 orders of magnitude during this time. Third, the angular momentum loss rate is greatest for stars of roughly solar mass and above at all ages, with the maximal angular momentum loss rate shifting to slightly more massive stars as they age. Finally, M-type stars exhibit a significantly reduced rate of angular momentum loss at all ages compared to even slightly more massive stars, and this difference appears to be somewhat discontinuous across the fully convective boundary, suggesting that the dominant angular momentum loss mechanism(s) may be qualitatively different for fully convective stars and not merely quantitatively weaker. We also consider the rapid rotators. We have seen in Section \ref{sec:binaries} that they tend to be binaries.
Although an initial assumption might be that they are tight, tidally locked systems, Figure \ref{fig:binaries} shows that rapid rotation occurs ubiquitously across the binary sequence, including unresolved systems spanning separations of several au (the pixel scale for Gaia of 0.06'' corresponds to 30 au at a distance of 500 pc). We also find that there is an apparent evolution of the rapid rotators with age, such that their rotation periods lengthen on timescales of $\sim$1 Gyr and eventually approach the gyrochronological ``slow sequence''. This suggests that unresolved binaries---across a wide range of separations---emerge from the formation process as rapid rotators and then, because they are not all tidally locked, the majority of them are able to spin down. While we cannot answer precisely how the youngest binaries come to be rapidly rotating even if they are not close enough to tidally interact, the observational evidence presented here appears to require such a scenario. Confronting models of angular momentum evolution with these empirical trends will be an important next step in understanding the physics of angular momentum loss, an exercise that is now possible given the availability of empirical angular momentum measurements together with fine-grained, empirical age estimates for large numbers of stars in the field. \section{Conclusions} \label{sec:conclusion} We generate light curves from TESS FFI data for stars within 500~pc and with known ages from the Gaia-based Theia catalog \citepalias{kounkel2020}. Our resulting catalog of $\sim$100,000 periodic variable stars contains primary, and in some cases secondary, periods up to a maximum of 12 days. The zoo of variability classes represented in this catalog is illustrated in Figure~\ref{fig:periodogram}. We find a clear separation in the period distribution between stars with convective and radiative envelopes occurring at \teff$\sim$6700~K, corresponding to the Kraft break. In stars with outer convective envelopes, we find a strong correlation of variability with age. Up to 80\% of the youngest stars with age $\lesssim$10~Myr appear to be periodic, whereas only $\sim$10\% of stars with age $\gtrsim$1~Gyr can be recovered as such. This decrease is driven partly by older stars having rotation periods longer than can be measured within a single sector of TESS, and partly by the decrease of starspot sizes, which results in variability amplitudes too low to be securely detected. In addition to age, spot size also strongly depends on \teff, with lower mass stars being more heavily spotted on average. However, the recovered spot filling fraction may be underestimated in younger stars, as we are not sensitive to small and uniformly distributed spots. A significant fraction of stars are fast rotators. They are most common at ages of 40--60 Myr and less prominent at both younger and older ages. Initially this is due to the stars spinning up as they settle onto the main sequence. Eventually, magnetic braking allows them to spin down to have rotation periods that place them on the so-called ``slow/I-type sequence'' that has been found in previous works to be useful for gyrochronology \citep[see, e.g.,][and references therein]{barnes2007,barnes2010,mamajek2008}. We observe a clear and quite well-defined slow sequence at all ages considered in this study, from 10~Myr to 1~Gyr, confirming previous results \citep[e.g.,][]{rebull2018,douglas2019, curtis2019a}.
We also corroborate previous findings that rapid rotators with periods shorter than two days are dominated by likely binaries, presumably due to tidal interactions that spin up the stars and maintain the rapid rotation \citep[see, e.g.,][]{douglas2016,Douglas:2017,simonian2019}. Limiting the sample to only sources on the slow/I-type sequence, with periods between 2 and 12 days, we develop an empirical gyrochronology relation for the angular momentum evolution of pre- and main-sequence stars with convective envelopes. This fit allows angular momentum to be used to estimate ages from as young as 10~Myr and up to 1 Gyr. Importantly, through the use of angular momentum as opposed to rotation period alone, these empirical relations enable estimating ages of not only FGK-type stars but also M-type stars, because the angular momentum distribution is much smoother, simpler, more continuous, and monotonic compared to the rotation period distribution. As a result, we are also able to begin tracing in fine detail the nature of angular momentum loss in stars as a function of mass and age. For example, we find that the time-derivative of the angular momentum distribution exhibits a clear inflection at the fully convective boundary, suggesting that the dominant angular momentum loss mechanism(s) may be qualitatively different for fully convective stars and not merely quantitatively weaker. The typical scatter in the resulting gyrochronology-based age estimates is $\approx$0.2--0.3~dex, with the largest fractional uncertainties at the youngest ages, where the intrinsic scatter in the rotation periods is a much larger fraction of the average. Variable stars with radiative envelopes also have some sensitivity to age (see Appendix~\ref{sec:radiative}). We find a clear separation in primary periods between main sequence and subgiant $\delta$~Scuti stars in this age-limited sample. Similarly, we find a slight age gradient in slowly pulsating B-type variables, with the youngest stars having shorter periods. In $\gamma$~Dor type variables there does not appear to be a strong link between the dominant period and their age; however, their pulsation period does appear to be related to their rotation period, making it possible to extend the analysis of angular momentum evolution beyond the Kraft break. In the future, as all these pulsating variables tend to be highly periodic, it may be possible to further analyze higher pulsation frequencies to better understand their dependence on age. These data represent an important step forward in understanding rotation period and angular momentum evolution of stars as a function of age. No longer limited to just a handful of canonical clusters, angular momenta can now be measured for a large number of populations with a near-continuous age distribution. Such an unprecedented data volume allows for the most comprehensive analysis of gyrochronology relations to date, making it possible to estimate ages of field stars with greater precision.
\section{Introduction}\label{s:intro} In this article we introduce an efficient method for finding analytical solutions of boundary value problems for partial differential equations. The method is simple and straightforward, yet powerful. It is based on the works of Parker--Sochacki \cite{PS} and Semiyari--Shafer \cite{SS}. \section{The Algorithm for first order}\label{s:alg} Let us consider a first order partial differential equation \begin{equation}\label{basic.bvp} \begin{aligned} u_t &= G(t,x, u,u_x)\\ u(a,x) &= f(x)\\ \end{aligned} \end{equation} and \begin{equation}\label{bvp} \begin{aligned} u(t,b) &=\alpha(t)\\ \end{aligned} \end{equation} where $u_x\;=\;\frac{\partial u}{\partial x}$ and $u_t\;=\;\frac{\partial u}{\partial t}.$ Equation \eqref{basic.bvp} is equivalent to \begin{equation}\label{e.baic} \begin{aligned} u(t,x) &= f(x) \,+\, \int_a^t\,G(s,x, u(s,x),u_x(s,x))\,ds. \\ \end{aligned} \end{equation} If $u(t,x)$ is a solution of \eqref{basic.bvp} then \eqref{bvp} can be written as \begin{equation}\label{bvp-1} \begin{aligned} u(t,b) \,-\, \alpha(t) =& 0. \\ \end{aligned} \end{equation} Hence, $u(t,x)$ is equivalent to \begin{equation}\label{bvp-2} \begin{aligned} u(t,x) &= u(t,x)\,-\,(u(t,b) \,-\, \alpha(t)). \\ \end{aligned} \end{equation} The key idea of our proposed method for solving \eqref{basic.bvp} is that, within a Picard iteration scheme, we use \eqref{bvp-2} to update the approximation of $u(t,x)$ so that condition \eqref{bvp} is satisfied at each iteration. Thus, the iterates are \begin{equation}\label{it.1} \begin{aligned} u^{[0]}(t,x) &= f(x), \\ \end{aligned} \end{equation} and \begin{equation}\label{it.2} \begin{aligned} u^{[k+1]}(t,x) &= f(x)\,+\, \int_a^t\,G(s,x, u^{[k]},u^{[k]}_{x})\,ds \\ u^{[k+1]}(t,x) &= u^{[k+1]}(t,x)\,-\,( u^{[k+1]}(t,b)\,-\,\alpha(t)) \\ \end{aligned} \end{equation} where $u^{[k]}_{x}\;=\;\frac{\partial u^{[k]}}{\partial x}.$\\ This gives us the approximate solution to \eqref{basic.bvp} as a function of $t$ and $x$. \subsection{Test Example} \begin{example} Consider \begin{equation}\label{ex1} \begin{aligned} u_t &= -u_x+2+t+x\\ u(0,x) &= f(x)\\ u(t,0) &= \alpha(t)\\ \end{aligned} \end{equation} where $f(x)=1+x$ and $\alpha(t)=1+t$. \\ The exact solution is given by \[ \begin{aligned} u(t,x) &= (1+t)(1+x)\\ \end{aligned} \] Equation \eqref{ex1} can be set up as follows: \begin{equation}\label{N.ex1} \begin{aligned} u_t &= -u_x+2+t+x\\ u(0,x) &= f(x)\\ \end{aligned} \end{equation} and \begin{equation}\label{b.ex1} \begin{aligned} u(t,0) &= \alpha(t)\\ \end{aligned} \end{equation} The initial value is $u^{[0]}(t,x)=1+x$. Thus the iterates are \begin{equation} \label{e:iteration1} \begin{aligned} u^{[k+1]}(t,x) &= 1+x \, + \, \int_0^t\,(-u^{[k]}_x\,+\, 2\,+\,s\,+\,x)\,ds, \\ u^{[k+1]}(t,x) &= u^{[k+1]}(t,x)\,-\,( u^{[k+1]}(t,0)\,-\,(1\,+\,t)). \\ \end{aligned} \end{equation} We obtain the exact solution by our very first iteration: \begin{equation} \begin{aligned} u^{[1]}(t,x) &= (1+x) + \, \int_0^t\,(-1+2+s+x)\,ds \\ &=(1+x)+ t+\frac{t^2}{2}+xt\\ u^{[1]}(t,x) &= u^{[1]}(t,x)\,-\,(u^{[1]}(t,0)\,-\,\alpha(t)) \\ &= 1+x+ t+\frac{t^2}{2}+xt\,-\,\Big(1+ t+\frac{t^2}{2}\,-\,(1+t)\Big) \\ &=1+x+t+xt\\ &=(1+x)(1+t) \end{aligned} \end{equation} Hence the absolute and relative errors are zero. \end{example} \begin{example} In this example we consider a non-linear first order PDE: \begin{equation}\label{basic.bvp.2} \begin{aligned} A_2\,u_t + A_1\,u^{m+1}_x &= A_3\,u^j\,+\,A_4\,\exp(A_5\,u)+E(t,x).
\end{aligned} \end{equation} where $u_x\;=\;\frac{\partial u}{\partial x}$ and $u_t\;=\;\frac{\partial u}{\partial t}$, and where (with $y=\frac{x}{b}$) $$E(t,x)=\exp(t+y)\{A_2+A_1b^{-1}(m+1)\exp[m(t+y)]\}\,-\,A_3\exp[j(t+y)]\,-\,A_4\exp[A_5\exp(t+y)],$$ with $f(x)=\exp(\frac{x}{b})$ and $\alpha(t)=\exp(t)$.\\ The initial values are \begin{equation}\label{iv.bvp2} \begin{aligned} u(0,x) &= f(x) \end{aligned} \end{equation} and the boundary values are \begin{equation}\label{bv.bvp2} \begin{aligned} u(t,0) &= \alpha(t). \end{aligned} \end{equation} The exact solution is \[ \begin{aligned} u(t,x) &= \exp(t+y),\quad y=\frac{x}{b}.\\ \end{aligned} \] The problem \eqref{basic.bvp.2}--\eqref{bv.bvp2} is written as \begin{equation}\label{n.basic.bvp2} \begin{aligned} u_t &=\frac{1}{A_2}\Big(-A_1\,u^{m+1}_x+ A_3\,u^j\,+\,A_4\,\exp(A_5\,u)+E(t,x)\Big).\\ u(0,x) &= f(x) \end{aligned} \end{equation} and \begin{equation}\label{n.bv.bvp2} \begin{aligned} u(t,0) &= \alpha(t). \end{aligned} \end{equation} We define the auxiliary variables to be \begin{equation} \label{e:aux} \begin{aligned} v &= \exp(t) \\ T &= \exp(-u) \\ P &= \exp(t+y) \\ R &= \exp(-P) \\ \end{aligned} \end{equation} For ease of calculation we define $A_6\;=\;-A_5$. Hence we will have a system of first order ODEs \begin{equation} \label{e:system} \begin{aligned} u' &= \frac{1}{A_2}\{-A_1(u^{m+1})_x +A_3 u^j + A_4T^{A_6} + E\}\\ v' &= v \\ T' &= -u'T \\ P' &= P \\ R' &= -PR \\ E' &= P\{A_2+A_1b^{-1}(m+1)^2P^m\}\,-\,A_3jP^j\,+\,A_4A_6PR^{A_6} \end{aligned} \end{equation} where the prime notation denotes the derivative with respect to $t$.\\ The initial values, i.e., the first approximations for each variable, are \begin{equation} \label{e:init} \begin{aligned} u^{[0]} &= f\\ v^{[0]} &= 1 \\ T^{[0]} &= \exp(-f) \\ P^{[0]} &= \exp(y) \\ R^{[0]} &= \exp(-\exp(y)) \\ E^{[0]} &=\exp(y)\{A_2+A_1b^{-1}(m+1)\exp[my]\}\,\\ &\quad-\,A_3\exp(jy)\,-\,A_4\exp[-A_6\exp(y)] \end{aligned} \end{equation} Let us define $$E_0\;=\;E^{[0]}.$$ The iterations would be \begin{equation} \label{e:iteration1.bc} \begin{aligned} u^{[k+1]} &= f + \frac{1}{A_2}\int_0^t\,\Big(-A_1([u^{[k]}]^{m+1})_x +A_3 [u^{[k]}]^j + A_4[T^{[k]}]^{A_6} + E^{[k]}\Big) \,ds \\ v^{[k+1]} &= 1 + \int_0^t\,v^{[k]}\,ds \\ T^{[k+1]} &= \exp(-f) - \int_0^t\, \big(u^{[k+1]}\big)'\,T^{[k]}\,ds \\ P^{[k+1]} &= \exp(y) + \int_0^t\, P^{[k]}\,ds \\ R^{[k+1]} &= \exp(-\exp(y)) - \int_0^t\,P^{[k]}R^{[k]} \,ds \\ E^{[k+1]} &=E_0 + \int_0^t\,P^{[k]}\{A_2+A_1b^{-1}(m+1)^2[P^{[k]}]^m\}\,-\,A_3j[P^{[k]}]^j\,+\,A_4A_6P^{[k]}[R^{[k]}]^{A_6}\,ds \\ u^{[k+1]}(t,x) &= u^{[k+1]}(t,x) \,-\,(u^{[k+1]}(t,0)\,-\,\alpha(t)) \\ \end{aligned} \end{equation} Let us take the initial data suggested in \cite{Y}, where $m=1,\;b=1,\;A_1=1,\;A_2=1$. We have considered several cases; the error plots and maximum errors after 4 iterations are provided below. \begin{enumerate} \item {\bf Case 1:} $j=1,\;A_3=0,\;A_4=0,\;A_5=0$.\\ Absolute and relative plots of the error in the final approximation $u^{[5]}(t,x)$ are shown in Figures \ref{Case 1} and \ref{Case 1.1}, respectively. Rounded to 5 decimal digits, the maximum error after 4 iterations is $\epsilon \approx 0.00439$. \begin{figure}[H] \centering \includegraphics[width=3in]{C1} \caption{Error plot, $|u(t,x)-u_{approx}|$, with $n=4$ iterations. $u(t,x)$ represents the exact solution and $u_{approx}$ represents the approximate solution.}\label{Case 1} \end{figure} \begin{figure}[H] \centering \includegraphics[width=3in]{C1-1} \caption{Relative Error plot, $\frac{|u(t,x)-u_{approx}|}{|u(t,x)|}$, with $n=4$ iterations.
$u(t,x)$ represents the exact solution and $u_{approx}$ represents the approximate solution.}\label{Case 1.1} \end{figure} \item {\bf Case 2:} $j=1,\;A_3=-1,\;A_4=0,\;A_5=0$.\\ Absolute and relative plots of the error in the final approximation $u^{[5]}(t,x)$ are shown in Figures \ref{Case 2} and \ref{Case 2.1}, respectively. \begin{figure}[H] \centering \includegraphics[width=3in]{C2} \caption{Error plot, $|u(t,x)-u_{approx}|$, with $n=4$ iterations. $u(t,x)$ represents the exact solution and $u_{approx}$ represents the approximate solution.}\label{Case 2} \end{figure} \begin{figure}[H] \centering \includegraphics[width=3in]{C2-1} \caption{Relative Error plot.}\label{Case 2.1} \end{figure} \item {\bf Case 3:} $j=2,\;A_3=-1,\;A_4=0,\;A_5=0$.\\ Absolute and relative plots of the error in the final approximation $u^{[5]}(t,x)$ are shown in Figures \ref{Case 3} and \ref{Case 3.1}, respectively. \begin{figure}[H] \centering \includegraphics[width=3in]{C3} \caption{Error plot, $|u(t,x)-u_{approx}|$, with $n=4$ iterations. $u(t,x)$ represents the exact solution and $u_{approx}$ represents the approximate solution.}\label{Case 3} \end{figure} \begin{figure}[H] \centering \includegraphics[width=3in]{C3-1} \caption{Relative Error plot.}\label{Case 3.1} \end{figure} \item {\bf Case 4:} $j=2,\;A_3=1,\;A_4=0,\;A_5=0$.\\ Absolute and relative plots of the error in the final approximation $u^{[5]}(t,x)$ are shown in Figures \ref{Case 4} and \ref{Case 4.1}, respectively. \begin{figure}[H] \centering \includegraphics[width=3in]{C4} \caption{Error plot, $|u(t,x)-u_{approx}|$, with $n=4$ iterations. $u(t,x)$ represents the exact solution and $u_{approx}$ represents the approximate solution.}\label{Case 4} \end{figure} \begin{figure}[H] \centering \includegraphics[width=3in]{C4-1} \caption{Relative Error plot.}\label{Case 4.1} \end{figure} \item {\bf Case 5:} $j=1,\;A_3=1,\;A_4=0,\;A_5=0$.\\ Absolute and relative plots of the error in the final approximation $u^{[5]}(t,x)$ are shown in Figures \ref{Case 5} and \ref{Case 5.1}, respectively. \begin{figure}[H] \centering \includegraphics[width=3in]{C5} \caption{Error plot, $|u(t,x)-u_{approx}|$, with $n=4$ iterations. $u(t,x)$ represents the exact solution and $u_{approx}$ represents the approximate solution.}\label{Case 5} \end{figure} \begin{figure}[H] \centering \includegraphics[width=3in]{C5-1} \caption{Relative Error plot.}\label{Case 5.1} \end{figure} \item {\bf Case 6:} $j=1,\;A_3=0,\;A_4=1,\;A_5=-1$.\\ Absolute and relative plots of the error in the final approximation $u^{[5]}(t,x)$ are shown in Figures \ref{Case 6} and \ref{Case 6.1}, respectively. \begin{figure}[H] \centering \includegraphics[width=3in]{C6} \caption{Error plot, $|u(t,x)-u_{approx}|$, with $n=4$ iterations. $u(t,x)$ represents the exact solution and $u_{approx}$ represents the approximate solution.}\label{Case 6} \end{figure} \begin{figure}[H] \centering \includegraphics[width=3in]{C6-1} \caption{Relative Error plot.}\label{Case 6.1} \end{figure} \item {\bf Case 7:} $j=1,\;A_3=1,\;A_4=1,\;A_5=-1$.\\ Absolute and relative plots of the error in the final approximation $u^{[5]}(t,x)$ are shown in Figures \ref{Case 7} and \ref{Case 7.1}, respectively. \begin{figure}[H] \centering \includegraphics[width=3in]{C7} \caption{Error plot, $|u(t,x)-u_{approx}|$, with $n=4$ iterations.
$u(t,x)$ represents the exact solution and $u_{approx}$ represents the approximate solution.}\label{Case 7} \end{figure} \begin{figure}[H] \centering \includegraphics[width=3in]{C7-1} \caption{Relative Error plot.}\label{Case 7.1} \end{figure} \end{enumerate} \end{example} \section{The Algorithm for second order}\label{s:alg2} Consider a second order PDE, \begin{equation}\label{basic.bvp1} u_{xx}\;=\;G(t,x, u,u_t,u_x, u_{tt}, u_{tx}). \end{equation} Similar to the first order case, we set up a system of first order ordinary differential equations (ODEs) using auxiliary variables, and then satisfy the boundary condition(s) iteratively. \subsection{Test Examples} \begin{example} Consider \begin{equation}\label{s.ex.1} \begin{aligned} u_{tt}&=-u_{xx} \end{aligned} \end{equation} where $u(t,a)=\exp(t)$, $u(t,b)=0$, $u(a,x)=\cos x$, and $u_t(a,x)=\cos x$. \\ The exact solution is given by \[ u(t,x)=\exp(t) \, \cos x\] Let $\alpha=\exp(t)$, $\beta=0$, $a=0$, and $b=\frac{\pi}{2}$. The equation \eqref{s.ex.1} with its conditions can be written as \begin{equation}\label{e.ex.1} \begin{aligned} u_{tt}&=-u_{xx}\\ u(a,x)&=\cos x\\ u_t(a,x)&=\cos x\\ \end{aligned} \end{equation} and \begin{equation}\label{c.ex.1} \begin{aligned} u(t,a)&=\exp(t)\;(=\alpha)\\ u(t,b)&=0\;(=\beta)\\ \end{aligned} \end{equation} If $u(t,x)$ is a solution to \eqref{e.ex.1}, then by \eqref{c.ex.1} we can represent $u(t,x)$ as \begin{equation}\label{c.ex1} \begin{aligned} u(t,x)&= u(t,x) \,-\,\frac{x-a}{b-a}(u(t,b)\,-\,\beta)-\,\frac{b-x}{b-a}(u(t,a)\,-\,\alpha) \\ \end{aligned} \end{equation} We define the auxiliary variables to be \begin{equation} \label{s:aux1} \begin{aligned} v &= u_t \\ U &= \exp(t). \end{aligned} \end{equation} Hence we will have a system of first order ODEs \begin{equation} \label{s:system1} \begin{aligned} u' &= v\\ v' &= -u_{xx} \\ U' &= U\\ \end{aligned} \end{equation} where the prime notation denotes the derivative with respect to $t$. The initial conditions are \begin{equation} \label{s:int1} \begin{aligned} u^{[0]} &= \cos x\; (=u_0)\\ v^{[0]} &= \cos x \;(=v_0) \\ U^{[0]} &= 1\; (=U_0) \\ \end{aligned} \end{equation} To solve \eqref{e.ex.1}, we use \eqref{c.ex1} within a Picard iteration scheme to update the approximation of $u(t,x)$ so that condition \eqref{c.ex.1} is satisfied at each iteration. Thus, the iterates are \begin{equation} \label{s:iteration1.bc.11} \begin{aligned} u^{[k+1]}(t,x) &= u_0 + \int_0^t\,v^{[k]}(s,x) \,ds \\ v^{[k+1]}(t,x) &= v_0 - \int_0^t\,u^{[k]}_{xx}(s,x)\,ds \\ U^{[k+1]}(t) &= U_0+ \int_0^t\, U^{[k]}(s)\,ds \\ u^{[k+1]}(t,x) &= u^{[k+1]}(t,x) \,-\,\frac{x-a}{b-a}(u^{[k+1]}(t,b)\,-\,\beta)-\,\frac{b-x}{b-a}(u^{[k+1]}(t,a)\,-\,\alpha). \\ \end{aligned} \end{equation} Absolute and relative plots of the error in the final approximation $u^{[5]}(t,x)$ are shown in Figures \ref{p3} and \ref{p3-1}, respectively. Rounded to 5 decimal digits, the maximum error after 4 iterations is $\epsilon \approx 0.0003$. \begin{figure}[H] \centering \includegraphics[width=3in]{L-2nd} \caption{Error plot, $|u(t,x)-u_{approx}|$, with $n=4$ iterations. $u(t,x)$ represents the exact solution and $u_{approx}$ represents the approximate solution.}\label{p3} \end{figure} \begin{figure}[H] \centering \includegraphics[width=3in]{L-2nd-1} \caption{Relative Error plot, $\frac{|u(t,x)-u_{approx}|}{|u(t,x)|}$, with $n=4$ iterations.
$u(t,x)$ represents the exact solution and $u_{approx}$ represents the approximate solution.}\label{p3-1} \end{figure} \end{example} \begin{example}{\bf Sine-Gordon.}\\ Consider \begin{equation}\label{s.ex2} \begin{aligned} u_{xx} &= u_{tt} - \sin u \end{aligned} \end{equation} where $u(t,a)=f(t)$, $u_x(t,a)=g(t)$, $u(a,x)=\alpha$, and $u(b,x)=\beta$. \\ The exact solution is given by \cite{Wolf} \[ u(t,x)=-4\, \tan^{-1}\Big(\frac{m}{\sqrt{1-m^2}}\frac{\sin \sqrt{1-m^2}t}{\cosh (mx)}\Big)\] Let $\alpha=0$, $\beta=-4\, \tan^{-1}\Big(\frac{m}{\sqrt{1-m^2}}\frac{\sin \sqrt{1-m^2}}{\cosh (mx)}\Big)$, $a=0$, and $b=1$, with $m^2\;<\;1$. The equation \eqref{s.ex2} with its conditions can be written as \begin{equation}\label{n.s.ex2} \begin{aligned} u_{xx} &= u_{tt} - \sin u\\ u(t,a) &= f(t)\\ u_x(t,a) &= g(t) \end{aligned} \end{equation} and \begin{equation}\label{c.s.ex2} \begin{aligned} u_{xx} &= u_{tt} - \sin u\\ u(a,x) &= \alpha(x)\\ u(b,x) &= \beta(x) \end{aligned} \end{equation} We define the auxiliary variables to be \begin{equation} \label{s:aux2} \begin{aligned} v &= u_x \\ U &= \sin u \\ V &= \cos u \\ \end{aligned} \end{equation} Hence we will have a system of first order ODEs \begin{equation} \label{s:system2} \begin{aligned} u' &= v\\ v' &= u_{tt} - U \\ U' &= vV\\ V' &= -vU\\ \end{aligned} \end{equation} where the prime notation denotes the derivative with respect to $x$.\\ The initial values, i.e., the first approximations for each variable, are \begin{equation} \label{s:int2} \begin{aligned} u^{[0]} &= f(t)\; (=u_0)\\ v^{[0]} &= g(t) \;(=v_0) \\ U^{[0]} &= \sin u_0\; (=U_0) \\ V^{[0]} &= \cos u_0\; (=V_0) \\ \end{aligned} \end{equation} The iterations would be \begin{equation} \label{s:iteration1.bc2} \begin{aligned} u^{[k+1]}(t,x) &= u_0 + \int_0^x\,v^{[k]}(t,s) \,ds \\ v^{[k+1]}(t,x) &= v_0 + \int_0^x\,\big(u^{[k]}_{tt}(t,s)-U^{[k]}(t,s)\big)\,ds \\ U^{[k+1]}(t,x) &= U_0+ \int_0^x\, v^{[k]}(t,s)V^{[k]}(t,s)\,ds \\ V^{[k+1]}(t,x) &= V_0- \int_0^x\, v^{[k]}(t,s)U^{[k]}(t,s)\,ds \\ u^{[k+1]}(t,x) &= u^{[k+1]}(t,x) \,-\,\frac{t-a}{b-a}(u^{[k+1]}(b,x)\,-\,\beta)-\,\frac{b-t}{b-a}(u^{[k+1]}(a,x)\,-\,\alpha) \\ \end{aligned} \end{equation} Absolute and relative plots of the error in the final approximation $u^{[5]}(t,x)$ are shown in Figures \ref{p4.1} and \ref{p4.1-1}, respectively, with $m=0.1$. The maximum error is approximately $\epsilon \approx 0.010$. \begin{figure}[H] \centering \includegraphics[width=3in]{Gorden-m01} \caption{Error plot, $|u(t,x)-u_{approx}|$, with $n=4$ iterations. $u(t,x)$ represents the exact solution and $u_{approx}$ represents the approximate solution.}\label{p4.1} \end{figure} \begin{figure}[H] \centering \includegraphics[width=3in]{Gorden-m01-1} \caption{Relative Error plot, $\frac{|u(t,x)-u_{approx}|}{|u(t,x)|}$, with $n=4$ iterations. $u(t,x)$ represents the exact solution and $u_{approx}$ represents the approximate solution.}\label{p4.1-1} \end{figure} Absolute and relative plots of the error in the final approximation $u^{[5]}(t,x)$ are shown in Figures \ref{p4.2} and \ref{p4.2-1}, respectively, with $m=0.5$. The maximum error is approximately $\epsilon \approx 0.05$. \begin{figure}[H] \centering \includegraphics[width=3in]{Gorden-m05} \caption{Error plot, $|u(t,x)-u_{approx}|$, with $n=4$ iterations.
$u(t,x)$ represents the exact solution and $u_{approx}$ represents the approximate solution.}\label{p4.2} \end{figure} \begin{figure}[H] \centering \includegraphics[width=3in]{Gorden-m05-1} \caption{Relative Error plot.}\label{p4.2-1} \end{figure} Absolute and relative plots of the error in the final approximation $u^{[5]}(t,x)$ are shown in Figures \ref{p4.3} and \ref{p4.3-1}, respectively, with $m=0.9$. The maximum error is approximately $\epsilon \approx 0.12$. \begin{figure}[H] \centering \includegraphics[width=3in]{Gorden-m09} \caption{Error plot, $|u(t,x)-u_{approx}|$, with $n=4$ iterations. $u(t,x)$ represents the exact solution and $u_{approx}$ represents the approximate solution.}\label{p4.3} \end{figure} \begin{figure}[H] \centering \includegraphics[width=3in]{Gorden-m09-1} \caption{Relative Error plot.}\label{p4.3-1} \end{figure} \end{example} \begin{example}\label{ex5} Consider \begin{equation}\label{s.ex5} \begin{aligned} u_{xx} &= -2u_t\,u_x \end{aligned} \end{equation} where $u(t,a)=\alpha$, $u(t,b)=\beta$, and $u(a,x)=\frac{2}{x+1}$, with $\alpha=t+2$, $\beta=\frac{2+t}{2}$, $a=0$, and $b=1$. \\ The exact solution is given by \[ \begin{aligned} u(t,x) &= \frac{2+t}{1+x} \\ \end{aligned} \] The equation \eqref{s.ex5} with its conditions is written as \begin{equation}\label{n.s.ex5} \begin{aligned} u_{xx} &= -2u_t\,u_x\\ u(t,0) &= \alpha(t)\\ u(t,1) &= \beta(t)\\ \end{aligned} \end{equation} and \begin{equation}\label{c.s.ex5} \begin{aligned} u(0,x) &= \frac{2}{x+1}\\ \end{aligned} \end{equation} We define the auxiliary variable to be \begin{equation} \label{s:aux5} \begin{aligned} v &= u_x \\ \end{aligned} \end{equation} Hence we will have a system of first order ODEs \begin{equation} \label{s:system5} \begin{aligned} u' &= v\\ v' &= -2u_tv \\ \end{aligned} \end{equation} where the prime notation denotes the derivative with respect to $x$,\\ with initial values \begin{equation} \label{s:initial5} \begin{aligned} u(t,0) &= \alpha\\ v(t,0) &= \gamma\\ \end{aligned} \end{equation} where $\gamma$ is unknown. However, it can be approximated by Semiyari--Shafer's method \cite{SS}. The first approximations for each variable are \begin{equation} \label{s:aux} \begin{aligned} u^{[0]} &= t+2\\ v^{[0]} &= \frac{\beta-\alpha}{b-a} \\ \gamma^{[0]} &=\frac{1}{b-a}(\beta-\alpha + 2\int_0^1\,(b-s) u^{[0]}_t(t,s)\,v^{[0]}(t,s)\,ds) \end{aligned} \end{equation} The iterations would be \begin{equation} \label{s:iteration1.bc} \begin{aligned} u^{[k+1]}(t,x) &= \alpha + \int_0^x\,v^{[k]}(t,s) \,ds \\ v^{[k+1]}(t,x) &= \gamma^{[k]} - 2\int_0^x\, u^{[k]}_{t}(t,s)\,v^{[k]}(t,s)\,ds \\ \gamma^{[k+1]} &= \frac{1}{b-a}(\beta-\alpha + 2\int_0^1\,(b-s) u^{[k]}_{t}(t,s)\,v^{[k+1]}(t,s)\,ds) \\ u^{[k+1]}(t,x) &= u^{[k+1]}(t,x) - (u^{[k+1]}(0,x) -\frac{2}{x+1}).\\ \end{aligned} \end{equation} After 4 iterations, the exact $\gamma=-(t+2)$ is approximated, to 5 decimal digits, by $\gamma^{[5]}=-(0.99673\,t+1.99346)$, and the maximum error is approximately $\epsilon \approx 0.010$. The error in the final approximation $u^{[5]}(t,x)$ is shown in Figure \ref{p5}. \begin{figure}[H] \centering \includegraphics[width=3in]{SemSh} \caption{Error plot, $|u(t,x)-u_{approx}|$, with $n=4$ iterations. $u(t,x)$ represents the exact solution and $u_{approx}$ represents the approximate solution.}\label{p5} \end{figure} The relative error is shown in Figure \ref{p5-1}. \begin{figure}[H] \centering \includegraphics[width=3in]{SemSh-1} \caption{The Relative Error plot, $\frac{|u(t,x)-u_{approx}|}{|u(t,x)|}$, with $n=4$ iterations.
$u(t,x)$ represents the exact solution and $u_{approx}$ represents the approximate solution.}\label{p5-1} \end{figure} \end{example} \begin{example} Let us revisit Example~\ref{ex5}. The problem can be rewritten as \begin{equation}\label{s.ex6} \begin{aligned} u_t &= -\frac{u_{xx}}{2u_x} \end{aligned} \end{equation} where $u(t,a)=\alpha$, $u(t,b)=\beta$, and $u(0,x)=f$, with $\alpha=t+2$, $\beta=\frac{2+t}{2}$, $f=\frac{2}{x+1}$, $a=0$, and $b=1$. \\ The exact solution is given by \[ \begin{aligned} u(t,x) &= \frac{2+t}{1+x} \end{aligned} \] The problem \eqref{s.ex6} with its conditions is written as \begin{equation}\label{n.s.ex6} \begin{aligned} u_t &= -\frac{u_{xx}}{2u_x}\\ u(0,x) &= f \end{aligned} \end{equation} and \begin{equation}\label{c.s.ex6} \begin{aligned} u(t,0) &= \alpha(t)\\ u(t,1) &= \beta(t) \end{aligned} \end{equation} If $u(t,x)$ is a solution of \eqref{n.s.ex6} then it is equivalent to \begin{equation} \begin{aligned} u(t,x) &= u(t,x) - \frac{x-a}{b-a}(u(t,b) -\beta)- \frac{b-x}{b-a}(u(t,a) -\alpha).\\ \end{aligned} \end{equation} Here no auxiliary variables are needed; the equation to iterate is \begin{equation} \begin{aligned} u_t &= -\frac{u_{xx}}{2u_x} \end{aligned} \end{equation} with initial value \begin{equation} \begin{aligned} u(0,x) &= f\\ \end{aligned} \end{equation} The first approximation is \begin{equation} \begin{aligned} u^{[0]} &= f\\ \end{aligned} \end{equation} The iterations would be \begin{equation} \begin{aligned} u^{[k+1]}(t,x) &= f - \int_0^t\,\frac{u^{[k]}_{xx}}{2u^{[k]}_x}(s,x) \,ds \\ u^{[k+1]}(t,x) &= u^{[k+1]}(t,x) - \frac{x-a}{b-a}(u^{[k+1]}(t,b) -\beta)- \frac{b-x}{b-a}(u^{[k+1]}(t,a) -\alpha).\\ \end{aligned} \end{equation} After a few iterations we obtain $u^{[3]}(t,x)=\frac{t+2}{x+1}$, which coincides with the exact solution. Hence the absolute and relative errors are zero. \end{example} \section{Conclusion}\label{s:concl} We have introduced a new analytic method for solving boundary value problems of partial differential equations. The method is easy to implement, computationally efficient, and highly accurate. The output of our method is a function that approximates the exact solution, whereas in current numerical methods the output is a sequence of $n+1$ points that approximate the values of the unknown solution at $n+1$ $t$-values; the error can be computed only at those points, and one must interpolate between them. \section{Acknowledgement}\label{s:Ack} We thank James Sochacki, professor of Mathematics at James Madison University, for sharing his pearls of wisdom with us during the course of this research, which greatly improved the manuscript.
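\section{Appendix: Sample implementation}\label{s:appendix} As a minimal illustration of the first order scheme of Section~\ref{s:alg}, the following Python/sympy sketch (an illustration only, not an optimized implementation) reproduces Example 1 and recovers the exact solution $(1+t)(1+x)$:
\begin{verbatim}
import sympy as sp

t, x, s = sp.symbols('t x s')

f = 1 + x          # initial condition u(0, x)
alpha = 1 + t      # boundary condition u(t, 0)

u = f              # u^[0]
for _ in range(2):
    # Picard step: u = f + int_0^t G(s, x, u, u_x) ds
    G = -sp.diff(u, x).subs(t, s) + 2 + s + x
    u = sp.expand(f + sp.integrate(G, (s, 0, t)))
    # boundary correction: subtract (u(t, 0) - alpha)
    u = sp.expand(u - (u.subs(x, 0) - alpha))

print(sp.factor(u))   # (t + 1)*(x + 1), the exact solution
\end{verbatim}
The same loop, with the integrand and boundary correction replaced accordingly, implements the other examples above.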
\subsection{Baselines} \paragraph{Sentence-Concat:} To demonstrate the difference between sentence-level and paragraph captions, this baseline samples and concatenates five sentence captions from a model~\cite{karpathy2015deep} trained on MS COCO captions~\cite{lin2014microsoft}. The first sentence uses beam search (beam size $=2$) and the rest are sampled. The motivation for this is as follows: the image captioning model first produces the sentence that best describes the image as a whole, and subsequent sentences use sampling in order to generate a diverse range of sentences, since the alternative is to repeat the same sentence from beam search. We have validated that this approach works better than using either only beam search or only sampling, as the intent is to make the strongest possible comparison at a task level to standard image captioning. We also note that, while Sentence-Concat is trained on MS COCO, all images in our dataset are also in MS COCO, and our descriptions were also written by users on Amazon Mechanical Turk. \paragraph{Image-Flat:} This model uses a flat representation for both images and language, and is equivalent to the standard image captioning model NeuralTalk~\cite{karpathy2015deep}. It takes the whole image as input, and decodes into a paragraph token by token. We use the publicly available implementation of \cite{karpathy2015deep}, which uses the 16-layer VGG network~\cite{simonyan2015very} to extract CNN features and projects them as input into an LSTM~\cite{hochreiter1997long}, training the whole model jointly end-to-end. \paragraph{Template:} This method represents a very different approach to generating paragraphs, similar in style to an open-world version of more classical methods like BabyTalk~\cite{kulkarni2011baby}, which converts a structured representation of an image into text via a handful of manually specified templates. The first step of our template-based baseline is to detect and describe many regions in a given target image using a pre-trained dense captioning model~\cite{johnson2016densecap}, which produces a set of region descriptions tied with bounding boxes and detection scores. The region descriptions are parsed into a set of subjects, verbs, objects, and various modifiers according to part of speech tagging and a handful of TokensRegex~\cite{chang2014tokensregex} rules, which we find suffice to parse the vast majority ($\geq 99\%$) of the fairly simplistic and short region-level descriptions. Each parsed word is scored by the sum of its detection score and the log probability of the generated tokens in the original region description. Words are then merged into a coherent graph representing the scene, where each node combines all words with the same text and overlapping bounding boxes. Finally, text is generated using the top $N = 25$ scored nodes, prioritizing \verb|subject-verb-object| triples in generation, and representing all other nodes with existential ``there is/are'' statements; a simplified sketch of this scoring and generation procedure is given below. \paragraph{DenseCap-Concat:} This baseline is similar to Sentence-Concat, but instead concatenates DenseCap~\cite{johnson2016densecap} predictions as separate sentences in order to form a paragraph. The intent of analyzing this method is to disentangle two key parts of the Template method: captioning and detection (\ie DenseCap), and heuristic recombination into paragraphs. We combine the top $n=14$ outputs of DenseCap to form DenseCap-Concat's output based on validation CIDEr+METEOR.
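For concreteness, the Template baseline's scoring and generation heuristics can be sketched as follows (a heavy simplification; field names such as \verb|det_score| are hypothetical, and the actual system parses region captions with part-of-speech tags and TokensRegex rules and merges nodes by text and box overlap):
\begin{verbatim}
def select_template_nodes(parsed_words, n=25):
    """Score each parsed word by detection score plus caption
    log-probability and keep the top-n nodes."""
    scored = sorted(parsed_words,
                    key=lambda w: w['det_score'] + w['log_prob'],
                    reverse=True)
    return scored[:n]

def generate_text(nodes):
    """Emit subject-verb-object triples first, then existential
    statements for the remaining nodes (grossly simplified)."""
    svo = [w for w in nodes if w['role'] in ('subject', 'verb', 'object')]
    rest = [w for w in nodes if w['role'] == 'modifier']
    sentences = [' '.join(w['text'] for w in svo[i:i+3]).capitalize() + '.'
                 for i in range(0, len(svo), 3)]
    sentences += ['There is ' + w['text'] + '.' for w in rest]
    return ' '.join(sentences)
\end{verbatim}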
\paragraph{Other Baselines:} ``Regions-Flat-Scratch'' uses a flat language model for decoding and initializes it from scratch. The language model input is the projected and pooled region-level image features. ``Regions-Flat-Pretrained'' uses a pre-trained language model. These baselines are included to show the benefits of decomposing the image into regions and pre-training the language model. \subsection{Implementation Details} All baseline neural language models use two layers of LSTM~\cite{hochreiter1997long} units with 512 dimensions. The feature pooling dimension $P$ is 1024, and we set $\lambda_{sent}=5.0$ and $\lambda_{word}=1.0$ based on validation set performance. Training is done via stochastic gradient descent with Adam~\cite{kingma2015adam}, implemented in Torch. Of critical note is that model checkpoint selection is based on the best combined METEOR and CIDEr score on the validation set -- although models tend to minimize validation loss fairly quickly, it takes much longer training for METEOR and CIDEr scores to stop improving. \subsection{Main Results} \begin{figure*} \includegraphics[width=0.90\textwidth]{ResultsFigure.pdf} \caption{ Example paragraph generation results for our model (Regions-Hierarchical) and the Sentence-Concat and Template baselines. The first three rows are positive results and the last row is a failure case.} \label{fig:qualitative} \end{figure*} We present our main results at generating paragraphs in Tab.~\ref{tab:main_results}, which are evaluated across six language metrics: CIDEr~\cite{vedantam2015cider}, METEOR~\cite{denkowski2014meteor}, and BLEU-\{1,2,3,4\}~\cite{papineni2002bleu}. The Sentence-Concat method performs poorly, achieving the lowest scores across all metrics. Its lackluster performance provides further evidence of the stark differences between single-sentence captioning and paragraph generation. Surprisingly, the hard-coded template-based approach performs reasonably well, particularly on CIDEr, METEOR, and BLEU-1, where it is competitive with some of the neural approaches. This makes sense: the template approach is provided with a strong prior about image content since it receives region-level captions~\cite{johnson2016densecap} as input, and the many expletive ``there is/are'' statements it makes, though uninteresting, are safe, resulting in decent scores. However, its relatively poor performance on BLEU-3 and BLEU-4 highlights the limitation of reasoning about regions in isolation -- it is unable to produce much text relating regions to one another, and further suffers from a lack of ``connective tissue'' that transforms paragraphs from a series of disconnected thoughts into a coherent whole. DenseCap-Concat scores worse than Template on all metrics except CIDEr, illustrating the necessity of Template's caption parsing and recombination. Image-Flat, trained on the task of paragraph generation, outperforms Sentence-Concat, and the region-based reasoning of Regions-Flat-Scratch improves results further on all metrics. Pre-training results in improvements on all metrics, and our full model, Regions-Hierarchical, achieves the highest scores among all methods on every metric except BLEU-4. One hypothesis for the mild superiority of Regions-Flat-Pretrained on BLEU-4 is that its non-hierarchical structure makes it better able to reproduce words exactly at the beginnings and ends of sentences, providing a slight boost in BLEU scores.
\subsection{Main Results}

\begin{figure*}
\includegraphics[width=0.90\textwidth]{ResultsFigure.pdf}
\caption{ Example paragraph generation results for our model (Regions-Hierarchical) and the Sentence-Concat and Template baselines. The first three rows are positive results and the last row is a failure case.}
\label{fig:qualitative}
\end{figure*}

We present our main results at generating paragraphs in Tab.~\ref{tab:main_results}, which are evaluated across six language metrics: CIDEr~\cite{vedantam2015cider}, METEOR~\cite{denkowski2014meteor}, and BLEU-\{1,2,3,4\}~\cite{papineni2002bleu}. The Sentence-Concat method performs poorly, achieving the lowest scores across all metrics. Its lackluster performance provides further evidence of the stark differences between single-sentence captioning and paragraph generation. Surprisingly, the hard-coded template-based approach performs reasonably well, particularly on CIDEr, METEOR, and BLEU-1, where it is competitive with some of the neural approaches. This makes sense: the template approach is provided with a strong prior about image content since it receives region-level captions~\cite{johnson2016densecap} as input, and the many expletive ``there is/are'' statements it makes, though uninteresting, are safe, resulting in decent scores. However, its relatively poor performance on BLEU-3 and BLEU-4 highlights the limitation of reasoning about regions in isolation -- it is unable to produce much text relating regions to one another, and it further suffers from a lack of ``connective tissue'' that transforms paragraphs from a series of disconnected thoughts into a coherent whole. DenseCap-Concat scores worse than Template on all metrics except CIDEr, illustrating the necessity of Template's caption parsing and recombination.

Image-Flat, trained on the task of paragraph generation, outperforms Sentence-Concat, and the region-based reasoning of Regions-Flat-Scratch improves results further on all metrics. Pre-training results in improvements on all metrics, and our full model, Regions-Hierarchical, achieves the highest scores among all methods on every metric except BLEU-4. One hypothesis for the mild superiority of Regions-Flat-Pretrained on BLEU-4 is that its non-hierarchical structure makes it better able to reproduce words exactly at the beginnings and ends of sentences, providing a slight boost in BLEU scores.

To make these metrics more interpretable, we performed a human evaluation by collecting an additional paragraph for 500 randomly chosen images, with results in the last row of Tab.~\ref{tab:main_results}. As expected, humans produce superior descriptions to any automatic method, performing better on all language metrics considered. Of particular note is the large gap between humans and our best model on CIDEr and METEOR, which are both designed to correlate well with human judgment~\cite{vedantam2015cider,denkowski2014meteor}.

Finally, we note that we have also tried the SPICE evaluation metric~\cite{anderson2016spice}, which has been shown to correlate well with human judgment for sentence-level image captioning. Unfortunately, SPICE does not seem well-suited for evaluating long paragraph descriptions -- it does not handle coreference or distinguish between different instances of the same object category. These are reasonable design decisions for sentence-level captioning, but are less applicable to paragraphs. In fact, human paragraphs achieved a considerably lower SPICE score than automated methods.

\subsection{Qualitative Results}

We present qualitative results from our model and the Sentence-Concat and Template baselines in Fig.~\ref{fig:qualitative}. Some interesting properties of our model's predictions include its use of coreference in the first example (``a bus'', ``it'', ``the bus'') and its ability to capture relationships between objects in the second example. Also of note is the order in which our model chooses to describe the image: the first sentence tends to be fairly high level, middle sentences give some details about scene elements mentioned earlier in the description, and the last sentence often describes something in the background, which other methods are not able to capture. Anecdotally, we observed that this follows the same order in which most humans tend to describe images. The failure case in the last row highlights another interesting phenomenon: even though our model was wrong about the semantics of the image, calling the girl ``a woman'', it has learned that ``woman'' is consistently associated with female pronouns (``she'', ``she'', ``her hand'', ``behind her'').

It is also worth noting the general behavior of the two baselines. Paragraphs from Sentence-Concat tend to be repetitive in sentence structure and are often simply inaccurate due to the sampling required to generate multiple sentences. On the other hand, the Template baseline is largely accurate, but has uninteresting language and lacks the ability to determine which things are most important to describe. In contrast, Regions-Hierarchical stays relevant and furthermore exhibits more interesting patterns of language.

\begin{table*}[t]
\begin{tabular}{lccccccc}
\hline
& \multiline{Average\\Length} & \multiline{Std. Dev.\\Length} & Diversity & Nouns & Verbs & Pronouns & \multiline{Vocab\\Size} \\
\hline
Sentence-Concat & 56.18 & 4.74 & 34.23 & 32.53 & 9.74 & 0.95 & 2993 \\
Template & 60.81 & 7.01 & 45.42 & 23.23 & 11.83 & 0.00 & 422 \\
Regions-Hierarchical\hspace{-2mm} & 70.47 & 17.67 & 40.95 & 24.77 & 13.53 & 2.13 & 1989 \\
Human & 67.51 & 25.95 & 69.92 & 25.91 & 14.57 & 2.42 & 4137 \\
\hline
\end{tabular}
\caption{Language statistics of test set predictions. Part of speech statistics are given as percentages, and diversity is calculated as in Section~\ref{sec:paragraphs}.
``Vocab Size'' indicates the number of unique tokens output across the entire test set, and human numbers are calculated from ground truth. Note that the diversity score for humans differs slightly from the score in Tab.~\ref{tab:data_stats}, which is calculated on the entire dataset. }
\label{tab:output_language_stats}
\end{table*}

\begin{figure*}
\centering
\includegraphics[width=0.85\textwidth]{RegionFigure.pdf}
\caption{Examples of paragraph generation from only a few regions. Since only a small number of regions are used, this data is extremely out of sample for the model, but it is still able to focus on the regions of interest while ignoring the rest of the image. \vspace{-4mm}}
\label{fig:regions}
\end{figure*}

\subsection{Paragraph Language Analysis}

To shed quantitative light on the linguistic phenomena generated, in Tab.~\ref{tab:output_language_stats} we show statistics of the language produced by a representative spread of methods. Our hierarchical approach generates text of similar average length and variance as human descriptions, with Sentence-Concat and the Template approach somewhat shorter and less varied in length. Sentence-Concat is also the least diverse method, though all automatic methods remain far less diverse than human sentences, indicating ample opportunity for improvement. According to this diversity metric, the Template approach is actually the most diverse automatic method, which may be attributed to how the method is hard-coded to sequentially describe each region in the scene in turn, regardless of importance or how interesting such an output may be (see Fig.~\ref{fig:qualitative}). While both our hierarchical approach and the Template method produce text with similar portions of nouns and verbs as human paragraphs, only our approach was able to generate a reasonable quantity of pronouns. Our hierarchical method also had a much wider vocabulary than the Template approach, though the vocabulary of Sentence-Concat, which is trained on hundreds of thousands of MS COCO~\cite{lin2014microsoft} captions, is a bit larger.

\subsection{Generating Paragraphs from Fewer Regions}

As an exploratory experiment to highlight the interpretability of our model, we investigate generating paragraphs from a smaller number of regions than the $M=50$ used in the rest of this work. Instead, we only give our method access to the top few detected regions as input, with the hope that the generated paragraph focuses only on those particular regions, preferring not to describe other parts of the image. The results for a handful of images are shown in Fig.~\ref{fig:regions}. Although the input is extremely out of sample compared to the training data, the results are still quite reasonable -- the model generates paragraphs describing the detected regions without much mention of objects or scenery outside of the detections. Taking the top-right image as an example, despite a few linguistic mistakes, the paragraph generated by our model mentions the batter, catcher, dirt, and grass, which all appear in the top detected regions, but does not pay heed to the pitcher or the umpire in the background.

\subsection{Region Detector}

The region detector receives an input image of size $3\times H\times W$, detects regions of interest, and produces a feature vector of dimension $D=4096$ for each region.
Our region detector follows \cite{ren2015faster,johnson2016densecap}; we provide a summary here for completeness. The image is resized so that its longest edge is 720 pixels, and is then passed through a convolutional network initialized from the 16-layer VGG network~\cite{simonyan2015very}. The resulting feature map is processed by a region proposal network~\cite{ren2015faster}, which regresses from a set of anchors to propose regions of interest. These regions are projected onto the convolutional feature map, and the corresponding region of the feature map is reshaped to a fixed size using bilinear interpolation and processed by two fully-connected layers to give a vector of dimension $D$ for each region.

Given a dataset of images and ground-truth regions of interest, the region detector can be trained in an end-to-end fashion as in \cite{ren2015faster} for object detection and \cite{johnson2016densecap} for dense captioning. Since paragraph descriptions do not have annotated groundings to regions of interest, we use a region detector trained for dense image captioning on the Visual Genome dataset~\cite{krishnavisualgenome}, using the publicly available implementation of \cite{johnson2016densecap}. This produces $M=50$ detected regions.

One alternative worth noting is to use a region detector trained strictly for object detection, rather than dense captioning. Although such an approach would capture many salient objects in an image, its paragraphs would suffer: an ideal paragraph describes not only objects, but also scenery and relationships, which are better captured by the dense captioning task, which annotates \emph{all} noteworthy elements of a scene.

\subsection{Region Pooling}

The region detector produces a set of vectors $v_1,\ldots,v_M\in\mathbb{R}^{D}$, each describing a different region in the input image. We wish to aggregate these vectors into a single pooled vector $v_p\in\mathbb{R}^P$ that compactly describes the content of the image. To this end, we learn a projection matrix $W_{pool}\in\mathbb{R}^{P\times D}$ and bias $b_{pool}\in\mathbb{R}^P$; the pooled vector $v_p$ is computed by projecting each region vector using $W_{pool}$ and taking an elementwise maximum, so that $v_p=\max_{i=1}^M(W_{pool}v_i + b_{pool})$. While alternative approaches for representing collections of regions, such as spatial attention~\cite{xu2015show}, may also be possible, we view these as complementary to the model proposed in this paper; furthermore, we note recent work~\cite{qi2016pointnet} which has proven max pooling sufficient for representing any continuous set function, suggesting that max pooling does not, in principle, sacrifice expressive power.
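The projection-and-pooling step is small enough to state exactly; the following PyTorch-style rendering is purely illustrative (our implementation is in Torch), with dimensions matching the text:
\begin{verbatim}
import torch
import torch.nn as nn

class RegionPooler(nn.Module):
    # Projects M region vectors (D-dim) with a learned linear map
    # (W_pool, b_pool) and takes an elementwise max over regions.
    def __init__(self, D=4096, P=1024):
        super().__init__()
        self.proj = nn.Linear(D, P)

    def forward(self, regions):          # regions: (M, D)
        projected = self.proj(regions)   # (M, P)
        v_p, _ = projected.max(dim=0)    # elementwise max over the M regions
        return v_p                       # (P,)
\end{verbatim}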
\subsection{Hierarchical Recurrent Network}

The pooled region vector $v_p\in\mathbb{R}^P$ is given as input to a hierarchical neural language model composed of two modules: a \emph{sentence RNN} and a \emph{word RNN}. The sentence RNN is responsible for deciding the number of sentences $S$ that should be in the generated paragraph and for producing a $P$-dimensional \emph{topic vector} for each of these sentences. Given a topic vector for a sentence, the word RNN generates the words of that sentence. We adopt the standard LSTM architecture~\cite{hochreiter1997long} for both the word RNN and sentence RNN.

As an alternative to this hierarchical approach, one could instead use a non-hierarchical language model to directly generate the words of a paragraph, treating the end-of-sentence token as another word in the vocabulary. Our hierarchical model is advantageous because it reduces the length of time over which the recurrent networks must reason. Our paragraphs contain an average of 67.5 words (Tab.~\ref{tab:data_stats}), so a non-hierarchical approach must reason over dozens of time steps, which is extremely difficult for language models. However, since our paragraphs contain an average of 5.7 sentences, each with an average of 11.9 words, both the sentence and word RNNs need only reason over much shorter time-scales, making learning an appropriate representation much more tractable.

\vspace{-3mm}
\paragraph{Sentence RNN}
The sentence RNN is a single-layer LSTM with hidden size $H=512$ and initial hidden and cell states set to zero. At each time step, the sentence RNN receives the pooled region vector $v_p$ as input, and in turn produces a sequence of hidden states $h_1,\ldots,h_S\in\mathbb{R}^H$, one for each sentence in the paragraph. Each hidden state $h_i$ is used in two ways: First, a linear projection from $h_i$ and a logistic classifier produce a distribution $p_i$ over the two states $\{\texttt{CONTINUE}=0, \texttt{STOP}=1\}$ which determine whether the $i$th sentence is the last sentence in the paragraph. Second, the hidden state $h_i$ is fed through a two-layer fully-connected network to produce the topic vector $t_i\in\mathbb{R}^P$ for the $i$th sentence of the paragraph, which is the input to the word RNN.

\paragraph{Word RNN}
The word RNN is a two-layer LSTM with hidden size $H=512$, which, given a topic vector $t_i\in\mathbb{R}^{P}$ from the sentence RNN, is responsible for generating the words of a sentence. We follow the input formulation of~\cite{vinyals2015show}: the first and second inputs to the RNN are the topic vector and a special \texttt{START} token, and subsequent inputs are learned embedding vectors for the words of the sentence. At each timestep the hidden state of the last LSTM layer is used to predict a distribution over the words in the vocabulary, and a special \texttt{END} token signals the end of a sentence. After each copy of the word RNN has generated the words of its respective sentence, these sentences are finally concatenated to form the generated paragraph.

\subsection{Training and Sampling}

Training data consists of pairs $(x, y)$, with $x$ an image and $y$ a ground-truth paragraph description for that image, where $y$ has $S$ sentences, the $i$th sentence has $N_i$ words, and $y_{ij}$ is the $j$th word of the $i$th sentence. After computing the pooled region vector $v_p$ for the image, we unroll the sentence RNN for $S$ timesteps, giving a distribution $p_i$ over the $\{\texttt{CONTINUE}, \texttt{STOP}\}$ states for each sentence. We feed the sentence topic vectors to $S$ copies of the word RNN, unrolling the $i$th copy for $N_i$ timesteps, producing distributions $p_{ij}$ over each word of each sentence. Our training loss $\ell(x, y)$ for the example $(x, y)$ is a weighted sum of two cross-entropy terms: a \emph{sentence loss} $\ell_{sent}$ on the stopping distribution $p_i$, and a \emph{word loss} $\ell_{word}$ on the word distribution $p_{ij}$:
\vspace{-5mm}
\begin{align}
\ell(x, y) = &\lambda_{sent}\sum_{i=1}^S \ell_{sent}(p_i, \mathbf{I}\left[i = S\right]) \nonumber\\
+ &\lambda_{word}\sum_{i=1}^S\sum_{j=1}^{N_i} \ell_{word}(p_{ij}, y_{ij})
\end{align}
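To fix ideas, the following is a compact, hypothetical PyTorch-style sketch of the hierarchy and the loss above (our actual implementation is in Torch; \texttt{START\_ID} and the input plumbing are illustrative assumptions, not a definitive implementation):
\begin{verbatim}
import torch
import torch.nn as nn
import torch.nn.functional as F

START_ID = 0  # assumed index of the special START token

class HierarchicalRNN(nn.Module):
    # Sentence RNN decides S and emits topic vectors; the word RNN
    # generates each sentence from its topic vector.
    def __init__(self, vocab_size, P=1024, H=512):
        super().__init__()
        self.sent_rnn = nn.LSTM(P, H, num_layers=1)  # states default to zero
        self.stop = nn.Linear(H, 2)                  # {CONTINUE, STOP} logits
        self.topic = nn.Sequential(nn.Linear(H, P), nn.ReLU(),
                                   nn.Linear(P, P))  # two-layer FC network
        self.embed = nn.Embedding(vocab_size, P)
        self.word_rnn = nn.LSTM(P, H, num_layers=2)
        self.out = nn.Linear(H, vocab_size)

    def loss(self, v_p, sents, lam_sent=5.0, lam_word=1.0):
        # v_p: (P,) pooled region vector; sents: list of S 1-D tensors of
        # token ids, each ending with the END token.
        S = len(sents)
        h, _ = self.sent_rnn(v_p.expand(S, 1, -1))    # v_p input at each step
        stop_logits = self.stop(h.squeeze(1))         # (S, 2)
        stop_tgt = torch.tensor([0] * (S - 1) + [1])  # STOP only at i = S
        l_sent = F.cross_entropy(stop_logits, stop_tgt, reduction="sum")
        l_word = 0.0
        for i, sent in enumerate(sents):
            t_i = self.topic(h[i])                    # (1, P) topic vector
            start = torch.tensor([START_ID])
            emb = self.embed(torch.cat([start, sent[:-1]]))
            inp = torch.cat([t_i, emb], dim=0).unsqueeze(1)
            out, _ = self.word_rnn(inp)               # (N+1, 1, H)
            logits = self.out(out[1:].squeeze(1))     # drop the topic step
            l_word += F.cross_entropy(logits, sent, reduction="sum")
        return lam_sent * l_sent + lam_word * l_word
\end{verbatim}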
To generate a paragraph for an image, we run the sentence RNN forward until the stopping probability $p_i(\texttt{STOP})$ exceeds a threshold $T_{\texttt{STOP}}$ or after $S_{MAX}$ sentences, whichever comes first. We then sample sentences from the word RNN, choosing the most likely word at each timestep and stopping after choosing the \texttt{END} token or after $N_{MAX}$ words. We set the parameters \mbox{$T_{\texttt{STOP}}=0.5$}, \mbox{$S_{MAX}=6$}, and \mbox{$N_{MAX}=50$} based on validation set performance.

\subsection{Transfer Learning}
\vspace{-1mm}

Transfer learning has become pervasive in computer vision. For tasks such as object detection~\cite{ren2015faster} and image captioning~\cite{donahue2015long,karpathy2015deep,vinyals2015show,xu2015show}, it has become standard practice not only to process images with convolutional neural networks, but also to initialize the weights of these networks from weights that have been tuned for image classification, such as the 16-layer VGG network~\cite{simonyan2015very}. Initializing from a pre-trained convolutional network allows a form of knowledge transfer from large classification datasets, and is particularly effective on datasets of limited size. Might transfer learning also be useful for paragraph generation?

We propose to utilize transfer learning in two ways. First, we initialize our region detection network from a model trained for dense image captioning~\cite{johnson2016densecap}; although our model is end-to-end differentiable, we keep this sub-network fixed during training both for efficiency and to prevent overfitting. Second, we initialize the word embedding vectors, recurrent network weights, and output linear projection of the word RNN from a language model that had been trained on region-level captions~\cite{johnson2016densecap}, fine-tuning these parameters during training to be better suited for the task of paragraph generation. Parameters for tokens not present in the region model are initialized from the parameters for the \texttt{UNK} token. This initialization strategy allows our model to utilize linguistic knowledge learned on large-scale region caption datasets~\cite{krishnavisualgenome} to produce better paragraph descriptions, and we validate the efficacy of this strategy in our experiments.

\section{Introduction}
\label{sec:intro}
\input{intro.tex}

\section{Related Work}
\label{sec:related}
\input{related.tex}

\section{Paragraphs are Different}
\label{sec:paragraphs}
\input{paragraphs.tex}

\section{Method}
\label{sec:method}
\input{method.tex}

\section{Experiments}
\label{sec:experiments}
\input{experiments.tex}

\section{Conclusion}
\label{sec:discussion}
\input{discussion.tex}

{\small
\bibliographystyle{ieee}
\section{Introduction}
\label{sec:introduction}

Let $V$ be a complex vector space, and let $G$ be a connected reductive algebraic group with a fixed faithful linear action on $V$. Attached to this data, we have two interesting spaces, which physicists call the {\bf Higgs} and {\bf Coulomb} branches (of the associated 3-dimensional $\mathcal{N}=4$ supersymmetric gauge theory).
\begin{itemize}
\item The Higgs branch is well-known to mathematicians: it is given by an algebraic symplectic reduction of the cotangent bundle $T^*V$. That is, we have
\[\fM_{H}:=\mu^{-1}(0)/\!\!/ G=\Spec(\C[\mu^{-1}(0)]^G)\]
where $\mu\colon T^*V\to \fg$ is the moment map.
\item The Coulomb branch has only been precisely defined in a recent series of papers by Nakajima, Braverman and Finkelberg. It is defined as the spectrum of a ring constructed as a convolution algebra in the homology of the affine Grassmannian. The choice of representation $V$ is incorporated as certain ``quantum corrections'' to convolution in homology, which are kept track of by an auxiliary vector bundle. To readers unhappy with the terms that appear in the sentences above: in this paper, we will use a purely algebraic description of the Coulomb branch; the geometric description given above will only be used to show that this algebraic presentation is correct, so readers can safely set the affine Grassmannian to one side if they desire.
\end{itemize}

A conjecture of Braden, Licata, Proudfoot and the author suggests a surprising relationship between these spaces: they should be {\it symplectic dual} \cite{BLPWgco}. This conjecture requires a number of different geometric and representation theoretic properties, the most important of which is a Koszul duality between generalizations of category $\cO$ over quantizations of these varieties. The existence of such a duality has been proven in several special cases (see \cite{BLPWtorico,Webqui}), but in this paper, we will give a general construction of this Koszul duality.

First, let us be a bit more precise about what we mean by Koszul duality. For any algebra $A$ over a field $\K$ graded by the non-negative integers with $A_0$ finite dimensional and semi-simple, we can define a Koszul dual $A^!$ which is a quadratic algebra with the same properties. By \cite[Thm. 30]{MOS}, we have that $A\cong (A^!)^!$ if and only if $A$ is Koszul in the usual sense. For a graded category $\mathcal{C}$ equivalent to $A\operatorname{-gmod}$ for $A$ as above, the category $\mathcal{C}^!\cong A^!\operatorname{-gmod}$ only depends on $\mathcal{C}$ up to canonical equivalence.

In order to construct category $\cO$'s, we need to choose auxiliary data, which determine finiteness conditions: we must choose a flavor $\phi$ (a $\C^*$-action on $\fM_{H}$ with weight $1$ on the symplectic form and commuting with the $\C^*$-action induced by scaling), and a stability parameter $\xi\in (\fg^*)^G$. Note that the choice of $\xi$ allows us to define the GIT quotient $\fM_{H,\xi}$ with $\xi$ as stability condition. Taking the unique closed orbit in the closure of a semi-stable orbit defines a map $\fM_{H,\xi}\to \fM_{H}$. In many cases, this is a resolution of singularities, but $\fM_{H,\xi}$ may not be smooth, or may be a resolution of a subvariety of $\fM_{H}$. The variety $\fM_{H,\xi}$ has a natural quantization obtained from the Hamiltonian reduction of microlocal differential operators on $T^*V$ (for the usual moment map sending $X\in \fg$ to the action vector field $X_V$), as defined in \cite[\S 3.4]{BLPWgco}.
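Before proceeding, we recall a standard orienting example (well known in this literature, and recorded here only for illustration): take $G=\C^*$ acting on $V=\C^n$ by scaling, so that $T^*V\cong \C^n\oplus (\C^n)^*$ and the moment map is
\[\mu(x,y)=\sum_{i=1}^n x_iy_i.\]
Here $\fM_H$ is the closure of the minimal nilpotent orbit in $\mathfrak{sl}_n$; for generic $\xi$, the GIT quotient $\fM_{H,\xi}\cong T^*\mathbb{P}^{n-1}$ resolves it, and the corresponding Coulomb branch is the Kleinian singularity $\C^2/(\mathbb{Z}/n\mathbb{Z})$.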
Associated to the data of $(G,V,\phi,\xi)$, we have two versions of category $\cO$:
\begin{enumerate}
\item We let $\OHiggs$ be the geometric category $\cO$ over the quantized structure sheaf on $\fM_{H,\xi}$ discussed above, associated to the flavor $\phi$.
\item We let $\OCoulomb$ be the algebraic category $\cO$ for the quantization of $\fM_C$ defined by the flavor $\phi$ with integral weights. The element $\xi$ induces an inner grading on this algebra which we use to define the category $\cO$.
\end{enumerate}
While there is a small asymmetry here since one of these categories is a category of sheaves, and the other a category of modules, the difference is smaller than it may appear. By \cite[Cor. 3.19]{BLPWgco}, we can compare algebraic and geometric category $\cO$'s and express $\OHiggs$ as an algebraic category $\cO$ at the cost of requiring more care regarding parameters. The category $\OHiggs$ has an intrinsically defined graded lift $\tOHiggs$, which uses the category of mixed Hodge modules on $V$; the category $\OCoulomb$ has a graded lift which we'll give an explicit algebraic definition of below.
\begin{itheorem}\label{th:A}
There is a functor $\tOCoulomb^!\to \tOHiggs$. If $\fM_H$ is a Nakajima quiver variety or smooth hypertoric variety, then this functor is an equivalence.
\end{itheorem}
There is a general geometric property \hyperlink{dagger}{$(\dagger)$} which ensures the equivalences above. We expect that this holds in all cases where $\fM_H$ is smooth; it is proven in the quiver and smooth hypertoric cases in \cite{Webqui}, but at the moment, we lack the tools to prove it in full generality. For hypertoric varieties, Theorem \ref{th:A} is proven in \cite{BLPWtorico}. For the quiver cases, the connection to Coulomb branches was only recently made precise, so this version of the theorem was not proved before, but the results of \cite{SVV,Webqui} were very suggestive for the affine type A case. Since the case of finite-type quiver varieties is the most novel and interesting case of this result, we'll discuss it in more detail in Section \ref{sec:quiver}.

In certain other cases, such as non-smooth hypertoric varieties, this functor is an equivalence onto a block of $\OHiggs$. One can also strengthen this theorem to include the case where the flavor $\phi$ is a vector field which does not integrate to a $\C^*$ action or we allow non-integral weights. In this case, we have an analogous functor from $\OCoulomb^!$ to the category $\cO$ attached to a Higgs branch, but one associated to a subspace of $V$ as a representation over a Levi of $G$. This phenomenon is a generalization of the theorem proved in \cite{WebRou,Webalt} relating blocks of the Cherednik category $\cO$ to weighted KLR algebras (see also Section \ref{sec:quiver}).

This result depends on an explicit calculation. For arbitrary $(G,V,\phi,\xi)$, we give two explicit presentations of the endomorphisms of the projective generators in $\OCoulomb$; one of these is more natural from a geometric perspective, but the other has the advantage of being graded, and thus allows us to define the graded lift $\tOCoulomb$. After this paper circulated as a preprint, H. Nakajima pointed out to us that the connection between these presentations has a geometric explanation, using the concentration map to the fixed points of a complexified cocharacter, as in the work of Varagnolo and Vasserot \cite[\S 2]{MR3013034}, which concerns the case of the adjoint representation in connection with double affine Hecke algebras.
This will be explained in more detail in forthcoming work of his \cite{NaPP}. This second presentation also appears naturally in the Ext algebra of certain semi-simple $G$-equivariant D-modules on $V$, which makes the functor $\tOCoulomb^!\to \tOHiggs$ manifest. If, instead of category $\cO$, we consider the category $\mathscr{W}_{\operatorname{Coulomb}}$ of all integral weight modules (which has a graded lift $\mathscr{\tilde W}_{\operatorname{Coulomb}}$ defined using the same presentation), then we obtain a fully faithful functor $\mathscr{\tilde W}_{\operatorname{Coulomb}}^!\to \mathcal{D}(V/G)\mmod$ to the category of strongly $G$-equivariant D-modules on $V$, independent of any properties of $V$ or $G$. The functor $\tOCoulomb^!\to \tOHiggs$ is induced by this functor, and the hypertoric or quiver hypothesis is needed to assure that the quotient functor from $\mathcal{D}(V/G)\mmod$ to modules over the quantization of $\fM_{H,\xi}$ has the correct properties.

Thus, Theorem \ref{th:A} can be strengthened to not just give an equivalence between these categories, but in fact a combinatorial description of both of them. The algebras that appear are an interesting generalization of (weighted) KLR algebras. Considering the richness of the theory developed around KLR algebras, there is reason to think these new algebras will also prove quite interesting from the perspective of combinatorial representation theory. A particularly interesting context in which to consider these algebras is when the Coulomb branch is considered over a field of characteristic $p$. In this case, there is a natural relationship between quantizations, tilting bundles and coherent sheaves, which we will consider in more detail in future work.

Because of the nature of our proof of Theorem \ref{th:A}, it extends easily to show that these equivalences are compatible with certain natural autoequivalences of derived categories, called shuffling and twisting functors. See \cite[\S 8]{BLPWgco} for more on these functors.
\begin{itheorem}\label{th:B}
Under the hypothesis \hyperlink{dagger}{$(\dagger)$}, the functor of Theorem \ref{th:A} induces an equivalence of graded derived categories $D^b(\tOCoulomb)\to D^b(\tOHiggs)$ which intertwines twisting functors with shuffling functors and vice versa.
\end{itheorem}
This verifies two of the most important predictions of the conjecture that Higgs and Coulomb branches of a single theory are symplectic dual to each other in the sense of \cite[Def. 10.1]{BLPWgco}; it remains to confirm the more geometric aspects of this duality, such as a bijection between special strata.

\subsection*{Acknowledgements}
\label{sec:acknowledgements}
We would like to thank Hiraku Nakajima for pointing out the connection to Varagnolo and Vasserot's past work, as well as his forthcoming work. Many thanks to Tom Braden, Alexander Braverman, Joel Kamnitzer, Anthony Licata, Nick Proudfoot, Alex Weekes and Oded Yacobi for many useful discussions on these topics.

\section{The Higgs side}
\label{sec:higgs}

Let $V$ be a complex vector space, and let $G$ be a connected reductive algebraic group with a fixed faithful linear action on $V$ with no trivial summands. Let $H=\Aut_{G}(V)$ and let $Z=H\cap G=Z(G)$. Let $\mathbb{T}$ be a copy of $\C^*$ acting on $T^*V\cong V\oplus V^*$, commuting with the action of $G$, and acting with weight $1$ on the symplectic form $\Omega$. Note that this means we have a perfect pairing between the $k$ weight space on $V$ and the $-k-1$ weight space on $V^*$; this action is necessarily faithful.
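Let us spell out the weight bookkeeping behind this last assertion (a routine check, under the standard convention $(t\cdot \Omega)(x,y)=\Omega(t^{-1}x,t^{-1}y)$): if $v\in V$ has $\mathbb{T}$-weight $k$ and $v^*\in V^*$ has weight $m$ with $\Omega(v,v^*)\neq 0$, then
\[(t\cdot \Omega)(v,v^*)=\Omega(t^{-1}\cdot v,t^{-1}\cdot v^*)=t^{-k-m}\,\Omega(v,v^*),\]
so requiring that $\mathbb{T}$ act with weight $1$ on $\Omega$ forces $m=-k-1$; this is the perfect pairing of weight spaces just noted.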
Let $\tilde{G}$ be the subgroup in $GL(T^*V)$ generated by $\mathbb{T}$ and $G$. Our constructions will only depend on the representation of $\tilde G$ on $T^*V$, and not on the choice of invariant Lagrangian subspace $V$. However, making a distinguished choice will be useful moving forward. The reader might prefer to consider symplectic representations of $G$ with a commuting action of $\mathbb{T}$ that has weight 1 on the symplectic form, without a choice of Lagrangian subspace. However, in this situation, a Lagrangian subspace will always exist, since the non-negative weight spaces for the action of $\mathbb{T}$ form a $G$-invariant Lagrangian subspace.
\begin{remark}
This depends very sensitively on the fact that $\mathbb{T}$ has weight 1 on the symplectic form. Every symplectic representation has a commuting $\mathbb{T}$ action with all weights negative (the inverse scalar multiplication) which has weight $2$ on $\Omega$.
\end{remark}
The group $\mathbb{T}$ acts naturally on the Higgs branch $\fM_{H,\gamma}$ for any character $\gamma$. If $\fM_{H,\gamma} $ is smooth\footnote{If $\fM_{H,\gamma} $ is not smooth then this is still true, but this requires a more careful argument. Since we don't need this fact, we will not include this argument; see \cite[Proposition 2.11]{Webqui}. } then considering the action of $\mathbb{T}$ on the tangent space at any point of $\fM_{H,\gamma}^\mathbb{T}$, we see that the fixed subspace $\fM_{H,\gamma}^\mathbb{T}$ is isotropic (in the Poisson sense), and the set
\[\fM_{H,\gamma}^+=\{m\in \fM_{H,\gamma}\mid \lim_{t\to 0} t\cdot m\text{ exists }\}\]
is Lagrangian (in the Poisson sense).

Let $\mathcal{D}$ be a quantization of the structure sheaf compatible with a conical $\C^*$ action, as in \cite[\S 3.2]{BLPWquant}. Note that there is a subtlety here: we have to choose a conical $\C^*$-action (we usually denote the corresponding copy of $\C^*$ by $\mathbb{S}$) in order to make sense of this category, but this geometric category $\cO$ will not depend on the choice (since the underlying sheaves are unchanged). For simplicity, we will fix this action to let $\mathbb{S}$ be the action induced by the scaling action of $T^*V$. Recall that a {\bf good} $\mathcal{D}$-module is one which admits a coherent $\mathcal{D}(0)$-lattice.

We wish to define a special category of $\mathcal{D}$-modules based on the structure of the action of the flavor $\phi$. This is a generalization of the geometric category $\cO$ defined in \cite{BLPWgco}: the key difference is that our $\mathbb{T}$-action has weight 1 on the symplectic form, rather than weight $0$ as in \cite{BLPWgco}. However, by correctly writing this definition, we can give a consistent definition for both cases. We endow $\fM_{H,\gamma}^+$ with the scheme structure defined by the ideal generated by all global functions on $\fM_{H,\gamma}$ with positive weight under the action of $\mathbb{T}$.
\begin{definition}\label{def:Og}
A good $\mathcal{D}$-module $\mathcal{M}$ on $\fM_{H,\gamma}$ lies in category $\mathcal{O}_{\!\operatorname{g}}$ for the flavor $\phi$ if it has a $\mathcal{D}(0)$-lattice $\mathcal{M}(0)$ such that $\mathcal{M}(0)/h\mathcal{M}(0)$ is scheme-theoretically supported on $\fM_{H,\gamma}^+$.
\end{definition}
\begin{lemma}
If $\phi_0\colon \mathbb{T}\to T_H$, then the category $\mathcal{O}_{\!\operatorname{g}}$ for $\phi_0 $ in \cite[Def. 3.15]{BLPWgco} is the same as that of Definition \ref{def:Og} for the pointwise product of $\phi_0^\ell$ and the action of $\mathbb{S}$ for $\ell\gg 0$.
\end{lemma}
\begin{proof}
This follows immediately from \cite[Prop. 3.18]{BLPWgco}: the functions with positive weight under this pointwise product for $\ell \gg 0$ are those where $\mathbb{T}$ has positive weight, or $\mathbb{T}$-weight 0 and positive $\mathbb{S}$-weight. Note that only the constant functions have $\mathbb{S}$-weight $0$, and no functions have $\mathbb{S}$-weight $1$ since $V\oplus V^*$ has no $G$-invariants, so all of these functions must have $\mathbb{S}$-weight $\geq 2$. Thus, these are precisely the functions in the ideal $J$ defined in \cite[\S 3.1]{BLPWgco}.
\end{proof}

\subsection{Lifts and chambers}
\label{sec:lifts-chambers}

In this section, we make some combinatorial definitions needed in order to understand this category $\cO$. We have a natural character $\nu\colon \tilde{G}\to \mathbb{T}$ splitting the inclusion of $\mathbb{T}$; this is induced by the action of a group element on the symplectic form: $g\cdot \Omega=\nu(g)\Omega$. We call a splitting of this character $\gamma\colon \mathbb{T}\to \tilde{G}$ a {\bf lift} of $\mathbb{T}$. This is the same as a choice of linear $\mathbb{T}$-action on $V$ such that the Hamiltonian reduction is $\mathbb{T}$-equivariant. A rational (real, etc.) lift is a splitting of the derivative of $\nu$ on the rational Lie algebras $\mathbbm{t}_{\Q}\to \tilde{\fg}$.

Pick a maximal torus $\tilde{T}\subset \tilde{G}$, and let $T=\tilde{T}\cap G$. Let $\sptc$ denote the space of lifts with image in $T$ (this is a torsor over the cocharacter lattice $X_*(T)$) and let $\spt=d\nu^{-1}(1) \subset \tilde{\ft}_{\R}$ be the space of real splittings. The affine space $\spt$ is naturally equipped with a cooriented affine hyperplane arrangement, defined by the vanishing sets of the weights of $T^*V$.

Let $\{\varphi_1,\dots, \varphi_d\}$ be an enumeration of the weights of $V$, with multiplicity. Thus, we can choose a decomposition $V\cong \oplus V_{\varphi_i}$ into 1-dimensional subspaces, such that every $V_{\varphi_i}$ has weight $\varphi_i$ under $\tilde{T}$, and generates a simple $\tilde{G}$-representation which is a direct sum
\[U(\tilde{\fg})\cdot V_{\varphi_i}=\bigoplus_{j\sim i}V_{\varphi_j},\]
where $j\sim i$ is the equivalence relation defined by $V_{\varphi_j}\subset U(\tilde{\fg})\cdot V_{\varphi_i}$.
\begin{example}
We'll use the example of $GL(2)$ with $V\cong \C^2\oplus \C^2$ as our standard example throughout. In this case, $d=4$, with $\varphi_1=\varphi_3=\gamma_1$ and $\varphi_2=\varphi_4=\gamma_2$. We choose the relation so that $1\sim 2$ and $3\sim 4$. We let $\mathbb{T}$ be the $\C^*$ action with weight $1$ on the spaces $V_{\varphi_1}$ and $V_{\varphi_2}$ and weight $-1$ on the spaces $V_{\varphi_3}$ and $V_{\varphi_4}$. The symplectic condition forces it to have weight $-2$ and $0$ on the duals of these spaces. Thus, in this basis, $\widetilde{GL(2)}\cong GL(2)\times \mathbb{T}$ acts on $V$ by the matrices
\[
\left[
\begin{array}{c|c}
tA & 0\\\hline
0 & t^{-1}A\\
\end{array}
\right]
\qquad \qquad A\in GL(2), t\in \mathbb{T}\cong \C^*
\]
\end{example}
\begin{definition}
For a sign sequence $\sgns\in \{+,0,-\}^d$, we let $V_{\sgns}$ be the sum of the subspaces $V_{\varphi_i}$ with $\sigma_i=+$. We let $(T^*V)_{\sgns}$ be the sum of $V_{\varphi_i}$ with $\sigma_i=+$ and $V_{\varphi_i}^*$ with $\sigma_i=-$.
\end{definition}
If we have any other set $\indx$ equipped with a map $\iota\colon \indx\to \{+,0,-\}^d$, then we can denote $V_{x}=V_{\iota(x)}$ and $(T^*V)_{x}=(T^*V)_{\iota(x)}$ for any $x\in \indx$.
This notation leaves $\iota$ implicit, but in all examples we will consider, this map will be unambiguous.
\begin{definition}
We call $\sgns$ compatible with a Borel $\tilde{B}$ containing $\tilde{T}$ if $ (T^*V)_{\sgns}$ is $\tilde{B}$-invariant. We let $\compat$ be the set of pairs of sign vectors in $\{+,0,-\}^d$ and compatible Borels.
\end{definition}
If we fix a preferred Weyl chamber of $\tilde{G}$ (and thus a standard Borel $\tilde{B}$), we have a bijection between the Weyl chambers and the Weyl group $W$ of $G$ (which is also the Weyl group of $\tilde{G}$), and thus can think of $\compat$ as a subset of $\{+,0,-\}^d\times W$.
\begin{example}
In our running example, if $\tilde{B}$ is the standard Borel, then the non-compatible sign vectors are of the form
\[(-,+,*,*)\quad (-,0,*,*)\quad (0,+,*,*)\quad (*,*,-,+)\quad (*,*,-,0)\quad (*,*,0,+).\]
If we consider the opposite Borel (the only other one), then $+$'s and $-$'s exchange places.
\end{example}
Now, we let
\[\varphi_i^+=\varphi_i\qquad \varphi_i^-=-\varphi_i-\nu.\]
Together, these give the weights of $\tilde{\ft}$ acting on $T^*V$.
\begin{example}
In our example, $\tilde{\ft}$ is 3-dimensional, and is identified with the diagonal matrices of the form $\operatorname{diag}(a+t,b+t,a-t,b-t)$; passing to $\ft_1$ means considering these with $t=1$. The weights $\varphi_i^+$ are the entries of this diagonal matrix; the weights $\varphi_i^-$ are the weights on $V^*$, which come from the matrix $\operatorname{diag}(-a-2t,-b-2t,-a,-b)$.
\end{example}
\begin{definition}\label{def:chambers}
For a sign sequence $\sgns\in \{+,-\}^d$, we let
\[c_{\sgns}=\{ \gamma\in \sptc\mid \varphi_i^{\sigma_i}(\gamma)\geq 0 \} \qquad C_{\sgns}=\{ \gamma\in \spt\mid \varphi_i^{\sigma_i}(\gamma)\geq 0 \}.\]
We let $C_{\sgns,w}$ be the intersection of $C_{\sgns}$ with the open Weyl chamber attached to $w$. Note that if $C_{\sgns,w}\neq \emptyset$, then ${\sgns}$ is compatible with $wBw^{-1}$.
\end{definition}
We can extend this notation to sequences in $\{+,0,-\}^d$ by requiring $\varphi_i^{\pm}(\gamma)\in (-1,0)$ if $\sigma_i=0$.
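Unwinding these conditions on $\spt$ (where the defining equation gives $\varphi_i^-(\gamma)=-\varphi_i(\gamma)-1$), the three possible signs correspond to a trichotomy on each weight separately:
\[
\sigma_i=+\;\Longleftrightarrow\;\varphi_i(\gamma)\geq 0,\qquad
\sigma_i=-\;\Longleftrightarrow\;\varphi_i(\gamma)\leq -1,\qquad
\sigma_i=0\;\Longleftrightarrow\;\varphi_i(\gamma)\in(-1,0).
\]
This is merely a restatement of Definition \ref{def:chambers}, but it makes the role of the interval $(-1,0)$ in what follows more transparent.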
\begin{example} Thus, if we use $a$ and $b$ as our coordinates on $\ft_1$, we obtain the hyperplane arrangement: \[ \begin{tikzpicture}[very thick,scale=1.5] \foreach \x in {-2.5,-2.4,...,2.4} \draw (1.5,\x) -- (1.55,\x+.05); \foreach \x in {-2.5,-2.4,...,2.4} \draw (-.5,\x) -- (-.45,\x+.05); \foreach \x in {-2.4,-2.3,...,2.5} \draw (.5,\x) -- (.45,\x-.05); \foreach \x in {-2.4,-2.3,...,2.5} \draw (-1.5,\x) -- (-1.55,\x-.05); \foreach \x in {-2.5,-2.4,...,2.4} \draw (\x,1.5) -- (\x+.05,1.55); \foreach \x in {-2.5,-2.4,...,2.4} \draw (\x,-.5) -- (\x+.05,-.45); \foreach \x in {-2.4,-2.3,...,2.5} \draw (\x,.5) -- (\x-.05,.45); \foreach \x in {-2.4,-2.3,...,2.5} \draw (\x,-1.5) -- (\x-.05,-1.55); \draw (1.5,2.5)-- (1.5,-2.5) node[at start,above]{$\varphi_1^+=0$};\draw (.5,2.5)-- (.5,-2.5) node[at end,below]{$\varphi_1^-=0$}; \draw (-.5,2.5)-- (-.5,-2.5) node[at start,above]{$\varphi_3^+=0$};\draw (-1.5,2.5)-- (-1.5,-2.5) node[at end,below]{$\varphi_3^-=0$}; \draw (2.5,1.5)-- (-2.5,1.5) node[at start,right]{$\varphi_2^+=0$}; \draw (2.5,.5)-- (-2.5,.5) node[at start,right]{$\varphi_2^-=0$}; \draw (2.5,-.5)-- (-2.5,-.5) node[at start,right]{$\varphi_4^+=0$}; \draw (2.5,-1.5)-- (-2.5,-1.5) node[at start,right]{$\varphi_4^-=0$}; \node[scale=.8] at (2,2) {$C_{+,+,+,+}$}; \node[scale=.8] at (1,2) {$C_{0,+,+,+}$}; \node[scale=.8] at (0,2) {$C_{-,+,+,+}$}; \node[scale=.8] at (-1,2) {$C_{-,+,0,+}$}; \node[scale=.8] at (-2,2) {$C_{-,+,-,+}$}; \node[scale=.8] at (2,1) {$C_{+,0,+,+}$}; \node[scale=.8] at (1,1) {$C_{0,0,+,+}$}; \node[scale=.8] at (0,1) {$C_{-,0,+,+}$}; \node[scale=.8] at (-1,1) {$C_{-,0,0,+}$}; \node[scale=.8] at (-2,1) {$C_{-,0,-,+}$}; \node[scale=.8] at (2,0) {$C_{+,-,+,+}$}; \node[scale=.8] at (1,0) {$C_{0,-,+,+}$}; \node[scale=.8] at (0,0) {$C_{-,-,+,+}$}; \node[scale=.8] at (-1,0) {$C_{-,-,0,+}$}; \node[scale=.8] at (-2,0) {$C_{-,-,-,+}$}; \node[scale=.8] at (2,-1) {$C_{+,-,+,0}$}; \node[scale=.8] at (1,-1) {$C_{0,-,+,0}$}; \node[scale=.8] at (0,-1) {$C_{-,-,+,0}$}; \node[scale=.8] at (-1,-1) {$C_{-,-,0,0}$}; \node[scale=.8] at (-2,-1) {$C_{-,-,-,0}$}; \node[scale=.8] at (2,-2) {$C_{+,-,+,-}$}; \node[scale=.8] at (1,-2) {$C_{0,-,+,-}$}; \node[scale=.8] at (0,-2) {$C_{-,-,+,-}$}; \node[scale=.8] at (-1,-2) {$C_{-,-,0,-}$}; \node[scale=.8] at (-2,-2) {$C_{-,-,-,-}$}; \end{tikzpicture} \] The side of a hyperplane carrying a fringe indicates the positive side (which thus includes the hyperplane itself). \end{example} Given a lift $\gamma\in\sptc$, let \[V_\gamma=\{x\in V\mid \lim_{t\to 0}\gamma(t)\cdot x\text{ exists}\}\] be the sum of the non-negative weight spaces for $\gamma$, and $(T^*V)_\lift$ be the corresponding sum for $T^*V$. Using the notation above, we have $V_\gamma=V_{\sgns}$ for all $\gamma\in C_{\sgns}$. The space $(T^*V)_\lift$ is Lagrangian and thus the conormal to $V_\lift$ for any (integral) lift; for a real or rational lift, this space may be isotropic (and not Lagrangian) if $\varphi^\pm_i(\gamma)\in (-1,0)$, or equivalently, if $\gamma\notin C_{\sgns}$ for any $\sgns\in \{+,-\}^d$. Furthermore, $(T^*V)_{\gamma}=(T^*V)_{\gamma'}$ for lifts $\gamma$ and $\gamma'$ if and only if both lie in $C_{\sgns}$ for some $\sgns\in \{+,0,-\}^d$, in which case, both are equal to $(T^*V)_{\sgns}$. In the diagram below, we've marked the chambers corresponding to sign vectors in $\{+,-\}^d$ with $V_\sgns$ (represented in terms of which coordinates are non-zero). 
\[\tikz[very thick,scale=1.5]{ \foreach \x in {-2.5,-2.4,...,2.4} {\draw (1.5,\x) -- (1.55,\x+.05);} \foreach \x in {-2.5,-2.4,...,2.4} {\draw (-.5,\x) -- (-.45,\x+.05);} \foreach \x in {-2.4,-2.3,...,2.5} {\draw (.5,\x) -- (.45,\x-.05);} \foreach \x in {-2.4,-2.3,...,2.5} {\draw (-1.5,\x) -- (-1.55,\x-.05);} \foreach \x in {-2.5,-2.4,...,2.4} {\draw (\x,1.5) -- (\x+.05,1.55);} \foreach \x in {-2.5,-2.4,...,2.4} {\draw (\x,-.5) -- (\x+.05,-.45);} \foreach \x in {-2.4,-2.3,...,2.5} {\draw (\x,.5) -- (\x-.05,.45);} \foreach \x in {-2.4,-2.3,...,2.5} {\draw (\x,-1.5) -- (\x-.05,-1.55);} \draw (1.5,2.5)-- (1.5,-2.5) node[at start,above]{$\varphi_1^+=0$};\draw (.5,2.5)-- (.5,-2.5) node[at end,below]{$\varphi_1^-=0$}; \draw (-.5,2.5)-- (-.5,-2.5) node[at start,above]{$\varphi_3^+=0$};\draw (-1.5,2.5)-- (-1.5,-2.5) node[at end,below]{$\varphi_3^-=0$}; \draw (2.5,1.5)-- (-2.5,1.5) node[at start,right]{$\varphi_2^+=0$}; \draw (2.5,.5)-- (-2.5,.5) node[at start,right]{$\varphi_2^-=0$}; \draw (2.5,-.5)-- (-2.5,-.5) node[at start,right]{$\varphi_4^+=0$}; \draw (2.5,-1.5)-- (-2.5,-1.5) node[at start,right]{$\varphi_4^-=0$}; \node[scale=.7] at (2,2) {$(*,*,*,*)$}; \node[scale=.7] at (0,2) {$(0,*,*,*)$}; \node[scale=.7] at (-2,2) {$(0,*,0,*)$}; \node[scale=.7] at (2,0) {$(*,0,*,*)$}; \node[scale=.7] at (0,0) {$(0,0,*,*)$}; \node[scale=.7] at (-2,0) {$(0,0,0,*)$}; \node[scale=.7] at (2,-2) {$(*,0,*,0)$}; \node[scale=.7] at (0,-2) {$(0,0,*,0)$}; \node[scale=.7] at (-2,-2) {$(0,0,0,0)$}; }\]

\subsection{The Steinberg algebra}
\label{sec:steinberg-algebra}

For each pair $(\sgns, w)\in \compat$, we have an attached space $X_{\sgns,w}=G\times_{wBw^{-1}}V_{\sgns}$, with the induced map $p_{\sgns,w}\colon X_{\sgns,w}\to V$ sending $(g,v)$ to $gv$. For any collection of these pairs $I\subset \compat$, we can define a Steinberg variety by taking the fiber product of each pair of them over $V$:
\[\mathbb{X}_I:=\bigsqcup_{\substack{(\sgns,w)\in I\\ (\sgns',w')\in I}} X_{\sgns,w}\times_V X_{\sgns',w'}\]
with a natural $G$ action.
\begin{definition}
The $G$-equivariant Borel-Moore homology $H_*^{BM, G}(\mathbb{X}_I)$ equipped with its convolution multiplication is called the {\bf Steinberg algebra} in \cite{SauQH}.
\end{definition}
Equivalently, we can think of the {\bf Steinberg category} $\Stein_I$ whose objects are elements of $I$ and where morphisms $(\sgns',w')\to (\sgns,w)$ are given by $ H_*^{BM, G}(X_{\sgns,w}\times_V X_{\sgns',w'})$, with composition given by convolution. The Steinberg algebra is simply the sum of all the morphisms in this category; the category of modules over the Steinberg algebra is naturally equivalent to the category of modules over the category $\Stein_I$ (that is, functors from this category to the category of $\K$-vector spaces).

This category has a sheaf-theoretic interpretation as well. By \cite[Thm. 8.6.7]{CG97}, we have that
\[H_*^{BM, G}(X_{\sgns,w}\times_V X_{\sgns',w'})\cong \Ext^\bullet((p_{\sgns,w})_*\K_{X_{\sgns,w}},(p_{\sgns',w'})_*\K_{X_{\sgns',w'}})\]
with the convolution product matching the Yoneda product. The argument in \cite{CG97} in fact shows that this can be enhanced to a dg-functor $\Stein_I\to D_{\operatorname{dg}}^b(V)$, where $\Stein_I$ is made into a dg-category by replacing $H_*^{BM, G}(X_{\sgns,w}\times_V X_{\sgns',w'})$ with the Borel-Moore chain complex on $X_{\sgns,w}\times_V X_{\sgns',w'}$. As argued in \cite[Prop. 2.19]{Webqui}, this induced dg-structure on $\Stein_I$ is formal (and thus can essentially be ignored).
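For the reader's convenience, we recall the shape of this convolution product schematically (this is standard, following \cite[\S 2.7]{CG97}, and is recorded here only as an aide-m\'emoire): given classes $c_{12}\in H_*^{BM,G}(X_1\times_V X_2)$ and $c_{23}\in H_*^{BM,G}(X_2\times_V X_3)$, their convolution is
\[c_{12}\star c_{23}=(p_{13})_*\big(p_{12}^*c_{12}\cap p_{23}^*c_{23}\big)\in H_*^{BM,G}(X_1\times_V X_3),\]
where $p_{ij}$ denotes the projection of $X_1\times_V X_2\times_V X_3$ onto the indicated pair of factors, with pullbacks and the intersection taken as in \cite{CG97}.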
Of course, we can define the same space, algebra or category when $I$ is a set with a map to $\compat$. The Steinberg category $\Stein_I$ attached to a set with such a map is equivalent to the category attached to its image (so the corresponding algebras are Morita equivalent). Furthermore, the spaces $X_{\sgns,1}$ and $X_{w\cdot \sgns,w}$ are isomorphic via the action of any lift of $w$ to $\tilde{G}$, so the graph of this isomorphism provides an isomorphism between the objects $(\sgns,1)$ and $(w\cdot \sgns,w)$ in the Steinberg category.

\subsection{A presentation of the Steinberg category}
\label{sec:pres-steinb-categ}

We will give an explicit presentation of Steinberg algebras for certain sets which generalize both the KLR algebras of \cite{KLI,Rou2KM} and the hypertoric algebras of \cite{GDKD,BLPWtorico}. To give a simpler presentation, we will make an auxiliary modification to our chambers. Choose $\ep_i\in (-1,0)$ generically, subject to the constraint that $\ep_i= \ep_j$ if $i\sim j$. Now, consider the larger chambers
\[C_{\sgns}'=\{ \gamma\in \spt\mid \varphi_i(\gamma)>\ep_i \text{ if } \sgns_i=+ \text{ and } \varphi_i(\gamma) <\ep_i \text{ if } \sgns_i=-\},\]
with $C_{\sgns,w}'$ defined as in Definition \ref{def:chambers}.
\begin{example}
We'll choose $\ep_1=\ep_2=-1/3$ and $\ep_3=\ep_4=-2/3$ in our running example. Thus, our arrangement becomes
\[\tikz[very thick,scale=1.2]{ \draw (1.2,2.5)-- (1.2,-2.5) node[at start,above]{$\varphi_1=-1/3$}; \draw (-1.2,2.5)-- (-1.2,-2.5) node[at start,above]{$\varphi_3=-2/3$}; \draw (2.5,1.2)-- (-2.5,1.2) node[at start,right]{$\varphi_2=-1/3$}; \draw (2.5,-1.2)-- (-2.5,-1.2) node[at start,right]{$\varphi_4=-2/3$}; \node[scale=.8] at (2,2) {$C'_{+,+,+,+}$}; \node[scale=.8] at (0,2) {$C'_{-,+,+,+}$}; \node[scale=.8] at (-2,2) {$C'_{-,+,-,+}$}; \node[scale=.8] at (2,0) {$C'_{+,-,+,+}$};\node[scale=.8] at (0,0) {$C'_{-,-,+,+}$}; \node[scale=.8] at (-2,0) {$C'_{-,-,-,+}$}; \node[scale=.8] at (2,-2) {$C'_{+,-,+,-}$}; \node[scale=.8] at (0,-2) {$C'_{-,-,+,-}$}; \node[scale=.8] at (-2,-2) {$C'_{-,-,-,-}$}; \draw[dotted] (-2.5,-2.5) -- node[above right,at end]{$\alpha=0$}(2.5,2.5); }\]
\end{example}
We'll call the hyperplane $H_{i}=\{\gamma\mid \varphi_i(\gamma) =\ep_i \}$ a {\bf matter hyperplane}, and $\cox_{\al}=\{\gamma\mid \al(\gamma) =0 \}$ a {\bf Coxeter hyperplane}. We'll draw matter hyperplanes with solid lines and Coxeter hyperplanes with dotted lines in diagrams. Just as the chambers $C_{\sgns}$ capture the behavior of the subspace $V_{\lift}$ as $\lift$ changes, the chambers $C_{\sgns}'$ give the corresponding subspaces for the action of $\ft_{\epsilon}$, which is the same subspace as $\ft_1$, but with a different action on $T^*V$: we let $\ft$ act on $V_{\varphi_i}$ by $\varphi_i-\ep_i$ and on the dual space by $-\varphi_i+\ep_i$. Note that $C_{\sgns,w}\subset C_{\sgns,w}'$.
\begin{definition}\label{def:I}
We let $I$ (resp. $I'$) be the set of sign vectors $\sgns\in \{+,-\}^d$ for which there exists a choice of flavor $\phi$ with $c_{\sgns,1}\neq \emptyset$ (resp. $C_{\sgns,1}'\neq \emptyset$); note that $I'\supseteq I$. If we fix the flavor $\phi$, we denote the corresponding sets $I_{\phi}$ and $I'_{\phi}$.
\end{definition}
\begin{example}
In our running example, we have
\[I'=I=\{ (+,+,+,+), (+,-,+,+), (+,-,+,-), (-,-,+,+), (-,-,+,-), (-,-,-,-)\}.\]
This can change if we choose a different $\phi$: if $\phi$ acts trivially on $V$ and $\ep_1>\ep_2$ then $I'$ is unchanged but
\[I=\{ (+,+,+,+), (+,-,+,-),(-,-,-,-)\}.\]
\end{example}
If $C_\sgns\neq \emptyset$, then there is a unique sign vector $w\sgns$ such that $C_{w\sgns}=w\cdot C_{\sgns}$. This is the unique permutation of $\sgns$ in which each $\varphi_i$ is switched with the $\varphi_j=w\varphi_i$ satisfying $i\sim j$. This is well-defined since if $\varphi_i=\varphi_k$ and $i\sim k$, then these have the same sign (since $C_\sgns\neq \emptyset$). In particular, if $\sgns\in I'$, the translate $w\sgns$ is well-defined.

Given a pair $(\sgns,\sgns' )$, we let $\varphi(\sgns,\sgns' )$ be the product of the weights $\varphi_i$ such that $\sgns_i=+$ and $\sgns'_i=-$. Given a triple $(\sgns,\sgns',\sgns'')$, we let $\varphi (\sgns,\sgns',\sgns'')$ be the product of the weights $\varphi_i$ such that $\sgns_i=\sgns_i''=-\sgns_i'$. Let $\partial_\al(f)=\frac{s_\al\cdot f-f}{\al}$ be the usual BGG-Demazure operator on $\Symt:=\Sym(\ft^*)\cong H^*_G(G/B)$.
\begin{definition}
We let $A_{I'}$ denote the category with objects given by the sign vectors $\sgns\in I'$, and morphisms generated by
\begin{itemize}
\item An action of $\Symt$ on each object $\sgns$.
\item Wall-crossing elements $\wall(\sgns;\sgns')\colon \sgns'\to \sgns$.
\item Elements $\psi_\al(\sgns)\colon \sgns \to \sgns$ for roots $\al$ such that $ s_\al\cdot \sgns=\sgns$.
\end{itemize}
subject to the ``codimension 1'' relations:
\newseq
\begin{align*}
\wall(\sgns,\sgns')\wall(\sgns',\sgns'')&=\varphi (\sgns,\sgns',\sgns'') \wall(\sgns,\sgns'') \label{wall}\subeqn\\
\mu \wall(\sgns,\sgns')& = \wall(\sgns,\sgns')\mu \label{dot-commute}\subeqn\\
\psi_\al(\sgns)^2&=0 \label{demazure1}\subeqn\\
\psi_\al(\sgns) (\mu)-(s_\al \mu) \psi_\al(\sgns)&=\partial_\al(\mu) \label{demazure2}\subeqn
\end{align*}
with $\sgns,\sgns',\sgns''\in I', \mu\in \ft^*$ and $\al$ a root with $s_\al\cdot C_{\sgns}'=C_{\sgns}' $, and the ``codimension 2'' relations (\ref{coxeter}--\ref{GDKD}) below. We get one of these for every codimension 2 intersection of hyperplanes which forms a face of $C_{\sgns,1}'$. There are 3 possible types of these intersections, which we represent in each case below by drawing a transverse neighborhood of the codimension 2 intersection:
\begin{enumerate}
\item The codimension 2 subspace is the intersection of 2 Coxeter hyperplanes $\cox_\al$ and $\cox_{\beta}$. For any chamber $C_{\sgns,1}'$ adjacent to these hyperplanes, we have the usual Coxeter relations for $m=\al^\vee(\beta)\cdot \beta^\vee(\al)$:
\begin{equation*}
\begin{tikzpicture}[very thick]
\draw[dotted] (-1,-1) -- node[above,at end]{$\beta$}(1,1);
\draw[dotted] (-1,1) --(1,-1);
\draw[dotted] (-1,0) --(1,0);
\draw[dotted] (0,-1) -- node[above,at end]{$\al$} (0,1);
\node at (.26,.64){$\sgns$};
\end{tikzpicture}
\end{equation*}
\begin{equation*}
\underbrace{\psi_\al(\sgns) \psi_\beta(\sgns) \psi_\al(\sgns)\cdots }_{m\text{ times}} = \underbrace{\psi_\beta(\sgns) \psi_\al(\sgns) \psi_\beta(\sgns) \cdots}_{m\text{ times}}\label{coxeter}\subeqn
\end{equation*}
\item The codimension 2 subspace is the intersection of a Coxeter hyperplane $\cox_\al$ and $H_j$. In this case, the codimension 2 subspace lies in one other hyperplane: the one corresponding to the Weyl translate $\vp_k=s_\al \vp_j$, given by $H_k$.
Both of these hyperplanes have a multiplicity $w$, given by the dimension of the corresponding weight space in $V$. We label the adjacent chambers $\boldsymbol{\rho},\sgns, \boldsymbol{\tau}$ as shown, with $\boldsymbol{\rho}$ on the positive side of both hyperplanes (and thus $\boldsymbol{\tau}$ on the negative side of both).
\begin{equation*}
\begin{tikzpicture}[very thick]
\draw[dotted] (-1,0) -- node [at start,left]{$\al$}(1,0);
\draw (1,-1)-- node[below, at start]{$j$} (-1,1);
\draw (-1,-1)-- node[below, at start]{$k$} (1,1);
\node at (0,-.7) {$\sgns$};
\node at (.7,-.3) {$\boldsymbol{\rho}$};
\node at (-.7,-.3) {$\boldsymbol{\tau}$};
\end{tikzpicture}
\end{equation*}
We then have the relations
\begin{align*}
\psi_\al(\boldsymbol{\rho}) \wall(\boldsymbol{\rho},\sgns) \wall(\sgns,\boldsymbol{\tau})&= \wall(\boldsymbol{\rho}, \sgns) \wall(\sgns,\boldsymbol{\tau}) \psi_\al (\boldsymbol{\tau}) \label{triple1}\subeqn\\
\psi_\al (\boldsymbol{\tau}) \wall(\boldsymbol{\tau},\sgns) \wall(\sgns,\boldsymbol{\rho})&= \wall(\boldsymbol{\tau}, \sgns) \wall(\sgns,\boldsymbol{\rho}) \psi_\al (\boldsymbol{\rho})\label{triple2}\subeqn\\
\wall(\sgns,\boldsymbol{\tau})\psi_\al (\boldsymbol{\tau}) \wall(\boldsymbol{\tau},\sgns)&= \wall(\sgns,\boldsymbol{\rho})\psi_\al (\boldsymbol{\rho}) \wall(\boldsymbol{\rho},\sgns)-\partial_\al(\varphi_j^w)e(\sgns)\label{triple3}\subeqn
\end{align*}
\item The codimension 2 subspace is the intersection of two hyperplanes $\vp_i(\gamma)=\ep_i$ and $\vp_j(\gamma)=\ep_j$. The resulting relation here is a consequence of (\ref{wall}), but we include it for completeness. In this case, the codimension 2 subspace lies in no other hyperplanes by the genericity of $\ep_i$. We label the adjacent chambers $\boldsymbol{\pi},\boldsymbol{\rho},\sgns, \boldsymbol{\tau}$ as shown.
\begin{equation*}
\begin{tikzpicture}[very thick]
\draw (1,-1)-- node[below, at start]{$j$} (-1,1);
\draw (-1,-1)-- node[below, at start]{$i$} (1,1);
\node at (0,-.7) {$\sgns$};
\node at (.7,0) {$\boldsymbol{\rho}$};
\node at (-.7,0) {$\boldsymbol{\tau}$};\node at (0,.7) {$\boldsymbol{\pi}$};
\end{tikzpicture}
\end{equation*}
We then have the relation
\begin{equation*}
\wall(\boldsymbol{\pi},\boldsymbol{\rho}) \wall(\boldsymbol{\rho},\sgns)= \wall(\boldsymbol{\pi},\boldsymbol{\tau}) \wall(\boldsymbol{\tau},\sgns)\label{GDKD}\subeqn
\end{equation*}
\end{enumerate}
\end{definition}
\begin{example}\label{ex:KLR}
In our running example, the resulting algebra is well-known: we can represent the positive Weyl chamber as a pair of points on the real line giving the coordinates $(a,b)$. Since we are in the positive Weyl chamber, we have $a>b$, so there is no ambiguity. We cross a hyperplane when these points meet, or when they cross $x=2/3$ or $x=-5/3$. Thus, if we add red points at $x\in \{2/3,-5/3\}$, we'll obtain a bijection between chambers and configurations of points up to isotopy leaving the red points in place.
\[\tikz[very thick,scale=1.2]{ \draw (1.2,1.2)-- (1.2,-2.5) node[at end,below]{$\varphi_1=-1/3$}; \draw (-1.2,-1.2)-- (-1.2,-2.5) node[at end,below]{$\varphi_3=-2/3$}; \draw (2.5,1.2)-- (1.2,1.2) node[at start,right]{$\varphi_2=-1/3$}; \draw (2.5,-1.2)-- (-1.2,-1.2) node[at start,right]{$\varphi_4=-2/3$}; \draw[dotted] (-2.5,-2.5) -- node[above right,at end]{$\alpha=0$}(2.5,2.5); \node[draw,fill=white,scale=.7] at (2,-2) {\tikz[line width=2pt]{\draw (-.5,0)--(-.5,1);\draw[red] (0,0)--(0,1); \draw[red] (.5,0) --(.5,1);\draw(1,0)--(1,1); }}; \node[draw,fill=white,scale=.7] at (0,-2) {\tikz[line width=2pt]{\draw (-.5,0)--(-.5,1);\draw[red] (0,0)--(0,1); \draw (.5,0) --(.5,1);\draw[red](1,0)--(1,1); }}; \node[draw,fill=white,scale=.7] at (2,0) {\tikz[line width=2pt]{\draw[red] (-.5,0)--(-.5,1);\draw (0,0)--(0,1); \draw[red] (.5,0) --(.5,1);\draw(1,0)--(1,1); }}; \node[draw,fill=white,scale=.7] at (-2,-2) {\tikz[line width=2pt]{\draw (-.5,0)--(-.5,1);\draw (0,0)--(0,1); \draw[red] (.5,0) --(.5,1);\draw [red](1,0)--(1,1); }}; \node[draw,fill=white,scale=.7] at (0,0) {\tikz[line width=2pt]{\draw [red](-.5,0)--(-.5,1);\draw (0,0)--(0,1); \draw (.5,0) --(.5,1);\draw[red](1,0)--(1,1); }}; \node[draw,fill=white,scale=.7] at (2,2) {\tikz[line width=2pt]{\draw[red] (-.5,0)--(-.5,1);\draw[red] (0,0)--(0,1); \draw (.5,0) --(.5,1);\draw(1,0)--(1,1); }}; }\]
We'll represent morphisms $\sgns\to \sgns'$ by Stendhal diagrams (as defined in \cite[\S 4]{Webmerged}) that match $\sgns$ at the bottom and $\sgns'$ at the top (with composition given by stacking, using isotopies to match the top and bottom if possible). We send:
\begin{itemize}
\item the identity on $\sigma$ to a diagram with all strands vertical,
\item the action of $\C[\gamma_1,\gamma_2]$ to placing dots on the two black strands,
\item $\wall(\sgns;\sgns')$ to a diagram with straight lines interpolating between the top and bottom,
\item $\psi_\al(\sgns)$, which is only well-defined if there is no red line separating the two black strands, to a crossing of the two black strands.
\end{itemize}
\begin{equation*}\tikz[baseline]{ \node[label=below:{$\gamma_i$}] at (0,0){ \tikz[very thick]{ \draw (-.5,-.5)-- (-.5,.5); \draw (.5,-.5)-- (.5,.5) node [midway,fill=black,circle,inner sep=2pt]{} ; \draw (1.5,-.5)-- (1.5,.5); \node at (1,0){$\cdots$}; \node at (0,0){$\cdots$}; } }; \node[label=below:{$\psi_i(\sgns)$}] at (3,0){ \tikz[very thick]{ \draw (-.5,-.5)-- (-.5,.5); \draw (.1,-.5)-- (.9,.5); \draw (.9,-.5)-- (.1,.5); \draw (1.5,-.5)-- (1.5,.5); \node at (1.1,0){$\cdots$}; \node at (-.1,0){$\cdots$}; } }; \node[label=below:{$\wall(\sgns;\sgns')$}] at (6,-0){ \tikz[very thick]{ \draw (-.5,-.5)-- (-.5,.5); \draw[red] (.1,-.5)-- (.9,.5); \draw (.9,-.5)-- (.1,.5); \draw (1.5,-.5)-- (1.5,.5); \node at (1.1,0){$\cdots$}; \node at (-.1,0){$\cdots$}; } }; \node[label=below:{$\wall(\sgns';\sgns)$}] at (9,-0){ \tikz[very thick]{ \draw (-.5,-.5)-- (-.5,.5); \draw (.1,-.5)-- (.9,.5); \draw[red] (.9,-.5)-- (.1,.5); \draw (1.5,-.5)-- (1.5,.5); \node at (1.1,0){$\cdots$}; \node at (-.1,0){$\cdots$}; } }; }\label{eq:diagrams}
\end{equation*}
The relations (\ref{wall}--\ref{triple3}) exactly match those of $\tilde{T}^{2}_{-2}$ as defined in \cite[Def. 2.3]{WebTGK} (a special case of the algebras defined in \cite[\S 4]{Webmerged}). This is a special case of a much more general result, which we will discuss in Sections \ref{sec:examples} and \ref{sec:quiver}.
\end{example}
Let $G_i\subset G/B\times G/B$ be the preimage of the diagonal in $G/P_i\times G/P_i$.
Given a $P_i$-representation $Q$, we let $L_{P_i}(Q)$ be the pullback of the associated bundle on $G/P_i$ to $G_i$, and if $Q$ is a representation of the Borel, then we let $L(Q)$ be the associated vector bundle on the diagonal.
\begin{theorem}\label{th:Steinberg}
We have a natural equivalence $A_{I'}\cong \Stein_{I'}$ which matches objects in the obvious way, and sends
\begin{enumerate}
\item $\mu \colon \sgns\to \sgns$ to the Euler class $e(L(\mu))$ of the associated bundle on the diagonal copy of $X_{\sgns,1}$ in $\mathbb{X}_I$.
\item $\wall(\sgns,\sgns')$ to the fundamental class $[L(V_{\sgns}\cap V_{\sgns'})]$ of the associated variety embedded naturally in $X_{\sgns,1}\times_{V} X_{\sgns',1}\subset G/B\times G/B\times V$.
\item $\psi_{\al_i}(\sgns)$ to the fundamental class $[L_{P_i}(V_{\sgns})]$ of the associated variety embedded naturally in $X_{\sgns}\times_{V} X_{\sgns}\subset G/B\times G/B\times V$.
\end{enumerate}
\end{theorem}
We will prove this theorem below, once we have developed some of the theory of these algebras.
\begin{lemma}
The algebra $A_{I'}$ has a natural representation $Y$ which sends each object $\sgns$ to the polynomial ring $\Symt$. The action is defined by the formulae:
\begin{align}
\wall(\sgns,\sgns')\cdot f &=\varphi(\sgns,\sgns') f \nonumber\\
\psi_\al(\sgns)\cdot f &=\partial_\al(f) \label{Y-action}\\
\mu \cdot f &=\mu f \nonumber
\end{align}
\end{lemma}
For each pair $(\sgns,\sgns')\in I'\times I'$ and $w\in W$, we fix a path of minimal length (i.e. crossing a minimal number of hyperplanes) from $C_{\sgns',1}'$ to $C_{w\sgns,w}'$. Now, fold this path so that it lies in the positive Weyl chamber: the first time it crosses a root hyperplane, apply the corresponding simple reflection to what remains of the path. Then follow this new path until it strikes another wall, and apply that simple reflection to the remaining path, etc. The result is a sequence $\beta_1,\beta_2,\dots, \beta_p$ of simple roots (one for each root hyperplane crossed) and sign vectors $\sgns_1,\dots, \sgns_p$ corresponding to the chambers where we reflect. Now, consider the product
\begin{equation}
\tilde{\wall}(\sgns,\sgns',w)=\wall(\sgns,\sgns_{p})\psi_{\beta_p}(\sgns_{p}) \wall(\sgns_{p},\sgns_{p-1}) \psi_{\beta_{p-1}}(\sgns_{p-1})\cdots \psi_{\beta_1}(\sgns_{1}) \wall(\sgns_1,\sgns').\label{eq:tilde-wall}
\end{equation}
\begin{example}
In our running example, this is given by the diagrams without dots which join the black strands with no crossing if $w=1$ and with a crossing if $w=s_{\al}$, with the minimal number of red/black crossings possible. In the diagram below, we show one possible path $\sgns'=(+,-,+,-)\to s_{\al}\sgns=(-,+,+,+)$, and its reflection.
\[\tikz[very thick,scale=.9]{ \draw (1.2,2.5)-- (1.2,-2.5) node[at start,above,scale=.8]{$\varphi_1=-1/3$}; \draw (-1.2,2.5)-- (-1.2,-2.5) node[at start,above,scale=.8]{$\varphi_3=-2/3$}; \draw (2.5,1.2)-- (-2.5,1.2) node[at start,right,scale=.8]{$\varphi_2=-1/3$}; \draw (2.5,-1.2)-- (-2.5,-1.2) node[at start,right,scale=.8]{$\varphi_4=-2/3$}; \draw[dotted] (-2.5,-2.5) -- node[above right,at end,scale=.8]{$\alpha=0$}(2.5,2.5); \draw[->,dashed] (2,-2) to[in=-60,out=90] (.7,.7) to[in=-70,out=120] (.2,2.2); }\qquad \tikz[very thick,scale=.9]{ \draw (1.2,1.2)-- (1.2,-2.5) node[at end,below]{$\varphi_1=-1/3$}; \draw (-1.2,-1.2)-- (-1.2,-2.5) node[at end,below]{$\varphi_3=-2/3$}; \draw (2.5,1.2)-- (1.2,1.2) node[at start,right]{$\varphi_2=-1/3$}; \draw (2.5,-1.2)-- (-1.2,-1.2) node[at start,right]{$\varphi_4=-2/3$}; \draw[dotted] (-2.5,-2.5) -- node[above right,at end]{$\alpha=0$}(2.5,2.5); \draw[->,dashed] (2,-2) to[in=-60,out=90] (.71,.69) to[in=160,out=-30] (2.2,.2); } \] The resulting element $\tilde{\wall}((+,-,+,+) ,(+,-,+,-),s_\al)$ is given by: \begin{equation*} \wall((+,-,+,+),(-,-,+,+))\psi_{\al}((-,-,+,+)) \wall((-,-,+,+),(+,-,+,-)) \end{equation*} and represented by the diagram \begin{equation*}\tikz[baseline,very thick,scale=1.5]{ \draw (-.5,-.5)to[in=-150,out=30] (1.5,.5); \draw (1.5,-.5)to[in=-70,out=130] (.5,.5); \draw[wei] (0,-.5)-- (0,.5); \draw[wei] (1,-.5)-- (1,.5); }. \end{equation*} \end{example} \begin{theorem} The elements $\tilde{\wall}(\sgns,\sgns',w)$ are a basis of the morphisms in $A_{I'}$ as a right module over $\Symt$. \end{theorem} \begin{proof} First, we note that they span. For this, it suffices to show that their span contains the identity of each object, which is $\tilde{\wall}(\sgns,\sgns,1)$ and is closed under right multiplication by the generators $\wall(-,-)$ and $\psi(-)$. Note that \begin{align*} \tilde{\wall}(\sgns,\sgns',w) \psi_{\al}(\sgns) &= \begin{cases} \tilde{\wall}(\sgns, \sgns', w s_\al) & w s_\al>w\\ 0& w s_\al<w \end{cases}\\ \tilde{\wall}(\sgns,\sgns',w) \wall(\sgns',\sgns'')&=\tilde{\wall}(\sgns,\sgns'',w) \varphi (\sgns_p,\sgns',\sgns'') \end{align*} so this shows that these vectors span. Now, consider the action of these operators in the representation $Y$ localized over the fraction field of $\Symt$. The action of $\tilde{\wall}(\sgns,\sgns',w) $ is given by the element $w$, times a non-zero rational function, plus elements which are shorter in Bruhat order. Thus, the operators $\sgns\to \sgns'$ span the twisted group algebra of $W$ over rational functions. Since this group algebra is a vector space of dimension $\# W$ over the fraction field, this is only possible if the elements $ \tilde{\wall}(\sgns,\sgns',w)$ are linearly independent over $\Symt$. \end{proof} \begin{proof}[Proof of Theorem \ref{th:Steinberg}] The functor described in the statement matches the action of $A_I$ on $Y$ with that of $H_*^{BM, G}(\mathbb{X}_I)$ on $H_*^{BM, G}(X_I)$, as simple computations with pushforward and pullback confirm (for example, as in \cite{VV}). The action on the latter is faithful following the argument in \cite[Proposition 4.7]{SWschur}, so this shows we have a faithful functor $A_I\to H_*^{BM, G}(\mathbb{X}_I)$. Let $\mathbb{X}(w)$ be the subset of the space $\mathbb{X}_{I'}$ where the relative position of the two flags is $w\in W$. The surjectivity follows from the fact that $\tilde{\wall}(\sgns,\sgns',w)$ is supported on $\overline{\mathbb{X}(w)}$, and pulls back to the fundamental class on $\mathbb{X}(w)$. 
The intersection of this space with $X_{\sgns'}\times_VX_{\sgns}$ is an affine bundle with fiber given by a conjugate of $V_{w\sgns'}\cap V_{\sgns}$. If a weight is positive or negative for both $w\sgns'$ and $\sgns$, then a minimal length path does not cross the corresponding hyperplane, whereas it will cross it once if the signs are different. \end{proof} \subsection{Variations} \label{sec:variations} As earlier, we can generalize these algebras by taking any set $P$ with a map $\iota\colon P\to I'$, and considering the category $\Stein_{P}$ with objects given by $P$ where \[\Hom_{\Stein_{P}}(p,p'):=\Hom_{\Stein_{I'}}(\iota(p),\iota(p')).\] We let $J$ (resp. $J'$) be the subset of $\compat$ such that $C_{\sgns,w}\neq \emptyset$ (resp. $C_{\sgns,w}'\neq \emptyset$); note that $J'\supseteq J$. In this case, the map $J'\to I'$ is given by $(\sgns,w)\mapsto (w^{-1}\sgns,1)$. The algebra $\Stein_{J'}$ is Morita equivalent to $\Stein_{I'}$. However, it is a convenient framework for understanding this category, because we can define certain special elements of it. We let $w\colon (\sgns,w')\to (w\sgns,ww')$ be the image of the identity on $((w')^{-1}\sgns,1)$ under the isomorphism \[\Hom_{\Stein_{J'}}( (w\sgns,ww'), (\sgns,w')):=\Hom_{\Stein_{I'}}(((w')^{-1}\sgns,1),((w')^{-1}\sgns,1)).\] These obviously satisfy the relations of $W$. It's more natural to think of the $\Symt$ action on $(\sgns,w)$ to be the conjugate by $w$ of that on $(w^{-1}\sgns,1)$. For each pair of pairs $(\sgns,w)$ and $(\sgns',w')$, we have a well-defined element of this algebra ${\wall}(\sgns',w';\sgns,w) $ defined using the folding of a minimal path from $C'_{w^{-1}\sgns,1}$ to $C'_{w^{-1}\sgns',w^{-1}w'}$ (using the same notation as \eqref{eq:tilde-wall}) by \[{\wall}(\sgns',w';\sgns,w)=w'\wall(\sgns',\sgns_{p})s_{\beta_p} \wall(\sgns_{p},\sgns_{p-1})\cdots s_{\beta_1}\wall(\sgns_1,\sgns)w^{-1}.\] When we extend the polynomial representation $Y$ to this category, we thus still send every object to a copy of $\Symt$ with the action given by \begin{align} \wall(\sgns,w;\sgns',w')\cdot f &=\varphi(\sgns,\sgns') f \nonumber\\ \psi_\al(\sgns',w')\cdot f &=\partial_\al(f) \label{Y'-action1}\\ w\cdot f &=f^w \nonumber\\ \mu \cdot f &=\mu f \nonumber \end{align} Note that if a sign vector $\sgns$ is compatible with $w$ and $w'$, then $\wall(\sgns,w;\sgns,w')$ gives an isomorphism between these objects. Thus, we can reduce the size of our category by only choosing one object per sign vector $\sgns$, and identifying it with any others via the elements $\wall(\sgns,w;\sgns,w')$. This is the algebra $\Stein_{\redu}$ attached to the set $\redu$ of sign vectors with $C_{\sgns,w}\neq \emptyset$ for some $w$ (similarly, we can define $\redu'$), with the map to $I'$ associating a sign vector to the unique Weyl translate compatible with $1\in W$. Note that in this category, if $s_\al\sgns=\sgns$, then $s_\al$ is an endomorphism of this object, and computation in the polynomial representation confirms the relation $s_\al=\al\psi_\al+1$. In $\Stein_{\redu}$, we have morphisms $\wall(\sgns,\sgns'),\psi_\al(\sgns),w\in W,\mu\in \ft^*$ as above, labeled by feasible sign vectors $\sgns,\sgns'$, and these act as in \eqref{Y'-action1}. This algebra contains as a subcategory $\Stein^{\ab}_{\redu}$, the category attached to the representation $V$ and the torus $\tilde{T}\subset \tilde{G}$. This is generated over $\Symt$ by the elements $\wall(\sgns,\sgns')$. We can also consider the set $X_*(T)_1$ of lifts. 
Every element of this set lies in one of the chambers in $\redu$. Thus, we have a map $X_*(T)_1\to \{+,-\}^d $ sending each lift to its chamber. We have an associated Steinberg category $\Stein_X=\Stein_{X_*(T)_1}$. Finally, we consider the extended arrangement on $\ft_\ep$ defined by the hyperplanes $\varphi_i(\xi)=n+\ep_i$ for $n\in \Z$. Note that if we use the isomorphism $\ft_\ep\cong \ft$, with $\xi'$ denoting the image of $\xi$, we have that $\varphi_i(\xi)=\varphi_i(\xi')-\ep_i$. The chambers $\ACs$ of this arrangement are defined not by sign vectors, but rather by integer vectors: associated to $\mathbf{a}=(a_1,\dots, a_d)$ we have the chamber \begin{equation} \AC_{\Ba}'=\{\xi \in \ft_\ep\mid a_i<\varphi_i(\xi)<a_i+1\text{ for all $i$}\}.\label{eq:aff-cham} \end{equation} As usual, we call $\Ba$ feasible if this set is non-empty. Considering the inclusion of chambers induces a map $\eta\colon\ACs \to \redu'$, which gives us a category $\Stein_{\ACs}$. Since the map $\ACs\to \redu'$ is surjective, $\Stein_{\ACs}$ is equivalent to $\Stein_{I'}$, but it will be useful to have this category for comparison to the Coulomb case. As before, we can generate the morphisms of this category with morphisms $\wall(\Ba,\Ba')$ and copies of $\K[W]$ and $\Symt$. These act in the polynomial representation by \begin{align} \wall(\Ba;\Ba')\cdot f &=\varphi(\eta(\Ba),\eta(\Ba')) f \nonumber\\ \psi_\al(\Ba)\cdot f &=\partial_\al(f) \label{Y'-action2}\\ w\cdot f &=f^w \nonumber\\ \mu \cdot f &=\mu f. \nonumber \end{align} \begin{definition}\label{def:bimod} Given two subsets $P,P'\subset I'$, we define the $A_{P'}\operatorname{-}A_{P}$-bimodule ${}_{P'}A_{P}$ (or similarly a $\Stein_{P'}\operatorname{-}\Stein_{P}$-bimodule ${}_{P'}\Stein_{P}$) by simply associating to the pair $(p',p)\in P'\times P$ the vector space $\Hom_{A_{I'}}(p,p')$. \end{definition} This extends in an obvious way to $P,P'$ simply mapping to $I'$ (or to $\redu'$, etc.). \subsection{The quiver and hypertoric cases} \label{sec:examples} If $G$ is abelian, then none of the relations involving $\psi$ occur, since there are no Coxeter hyperplanes. We are left with the relations (\ref{wall}, \ref{dot-commute}, \ref{GDKD}), which appeared in \cite{GDKD,BLPWtorico}. The result is the algebra $A^!_{\operatorname{pol}}(\vartheta,-)$ from \cite[\S 8.6]{BLPWtorico}. Now we fix a quiver $\Gamma$ and let $V=\oplus_{i\to j}\Hom(\C^{d_i},\C^{d_j})$ as a representation of $G=\prod GL_{d_i}$ as usual. In this case we obtain the relations of a weighted KLR algebra as defined in \cite{WebwKLR}. Let $\mathbbm{t}$ act on $T^*V$ by cotangent scaling and write the lifts in $\spt$ as this action, plus an element of $\ft$. The Lie coalgebra $\ft^*$ is spanned by the weights of the defining representations $z_{i,k}$ for $i\in V(\Gamma)$ and $k=1,\dots, d_i$. In this case, the chambers $C_\sgns$ are cut out by the inequalities $z_{i,k}\geq z_{j,m}$ or $z_{i,k}\leq z_{j,m}-1$ if we have an arrow $j\to i$. Recall that the KLR category of a graph $\Gamma$ is a category whose objects are lists $\Bi\in V(\Gamma)^n$ and morphisms are certain string diagrams carrying dots. The KLR algebra is the direct sum of all morphism spaces in this category. We'll use a slightly unorthodox generating set for this category: \begin{itemize} \item The dots acting as a polynomial ring on each object. \item Given $\Bi,\Bj\in V(\Gamma)^n$, there is a unique diagram ${}_{\Bi}1_{\Bj}\colon \Bi\to \Bj$, as defined in the proof of \cite[Thm. 
2.5]{KLI}, which connects these objects with a minimal number of crossings. \item If $i_k=i_{k+1}$, then $\psi_k\colon \Bi\to \Bi$ switches these strands (in many other sources, $\psi_k$ is used for the morphism switching these strands no matter what the label; we have absorbed those with different labels into the diagrams ${}_{\Bi}1_{\Bj}$ above). \end{itemize} Given a list $\Bi\in V(\Gamma)^n$ where $n=\sum d_i$, we let $\xi_{\Bi}$ be the unique coweight where $z_{j,1}< \cdots < z_{j,d_j}$ and $z_{j,k}\in [1,n]$ satisfies $i_{z_{j,k}}=j$ for all $k$. Note that switching two entries of $\Bi$ which are not connected will not change the underlying chamber. Let $\EuScript{I}$ be the set of coweights occurring this way, with the obvious map $ \EuScript{I}\to \redu$ just remembering the chamber where each coweight lies. By comparing the representation \eqref{Y-action} with \cite[3.12]{Rou2KM}, we see immediately that: \begin{proposition} We have an equivalence between $A_{\EuScript{I}}$ and the KLR algebra of $\Gamma$ for the dimension vector $\mathbf{d}$, sending: \begin{enumerate} \item $z_{j,k}$ to the dot on the $k$th strand from the right labeled $j$, \item $\wall(\Bi,\Bj)$ to ${}_{\Bi}1_{\Bj}$, and \item $\psi_{\al_{j,k}}(\xi)$ to the element $\psi$ crossing the $k$th and $k+1$st strands from the left with label $j$ (these must be adjacent for $\psi_{\al_{j,k}}(\xi)$ to be defined). \end{enumerate} \end{proposition} Note that this is quite similar to the isomorphism discussed in Example \ref{ex:KLR}. We can simplify this a bit in the case where $\Gamma$ is bipartite, composed of ``odd'' and ``even'' vertices; we choose an orientation pointing from odd vertices to even vertices, and reindex by adding $\nicefrac{1}{2}$ to all the weights for odd vertices. In this case, our inequalities become $z_{i,k}\geq z_{j,m}+ \nicefrac{1}{2}$ independent of orientation. The existence of a non-trivial flavor complicates the situation. For each edge $e\colon i\to j$, we have a weight $\phi_e$ and a choice of $\ep_e\in (-1,0)$. In this case, the chambers $C_\sgns'$ are defined by inequalities of the form \begin{align*} z_{i,k}&\leq z_{j,m}+ \ep_e+\phi_e& \sigma_{i,j;k,m}&=+\\ z_{i,k}&\geq z_{j,m}+ \ep_e+\phi_e& \sigma_{i,j;k,m}&=-. \end{align*} Thus, the chambers $C_{\sgns,w}'$ are precisely the equivalence classes of loadings for the KLR algebra with the weighting $\vartheta_e=\ep_e+\phi_e$ by \cite[2.12]{WebwKLR}. Given a set $B$ of loadings, we have a map $B\to \redu'$ sending a loading to the corresponding chamber as above. Comparing \cite[2.7]{WebwKLR} to \eqref{Y-action} shows that: \begin{proposition}\label{prop:wKLR} We have an isomorphism ${W}^\vartheta_B\cong A_B$. \end{proposition} \subsection{Category $\cO$} \label{sec:category-co} In this section, we'll assume that $\K=\C$; furthermore, we'll fix an index set $Q$ with a map $Q\to I'$, and let $A:=A_Q$. In this case, the Steinberg algebra has an interpretation in terms of $G$-equivariant D-modules on $V$. This corresponds to the sheaf theoretic interpretation discussed before by the Riemann-Hilbert correspondence. Consider the union $X=\sqcup_{\sgns\in I} X_{\sgns}$ and let $p\colon X\to V$ be the projection to the second factor. Let $L=p_*\mathfrak{S}_X$ be the D-module pushforward of the structure sheaf on $X$ by this proper map, and let $L_\sgns=p_*\mathfrak{S}_{X_{\sgns}}$. As discussed earlier, \cite[Thm. 
8.6.7]{CG97} together with the Riemann-Hilbert correspondence shows that: \begin{proposition} We have a quasi-isomorphism of dg-algebras $A\cong \Ext^\bullet (L,L)$ where the left hand side is thought of as a dg-algebra with trivial differential. \end{proposition} This isomorphism induces an equivalence between the dg-subcategory of bounded complexes of D-modules on $V/G$ generated by $L$, and the category of dg-modules over $A$. Alternatively, it shows that the dg-subcategory $D^b_L({\operatorname{MHM}}(V/G))$ of mixed Hodge modules on $V/G$ generated by Tate twists of $L$ is equivalent to the dg-category of complexes of graded $A$-modules. This equivalence sends simple mixed Hodge modules to complexes with a single indecomposable projective term satisfying $\Hom(P,A)\cong P$, and intertwines Tate twist with simultaneous shift of internal and homological grading on $A_I$-modules. This leads us to: \begin{theorem}\label{th:MHM-LPC} This equivalence induces an equivalence of categories between the abelian category ${\operatorname{MHM}}_L $ of mixed Hodge modules lying in $D^b_L({\operatorname{MHM}}(V/G))$ and the graded category $\LPC(A)$ of linear projective complexes over $A$. \end{theorem} \begin{proof} We need only show that a complex in $D^b(A\operatorname{-gmod})$ is quasi-isomorphic to a linear projective complex if and only if it is the image of a mixed Hodge module. A linear projective complex has a filtration where the subquotients are single term linear projective complexes. These correspond to Tate twists of the mixed Hodge modules which are shifts of summands of $L$. Thus, the corresponding complex in $D^b_L({\operatorname{MHM}}(V/G))$ has a filtration whose subquotients are these mixed Hodge modules, and thus is itself a mixed Hodge module. Now, we will show the converse by induction on the length of the mixed Hodge module. We have already discussed the length 1 case. If $X$ is a mixed Hodge module in ${\operatorname{MHM}}_L $ then it has a simple submodule $K$ which is a shift of a summand of $L$. By the inductive hypothesis, the complex corresponding to $X/K\oplus K$ is linear projective, and $X$ has a corresponding complex with the same underlying module, and a different differential (we take the cone of the corresponding element of $\Ext^1(X/K,K)$), which is thus also linear projective. \end{proof} Now, we consider how this construction behaves under reduction. For a given character $\xi$, we can define a GIT quotient $\fM_{H,\xi}=T^*V/\!\!/\!\!/\!\!/G$, which is a quasi-projective variety. We wish to study modules over a quantization of $\fM_{H,\xi}$ as in \cite{BLPWquant,BLPWgco} (also called DQ-modules in the terminology of Kashiwara and Schapira \cite{KSdq}). As introduced in \cite[\S 3.3]{BLPWgco}, we have a category $\mathcal{O}_{\!\operatorname{g}}$ attached to any quantization of this variety. In our case, we can construct these quantizations as noncommutative Hamiltonian reductions by the standard non-commutative moment map, sending $X\in \fg$ to $X_V$, the corresponding vector field on $V$. The category $\mathcal{O}_{\!\operatorname{g}}$ is a quotient of a category called $p\cOg$ defined in \cite[2.8]{Webqui}, which contains $L$ by \cite[Thm. 2.18]{Webqui} if the map $Q\to I'$ has image in $I'_{\phi}\subset I'$. We let $D_{\mathcal{O}_{\!\operatorname{g}}}$ and $D_{p\cOg}$ be the subcategories of the derived category generated by these abelian categories. 
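To fix ideas, here is a minimal sketch of the noncommutative moment map in the simplest case (the choice of example, and the notation $A_\lambda$, are ours): take $G=\C^*$ acting on $V=\C^n$ with all weights equal to $1$. The generator $x$ of $\fg\cong\C$ is sent to the Euler vector field, and the quantizations above are the corresponding quantum Hamiltonian reductions:
\begin{equation*}
x\mapsto x_V=\sum_{i=1}^n z_i\frac{\partial}{\partial z_i}\in D(V),\qquad A_\lambda:=\Big(D(V)\big/D(V)\,(x_V-\lambda)\Big)^{G},\qquad \lambda\in\C.
\end{equation*}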
For a given character $\xi$, we call a sign vector $\sgns$ {\bf unsteady} if there is a cocharacter $\nu$ with $\langle \xi,\nu\rangle \geq 0$, and $(T^*V)_\nu\supset (T^*V)_\sgns$. Let $\ideal\subset A$ be the ideal in $A$ generated by all morphisms factoring through the object ${\sgns}$ given by an unsteady sign vector. Since $\mathfrak{r}(L_\sgns)=0$ by \cite[Thm. 2.18]{Webqui}, we have an induced functor $\mathfrak{r}(L)\Lotimes_{A/\ideal}-\colon A/\ideal\operatorname{-dg-mod} \to \mathcal{O}_{\!\operatorname{g}}$. In order to have the strongest version of our results, we need to make some assumptions introduced in \cite[\S 2.6]{Webqui}. The strongest of these is: \begin{itemize} \item [$(\dagger)$] \hypertarget {dagger} {Each} simple module in $p\cOg$ is a summand of a shift of $L$, and every simple with unstable support is a summand of $L_\sgns$ for $\sgns$ unsteady. \end{itemize} Note that whether $(\dagger)$ holds depends on the choice of $Q$. If it holds for some $Q$, then it holds for $Q=I'_\phi$. This assumption holds for quiver varieties and smooth hypertoric varieties, as shown in \cite{Webqui}. A slightly weaker assumption is: \begin{itemize} \item [$(\dagger')$] \hypertarget {daggerprime} {Each} simple module $M$ in $p\cOg$ such that $\Ext^\bullet(L,M)\neq 0$ is a summand of a shift of $L$, and every such simple with unstable support is a summand of $L_\lift$ for $\lift\in B$ unsteady. \end{itemize} This holds for all hypertoric varieties, and seems likely to be the correct statement when $\fM_{H,\xi}$ is not smooth. We know of no situation where $(\dagger')$ fails, but it seems to be a quite difficult statement to prove; it is not simple to describe the condition of being in $p\cOg$ using only the geometry of $V/G$, since it is a microlocal property. As argued in \cite[Thm 2.25]{Webqui}, we have that: \begin{theorem}\label{th:D-ff} If $(\dagger')$ holds, then the functor $\mathfrak{r}(L)\Lotimes_{A/\ideal}-\colon A/\ideal\operatorname{-dg-mod} \to D_{\cOg}$ is fully faithful. If $(\dagger)$ holds, then it is an equivalence. \end{theorem} We can define a graded version of category $\cO$ by considering the right adjoint $\mathfrak{r}_!$. A {\bf grading} on an object $M\in\mathcal{O}_{\!\operatorname{g}}$ is a mixed Hodge structure on $\mathfrak{r}_!(M)$. Let $\mathcal{\tilde O}_{\!\operatorname{g}}$ be the category of graded objects in $\mathcal{O}_{\!\operatorname{g}}$ with morphisms given by $\Hom_{{\operatorname{MHM}}}(\mathfrak{r}_!(M),\mathfrak{r}_!(N))$. \begin{theorem}\label{th:O-ff} If $(\dagger')$ holds, then the functor $\mathfrak{r}(L)\otimes_{A/\ideal}-\colon \LPC(A/\ideal) \to \mathcal{\tilde O}_{\!\operatorname{g}}$ is fully faithful. If $(\dagger)$ holds, then it is an equivalence. \end{theorem} For our purposes, we wish to have a more user-friendly characterization of unsteadiness. \begin{lemma}\label{lem:unsteady-unbounded} If $C_\sgns'\neq \emptyset$, then the sign vector $\sgns$ is unsteady if $\xi$ does not attain a maximum on a bounded subset of $C_{\sgns}'$. \end{lemma} \begin{proof} The condition that $\langle \xi,\nu\rangle >0$ is equivalent to the claim that $\xi$ does not attain a unique maximum on any ray parallel to $\nu$, and $(T^*V)_\nu\supset (T^*V)_\sgns$ holds if and only if $C_{\sgns}$ contains a ray parallel to $\nu$. By a standard result of linear programming, we have a maximum on a bounded subset of the chamber if and only if we have a unique maximum on each ray in the chamber. 
\end{proof} Thus, if $Q=I_{\phi}'$, we could also define $\ideal$ as the ideal generated by projections to $L_\sgns$ with $\xi$ not attaining a maximum on a bounded subset of $C_{\sgns}'$. \section{The Coulomb side} \label{sec:coulomb} The Coulomb side of our correspondence is given by a remarkable recent construction of Braverman, Finkelberg and Nakajima \cite{NaCoulomb,BFN}. As we mentioned in the introduction, a more algebraically minded reader could ignore this geometric construction and take Theorem \ref{thm:BFN-pres} as a definition. We'll wish to modify this construction somewhat, so let us describe it in some detail. As before, let $G$ be a reductive algebraic group over $\C$, with $G((t)), G[[t]]$ its points over $\C((t)), \C[[t]]$. For a fixed Borel $B\subset G$, we let $\Iwahori$ be the associated Iwahori subgroup \[\Iwahori=\{g(t)\in G[[t]]\mid g(0)\in B\}\subset G[[t]].\] The {\bf affine flag variety} $\AF=G((t))/\Iwahori$ is just the quotient by this Iwahori. Let $V$ be the $G$-representation fixed in the previous section, and $U\subset V((t))$ a subspace invariant under $\Iwahori$. We equip $V((t))$ with a loop $\C^*$-action such that $vt^a$ has weight $a$. This is compatible with the standard loop action on $G((t))$. We'll be interested in the infinite-dimensional vector bundle on $\AF$ given by $\VB_U:=(G((t)) \times U)/\Iwahori$. Note that we have a natural $G((t))$-equivariant projection map $\VB_U\to V((t))$. \newcommand{\wtG}{\widetilde{G((t))}} \begin{definition} The flag BFN space is the fiber product $\VB_{V[[t]]}\times_{V((t))}\VB_{V[[t]]}$. \end{definition} We'll consider this as a set of triples $\VB_{V[[t]]}\times_{V((t))}\VB_{V[[t]]}\subset V((t))\times \AF \times \AF$. This space has a natural action of $G((t))$ by the diagonal action, as well as an action of $H$ and a loop action of $\mathbb{C}^*$ induced by that on $V((t))$ and $G((t))$. Let $\wtG$ be the subgroup of $\tilde{G}((t))\rtimes \mathbb{C}^*$ generated by $G((t))$, and the image of $\tilde{G}\hookrightarrow \tilde{G}\rtimes \C^*$ included via the identity times $\nu$. We'll want to consider the equivariant homology $H_*^{BM, \wtG}(\VB_{V[[t]]}\times_{V((t))}\VB_{V[[t]]})$. Defining this properly is a finicky technical issue, since the space $\VB_{V[[t]]}\times_{V((t))}\VB_{V[[t]]}$ can be thought of as a union of affine spaces which are both infinite dimensional and infinite codimensional, making it hard to define their degree in homology. First, we note that it is technically more convenient to consider the space \[{}_{V[[t]]}\VB_{V[[t]]}=\left\{(g,v(t))\in G((t))\times V[[t]]\mid g\cdot v(t)\in V[[t]]\right\}/\Iwahori.\] Basic properties of equivariant homology lead us to expect that \[H_*^{BM, \wtG}(\VB_{V[[t]]}\times_{V((t))}\VB_{V[[t]]}) \cong H_*^{BM,\tilde{T}}({}_{V[[t]]}\VB_{V[[t]]});\] we will use this as a definition of the left hand side. The preimage in ${}_{V[[t]]}\VB_{V[[t]]}$ of a Schubert cell in $\AF$ is a cofinite dimensional affine subbundle of $V((t))$; thus, using both the dimension of the Schubert cell, and the codimension of the affine bundle, we can make sense of the difference between the dimensions of these cells. With a bit more work, this allows us to make precise the notion of this homology, as in \cite[\S 2(ii)]{BFN}. For our purposes, we can use their construction as a black-box, only knowing that basic properties of pushforward and pullback operate as expected. 
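As a sanity check on this bookkeeping, consider a rank-one sketch (example ours): $G=T=\C^*$ acting on $V=\C$ with weight $1$. The fiber of ${}_{V[[t]]}\VB_{V[[t]]}$ over the coset $t^nT[[t]]\in T((t))/T[[t]]$ is
\begin{equation*}
\{v(t)\in V[[t]]\mid t^n\cdot v(t)\in V[[t]]\}=V[[t]]\cap t^{-n}V[[t]]=t^{\max(-n,0)}\C[[t]],
\end{equation*}
an affine subspace of codimension $\max(-n,0)$ in $V[[t]]$. Here the Schubert cells are points, so this codimension is the only input into the renormalized degree, and the difference of degrees between any two components is finite, which is all the construction requires.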
\begin{definition} The BFN Steinberg algebra $\EuScript{A}$ is the equivariant Borel-Moore homology $H_*^{BM, \wtG}(\VB_{V[[t]]}\times_{V((t))}\VB_{V[[t]]})$. \end{definition} An important special case of this algebra has also been considered in \cite[\S 4]{BEF}, when the representation is of quiver type (as discussed in Section \ref{sec:examples}). As usual, we let $h$ be the equivariant parameter corresponding to the character $\nu$. Note that this algebra contains a copy of $\Cth=S[h]\cong \C[\tilde{\ft}]$, the coordinate ring of $\tilde{\ft}$, embedded as $H_*^{BM,\widetilde{G((t))}}(\VB_{V[[t]]})\cong H_*^{\tilde{I}}(*)$. The algebra $\EuScript{A}$ also possesses a natural action on this cohomology ring. The original BFN algebra $\EuScript{A}^{\operatorname{sph}}$ is defined in essentially the same way, using $\EuScript{Y}_{V[[t]]}:=(G((t)) \times V[[t]])/G[[t]]$. Pullback by the natural map $\VB_{V[[t]]}\to \EuScript{Y}_{V[[t]]}$ defines a homomorphism $\EuScript{A}^{\operatorname{sph}}\to \EuScript{A}$. \begin{theorem}\label{th:Morita} The algebras $\EuScript{A}^{\operatorname{sph}}$ and $\EuScript{A}$ are Morita equivalent. In fact, the latter is a matrix algebra over the former of rank $\# W$. \end{theorem} \begin{proof} This is a standard result that holds whenever we have a fiber bundle $X\to Y$ such that the pushforward of $\K_X$ to $Y$ is a sum of constant sheaves, together with any map $Y\to Z$: the convolution algebra $H^*(X\times_Z X)$ is a matrix algebra over $H^*(Y\times_Z Y)$ with rank given by the sum of Betti numbers of the fiber. We have a natural homomorphism $H_*^{BM, \wtG}(\VB_{V[[t]]}\times_{\EuScript{Y}_{V[[t]]}}\VB_{V[[t]]})\to \EuScript{A}$. The map $\VB_{V[[t]]}\to \EuScript{Y}_{V[[t]]}$ is a fiber bundle with fiber $G/B$, and equivariance shows that the pushforward is a sum of constant sheaves. Thus, the former convolution algebra is a matrix algebra over $H_*^{BM, \wtG}(\EuScript{Y}_{V[[t]]})\cong \Cth$ of rank $\# W$. The image of any primitive idempotent in $H_*^{BM, \wtG}(\VB_{V[[t]]}\times_{\EuScript{Y}_{V[[t]]}}\VB_{V[[t]]})$ gives an idempotent $e\in \EuScript{A}$ such that $\EuScript{A}e \EuScript{A}=\EuScript{A}$ and $\EuScript{A}^{\operatorname{sph}} \cong e\EuScript{A}e$; thus these algebras are Morita equivalent. \end{proof} Note that $\EuScript{A}$ contains as a subalgebra $\EuScript{A}_\ab$, the BFN Steinberg algebra for the subgroup $\widetilde{T((t))}$. If we identify the Steinberg algebra with the homology $\EuScript{A} \cong H_*^{BM, \tilde{T}}\left({}_{V[[t]]}\VB_{V[[t]]}\right)$, then $\EuScript{A}_\ab$ is the image of the pushforward $\pi_{\ab}$ from \[\EuScript{A}_{\ab} \cong H_*^{BM, \tilde{T}}\left(\left\{(g,v(t))\in T((t))\times V[[t]]\mid g\cdot v(t)\in V[[t]]\right\}/T[[t]]\right).\] \excise{ Let $\beta_i$ be the product of the weights of $T\times \C^*$ on $V[[t]]/(V[[t]]\cap s_i^{-1} V[[t]])$. For $q\in \ft_\Z$, let $\beta_q$ be the product of the weights of $T\times \C^*$ on $V[[t]]/(V[[t]]\cap t^q V[[t]])$. \begin{definition} Let $\gamma^{(n)}=(\gamma+1)\cdots (\gamma+n)$ be the rising factorial; by convention, if $n\leq 0$, then we have that $\gamma^{(n)}=1$. We let $\gamma\circ j$ be $\langle\al_j^\vee,\gamma\rangle$ times the weight of the loop $\C^*$ on $\al_j$. 
\end{definition} Note that: \begin{lemma} \[\beta_j=\prod_{i=1}^d\gamma^{(\gamma\circ j)}_i \qquad \beta_q=\prod_{i=1}^d\gamma^{(\langle \gamma,q\rangle)}_i.\] \end{lemma} Given a reflection $s_i$, we let \[X_{s_i}={\left\{(gv(t),gs_iI ,gI)\in \VB_{V[[t]]}\times_{V((t))}\VB_{V[[t]]} \mid g\in G((t)), v(t)\in V[[t]]\cap s_i^{-1} V[[t]] \right\}}.\] We consider the elements of $\EuScript{A}$ defined by the homology class $u_i=[ \overline{X_{s_i}} ].$ On the other hand, if $q\in \ft_\Z$, then we let \[r_q=\pi_{\ab}([\{(t^q,v(t))\mid v(t)\in V[[t]]\cap t^q V[[t]]\}]).\] \begin{proposition} These elements act in the representation on $H_*^{I}(*)$ by \[r_q\cdot f= \beta_q(q\cdot f) \qquad u_i\cdot t=\partial_i(\beta_i t).\] \end{proposition}} \subsection{The extended category} \label{sec:extended-category} While the Coulomb branch is our focus, it is easier to study it in a larger context: there is an extended category in which it appears as the endomorphisms of one object. Given any $\acham\in \ft_\ep$, we can consider the induced action on the vector space $V((t))$. \begin{itemize} \item Let $\Iwahori_\acham$ be the subgroup whose Lie algebra is the sum of positive weight spaces for the adjoint action of $\acham$. This only depends on the alcove in which $\acham$ lies, i.e. which chamber of the arrangement given by the hyperplanes $\{\alpha(\acham)=n\mid n\in \Z\}$ contains $\acham$; the subgroup $\Iwahori_\acham$ is an Iwahori if $\acham$ does not lie on any of these hyperplanes. \item Let $U_\acham$ be the subspace of elements of non-negative weight under $\acham$. This subspace is closed under the action of $\Iwahori_\acham$. This only depends on the vector $\Ba$ such that $\acham\in \AC_{\Ba}'$, as defined in \eqref{eq:aff-cham}. \end{itemize} We call $\acham$ generic if it does not lie on the hyperplanes $\{\varphi_i(\acham)=n+\ep_i\mid n\in \Z\}$ or $\{\alpha(\acham)=n\mid n\in \Z\}$; we'll call these hyperplanes the {\bf unrolled hyperplane arrangement}. 
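In a rank-one sketch (our own illustration, following the conventions of the example below): for $G=\C^*$ acting on $V=\C$ with weight $\varphi_1$ and shift $\ep_1$, the subspace attached to a generic $\acham$ is
\begin{equation*}
U_\acham=t^{a}\C[[t]]\subset \C((t)),\qquad\text{where } -a+\ep_1<\varphi_1(\acham)<-a+1+\ep_1,
\end{equation*}
so crossing one of the unrolled hyperplanes $\varphi_1(\acham)=n+\ep_1$ shifts $U_\acham$ by one step in the $t$-adic filtration.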
\begin{example} We illustrate this arrangement in the case of our running example: \[\tikz[very thick,scale=1.4]{ \fill [gray!40!white] (0.16,0.16) -- (0.16,-.84) -- (-.84,-.84) -- (-.84,.16)--cycle; \draw (1.16,2.5)-- (1.16,-2.5) node[scale=.5, at start,above]{$\varphi_1=5/3$}; \draw (-1.16,2.5)-- (-1.16,-2.5) node[scale=.5, at end,below]{$\varphi_3=-2/3$}; \draw (2.5,1.16)-- (-2.5,1.16) node[scale=.5, at start,right]{$\varphi_2=5/3$}; \draw (2.5,-1.16)-- (-2.5,-1.16) node[scale=.5, at end,left]{$\varphi_4=-2/3$}; \draw (2.16,2.5)-- (2.16,-2.5) node[scale=.5, at start,above]{$\varphi_1=8/3$}; \draw (-.16,2.5)-- (-.16,-2.5) node[scale=.5, at end,below]{$\varphi_3=1/3$}; \draw (2.5,2.16)-- (-2.5,2.16) node[scale=.5, at start,right]{$\varphi_2=8/3$}; \draw (2.5,-.16)-- (-2.5,-.16) node[scale=.5, at end,left]{$\varphi_4=1/3$}; \draw (0.16,2.5)-- (0.16,-2.5) node[scale=.5, at start,above]{$\varphi_1=2/3$}; \draw (.84,2.5)-- (.84,-2.5) node[scale=.5, at end,below]{$\varphi_3=4/3$}; \draw (2.5,.16)-- (-2.5,.16) node[scale=.5, at start,right]{$\varphi_2=2/3$}; \draw (2.5,.84)-- (-2.5,.84) node[scale=.5, at end,left]{$\varphi_4=4/3$}; \draw (-.84,2.5)-- (-.84,-2.5) node[scale=.5, at start,above]{$\varphi_1=-1/3$}; \draw (1.84,2.5)-- (1.84,-2.5) node[scale=.5, at end,below]{$\varphi_3=7/3$}; \draw (2.5,-.84)-- (-2.5,-.84) node[scale=.5, at start,right]{$\varphi_2=-1/3$}; \draw (2.5,1.84)-- (-2.5,1.84) node[scale=.5, at end,left]{$\varphi_4=7/3$}; \draw (-1.84,2.5)-- (-1.84,-2.5) node[scale=.5, at start,above]{$\varphi_1=-4/3$}; \draw (-2.16,2.5)-- (-2.16,-2.5) node[scale=.5, at end,below]{$\varphi_3=-5/3$}; \draw (2.5,-1.84)-- (-2.5,-1.84) node[scale=.5, at start,right]{$\varphi_2=-4/3$}; \draw (2.5,-2.16)-- (-2.5,-2.16) node[scale=.5, at end,left]{$\varphi_4=-5/3$}; \draw[dotted] (-2.5,-2.5) -- node[scale=.5, right,at end]{$\alpha=0$}(2.5,2.5); \draw[dotted] (-2.5,-1.5) -- node[scale=.5, below left,at start]{$\alpha=1$}(1.5,2.5); \draw[dotted] (-1.5,-2.5) -- node[scale=.5, above right,at end]{$\alpha=-1$}(2.5,1.5); \draw[dotted] (-2.5,-0.5) -- node[scale=.5, below left,at start]{$\alpha=2$}(0.5,2.5); \draw[dotted] (-0.5,-2.5) -- node[scale=.5, above right,at end]{$\alpha=-2$}(2.5,0.5); \draw[dotted] (-2.5,.5) -- node[scale=.5, below left,at start]{$\alpha=3$}(-.5,2.5); \draw[dotted] (.5,-2.5) -- node[scale=.5, above right,at end]{$\alpha=-3$}(2.5,-.5); \draw[dotted] (-2.5,1.5) -- node[scale=.5, below left,at start]{$\alpha=4$}(-1.5,2.5); \draw[dotted] (1.5,-2.5) -- node[scale=.5, above right,at end]{$\alpha=-4$}(2.5,-1.5); }\] The spaces $U_\chi$ which arise will be of the form: \[U_\chi=\{(f_1t^{a},f_2t^{b},f_3t^{a-\delta_1},f_4t^{b-\delta_2})\mid f_i\in \C[[t]]\}\] where $a,b\in\Z$ and $\delta_i\in \{0,1\}$ are uniquely characterized by the inequalities: \begin{align*} -a+\ep_1&<\chi(\gamma_1)<-a+1+\ep_1& -b+\ep_2&<\chi(\gamma_2)<-b+1+\ep_2\\ -a+\delta_1+\ep_3&<\chi(\gamma_1)<-a+\delta_1+1+\ep_3& -b+\delta_2+\ep_4&<\chi(\gamma_2)<-b+\delta_2+1+\ep_4 \end{align*} Tracing these definitions through, we see that: \begin{itemize} \item The case $\delta_1=\delta_2=0$ corresponds to the larger squares in the diagram above. \item The case $\delta_1=\delta_2=1$ corresponds to the smaller squares. \item The case $\delta_1=1,\delta_2=0$ corresponds to the tall rectangles. \item The case $\delta_1=0,\delta_2=1$ corresponds to the fat rectangles. \end{itemize} The region where $a=b=0$ is shaded in the diagram above. 
\end{example} For any generic $\acham\in \ft_\ep$, we can consider $\VB_{\acham}:=\VB_{U_\acham}:=G((t))\times_{\Iwahori_{\acham}}U_{\acham}$, the associated vector bundle. The space $ \ft_\ep$ has a natural adjoint action of $\widehat {W}=N_{\wtG}(T)/T$, and of course, $U_{w\cdot \acham}=w\cdot U_{\acham}$. \excise{The ring $\Cft$ carries two actions of the extended affine Weyl group $\widehat{W}=\ft_\Z\rtimes W$: the action induced by the identification of $H_*^{I\times \C^*}(*)$ with $h$ specialized at $1$, where $\ft_\Z$ acts in the natural way by translation, and the action factoring through $W$ where $\ft_\Z$ acts trivially. The latter action will be denoted by $w\star t$ to avoid confusion.} We let \[{}_{\acham}\VB_{\acham'}=\left\{(g,v(t))\in G((t))\times U_{\acham}\mid g\cdot v(t)\in U_{\acham'}\right\}/\Iwahori_{\acham}.\] \begin{definition} Let the {\bf extended BFN category} $\mathscr{B}$ be the category whose objects are generic cocharacters $\acham\in \ft_\ep$, and such that \begin{equation*} \Hom(\acham,\acham')=H_*^{BM, \wtG}(\VB_{\acham}\times_{V((t))}\VB_{\acham'})\\ \cong H_*^{BM, \tilde T}\left({}_{\acham}\VB_{\acham'}\right). \end{equation*} \end{definition} As before, this homology is defined using the techniques in \cite[\S 2(ii)]{BFN}. Note that any lift of $w\in \widehat{W}$ to $G((t))$ induces an isomorphism $\VB_{\acham}\cong \VB_{w\cdot \acham}$ given by $(g,v(t))\mapsto (wgw^{-1},w\cdot v(t))$. We denote the homology class of the graph of this isomorphism by $y_w$ (note that this class is independent of the choice of lift). For $\acham=0$, we have $U_{0}=V[[t]]$, and thus $\EuScript{A}=\Hom_{{\mathscr{B}}}(0,0)$. Thus, this extended category encodes the structure of $\EuScript{A}$. Furthermore, the category of representations of $\EuScript{A}$ is closely related to that of $\mathscr{B}$. Let $M$ be a representation of $\mathscr{B}$, that is, a functor from $\mathscr{B}$ to the category of $\K$-vector spaces. The vector space $N:=M(0)$ has an induced $\EuScript{A}$-module structure. Since $\Hom(\acham,0)$ and $\Hom(0,\acham)$ are finitely generated as $\EuScript{A}$-modules, this functor preserves finite generation, and is in fact a quotient functor, with left adjoint $\mathscr{B}\otimes_{\EuScript{A}}-$ given by \[(\mathscr{B}\otimes_{\EuScript{A}}N)(\acham):=\Hom(\acham,0)\otimes_{\EuScript{A}}N.\] Note that there is a natural subcategory $\mathscr{B}_{\ab}$ (with the same objects), where the morphisms are given by \[\Hom_{\ab}(\acham,\acham') \cong H_*^{BM, T\times \C^*}\left(\left\{(g,v(t))\in T((t))\times U_{\acham}\mid g\cdot v(t)\in U_{\acham'}\right\}/T[[t]]\right).\] The inclusion is induced by pushforward in homology. \subsection{A presentation of the extended category} \label{sec:pres-extend-categ} Let \[r(\acham',\acham)=[\{(e,v(t)) \in T((t))\times U_{\acham}\mid v(t)\in U_{\acham'}\}/T[[t]]]\in \Hom_{\ab}(\acham,\acham').\] If $\Iwahori_{ \acham}=\Iwahori_{\acham'}$ (that is, the chambers are in the same alcove), this is sent to the class in $\Hom(\acham,\acham')$ of the space \[Y(\acham',\acham)=\{(e,v(t)) \in G((t))\times U_{\acham}\mid v(t)\in U_{\acham'}\}/\Iwahori_{\acham},\] but this is not the case for $\acham,\acham'$ in different alcoves. We also have a morphism $y_{\zeta}\in \Hom_{\ab}(\acham,\acham+\zeta)$ for $\zeta\in \ft_\Z$ (thought of as a translation in the extended affine Weyl group). 
Let $\Phi(\acham,\acham')$ be the product of the terms $\varphi^+_i-nh$ over pairs $(i,n) \in [1,d]\times \Z$ such that the inequalities \[\varphi_i(\acham)>n+\ep_i \qquad \varphi_i(\acham')<n+\ep_i \] hold. Let $\Phi(\acham,\acham',\acham'')$ be the product of the terms $\varphi^+_i-nh$ over pairs $(i,n)\in [1,d]\times \Z$ such that the inequalities \[\varphi_i(\acham'')>n+\ep_i \qquad \varphi_i(\acham')<n+\ep_i \qquad \varphi_i(\acham)>n+\ep_i \] or the inequalities \[\varphi_i(\acham'')<n+\ep_i \qquad \varphi_i(\acham')>n+\ep_i \qquad \varphi_i(\acham)<n+\ep_i \] hold. These terms correspond to the hyperplanes that a path $\acham\to\acham'\to \acham''$ must cross twice. Note that if $h$ is specialized to $0$, then we just get each weight $\varphi_i$ raised to a power given by the number of corresponding unrolled hyperplanes crossed. \begin{proposition} The morphisms $\Hom_{\ab}(\acham,\acham')$ have a basis over $\Cth$ of the form $y_{\zeta} \cdot r(\acham'-\zeta,\acham)$ for $\zeta\in \ft_\Z$, with the relations in the category $\mathscr{B}_{\ab}$ generated by: \newseq\begin{align*}\subeqn\label{eq:coweight1} y_{\zeta}\cdot y_{\zeta'}&=y_{\zeta+\zeta'}\\ \subeqn\label{eq:conjugate1} y_{\zeta}\cdot r(\acham',\acham)\cdot y_{-\zeta}&=r(\acham'+\zeta,\acham+\zeta) \\ \subeqn\label{eq:weyl1} y_{\zeta}\cdot\mu\cdot y_{-\zeta}&=\mu+\zeta \\ \subeqn\label{eq:wall-cross1} r(\acham''',\acham'') r(\acham',\acham)&= \delta_{\acham'',\acham'}\Phi(\acham''',\acham',\acham) r(\acham''',\acham) \end{align*} \end{proposition} \begin{proof} This is just a restatement of \cite[Sec. 4(i--iii)]{BFN}. \end{proof} If we draw $r(\acham',\acham)$ as a straight-line path in $\ft$, and thus compositions of these elements as piecewise linear paths, with the unrolled arrangement drawn in, we can visualize the relation (\ref{eq:wall-cross1}) as saying that when we remove two crossings of the hyperplane $\varphi_i(\acham)=n+\ep_i$ from the path, we do so at the cost of multiplying by $\varphi_i^+-nh$. We can thus represent elements of $\Hom_{\ab}(\acham,\acham)$ as paths which start at $\acham$ and go to any other chamber of the form $\acham-\zeta$ (we implicitly follow these with translation $y_\zeta$). Composition of two paths $p$ and $q$ is thus accomplished by translating $p$ so its start matches the end of $q$, and then straightening using the relation (\ref{eq:wall-cross1}). \begin{example} In our running example, let us fix $\acham=0$. We let $\xi_1,\xi_2$ be the usual coordinate cocharacters of the diagonal $2\times 2$ matrices. 
The algebra $\EuScript{A}_{\ab}=\Hom_{\ab}(0,0)$ is generated over $\Cth$ by \[w_1= y_{-\xi_1} \cdot r(\xi_1,0)\qquad w_2= y_{-\xi_2} \cdot r(\xi_2,0)\qquad z_1= y_{\xi_1} \cdot r(-\xi_1,0)\qquad z_2= y_{\xi_2} \cdot r(-\xi_2,0)\] with the relations \[[z_1,w_1]=[z_1,w_2]=[z_2,w_1]=[z_2,w_2]=0\] \[ z_1w_1=\gamma_1(\gamma_1-2h)\qquad w_1z_1=(\gamma_1+h)(\gamma_1-h)\] \[ z_2w_2=\gamma_2(\gamma_2-2h)\qquad w_2z_2=(\gamma_2+h)(\gamma_2-h)\] since \[\varphi_1^+=\gamma_1+h\quad \varphi_2^+=\gamma_2+h\quad \varphi_3^+=\gamma_1-h\quad \varphi_4^+=\gamma_2-h.\] In terms of our path description: \[\tikz[very thick,scale=1.6]{ \draw (1.16,1.5)-- (1.16,-2.5) node[scale=.5, at start,above]{$\varphi_1=5/3$}; \draw (-1.16,1.5)-- (-1.16,-2.5) node[scale=.5, at end,below]{$\varphi_3=-2/3$}; \draw (1.5,1.16)-- (-2.5,1.16) node[scale=.5, at start,right]{$\varphi_2=5/3$}; \draw (1.5,-1.16)-- (-2.5,-1.16) node[scale=.5, at end,left]{$\varphi_4=-2/3$}; \draw (-.16,1.5)-- (-.16,-2.5) node[scale=.5, at end,below]{$\varphi_3=1/3$};\draw (1.5,-.16)-- (-2.5,-.16) node[scale=.5, at end,left]{$\varphi_4=1/3$}; \draw (0.16,1.5)-- (0.16,-2.5) node[scale=.5, at start,above]{$\varphi_1=2/3$}; \draw (.84,1.5)-- (.84,-2.5) node[scale=.5, at end,below]{$\varphi_3=4/3$}; \draw (1.5,.16)-- (-2.5,.16) node[scale=.5, at start,right]{$\varphi_2=2/3$}; \draw (1.5,.84)-- (-2.5,.84) node[scale=.5, at end,left]{$\varphi_4=4/3$}; \draw (-.84,1.5)-- (-.84,-2.5) node[scale=.5, at start,above]{$\varphi_1=-1/3$}; \draw (1.5,-.84)-- (-2.5,-.84) node[scale=.5, at start,right]{$\varphi_2=-1/3$}; \draw (-1.84,1.5)-- (-1.84,-2.5) node[scale=.5, at start,above]{$\varphi_1=-4/3$}; \draw (-2.16,1.5)-- (-2.16,-2.5) node[scale=.5, at end,below]{$\varphi_3=-5/3$}; \draw (1.5,-1.84)-- (-2.5,-1.84) node[scale=.5, at start,right]{$\varphi_2=-4/3$}; \draw (1.5,-2.16)-- (-2.5,-2.16) node[scale=.5, at end,left]{$\varphi_4=-5/3$}; \draw[dashed,->] (-.5,-.5) --(.4,-.5) node[ at end,right]{$w_2$}; \draw[dashed,->] (-.5,-.5) --(-.5,.4) node[ at end,above]{$w_1$}; \draw[dashed,->] (-.5,-.5) --(-1.4,-.5) node[ at end,left]{$z_2$}; \draw[dashed,->] (-.5,-.5) --(-.5,-1.4) node[ at end,below]{$z_1$}; }\] \end{example} Now, we turn to generalizing this presentation to the nonabelian case. We can easily check that the relations (\ref{eq:coweight1}--\ref{eq:weyl1}) hold in $\mathscr{B}$ for all elements of the extended affine Weyl group: \newseq\begin{align*} \subeqn\label{eq:coweight2} y_w\cdot y_{w'}&=y_{ww'}\\ \subeqn\label{eq:conjugate2} y_w r(\acham',\acham) y_w^{-1}&=r(w\cdot \acham',w\cdot\acham) \\ \subeqn\label{eq:weyl2} y_w \mu y_w^{-1}&=w\cdot \mu \end{align*} Finally, if $\al(\acham)=0$ for some affine root $\al$ but no other weights or roots vanish, then we can make this generic in two different ways: $\acham_\pm:=\acham\pm \ep \al^\vee$. Let $\Iwahori_{\pm}$ be the corresponding Iwahoris. Note that for $\ep\ll 1$, we have $U_\acham=U_{\acham_\pm }$. Let \[X_{\al}(\acham)={\left\{(gv(t),g \Iwahori_{\pm} ,g \Iwahori_{\mp})\in \VB_{\acham_\pm}\times_{V((t))}\VB_{\acham_\mp} \mid g\in G((t)), v(t)\in U_\acham\right\}}.\] Let $\Bpsi_\al(\acham)=[\overline{ X_{\al}(\acham)}]\in \Hom(\acham_\pm,\acham_\mp)$. \begin{theorem}\label{thm:BFN-pres} The morphisms in the extended BFN category are generated by \begin{enumerate} \item $y_w$ for $w\in \widehat{W}$, \item $r(\acham,\acham')$ for $\acham,\acham'\in \ft_\ep$ generic, \item the polynomials in $\Cth$, \item $\Bpsi_\al(\acham)$ for $\acham_\pm$ affine chambers adjacent across $\al(\acham)=n$. 
\end{enumerate} These act in the polynomial representation by: \newseq \begin{align*} \subeqn\label{eq:wact} wf&=w\cdot f\\ \subeqn\label{eq:ract} r(\acham,\acham') f&=\Phi(\acham,\acham') f\\ \subeqn\label{eq:muact} \mu f&=\mu\cdot f\\ \subeqn\label{eq:psiact} \Bpsi_\al f&=\partial_{\al}(f) \end{align*} The relations between these operators are given by (\ref{eq:coweight1}--\ref{eq:weyl2}) and the further relations \newseq \begin{align*} \subeqn\label{eq:psi2} \Bpsi_{\al}^2&=0\\ \subeqn\label{eq:psi} \underbrace{\Bpsi_{\al}\Bpsi_{s_\al \beta}\Bpsi_{s_\al s_{\beta}\al}\cdots}_{m_{\al\be}} &=\underbrace{\Bpsi_{\beta}\Bpsi_{s_{\beta}\al}\Bpsi_{s_{\beta}s_\al\beta}\cdots}_{m_{\al\be}}\\ \subeqn\label{eq:psiconjugate} w\Bpsi_{\al}w^{-1}&=\Bpsi_{w\cdot \al}\\ \subeqn\label{eq:psipoly} \Bpsi_\al \mu-(s_\al\cdot\mu)\Bpsi_\al &=\partial_{\al}(\mu) \end{align*} whenever these morphisms are well-defined, and finally, if $\acham'_\pm$ and $\acham''_\pm$ are two pairs of chambers opposite across $\al(\acham)=0$ which both lie on minimal length paths from $\acham$ to $s_\al\acham$, then \begin{multline*} \subeqn\label{eq:triple} r(s_\al\acham,\acham'_\pm)\Bpsi_{\al} r(\acham'_\mp,\acham) -r(s_\al\acham,\acham''_\pm)\Bpsi_{\al} r(\acham''_\mp,\acham)\\=\left(\Phi(s_\al\acham,\acham'_\pm)\partial_\al\big(\Phi(\acham'_\mp,\acham)\big)-\Phi(s_\al\acham,\acham''_\pm)\partial_\al\big(\Phi(\acham''_\mp,\acham)\big)\right)s_\al. \end{multline*} \end{theorem} Let $\acham,\acham'$ be two affine chambers, and let $(\acham^{(i)}_\pm,\beta_i)$ be the list of Coxeter hyperplanes crossed by a fixed minimal length path $\acham\to \acham'$, together with the corresponding pairs of opposite chambers. Let \[\tilde{r}(\acham',\acham)=r(\acham',\acham_\pm^{(k)})\Bpsi_{\beta_k}r(\acham_\mp^{(k)},\acham_\pm^{(k-1)})\Bpsi_{\beta_{k-1}}\cdots \Bpsi_{\beta_{1}}r(\acham_\mp^{(1)},\acham).\] As in the abelian case, we can represent morphisms in our category by paths, but now we have to insert Demazure operators every time that we cross an unrolled root hyperplane. \begin{proof} The verification of the action is straightforward using the formula \cite[A.2]{BFNplus}. Given the representation and its faithfulness, the reader can readily verify that the relations (\ref{eq:coweight1}--\ref{eq:triple}) are satisfied. The most interesting of these relations is (\ref{eq:triple}), so let us verify this relation in more detail. The action of the LHS in the polynomial representation on a polynomial $f$ is, expanding with the Leibniz rule $\partial_\al(gf)=g\,\partial_\al(f)+\partial_\al(g)f^{s_\al}$: \begin{multline} \Phi(s_\al\acham,\acham'_\pm)\partial_{\al} (\Phi(\acham'_\mp,\acham)f) -\Phi(s_\al\acham,\acham''_\pm)\partial_{\al}(\Phi(\acham''_\mp,\acham)f)\\ =\Phi(s_\al\acham,\acham'_\pm)\Phi(\acham'_\mp,\acham)\partial_{\al}\label{eq:LHS} (f)+\Phi(s_\al\acham,\acham'_\pm)\partial_{\al}(\Phi(\acham'_\mp,\acham))f^{s_\al}\\-\Phi(s_\al\acham,\acham''_\pm)\Phi(\acham''_\mp,\acham)\partial_{\al}(f)-\Phi(s_\al\acham,\acham''_\pm)\partial_{\al}(\Phi(\acham''_\mp,\acham))f^{s_\al}. \end{multline} Since $\acham'$ and $\acham''$ are both on the minimal length paths, neither is separated from both $s_\al\acham$ and $\acham$ by any given unrolled hyperplane. Thus, we have that \[\Phi(s_\al\acham,\acham'_\pm)\Phi(\acham'_\mp,\acham)=\Phi(s_\al\acham,\acham''_\pm)\Phi(\acham''_\mp,\acham)=\Phi(s_\al\acham,\acham).\] It follows that the first positive and negative terms in \eqref{eq:LHS} cancel, and we obtain the RHS of (\ref{eq:triple}). This confirms the relation. 
Using the action of the elements of $\widehat{W}$ we can reduce to the case where $\acham$ and $\acham'$ are in the same alcove. The space $\Hom(\acham,\acham')$ has a filtration by the length of the relative position of the two affine flags. Let $\Hom^{\leq w}(\acham,\acham')$ be the homology classes supported on the pairs of relative distance $\leq w$. By basic algebraic topology, $\Hom^{\leq w}(\acham,\acham')/\Hom^{< w}(\acham,\acham')$ is a free module of rank $1$ over $\Cth$, since this space is isomorphic to the $\Iwahori$-equivariant Borel-Moore homology of an (infinite dimensional) affine space. We'll prove that \begin{itemize} \item [$(*)$] the $\Cth$-module $\Hom^{\leq w}(\acham,\acham')/\Hom^{< w}(\acham,\acham')$ is generated by the element $\tilde{r}(\acham',w\acham)w$. \end{itemize} The element $\tilde{r}(\acham',w\acham)w$ is the pushforward of the fundamental class by the map \begin{multline*} Y(\acham',\acham_\pm^{(k)})\times_{I_{\acham_\pm^{(k)}}}X({\beta_k})\times_{I_{\acham_\mp^{(k)}}}Y(\acham_\mp^{(k)},\acham_\pm^{(k-1)})\times_{I_{\acham_\pm^{(k-1)}}}X({\beta_{k-1}})\times_{I_{\acham_\mp^{(k-1)}}}\\ \cdots \times_{I_{\acham_\pm^{(1)}}}X({\beta_1})\times_{I_{\acham_\mp^{(1)}}}Y(\acham_\mp^{(1)},\acham)\to \VB_{\acham'}\times_{V((t))}\VB_{\acham}. \end{multline*} This map is an isomorphism on the set of affine flags of relative position $w$. Thus, these elements give a free basis of the associated graded for this filtration. This implies that they are a basis of the original module; in particular, the elements from the list above are generators. On the other hand, we can also easily show that the relations displayed are enough to bring any element into the form of a sum of elements $\tilde{r}(\acham',w\acham)w$. We can pull all elements of the Weyl group to the right using (\ref{eq:conjugate2}, \ref{eq:weyl2}, \ref{eq:psiconjugate}), all elements of $\Cth$ to the right using the relation (\ref{eq:psipoly}), and rewrite any crossing of a Coxeter hyperplane by $r(-,-)$ using the relation $r(\acham_\pm,\acham_\mp)=s_{\al}-\al\Bpsi_{w\cdot \al}$. This shows these relations suffice, since there can be no further relations between our basis. \end{proof} Let $\tilde{H}$ be the group generated by the actions on $V((t))$ of $G((t))$, $H$ and the loop rotation $\C^*$; let $T_H$ be the torus of the group generated by $G$ and $H$. Note that $\wtG\subset \tilde{H}$. \begin{definition} The deformed extended BFN category is the category with the same objects as $\mathscr{B}$ and \begin{equation*} \Hom(\acham,\acham')=H_*^{BM, \tilde{H}}(\VB_{\acham}\times_{V((t))}\VB_{\acham'})\\ \cong H_*^{BM, \tilde{T}_H}\left({}_{\acham}\VB_{\acham'}\right). \end{equation*} \end{definition} The results above, such as Theorem \ref{thm:BFN-pres}, carry over in a straightforward way to this category. The only difference is that we must interpret the products of weights $\Phi(-,-)$ as weights of $ \tilde{T}_H$, and rather than an action of $\Cth$, we have one of $\K[ \tilde{\ft}_H]$. This is naturally a subcategory inside the extended BFN category for the group $GT_H$ acting on $V$. It is the subcategory where we only consider cocharacters in $\ft_\epsilon$ as objects, and only allow ourselves to use $\ft_\Z$ in the extended affine Weyl group, rather than all of $(\ft_H)_\Z$. 
The category attached to $GT_H$ has a natural action of the dual torus $(T_H/T)^\vee$ on the morphisms between any two objects with $r(-,-),\K[\tilde{\ft}_H]$ and $\widehat{W}$ having weight 0, and the copy of $\K[T_H]$ having the obvious action. The classes of weight $\nu$ (which is a coweight of $T_H/T$) correspond to homology classes concentrated on the components of the affine flag variety whose corresponding loop has homotopy class mapping to $\nu$ under the map $\pi_1(\AF)\to X(T_H/T)$. This shows that: \begin{lemma} The deformed extended BFN category is equivalent to the subcategory of $\mathscr{B}(GT_H)$ where we only allow $(T_H/T)^\vee$-invariant morphisms and objects lying in $\ft_\epsilon$. \end{lemma} Of course, if we instead fix $\nu\in X(T_H/T)$ and look only at morphisms with this weight, we obtain a bimodule over the extended BFN category, which we denote $\mathscr{T}(\nu)$. The cocharacter lattice $ X(T_H/T)$ acts by pointwise multiplication on the space of flavors valued in $\tilde{T}_H$, up to different choices of lift. Thus, given one choice of flavor $\phi$ and the associated $\ft_1$, we find that $\ft_1+\nu$ is the subspace associated to the flavor $\phi+\bar \nu$ (where $\bar \nu$ is any lift of $\nu$ to $X(T_H)$). If we think of a weight $\mu$ of $T_H/T$ as a morphism in the extended BFN category (i.e. as an equivariant cohomology class), then its left action on $\mathscr{T}(\nu)$ is equal to the action of $\mu+\langle \nu,\mu\rangle$ on the right. Since the elements of the ideal $I(\ft_1)$ cutting out $\ft_1$ are of this form, we have that the image of the right action of $I(\ft_1)$ is the same as the image of the left action of $I(\ft_1+\nu)$. \begin{definition} The $\mathscr{B}_{\phi+\nu}\operatorname{-}\mathscr{B}_{\phi}$ bimodule ${}_{\phi+\nu}\mathscr{T}{}_{\phi}$ is the quotient of $\mathscr{T}(\nu)$ by $I(\ft_1)$ acting on the right or $I(\ft_1+\nu)$ acting on the left. Let ${}_{\phi+\nu}T_{\phi}={}_{\phi+\nu}\mathscr{T}{}_{\phi}(0,0)$ be the corresponding bimodule over $\mathscr{A}_{\phi+\nu}$ and $\mathscr{A}_{\phi}$. \end{definition} \subsection{Representation theory} \label{sec:repr-theory} Throughout this section, we specialize $h=1$; we let $\Cft=\Cth/(h-1)\cong \C[\ft_1]$. We call a $\mathscr{B}$-module $M$ (resp. $\mathscr{A}$-module $N$) a {\bf weight module} if for every $\acham$, we have that $M(\acham)$ (resp. $N$) is locally finite as a module over $\Cft$ with finite dimensional generalized weight spaces. Obviously, if $M$ is a weight module, then $N=M(0)$ is as well. The adjoint $ \mathscr{B}\otimes_{\mathscr{A}}-$ also sends weight modules to weight modules, since the adjoint action of $\Cft$ on $\Hom(\acham,0)$ is semi-simple with eigenspaces finitely generated over $\Cft$. For each $\upsilon\in \ft_1,\acham\in \ft_\ep$, we can consider the functor $W_{\upsilon,\acham}\colon \mathscr{B}\mmod_W\to \K\mmod$ defined by \begin{equation} \label{eq:W-def} W_{\upsilon,\acham}=\{m\in M(\acham)\mid \mathfrak{m}_{\upsilon}^Nm=0 \text{ for } N\gg 0\}, \end{equation} where $\mathfrak{m}_{\upsilon}$ for $\upsilon\in \ft_1$ denotes the corresponding maximal ideal in $\Cft$. These functors are exact and prorepresentable. If we let $\EuScript{A}_{\acham}:=\Hom(\acham,\acham)$, then they are represented by the projective limit \[P_{\upsilon,\acham}:=\varprojlim\mathscr{B}\otimes_{\EuScript{A}_{\acham}}\left(\EuScript{A}_{\acham}\big/\EuScript{A}_{\acham}\mathfrak{m}_{\upsilon}^N\right)\] as $N\to \infty$. 
Thus, as in \cite{MVdB}, we can present the category of weight modules as modules over $\End(\oplus P_{\upsilon,\acham})$. If we restrict the weights we allow in our modules, then the result will be the representations of a subquotient of this ring. The morphisms $\Hom(P_{\upsilon,\acham},P_{\upsilon',\acham'})$ are relatively easy to understand: up to completion, such a morphism is given by right multiplication by a morphism $f\in \Hom(\acham', \acham)$ such that $\mathfrak{m}_{\upsilon'}f=f\mathfrak{m}_{\upsilon}$. The space of such morphisms is spanned by $w\cdot r(w^{-1}\acham,\acham')$ for $w\in \widehat{W}$ satisfying $w\cdot \upsilon'=\upsilon$. In particular: \begin{itemize} \item If $\upsilon\notin \widehat{W}\cdot \upsilon'$, then $\Hom(P_{\upsilon,\acham},P_{\upsilon',\acham'})=0$. \item If $\upsilon\in \widehat{W}\cdot \upsilon'$, then $\Hom(P_{\upsilon,\acham},P_{\upsilon',\acham'})$ has rank equal to $\#\operatorname{Stab}_{\widehat{W}}(\upsilon)$ over the completion of $\Cft$ at $\upsilon$. \end{itemize} \begin{definition} Let $\widehat{\mathscr{B}}$ be the category whose objects are the set $\EuScript{J}$ of pairs of generic $\acham\in \ft_\ep$ and any $\upsilon\in \ft_1$, such that \[\Hom_{\widehat{\mathscr{B}}}((\acham',\upsilon'),(\acham,\upsilon))=\varprojlim \Hom_{{\mathscr{B}}}(\acham',\acham)/(\mathfrak{m}_{\upsilon}^N \Hom_{{\mathscr{B}}}(\acham',\acham)+\Hom_{{\mathscr{B}}}(\acham',\acham)\mathfrak{m}_{\upsilon'}^N).\] We let $\widehat{\mathscr{B}}_{\upsilon'}$ be the subcategory where we only allow objects with $\upsilon\in \upsilon'+\ft_\Z$. \end{definition} It might seem more natural to consider the larger category where we allow $\upsilon\in \widehat{W}\cdot \upsilon'$, but the resulting categories are equivalent, since $(\acham, \upsilon)\cong (w\acham,w\upsilon)$ for $w\in W$ in the finite Weyl group. The results above establish: \begin{lemma} The category of weight modules over $\mathscr{B}$ is equivalent to the category of representations of $\widehat{\mathscr{B}}$ in the category of finite dimensional vector spaces. The category of weight modules over $\mathscr{B}$ with weights in $\widehat{W}\cdot \upsilon'$ is equivalent to the category of representations of $\widehat{\mathscr{B}}_{\upsilon'}$ in the category of finite dimensional vector spaces. \end{lemma} The category $\widehat{\mathscr{B}}_{\upsilon'}$ contains a subcategory $\widehat{\mathscr{A}}_{\upsilon'}$ given by the objects of the form $(0,\upsilon)$ for $\upsilon \in \upsilon'+\ft_\Z$. Let $\EuScript{A}\mmod_{\upsilon'}$ denote the representations of this category, which are equivalent to the category of weight modules over $\EuScript{A}$ with weights in $\widehat{W}\cdot \upsilon'$. \section{Higgs and Coulomb} \label{sec:higgs-coulomb} \subsection{The isomorphism} \label{sec:isomorphism} Assume that $\K$ has characteristic $0$; we are still specializing $h=1$. Consider $\rho\in \ft_1$, and let $\EuScript{I}=\rho+\ft_\Z$. We'll call a weight $\vp_i$ of $V$ or root $\al_i$ of $\fg$ {\bf relevant} if it has integral value on $\rho$, and {\bf irrelevant} if it does not. The relevant roots form the root system of a Levi subalgebra $\fl=\fl_{\EuScript{I}}\subset \fg$, and the sum $V_{\EuScript{I}}$ of relevant weight spaces carries an $\fl$ action. Note that $W_{\fl}$ is isomorphic to the stabilizer in $\widehat{W}$ of any element of $\EuScript{I}$, and in particular, $\#W_{\fl}$ is the rank of the Hom space between any two objects of $\widehat{\mathscr{B}}_{\rho}$. 
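For a quick sketch of these notions (example ours): if $G=GL_2$ and $\rho\in\ft_1$ takes an integral value on the first coordinate weight of the diagonal torus and a non-integral value on the second, then the root $\al$ has non-integral value on $\rho$ and is irrelevant, so $\fl$ is the diagonal Cartan, $W_{\fl}$ is trivial, and the Hom spaces in $\widehat{\mathscr{B}}_{\rho}$ have rank $1$; if both values are integral, then $\al$ is relevant, $\fl=\fg$, and the rank is $\#W_{\fl}=2$.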
We now turn to considering the constructions of Section \ref{sec:higgs} for the group $L$ acting on the vector space $V_{\EuScript{I}}$. We can consider the categories $\Stein_{\ACs},\Stein_{X}$ as discussed in Section \ref{sec:variations}. Let $\widehat{\Stein}_{\ACs},\widehat{\Stein}_{X}$ be the completions of these categories with respect to their gradings. Let $\Phi_0(\acham,\acham',\rho)$ be the product of the terms $\varphi^+_i-n$ over pairs $(i,n) \in [1,d]\times\Z$ such that the inequalities \[\varphi_i(\acham)>n+\ep_i \qquad \varphi_i(\acham')<n+\ep_i \qquad \langle\varphi^+_i,\rho\rangle\neq n\] hold. Note that if $\mu\in \ft^*$, then $\mu$ does not have a canonical extension to $\ft_1$, but the expression $\mu-\langle\mu,\rho\rangle$ is well-defined on $\ft_1$, giving the same answer for any extension of $\mu$ to $\tilde{\ft}$. \begin{definition} Let $\gamma$ be the functor \[\gamma\colon\widehat{\Stein}_{\ACs}\to \widehat{\mathscr{B}}_\rho\] which sends each $\Ba\in \ACs$ to $(\xi_{\Ba},\rho)$, for $\xi_{\Ba}$ a generic element of $\AC_{\Ba}'$, and which acts on morphisms by \newseq \begin{align*} \gamma(\wall(\Ba,\Bb))&= \frac{1}{\Phi_0(\acham_{\Ba},\acham_{\Bb},\rho)}r(\acham_{\Ba},\acham_{\Bb})\subeqn \label{gamma1}\\ \gamma (w)&= w \subeqn \label{gamma2}\\ \gamma(\psi_i(\Ba)) &= s_{\al_i}u_{\al_i}\subeqn \label{gamma3}\\ \gamma(\mu)&= \mu-\langle\mu,\rho\rangle \subeqn \label{gamma4} \end{align*} \end{definition} As mentioned in the introduction, these formulae can be explained geometrically. In particular, $\Phi_0(\acham_{\Ba},\acham_{\Bb},\rho)$ can be interpreted as the Euler class of a normal bundle, just as in \cite[Proposition 2.4.7]{MR3013034}. \begin{theorem}\label{main-iso} The functor $\gamma$ is an equivalence $\widehat{\Stein}_{\ACs}\cong \widehat{\mathscr{B}}_\rho$ which induces an equivalence $\widehat{\Stein}_{X}\cong \widehat{\mathscr{A}}_{\rho}$. \end{theorem} \begin{proof} First, we must check that this functor is well-defined. We let $\Cft^\upsilon=\varprojlim \Cft/\mathfrak{m}_\upsilon^{N}$. There is a natural faithful representation of the category $\widehat{\mathscr{B}}_\rho$ sending $(\acham,\upsilon)$ to $\Cft^\upsilon$. As in many previous proofs, we'll prove the equivalence by comparing this with the polynomial representation of $\widehat{\Stein}_{\ACs}$ given in \eqref{Y'-action2}. We consider the completion of this polynomial representation with respect to the grading. Consider the induced isomorphism $\mathbbm{s}_\rho\colon \C[[\ft]]\to S_\rho$ via shifting, that is, the image of a linear function $\mu\in \ft^*$ is $ \mu-\langle\rho,\mu\rangle $. The match for (\ref{gamma2}--\ref{gamma4}) is clear, but perhaps we should say a bit more about (\ref{gamma1}). The image of $\varphi(\eta(\Ba),\eta(\Bb))$ is the product of the linear factors $\varphi^+_i-\langle \rho,\varphi_i\rangle$ over the $\varphi^+_i$ which are relevant and which have different signs on $\Ba$ and $\Bb$. This always divides $\Phi(\xi_{\Ba},\xi_{\Bb})$, and the remaining factors are precisely $\Phi_0(\xi_{\Ba},\xi_{\Bb},\rho)$, which shows the compatibility of \eqref{Y'-action2} and (\ref{gamma1}) with (\ref{eq:ract}). The functor $\gamma$ is clearly full, since all but one of the generators is an explicit image and $r(\xi_{\Ba},\xi_{\Bb})=\gamma(\gamma^{-1}(\Phi_0(\xi_{\Ba},\xi_{\Bb},\rho))\wall(\Ba,\Bb))$. Thus, the map $\Hom_{\widehat{\Stein}_{\ACs}}(\Ba,\Bb)\to \Hom_{\widehat{\mathscr{B}}_\rho}(\xi_{\Ba},\xi_{\Bb})$ is surjective. 
These are both free modules over $\C[[\ft]]$ with rank equal to $\#W_\fl$, so a surjective map between them must be an isomorphism. Finally, we note that if $\AC'_\Ba$ contains a point of $X_1^*(T)$, then we can take this to be $\xi_\Ba$ and in $ \widehat{\mathscr{B}}_\rho$, we have an isomorphism $(\xi_\Ba,\rho)\cong (0,\rho-\xi_\Ba)$. The latter is an object in $\widehat{\mathscr{A}}_\rho$, and every object of $\widehat{\mathscr{A}}_\rho$ is of this form.
\end{proof}
As before, let $I$ be the set of sign vectors $\sgns$ whose chamber $C_{\sgns,1}$ contains an element of $X$. The sum of the morphism spaces in the category $A_I$ gives a finite dimensional algebra; we'll abuse notation and let the same symbol denote this finite dimensional algebra. The identity $1_\sgns$ on $\sgns$ can be thought of as an idempotent in this algebra.
\begin{corollary}
The category $\EuScript{A}\mmod_{\rho}$ of weight modules with weights in $\widehat{W}\cdot \rho$ is equivalent to the category of representations of $A_I$ in finite dimensional vector spaces where $\Symt$ acts nilpotently; this functor matches the weight space for any weight in $C_{\sgns,1}$ with the image of the idempotent $1_\sgns$.
\end{corollary}
This isomorphism also allows us to define a graded lift of the category of weight modules given by modules with a grading on their weight spaces such that ${\Stein}_{\ACs}$ acts homogeneously. We can easily extend this isomorphism to the bimodule ${}_{\phi+\nu}\mathscr{T}_{\phi}$. This has a natural completion ${}_{\phi+\nu}\widehat{\mathscr{T}}_{\phi}$ to a bimodule over the categories $\widehat{\mathscr{B}}$ associated to the flavors $\phi+\nu$ and $\phi$. Applying Theorem \ref{main-iso} to the action of $GT_H$ on $V$, we find an isomorphism:
\begin{corollary}\label{cor:bimodule-iso}
${}_{I_{\phi+\nu}}\widehat{A}_{I_{\phi}}\cong {}_{\phi+\nu}\widehat{\mathscr{T}}_{\phi}$.
\end{corollary}
\subsection{Koszul duality}
\label{sec:Koszul}
Assume that $A$ is an algebra over a field $\K$ graded by the non-negative integers with $A_0$ finite dimensional and semi-simple. The {\bf Koszul dual} of $A$ is, by definition, the algebra $A^!\cong T_{A_0}A_1^*/R^\perp$ where $R\subset A_1\otimes_{A_0}A_1$ is the space of quadratic relations, the kernel of the map to $A_2$. The representation category of $A^!$ is equivalent to the abelian category $\operatorname{LCP}(A)$ of linear complexes of projectives over $A$. If an abelian category is equivalent to the modules over an algebra $A$ as above, then the Koszul dual of the category is the category of representations of $A^!$. Let
\[M:=\bigoplus_{C_{\sgns,w}\neq \emptyset}(p_{\sgns,w})_*\mathfrak{S}_{X_{\sgns,w}}.\]
Combining Theorems \ref{th:MHM-LPC} \& \ref{main-iso} shows that:
\begin{proposition}
The Koszul dual of the category $\EuScript{A}\mmod_{\EuScript{I}}$ is the category ${\operatorname{MHM}}_M$ of $L_{\EuScript{I}}$-equivariant mixed Hodge modules of type $M$.
\end{proposition}
\begin{remark}
In general, $D^b({\operatorname{MHM}}_M)$ may not be a full subcategory of $D^b({\operatorname{MHM}})$, so $\Ext^\bullet_{\operatorname{MHM}}(M,M)\cong A_I$ is not the same as the Ext-algebra in the category ${\operatorname{MHM}}_M$. For example, in the ``pure gauge field'' case of $V=0$, we have that $A_I$ is the cohomology of $B L_{\EuScript{I}}$, which we can think of as symmetric functions on $\ft$ for the action of the integral Weyl group $W_{\EuScript{I}}$.
The algebra $\EuScript{A}$ in this case is just the smash product $\Cth\rtimes W$, and the subcategory $\EuScript{A}\mmod_{\EuScript{I}}$ corresponds to the modules which are the sum of their weight spaces over $\Cft$ for the $W$-translates of $\rho$. The $\rho$ weight space has a natural action of the stabilizer $W_\rho$, and considering the $W_\rho$ invariants defines a functor from $\EuScript{A}\mmod_{\EuScript{I}}$ to $H^*(B L_{\EuScript{I}})$-modules where we let $H^*(B L_{\EuScript{I}})\cong \C[\ft]^{W_\rho}$ act by the $\rho$-shifted action. This gives the equivalence induced by Theorem \ref{main-iso}. In this case, since $H^*(BL_{\EuScript{I}})$ has no elements of degree 1, the only linear complexes over this ring are those with trivial differentials. Thus, the category ${\operatorname{MHM}}_M$ is equivalent to the category of vector spaces.
\end{remark}
\begin{example}
Consider the case of $V=\C$ with $G=\C^*$ acting naturally, and flavor $\phi$ giving weight $a$ on $\C^*$ and $-a-1$ on its dual space. In this case $\Cft\cong \C[t]$ with $t$ the natural cocharacter. The algebra $\EuScript{A}=\EuScript{A}^{\operatorname{sph}}$ has generators $r^+$ and $r^-$ with
\[r^-r^+=t -a-1 \qquad \qquad r^+r^-=t-a.\]
Note that $r^\pm$ give an isomorphism between the $k$ and $k+1$ weight spaces unless $k=a$. Thus, if the weights of $t$ are not in $a+\Z$, then all weight spaces are isomorphic, and we are reduced to the pure gauge situation. If we take weight spaces of the form $a+\Z$, then there are two isomorphism classes, represented by $a$ and $a+1$, with $r^\pm$ giving morphisms in both directions between them, with the composition in either direction acting by the nilpotent part of $t$. Thus, we obtain the completed path algebra of an oriented 2-cycle as $\End(P_{0,a}\oplus P_{0,a+1})$. The Koszulity of this path algebra is easily verified directly (since every simple has a length 2 linear projective resolution). Since this path algebra has no quadratic relations, its quadratic dual is given by imposing all (two) possible quadratic relations: it is the path algebra of an oriented 2-cycle with all length-2 paths set to 0. This is the endomorphism ring of the projective generator in the category of strongly $\C^*$-equivariant D-modules on $\mathbb{A}^1$ generated by the functions and the $\delta$-functions at the origin. The two indecomposable projective D-modules in this category are $D_{\mathbb{A}^1}/D_{\mathbb{A}^1}(z\frac{\partial}{\partial z})$ and $D_{\mathbb{A}^1}/D_{\mathbb{A}^1}(\frac{\partial}{\partial z}z)$; their sum has the desired endomorphism algebra. The untruncated path algebra appears as the Ext-algebra of the sum of simple D-modules $D_{\mathbb{A}^1}/D_{\mathbb{A}^1}z\oplus D_{\mathbb{A}^1}/D_{\mathbb{A}^1}\frac{\partial}{\partial z}$.
\end{example}
Let $\cO_{\EuScript{I}}$ be the intersection of $\EuScript{A}\mmod_{\EuScript{I}}$ with category $\cO$ for $\xi\in (\fg^*)^G$, that is, the modules such that the eigenspaces for $\xi$ are finite dimensional and bounded above. We have a graded lift $\tilde{\cO}_{\EuScript{I}}$ of this category, defined as modules in $\cO_{\EuScript{I}}$ endowed with a grading on which the induced action of $\Stein_{\ACs}$ is homogeneous. This is a subcategory of $\EuScript{A}\mmod_{\EuScript{I}}$, consisting of all modules whose composition factors are all killed by $e(\sgns)$ if $\xi$ does not attain a maximum on a bounded subset of $C_{\sgns}$.
Its Koszul dual is thus a quotient of ${\operatorname{MHM}}_M$ by the subcategory of modules whose composition factors all appear as summands of $L_\sgns$ such that $\xi$ does not attain a maximum on a bounded subset of $C_{\sgns}$. By Lemma \ref{lem:unsteady-unbounded}, this is the same as the quotient by the unsteady sign vectors. Thus, we have that:
\begin{theorem}\label{thm:Koszul-duality}
If \hyperlink{daggerprime}{$(\dagger')$} holds, then the Koszul dual of the category $\tcO_{\EuScript{I}}$ for the character $\xi$ and flavor $\phi$ is equivalent to a block of $\mathcal{\tilde O}_{\!\operatorname{g}}^!$ for the flavor $\phi$ on $\fM_\xi=T^*V_{\fl}/\!\!/\!\!/\!\!/_{\!\xi} L$ for the integral quantization. If \hyperlink{dagger}{$(\dagger)$} holds, then the Koszul dual of the category $\tcO_{\EuScript{I}}$ for the character $\xi$ is equivalent to $\mathcal{\tilde O}_{\!\operatorname{g}}^!$ for the flavor $\phi$ on $\fM_\xi=T^*V_{\fl}/\!\!/\!\!/\!\!/_{\!\xi} L$ for the integral quantization.
\end{theorem}
\subsection{Twisting and shuffling functors}
\label{sec:twist-shuffl-funct}
Throughout this section, we assume \hyperlink{dagger}{$(\dagger)$} holds for simplicity. Recall that the categories $\cO$ for the varieties $\fM_H$ and $\fM_C$ are each endowed with actions of two collections of functors: twisting and shuffling functors. We refer the reader to \cite{BLPWquant,BLPWgco} for a more thorough discussion of these functors. In this paper, we will only consider pure shuffling and twisting functors for simplicity; it will be more natural to discuss the impure functors after a longer discussion of the Namikawa Weyl group of a Higgs branch. Let us describe the form these functors take in the cases we are considering. Throughout the description below, we let $\star\in \{!,*\}$. On $\fM_H$:
\begin{itemize}
\item The pure twisting functors are generated by functors $\mathfrak{r}^\xi\circ \mathfrak{r}^{\xi'}_{\star}$ composing the reduction functor $\mathfrak{r}_\xi\colon D_{p\cOg}\to D_{\cOg}$ to the category $\cO$ on $\fM_\xi$ with the left or right adjoint of this functor.
\item The pure shuffling functors are generated by composing the inclusion functor $i^\phi$ of $D_{\cOg}$ into $D^b(\mathcal{D}\mmod)$ with its left or right adjoint $ i^\phi_\star$.
\end{itemize}
On $\fM_C$:
\begin{itemize}
\item The pure twisting functors are generated by tensor product with ${}_{\phi'} T_{\phi}$ for $\phi$ and $\phi'$ both generic in $\mathscr{I}$, and its adjoint.
\item The pure shuffling functors are generated by composing the inclusion functor $i^\xi$ of $\OCoulomb$ into $\EuScript{A}\mmod$ with its left or right adjoint $i^\xi_{\star}$ in the derived category (i.e. the derived functor of taking the largest quotient or submodule in category $\cO$).
\end{itemize}
\begin{theorem}
The Koszul duality of Theorem \ref{thm:Koszul-duality} switches pure twisting and shuffling functors, matching $\mathfrak{r}^{\xi'}\circ \mathfrak{r}_*^{\xi}$ with $i^{\xi'}_!\circ i^{\xi}$ and ${}_{\phi'} T_{\phi}\otimes_{\EuScript{A}_\phi} -$ with $i^{\phi'}_*\circ i^{\phi}$.
\end{theorem}
\begin{proof}
The proof of this fact is roughly the same as in \cite[8.24]{BLPWtorico}. The shuffling functors come from inclusion of and projection to a subcategory, and the twisting functors come from projection to and adjoint inclusion of a quotient category; these naturally interchange under Koszul duality. Now, let us be more precise.
Let
\[A^\xi_P:=A_P/\ideal_\xi\qquad {}_{P'}^{}A^\xi_{P}:={}_{P'}A_P/(\ideal_\xi\, {}_{P'}A_{P}+{}_{P'}A_{P}\,\ideal_\xi),\]
where ${}_{P'}A_P$ is the bimodule defined in Definition \ref{def:bimod}.
\begin{itemize}
\item Under the equivalence of $D_{p\cOg}$ to $A_{I_\phi}\operatorname{-dg-mod}$ and $\OHiggs$ with $A^{\xi}_{I_\phi}\operatorname{-dg-mod}$, the functor $\mathfrak{r}_*$ is intertwined with inflation of an $A^{\xi}_{I_\phi}$-module to an $A_{I_\phi}$-module, and thus $\mathfrak{r}$ with its left adjoint $A^{\xi'}_{I_\phi}\Lotimes_{A_{I_\phi}}-$. Since $\mathfrak{r}\circ \mathfrak{r}_*$ is an equivalence, its left and right adjoints agree and $\mathfrak{r}\circ \mathfrak{r}_!$ is intertwined with $\RHom_{A_{I_\phi}}(A^{\xi'}_{I_\phi},-)$.
\item The categories $\OCoulomb$ for different choices of $\xi$ are equivalent to the modules over $A^\xi$, and the inclusion $i^\xi$ corresponds to the pullback of $A^\xi$-modules to $A$-modules by the quotient map. Thus, the shuffling functor $i^{\xi'}_!\circ i^{\xi}$ is intertwined with $A^{\xi'}_{I_\phi}\Lotimes_{A_{I_\phi}}-$ and $i^{\xi'}_*\circ i^{\xi}$ with its adjoint.
\end{itemize}
This shows the first desired match of functors.
\begin{itemize}
\item Under the equivalence $D_{\cOg}$ with $A/\ideal_{\xi}\operatorname{-dg-mod}$, the shuffling functors are determined by taking Ext of $\mathfrak{r}(M)$ and $\mathfrak{r}(M')$ for the different flavors $\phi$ and $\phi+\nu$ respectively. These are summands of $M''$, the corresponding sheaf for $GT_H$ and any flavor, so ultimately, we find that
\[\Ext^\bullet(\mathfrak{r}(M'),\mathfrak{r}(M))\cong \Ext^\bullet(\mathfrak{r}(M'),i_*^{\phi+\nu}\circ i^\phi (\mathfrak{r}(M)))\cong {}_{I_{\phi+\nu}}^{}A_{I_{\phi}}^{\xi}\]
Thus, we have that $i_*^{\phi+\nu}\circ i^\phi$ corresponds to ${}_{I_{\phi+\nu}}^{}A_{I_{\phi}}^{\xi}\Lotimes_{A^{\xi}_{I_{\phi}}}-$ and $i_!^{\phi+\nu}\circ i^\phi$ to $\RHom_{A^{\xi}_{I_{\phi}} }({}_{I_{\phi}}^{}A_{I_{\phi+\nu}}^{\xi},-)$.
\item Under the isomorphism of Theorem \ref{main-iso}, the tensor product with ${}_{\phi'} T_{\phi}$ corresponds to ${}_{I_{\phi'}}^{}A_{I_{\phi}}^{\xi}\otimes_{A^{\xi}_{I_{\phi}}}-$.
\end{itemize}
This shows the second desired match.
\end{proof}
As usual, this applies to the quiver and smooth hypertoric cases, since \hyperlink{dagger}{$(\dagger)$} holds there.
\subsection{Quiver varieties}
\label{sec:quiver}
The most important examples for us are hypertoric and quiver varieties. In the hypertoric case, we just recover the results of \cite{BLPWtorico} (in fact, the arguments given here have already been given in the hypertoric case in \cite[\S 3]{Webqui}), so there is no need for a detailed discussion. Interestingly, Theorem \ref{thm:Koszul-duality} gives a new proof of the Koszul duality discussed in \cite{GDKD} (of course, this duality is fairly easy to prove algebraically). The quiver variety case is much richer and more interesting. Here we mean that we have a quiver $\Gamma$, and dimension vectors $\mathbf{v},\mathbf{w}$ and
\[V=\bigoplus_{i\to j}\Hom(\C^{v_i},\C^{v_j})\oplus \bigoplus_{i}\Hom(\C^{v_i},\C^{w_i})\qquad G=\prod GL(v_i).\]
The Higgs side of this case is studied in \cite[\S 4--5]{Webqui}. In particular, the Steinberg algebras in this case are reduced weighted KLR algebras ${\bar W}^\vartheta$ as shown in \cite[Cor. 4.11]{WebwKLR}; they are reduced since we do not include the action of the $\C^*$ attached to the Crawley-Boevey vertex.
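For orientation, here is a trivial Python sketch (with a hypothetical quiver and hypothetical dimension vectors, chosen only for illustration) of the dimension count for the datum $(V,G)$ above:
\begin{verbatim}
edges = [(0, 1), (1, 2)]     # hypothetical quiver 0 -> 1 -> 2
v = {0: 2, 1: 3, 2: 1}       # hypothetical dimension vector v
w = {0: 1, 1: 0, 2: 2}       # hypothetical framing vector w

# dim V = sum over arrows of v_i*v_j plus sum over vertices of v_i*w_i
dim_V = sum(v[i] * v[j] for (i, j) in edges) + sum(v[i] * w[i] for i in v)
dim_G = sum(v[i] ** 2 for i in v)   # dimension of prod GL(v_i)
print(dim_V, dim_G)                 # 13 14
\end{verbatim}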
As in Proposition \ref{prop:wKLR}, we can let $B$ be any set of loadings, and let $\xi$ be the sum of the trace characters on $\mathfrak{sl}(v_i)$ for all $i$. For any set of loadings $B$ with the induced map $B\to I'$:
\begin{proposition}
We have an isomorphism ${\bar W}^\vartheta_{B}\cong A_B$; the ideal $\ideal_\xi$ is precisely that generated by the idempotents for unsteady loadings.
\end{proposition}
Regarding the Coulomb branch, the resulting categories are closely related to the truncated shifted Yangians introduced by the author, Kamnitzer, Weekes and Yacobi in \cite{KWWY}. For quivers of type ADE, the algebra $\EuScript{A}^{\operatorname{sph}}$ is a truncated shifted Yangian by \cite[Cor. B.26]{BFNplus}. Applying Theorem \ref{main-iso} in this case requires a little care, however, since we must throw out irrelevant weights and roots. We choose a flavor $\phi$. That is, up to conjugacy, we must choose a weight $\phi_e$ for each edge, and a cocharacter into $\prod GL(\C^{w_i})$, that is, $\phi_{i,1},\dots, \phi_{i,w_i}$ for each $i\in V(\Gamma)$. This is the same data as a weighting of the Crawley-Boevey graph of $\Gamma$. Now, we consider a coset of $\ft_\Z$ in $\ft_1$; this is given by fixing the class in $\C/ \Z$ for each $z_{i,m}$. Considering only relevant roots means expanding the vertex set of our graph to form a new graph $\Gamma_{\Bz,\phi}$. Its vertex set is
\[V(\Gamma_{\Bz,\phi})=\{(i,[z])\in \Gamma\times \C/\Z\mid z\equiv z_{i,m}\pmod\Z \text{ for some $m\in [1,v_i]$}\}.\]
The edges $(i,[z]) \to (j,[u])$ are in bijection with edges $e\colon i\to j$ with $\phi_e\equiv z-u\pmod \Z$. Note that we can lift paths in $\Gamma$ to $\Gamma_{\Bz,\phi}$, and that a closed path will lift to a closed path if and only if it lies in the kernel of the homomorphism $\pi_1(\Gamma)\to H_1(\Gamma;\Z)\to \C/\Z$ with the last map induced by the weighting $\phi_e$ reduced $\pmod \Z$, thought of as a cohomology class in $H^1(\Gamma;\C/\Z)$. Thus, $\Gamma_{\Bz,\phi}$ is a subgraph of the union of some number of copies of the cover of $\Gamma$ corresponding to this kernel. We have dimension vectors given by
\[v_{ (i,[z])}=\#\{m\in [1,v_i]\mid z\equiv z_{i,m}\pmod\Z\}\qquad w_{ (i,[z])}=\#\{m\in [1,w_i]\mid \phi_{i,m}\equiv z\pmod \Z\}.\]
\begin{lemma}
The subspace $V_{\Bz,\phi}$ and group $L_{\Bz,\phi}$ attached to this choice of flavor and coset are isomorphic to
\[V_{\Bz,\phi}\cong \bigoplus_{\substack{e\in E(\Gamma_{\Bz,\phi})\\e\colon (i,[z]) \to (j,[u])}}\Hom(\C^{v_{ (i,[z])}},\C^{v_{ (j,[u])}}) \oplus \bigoplus_{ (i,[z])\in V( \Gamma_{\Bz,\phi})}\Hom(\C^{v_{ (i,[z])}},\C^{w_{ (i,[z])}}) \]\[ L_{\Bz,\phi}=\prod GL(v_{ (i,[z])})\]
\end{lemma}
Thus, Theorem \ref{main-iso} shows that:
\begin{theorem}
The category $\EuScript{A}\mmod_\rho$ is equivalent to the representations of a reduced weighted KLR algebra (associated to a set of loadings) for the Crawley-Boevey quiver of $\Gamma_{\Bz,\phi}$, and the intersection $\cO_\rho$ of this subcategory with category $\cO$ is equivalent to representations of its steadied quotient.
\end{theorem}
Note that this is not necessarily equivalent to the full reduced weighted KLR algebra, since we may not have $I'_\phi=I_{\phi}$, but its representation category is always a quotient of the representation category for the full algebra. The difference between the sets $ I'_\phi$ and $I_{\phi}$ is closely related to the monomial crystal (whose connection to shifted Yangians is discussed in \cite{KTWWY}).
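The combinatorial construction of $\Gamma_{\Bz,\phi}$ is easy to carry out in practice. The following Python sketch (with hypothetical input data; the function name is ours, not standard) builds the vertex and edge sets from a weighted quiver and a choice of classes for the $z_{i,m}$; for instance, a single loop of weight $1/2$ produces an oriented $2$-cycle, in line with the covering description above:
\begin{verbatim}
from fractions import Fraction as F

def build_gamma(edges, z_classes):
    # vertices: pairs (i, [z]) with z = z_{i,m} mod Z for some m
    verts = {(i, zc % 1) for i, zs in z_classes.items() for zc in zs}
    # an edge e: i -> j of weight phi_e induces (i,[z]) -> (j,[u])
    # exactly when phi_e = z - u mod Z
    new_edges = [((i, z), (j, u))
                 for (i, j, phi_e) in edges
                 for (vi, z) in verts if vi == i
                 for (vj, u) in verts if vj == j
                 if (z - u - phi_e) % 1 == 0]
    return verts, new_edges

# hypothetical data: one vertex, one loop of weight 1/2,
# with the z_{0,m} lying in the classes [0] and [1/2]
verts, edges_out = build_gamma(edges=[(0, 0, F(1, 2))],
                               z_classes={0: [F(0), F(1, 2)]})
print(sorted(verts))   # the two vertices (0,[0]) and (0,[1/2])
print(edges_out)       # two edges forming an oriented 2-cycle
\end{verbatim}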
This theorem is particularly interesting in the case where the quiver $\Gamma$ is of type ADE; in this case, the reduced weighted KLR algebra appearing is $\tilde{T}^\bla$ as defined in \cite[\S 4]{Webmerged}, and $\cO_\rho$ is a tensor product categorification in the sense of \cite{LoWe}. Another important special case is when $\Gamma$ is a single loop; in this case our dimension vectors are just integers $v,w$, and $V\cong (\C^v)^{\oplus w}\oplus \mathfrak{gl}_v$ as a representation of $GL_v$. In this case, the algebra $\EuScript{A}^{\operatorname{sph}}$ is isomorphic to the spherical rational Cherednik algebra for the group $S_v\wr \Z/w\Z$ (assuming $w>0$), as recently shown by Kodera-Nakajima \cite{KoNa}, and expanded upon by Braverman-Etingof-Finkelberg \cite{BEF} and the author \cite{Webalt}. This isomorphism matches the weight of the loop to the parameter $k$ in the Cherednik algebra. The reader might find it a bit dissatisfying that in the case where $w=1$, the algebra $\EuScript{A}$ is an algebra containing the spherical Cherednik algebra as a submodule and of rank $(v!)^2$ as a module over it, but it is not the full Cherednik algebra $\mathsf{H}(S_v)$. Rather, it's the matrix algebra $M_{v!}(e \mathsf{H}(S_v)e)$, as Theorem \ref{th:Morita} shows. The algebra $\mathsf{H}(S_v)$ appears as the endomorphisms of another object in $\mathscr{B}$; the convolution description is as $H_*^{BM, \wtG}(\VB_{U}\times_{V((t))}\VB_{U})$ where $U=\C^v[[t]]\oplus \mathfrak{i}$ and $\mathfrak{i}$ is the Lie algebra of the standard Iwahori. This matches the presentation of the Cherednik algebra in \cite{Webalt}, which also appeared in \cite{GrifII}. This is a rational version of the K-theoretic description of the double affine Hecke algebra in \cite[Thm. 2.5.6]{MR3013034}. In this loop case, the map induced by the cohomology class is $\pi_1(\Gamma)\cong \Z\to \C/\Z$ sending $1\mapsto k$. In particular, if $k\notin \Q$, then the corresponding cover is an $A_\infty$ graph, and $\Gamma_{\Bz,\phi}$ is a union of segments. If $k=a/e\in \Q$ (in lowest terms), then the corresponding cover is an $e$-cycle, and so $\Gamma_{\Bz,\phi}$ is a union of segments and $e$-cycles. The most interesting case is when $\phi_{i,m},z_{i,m}\in \frac{1}{e}\Z$, so $ \Gamma_{\Bz,\phi}$ is a single $e$-cycle (assuming every $\Z$-coset contains at least one $ z_{i,m}$). The resulting equivalence between modules over the Cherednik algebra and weighted KLR algebras was developed first in \cite{WebRou}, and is extended in \cite{Webalt}.
\section{Introduction}
Nowadays, some of the questions that the Standard Model (SM) still cannot answer point to the existence of new physics beyond the SM, and the necessity of a high-energy collider to search for evidence of it has become clear. After the discovery of the Higgs boson at the LHC, the question of which collider will be built in the future has come to the fore. Continuing particle physics research in the post-LHC period depends on analyzing new future machine options. In this study, the physics potential and the theoretical advantages of future multi-TeV muon colliders are emphasized. The multi-TeV muon collider has many remarkable features. Since muons are much heavier than electrons, they produce far less synchrotron radiation, which makes it easier to accelerate them to high energies in a circular collider. Moreover, since muons, unlike protons, are not composite particles, their entire center-of-mass energy is available in the collision. Lepton colliders also have much smaller backgrounds and a cleaner environment than hadron colliders. In addition to these theoretical advantages, experimental studies such as the Muon Accelerator Program (MAP) \cite{Palmer:2014asd,Delahaye:2013hvz}, the Muon Ionization Cooling Experiment (MICE) \cite{Bogomilov:2020twm}, and the Low Emittance Muon Accelerator (LEMMA) \cite{Antonelli:2016ezx} have greatly increased the motivation to build a future muon collider. In the light of experimental predictions for muon colliders, interest in phenomenological studies has also grown steadily. These studies are generally grouped under headings such as the electroweak parton distribution function (EW PDF) formalism in muon collisions \cite{Han:2021gbv,Han:2022rws,Ruiz:2021oec}, measuring the Higgs boson couplings \cite{Costantini:2020tkp,Chiesa:2020yhn,Han:2021pas,Chen:2021pln,Chen:2022ygc}, BSM studies with new scalars \cite{Eichten:2014evo,Chakrabarty:2015gwe,Buttazzo:2018wmg,Bandyopadhyay:2021lja,Han:2021hrq,Liu:2021gtr}, minimal dark matter \cite{Han:2021twq,Capdevilla:2021xku,Bottaro:2021res,Han:2022epx}, the muon $g-2$ anomaly \cite{Yin:2020gre,Capdevilla:2021ooc,Buttazzo:2021ooc,Dermisek:2021sdf,Capdevilla:2022yza}, the neutral current B-meson anomalies \cite{Huang:2021edc,Asadi:2021wsd} and the investigation of anomalous neutral triple gauge couplings (aNTGC) \cite{Spor:2022rcx,Senol:2022evm} and anomalous quartic gauge couplings (aQGC) \cite{Yang:2022ubn,Yang:2022emz}. Four different muon collider scenarios are considered in this paper. The center-of-mass energies for these scenarios, along with the energies of the incoming muons and the integrated luminosities, are given in Table~\ref{tab1}; these values have been used in previous studies of the muon collider \cite{Han:2021twq,Ali:2021wsd,Chiesa:2021tyr}.
\begin{table}[H]
\centering
\caption{Energy and integrated luminosity in scenarios of muon collider.}
\label{tab1}
\begin{tabular}{p{3cm}p{1.5cm}p{1.5cm}p{1.5cm}p{1.5cm}}
\hline \hline
$\sqrt{s}$ (TeV) & 3 & 6 & 10 & 14\\
\hline
$E_{\mu}$ (TeV) & 1.5 & 3 & 5 & 7\\
\hline
${\cal L}_{\text{int}}$ (ab$^{-1}$) & 1 & 4 & 10 & 20\\
\hline \hline
\end{tabular}
\end{table}
Since the $Z$ boson has no electric charge, there is no coupling between the $Z$ boson and the photon at the tree level in the SM.
Therefore, the presence of aNTGCs between the photon and the $Z$ boson ($Z\gamma\gamma$ and $ZZ\gamma$) would signal a deviation from the SM prediction and plays a key role in the exploration of new physics beyond the SM \cite{Spor:2022omz}. These possible deviations in the aNTGC interactions are investigated in a model-independent framework within Effective Field Theory (EFT). The effective Lagrangian of NTGC consists of the SM Lagrangian with dimension-four operators plus new physics contributions beyond the SM from dimension-eight operators, and can be written as \cite{Degrande:2014ydn}
\begin{eqnarray}
\label{eq.1}
{\cal L}^{\text{nTGC}}={\cal L}_{\text{SM}}+\sum_{i}\frac{C_i}{\Lambda^{4}}({\cal O}_i+{\cal O}_i^\dagger)
\end{eqnarray}
{\raggedright where the new physics terms are suppressed by inverse powers of the new physics scale $\Lambda$ and ${\cal O}_i$ denotes the four operators given below}
\begin{eqnarray}
\label{eq.2}
{\cal O}_{\widetilde{B}W}=iH^{\dagger} \widetilde{B}_{\mu\nu}W^{\mu\rho} \{D_\rho,D^\nu \}H,
\end{eqnarray}
\begin{eqnarray}
\label{eq.3}
{\cal O}_{BW}=iH^\dagger B_{\mu\nu}W^{\mu\rho} \{D_\rho,D^\nu \}H,
\end{eqnarray}
\begin{eqnarray}
\label{eq.4}
{\cal O}_{WW}=iH^\dagger W_{\mu\nu}W^{\mu\rho} \{D_\rho,D^\nu \}H,
\end{eqnarray}
\begin{eqnarray}
\label{eq.5}
{\cal O}_{BB}=iH^\dagger B_{\mu\nu}B^{\mu\rho} \{D_\rho,D^\nu \}H.
\end{eqnarray}
The first of these dimension-eight operators is CP-even and the last three are CP-odd. $B_{\mu\nu}$ and $W^{\mu\nu}$ are the field strength tensors and $D_\mu$ is the covariant derivative, defined as follows:
\begin{eqnarray}
\label{eq.6}
B_{\mu\nu}=\left(\partial_\mu B_\nu - \partial_\nu B_\mu\right),
\end{eqnarray}
\begin{eqnarray}
\label{eq.7}
W_{\mu\nu}=\sigma^i\left(\partial_\mu W_\nu^i - \partial_\nu W_\mu^i + g\epsilon_{ijk}W_\mu^j W_\nu^k\right),
\end{eqnarray}
\begin{eqnarray}
\label{eq.8}
D_\mu \equiv \partial_\mu - i\frac{g^\prime}{2}B_\mu Y - ig_W W_\mu^i\sigma^i.
\end{eqnarray}
Although the dimension-six operators do not contribute to the aNTGCs at tree level, their contribution at one loop is of the order ${\alpha \hat{s}}/{4\pi\Lambda^2}$. The tree-level contribution of the dimension-eight operators is of the order ${\upsilon^2\hat{s}}/{\Lambda^4}$. Consequently, the contribution of the dimension-eight operators at tree level dominates over that of the dimension-six operators at one loop for $\Lambda \lesssim \sqrt{4\pi\hat{s}/\alpha}$ \cite{Degrande:2014ydn}.
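As a rough numerical illustration of this power counting, the short sketch below evaluates the two order-of-magnitude expressions quoted above for illustrative input values (the inputs are ours, chosen only for illustration, and the expressions are scaling estimates rather than full amplitudes):
\begin{verbatim}
import math

alpha = 1 / 137.036    # fine-structure constant
v = 0.246              # Higgs vacuum expectation value in TeV
s_hat = 3.0 ** 2       # subprocess energy squared in TeV^2 (illustrative)

for Lam in (1.0, 2.0, 5.0):   # new physics scale in TeV
    dim6_loop = alpha * s_hat / (4 * math.pi * Lam ** 2)
    dim8_tree = v ** 2 * s_hat / Lam ** 4
    print(f"Lambda = {Lam} TeV: dim-6 one-loop ~ {dim6_loop:.2e}, "
          f"dim-8 tree ~ {dim8_tree:.2e}")
\end{verbatim}
For these inputs the tree-level dimension-eight estimate dominates, consistent with the statement above.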
The effective Lagrangian for aNTGCs including dimension-six and dimension-eight operators is given by \cite{Gounaris:2000svs}
\begin{eqnarray}
\label{eq.9}
\begin{split}
{\cal L}_{\text{aNTGC}}^{\text{dim-6,8}}=&\frac{g_e}{m_Z^2}\Bigg[-[f_4^\gamma(\partial_\mu F^{\mu\beta})+f_4^Z(\partial_\mu Z^{\mu\beta})]Z_\alpha (\partial^\alpha Z_\beta)+[f_5^\gamma(\partial^\sigma F_{\sigma\mu})+f_5^Z (\partial^\sigma Z_{\sigma\mu})]\widetilde{Z}^{\mu\beta}Z_\beta \\
&-[h_1^\gamma (\partial^\sigma F_{\sigma\mu})+h_1^Z (\partial^\sigma Z_{\sigma\mu})]Z_\beta F^{\mu\beta}-[h_3^\gamma(\partial_\sigma F^{\sigma\rho})+h_3^Z(\partial_\sigma Z^{\sigma\rho})]Z^\alpha \widetilde{F}_{\rho\alpha} \\
&-\bigg\{\frac{h_2^\gamma}{m_Z^2}[\partial_\alpha \partial_\beta \partial^\rho F_{\rho\mu}]+\frac{h_2^Z}{m_Z^2}[\partial_\alpha \partial_\beta(\square+m_Z^2)Z_\mu]\bigg\}Z^\alpha F^{\mu\beta} \\
&+\bigg\{\frac{h_4^\gamma}{2m_Z^2}[\square\partial^\sigma F^{\rho\alpha}]+\frac{h_4^Z}{2m_Z^2}[(\square+m_Z^2)\partial^\sigma Z^{\rho\alpha}]\bigg\}Z_\sigma\widetilde{F}_{\rho\alpha}\Bigg],
\end{split}
\end{eqnarray}
{\raggedright where $\widetilde{Z}_{\mu\nu}=1/2\epsilon_{\mu\nu\rho\sigma}Z^{\rho\sigma}$ $(\epsilon^{0123}=+1)$ with field strength tensor $Z_{\mu\nu}=\partial_\mu Z_\nu - \partial_\nu Z_\mu$, and similarly for the electromagnetic field tensor $F_{\mu\nu}$. The three CP-violating couplings are $f_4^V$, $h_1^V$, $h_2^V$ while the three CP-conserving couplings are $f_5^V$, $h_3^V$, $h_4^V$ with $(V=\gamma$, $Z)$ \cite{Gounaris:2000thb}. All couplings are zero in the SM at tree level. In the Lagrangian in Eq.~(\ref{eq.9}), the couplings $h_2^V$ and $h_4^V$ correspond to dimension-eight operators and the other four couplings to dimension-six operators.} The couplings of the effective Lagrangian in Eq.~(\ref{eq.9}) are related to the coefficients of the operators in Eqs.~(\ref{eq.2}-\ref{eq.5}) by gauge invariance under the $SU(2)_L \times U(1)_Y$ group \cite{Rahaman:2020fdf}. The CP-conserving anomalous couplings with two on-shell $Z$ bosons and one off-shell $V=\gamma$ or $Z$ boson for the $ZZV$ coupling are given by \cite{Degrande:2014ydn}
\begin{eqnarray}
\label{eq.10}
f_5^Z=0,
\end{eqnarray}
\begin{eqnarray}
\label{eq.11}
f_5^\gamma=\frac{\upsilon^2 m_Z^2}{4c_\omega s_\omega} \frac{C_{\widetilde{B}W}}{\Lambda^4}
\end{eqnarray}
{\raggedright and the CP-violating anomalous couplings by}
\begin{eqnarray}
\label{eq.12}
f_4^Z=\frac{m_Z^2 \upsilon^2 \left(c_\omega^2 \frac{C_{WW}}{\Lambda^4}+2c_\omega s_\omega \frac{C_{BW}}{\Lambda^4}+4s_\omega^2 \frac{C_{BB}}{\Lambda^4}\right)}{2c_\omega s_\omega},
\end{eqnarray}
\begin{eqnarray}
\label{eq.13}
f_4^\gamma=-\frac{m_Z^2 \upsilon^2 \left(-c_\omega s_\omega \frac{C_{WW}}{\Lambda^4}+\frac{C_{BW}}{\Lambda^4}(c_\omega^2-s_\omega^2)+4c_\omega s_\omega \frac{C_{BB}}{\Lambda^4}\right)}{4c_\omega s_\omega}.
\end{eqnarray}
The CP-conserving anomalous couplings with one on-shell $Z$ boson, one on-shell photon and one off-shell $V=\gamma$ or $Z$ boson for the $Z\gamma V$ coupling are given by \cite{Degrande:2014ydn}
\begin{eqnarray}
\label{eq.14}
h_3^Z=\frac{\upsilon^2 m_Z^2}{4c_\omega s_\omega} \frac{C_{\widetilde{B}W}}{\Lambda^4},
\end{eqnarray}
\begin{eqnarray}
\label{eq.15}
h_4^Z=h_3^\gamma=h_4^\gamma=0,
\end{eqnarray}
{\raggedright and the CP-violating anomalous couplings by}
\begin{eqnarray}
\label{eq.16}
h_1^Z=\frac{m_Z^2 \upsilon^2 \left(-c_\omega s_\omega \frac{C_{WW}}{\Lambda^4}+\frac{C_{BW}}{\Lambda^4}(c_\omega^2-s_\omega^2)+4c_\omega s_\omega \frac{C_{BB}}{\Lambda^4}\right)}{4c_\omega s_\omega},
\end{eqnarray}
\begin{eqnarray}
\label{eq.17}
h_2^Z=h_2^\gamma=0,
\end{eqnarray}
\begin{eqnarray}
\label{eq.18}
h_1^\gamma=-\frac{m_Z^2 \upsilon^2 \left(s_\omega^2 \frac{C_{WW}}{\Lambda^4}-2c_\omega s_\omega \frac{C_{BW}}{\Lambda^4}+4c_\omega^2 \frac{C_{BB}}{\Lambda^4}\right)}{4c_\omega s_\omega}.
\end{eqnarray}
The anomalous couplings with two on-shell photons and one off-shell $Z$ boson are not considered because they are prohibited by the Landau-Yang theorem \cite{Landau:1948gbe,Yang:1950hyb}. In Eqs.~(\ref{eq.11}-\ref{eq.14},\ref{eq.16},\ref{eq.18}), the coefficients $C_{BB}/{\Lambda^4}$, $C_{BW}/{\Lambda^4}$, $C_{\widetilde{B}W}/{\Lambda^4}$ and $C_{WW}/{\Lambda^4}$ are the dimension-eight aNTGC coefficients; $C_{\widetilde{B}W}/{\Lambda^4}$ is the CP-conserving coupling, while $C_{BB}/{\Lambda^4}$, $C_{BW}/{\Lambda^4}$ and $C_{WW}/{\Lambda^4}$ are the CP-violating ones. In this study, the sensitivity to the dimension-eight $C_{BB}/{\Lambda^4}$, $C_{BW}/{\Lambda^4}$, $C_{\widetilde{B}W}/{\Lambda^4}$ and $C_{WW}/{\Lambda^4}$ couplings in the anomalous $ZZ\gamma$ and $Z\gamma\gamma$ vertices is investigated at multi-TeV muon colliders with the process $\mu^+\mu^-\,\rightarrow\,\mu^+\gamma^*\mu^-\,\rightarrow\,\mu^+ Z\mu^-$. The current experimental limits on the dimension-eight aNTGCs have been obtained through the processes $pp\rightarrow Z\gamma$ and $pp\rightarrow ZZ$ at a center-of-mass energy of 13 TeV at the LHC and are given in Table~\ref{tab2}.
\begin{table}[H]
\caption{The current experimental limits on dimension-eight aNTGCs.}
\label{tab2}
\begin{ruledtabular}
\begin{tabular}{lcccc}
\multirow{2}{*}{Experimental limits} & \multicolumn{4}{c}{Couplings (TeV$^{-4}$)}\\
 & ${C_{\widetilde{B}W}}/{\Lambda^4}$ & ${C_{WW}}/{\Lambda^4}$ & ${C_{BW}}/{\Lambda^4}$ & ${C_{BB}}/{\Lambda^4}$\\
\hline
ATLAS \cite{Aaboud:2018ybz} $Z\gamma\rightarrow\nu\bar{\nu}\gamma$ & \multirow{2}{*}{-1.1; 1.1} & \multirow{2}{*}{-2.3; 2.3} & \multirow{2}{*}{-0.65; 0.64} & \multirow{2}{*}{-0.24; 0.24}\\
($\sqrt{s}=13$ TeV, ${\cal L}_{\text{int}}=36.1$ fb$^{-1}$) & & & & \\
\hline
ATLAS \cite{Aaboud:2018onm} $ZZ\rightarrow\ell^+\ell^-\ell^{\prime+}\ell^{\prime-}$ & \multirow{2}{*}{-5.9; 5.9} & \multirow{2}{*}{-3.0; 3.0} & \multirow{2}{*}{-3.3; 3.3} & \multirow{2}{*}{-2.7; 2.8}\\
($\sqrt{s}=13$ TeV, ${\cal L}_{\text{int}}=36.1$ fb$^{-1}$) & & & & \\
\hline
CMS \cite{Sirunyan:2021edk} $ZZ\rightarrow\ell^+\ell^-\ell^{\prime+}\ell^{\prime-}$ & \multirow{2}{*}{-2.3; 2.5} & \multirow{2}{*}{-1.4; 1.2} & \multirow{2}{*}{-1.4; 1.3} & \multirow{2}{*}{-1.2; 1.2}\\
($\sqrt{s}=13$ TeV, ${\cal L}_{\text{int}}=137$ fb$^{-1}$) & & & & \\
\end{tabular}
\end{ruledtabular}
\end{table}
\section{Cross-section measurements at the muon collider}
\label{Sec2}
Four Feynman diagrams at the tree level for the process $\mu^-\gamma^*\,\rightarrow\,Z\mu^-$ are shown in Fig.~\ref{fig1}.
The first two diagrams include SM contributions, while the others include new physics contributions beyond the SM through the anomalous $Z\gamma\gamma$ and $ZZ\gamma$ couplings. In this study, the decay of the $Z$ boson into a neutrino-antineutrino pair is considered. This channel is advantageous because the hadronic channel, with its large multi-jet background, does not provide clean data, while the decay of the $Z$ boson into charged leptons has a lower branching ratio than the decay into neutrinos. In this study, $\mu\gamma^*$ collisions occur when the photon $\gamma^*$ emitted from one muon collides with the other muon. In the Equivalent Photon Approximation (EPA), the spectrum of the photon emitted from the muon is given by \cite{Budnev:1975kyp,Koksal:2019ybm}:
\begin{eqnarray}
\label{eq.19}
\begin{split}
f_{\gamma^{*}}(x)=&\, \frac{\alpha}{\pi E_{\mu}}\Bigg\{\left[\frac{1-x+x^{2}/2}{x}\right]\text{log}\left(\frac{Q_{\text{max}}^{2}}{Q_{\text{min}}^{2}}\right)-\frac{m_{\mu}^{2}x}{Q_{\text{min}}^{2}}\left(1-\frac{Q_{\text{min}}^{2}}{Q_{\text{max}}^{2}}\right) \\
&-\frac{1}{x}\left[1-\frac{x}{2}\right]^{2}\text{log}\left(\frac{x^{2}E_{\mu}^{2}+Q_{\text{max}}^{2}}{x^{2}E_{\mu}^{2}+Q_{\text{min}}^{2}}\right)\Bigg\}\,,
\end{split}
\end{eqnarray}
{\raggedright where $x=E_{\gamma^{*}}/E_{\mu}$ and $Q_{\text{min}}^{2}=m_{\mu}^{2}x^{2}/(1-x)$. $Q_{\text{max}}^{2}$ is the maximum virtuality of the photon.} The total cross-section of the process $\mu^+\mu^-\,\rightarrow\,\mu^+\gamma^*\mu^-\,\rightarrow\,\mu^+ Z\mu^-$ is obtained by integrating the cross-section of the subprocess $\mu^-\gamma^*\,\rightarrow\,Z\mu^-$ weighted by the spectrum of the photon emitted from the muon:
\begin{eqnarray}
\label{eq.20}
\sigma\left( \mu^+\mu^-\,\rightarrow\,\mu^+\gamma^*\mu^-\,\rightarrow\,\mu^+ Z\mu^- \right)=\int f_{\gamma^{*}}(x)\hat{\sigma}\left({\mu^-\gamma^*\,\rightarrow\,Z\mu^-}\right) dx\,.
\end{eqnarray}
Any non-zero anomalous coupling among $C_{BB}/{\Lambda^4}$, $C_{BW}/{\Lambda^4}$, $C_{\widetilde{B}W}/{\Lambda^4}$ and $C_{WW}/{\Lambda^4}$, together with the SM contributions in the process $\mu^-\gamma^*\,\rightarrow\,Z(\nu\bar{\nu})\mu^-$, is considered as the signal. In the analysis of the process $\mu^-\gamma^*\,\rightarrow\,Z\mu^-\,\rightarrow\,\nu\bar{\nu}\mu^-$, the final state topology of the signal process consists of an isolated lepton and missing transverse energy. The SM background process ($\mu^-\gamma^*\,\rightarrow\,Z(\nu\bar{\nu})\mu^-$) has the same final state as the signal process, including only the SM contributions from the first two Feynman diagrams in Fig.~\ref{fig1}. The other backgrounds are as follows. (1) The production of a $W$ boson that decays into a charged lepton and a neutrino is considered ($\mu^-\gamma^*\,\rightarrow\,W^-(\ell\bar{\nu})\nu_\mu$); the final state thus includes one charged lepton and two neutrinos. (2) Since a photon may be misidentified, leading to experimental effects such as fake photons, the process $\mu^-\gamma^*\,\rightarrow\,Z(\nu\bar{\nu})\mu^-\gamma$ is considered as another background, where the $Z$ boson decays into two neutrinos. (3) For the same reason, we also consider the $W$ boson production process $\mu^-\gamma^*\,\rightarrow\,W^-(\ell\bar{\nu})\nu_\mu\gamma$, where the $W$ boson undergoes leptonic decay.
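A minimal numerical sketch of Eqs.~(\ref{eq.19}) and (\ref{eq.20}) is given below. The value of $Q_{\text{max}}^{2}$ and the flat toy subprocess cross-section are placeholders of our own choosing, so the output is illustrative only; the spectrum itself is coded exactly as printed in Eq.~(\ref{eq.19}).
\begin{verbatim}
import math
from scipy.integrate import quad

ALPHA = 1 / 137.036    # fine-structure constant
M_MU = 105.658e-3      # muon mass in GeV
E_MU = 7000.0          # muon beam energy in GeV (14 TeV scenario)
Q2_MAX = 2.0           # maximum photon virtuality in GeV^2 (assumed)

def photon_spectrum(x):
    # EPA photon spectrum f_{gamma*}(x) as written in Eq. (19)
    q2_min = M_MU ** 2 * x ** 2 / (1 - x)
    return (ALPHA / (math.pi * E_MU)) * (
        (1 - x + x ** 2 / 2) / x * math.log(Q2_MAX / q2_min)
        - M_MU ** 2 * x / q2_min * (1 - q2_min / Q2_MAX)
        - (1 - x / 2) ** 2 / x
        * math.log((x ** 2 * E_MU ** 2 + Q2_MAX)
                   / (x ** 2 * E_MU ** 2 + q2_min)))

def sigma_hat(x):
    # placeholder subprocess cross-section, NOT the real one
    return 1.0

# upper limit kept where Q^2_min < Q^2_max, as required by the EPA
total, _ = quad(lambda x: photon_spectrum(x) * sigma_hat(x), 1e-3, 0.99)
print(f"convolved cross-section ~ {total:.3g} (toy units)")
\end{verbatim}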
All signal and background events are generated using {\sc MadGraph5}$\_$aMC@NLO \cite{Alwall:2014cvc} with $500$k events for each. It is necessary to apply some kinematic cuts to distinguish the signal from the relevant backgrounds; after suitable cuts are selected, the relevant backgrounds are suppressed. As suitable cuts, we can use the charged lepton pseudo-rapidity $\eta^{\ell}$, the charged lepton transverse momentum $p^\ell_T$ and the missing transverse energy $\slashed{E}_T$ for the final state of the process $\mu^-\gamma^*\,\rightarrow\,\nu\bar{\nu}\mu^-$. The signal analysis is performed with the kinematic cuts $|\eta^{\ell}| < 2.5$, $\slashed{E}_T > 200$ GeV and $p^\ell_T > 30$ GeV, denoted as the selected cuts. Besides, the following cuts, denoted as the basic cuts, are imposed at the generator level for the background processes: charged lepton pseudo-rapidity $|\eta^{\ell}| < 2.5$, charged lepton transverse momentum $p^\ell_T > 10$ GeV, photon pseudo-rapidity $|\eta^{\gamma}| < 2.5$, photon transverse momentum $p^\gamma_T > 10$ GeV and minimum distance between photons and leptons $\Delta R^{\gamma\ell}_{\text{min}} > 0.4$. In this paper, each coupling is considered separately in order to compare the effects of the anomalous couplings on the signal. The method applied for this is to assign a value to a certain anomalous coupling and set all other anomalous couplings equal to zero. The distribution of the charged lepton pseudo-rapidity $\eta^{\ell}$ for the signal ($C_{BB}/{\Lambda^4}=2$ TeV$^{-4}$, $C_{BW}/{\Lambda^4}=2$ TeV$^{-4}$, $C_{\widetilde{B}W}/{\Lambda^4}=2$ TeV$^{-4}$, $C_{WW}/{\Lambda^4}=2$ TeV$^{-4}$ couplings) and for the four different backgrounds is presented in Fig.~\ref{fig2}, where panels (a)-(d) correspond to the center-of-mass energies $\sqrt{s}=3$ TeV, $6$ TeV, $10$ TeV and $14$ TeV, respectively. It can be seen that the distribution of the charged lepton pseudo-rapidity lies in the range of about $\pm2.5$ and that the deviation between the signals and the SM background increases with the center-of-mass energy. In Fig.~\ref{fig3}, the distribution of the charged lepton transverse momentum is given for the signal ($C_{BB}/{\Lambda^4}=9$ TeV$^{-4}$, $C_{BW}/{\Lambda^4}=9$ TeV$^{-4}$, $C_{\widetilde{B}W}/{\Lambda^4}=9$ TeV$^{-4}$, $C_{WW}/{\Lambda^4}=9$ TeV$^{-4}$ couplings) and for the four different backgrounds. In these distributions, ordered from (a) to (d) according to the center-of-mass energies, varying deviations between the signals and the SM background are seen as the center-of-mass energy increases. The distribution of the missing transverse energy, shown from (a) to (d) according to the center-of-mass energies, is examined in Fig.~\ref{fig4} for the signal ($C_{BB}/{\Lambda^4}=1$ TeV$^{-4}$, $C_{BW}/{\Lambda^4}=1$ TeV$^{-4}$, $C_{\widetilde{B}W}/{\Lambda^4}=1$ TeV$^{-4}$, $C_{WW}/{\Lambda^4}=1$ TeV$^{-4}$ couplings) and for the four different backgrounds. There are different deviations between the signals and the SM background depending on the center-of-mass energy. The deviations between the signals and the backgrounds seen in Figs.~\ref{fig2}-\ref{fig4} indicate the necessity of improving the signals by suppressing the backgrounds with the $\eta^{\ell}$, $p^\ell_T$ and $\slashed{E}_T$ cuts. The cross-sections are given in Table~\ref{tab3} to examine the effects of the basic cuts and the selected cuts on the signals and the relevant backgrounds.
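The selection described above can be applied schematically as in the following sketch, which runs on synthetic arrays rather than on the actual MadGraph samples (the toy distributions are arbitrary and only illustrate the mechanics of the cut flow):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
n = 500_000                          # events, as in the generated samples
eta = rng.normal(0.0, 2.0, n)        # toy lepton pseudo-rapidity
pt = rng.exponential(120.0, n)       # toy lepton p_T in GeV
met = rng.exponential(250.0, n)      # toy missing transverse energy in GeV

# selected cuts: |eta| < 2.5, MET > 200 GeV, p_T > 30 GeV
passed = (np.abs(eta) < 2.5) & (met > 200.0) & (pt > 30.0)
print(f"selected-cut efficiency on this toy sample: {passed.mean():.3f}")
\end{verbatim}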
The low values of the $B_2/B_1$ percentages show that the selected cuts suppress the relevant backgrounds, while the high values of the $S_2/S_1$ percentages show that they cause almost no signal loss. Comparing the signal-to-total-background ratios for the total backgrounds $B_{tot_1}$ and $B_{tot_2}$ after the basic and the selected cuts, the ratio after the selected cuts improves significantly over that after the basic cuts. It is seen that the selected cuts suppress the relevant backgrounds and thus the signals become much more prominent than these backgrounds.
\begin{table}[H]
\centering
\caption{Cross-sections for the signals and the backgrounds according to the basic and selected cuts at 14 TeV muon collider.}
\label{tab3}
\begin{ruledtabular}
\begin{tabular}{llclcc}
\multirow{3}{*}{Signals} & Cross-sections & \multirow{3}{*}{$S_1/B_{tot_1}$} & Cross-sections & \multirow{3}{*}{$S_2/B_{tot_2}$} & $S_2/S_1$\\
 & with basic cuts & & with selected cuts & & [$\%$]\\
 & $S_1$ (pb) & & $S_2$ (pb) & & \\
\hline
$C_{BB}/{\Lambda^4}=5$ TeV$^{-4}$ & 28.700 & 11.426 & 28.630 & 236.670 & 99.7$\%$\\
$C_{BW}/{\Lambda^4}=5$ TeV$^{-4}$ & 2.486 & 0.990 & 2.440 & 20.170 & 98.1$\%$\\
$C_{\widetilde{B}W}/{\Lambda^4}=5$ TeV$^{-4}$ & 4.067 & 1.619 & 4.019 & 33.223 & 98.8$\%$\\
$C_{WW}/{\Lambda^4}=5$ TeV$^{-4}$ & 0.403 & 0.160 & 0.360 & 2.976 & 89.4$\%$\\
\hline
\multirow{3}{*}{Backgrounds} & Cross-sections & & Cross-sections & & $B_2/B_1$\\
 & with basic cuts & & with selected cuts & & [$\%$]\\
 & $B_1$ (pb) & & $B_2$ (pb) & & \\
\hline
$Z(\nu\bar{\nu})\mu^-$ & 0.06610 & & 0.00363 & & 5.5$\%$\\
$W^-(\ell\bar{\nu})\nu_\mu$ & 2.38300 & & 0.10880 & & 4.6$\%$\\
$Z(\nu\bar{\nu})\mu^-\gamma$ & 0.00064 & & 0.00012 & & 18.8$\%$\\
$W^-(\ell\bar{\nu})\nu_\mu\gamma$ & 0.06223 & & 0.00842 & & 13.5$\%$\\
 & $B_{tot_1}=2.51197$ & & $B_{tot_2}=0.12097$ & & \\
\end{tabular}
\end{ruledtabular}
\end{table}
The total cross-sections of the process $\mu^+\mu^-\,\rightarrow\,\mu^+\gamma^*\mu^-\,\rightarrow\,\mu^+ Z(\nu\bar{\nu})\mu^-$ as functions of the anomalous $C_{BB}/{\Lambda^4}$, $C_{BW}/{\Lambda^4}$, $C_{\widetilde{B}W}/{\Lambda^4}$ and $C_{WW}/{\Lambda^4}$ couplings at the muon collider with $\sqrt{s}=3$ TeV, $6$ TeV, $10$ TeV, $14$ TeV are presented in Fig.~\ref{fig5}. The total cross-section as a function of one anomalous coupling has been obtained by setting the other three couplings to zero. In addition, the selected cuts have been applied in the analysis of these total cross-sections. It is seen that the total cross-section for each anomalous coupling increases as the center-of-mass energy of the muon collider increases. In Fig.~\ref{fig6}, the total cross-sections for the anomalous $C_{BB}/{\Lambda^4}$, $C_{BW}/{\Lambda^4}$, $C_{\widetilde{B}W}/{\Lambda^4}$ and $C_{WW}/{\Lambda^4}$ couplings at the muon collider with $\sqrt{s}=14$ TeV are compared with each other. It is seen that the anomalous $C_{BB}/{\Lambda^4}$ coupling has the highest cross-section, followed in decreasing order by $C_{\widetilde{B}W}/{\Lambda^4}$, $C_{BW}/{\Lambda^4}$ and $C_{WW}/{\Lambda^4}$.
\section{Sensitivities on the anomalous neutral triple gauge couplings}
\label{Sec3}
The 95$\%$ C.L. limits are calculated using a $\chi^2$ test with systematic errors to probe the sensitivities of the anomalous $C_{BB}/{\Lambda^4}$, $C_{BW}/{\Lambda^4}$, $C_{\widetilde{B}W}/{\Lambda^4}$ and $C_{WW}/{\Lambda^4}$ couplings.
The $\chi^2$ function is defined by
\begin{eqnarray}
\label{eq.21}
\chi^2=\left(\frac{\sigma_{B_{tot}}-\sigma_{NP}}{\sigma_{B_{tot}}\sqrt{\left(\delta_{st}\right)^2+\left(\delta_{sys}\right)^2}}\right)^2
\end{eqnarray}
{\raggedright where $\sigma_{B_{tot}}$ is the cross-section of the total background only and $\sigma_{NP}$ is the cross-section in the presence of both new physics beyond the SM and the total background. $\delta_{st}=\frac{1}{\sqrt {N_{B_{tot}}}}$ and $\delta_{sys}$ are the statistical error and the systematic error, respectively. The number of events in the total backgrounds is given by $N_{B_{tot}}={\cal L}_{\text{int}} \times \sigma_{B_{tot}}$, where ${\cal L}_{\text{int}}$ is the integrated luminosity. Systematic uncertainties of various origins are included in the statistical analysis of the $\chi^2$ test \cite{Khoriauli:2008xza}. In this study, systematic uncertainties of $0\%$, $3\%$ and $5\%$ in the anomalous couplings are considered.} The 95$\%$ C.L. limits on the anomalous $C_{BB}/{\Lambda^4}$, $C_{BW}/{\Lambda^4}$, $C_{\widetilde{B}W}/{\Lambda^4}$ and $C_{WW}/{\Lambda^4}$ couplings through the process $\mu^+\mu^-\,\rightarrow\,\mu^+\gamma^*\mu^-\,\rightarrow\,\mu^+ Z(\nu\bar{\nu})\mu^-$ at the muon collider are given in Tables~\ref{tab4}-\ref{tab7} without and with systematic errors of $3\%$ and $5\%$. Sensitivities are evaluated for both the basic and the selected cuts using the center-of-mass energies and integrated luminosities of the muon collider given in Table~\ref{tab1}.
\begin{table}[H]
\caption{The 95\% C.L. limits on the anomalous $C_{BB}/{\Lambda^4}$ coupling with respect to center-of-mass energies and cuts for systematic errors of $0\%$, $3\%$ and $5\%$.}
\label{tab4}
\begin{ruledtabular}
\begin{tabular}{ccccc}
\multicolumn{2}{c}{} & \multicolumn{3}{c}{$C_{BB}/{\Lambda^4}$ (TeV$^{-4}$)} \\
\hline
$\sqrt{s}$ & Cuts & $\delta_{sys}=0\%$ & $\delta_{sys}=3\%$ & $\delta_{sys}=5\%$ \\
\hline \hline
\multirow{2}{*}{3 TeV} & Basic & [-1.10801; 1.10863] & [-6.89875; 6.89937] & [-8.90615; 8.90676]\\
 & Selected & [-0.46229; 0.46393] & [-1.20321; 1.20482] & [-1.54811; 1.54969]\\
\hline
\multirow{2}{*}{6 TeV} & Basic & [-0.20008; 0.19976] & [-1.86375; 1.86347] & [-2.40599; 2.40575]\\
 & Selected & [-0.09094; 0.08779] & [-0.37388; 0.37074] & [-0.48196; 0.47883]\\
\hline
\multirow{2}{*}{10 TeV} & Basic & [-0.05785; 0.05698] & [-0.69363; 0.69276] & [-0.89534; 0.89447]\\
 & Selected & [-0.02620; 0.02679] & [-0.14735; 0.14795] & [-0.19029; 0.19088]\\
\hline
\multirow{2}{*}{14 TeV} & Basic & [-0.02505; 0.02415] & [-0.35917; 0.35828] & [-0.46356; 0.46267]\\
 & Selected & [-0.01173; 0.01132] & [-0.07894; 0.07853] & [-0.10185; 0.10144]\\
\end{tabular}
\end{ruledtabular}
\end{table}
\begin{table}[H]
\caption{Same as in Table~\ref{tab4}, but for the anomalous $C_{BW}/{\Lambda^4}$ coupling.}
\label{tab5}
\begin{ruledtabular}
\begin{tabular}{ccccc}
\multicolumn{2}{c}{} & \multicolumn{3}{c}{$C_{BW}/{\Lambda^4}$ (TeV$^{-4}$)} \\
\hline
$\sqrt{s}$ & Cuts & $\delta_{sys}=0\%$ & $\delta_{sys}=3\%$ & $\delta_{sys}=5\%$ \\
\hline \hline
\multirow{2}{*}{3 TeV} & Basic & [-3.87037; 3.88206] & [-21.9068; 21.9476] & [-27.0370; 27.0874]\\
 & Selected & [-1.58588; 1.59006] & [-4.12688; 4.13005] & [-5.30988; 5.31228]\\
\hline
\multirow{2}{*}{6 TeV} & Basic & [-0.68456; 0.68465] & [-6.38510; 6.38475] & [-8.24600; 8.24535]\\
 & Selected & [-0.30673; 0.30639] & [-1.27744; 1.27711] & [-1.64823; 1.64790]\\
\hline
\multirow{2}{*}{10 TeV} & Basic & [-0.19644; 0.19731] & [-2.37658; 2.37730] & [-3.06829; 3.06891]\\
 & Selected &
[-0.08986; 0.09186] & [-0.50527; 0.50726] & [-0.65248; 0.65447]\\
\hline
\multirow{2}{*}{14 TeV} & Basic & [-0.08475; 0.08396] & [-1.23049; 1.22974] & [-1.58844; 1.58770]\\
 & Selected & [-0.03945; 0.03960] & [-0.26997; 0.27012] & [-0.34853; 0.34868]\\
\end{tabular}
\end{ruledtabular}
\end{table}
\begin{table}[H]
\caption{Same as in Table~\ref{tab4}, but for the anomalous $C_{\widetilde{B}W}/{\Lambda^4}$ coupling.}
\label{tab6}
\begin{ruledtabular}
\begin{tabular}{ccccc}
\multicolumn{2}{c}{} & \multicolumn{3}{c}{$C_{\widetilde{B}W}/{\Lambda^4}$ (TeV$^{-4}$)} \\
\hline
$\sqrt{s}$ & Cuts & $\delta_{sys}=0\%$ & $\delta_{sys}=3\%$ & $\delta_{sys}=5\%$ \\
\hline \hline
\multirow{2}{*}{3 TeV} & Basic & [-3.00010; 2.97063] & [-18.0441; 18.0444] & [-22.8804; 22.8963]\\
 & Selected & [-1.24170; 1.23166] & [-3.22038; 3.20997] & [-4.14156; 4.13087]\\
\hline
\multirow{2}{*}{6 TeV} & Basic & [-0.53481; 0.53214] & [-4.97509; 4.97262] & [-6.42320; 6.42085]\\
 & Selected & [-0.24046; 0.23703] & [-0.99641; 0.99302] & [-1.28516; 1.28181]\\
\hline
\multirow{2}{*}{10 TeV} & Basic & [-0.15361; 0.15305] & [-1.85152; 1.85099] & [-2.39021; 2.38970]\\
 & Selected & [-0.07014; 0.07139] & [-0.39368; 0.39493] & [-0.50834; 0.50959]\\
\hline
\multirow{2}{*}{14 TeV} & Basic & [-0.06532; 0.06610] & [-0.95784; 0.95860] & [-1.23668; 1.23743]\\
 & Selected & [-0.03110; 0.03047] & [-0.21065; 0.21003] & [-0.27184; 0.27121]\\
\end{tabular}
\end{ruledtabular}
\end{table}
\begin{table}[H]
\caption{Same as in Table~\ref{tab4}, but for the anomalous $C_{WW}/{\Lambda^4}$ coupling.}
\label{tab7}
\begin{ruledtabular}
\begin{tabular}{ccccc}
\multicolumn{2}{c}{} & \multicolumn{3}{c}{$C_{WW}/{\Lambda^4}$ (TeV$^{-4}$)} \\
\hline
$\sqrt{s}$ & Cuts & $\delta_{sys}=0\%$ & $\delta_{sys}=3\%$ & $\delta_{sys}=5\%$ \\
\hline \hline
\multirow{2}{*}{3 TeV} & Basic & [-10.2732; 10.3186] & [-74.9988; 74.5413] & [-81.2340; 80.8042]\\
 & Selected & [-4.14148; 4.14707] & [-10.7709; 10.7660] & [-13.8537; 13.8408]\\
\hline
\multirow{2}{*}{6 TeV} & Basic & [-1.78327; 1.78858] & [-16.5655; 16.5468] & [-21.3136; 21.2795]\\
 & Selected & [-0.79978; 0.79930] & [-3.33120; 3.33094] & [-4.29799; 4.29789]\\
\hline
\multirow{2}{*}{10 TeV} & Basic & [-0.51283; 0.51241] & [-6.19366; 6.19269] & [-7.99956; 7.99821]\\
 & Selected & [-0.23711; 0.23693] & [-1.32074; 1.32058] & [-1.70475; 1.70460]\\
\hline
\multirow{2}{*}{14 TeV} & Basic & [-0.21984; 0.22008] & [-3.20764; 3.20774] & [-4.14120; 4.14119]\\
 & Selected & [-0.10314; 0.10312] & [-0.70461; 0.70459] & [-0.90957; 0.90955]\\
\end{tabular}
\end{ruledtabular}
\end{table}
In order to compare the sensitivities of the anomalous couplings among the muon collider scenarios easily, the 95$\%$ C.L. limits without systematic error and with the selected cuts, given in Tables~\ref{tab4}-\ref{tab7}, are examined in Fig.~\ref{fig7}. The comparison reveals that the increase in the center-of-mass energies and integrated luminosities of the muon collider, as well as the use of the selected cuts, improves the sensitivity of the limits. The most sensitive limits are obtained with the selected cuts at the 14 TeV muon collider.
These most sensitive limits without systematic error are as follows:
\begin{eqnarray}
\label{eq.22}
C_{BB}/{\Lambda^4}=[-0.01173; 0.01132]\,\text{TeV}^{-4}\,,
\end{eqnarray}
\begin{eqnarray}
\label{eq.23}
C_{BW}/{\Lambda^4}=[-0.03945; 0.03960]\,\text{TeV}^{-4}\,,
\end{eqnarray}
\begin{eqnarray}
\label{eq.24}
C_{\widetilde{B}W}/{\Lambda^4}=[-0.03110; 0.03047]\,\text{TeV}^{-4}\,,
\end{eqnarray}
\begin{eqnarray}
\label{eq.25}
C_{WW}/{\Lambda^4}=[-0.10314; 0.10312]\,\text{TeV}^{-4}\,.
\end{eqnarray}
\section{Conclusions}
\label{Sec4}
The photon-induced process $\mu^+\mu^-\,\rightarrow\,\mu^+\gamma^*\mu^-\,\rightarrow\,\mu^+ Z(\nu\bar{\nu})\mu^-$ has been chosen to probe the $Z\gamma\gamma$ and $ZZ\gamma$ aNTGCs at the multi-TeV muon collider. In the analysis, cuts on the charged lepton pseudo-rapidity, the charged lepton transverse momentum and the missing transverse energy are applied to separate the signal from the relevant SM backgrounds. The distinction between signal and background is examined through the event distributions of these kinematic variables. In addition, it is concluded that the signal-to-background ratio increases with the selected cuts. Total cross-sections of the process for the anomalous couplings are presented for the different muon collider scenarios. The significance of the study is demonstrated by obtaining the $95\%$ C.L. sensitivity limits on the dimension-eight CP-conserving $C_{\widetilde{B}W}/{\Lambda^4}$ coupling and the CP-violating $C_{BB}/{\Lambda^4}$, $C_{BW}/{\Lambda^4}$, $C_{WW}/{\Lambda^4}$ couplings. Sensitivity limits are calculated for both the basic and the selected cuts for $0\%$, $3\%$ and $5\%$ systematic uncertainties in muon collider scenarios with different center-of-mass energies and integrated luminosities. In this way, it has been verified that the sensitivity of the limits increases with the selected cuts and, more importantly, that it improves significantly as the center-of-mass energy and integrated luminosity of the muon collider increase. If the most sensitive limits in Eqs.~\ref{eq.22}-\ref{eq.25} are compared with the latest and most sensitive experimental results from the LHC, there is an improvement of approximately 21, 16, 36, and 25 times in the sensitivities of the aNTGC couplings, respectively. Except for the muon collider scenario with a center-of-mass energy of 3 TeV and an integrated luminosity of 1 ab$^{-1}$, the other three scenarios provide limits beyond the experimental limits at the LHC. These results reveal that multi-TeV muon colliders have impressive potential for future collider studies in the post-LHC era, and the investigation of the photon-induced process adds a new perspective to the phenomenology of the muon collider.
\begin{acknowledgments}
The numerical calculations reported in this paper were partially performed at TUBITAK ULAKBIM, High Performance and Grid Computing Center (TRUBA resources).
\end{acknowledgments}
\section{Introduction}
Surface plasmon resonance (SPR) is one of the most promising optical techniques for sensing applications due to its high sensitivity, label-free detection as well as versatility in various fields such as physical \cite{Wang, Taylor}, chemical \cite{Homola} and bio-molecular sensing \cite{Homola, Jatschka, Tomyshev}. In this technique, the electron density oscillations at the metal-dielectric interface are excited by \textit{p}-polarized light at a specific angle/wavelength. The matching of the momentum of the incident light to that of these oscillations leads to the resonance \cite{KC}. The electromagnetic field near the metal-dielectric interface is strongly localized and therefore, any change in the refractive index of the surrounding medium alters the resonance condition, making SPR highly sensitive \cite{Yu}. In order to achieve SPR, various configurations have been widely reported, such as the prism based Kretschmann configuration \cite{KC}, metallic gratings \cite{Roh}, directional couplers based on dielectric waveguide to metal strip coupling, and metal coated waveguides and fibers. SPR sensors based on the Kretschmann configuration have been widely used for sensing of liquids, gases, bio-chemicals etc. The refractive index (RI) sensitivity of such sensors in the visible region is $\sim$ 5 $\mu m/RIU$. In order to enhance the sensitivity, several attempts were made by researchers, such as the integration of two-dimensional materials like graphene, tungsten disulfide ($WS_2$) and molybdenum diselenide ($MoSe_2$) with metallic thin films \cite{Luo, Wang1, Liu, Patil, Xu}. Luo \textit{et al.} investigated an atomically thin layer of $MoSe_2$ on a gold thin film and reported 36.3 $\%$ increased sensitivity compared to a standard gold film SPR sensor \cite{Luo}. Wang \textit{et al.} reported a sensitivity enhancement of 26.6 $\%$ by modification of the gold film with a $WS_2$ nano-sheet overlayer \cite{Wang1}. Xu \textit{et al.} proposed a hybrid structure of graphene-aluminum-graphene which is capable of increasing the sensitivity by 3.4 times compared to aluminum based sensors \cite{Xu}. Liu \textit{et al.} proposed a silver-gold bimetallic film to enhance the sensitivity of a graphene–barium titanate-based SPR biosensor by 15 $\%$ \cite{Liu}. Chabot \textit{et al.} demonstrated a sensitivity enhancement of 50 $\%$ using long-range surface plasmons for toxicity monitoring with living cells \cite{Chabot}. Although these sensors give better sensing performance compared to a single metal film, their fabrication requires a multi-step process. In this letter, we report that an ultra-high sensitivity can be realized by just optimizing the angle of incidence in a Kretschmann configuration based RI sensor. At the optimized incident angle, the phase matching condition is satisfied for two different wavelengths in the NIR region, resulting in two highly sensitive resonance dips in the reflection spectrum. Furthermore, both dips have opposite spectral shifts with the ambient refractive index (ARI) change, enhancing the differential spectral shift and hence the RI sensitivity. Dual resonance with opposite spectral shift has been widely reported in modal interferometers and long period gratings in optical fibers and waveguides, which is due to the dispersion turning point (DTP) on either side of which the group effective index difference of the participating modes has opposite sign \cite{SMT, GB, Li}.
However, unlike the DTP, in the considered structure the opposite spectral shift is due to the opposite dispersion behaviour of the phase matching condition at the two dips. Furthermore, the same sensing probe can be utilized to achieve ultra-high sensitivity for the detection of bio-chemicals as well as gases by just optimizing the angle of incidence.
\section{Modelling}
\begin{figure}[b]
\begin{center}
\includegraphics [width=8 cm]{Fig1.png}
\end{center}
\caption {Schematic diagram of the substrate/gold/ambient layered structure.}
\label{schematic}
\end{figure}
For the study of the SP behavior of the gold thin film, a layered structure is considered, which is shown schematically in Fig. \ref{schematic}. It consists of a thin gold film on a glass prism (substrate) and a sensing medium on its top. The refractive indices of the substrate, gold layer, and the analyte layer are denoted by $n_s$, $n_g$ and $n_a$, respectively. The thickness of the metal layer is denoted by $d$, which is taken as 45 nm. In the considered structure, a \textit{p}-polarized electromagnetic wave at an angle $\theta_i$ is launched from the substrate region, which excites the surface plasmon polariton (SPP) modes supported by the structure. When the phase matching condition is satisfied, i.e., when the wave vector of the incident beam matches the propagation constant of a mode, the incident wave gets coupled to that SPP mode. The equation governing the phase matching condition is given by
\begin{equation}
n_s\sin \theta_i = n_{eff}
\label{n_eff}
\end{equation}
where $n_{eff}$ is the effective refractive index of the SPP mode supported by the considered structure. Due to the power coupling from the incident wave to the SPP mode, a dip will appear in the reflection spectrum at the phase matching wavelength, for a fixed incident angle. With a change in the ARI, the phase matching condition is satisfied at a different wavelength, resulting in a shift of the reflection dip. The spectral shift of the reflection dip for a unit change in the ARI is termed the RI sensitivity. The reflection spectrum for the structure shown in Fig. \ref{schematic} can be obtained numerically using the following equation
\begin{equation}
R = \left|\frac{r_{sg}+r_{ga}\exp(2ikd)}{1+r_{sg} r_{ga}\exp(2ikd)} \right|^2
\label{reflectance}
\end{equation}
where $r_{sg}$ and $r_{ga}$ are the reflection coefficients at the substrate-gold interface and the gold-analyte interface, respectively, and \textit{k} is the propagation constant corresponding to a wavelength $\lambda$.
\section{Results and Discussion}
Since the phase matching condition and hence the sensitivity depend on the interaction of the modal fields with the ARI, we first analyze the modes supported by the structure shown in Fig. \ref{schematic}. The magnetic field ($H_y$) distributions of the SPP modes supported by the structure at a wavelength of $\lambda$ = 1.133 $\mu m$ and $n_a$ = 1.330 are plotted in Fig. \ref{1133nm_Hx}. Here the modes are obtained by solving Maxwell's equations using the method outlined in \cite{Ghatak2}. In our calculations, the wavelength dependence of the refractive index of fused silica is taken from the Sellmeier equation \cite{Ghatak}, and that of gold is taken from the Drude-Lorentz model \cite{Rakic}. The first mode (Fig. \ref{1133nm_Hx} (a)) having $(n_{eff})_R$ = 1.35827 is bounded in the ambient region and is oscillatory in the substrate whereas the second mode (Fig. \ref{1133nm_Hx} (b)) having $(n_{eff})_R$ = 1.49077 is bounded in both the substrate and ambient regions.
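Equation \ref{reflectance} is straightforward to evaluate numerically. A minimal Python sketch is given below; it uses a fixed substrate index and a simple Drude model with rough, commonly quoted gold parameters instead of the Sellmeier and Drude-Lorentz dispersions used in our actual calculations, so its output only illustrates the structure of the computation:
\begin{verbatim}
import numpy as np

def eps_gold(lam_um, wp_ev=9.0, gamma_ev=0.07):
    # rough Drude permittivity of gold (approximate parameters)
    w = 1.23984 / lam_um                  # photon energy in eV
    return 1.0 - wp_ev**2 / (w**2 + 1j * gamma_ev * w)

def reflectance(lam_um, theta_deg, n_s=1.45, n_a=1.330, d_um=0.045):
    k0 = 2 * np.pi / lam_um
    beta2 = (n_s * np.sin(np.radians(theta_deg))) ** 2
    eps = {"s": n_s**2, "g": eps_gold(lam_um), "a": n_a**2}
    kz = {m: k0 * np.sqrt(eps[m] - beta2 + 0j) for m in eps}
    def r_p(p, q):                        # p-polarized Fresnel coefficient
        return ((eps[q] * kz[p] - eps[p] * kz[q])
                / (eps[q] * kz[p] + eps[p] * kz[q]))
    ph = np.exp(2j * kz["g"] * d_um)      # round-trip phase in the gold film
    r = ((r_p("s", "g") + r_p("g", "a") * ph)
         / (1 + r_p("s", "g") * r_p("g", "a") * ph))
    return np.abs(r) ** 2

lams = np.linspace(0.9, 3.0, 500)
R = [reflectance(lam, 69.5) for lam in lams]
print(f"minimum reflectance {min(R):.3f} at {lams[int(np.argmin(R))]:.3f} um")
\end{verbatim}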
Figure \ref{modes} shows the spectral variation of the effective indices of the two SPP modes and of the substrate RI. It may be noted that the $n_{eff}$ corresponding to the second mode (Fig. \ref{1133nm_Hx} (b)) is higher than the substrate RI, and hence the phase matching condition cannot be satisfied for any angle of incidence. Furthermore, for a given wavelength, the phase matching condition can be satisfied for the first mode (Fig. \ref{1133nm_Hx} (a)) at an appropriate value of the angle of incidence. \begin{figure}[h!] \begin{center} \includegraphics [width=12 cm]{Fig2.png} \end{center} \caption {Modal field distribution of the magnetic field ($H_y$) for (a) the leaky and (b) the bound mode at a wavelength of 1.133 $\mu m$ and an ambient refractive index of 1.330.} \label{1133nm_Hx} \end{figure} For a given wavelength range, there is a minimum angle at which the momentum of the incident photon becomes equal to that of the SPP mode and the resonance occurs; \textit{e.g.}, for the resonant wavelength to lie in the visible range, the incident angle should be higher than 75$^\circ$ (Fig. \ref{angle_optimization}). This is the general case, in which only one resonant wavelength occurs. To study the dependence of the resonance behavior on the incident angle in detail, the phase matching condition is explored by varying the angle of incidence. Figure \ref{angle_optimization} shows the variation of the phase mismatch factor ($n_{eff} - n_s\sin \theta_i$) with the incident wavelength and the angle of incidence. Here the incident angle is varied in the range 65--85$^\circ$ in intervals of 0.1$^\circ$ for the wavelength range 0.800-3.000 $\mu m$ and an ARI of 1.330. At the incident angle of 85$^\circ$, a single resonance is observed at a wavelength of 0.630 $\mu m$, which shifts towards longer wavelengths with a decrease in the incident angle. It can also be observed that below a certain angle, Eq. \ref{n_eff} is satisfied at two wavelengths. For example, at an incidence angle of 69.5$^\circ$ the phase matching wavelengths are 1.133 and 2.755 $\mu m$. These two wavelengths shift in opposite directions with a further decrease in the incident angle and converge to a single wavelength at a certain angle. The variation of the phase matching wavelength with the ambient refractive index is also studied and plotted in Fig. \ref{neff_2ARI} for two ARI values, 1.330 and 1.335. It may be noted that in the NIR region, for a fixed angle, the shift of the phase matching wavelength with an ARI change is much higher than that in the visible region, and hence a very high sensitivity is expected in this region. \begin{figure} \begin{center} \includegraphics [width=7 cm]{Fig3.png} \end{center} \caption {Dispersion curve of the substrate refractive index and the modal effective refractive indices of the considered structure for an ambient refractive index of 1.330.} \label{modes} \end{figure} \begin{figure}[h] \begin{center} \includegraphics [width=8 cm]{Fig4.png} \end{center} \caption {Variation of the phase mismatch factor ($n_{eff} - n_s\sin \theta_i$) with the incident angle and wavelength for an ambient refractive index of 1.330. Here the solid red line corresponds to the phase matching points.} \label{angle_optimization} \end{figure} \begin{figure} \begin{center} \includegraphics [width=7 cm]{Fig5.png} \end{center} \caption {Variation of the phase matching wavelength with the incident angle for two different ARI values, 1.330 and 1.335.} \label{neff_2ARI} \end{figure} In order to estimate the sensitivity of the considered sensor structure, we have shown the reflection spectrum (calculated using Eq.
\ref{reflectance}) in Fig. \ref{reflection} for different ARI values. Here the incidence angle is set to 69.5$^\circ$ such that, in the ARI range 1.330-1.340, the phase matching condition is satisfied simultaneously at two different wavelengths near the turning point. The presence of dual resonance can be seen clearly in the reflection spectrum, appearing as two spectral dips with opposite shifts, similar to those reported for long-period fiber gratings \cite{SMT}. \begin{figure} \begin{center} \includegraphics [width=8 cm]{Fig6.png} \end{center} \caption {Reflection spectra of the considered structure at an incident angle of 69.5$^\circ$ for the ambient refractive index range 1.330-1.340.} \label{reflection} \end{figure} \begin{figure} \begin{center} \includegraphics [width=6 cm]{Fig7.png} \end{center} \caption {Phase matching curves for the ambient refractive index range 1.330-1.340 at an incident angle of 69.5$^\circ$.} \label{neff} \end{figure} For convenience, the resonant position in the lower wavelength region is denoted $\lambda_{R1}$ and that in the higher wavelength region $\lambda_{R2}$. For $n_a$ = 1.330, the resonances appear at 1.133 and 2.755 $\mu m$, and with an increase in the refractive index of the surroundings, $\lambda_{R1}$ is red-shifted whereas $\lambda_{R2}$ is blue-shifted. These two resonance wavelengths converge to a single one ($\lambda_0$) at 1.729 $\mu m$ for a refractive index of 1.340, which corresponds to the \textit{Turning Point}. From Fig. \ref{reflection}, it is also observed that, for a given change in the ARI, the spectral shift of the resonant positions is maximum near the \textit{Turning Point}. The opposite shift of the resonance dips is due to the opposite shift of the phase matching wavelengths, which can be observed clearly in Fig. \ref{neff}. Here the intersection points correspond to the phase matching / resonance positions. The opposite shift of the resonant wavelengths provides an increased differential spectral shift compared to a single resonant dip, leading to an enhanced sensitivity, which is given by \begin{equation} S = \left| \dfrac{\Delta (\lambda_{R1}-\lambda_{R2})}{\Delta n_{a}}\right| \label{1} \end{equation} \begin{figure}[h] \begin{center} \includegraphics [width=7 cm]{Fig8.png} \end{center} \caption {Variation of the sensitivity for ambient refractive index $n_a \pm 0.0001$ and an incident angle of 69.5 $^\circ$.} \label{sensitivity} \end{figure} The sensitivity calculated at the incident angle of 69.5 $^\circ$ for $n_a$ varying from 1.330 to 1.338 is plotted in Fig. \ref{sensitivity}, considering an ARI change of $\pm$ 0.0001 at each $n_a$. The maximum sensitivity, 290 $\mu m/RIU$, is obtained around an ARI of 1.3380, for which the resonance dips are closest to the \textit{Turning Point}. The sensitivity decreases as the resonance dips move away from the \textit{Turning Point}, and is 85 $\mu m/RIU$ for an ARI of $1.330$. However, a similarly high sensitivity can be obtained for any value of $n_a$ if the \textit{Turning Point} is brought to that $n_a$, which is achieved by tuning the incident angle. Figure \ref{DR} shows the phase matching curves for the ARI range 1.330-1.400. It is observed that for every refractive index in this range there is an incident angle at which a \textit{Turning Point} occurs.
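Before turning to the angle optimization, a short worked example of Eq. \ref{1} may be useful. The Python sketch below evaluates the differential sensitivity from the dip positions quoted above; it is a back-of-the-envelope check only, averaging over the full interval 1.330-1.340 rather than the $\pm 0.0001$ steps used for Fig. \ref{sensitivity}. \begin{verbatim}
# Dual-resonance sensitivity, Eq. (3): S = |Delta(l_R1 - l_R2) / Delta n_a|.
# Dip wavelengths (micrometres) read off the reflection spectra of Fig. 6.
dips = {1.330: (1.133, 2.755),   # two well-separated resonances
        1.340: (1.729, 1.729)}   # dips merged at the Turning Point
sep = {na: abs(l1 - l2) for na, (l1, l2) in dips.items()}
S_avg = abs(sep[1.340] - sep[1.330]) / (1.340 - 1.330)
print(f"S averaged over 1.330-1.340: {S_avg:.0f} um/RIU")
# -> ~162 um/RIU on average; the local value grows towards the Turning
#    Point, reaching the 290 um/RIU quoted above at n_a ~ 1.338 (Fig. 8).
\end{verbatim}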
A cubic polynomial fit gives the incident angle corresponding to the \textit{Turning Point} for $n_a$ in the range 1.330-1.400: \begin{equation} \theta_T = An_a^3 + Bn_a^2 + Cn_a + D \label{angle_turning_bio} \end{equation} where $\theta_T$ is the angle of incidence corresponding to the \textit{Turning Point}, $ A = 5303.03 $, $ B = -21114.72 $, $ C = 28137.37 $, and $ D = -12480.40 $. \begin{figure}[h] \begin{center} \includegraphics [width=7 cm]{Fig9.png} \end{center} \caption {Phase matching curves for the ambient refractive index, $n_a$, ranging over 1.330-1.400 and incident photons at various angles. Here the angle is optimized to obtain the optimum sensitivity for each $n_a$ in the dual resonance.} \label{DR} \end{figure} The idea of obtaining the \textit{Turning Point} for the wide range of refractive indices relevant to biological applications (1.330-1.400) is extended to the gaseous RI range as well. It is observed that the considered sensor, with the same geometrical parameters, possesses a \textit{Turning Point} in the gaseous refractive index regime too, but at a much lower incident angle. For the RI range 1.001-1.009, the optimized incident angle is 44.7$^\circ$, which exhibits all the features discussed above for the biological RI range. The calculated reflectivity at the incident angle of 44.7 $^\circ$ is plotted in Fig. \ref{ref_gas}. The sensitivity is higher than in the biological RI range, varying from 95 $\mu m/RIU$ to 460 $\mu m/RIU$ for ARI values of $1.000 \pm 0.0001$ and $1.009 \pm 0.0001$, respectively. In order to obtain the maximum sensitivity over the whole refractive index range in the gaseous regime, the incident angle should be closest to the \textit{Turning Point}, for which a linear fit gives \begin{equation} \phi_T = Pn_a + Q \label{angle_turning_gas} \end{equation} where $\phi_T$ is the angle of incidence corresponding to the \textit{Turning Point} for $n_a$ in the range 1.000-1.010, $ P = 63.5519 $, and $ Q = -19.6419 $. \begin{figure} \begin{center} \includegraphics [width=8 cm]{Fig10.png} \end{center} \caption {Reflection spectra at the optimized incident angle of 44.7 $^\circ$ with the ambient refractive index varying from 1.000 to 1.010.} \label{ref_gas} \end{figure} For some bio-applications, measurand molecules bind to the surface to the extent of a few nanometers, which changes the refractive index at the surface and causes a shift of the resonance dip. This shift facilitates the sensing of the measurand specimen. For the calculation of the surface sensitivity, considering a protein layer with a refractive index of 1.45, the spectral shift is found to be 38 nm to 56 nm for binding thicknesses of 1 nm to 5 nm, respectively. \section{Conclusion} In conclusion, we have shown the existence of a \textit{Turning Point} in the phase matching condition for surface plasmons on a gold thin film by optimizing the incident angle. Around the \textit{Turning Point}, the phase matching condition is satisfied at two different wavelengths in the NIR region, resulting in a dual resonance dip in the reflection spectrum. The two dips have opposite spectral shifts with the ARI change, thereby increasing the differential shift and hence the sensitivity. The sensitivity is maximum closest to the \textit{Turning Point}, the position of which can be tailored for different RI ranges.
For the optimized parameters, the maximum sensitivity of the sensor is calculated to be 290 $\mu m/RIU$ for the ARI range 1.330-1.338 and 460 $\mu m/RIU$ for the ARI range 1.000-1.010. The considered sensor is easy to fabricate and is versatile, sensing biological as well as gaseous specimens with ultra-high sensitivity simply by changing the angle of incidence. The surface sensitivity is also found to be very high, with shifts of 38-56 nm for binding thicknesses of 1-5 nm. \section{Acknowledgement} The work was financially supported by the Science and Engineering Research Board, Government of India through projects PDF/2017/002679 and EMR/2016/007936.
\section{Introduction} Despite a rather low elemental abundance of $\sim 3\times 10^{-7}$ (Asplund et al. 2009), phosphorus is one of the main biogenic elements, present in all life forms on Earth. As such, phosphorus-bearing compounds, in particular their P--O bonds, play a key role in many biochemical and metabolic processes in living systems (see e.g. Mac\'{\i}a et al. 2005 for a review). In our Solar System, the presence of phosphorus has recently been reported in comet 67P/Churyumov-Gerasimenko (Altwegg et al. 2016), although the nature of the actual carriers remains to be identified. Phosphorus-bearing compounds appear to be rather ubiquitous in meteorites (Mac\'{\i}a et al. 2005). A theoretical study by Thorne et al. (1984), based on laboratory experiments, predicted that phosphorus monoxide (PO) should be the most abundant P-bearing molecule in molecular clouds, hence the main reservoir of phosphorus in the gas phase. PO was detected for the first time by Tenenbaum et al. (2007) towards the evolved star VY CMa. The P-bearing species HCP, PH$_3$, the CP and CCP radicals, PO, and PN have been identified around the evolved stars IRC+10216 (see e.g. Agundez et al. 2007 for a review) and IK Tau (De Beck et al. 2013). PO was found to be as abundant as PN in the envelope of VY CMa, a fact which led the authors to propose that these species are formed in shocks in the circumstellar envelope; they also concluded that PO and PN are the main reservoirs of phosphorus in the gas phase. PN was first detected towards a few high-mass star forming regions: Ori(KL) (Ziurys 1987), W51M and SgrB2 (Turner \& Bally 1987). A systematic search for PO by Matthews, Feldman \& Bernath (1987) yielded only upper limits in the massive star-forming regions Ori(KL), SgrB2 and DR21(OH). Recently, Fontani et al. (2016) observed a sample of 26 massive cores at various evolutionary stages, from prestellar to ultra-compact HII regions, and reported the detection of the PN $J$=2--1 line in about 30\% of their sample. Rivilla et al. (2016) reported the first detection of PO towards high-mass star forming regions and found that phosphorus seems strongly depleted from the gas phase. The first evidence of P-bearing species in low-mass star forming regions was provided by Yamaguchi et al. (2011), who reported the tentative detection of the PN $J$=2--1 transition towards the shock positions B1 and B2 in the outflow of the low-mass Class 0 protostar L1157-mm (d = 250 pc). Until now, this was the first and only P-bearing molecule tentatively detected in a solar-type star forming region. Given the importance of phosphorus for prebiotic chemistry and its presence in the early stages of our own Solar System, we have carried out a systematic search for phosphorus-bearing molecules in solar-type star forming regions, in the framework of the Large Program dedicated to Astrochemical Surveys At IRAM (ASAI; Lefloch \& Bachiller, in prep), with the 30m telescope. In this Letter, we present the results of our study towards the outflow shock L1157-B1 and the driving protostar L1157-mm. No emission was detected towards the protostar. Towards L1157-B1, we have detected the emission of various rotational transitions of PN and, for the first time, of PO. A search for more complex P-bearing species (e.g. PH$_3$) yielded only negative results.
After presenting the observational results of our systematic search (Section 2), we derive the physical conditions and molecular abundances for both species, and discuss the implications for their formation and for shock chemistry. \section{Observations} The observations of L1157-B1 and L1157-mm were acquired with the IRAM 30m telescope at Pico Veleta (Spain), during several runs in 2011 and 2012. The observed positions of L1157-B1 and L1157-mm are $\alpha_{J 2000} =$ 20$^{\text h}$ 39$^{\text m}$ 10.$^{\text s}$2, $\delta_{J 2000} =$ +68$^{\circ}$ 01$^{\prime}$ 10$^{\prime\prime}$ and $\alpha_{J 2000} =$ 20$^{\text h}$ 39$^{\text m}$ 06.$^{\text s}$3, $\delta_{J 2000} =$ +68$^{\circ}$ 02$^{\prime}$ 15$^{\prime\prime}$.8, respectively. The survey was carried out using the broad-band EMIR receivers at 3~mm (80 -- 116 GHz), 2~mm (128 -- 173 GHz), and 1.3~mm (200 -- 272 GHz). Fast Fourier Transform Spectrometers (FTS) were connected to the EMIR receivers, providing a spectral resolution of 195 kHz. The high-frequency part of the 1.3mm band (260--272 GHz) was observed with the WILMA autocorrelator, at 2 MHz resolution. The final velocity resolution of the FTS data was degraded to $1\kms$. The observations were carried out in Wobbler Switching Mode, with a throw of $3^{\prime}$, in order to ensure a flat baseline across the observed spectral bandwidth (4 to 8 GHz, depending on the receiver). The data reduction was performed using the GILDAS/CLASS90 package\footnote{http://www.iram.fr/IRAMFR/GILDAS/}. The line intensities are expressed in units of antenna temperature corrected for atmospheric attenuation and rearward losses ($T_A^{\star}$). For the ASAI data, the calibration uncertainties are typically 10, 15, and $20\%$ at 3mm, 2mm, and 1.3mm, respectively. For the subsequent analysis, fluxes were expressed in main beam temperature units ($T_{mb}$). The telescope and receiver parameters (main-beam efficiency B$_{\rm eff}$, forward efficiency F$_{\rm eff}$, half power beam width HPBW) were taken from the IRAM webpage\footnote{http://www.iram.es/IRAMES/mainWiki/Iram30mEfficiencies}. We used the CASSIS software\footnote{http://cassis.irap.omp.eu} for the line identification. The spectroscopic properties and observational parameters of the detected lines are summarized in Table~1. We show in Fig.~1 a montage of the PO and PN lines detected towards L1157-B1. \section{Results} \begin{figure} \begin{center} \includegraphics[width=1.0\columnwidth]{PN-rev.eps} \includegraphics[width=1.0\columnwidth]{PO-rev.eps} \caption[]{Montage of the PN and PO lines detected towards the protostellar shock L1157-B1. The red curve represents the best fit obtained from the PO and PN line radiative transfer analysis (see Sect.~4). } \end{center} \end{figure} \subsection{PN} We used the spectroscopic parameters provided by the CDMS for the phosphorus nitride molecule (Cazzoli et al. 2006). The four PN rotational transitions $J$=2--1, 3--2, 5--4 and 6--5 fall into the ASAI bands (see Table~1). No emission at all was detected towards L1157-mm (see the $J$=2--1 spectrum in red in the bottom panel of Fig.~2). By contrast, all four transitions were detected towards L1157-B1, with SNRs between 4 and 10. We confirm the previous detection of the $J$=2--1 line by Yamaguchi et al. (2011) and report the detection of the $J$=3--2, 5--4 and 6--5 transitions, from levels with upper energies $E_{\rm up}$ up to $47\K$.
The spectra display linewidths in the range 4.5--$6.2\kms$, similar to the values measured for several other molecular species (see Codella et al. 2010), which testifies to the shock origin of the emission. The $J$=2--1 and 3--2 lines are detected with a typical intensity of 30~mK. The SNR in the wings of the $J$=2--1 and $J$=3--2 line profiles is high enough to constrain the slope of the intensity distribution as a function of velocity, following the approach of Lefloch et al. (2012) (see also G\'omez-Ruiz et al. 2015). The intensity profile is well described by an exponential law $\propto \exp(v/v_0)$ with $v_0\approx 4.4\kms$ (see Fig.~2). As discussed by Lefloch et al. (2012) and G\'omez-Ruiz et al. (2015), this slope is the specific signature of the outflow cavity associated with L1157-B1 ($g_2$). A blueshifted wing is detected up to $\approx -20\kms$ in the PN $J$=2--1 line profile (see Fig.~2), and we find a very good match with the SiO $J$=2--1 line profile obtained at a similar angular resolution of $27\arcsec$ with the IRAM 30m telescope. This suggests that the PN and SiO spatial distributions are very similar, and that PN comes from the region where the impact of the shock on the surrounding material is violent, with velocities larger than $\approx 25\kms$. This provides us with an estimate of the size of the PN emitting region, $\approx 18\arcsec$. \begin{figure} \includegraphics[width=0.9\columnwidth]{compar_PN_SiO_PO_v5.eps} \caption[]{From top to bottom: (a) Spectral profile of the PN $J$=2--1 line displayed on a lin-log scale. We have superposed in red (blue) a fit to the spectral slope of the type $T_{\rm A}^{*}(v)\propto$ exp(v/v$_0$) with $v_0$= 4.4$\kms$ (2.2$\kms$), corresponding to the signature of component $g_2$ ($g_3$) of the outflow. (b) Superposition of the PN $J$=2--1 (solid) and SiO $J$=2--1 (dashed red) line profiles. A scaling factor of 0.013 was applied to SiO $J$=2--1 so as to match the PN $J$=2--1 emission peak. (c) Comparison of the PN $J$=2--1 (solid) and PO 109.206 GHz (dashed red) lines. (d) Comparison of the PN $J$=2--1 line emission towards L1157-B1 (solid) and L1157-mm (dashed red). The dashed vertical line marks the ambient cloud velocity $v_{lsr}$= $+2.6\kms$.} \end{figure} \subsection{PO} We searched for all PO transitions in the ASAI bands. We used the spectroscopic parameters for phosphorus monoxide provided by the CDMS database (Kawaguchi et al. 1983, Bailleux et al. 2002). In total, twelve PO transitions with $A_{ij} \geq 10^{-5}\smu$ lie between 80 and 272 GHz. All the transitions fulfilling this criterion in the 3mm and 1.3mm bands were detected. This is the first detection of PO in a solar-type star forming region. Overall, the lines are rather weak, with typical intensities of 5--8 mK. In the 2mm band, two line doublets have $A_{ij}$ larger than $10^{-5}\smu$. Unfortunately, the higher rms noise in that part of the spectrum (5 mK per interval of $1\kms$) permits a marginal detection of only one doublet. As a result, the fluxes of the 2mm transitions are rather uncertain, and we did not take them into account in the subsequent analysis to determine the excitation conditions of the PO gas. The large number of detected transitions (10) makes us confident in the identification of PO in the line spectrum of L1157-B1. The PO line emission peak is located close to $\approx -2\kms$, i.e. it is shifted from PN by $\sim 2$--$3\kms$ (see Fig.~2).
The velocity difference between the PO and PN emission peaks suggests that the two species form in different regions of the shock. We note that the full width at zero intensity (FWZI) of PO is approximately $10\kms$, narrower than that of PN ($20\kms$). \begin{table*} {\small \caption{Spectroscopic and observational parameters of the molecular transitions of PO and PN observed towards L1157-B1, whose Einstein coefficients of spontaneous emission A$_{ij}$ are larger than $10^{-5}\smu$. Uncertainties on the line parameters are given in brackets. Line intensity uncertainties are measured in a velocity interval of $1\kms$. Upper limits are estimated at the $3\sigma$ level.} \label{tab:PN} \begin{tabular}{lcclccrrrrrr} \hline \multicolumn{3}{c}{Transition} & Frequency &$E_{up}$&HPBW &F$_{eff}$ & B$_{eff}$& $\int T_{A}^{*}dv$ & $V$ & $\Delta V$ & $T_{A}^{*}$ \\ & & & (MHz) & (K) & ($\arcsec$) & & & (mK $\kms$) & ($\kms$)& ($\kms$) & (mK) \\ \hline \multicolumn{3}{l}{PN} & & & & & & & & \\ \multicolumn{3}{l}{2 -- 1} & 93979.7689 & 6.8 & 26.2 & 0.95 & 0.80 & 148(13)& --0.24(0.25)& 5.97(.70) & 23.3(3.0) \\ \multicolumn{3}{l}{3 -- 2} &140967.6921 & 13.5 & 17.5 & 0.93 & 0.74 & 186(13)& --0.67(0.23)& 6.27(.54) & 27.8(3.4) \\ \multicolumn{3}{l}{5 -- 4 } &234935.2619 & 33.8 & 10.5 & 0.91 & 0.58 & 43(9) & --0.62(.48) & 4.31(.88) & 9.4(1.5) \\ \multicolumn{3}{l}{6 -- 5} &281914.2033 & 47.4 & 8.7 & 0.88 & 0.49 & 28(9) & --1.47(.97) & 4.61(2.05)& 5.6(2.5) \\ \hline \multicolumn{3}{l}{PO ${}^2\Pi_{1/2}$ }& & & & & & & & \\ J -- (J -- 1) & Parity & F--(F-1)& & & & & & & & & \\ \cline{1-3} & & & & & & & & & \\ 5/2 -- 3/2 & e & 3 -- 2 & 108998.445 & 8.4 & 22.6 & 0.95 & 0.79 & 63.2(7.0)&--2.18(.37) & 7.38(.94) & 8.0(1.1) \\ 5/2 -- 3/2 & e & 2 -- 1 &109045.040 & 8.4 & 22.6 & 0.95 & 0.79 & 38.5(7.0)&--3.19(.73) & 7.80(1.60) & 4.6(1.2) \\ 5/2 -- 3/2 & f & 3 -- 2 &109206.200 & 8.4 & 22.5 & 0.95 & 0.79 & 27.3(6.0)&--2.34(.37) & 5.63(.84) & 7.9(1.3) \\ 5/2 -- 3/2 & f & 2 -- 1 &109281.189 & 8.4 & 22.5 & 0.95 & 0.79 & 24.5(7.0)& 0.20(.63) & 5.18(1.90) & 4.5(1.1) \\ 7/2 -- 5/2 & f & 4 -- 3 &152656.979 & 15.7 & 16.1 & 0.93 & 0.72 & 118.0(19) & --1.34(.26) & 5.60(.99) & 19.8(5.4) \\ 7/2 -- 5/2 & f & 3 -- 2 &152680.282 & 15.7 & 16.1 & 0.93 & 0.72 & 66.6(17) & --1.95(1.04) & 5.60(.00) & 11.2(5.0) \\ 7/2 -- 5/2 & e & 4 -- 3 &152855.454 & 15.7 & 16.1 & 0.93 & 0.72 & $<$35 & -- & -- & $<$15.0 \\ 7/2 -- 5/2 & e & 3 -- 2 &152888.128 & 15.7 & 16.1 & 0.93 & 0.72 & $<$35 & -- & -- & $<$15.0 \\ 11/2 -- 9/2 & f & 6 -- 5 &239948.978 & 36.7 & 10.3 & 0.91 & 0.58 & 51.1(6.0)& --1.54(.30)& 5.06(.65) & 10.0(1.9) \\ 11/2 -- 9/2 & f & 5 -- 4 &239958.096 & 36.7 & 10.3 & 0.91 & 0.58 & 47.1(6.5)& --1.45(.35)& 4.86(.79) & 9.1(1.9) \\ 11/2 -- 9/2 & e & 6 -- 5 &240141.054 & 36.7 & 10.2 & 0.91 & 0.58 & 14.5(5.3)& --3.04(.43) & 2.25(.98) & 6.2(2.2) \\ 11/2 -- 9/2 & e & 5 -- 4 &240152.530 & 36.7 & 10.2 & 0.91 & 0.58 & 28.5(9.0)& --4.89(.85) & 4.42(1.90)&6.1(2.2) \\ \hline \multicolumn{3}{l}{PH$_3$} & & & & & & & & & \\ \multicolumn{3}{l}{$1_0$ -- $0_0$ } & 266944.662 & 12.8 & 9.2 & 0.89 & 0.52 & $<$32 & - & - & $<$9.6(*) \\ \hline \end{tabular} (*) per interval of 2 MHz ($2.25\kms$) } \end{table*} \subsection{PH$_3$} We used the spectroscopic parameters for PH$_3$ provided by the CDMS database (M\"uller et al. 2013). All the rotational transitions of PH$_3$ accessible in the ASAI bands are characterized by very low Einstein coefficients $A_{ij}$, typically less than $10^{-7}\smu$, and very high upper level energies $E_{up}$ (typically higher than $60\K$).
The {\em only} transition with favourable excitation conditions is the $1_0$--$0_0$ at 266.944662 GHz, which has $E_{up}$= $12.8\K$ and $A_{ij}$= $3.808\times 10^{-4}\smu$. We failed to detect this transition down to an rms of 3.2 mK ($\rm T_{A}^{*}$) in an interval of 2 MHz. Adopting a typical linewidth of $5\kms$, we obtain a $3\sigma$ upper limit of 32 mK$\kms$ (in $T_{A}^{*}$) on the line flux. A search for other P-bearing molecules yielded only negative results. \section{Molecular abundances} Plateau de Bure observations by Gueth et al. (1998) indicate a typical, round size of $18\arcsec$ for the SiO $J$=2--1 line emission, centered close to the nominal position of L1157-B1. Observations with the IRAM Plateau de Bure interferometer at $3\arcsec$ resolution reveal an inhomogeneous structure with three compact gas clumps in L1157-B1, named ``B1a-b-c'', with a typical size of $4\arcsec$ (Benedettini et al. 2013). We have considered two cases in the derivation of the physical conditions and abundances of PO and PN: a) the phosphorus emission arises from the whole shock region detected in SiO $J$=2--1 (size= $18\arcsec$), as suggested in Sect. 3.2; b) the phosphorus emission arises from one of the compact clumps present in the SiO shock region (size= $4\arcsec$). We adopted a linewidth $\Delta V$= $6\kms$, in good agreement with the value obtained from a Gaussian fit to the PO and PN line profiles of the $J$=2--1 and 3--2 transitions. The corresponding fluxes are listed in Table~1. \subsection{PN} We derived the physical conditions in the PN gas from an analysis in the Large Velocity Gradient (LVG) approximation with the radiative transfer code MADEX. We used the PN-He collisional coefficients from Tobola et al. (2007), scaled by a factor of 1.37 to take into account the difference in mass of H$_2$. We first consider case a) for the origin of the PN emission. The best-fit solution is a gas temperature $T_{\rm kin}$= $60\K$, a density n($\htwo$)= $9\times 10^4\cmmt$, and a source-averaged column density N(PN)= $9.0\times 10^{11}\cmmd$. The fit to the PN transitions is superposed in red on the line spectra in Fig. 1. A minimum-$\chi^2$ analysis shows that the best-fitting solutions are obtained for N(PN)= $(9.0\pm 1.0)\times 10^{11}\cmmd$; the kinetic temperature is not well constrained and solutions in the range 40--$80\K$ are possible. By contrast, the density is well constrained, in the range (0.5-1.0)$\times 10^5\cmmt$. We note that these physical conditions are consistent with the gas kinetic temperature and density previously determined in the L1157-B1 cavity from a CO and CS multi-transition analysis (Lefloch et al. 2012; G\'omez-Ruiz et al. 2015). Lefloch et al. (2012) estimated a CO gas column density of $1.0\times 10^{17}\cmmd$ for the L1157-B1 outflow cavity. Adopting a standard abundance ratio [CO]/[$\htwo$]= $10^{-4}$, we derive a PN abundance relative to \htwo\ of [PN]$\simeq 0.9\times 10^{-9}$. This value is in reasonable agreement with the previous determination of Yamaguchi et al. (2011): (0.2-0.6)$\times 10^{-9}$. We now consider case b), i.e. the alternative situation where PN arises mainly from one of the shocked gas clumps reported by Benedettini et al. (2013). We then obtain solutions of the type N(PN)= $4\times 10^{13}\cmmd$, $n(\htwo)$= $1\times 10^4\cmmt$, $T$= $55\K$. This would result in a typical abundance [PN]$\simeq 4\times 10^{-8}$.
Such low densities are hard to reconcile with the overall density structure of L1157-B1 derived by G\'omez-Ruiz et al. (2015), or even with the density estimates obtained towards the compact clumps reported by Benedettini et al. (2013). Our analysis therefore favors case a): PN arises from an extended region in the shock, close to the apex of the outflow cavity, where efficient grain sputtering is taking place. \subsection{PO} Since no collisional coefficients are available for PO-\htwo\ collisions, we carried out a simple LTE analysis using CASSIS. The column density and the excitation temperature were determined from a best fit ($\chi^2$) analysis of the 5/2--3/2 transitions at 3mm and 1.3mm. We checked a posteriori that the lines are optically thin. Proceeding as above, we first assumed a size of $18\arcsec$ for the PO emitting region. The best-fit solution is obtained for a rotational temperature $T_{\rm rot}$= $12.0\pm 0.9 \K$ and a source-averaged column density N(PO)= $(2.3\pm 0.4)\times 10^{12}\cmmd$, resulting in a gas-phase abundance [PO]$\simeq$ $2.5\times 10^{-9}$. This is about 3 times the abundance of PN. The fit to the PO transitions, with a fixed full width at half maximum (FWHM) of $5.5\kms$, $v_{lsr}$= $-2.5\kms$, and the above parameters, is superposed in red on the line spectra in Fig.~1. As for PN, we also consider the case where the PO emission arises from a compact shock region. Not surprisingly, a larger column density is then required to account for the PO emission: typically $3\times 10^{13}$ and $1.3\times 10^{14}\cmmd$ for source sizes of $4\arcsec$ and $2\arcsec$, respectively. This corresponds to PO abundances of [PO]= $3\times 10^{-8}$ and $1.3\times 10^{-7}$, respectively. \subsection{PH$_3$} We used CASSIS to obtain an upper limit on the abundance of phosphine from the non-detection of the ground transition $1_0$--$0_0$ at 266.944662 GHz. We adopted a typical excitation temperature of $10\K$, a linewidth of $5\kms$, and a typical source size of $18\arcsec$. Taking into account the rms of 3 mK, comparison with the ASAI data shows that N(PH$_3$) has to be less than $\approx 10^{12}\cmmd$, which results in an upper limit of $10^{-9}$ on the phosphine abundance in the gas phase. In conclusion, a careful determination of the abundances of P-bearing species in the gas phase shows that they contain {\em only} a few percent of the total elemental phosphorus. \section{Phosphorus chemistry in the L1157-B1 shock} In Section 3, we showed that the emission from PN and PO arises from the L1157-B1 outflow cavity, which can be described by a C-type shock (Lefloch et al. 2012; Holdship et al. 2016). We therefore investigate the origin of the PO and PN emission in this region by means of the chemical gas-grain model UCL\_CHEM (Viti et al. 2004a) coupled with the parametric C-shock model of Jim\'enez-Serra et al. (2008). We use the same approach as Viti et al. (2011), where we modelled the ammonia and water emission across the same region. We note that a similar approach was used by Aota \& Aikawa (2012) to reproduce the previous observations of PN and PO performed by Yamaguchi et al. (2011) towards the same outflow cavity. We stress that our new observational constraints lead to different conclusions about the history of the region and the shock parameters. This point is addressed specifically in Sect.~5.3.
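For reference, the abundance bookkeeping of Sect.~4, which any shock model must reproduce, can be restated in a few lines. The Python sketch below is purely illustrative (it is not part of the analysis pipeline) and simply recomputes the quoted abundances from the column densities. \begin{verbatim}
# Illustrative restatement of the Sect. 4 abundance bookkeeping:
# N(H2) follows from N(CO) = 1e17 cm^-2 (Lefloch et al. 2012)
# with the standard ratio [CO]/[H2] = 1e-4.
N_H2 = 1.0e17 / 1.0e-4                            # = 1e21 cm^-2
N = {'PN': 9.0e11, 'PO': 2.3e12, 'PH3': 1.0e12}   # cm^-2; PH3 is an upper limit
P_solar = 2.57e-7                                 # solar elemental P abundance
for sp, col in N.items():
    print(f"[{sp}]/[H2] ~ {col / N_H2:.1e}")
total = sum(N.values()) / N_H2
print(f"summed P in these species: {100 * total / P_solar:.0f}% of solar")
# -> ~1e-9 per species and only a few percent of solar phosphorus,
#    consistent with the depletion factor of ~100 discussed below.
\end{verbatim}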
We study the evolution of phosphorus-bearing species in a one-dimensional C-shock, with the specific aim of determining whether PO and PN are mainly formed and destroyed in the gas phase prior to the shock passage, on the grains, or during the shock event(s). Moreover, as in Viti et al. (2011), our model is able to determine whether the differences in the line profiles discussed above may be explained by differences in abundances at different velocities. \subsection{Modelling} A detailed description of UCL\_CHEM coupled with the shock model can be found in Viti et al. (2011). Here we briefly summarize the main characteristics of the code. The code is run in two phases: Phase I forms a dense core out of a diffuse medium, starting from an essentially atomic gas. We adopt an initial density of 100 cm$^{-3}$ for the diffuse medium. During this phase, gas-phase chemistry, freeze-out onto dust particles, and subsequent surface processing occur. The sticking efficiency for all species is assumed to be 100\%, but the rate of depletion is a function of density (in a similar manner as in Rawlings et al. 1992). The density at the end of Phase I is a free parameter, called from now on the pre-shock density. The second phase computes the time-dependent chemical evolution of the gas and dust during the passage of a shock. The model includes both thermal desorption, due to the dust being heated by the presence of the outflow, and sputtering of the icy mantles once the dynamical age across the C-shock has reached the ``saturation timescale'', as in Jim\'enez-Serra et al. (2008). The saturation timescale corresponds to the time when most of the ices have been injected into the gas phase due to the passage of the shock (Jim\'enez-Serra et al. 2008). Such timescales are inversely proportional to the density of the gas, and for pre-shock \htwo\ densities of $10^4$--$10^5\cmmt$ they range between 10 and 100 years, i.e. factors of 10--100 shorter than the typical dynamical ages of young molecular outflows ($10^3\yr$). Our gas-phase chemical network is taken from UMIST 12\footnote{http://udfa.ajmarkwick.net/} and is augmented with updates from the KIDA database\footnote{http://kida.obs.u-bordeaux1.fr/}. The surface network comprises mainly hydrogenation reactions. In particular, we note that neither UMIST 12 nor KIDA contains a network for PH$_3$. Rather than creating an {\it ad hoc} set of reactions for this species, we make the assumption that it is preferentially formed on the grains by hydrogenation, and that PH$_2$ can therefore act as a proxy for PH$_3$. Of course, this also means that we assume the gas-phase destruction routes of PH$_3$ and PH$_2$ to be similar. In all our figures we therefore present PH$_2$ as a proxy for PH$_3$. We ran a small grid of models where we varied: \begin{itemize} \item the pre-shock density; we have adopted two possible values for the total hydrogen nuclei density n(H + 2$\times$H$_2$) based on previous studies of this cavity: 10$^4$ and 10$^5\cmmt$; \item the shock velocity, from 20 to $40\kms$; for each pre-shock density and shock velocity, the maximum temperature of the neutral fluid attained within the shock is taken from Figures 8b and 9b of Draine et al. (1983); \item the initial elemental abundance of phosphorus: while its abundance in the diffuse, warm ISM is found to be solar (e.g.
Jenkins, Savage \& Spitzer 1986), all studies of phosphorus-bearing species in star forming regions so far have found that the initial elemental abundance of phosphorus needs to be depleted by up to a factor of 100 in order to match the observations (Aota \& Aikawa 2012; Fontani et al. 2016; Rivilla et al. 2016). We have therefore varied this parameter, adopting values from solar (2.57$\times$10$^{-7}$) down to a depletion factor of 100; \item the duration of the pre-shock phase (Phase I). We have considered a short-lived and a long-lived scenario, to investigate the impact of different initial gas compositions resulting from different cloud ages. In the first case, the core is subjected to the passage of a shock as soon as it reaches the pre-shock density; in the second case, the core is allowed to remain static for about 2 million years. \end{itemize} \subsection{Results} \begin{figure*} \includegraphics[width=1.8\columnwidth]{P001-L.eps} \caption[]{Fractional abundances of selected species as a function of time during the shocked phase. In all models the initial elemental abundance of phosphorus is depleted by a factor of 100 with respect to its solar value. The pre-shock phase is long-lived.} \end{figure*} \begin{figure*} \includegraphics[width=1.8\columnwidth]{P-001.eps} \caption[]{Fractional abundances of selected species as a function of time during the shocked phase. In all models the initial elemental abundance of phosphorus is depleted by a factor of 100 with respect to its solar value. The pre-shock phase is short-lived.} \end{figure*} The observations yield abundances for PO and PN of the order of 1--3$\times 10^{-9}$, for a common source size of $18\arcsec$, with PO larger than PN by a factor of about 3. PH$_3$ is undetected, with an upper limit of 10$^{-9}$. We present in Fig.~3 the evolution of the phosphorus-bearing species, as well as of nitrogen, as a function of time during the passage of the shock (Phase II), when the initial elemental abundance of phosphorus is depleted by a factor of 100 and the pre-shock phase is long-lived. We show the results for three shock velocities (20, 30, and $40\kms$), pre-shock densities of $10^4$ and $10^5\cmmt$, and a long- or short-lived pre-shock phase in Figs.~3 and 4, respectively. We note that the magnitude of the depletion affects only the overall abundances of the P-bearing species. Our first result is that none of the models with solar phosphorus can fit the observations, as the abundances of all phosphorus-bearing species are then about two orders of magnitude higher than observed. Models where the total gas-phase phosphorus is $\sim 10^{-9}$ are therefore favoured. As can be seen in Fig.~3, we find that PH$_2$ is immediately released into the gas phase as a result of grain mantle sputtering at the passage of the shock. PN follows closely the evolution of PH$_2$, while PO forms gradually and its abundance rises slowly in the shock. This is in agreement with our observations, where there is a clear shift toward more blue-shifted velocities for PO than for PN, consistent with a delay in the gas-phase formation of PO with respect to PN. If the pre-shock phase (Phase I) is short-lived (Fig.~4), we never reach a situation where PO is clearly larger than PN throughout the shock evolution. As explained below, this is due to the lack of significant depletion of atomic N for time-scales $<1$ Myr.
Atomic phosphorus also remains abundant in the gas phase, at an abundance of about $10^{-9}$, comparable to or larger than that of PN, depending on the pre-shock density. As noted above, a short-lived pre-shock phase never yields PO clearly in excess of PN, which is why Aota \& Aikawa (2012) opted for a short-lived pre-shock phase to explain the PO upper limits measured toward L1157-B1 by Yamaguchi et al. (2011). Given the uncertainties in the derivation of the PO and PN abundances, however, we cannot discard the possibility that the PO/PN ratio is simply $\approx 1$. A chemical analysis of the formation and destruction of PO and PN shows that the key player in their chemistry is atomic nitrogen (as also found by Aota \& Aikawa 2012), which in turn is mainly released from ammonia (NH$_3$) in the shock: for example, for the higher pre-shock density ($10^5\cmmt$) and weak shocks ($20\kms$), a significant fraction of atomic nitrogen is still present in the gas phase and hence PO is destroyed by the reaction $$ \rm N + PO \rightarrow PN + O. $$ This reaction contributes $\sim$85\% of the total destruction of PO. When the shock velocity is higher, about $30 \kms$, the temperature of the neutral fluid is also higher and favours the conversion of N into NH$_3$, and therefore PO survives in the gas phase for longer. However, at higher velocities ($40\kms$), as also shown in Viti et al. (2011), NH$_3$ is efficiently destroyed in the post-shock gas, liberating N into the gas phase and enhancing the destruction of PO via N (a route that here contributes 100\% of the PO destruction). This leads to a faster drop of PO. This process requires the maximum temperature in the shock to be larger than $4000\K$. Because the amount of available nitrogen is key to the survival of PO, a higher pre-shock density favours it: at higher densities, more nitrogen is locked onto the grains in the form of NH$_3$. The atomic nitrogen abundance decreases by a factor of $\approx 10$ when the pre-shock density increases from $10^4$ to $10^5\cmmt$ (see e.g. Figs.~3--4). Since, as discussed in Sect.~3.2, the terminal velocity of PO (about $-10\kms$) is smaller than that of PN (almost $-20\kms$), the model with a shock velocity of $40\kms$ (bottom right of Fig.~3) is the best match, as PO is destroyed at the end of the post-shock region as a consequence of the destruction of NH$_3$. The best matching models are in fact those where the pre-shock phase is long-lived (Fig.~3), again because atomic nitrogen has more time to get locked into the icy mantles before the arrival of the shock. In all the models of Fig.~3, PN and PO are comparable for most of the dissipation length. The only model for which we find a sharp decrease of the PO abundance in the post-shock region is the one with a pre-shock density of $10^5\cmmt$ and a shock velocity of $40\kms$. This drop occurs at a time of $\sim 600\yr$ and translates into lower terminal velocities of the PO line emission. Models with shock velocities of $20\kms$ and $30\kms$ also indicate a drop of the PO abundance in the gas phase. This drop occurs at a later time, however, beyond the dissipation length, which makes it difficult to quantify accurately with our simple modelling. Because of this, and because similar shock velocities are also required to explain the different line profiles observed toward L1157-B1 for NH$_3$ and H$_2$O (Viti et al.
2011), we favour the scenario in which PN and PO are formed in a shock with a velocity of $40\kms$. Finally, we note that a high pre-shock density also supports the idea that dense clumps along outflows pre-exist the outflow event itself, rather than being formed by compression at the arrival of the shock (Viti et al. 2004b). \subsection{Comparison with previous results} Using the detection of the PN $J$=2--1 line and the non-detection of PO by Yamaguchi et al. (2011) in L1157-B1, Aota \& Aikawa (2012) modelled the phosphorus chemistry in the shocked region of L1157-B1 using a chemical model coupled with the parametric shock model of Jim\'enez-Serra et al. (2008). These authors searched for shock solutions in which PN is abundant while PO is not. The wider and richer spectral content of the ASAI data brings more robust constraints on the properties of the P-bearing species PO, PN (and PH$_3$) in L1157-B1, affecting some of their conclusions on the shock properties: their study favours a shock velocity of $20\kms$, while our results suggest that PN and PO are formed in a $40\kms$ shock. In our model, a maximum temperature of $4000\K$ is indeed required in the shock to liberate atomic N into the gas phase from NH$_3$ and to destroy PO at the higher velocities in the shock (as suggested by the observed difference in the terminal velocities of PN and PO; Section 3.2). Our conclusions on the depletion of elemental phosphorus by a factor of 100 in dark molecular clouds, and on the importance of atomic N in the chemistry of PO and PN, agree with those of Aota \& Aikawa (2012). Recent studies of PN and PO towards the high-mass star forming regions W51 e1/e2 and W3(OH) by Fontani et al. (2016) and Rivilla et al. (2016) reach conclusions similar to ours: phosphorus appears to be depleted in quiescent molecular gas by more than one order of magnitude. The abundances they derive towards these massive sources are of the order of $10^{-10}$, typically one order of magnitude less than towards the shock L1157-B1. Interestingly, they report similar PO/PN abundance ratios ($\simeq$ 2--3). Observations with the NOEMA array would permit a more accurate determination of the depletion factor by resolving the sizes of the emitting regions of PO and PN on a few arcsec scale. Mapping the relative spatial distribution of PO and PN as a function of velocity would permit confirmation of our conclusions on the shock parameters. \section{Conclusions} We report on a systematic search for P-bearing species towards the solar-type star forming region L1157. We have unambiguously detected emission from PN and, for the first time, from the pre-biotic molecule PO in the outflow shock region L1157-B1. No emission from P-bearing species was detected towards the envelope of the protostar L1157-mm. Spectral line profile analysis suggests that the emission originates from the same region as SiO, with a typical size of 15--$20\arcsec$. The abundances of PO and PN are found to be comparable, with values of 2.5 and $0.9\times 10^{-9}$, respectively. A simple modelling using the C-shock code of Jim\'enez-Serra et al. (2008) coupled with the chemical gas-grain model UCL\_CHEM (Viti et al. 2004a) allows us to reproduce the main features of the PO and PN emission in the shock. Our main conclusions are as follows:\\ - Phosphorus is depleted by about a factor of 100 in the gas phase.
\\ - Atomic nitrogen plays a key role in the formation and destruction routes of PO and PN.\\ - The observed PO/PN abundance ratio is $\approx 3$; it places constraints on the duration of the pre-shock phase, which has to be longer than $\simeq 10^6\yr$, and on the pre-shock density, which is of the order of $10^5\cmmt$. \\ - The maximum temperature in the shock has to be $\sim 4000\K$ in order to account for the difference in the terminal velocities of PO and PN. \\ Follow-up observations at arcsec scale with the NOEMA array would permit testing the model we propose for the formation of PO and PN in the shock. By resolving the sizes of the emitting regions of PO and PN, it would be possible to confirm that the two species form in different regions of the shock, and to estimate more accurately the phosphorus depletion factor in the gas phase. \section*{Acknowledgements} Based on observations carried out as part of the Large Program ASAI (project number 012-12) with the IRAM 30m telescope. IRAM is supported by INSU/CNRS (France), MPG (Germany) and IGN (Spain). This work was supported by the CNRS program ``Physique et Chimie du Milieu Interstellaire'' (PCMI) and by a grant from LabeX Osug@2020 (Investissements d'avenir - ANR10LABX56). E. M. acknowledges support from the Brazilian agency FAPESP (grants 2014/22095-6 and 2015/22254-0). I.J.-S. acknowledges financial support from the STFC through an Ernest Rutherford Fellowship (proposal number ST/L004801/1).
\section{Introduction}\vspace{-3mm} Characterizing a finite population whose individuals are partitioned into different classes is a fundamental research topic in the physical, biological, environmental, and social sciences. One common problem is to estimate certain quantities of a sample taken from the population. For example, to disseminate survey data to the public, a government statistical agency has the responsibility to assess the risk that the disclosed microdata records can be matched to specific individuals of the surveyed population, based on the size and resolution of the microdata, while keeping the data informative enough to be useful for education, research, business, and social welfare \citep{bethlehem1990disclosure,fienberg1998confidentiality,skinner2002measure,skinner2008assessing,manrique2012estimating}. In practice, one may not observe the population but only a sample taken from it. This brings up another problem that is often more challenging to solve: to predict how the $n$ individuals of a finite population are partitioned into different classes, on observing the partitions of a sample of $m<n$ individuals randomly taken from this population. For example, in high-throughput sequencing, one is often interested in estimating how many more new genomic sequences, not found in the current sample, would be detected if the sequencing depth were increased \citep{wang2009rna,liu2014rna,sims2014sequencing}. To address this problem, one may define an appropriate procedure to extrapolate the random partitions of the population from the sample. One may also consider constructing a statistical model to fit the random partitions of the observed sample, with the assumption that the same model parameters inferred from the sample also apply to the population. The size-independent assumption, however, could considerably limit the flexibility of the selected statistical model. In addition, it could be restrictive to assume that the individuals of a random sample taken without replacement from a finite subpopulation are partitioned in the same way as those of a random sample taken without replacement from a larger population to which the subpopulation belongs. To address all these problems under a coherent statistical framework, we will construct nonparametric Bayesian models to describe both the exchangeable random partitions of the population and those of a random sample taken without replacement from the population. The distribution of the random partitions of a sample will be constructed to be dependent on the population size, which is motivated by our observation that, given the model parameters, the structural properties of a sample's random partitions can strongly depend on both the size of the sample and that of the population. The layout of the paper is as follows: In Section \ref{sec:pre} we provide some background information. In Section \ref{sec:species}, we discuss frequency of frequencies (FoF) distributions and introduce the new model for constructing size dependent species sampling models. In Section \ref{sec:gNBP} we apply the theory of Section \ref{sec:species} to the generalized negative binomial process and provide asymptotics on both the number and the sizes of clusters. We present real data applications in Section \ref{sec:results}. We conclude the paper in Section \ref{sec:conclusion} and provide the proofs in Appendix E.
\vspace{-3.5mm}\subsection{Notation and preliminaries}\label{sec:pre}\vspace{-1.5mm} \textbf{Frequency of frequencies.} Consider a finite population with $n$ individuals from $K$ different classes. Let $z_i\in\{1,\ldots,K\}$ denote the class that individual $i$ is assigned to, let $n_k=\sum_{i=1}^n \delta(z_i=k)$ denote the number of individuals in class $k$, and let $m_i=\sum_{k=1}^K \delta(n_k=i)$ denote the number of classes having $i$ individuals in this finite population, where $\delta(x)=1$ if the condition $x$ is satisfied and $\delta(x)=0$ otherwise. Thus, by definition, we have $$K = \sum_{i=1}^\infty m_i, ~~~n=\sum_{i=1}^\infty i m_i $$ almost surely (a.s.), and since $m_i=0$ a.s. for all $i\ge n+1$, it is also common to use $\sum_{i=1}^n$ to replace the infinite sum $\sum_{i=1}^\infty$ in the above equation. For example, we may represent $(z_1,\ldots,z_{14})=(1,2,3,4,5,5,6,6,6,6,7,7,7,7)$ as $(n_1,\ldots,n_7)=(1,1,1,1,2,4,4)$, or $\{m_1,m_2,m_4\}=\{4, 1, 2\}$ and $m_i=0$ for $i\notin \{1,2,4\}$. Since $m_i$ represents the frequency of the classes appearing $i$ times, we refer to the count vector $\mathcal{M}= \{m_i\}_i$ as the frequency of frequencies (FoF) vector, the distribution of which is commonly referred to as the FoF distribution \citep{good1953population}. \vspace{2mm} \noindent\textbf{Exchangeable partition probability functions.} Assuming the population size $n$ is given, one may define a probability distribution to partition the $n$ individuals into exchangeable random partitions, and hence generate a FoF vector by defining each partition as a class. Let $[m]:=\{1,\ldots,m\}$ denote a subset of the set $[n]:=\{1,\ldots,n\}$, where $m\le n$. For a random partition $\Pi_m=\{A_1,\ldots,A_l\}$ of the set $[m]$, where there are $l$ clusters and each individual $i\in[m]$ belongs to one and only one set $A_k$ from $\Pi_m$, we denote by $P(\Pi_m\,|\,n)$ the marginal partition probability for $[m]$ when it is known that the population size is~$n$. Note that $P(\Pi_m\,|\, n)=P(z_1,\ldots,z_m\,|\, n)$ if individual $i$ belongs to $A_{z_i}$. If $P(\Pi_m\,|\,n)$ depends only on the number and sizes of the $(A_k)$, regardless of their order, and on the population size $n$, then it is referred to in this paper as a size dependent exchangeable partition probability function (EPPF) of~$\Pi_m$. If $P(\Pi_m\,|\,m) = P(\Pi_m\,|\,n)$ for all $n\ge m$, then it is referred to as a size independent EPPF. Typical examples of size independent EPPFs include the Ewens sampling formula \citep{ewens1972sampling,Antoniak74}, the Pitman-Yor process \citep{perman1992size,pitman1997two}, and those governed by normalized random measures with independent increments (NRMIs) \citep{regazzini2003distributional,BeyondDP}. We provide a review of size independent EPPFs in Appendix C. See \citet{csp} for a detailed treatment of EPPFs.\vspace{2mm} \noindent\textbf{Completely random measures.} Let us denote by $G$ a completely random measure \citep{Kingman,PoissonP} defined on the product space $\mathbb{R}_+\times \Omega$, where $\mathbb{R}_+=\{x:x>0\}$ and $\Omega$ is a complete separable metric space. It assigns independent infinitely divisible random variables $G(A_j)$ to disjoint Borel sets $A_j\subset \Omega$, with Laplace transforms \vspace{0mm}\begin{equation}\label{eq:Laplace0} \mathbb{E}\left[e^{-\phi\,G(A)}\right] = \exp\bigg\{- \int_{\mathbb{R}_+\times A} (1-e^{-\phi r})\nu(drd\omega)\bigg\}, \vspace{0mm}\end{equation} where $\nu(drd\omega)$ is the L\'evy measure.
A random draw from $G$ can be expressed as $$ G=\sum_{k=1}^Kr_k\delta_{\omega_k}, ~K\sim\mbox{Poisson}(\nu^+),~(r_k,\omega_k)\stackrel{iid}\sim \pi(drd\omega), $$ where $r_k$ is the weight of atom $\omega_k$, $\nu^+ = \nu(\mathbb{R}_+\times \Omega)$, and $ \nu(drd\omega) = \nu^+\pi(drd\omega)$. The completely random measure $G$ is well defined if $\int_{\mathbb{R}_+\times \Omega}\min\{1,r\} \nu(drd\omega)<\infty$, even if the Poisson intensity $\nu^+$ is infinite. In this paper, we consider homogeneous completely random measures, whose L\'evy measure can be written as $ \nu(drd\omega) =\rho(dr)G_0(d\omega)$, where $G_0$ is a finite and continuous base measure over~$\Omega$. The generalized gamma process $G\sim\mbox{g}\Gamma\mbox{P}(G_0,a,1/c)$ of \citet{brix1999generalized}, where $a< 1$ is a discount parameter and $1/c$ is a scale parameter, is defined by the L\'{e}vy measure \vspace{0mm}\begin{eqnarray}\label{eq:LevyGGP} \nu(drd\omega) =\rho(dr)G_0(d\omega) = \frac{1}{\Gamma(1-a)}r^{-a-1}e^{-cr}\,dr\, G_0(d\omega). \vspace{0mm}\end{eqnarray} A detailed description of the generalized gamma process is provided in Appendix D. \vspace{-3mm}\section{Bayesian modeling of frequency of frequencies}\label{sec:species} \vspace{-2mm}\subsection{Frequency of frequencies distributions}\label{sec:FoF}\vspace{-2mm} The need to model the distributions of the class sizes $\{n_k\}_k$, or the FoF vector, arises in a wide variety of settings. For example, in computational linguistics and natural language processing, if we let $n_k$ denote the frequency of the $k$th most frequent word in a text corpus, then $\ln(n_k)$ and $\ln(k)$ would be approximately linearly related according to Zipf's law \citep{zipf1949human}. Alternatively, if we let $m_i$ denote the frequency of the words appearing $i$ times, then $\ln(m_i)$ often appears to follow a straight line as a function of $\ln(i)$, as shown in Figures \ref{fig:FoF}(a)-(d) for the words of four different novels. For many other natural and artificial phenomena, the FoF distributions also exhibit similar behavior in their tails, such as those for the number of citations of scientific papers, the degrees of proteins in a protein-interaction network, and the peak gamma-ray intensity of solar flares, to name a few; see \citet{newman2005power} and \cite{clauset2009power} for reviews. In addition, we find that the tails of the FoF distributions for the genomic sequences in high-throughput sequencing data and for the classes of microdata also often exhibit similar behaviors. For example, Figure~\ref{fig:FoF} shows the FoF vectors for the words of four different novels\footnote{\href{https://www.gutenberg.org/ebooks/}{https://www.gutenberg.org/ebooks/}}, the RNA sequences of three different RNA-seq samples\footnote{\href{http://bowtie-bio.sourceforge.net/recount/}{http://bowtie-bio.sourceforge.net/recount/}} provided by \citet{frazee2011recount}, and the classes of a microdata set consisting of 87,959 household records, shown in Table A.6 of \citet{greenberg1990geographic}.
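For concreteness, a minimal sketch of how a FoF vector is computed from raw class labels is given below (in Python); for the novels, the labels would simply be the word types of the observed tokens. \begin{verbatim}
from collections import Counter

def fof_vector(labels):
    # n_k: size of each class k; m_i: number of classes of size i.
    class_sizes = Counter(labels)
    return Counter(class_sizes.values())

# The worked example of Section 1.1:
z = [1, 2, 3, 4, 5, 5, 6, 6, 6, 6, 7, 7, 7, 7]
m = fof_vector(z)
print(sorted(m.items()))                        # -> [(1, 4), (2, 1), (4, 2)]
assert sum(m.values()) == 7                     # K = sum_i m_i
assert sum(i * c for i, c in m.items()) == 14   # n = sum_i i m_i
\end{verbatim}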
\begin{figure}[!tb]
\begin{center}
\includegraphics[width=0.76\columnwidth]{figures/FoF_v2.pdf}
\end{center}
\vspace{-7.5mm}
\caption{ \label{fig:FoF} \small The log-log plots of the frequency of frequencies (FoF) vectors for (a) the words in ``The Adventures of Tom Sawyer'' by Mark Twain, (b) the words in ``The Adventures of Sherlock Holmes'' by Arthur Conan Doyle, (c) the words in ``A Tale of Two Cities'' by Charles Dickens, (d) the words in ``War and Peace'' by Leo Tolstoy and translated by Louise and Aylmer Maude, (e) the RNA sequences studied in \citet{Core19122008}, (f) the RNA sequences studied in \citet{Sultan15082008}, (g) the RNA sequences studied in \citet{yang2010global}, and (h) the microdata provided in Table A.6 of \citet{greenberg1990geographic}. For each subfigure, a least squares line with the slope fixed as $-\alpha$ is fitted to $\{[\ln i, \ln(m_i)]\}_{i:i\ge i_{\min},m_i\ge 3}$, where $i_{\min}$ is a lower cutoff point and $\alpha$ is a scaling parameter estimated using the software provided for \citet{clauset2009power}. }\vspace{-2mm}
\end{figure}

\begin{figure}[!tb]
\begin{center}
\includegraphics[width=0.93\columnwidth]{figures/tomsawyer_v2.pdf}
\end{center}
\vspace{-7.5mm}
\caption{ \label{fig:tomsawyer} \small The log-log plots of the frequency of frequencies (FoF) vectors for the words in the novel ``The Adventures of Tom Sawyer'' by Mark Twain. Each subfigure consists of 20 FoF vectors displayed in different colors. (a) The 20 FoF vectors, with one curve coming from all the words and each of the other 19 curves coming from a sample of words taken with replacement from the novel, with a sampling ratio of 1; (b)-(e) The 20 FoF vectors, each of which comes from a sample of words taken without replacement from the novel, with the sampling ratios of 1/4, 1/16, 1/64, and 1/256, respectively. For each FoF vector, a straight line fitting the points $\{[\ln(i),\ln(m_i)]\}_{i:i\ge i_{\min},m_i\ge 3}$ with slope $-\alpha$, is also plotted, where both the lower cutoff point $i_{\min}$ and scaling parameter $\alpha$ are estimated using the software provided for \citet{clauset2009power}. }
\begin{center}
\includegraphics[width=0.58\columnwidth]{figures/tomsawyer_alpha_v2.pdf}
\end{center}
\vspace{-7.5mm}
\caption{ \label{fig:tomsawyer_alpha} \small Box plots of (a) the slopes of the fitted lines and (b) the ratios of the clusters of size one for the FoF vectors in the log-log plots shown in Figure \ref{fig:tomsawyer}. For each sampling ratio, the box plot in each subfigure is based on the corresponding 20 FoF vectors used in Figure \ref{fig:tomsawyer}. \vspace{-2mm} }
\end{figure}

To illustrate how the characteristics of the FoF vector of a sample are related to the size of the sample, we show in Figure \ref{fig:tomsawyer}(a) the FoF distribution for all the words in the novel ``The Adventures of Tom Sawyer'' by Mark Twain on the logarithmic scale, and also plot the FoF distributions for $1/4$, $1/16$, $1/64$, and $1/256$ of the words taken without replacement from the novel, in Figures \ref{fig:tomsawyer}(b)-(e), respectively. We further show in Figure \ref{fig:tomsawyer_alpha}(a) the box plots of the slopes of the least squares regression lines fitted to the tails of these FoF vectors, and show in Figure \ref{fig:tomsawyer_alpha}(b) the box plots of the ratios of unit-size clusters (clusters of size one).
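The subsampling experiment behind Figures \ref{fig:tomsawyer}-\ref{fig:tomsawyer_alpha} is straightforward to reproduce. The following Python sketch is a simplified illustration only: the file name is hypothetical, and a fixed lower cutoff $i_{\min}$ replaces the estimator of \citet{clauset2009power} that we actually use.

\begin{verbatim}
import numpy as np
from collections import Counter

def fof_vector(words):
    return Counter(Counter(words).values())   # cluster size i -> m_i

def tail_slope(m, i_min=10):
    # Least-squares slope of ln(m_i) vs ln(i) over {i >= i_min, m_i >= 3};
    # the fixed i_min is a simplification of the Clauset et al. estimator.
    pts = [(np.log(i), np.log(c)) for i, c in m.items() if i >= i_min and c >= 3]
    x, y = np.array(pts).T
    return np.polyfit(x, y, 1)[0]

rng = np.random.default_rng(0)
words = open('tomsawyer.txt').read().lower().split()   # hypothetical file
for ratio in [1.0, 1/4, 1/16, 1/64, 1/256]:
    sub = rng.choice(words, size=int(len(words) * ratio), replace=False)
    m = fof_vector(sub)
    print(ratio, tail_slope(m), m[1] / sum(m.values()))  # slope, singleton ratio
\end{verbatim}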
In addition, we provide Figures \ref{fig:sultan}-\ref{fig:sultan_alpha} in Appendix A as the analogous plots to Figures \ref{fig:tomsawyer}-\ref{fig:tomsawyer_alpha} for the FoF vectors of a high-throughput sequencing sample for the human transcriptome from a B cell line, as studied in \citet{Sultan15082008}. Note that to estimate the lower cutoff point and the slope of the regression line, we use the software provided for \citet{clauset2009power}, as described in detail in Appendix B. It is clear from Figures \ref{fig:tomsawyer}-\ref{fig:tomsawyer_alpha} and \ref{fig:sultan}-\ref{fig:sultan_alpha} that the slope of the fitted straight line and the ratio of unit-size clusters tend to decrease and increase, respectively, as the subsampling ratio decreases. Therefore, for a sample taken without replacement from a population, its estimated scaling parameter often clearly depends on the sample size. Moreover, it seems that a FoF distribution in some cases could be more accurately described with a decreasing concave curve than with a straight line, such as those for the RNA sequences shown in Figures \ref{fig:FoF}(e)-(g) and Figure \ref{fig:sultan} in Appendix A. All these empirical observations motivate us to seek a statistical model that describes the entire FoF distribution of a finite population and, more importantly, takes both the population and sample sizes into consideration, providing a principled way to extrapolate the FoF vector of a finite population given a random sample taken without replacement from the population.

\vspace{-3mm}\subsection{Structure of the model}\label{sec:CountMixture}
As discussed in Section \ref{sec:FoF} and shown in Figures \ref{fig:tomsawyer}-\ref{fig:tomsawyer_alpha} and \ref{fig:sultan}-\ref{fig:sultan_alpha} in Appendix A, the structural properties of a FoF distribution can strongly depend on $n$. Hence, to use the same set of model parameters $\boldsymbol{\theta}$ to describe the FoF distributions for various sample sizes, we intend to construct a model that describes the distribution $P(\Pi_m\,|\,n,\boldsymbol{\theta})$, meaning that the EPPF, and hence the FoF distribution, for a sample of size $m$, taken without replacement from a population of size $n$, depends not only on the model parameters $\boldsymbol{\theta}$ but also on the population size $n$. To develop this idea, and to allow the mathematics to proceed cleanly without imposing restrictions, we first make $n$ a random object within the model. Here we describe how the random allocations of individuals to classes are distributed based on the independent random jumps of a completely random measure. With a random draw from a completely random measure expressed as $G = \sum_{k=1}^{K} r_k \delta_{\omega_k}$, by introducing a categorical latent variable $z$ with $P(z=k\,|\,G) =r_k/G(\Omega)$, when a population of size $n$ is observed we have
\vspace{0mm}\begin{eqnarray}\label{eq:f_G_N}
p(\boldsymbol{z}\,|\,G,n)= \prod_{i=1}^n \frac{r_{z_i}}{\sum_{k=1}^{K} r_k} = \left(\sum_{k=1}^{K} r_k\right)^{-n} \prod_{k=1}^{K} r_k^{n_k},
\vspace{0mm}\end{eqnarray}
where $\boldsymbol{z}=(z_1,\ldots,z_n)$ is a sequence of categorical random variables indicating the class memberships, $n_k = \sum_{i=1}^n \delta(z_{i}=k)$ is the number of data points assigned to category $k$, and $n=\sum_{k=1}^{K} n_k$. A random partition $\Pi_n$ of $[n]$ is defined by the ties between the $(z_i)$. So at this point, (\ref{eq:f_G_N}) is standard.
Now (\ref{eq:f_G_N}) exhibits a lack of identifiability in that the scale of the $(r_k)$ is arbitrary; the model is the same if we set $\widetilde{r}_k=\kappa\,r_k$ for any $\kappa>0$. Hence, the total mass $\sum_{k=1}^K r_k$ is unidentified. Additionally, for the standard models, when $G$ is integrated out, $n$ disappears and we have $p(\boldsymbol{z})$ depending solely on the model parameters $\boldsymbol{\theta}$. We solve both these issues by linking the population size $n$ to the total random mass of $G$ with a Poisson distribution, allowing $n$ to depend on $G$ via
\vspace{0mm}\begin{equation}
p(n\,|\,G)=\mbox{Poisson}\big[ G(\Omega)\big].\label{Poisson}
\vspace{0mm}\end{equation}
Since the $n$ data points are clustered according to the normalized random probability measure $G/G(\Omega)$, we have the equivalent sampling mechanism given by
\vspace{0mm}\begin{equation}
p(n_k\,|\,G)=\mbox{Poisson}( r_k)\quad\mbox{independently for}\quad k=1,2,\ldots\,, \notag
\vspace{0mm}\end{equation}
and, since $n=\sum_k n_k$, we obviously recover (\ref{Poisson}). We note here that the prior model is for $p(n,G)$ and, consequently, $p(G\,|\,n)$ means $G$ depends on $n$; $i.e.$, for each $n$ we will have a different random measure for $G$. Therefore, we directly link the cluster sizes $(n_k)$ to the weights $(r_k)$ with independent Poisson distributions, which is in itself an appealing and intuitive feature. The mechanism to generate a sample of arbitrary size is now well defined, and the scale of $G$ is no longer free. The new construction also allows $G(\Omega)=0$, for which $n= 0$ a.s. Allowing $G(\Omega)=0$ with a nonzero probability relaxes the requirement of $\nu^+=\infty$ ($i.e.$, $K=\infty$ a.s.), a necessary condition to normalize a completely random measure \citep{regazzini2003distributional,BeyondDP}. We will therefore not necessarily assume that $K=\infty$ a.s. In fact our model is such that $K=0\iff n=0$, which is coherent, and, moreover, $P(K=0\,|\,n>0)=0$.

With $G$ marginalized out from the $G$ mixed Poisson process, the joint distribution of $n$ and its exchangeable random partition $\Pi_n$ is called an exchangeable cluster probability function (ECPF), which further leads to a FoF distribution that is shown to be an infinite product of Poisson distributions. On observing a population of size $n$, we are interested in the EPPF $P(\Pi_n\,|\,n,\boldsymbol{\theta})$ and, marginalizing over $n-m$ elements, we would consider $P(\Pi_m\,|\,n,\boldsymbol{\theta})$. Note that, distinct from a partition structure of \citet{kingman1978random,kingman1978representation}, which requires $P(\Pi_m\,|\,n,\boldsymbol{\theta})=P(\Pi_m\,|\,m,\boldsymbol{\theta})$ for all $n>m$, we no longer have or require this condition for exchangeable random partitions generated under a $G$ mixed Poisson process; we refer to the resulting structure as a cluster structure. We provide in Section \ref{sec:prop} the general form for both $p(\boldsymbol{z},n) = P(\Pi_n,n\,|\, \boldsymbol{\theta})$ and $p(\boldsymbol{z}\,|\,n)=P(\Pi_n\,|\,n,\boldsymbol{\theta})$, and make connections to previous work in Section \ref{sec:2.4} by letting $G$ be drawn from the gamma process. We provide in Section \ref{sec:gNBP} the specific case when $G$ is drawn from the generalized gamma process $G\sim\mbox{g}\Gamma\mbox{P}(G_0,a,1/c)$ and the asymptotics on the number and sizes of clusters as $n\rightarrow \infty$.
In Section \ref{sec:results} we use MCMC methods to extrapolate the FoF vector of the population from a random sample taken without replacement from it.

\vspace{-3mm} \subsection{Properties of the model}\label{sec:prop} \vspace{-2mm}
A key insight of this paper is that a completely random measure mixed Poisson process produces a cluster structure that is identical in distribution to ($i$) the one produced by assigning the total random count of the Poisson process into exchangeable random partitions, using the random probability measure normalized from that completely random measure, ($ii$) the one produced by assigning the total (marginal) random count $n$ of the mixed Poisson process into exchangeable random partitions using an EPPF of $\Pi_n$, and ($iii$) the one produced by constructing a FoF vector, the $i$th element of which is generated from a Poisson distribution parameterized by a specific function of $i$. For example, when the generalized gamma process $G\sim\mbox{g}\Gamma\mbox{P}[G_0,a,p/(1-p)]$ is used as the completely random measure in this setting, our key discoveries are summarized in Figure \ref{fig:gNBPdraw}, which will be discussed further in Section \ref{sec:gNBP}.

\begin{figure}[!tb]
\begin{center}
\includegraphics[width=0.7\columnwidth]{figures/gNBPgCRP_2.pdf}
\end{center}
\vspace{-6.5mm}
\caption{ \label{fig:gNBPdraw} \small The cluster structure of the generalized negative binomial process can be constructed either by assigning a $\mbox{Poisson}[G(\Omega)]$ number of customers to tables following a normalized generalized gamma process $G/G(\Omega)$, where $G\sim\mbox{g}\Gamma\mbox{P}[G_0,a,p/(1-p)]$, or by assigning an $n\sim\mbox{gNB}(\gamma_0,a,p)$ number of customers to tables following a generalized Chinese restaurant sampling formula $\boldsymbol{z}\sim$~$\mbox{gCRSF}(n,\gamma_0,a,p)$, where $\gamma_0=G_0(\Omega)$. An equivalent cluster structure can be generated by first drawing a $\mbox{Poisson}\big(\gamma_0\frac{1-(1-p)^a}{ap^a}\big)$ number of tables, and then drawing a $\mbox{TNB}(a,p)$ number of customers independently at each table. Another equivalent one can be generated by drawing a $\mbox{Poisson}\big(\frac{\Gamma(i-a)\gamma_0 p^{i-a}}{\Gamma(1-a)i!}\big)$ number of tables, each with $i$ customers, for $i\in\{1,2,\ldots\}$. }\vspace{-2mm}
\end{figure}

In Theorem \ref{thm:compoundPoisson}, we establish the marginal model for the $(n_k)$ with $G$ marginalized out. We provide the L\'evy measure, ECPF, EPPF, FoF distribution, stick-breaking construction, and prediction rule in Corollaries \ref{cor:compoundPoisson}-\ref{thm:predict}. The proofs are provided in Appendix E.

\begin{thm}[Compound Poisson Process]\label{thm:compoundPoisson}
The $G$ mixed Poisson process is also a compound Poisson process, a random draw of which can be expressed as
$$X(\cdot)=\sum_{k=1}^{l} n_k \,\delta_{\omega_k}(\cdot)\quad\mbox{with }~l\sim\emph{\mbox{Poisson}}\left[G_0(\Omega)\int_{0}^\infty(1-e^{-r})\rho(dr)\right],$$
and independently
$$P(n_k=j)=\frac{ {\int_{0}^\infty r^j e^{-r} \rho(dr)} }{{j!}\int_{0}^\infty(1-e^{-r})\rho(dr)}~~\mbox{for}~~j=1,2,\ldots,\notag$$
where $\int_{0}^\infty(1-e^{-r})\rho(dr) <\infty$ is a condition required for the characteristic functions of $G$ to be well defined, $\omega_k\stackrel{iid}{\sim} g_0$, and $g_0(d\omega)=G_0(d\omega)/G_0(\Omega)$.
\end{thm}

\begin{cor} \label{cor:compoundPoisson}
The L\'evy measure of the $G$ mixed Poisson process can be expressed as
$$ \nu(dnd\omega) = \sum_{j=1}^\infty\int_{0}^\infty \frac{ r^j e^{-r} }{j!} \rho(dr)~\delta_j(dn)G_0(d\omega). $$
\end{cor}

The compound Poisson representation implies that the model has a Poisson-distributed finite number of clusters, whose sizes follow a positive discrete distribution. The mass parameter $\gamma_0=G_0(\Omega)$ has a linear relationship with the expected number of clusters, but has no direct impact on the cluster-size distribution in the prior. Note that a draw from $G$ contains $K< \infty$ or $K=\infty$ atoms a.s., but only $l$ of them would be associated with nonzero counts if $G$ is mixed with a Poisson process. Since the cluster indices are unordered and exchangeable, without loss of generality, in the following discussion, we relabel the atoms with nonzero counts in order of appearance from $1$ to $l$, and then $z_i\in\{1,\ldots,l\}$ for $i=1,\ldots,n$, with $n_k>0$ if and only if $1\le k \le l$ and $n_k=0$ if $k>l$.

\begin{cor}[Exchangeable Cluster/Partition Probability Functions]\label{thm:ECPF}
The model has a fully factorized exchangeable cluster probability function (ECPF) as
$$p(\boldsymbol{z},n\,|\,\gamma_0,\rho) = \frac{\gamma_0^l}{n!} \exp\left\{\gamma_0\int_{0}^\infty(e^{-r}-1)\rho(dr)\right\} \prod_{k=1}^l \int_0^\infty r^{n_k} e^{-r} \rho(dr),$$
the marginal distribution for the population size $n=X(\Omega)$ has probability generating function
$$\mathbb{E}[t^{n}\,|\,\gamma_0,\rho] = \exp\left\{ \gamma_0 \int_{0}^\infty (e^{-(1-t)r}-1)\rho(dr)\right\}$$
and probability mass function $\left. p_N(n\,|\,\gamma_0,\rho)=\frac{1}{n!}\frac{d^n \,\mathbb{E}[t^{n}\,|\,\gamma_0,\rho]}{d t^n} \right |_{t=0},$ and an exchangeable partition probability function (EPPF) of $\Pi_n$ as
$$p(\boldsymbol{z}\,|\,n,\gamma_0,\rho) = {p(\boldsymbol{z},n\,|\,\gamma_0,\rho) }\big/{p_N(n\,|\,\gamma_0,\rho)}.\notag$$
\end{cor}

The proof of this is straightforward given the representation in Theorem \ref{thm:compoundPoisson} and given that the one-to-many-mapping combinatorial coefficient taking $(n_1,\ldots,n_l,l)$ to $(z_1,\ldots,z_n,n)$ is
$$\frac{l!}{n!}\,\prod_{k=1}^l n_k!\,.$$

\begin{cor}[Frequency of Frequencies Distribution] \label{cor:m_i}
Let $\mathcal{M}=\{m_{i}\}_i$ be the frequency of frequencies (FoF) vector, where $m_{i}=\sum_{k=1}^{l}\delta(n_k=i)$ is the number of distinct types of size~$i$, $\sum_{i=1}^\infty m_{i}=l$, and $\sum_{i=1}^\infty im_{i}=n$. For the $G$ mixed Poisson process, we can generate a random sample of $\mathcal{M}$ by drawing each of its elements independently as
\vspace{0mm}\begin{equation}
m_i\sim\emph{\mbox{Poisson}}\left(\frac{\gamma_0\int_0^\infty r^{i} e^{-r} \rho(dr)}{i!}\right)
\vspace{0mm}\end{equation}
for $i\in\{1,2,\ldots\}$. Alternatively, we may first draw
$$l\sim\emph{\mbox{Poisson}}\left(\gamma_0\int_{0}^\infty(1-e^{-r})\rho(dr)\right) $$
as the total number of distinct clusters (species) with nonzero counts, then draw $m_i$ sequentially using a stick-breaking construction as
\vspace{0mm}\begin{equation}
m_{i}\,|\,l, m_1,\ldots,m_{i-1} \sim \emph{\mbox{Binomial}}\left(l-\sum_{t=1}^{i-1}m_{t} , \frac{\frac{\int_0^\infty r^{i} e^{-r} \rho(d r)}{i!}}{ \sum_{t=i}^{\infty} \frac{\int_0^\infty r^{t} e^{-r} \rho(d r)}{t!} } \right)
\vspace{0mm}\end{equation}
for $i = 1,2,\ldots$ until $l = \sum_{t=1}^i m_{t}$, and further let $m_{i+\kappa}=0$ for all $\kappa\in\{1,2,\ldots\}$.
\end{cor}

\begin{cor}[Prediction Rule]\label{thm:predict}
Let $l^{-i}$ represent the number of clusters in $\boldsymbol{z}^{-i}:=\boldsymbol{z}\backslash z_i$ and $n_k^{-i}:=\sum_{j\neq i} \delta(z_j=k)$. We can express the prediction rule of the model as
\be
P(z_{i} = k\,|\,\boldsymbol{z}^{-i},n,\gamma_0,\rho) \propto
\begin{cases}\vspace{2mm}
\displaystyle\frac{\int_{0}^\infty r^{n_k^{-i}+1} e^{-r} \rho(d r)}{\int_0^\infty r^{n_k^{-i}}e^{-r} \rho(d r)} , & \emph{\mbox{for }} k=1,\ldots,l^{-i};\\
\displaystyle \gamma_0\int_0^\infty re^{-r}\rho(dr), & \emph{\mbox{if } }k=l^{-i}+1.
\end{cases}\notag
\vspace{0mm}\end{equation}
This prediction rule can be used to simulate an exchangeable random partition of $[n]$ via Gibbs sampling.
\end{cor}

\subsection{Related work}\label{sec:2.4}
To make connections to previous work, let us first consider the special case that $G$ is a gamma process with L\'evy measure $\nu(drd\omega) = r^{-1}e^{-p^{-1}(1-p)r}drG_0(d\omega)$, which is a special case of the generalized gamma process $G\sim\mbox{g}\Gamma\mbox{P}[G_0,a,p/(1-p)]$ with $a=0$. This $G$ mixed Poisson process is defined as the negative binomial process $X\sim\mbox{NBP}(G_0,p)$ in \citet{NBP2012}. For $X\sim\mbox{NBP}(G_0,p)$, with Corollary \ref{cor:compoundPoisson}, the L\'evy measure can be expressed as $ \nu(dnd\omega) = \sum_{j=1}^\infty j^{-1}{p^j} \delta_j(dn) G_0(d\omega). $ With Corollary \ref{thm:ECPF}, we have the ECPF $ p(\boldsymbol{z},n\,|\, \gamma_0,p) = ({n!})^{-1} p^{n}(1-p)^{\gamma_0}\gamma_0^l \prod_{k=1}^{l} \Gamma(n_k) $ and probability mass function (PMF) $p_N(n\,|\, \gamma_0,p)=\frac{\Gamma(n+\gamma_0)}{n!\,\Gamma(\gamma_0)} p^n(1-p)^{\gamma_0}$, which is the PMF of the negative binomial (NB) distribution $n\sim\mbox{NB}(\gamma_0,p)$. Thus the EPPF for $X$ can be expressed~as
\begin{align}\label{eq:CRPEPPF}
p(\boldsymbol{z}\,|\, \gamma_0) &=\frac{p(\boldsymbol{z},n\,|\, \gamma_0,p)}{p_N(n\,|\, \gamma_0,p)}= \frac{\Gamma(\gamma_0)\gamma_0^l}{\Gamma(n+\gamma_0)} \prod_{k=1}^{l} \Gamma(n_k),
\end{align}
which is the EPPF of the Chinese restaurant process (CRP) \citep{aldous:crp}, a variant of the widely used Ewens sampling formula \citep{ewens1972sampling,PolyaUrn}. For the CRP, multiplying its EPPF $p(\boldsymbol{z}\,|\, \gamma_0)$ by the PMF of $n\sim\mbox{NB}(\gamma_0,p)$ leads to the ECPF, and as in Corollary \ref{cor:m_i}, further multiplying its ECPF with the combinatorial coefficient ${n!}/[{\prod_{i=1}^n(i!)^{m_i}m_i!}]$ leads to the distribution of a FoF vector $\mathcal{M}=\{m_i\}_i$ as
\begin{align}
p(\mathcal{M},n\,|\,\gamma_0,p) &=\left\{\prod_{i=1}^\infty \mbox{Poisson}\left(m_i; \gamma_0\frac{p^i}{i}\right)\right\} \times \delta\left(n=\sum_{i=1}^\infty i m_i\right), \notag
\end{align}
which can be generated by simulating countably many Poisson random variables, or by using a stick-breaking construction that first draws $l\sim\mbox{Poisson}[-\gamma_0\ln(1-p)]$ nonempty clusters, and then draws $m_i$ sequentially as
\vspace{0mm}\begin{equation}
m_{i}\,|\,l, m_1,\ldots,m_{i-1} \sim {\mbox{Binomial}}\left(l-\sum_{t=1}^{i-1}m_{t} , \frac{i^{-1}{p^i}}{ -\ln(1-p) - \sum_{t=1}^{i-1} t^{-1}{p^t} } \right)
\vspace{0mm}\end{equation}
for $i = 1,2,\ldots$ until $l = \sum_{t=1}^i m_{t}$, and further lets $m_{i+\kappa}=0$ for all $\kappa\in\{1,2,\ldots\}$.
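This representation is easy to simulate and to check numerically. The following Python sketch (an illustration; truncating the infinite product at a large $i_{\max}$ is an approximation) draws FoF vectors via $m_i\sim\mbox{Poisson}(\gamma_0 p^i/i)$ and verifies that $n=\sum_i i m_i$ matches the negative binomial mean $\gamma_0 p/(1-p)$:

\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

def crp_fof(gamma0, p, i_max=2000):
    """m_i ~ Poisson(gamma0 * p**i / i) independently, truncated at i_max."""
    i = np.arange(1, i_max + 1)
    return rng.poisson(gamma0 * p**i / i)

# Monte Carlo check: n = sum_i i * m_i should follow NB(gamma0, p).
gamma0, p = 3.0, 0.8
sizes = np.arange(1, 2001)
draws = np.array([np.sum(sizes * crp_fof(gamma0, p)) for _ in range(5000)])
print(draws.mean(), gamma0 * p / (1 - p))  # both close to the NB mean 12
\end{verbatim}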
The EPPF of the widely used Pitman-Yor process \citep{csp}, with mass parameter $\gamma_0$ and discount parameter $a\in[0,1)$, can be expressed as
\begin{align}
P(\boldsymbol{z}\,|\, \gamma_0,a) &= \frac{\Gamma(\gamma_0)}{\Gamma(n+\gamma_0)} \prod_{k=1}^{l} \frac{\Gamma(n_k-a)}{\Gamma(1-a)}[\gamma_0+(k-1)a]\notag.
\end{align}
However, unless $a=0$, it is unclear whether the Pitman-Yor process can be related to a FoF vector whose countably infinite elements simply follow Poisson distributions. The class of Gibbs-type EPPFs provides a generalization of the EPPF induced by the Pitman-Yor process. See \citet{gnedin2006exchangeable} for details and \citet{de2015gibbs} for a Bayesian nonparametric treatment.

Note that the ideas of mixing multiple group-specific Poisson processes with a gamma process, or mixing multiple group-specific negative binomial (NB) processes with a gamma or beta process, have been exploited in \citet{NBP2012} to construct priors for mixed-membership modeling, and in \citet{NBP_CountMatrix} to construct priors for random count matrices. When the number of groups reduces to one, the NB process in \citet{NBP2012} and \citet{NBP_CountMatrix} becomes a special case of the generalized NB process to be thoroughly investigated in Section \ref{sec:gNBP}. Following the hierarchical construction in \citet{NBP2012} and \citet{NBP_CountMatrix}, the proposed generalized NB process or other completely random measure mixed Poisson processes may also be extended to a multiple-group setting to construct more sophisticated nonparametric Bayesian priors for both mixed-membership modeling and random count matrices. Below we study a particular process: the generalized NB process, whose ECPF and FoF distribution both have simple analytic expressions, and whose exchangeable random partitions can not only be simulated via Gibbs sampling using the above prediction rule, but also be sequentially constructed using a recursively calculated prediction rule.

\vspace{-4mm}\section{Generalized negative binomial process}\label{sec:gNBP}
\vspace{-3mm}
In the following discussion, we study the generalized NB process (gNBP) model where $G\sim\mbox{g}\Gamma\mbox{P}[G_0,a,p/(1-p)]$ with $a<0$, $a=0$, or $0<a<1$. Here we apply the results in Section \ref{sec:prop} to this specific case. Using (\ref{eq:LevyGGP}), we have $\int_{0}^\infty r^{n}e^{-r}\rho(dr)= {\frac{\Gamma(n-a)}{{\Gamma(1-a)} }p^{n-a}}$ and $\int_0^\infty(1-e^{-r})\rho(dr) = \frac{1-(1-p)^a}{ap^{a}}.$ Marginalizing out $G(\Omega)$ from $n\,|\, G\sim\mbox{Poisson}[G(\Omega)]$ with $G\sim\mbox{g}\Gamma\mbox{P}[\gamma_0,a,p/(1-p)]$ leads to a generalized NB distribution; $i.e.$, $n\sim\mbox{gNB}(\gamma_0,a,p)$, with shape parameter $\gamma_0$, discount parameter $a<1$, and probability parameter $p$. Denote by $\sum_{*}$ the summation over all sets of positive integers $(n_1,\ldots,n_l)$ with ${\sum_{k=1}^l n_k = n}$. As derived in Appendix F, the PMF of the generalized NB distribution can be expressed as
\vspace{0mm}\begin{equation}\label{eq:f_M0}
p_N(n\,|\,\gamma_0,a,p) = {p^n}e^{-{\gamma_0}\frac{1-(1-p)^a}{ap^a}} \sum_{l=0}^n \gamma_0^l p^{-al} \frac{S_a(n,l)}{n!},
\vspace{0mm}\end{equation}
where $S_a(n,l)$, as defined in detail in Appendix F, multiplied by $a^{-l}$ are generalized Stirling numbers \citep{charalambides2005combinatorial,csp}.
Marginalizing out $G$ in the generalized gamma process mixed Poisson process
\vspace{0mm}\begin{equation}\label{eq:gGaPP0}
X\,|\,G\sim\mbox{PP}(G)\quad\mbox{and}\quad G\sim\mbox{g}\Gamma\mbox{P}\left[G_0,a, {p}/{(1-p)}\right]
\vspace{0mm}\end{equation}
leads to a generalized NB process $ X\sim\mbox{gNBP}(G_0,a,p), $ such that for each $A\subset \Omega$, $X(A)\sim\mbox{gNB}(G_0(A),a,p)$. This process is also a compound Poisson process as
\begin{align}\label{eq:GNBPdraw}
X(\cdot)=\sum_{k=1}^{l} n_k\delta_{\omega_k}(\cdot),~l\sim\mbox{Poisson}\Big(\gamma_0\frac{1-(1-p)^a}{ap^a}\Big),~n_k \stackrel{iid}{\sim} \mbox{TNB}(a,p),~\omega_k \stackrel{iid}{\sim} g_0,
\end{align}
where $\mbox{TNB}(a,p)$ denotes a truncated NB distribution, with PMF
\vspace{0mm}\begin{eqnarray}\label{eq:TNB}
p_U(u\,|\,a,p)= \frac{\Gamma(u-a)}{u!\Gamma(-a)}\frac{p^u(1-p)^{-a}}{1-(1-p)^{-a}},~u=1,2,\ldots.
\vspace{0mm}\end{eqnarray}
Note that $\lim_{a\rightarrow 0}\frac{1-(1-p)^a}{ap^a} = -\ln(1-p)$ and, as $a\rightarrow 0$, $\mbox{TNB}(a,p)$ becomes the logarithmic distribution with parameter $p$ \citep{Fisher1943,LogPoisNB,johnson2005univariate}. The L\'evy measure of the gNBP can be expressed as $ \nu(dnd\omega) = \sum_{j=1}^\infty \frac{\Gamma(j-a)}{j!\Gamma(1-a)}p^{j-a} \delta_j(dn) G_0(d\omega). $ The ECPF of the gNBP model is given by
\vspace{0mm}\begin{eqnarray} \label{eq:f_Z_M}
p(\boldsymbol{z},n\,|\,\gamma_0,a,p) =\frac{1}{n!}e^{-\gamma_0\frac{1-(1-p)^a}{ap^{a}}} \gamma_0^{l} p^{n-al} \prod_{k=1}^{l} \frac{\Gamma(n_k-a)}{{\Gamma(1-a)} },
\vspace{0mm}\end{eqnarray}
which is fully factorized and will be used as the likelihood to infer $\gamma_0$, $a$, and $p$. The EPPF of $\Pi_n$ is the ECPF in (\ref{eq:f_Z_M}) divided by the marginal distribution of $n$ in (\ref{eq:f_M0}), given by
\begin{align} \label{eq:EPPF}
p(\boldsymbol{z}\,|\,n,\gamma_0,a,p) = \frac{\gamma_0^{l} p^{-al}}{ \sum_{\ell=0}^n \gamma_0^\ell p^{-a\ell} S_a(n,\ell)} \prod_{k=1}^{l}\frac{\Gamma(n_k-a)}{\Gamma(1-a)}.
\end{align}
We define the EPPF in (\ref{eq:EPPF}) as the generalized Chinese restaurant sampling formula (gCRSF), and we denote a random draw under this EPPF as
$$\boldsymbol{z}\,|\,n\sim{\mbox{gCRSF}}(n,\gamma_0,a,p).$$
The conditional distribution of the number of clusters in a population of size $n$ can be expressed as
\begin{align}\label{eq:f_L2_0}
p_L(l\,|\,n,\gamma_0,a,p) =\frac{1}{l!}\sum_{*}\frac{n!}{\prod_{k=1}^l n_k!}p(\boldsymbol{z}\,|\,n,\gamma_0,a,p)= \frac{\gamma_0^{l} p^{-al}S_a(n,l)}{ \sum_{\ell=0}^n \gamma_0^\ell p^{-a\ell} S_a(n,\ell)}.
\end{align}
Recall that $m_{i}=\sum_{k=1}^{l}\delta(n_k=i)$ represents the number of distinct types of size $i$, with $\sum_{i=1}^\infty m_{i}=l$ and $\sum_{i=1}^\infty im_{i}=n$. With Corollary \ref{cor:m_i}, we can express the joint distribution of $n$ and $\mathcal{M}$, under the constraint that $n=\sum_{i=1}^\infty im_{i}$, as
\begin{align}
p(\mathcal{M},n\,|\, \gamma_{0},a,p) &=\left\{\prod_{i=1}^{\infty}\mbox{Poisson}\left(m_i; \frac{\Gamma(i-a)\gamma_{0}p^{i-a}}{\Gamma(1-a)i!}\right)\right\} \times \delta\left(n = \sum_{i=1}^\infty i m_i\right) \label{eq:SpeciesSeries},
\end{align}
where we apply the fact that $\sum_{i=1}^\infty \frac{\Gamma(i-a)}{i!\Gamma(-a)}{p^i(1-p)^{-a}} = 1-(1-p)^{-a}$ for $a<1$.
Thus to generate a cluster structure governed by the generalized negative binomial process, one may draw $m_i\sim\mbox{Poisson}\left(\frac{\Gamma(i-a)\gamma_{0}p^{i-a}}{\Gamma(1-a)i!}\right)$ independently for each $i$, or first draw
\vspace{0mm}\begin{equation}
l\sim\mbox{Poisson}\left(\gamma_0\frac{1-(1-p)^a}{ap^{a}}\right) \label{eq:ell}
\vspace{0mm}\end{equation}
as the number of distinct clusters (species), and then draw $m_i$ for $i\ge1$ using
\vspace{0mm}\begin{equation} \label{eq:Bino}
m_{i}\,|\,l, m_1,\ldots,m_{i-1} \sim \mbox{Binomial}\left(l -\sum_{t=1}^{i-1}m_{t} , \frac{\frac{\Gamma(i-a)p^{i}}{i!}}{ \sum_{t=i}^{\infty} \frac{\Gamma(t-a)p^{t}}{t!} } \right)
\vspace{0mm}\end{equation}
until $l = \sum_{t=1}^im_{t}$.
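As a concrete illustration of the first route, the following Python sketch (function name ours) draws a FoF vector from (\ref{eq:SpeciesSeries}) by sampling each $m_i$ independently, truncating at a large $i_{\max}$; this truncation is an approximation, justified by the geometric decay of the neglected Poisson rates in $i$ for $p<1$.

\begin{verbatim}
import numpy as np
from scipy.special import gammaln

def sample_gnbp_fof(gamma0, a, p, i_max=10**4, rng=None):
    """Draw (m_1,...,m_{i_max}) with independent rates
    Gamma(i-a) * gamma0 * p**(i-a) / (Gamma(1-a) * i!), computed in
    log-space for numerical stability; requires a < 1 and 0 < p < 1."""
    rng = np.random.default_rng(rng)
    i = np.arange(1, i_max + 1)
    log_rate = (gammaln(i - a) - gammaln(1.0 - a) - gammaln(i + 1)
                + np.log(gamma0) + (i - a) * np.log(p))
    m = rng.poisson(np.exp(log_rate))
    n, l = int(np.sum(i * m)), int(np.sum(m))   # population size, #clusters
    return m, n, l

m, n, l = sample_gnbp_fof(gamma0=2.0, a=0.5, p=0.99)
\end{verbatim}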
Note that in the prior, $\mathbb{E}[m_i]=\frac{\Gamma(i-a)\gamma_{0}p^{i-a}}{\Gamma(1-a)i!}$ and hence, using the property of the gamma function, we have
$$\ln(\mathbb{E}[m_i]) ~\sim ~ - (a+1)\ln(i) + i\ln(p) $$
as $i\rightarrow \infty$. Thus if $p\rightarrow 1$, we may consider $a+1$ as a power-law scaling parameter. Note that if $a\rightarrow 0$, we recover from (\ref{eq:SpeciesSeries}) the logarithmic series of \citet{Fisher1943}, as also discussed in \citet{Sampling_NB_1950} and \citet{watterson1974models}, and we recover from (\ref{eq:EPPF}) the EPPF for the CRP, as shown in \eqref{eq:CRPEPPF}. When $a\neq 0$, we generalize the CRP by making the EPPF depend on the population size $n$. This generalization differs from those in \citet{ishwaran2003generalized} and \citet{cerquetti2008generalized}, where the EPPFs are independent of $n$. The prediction rule for the EPPF in (\ref{eq:EPPF}) can be expressed as
\vspace{0mm}\begin{equation}\label{eq:PredictRule}
P(z_{i} = k\,|\,\boldsymbol{z}^{-i},n,\gamma_0,a,p) \propto
\begin{cases}
n_k^{-i} -a, & {\mbox{for }} k=1,\ldots,l^{-i};\\
\gamma_0 p^{-a}, & {\mbox{if } }k=l^{-i}+1.
\end{cases}
\vspace{0mm}\end{equation}
This prediction rule can be used in a Gibbs sampler to simulate an exchangeable random partition $\boldsymbol{z}\,|\,n\sim{\mbox{gCRSF}}(n,\gamma_0,a,p)$ of $[n]$. As it is often unclear how many Gibbs sampling iterations are required to generate an unbiased sample from this EPPF, below we present a sequential construction that directly generates an unbiased sample.

Marginalizing out $z_n$ from (\ref{eq:EPPF}), we have
\begin{align}
p(z_{1:n-1}\,|\,n,\gamma_0,a,p) ~~=&~~~p(z_{1:n-1}\,|\,n-1,\gamma_0,a,p) \notag\\
&\times \frac{\sum_{\ell =0}^{n-1} \gamma_0^\ell p^{-a\ell}S_a(n-1,\ell)}{\sum_{\ell=0}^n \gamma_0^\ell p^{-a\ell }S_a(n,\ell)}\left[\gamma_0p^{-a}+ (n-1)- a l_{(n-1)}\right],\notag
\end{align}
where $z_{1:i}:=\{z_1,\ldots,z_i\}$, $l_{(i)}$ denotes the number of partitions in $z_{1:i}$, and $l_{(n)}=l$. Further marginalizing out $z_{n-1},\ldots,z_{i+1}$, we have
\begin{align}
{p(z_{1:i}\,|\,n,\gamma_0,a,p)} &=p(z_{1:i}\,|\,i,\gamma_0,a,p)\frac{\sum_{\ell=0}^{i} \gamma_0^\ell p^{-a\ell}S_a(i,\ell)}{\sum_{\ell=0}^n \gamma_0^\ell p^{-a\ell }S_a(n,\ell)} R_{n,\gamma_0,a,p}(i,l_{(i)}) \notag\\
&= \frac{R_{n,\gamma_0,a,p}(i,l_{(i)}) \gamma_0^{l_{(i)}} p^{-al_{(i)}}}{\sum_{\ell=0}^n \gamma_0^\ell p^{-a\ell}S_a(n,\ell)} \prod_{k\,:\,n_{k,(i)}>0}\frac{\Gamma(n_{k,(i)}-a)}{\Gamma(1-a)} ,\label{eq:SizeEPPF}
\end{align}
where $n_{k,(i)}:=\sum_{j=1}^i \delta(z_j=k)$; $R_{n,\gamma_0,a,p}(i,j)= 1$ if $i=n$ and is recursively calculated for $i=n-1,n-2,\ldots,1$ with
\vspace{0mm}\begin{equation}\label{eq:R}
R_{n,\gamma_0,a,p}(i,j) = R_{n,\gamma_0,a,p}(i+1,j)(i-a j) + R_{n,\gamma_0,a,p}(i+1,j+1)\gamma_0p^{-a}.
\vspace{0mm}\end{equation}
We refer to (\ref{eq:SizeEPPF}) as a size-dependent EPPF, as its distribution on an exchangeable random partition of $[i]$ is a function of the population size $n$. Note that if $a=0$, the EPPF becomes the same as that of the Chinese restaurant process and no longer depends on $n$. In Appendix F, we show the sequential prediction rule of the generalized Chinese restaurant sampling formula that constructs $\Pi_{i+1}$ from $\Pi_i$ in a population of size $n$ by assigning element $(i+1)$ to $A_{z_{i+1}}$, and show the predictive distribution of $z_{i+1\,:\,n}$ given $z_{1:i}$, the population size $n$, and the model parameters.

In summary, a draw from the generalized NB process (gNBP) represents a cluster structure with a Poisson-distributed finite number of clusters, whose sizes follow a truncated NB distribution. Marginally, the population size follows a generalized NB distribution. These three count distributions and the prediction rule are determined by a discount, a probability, and a mass parameter, which together with~$i$ are used to parameterize the Poisson rate for the random number of clusters of size $i$ for the FoF distribution. These parameters are convenient to infer using the fully factorized ECPF. Since $P(\Pi_m\,|\,n)= P(\Pi_m\,|\,m)$ is often not true for $n>m$, the EPPF of the gNBP, which is derived by applying Bayes' rule on the ECPF and the generalized NB distribution, generally violates the addition rule required in a partition structure and hence is dependent on the population size. This size-dependent EPPF is referred to as the generalized Chinese restaurant sampling formula. To generate an exchangeable random partition of $[n]$ under this EPPF, we show that one can use either a Gibbs sampler or a recursively calculated sequential prediction rule.
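The quantities $R_{n,\gamma_0,a,p}(i,j)$ in (\ref{eq:R}) grow factorially in $n$, so a practical implementation of the sequential construction works in log-space. The following Python sketch (function name ours) tabulates $\ln R_{n,\gamma_0,a,p}(i,j)$ backwards from $i=n$; the ratios needed by the sequential prediction rule can then be formed by exponentiating differences of the stored logarithms, at an $O(n^2)$ cost in time and memory.

\begin{verbatim}
import numpy as np

def log_R(n, gamma0, a, p):
    """Backward recursion for ln R_{n,gamma0,a,p}(i, j) per (eq:R).
    R(n, j) = 1; note i - a*j > 0 for all a < 1 and 1 <= j <= i."""
    log_new = np.log(gamma0) - a * np.log(p)      # ln(gamma0 * p**(-a))
    logR = {(n, j): 0.0 for j in range(n + 2)}
    for i in range(n - 1, 0, -1):
        for j in range(1, i + 1):
            grow = logR[(i + 1, j)] + np.log(i - a * j)
            new = logR[(i + 1, j + 1)] + log_new
            logR[(i, j)] = np.logaddexp(grow, new)  # stable log-sum-exp
    return logR
\end{verbatim}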
We conclude this section by investigating the large $n$ asymptotic behavior of both the number of clusters $p_L(l\,|\,n,\gamma_0,a,p)$ shown in \eqref{eq:f_L2_0} and the sizes of clusters $p(\mathcal{M}\,|\, n, \gamma_{0},a,p) = p(\mathcal{M},n\,|\, \gamma_{0},a,p) / p_N(n\,|\, \gamma_{0},a,p)$, which can be obtained with (\ref{eq:SpeciesSeries}) and (\ref{eq:f_M0}). An interesting question is how $l_{(n)}$, the number of clusters, and $M_{i,n}$, the number of clusters of size $i$, behave as the population size $n$ approaches infinity when the model parameters $\gamma_0$, $a$, and $p$ are fixed, where $0<\gamma_0<\infty$, $a<1$, and $0<p<1$. We summarize our findings in Table \ref{tab:asymptotics} and provide the details in Appendices \ref{sec_asym_1} and \ref{sec_asym_2}.

Table \ref{tab:asymptotics} characterizes three asymptotic regimes according to the choice of the parameter $a$, namely $a\in(0,1)$, $a=0$, and $a\in\{-1,-2,\ldots\}$. For $a=0$ the distribution \eqref{eq:f_L2_0} coincides with the distribution of the number of clusters in a sample of size $n$ from a Dirichlet process. Hence, the large $n$ asymptotic behavior of $l_{(n)}$ is known from \citet{hollander73}, whereas the large $n$ asymptotic behavior of $M_{i,n}$ is known from \citet{ewens1972sampling}. For any $a\in(0,1)$ the number of clusters minus one, $l_{(n)}-1$, converges weakly to $\mbox{Poisson}[\gamma_0/(ap^a) ]$, whereas $M_{i,n}$ converges weakly to $\mbox{Poisson}\left(\frac{\Gamma(i-a) \gamma_{0}p^{-a}}{\Gamma(1-a) i!}\right)$. Note that, for any $a\in(0,1)$, $a\frac{\Gamma(i-a)}{\Gamma(1-a) i!}$ is a proper probability distribution over the natural numbers, that is $a\frac{\Gamma(i-a)}{\Gamma(1-a) i!}\in(0,1)$ for any $i\geq1$ and $\sum_{i=1}^\infty a\frac{\Gamma(i-a)}{\Gamma(1-a) i!}=1$. In other words, for large $n$ the number $M_{i,n}$ of clusters of size $i$ becomes a proportion $a\frac{\Gamma(i-a)}{\Gamma(1-a) i!}$ of $l_{(n)}-1$, and such a proportion decreases with the index $i$. It is also interesting to notice that the logarithm of $\frac{\Gamma(i-a) \gamma_{0}p^{-a}}{\Gamma(1-a) i!}$ can be approximated by
$$-(a+1)\ln( i) + C$$
when $i$ is large, where the coefficient $C=\ln\left(\frac{ \gamma_{0}p^{-a}}{\Gamma(1-a) }\right)$ does not depend on the index $i$. Thus we may consider $a+1$ as a power-law scaling parameter as $n\rightarrow \infty$.

\begin{table}[!tb]\small
\caption{Large $n$ asymptotic regimes with respect to the parameter $a$. }\label{tab:asymptotics}
\begin{center}\small
\begin{tabular}{ccc}
$a$ & Number of clusters $l_{(n)}$ & Number of size-$i$ clusters $M_{i,n}$ \\[0.2cm]
\hline\hline\\
$(0,1)$ & $\displaystyle l_{(n)}\rightarrow 1+\mbox{Poisson}\left(\frac{\gamma_{0}}{ap^{a}}\right)$ & $\displaystyle M_{i,n}\rightarrow \mbox{Poisson}\left(\frac{\Gamma(i-a) \gamma_{0}p^{-a}}{\Gamma(1-a) i!}\right)$ \\[0.4cm]
0 & $\displaystyle\frac{l_{(n)}}{\log n}\rightarrow \gamma_{0}$ & $\displaystyle M_{i,n}\rightarrow \mbox{Poisson}\left(\frac{\gamma_{0}}{i}\right)$ \\[0.4cm]
$-a\in\{1,2,\ldots\}$ & $\displaystyle\frac{l_{(n)}}{n^{\frac{-a}{1-a}}}\rightarrow\frac{(\gamma_{0}p^{-a})^{\frac{1}{1-a}}}{-a}$ & $\displaystyle M_{i,n}\rightarrow \mbox{Poisson}\left( { \frac{\Gamma(i-a)\gamma_{0}p^{-a}}{\Gamma(1-a){i!}}}\right)$
\end{tabular}
\vspace{-5mm}
\end{center}
\end{table}

Finally, for any $a\in\{-1,-2,\ldots\}$ the number of clusters rescaled by $n^{-a/(1-a)}$ converges weakly to the constant $\frac{(\gamma_{0}p^{-a})^{\frac{1}{1-a}}}{-a}$, whereas $M_{i,n}$ converges weakly to $\mbox{Poisson}\left(\frac{\Gamma(i-a) \gamma_{0}p^{-a}}{\Gamma(1-a) i!}\right)$. Note that, differently from the case $a\in(0,1)$, for any $a\in\{-1,-2,\ldots\}$ we have $\sum_{i=1}^\infty a\frac{\Gamma(i-a)}{\Gamma(1-a) i!}=+\infty$, that is, $a\frac{\Gamma(i-a)}{\Gamma(1-a) i!}$ is not a probability distribution over the natural numbers. In particular, $a\frac{\Gamma(i-a)}{\Gamma(1-a) i!}$ is a constant when $a=-1$ and increases with the index $i$ when $a\in\{-2,-3,\ldots\}$.

\vspace{-3mm}\section{Illustrations}\label{sec:results}\vspace{-2mm}
Species abundance data for a population are usually represented by a FoF vector $\mathcal{M}=\{m_i\}_i$, where $m_i$ denotes the number of species that have been observed $i$ times in the population.
As discussed before, these data can also be converted into a sequence of cluster indices $\boldsymbol{z}=(z_1,\ldots,z_n)$ or a cluster-size vector $(n_1,\ldots,n_l)$, where $n_k$ is the number of individuals in cluster $k$, $n = \sum_i im_i=\sum_{k=1}^l n_k$ is the size of the population, and $l=\sum_i m_i$ is the number of distinct clusters in the population. For example, we may represent $\{m_1, m_2, m_3\}=\{2,1,2\}$ as $\boldsymbol{z}=(1,2,3,3,4,4,4,5,5,5)$ or $(n_1,\ldots,n_5)=(1,1,2,3,3)$. For species frequency counts, we use (\ref{eq:f_Z_M}) as the likelihood for the model parameters $\boldsymbol{\theta}=\{\gamma_0,a,p\}$. With appropriate priors imposed on $\boldsymbol{\theta}$, we use MCMC to obtain posterior samples $\boldsymbol{\theta}^{(j)}=\{\gamma_0^{(j)}, a^{(j)}, p^{(j)}\}$. The details of the MCMC update equations are provided in Appendix I.

To understand the structural properties of the population, one often has to make a choice between taking more but smaller samples and taking fewer but larger samples. For example, in high-throughput sequencing, to increase the number of detected sequences given a fixed budget, one may need to decide whether to reduce the sequencing depth per sample to allow collecting more biological replicates \citep{sims2014sequencing}. These considerations motivate us to study the fundamental problem of extrapolating the FoF vector of a sample, taken without replacement from the population, to reconstruct the FoF vector of the population. This extrapolation problem is readily answered under our framework by $p(z_{i+1\,:\,n}\,|\,z_{1:i},n,\gamma_0,a,p)$ in (\ref{eq:extrapolate}), which gives the joint distribution of the cluster indices of the unobserved $n-i$ individuals of the population given the observed cluster indices $(z_1,\ldots,z_i)$ of the sample of size $i$, the population size $n$, and the model parameters. To reconstruct $(z_{i+1},\ldots,z_{n})$, one can either use (\ref{eq:PredictRule}) to sequentially construct the vector from $z_{i+1}$ to $z_n$, or randomly initialize the vector and then use (\ref{eq:PredictRulej}) in a Gibbs sampling algorithm. For a population with tens of thousands or millions of individuals, we prefer the second method as it is often more computationally efficient.

We consider the novel ``The Adventures of Tom Sawyer'' by Mark Twain, with a total of $n=77,514$ words from $l=7,772$ terms; the novel ``The Adventures of Sherlock Holmes'' by Arthur Conan Doyle, with a total of $n=106,007$ words from $l=7,896$ terms; the high-throughput sequencing dataset studied in \citet{Sultan15082008}, with a total of $n=418,650$ sequences from $l=6,712$ unique sequences; the high-throughput sequencing dataset studied in \citet{Core19122008}, with a total of $n=125,794$ sequences from $l=7,124$ unique sequences; and the microdata provided in Table A.6 of \citet{greenberg1990geographic}, with a total of $n=87,959$ household records from $l=929$ groups. We randomly take $1/32$, $1/16$, $1/8$, $1/4$, or $1/2$ of the individuals without replacement from the population to form a sample $(z_1,\ldots,z_i)$, where $i$ is the sample size, from which we use Gibbs sampling to simulate the indices of the remaining individuals $(z_{i+1},\ldots,z_n)$, where $n$ is the population size. In each Gibbs sampling iteration, we draw the indices in $\{z_{i+1},\ldots, z_n\}$ $T=5$ times in a random order using (\ref{eq:PredictRulej}) and then sample the model parameters $\gamma_0$, $a$, and $p$ once.
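For concreteness, one sweep of such a Gibbs sampler can be sketched as follows. The sketch (function and variable names ours) uses the simple prediction rule (\ref{eq:PredictRule}) and merely holds the observed prefix fixed; this is a simplification for illustration, whereas the experiments use the full conditional (\ref{eq:PredictRulej}) from Appendix F.

\begin{verbatim}
import numpy as np

def gibbs_sweep(z, counts, gamma0, a, p, fixed, rng):
    """One sweep over assignments: per (eq:PredictRule), an existing
    cluster k is chosen with weight (n_k^{-i} - a), a new cluster with
    weight gamma0 * p**(-a). z is a list of integer labels, counts a
    dict {label: size}, fixed a boolean mask for the observed sample."""
    for i in rng.permutation(len(z)):
        if fixed[i]:
            continue
        counts[z[i]] -= 1                      # remove z_i from its cluster
        if counts[z[i]] == 0:
            del counts[z[i]]
        labels = list(counts)
        w = np.array([counts[k] - a for k in labels] + [gamma0 * p**(-a)])
        j = rng.choice(len(w), p=w / w.sum())
        z[i] = labels[j] if j < len(labels) else max(counts, default=0) + 1
        counts[z[i]] = counts.get(z[i], 0) + 1
    return z, counts
\end{verbatim}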
For comparison, we consider using the software provided for \citet{clauset2009power} to estimate a lower cutoff point $i_{\min}$ and a scaling parameter $\alpha$ from a random sample taken without replacement from the finite population, and then find $-\alpha_h$, the slope of the least squares line fitting the first $i_{\min}-1$ FoF points of the random sample on the log-log plot. We then fit a straight line to the population FoF points $\{\ln i, \ln(m_i)\}_{i<i_{\min}}$, with $-\alpha_h$ as the slope and $\big[\sum_{i\in I_h}(\ln(m_i)+\alpha_h \ln i)\big]/|I_h|$ as the intercept, where $I_h=\{i:1\le i < i_{\min},\, m_i\ge 1\}$, and another straight line to the population FoF points $\{\ln i, \ln(m_i)\}_{i\ge i_{\min}}$, with $-\alpha$ as the slope and $\big[\sum_{i\in I_t}(\ln(m_i)+\alpha \ln i)\big]/|I_t|$ as the intercept, where $I_t=\{i: i \ge i_{\min},\, m_i\ge 3\}$. We emphasize that this least squares (LS) procedure is merely used as a baseline, which refits the population FoF points under the assumption that $i_{\min}$, $\alpha_h$, and $\alpha$ all stay unchanged as the sample size varies; it may fit the tail well, but may perform poorly in fitting the center part of a FoF distribution.

We also make comparisons with the Pitman-Yor process \citep{perman1992size,pitman1997two,csp}, a widely used nonparametric Bayesian prior with a size-independent EPPF such that $P(\Pi_m\,|\, \gamma_0,a, m)=P(\Pi_m\,|\, \gamma_0,a, n)$ for all $n\ge m$, where $\gamma_0$ and $a$ are the concentration and discount parameters, respectively, for the Pitman-Yor process. We describe a Gibbs sampling algorithm in Appendix I, using data augmentation techniques developed in \citet{teh2006bayesian}. In addition, we also consider the Chinese restaurant process.

\begin{figure}[!tb]
\begin{center}
\includegraphics[width=0.88\columnwidth]{figures/tomsawyer_FoF_extrapolate_v2.pdf}
\end{center}
\vspace{-6mm}
\caption{ \label{fig:tomsawyer_FoF_extrapolate} \small The posterior means of the population FoF vectors extrapolated from sample FoF vectors for ``The Adventures of Tom Sawyer'' by Mark Twain, using the least squares (LS) refitting procedure, the Chinese restaurant process, the Pitman-Yor (PY) process, and the generalized negative binomial process (gNBP), whose discount parameter is set as $a=-1$, $a=0$, $a\in(-\infty,0)$, or $a\in(-\infty,1)$. Each sample is taken without replacement from the population with a sampling ratio of $1/32$, $1/16$, $1/8$, $1/4$, or $1/2$. The performance of the Chinese restaurant process is found to be almost identical to the gNBP with $a=0$, and hence omitted for brevity. }
\begin{center}
\includegraphics[width=0.6\columnwidth]{figures/tomsawyer_RMSE_v2.pdf}
\end{center}
\vspace{-6mm}
\caption{ \label{fig:tomsawyer_extrapolate} \small (a) RMSEs and (b) chi-squared ($\chi^2$) test statistics for the extrapolated FoF vectors shown in Figure \ref{fig:tomsawyer_FoF_extrapolate}. }
\end{figure}

\begin{figure}[!tb]
\begin{center}
\includegraphics[width=0.88\columnwidth]{figures/sultan_FoF_extrapolate_v2.pdf}
\end{center}
\vspace{-6mm}
\caption{ \label{fig:sultan_FoF_extrapolate} \small Analogous plots to Figure \ref{fig:tomsawyer_FoF_extrapolate} for an RNA-seq dataset studied in \citet{Sultan15082008}. }
\begin{center}
\includegraphics[width=0.6\columnwidth]{figures/sultan_RMSE_v2.pdf}
\end{center}
\vspace{-6mm}
\caption{ \label{fig:sultan_extrapolate} \small Analogous plots to Figure \ref{fig:tomsawyer_extrapolate} for an RNA-seq dataset studied in \citet{Sultan15082008}.
} \end{figure}

For all MCMC-based algorithms, we consider 1000 iterations and collect the last 500 samples, for each of which we convert the cluster index vector $(z_1,\ldots,z_n)$ to a population FoF vector, and take the average of all the 500 collected vectors, denoted by $\widehat{\mathcal{M}}=(\hat{m}_1,\ldots,\hat{m}_n)$, as the posterior mean of the population FoF vector, given the sample $(z_1,\ldots,z_i)$ and the population size $n$. Using the observed population FoF vector $\mathcal{M}$, we measure the extrapolation performance using the root mean squared error (RMSE), defined as
\vspace{0mm}\begin{equation}
\mbox{RMSE} = \sqrt{\frac{\sum_{i=1}^{100} \delta(m_i>0)\left[\ln(m_i) - \ln(\hat{m}_i)\right]^2}{ \sum_{i=1}^{100} \delta(m_i>0)}}
\vspace{0mm}\end{equation}
and the chi-squared test statistic, defined as
\vspace{0mm}\begin{equation}
\chi^2 = \frac{(\sum_{i=50}^n m_i-\sum_{i=50}^n\hat{m}_i)^2}{\sum_{i=50}^n\hat{m}_i}+\sum_{i=1}^{49}\frac{(m_i-\hat{m}_i)^2}{\hat{m}_i} .
\vspace{0mm}\end{equation}
The RMSE and chi-squared test statistic measure the distances between the observed population FoF vector and the extrapolated FoF vector in the logarithmic and original scales, respectively.
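For reference, the two criteria can be transcribed directly into code. The sketch below assumes $\mathcal{M}$ and $\widehat{\mathcal{M}}$ are stored as length-$n$ NumPy arrays with index $0$ holding $m_1$, and that the extrapolated $\hat m_i$ appearing in denominators and logarithms are positive.

\begin{verbatim}
import numpy as np

def rmse_log(m, m_hat):
    """RMSE on the log scale over i = 1..100 with m_i > 0."""
    keep = m[:100] > 0
    diff = np.log(m[:100][keep]) - np.log(m_hat[:100][keep])
    return np.sqrt(np.mean(diff**2))

def chi2(m, m_hat):
    """Chi-squared statistic: sizes 1..49 individually, sizes >= 50 pooled."""
    head = np.sum((m[:49] - m_hat[:49])**2 / m_hat[:49])
    tail = (m[49:].sum() - m_hat[49:].sum())**2 / m_hat[49:].sum()
    return head + tail
\end{verbatim}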
Examining the trace plots of the inferred model parameters, we find that 1000 MCMC iterations are sufficient for both the Pitman-Yor and generalized NB processes, as the Markov chains appear to converge fast and mix well in all experiments. We provide example trace plots for three different datasets in Figures \ref{fig:MCMC_tomsawyer}-\ref{fig:MCMC_microdata} of Appendix A.

\begin{figure}[!tb]
\begin{center}
\includegraphics[width=0.88\columnwidth]{figures/microdata_FoF_extrapolate_v2.pdf}
\end{center}
\vspace{-6mm}
\caption{ \label{fig:microdata_FoF_extrapolate} \small Analogous plots to Figure \ref{fig:tomsawyer_FoF_extrapolate} for the microdata provided in Table A.6 of \citet{greenberg1990geographic}. }
\begin{center}
\includegraphics[width=0.6\columnwidth]{figures/microdata_RMSE_v2.pdf}
\end{center}
\vspace{-6mm}
\caption{ \label{fig:microdata_extrapolate} \small Analogous plots to Figure \ref{fig:tomsawyer_extrapolate} for the microdata provided in Table A.6 of \citet{greenberg1990geographic}. }
\end{figure}

Shown in Figure \ref{fig:tomsawyer_FoF_extrapolate} are the posterior means of the population FoF vectors extrapolated from sample FoF vectors for ``The Adventures of Tom Sawyer'' by Mark Twain, using least squares (LS) lines fitted to the population FoF points on the log-log plots, using the Pitman-Yor process, or using the generalized negative binomial process under various settings of the discount parameter $a$. Shown in Figure \ref{fig:tomsawyer_extrapolate} are the corresponding RMSEs and chi-squared test statistics. Note that the slopes of these LS lines are estimated from the sample FoF vectors, whereas the intercepts are obtained by refitting these straight lines to the population FoF vectors. Thus the LS procedure is appropriate for fitting the data but impractical for out-of-sample prediction. The results of the Chinese restaurant process are almost identical to those of the generalized negative binomial process with $a=0$, and hence are omitted from these figures.

Figures \ref{fig:sultan_FoF_extrapolate}-\ref{fig:sultan_extrapolate} are analogous plots to Figures \ref{fig:tomsawyer_FoF_extrapolate}-\ref{fig:tomsawyer_extrapolate} for the high-throughput RNA-seq dataset studied in \citet{Sultan15082008}, and Figures \ref{fig:microdata_FoF_extrapolate}-\ref{fig:microdata_extrapolate} are analogous plots to Figures \ref{fig:tomsawyer_FoF_extrapolate}-\ref{fig:tomsawyer_extrapolate} for the microdata. In Appendix A, we also provide the corresponding Figures \ref{fig:holmes_FoF_extrapolate}-\ref{fig:holmes_extrapolate} for ``The Adventures of Sherlock Holmes'' by Arthur Conan Doyle, and Figures \ref{fig:core_FoF_extrapolate}-\ref{fig:core_extrapolate} for the high-throughput RNA-seq dataset studied in \citet{Core19122008}.

As shown in Figures \ref{fig:tomsawyer_FoF_extrapolate}-\ref{fig:microdata_extrapolate} and Figures \ref{fig:holmes_FoF_extrapolate}-\ref{fig:core_extrapolate} of Appendix A, the LS refitting procedure, impractical for real applications, consistently underperforms both the Pitman-Yor process and the gNBP with $a<1$, and may perform poorly if the population FoF vector appears to follow a decreasing concave curve. The gNBP with $a=-1$ appears to strongly discourage the frequencies of small-size clusters. Although it has poor performance for all the data considered in the paper, this suggests that $a=-1$ or even smaller values could be used for certain applications that favor a population FoF vector following a concave shape. Both the gNBP with $a=0$, whose performance is almost identical to that of the Chinese restaurant process, and the gNBP with $a<0$ perform well on both RNA-seq genomic datasets, each of whose population FoF vectors clearly follows a decreasing concave curve, but both clearly underperform the Pitman-Yor process and the gNBP with $a<1$ on the other three datasets, whose population FoF vectors more closely follow decreasing straight lines. The Pitman-Yor process performs well for all datasets, but in general clearly underperforms the gNBP with $a<1$. In addition to these five datasets, we have also examined the other three datasets shown in Figure \ref{fig:FoF}. Our observations on all these datasets consistently suggest that the gNBP with $a$ varying freely within $(-\infty,1)$ achieves performance that is either the best or close to the best; it is hence recommended as the preferred choice if there is no clear prior information on how the population FoF vector is distributed.

\vspace{-4mm}\section{Conclusions} \label{sec:conclusion}
We propose an infinite product of Poisson distributions to model the entire frequency of frequencies (FoF) distribution of a population consisting of a random number of individuals, and propose a size-dependent exchangeable partition probability function to model the FoF distribution of a population whose number of individuals is given. We first present a general framework that uses a completely random measure mixed Poisson process to support a FoF distribution, and then focus on studying the generalized negative binomial process constructed by mixing the generalized gamma process with the Poisson process. Our asymptotic analysis shows how the generalized negative binomial process can adjust its discount parameter to model different tail behaviors for the FoF distributions. On observing a single sample taken without replacement from a population, we propose a simple Gibbs sampling algorithm to extrapolate the FoF vector of the population from the FoF vector of that sample.
The performance of the algorithm is demonstrated in estimating FoF vectors for text corpora, high-throughput sequencing data, and microdata, where a population typically consists of tens of thousands or millions of individuals. Since various kinds of statistics commonly used to characterize the properties of a population can often be readily calculated given the population FoF vector, being able to accurately model the FoF distributions of big datasets brings new opportunities to advance the state of the art in a wide array of real discrete data applications, such as making comparisons between different text corpora, finding a good compromise between the depth and coverage of high-throughput sequencing for genomic data, estimating entropy in a nonparametric Bayesian manner, and assessing disclosure risk for microdata.

\vspace{-4mm} \section*{Acknowledgements} \vspace{-3mm}
The authors thank the Associate Editor and three anonymous referees, whose invaluable comments and suggestions have helped us to improve the paper substantially. M. Zhou thanks Lawrence Carin, Fernando A. Quintana, and Peter M\"uller for their comments on an earlier draft of this paper, and thanks Xiaoning Qian and Siamak Zamani Dadaneh for discussions on high-throughput sequencing count data. S. G. Walker is supported by the U. S. National Science Foundation through grant DMS-1506879. S. Favaro is supported by the European Research Council (ERC) through StG N-BNP 306406.

\begin{spacing}{1.03} \small \vspace{-2mm} \bibliographystyle{plainnat}

\section{Additional figures}
\begin{figure}[!h]
\begin{center}
\includegraphics[width=0.88\columnwidth]{figures/gene_sultan_v2.pdf}
\end{center}
\vspace{-6mm}
\caption{ \label{fig:sultan} \small Analogous plots to Figure \ref{fig:tomsawyer} for the frequency of frequencies (FoF) vectors for the RNA sequences of a high-throughput sequencing sample studied in \citet{Sultan15082008}. }
\begin{center}
\includegraphics[width=0.6\columnwidth]{figures/gene_sultan_alpha_v2.pdf}
\end{center}
\vspace{-6mm}
\caption{ \label{fig:sultan_alpha} \small Analogous plots to Figure \ref{fig:tomsawyer_alpha} for the frequency of frequencies (FoF) vectors for the RNA sequences of a high-throughput sequencing sample studied in \citet{Sultan15082008}. }
\end{figure}

\begin{figure}[!h]
\begin{center}
\includegraphics[width=0.65\columnwidth]{figures/MCMC_tomsawyer.pdf}
\end{center}
\vspace{-6mm}
\caption{ \label{fig:MCMC_tomsawyer} \small For ``The Adventures of Tom Sawyer'' by Mark Twain, with a sampling ratio of $1/8$, the trace plots in the first row are for the concentration parameter $\gamma_0$, discount parameter $a$, and RMSE, respectively, for the Pitman-Yor (PY) process; the trace plots in the second row are for the mass parameter $\gamma_0$, discount parameter $a$, and RMSE, respectively, for the generalized negative binomial process (gNBP) with $a$ varying freely within $(-\infty,1)$. }
\end{figure}

\begin{figure}[!h]
\begin{center}
\includegraphics[width=0.65\columnwidth]{figures/MCMC_sultan.pdf}
\end{center}
\vspace{-6mm}
\caption{ \label{fig:MCMC_sultan} \small Analogous plots to Figure \ref{fig:MCMC_tomsawyer} for an RNA-seq dataset studied in \citet{Sultan15082008}, with a sampling ratio of $1/8$.
} \end{figure}

\begin{figure}[!h]
\begin{center}
\includegraphics[width=0.65\columnwidth]{figures/MCMC_microdata.pdf}
\end{center}
\vspace{-6mm}
\caption{ \label{fig:MCMC_microdata} \small Analogous plots to Figure \ref{fig:MCMC_tomsawyer} for the microdata provided in Table A.6 of \citet{greenberg1990geographic}, with a sampling ratio of $1/8$. }
\end{figure}

\begin{figure}[!h]
\begin{center}
\includegraphics[width=0.88\columnwidth]{figures/holmes_FoF_extrapolate_v2.pdf}
\end{center}
\vspace{-6mm}
\caption{ \label{fig:holmes_FoF_extrapolate} \small Analogous plots to Figure \ref{fig:tomsawyer_FoF_extrapolate} for the novel ``The Adventures of Sherlock Holmes'' by Arthur Conan Doyle. }
\end{figure}

\begin{figure}[!h]
\begin{center}
\includegraphics[width=0.6\columnwidth]{figures/holmes_RMSE_v2.pdf}
\end{center}
\vspace{-6mm}
\caption{ \label{fig:holmes_extrapolate} \small Analogous plots to Figure \ref{fig:tomsawyer_extrapolate} for the novel ``The Adventures of Sherlock Holmes'' by Arthur Conan Doyle. }
\end{figure}

\begin{figure}[!h]
\begin{center}
\includegraphics[width=0.88\columnwidth]{figures/sultan_FoF_extrapolate_v2.pdf}
\end{center}
\vspace{-6mm}
\caption{ \label{fig:core_FoF_extrapolate} \small Analogous plots to Figure \ref{fig:tomsawyer_FoF_extrapolate} for an RNA-seq dataset studied in \citet{Core19122008}. }
\end{figure}

\begin{figure}[!h]
\begin{center}
\includegraphics[width=0.6\columnwidth]{figures/sultan_RMSE_v2.pdf}
\end{center}
\vspace{-6mm}
\caption{ \label{fig:core_extrapolate} \small Analogous plots to Figure \ref{fig:tomsawyer_extrapolate} for an RNA-seq dataset studied in \citet{Core19122008}. }
\end{figure}

\section{Characterizing the tails of FoF distributions}
As in \citet{newman2005power}, to model the tail of a FoF distribution that follows a power law, one may define a probability mass function for the class sizes as
$$P(n_k=i) ={i^{-\alpha}}\big/{\zeta(\alpha,i_{\min}) }, ~~i\in\{i_{\min},i_{\min}+1,\ldots\},$$
where $i_{\min}$ is the cutoff integer which one considers as the starting point of the power law, and $\zeta(\alpha,i_{\min}) = \sum_{j=i_{\min}}^\infty j^{-\alpha} $ is the Hurwitz zeta function. Thus, given $K^*=\sum_{i=i_{\min}}^n m_i$, one has $\mathbb{E}[m_i] = K^* P(n_k=i)$ and hence $\ln(\mathbb{E}[m_i]) = -\alpha \ln(i) + C$ for $i\in\{i_{\min},i_{\min}+1,\ldots\} $, where $C$ is a constant that does not depend on $i$. To estimate the scaling parameter $\alpha$ for a finite population of $n$ individuals, a straightforward approach is to plot $\ln(m_i)$ against $\ln(i)$, and then estimate $-\alpha$ using the slope of a straight line fitted to the points on the plot. This simple approach is criticized in \citet{clauset2009power}, who suggest estimating $\alpha$ by maximizing the log-likelihood
$$\mathcal{L}(\alpha) = - \sum_{i=i_{\min}}^n m_i\left[\ln \zeta(\alpha,i_{\min})+\alpha\ln(i)\right].$$
For each subfigure in Figure \ref{fig:FoF}, we use the software\footnote{\href{http://tuvalu.santafe.edu/~aaronc/powerlaws/}{http://tuvalu.santafe.edu/$\sim$aaronc/powerlaws/}} provided for \citet{clauset2009power} to estimate both the power-law lower cutoff point $i_{\min}$ and the scaling parameter $\alpha$, and fit a straight line to the FoF points on the log-log plot using $-\alpha$ as the slope and $\left[\sum_{i \in I}\ln(m_i) +\alpha \sum_{i\in I}\ln i \right]/|I| $, where $I=\{i:i\ge i_{\min}, m_i\ge 3 \}$, as the intercept.
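A minimal transcription of this maximum-likelihood estimator, assuming $i_{\min}$ is given and the FoF vector is stored as an array with index $0$ holding $m_1$, can use the Hurwitz zeta function available in SciPy:

\begin{verbatim}
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.special import zeta   # zeta(s, q) is the Hurwitz zeta function

def fit_alpha(m, i_min):
    """MLE of the scaling parameter alpha, minimizing the negative of
    L(alpha) = -sum_{i >= i_min} m_i [ln zeta(alpha, i_min) + alpha ln i]."""
    i = np.arange(i_min, len(m) + 1)
    mi = np.asarray(m[i_min - 1:], dtype=float)
    def neg_log_lik(alpha):
        return np.sum(mi * (np.log(zeta(alpha, i_min)) + alpha * np.log(i)))
    return minimize_scalar(neg_log_lik, bounds=(1.01, 6.0), method='bounded').x
\end{verbatim}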
\vspace{-2mm}\section{Size independent species sampling models} The underlying structure of existing Bayesian species sampling models is built on Kingman's concept of a partition structure \citep{kingman1978random,kingman1978representation}, which defines a family of consistent probability distributions for random partitions of a set $[m]:=\{1,\ldots,m\}$. The sampling consistency requires the probability distribution of the random partitions of a subset of size $m$ of a set of size $n\geq m$ to be the same for all~$n$. More specifically, for a random partition $\Pi_m=\{A_1,\ldots,A_l\}$ of the set $[m]$, such a constraint requires that $P(\Pi_m\,|\,n)=P(\Pi_m\,|\,m)$ does not depend on $n$. As further developed in \citet{pitman1995exchangeable,csp}, if $P(\Pi_m\,|\,m)$ depends only on the number and sizes of the $(A_k)$, regardless of their order, then it is called an exchangeable partition probability function (EPPF) of~$\Pi_m$, expressed as $P(\Pi_m=\{A_1,\ldots,A_l\}\,|\,m)=p_m(n_1,\ldots,n_l)$, where $n_k=|A_k|$. The sampling consistency amounts to an addition rule \citep{csp,Gnedin_deletion} for % the EPPF; that $p_1(1) = 1$ and \vspace{0mm}\begin{eqnarray}\label{eq:addrule} p_m(n_1,\ldots,n_l) = p_{m+1}(n_1,\ldots,n_l,1)+ \sum_{k=1}^l p_{m+1}(n_1,\ldots,n_k+1,\ldots,n_l). \vspace{0mm}\end{eqnarray} An EPPF of $\Pi_m$ satisfying this constraint is considered as an EPPF of $\Pi :=(\Pi_1,\Pi_2,\ldots)$. For an EPPF of $\Pi$, $\Pi_{m+1}$ can be constructed from $\Pi_m$ by assigning element $(m+1)$ to $A_{z_{m+1}}$ based on the prediction rule as \vspace{0mm}\begin{equation}\notag z_{m+1}\,|\,\Pi_m= \begin{cases} \vspace{3mm} l+1& \mbox{with probability }\displaystyle\frac{p_{m+1}(n_1,\ldots,n_l,1)}{p_m(n_1,\ldots,n_l)} , \\ k & \mbox{with probability }\displaystyle\frac{p_{m+1}(n_1,\ldots,n_k+1,\ldots,n_l)}{p_m(n_1,\ldots,n_l)}.\end{cases} \vspace{0mm}\end{equation} A basic EPPF of $\Pi$ is the Ewens sampling formula \citep{ewens1972sampling,Antoniak74}. Moving beyond the Ewens sampling formula, various approaches, including the Pitman-Yor process \citep{perman1992size,pitman1997two}, normalized random measures with independent increments (NRMIs) \citep{regazzini2003distributional}, Poisson-Kingman models \citep{pitman2003poisson}, species sampling \citep{Pitman96somedevelopments}, stick-breaking priors \citep{ishwaran2001gibbs}, and Gibbs-type random partitions \citep{gnedin2006exchangeable}, have been proposed to construct more general size independent EPPFs. See \citet{muller2004nonparametric}, \citet{BeyondDP} and \citet{Muller2013} for reviews. Among these approaches, there has been increasing interest in normalized random measures with independent increments (NRMIs) \citep{regazzini2003distributional}, where a completely random measure \citep{Kingman,PoissonP} with a finite and strictly positive total random mass is normalized to construct a random probability measure. For example, the normalized gamma process is a Dirichlet process \citep{ferguson73}. More advanced completely random measures, such as the generalized gamma process of \citet{brix1999generalized}, can be employed to produce more general size-independent exchangeable random partitions \citep{pitman2003poisson,csp,lijoi2007controlling}. However, the expressions of the EPPF and its associated prediction rule usually involve integrations that are difficult to calculate. 
\vspace{-2mm}\section{Completely random measures}\label{sec:preliminary} In this section we provide the mathematical foundations for an independent increment process with no Gaussian component. These are pure jump processes and for us will have finite limits so that the process can be normalized by the total sum of the jumps to provide a random distribution function. The most well known of such processes is the gamma process (see, for example, \citet{ferguson1972representation}) and we will be specifically working with a generalized gamma process in Section \ref{sec:ggp}. \vspace{-2mm}\subsection{Generalized gamma process}\label{sec:ggp} The generalized gamma process, denote by $G\sim\mbox{g}\Gamma\mbox{P}(G_0,a,1/c)$, is a completely random (independent increment) measure defined on the product space $\mathbb{R}_+\times \Omega$, where $a< 1$ is a discount parameter, $1/c$ is a scale parameter, and $G_0$ is a finite and continuous base measure over a complete separable metric space $\Omega$ \citep{brix1999generalized}. It assigns independent infinitely divisible generalized gamma ($\mbox{g}\Gamma$) distributed random variables $G(A_j)\sim{{}}\mbox{g}\Gamma(G_0(A_j),a,1/c)$ to disjoint Borel sets $A_j\subset \Omega$, with Laplace transform given by \vspace{0mm}\begin{equation}\label{eq:Laplace} \mathbb{E}\left[e^{-\phi\,G(A)}\right] = \exp\left\{-\frac{G_0(A)}{a}\left[(c+\phi)^a-c^a\right]\right\}. \vspace{0mm}\end{equation} The generalized gamma distribution was independently suggested by \citet{tweedie1984index} and \citet{hougaard1986survival} and also studied in \citet{bar1986reproducibility,alen1992modelling}, and \citet{jorgensen1997theory}. When $a\rightarrow0$, we recover the gamma process \citep{ferguson73,PoissonP}, and if $a=1/2$, we recover the inverse Gaussian process \citep{lijoi2005inverseGaussian}. A draw $G$ from $\mbox{g}\Gamma\mbox{P}(G_0,a,1/c)$ can be expressed as \vspace{0mm}\begin{equation} G = \sum_{k=1}^{K} r_k \delta_{\omega_k},\notag \vspace{0mm}\end{equation} with $K\sim\mbox{Poisson}(\nu^+)$ and $(r_k,\omega_k)\stackrel{i.i.d.}{\sim} \pi(drd\omega)$, where $r_k=G(\omega_k)$ is the weight for atom $\omega_k$ and $\pi(dr\, ,d\omega)\nu^{+} = \nu(dr\,,d\omega)$. Except where otherwise specified, we only consider $a<1$ and $c>0$. If $0\le a<1$, since the Poisson intensity $\nu^+ = \nu(\mathbb{R}_+\times \Omega) = \infty$ ($i.e.$, $K=\infty$ a.s.) and $ \int_{\mathbb{R}_+\times \Omega} \min\{1, s\} \nu(dr\, d\omega) $ is finite, a draw from $\mbox{g}\Gamma\mbox{P}(G_0,a,1/c)$ consists of countably infinite atoms. On the other hand, if $a<0$, then $\nu^+=-\gamma_0c^a/a$ and thus $K\sim \mbox{Poisson}(-\gamma_0c^a/a)$ ($i.e.$, $K$ is finite a.s.) and $r_k\stackrel{i.i.d.}{\sim}\mbox{Gamma}(-a,1/c)$. \vspace{-2mm}\subsection{Normalized random measures }\label{NRMI} A NRMI model \citep{regazzini2003distributional} is a normalized completely random measure $$\widetilde{G}=G/G(\Omega)$$ where $G(\Omega)=\sum_{k=1}^{K} r_{k}$ is the total random mass, which is required to be finite and strictly positive. Note that the strict positivity of $G(\Omega)$ implies that $\nu^+=\infty$ and hence $K=\infty$ a.s. \citep{regazzini2003distributional,BeyondDP}. 
For MCMC inference, following \citet james2009posterior}, a specific auxiliary variable $T>0$, with $ p_T(t\,|\,n,G(\Omega)) =\mbox{Gamma}[n,1/G(\Omega)] $, can be introduced to yield a fully factorized likelihood, stimulating the development of a number of posterior simulation algorithms including \citet{griffin2011posterior,barrios2012modeling}, and \citet favaromcmc}. Marginalizing out $G$ and then $T$ from that fully factorized likelihood leads to an EPPF of $\Pi$ \citep{pitman2003poisson,csp,lijoi2007controlling}. However, the prediction rule of the EPPF may not be easy to calculate. \section{Proofs} \begin{proof}[Proof for Theorem \ref{thm:compoundPoisson}] Let us consider the process $X_G$, conditional on $G$, given by $$X_G(A)=\sum \nolimits_{k} n_k\,\delta(\omega_k\in A).$$ Now it is easy to see that $$\mathbb{E}[\exp\{-\phi X_G(A)\}\,|\,G]=\exp\{-G(A)(1-e^{-\phi})\},$$ and using the well known result for homogeneous L\'evy processes, we have \vspace{0mm}\begin{equation} \mathbb{E}[\exp\{-\lambda G(A)\}]=\exp\left\{-G_0(A)\,\int_0^\infty \left[1-e^{-\lambda r}\right]\,\rho(dr)\right\}.\label{one} \vspace{0mm}\end{equation} Now, the key observation is the following identity: \vspace{0mm}\begin{equation} 1-e^{-(1-e^{-\phi})r}=1-e^{-r}\sum_{j=0}^\infty \frac{r^j}{j!}e^{-\phi j}=(1-e^{-r})-e^{-r}\sum_{j=1}^\infty \frac{r^j}{j!}e^{-\phi j} = \sum_{j=1}^\infty \frac{r^je^{-r}}{j!}(1-e^{-\phi j}).\label{eq:Iden} \vspace{0mm}\end{equation} Let us put this to one side for now and consider the model for $\tilde{X}$ given by $$\tilde{X}(A)=\sum_{k=1}^{l} n_k\,\delta(\omega_k\in A)$$ with $l\sim\mbox{Poisson}[\gamma G_0(\Omega)]$ for some non-negative $\gamma$ and independently $P(n_k=j)=\pi_j$ for some $\pi_j\leq 1$ and $j\in\{1,2,\ldots\}$. Now given $l$, we have $$\mathbb{E}[ \exp\{-\phi \tilde{X}(A)\}|l]=\prod_{k=1}^{l} \mathbb{E} [\exp\{-\phi n_k\,\delta(\omega_k\in A)\}]$$ and each of these expectations is given by $$\psi=\sum_{j=1}^\infty e^{-\phi j}\pi_j.$$ Thus $$\mathbb{E}[\exp\{-\phi \tilde{X}(A)\}] \exp\{-\gamma\, G_0(A)\, (1-\psi)\}$$ which is given by \vspace{0mm}\begin{equation} \exp\left[-\gamma\,G_0(A)\, \left(1-\sum_{j=1}^\infty e^{-\phi j}\,\pi_j\right)\right].\label{two} \vspace{0mm}\end{equation} Comparing (\ref{one}) and (\ref{two}) we see that we have a match when $$\gamma=\int_0^\infty (1-e^{-r})\,\rho(dr)$$ and $$\pi_j=\frac{\int_0^\infty r^j\,e^{-r}\,\rho(dr)}{j! 
\gamma}\,,$$ and note that it is easy to verify that $$\sum_{j=1}^\infty \pi_j=1.$$ \end{proof} \begin{proof}[Proof for Corollary \ref{cor:compoundPoisson}] Using \eqref{eq:Iden} and \eqref{two}, we have \begin{align} \mathbb{E}[\exp\{-\phi {X}(A)\}] &= \exp\left\{-\gamma\,G_0(A)\, \left[1-\sum_{j=1}^\infty e^{-\phi j}\,\pi_j\right]\right\}\notag\\ &= \exp\left[-\,G_0(A)\, \int_0^\infty { \bigg(1-e^{-r}- \sum_{j=1}^\infty e^{-\phi j}\, \frac{r^j\,e^{-r}}{j!}\bigg)} \rho (dr) \right]\notag\\ &= \exp\left\{-G_0(A)\, \int_0^\infty \sum_{j=1}^\infty (1-e^{-\phi j })\frac{r^j\,e^{-r}}{j!}\,\rho(dr) \right\}.\notag \end{align} Substituting the definition of the L\'evy measure $\nu(dnd\omega)$ in Corollary 2 into \eqref{eq:Laplace0}, we have \begin{align} \mathbb{E}[\exp\{-\phi {X}(A)\}] & = \exp\left\{-\int_{\mathbb{R}_+\times A} \, \sum_{j=1}^\infty (1-e^{-\phi j }) \int_0^\infty \frac{r^j\,e^{-r}}{j!}\,\rho(dr)~ \delta_j(dn) G_0(d\omega) \right\}\notag\\ & = \exp\left\{-G_0(A) \, \sum_{j=1}^\infty (1-e^{-\phi j }) \int_0^\infty \frac{r^j\,e^{-r}}{j!}\,\rho(dr) \right\}.\notag \end{align} The proof is complete by changing the order of the summation and integration. \end{proof} \begin{proof}[Proof for Corollary \ref{cor:m_i}] Since $\sum_{i=1}^\infty r^{i}e^{-r}/i! = 1-e^{-r}$, we can express the joint distribution of $\mathcal{M}$ and the population size $n$ as \begin{align} p(\mathcal{M},n\,|\,\gamma_0,\rho) &= \frac{n!}{\prod_{i=1}^n(i!)^{m_i}m_i!}p(\boldsymbol{z}\,|\,n,\gamma_0,\rho) p_N(n\,|\, \gamma_0,\rho)\notag\\ &=\exp\left\{\gamma_0\int_{0}^\infty(e^{-r}-1)\rho(dr)\right\} \prod_{i=1}^{n} \left(\frac{\gamma_0\int_0^\infty r^{i} e^{-r} \rho(dr)}{i!}\right)^{m_i} \frac{1}{m_i!}\notag\\ &=\left\{\prod_{i=1}^\infty \mbox{Poisson}\left(m_i; \frac{\gamma_0\int_0^\infty r^{i} e^{-r} \rho(dr)}{i!}\right)\right\} \times \delta\left(n=\sum_{i=1}^\infty i m_i\right). \notag \end{align} Therefore, we can generate each $m_i$ independently from a Poisson distribution. The stick-breaking construction to generate $\mathcal{M}$ directly follows the relationships between the Poisson, multinomial, and binomial distributions. 
\end{proof} \begin{proof}[Proof for Corollary \ref{thm:predict}] This follows directly from Bayes' rule, since $p(z_i\,|\,\boldsymbol{z}^{-i},n,\gamma_0,\rho) = \frac{p(z_i,\boldsymbol{z}^{-i},n\,|\,\gamma_0,\rho)}{p(\boldsymbol{z}^{-i},n\,|\,\gamma_0,\rho)}$, where $$p(z_i,\boldsymbol{z}^{-i},n\,|\,\gamma_0,\rho)=\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\hfill$ $$n^{-1\,} p(\boldsymbol{z}^{-i},n-1\,|\,\gamma_0,\rho)\,\left[\gamma_0\int_0^\infty re^{-r}\rho(dr)\,{\bf 1}(z_i=l^{-i}+1) \,+ \,\sum_{k=1}^{l^{-i}} \frac{\int_0^\infty r^{n_k^{-i}+1} e^{-r} \rho(dr)}{\int_0^\infty r^{n_k^{-i}} e^{-r} \rho(dr)} {\bf 1}(z_i=k) \right].\notag$$ Marginalizing out the $z_i$ from $p(z_i,\boldsymbol{z}^{-i},n\,|\,\gamma_0,\rho)$ we have \vspace{0mm}\begin{eqnarray} &p(\boldsymbol{z}^{-i},n\,|\,\gamma_0,\rho) n^{-1}\,p(\boldsymbol{z}^{-i},n-1\,|\,\gamma_0,\rho)\left[{\gamma_0\int_0^\infty re^{-r}\rho(dr)+ \sum_{k=1}^{l^{-i}} \frac{\int_0^\infty r^{n_k^{-i}+1} e^{-r} \rho(dr)}{\int_0^\infty r^{n_k^{-i}} e^{-r} \rho(dr)} }\right].\notag \vspace{0mm}\end{eqnarray} \end{proof} \vspace{-4mm}\section{Derivations for the generalized negative binomial process} Marginalizing out $\lambda$ from $n| \lambda\sim\mbox{Poisson}(\lambda)$ with $\lambda\sim{{}}\mbox{g}\Gamma\mbox{P}[\gamma_0,a,p/(1-p)]$, leads to a generalized NB distribution; $n\sim\mbox{gNB}(\gamma_0,a,p)$, with shape parameter $\gamma_0$, discount parameter $a<1$, and probability parameter $p$. The probability generating function (PGF) is given by $$ \mathbb{E}[t^n] = \mathbb{E}[\mathbb{E}[t^n\,|\,\lambda]] = \exp\left\{-\frac{\gamma_0[(1-pt)^a-(1-p)^a)]}{ap^a}\right\}, $$ the mean value is $\gamma_0\big[p/(1-p)\big]^{1-a}$ and the variance is $\gamma_0\big[p/(1-p)\big]^{1-a}(1-ap)/(1-p)$. The PGF was originally presented in \citet{willmot1988remark} and \citet{gerber1992generalized}. With the PGF written as $$\begin{array}{ll} \mathbb{E}(t^n) & =\exp\left\{\gamma_0\frac{(1-p)^a}{ap^a}\right\}\sum_{k=0}^\infty \frac{1}{k!} {\left(\frac{-\gamma_0(1-pt)^a}{ap^a}\right)^k} \\ \\ & =\exp\left\{\gamma_0\frac{(1-p)^a}{ap^a}\right\} \sum_{k=0}^\infty \frac{1}{k!} {\left(\frac{-\gamma_0}{ap^a}\right)^k} \sum_{j=0}^\infty \binom{ak}{j}(-pt)^j,\end{array}$$ we can derive the PMF as \vspace{0mm}\begin{eqnarray}\label{eq:f_M} p_N(n\,|\,\gamma_0,a,p) = \frac{p^n}{n!}e^{{\gamma_0}\frac{(1-p)^a}{ap^a}} \sum_{k=0}^\infty \frac{1}{k!}{\left(-\frac{\gamma_0}{ap^a}\right)^k} \frac{\Gamma(n-ak)}{\Gamma(-ak)}, ~n\in\{0,1,\ldots\}. \vspace{0mm}\end{eqnarray} We can also generate $n\sim{{}}\mbox{gNB}(\gamma_0,a,p)$ from a compound Poisson distribution, as $ n=\sum_{k=1}^l n_k$, with the $(n_k)$ independent from $\mbox{TNB}(a,p)$, and $l\sim\mbox{Poisson}\big(\frac{\gamma_0(1-(1-p)^a)}{ap^a}\big), $ where $\mbox{TNB}(a,p)$ denotes a truncated NB distribution, with PGF $\mathbb{E}[t^{u}] = \frac{1-(1-pt)^a}{1-(1-p)^a}$ and PMF \vspace{0mm}\begin{eqnarray}\label{eq:TNB} p_U(u|a,p)= \frac{\Gamma(u-a)}{u!\Gamma(-a)}\frac{p^u(1-p)^{-a}}{1-(1-p)^{-a}},~u\in\{1,2,\ldots\}. \vspace{0mm}\end{eqnarray} Note that as $a\rightarrow 0$, $u\sim\mbox{TNB}(a,p)$ becomes a logarithmic distribution \citep{LogPoisNB} with PMF $p_U(u|p)=\frac{-1}{\ln(1-p)}\frac{p^u}{u}$ and $n\sim\mbox{gNB}(\gamma_0,a,p)$ becomes a NB distribution; $n\sim\mbox{NB}(\gamma_0,p)$. The truncated NB distribution with $0<a<1$ is the extended NB distribution introduced in \citet{engen1974species}. Here we provide a useful identity which will be used later in this section. 
Denote by $\sum_{*}$ as the summation over all sets of positive integers $(n_1,\ldots,n_l)$ with ${\sum_{k=1}^l n_k = n}$. We call $n\sim\mbox{SumTNB}(l,a,p)$ as a sum-truncated NB distributed random variable that can be generated via $n=\sum_{k=1}^l n_k, ~n_k\sim\mbox{TNB}(a,p)$. Using both (\ref{eq:TNB}) and $$ \left[\frac{1-(1-pt)^a}{1-(1-p)^a}\right]^{l}= \frac{ \sum_{k=0}^l \binom{l}{k} (-1)^k \sum_{j=0}^\infty\binom{ak}{j} (-pt)^j }{ [1-(1-p)^a]^l}, $$ we may express the PMF of the sum-truncated NB distribution as $$p_N(n|l,a,p) = \sum_{*} \prod_{k=1}^l {\frac{\Gamma(n_k-a)}{n_k!\Gamma(-a)} \frac{p^{n_k}(1-p)^{-a}}{1-(1-p)^{-a}} =\frac{p^n}{ [1-(1-p)^a]^l} {\sum_{k=0}^l (-1)^k \binom{l}{k} \frac{\Gamma(n-ak)}{n!\Gamma(-ak)} } $$ leading to the identity \begin{align}\label{eq:gStirling} S_a(n,l) = \frac{n!}{l!}\sum_{*} \prod_{k=1}^l \frac{\Gamma(n_k-a)}{n_k!\Gamma(1-a)}= \frac{1}{l!a^{l}}\sum_{k=0}^l (-1)^k \binom{l}{k} \frac{\Gamma(n-ak)}{\Gamma(-ak)}, \end{align} where $S_a(n,l)$ can be recursively calculated via $S_a(n,1)={\Gamma(n-a)}/{\Gamma(1-a)}$, $S_a(n,n)=1$ and $S_a(n+1,l) = (n-al)S_a(n,l)+S_a(n,l-1)$. Multiplying $S_a(n,l)$ by $a^{-l}$ leads to generalized Stirling numbers \citep{charalambides2005combinatorial,csp}. Note that when $-ak$ is a nonnegative integer, $\Gamma(-ak)$ is not well defined but $\Gamma(n-ak)/\Gamma(-ak)=\prod_{i=0}^{n-1}(i-ak)$ is still well defined. We notice that the generalized NB distribution could be matched to the the power variance mixture distribution derived in \citet{hougaard1997analysis}, who attributed the key difficulty in applying this distribution to the complicated PMF. The EPPF is the ECPF in (\ref{eq:f_Z_M}) divided by the marginal distribution of $n$ in (\ref{eq:f_M}), given by \begin{align} \label{eq:EPPF1} p(\boldsymbol{z}\,|\,n,\gamma_0,a,p) &= p_n(z_1,\ldots,z_n\,|\,n) =\frac{e^{-\frac{\gamma_0}{ap^{a}}} }{ \sum_{k=0}^\infty \frac{1}{k!}{\left(-\frac{\gamma_0}{ap^a}\right)^k}\frac{\Gamma(n-ak)}{\Gamma(-ak)} } \gamma_0^{l{}} p^{-al{}} \prod_{k=1}^{l{}}\frac{\Gamma(n_k-a)}{\Gamma(1-a)}. \end{align} Using the EPPF in (\ref{eq:EPPF}) and the identity in (\ref{eq:gStirling}), the conditional distribution of the number of clusters $l$ in a sample of size $n$ can be expressed as \begin{align}\label{eq:f_L2} p_L(l\,|\,n,\gamma_0,a,p)& = \frac{1}{l!}\sum_{*}\frac{n!}{\prod_{k=1}^l n_k!}p(\boldsymbol{z}\,|\,n,\gamma_0,a,p =\frac{ \gamma_0^l p^{-al} S_a(n,l)}{e^{\frac{\gamma_0}{ap^{a}}} \sum_{k=0}^\infty \frac{1}{k!} {\left(\frac{-\gamma_0}{ap^a}\right)^k} \frac{\Gamma(n-ak)}{\Gamma(-ak)}} , \end{align} which, since $\sum_{l=0}^n p_L(l\,|\,n,\gamma_0,a,p)=1$, further leads to identity \vspace{0mm}\begin{equation} e^{\frac{\gamma_0}{ap^{a}}} \sum_{k=0}^\infty \frac{1}{k!} {\left(\frac{-\gamma_0}{ap^a}\right)^k} \frac{\Gamma(n-ak)}{\Gamma(-ak)} = \sum_{l=0}^n \gamma_0^l p^{-al} S_a(n,l). \notag \vspace{0mm}\end{equation} Applying this identity on (\ref{eq:f_M}), (\ref{eq:EPPF1}) and (\ref{eq:f_L2}) lead to (\ref{eq:f_M0}), (\ref{eq:EPPF}) and (\ref{eq:f_L2_0}). 
\begin{cor} The distribution of the number of clusters in $z_{1:i}$ in a population of size $n$ can be expressed as \begin{align}\label{eq:l_i} {p(l_{(i)}\,|\,n,\gamma_0,a,p)} &=p(l_{(i)}\,|\,i,\gamma_0,a,p)\frac{\sum_{\ell=0}^{i} \gamma_0^\ell p^{-a\ell} S_a(i,\ell)}{\sum_{\ell=0}^n \gamma_0^\ell p^{-a\ell}S_a(n,\ell)} R_{n,\gamma_0,a,p}(i,l_{(i)}),\notag\\ &=\frac{ \gamma_0^{l_{(i)}}p^{-al_{(i)}}S_a(i,l_{(i)})R_{n,\gamma_0,a,p}(i,l_{(i)})}{\sum_{\ell=0}^n \gamma_0^\ell p^{-a\ell}S_a(n,\ell)}. \end{align} \end{cor} This can be directly derived using (\ref{eq:SizeEPPF}) and the relationship between the EPPF and the distribution of the number of clusters. From this PMF, we obtain a useful identity \vspace{0mm}\begin{equation}\notag {\sum_{\ell=0}^n \gamma_0^\ell p^{-a\ell}S_a(n,\ell)} = \gamma_0p^{-a}R_{n,\gamma_0,a,p}(1,1), \vspace{0mm}\end{equation} which could be used to calculate the PMF of the generalized NB distribution in (\ref{eq:f_M0}) and the EPPF in (\ref{eq:EPPF}) without the need to compute the generalized Stirling numbers $a^{-l}S_a(n,l)$. \begin{cor}[Sequential Construction] Since $p(z_{i+1} \,|\,z_{1:i},n,\gamma_0,a,p) = \frac{p(z_{1:i+1} \,|\,n,\gamma_0,a,p)} {p(z_{1:i} \,|\,n,\gamma_0,a,p)}$, conditioning on the population size $n$, the sequential prediction rule of the generalized Chinese restaurant sampling formula $\boldsymbol{z}\,|\,n\sim\emph{\mbox{gCRSF}}(n,\gamma_0,a,p) $ can be expressed as \vspace{0mm}\begin{equation}\label{eq:PredictRulej} P(z_{i+1} = k\,|\,z_{1:i},n,\gamma_0,a,p) = \begin{cases}\vspace{3mm} (n_{k,(i)} -a) \frac{R_{n,\gamma_0,a,p}(i+1, ~l_{(i)})}{R_{n,\gamma_0,a,p}(i,~ l_{(i)})}, & {\mbox{for }} k=1,\ldots,l_{(i)};\\ \gamma_0 p^{-a}\frac{R_{n,\gamma_0,a,p}(i+1, ~l_{(i)}+1)}{R_{n,\gamma_0,a,p}(i, ~l_{(i)})}, & {\mbox{if } }k=l_{(i)}+1; \end{cases} \vspace{0mm}\end{equation} where $i=1,\ldots,n-1$.\end{cor} With this sequential prediction rule, we can construct $\Pi_{i+1}$ from $\Pi_i$ in a population of size $n$ by assigning element $(i+1)$ to $A_{z_{i+1}}$. When $a=0$, this sequential prediction rule becomes the same as that of a Chinese restaurant process with concentration parameter $\gamma_0$. \begin{cor} The distribution of $z_{i+1\,:\,n}$, given $z_{1:i}$, the population size $n$, and the model parameters $\gamma_0$, $a$ and $p$, can be expressed as \vspace{0mm}\begin{equation} \label{eq:extrapolate} p(z_{i+1\,:\,n}\,|\,z_{1:i},n,\gamma_0,a,p) = \frac{ \gamma_0^{l_{(n)}-l_{(i)}} p^{-a(l_{(n)}-l_{(i)})}}{ R_{n,\gamma_0,a,p}(i,l_{(i)}) } \prod_{k=1}^{l_{(i)}} \frac{\Gamma(n_{k,(n)}-a)}{\Gamma(n_{k,(i)}-a)} \prod_{k=l_{(i+1)}}^{l_{(n)}} \frac{\Gamma(n_{k,(n)}-a)}{\Gamma(1-a)} . \vspace{0mm}\end{equation} \end{cor} \vspace{-4mm}\section{Large $n$ asymptotics for $l_{(n)}$}\label{sec_asym_1} For $a=0$ it is known from \citet{hollander73} that, as $n\rightarrow+\infty$, $l_{(n)}/\log n$ converges weakly to $\gamma_{0}$. Let us consider the case $a\in(0,1)$. We start by recalling a representation for $\sum_{1\leq l\leq n}(xa)^{l}S_{a}(n,l)$, for any positive $x$. Specifically, let $f_{a}$ denote the density function of a positive stable random variable $X$ with index $a\in(0,1)$, that is $\mathbb{E}[\exp\{-\lambda X\}]=\exp\{-\lambda^{a}\}$. Then, along lines similar to the proof of Proposition 1 in \citet{favaro15}, one may show that \begin{equation}\label{rap_gen} \sum_{l=1}^{n}(ax)^{l}S_{a}(n,l)=\exp\{xa\}(xa)^{n/a}\int_{0}^{+\infty}y^{n}\exp\{-(xa)^{1/a}y\}f_{a}(y)dy. 
\end{equation} In order to study the large $n$ asymptotic behavior of $l_{(n)}$, we consider its moment generating function, and we use the representation \eqref{rap_gen}. Specifically, we can write \begin{align*} \mathbb{E}[\text{e}^{\lambda l_{(n)}}]&=\sum_{l=0}^{n}\frac{\left(\frac{\text{e}^{\lambda}\gamma_{0}}{p^{a}}\right)^{l}S_{a}(n,l)}{\sum_{l=0}^{n}\left(\frac{\gamma_{0}}{p^{a}}\right)^{l}S_{a}(n,l)}\\ &=\frac{\exp\left\{\frac{\text{e}^{\lambda}\gamma_{0}}{ap^{a}}\right\}\left(\text{e}^{\lambda}\right)^{n/a}}{\exp\left\{\frac{\gamma_{0}}{ap^{a}}\right\}}\frac{\int_{0}^{+\infty}y^{n}\exp\left\{-\left(\frac{\text{e}^{\lambda}\gamma_{0}}{ap^{a}}\right)^{1/a}y\right\}f_{a}(y)dy}{\int_{0}^{+\infty}y^{n}\exp\left\{-\left(\frac{\gamma_{0}}{ap^{a}}\right)^{1/a}y\right\}f_{a}(y)dy}. \end{align*} For large $n$, the ratio of integrals behaves like $\exp\{-n\lambda/a+\lambda\}$. This can be easily verified by using the expression for $f_{a}$, and then solving the integrals. Therefore one obtains $\mathbb{E}[\exp\{{\lambda l_{(n)}}\}]\rightarrow\exp\{\lambda\}\exp\{\gamma_{0}(\exp\{\lambda\}-1)/ap^{a}\}$, as $n\rightarrow+\infty$. This implies that for any $a\in(0,1)$, as $n\rightarrow+\infty$, $l_{(n)}$ converges weakly to $1+X$ where $X$ is a Poisson random variable with parameter $\gamma_{0}/ap^{a}$. Now we consider the case $a=-t$, for $t=1,2,\ldots$ We still use the moment generating function of $l_{(n)}$. Let us define $c_{n}(a)=n^{-a/(1-a)}$, for $a=-t$ with $t=1,2,\ldots$. We can write the moment generating function of $l_{(n)}/c_{n}(-t)$ as \begin{align*} \mathbb{E}[\text{e}^{\lambda\frac{l_{(n)}}{c_{n}(-t)}}]&=\sum_{l=0}^{n}\frac{\left(\frac{\text{e}^{\frac{\lambda}{c_{n}}}\gamma_{0}}{(-t)p^{-t}}\right)^{l}S^{\ast}_{-t}(n,l)}{\sum_{l=0}^{n}\left(\frac{\gamma_{0}}{p^{-t}}\right)^{l}S^{\ast}_{-t}(n,l)}\\ &=\sum_{l=0}^{n}\frac{\left(\frac{\text{e}^{\frac{\lambda}{c_{n}(-t)}}\gamma_{0}}{(-t)p^{-t}}\right)^{l}\frac{1}{l!}\sum_{i=0}^{l}(-1)^{i}{l\choose i}\frac{\Gamma(ti+n)}{\Gamma(ti)}}{\sum_{l=0}^{n}\left(\frac{\gamma_{0}}{(-t)p^{-a}}\right)^{l}\frac{1}{l!}\sum_{i=0}^{l}(-1)^{i}{l\choose i}\frac{\Gamma(ti+n)}{\Gamma(ti)}}\\ &=\sum_{i=0}^{n}\frac{(-1)^{i}\frac{\Gamma(ti+n)}{\Gamma(ti)}\frac{1}{i!}\left(\frac{\text{e}^{\frac{\lambda}{c_{n}(-t)}}\gamma_{0}}{(-t)p^{-t}}\right)^{i}\sum_{l=i}^{n}\left(\frac{\text{e}^{\frac{\lambda}{c_{n}(-t)}}\gamma_{0}}{(-t)p^{-t}}\right)^{l-i}\frac{1}{(l-i)!}}{\sum_{i=0}^{n}(-1)^{i}\frac{\Gamma(ti+n)}{\Gamma(ti)}\frac{1}{i!}\left(\frac{\gamma_{0}}{(-t)p^{-t}}\right)^{i}\sum_{l=i}^{n}\left(\frac{\gamma_{0}}{(-t)p^{-t}}\right)^{l-i}\frac{1}{(l-i)!}}. \end{align*} Accordingly, for large $n$ we obtain the following approximated moment generating function \begin{align*} \mathbb{E}[\text{e}^{\lambda\frac{L}{c_{n}(-t)}}]&\sim\sum_{i=1}^{n}\frac{\frac{n^{ti}}{i!\Gamma(ti)}\left(\frac{\text{e}^{\frac{\lambda}{n^{t/(t+1)}}}\gamma_{0}}{tp^{-t}}\right)^{i}}{\sum_{i=1}^{n}\frac{n^{ti}}{i!\Gamma(ti)}\left(\frac{\gamma_{0}}{tp^{-t}}\right)^{i}}\\ &\sim\frac{\text{e}^{\frac{\lambda}{n^{t/(t+1)}}}F(-;\frac{t+1}{t},\frac{t+2}{t},\ldots,\frac{t+t-1}{t},2;\frac{\text{e}^{\frac{\lambda}{n^{t/(t+1)}}}\gamma_{0}n^{t}}{t^{t+1}p^{-t}})}{F(-;\frac{t+1}{t},\frac{t+2}{t},\ldots,\frac{t+t-1}{r},2;\frac{\gamma_{0}n^{t}}{t^{t+1}p^{-t}})} \end{align*} where $F$ denotes the generalized hypergeometric function. We can make use of asymptotic results for $F$ in Section 5.7 and 5.10 of \citet{luke69} and Section 5.9 of \citet{luke75}. 
In particular, $\mathbb{E}[\text{e}^{\lambda l_{(n)}/c_{n}(-t)}]\rightarrow\exp\{\lambda(t^{-1}\gamma_{0}p^{t})^{1/(t+1)}\}$. This implies that for any $a=-t$ with $t=1,2,\ldots$, as $n\rightarrow+\infty$, $l_{(n)}/c_{n}(-t)$ converges weakly to $t^{-1}(\gamma_{0}p^{t})^{1/(t+1)}$. \vspace{-4mm}\section{Large $n$ asymptotics for $M_{i,n}$}\label{sec_asym_2} For $a=0$ it is known from \citet{ewens1972sampling} that, as $n\rightarrow+\infty$, $M_{i,n}$ converges weakly to a Poisson random variable with parameter $\gamma_{0}/i$. In order to prove the limiting behavior of $M_{i,n}$, for any $a<1$, we make use of the descending factorial moment of order $r$ of $M_{i,n}$. This moment can be easily computed, and it corresponds to \begin{align}\label{eq_freq} &\mathbb{E}\left[\prod_{k=0}^{r-1}(M_{i,n}-k)\right]\\ &\notag\quad=\prod_{k=0}^{ir-1}(n-k)\left[{a\choose i}\right]^{r}\left(-\frac{\gamma_{0}}{p^{a}a}\right)^{r}(-1)^{ir}\frac{\sum_{j=0}^{n-ir}\left(\frac{\gamma_{0}}{p^{a}}\right)^{j}S_{a}(n-ir,j)}{\sum_{j=0}^{n}\left(\frac{\gamma_{0}}{p^{a}}\right)^{j}S_{a}(n,j)}. \end{align} Let us consider the case $a\in(0,1)$. As for the case of $l_{(n)}$, we use the representation \eqref{rap_gen}. In particular, \begin{align*} &\mathbb{E}\left[\prod_{k=0}^{r-1}(M_{i,n}-k)\right]\\ &\quad=\prod_{k=0}^{ir-1}(n-k)\left[{a\choose i}\right]^{r}\left(-\frac{\gamma_{0}}{p^{a}a}\right)^{r}(-1)^{ir}\frac{\sum_{j=0}^{n-ir}\left(\frac{\gamma_{0}}{p^{a}}\right)^{j}S_{a}(n-ir,j)}{\sum_{j=0}^{n}\left(\frac{\gamma_{0}}{p^{a}}\right)^{j}S_{a}(n,j)}\\ &\quad=\prod_{k=0}^{ir-1}(n-k)\left[{a\choose i}\right]^{r}\left(-\frac{\gamma_{0}}{p^{a}a}\right)^{r}(-1)^{ir}\\ &\quad\quad\times\frac{\left(\frac{\gamma_{0}}{ap^{a}}\right)^{-ir/a}\int_{0}^{+\infty}y^{n-ir}\exp\left\{-\left(\frac{\gamma_{0}}{ap^{a}}\right)^{1/a}y\right\}f_{a}(y)dy}{\int_{0}^{+\infty}y^{n}\exp\left\{-\left(\frac{\gamma_{0}}{ap^{a}}\right)^{1/a}y\right\}f_{a}(y)dy}.\end{align*} Again, we can use the expression for the $a$-stable density function $f_{a}$ and then solving the integrals in the last expression. In particular, it can be verified the following asymptotics \begin{displaymath} \prod_{k=0}^{ir-1}(n-k)\frac{\int_{0}^{+\infty}y^{n-ir}\exp\left\{-\left(\frac{\gamma_{0}}{ap^{a}}\right)^{1/a}y\right\}f_{a}(y)dy}{\int_{0}^{+\infty}y^{n}\exp\left\{-\left(\frac{\gamma_{0}}{ap^{a}}\right)^{1/a}y\right\}f_{a}(y)dy}\rightarrow\left(\frac{\gamma_{0}}{ap^{a}}\right)^{ir/a} \end{displaymath} as $n\rightarrow+\infty$. Accordingly, we obtain the following asymptotic descending factorial moments \begin{align*} &\mathbb{E}\left[\prod_{k=0}^{r-1}(M_{i,n}-k)\right]\rightarrow\left[{a\choose i}\right]^{r}\left(-\frac{\gamma_{0}}{p^{a}a}\right)^{r}(-1)^{ir}=\left(\frac{a\frac{\Gamma(i-a)}{\Gamma(1-a)}}{i!}\frac{\gamma_{0}}{ap^{a}}\right)^{r}. \end{align*} This implies that for any $a\in(0,1)$, as $n\rightarrow+\infty$, $M_{i,n}$ converges weakly to a Poisson random variable with parameter $\Gamma(i-a)\gamma_{0}p^{-a}/i!\Gamma(1-a)$. Now we consider the case $a=-t$, for $t=1,2,\ldots,$. We still use the descending factorial moments. 
In particular, \begin{align*} &\mathbb{E}\left[\prod_{k=0}^{r-1}(M_{i,n}-k)\right]\\ &\quad=\prod_{k=0}^{ir-1}(n-k)\left[{-t\choose i}\right]^{r}\left(-\frac{\gamma_{0}}{p^{-t}(-t)}\right)^{r}(-1)^{ir}\frac{\sum_{j=0}^{n-ir}\left(\frac{\gamma_{0}}{p^{-t}}\right)^{j}S_{-t}(n-ir,j)}{\sum_{j=0}^{n}\left(\frac{\gamma_{0}}{p^{-t}}\right)^{j}S_{-t}(n,j)}\\ &\quad=\prod_{k=0}^{ir-1}(n-k)\left[{-t\choose i}\right]^{r}\left(-\frac{\gamma_{0}}{p^{-t}(-t)}\right)^{r}(-1)^{ir}\\ &\quad\quad\times\frac{\sum_{h=0}^{n-ir}(-1)^{h}\frac{\Gamma(th+n-ir)}{\Gamma(th)}\frac{1}{h!}\left(\frac{\gamma_{0}}{(-t)p^{-t}}\right)^{h}\sum_{j=h}^{n-ir}\left(\frac{\gamma_{0}}{(-t)p^{-t}}\right)^{j-h}\frac{1}{(j-h)!}}{\sum_{h=0}^{n}(-1)^{h}\frac{\Gamma(th+n)}{\Gamma(th)}\frac{1}{h!}\left(\frac{\gamma_{0}}{(-t)p^{-t}}\right)^{h}\sum_{j=h}^{n}\left(\frac{\gamma_{0}}{(-t)p^{-t}}\right)^{j-h}\frac{1}{(j-h)!}}. \end{align*} Accordingly, for large $n$ we obtain the following approximated descending factorial moments \begin{align*} &\mathbb{E}\left[\prod_{k=0}^{r-1}(M_{i,n}-k)\right]\\ &\sim\left[{-t\choose i}\right]^{r}\left(-\frac{\gamma_{0}}{p^{-t}(-t)}\right)^{r}(-1)^{ir}\frac{\sum_{h=0}^{n-ir}\frac{n^{th}}{h!\Gamma(th)}\left(\frac{\gamma_{0}}{tp^{-t}}\right)^{h}}{\sum_{h=0}^{n}\frac{n^{th}}{h!\Gamma(th)}\left(\frac{\gamma_{0}}{tp^{-t}}\right)^{i}}\\ &\rightarrow\left[{-t\choose l}\right]^{r}\left(-\frac{\gamma_{0}}{p^{-t}(-t)}\right)^{r}(-1)^{ir}=\left(\frac{\frac{\Gamma(t+i)}{1+t}\gamma_{0}p^{t}}{i!}\right)^{r}. \end{align*} This implies that for any $a=-t$ with $t=1,2,\ldots$, as $n\rightarrow+\infty$, $M_{i,n}$ converges weakly to a Poisson random variable with parameter $\Gamma(t+i)\gamma_{0}p^{t}/i!\Gamma(1+t)$. \vspace{-4mm}\section{MCMC inference} \subsection{MCMC for the generalized negative binomial process} For the gNBP, the ECPF in (\ref{eq:f_Z_M}) defines a fully factorized likelihood for $\gamma_0$, $a$ and $p$. We sample $\boldsymbol{z}$ using either (\ref{eq:PredictRule}) or (\ref{eq:PredictRulej}). With a gamma prior $\mbox{Gamma}(e_0,1/f_0)$ placed on $\gamma_0$, we have \vspace{0mm}\begin{eqnarray} (\gamma_0\,|\, -)\sim\mbox{Gamma}\bigg(e_0 + l{},\frac{1}{f_0+ \frac{1-(1-p)^a}{ap^a}}\bigg). \vspace{0mm}\end{eqnarray} As $a\rightarrow 0$, we have $ (\gamma_0\,|\, -)\sim\mbox{Gamma}\left(e_0 + l{},\frac{1}{f_0- \ln(1-p)}\right). $ This paper sets $e_0=f_0=0.01$. Since $a<1$, we have $\tilde{a}=\frac{1}{1+(1-a)} \in(0,1)$. With a uniform prior placed on $\tilde{a}$ in $(0,1)$ and the likelihood of gNBP in (\ref{eq:f_Z_M}), we use the griddy-Gibbs sampler \citep{griddygibbs} to sample $a$ from a discrete distribution \vspace{0mm}\begin{equation} P(a\,|\, -)\propto e^{-\gamma_0\frac{1-(1-p)^a}{ap^{a}}} p^{-al{}} \prod_{k=1}^{l{}} \frac{\Gamma(n_k-a)}{{\Gamma(1-a)} } \vspace{0mm}\end{equation} over a grid of points $\frac{1}{1+(1-a)}=0.0001,0.0002,\ldots,0.9999$. We place a uniform prior on $p$ in $(0,1)$. When $a\rightarrow 0$, the likelihood of the gNBP in (\ref{eq:f_Z_M}) becomes proportional to $p^{m}(1-p)^{\gamma_0}$, thus we have $ (p\,|\, -)\sim\mbox{Beta}(1+n,1+\gamma_0). $ When $a\neq 0$, we use the griddy-Gibbs sampler to sample $p$ from a discrete distribution \vspace{0mm}\begin{equation} P(p\,|\, -)\propto e^{-\gamma_0\frac{1-(1-p)^a}{ap^{a}}} p^{n-al{}} \vspace{0mm}\end{equation} over a grid of points $p=0.0001,0.0002,\ldots,0.9999$. 
\subsection{MCMC for the Pitman-Yor process} Given the mass parameter $\gamma_0$ and discount parameter $a\in[0,1)$, the EPPF of $(z_1,\ldots,z_i)$ for the Pitman-Yor process \citep csp} can be expressed as \begin{align} P(z_1,\ldots,z_i\,|\, \gamma_0,a) &= \frac{\Gamma(\gamma_0)}{\Gamma(i+\gamma_0)} \prod_{k=1}^{l_i} \frac{\Gamma(i_k-a)}{\Gamma(1-a)}[\gamma_0+(k-1)a]\notag\\ &= \frac{\Gamma(1+\gamma_0)}{\Gamma(i+\gamma_0)} (1-a)^{l_i}\left[\prod_{k=1}^{l_i} \frac{\Gamma(i_k-a)}{\Gamma(2-a)}\right]\left[ \prod_{k=1}^{l_i-1} (\gamma_0+ka)\right], \end{align} where $l_i$ represents the number of clusters in $\{z_{1},\ldots,z_i\}$. We set in the prior that $\gamma_0\sim\mbox{Gamma}(e_0,1/f_0)$ and $a\sim\mbox{Beta}(1,1)$. Following \citet{teh2006bayesian}, with auxiliary variables \begin{align} (p\,|\, i,\gamma_0) &\sim\mbox{Beta}(i-1,\gamma_0+1),\notag\\ (y_k \,|\, \gamma_0,a)&\sim\mbox{Bernoulli}\left(\frac{\gamma_0}{\gamma_0+ka}\right),~k\in\{1,\ldots,l_i-1\}, \label{eq:PY1} \end{align} we sample $\gamma_0$ as \vspace{0mm}\begin{equation} (\gamma_0\,|\, -) \sim\mbox{Gamma}\left(e_0 + \sum_{k=1}^{l_i-1} y_k,~\frac{1}{f_0- \ln(1-p)}\right), \vspace{0mm}\end{equation} and further with auxiliary variables \vspace{0mm}\begin{equation} (b_{kj}\,|\, a) \sim\mbox{Bernoulli}\left(\frac{j-1}{j-a}\right),~k\in\{1,\ldots,l_i\},~j\in\{2,\ldots,i_k-1\}, \vspace{0mm}\end{equation} we sample $a$ as \vspace{0mm}\begin{equation} (a\,|\, -)\sim\mbox{Beta}\left(1 +\sum_{k=1}^{l_i-1} (1-y_k), 1+ l + \sum_{k=1}^{l_i} \sum_{j=2}^{i_k-1} (1-b_{kj})\right). \vspace{0mm}\end{equation} We then use the prediction rule of the Pitman-Yor process as \vspace{0mm}\begin{equation} P(z_{i+1} = k\,|\, z_1,\ldots,z_i) = \begin{cases} \vspace{3mm} \displaystyle\frac{i_k-a}{i+\gamma_0} & \text{if } k\in\{1,\ldots,l_i\}, \\ \displaystyle\frac{\gamma_0+l_i a}{i+\gamma_0} & \text{if } k=l_i+1.\end{cases} \label{eq:PYend} \vspace{0mm}\end{equation} to sequentially sample $z_{i+1},\ldots,z_{n}$. Each Gibbs sampling iteration proceeds from \eqref{eq:PY1} to \eqref{eq:PYend}.
2024-02-18T23:40:28.913Z
2016-08-02T02:09:37.000Z
algebraic_stack_train_0000
2,526
17,372
proofpile-arXiv_065-12250
\section{Introduction}\label{intro} Let $F$ be a totally real number field and $F_\infty$ be a Galois extension such that $\mathrm{Gal}(F_\infty/F)$ is a pro-$p$, $p$-adic Lie group ${\mathcal G}$ for an odd prime $p$. We assume that the cyclotomic ${\mathbb Z}_p$-extension $\cy{F}$ is contained in $\inft{F}$. For every $n\in\mathbb N$, let $F_n$ be a finite Galois extension of $F$ contained in $F_\infty$ such that $F_n\subset F_{n+1}$ and $F_\infty=\cup_n F_n$. Over the totally real field $F$, consider a Hecke eigenform $f_0\in S_{\kappa}^{n.ord}(\mathfrak{N},\varepsilon_0;W)$ of weight $\kappa=(0,I)$ (see \S\ref{adelic-hmf} for a precise definition of these weights). Let $\rho_0$ be the representation of $\mathrm{Gal}(\overline F/F)$ that is associated to $f_0$. As $\mathrm{Gal}(F_\infty/F)$ is a pro-$p$, $p$-adic Lie group it is solvable and we can consider the base change $f_n$ of $f_0$ to the totally real field $F_n$. Then the representation $\rho_n:=\rho_{f_n}$ is isomorphic to the restriction $\rho_n:=\rho_0\mid_{\mathrm{Gal}_{F_n}}$. For each of these representations, we consider the deformations of the residual representations $\overline\rho_n$. Assuming the conditions {\bf (sf), (h1)-(h4)} which we present in Section 2.5 and also absolute irreducibility of the residual representation $\overline\rho_0$, a universal deformation ring $\mathcal R_{F_n}$ exists for $\overline\rho_n$ for all $n$. In this article, we study the deformation rings over the $p$-adic Lie extension $F_\infty$. This allows us to study the Selmer groups of the adjoint representations $\ad{\rho_0}$ defined over the $p$-adic Lie extension $F_\infty$. In fact, if $\boldsymbol\rho_0$ denotes the deformation of $\rho_0$, then we consider the Selmer groups of the adjoint $\ad{\boldsymbol\rho_0}$ along an irreducible component $\mathbb I$ of $\mathcal R_0$. Let ${\mathcal G}:=\mathrm{Gal}(\inft{F}/F)$ and ${\mathcal H}:=\mathrm{Gal}(\inft{F}/\cy{F})$. Then there is a natural action of the group ${\mathcal G}$ on these Selmer groups, making these Selmer groups modules over $\mathbb I[[{\mathcal G}]]$ in a natural way. We formulate a Main conjecture for Selmer groups of $\ad{\boldsymbol{\rho_0}}$ along the irreducible component $\mathbb I$. A Main conjecture for Galois representations attached to the ordinary Hida family was also formulated by Barth in his thesis \cite{barth}. Our formulation is slightly different from his formulation. A noncommutative generalization of the Main Conjecture of Iwasawa theory was presented in the paper \cite{cfksv} for Galois representations arising from motives which are ordinary at a prime $p$. A requirement for this formulation is that the Pontryagin dual of Selmer groups defined over $\inft{F}$ are in the category $\mathfrak{M}_{\mathcal H}({\mathcal G})$. This category $\mathfrak{M}_{\mathcal H}({\mathcal G})$ consists of finitely generated modules over ${\mathbb Z}_p[[{\mathcal G}]]$ and torsion with respect to a certain Ore Set (see Section 7). This Ore set is defined to be the set of all the elements of ${\mathbb Z}_p[[{\mathcal G}]]$ such that ${\mathbb Z}_p[[{\mathcal G}]]/x$ is a finitely generated module over ${\mathbb Z}_p[[{\mathcal H}]]$. It is conjectured in \cite{cfksv} that the Selmer groups defined over $\inft{F}$ attached to $p$-ordinary Galois representations are in the category $\mathfrak{M}_{\mathcal H}({\mathcal G})$. 
As a generalization, we consider the set $\sS$ which consists of elements $x\in\mathbb I[[\cG]]$ such that $\mathbb I[[\cG]]/x$ is a finitely generated module over $\mathbb I[[{\mathcal H}]]$. Then we consider the category $\mathfrak{M}_{\mathcal H}^\mathbb I[[\cG]]$ which consists of finitely generated modules over $\mathbb I[[{\mathcal G}]]$ which are $\sS$-torsion (see section \ref{non-commmutative} for more details). To formulate a noncommutative Main conjecture, we also require that the Selmer group of $\ad{\boldsymbol\rho_0}$ along $\mathbb I$ is in the category $\mathfrak{M}_{\mathcal H}^\mathbb I({\mathcal G})$. We state this as a conjecture and in fact, this is a straight forward generalization of the conjecture for the Selmer group of $\ad{\rho_0}$ (\cite{cfksv}). This conjecture still remains hard to understand even in the in the first non-trivial and crucial case, namely the case when ${\mathcal G}$ is a 2 dimensional $p$-adic Lie group. In this case, when the Selmer group for $\ad{\rho_0}$ has its $\mu$-invariant defined over the cyclotomic ${\mathbb Z}_p$-extension is zero, Theorem B, below seems to indicate the difficulty in trying to solve this problem as $\mathcal R_\infty$ is not noetherian. We confess that we began with the hope that we may be able to say something about $\mathcal R_\infty$. The main conjecture for Selmer group of $\ad{\boldsymbol\rho_0}$ along $\mathbb I$ when $F_\infty$ is the cyclotomic ${\mathbb Z}_p$-extension of $F$ was studied by Hida in many papers. However, for most of the parts in this article we refer to the book \cite{hida-hmf}. Our aim in this paper is to explore the noncommutative Iwasawa theory for a $p$-adic family of modular forms and also take Hida's results along a new direction. After formulating the Main conjecture for Selmer groups over $\mathbb I[[{\mathcal G}]]$, we also show that a suitable generalization of the strategy of Burns, Kato, Kakde and Hara can be used to prove the Main conjecture over $\mathbb I[[{\mathcal G}]]$. Their strategy has been successfully used to prove the main conjecture over totally real fields and for the trivial Galois representation. We also give generalizations of the torsion congruences which played a crucial role in the proof of the noncommutative Main conjecture over totally real fields. We first show a relation between the conjectures regarding the categories $\mathfrak{M}_{\mathcal H}({\mathcal G})$ and $\mathfrak{M}_{\mathcal H}^\mathbb I({\mathcal G})$. \begin{thmalph}[Theorem \ref{big-torsion-small-torsion}] Consider the representation $\boldsymbol{\rho}_\mathbb I:\mathrm{Gal}_F\longrightarrow\gl{\mathbb I}$ which arises from the irreducible component $\mathbb I$ and let $\phi_k:\mathbb I\longrightarrow\cO$ be a morphism of local algebras which give rise to a locally cyclotomic point $P$ of weight $k$. The dual Selmer group $\sgd{E_\infty}{\ad{{\rho}_\mathbb I}}$ is $\sS$-torsion if and only if $\sgd{E_\infty}{\ad{\rho_P}}$ is $S$-torsion. \end{thmalph} \begin{thmalph}[Prop \ref{not-noetherian}] Let $\inft{F}$ be a $p$-adic Lie extension of a totally real field $F$ such that ${\mathcal G}:=\mathrm{Gal}(\inft{F}/F)$ is a $p$-adic Lie group of dimension two. Let ${\mathcal H}:=\mathrm{Gal}(\inft{F}/\cy{F})$ and $\Gamma:=\mathrm{Gal}(\cy{F}/F)$. Then \begin{enumerate} \item the dual Selmer group $\Om{\mathcal R_\infty}{W}\otimes W$ is a finitely generated module over $W[[{\mathcal H}]]$, \item the ring $\mathcal R_\infty$ is not noetherian. 
\end{enumerate} \end{thmalph} Over the cyclotomic ${\mathbb Z}_p$-extension, results of Hida show that the noetherian property of $\cy\mathcal R$ is related to the vanishing of $\mu$-invariant of the dual Selmer group of $\ad{\rho}$. However, as a deviation from the cyclotomic theory, we come across the strange property that the ring $\mathcal R_\infty$ is not noetherian. Let $\mathbb I\cong\cO[[X_1,\cdots,X_r]]$, for some $r$, with $\cO$ unramified over ${\mathbb Z}_p$. and ${\mathcal G}$ a $p$-adic Lie group of dimension 1. Let $\Sigma({\mathcal G})$ be any set of rank 1 subquotients of ${\mathcal G}$ of the form $U^{ab}$ with $U$ an open subgroup of ${\mathcal G}$ that has the following property: \begin{description} \item[($\ast$)] For each Artin representation $\rho$ of ${\mathcal G}$, there is a finite subset $\{U^{ab}_i:i\in I\}$ of $\Sigma({\mathcal G})$ and for each index $i$ an integer $m_i$ and a degree one representation $\rho_i$ of $U^{ab}$ such that there is an isomorphism of virtual representations $\rho\cong\sum_{i\in I}m_i.\Ind{{\mathcal G}}{U_i}{\Ind{U_i}{U_i^{ab}}{\rho_i}}$. \end{description} Let $U^{ab}$ be a subquotient satisfying the above property $(\ast)$, and for any group $G$, let $\mathbb I(G):=\mathbb I[[G]]$. Note that we have the following natural homomorphism, \begin{equation} \kone{\mathbb I[[\cG]]_\sS}\longrightarrow \kone{\mathbb I(U)_\sS}\longrightarrow \kone{\mathbb I(U^{ab})_\sS}\longrightarrow\mathbb I(U^{ab})_\sS^\times\subset Q_\mathbb I(U^{ab})^\times. \end{equation} Taking all the $U^{ab}$ in $\Sigma({\mathcal G})$ we get the following homomorphism \begin{equation} \Theta_{\Sigma({\mathcal G})}:\kone{\mathbb I[[\cG]]}\longrightarrow\prod_{U^{ab}\in\Sigma({\mathcal G})}Q_\mathbb I(U^{ab})^\times. \end{equation} For any subgroup $P$ of $\overline{\mathcal G}$, we write $\maptheta{\overline{\mathcal G},ab}{P}$ for the following natural composite homomorphism \begin{equation*} \konep{\mathbb I[[\cG]]}\stackrel{\maptheta{\overline{\mathcal G}}{P}}{\longrightarrow}\kone{\mathbb I(U_P)}\longrightarrow\kone{\mathbb I(U_P^{ab})}\cong\mathbb I(U_P^{ab})^\times, \end{equation*} where the isomorphism is induced by taking determinants over $\mathbb I(U_P^{ab})$. \begin{thmalph}[Theorem \ref{cong} Let $\Xi\in\konep{\mathbb I[[\cG]]}$ and for all subgroups $P$ of $\overline{\mathcal G}$, put $\Xi_P:=\maptheta{\overline{\mathcal G},ab}{P}(\Xi)\in\mathbb I(U_P^{ab})^\times$. \begin{enumerate} \item For all subgroups $P, P'$ of $\overline{\mathcal G}$ with $[P',P']\leq P\leq P'$, we have \begin{equation*} \mathrm{Nr}_P^{P'}(\Xi_{U_{P'}^{ab}})=\Pi_P^{P'}(\Xi_{U_{P'}^{ab}}). \end{equation*} \item For all subgroups $P$ of $\overline{\mathcal G}$ and all $g$ in $\overline{\mathcal G}$ we have $\Xi_{gU_{P}^{ab}g^{-1}}=g\Xi_{U_{P}^{ab}}g^{-1}$. \item For every $P\in\overline{\mathcal G}$ and $P\neq (1)$, we have \begin{equation*} \mathrm{ver}_P^{P'}(\Xi_{U_{P'}^{ab}})\equiv \Xi_{U_P^{ab}} \pmod{{\mathscr T}_{P,P'}} (\mbox{ resp. } {\mathscr T}_{P,P',\sS} \mbox{ and } \widehat{\mathscr T}_{P,P'}). \end{equation*} \item For all $P\in C(\overline{\mathcal G})$ we have $\alpha_P(\Xi_{U_{P}^{ab}})\equiv\prod_{P'\in C_P(\overline{\mathcal G})}\alpha_{P'}(\Xi_{U_{P'}^{ab}})\pmod{p{\mathscr T}_P}$. 
\end{enumerate} Conversely, if $\Xi_{U_P^{ab}}\in\mathbb I(U_P^{ab})^\times$ for all subgroups $P$ of $\overline{\mathcal G}$, such that the above congruences hold then there exists an element $\Xi\in\konep{\mathbb I({\mathcal G})}$ such that $\maptheta{\overline{\mathcal G},ab}{P}(\Xi)=\Xi_{U_P^{ab}}\in\mathbb I(U_P^{ab})^\times$. \end{thmalph} Crucial in the proof is the existence of the following logarithmic map $\konep{\mathbb I[[\cG]]}\stackrel{\mathfrak{L}}{\longrightarrow}\mathbb I(Z)[\mathrm{Conj}{\overline{\mathcal G}}]^\tau$. Further, we show that the integral logarithm map fits in the following commutative diagram: \begin{equation*} \xymatrix{ 1\ar[r] &\mu(\cO)\times\mathbb W\times{\mathcal G}^{ab}\ar[r]\ar[d]_{=} &\konep{\mathbb I[[\cG]]}\ar[r]^{\!\!\!\!\mathfrak L}\ar[d]^{\Theta^{\overline{\mathcal G}}} &\mathbb I(Z)[\mathrm{Conj}{\overline{\mathcal G}}]^\tau\ar[r]\ar[d]^{\beta^{\overline{\mathcal G}}}_\cong &\mathbb W\times{\mathcal G}^{ab}\ar[r]\ar[d]_{=} &1\\ 1\ar[r] &\mu(\cO)\times\mathbb W\times{\mathcal G}^{ab}\ar[r] &\Phi^{\overline{\mathcal G}}\ar[r]_{\!\!\!\!\mathcal L} &\Psi^{\overline{\mathcal G}}\ar[r] &\mathbb W\times{\mathcal G}^{ab}\ar[r] &1, } \end{equation*} where $\mathbb{W}:=(1+p{\mathbb Z}_p)^r$, and $\mathfrak{L}$ and $\mathcal{L}$ are the integral logarithm maps. In Theorem \ref{Theta-iso}, we show that the map $\Theta^{\overline{\mathcal G}}$ is an isomorphism and the congruences in the theorem above are derived from this isomorphism. \begin{thmalph}[Theorem \ref{Theta-iso}] The map $\Theta^{\overline{\mathcal G}}$ is an isomorphism. \end{thmalph} To mention briefly, the logarithm maps transfer the multiplicative theory to the additive theory. To achieve this, among many other algebraic results, we need a generalization of a classical result of Higman in \cite{higman}, regarding the torsion subgroup of units of group rings. More precisely, we have the following generalization of Higman's theorem: \begin{thmalph}[Theorem \ref{k-tors}] For any finite $p$-group $G$, we have \begin{equation*} (\kone{\mathbb I[G]})_{\mathrm{tors}}\cong\mu_K\times G^{ab}\times S\kone{\mathbb I[G]}. \end{equation*} \end{thmalph} In Section \ref{nearly-ordinary}, we recall Hilbert modular forms, the action of the Hecke algebra on the space of nearly ordinary Hilbert modular forms and the nearly ordinary Hecke algebra. In section \ref{deformation}, we recall the deformation of two-dimensional representations of the Galois group $\mathrm{Gal}_F$, where $F$ is a totally real field. We then recall the results that relate the nearly ordinary deformation to the nearly ordinary Hilbert modular forms both in Sections \ref{nearly-ordinary} and \ref{deformation}, in some detail, keeping in mind future applications and also for the convenience of the reader. In Section \ref{adjoint} we introduce the Selmer groups of the adjoint representation and then prove a control theorem. We recall that Selmer groups can be viewed as K\"ahler differentials. We then study the deformation rings over a $p$-adic Lie extension in section \ref{admissible}. In Section \ref{non-commmutative}, we give a sufficient condition for Selmer groups to be in the category $\mathfrak{M}_{\mathcal H}^\mathbb I({\mathcal G})$ in terms of the deformation rings. We then present a noncommutative Main Conjecture for these Selmer groups. We also give some results regarding the structure of the deformation rings over the $p$-adic Lie extension $F_\infty$. 
In Section \ref{k-one}, we extend the strategy of Burns, Kato, Kakde, and of Ritter and Weiss to prove the Main Conjecture. This section has some results on $K$-groups and logarithm maps that may be of independent interest. We extend some of the results of Oliver and this required us to generalize results on logarithm maps and computation of certain $K$-groups. In addition, we have defined a suitable generalization of the $SK_1$-groups in Definition \ref{sk-def2}. The results in this section may be used to establish the Main Conjecture of a $p$-adic family of Galois representations arising from motives. In the section after this, we show that the $p$-adic L-function over $\mathbb I[[\cG]]$ specializes to the $p$-adic L-function for each of the members in the family. Section \ref{nearly-ordinary}, where we have recalled the main results regarding Hecke algebras is very crucial for our work. It is evident how our results here are a generalization of results of Hida. Along with this, section \ref{k-one} is the main section where we do the computation of the $K$-groups and it is clear how many of our results are generalizations of those of Burns, Kato, Kakde, Ritter and Weiss, and Oliver. This section owes a lot to their works. In addition, the paper \cite{cfksv} of Coates, Fukaya, Kato, Sujatha and Venjakob has been a strong influence on our work. \section{Nearly ordinary Hilbert modular Hecke algebra}\label{nearly-ordinary} \subsection{Adelic Hilbert Modular forms}\label{adelic-hmf} Let $F$ be a totally real number field, and $\cO$ denote the ring of integers of $F$. Let $\mathfrak{N}$ denote an integral ideal of $F$. Consider the algebraic group $G=\mathrm{Res}_{\cO/{\mathbb Z}}GL(2)$ over ${\mathbb Z}$. Then for each commutative ring $A$, we have $G(A)=GL_2(A\otimes_{{\mathbb Z}}\cO)$. Let $T_0=\mathbb{G}_{m/\cO}^2$ be the diagonal torus of $GL(2)_{/\cO}$. Then consider $T=\mathrm{Res}_{\cO/{\mathbb Z}}$ and $T_G=\mathrm{Res}_{\cO/{\mathbb Z}}T_0$. Then $T_G$ contains the center $Z$ of $G$. Writing $I=\homcat{field}{F}{\overline{\mathbb Q}}$, the group of algebraic characters $X(T_G)=\homcat{alg\, gp}{T_{G/\overline{\mathbb Q}}}{\mathbb{G}_{m/\overline{\mathbb Q}}}$ can be identified with ${\mathbb Z}[I]^2$ so that $\kappa=(\kappa_1,\kappa_2)\in{\mathbb Z}[I]^2$ induces the following character on $T_G({\mathbb Q})=F^\times\times F^\times$: \begin{equation*} T_G({\mathbb Q})\longrightarrow\overline{\mathbb Q}^\times: (\xi_1,\xi_2)\mapsto \kappa(\xi_1,\xi_2)=\xi_1^{\kappa_1}\xi_2^{\kappa_2}, \end{equation*} where $\xi_j^{\kappa_j}=\prod_{\sigma\in I}\sigma(\xi_j)^{\kappa_{j,\sigma}}\in\overline{\mathbb Q}^\times$. Then consider the ``Neben'' characters defined as the triple \begin{equation*} \varepsilon=(\varepsilon_1,\varepsilon_2:T(\widehat{\mathbb Z})\longrightarrow\mathbb C^\times,\varepsilon_+:Z(\mathbb{A})/Z({\mathbb Q})\longrightarrow\mathbb C^\times). \end{equation*} This is the way in which the Neben characters have been considered in \cite[2.3.2]{hida-hmf} and this is so defined so that the character $\varepsilon_+$ is the central character of the automorphic form that corresponds to the Hilbert modular form over $GL_2(\mathbb{A}_F)$. Note that any character $\psi:T(\widehat{\mathbb Z})\longrightarrow\mathbb C^\times$ which is continuous is of finite order, and we have an ideal $\mathfrak{c}(\psi)$ which is maximal among the integral ideals $\mathfrak{c}$ satisfying $\psi(x)=1$ for all $x\in T(\widehat{\mathbb Z})=\widehat\cO^\times$ with $x-1\in\mathfrak{c}\widehat\cO$. 
The ideal $\mathfrak{c}(\psi)$ is called the conductor of $\psi$. The character $\varepsilon_+:Z(\mathbb{A})/Z({\mathbb Q})\longrightarrow\mathbb C^\times$ is an arithmetic Hecke character such that $\varepsilon_+(z)=\varepsilon_1(z)\varepsilon_2(z)$ for $z\in Z(\widehat{\mathbb Z})$ and $\varepsilon_+(x_\infty)=x^{-(\kappa_1+\kappa_2)+I}$. The infinity type of $\varepsilon_+$ is therefore $I-\kappa_1-\kappa_2$. Then the conductor $\mathfrak{c}(\varepsilon_+)$ is defined in the same manner as above by taking the restriction to $Z(\widehat{\mathbb Z})\cong T(\widehat{\mathbb Z})$. Then we define $\mathfrak{c}(\varepsilon)=\mathfrak{c}(\varepsilon_1)\mathfrak{c}(\varepsilon_2)\subset\mathfrak{c}(\varepsilon_+)$. Note that the two characters $\varepsilon_1, \varepsilon_2$ are purely local and may not extend to the Hecke characters of the idele class group $F_\mathbb{A}^\times/F^\times$. Now put $\varepsilon^-=\varepsilon_1\varepsilon_2^{-1}$, and assume that $\varepsilon^-$ factors through ${(\cO/\mathfrak{N})}^\times$, i.e., $\mathfrak{c}(\varepsilon^-)\supset\mathfrak{N}$. Consider the standard level group \begin{eqnarray} \widehat\Gamma_0(\mathfrak{N})&=&\left\lbrace\begin{pmatrix}a & b\\ c &d \end{pmatrix}\in G(\widehat{\mathbb Z})\mid c\equiv 0\mod\mathfrak{N}\widehat\cO\right\rbrace.\\ \widehat\Gamma(\mathfrak{N})&=&\left\lbrace\begin{pmatrix}a & b\\ c &d \end{pmatrix}\in \widehat\Gamma_0(\mathfrak{N})\mid a,d\equiv 1\mod\mathfrak{N}\widehat\cO\right\rbrace. \end{eqnarray} The the characters $\varepsilon^-$ and $\varepsilon_2$ induce a continuous character of the compact group $\widehat\Gamma_0(\mathfrak{N})$ which we also denote by $\varepsilon$ and defined by \begin{equation}\label{character-eps} \varepsilon:\widehat\Gamma_0(\mathfrak{N})\longrightarrow\mathbb C^\times: \begin{pmatrix}a & b\\ c& d\end{pmatrix}\mapsto \varepsilon_2(ad-bc)\varepsilon^-(a_\mathfrak{N})=\varepsilon_1(ad-bc)(\varepsilon^{-})^{-1}(d_\mathfrak{N}). \end{equation} Now assume that $\kappa_1+\kappa_2=[\kappa]I$, and define a factor of automorphy associated to $\kappa$ as follows: \begin{equation} J_\kappa(g,z)=\det(g)^{\kappa_1-I}j(g,z)^{\kappa_2-\kappa_1+I}, \mbox{ for } g\in G(\R) \mbox{ and } z\in\mathfrak{H}^I, \end{equation} where, for $g=(g_\sigma)\in GL_2(\R)^I=GL_2(F_\infty)$ and $z=(z_\sigma)\in\mathfrak{H}^I$; $j(g,z)=(c_\sigma z_\sigma+d_\sigma)_{\sigma\in I}\in\mathbb C^I=F\otimes_\R\mathbb C$. Then $S_\kappa(\mathfrak{N},\varepsilon;\mathbb C)$ is defined to be the space of functions $f:G(\AA)\longrightarrow\mathbb C$ satisfying the following three conditions. \begin{enumerate}[(S1)] \item $f(\alpha xuz)=\varepsilon_+(z)\varepsilon(u)f(x)J_\kappa(u_\infty,{\bf i})^{-1}$, for all $\alpha\in G({\mathbb Q}), z\in Z(\AA),$ and $u\in\Gamma_0(\mathfrak{N})C_{\bf i}$, for the stabilizer $C_{\bf i}$ of ${\bf i}=(\sqrt{-1},\cdots,\sqrt{-1})\in\mathfrak{H}^I$ in $G(\R)^+=$ identity connected component of $G(\R)$. \item for any $u\in G(\R)$ with $u({\mathbf{i}})=z$ for every $z\in\mathfrak{H}^I$, the function $f_g:\mathfrak{H}^I\longrightarrow\mathbb C$ defined by $f_g(z)=f(g\inft{u})J_\kappa(\inft{u},{\mathbf{i}})$ for each $g\in G(\AA^{(\infty)})$, is a holomorphic function on $\mathfrak{H}^I$ for every $g$; \item for every $z$, the function $f_g(z)$ is rapidly decreasing as $\mathrm{Im}(z_\sigma)\longrightarrow\infty$ for all $\sigma\in I$ uniformly. 
\end{enumerate} A function in $S_\kappa(\mathfrak{N},\varepsilon;\mathbb C)$ is called a \emph{Hilbert cusp form} of level $\mathfrak{N}$ and character $\varepsilon$. It is easy to check that the function $f_g$ satisfies the classical automorphy condition (\cite[2.3.5]{hida-hmf}): \begin{equation} f_g(\gamma(z))=\varepsilon^{-1}(g^{-1}\gamma g)f_g(z)J_\kappa(\gamma,z), \mbox{ for all } \gamma\in\Gamma_g(\mathfrak{N}), \end{equation} where $\Gamma_g(\mathfrak{N})=g\Gamma_0(\mathfrak{N})g^{-1}G(\R)^+\cap G({\mathbb Q})$. Now consider the level $\mathfrak{N}$ semigroup of level $\Delta_0(\mathfrak{N})\subset M_2(\widehat\cO)\cap G(\AA^{\infty})$ by \begin{equation} \Delta_0(\mathfrak{N})=\left\lbrace\begin{pmatrix}a & b \\ c & d\end{pmatrix}\in M_2(\widehat\cO)\cap G(\AA^{(\infty)})\mid a_\mathfrak{N}\in\cO_\mathfrak{N}^\times,c\in\mathfrak{N}\widehat\cO \right\rbrace, \end{equation} where $\cO_\mathfrak{N}=\prod_{\mathfrak{l}\mid\mathfrak{N}}\cO_\mathfrak{l}$ with $\mathfrak{l}$ running over primes dividing $\mathfrak{N}$. The opposite semigroup $\Delta^\ast_0(\mathfrak{N})$ is defined to be \begin{equation} \Delta^\ast_0(\mathfrak{N})=\left\lbrace\begin{pmatrix}a & b \\ c & d\end{pmatrix}\in M_2(\widehat\cO)\cap G(\AA^{(\infty)})\mid d_\mathfrak{N}\in\cO_\mathfrak{N}^\times,c\in\mathfrak{N}\widehat\cO \right\rbrace. \end{equation} We now extend the character $\varepsilon_2$ to $T(\AA^{(\infty)})$ by trivially extending on $\oplus{\mathfrak q}\varpi^{\mathbb Z}$ and then extend $\varepsilon_1$ to $T(\AA^{(\infty)})$ by $\varepsilon_1\varepsilon_2(x)=\varepsilon_+(x^{(\infty)})$. We put $\varepsilon^{-}(a)=\varepsilon_2^{-1}(a)\varepsilon_1(a)$ for all $a\in T(\AA^{(\infty)})$. Next extend the character $\varepsilon$ of $\widehat\Gamma_0(\mathfrak{N})$ in \eqref{character-eps} to the semigroup $\Delta_0(\mathfrak{N})$ by \begin{equation} \varepsilon\left(\begin{pmatrix}a & b \\ c & d\end{pmatrix}\right)=\varepsilon_1(ad-bc)(\varepsilon^-)^{-1}(d_\mathfrak{N}). \end{equation} Let $f\in S_\kappa(\mathfrak{N},\varepsilon;\mathbb C)$ be a Hilbert modular form. The Hecke operator $T(y)$ of the double coset $\widehat\Gamma_0(\mathfrak{N})\begin{pmatrix}y &0 \\ 0 &1\end{pmatrix}\widehat\Gamma_0(\mathfrak{N})=\sqcup_\delta\delta\widehat\Gamma_0(\mathfrak{N})$ is defined by \begin{equation} f\mid T(y)(g)=\sum_\delta\varepsilon(\delta)^{-1}f(g\delta). \end{equation} This operator preserves the space $S_\kappa(\mathfrak{N},\varepsilon;\mathbb C)$. Then, as in \cite[4.3]{hida-hmf}, the operator $\mathbb T(y)=y_p^{-\kappa_1}T(y)$ is optimally $p$-integral. If $f$ is a Hecke eigenform, then the eigenvalue $a(y,f)$ of $T(y)$ depends only on the ideal $\mathfrak{n}=y\widehat\cO\cap F$. Therefore, for each prime $\mathfrak{l}$ of $F$, we write $a({\mathfrak l},f)=a(\varpi_{\mathfrak l},f)$ and we put $T(\mathfrak l):=T(\varpi_{\mathfrak{l}})$. Therefore the $y$-th Fourier coefficient of $f$ is $\varepsilon_1(y)a(y,f)$ for each Hecke eigenform $f$ normalized so that $c(1,f)=1$, and the Fourier coefficient depends on $y$ (if $\varepsilon_1\neq 1$) and not just on the ideal $\mathfrak{n}$. A $T({\mathfrak p})$-eigenform $f$ has ${\mathfrak p}$-slope equal to $0$ if the absolute value $\mid y_p^{-\kappa_1}a({\mathfrak p},f) \mid_p=1$. A ${\mathfrak p}$-slope 0-form can have positive slope at primes ${\mathfrak p}'\mid p$ different from ${\mathfrak p}$. 
For a Hecke eigenform $f\in S_\kappa(\mathfrak{N}{\mathfrak p}^{r+1},\varepsilon;\mathbb C)$ (${\mathfrak p}\nmid\mathfrak{N},r\geq 0$) and a subfield $K$ of $\overline{\mathbb Q}$, the Hecke field $K(f)$ inside $\mathbb C$ is generated over $K$ by the eigenvalues $a(\mathfrak l,f)$ for the Hecke operators $T(\mathfrak{l})$ for all primes $\mathfrak{l}$ and the values of $\varepsilon$ over finite fields. We now recall the Fourier expansion of adelic modular forms (cf. \cite[Prop 2.26]{hida-hmf}). Recall the embedding $\overline{\mathbb Q}\hookrightarrow\mathbb C$. Also recall the differential idele $d\in F_\AA^\times$ with $d^{(\mathfrak{d})}=1$ and $d\widehat\cO=\mathfrak{d}\widehat\cO$. Then every $f\in S_\kappa(\mathfrak{N},\varepsilon;\mathbb C)$ has a Fourier expansion: \begin{equation} f\left(\begin{pmatrix}y & x\\0 & 1\end{pmatrix}\right)=|y|_\AA\sum_{0<<\xi\in F}c(\xi yd,f)(\xi\inft{y})^{-\kappa_1}\mathbf{e}_F(i\xi\inft{y})\mathbf{e}_F(\xi x), \end{equation} where $\mathbf{e}_F:F_\AA/F\longrightarrow\mathbb C^\times$ is the additive character with $\mathbf{e}_F(\inft{x})=\mathrm{exp}(2\pi i\sum_{\sigma\in I}x_\sigma)$ for $\inft{x}=(x_\sigma)_\sigma\in\R^I=F\otimes_{\mathbb Q}\R$. Let $F[\kappa]$ denote the field fixed by $\left\lbrace\sigma\in Gal(\overline{\mathbb Q}/F)\mid\kappa\sigma=\kappa\right\rbrace$, over which the character $\kappa$ is rational. Let $\cO[\kappa]$ denote the ring of integers of $F[\kappa]$. Further, let $F[\kappa,\varepsilon]$ be the field generated by the values of $\varepsilon$ over $F[\kappa]$. For any $F[\kappa,\varepsilon]$-algebra $A$ inside $\mathbb C$, we define \begin{equation} S_\kappa(\mathfrak{N},\varepsilon;A)=\left\lbrace f\in S_\kappa(\mathfrak{N},\varepsilon;\mathbb C)\mid c(y,f)\in A \mbox{ if } y \mbox{ is integral}\right\rbrace. \end{equation} There is an interpretation of $S_\kappa(\mathfrak{N},\varepsilon;A)$ as the space of $A$-rational global sections of a line bundle on a variety defined over $A$. Therefore, by the flat base-change theorem (cf. \cite[Lemma 1.10.2]{hida-gme}), we have \begin{equation} S_\kappa(\mathfrak{N},\varepsilon;A)\otimes_A\mathbb C=S_\kappa(\mathfrak{N},\varepsilon;\mathbb C). \end{equation} Therefore, for any $\overline{\mathbb Q}_p$-algebra $A$, we may define \begin{equation} S_\kappa(\mathfrak{N},\varepsilon;A)=S_\kappa(\mathfrak{N},\varepsilon;\overline{\mathbb Q})\otimes_{\overline{\mathbb Q},i_p}A. \end{equation} By linearity, $(y,f)\mapsto c(y,f)$ extends to a function on $F_\AA^\times\times S_\kappa(\mathfrak{N},\varepsilon;A)$ with values in $A$. If $u\in\widehat\cO^\times$, then by \cite[2.3.20]{hida-hmf}, we have \begin{equation} c(yu,f)=\varepsilon_1(u)c(y,f). \end{equation} The formal $q$-expansion of an $A$-rational form $f$ has values in the space of functions on $(F_\AA^{(\infty)})^\times$ with values in the formal monoid algebra $A[[q^\xi]]_{\xi\in F}$ of the multiplicative semi-group $F_+$ made up of totally positive elements, which is defined by \begin{equation} f(y)=\mathcal{N}(y)^{-1}\sum_{\xi>>0} c_p(\xi yd,f)q^\xi, \end{equation} where $\mathcal{N}:F_\AA^\times/F^\times\longrightarrow\overline{\mathbb Q}_p^\times$ is the character given by $\mathcal{N}(y)=y_p^{-I}|y^{(\infty)}|_\AA^{-1}$, and $c_p(y,f)=y_p^{-\kappa_1}c(y,f)$. Let $\cO[\kappa,\varepsilon]$ be the ring of integers of the field $F[\kappa,\varepsilon]$. 
Then for any $p$-adically complete $\cO[\kappa,\varepsilon]$-algebra $A$ in $\mathbb C_p$, we define \begin{equation} S_\kappa(\mathfrak{N},\varepsilon;A)=\left\lbrace f\in S_\kappa(\mathfrak{N},\varepsilon;\mathbb C_p)\mid c_p(y,f)\in A \mbox{ if } y \mbox{ is integral} \right\rbrace. \end{equation} On this space $S_\kappa(\mathfrak{N},\varepsilon;A)$, there are Hecke operators acting (cf. \cite[2.3.4]{hida-hmf}). The Hecke operators form an $A$-subalgebra of $\mathrm{End}_A(S_\kappa(\mathfrak{N},\varepsilon;A))$ generated by $T_p(y)$ for all $y$ of the form $\prod_{\mathfrak q}\varpi_{\mathfrak q}^{e({\mathfrak q})}$. We denote the $A$-subalgebra of Hecke operators by $h_\kappa(\mathfrak{N},\varepsilon;A)$. \subsection{Hecke algebras}\label{hecke-algebras} Consider the subgroups $U_\alpha=\widehat\Gamma_0(\mathfrak{N})\cap\widehat\Gamma({\mathfrak p}^\alpha)$. Then for all $\alpha\geq\beta$, we have \begin{equation} S_\kappa(\mathfrak{N}{\mathfrak p}^\beta,\varepsilon;A)\hookrightarrow S_\kappa(\mathfrak{N},\varepsilon;A)\hookrightarrow S_\kappa(U_\alpha,\varepsilon;A). \end{equation} Now let $\mathbf\Gamma$ denote the torsion free part of $\cO_{\mathfrak p}^\times$ and $\Delta $ be the torsion part. Then $\cO_{\mathfrak p}^\times=\mathbf{\Gamma}\times\Delta$ and hence $\mathbf{G}=\mathbf{\Gamma}\times\Delta\times(\cO/\mathfrak{N}')^\times$. We then fix $\kappa$ and the initial $\varepsilon=(\varepsilon_1,\varepsilon_2,\varepsilon_+)$. We then assume that $\varepsilon_1, \varepsilon_2$ factors through $\mathbf{G}/\mathbf{\Gamma}$ factors through $\mathbf{G}/\mathbf{\Gamma}\cong{\Delta\times(\cO/\mathfrak{N}')^\times},$ for some ${\mathfrak p}^{r_0+1}$ for some prime. It is easy to see that there exists a projective system of Hecke algebras $\{h_\kappa(U,\varepsilon;A)\}_U$, where $U$ runs over all the open subgroups $\widehat\Gamma_0(\mathfrak{N}{\mathfrak p}^{r+1})$. When $\kappa_2-\kappa_1\geq I$, we get the universal Hecke algebra $\bh_\kappa(\mathfrak{N}{\mathfrak p}^\infty,\varepsilon;A)=\varprojlim_U h_\kappa(U,\varepsilon;A)$. Note that the character defined by $T:\widehat\cO^\times\longrightarrow\bh_\kappa(\mathfrak{N}^\infty,\varepsilon;A)$ which maps an element $u$ to the Hecke operator $T(u)$, factors through $\mathbf{\Gamma}=\mathbf{G}/(\Delta\times(\cO/\mathfrak{N}')^\times)$ and induces a canonical algebra structure of $\bh_\kappa(\mathfrak{N}{\mathfrak p}^\infty,\varepsilon;A)$ over $A[[\mathbf\Gamma]]$. Suppose that $W$ is a sufficiently large complete discrete valuation ring inside $\overline{\mathbb Q}_p$ containing the values of $\varepsilon$. We set $\mathbf{\Lambda}=W[[\mathbf\Gamma]]$, and let $W[\varepsilon]\subset\overline{\mathbb Q}_p$ be the $W$-subalgebra generated by the values of $\varepsilon$ (over the finite adeles). For the Hecke operator $\mathbb{T}(\varpi_{\mathfrak p})$, one considers the nearly ${\mathfrak p}$-ordinary projector $e=\lim_n\mathbb{T}(\varpi_{\mathfrak p})^{n!}$. The limit is independent of the choice of $\varpi_{\mathfrak p}$. Now consider the \emph{nearly-ordinary} Hecke algebras $h_\kappa^{\mathrm{n.ord}}(\mathfrak{N}{\mathfrak p}^r,\varepsilon;W):=e(h_\kappa(\mathfrak{N}{\mathfrak p}^r,\varepsilon;W))$ and $\bh_\kappa^{\mathrm{n.ord}}=\bh_\kappa^{\mathrm{n.ord}}(\mathfrak{N}{\mathfrak p}^\infty,\varepsilon;W)=\varprojlim_rh_\kappa^{\mathrm{n.ord}}(\mathfrak{N}{\mathfrak p}^r,\varepsilon;W)$. If the weight $\kappa=(0,I)$, then we set (cf. 
\cite[3.1.5]{hida-hmf}): \begin{equation*} h^{\mathrm{n.ord}}(\mathfrak{N},\varepsilon;W):=h_{(0,I)}^{\mathrm{n.ord}}(\mathfrak{N},\varepsilon;W). \end{equation*} \begin{defn}\label{arithmetic}Recall, from section \ref{adelic-hmf}, that $\kappa=(\kappa_1,\kappa_2)$ induces a character on the torus $T_G$. If $\kappa_2-\kappa_1+I\geq I$, then the pair $(\kappa,\varepsilon)$ is called \emph{arithmetic}. For an integral domain $\mathbb I$ finite and flat over $W[[T_G({\mathbb Z}_p)]]$, if a $W$-algebra homomorphism $(P:\mathbb I\longrightarrow W)\in\mathrm{Spf}(\mathbb I)(W)$ coincides with an arithmetic weight on an open subgroup of $T_G({\mathbb Z}_p)$, then $P$ is referred to as an \emph{arithmetic} point. The set of arithmetic points of $\mathrm{Spf}(\mathbb I)$ with values in $W$ is denoted by $\mathrm{Spf}^{arith}(\mathbb I)(W)$. \end{defn} Let $\Sigma_p$ denote the set of primes of $F$ lying above $p$. A pair $(\kappa,\varepsilon)$, with $\kappa\in X(T_G)$ such that $\kappa_j,\varepsilon_j$ factors through the local norm maps \begin{equation} \begin{split} &T({\mathbb Z}_p)\longrightarrow\prod_{{\mathfrak p}\mid p}{\mathbb Z}_p^\times\\ &N_p((x_{\mathfrak p})_{\mathfrak p})=(N_{\mathfrak p}(x_{\mathfrak p}))_{\mathfrak p}, \mbox{ where } N_{\mathfrak p}(x_{\mathfrak p})=N_{F_{\mathfrak p}/{\mathbb Q}_p}(x_{\mathfrak p}). \end{split} \end{equation} is called \emph{locally cyclotomic}. If $(\kappa,\varepsilon)$ factor through the global norm map $N_{F/{\mathbb Q}}:T({\mathbb Z}_p)\longrightarrow\mathbb{G}_m({\mathbb Z}_p)={\mathbb Z}_p^\times$, then $(\kappa,\varepsilon)$ is called \emph{cyclotomic}. The pair $(\kappa,\varepsilon)$ induces a character $T_G(\widehat{\mathbb Z})\longrightarrow W^\times$ given by \begin{equation} \begin{pmatrix} a& 0\\ 0 &d\end{pmatrix}\mapsto \varepsilon_1(a)a_p^{-\kappa_1}\varepsilon_2(d)d_p^{-\kappa_2}. \end{equation} This further induces a $W$-algebra homomorphism $\pi_{\kappa,\varepsilon}:W[[T_G({\mathbb Z}_p)]]\longrightarrow W$ induced by the restriction of this character to $T_G({\mathbb Z}_p)$. A $W$-point $P$ of the formal spectrum $\mathrm{Spf}(W[[T_G({\mathbb Z}_p)]])$ is called \emph{arithmetic} if $P=\mathrm{ker}(\pi_{\kappa,\varepsilon})$ with $\kappa_2-\kappa_1-I\geq0$. Similarly, an arithmetic point $P\in\mathrm{Spec}(W[[T_G({\mathbb Z}_p)]])(W)$ associated with $(\kappa,\varepsilon)$ is called \emph{locally cyclotomic} (resp. \emph{cyclotomic}) if $(\kappa,\varepsilon)$ is locally cyclotomic (resp. cyclotomic). Thus locally cyclotomic (resp. cyclotomic) points are arithmetic. Let $\mathbb I$ be an integral domain which is an algebra over $W[[T_G({\mathbb Z}_p)]]$. Then a point $P\in\mathrm{Spec}(\mathbb I)(W)$ is said to be \emph{locally cyclotomic} (resp. \emph{cyclotomic}) if the structure homomorphism $W[[T_G({\mathbb Z}_p)]]\longrightarrow\mathbb I$ is locally cyclotomic (resp. cyclotomic). \subsection{Locally cyclotomic Hecke algebra} Let $\Gamma_{\mathfrak p}$ be the $p$-Sylow subgroup of $\mathrm{Gal}(F_{\mathfrak p}^{unr}(\mu_{p^\infty})/F_{{\mathfrak p}}^{unr})$. Then the cyclotomic character $$\mathcal{N}:\mathrm{Gal}(F_{\mathfrak p}^{unr}(\mu_{p^\infty})/F_{{\mathfrak p}}^{unr})\longrightarrow{\mathbb Z}_p^\times$$ induces an embedding $\Gamma_{\mathfrak p}\hookrightarrow 1+p{\mathbb Z}_p\subset{\mathbb Z}_p^\times$. Set $\Gamma_F=\prod_{{\mathfrak p}\mid p}\Gamma_{\mathfrak p}$. 
Consider the following isomorphism \begin{align* T({\mathbb Z}/p^r{\mathbb Z})^2&\to\widehat\Gamma_0(p^r)/\widehat\Gamma_1(p^r)\\ (a,d) &\mapsto \begin{pmatrix}a & 0\\ 0 &d\end{pmatrix}. \end{align*} Since $T({\mathbb Z}/p^r{\mathbb Z})\cong (\cO/p^r\cO)^\times\cong\prod_{\mathfrak p}(\cO_p/p^r\cO_{\mathfrak p})^\times$; so the local norm map $N_{\mathfrak p}:\cO_{\mathfrak p}^\times\longrightarrow{\mathbb Z}_p^\times$ induces the map $N_p=\prod_{{\mathfrak p}\mid p}N_{\mathfrak p}:T({\mathbb Z}/p^r{\mathbb Z})\longrightarrow\prod_{\mathfrak p}\mid p({\mathbb Z}/p^r{\mathbb Z})^\times$ for each $r>0$. Let $S_{cyc}(p^r)$ be a subgroup of $G(\AA^{(\infty)})$ with $\widehat\Gamma_0(p^r)\supset S_{cyc}(p^r)\supset S_{cyc}(p^r)\supset\widehat\Gamma(p^r)$ which is given by \begin{equation} S_{cyc}/\widehat\Gamma(p^r)=\mathrm{ker}\left(N_p^2:T({\mathbb Z}/p^r{\mathbb Z})^2\longrightarrow\prod_{{\mathfrak p}\mid p}(({\mathbb Z}/p^r{\mathbb Z})^\times)^2\right). \end{equation} Putting $S_r=S_{cyc}(p^r)\cap\widehat\Gamma_0(\mathfrak{N})$. Let $S(S_n,\varepsilon;A)$ denote the space of cusp forms of weight $(0,I)$ defined over the congruence subgroup $S_n$. A more general definition is given in \cite[page 165]{hida-hmf}. For $m>n$, there is an inclusion $S(S_n,\varepsilon;A)\hookrightarrow S(S_m,\varepsilon;A)$, which is compatible with the Hecke operators. By duality, this induces a $W$-algebra homomorphism $h(S_m,\varepsilon;W)\longrightarrow h(S_n,\varepsilon;W)$. The \emph{universal locally cyclotomic Hecke algebra} is defined to be \begin{equation} \bh^{\mathrm{n.ord}}_{\mathrm{cyc}}(\mathfrak{N},\varepsilon;W[[\Gamma_F]]):=\varprojlim_n h^{\mathrm{n.ord}}(S_n,\varepsilon;W). \end{equation} For the character $\underline{\varepsilon}:Z(\widehat A^{(\infty)})\widehat\Gamma_0(\mathfrak{N})\longrightarrow A^\times$, defined by $\underline{\varepsilon}(u)=\varepsilon_2(\det(u))\varepsilon^-(a_\mathfrak{N})\varepsilon_+(z)$ for $u=\begin{pmatrix}a &b \\c & d\end{pmatrix}\in\widehat\Gamma_0(\mathfrak{N}), z\in Z(\AA^{(\infty)})$, assume the following condition (\cite[page 165]{hida-hmf}): \begin{description} \item[{(sm0)}] $\underline{\varepsilon}$ restricted to $\Gamma_j/ Z({\mathbb Q})$ has order prime to $p$. \end{description} Under this condition, $\bh^{\mathrm{n.ord}}_{\mathrm{cyc}}(\mathfrak{N},\varepsilon;W[[\Gamma_F]])$ is a torsion-free $W[[\Gamma_F]]$-module of finite type. \subsection{Modular Galois representations} Recall the following Hecke operators on $S_\kappa(\mathfrak{N},\varepsilon;W)$: \begin{eqnarray} T(y)&=& y_p^{-\kappa_1}\left[\widehat\Gamma_0(\mathfrak{N})\begin{pmatrix}1 &0 \\0 &y\end{pmatrix}\widehat\Gamma_0(\mathfrak{N})\right], \mbox{ if the ideal } y\cO \mbox{ is prime to } \mathfrak{N} \\ U(y)&=& y_p^{-\kappa_1}\left[\widehat\Gamma_0(\mathfrak{N})\begin{pmatrix}1 &0 \\0 &y\end{pmatrix}\widehat\Gamma_0(\mathfrak{N})\right], \mbox{ if } y\in\widehat\cO_\mathfrak{N}. \end{eqnarray} Let $f\in S_\kappa(\mathfrak{N},\varepsilon;W)$, with $\kappa=(0,I)$ be a Hecke eigenform, and $\lambda:h_\kappa(\mathfrak{N},\varepsilon;W)\longrightarrow\overline{\mathbb Q}_p$ be an algebra homomorphism satisfying $f\mid T(\varpi_{\mathfrak q})=\lambda(T(\varpi_{\mathfrak q}))f$ for all prime ideals ${\mathfrak q}$. Suppose $P=\mathrm{ker}{(\lambda)}$. Recall the character $\varepsilon^-=\varepsilon_1\varepsilon_2^{-1}$. We consider the following condition: \begin{description} \item[(sf)] $\mathfrak{N}/\mathfrak{c}(\varepsilon^{-})$ is square-free and is prime to $\mathfrak{c}(\varepsilon^-)$. 
\end{description} The duality between the space of cusp forms and Hecke algebras gives rise to an algebra homomorphism $\pi_f\in\homcat{\mathrm{alg}}{h_\kappa(\mathfrak{N},\varepsilon;W)}{W}$ (\cite[Theorem 2.28]{hida-hmf}). We have the following theorem due to Shimura, Deligne, Serre, Carayol, Ohta, Wiles, Blasius, Rogawski, Taylor, and the version that we give here is from {\cite[Theorem 2.43]{hida-hmf}}. \begin{theorem} Let $h=h_\kappa(\mathfrak{N},\varepsilon;W)$ and $k(P)$ denote the field of fractions of $h/P$. Then there exists a continuous semi-simple Galois representation $\rho_f:\mathrm{Gal}(\overline{\mathbb Q}/F)\longrightarrow GL_2(k(P))$, such that \begin{enumerate} \item $\rho_f$ is unramified outside $p\mathfrak{N}$, \item $\mathrm{Tr}(\rho_f(Frob_\mathfrak l))=\lambda(T(\varpi_\mathfrak{l}))$ for $\mathfrak{l}\nmid{\mathfrak p}\mathfrak{N}$. \end{enumerate} \end{theorem} Let $f$ be nearly ordinary at all primes ${\mathfrak p}\mid p$. Then we have the following theorem (\cite[Theorem 2.43 (3)]{hida-hmf}). \begin{theorem} Let $f$ be nearly ordinary at all primes ${\mathfrak p}\mid p$, i.e., $f|U_p(\varpi_{\mathfrak p})=\lambda(U_p(\varpi_{\mathfrak p}))f$ with a $p$-adic unit $\lambda(U_p(\varpi_{\mathfrak p}))$. Then \begin{equation} \rho_f\mid_{D_{\mathfrak p}}\cong\begin{pmatrix} \epsilon_{\mathfrak p} & \ast\\ 0 & \delta_{\mathfrak p} \end{pmatrix} \end{equation} for the decomposition subgroup $D_{\mathfrak p}$ at ${\mathfrak p}$, and $\delta_{\mathfrak p}([\varpi_{\mathfrak p},F_{\mathfrak p}])=\lambda(U_p(\varpi_{\mathfrak p}))$, for the local Artin symbol $[\varpi_{\mathfrak p},F_{\mathfrak p}]$. In particular, $\delta_{\mathfrak p}([\varpi_{\mathfrak p},F_{\mathfrak p}])=\varepsilon_{1,{\mathfrak p}}(u)u^{-\kappa_1}$ for $u\in\cO^\times_{\mathfrak p}$. \end{theorem} \subsection{Locally cyclotomic deformation}\label{lcd} We now recall the locally cyclotomic deformation associated to a Hecke eigenform from \cite[3.2.1, 3.2.2]{hida-hmf}. Let $f_0\in S(\mathfrak{N},\varepsilon;B)$ be a Hecke eigenform, whose Galois representation $\rho_0=\rho_{f_0}:\mathrm{Gal}_F\longrightarrow \gl{W}$ is unramified outside $p\mathfrak{c}(\varepsilon)$, where $W$ as before is the $p$-adic completion of $B$. Among all the forms equivalent to $f_0$, we assume that $f_0$ has the maximal level. Recall the following condition \begin{description} \item[(sf)] $\mathfrak{N}_0=\mathfrak{N}/\mathfrak{c}(\varepsilon^-)$ is square-free and is prime to $\mathfrak{c}(\varepsilon^-)$. \end{description} Let $f_0$ be nearly ordinary at all prime ideals ${\mathfrak p}\mid p$. Recall the following conditions on the representation $\bar\rho:=(\rho_0\mod{\mathfrak m}_B):\mathrm{Gal}_{F}\longrightarrow\gl{F}$. These are the conditions $\mathrm{(h1)-(h4)}$ in \cite[page 185]{hida-hmf}. \begin{description} \item[(h1)] $\underline{\varepsilon}$ has order prime to $p$. \item[(h2)] $\rho_0\mid_{D_{\mathfrak p}}$ is reducible for all ${\mathfrak p}\mid p$. \item[(h3)] For all ${\mathfrak p}\mid p$ in $F$, viewing the local representations at ${\mathfrak p}$ is given by $\pi_{0,{\mathfrak p}}\cong\pi(\eta_{1,{\mathfrak p}},\eta_{2,{\mathfrak p}})$ or $\pi_{0,{\mathfrak p}}\cong\sigma(\eta_{1,{\mathfrak p}},\eta_{2,{\mathfrak p}})$ with $\eta_{2,{\mathfrak p}}=\eta_{1,{\mathfrak p}}\mid.\mid_{\mathfrak p}^{-1}$. 
Then by local class field theory, we have \begin{equation*} \rho_0\mid_{D_{\mathfrak p}}\cong\begin{pmatrix} \eta_{1,{\mathfrak p}}^{-1}\varepsilon_{+,{\mathfrak p}}\mathcal{N}_{{\mathfrak p}} & \ast\\ 0 & \eta_{1,{\mathfrak p}} \end{pmatrix} \end{equation*} with $\bar\delta_{\mathfrak p}^{-1}\det{(\bar\rho)}\neq\bar\delta_{\mathfrak p}$, and $\bar\delta_{\mathfrak p}=(\eta_{1,{\mathfrak p}}\mod{{\mathfrak m}_B})$. The finite order Hecke character $\varepsilon_+$ is regarded as a global Galois character by class field theory, and $\mathcal{N}_{\mathfrak p}$ is the $p$-adic cyclotomic character restricted to $D_{\mathfrak p}$. The character $\bar\delta_{\mathfrak p}$ (resp. $\eta_{1,{\mathfrak p}}$) is the \emph{nearly ${\mathfrak p}$-ordinary character} of $\bar\rho$ (resp. $\rho_0$). \item[{(h4)}] If ${\mathfrak q}$ is a prime ideal such that ${\mathfrak q}\nmid p$ but ${\mathfrak q}\mid\mathfrak{N}/{\mathfrak{c}(\varepsilon^-)}$, then $\bar\rho\mid_{D_{\mathfrak q}}$ has ramification index divisible by $p$. So, $\bar\rho$ restricted to the inertia subgroup $I_{\mathfrak q}$ has the isomorphism $\bar\rho\mid_{I_{\mathfrak q}}\cong\begin{pmatrix}1 &\ast\\ 0 &1\end{pmatrix}$, which is non-semisimple with a non-trivial $\ast$. \end{description} Now we consider the following deformation functor on the category $\mbox{CNL}_W$ of complete, noetherian local $W$-algebras $A$ such that $A/{{\mathfrak m}_A}\cong\FF$. The deformation functor that we are considering is the one that is denoted $\Phi^{cyc}$ in \cite[3.2.8]{hida-hmf}. We recall that the functor $\Phi^{cyc}:CNL_W\longrightarrow SETS$ is defined to be the set of isomorphism classes of representations $\rho:\mathrm{Gal}_F\longrightarrow\mathrm{GL}_2(A)$ satisfying the following conditions: \begin{description} \item[(L1)] $\rho\mod{\mathfrak m}_A\cong\bar\rho$. \item[(L2)] $\rho$ is unramified outside $\mathfrak{c}(\varepsilon)\mathfrak{N}$. \item[(L3)] $\det(\rho)=\varepsilon_+\mathcal{N}$, for the global cyclotomic character $\mathcal{N}$. \item[(L4)] $\rho$ is nearly ordinary at all ${\mathfrak p}\mid p$, and $\rho\mid_{D_{\mathfrak p}}\cong\begin{pmatrix} \epsilon_{\mathfrak p} & \ast\\ 0 & \delta_{{\mathfrak p}} \end{pmatrix}$ such that the characters $\epsilon_{\mathfrak p}\mid_{I_{\mathfrak p}}\varepsilon_{2,{\mathfrak p}}^{-1}$ and $\delta_{\mathfrak p}\mid_{I_{\mathfrak p}}\varepsilon_{1,{\mathfrak p}}^{-1}$ factor through $\mathrm{Gal}(F_{\mathfrak p}^{unr}(\mu_{p^\infty})/F_{\mathfrak p}^{unr})$ for all primes ${\mathfrak p}\mid p$. \item[(L5)] $\rho\mid_{D_{\mathfrak q}}\cong\begin{pmatrix} \epsilon_{\mathfrak q} & \ast\\ 0 & \delta_{{\mathfrak q}} \end{pmatrix}$ with $\delta_{\mathfrak q}\mod{\mathfrak m}_A\cong\bar\delta_{\mathfrak q}$ and $\delta_{\mathfrak q}\mid_{I_{\mathfrak q}}=\varepsilon_{1,{\mathfrak q}}$ if ${\mathfrak q}\mid\mathfrak{c}(\varepsilon)\mathfrak{N}$ and ${\mathfrak q}\nmid p$; and $\rho\mid_{I_{\mathfrak q}}\otimes\varepsilon_1^{-1}$ is unramified if ${\mathfrak q}\mid\mathfrak{c}(\varepsilon)$ and ${\mathfrak q}\nmid p\mathfrak{N}$. \end{description} The conditions (L1)-(L3) correspond to the conditions (Q1)-(Q3) and the last condition (L5) is the condition (Q6) in \cite[page 186]{hida-hmf}. The condition (L4) is the condition (Q4') in \cite[3.2.8]{hida-hmf}. By Mazur's theorem, (see for example \cite[Theorem 1.46]{hida-hmf}), the functor $\Phi^{cyc}$ is represented by a universal couple $(\mathcal R_F,\boldsymbol{\rho}^{cyc}_F)$, with $\mathcal R_F\in CNL_W$. 
\section{Deformation rings and base change}\label{deformation} \subsection{Base change of deformation rings} Let $E$ be a totally real field and $f_0\in S_\kappa(\mathfrak{N},\varepsilon;W)$ with $\kappa=(0,I)$ be a Hilbert modular eigenform defined over the totally real field $E$. Let $\rho_E$ denote the Galois representation that is attached to $f_0$. We assume that the Galois representation $\rho_E$ satisfies the conditions (h1)-(h4) in section \ref{lcd}. Let $F$ be a finite totally real Galois extension of $E$ such that the Galois group $\Delta:=\mbox{Gal}(F/E)$ is a \emph{finite} $p$ group. The Galois group $\Delta$ is \emph{not necessarily} cyclic. Since $\Delta$ is a group with order a power of $p$, it is a \emph{solvable} group. Therefore the Hecke eigenform $f_0\in S_\kappa(\mathfrak{N},\varepsilon;W)$ admits a \emph{unique} base-change lift, say, $f\in S_\kappa(\mathfrak{N}',\varepsilon;W)$, which is defined over the totally real field $F$ (cf. \cite[3.3.3]{hida-hmf}), for an appropriate choice of $\mathfrak{N}'$, such that the associated Galois representation $\rho_f:\mathrm{Gal}_F\longrightarrow\gl{W}$ is equivalent to the restriction $\rho_E\mid_{\mathrm{Gal}_F}$. If the representation $\rho_E$ satisfies the conditions (h1)-(h4) over $E$, then the restriction $\rho_F$ satisfies the conditions (h1)-(h4) over $F$. Let $\Phi^{cyc}_E$ denote the locally cyclotomic deformation for $\bar\rho_E$, and $\Phi^{cyc}_F$ denote the locally cyclotomic deformation for $\bar\rho_F$. Since $\bar\rho_E$ is absolutely irreducible and $\Delta$ is a finite $p$-group, by \cite[Lemma 1.62]{hida-hmf}, the representation $\bar\rho_F$ is absolutely irreducible. Therefore, $\Phi^{cyc}_F$ is representable. Let $(\mathcal R_F,\boldsymbol{\rho}^{cyc}_F)$ denote the universal deformation ring and the universal deformation of $\bar\rho_F$. Then $\boldsymbol{\rho}^{cyc}_E\mid_{\mathrm{Gal}_F}$ is a deformation in $\Phi^{cyc}_F$. Therefore we have a non-trivial algebra homomorphism $\alpha:\mathcal R_F\longrightarrow \mathcal R_E$ such that $\alpha\circ\boldsymbol{\rho}_F^{cyc}\cong\boldsymbol{\rho}_E^{cyc}\!\!\mid_{\mathrm{Gal}_F}$. The morphism $\alpha$ is referred to as the base change morphism. We now describe the action of $\Delta$ on $\Phi_F^{cyc}$ and $\mathcal R_F$. \paragraph{$\Delta$-action on $\Phi_F^{cyc}$:} Let $\sigma\in \Delta$ and $\rho\in\Phi_F^{cyc}(A)$, where $A$ is an $\cO$-algebra in \cnl. Consider any $c(\sigma)\in\gl\cO$ such that $c(\sigma)\equiv\bar\rho(\sigma)\pmod{{\mathfrak m}_{\cO}}$. Then the action of $\sigma$ on $\rho$ is defined by \begin{equation} \rho^{\sigma}(g):=c(\sigma)^{-1}\rho(\sigma g\sigma^{-1}) c(\sigma)\in\Phi_F^{cyc}(A). \end{equation} The strict equivalence class of $\rho^{\sigma}$ is well defined and depends only upon the class of $\sigma$ in $\Delta$. This gives a well-defined action of $\Delta$ on $\Phi_F^{cyc}$. \paragraph{$\Delta$-action on $\mathcal R_F$:} For any $\sigma\in \Delta$, since $\overline{\rho^{\sigma}}=\overline{\rho_F}$, $\mathcal R_F$ is the universal deformation ring for $\rho^{\sigma}$. Therefore, there is a morphism $\mathcal R_F\longby{\widetilde\sigma}\mathcal R_F$ in \cnl. Similarly, we have $\mathcal R_F\longby{\widetilde{(\sigma^{-1})}}\mathcal R_F$ in \cnl. Composing these two morphisms gives the identity, so that $\widetilde\sigma$ is an automorphism in \cnl. Extending this to $\cO[\Delta]$, we find that $\mathcal R_F$ is a module over $\cO[\Delta]$. 
In other words, the universality of the deformation rings gives rise to the automorphisms which define the action. \subsection{Control of deformation rings} Recall that $\Delta$ is a group of order a power of $p$. Consider the base-change morphism $\mathcal R_F\longby{\alpha}\mathcal R_E$ in \cnl. We consider the following ideal \begin{equation} I_\Delta(\mathcal R_F):=\langle\sigma x-x\mid x\in\mathcal R_F,\sigma\in\Delta\rangle \end{equation} which is the augmentation ideal. This is an ideal of $\mathcal R_F$. Let $(\mathcal R_F)_\Delta:=\mathcal R_F/{I_\Delta(\mathcal R_F)}$. Since the determinant of the locally cyclotomic deformation functor is fixed, by \cite[Prop 5.41]{hida-mfg}, we have the following proposition \begin{propn}\label{base-change} $(\mathcal R_F)_\Delta\cong\mathcal R_E$. \end{propn} \section{Adjoint Selmer groups and K\"ahler differentials}\label{adjoint \subsection{Selmer groups} Let $p$ be an odd prime. We fix an algebraic closure $\bar{\mathbb Q}$ and embeddings ${\mathbb Q}\hookrightarrow\bar{\mathbb Q}$ and ${\mathbb Q}\hookrightarrow\bar{\mathbb Q}_l$ for every prime $l$. For a prime $p$ in ${\mathbb Q}$, let $D_p$ denote the decomposition group under this embedding. For a prime ${\mathfrak p}$ in a finite extension $F$ of ${\mathbb Q}$, let $D_{\mathfrak p}$ denote the decomposition group at ${\mathfrak p}$ defined by the above embedding. Let $\cO$ be a finite extension of ${\mathbb Z}_p$ and \cnl denote the category of \emph{complete noetherian local rings which are $\cO$-algebras.} We recall the definition of the \emph{adjoint representation} associated to a $2$-dimensional Galois representation. Let $\mathbb I\in\cnl$ and $\mathbb M$ be the quotient field of $\mathbb I$. Consider a two-dimensional representation $\rho:\gq\longrightarrow\gl{\mathbb I}$. Let $\mathbb L:=\mathbb I^2$. Then $\rho$ induces an action of $\gq$ on $M_2(\mathbb I)$, the ring of $2\times 2$-matrices over $\mathbb I$, by conjugation, i. e., $\sigma(x):= \rho(\sigma)x\rho(\sigma)^{-1}$. Then the \emph{adjoint representation} is defined by \begin{equation} \ad{\rho}:=\{\eta\in \mbox{End}_\mathbb I(\mathbb L)\mid Trace(\eta)=0 \}. \end{equation} It is easy to see that this is a 3-dimensional representation of $\gq$. \begin{defn}Let $D_p$ denote the decomposition group at $p$. Then the representation $\rho$ is said to be \emph{nearly ordinary} at $p$, if there is a two-step filtration of $\mathbb L$ given by \begin{equation}\label{filtration} \mathbb{L} \supset {{\mathcal F}}_p^+{\mathbb L} \supset {0} \end{equation} as $D_p$-modules, such that ${\mathcal F}_p^+\mathbb L$ is \emph{free} of rank \emph{one} over $\mathbb I$. \end{defn} If $\rho$ is nearly ordinary, then it induces on $\ad{\rho}$ the following three-step filtration stable under $D_p$: \begin{equation} \ad{\rho}\supset {\mathcal F}_p^-\ad{\rho}\supset {\mathcal F}_p^+\ad{\rho}\supset 0 \end{equation} where \begin{equation}\begin{split} {\mathcal F}_p^-\ad{\rho}=&\{\eta\in\ad{\rho}\mid \eta({\mathcal F}_p^+\mathbb L)\subset {\mathcal F}_p^+\mathbb L\}, \mbox{ and }\\ {\mathcal F}_p^+\ad{\rho}=&\{\eta\in\ad{\rho}\mid \eta({\mathcal F}_p^+\mathbb L)=0\}. 
\end{split} \end{equation} In terms of matrices, if we choose a basis of $\mathbb L$ containing a generator of ${\mathcal F}_p^+ \ad{\rho}$ and identify $\mbox{End}_\mathbb I(\mathbb L)$ with $M_2(\mathbb I)$ using this basis, then ${\mathcal F}_p^-\ad{\rho}$ is made up of \emph{upper triangular matrices with trace zero.} On the other hand, ${\mathcal F}_p^+\ad{\rho}$ is made up of \emph{upper nilpotent matrices.} \begin{defn} For a number field $F$, let $\rho_F:\mathrm{Gal}_F\longrightarrow\gl{\mathbb I}$ be a representation of $\mathrm{Gal}_F$. Then $\rho_F$ is nearly ordinary \emph{at a prime ${\mathfrak p}$ of $F$} lying above $p$ if there is a two-step filtration of $\mathbb I^2$ as $D_{\mathfrak p}$-modules as in \eqref{filtration}. Then this filtration induces a filtration of $\ad{\rho_F}$ restricted to $D_{\mathfrak p}$ for each prime ${\mathfrak p}$, that is, \begin{equation}\ad{\rho}\supset {\mathcal F}^-_{\mathfrak p}\ad{\rho}\supset {\mathcal F}^+_{\mathfrak p}\ad{\rho}\supset 0.\end{equation} \end{defn} Recall that $\mathbb M$ is the quotient field of $\mathbb I$. We put $\mathbb V:=\mathbb L\otimes_{\mathbb I}\mathbb M$ and $\mathbb A:=\mathbb V/\mathbb L$. Let $\ad{\mathbb V}:=\ad{\rho}\otimes\mathbb M$ and $\ad{\mathbb A}:=\ad{\mathbb V}/\ad{\rho}$. Let $F$ be any algebraic extension of ${\mathbb Q}$. Let $\rho_F:\mathrm{Gal}_F\longrightarrow\gl{\mathbb I}$ be nearly ordinary at \emph{every} prime of $F$ above $p$. Then for each ${\mathfrak p}\mid p$ in $F$, we have the following filtration \begin{equation} \ad{\rho_F}\supset {\mathcal F}^-_{{\mathfrak p}}\ad{\rho_F}\supset {\mathcal F}^+_{{\mathfrak p}}\ad{\rho_F}\supset \{0\}. \end{equation} This induces the following filtration on $\ad{\mathbb A}$ \begin{equation} \ad{\mathbb A}\supset {\mathcal F}^-_{{\mathfrak p}}\ad{\mathbb A}\supset {\mathcal F}^+_{{\mathfrak p}}\ad{\mathbb A}\supset \{0\}. \end{equation} This filtration allows us to define the following \emph{local} conditions: \begin{equation}\label{loc-con} \begin{split} \mathscr L(F_{\mathfrak q})= \begin{cases} ker\left[\Hone{F_{{\mathfrak q}}}{\AA}\longrightarrow \Hone{F_{{\mathfrak q}}}{\AA}/{\mathcal F}^+_{{\mathfrak q}}\ad{\AA}\right] \mbox{\quad for\quad} {\mathfrak q}\mid p,\\ ker\left[\Hone{F_{{\mathfrak q}}}{\AA} \longrightarrow \Hone{F_{{\mathfrak q}}}{\AA}\right]\quad\mbox{for}\quad {\mathfrak q}\nmid p.\end{cases} \end{split} \end{equation} \begin{defn} The Selmer group of $\ad{\rho}$ over $F$ is defined by \begin{equation} \sg{F}{\ad{\rho}}:=ker \left[\Hone{F^\Sigma/F}{\AA}\longrightarrow\prod_{{\mathfrak q}}\dfrac{\Hone{F_{{\mathfrak q}}}{\AA}}{\mathscr L(F_{\mathfrak q})}\right]. \end{equation} \end{defn} Let $\Sigma$ be any finite set of primes of $F$ containing the primes above $p$, the infinite primes and the primes ramified in $F$. Let $F^\Sigma$ denote the maximal extension of $F$ that is unramified outside $F$. Suppose that $\inft{F}$ is any pro-$p$, $p$-adic Lie extension of $F$ that is contained in $F^\Sigma$ and $\inft{F}_{,{\mathfrak q}}:=\varinjlim_n F_{n,{\mathfrak q}_n}$, where $F_n$ are finite extensions of $F$ and $\{{\mathfrak q}_n\}$ is a compatible sequence of primes. Then by restriction we define local conditions $\mathscr L(F_{\infty,{\mathfrak q}})$as in \eqref{loc-con}. 
\begin{defn} We define the \emph{Selmer group} of $\ad{\rho}$ over $\inft{F}$ by \begin{equation} \sg{\inft{F}}{\ad{\rho}}:=ker \left[\Hone{F^\Sigma/\inft{F}}{\ad{\AA}}\longrightarrow\prod_{{\mathfrak q}\mid\Sigma} \dfrac{\Hone{F_{\infty,{\mathfrak q}}}{\AA}}{\mathscr L(F_{\infty,{\mathfrak q}})}\right]. \end{equation} \end{defn} There is an action of the Galois group ${\mathcal G}:=\mathrm{Gal}(\inft{F}/F)$ on $\sg{\inft{F}}{\ad{\rho}}$ via \emph{conjugation:} if $[c]\in\Hone{F^\Sigma/\inft{F}}{A}$ is any cocycle class and $g\in{\mathcal G}$, then the action is given by $(g\ast c)(\sigma):=\tilde g c(\tilde g^{-1}\sigma\tilde g)$, for a lift $\tilde g$ of $g$ to $\mathrm{Gal}_{{F}}$. \subsection{K\"ahler differentials} Let $A$ and $B$ be complete local noetherian algebras. Let $A$ be a $B$-algebra and let $\Om{A}{B}$ denote the $A$-module of K\"ahler differentials of $A$ over $B$. The Selmer group attached to the adjoint representation is related to K\"ahler differentials as follows ( see \cite{mt} or \cite{ht}). Consider the representation $\bar\rho_F$ and the locally cyclotomic deformation functor $\Phi_F^{cyc}$. \begin{theorem}\cite[{Prop 3.87}]{hida-hmf} Consider the representation $\bar\rho_F$ that is attached to a Hilbert modular eigenform $f_0\in S_{\kappa}(\mathfrak{N},\varepsilon;W)$, with $\kappa=(0,I)$, and also satisfying the conditions (h1)-(h4). Let $\Phi_F^{cyc}$ be the locally cyclotomic deformation functor of $\bar\rho_F$. Suppose that $\Phi_F^{cyc}$ is represented by the universal deformation ring $\mathcal R_F$ and $\boldsymbol{\rho}_F$ is the representation of $\mathrm{Gal}_F$ into $\gl{\mathcal R_F}$. Then for any $A\in\cnl$ and $\widetilde\rho\in\Phi_F^{cyc}(A)$, with $\varphi$ denoting the morphism $\mathcal R_F\longrightarrow A$, there exists a canonical isomorphism \begin{equation} \sgd{F}{\ad{\widetilde\rho_F}\otimes_A A^\ast}\cong\Om{\mathcal R_F}{W[[\Gamma_F]]}\otimes_{\mathcal R_F,\varphi}A. \end{equation} In particular, \begin{equation} \sgd{F}{\ad{\boldsymbol{\rho}_F}}\cong\Om{\mathcal R_F}{W[[\Gamma_F]]}. \end{equation} \end{theorem} \subsection{Control of K\"ahler differentials} We now apply the above results to study deformation rings under base-change over finite subextensions in a $p$-adic Lie extension. Let $E$ be any totally real number field and $E_\infty$ be any \emph{arbitrary} $p$-adic Lie extension of $E$ with Galois group ${\mathcal G}:=G(E_\infty/E)$. The following proposition is fundamental to the study of deformation rings over the $p$-adic Lie group ${\mathcal G}$. It is a generalization of a result of Hida for cyclotomic ${\mathbb Z}_p$-extension of $E$. If ${\mathcal H}$ is a closed subgroup of ${\mathcal G}$ and ${\mathcal C}$ is a closed subgroup of ${\mathcal H}$, then there is an action of ${\mathcal H}/{\mathcal C}$ on $\mathcal R_{\mathcal C}$. This action action may be extended to an action of ${\mathcal H}$, with ${\mathcal C}$ acting on $\mathcal R_{\mathcal C}$ trivially. In other words, the action of ${\mathcal H}/{\mathcal C}$ and ${\mathcal H}$ on $\mathcal R_{\mathcal C}$ are equal. \begin{propn}\label{control-kahler1} Consider the nearly-ordinary deformation functor $\Phi_F^{cyc}$. Let $A$ be a closed $\cO$-subalgebra of $\mathcal R_\infty$ such that $A$ is in \cnl and ${\mathcal G}$ acts on it trivially. Let $B$ be an $A$-algebra in \cnl and $\pi:\mathcal R_0\longrightarrow B$ be an $A$-algebra homomorphism. 
Then, for any closed normal subgroup ${\mathcal C}\subset{\mathcal H}\subset{\mathcal G}$, we have \begin{equation*} (\Om{\mathcal R_{\mathcal C}}{A}\widehat\otimes_{\mathcal R_{\mathcal C}} B)_{{\mathcal H}}\cong \Om{\mathcal R_{\mathcal H}}{A}\widehat\otimes_{\mathcal R_{\mathcal H}} B. \end{equation*} \end{propn} \begin{proof} Let $R:=\mathcal R_{\mathcal C}\hat\otimes_{\mathcal R_{\mathcal C}}B$ and $R':=\mathcal R_{\mathcal H}\hat\otimes_{\mathcal R_{\mathcal H}}B$. Consider the following homomorphism of algebras in $\cnl$ \begin{equation}\xymatrixcolsep{3pc}\xymatrixrowsep{3pc}\xymatrix{ \mathcal R_{\mathcal H}\hat\otimes_{\Lambda_F'}B\ar[r]^{\alpha\otimes id}\ar@/^3pc/@{->}[urrd]^{\lambda_F'} &\mathcal R_{\mathcal C}\hat\otimes_{\Lambda_F'}B\ar[r]^{\mu_F'} & B, } \end{equation} where $\lambda_F'=\mu_F'\circ(\alpha\otimes id)$. We then have, \begin{equation}\begin{split} ker(\lambda'_F)\otimes_R B\cong\Omega_{{\mathcal R_{\mathcal C}}/A}\otimes_{\mathcal R_{\mathcal C}} B \\ ker(\mu'_F)\otimes_{R'} B\cong\Omega_{{\mathcal R_{\mathcal H}}/A}\otimes_{\mathcal R_{\mathcal H}} B. \end{split} \end{equation} Then we have the following exact sequence: \begin{equation} 0\longrightarrow I_{\mathcal H}(R)\longrightarrow ker(\lambda'_F)\longby{\alpha} ker(\mu')\longrightarrow 0. \end{equation} Tensoring with $B$ over $R$ and writing $J:=I_{\mathcal H}(R)$, we get another exact sequence: \begin{equation} (J/J^2)\otimes_R B=J\otimes_R B\longby{i}\Omega_{\mathcal R_{\mathcal C}/A}\otimes_{\mathcal R_{\mathcal C}} B\longrightarrow\Omega_{\mathcal R_{\mathcal H}/A}\otimes_{\mathcal R_{\mathcal H}} B\longrightarrow 0. \end{equation} We now determine the image of the map $i$. Consider the map $j:B\longrightarrow R$, given by $j(b)=1\otimes b$, and let $B'=im(j)$. Then $j$ is easily seen to be a section of $\lambda'_{F}$, and ${\mathcal H}/{\mathcal C}$ acts trivially on the image $j(B)$. For any $y\in R$, the element $x:=y-j\lambda'_F(y)\in ker{\lambda'_F}$. As ${\mathcal H}$ acts trivially on $j(B)$, we have $(\sigma-1)(y)=(\sigma-1)(x)$. Therefore, $(\sigma-1)R=(\sigma-1)ker(\lambda'_F)$. Since $\sigma$ is a $B'$-algebra automorphism of $R$ and $J/J^2$ is a $B'$-module, therefore \begin{equation*} y(\sigma-1)y'\equiv (\sigma-1)yy' \pmod{J^2}, \mbox{ where }y, y'\in J. \end{equation*} Therefore the $B$-linear map $\sigma-1:ker(\lambda)\longrightarrow R$ induces a surjective morphism of $B'$-modules: \begin{equation*} \oplus_{\sigma\in{\mathcal H}/{\mathcal C}}(\sigma-1):\oplus_{\sigma\in{\mathcal H}/{\mathcal C}} ker(\lambda'_F)/ker(\lambda'_F)^2\longrightarrow J/J^2. \end{equation*} Therefore \begin{equation*} Im(i)=\sum_{}(\sigma-1)(\mathcal R_{\mathcal C}\hat\otimes_{\mathcal R_{\mathcal C}}B)=I_{\mathcal H}(\mathcal R_{\mathcal C}\hat\otimes_{\mathcal R_{\mathcal C}}B), \end{equation*} where $I_{\mathcal H}$ denotes the augmentation ideal of ${\mathcal H}$. \end{proof} \begin{cor}\label{control-kahler2} Let $A_\infty$ be an $\cO$-algebra with a continuous action of ${\mathcal G}$ which is a pro-object in $\cnl$. Suppose that $\mathcal R_\infty$ has a structure of $A_\infty$-algebra and that the ${\mathcal G}$-action on $A_\infty$ and $\mathcal R_\infty$ are compatible. Thus $\mathcal R_{\mathcal C}$ is an $A_{\mathcal C}$-algebra for $A_{\mathcal C}=(A_\infty)_{\mathcal C}$. Let $B$ be an algebra in $\cnl$ and $\pi:\mathcal R\longrightarrow B$ be an $A_\infty$-algebra homomorphism. 
Then, for any closed subgroups ${\mathcal C}\subset{\mathcal H}\subset{\mathcal G}$, we have: \begin{equation*} (\Om{\mathcal R_{\mathcal C}}{A_{\mathcal C}}\otimes_{\mathcal R_{\mathcal C}} B)_{{\mathcal H}}\cong \Om{\mathcal R_{\mathcal H}}{A_{\mathcal H}}\otimes_{\mathcal R_{\mathcal H}} B. \end{equation*} \end{cor} \begin{proof} By the above proposition, we have, \begin{equation*}\begin{split} (\Om{\mathcal R_{\mathcal C}}{A}\widehat\otimes_{\mathcal R_{\mathcal C}} B)_{{\mathcal H}}\cong \Om{\mathcal R_{\mathcal H}}{A}\widehat\otimes_{\mathcal R_{\mathcal H}} B,\\ (\Om{A_{\mathcal C}}{\cO}\widehat\otimes_{A_{\mathcal C}} B)_{{\mathcal H}}\cong \Om{A_{\mathcal H}}{\cO}\widehat\otimes_{A_{\mathcal H}} B. \end{split} \end{equation*} These isomorphisms give rise to the following commutative diagram: $$ \xymatrixcolsep{4pc}\xymatrixrowsep{4pc} \xymatrix{ (\Om{A_\infty}{\cO}\widehat\otimes_{A_\infty} B)_{\mathcal H}\ar[r]\ar[d]^\cong &(\Om{\mathcal R_\infty}{A}\widehat\otimes_{\mathcal R_\infty} B)_{{\mathcal H}}\ar[r]\ar[d]^\cong &(\Om{\mathcal R_\infty}{A_\infty}\widehat\otimes_{\mathcal R_\infty} B)_{{\mathcal H}}\ar[r]\ar[d] & 0 \\ \Om{A_{\mathcal H}}{\cO}\widehat\otimes_{A_{\mathcal H}} B \ar[r] &\Om{\mathcal R_{\mathcal H}}{\cO}\widehat\otimes_{A_{\mathcal H}} B\ar[r] &\Om{\mathcal R_{\mathcal H}}{A_{\mathcal H}}\widehat\otimes_{\mathcal R_{\mathcal H}} B\ar[r] & 0. } $$ Hence $$ (\Om{\mathcal R_{\mathcal C}}{A_{\mathcal C}}\otimes_{\mathcal R_{\mathcal C}} B)_{{\mathcal H}}\cong \Om{\mathcal R_{\mathcal H}}{A_{\mathcal H}}\otimes_{\mathcal R_{\mathcal H}} B. $$ \end{proof} \section{Deformation rings in an \emph{admissible} tower $E_\infty/E$}\label{admissible \subsection{Admissible $p$-adic Lie extension} In this section, we study the deformation rings of the functor $\Phi_F^{cyc}$, when $F$ varies over finite Galois subextensions of an \emph{admissible $p$-adic Lie extension} $E_\infty$ over $E$, whose definition we recall below. \begin{defn} An \emph{admissible $p$-adic Lie extension} $E_\infty$ of $E$ is a Galois extension of $E$ such that (1) $E_\infty/E$ is unramified outside a finite set of primes of $E$; (2) $E_\infty$ is totally real; (3) $E_\infty$ is a $p$-adic Lie extension; and (4) $\cy E\subsetE_\infty$. \end{defn} Let $E_\infty$ be an \emph{admissible } $p$-adic Lie extension. Let $E_\infty:=\cup_n E_n$, where $E_n$ is a finite Galois extension of $E$ for every $n$. Consider the functor $\Phi_{E_n}^{cyc}$ and let $\mathcal R_{E_n}$ denote the universal deformation ring over $E_n$. Since the extension $E_n/E$ is a pro-$p$ extension, the conditions $(\mathbf{A}_{E_n})$ and $(\mathbf{Z}_{E_n})$ are satisfied. Then for every $n$, we have the base change morphisms $\mathcal R_{E_n}\longrightarrow\mathcal R_E$. Consider the projective limit \begin{equation} \mathcal R_\infty:=\varprojlim_n\mathcal R_{E_n}. \end{equation} The action of $\Delta_n:=G(E_n/E)$ on $\mathcal R_{E_n}$ induces an action of ${\mathcal G}$ on $\mathcal R_\infty$. Moreover, for finite Galois extensions $F_m$ inside $\cy{E}$, we also define \begin{equation} \mathcal R_{cyc}:=\varprojlim_m\mathcal R_{F_m}. \end{equation} \begin{propn} Let ${\mathcal H}:=\mathrm{Gal}(E_\infty/E_{cyc})$. Then ${\mathcal H}$ acts on $\mathcal R_\infty$, and we have a morphism of rings $(\mathcal R_\infty)_{\mathcal H} \longrightarrow \mathcal R_{{cyc}}$, which is an isomorphism of algebras. \end{propn} \begin{proof} This follows from the the base-change isomorphism. 
\end{proof} \begin{cor}\label{noetherian} If $\mathcal R_\infty$ is noetherian ring, then $\mathcal R_{cyc}$ is a noetherian ring. \end{cor} \begin{proof} If $\mathcal R_\infty$ is a noetherian ring, then $(\mathcal R_\infty)_{\mathcal H}$ is noetherian. \end{proof} \begin{remark} For the deformation rings of the nearly ordinary functor and the fixed determinant, analogous results over $\inft{E}/E$ can be proven using \cite[Cor 3.2]{hs2}. \end{remark} \subsection{Example over decomposition groups} Our example is a generalization of an example due to Hida in the cyclotomic case. Let $E_\infty$ be a $p$-adic Lie extension which is totally ramified at all the primes of $E$ above $p$ and $E_\infty=\cup_n E_n$ with $E_0=E$. Let ${\mathfrak p}$ denote a prime of $E$ lying above $p$, and we also denote by ${\mathfrak p}_n$ the unique prime of $E_n$ above $p$. Let $D_{n,{\mathfrak p}}$ be the decomposition group of ${\mathfrak p}$. We consider the universal nearly ordinary representation $\rho:\mathrm{Gal}_E\longrightarrow\gl{\mathcal R_{0}}$. Then restricted to the decomposition subgroup $D_{0,{\mathfrak p}}$ at a prime lying over $p$, we have \begin{equation} \rho\mid_{D_{0,{\mathfrak p}}}\cong\begin{pmatrix}\widetilde{\epsilon}_{{\mathfrak p}} &*\\ 0 &\widetilde{\delta}_{{\mathfrak p}} \end{pmatrix}, \mbox{ with } \widetilde\delta_{{\mathfrak p}}\equiv\bar\delta_{\mathfrak p}\pmod{{\mathfrak m}_0} \mbox{ in } D_{0,{\mathfrak p}}, \end{equation} where ${\mathfrak m}_0$ is the maximal ideal of $\mathcal R_0$. Let $\rho_n:=\rho\mid_{G_{E_n}}$. We also denote the unique prime of $E_n$ lying above ${\mathfrak p}$ by ${\mathfrak p}$. Let $D_{n,{\mathfrak p}}$ denote the decomposition group at the prime ${\mathfrak p}$. Then \begin{equation} \rho_n|_{D_{n,{\mathfrak p}}}\cong\begin{pmatrix}\widetilde{\epsilon}_{n,{\mathfrak p}} &*\\ 0 &\widetilde{\delta}_{n,{\mathfrak p}} \end{pmatrix}, \mbox{ with } \widetilde\delta_{n,{\mathfrak p}}\equiv\bar\delta_{\mathfrak p}\pmod{{\mathfrak m}_n} \mbox{ in } D_{n,{\mathfrak p}}. \end{equation} Let $\widetilde\delta_{\infty,{\mathfrak p}}$ be the restriction of $\widetilde\delta_{\mathfrak p}$ to $G_{E_{\infty,{\mathfrak p}}}$, and let $\Lambda_\infty$ be the projective limit of the universal deformation rings for $\delta_{n,{\mathfrak p}}=\delta_{{\mathfrak p}}\mid_{E_{n,{\mathfrak p}}}$. Let $\widetilde\Lambda_n$ be the subalgebra of $\mathcal R_n$ topologically generated by the image of $\widetilde\delta_{\infty,{\mathfrak p}}$ over $\cO$. Assume that the order of $\widetilde\epsilon_{n,{\mathfrak p}}\mod{\mathfrak m}_n$ is prime to $p$. As $\widetilde\delta_{\infty,{\mathfrak p}}$ restricted to the $p$-wild inertia subgroup factors through $\Gamma_{n,{\mathfrak p}}$, and the tame part has values in $\cO$, therefore $\widetilde\Lambda_n\cong\cO[[\widetilde\delta_{n,{\mathfrak p}}(Frob_{\mathfrak p})-\delta_{\mathfrak p}(Frob_{\mathfrak p})]]$ inside $\mathcal R_n$, for the Frobenius element $Frob_{\mathfrak p}$ in $D_{\infty,{\mathfrak p}}$. Let $\widetilde\delta_0(Frob_{\mathfrak p})=a({\mathfrak p})\in\mathcal R_0$ and \begin{equation}\label{jacobian} Jac_A:=\Big(det\Big(\dfrac{\partial a({\mathfrak p})}{\partial x_{{\mathfrak p}'}}\Big)_{{\mathfrak p},{\mathfrak p}'\mid p}\mod\mathfrak a\Big)\in Q(A). \end{equation} for any quotient integral domain $A=\mathcal R_0/\mathfrak a$ of characteristic 0 with quotient field $Q(A)$. 
\begin{propn}\label{inf-noetherian} Let $\widetilde\Lambda_0=\cO[[x_{\mathfrak p}]]_{{\mathfrak p}|p}$ (under the normalization $\gamma_{\mathfrak p}\mapsto 1+x_{\mathfrak p}$), and \begin{equation} Jac_{\mathbf\Lambda_0}:=\det\left(\dfrac{\partial a({\mathfrak p})}{\partial x_{{\mathfrak p}'}}\right)_{{\mathfrak p},{\mathfrak p}'\mid p} \in\widetilde\Lambda_0^\times. \end{equation} Then ${\Lambda_\infty}=\widetilde\Lambda_0$. \end{propn} \begin{proof} Note that $\Om{\cO[[x_{\mathfrak p}]]_{{\mathfrak p}|p}}{\cO}=\sum_{{\mathfrak p}|p}\widetilde\Lambda_0dx_{\mathfrak p}$. Consider the following exact sequence: \begin{equation*} \Om{{\inft\Lambda}}{\cO}\otimes\widetilde\Lambda_0 \longrightarrow\Om{\widetilde\Lambda_0}{\cO}\longrightarrow \Om{\widetilde\Lambda_0}{{\inft\Lambda}}\longrightarrow 0. \end{equation*} The image of $\Om{{\inft\Lambda}}{\cO}\otimes\widetilde\Lambda_0$ is generated by $da({\mathfrak p})=\sum_{{\mathfrak p}'|p}\dfrac{\partial{a({\mathfrak p})}}{\partial x_{{\mathfrak p}'}}\partial x_{{\mathfrak p}'}$. Therefore $\Om{\widetilde\Lambda_0}{{\inft\Lambda}}\cong\dfrac{\sum_{{\mathfrak p}|p}\widetilde\Lambda_0dx_{\mathfrak p}}{\sum_{{\mathfrak p}|p}\widetilde\Lambda_0da({\mathfrak p})}$. Since the Jacobian is a unit in $\widetilde\Lambda_0$, we have $\Om{\widetilde\Lambda_0}{{\inft\Lambda}}=0$. Therefore $\widetilde\Lambda_0={\inft\Lambda}$. \end{proof} For any number field $L$, let $S_L$ be the set of primes of $L$ lying above $p$ and $D_L=\prod_{{\mathfrak p}\in S_L}D_{{\mathfrak p},p}^{ab}$, where $D_{{\mathfrak p},p}^{ab}$ is the maximal $p$-profinite abelian quotient of the decomposition subgroup $D_{\mathfrak p}$ at ${\mathfrak p}$ in $\mathrm{Gal}_{L}$. Let $I_L=\prod_{{\mathfrak p}\in S_L}I_{{\mathfrak p},p}^{ab}$, where $I_{{\mathfrak p},p}^{ab}$ is the inertia subgroup of $D_{{\mathfrak p},p}^{ab}$. Let $E$ be a totally real field, and let $f$ be a nearly $p$-ordinary Hilbert modular form, which is a Hecke eigenform. Let $\rho$ be the representation of $\mathrm{Gal}_E$ that is associated to $f$. Let $\boldsymbol{\rho}$ be the nearly ordinary deformation for $\rho$. Let $\mathcal R_{E_n}^{n.ord}$ be the universal nearly ordinary deformation ring and $h_{E_n}^{n.ord}$ be the nearly ordinary Hecke algebra, for the Galois representation $\rho$ restricted to $\mathrm{Gal}_{E_n}$. Let $\mathrm{Cl}_{E_n}(p^\infty)_p$ be the Galois group of the maximal $p$-profinite abelian extension of $E_n$ unramified outside $p$ and the Archimedean primes. Let $\mathbb I$ be an irreducible component of $\mathcal R_E^{n.ord}$. In \cite[Theorem 6.3]{hs2}, Hida gave the structure of Hecke algebras along the cyclotomic tower of a number field. We give the following generalization of Hida's theorem. \begin{theorem}\label{trivial-zeros} Let $s=\mid S_E\mid$ be the number of primes of $E$ lying above $p$, and $J:=Jac_{\mathbb I}$ be the Jacobian. \begin{enumerate} \item Let $\sg{E}{\ad{\boldsymbol\rho}}=0$ and $J\in\mathbb I^\times$. Then \begin{equation}\begin{split} &\mathcal R_{E_n}^{n.ord}\cong h_{E_n}^{n.ord}\cong\mathcal{O}[[D_n\times Cl_{E_n}(p^\infty)_p]];\\ &\sg{\inft{E}}{\ad{\boldsymbol\rho}}\cong\mathbb I[S_E] \end{split} \end{equation} \item Let $M:=\Om{\mathcal R_\infty}{\cO[[\inft{D}]]}\otimes_{\mathcal R_\infty}\mathbb I$. 
Then we have the following short-exact sequence: \begin{equation*} 0\longrightarrow \sgd{\inft{E}}{\ad{\boldsymbol\rho}} \stackrel{}{\longrightarrow} M\times(\Om{\mathbb I}{\cO[[I_0]]}\otimes_{\mathbb I}\mathbb I)\stackrel{}{\longrightarrow}\Om{\mathbb I}{A_0}\otimes_{\mathbb I}\mathbb I\longrightarrow 0. \end{equation*} \item Let $\cyn{E}{n}$ be the cyclotomic ${\mathbb Z}_p$-extension of $E_n$, and $\Gamma_n:=\mathrm{Gal}(\cyn{E}{n}/E)$. Then the module $\sg{\cyn{E}{n}}{\ad{\boldsymbol\rho}}$ is torsion over $\mathbb I[[\Gamma_n]]\cong\mathbb I[[T]]$ for all $n$; and is pseudo-isomorphic to $\mathbb I^s\oplus \Om{\mathcal R_{\cyn{E}{n}}}{\mathcal O[[\cy D]]}\otimes\mathbb I$. \item Let $\Phi(T)$ be the characteristic ideal of $M$, and $\Psi(T)$ the characteristic ideal of $\sg{\cyn{E}{n}}{\ad{\boldsymbol\rho}}$. Then \begin{equation} \Psi(T)=\Phi(T)T^s, \Phi(0)\neq0 \mbox{ and } \Phi(0)\mid J\eta, \end{equation} where $\eta$ is the characteristic ideal of the $\mathbb I$-module $\sg{E_n}{\ad{\boldsymbol\rho}}$. \end{enumerate} \end{theorem} \begin{proof} We give a proof for (ii), as the proof of the rest of the statements are in \cite[Theorem 6.3]{hs2}. Let $\mathcal R_j:=\mathcal R^{\phi'}_{E_j}$, $A_j:=\cO[[D_j]]$ and $\Lambda:=\cO[[I_E]]$. Put $J_j:=\mathrm{ker}(\mathcal R_j\longrightarrow\mathbb I)$. Then, we have the following commutative diagram with exact rows and columns, for all $j=1,\cdots,\infty$: \begin{equation*} \xymatrix{ 0\ar[r]\ar[d] &\Om{A_j}{\cO[[I_j]]}\otimes_{A_j}\mathbb I\ar@{=}[r]\ar[d]^{e} &\Om{A_j}{\cO[[I_j]]}\otimes_{A_j}\mathbb I\ar[r]\ar[d]^{f} &0\\ \dfrac{J_j}{J_j^2}\otimes_{\mathbb I}\mathbb I\ar[r]\ar[d]^{\cong} &\Om{R_j}{\cO[[I_j]]}\otimes_{R_j}\mathbb I\ar[r]^{b}\ar[d]^{g} &\Om{\mathbb I}{\cO[[I_j]]}\otimes_{R_j}\mathbb I\ar[r]\ar[d]^{h} &0\\ \dfrac{J_j}{J_j^2}\otimes_{\mathbb I}\mathbb I\ar[r]\ar[d] &\Om{R_j}{A_j}\otimes_{R_j}\mathbb I\ar[r]^{d}\ar[d] &\Om{\mathbb I}{A_j}\otimes_{R_j}\mathbb I\ar[r]\ar[d] &0\\ 0 \ar[r] &0 \ar[r] &0. } \end{equation*} Since the Jacobian $Jac_{\mathbb I}\neq0$, the maps $e$ and $f$ are injective. Therefore, for $j=\infty$, we have the following short-exact sequence: \begin{equation*} 0\longrightarrow \Om{A_j}{\cO[[I_j]]}\otimes_{A_j}\mathbb I \stackrel{\beta}{\longrightarrow} M\times(\Om{\mathbb I}{\cO[[I_0]]}\otimes_{\mathbb I}\mathbb I)\stackrel{\alpha}{\longrightarrow}\Om{\mathbb I}{A_0}\otimes_{\mathbb I}\mathbb I\longrightarrow 0, \end{equation*} where $\alpha(m,a)=d(m)-h(a)$ and $\beta(a)=(g(a),b(a))$. Again as the Jacobian vanishes, the modules $\Om{\mathbb I}{A_0}$ and $\Om{\mathbb I}{\cO[[I_0]]}/(\mathbb I[S_E])$ are torsion over $\mathbb I$. Finally, as $\sgd{\inft{F}}{\ad{\rho}}\cong\Om{A_j}{\cO[[I_j]]}\otimes_{A_j}\mathbb I$, we have the following exact sequence: \begin{equation*} 0\longrightarrow \sgd{\inft{E}}{\ad{\rho}} \stackrel{\beta}{\longrightarrow} M\times(\Om{\mathbb I}{\cO[[I_0]]}\otimes_{\mathbb I}\mathbb I)\stackrel{\alpha}{\longrightarrow}\Om{\mathbb I}{A_0}\otimes_{\mathbb I}\mathbb I\longrightarrow 0. \end{equation*} \end{proof} This result gives a finer structure of the dual Selmer group $\sgd{\inft{E}}{\ad{\boldsymbol\rho}}$ of the nearly ordinary representation $\boldsymbol\rho$. Over the cyclotomic ${\mathbb Z}_p$-extension, Hida interprets the finer structure of $\sgd{\cy{E}}{\ad{\boldsymbol\rho}}$ in terms of trivial zeros of the $p$-adic L-function. 
However, in the case when we have other $p$-adic Lie extensions, a more general interpretation seems to be needed (see \S \ref{remark-trivial-zeros} below). \section{Noncommutative Iwasawa theory of $\sgd{E_\infty}{\ad{\phi}}$}\label{non-commmutative \subsection{Ore sets and the category $\mathfrak M^\mathbb I_{\mathcal H}({\mathcal G})$} Let $E_\infty/E$ be a $p$-adic Lie extension such that $\cy{E}\subsetE_\infty$. Let ${\mathcal G}:=\mathrm{Gal}(E_\infty/E)$ and ${\mathcal H}=\mathrm{Gal}(E_\infty/\cy{E})$. We will consider an analogue of the \emph{Ore set}, that was first considered by Venjakob for a formulation of the Iwasawa Main conjecture over $p$-adic Lie extensions (see \cite{cfksv}). We recall the Ore set that was considered by Venjakob. \begin{defn} Let $\cO$ be a finite extension of ${\mathbb Z}_p$. Then the set \begin{equation} S:=\{x\in\cO[[{\mathcal G}]]\mid\cO[[{\mathcal G}]]/x \mbox{ is a finitely generated module over } \cO[[{\mathcal H}]]\}. \end{equation} is a left-right Ore set. \end{defn} The following Ore set is a natural and obvious generalization of the one which has been considered by Venjakob, Coates et al in \cite{cfksv} and in Fukaya-Kato \cite{fk}. Let $\mathbb I$ be an irreducible component of the universal locally cyclotomic deformation ring $\mathcal R_E$ for the functor $\Phi_E^{cyc}$. \begin{defn} The set defined by \begin{equation} \sS:=\{x\in\mathbb I[[\cG]]\mid\mathbb I[[\cG]]/x \mbox{ is a finitely generated module over } \mathbb I[[{\mathcal H}]]\} \end{equation} is a left-right Ore set. \end{defn} \begin{lemma} The set $\sS$ is a multiplicatively closed set. \end{lemma} \begin{proof} For two elements $x, y \in\mathbb I[[\cG]]$ consider the following exact sequence \begin{equation} 0\longrightarrow x\mathbb I[[\cG]]/xy \longrightarrow \mathbb I[[\cG]]/xy \longrightarrow \mathbb I[[\cG]]/x \longrightarrow 0. \end{equation} The surjection $\mathbb I[[\cG]]/y\longrightarrow x\mathbb I[[\cG]]/xy\longrightarrow 0$ implies that $x\mathbb I[[\cG]]/xy$ is finitely generated over $\mathbb I[[{\mathcal H}]]$ and the lemma follows. \end{proof} \begin{defn} Let ${\mathfrak m}$ denote the maximal ideal of $\mathbb I$. We define \begin{equation}\sS^\ast:=\cup_n {\mathfrak m}^n\sS.\end{equation} \end{defn} The set $\sS^*$ is also a multiplicative Ore set. In his thesis, Barth \cite{barth} has also considered an Ore set which is different from ours. \begin{defn} We denote the category of all modules which are finitely generated over $\mathbb I[[\cG]]$ and $\sS^\ast$-torsion by $\mathfrak M^\mathbb I_{\mathcal H}({\mathcal G})$. \end{defn} For the maximal ideal ${\mathfrak m}$ of $\mathbb I$, we define \begin{eqnarray} M[{\mathfrak m}]&:=&\{x\in M\mid ax=0 \mbox{ for some } a\in{\mathfrak m}\}\\ M({\mathfrak m})&:=&\cup_n M[{\mathfrak m}^n]. \end{eqnarray} As $\mathbb I$ is an commutative integral domain, it is easy to see that $M[{\mathfrak m}]$ and $M({\mathfrak m})$ are submodules of $M$ over $\mathbb I[[\cG]]$. As in \cite[Lemma 2.1]{cfksv}, we have the following characterization of the Ore set $\sS$. \begin{lemma} Let $\varphi_{\mathcal H}:\mathbb I[[\cG]]\longrightarrow\mathbb I[[\Gamma]]$ and $\psi_{\mathcal H}:\mathbb I[[\cG]]\longrightarrow\Omega(\Gamma)$ be the natural surjections. 
Then \begin{enumerate} \item $\sS$ is the set of all $x$ in $\mathbb I[[\cG]]$ such that $\mathbb I[[\Gamma]]/\mathbb I[[\Gamma]]\varphi_{\mathcal H}(x)$ is a finitely generated $\mathbb I$-module; \item $\sS$ is the set of all $x$ in $\mathbb I[[\cG]]$ such that $\Omega(\Gamma)/\Omega(\Gamma)\psi_{\mathcal H}(x)$ is finite. \end{enumerate} \end{lemma} \begin{proof} For any element $x\in\mathbb I[[\cG]]$, we put $M=\mathbb I[[\cG]]/\mathbb I[[\cG]] x$. Then \begin{equation} M_{\mathcal H}=\mathbb I[[\Gamma]]/\mathbb I[[\Gamma]]\varphi_{\mathcal H}(x),\quad M/{{\mathfrak m}_{\mathcal H}}M=\Omega(\Gamma)/\Omega(\Gamma)\psi_{\mathcal H}(x), \end{equation} where ${\mathfrak m}_{\mathcal H}$ denotes the maximal ideal of $\mathbb I[[{\mathcal H}]]$. Therefore the assertions follow from Nakayama's lemma. \end{proof} \begin{propn} A finitely generated module $M$ over $\mathbb I[[\cG]]$ is $\sS$-torsion if and only if $M$ is finitely generated over $\mathbb I[[{\mathcal H}]]$. \end{propn} \begin{cor} A finitely generated module $M$ over $\mathbb I[[\cG]]$ is $\sS^\ast$-torsion if and only if $M/M({\mathfrak m})$ is finitely generated over $\mathbb I[[{\mathcal H}]]$. \end{cor} We now recall another way of describing the category $\mathfrak{M}_{\mathcal H}^\mathbb I({\mathcal G})$. Consider the canonical injection $i:\mathbb I[[\cG]]\longrightarrow\mathbb I[[\cG]]_\sS$. First recall that $K_0(\mathbb I[[\cG]],\mathbb I[[\cG]]_\sS)$ is an abelian group, whose group law is denoted additively. Consider triples $(P,\alpha,Q)$, with $P$ and $Q$ finitely generated projective modules over $\mathbb I[[\cG]]$ and $\alpha$ is an isomorphism between $P\otimes_{\mathbb I[[\cG]]}\mathbb I[[\cG]]_\sS$ and $Q\otimes_{\mathbb I[[\cG]]}\mathbb I[[\cG]]_\sS$ over $\mathbb I[[\cG]]_\sS$. A morphism between $(P,\alpha,Q)$ and $(P',\alpha',Q')$ is naturally defined to be a pair of $\mathbb I[[\cG]]$-module homomorphism $g:P\longrightarrow P'$ and $h:Q\longrightarrow Q'$ such that \begin{equation*} \alpha'\circ(\mathrm{id}_{\mathbb I[[\cG]]_\sS}\otimes g)=(\mathrm{id}_{\mathbb I[[\cG]]_\sS}\otimes h)\circ\alpha. \end{equation*} Note that it is an isomorphism if both $g$ and $h$ are isomorphisms. We denote the isomorphism class by $[(P,\alpha,Q)]$. Then the abelian group $K_0(\mathbb I[[\cG]],\mathbb I[[\cG]]_\sS)$, is defined by the following generators and relations. Generators are the isomorphism classes $[(P,\alpha,Q)]$ and the relations are given by \begin{enumerate} \item $[(P,\alpha,Q)]=[(P',\alpha',Q')]$ if $(P,\alpha,Q)$ is isomorphic to $[(P',\alpha',Q')]$ \item $[(P,\alpha,Q)]=[(P',\alpha',Q')]+[(P'',\alpha'',Q'')]$ \\ for every short exact sequence $0\longrightarrow [(P',\alpha',Q')]\longrightarrow[(P,\alpha,Q)]\longrightarrow [(P'',\alpha'',Q'')]\longrightarrow 0$ in $\mathscr C_i$. \item $[(P_1,\beta\circ\alpha,P_3)]=[(P_1,\alpha,P_2)]+[(P_2,\alpha,P_3)]$, for the map $P_1\stackrel{\alpha}{\longrightarrow}P_2\stackrel{\beta}{\longrightarrow}P_3$. \end{enumerate} Recall the category $\mathscr C_i$, whose objects are bounded complexes of finitely generated projective $\mathbb I[[\cG]]$-modules whose cohomologies are $\sS$-torsion. Then the abelian group $K_0(\mathscr C_i)$ is defined with the following set of generators and relations. The generators are given by $[C]$, where $C$ is an object of $\mathscr C_i$. 
The relations are given by \begin{enumerate} \item $[C]=0$ if $C$ is acyclic, \item $[C]=[C']+[C'']$, for everys short-exact sequence $0\longrightarrow C'\longrightarrow C\longrightarrow C''\longrightarrow 0$ in $\mathscr C_i$. \end{enumerate} It is known that $K_0(\mathbb I[[\cG]],\mathbb I[[\cG]]_\sS)\cong K_0(\mathscr C_i)$. Moreover, if $\mathscr{H}_\sS$ is the category of all finitely generated $\mathbb I[[\cG]]$-modules which are $\sS$-torsion and which have a finite resolution by finitely generated projective modules then $K_0(\mathbb I[[\cG]],\mathbb I[[\cG]]_\sS)\cong K_0(\mathscr H_\sS)$. For details see Weibel \cite{weibel}. Therefore $K_0(\mathbb I[[\cG]],\mathbb I[[\cG]]_\sS)$ is isomorphic to $K_0(\mathfrak{M}_{\mathcal H}^\mathbb I({\mathcal G}))$. We then have the following exact sequence sequence of localization: \begin{equation}\label{localization} \kone{\mathbb I[[\cG]]}\longrightarrow\kone{\mathbb I[[\cG]]_\sS}\stackrel{\partial}{\longrightarrow} K_0(\mathbb I[[\cG]],\mathbb I[[\cG]]_\sS)\longrightarrow K_0(\mathbb I[[\cG]]) \longrightarrow K_0(\mathbb I[[\cG]]_\sS). \end{equation} Regarding the connecting homomorphism $\partial$, we have the following generalization of \cite[Lemma 5]{kakde} and \cite[Prop 3.4]{cfksv}. \begin{lemma} The connecting homomorphism $\partial$ is surjective. \end{lemma} \begin{proof} We give only a brief sketch of the proof. Let $P$ be a pro-$p$ open normal subgroup of ${\mathcal G}$, and $L$ be a finite extension of ${\mathbb Q}_p$ such that all the irreducible representations of $\Delta={\mathcal G}/P$ are defined. Then, we have an isomorphism of rings $L[\Delta]\stackrel{\cong}{\longrightarrow}\prod_{\psi:\mathrm{irred}}M_{n_\psi}(L)$, where $\psi$ runs over all the irreducible representations of $\Delta$ and $n_\psi$ is the dimension of $\psi$. Let $\mathbb I=\cO[[]X_1,\cdots,X_r]$ and $\mathbb{K}:=L[[X_1,\cdots,X_r]]$. Then tensoring with $\mathbb I$, we have \begin{equation*} \mathbb{K}[\Delta]\stackrel{\cong}{\longrightarrow}\prod_{\psi:\mathrm{irred}}M_{n_\psi}(\mathbb{K}). \end{equation*} Then, there is a map $\lambda: K_0(\mathbb I[[\cG]])\longrightarrow\prod_{\psi:\mathrm{irred}}K_0(\mathbb K)$ which is constructed analogously as in Coates et. al \cite{cfksv}. In fact, this map is constructed as the composition $\lambda=\lambda_4\circ\lambda_3\circ\lambda_2\circ\lambda_1$ of the following natural maps \begin{align*} \lambda_1&:K_0(\mathbb I[[\cG]])\longrightarrow K_0(\mathbb I[\Delta]),\\ \lambda_2&:K_0(\mathbb I[\Delta])\longrightarrow K_0(\mathbb K[\Delta]),&\\ \lambda_3&:K_0(\mathbb K[\Delta])\longrightarrow K_0(\mathbb K[\Delta]),\\ \lambda_4&:K_0(\mathbb K[\Delta])\stackrel{\cong}{\longrightarrow}\prod_{\psi:\mathrm{irred}}K_0(M_{n_\psi}(\mathbb{K}))\stackrel{\cong}{\longrightarrow}\prod_{\psi:\mathrm{irred}}K_0(\mathbb K). \end{align*} The map $\lambda_1$ is defined analogously as in \cite[Lemma 3.5]{cfksv}, and $\lambda_2, \lambda_3$ are induced by the inclusion of rings. The map $\lambda_4$ is induced by the isomorphism above followed by Morita equivalence. After this, the proof procceeds as in \cite[Lemma 5]{kakde}. A crucial input here is a generalization of a result of Venjakob \cite{ven-characteristic}, that if $U$ is finitely generated $\sS$-torsion over $\mathbb I[[\cG]]$, then the twist $tw_\psi(U):=U\otimes_\mathbb I\II^{n_\psi}$, for any irreducible representation $\psi$ of $\Delta$, is also finitely generated and $\sS$-torsion over $\mathbb I[[\cG]]$. This also follows analogously as in \emph{loc. cit}. 
\end{proof} As a generalization of Conjecture 5.1 in \cite{cfksv}, we can hope that the following is true. \begin{conj} The dual Selmer group $\sgd{E_\infty}{\ad{\rho}}$ is in the category $\mathfrak M^\mathbb I_{\mathcal H}({\mathcal G})$. \end{conj} We can compare the following Ore set considered in \cite{cfksv} with the multiplicative set $\sS$, \begin{equation} S=\{h\in\cO[[{\mathcal G}]]\mid \cO[[{\mathcal G}]]/h \mbox{ is finitely generated as a module over }\cO[[{\mathcal H}]]\}. \end{equation} \begin{propn} Let $\phi_k:\mathbb I\longrightarrow\cO$ be a specialization map. Then $\phi_k(\sS)= S$. \end{propn} \begin{proof} Let $x\in\sS$. Then there exists a positive integer $m$, such that $\mathbb I({\mathcal H})^m\twoheadrightarrow\mathbb I[[\cG]]/x$. Applying $\phi_k$, we get the following diagram $$ \xymatrix{ \mathbb I({\mathcal H})^m\ar[r]\ar[d] &\mathbb I[[\cG]]/x\ar[r]\ar[d] & 0\\ \cO[[{\mathcal H}]]^m\ar[r]\ar[d] &\cO[[{\mathcal G}]]/\phi_k(x)\ar[d] &\\ 0 & 0. } $$ Since the specialization map is surjective, the vertical maps induced by the specialization map $\phi_k$ are also surjective. Therefore $\phi_k(x)\in S$ (\cite[Lemma 2.1]{cfksv}). Conversely, let $y\in S$. Then, we have a surjection $\cO[[{\mathcal H}]]^m\longrightarrow\cO[[{\mathcal G}]]/y\longrightarrow 0$ for some $m$. Since $\phi_k$ is surjective, there exists $z\in\mathbb I({\mathcal H})$ such that $\phi_k(z)=y$. Further, $\cO[[{\mathcal G}]]\cong\mathbb I[[\cG]]/\ker{\phi_k}$. Therefore, $\cO[[{\mathcal G}]]/y\cong\dfrac{\mathbb I[[\cG]]/\ker{\phi_k}}{z}\cong \dfrac{\mathbb I[[\cG]]/z}{\ker{\phi_k}}$, which is finitely generated over $\cO[[{\mathcal H}]]\cong\mathbb I({\mathcal H})/\ker{\phi_k}$. Therefore, $\dfrac{\mathbb I[[\cG]]/z}{\mathfrak n}$ is finitely generated over $\mathbb I({\mathcal H})/\mathfrak n$, where $\mathfrak n$ is the maximal ideal of $\mathbb I({\mathcal H})$. By Nakayama's lemma, $\mathbb I[[\cG]]/z$ is finitely generated over $\mathbb I({\mathcal H})$. Hence $z\in\sS$. \end{proof} \begin{cor} For any specialization map $\phi_k$, $\phi_k(\sS^\ast)=S^\ast$. \end{cor} \iffalse \begin{propn} The dual Selmer group $\sgd{E_\infty}{\ad{\phi}}$ is in the category $\mathfrak{M}^\mathbb I_{\mathcal H}({\mathcal G})$ if and only if there is a weight $k$ specialization $\phi_k$ for which $\sgd{E_\infty}{\ad{\phi_k}}$ is in the category $\mathfrak M_{\mathcal H}({\mathcal G})$. \end{propn} Since the map $\phi$ is surjective, the vertical maps induced by $\phi$ are also surjective. This implies that the lower horizontal arrow is surjective. Therefore $\phi(f)\in S$ (\cite[Lemma 2.1]{cfksv}). \begin{cor} For any surjective algebra homomorphism $\phi:\mathbb I\longrightarrow\cO$, we have $\phi(\sS^\ast)\subseteq S^\ast$. \end{cor} \begin{propn} Consider the representation $\boldsymbol{\rho}_\mathbb I:G_E^\Sigma\longrightarrow\gl{\mathbb I}$ and $\phi:\mathbb I\longrightarrow\cO$ be any surjective morphism of local algebras. Then the representation $\phi\circ\boldsymbol{\rho}_\mathbb I:G_E^\Sigma\longrightarrow\gl{\cO}$ is a deformation of $\rho$. The dual Selmer group $\sgd{E_\infty}{\ad{{\rho}_\mathbb I}}$ is in the category $\mathfrak{M}^\mathbb I_{\mathcal H}({\mathcal G})$ if and only if there is a surjective local algebra homomorphism $\phi$ so that for which $\sgd{E_\infty}{\ad{\phi\circ\boldsymbol{\rho}_\mathbb I}}$ is in the category $\mathfrak M_{\mathcal H}({\mathcal G})$. 
\end{propn} \begin{proof} Let $\sgd{E_\infty}{\ad{{\rho}_\mathbb I}}$ be in the category $\mathfrak M^\mathbb I_{\mathcal H}({\mathcal G})$. Then there is an element $s\in\sS^\ast$ that annihilates $\sgd{E_\infty}{\ad{{\rho}_\mathbb I}}$. For any surjective morphism of local algebras $\phi:\mathbb I\longrightarrow\cO$, the element $\phi(s)\in S^*$ and annihilates $\sgd{E_\infty}{\ad{\phi\circ\boldsymbol{\rho}_\mathbb I}}$. Therefore $\sgd{E_\infty}{\ad{\phi\circ\boldsymbol{\rho}_\mathbb I}}$ is in the category $\mathfrak M_{\mathcal H}({\mathcal G})$. \end{proof} }\f \iffalse \subsection{$\mathfrak M_{\mathcal H}({\mathcal G})$ for $\mathrm{dim}({\mathcal G})=2$} The case when $\mathrm{dim}({\mathcal G})=2$ is very important and the conjecture regarding the category $\mathfrak{M}_{\mathcal H}({\mathcal G})$ is not even understood in this case so far. We assume that ${\mathcal G}\cong{\mathcal H}\ltimes\Gamma$ or ${\mathcal H}\times\Gamma$, where both ${\mathcal H}$ and $\Gamma$ are isomorphic to ${\mathbb Z}_p$. In general, for any $p$-adic Lie group $G$ and for any module $M$ which is finitely generated over ${\mathbb Z}_p[[G]]$, the $\mu_G$-invariant is defined by \begin{equation}\label{muG} \mu_G(M)=\sum_r \mathrm{rank}_{\FF_p[[G]]}\dfrac{M[p^{r+1}]}{M[p^r]}. \end{equation} \begin{lemma} Let $N$ be a finitely generated torsion module over the Iwasawa algebra ${\mathbb Z}_p[[{\mathcal G}]]$ such that $\mu_{\mathcal G}(N)=0$. Then $N(p)$, the $p$-power torsion elements of $N$, is finite. In particular, $N[p]$, the $p$-torsion elements of $N$, is finite. \end{lemma} \begin{proof} By definition, we have $\mu_{\mathcal G}(N)=\mu_{\mathcal G}(N(p))$. Further, by the proof of \cite[Th 5.3]{hachi-ven}, we have $\mu_{\mathcal G}(N(p))=\mu_\Gamma(N(p)_{\mathcal H})$. Therefore, $N(p)_{\mathcal H}$ is a finitely generated torsion ${\mathbb Z}_p$-module. Nakayama Lemma then implies that $N(p)$ is a finitely generated torsion ${\mathbb Z}_p[[{\mathcal H}]]$-module. By the structure theorem for finitely generated torsion modules over ${\mathbb Z}_p[[{\mathcal H}]]$-modules, we have a pseudoisomorphism \begin{equation} N(p)\sim {\mathbb Z}_p[[{\mathcal H}]]/{p^m}, \end{equation} for some $m\geq 0$. Hence $N(p)=N[p^m]$. At this point, note that if $m\neq0$, then $\mu_{\mathcal G}(N(p))\geq1$. Hence $m=0$. Therefore $N(p)$ is finite. \end{proof} \begin{cor}\label{main-corollary} Let $M$ be a finitely generated torsion ${\mathbb Z}_p[[{\mathcal G}]]$-module such that $\mu_{\mathcal G}(M)=0$. Then $M/pM$ is also finite. In other words, $M\otimes_{{\mathbb Z}_p}\FF_p$ is finite. \end{cor} \begin{proof} Consider the exact sequence of torsion ${\mathbb Z}_p[[{\mathcal G}]]$-modules, under the multiplication by $p$ map on $M$: \begin{equation} 0\longrightarrow M[p]\longrightarrow M\stackrel{\times p}{\longrightarrow} M\longrightarrow M/{pM}\longrightarrow 0. \end{equation} Since $\mu_{\mathcal G}$ is additive along exact sequences of torsion ${\mathbb Z}_p[[{\mathcal G}]]$-modules, therefore $\mu_{\mathcal G}(M/pM)=0$. By the above lemma applied to the module $M/pM$, it follows that $(M/{pM})(p)$ is finite. In particular, $(M/pM)[p]$ is finite. Now note that $(M/pM)[p]=M/pM$. Therefore, $M/pM$ is finite. \end{proof} }\f \subsection{Noetherian Deformation rings and $\mathfrak M_{\mathcal H}^\mathbb I({\mathcal G})$} We now give some results which are extensions of the results over the cyclotomic ${\mathbb Z}_p$-extension \cite[Th 5.9, Cor 5.10, 5.11]{hida-hmf} to the $p$-adic Lie extension case. 
\begin{propn}\label{arith-spe} Let $P$ be a locally cyclotomic arithmetic point of weight $k$. Then \begin{equation} \sgd{F_\infty}{\ad{\rho_P}\otimes_{W}W^*}\cong\Om{\mathcal R_\infty}{W}\otimes_{\mathcal R_\infty} \mathcal R_0/P, \end{equation} as $W[[{\mathcal G}]]$-modules. Further, $\sgd{F_\infty}{\ad{\rho_P}\otimes_{W}W^*}$ is a $W[[{\mathcal G}]]$-module of finite type. Here $W^*$ is the Pontryagin dual of $W$. \end{propn} \begin{proof} Let $\pi_n:\mathcal R_n\longrightarrow\mathcal R_0$ be the base change morphism. Let $P_n=\pi^{-1}_{n}(P)$ and consider the module $\Om{\mathcal R_n}{W}$. Note that $\mathcal R_\infty/P_\infty=\mathcal R_n/P_n$, and \begin{equation} \sgd{F_n}{\ad{\rho_P}\otimes_{W}W^*}\cong\Om{\mathcal R_n}{W}\otimes_{\mathcal R_\infty}\mathcal R_n/P_n \end{equation} Taking projective limits, we have \begin{equation} \sgd{F_\infty}{\ad{\rho_P}\otimes_{W}W^*}\cong\Om{\mathcal R_\infty}{W}\otimes_{\mathcal R_\infty} \mathcal R_0/P. \end{equation} \end{proof} \iffalse \begin{theorem}\label{noetherian-mhg} Let ${\mathcal G}$ be a pro-$p$, $p$-adic analytic Lie group of dimension 2. If $\mathcal R_\infty$ is noetherian, then the dual Selmer group $\sgd{F_\infty}{\ad{\rho_P}\otimes_{W}W^\ast}$ is in the category $\mathfrak M_{{\mathcal H}}({\mathcal G})$. \end{theorem} \begin{proof} If $\mathcal R_\infty$ is noetherian, then the deformation ring $\mathcal R_{cyc}=(\mathcal R_\infty)_{\mathcal H}$ is also noetherian, and hence $\sgd{\cy{F}}{\ad{\rho_P}\otimes_{W}W^\ast}$, is finitely generated over $W$, i.e., the $\mu$-invariant over the cyclotomic ${\mathbb Z}_p$-extension of $F$ of $\sgd{\cy{F}}{\ad{\rho_P}\otimes_{W}W^\ast}$ is zero, (\cite[Cor 5.11]{hida-hmf}). Then \cite[Th 5.3]{hachi-ven}, implies that the $\mu$-invariant over the extension $F_\infty$ is also zero. Hence $\sgd{\cy{F}}{\ad{\rho_P}\otimes_{W}W^\ast}(p)=0$. Therefore $\sgd{\inft{F}}{\ad{\rho_P}\otimes_{W}W^\ast}$ is in the category $\mathfrak{M}_{\mathcal H}({\mathcal G})$. \end{proof} }\f \iffalse \begin{theorem}\label{inf-noetherian} Let ${\mathcal G}$ be a pro-$p$, $p$-adic Lie group of dimension 2. Then the ring $\mathcal R_\infty$ is noetherian if and only if $\mathcal R_{cyc}$ is noetherian. \end{theorem} \begin{proof} Let $\cy\mathcal R$ be noetherian. Let $\mathfrak{a}$ be an ideal of $\mathcal R_\infty$. Then $\mathfrak{a_{\mathcal H}}$ is an ideal of $\cy{\mathcal R}=(\mathcal R_\infty)_{\mathcal H}$. Therefore, $\mathfrak{a}_{\mathcal H}$ is finitely generated over $\cy\mathcal R=(\mathcal R_\infty)_{\mathcal H}$. As $\mathcal R_\infty$ is complete, so it is local. Let $\inft{\mathfrak{m}}$ be the maximal ideal of $\mathcal R_\infty$. Then the quotient $\mathfrak{a}/\inft{\mathfrak{m}}\mathfrak{a}$ of $\mathfrak{a}_{\mathcal H}$ is also finitely generated over $\mathcal R_\infty/\inft{\mathfrak{m}}$. By Nakayama's Lemma, $\mathfrak{a}$ is finitely generated over $\mathcal R_\infty$. The converse is easy to see. \end{proof} \begin{remark} This theorem says that beyond the cyclotomic ${\mathbb Z}_p$-extension, the ring $\mathcal R_\infty$ is controlled by the ring $\mathcal R_{cyc}$. However, over the cyclotomic ${\mathbb Z}_p$-extension, where each of the deformation rings $\mathcal R_n$ over any finite extension of $F$ in $\cy{F}$ is noetherian, $\cy{\mathcal R}$ may not be noetherian. In the following theorem, we show that under the condition that the $\mu_{\mathcal G}$-invariant is zero, the dual Selmer group is in the category $\mathfrak{M}_{\mathcal H}({\mathcal G})$. 
This is an important case regarding the Conjecture 5.1 of \cite{cfksv}. \end{remark} }\f \iffalse{+1111 \begin{theorem}\label{mhG-2} Let ${\mathcal G}$ be a pro-$p$, $p$-adic Lie group of dimension 2, such that the $\mu_{\mathcal G}$-invariant of the ${\mathbb Z}_p[[{\mathcal G}]]$-module $\Om{\mathcal R_\infty}{W}\otimes_{\mathcal R_\infty} \mathcal R_0/P$ is zero for a locally cyclotomic point $P$. Then $\mathcal R_\infty$ is noetherian and hence the ${\mathbb Z}_p[[{\mathcal G}]]$-module $\sgd{F_\infty}{\ad{\rho_P}\otimes_{{\mathbb Z}_p}({\mathbb Q}_p/{\mathbb Z}_p)}$ is in the category $\mathfrak{M}_{\mathcal H}({\mathcal G})$. \end{theorem} \begin{proof} Taking $M=\Om{\mathcal R_\infty}{W}\otimes_{\mathcal R_\infty} \mathcal R_0/P$ in the Corollary \ref{main-corollary}, it follows that the module $M\otimes_{{\mathbb Z}_p}\FF_p=(\Om{\mathcal R_\infty}{W}\otimes_{\mathcal R_\infty} \mathcal R_0/P)\otimes_{{\mathbb Z}_p}\FF_p$ is finite. Therefore by \cite[Lemma 1.58]{hida-hmf}, the ring $\mathcal R_\infty$ is noetherian. Hence, by Corollary \ref{noetherian-mhg}, the module $\Om{\mathcal R_\infty}{W}\otimes_{\mathcal R_\infty} \mathcal R_0/P$ is in the category $\mathfrak{M}_{\mathcal H}({\mathcal G})$. \end{proof} As a consequence, we get the following corollary: \begin{cor} $\mathcal R_\infty$ is noetherian if and only if the $\mu_{\mathcal G}$-invariant of $\Om{\mathcal R_\infty}{W}\otimes_{\mathcal R_\infty} \mathcal R_0/P$ is zero, for some locally cyclotomic point $P$. \end{cor} }\f \begin{propn} If the $p$-adic Lie extension $\inft{F}$ is totally ramified over $F$, and $e=|\Sigma_p|$ is the number of primes of $F$ above $p$, then $\mathrm{dim}\,\mathcal R_{m,P}=e+1$. Further, let $P$ be a locally cyclotomic point over $\mathcal R_n$, which we may regard as a point over $\mathcal R_\infty$. Then for any finite index subgroup $\Delta_m$, with $m\geq n$, we have \begin{equation} (\mathcal R_{\infty,P})_{\Delta_m}\cong\mathcal R_{n,P}. \end{equation} \end{propn} \begin{proof} Since $\mathcal R_{m,P}$ is an integral domain of dimension $e+1$, and the base change morphism $\mathcal R_{m,P}\longrightarrow\mathcal R_{n,P}$ of $W$-algebras is surjective, we have $\mathrm{dim}\,\mathcal R_{m,P}=\mathrm{dim}\,\mathcal R_{n,P}$. It follows that $\mathcal R_{m,P}\cong\mathcal R_{n,P}$, and the result follows. \end{proof} \iffalse \begin{cor} Let $P$ be a locally cyclotomic point over $\mathcal R_n$, which we may regard as a point over $\mathcal R_\infty$. Then for any finite index subgroup $\Delta_m$, we have \begin{equation} (\mathcal R_\infty)_{\Delta_m}\cong\mathcal R_{n,P}. \end{equation} \end{cor} \begin{proof} For \end{proof} }\f \begin{theorem} Consider the representation $\boldsymbol{\rho}_\mathbb I:\mathrm{Gal}_F\longrightarrow\gl{\mathbb I}$ and $\phi_k:\mathbb I\longrightarrow\cO$ be any surjective morphism of local algebras which give rise to a locally cyclotomic point $P$ of weight $k$. The dual Selmer group $\sgd{\inft{F}}{\ad{{\rho}_\mathbb I}}$ is $\sS$-torsion if and only if $\sgd{\inft{F}}{\ad{\rho_P}}$ is $S$-torsion. \end{theorem} \begin{proof} If $\sgd{\inft{F}}{\ad{{\rho}_\mathbb I}}$ is $\sS$-torsion, then it is easy to see that $\sgd{\inft{F}}{\ad{\rho_P}}$ is $S$-torsion. By Proposition \ref{arith-spe}, we have \begin{equation} \sgd{F_\infty}{\ad{\rho_P}\otimes_{W}W^*}\cong\Om{\mathcal R_\infty}{W}\otimes_{\mathcal R_\infty} \mathcal R_0/P, \end{equation} as $W[[{\mathcal G}]]$-modules. Let $\mathcal M:=\Om{\mathcal R_\infty}{W}\otimes_{\mathcal R_\infty} \mathbb I$. 
Note that $\mathcal M$ is a finitely generated $\mathbb I[[{\mathcal G}]]$-module. Then under the specialization map $\phi_k:\mathbb I\longrightarrow\cO$ with $\ker(\phi_k)=P$, by the above isomorphism, the $\cO[[{\mathcal G}]]$-module $M:=\mathcal M\otimes\mathcal R_0/P$ is finitely generated. Let $\{y_1,\cdots,y_m\}$ be a set of generators for $M$ over $\cO[[{\mathcal G}]]$. By Nakayama's Lemma, a lift of these generators to $\mathcal{M}$, say $\{z_1,\cdots,z_m\}$ generates $\mathcal{M}$. We now have the following commutative diagram: $$ \xymatrix{ \bigoplus_{j=1}^m\mathbb I[[\cG]] z_j\ar[r]\ar[d] &\mathcal{M}\ar[r]\ar[d] & 0\\ \bigoplus_{j=1}^m\cO[[{\mathcal G}]]y_j\ar[r] & M\ar[r] & 0 } $$ Let $M$ be $S$-torsion. Then each $y_j$ in $M$ is annihilated by $s_j\in S$. Let $t_j$ be a lift of $s_j$ to $\mathbb I[[\cG]]$, and $\alpha_j\in\mathbb I[[\cG]]$ be an annihilator of $t_jz_j$. Consider $\alpha_jt_j\in\mathbb I[[\cG]]$. Then $\alpha_jt_j\neq 0$ as $\mathbb I[[\cG]]$ has no nonzero divisors. Let $\beta_j$ be the image of $\alpha_j\in\cO[[{\mathcal G}]]$. Then $\beta_js_j$ annihilates $y_j$. We now have the commutative diagram: \begin{equation*} \xymatrix{ \mathbb I[[\cG]]/\alpha_jt_j\ar[r]\ar[d] & \mathbb I[[\cG]] z_j\ar[r]\ar[d] &0\\ \cO[[{\mathcal G}]]/\beta_js_j\ar[r]\ar[d] & \cO[[{\mathcal G}]]y_j\ar[r]\ar[d] &0\\ 0 & 0 } \end{equation*} Note that the surjective map $\cO[[{\mathcal G}]]/\beta_js_j\longrightarrow\cO[[{\mathcal G}]]y_j$ factors through $\cO[[{\mathcal G}]]/s_j$. Therefore, we have the following commutative diagram \begin{equation*} \xymatrix{ \mathbb I[[\cG]]/\alpha_jt_j\ar[r]\ar[d] & \mathbb I[[\cG]] z_j\ar[r]\ar[d] &0\\ \cO[[{\mathcal G}]]/s_j\ar[r]\ar[d] & \cO[[{\mathcal G}]]y_j\ar[r]\ar[d] &0\\ 0 & 0 } \end{equation*} Let $x=\alpha_jt_j$ and $s=s_j$. As $P=\ker(\mathbb I\longrightarrow\cO)$, we have $\cO[[{\mathcal G}]]\cong\mathbb I[[{\mathcal G}]]/P$. Hence $\cO[[{\mathcal G}]]/s\cong\dfrac{\mathbb I[[{\mathcal G}]]/P}{x}$. Further $\dfrac{\mathbb I[[{\mathcal G}]]/P}{x}\cong\dfrac{\mathbb I[[{\mathcal G}]]}{\langle x,P\rangle}\cong\dfrac{\mathbb I[[{\mathcal G}]]/x}{P}$. So $\dfrac{\mathbb I[[{\mathcal G}]]/x}{P}$ is finitely generated over $\cO[[{\mathcal H}]]\cong\mathbb I[[{\mathcal H}]]/P$. Let $\mathfrak{n}$ denote the maximal ideal of $\mathbb I[[{\mathcal H}]]$. Then $\dfrac{\mathbb I[[{\mathcal G}]]/x}{\mathfrak{n}}$ is finitely generated over $\mathbb I[[{\mathcal H}]]/\mathfrak{n}$. By Nakayama's Lemma, $\mathbb I[[{\mathcal G}]]/x$ is finitely generated over $\mathbb I[[{\mathcal H}]]$. It follows that each summand $\mathbb I[[\cG]] z_j$ is finitely generated over $\mathbb I({\mathcal H})$. Therefore, $\mathcal{M}$ is finitely generated over $\mathbb I({\mathcal H})$ and hence $\sS$-torsion. \end{proof} Now suppose that $M$ is in the category $\mathfrak{M}_{\mathcal H}({\mathcal G})$. Then $M/M(p)$ is $S$-torsion. The natural surjection $\mathcal{M}\longrightarrow M/M(p)$ factors through the submodule $\mathcal{M}({\mathfrak m})$ of $\mathcal{M}$. By a similar argument as in the above proof, we can see that $\mathcal{M}/\mathcal{M}({\mathfrak m})$ is annihilated by $\sS$. Therefore, $\mathcal{M}$ is in the category $\mathfrak{M}_{\mathcal H}^\mathbb I[[\cG]]$. 
We therefore have the following consequence: \begin{theorem}\label{big-torsion-small-torsion} Consider the representation $\boldsymbol{\rho}_\mathbb I:\mathrm{Gal}_F\longrightarrow\gl{\mathbb I}$ and $\phi_k:\mathbb I\longrightarrow\cO$ be any surjective morphism of local algebras which give rise to a locally cyclotomic point $P$ of weight $k$. The dual Selmer group $\sgd{\inft{F}}{\ad{{\rho}_\mathbb I}}$ is $\sS^\ast$-torsion if and only if $\sgd{\inft{F}}{\ad{\rho_P}}$ is $S^\ast$-torsion. \end{theorem} \subsection{Noetherian property of $\mathcal R_\infty$ \iffalse{ In this subsection, we continue to study the ring $\mathcal R_\infty$. The results are extensions of results for the deformation ring ${\mathcal R_{cyc}}$ in \cite[Section 5.1.3]{hida-hmf}. Throughout the subsection, we assume the following two conditions: \begin{enumerate} \item $\mathcal R_\infty$ is noetherian, \item the formal spectrum $\mathrm{Spf}(\mathcal R_0)$ has an irreducible component $\mathrm{Spf}(\mathbb I)$ which is formally smooth over $W$, which is a finite extension of ${\mathbb Z}_p$. \end{enumerate} Let $\mathrm{dim}\,\mathbb I=e_0$. Since $\mathrm{Spf}(\mathbb I)$ is smooth, $\mathbb I=W[[\bar x_1,\cdots,\bar x_{e_0}]]$. Let $W[[x_1,\cdots,x_{e_0}]]$ be a lift of $\mathbb I$ in $\mathcal R_\infty$, and $\{y_1,\cdots,y_d\}$ be a minimal set of generators of $\mathcal R_\infty$. Then $\mathcal R_\infty\cong\mathbb I[[y_1,\cdots, y_d]]/\inft{\mathfrak{a}}$, for some ideal $\inft{\mathfrak{a}}$. Assuming the conditions in \cite[Theorem 3.50]{hida-hmf}, $\mathcal R_n$ is local complete intersection ring. Therefore \begin{equation} \mathcal R_n\cong\mathbb I[[y_1,\cdots,y_{m_n}]]/(f_1(y),\cdots,f_{m_n}(y)). \end{equation} As we know that $\mathcal R_\infty$ surjects onto $\mathcal R_n$, we may write $\mathcal R_n$ as \begin{equation} \mathcal R_n\cong\mathbb I[[y_1,\cdots,y_d]]/{\mathfrak{a}_n} \end{equation} where $\mathfrak{a}_n$ is generated by a regular sequence $f_1^{(n)},\cdots,f_d^{(n)}$, as $\mathrm{dim }\,\mathcal R_n=\mathrm{dim}\,\mathbb I$. We therefore have the following lemma: \begin{lemma} Under the two conditions above, for each $n<\infty$, $\mathcal R_n\cong\mathbb I[[y_1,\cdots,y_d]]/{\mathfrak{a}_n}$, where $\mathfrak{a}_n$ is generated by a regular sequence $f_1^{(n)},\cdots,f_d^{(n)}$. \end{lemma} Again, following \cite[Section 5.1.3]{hida-hmf}, we show that $\mathcal R_\infty$ is equidimensional. We give a brief sketch of the proof. For this, we begin by noting that $\mathcal R_\infty=\varprojlim_n\mathcal R_n$. Therefore, $\inft{\mathfrak{a}}=\cap_n\mathfrak{a}_n$ and $\mathfrak{a}_\infty^m=\cap_n\mathfrak{a}_n^m$, for each $m$. Now consider the ring homomorphism \begin{equation} \varphi_n:\mathcal R_n[Y_1,\cdots,Y_d]\longrightarrow\mathrm{gr}_{\mathfrak{a}_n}(\mathbb I[[y_1,\cdots,y_d]]), \varphi_n(\phi(Y))=\left(\phi(f_1^{(n)},\cdots,f_d^{(n)})\mod\mathfrak{a}_n^{m+1}\right). \end{equation} where for any ideal $I$ in any ring $A$, $\mathrm{gr}_{I}(A)=\bigoplus_{m=0}^\infty I^m/I^{m+1}$ denotes the graded ring for $I$. Since $f^{(n)}:=(f_1^{(n)},\cdots,f_d^{(n)})$ is a regular sequence, $\varphi_n$ is an isomorphism. If $\mathfrak{a}_\infty$ has minimal set of generators $f_1^\infty,\cdots,f_s^\infty$ and $f^\infty:=(f_1^\infty,\cdots,f_s^\infty)$, then for $n>>0$, $f^\infty\mod\mathfrak{a}_n\mathfrak{m}_B=(f_1^\infty,\cdots,f_s^\infty)\mod\mathfrak{a}_n\mathfrak{m}_B$ is contained in an $\mathbb{F}$-basis of $\mathfrak{a}_n/\mathfrak{a}_n\mathfrak{m}_B$. 
Hence, we may assume that the first $s$ elements of $f^{(n)}$ are given by $(f_1^\infty,\cdots,f_s^\infty)$ for $n>>0$. Next, putting $B:=\mathbb I[[y_1,\cdots,y_d]]$, we have a matrix $A_{n+1,n}:B^d\longrightarrow B^d$ such that $f^{(n+1)}=f^{(n)}A_{n+1,n}$. This gives rise to an inverse system of linear maps $\{B^d,A_{m,n}\}$, where $A_{m,n}=A_{n+1,n}\circ\cdots\circ A_{m,m-1}$. Since the first $s$ elements of $f^{(n)}$ are given by $(f_1^\infty,\cdots,f_s^\infty)$ for $n>>0$, $\lim_{m\longrightarrow\infty}A_{(m,1)}=A_{n,1}\begin{pmatrix} 1_s & 0\\ 0 &0\end{pmatrix}$. As $f^{(n)}$ is a regular sequence, the sequence $(f_1^\infty,\cdots,f_s^\infty)$ is a regular $B$-sequence. Then noting that a local complete intersection ring is Cohen-Macaulay and hence equidimensional the result follows. \begin{propn} Let $f_0$ be a Hecke eigenform over $F$, such that the following two conditions are satisfied \begin{enumerate} \item $\mathcal R_\infty$ is noetherian, \item the formal spectrum $\mathrm{Spf}(\mathcal R_0)$ has an irreducible component $\mathrm{Spf}(\mathbb I)$ which is formally smooth over $W$, which is a finite extension of ${\mathbb Z}_p$. \end{enumerate} Then the ring $\mathcal R_\infty$ is a local complete intersection, and $\mathcal R_\infty$ is equidimensional and flat over $\mathbb I$. \end{propn} From the proof of the result, the following result follows \begin{propn} The ring $\mathcal R_\infty$ is equidimensional of dimension $d-s+e_0$. \end{propn} Let the ring $\mathcal R_\infty$ be noetherian. Then, as shown earlier in Corollary \ref{noetherian}, the ring $\cy{\mathcal R}$ is noetherian, and the dual Selmer group $\Om{\cy{\mathcal R}}{W}\otimes W$ has $\mu$-invariant equal to zero. In other words, the dual Selmer group $\Om{\cy{\mathcal R}}{W}\otimes \FF$ is finite, for the residue field $\FF$ of $W$. }\fi \begin{propn}\label{not-noetherian} Let $\inft{F}$ be a $p$-adic Lie extension of a totally real field $F$ such that ${\mathcal G}:=\mathrm{Gal}(\inft{F}/F)$ is a $p$-adic Lie group of dimension two. Let ${\mathcal H}:=\mathrm{Gal}(\inft{F}/\cy{F})$ and $\Gamma:=\mathrm{Gal}(\cy{F}/F)$. Then \begin{enumerate} \item the dual Selmer group $\Om{\mathcal R_\infty}{W}\otimes W$ is a finitely generated module over $W[[{\mathcal H}]]$, \item the ring $\mathcal R_\infty$ is not noetherian. \end{enumerate} \end{propn} \begin{proof} \begin{description} \item[(i)] By the control theorem, we have $(\Om{\mathcal R_\infty}{W}\otimes W)_{\mathcal H}\cong\Om{\cy{\mathcal R}}{W}\otimes W$. Suppose $\mathcal R_\infty$ is a noetherian ring. Then $\cy\mathcal R$ is also noetherian. It follows that $\Om{\cy{\mathcal R}}{W}\otimes W$ is a finitely generated $W$-module. By Nakayama Lemma, $\Om{\mathcal R_\infty}{W}\otimes W$ is a finitely generated module over $W[[{\mathcal H}]]$. \item[(ii)] Suppose $\mathcal R_\infty$ is noetherian. Then the module $\Om{\mathcal R_\infty}{W}\otimes \FF$ is finite. By Nakayama Lemma, $\Om{{\mathcal R_\infty}}{W}\otimes W$ is a finitely generated torsion module over $W[[{\mathcal H}]]$. Moreover, its $\mu$-invariant over $W[[{\mathcal H}]]$ is zero. The fact that $\Om{{\mathcal R_\infty}}{W}\otimes W$ is $W[[{\mathcal H}]]$-torsion implies that $(\Om{{\mathcal R_\infty}}{W}\otimes W)_{\mathcal H}$ is finitely generated and torsion over $W$, i.e., finite. Therefore, $\Om{\cy{\mathcal R}}{W}\otimes W$ is also finite. By \cite[Theorem 6.3 (4)]{hs2}, the dual Selmer group $\Om{\cy{\mathcal R}}{W}\otimes W$ has no non-trivial finite submodules over $W[[\Gamma]]$. 
Therefore $\Om{\cy{\mathcal R}}{W}\otimes W=0$, which is not possible. It follows that the ring $\mathcal R_\infty$ is not noetherian. \end{description} \end{proof} \begin{remark} Unlike the situation in the case of the cyclotomic ${\mathbb Z}_p$-extension over a totally real field, the ring $\mathcal R_\infty$ is not noetherian over any $p$-adic Lie extension that contains the cyclotomic ${\mathbb Z}_p$-extension and of higher dimension. It was felt that the noetherian property of $\mathcal R_\infty$ could be used to check the conjecture on the category $\mathfrak{M}_{\mathcal H}({\mathcal G})$. However, this hope turned out to be an impossibility at least over totally real fields, where the dual Selmer group does not admit any pseudo-null submodules. A similar result might also hold for uniform pro-$p$ groups which have no elements of finite order. \end{remark} \iffalse The dual Selmer group $\Om{\cy{\mathcal R}}{W}\otimes W$ has no non-trivial finite submodules over $W[[\Gamma]]$, by \cite[Theorem 6.3 (4)]{hs2}. Let ${\mathcal H}_n:={\mathcal H}^{p^n}$ and $F^{(n)}$ be the intermediate field corresponding to ${\mathcal H}_n$. The field $F^{(n)}$ is the cyclotomic ${\mathbb Z}_p$ extension of a finite extension $K_n$ of $F$. Let $\mathcal R^{(n)}$ be the universal locally cyclotomic deformation ring of the representation $\rho_{f_0}\mid_{\mathrm{Gal}_{K_n}}$. Then, the dual Selmer group $\Om{{\mathcal R^{(n)}}}{W}\otimes W$ also has no non-trivial pseudonull submodules. Therefore, $\Om{\mathcal R_\infty}{W}\otimes W=\varprojlim_n\Om{\mathcal R^{(n)}}{W}\otimes W$ is also $W$-torsionfree. Therefore, $\Om{\mathcal R_\infty}{W}\otimes W$ has no non-trivial finite $W[[{\mathcal H}]]$-submodules. It follows that $\Om{\mathcal R_\infty}{W}\otimes W\hookrightarrow W[[{\mathcal H}]]^r$, for some $r\geq0$, with a pseudonull cokernel. Note that $\Om{\mathcal R_\infty}{W}\otimes W$ is not torsion over $W[[{\mathcal H}]]$, if $\Om{\cy{\mathcal R}}{W}\otimes W$ is not finite. }\f \subsection{Periods of adjoint Galois representations} We briefly recall the periods of the representation $\ad{\rho}$ where $\rho$ is the representation of $\mathrm{Gal}_F$ that is associated to the Hilbert modular form $f\in S_\kappa(\mathfrak{N},\varepsilon;W)$. Let $M_f$ denote the motive associated to the Hilbert modular cusp form $f$ over $E$. Let $c^+_\infty(M_f)$ and $c^\pm_p(M_f)$ denote the Deligne periods and the $p$-adic periods of $M_f$. Then, by \cite[Theorems 5.2.1(ii), 5.2.2]{hida-sgl}, since $F$ is totally real, we have \begin{equation*} c^\pm_p(\ad{M_f}(1))=c^+_p(M_f(1))c^-_p(M_f)\delta_p(M_f(1)). \end{equation*} Let $\psi$ be any Artin representation of the Galois group $\mathrm{Gal}_E$. Then $\ad{\rho_f}\otimes\psi$ is also critical at $0, 1$. Let $d_\psi$ denote the dimension of $\psi$ and $d_\pm$ be the dimension of the $\pm$-eigenspaces of the action of complex conjugation on $\psi$. Then \begin{equation*} c^\pm_p(\ad{M_f}\otimes\psi(1))=(2\pi\imath c^\pm_p(\ad{M_f}(1)))^{d_\psi}. \end{equation*} It is conjectured in \cite{deligne} that \begin{equation}\label{algebraic} \dfrac{L(\ad{\rho_f}(1),\psi,0)}{(2\pi\imath c^\pm_p(\ad{M_f}(1)))^{d_\psi}}\in\bar{\mathbb Q}. 
\end{equation} Here, we recall that the L-function $L(\ad{\rho_f}(1)\otimes\psi,s)$ is defined to be the Euler product defined as reciprocal of the product of the following polynomials: \begin{eqnarray}\label{hecke-polynomials} P_{\mathfrak q}(\ad{\rho_f},\psi,T) &:=& \det(1-Frob_{\mathfrak q}^{-1}T\mid(\ad{\rho_f}\otimes\psi)^{I_{\mathfrak q}})\in\cO[T], {\mathfrak q}\neq{\mathfrak p};\\ P_{\mathfrak p}(\ad{\rho_f},\psi,T) &:=& \det(1-Frob_{\mathfrak p}^{-1}T\mid(\ad{\rho_f}\otimes\psi)^{I_{\mathfrak p}})\in\cO[T];\\ P_{\mathfrak p}(\mathcal{F}_{\mathfrak p}^+\ad{\rho_f},\psi,T) &:=& \det(1-Frob_{\mathfrak p}^{-1}T\mid((\mathcal{F}_{\mathfrak p}^+\ad{\rho_f})\otimes\psi)^{I_{\mathfrak p}})\in\cO[T];\\ P_{\mathfrak p}((\mathcal{F}_{\mathfrak p}^+\ad{\rho_f})^\ast,\psi,T)&:=& \det(1-Frob_{\mathfrak p}^{-1}T\mid((\mathcal{F}_{\mathfrak p}^+\ad{\rho_f})^\ast\otimes\psi)^{I_{\mathfrak p}})\in\cO[T], \end{eqnarray} where $(\mathcal{F}_{\mathfrak p}^+\ad{\rho_f})^\ast$ denotes the contragredient representation of $\mathcal{F}_{\mathfrak p}^+\ad{\rho_f}$. \subsection{Non-commutative Main conjecture The noncommutative Main conjecture of Iwasawa theory predicts that there is an element in the group $\kone{\mathbb I[[\cG]]_{\sS^*}}$ such that its image under the connecting homomorphism of $K$-theory gives rise to the class of the dual Selmer group. More precisely, consider the connecting homomorphism \begin{equation} \konep{\mathbb I[[\cG]]_\sS}\stackrel{\widetilde\partial}{\longrightarrow} K_0(\mathbb I[[\cG]],\mathbb I[[\cG]]_\sS). \end{equation} Let $\phi_\kappa:\mathbb I\longrightarrow\cO$ be any specialization map. Then this induces the following map \begin{equation} \mathbb I[[\cG]]_\sS\stackrel{\phi_\kappa}{\longrightarrow}\cO[[{\mathcal G}]]_{S}, \end{equation} where $S$ is the multiplicative set in $\cO[[{\mathcal G}]]$. This induces the homomorphisms in the following commutative diagram \begin{equation} \xymatrix{ \konep{\mathbb I[[\cG]]_\sS}\ar[r]^{\widetilde\partial}\ar[d]_{\widetilde\phi_\kappa} & K_0(\mathbb I[[\cG]],\mathbb I[[\cG]]_\sS)\ar[d]\\ \konep{\cO[[{\mathcal G}]]_S}\ar[r]^{\partial} & K_0(\cO[[{\mathcal G}]],\cO[[{\mathcal G}]]_S). } \end{equation} Now let $\rho$ be any Artin representation of ${\mathcal G}$, say $\rho:{\mathcal G}\longrightarrow GL_n(\cO')$. Then this induces the following homomorphism of rings \begin{equation} \rho:\cO[[{\mathcal G}]]\longrightarrow M_n(\cO''[[\Gamma]]), \end{equation} for some finite extension $\cO''$ of $\cO$ and $\cO'$. Further, we have the following homomorphism \begin{equation} \Phi_\rho:\cO[[{\mathcal G}]]_S\longrightarrow M_n(Q_{\cO''}(\Gamma)), \end{equation} where $Q_{\cO''}(\Gamma)$ is the quotient field of $\cO''[[\Gamma]]$. Therefore, we have \begin{equation} \konep{\cO[[{\mathcal G}]]_S}\longrightarrow \kone{M_n(Q_{\cO''}(\Gamma)}\cong Q_{\cO''}(\Gamma)^\times. \end{equation} Now, let $\varphi$ be the augmentation map $\cO[[{\mathcal G}]]$ to $\cO$, and ${\mathfrak p}$ be the kernel of this map. Then the map $\varphi$ can be extended to the map $\varphi':Q_{\cO''}(\Gamma)\longrightarrow L\cup\{\infty\}$, for some finite extension $L$ of ${\mathbb Q}_p$ and by putting $\varphi(x)=\infty$, if $x\notin\cO[[{\mathcal G}]]_{\mathfrak p}$. The composition of the map $\widetilde\phi_\kappa$ in the above commutative diagram with the map $\varphi'$ gives us a map \begin{equation}\label{evaluation} \begin{split} \konep{\mathbb I[[\cG]]_\sS} & \longrightarrow L\cup\{\infty\}\\ x & \mapsto x(\rho). 
\end{split} \end{equation} \iffalse{ Consider the specialization map $\eta_P:\mathbb I\longrightarrow\cO$. This induces a homomorphism $\mathbb I[[\cG]]\longrightarrow\cO[[{\mathcal G}]]$, which induces a map $\kone{\mathbb I[[\cG]]_{\sS^\ast}}\longrightarrow\kone{\cO({\mathcal G})_{S^\ast}}$. Let $\rho:{\mathcal G}\longrightarrow GL_n(\cO')$ be any Artin representation of ${\mathcal G}$. Then, we have a ring homomorphism $\phi_\rho\cO[[{\mathcal G}]]\longrightarrow M_n(Q_{\cO'}(\Gamma))$ given by $\Phi_\rho(\sigma)=\rho(\sigma)\bar\sigma$, where $\bar\sigma$ is the image of $\sigma$ in $\Gamma$. This then induces the following homomorphism of groups $\kone{\cO({\mathcal G})_{S^\ast}}\longrightarrow\kone{M_n(Q_{\cO}(\Gamma))}$. }\fi This map satisfies the following properties: \begin{enumerate} \item Let ${\mathcal G}'$ be an open subgroup of ${\mathcal G}$. Let $\chi$ be a one dimensional representation of ${\mathcal G}'$ and $\rho=\mathrm{Ind}_{{\mathcal G}'}^{{\mathcal G}}\chi$. Consider the norm map $\konep{\mathbb I[[\cG]]_{\sS^\ast}}\longrightarrow\konep{\mathbb I({\mathcal G}')_{\sS}}$. Then for any $\widetilde x\in \konep{\mathbb I[[\cG]]_{\sS^\ast}}$, we have \begin{equation*} \widetilde x(\rho)=N(\widetilde x)(\chi). \end{equation*} \item Let $\rho_1:{\mathcal G}\longrightarrow GL_{n_1}(L)$ and $ \rho_2:{\mathcal G}\longrightarrow GL_{n_2}(L)$ be two Artin representations for some field extension $L$ of ${\mathbb Q}_p$. Then for any $\widetilde x\in\konep{\mathbb I[[\cG]]_{\sS^\ast}}$, we have \begin{equation*} \widetilde x(\rho_1\oplus\rho_2)=\widetilde x(\rho_1)\widetilde x(\rho_2). \end{equation*} \item Let $U$ be a subgroup of ${\mathcal H}$ which is normal in ${\mathcal G}$. Then the homomorphism ${\mathcal G}\longrightarrow{\mathcal G}/U$ induces the homomorphism $\mathbb I[[\cG]]_\sS\longrightarrow\mathbb I[[{\mathcal G}/U]]_\sS$. Further, we get the homomorphism $\pi:\konep{\mathbb I[[\cG]]_\sS}\longrightarrow\konep{\mathbb I[[{\mathcal G}/U]]}$. Let $\rho:{\mathcal G}/U\longrightarrow GL_n(L)$. Then we get an Artin representation $\mathrm{inf}(\rho):{\mathcal G}\longrightarrow{\mathcal G}/U\longrightarrow GL_n(L)$. For any $\widetilde x\in\konep{\mathbb I[[\cG]]_\sS}$, we have \begin{equation} \widetilde x(\mathrm{inf}(\rho))=\pi(\widetilde x)(\rho). \end{equation} \end{enumerate} From the localization sequence \eqref{localization}, we get the following exact sequence \begin{equation*} \konep{\mathbb I[[\cG]]}\longrightarrow\konep{\mathbb I[[\cG]]_\sS}\stackrel{\partial}{\longrightarrow} K_0(\mathbb I[[\cG]],\mathbb I[[\cG]]_\sS)\longrightarrow 0. \end{equation*} \begin{conj}\label{main-conj}(Main Conjecture over $\mathbb I[[{\mathcal G}]]$) Let $f\in S_\kappa(\mathfrak{N}{\mathfrak p},\varepsilon;\cO)$ be obtained through the arithmetic specialization $\phi_\kappa:\mathbb I\longrightarrow\cO$, for some finite extension $\cO$ of ${\mathbb Z}_p$. Let $\rho_{f}$ be the representation of $\mathrm{Gal}_E$ that is associated to $f$, $V:=\ad{\rho_f}$, and $V_{\mathfrak p}^+:=\mathcal{F}_{\mathfrak p}^+\ad{\rho_f}$. Then there exists an element $\widetilde\xi\in\konep{\mathbb I[[\cG]]_\sS}$ such that $\partial(\widetilde\xi)=-[\sgd{E_\infty}{\ad{\rho}}]$. Further, under the map in \eqref{evaluation}, the following interpolation properties are satisfied for all Artin representations $\alpha$ of ${\mathcal G}$ with degree $d_\alpha$ \iffalse{ \begin{equation} \phi_j(\xi)(\alpha)=\dfrac{L(\ad{\rho_f},\chi_p^{-j}\alpha,j)}{\Omega(+,\rho_f)^{d_+}\Omega(-,\rho_f)^{d_-}}. 
\end{equation} }\fi \begin{equation}\label{interpolate} \begin{split} \widetilde\phi_\kappa(\widetilde\xi)(\alpha)=&\dfrac{L_\Sigma(V(1),\alpha,0)}{(2\pi\imath c^\pm_p(\ad{M_f}(1)))^{d_\alpha}}\times\\ & \left[\dfrac{P_{\mathfrak p}(V\otimes\psi,T)}{P_{\mathfrak p}(V_{\mathfrak p}^+\otimes\psi,T)}\right]_{T=1} P_{\mathfrak p}((V_{\mathfrak p}^+\otimes\psi)^\ast,1) \prod_{{\mathfrak q}\mid\mathfrak{N},{\mathfrak q}\neq{\mathfrak p}}P_{\mathfrak q}(V\otimes\psi,1). \end{split} \end{equation} Here $L_\Sigma(V(1),\alpha,0)$ is the value of the L-function for the twisted adjoint representation with Euler factors for primes in the set $\Sigma:=\{{\mathfrak q}\mid\mathfrak{N}{\mathfrak p}\}$ removed. \end{conj} \iffalse For the representation $\ad{\rho}$, we assume that the Selmer group $\sg{E_\infty}{\ad{\rho}}$ is in the category $\mathfrak M^\mathbb I_H(G)$. The element in $\kone{\mathbb I[[\cG]]_\sS}$ interpolates special values of $L$-functions of the specializations twisted by Artin characters. More precisely, \begin{equation} \xi(\chi_p^j\alpha)=\dfrac{L(\ad{\rho_f},\chi_p^{k-j}\alpha,j)}{\Omega(+,\rho_f)^{d_+}\Omega(-,\rho_f)^{d_-}}. \end{equation} In this case, $\xi$ does not interpolate the special values immediately. However, once we specialize at a particular weight, we do conjecture that we get the special value of a twist of that specilization. At the step before specialization, after augmentation the value lands inside $Q(\mathbb I)^\times\cup\{\infty\}$. We believe that this is a $p$-adic $L$-function for a family whose $L$-values are twisted only by $\alpha$, and the variable here is only the weight variable. This makes sense as further specialization gives the special values at each weight. In other words, the big $p$-adic $L$-function interpolates the special values of abelian $p$-adic $L$-function. However, we define the interpolation by going to the special values. \begin{conj}(Main Conjecture) There exists an element $\xi\in\kone{\mathbb I[[\cG]]_\sS}$ such that $\partial(\xi)=-[\sgd{E_\infty}{\ad{\rho}}]$, such that the following interpolation properties are satisfied. \end{conj} Similar Main conjectures are also formulated in the thesis of Barth. There are two differences between the conjecture that he formulated and ours. Firstly, it is regarding the choice of the Ore set $\sS^\ast$. Secondly, in the interpolation, we allow only twists by Artin representations coming of ${\mathcal G}$, as in the case of the Main conjecture formulated in \cite{cfksv} and Fukaya-Kato. On the other hand, in the interpolation formula of Barth, he considers the isomorphism $\mathbb I[[\cG]]\cong{\mathbb Z}_p[[W\times G]]$, with $W\cong{\mathbb Z}_p$ being the weight variable, and then use the interpolation formula of Fukaya and Kato which allows only finite order characters of $W$ and possibly do not reflect the weight variable. The weight variable is, on the other hand, an infinite order character of $W$. Further, if we consider the $p$-adic Lie group $W\times{\mathcal G}$, then $W$ can be taken to be the group which is isomorphic to $Z_p$ and the Ore set defined with respect to $W$. However, in his thesis this is not the case. The one dimensional quotient isomorphic to ${\mathbb Z}_p$ comes from the group ${\mathcal G}$. }\f \iffalse \begin{theorem} Let $s=\mid S_E\mid$. 
Then, we have \begin{enumerate} \item If $\sg{E}{\ad{\rho}}=0$ and $J\in\mathbb I$ then \begin{equation}\begin{split} &\mathcal R_{E_n}^\phi\cong h_{E_n}^\phi\cong\mathcal{O}[[D_n]], \mathcal R_{E_n}^{n.ord}\cong h_{E_n}^{n.ord}\cong\mathcal{O}[[D_n\times Cl_{E_n}(p^\infty)_p]];\\ &\sg{\inft{E}}{\ad{\phi}}\cong\mathbb I[S_E] \end{split} \end{equation} \item $\sg{\cyn{E}{n}}{\ad{\phi}}$ is torsion over $\mathcal O[[\Gamma_n]]$ for all $n$ and is pseudo-isomorphic to $\mathbb I^s\times M$, where \begin{equation} M:=\Om{\mathcal R_{\cyn{E}{n}}}{\mathcal O[[\cy D]]}. \end{equation} \item Suppose that $\mathbb I$ is normal. Let $\Phi(T)$ be the characteristic ideal of $M$, and $\Psi(T)$ the characteristic ideal of $\sg{\cyn{E}{n}}{\ad{\phi}}$. Then \begin{equation} \Psi(T)=\Phi(T)T^s, \Phi(0)\neq0 \mbox{ and } \Phi(0)\mid J\eta, \end{equation} where $\eta$ is the characteristic ideal of the $\mathbb I$-module $\sg{E_n}{\ad{\phi}}$. \end{enumerate} \end{theorem} \begin{proof} \end{proof} }\f \begin{remark}\label{Gen-MC} A similar Main conjecture is formulated in the thesis of Barth \cite{barth}. Similar main conjectures can be formulated for any $p$-adic family of nearly ordinary Galois representations. See sections 4.3 and 4.4 for a discussion about the interpolation properties. See \cite[Theorem 4.1.12]{fk} for the interpolation property for motives. \end{remark} \subsection{Main conjecture in the abelian case Let $F$ be a totally real field, and $f$ be a Hilbert modular Hecke eigenform of weight $\kappa$, level $\mathfrak{N}$. We also assume that $f$ is ordinary at all the primes above the prime $p$. We denote the Galois representation associated to $f$ by $\rho_f$. Let $\boldsymbol\rho$ be the universal ordinary deformation of $\rho_f$. Consider the field $\inft{F}$ which is the maximal abelian pro-$p$ extension of $F$ which is unramified outside the $p\mathfrak{N}$. Let ${\mathcal G}=\mathrm{Gal}(\inft{F}/F)$. Then ${\mathcal G}={\mathcal H}\times\Gamma$ where $\Gamma$ is the Galois group of the cyclotomic ${\mathbb Z}_p$-extension of $F$, and we assume that ${\mathcal H}$ is some finite group. In fact, if Leopoldt conjecture for $F$ is true, then ${\mathcal H}$ will be a finite group. Then for $\mathbb I=\cO[[X_1,\cdots,X_r]]$, we get $\mathbb I[[\cG]]\cong\mathbb I[{\mathcal H}][[\Gamma]]$. In this context, the $p$-adic L-function of $\ad{\rho_f}$ over the quotient field of $\mathbb I[[\cG]]$ is constructed by Hida and Tilouine in the case $F={\mathbb Q}$. If $F$ is totally real and its ring of integers has class number one, then the $p$-adic L-function has been constructed by Hsin-Tai Wu (\cite{wu}). In general, it has been constructed by Rosso in \cite[Theorem 7.2]{rosso}. The Main conjecture over the field ${\mathbb Q}$ is also proven for the Selmer group of the adjoint in \cite{urban} in some cases and over totally real fields $F$ in certain cases in \cite[Section 10]{rosso}. Then, we have ${\mathcal L}_p(X_1,\cdots,X_r,\epsilon,T)=G(X_1,\cdots,X_r,\epsilon,T)$, as ideals, where $G$ is the characteristic ideal of the dual selmer group of $\ad{\boldsymbol\rho}$. Here $\epsilon$ is any finite order character of ${\mathcal G}$. We now interpret this equality of the Iwasawa Main conjecture in terms of $K$-theory. In this situation, the category $\mathfrak{M}_{\mathcal H}^\mathbb I[[\cG]]$ consists of all finitely generated modules which are $\sS^\ast$-torsion. \begin{lemma} Suppose that the $\mu$-invariant of the Selmer group $\sgd{\cy{F}}{\ad{\rho_f}}$ be equal to zero. 
Then $\sgd{\inft{F}}{\ad{\rho_f}}$ is $S$-torsion. It follows that $\sgd{\inft{F}}{\ad{\boldsymbol\rho}}$ is $\sS$-torsion, and the $p$-adic L-function ${\mathcal L}_p$ is a unit in $\mathbb I[[\cG]]_\sS$. \end{lemma} \begin{proof}Let ${\mathcal H}={\mathcal H}'\times{\mathcal H}_p$, where ${\mathcal H}_p$ is the $p$-part of ${\mathcal H}$ and ${\mathcal H}'$ the group whose order is prime-to-$p$. By the proof of the Main conjecture due to Urban and Rosso, mentioned above, we have ${\mathcal L}_p(X_1,\cdots,X_r,\epsilon,T)=G(X_1,\cdots,X_r,\epsilon,T)$, for every character $\epsilon$ of ${\mathcal H}'$. Furthermore, as $\sgd{\inft{F}}{\ad{\boldsymbol\rho}}$ is $\sS$-torsion, therefore Note that $\mathbb I[[\cG]]_\sS$ is the localization at the prime ideal ${\mathfrak m}_\mathbb I$. Then we have the following decomposition \begin{equation*} \mathbb I[[\cG]]_{{\mathfrak m}_\mathbb I}\stackrel{\cong}{\longrightarrow}\mathbb I[H'\times H_p][[T]]_{{\mathfrak m}_\mathbb I}\stackrel{\cong}{\longrightarrow}\oplus_{\psi\in\widehat{H'}}\mathbb I[H_p][[T]]_{{\mathfrak m}_\mathbb I}. \end{equation*} It is enough for us to show that the image in each summand is a unit. Note that the image in each summand is ${\mathcal L}_p(X_1,\cdots,X_r,\psi,T)=G(X_1,\cdots,X_r,\psi,T)$. Since the $\mu$-invariant is equal to zero, $G(X_1,\cdots,X_r,\psi,T)\in\sS$, and hence $G(X_1,\cdots,X_r,\psi,T)$ is a unit in $\mathbb I[[\cG]]_\sS$. \end{proof} Now, let $Y$ be any finitely generated $\mathbb I[[T]]$-module which is annihilated by an element outside the maximal ideal ${\mathfrak m}_\mathbb I$ of $\mathbb I$. Then the characteristic ideal $\mathscr P$ of $Y$ belongs to $\mathbb I[[T]]_{{\mathfrak m}_\mathbb I}^\times$. Now consider the class in $K_0(\mathbb I[[T]],\mathbb I[[T]]_{{\mathfrak m}_\mathbb I})$ which is given by $\left[\left(Y,0,\frac{\mathbb I[[T]]}{f\mathbb I[[T]]}\right)\right]$. Then, under the connecting homomorphism \begin{equation*} \partial:\kone{\mathbb I[[T]]_{{\mathfrak m}_\mathbb I}}\cong\mathbb I[[T]]_{{\mathfrak m}_\mathbb I}^\times\longrightarrow K_0(\mathbb I[[T]],\mathbb I[[T]]_{{\mathfrak m}_\mathbb I}), \end{equation*} we have $\partial(\mathscr P)=\left[\left(Y,0,\frac{\mathbb I[[T]]}{f\mathbb I[[T]]}\right)\right]$. Taking $Y$ to be the module $\sgd{\inft{F}}{\ad{\boldsymbol\rho}}$ we have the following theorem. \begin{theorem} Let $\sgd{\inft{F}}{\ad{\boldsymbol\rho}}$ be annihilated by an element of $\mathbb I[[\cG]]$ outside the maximal ideal ${\mathfrak m}_\mathbb I$ of $\mathbb I$. Then under the connecting homomorphism $\partial:\kone{\mathbb I[[\cG]]_\sS}\longrightarrow K_0(\mathbb I[[\cG]],\mathbb I[[\cG]]_{{\mathfrak m}_\mathbb I})$, the $p$-adic L-function is mapped to the class $-[\sgd{\inft{F}/F}{\ad{\rho}}]$. \end{theorem} \subsection{Remark on zeros}\label{remark-trivial-zeros} As a consequence of Theorem \ref{trivial-zeros}, we get the following result: \begin{propn} Recall the notations from Theorem \ref{trivial-zeros}. Then we have the following equality in $K_0(\mathbb I[[\cG]],\mathbb I[[\cG]]_{\sS})$: \begin{equation*} [\sgd{\inft{E}}{\ad{\boldsymbol\rho}}]=[\Om{\mathcal R_\infty}{\cO[[\inft{D}]]}\otimes_{\mathcal R_\infty}\mathbb I]+[\Om{\mathbb I}{\cO[[I_0]]}\otimes_{\mathbb I}\mathbb I]. 
\end{equation*} Assuming the noncommutative Main Conjecture, the $p$-adic L-function for $[\sgd{\inft{E}}{\ad{\boldsymbol\rho}}]$ arises as the product of the $p$-adic L-function for $[\Om{\mathcal R_\infty}{\cO[[\inft{D}]]}\otimes_{\mathcal R_\infty}\mathbb I]$ and $[\Om{\mathbb I}{\cO[[I_0]]}\otimes_{\mathbb I}\mathbb I]$. \end{propn} \begin{proof} By Theorem \ref{trivial-zeros}, we have the following short-exact sequence: \begin{equation*} 0\longrightarrow \sgd{\inft{E}}{\ad{\boldsymbol\rho}} \stackrel{}{\longrightarrow} \left(\Om{\mathcal R_\infty}{\cO[[\inft{D}]]}\otimes_{\mathcal R_\infty}\mathbb I\right)\oplus\left(\Om{\mathbb I}{\cO[[I_0]]}\otimes_{\mathbb I}\mathbb I\right) \stackrel{}{\longrightarrow}\Om{\mathbb I}{A_0}\otimes_{\mathbb I}\mathbb I\longrightarrow 0. \end{equation*} Then, we have the following equality in $K_0(\mathbb I[[\cG]],\mathbb I[[\cG]]_{\sS})$: \begin{equation*} [\sgd{\inft{E}}{\ad{\boldsymbol\rho}}]+[\Om{\mathbb I}{A_0}\otimes_{\mathbb I}\mathbb I]=[\Om{\mathcal R_\infty}{\cO[[\inft{D}]]}\otimes_{\mathcal R_\infty}\mathbb I]+[\Om{\mathbb I}{\cO[[I_0]]}\otimes_{\mathbb I}\mathbb I]. \end{equation*} Since the module $\Om{\mathbb I}{A_0}$ is $\mathbb I$-torsion, therefore it is the trivial class, and we have, \begin{equation*} [\sgd{\inft{E}}{\ad{\boldsymbol\rho}}]=[\Om{\mathcal R_\infty}{\cO[[\inft{D}]]}\otimes_{\mathcal R_\infty}\mathbb I]+[\Om{\mathbb I}{\cO[[I_0]]}\otimes_{\mathbb I}\mathbb I]. \end{equation*} Assuming the noncommutative Main Conjecture for the classes $[\Om{\mathcal R_\infty}{\cO[[\inft{D}]]}\otimes_{\mathcal R_\infty}\mathbb I]$ and $[\Om{\mathbb I}{\cO[[I_0]]}\otimes_{\mathbb I}\mathbb I]$ there exists elements $\widetilde\psi,\widetilde\tau\in\konep{\mathbb I[[\cG]]_\sS}$ such that their images under the connecting homomorphism $\partial$ are the modules $[\Om{\mathcal R_\infty}{\cO[[\inft{D}]]}\otimes_{\mathcal R_\infty}\mathbb I]$ and $[\Om{\mathbb I}{\cO[[I_0]]}\otimes_{\mathbb I}\mathbb I]$ in $K_0(\mathbb I[[\cG]],\mathbb I[[\cG]]_{\sS})$. Therefore, $\partial$ maps the product $\widetilde\psi\widetilde\tau$ to the class $[\sgd{\inft{E}}{\ad{\boldsymbol\rho}}]$. \end{proof} \begin{remark} Over the cyclotomic ${\mathbb Z}_p$-extension, we saw in Theorem \ref{trivial-zeros} that the $\mathbb I[[T]]$-module $\Om{\mathbb I}{\cO[[I_0]]}$ is pseudo-isomorphic to $T^s$, where $s=S_F$ is the number of primes of $E$ above $p$. In the case of a $p$-adic Lie extension also, the pre-image of the class $[\Om{\mathbb I}{\cO[[I_0]]}\otimes\mathbb I]$ occurs as a factor of the $p$-adic L-function of the class $[\sgd{\inft{E}}{\ad{\boldsymbol\rho}}]$. \end{remark} \section{$K_1$ computations and congruences over $\mathbb I[[\cG]]$}\label{k-one} In this section, we extend the strategy that has so far been followed to prove the Main conjecture. This strategy relying on a description of the $K$-groups was first used by Burns and then by Kato, who used it to prove certain instances of the Main conjecture over number fields. Independently, Ritter and Weiss also proved instances of the Main conjecture. Their ideas are similar to that of Burns and Kato. Kakde and Hara also proved instances of the Main conjecture for certain $p$-adic Lie extensions. They were inspired by the work of Burns and Kato. We extend their results suitably and show that the Main conjecture that we have formulated can also be established for certain $p$-adic families of Galois representations. 
\subsection{General strategy} The strategy involves reducing the proof of the Main conjecture over compact $p$-adic Lie groups to compact $p$-adic Lie groups of dimension one. For this, it is crucial to know that the completed group ring $\mathbb I[[\cG]]$ is an adic ring. Indeed, if ${\mathfrak m}_\mathbb I$ is the maximal ideal of $\mathbb I$ and $I_{\mathcal G}$ the augmentation ideal of $\mathbb I[[\cG]]$, then $J_{\mathcal G}={\mathfrak m}_\mathbb I+I_{\mathcal G}$ is the maximal ideal of $\mathbb I[[\cG]]$, and $\mathbb I[[\cG]]$ is an adic ring with respect to the ideals $\{J_{\mathcal G}^n:n\in\mathbb N\}$ in the sense of Fukaya-Kato (\cite[1.4.1]{fk}). Then it is shown in Fukaya-Kato (\cite[Prop 1.5.1]{fk}) that \begin{equation*}\label{k-limit-iso} \kone{\mathbb I[[\cG]]}\stackrel{\cong}{\longrightarrow}\varprojlim_n\kone{\mathbb I[[\cG]]/J_{\mathcal G}^n}. \end{equation*} Following \cite{burns1}, in \cite[\S 4]{kakde}, a series of reduction steps are made showing that the proof of the Main Conjecture for any arbitrary $p$-adic Lie group can be reduced to the case when the Galois group ${\mathcal G}$ has dimension one, with ${\mathcal G}\cong \Delta\times {\mathcal G}_p$, where $\Delta$ is a finite cyclic group of order prime to $p$ and ${\mathcal G}_p$ is a pro-$p$ compact $p$-adic Lie group of dimension one. We do not give the steps leading to this reduction, though the idea essentially is derived from the isomorphism\eqref{k-limit-iso} above. We proceed with the belief that similar reductions are possible. Consider the Iwasawa algebra $\mathbb I\cong\cO[[X_1,\cdots,X_r]]$, for some $r$. Then $\mathbb I[[\cG]]\cong\prod_{\psi\in\Delta^\ast}\mathbb I[\psi]({\mathcal G}_p)$, where $\mathbb I[\psi]$ is the algebra obtained by adjoining the values of $\psi$ to $\mathbb I$. This further allows one to reduce the proof of the Main conjecture to the case when ${\mathcal G}$ is a pro-$p$, compact $p$-adic Lie group of dimension one. We now assume that ${\mathcal G}$ is a $p$-adic Lie group of dimension 1. Let $\Sigma({\mathcal G})$ be any set of rank 1 subquotients of ${\mathcal G}$ of the form $U^{ab}$ with $U$ an open subgroup of ${\mathcal G}$ that has the following property: \begin{description} \item[($\ast$)] For each Artin representation $\rho$ of ${\mathcal G}$, there is a finite subset $\{U^{ab}_i:i\in I\}$ of $\Sigma({\mathcal G})$ and for each index $i$ an integer $m_i$ and a degree one representation $\rho_i$ of $U^{ab}$ such that there is an isomorphism of virtual representations $\rho\cong\sum_{i\in I}m_i.\Ind{{\mathcal G}}{U_i}{\Ind{U_i}{U_i^{ab}}{\rho_i}}$. \end{description} Let $U^{ab}$ be a subquotient satisfying the above property $(\ast)$. Note that we have the following natural homomorphism, \begin{equation} \kone{\mathbb I[[\cG]]_\sS}\longrightarrow \kone{\mathbb I(U)_\sS}\longrightarrow \kone{\mathbb I(U^{ab})_\sS}\longrightarrow\mathbb I(U^{ab})_\sS^\times\subset Q_\mathbb I(U^{ab})^\times. \end{equation} Taking all the $U^{ab}$ in $\Sigma({\mathcal G})$ we get the following homomorphism \begin{equation} \Theta_{\Sigma({\mathcal G})}:\kone{\mathbb I[[\cG]]}\longrightarrow\prod_{U^{ab}\in\Sigma({\mathcal G})}Q_\mathbb I(U^{ab})^\times. \end{equation} \begin{defn}\label{sk-def} Let $\mathbb{K}=K[[X_1,\cdots,X_r]]$, where $K$ is the quotient field of $\cO,$ and $Y$ is the variable corresponding to $\widetilde\Gamma^{p^e}$. 
For any finite group $G$, we consider the following groups (see \cite[Page 173]{ol}): \begin{equation*} \begin{split} SK_1(\mathbb I[[\cG]])&:=\mathrm{ker}\left[\kone{\mathbb I[[\cG]]}\longrightarrow\kone{\mathbb{K}[[{\mathcal G}]]}\right],\\ \konep{\mathbb I[[\cG]]}&:=\kone{\mathbb I[[\cG]]}/S\kone{\mathbb I[[\cG]]},\\ \end{split} \end{equation*} where $\mu_K$ is the set of roots of unity in $K$. \end{defn} \iffalse{ We now consider the set $\Sigma({\mathcal G})$ considered by Kato, Kakde, Burns, Ritter-Weiss. Then we have the following analogue of a result of Burns and Kato. \begin{propn}[Burns, Kato] A $p$-adic L-function exists if the following two conditions are satisfied: \end{propn} Let ${\mathcal G}$ be a $p$-elementary group of rank one with a direct product decomposition ${\mathcal G}=P\times P'$, with $P$ as a $p$-group of rank one and $P'$ finite. Then we have $\mathbb I[[\cG]]=\prod_{\psi\in P'^\ast}\mathbb I_{{\mathbb Z}_p[\psi]}(P)$ and $\mathcal{L}_{\mathcal G}\cong\oplus_{\psi\in P'^\ast}(\mathcal L\otimes\mathcal{L}_\psi)_P$. In particular, after replacing ${\mathcal G}$, $\mathbb I[[\cG]]$ by $P$ and $\mathbb I[[\cG]]$ we may assume that ${\mathcal G}$ is a pro-$p$ group of rank one. }\fi \begin{propn} Let the $\mu$-invariant of the dual Selmer group $\sgd{\cy{E}}{\ad{\boldsymbol\rho}}$ be equal to zero. Then the Main Conjecture in \ref{main-conj} is valid if and only if for any set of subquotients $\Sigma({\mathcal G})$ with the property (*) above, the following two conditions hold: \begin{enumerate} \item there exists a subgroup $\Phi$ of $\prod_{U^{ab}\in\Sigma({\mathcal G})}\mathbb I(U^{ab})^\times$ such that $\Theta_{\Sigma({\mathcal G})}:\kone{\mathbb I[[\cG]]}{\longrightarrow}\Phi$ is an isomorphism; \item there exists a subgroup $\Phi_\sS$ of $\prod_{U^{ab}\in\Sigma({\mathcal G})}\mathbb I(U^{ab})_\sS^\times$ such that $\Phi_\sS\cap(\prod_{U^{ab}\in\Sigma({\mathcal G})}\mathbb I(U^{ab})^\times)=\Phi$ and $\Theta'_{\Sigma({\mathcal G})}(\kone{\mathbb I[[\cG]]_\sS})\subset\Phi_\sS$. \end{enumerate} \end{propn} \begin{proof} Let $C:=[\sgd{E_\infty}{\ad{\boldsymbol\rho}}]$. Consider the following commutative diagram: \begin{equation} \xymatrix{ \konep{\mathbb I[[\cG]]}\ar[r]\ar[d]^{\Theta_{\Sigma({\mathcal G})}} & \konep{\mathbb I[[\cG]]_\sS}\ar[r]^{\widetilde\partial_{\mathcal G}}\ar[d]^{\Theta'_{\Sigma({\mathcal G})}} & K_0(\mathbb I[[\cG]],\mathbb I[[\cG]]_\sS)\ar[r]\ar[d]^{\Theta_0} & 0\\ \prod_{U^{ab}\in\Sigma({\mathcal G})}\konep{\mathbb I(U^{ab})}\ar[r] & \prod_{U^{ab}\in\Sigma({\mathcal G})}\konep{\mathbb I(U^{ab})_\sS}\ar[r]^{\!\!\!\!\!\partial_{\mathcal G}} & \prod_{U^{ab}\in\Sigma({\mathcal G})}K_0(\mathbb I(U^{ab}),\mathbb I(U^{ab})_\sS)\ar[r] & 0 } \end{equation} Let $g$ be any element in $\kone{\mathbb I[[\cG]]_\sS}$ such that $\partial_{\mathcal G}(g)=-C$. Since the Main conjecture is valid for the extension $E^{U^{ab}}/E$, there exists $\xi_{U^{ab}}$ such that it is the pre-image of the class $-[\sgd{E^{U^{ab}}}{\ad{\boldsymbol\rho}}]$. On the other hand, the commutativity of the square on the left also implies that under the map $\Theta'_{\Sigma({\mathcal G})}$ the element $\left(g_{U^{ab}}\right)$ is mapped to $-[\sgd{E^{U^{ab}}}{\ad{\boldsymbol\rho}}]$. Therefore the element $(g_{U^{ab}}^{-1}\xi_{U^{ab}})$ comes from the group $\prod_{U^{ab}\in\Sigma({\mathcal G})}\kone{\mathbb I(U^{ab})}=\prod_{U^{ab}\in\Sigma({\mathcal G})}{\mathbb I(U^{ab})}^\times$. The second condition therefore implies that $(g_{U^{ab}}^{-1}\xi_{U^{ab}})\in\Phi$. 
By the isomorphism in the first condition, we find that there exists $u\in\konep{\mathbb I[[\cG]]}$, such that $\Theta_{\Sigma({\mathcal G})}(u)=(g_{U^{ab}}^{-1}\xi_{U^{ab}})$. Since the map $\Theta_{\Sigma({\mathcal G})}$ is injective, the map $\konep{\mathbb I[[\cG]]} \longrightarrow \konep{\mathbb I[[\cG]]_\sS}$ is injective. Now, we define $\xi_{\mathcal G}:=ug$, and we claim that this is the $p$-adic L-function defined over $\mathbb I[[\cG]]$ that satisfies the interpolation formula. Clearly, $\widetilde\partial_{\mathcal G}(\xi_{\mathcal G})=\widetilde\partial_{\mathcal G}(u)+\widetilde\partial_{\mathcal G}(g)=\widetilde\partial_{\mathcal G}(g)=-C$ as $u$ comes from an element of $\konep{\mathbb I[[\cG]]}$. For the interpolation formula, for any Artin representation $\rho$ of ${\mathcal G}$, consider the isomorphism $\rho\cong\sum_{i\in I}m_i.\Ind{{\mathcal G}}{U_i}{\Ind{U_i}{U_i^{ab}}{\rho_i}}$ of virtual representations given by the condition (*) above. Then, we have, \begin{equation*} \phi_\kappa(\xi_{\mathcal G})(\rho)=\prod_{i\in I}\phi_\kappa(\xi_{\mathcal G})(\Ind{{\mathcal G}}{U_i}{\Ind{U_i}{U_i^{ab}}{\rho_i}})^{m_i} =\prod_{i\in I}\phi_\kappa(\xi_{U_i^{ab}})(\rho_i)^{m_i}. \end{equation*} On the other hand, if the Main conjecture is true over the extension, then there exists $\xi\in\konep{\mathbb I[[\cG]]_\sS}$. Let $\Theta'_{\Sigma({\mathcal G})}(\xi)=(\xi_{U^{ab}})\in\prod_{\Sigma({\mathcal G})}(\mathbb I(U^{ab}_\sS)^\times$. Note that the image $(\xi_{U^{ab}})\in\Phi_\sS$. By the interpolation formula, it is easy to see that the element $\xi_{U^{ab}}$ is the $p$-adic L-function over $U^{ab}$. Therefore $\xi_{U^{ab}}\in\Phi_\sS$. This finishes the proof of the proposition. \end{proof} We now fix a lift $\widetilde\Gamma$ of $\Gamma$ in ${\mathcal G}$. Then we can identify ${\mathcal G}$ with $H\rtimes\Gamma$. Fix $e\in\mathbb N$ such that $\widetilde\Gamma^{p^e}\subset Z({\mathcal G})$, and put $\overline{\mathcal G}:={\mathcal G}/{\Gamma^{p^e}}$ and ${\mathscr R}:=\mathbb I(\widetilde\Gamma^{p^e})$. Then $\mathbb I[[\cG]]\cong {\mathscr R}[\overline{\mathcal G}]^\tau$, the twisted group ring with multiplication \begin{equation*} (h\widetilde\gamma^a)^\tau(h\widetilde\gamma^b)^\tau=\widetilde\gamma^{p^e[\frac{a+b}{b}]}(h\widetilde\gamma^a.h'\widetilde\gamma^b)^\tau, \end{equation*} where $g^\tau$ is the image of $g\in G$ in ${\mathscr R}[\overline{\mathcal G}]^\tau$ (\cite[\S 5.1.1, \S 5.1.2]{kakde}). The Ore set $\sS$ that we have considered in the formulation of the Main Conjecture over $\mathbb I[[\cG]]$ contains a multiplicative set which is crucial in setting up the strategy to prove the Conjecture. \begin{lemma} Let $Z:=Z({\mathcal G})$. Consider the subset $T=\mathbb I(Z)\setminus p \mathbb I(Z)$. Then $T$ is a multiplicatively closed left and right Ore set of $\mathbb I[[\cG]]$. Further, the inclusion of rings $\mathbb I[[\cG]]_T\hookrightarrow\mathbb I[[\cG]]_\sS$ is an isomorphism. \end{lemma} \begin{proof} As $Z$ is central in ${\mathcal G}$, it is easy to see that $T$ is a multiplicatively Ore set. Further, $T$ has no zero-divisors as it is contained in the domain $\mathbb I(Z)$. Therefore, the natural map $\mathbb I[[\cG]]_T\longrightarrow\mathbb I[[\cG]]_\sS$ induced by the inclusion $T\hookrightarrow\sS$ is an injective. For surjectivity, consider the equality $\mathbb I[[\cG]]_T=\mathbb I(Z)_T\otimes_{\mathbb I(Z)}\mathbb I[[\cG]]$. We first observe that $Q(\mathbb I(Z))\otimes_{\mathbb I(Z)}\mathbb I[[\cG]]= Q(\mathbb I[[\cG]])$. 
Indeed, it is easy to see that \begin{equation*} Q(\mathbb I(Z))\otimes_{\mathbb I(Z)}\mathbb I[[\cG]]\hookrightarrow Q(\mathbb I[[\cG]]). \end{equation*} Further, the ring $\mathbb I[[\cG]]=\mathbb I(Z)[\overline{\mathcal G}]$ is a module of finite rank over $\mathbb I(Z)$ and $Q(\mathbb I(Z))$ is a field, so the ring $Q(\mathbb I(Z))\otimes_{\mathbb I(Z)}\mathbb I[[\cG]]$ is Artinian. It follows that every regular element is a unit. The inclusion $\mathbb I[[\cG]]\hookrightarrow Q(\mathbb I(Z))\otimes_{\mathbb I(Z)}\mathbb I[[\cG]]$, then implies that every regular element of $\mathbb I[[\cG]]$ is invertible in $Q(\mathbb I(Z))\otimes_{\mathbb I(Z)}\mathbb I[[\cG]]$. It follows that the inclusion $Q(\mathbb I(Z))\otimes_{\mathbb I(Z)}\mathbb I[[\cG]]\hookrightarrow Q(\mathbb I[[\cG]])$ is surjective. Finally, if $x\in\mathbb I[[\cG]]_\sS\subset Q(\mathbb I[[\cG]])$, then $x=a/t$, for some $a\in\mathbb I[[\cG]]$ and $t\in\mathbb I(Z)$ with $t\neq 0$. Here, if $t\in p^n\mathbb I(Z)$, then $tx=a\in p^n\mathbb I[[\cG]]_\sS$. Since $a\in p^n\mathbb I[[\cG]]$, it follows that $a\in p^n\mathbb I[[\cG]]_\sS$. Cancelling the powers of $p$ from $a$ and $t$, the element $x=a'/t'$ with $t'\in T$. \end{proof} \begin{remark} The $p$-adic completion of $\mathbb I[[\cG]]_T$ is denoted by $\widehat\mathbb I[[\cG]]_T=\widehat{\mathbb I(Z)}_T[\overline{\mathcal G}]^\tau$. We also note that the localizations with respect to $T$ and $p$ are equal $\mathbb I(Z)_T=\mathbb I(Z)_{(p)}$. \end{remark} Let $P$ be a subgroup of $\overline{\mathcal G}$ and $U_P$ be the inverse image of $P$ in ${\mathcal G}$. Let \begin{enumerate} \item $N_{\overline{\mathcal G}}P:=$ the normalizer of $P$ in ${\mathcal G}$, \item $W_{\overline{\mathcal G}}(P):=N_{\overline{\mathcal G}}P/P$, \item $C(\overline{\mathcal G}):=$ set of cyclic subgroups of $\overline{\mathcal G}$, \item for $P\in C(\overline{\mathcal G})$, the set $C_P(\overline{\mathcal G})$ denotes the set of cyclic subgroups $P'$ of $\overline{\mathcal G}$ with $P'^p=P$ and $P'\neq P$. \end{enumerate} If $P\in C(\overline{\mathcal G})$, then $U_P$ is a rank one abelian subquotient of ${\mathcal G}$, and for every $P\in C(\overline{\mathcal G})$ set \begin{equation} {\mathscr T}_P:=\{\sum_{g\in W_{\overline{\mathcal G}}(P)}g^\tau x(g^\tau)^{-1}\mid x\in {\mathscr R}[P]^\tau\}. \end{equation} In the same way, we define ${\mathscr T}_{P,\sS}$ and $\widehat{{\mathscr T}}_P$. Let $P\leq P'\leq\overline{\mathcal G}$. Then consider the homomorphism $\mathbb I[[\cG]]\longrightarrow\mathbb I[[\cG]]$ given by $x\mapsto \sum_{g\in P'/P}\tilde{g}x\tilde{g}^{-1}$, where $\tilde{g}$ is a lift of $g$. We define ${\mathscr T}_{P,P'}$ to be the image of this homomorphism. Similarly, we define ${\mathscr T}_{P,P',\sS}$ and $\widehat{\mathscr T}_{P,P',\sS}$, by considering the images of the same map on $\mathbb I(U_P)_\sS\longrightarrow\mathbb I(U_P)_\sS$ and $\widehat{\mathbb I(U_P)}_\sS\longrightarrow\widehat{\mathbb I(U_P)}_\sS$. For two subgroups $P, P'$ of $\overline{\mathcal G}$ with $[P',P']\leq P\leq P'$ consider \begin{equation} \begin{split} &\mathrm{Tr}_P^{P'}:\mathbb I(U_{P'}^{ab})\longrightarrow\mathbb I(U_P/[U_{P'},U_{P'}]),\quad\mbox{(the trace map)},\\ &\mathrm{Nr}_P^{P'}:\mathbb I(U_{P'}^{ab})^\times\longrightarrow\mathbb I(U_P/[U_{P'},U_{P'}])^\times,\quad\mbox{(the norm map)},\\ &\Pi_P^{P'}:\mathbb I(U_{P'}^{ab})\longrightarrow\mathbb I(U_P/[U_{P'},U_{P'}]),\quad\mbox{(the projection map)}. 
\end{split} \end{equation} We also have these maps in the localized case and the $p$-adic completion case. We continue to denote them by $\mathrm{Tr}_P^{P'}, \mathrm{Nr}_P^{P'}$ and $\Pi_P^{P'}$. Recall the map $\Theta_{\Sigma({\mathcal G})}$. For every subgroup $P$ of $\overline{\mathcal G}$, let $U_P$ denote the inverse image of $P$ in ${\mathcal G}$. Then we have the following natural homomorphism \begin{equation} \Theta_P^{\mathcal G}:\konep{\mathbb I[[\cG]]}\longrightarrow\konep{\mathbb I(U_P)}\longrightarrow\konep{\mathbb I(U_P^{ab})}=\mathbb I(U_P^{ab})^\times. \end{equation} Combining all these homomorphisms, we get the following homomorphism \begin{equation} \Theta^{\mathcal G}=(\Theta_P^{\mathcal G})_{P\leq\overline{\mathcal G}}:\konep{\mathbb I[[\cG]]}\longrightarrow\prod_{P\leq\overline{\mathcal G}}\mathbb I(U_P^{ab})^\times. \end{equation} Similarly, we also consider the following homomorphisms: \begin{equation} \Theta_\sS^{\mathcal G}:\konep{\mathbb I[[\cG]]_\sS}\longrightarrow\prod_{P\leq\overline{\mathcal G}}\mathbb I(U_P^{ab})^\times_\sS \end{equation} and \begin{equation} \widehat\Theta^{\mathcal G}:\konep{\widehat{\mathbb I[[\cG]]}}\longrightarrow\prod_{P\leq\overline{\mathcal G}}\widehat{\mathbb I(U_P^{ab})^\times}. \end{equation} For $P\in C(\overline{\mathcal G})$ with $P\neq (1)$, fix a homomorphism $\omega_P:P\longrightarrow\bar{\mathbb Q}_p^\times$ of order $p$, and also a homomorphism $\omega_1:=\omega_{\{1\}}:\widetilde\Gamma^{p^e}\longrightarrow\bar{\mathbb Q}_p^\times$ of order $p$. The homomorphism $\omega_P$ induce the following homomorphism which we again denote by the same symbol: \begin{equation} \omega_P:\mathbb I[\mu_p](U_P)^\times\longrightarrow\mathbb I[\mu_p](U_P)^\times, g\mapsto \omega_P(g)g. \end{equation} For $P\leq \overline{\mathcal G}$, consider the homomorphism $\alpha_P:\mathbb I(U_P)_{\sS}^\times\longrightarrow\mathbb I(U_P)_{\sS}^\times$ defined by \begin{equation} \alpha_P(x):=\begin{cases} x^p &\mbox{ if } P=\{1\}\\%x^p\varphi(x)^{-1} &\mbox{ if } P=\{1\}\\ x^p(\prod_{k=0}^{p-1}\omega_P^k(x))^{-1} &\mbox{ if } P\neq\{1\}\mbox{ and cyclic}\\ x^p &\mbox{ if } P \mbox{ is not cyclic}. \end{cases} \end{equation} Note that, for all $P\leq\overline{\mathcal G}$, there is an action of ${\mathcal G}$ and $\overline{\mathcal G}$ act on $U_P^{ab}$ by conjugation since $\widetilde\Gamma^{p^e}$ is central. The following theorem is a generalization of results of Kakde, Kato, Burns, and Ritter and Weiss to $\mathbb I[[\cG]]$-modules. \begin{theorem} Let ${\mathcal G}$ be a rank one pro-$p$ group. Then the set $\Sigma({\mathcal G}):=\{U_P^{ab}:P\leq\overline{\mathcal G}\}$ satisfies the condition $(\ast)$. Further, an element $(\Xi_\cA)_{\cA}\in\prod_{\cA\in\Sigma({\mathcal G})}\mathbb I(\cA)^\times$ belongs to $im(\Theta_{{\mathcal G}})$ if and only if it satisfies all of the following three conditions. \begin{enumerate} \item For all subgroups $P, P'$ of $\overline{\mathcal G}$ with $[P',P']\leq P\leq P'$, one has \begin{equation} \mathrm{Nr}_P^{P'}(\Xi_{U_{P'}^{ab}})=\Pi_P^{P'}(\Xi_{U_{P'}^{ab}}). \end{equation} \item For all subgroups $P$ of $\overline{\mathcal G}$ and all $g$ in $\overline{\mathcal G}$ one has $\Xi_{gU_{P}^{ab}g^{-1}}=g\Xi_{U_{P'}^{ab}}g^{-1}$. \item For every $P\in\overline{\mathcal G}$ and $P\neq (1)$, we have \begin{equation*} \mathrm{ver}_P^{P'}(\Xi_{U_{P'}^{ab}})\equiv \Xi_{U_P^{ab}} \pmod{{\mathscr T}_{P,P'}} (\mbox{ resp. } {\mathscr T}_{P,P',\sS} \mbox{ and } \widehat{\mathscr T}_{P,P'}). 
\end{equation*} \item For all $P\in C(\overline{\mathcal G})$ one has $\alpha_P(\Xi_{U_{P}^{ab}})\equiv\prod_{P'\in C_P(\overline{\mathcal G})}\alpha_{P'}(\Xi_{U_{P'}^{ab}})\pmod{p{\mathscr T}_P}$. \end{enumerate} \end{theorem} We give a proof of this theorem referring to \cite{kakde} for many of the details which remain true in our set-up. \begin{defn} Let $\Phi_{\mathscr R}^{\overline{\mathcal G}}$ (resp. $\Phi_{{\mathscr R},\sS}^{\overline{\mathcal G}}$ and $\widehat\Phi_{{\mathscr R}}^{\overline{\mathcal G}}$) denote the subgroup of $\prod_{P\leq\overline{\mathcal G}}\mathbb I(U_P^{ab})^\times$ (resp. $\prod_{P\leq\overline{\mathcal G}}\mathbb I(U_P^{ab})_\sS^\times$ and $\prod_{P\leq\overline{\mathcal G}}\widehat\mathbb I(U_P^{ab})_\sS^\times$) consisting of tuples $(\Xi_{U_P^{ab}})$ satisfying the conditions of the above theorem: \begin{enumerate} \item[(C1)] For all subgroups $P, P'$ of $\overline{\mathcal G}$ with $[P',P']\leq P\leq P'$, one has \begin{equation*} \mathrm{Nr}_P^{P'}(\Xi_{U_{P'}^{ab}})=\Pi_P^{P'}(\Xi_{U_{P'}^{ab}}). \end{equation*} \item[(C2)] For all subgroups $P$ of $\overline{\mathcal G}$ and all $g$ in $\overline{\mathcal G}$ one has $\Xi_{gU_{P}^{ab}g^{-1}}=g\Xi_{U_{P'}^{ab}}g^{-1}$. \item[(C3)] For every $P\in\overline{\mathcal G}$ and $P\neq (1)$, we have \begin{equation*} \mathrm{ver}_P^{P'}(\Xi_{U_{P'}^{ab}})\equiv \Xi_{U_P^{ab}} \pmod{{\mathscr T}_{P,P'}} (\mbox{ resp. } {\mathscr T}_{P,P',\sS} \mbox{ and } \widehat{\mathscr T}_{P,P'}). \end{equation*} \item[(C4)] For all $P\in C(\overline{\mathcal G})$ one has $\alpha_P(\Xi_{U_{P}^{ab}})\equiv\prod_{P'\in C_P(\overline{\mathcal G})}\alpha_{P'}(\Xi_{U_{P'}^{ab}})\pmod{p{\mathscr T}_P} (\mbox{ resp. } p{\mathscr T}_{P,\sS} \mbox{ and } p\widehat{\mathscr T}_{P}).$ \end{enumerate} \end{defn} As in \cite{kakde,kato-hberg,burns1,ritter-weiss}, the theorem follows from an explicit description of the image of the groups $\kone{\mathbb I[[\cG]]}$ and $\kone{\mathbb I[[\cG]]_{\sS^\ast}}$. We follow the same steps as in \cite{kakde} to prove this theorem. \iffalse{ We only give the main steps, and where the proof is exactly the same as in \cite{kakde}, we only give state the result. This theorem follows from an explicit description of the image of the groups $\kone{\mathbb I(G)}$ and $\kone{\mathbb I(G)_{\sS^\ast}}$. }\fi In fact, the theorem is a combination of the following two theorems which are generalizations of \cite[Theorem 52 and 53]{kakde}. We will give the main results leading to a proof of these theorems. For any $P\leq\overline{\mathcal G}$, consider the map \begin{equation*} t_P^{\overline{\mathcal G}}:{\mathscr R}[\mathrm{Conj}{\overline{\mathcal G}}]^\tau\longrightarrow {\mathscr R}[P^{ab}]^\tau \end{equation*} defined by \begin{equation*} t_P^{\overline{\mathcal G}}(\bar g)=\sum_{x\in C(\overline{\mathcal G},P)}\{ ({\bar x}^{-1})(\bar g)(\bar x)\mid x^{-1}gx\in P \}, \end{equation*} where $C(\overline{\mathcal G},P)$ is the set of left coset representatives of $P$ in ${\mathcal G}$. This is a well-defined ${\mathscr R}$-linear map, independent of the choice of $C(\overline{\mathcal G},P)$. For any $P\in C(\overline{\mathcal G})$, we define \begin{equation*} \eta_P:{\mathscr R}[P]^\tau\longrightarrow {\mathscr R}[P]^\tau, \end{equation*} by ${\mathscr R}$-linearly extending the map, \begin{equation*} \eta_P(h)=\begin{cases} h \quad\mbox{ if } h \mbox{ is a generator of } P\\ 0 \quad\mbox{ otherwise}. \end{cases} \end{equation*} In other words, $\eta_P(x)=x-\frac{1}{p}\sum_{k=0}^{p-1}\omega_P^k(x)$. 
Now, define the homomorphism $\beta_P^{\overline{\mathcal G}}:{\mathscr R}[\mathrm{Conj}{\overline{\mathcal G}}]^\tau\longrightarrow {\mathscr R}[P^{ab}]^\tau$ by \begin{equation*} \beta_P^{\overline{\mathcal G}}=\begin{cases} \eta_P\circ t_P^{\overline{\mathcal G}} \quad\mbox{if } P\in C(\overline{\mathcal G})\\ t_P^{\overline{\mathcal G}} \quad\mbox{if } P\leq\overline{\mathcal G} \mbox{ is not cyclic} \end{cases} \end{equation*} and $\beta_{\mathscr R}^{\overline{\mathcal G}}$ is defined by \begin{equation*} \beta_{\mathscr R}^{\overline{\mathcal G}}=(\beta_P^{\overline{\mathcal G}})_{P\leq{\mathcal G}}:{\mathscr R}[\mathrm{Conj}{\overline{\mathcal G}}]^\tau\longrightarrow\prod_{P\leq\overline{\mathcal G}}{\mathscr R}[P^{ab}]^\tau. \end{equation*} \begin{defn} Let $\Psi_{\mathscr R}^{\overline{\mathcal G}}$ (resp. $\Psi_{{\mathscr R},\sS}^{\overline{\mathcal G}}$) be the subgroup of $\prod_{P\leq{\mathcal G}}({\mathscr R}[P^{ab}]^\tau)^\times$ (resp. $\prod_{P\leq{\mathcal G}}({\mathscr R}[P^{ab}]^\tau)_\sS^\times$) consisting of all tuples $(a_P)$ with the following properties: \begin{enumerate} \item[(A1)] Let $P\leq P'\leq{\mathcal G}$ such that $[P',P']\leq P$ and the following conditions hold: \begin{enumerate} \item if $P$ is a non-trivial cyclic group then $[P',P']\neq P$; \item if $P$ is not cyclic, then $\mathrm{tr}_P^{P'}(a_{P'})=\pi^{P'}_P(a_P)$; \item if $P$ is cyclic but $P'$ is not cyclic then $\eta_P(\mathrm{tr}^{P'}_P(a_{P'}))=\pi^{P'}_P(a_P)$; \item if $P'$ is cyclic, then $\mathrm{tr}^{P'}_P(a_{P'})=0$. \end{enumerate} \item[(A2)] $(a_P)_{P\in C(\overline{\mathcal G})}$ is invariant under conjugation action by every $g\in\overline{\mathcal G}$. \item[(A3)] For all $P\in C(\overline{\mathcal G})$, $a_P\in{\mathscr T}_{P}$. \end{enumerate} \end{defn} Then we have the following theorem as a generalization of \cite[Theorem 58]{kakde}. \begin{theorem}\label{additive} The homomorphism $\beta^{\overline{\mathcal G}}_{\mathscr R}$ induces an isomorphism between ${\mathscr R}[\mathrm{Conj}{\overline{\mathcal G}}]^\tau$ and $\Psi_{\mathscr R}^{\overline{\mathcal G}}$. \end{theorem} The first step of the proof is to show that the image of $\beta^{\overline{\mathcal G}}_{\mathscr R}$ is contained in $\Psi_{\mathscr R}^{\overline{\mathcal G}}$. This proof is the same as the proof of \cite[Lemma 60]{kakde}. The next step is to consider the following map and get left inverse of $\beta^{\overline{\mathcal G}}_{\mathscr R}$. \begin{equation*} \begin{split} \delta_P:{\mathscr R}[P^{ab}]^\tau\longrightarrow{\mathscr R}[Conj(\overline{\mathcal G})]^\tau\left[\frac{1}{p}\right]\\ x\mapsto \begin{cases} \frac{1}{[\overline{\mathcal G}:P]}[x], &\mbox{ if } P \mbox{ is cyclic}\\ 0, & \mbox{otherwise}. \end{cases} \end{split} \end{equation*} Combining all these maps, we get the following map: \begin{equation*} \begin{split} \delta:&\prod_{P\leq\overline{\mathcal G}}{\mathscr R}[P^{ab}]^\tau\longrightarrow{\mathscr R}[Conj(\overline{\mathcal G})]^\tau\left[\frac{1}{p}\right],\\ \delta &=\sum_{P\leq\overline{\mathcal G}}\delta_P. \end{split} \end{equation*} \begin{lemma} The composite map $\delta\circ\beta^{\overline{\mathcal G}}_{\mathscr R}$ is identity on ${\mathscr R}[Conj(\overline{\mathcal G})]^\tau$. In particular, the map $\beta^{\overline{\mathcal G}}_{\mathscr R}$ is injective. \end{lemma} \begin{proof} Let $g\in\overline{\mathcal G}$ and $P=(\bar g)$, and consider the collection $C$ of all the conjugates of $P$ in $\overline{\mathcal G}$. 
Then, \begin{equation*} \begin{split} \delta(\beta^{\overline{\mathcal G}}_{\mathscr R}([\bar g])) &=\sum_{P'\in C}\delta_{P'}(\beta^{\overline{\mathcal G}}_{\mathscr R}([\bar g]))\\ &=\sum_{P'\in C}\dfrac{1}{[\overline{\mathcal G}:P]}[\beta^{\overline{\mathcal G}}_{\mathscr R}([\bar g])]\\ &=\dfrac{1}{[\overline{\mathcal G}:P]}\sum_{P'\in C}[N_{\overline{\mathcal G}}P':P'][\bar g]]\\ &=\dfrac{1}{[\overline{\mathcal G}:N_{\overline{\mathcal G}}P]}\sum_{P'\in C}[\bar g])]\\ &=[\bar g]. \end{split} \end{equation*} \end{proof} Next, in a similar way as in \cite[Lemma 63]{kakde}, we get the following lemma: \begin{lemma}\label{inj-beta} The map $\delta\mid_{\Psi_{\mathscr R}^{\overline{\mathcal G}}}$ is injective and its image lies in ${\mathscr R}[Conj(\overline{\mathcal G})]^\tau$. \end{lemma} Finally, we can show Theorem \ref{additive}. \begin{proof} Since $\delta\circ\beta^{\overline{\mathcal G}}_{\mathscr R}$ is identity on ${\mathscr R}[Conj(\overline{\mathcal G})]^\tau$ and $\delta\mid_{\Psi_{\mathscr R}^{\overline{\mathcal G}}}$ is injective, $\delta\circ\beta^{\overline{\mathcal G}}_{\mathscr R}$ is also identity on $\Psi_{\mathscr R}^{\overline{\mathcal G}}$. Indeed, if $(a_P)\in\Psi_{\mathscr R}^{\overline{\mathcal G}}$, then $\delta(\beta^{\overline{\mathcal G}}_{\mathscr R}(\delta((a_P))))=\delta((a_P))$. As the image of $\beta^{\overline{\mathcal G}}_{\mathscr R}$ is contained in $\Psi_{\mathscr R}^{\overline{\mathcal G}}$ and $\delta$ is injective on $\Psi_{\mathscr R}^{\overline{\mathcal G}}$, we have $\beta^{\overline{\mathcal G}}_{\mathscr R}(\delta((a_P)))=(a_P)$. Therefore $\beta^{\overline{\mathcal G}}_{\mathscr R}$ is surjective. By Artin's induction theorem, a linear representation of a finite group is a ${\mathbb Q}$-linear combination of representations induced from cyclic subgroups \cite[Theorem 17]{serre}. The injectivity follows using this result. \end{proof} \begin{propn}\label{id-beta} Let $K$ be the quotient field of $\cO$. Then the map $\mathrm{id}_{K}\otimes\beta_{\mathscr R}^{\overline{\mathcal G}}:K\otimes{\mathscr R}[\mathrm{Conj}(\overline{\mathcal G})]^\tau\longrightarrow\prod_{P\leq\overline{\mathcal G}}K\otimes{\mathscr R}[P^{ab}]^\tau$ is injective, and its image consists of all tuples $(a_P)$ satisfying the following: \begin{enumerate} \item Let $P\leq P'\leq{\mathcal G}$ such that $[P',P']\leq P$ and the following conditions hold: \begin{enumerate} \item if $P$ is a non-trivial cyclic group then $[P',P']\neq P$; \item if $P$ is not cyclic, then $\mathrm{tr}_P^{P'}(a_{P'})=\pi^{P'}_P(a_P)$; \item if $P$ is cyclic but $P'$ is not cyclic then $\eta_P(\mathrm{tr}^{P'}_P(a_{P'}))=\pi^{P'}_P(a_P)$; \item if $P'$ is cyclic, then $\mathrm{tr}^{P'}_P(a_{P'})=0$. \end{enumerate} \item $(a_P)_{P\in C(\overline{\mathcal G})}$ is invariant under conjugation action by every $g\in\overline{\mathcal G}$. \end{enumerate} Hence, if $\mathrm{id}_K\otimes\beta_{\mathscr R}^{\overline{\mathcal G}}(a)=(a_P)$ with $a_P\in{\mathscr T}_{P}, \forall\, P\in C(\overline{\mathcal G})$, then $a\in{\mathscr R}[\mathrm{Conj}(\overline{\mathcal G})]^\tau$ and $a_P\in{\mathscr R}[P^{ab}]^\tau, \forall\, P\leq\overline{\mathcal G}$. \end{propn} \begin{proof} By Lemma \ref{inj-beta} above, the injectivity is clear. The statement about the image also follows from this Lemma. 
Clearly, if $\mathrm{id}_K\otimes\beta_{\mathscr R}^{\overline{\mathcal G}}(a)=(a_P)$, with $a_P\in{\mathscr T}_{P}, \forall\, P\in C(\overline{\mathcal G})$, then as the map $\delta$ is determined by the $a_P$'s for cyclic $P$, it follows that the inverse image $a$ lies in ${\mathscr R}[\mathrm{Conj}(\overline{\mathcal G})]^\tau$ and $a_P\in{\mathscr R}[P^{ab}]^\tau$, $\forall P\leq\overline{\mathcal G}$. \end{proof} \subsection{The Logarithm map over $\mathbb I(\widetilde\Gamma^{p^e})$} Recall that ${\mathscr R}:=\mathbb I[[\widetilde\Gamma^{p^e}]]$. Note that ${\mathscr R}$ is a local ring. Our aim in this section is to construct a logarithm map on $\kone{{\mathscr R}[\overline{\mathcal G}]^\tau}$. This is done by generalizing the logarithm map which was considered by Ritter and Weiss and then later by Kakde. Their constructions were inspired by the logarithm map introduced by Oliver. We assume that ${\mathcal G}$ is a $p$-adic Lie group of rank $1$. For any subgroup $P$ of $\overline{\mathcal G}$, we set \begin{equation}\label{R-notation} {\mathscr R}_P:=\mathbb I[[U_P]]=\mathbb I[[\widetilde\Gamma^{p^e}]][P]^\tau={\mathscr R}[P]^\tau. \end{equation} Consider the natural ${\mathscr R}$-linear map \begin{equation*} \kappa_P:{\mathscr R}_P\longrightarrow {\mathscr R}[\mathrm{Conj}(P)]^\tau, \kappa_P(g^\tau)=[g^\tau], \end{equation*} for all $g\in P$, where $[g^\tau]$ denotes the conjugacy class in $P$. For any ring $A$, let $J(A)$ denote the Jacobson radical. Since ${\mathcal G}=H\ltimes\Gamma$, the kernel of the composite homomorphism $U_P\hookrightarrow{\mathcal G}\twoheadrightarrow\Gamma$ is $H\cap U_P$. Let $I_{H\cap U_P}$ be the augmentation ideal of ${\mathscr R}_{H\cap U_P}$. Then for the ring ${\mathscr R}_P$, the Jacobson radical $J({\mathscr R}_P)$ is generated by ${\mathfrak m}_\mathbb I$ and $I_{H\cap U_P}$, where ${\mathfrak m}_\mathbb I$ denotes the maximal ideal $\langle p,X_1,\cdots,X_r\rangle$ of $\mathbb I$. In the proof of the following propositions, the following short exact sequence which is obtained by using ${\mathscr R}_P/J({\mathscr R}_P)\cong\FF_p$ play a crucial role: \begin{equation}\label{fund-exact} 1\longrightarrow \kone{{\mathscr R}_P,J({\mathscr R}_P)}\longrightarrow \kone{{\mathscr R}_P}\longrightarrow \kone{\FF_p}. \end{equation} It is well known that $ \kone{\FF_p}\cong\FF_p^\times$. \begin{propn}\label{Log} Let $P\leq\overline{\mathcal G}$. Then, for $x\in J({\mathscr R}_P)$, the logarithm defined by \begin{equation}\label{def-log} \mathrm{Log}(1+x):=\sum_{n\geq 1}(-1)^{n+1}\frac{x^n}{n}. \end{equation} is well-defined, and it induces a homomorphis \begin{equation*} \log_P:\kone{{\mathscr R}_P}\longrightarrow {\mathscr R}[\mathrm{Conj}(P)]^\tau[\frac{1}{p}]. \end{equation*} Moreover, this map is natural with respect to ring homomorphisms induced by group homomorphisms. \end{propn} \begin{proof} First, we show that the map is well-defined by showing that the power series $\mathrm{Log}(1+x)$ converges in ${\mathscr R}[P]^\tau[\frac{1}{p}]$. Since $H\cap U_P$ is a finite $p$-group, with say, $p^r$ elements, we have $(g-1)^{p^r}\in p\mathbb I[H\cap U_P]$, for any $g\in H\cap U_P$. Therefore, for any $x\in J({\mathscr R}_P)=\langle{\mathfrak m}_\mathbb I, I_{H\cap U_P}\rangle$, we have $x^{p^r}\in \langle p,{\mathfrak m}_\mathbb I\rangle{\mathscr R}_P$. Hence $x^{n}\in \langle p,{\mathfrak m}_\mathbb I\rangle^m{\mathscr R}_P$ for large enough $n,m$. This implies that $x^i/i$ converges to $0$ as $i$ tends to infinity. 
Hence the series $\mathrm{Log}(1+x)$ converges in ${\mathscr R}[P]^\tau[\frac{1}{p}]$. We now use arguments of Oliver to construct the map $\log_P$. Indeed, the proof of Oliver for \cite[Lemma 2.7]{ol}, shows that for any $x,y\in J({\mathscr R}_P)$, we have \begin{equation*} \mathrm{Log}((1+x)(1+y))\equiv\mathrm{Log}(1+x)+\mathrm{Log}(1+y)\pmod{[{\mathscr R}_P[\frac{1}{p}], J({\mathscr R}_P)[\frac{1}{p}]]}. \end{equation*} Then by the proof of \cite[Theorem 2.8]{ol}, $\mathrm{Log}(1+x)$ induces a well-defined homomorphism \begin{equation*} \log_P':\kone{{\mathscr R}_P,J({\mathscr R}_P)}\longrightarrow (J({\mathscr R}_P)/[{\mathscr R}_P, J({\mathscr R}_P)])[\frac{1}{p}]. \end{equation*} Since ${\mathscr R}_P/J({\mathscr R}_P)\cong\FF_q$, we have the following exact sequence \begin{equation}\label{rel-K} 1\longrightarrow \kone{{\mathscr R}_P,J({\mathscr R}_P)}\longrightarrow \kone{{\mathscr R}_P}\longrightarrow \kone{\FF_q}. \end{equation} Here $\kone{\FF_q}\cong\FF_q^\times$ and $(J({\mathscr R}_P)/[{\mathscr R}_P, J({\mathscr R}_P)])[\frac{1}{p}]$ is torsion-free. Hence the map $\log_P'$ can be extended uniquely to $\kone{{\mathscr R}_P}$, which we call $\log_P$. \end{proof} \begin{remark}\label{general-log} The proof of \cite[Theorem 2.8]{ol} can also be generalized to show that we have the following homomorphism: \begin{equation} \log_P^I:\kone{{\mathscr R}_P,I}\longrightarrow (I/[{\mathscr R}_P, I])\left[\frac{1}{p}\right], \end{equation} for any ideal $I\subset J({\mathscr R}_P)$. \end{remark} \begin{lemma}\label{log-sub} Let $U_P$ be abelian and let $I$ be any ideal of ${\mathscr R}[P]^\tau$ such that $I\subset p{\mathscr R}[P]^\tau$. Then $\log_P$ is a well-defined map from $1+I$ to $I$, which is an isomorphism. \end{lemma} \begin{proof} We first show that the map is well-defined. Since $I\subseteq p{\mathscr R}[P]^\tau$, we have $I^p\subseteq pI$. It follows that $I^{p^r}\subseteq p^rI$ for all $r\in\mathbb N$. This further implies that $I^n\subseteq nI$, for all $n\in\mathbb N$. This is easy to see if $p\nmid n$, as $n$ is then a unit and $I^n\subseteq I=nI$. Next, if $\mathrm{ord}_p(n)=e$, then $I^{p^e}\subseteq p^eI$. As $n/{p^e}$ is a unit, raising to $n/{p^e}$-th power, we have $I^n\subseteq nI$. Now, let $x\in I$, then $x^n\in I^n\subseteq nI$. Therefore $x^n/n\in I$. Noting that the series $\sum_{n\geq 0}(-1)^{n+1}\dfrac{x^n}{n}$ converges, it converges in $I$. Conversely, we show that for each $x\in I$, the series $\mathrm{exp}_P(x)=\sum_{n\geq 0}\dfrac{x^n}{n!}$ is convergent in $1+I$. Since $I\subset p{\mathscr R}[P]^\tau\cong\mathbb I(U_P)$, the Jacobson radical $J({\mathscr R}[P]^\tau)$ contains $I$. Further, $I^p\subseteq pI.J({\mathscr R}[P]^\tau)$. Suppose $I^{p^k}\subseteq p^k!I.J({\mathscr R}[P]^\tau)^k$, then raising both sides to a $p$-th power, we get \begin{equation*} I^{p^{k+1}}\subseteq (p^k!)^pI^p.J({\mathscr R}[P]^\tau)^{kp}. \end{equation*} Clearly $(p^k!)^p I^p.J({\mathscr R}[P]^\tau)^{kp}\subseteq (p^{k+1}!)I.J({\mathscr R}[P]^\tau)^{k+1} $. Therefore, by induction, $$I^{p^n}\subseteq p^n!I.J({\mathscr R}[P]^\tau)^n, \mbox{ for all } n\in\mathbb N.$$ The lemma follows by observing that $\log_P$ and $\mathrm{exp}_P$ are inverses of each other. \end{proof} For each pair of subgroups $P$ and $P'$ of $\overline{\mathcal G}$ with $P\leq P'$, consider the natural \emph{restriction map} on $K$-groups \begin{equation} \maptheta{P'}{P}:\kone{\mathbb I(U_{P'})}=\kone{{\mathscr R}_{P'}}\longrightarrow\kone{{\mathscr R}_P}=\kone{\mathbb I(U_P)}. 
\end{equation} Moreover, we define an $\mathbb I$-linear map \begin{equation} \Res{P'}{P}:{\mathscr R}[\mathrm{Conj}(P')]^\tau\longrightarrow {\mathscr R}[\mathrm{Conj}{P}]^\tau \end{equation} given by \begin{equation}\label{kappa} \Res{P'}{P}(\kappa_{P'}(g^\tau)):=\sum_x\kappa_P((x^\tau)^{-1}(gx)^\tau) \end{equation} where $x$ runs over all elements in a given set of left coset representatives of $P$ in $P'$ with $xgx^{-1}\in P.$ \begin{lemma}\label{log-theta-res} For each subgroup $P$ of $\overline{\mathcal G}$, we have the following commutative diagram. \begin{equation*} \xymatrix{ \kone{\mathbb I[[\cG]]}\ar[r]^{\!\!\!\!\!\log_{\overline{\mathcal G}}}\ar[d]^{\maptheta{\overline{\mathcal G}}{P}} &{\mathscr R}[\mathrm{Conj}{\overline{\mathcal G}}]^\tau[\frac{1}{p}]\ar[d]_{\Res{\overline{\mathcal G}}{P}}\\ \kone{\mathbb I(U_P)}\ar[r]^{\!\!\!\!\!\log_{P}} &{\mathscr R}[\mathrm{Conj}{P}]^\tau[\frac{1}{p}]. } \end{equation*} \end{lemma} \begin{proof} For any $\xi\in\mathbb I[[\cG]]$, it follows from a similar argument as used by Oliver and Taylor to prove \cite[Theorem 1.4]{ol-tay}, that \begin{equation}\label{log-res} \log_P(\maptheta{\overline{\mathcal G}}{P}(1+p\xi))=\Res{\overline{\mathcal G}}{P}(\log_{\overline{\mathcal G}}(1+p\xi)). \end{equation} Recall that $\mathbb I\cong\cO[[X_1,\cdots,X_r]]$ and consider now $x\in J({\mathscr R}[\overline{\mathcal G}]^\tau)$. We can see that $J({\mathscr R}[\overline{\mathcal G}]^\tau)$ is generated by ${\mathfrak m}_\mathbb I$ and the augmentation ideal $I_H$ of $\Lambda(H)[[X_1,\cdots,X_r]]$. Therefore, $I_H^{p^n}\subseteq p\Lambda(H)=p{\mathbb Z}_p[H]$, for $n$ sufficiently large. Now, for $n$ sufficiently large, we have $(1+x)^{p^n}=1+p\xi''+x^{p^n}$. Further, $x=a+(\sum_j b_jX_j)+c$, where $a\in p\Lambda(H),b\in\Lambda(H),c\in I_H$. Hence, $x^{p^n}=a^{p^n}+\sum_j b_j^{p^n}X_j^{p^n}+c^{p^n}+pd$, where $d\in\Lambda(H)[[X_1,\cdots,X_r]]$. As $c\in I_H$, we can see that $x^{p^n}=a^{p^n}+\sum_j b_j^{p^n}X_j^{p^n}+pd'$, for some $d'\in\Lambda(H)[[X_1,\cdots,X_r]]$. This means that we can put $(1+x)^{p^n}$ in the following form \begin{equation*} (1+x)^{p^n}=1+p\xi''+x^{p^n}=1+p\xi+\sum_j X_j^{p^m}\xi'_j=(1+\sum_jX_j^{p^m}\xi'_j)[1+p\xi(1+\sum_jX_j^{p^m}\xi'_j)^{-1}], \end{equation*} for some $\xi, \xi', \xi''\in {\mathscr R}[\overline{\mathcal G}]^\tau$. Here $\sum_jX_j^{p^m}\xi'_j\in J( {\mathscr R}[\overline{\mathcal G}]^\tau)$ and $(1+\sum_jX_j^{p^m}\xi_j')^{-1}\in {\mathscr R}[\overline{\mathcal G}]^\tau$. It follows from \eqref{log-res} that \begin{equation*} \begin{split} &p^m\log_P(\maptheta{\overline{\mathcal G}}{P}(1+x))\\ =& \log_P(\maptheta{\overline{\mathcal G}}{P}(1+x)^{p^m})\\ =& \log_P(\maptheta{\overline{\mathcal G}}{P}(1+p\xi(1+\sum_jX_j^{p^m}\xi_j')^{-1}))+\log_P(\maptheta{\overline{\mathcal G}}{P}(1+\sum_jX_j^{p^m}\xi_j'))\\ =& \Res{\overline{\mathcal G}}{P}(\log_{\overline{\mathcal G}}(1+p\xi(1+\sum_jX_j^{p^m}\xi_j')^{-1})) + \log_P(\maptheta{\overline{\mathcal G}}{P}(1+\sum_jX_j^{p^m}\xi_j'))\\ =& \Res{\overline{\mathcal G}}{P}(\log_{\overline{\mathcal G}}(1+x)^{p^m}) + \log_P(\maptheta{\overline{\mathcal G}}{P}(1+\sum_jX_j^{p^m}\xi_j'))- \Res{\overline{\mathcal G}}{P}(\log_{\overline{\mathcal G}}(1+\sum_jX_j^{p^m}\xi_j'))\\ =& p^m\Res{\overline{\mathcal G}}{P}(\log_{\overline{\mathcal G}}(1+x)) + \log_P(\maptheta{\overline{\mathcal G}}{P}(1+\sum_jX_j^{p^m}\xi_j'))- \Res{\overline{\mathcal G}}{P}(\log_{\overline{\mathcal G}}(1+\sum_jX_j^{p^m}\xi_j')). 
\end{split} \end{equation*} Here $\log_{\overline{\mathcal G}}(1+\sum_jX_j^{p^m}\xi_j')\in (X_1^{p^m},\cdots,X_r^{p^m}) {\mathscr R}[\mathrm{Conj}{P}]^\tau[\frac{1}{p}]$. Moreover, it is easy to see that \begin{equation*} \maptheta{\overline{\mathcal G}}{P}(1+\sum_jX_j^{p^m}\xi_j')\equiv 1+\sum_jX_j^{p^m}\Res{\overline{\mathcal G}}{P}\xi_j' \pmod{\kone{{\mathscr R}[P]^\tau,(X_1^{p^m},\cdots,X_r^{p^m}){\mathscr R}[P]^\tau}}. \end{equation*} Therefore, $\log_P(\maptheta{\overline{\mathcal G}}{P}(1+\sum_jX_j^{p^m}\xi_j'))$ and $\Res{\overline{\mathcal G}}{P}(\log_{\overline{\mathcal G}}(1+\sum_jX_j^{p^m}\xi_j'))$ both belong to $(X_1^{p^m},\cdots,X_r^{p^m}){\mathscr R}[P]^\tau[\frac{1}{p}]$. Using this in the above equality, we get \begin{equation*} \log_P(\maptheta{\overline{\mathcal G}}{P}(1+x))\equiv \Res{\overline{\mathcal G}}{P}(\log_{\overline{\mathcal G}}(1+x))\pmod{(X_1^{p^m},\cdots,X_r^{p^m}){\mathscr R}[\mathrm{Conj}{P}]^\tau[\frac{1}{p}]}. \end{equation*} Since this is true for all $m\in\mathbb N, m>>0$, therefore \begin{equation*}\log_P(\maptheta{\overline{\mathcal G}}{P}(1+x))=\Res{\overline{\mathcal G}}{P}(\log_{\overline{\mathcal G}}(1+x)).\end{equation*} Hence the diagram in the lemma commutes for the subgroup $\kone{{\mathscr R}[\overline{\mathcal G}]^\tau,J({\mathscr R}[\overline{\mathcal G}]^\tau)}$ of $\kone{{\mathscr R}[\overline{\mathcal G}]^\tau}$. Now note that the index of the subgroup $\kone{{\mathscr R}[\overline{\mathcal G}]^\tau,J({\mathscr R}[\overline{\mathcal G}]^\tau)}$ of $\kone{{\mathscr R}[\overline{\mathcal G}]^\tau}$ is finite, and the group ${\mathscr R}[\mathrm{Conj}{P}]^\tau[\frac{1}{p}]$ is torsion-free. Therefore the commutativity extends to the diagram in the assertion of the lemma. \end{proof} \subsection{The Integral logarithm over $\mathbb I[[\cG]]$} We now make the following assumption \begin{description} \item[(Ur)] $\mathbb I=\cO[[X_1,\cdots,X_r]]$ with $\cO$ unramified over ${\mathbb Z}_p$. \end{description} Then on ${\mathscr R}=\mathbb I(\widetilde\Gamma^{p^e})$, consider the map $\varphi:{\mathscr R}\longrightarrow {\mathscr R}$ such that its restriction to $\cO$ is the Frobenius and maps $X_j^n$ to $X_j^{pn}$ for all $j=1,\cdots,r$ and the $p$-th power map on $\widetilde\Gamma^{p^e}$. Further, we extend $\varphi$ to a map \begin{equation*} \varphi_{conj}:{\mathscr R}[\mathrm{Conj}{\overline{\mathcal G}}]^\tau\longrightarrow {\mathscr R}[\mathrm{Conj}{\overline{\mathcal G}}]^\tau, \end{equation*} by \begin{equation*} \kappa(g^\tau)\mapsto\kappa((g^\tau)^p). \end{equation*} \begin{defn} The map $\mathfrak{L}_{\mathcal G}:\kone{{\mathscr R}[\overline{\mathcal G}]^\tau}\longrightarrow {\mathscr R}[\mathrm{Conj}{\overline{\mathcal G}}]^\tau[\frac{1}{p}]$ defined by \begin{equation*} \mathfrak{L}_{\mathcal G}:=\log_{\mathcal G}-p^{-1}\varphi_{conj}\circ\log_{\mathcal G} \end{equation*} is called the $p$-adic logarithm map over $\mathbb I[[\cG]]$. \end{defn} \begin{propn}\label{int-log}For the map $\mathfrak{L}_{\mathcal G}:\kone{{\mathscr R}[\overline{\mathcal G}]^\tau}\longrightarrow {\mathscr R}[\mathrm{Conj}{\overline{\mathcal G}}]^\tau[\frac{1}{p}]$, we have $Im(\mathfrak{L}_{\mathcal G})\subseteq {\mathscr R}[\mathrm{Conj}{\overline{\mathcal G}}]^\tau$. \end{propn} \begin{proof}As before, by the exact sequence in \eqref{fund-exact}, the index of $\kone{{\mathscr R}[\overline{\mathcal G}]^\tau,J({\mathscr R}[\overline{\mathcal G}]^\tau)}$ in $\kone{{\mathscr R}[\overline{\mathcal G}]^\tau}$ is finite and prime to $p$. 
Therefore, it is enough to prove that $\mathfrak{L}_{\mathcal G}(\kone{{\mathscr R}[\overline{\mathcal G}]^\tau,J({\mathscr R}[\overline{\mathcal G}]^\tau)})\subseteq {\mathscr R}[\mathrm{Conj}{\overline{\mathcal G}}]^\tau$. Let $y\in\kone{{\mathscr R}[\overline{\mathcal G}]^\tau,J({\mathscr R}[\overline{\mathcal G}]^\tau)}$. Then, by \cite{vaser}, there exists $x$ such that $y=[1-x]$ for some $x\in J({\mathscr R}[\overline{\mathcal G}]^\tau)$. Then, we have, \begin{eqnarray*} \mathfrak{L}_{\mathcal G}(y)&=&-\sum_{i\geq 1}\frac{x^i}{i}+\sum_{i\geq 1}\frac{\varphi_{conj}(x^i)}{pi}\\ &=&-\sum_{i\geq 1,p\nmid i}\frac{x^i}{i}-\sum_{j\geq 1}\frac{x^{pj}-\varphi_{conj}(x^j)}{pj}. \end{eqnarray*} Therefore, it is enough to prove that the sum $\sum_{j\geq 1}\frac{x^{pj}-\varphi_{conj}(x^j)}{pj}\in {\mathscr R}[\mathrm{Conj}{\overline{\mathcal G}}]^\tau$. This follows from the same argument as in the proof of \cite[Theorem 6.2]{ol}. \end{proof} We now extend the theorem of Oliver \cite[Theorem 6.4, 6.6]{ol} to ${\mathscr R}_G$, for finite groups $G$ of prime order. For this, we recall the following exact sequence from \cite[Lemma 6.3(ii)]{ol}: \begin{equation}\label{k_h_0} 0\hookrightarrow\mathbb{F}_p\longrightarrow{\mathscr R}/{\mathfrak m}_{\mathscr R}\stackrel{1-\varphi}{\longrightarrow}{\mathscr R}/{\mathfrak m}_{\mathscr R}\stackrel{\mathrm{Tr}}{\longrightarrow}\mathbb{F}_{p}\longrightarrow 0 \end{equation} where $\mathbb{F}_p$ is the finite field of order $p$, $\varphi$ is the Frobenius and $\mathrm{Tr}$ denotes the trace map. Here we note that ${\mathscr R}$ is a local field with maximal ideal ${\mathfrak m}_{\mathscr R}$. \begin{propn}\label{oli-6.4} Let $G$ be a finite $p$-group and $z$ be an element of order $p$ in the center $Z(G)$. Then, we have the following exact sequence: \begin{equation}\label{exact-log} 1\longrightarrow \langle z\rangle \longrightarrow \kone{{\mathscr R}_G,(1-z){\mathscr R}_G}\stackrel{\log_G}{\longrightarrow}\mathrm{H}_0(G,(1-z){\mathscr R}_G)\stackrel{\omega}{\longrightarrow}\FF_p\longrightarrow 0 \end{equation} \end{propn} \begin{proof} The proof is the same as in the proof of \cite[Theorem 6.4]{ol}. We set $I=(1-z){\mathscr R}_G$ and $J$ the Jacobson radical of ${\mathscr R}_G$. As $(1-z)^p\in pI$, the $p$-adic logarithm induces a homomorphism $\log_G^I$ and an isomorphism $\log_G^{IJ}$. These maps fit in the following commutative diagram: \begin{equation*} \xymatrix{ &\kone{{\mathscr R}_G,(1-z)J}\ar[r]\ar[d]_{\log_G^{IJ}}^{\cong} &\kone{{\mathscr R}_G,I}\ar[r]\ar[d]_{\log_G^I} &\kone{\frac{{\mathscr R}_G}{(1-z)J},\frac{I}{(1-z)J}}\ar[r]\ar[d]_{\log_0} & 1\\ 0\ar[r] & \mathrm{H}_0(G,(1-z)J)\ar[r] &\mathrm H_0(G,I)\ar[r] &\mathrm H_0(G,\frac{I}{(1-z)J})\ar[r] &0 } \end{equation*} By a result of Bass (see \cite[Theorem 1.15]{ol}), we have the following identification \begin{equation*} \kone{\frac{{\mathscr R}_G}{(1-z)J},\frac{I}{(1-z)J}} \mathrel{\mathop{\longrightarrow}^{\alpha}_{\cong}} {\mathscr R}_G/J\cong{\mathscr R}/{\mathfrak m}_{\mathscr R}, \end{equation*} where the map $\alpha$ is given by $\alpha(1+(1-z)\xi)=\xi$ for $\xi\in{\mathscr R}_G/J$. Further, $\mathrm H_0(G,I/(1-z)J)\cong{\mathscr R}/{\mathfrak m}_{\mathscr R}$. 
Together with the exact sequence in \eqref{k_h_0}, \iffalse{ \begin{equation*} \xymatrix{ \kone{{\mathscr R}_G,I}\ar[r]\ar[d]_{\log_G^I} &\kone{{\mathscr R}_G/(1-z)J,(1-z){\mathscr R}_G/(1-z)J}\ar[r]\ar[d]_{\log_0}^{\cong} & 1\\ \mathrm H_0(G,I)\ar[r] &\mathrm H_0(G,(1-z){\mathscr R}_G/(1-z)J)\ar[r] &0 \\ } \end{equation*} }\fi we can see that the map $\log_0$ fits in the following exact sequence: \begin{equation*} 0\longrightarrow \FF_p\longrightarrow{\mathscr R}_G/J\stackrel{\cong}{\longrightarrow}\kone{\frac{{\mathscr R}_G}{(1-z)J},\frac{I}{(1-z)J}}\stackrel{\log_0}{\longrightarrow} \mathrm H_0\left(G,\frac{I}{(1-z)J}\right)\stackrel{\cong}{\longrightarrow} {\mathscr R}/{\mathfrak m}_{\mathscr R} \longrightarrow\FF_p\longrightarrow 0, \end{equation*} as a result, we have the following commutative diagram: \begin{equation*} \xymatrix{ &0\ar[d] & \\ &\FF_p\ar[d] & \\ \kone{{\mathscr R}_G,I}\ar[r]^{\alpha'}\ar[d]_{\log_G^I} &{\mathscr R}_G/J\cong{\mathscr R}/{\mathfrak m}_{\mathscr R}\ar[r]\ar[d]_{\log_0}^{\cong} & 1\\ \mathrm H_0(G,I)\ar[r]^{\alpha''}\ar[rd]_\omega &{\mathscr R}/{\mathfrak m}_{\mathscr R}\ar[r]\ar[d]^{\mathrm{Tr}} &0 \\ &\FF_p\ar[d] & \\ &0 & } \end{equation*} \iffalse{ \begin{equation*} \xymatrix{ & &\kone{{\mathscr R}_G,I}\ar[r]^{\log_G^I}\ar[d]_{\alpha'} & \mathrm H_0(G,I)\ar[d]_{\alpha''}\ar[dr]^{\omega} \\ 0\ar[r] &\FF_p \ar[r] &\mathbb I/{\mathfrak m}_{\mathbb I}\ar[r]_{1-\varphi} &\mathbb I/{\mathfrak m}_{\mathbb I}\ar[r]_{\mathrm{Tr}} &\FF_p\ar[r] &0, } \end{equation*} }\fi where $\alpha'(1+(1-z)\sum r_ig_i)=\sum\overline{r_i}$ and $\alpha''((1-z)\sum r_ig_i)=\sum\overline{r_i}$. Indeed, the right column is exact, and as \begin{equation*} \alpha''(\log_G^I(1+(1-z)rg))=\alpha''((1-z)(rg-r^pg^p))=r-\varphi(r)\in{\mathscr R}/{\mathfrak m}_{\mathscr R}, \end{equation*} the square is also commutative. Therefore the maps $\log_G^I$ and $1-\varphi$ have isomorphic kernel and cokernel. Noting that $\omega=\mathrm{Tr}\circ\alpha''$ and $\alpha'$ maps $\langle z\rangle$ isomorphically onto $\FF_p=\mathrm{ker}(1-\varphi)$, the exactness of the sequence in the Proposition follows. \end{proof} We now prove the key result regarding the integral logarithm. This is a generalization of the following exact sequence, (\cite[Def 70]{kakde}). Since $\mathbb I=\cO[[X_1,\cdots,X_r]]$, the ring ${\mathscr R}=\mathbb I[[\widetilde\Gamma^{p^e}]]$ isomorphic to an Iwasawa algebra with $r+1$ variables. Let $\mathbb W\cong (1+p{\mathbb Z}_p)^{r+1}$, then ${\mathscr R}\cong\cO[[\mathbb W]]$. We assume that the extension $\cO/{\mathbb Z}_p$ is unramified. Recall that the quotient field of $\cO$ is denoted by $K$. Then the following sequence is an exact sequence of groups: \begin{equation*} 1\longrightarrow\mu_K\times\mathbb W\longrightarrow\kone{{\mathscr R}}\stackrel{\mathfrak L}{\longrightarrow}{\mathscr R}\stackrel{\omega}{\longrightarrow}\mathbb W\longrightarrow 1. \end{equation*} \begin{propn}\label{log-exact} Let $G$ be a finite $p$ group. Let $\mathbb W\cong(1+p{\mathbb Z}_p)^{r+1}$ and $\mathbb I=\mathcal\cO[[X_1,\cdots,X_r]]$, such that the extension $\cO/{\mathbb Z}_p$ is unramified. Define the map \begin{equation*} \widetilde\omega:{\mathscr R}[G]\longrightarrow \mathbb W\times G^{ab} \end{equation*} by $\widetilde\omega(\sum_{i}a_ig_i)=\prod_i(\omega(a_i),(g_i)^{\mathrm{Tr}(a_i\mod{\mathfrak m}_\mathbb I)})$. 
Then the sequence \begin{equation*} 1\longrightarrow\kone{{\mathscr R}_G}/\left(\mathbb W\times\kone{{\mathscr R}_G}_{tors}\right)\stackrel{\mathfrak{L}_G}{\longrightarrow}{\mathscr R}_G\stackrel{\widetilde\omega}{\longrightarrow}\mathbb W\times G^{ab}\longrightarrow 1 \end{equation*} is exact. \end{propn} \begin{proof} The proof is a generalization of the proof of Oliver. We also use induction on the order of $G$ to prove the result. Let $G=(1)$. Then ${\mathscr R}=\mathbb I[[\widetilde\Gamma^{p^e}]]$. \iffalse{ Then consider the following isomorphism from Lemma \ref{log-sub}: \begin{equation*} 1+p\mathbb I\longrightarrow p\mathbb I. \end{equation*} We also have $p.ker{\omega}=p\mathbb I$, as $G=(1)$. Since $\log_G(1+p\mathbb I)=p\mathbb I$ is invariant with respect to the Frobenius $\varphi$, we have \begin{equation*}\begin{split} \mathfrak{L}(\mathbb I^*)=& \left((1-p^{-1}\varphi_{conj})\circ\log_G\right)(\mathbb I^*)\\ =& p^{-1}\log_G(\mathbb I^*), \mbox{ since } (1-p^{-1}\varphi_{conj})^{-1}(r)\\ =&-\sum_{i=1}^\infty p^i\varphi^{-i}(r). \end{split} \end{equation*} }\fi By \cite[Def 70]{kakde}, we have the exact sequence, \begin{equation*} 1\longrightarrow\mu_K\times\mathbb W\longrightarrow\kone{{\mathscr R}}\stackrel{\mathfrak L}{\longrightarrow}{\mathscr R}\stackrel{\omega}{\longrightarrow}\mathbb W\longrightarrow 1. \end{equation*} Next, let $G$ be a non-trivial $p$-group. We then show that $\widetilde\omega\circ\mathfrak{L}_G=1$. It is enough to prove this when $G$ is abelian. Let $I$ be the augmentation ideal of ${\mathscr R}[G]$. Consider $u=1+\sum r_i(1-a_i)g_i\in 1+I,$ where $r_i\in{\mathscr R}$. Then \begin{equation*} \begin{split} u^p\equiv & 1+p\sum r_i(1-a_i)g_i+\sum r_i^p(1-a_i)^pg_i^p\mod{pI^2}\\ \equiv & 1+p\sum r_i(1-a_i)g_i+\sum r_i\{(1-a_i)^p-p(1-a_i)\}g_i^p\mod{pI^2}, \mbox{ by \cite[Lemma 6.3(i)]{ol}}\\ \equiv & \varphi_{conj}(u)+p\sum r_i(1-a_i)(g_i-g_i^p)\mod{pI^2}\\ \equiv & \varphi_{conj}(u)\mod{pI^2}, \end{split} \end{equation*} i.e., $u^p/\varphi_{conj}(u)\in 1+pI^2$. Therefore $\mathfrak{L}_G(u)=\log_G(u)-p^{-1}\varphi_{conj}(\log_G(u))=\frac{1}{p}\log_G(u^p/\varphi_{conj}(u))\in I^2$. On the other hand, for any $r\in{\mathscr R}$ and $a,b,g\in G$, we have \begin{equation*} \begin{split} \widetilde\omega(r(1-a)(1-b)g)&=(\omega(r),g^{\mathrm{Tr}(r)})(\omega(-r),(ag)^{\mathrm{Tr}(-r)})(\omega(-r),(bg)^{\mathrm{Tr}(-r)})(\omega(r),(abg)^{\mathrm{Tr}(r)})\\ &=(\omega(r)\omega(-r)\omega(-r)\omega(r),1)\\ &=(1,1)\in \mathbb W\times G. \end{split} \end{equation*} Therefore, $\mathfrak{L}_G(1+I)\subseteq I^2\subseteq\mathrm{ker}(\widetilde\omega)$, and hence \begin{equation}\label{int_subset_ker_omega} \mathfrak{L}_G(\kone{{\mathscr R}_G})=\mathfrak{L}_G({\mathscr R}^\times\times(1+I))=\langle\mathfrak{L}({\mathscr R}^\times),\mathfrak{L}(1+I)\rangle\subseteq\mathrm{ker}(\widetilde\omega). \end{equation} It follows that $\widetilde\omega\circ\mathfrak{L}_G=1$. Assume that the theorem is true for all groups whose order is less than the order of $G$. Now, let $z$ be an element of order $p$ in the center $Z(G)$, such that $z$ is a commutator if $G$ is nonabelian. The existence of such a commutator is shown in \cite[Lemma 6.5]{ol}. Let $\widehat{G}:=G/\langle z\rangle$. Let $\alpha:G\longrightarrow\widehat G$ denote the natural projection map. 
Then we have the following commutative diagram: \begin{equation*} \xymatrix{ &1\ar[d] &1\ar[d] &1\ar[d] & \\ 1\ar[r] &\kone{{\mathscr R}[G],(1-z){\mathscr R}[G]}/\mathrm{tors}\ar[r]^{\quad\quad\mathfrak{L}_0}\ar[d] &\overline{\mathrm H}_0(G;(1-z){\mathscr R}[G])\ar[r]^{\quad\quad\omega_0}\ar[d] &\mathrm{ker}(\alpha^{\mathrm{ab}})\ar[r]\ar[d] &1 \\ 1\ar[r] &\kone{{\mathscr R}[G]}/\mathrm{tors}\ar[r]^{\mathfrak{L}_G}\ar[d] & {\mathrm H}_0(G;(1-z){\mathscr R}[G])\ar[r]^{\quad\quad\omega_G}\ar[d] &\mathbb{W}\times G^{ab}\ar[r]\ar[d]^{\alpha} &1 \\ 1\ar[r] &\kone{{\mathscr R}[\widehat{G}]}/\mathrm{tors}\ar[r]^{\mathfrak{L}_{\widehat G}}\ar[d] & {\mathrm H}_0(\widehat G;(1-z){\mathscr R}[\widehat G])\ar[r]^{\quad\quad\omega_{\widehat G}}\ar[d] &\mathbb{W}\times\widehat{G}^{ab}\ar[r]\ar[d] &1 \\ &1 &1 &1 & } \end{equation*} By \cite[Theorem 1.14 (iii)]{ol}, the columns are all exact. By the induction hypothesis, the bottom row is exact. In the top row, the integral logarithm map $\mathfrak{L}_0$ is injective, by Proposition \ref{oli-6.4}, and the map $\omega_0$ is clearly onto. Moreover, $\mathrm{Im}(\mathfrak{L}_0)\subset\mathrm{ker}(\omega_0)$, by \eqref{int_subset_ker_omega}. Lastly, by Proposition \ref{oli-6.4} again, we have, \begin{equation*} \begin{split} \mid\mathrm{ker}(\alpha^{\mathrm{ab}})\mid&=\begin{cases} 1 & \mbox{ if } z \mbox{ is a commutator }\\ p & \mbox{ otherwise} \end{cases}\\ &=\mid\mathrm{coker}(\mathfrak{L}_0) \mid. \end{split} \end{equation*} Again, as $\omega_G\circ\mathfrak{L}_G=1$, by the equality \eqref{int_subset_ker_omega}, it follows that the middle row is short exact. \end{proof} \begin{defn}\label{sk-def2} Let $\mathbb{K}=K[[X_1,\cdots,X_r,Y]]$, where $K$ is the quotient field of $\cO,$ and $Y$ is the variable corresponding to $\widetilde\Gamma^{p^e}$. Recall that $\mathbb W=(1+p{\mathbb Z}_p)^{r+1}$. For any finite group $G$, we define the following groups (see \cite[Page 173]{ol}): \begin{equation*} \begin{split} SK_1({\mathscr R}[G])&:=\mathrm{ker}\left[\kone{{\mathscr R}[G]}\longrightarrow\kone{\mathbb{K}[G]}\right],\\ \konep{{\mathscr R}[G]}&:=\kone{{\mathscr R}[G]}/S\kone{{\mathscr R}[G]},\\ \mathrm{Wh}({\mathscr R}[G])&:=\kone{{\mathscr R}[G]}/\left(\mu_K\times\mathbb W\times G^{ab}\right), \end{split} \end{equation*} where $\mu_K$ is the set of roots of unity in $K$. This is a generalization of Definition \ref{sk-def} \end{defn} The following proposition is a generalization of \cite[Theorem 7.1]{ol}. \begin{propn} Let $G$ be a finite $p$-group and $z\in Z(G)$ such that the order of $z$ is the prime $p$. Let \begin{equation*} \Omega=\{g\in G: [g,h]=z, \mbox{ for some } h\in G\}. \end{equation*} On this set, consider the following relation $\sim$: \begin{equation*} g\sim h \mbox{ if } \begin{cases} g \mbox{ is conjugate to } h, \mbox{ or }\\ [g,h]=z^i, \mbox{ for any } i \mbox{ prime to } p. \end{cases} \end{equation*} Then \begin{equation*} \mathrm{ker}\left[\mathrm{tors}(\mathrm{Wh}({\mathscr R}[G]))\longrightarrow \mathrm{tors}(\mathrm{Wh}({\mathscr R}[G/\langle z\rangle]))\right]\cong ({\mathbb Z}/p)^N, \mbox{where } N=\begin{cases} 0 &\!\!\! \mbox{ if } \Omega=\emptyset \\ |\Omega/\sim|-1 &\!\!\! \mbox{ if } \Omega\neq0.\end{cases} \end{equation*} \iffalse $\mathrm{ker}\left[\mathrm{tors}(\mathrm{Wh}({\mathscr R}[G]))\longrightarrow \mathrm{tors}(\mathrm{Wh}({\mathscr R}[G/\langle z\rangle]))\right]\cong ({\mathbb Z}/p)^N$, where $N=\begin{cases} 0 &\!\!\! \mbox{ if } \Omega=\emptyset \\ |\Omega/\sim|-1 &\!\!\! \mbox{ if } \Omega\neq0. 
\end{cases}$ }\f \end{propn} \begin{proof} The proof of this Proposition also follows the same argument as in the proof of Oliver. The first step is to recall the exact sequence in \eqref{exact-log}, which comes from the homomorphism \begin{equation*} \log:\kone{{\mathscr R}[G],(1-z){\mathscr R}[G]}\longrightarrow \mathrm{H}_0(G;(1-z){\mathscr R}[G]), \end{equation*} where $\mathrm{ker}(\log)=\langle z\rangle$ and $\mathrm{im}(\log)=\{(1-z)\sum r_ig_i:r_i\in{\mathscr R}, g_i\in G,\sum r_i\in\mathrm{ker}(\tau)\}$. This map fits into the following commutative diagram: \begin{equation*} \xymatrix{ &0\ar[d]\\ &\mathrm{H}_0(G,(1-z){\mathscr R}[\Omega])\ar[d]\\ \kone{{\mathscr R}[G],(1-z){\mathscr R}[G]}\ar[r]^{\log}\ar[d] &\mathrm{H}_0(G,(1-z){\mathscr R}[G])\ar[d]\\ \kone{{\mathscr R}[G]}\ar[r]\ar[d]^\eta &\mathrm{H}_0(G,{\mathscr R}[G])\\ \mathrm{Wh}({\mathscr R}[G])\ar[ru]_{\log_{{\mathscr R}[G]}}. } \end{equation*} In the above diagram, we have used the following equality \begin{equation} \begin{split} &\mathrm{ker}\left[\mathrm{H}_0(G,(1-z){\mathscr R}[G])\longrightarrow\mathrm{H}_0(G,{\mathscr R}[G])\right]\\ &=\langle r(1-z)g\in\mathrm{H}_0(G,(1-z){\mathscr R}[G]): g \mbox{ is conjugate to } gz, r\in{\mathscr R}[G]\rangle\\ &=\mathrm{H}_0(G,(1-z){\mathscr R}[\Omega]). \end{split} \end{equation} Now, consider the surjection $\kone{{\mathscr R}[G]}\stackrel{\eta}{\longrightarrow}\mathrm{Wh}({\mathscr R}[G])$. Let $x\in(\mathrm{Wh}({\mathscr R}[G]))_{\mathrm{tors}}$, with $\eta(y)=x$ for some $y\in\kone{{\mathscr R}[G]}$. Then, as $x^n=1$, for some $n$, it follows that $y^n\in\mathrm{ker}(\eta)$, which is finite. Therefore $y^{nr}=1$, for some $r$, and $y\in\kone{{\mathscr R}[G]}_{\mathrm{tors}}$. Hence $\eta$ induces a surjection $\kone{{\mathscr R}[G]}_{\mathrm{tors}}\stackrel{\eta}{\longrightarrow}\mathrm{Wh}({\mathscr R}[G])_{\mathrm{tors}}$. On the other hand, by a straight forward generalization of \cite[Theorem 2.9]{ol}, we have $\mathrm{ker}(\log_{{\mathscr R}[G]}\circ\eta)=\kone{{\mathscr R}[G]}_{\mathrm{tors}}$. Combining this with the surjection $\mathrm{ker}(\log_{{\mathscr R}[G]}\circ\eta)\stackrel{\eta}{\longrightarrow}\mathrm{ker}(\log_{{\mathscr R}[G]})$, we have $\mathrm{ker}(\log_{{\mathscr R}[G]})=\mathrm{Wh}({\mathscr R}[G])_\mathrm{tors}$. Further, set $I:=(1-z){\mathscr R}[G]$ and consider the map $L:1+I\stackrel{\log}{\longrightarrow} I\stackrel{\mathrm{proj}}{\longrightarrow}I/[{\mathscr R}[G],I]$. Then by a straightforward generalization of \cite[Theorem 2.9]{ol}, we have a surjection from $\mathrm{ker}(L)$ to $\mathrm{ker}(\log_I)$. Therefore, for any $u\in1+(1-z){\mathscr R}[G]$, if $\bar u\in\mathrm{Wh}({\mathscr R}[G])$ denotes the image of $u$, then \begin{equation*} \begin{split} \bar u\in\mathrm{Wh}({\mathscr R}[G])_{tors}&\iff \bar u\in \mathrm{ker}(\log_{{\mathscr R}[G]})\\ &\iff \log(u)\in\mathrm{ker}\left[\mathrm{H}_0(G,(1-z){\mathscr R}[G])\longrightarrow\mathrm{H}_0(G,{\mathscr R}[G])\right]=\mathrm{H}_0(G,(1-z){\mathscr R}[\Omega]) \end{split} \end{equation*} The next step is to consider the sets \begin{equation*} \begin{split} D&=\{\xi\in{\mathscr R}(\Omega):(1-z)\xi\in\log(1+(1-z){\mathscr R}[G])\}\\ C&=\{\xi\in{\mathscr R}(\Omega):(1-z)\xi=\log(u), \mbox{ for some }u\in\mathrm{ker}(1+(1-z){\mathscr R}[G]\longrightarrow\mathrm{Wh}({\mathscr R}[G]))\} \end{split} \end{equation*} and show that the required kernel is equal to $D/C$. This is done exactly as in the proof of \cite[Theorem 7.1]{ol}. 
\end{proof} As a consequence of the proposition, we get the following corollary. \begin{cor}\label{sk-trivial} Let $G$ be a finite $p$-group containing an abelian subgroup $H\unlhd G$ such that $G/H$ is cyclic. Then $S\kone{{\mathscr R}[G]}=1$. \end{cor} \begin{proof} The proof proceeds by induction as in \cite[Cor 7.2]{ol}. If $G=1$, then as ${\mathscr R}$ is a local ring, $S\kone{{\mathscr R}[G]}=S\kone{{\mathscr R}}=1$ (\cite[Cor V.9.2]{bass}). Then assume that $H\neq1$, and choose $z\in H\cap Z(G)$ of order $p$. We also assume inductively that $\mathrm{Wh}({\mathscr R}[G/(z)])$ is torsion free. As above, we consider the set $\Omega$ and the relation $\sim$. By the previous result, it is enough to show that this relation is transitive on $\Omega$. We include the short proof for convenience. Let $\Omega\neq\emptyset$. We take any $g\in\Omega$, and any $x\in G-H$, which is a generator of $G/H$. Let $h\in\Omega$ such that $[g,h]=z$. As $G/H$ is cyclic, there exists $i$ such that either $gh^{i}$ or $g^{i}h$ lies in $H$. By symmetry, we may assume that $gh^i=a\in H$. Let $h=bx^j$ for some $b\in H$, then \begin{equation*} z=[g,h]=[gh^i,h]=[a,bx^i]=[a,x^j]=[ax,x^j]=[ax,x^j(ax)^{-j}]=[x,x^j(ax)^{-j}], \end{equation*} where the last equality happens as $x^j(ax)^{-j}\in H$. Therefore, in $\Omega$, we have, \begin{equation*} g\sim h\sim gh^i=a\sim x^j\sim ax\sim x^j(ax)^{-j}\sim x. \end{equation*} Hence, the relation is transitive, and the result follows. \end{proof} \begin{theorem}\label{k-tors} Let $G$ be any finite $p$-group. Then $(\kone{{\mathscr R}[G]})_\mathrm{tors}\cong\mu_K\times G^{ab}\times S\kone{{\mathscr R}[G]}$. \end{theorem} \begin{proof} If $G$ is abelian then the previous Proposition implies the result. Now let $G$ be any $p$-group, then we show that the projection map \begin{equation*} pr^*:\konep{{\mathscr R}[G]}_{\mathrm{tors}}\longrightarrow\konep{{\mathscr R}[G^{ab}]}_{\mathrm{tors}} \end{equation*} is injective. Now, we fix the group $G$ and assume inductively that the theorem holds for all of its proper subgroups and quotients. If $G$ is cyclic, dihedral, quaternionic or semi-dihedral, then the Proposition holds by the previous corollary. Since the characteristic of $\mathbb K=K[[X_1,\cdots,X_r,Y]]$ is zero, by Maschke's theorem the ring $\mathbb K[G]$ is semisimple. Therefore, by Wedderburn's Theorem we have the decomposition $\mathbb K[G]\cong\prod_{i=1}^s A_i$, for some simple $\mathbb K[G]$-modules $A_i$ and some $s\in\mathbb N$. Since $\mathbb K[G]$ contains the field $K$ of characteristic 0, we can show as in \cite[Section 2]{roquette}, that each of the division algebras that occur in the above decomposition is isomorphic to that of a primitive, faithful representation of some subquotient of $G$. In other words, the endomorphism rings of the simple modules $A_i$ are isomorphic to that of simple modules defined over $\mathbb K[T]$ for subgroups or subquotients $T$ of $G$. Therefore, the restriction maps and the quotient maps define the following monomorphism: \begin{equation}\label{k-roquette} \sum\mathrm{Res}_T^G\oplus\sum\mathrm{Proj}_{G/N}^G:\kone{\mathbb K[G]}\longrightarrow\bigoplus_{T\subset G,[G:T]=p}\kone{\mathbb K[T]}\oplus\bigoplus_{N\unlhd G,|N|=p}\kone{\mathbb K[G/N]}. \end{equation} It follows that the corresponding homomorphism for $\konep{\mathbb I[G]}$ is also injective. 
Next, for any subgroup $H$ of $ G$ of index $p$, we have the following commutative diagram, where the maps $t_1, t_2$ are the transfer maps and the maps $\mathrm{Proj_1}, \mathrm{Proj_2}$ and $\mathrm{Proj_3}$ are induced by the projection maps: \begin{equation*} \xymatrix{ \konep{{\mathscr R}[G]}_\mathrm{tors}\ar[r]^{\mathrm{Proj}_1}\ar[d]_{t_1} & \konep{{\mathscr R}[G/[H,H]]}_\mathrm{tors}\ar@{^{(}->}[r]^{\mathrm{Proj}_3}\ar[d]_{t_2} &\konep{{\mathscr R}[G^{ab}]}_\mathrm{tors} \\ \konep{{\mathscr R}[H]}_\mathrm{tors}\ar@{^{(}->}[r]^{\mathrm{Proj}_2} & \konep{{\mathscr R}[H^{ab}]}_\mathrm{tors}. } \end{equation*} Here the map $\mathrm{Proj_2}$ is injective by the induction assumption. Regarding the map $\mathrm{Proj_3}$, it is also injective by Corollary \ref{sk-trivial} above, since $G/[H,H]$ contains an abelian subgroup of index $p$. Therefore, for any $u\in\mathrm{ker}(\mathrm{Proj}_3\circ\mathrm{Proj}_1)$, we have $t_1(u)=1\in\konep{{\mathscr R}[H^{ab}]}$. Together with the fact that $\mathrm{Proj}_{G/N}^G(u)=1$, for all $N\unlhd G$ of order $p$, by the induction hypothesis, we have $u=1$ by the injective map \eqref{k-roquette}. Hence the map $pr^\ast$ is injective. \end{proof} As a consequence of this along with Proposition \ref{log-exact}, we get the following result. \begin{cor} Let $G$ be a finite $p$-group. Then we have the following exact sequence of groups: \begin{equation}\label{k-exact} 1\longrightarrow\mu_K\times\mathbb W\times G^{ab}\longrightarrow\konep{{\mathscr R}[G]}\stackrel{\mathfrak{L}_G}{\longrightarrow}{\mathscr R}[G]\longrightarrow \mathbb W\times G^{ab}\longrightarrow 1. \end{equation} \end{cor} \subsection{The Logarithm map over $\widehat{\mathbb I(Z)}_{(p)}$} Recall that $Z:=Z({\mathcal G})$, the center of ${\mathcal G}$. Let $\widehat{{\mathscr R}}=\widehat{\mathbb I(Z)}_{(p)}$ and $J({\widehat{\mathscr R}})$ denote its Jacobson radical. Since ${\mathcal G}$ is pro-$p$, the ring $\widehat{\mathscr R}[\overline{\mathcal G}]^\tau$ is a local ring and the Jacobson radical $J({\widehat{\mathscr R}}[\overline{\mathcal G}]^\tau)$ is its maximal ideal. We again consider the power series: \begin{equation} \log{(1+x)}=\sum_{n=1}^\infty(-1)^{n+1}\frac{x^n}{n}. \end{equation} \begin{lemma}Let $G=\overline{\mathcal G}$. The ideal $J({\widehat{\mathscr R}}[\overline{\mathcal G}]^\tau)/p\widehat{\mathscr R}[G]^\tau$ is a nilpotent ideal of $\widehat{\mathscr R}[G]^\tau/p\widehat{\mathscr R}[G]^\tau$. \end{lemma} \begin{proof} Let $x\in J({\widehat{\mathscr R}}[\overline{\mathcal G}]^\tau)/p\widehat{\mathscr R}[G]^\tau$. Since $\widehat{\mathscr R}$ is a complete local ring, and $G$ is pro-$p$, the maximal ideal $J({\widehat{\mathscr R}}[\overline{\mathcal G}]^\tau)=\langle{\mathfrak m}_{\widehat{\mathscr R}},I_G\rangle$, where ${\mathfrak m}_{\widehat{\mathscr R}}$ is the maximal ideal of $\widehat{\mathscr R}$ and $I_G$ is the augmentation ideal of $\widehat{\mathscr R}[G]^\tau$. Let $|G|=p^r$. Then $(g-1)^{p^r}\in p{\mathscr R}[G]^\tau$, for any $g\in G$. Therefore, for any $x\in J({\widehat{\mathscr R}}[\overline{\mathcal G}]^\tau)=\langle{\mathfrak m}_{\widehat{\mathscr R}}, I_{G}\rangle$, we have $x^{p^r}\in \langle p,{\mathfrak m}_{\widehat{\mathscr R}}\rangle$. Hence $x^{n}\in \langle p,{\mathfrak m}_{\widehat{\mathscr R}}\rangle^m\widehat{\mathscr R}[G]^\tau$ for large enough $n,m$. This implies that $x^i/i$ converges to $0$ as $i$ tends to infinity. Hence the series $\mathrm{Log}(1+x)$ converges in $\widehat{\mathscr R}[G]^\tau[\frac{1}{p}]$. 
\end{proof} The techniques of Oliver in \cite[Lemma 27]{ol}, as generalized by Kakde to prove \cite[Lemma 66]{kakde}, can be further generalized to show the following lemma. \begin{lemma}\label{log-of-hat} Let $I\subset J(\widehat{\mathscr R}[\overline{\mathcal G}]^\tau)$ be any ideal of $\widehat{\mathscr R}[\overline{\mathcal G}]^\tau$. Then
\begin{enumerate}
\item For any $x,y\in I$, the series $\mathrm{Log}(1+x)$ converges to an element in $\widehat{\mathscr R}[\overline{\mathcal G}]^\tau[\frac{1}{p}]$, and
\begin{equation}
\mathrm{Log}((1+x)(1+y))\equiv\mathrm{Log}(1+x)+\mathrm{Log}(1+y)\pmod{[\widehat{\mathscr R}[\overline{\mathcal G}]^\tau[\frac{1}{p}], I[\frac{1}{p}]]}
\end{equation}
\item If $I\subset\xi\widehat{\mathscr R}[\overline{\mathcal G}]^\tau$, for some central element $\xi$ such that $\xi^p\in p\xi\widehat{\mathscr R}[\overline{\mathcal G}]^\tau$, then for any $x,y\in I$, $\mathrm{Log}(1+x)$ and $\mathrm{Log}(1+y)$ converge in $I$, and
\begin{equation}
\mathrm{Log}((1+x)(1+y))\equiv\mathrm{Log}(1+x)+\mathrm{Log}(1+y)\pmod{[\widehat{\mathscr R}[\overline{\mathcal G}]^\tau, I]}.
\end{equation}
Moreover, if $I^p\subset pI J(\widehat{\mathscr R}[\overline{\mathcal G}]^\tau)$, then
\begin{enumerate}
\item for all $x\in I$ the series $\mathrm{Exp}(x)=\sum_{n=0}^\infty\frac{x^n}{n!}$ converges to an element in $1+I$.
\item the maps $\mathrm{Log}$ and $\mathrm{Exp}$ are bijections and inverse to each other between $1+I$ and $I$.
\end{enumerate}
\end{enumerate}
\end{lemma}
\begin{propn}Let $I$ be any ideal contained in the maximal ideal $J(\widehat{\mathscr R}[\overline{\mathcal G}]^\tau)$. Then the logarithm map
\begin{equation*}
\mathrm{Log}(1+x)=\sum_{n=1}^\infty(-1)^{n+1}\frac{x^n}{n}
\end{equation*}
defined on $I$ induces a unique homomorphism
\begin{equation*}
\log_I:\kone{\widehat{\mathscr R}[\overline{\mathcal G}]^\tau,I}\longrightarrow \left(\frac{I}{[\widehat{\mathscr R}[\overline{\mathcal G}]^\tau, I]}\right)\left[\frac{1}{p}\right].
\end{equation*}
If, in addition, $I\subset\xi\widehat{\mathscr R}[\overline{\mathcal G}]^\tau$, for some central element $\xi$ such that $\xi^p\in p\xi\widehat{\mathscr R}[\overline{\mathcal G}]^\tau$, then the logarithm map $\mathrm{Log}$ induces a homomorphism
\begin{equation*}
\log_I:\kone{\widehat{\mathscr R}[\overline{\mathcal G}]^\tau,I}\longrightarrow \frac{I}{[\widehat{\mathscr R}[\overline{\mathcal G}]^\tau, I]}.
\end{equation*}
\end{propn}
\begin{proof}
The proof is the same as the proof of \cite[Prop 67]{kakde}.
\end{proof}
\subsection{The Integral Logarithm over $\widehat{\mathbb I(Z)}_{(p)}$} We now define the integral logarithm over the ring $\widehat{\mathbb I(Z)}_{(p)}$. For this, we first consider the kernel
\begin{equation*}
J:=\mathrm{ker}\left[\widehat{\mathbb I[[\cG]]_\sS}\longrightarrow\widehat{\mathbb I(\Gamma)_{(p)}}\right].
\end{equation*}
Note that the ring $\widehat{\mathbb I[[\cG]]_\sS}$ is local; therefore we have the surjective maps
\begin{equation*}
\widehat{\mathbb I[[\cG]]_\sS}^\times\longrightarrow\konep{\widehat{\mathbb I[[\cG]]_\sS}} \quad\mbox{ and }\quad 1+J\longrightarrow\konep{\widehat{\mathbb I[[\cG]]_\sS},J}.
\end{equation*}
Now consider the following exact sequence of groups, which is split by the embedding $\Gamma\hookrightarrow{\mathcal G}$:
\begin{equation*}
1\longrightarrow 1+J\longrightarrow\widehat{\mathbb I[[\cG]]_\sS}^\times\longrightarrow\widehat{\mathbb I(\Gamma)_{(p)}}^\times\longrightarrow 1.
\end{equation*}
It is easy to see that any $x\in\widehat{\mathbb I[[\cG]]_\sS}^\times$ can be expressed uniquely as $x=uy$, where $u\in 1+J$ and $y\in \widehat{\mathbb I(\Gamma)_{(p)}}^\times$. Hence, any $x\in\konep{\widehat{\mathbb I[[\cG]]_\sS}}$ can be written uniquely as a product $x=uy$, where $y\in\konep{\widehat{\mathbb I(\Gamma)_{(p)}}}$ and $u$ lies in the image of $\kone{\widehat{\mathbb I[[\cG]]_\sS},J}$ in $\konep{\widehat{\mathbb I[[\cG]]_\sS}}$. We record this fact below.
\begin{lemma} Any $x\in\konep{\widehat{\mathbb I[[\cG]]_\sS}}$ can be written uniquely as a product $x=uy$, where $y\in\konep{\widehat{\mathbb I(\Gamma)_{(p)}}}$ and $u$ lies in the image of $\kone{\widehat{\mathbb I[[\cG]]_\sS},J}$ in $\konep{\widehat{\mathbb I[[\cG]]_\sS}}$.
\end{lemma}
Recall the following assumption:
\begin{description}
\item[(Ur)] $\mathbb I=\cO[[X_1,\cdots,X_r]]$ with $\cO$ unramified over ${\mathbb Z}_p$.
\end{description}
Therefore, $\widehat{\mathbb I(\Gamma)_{(p)}}/p\widehat{\mathbb I(\Gamma)_{(p)}}\cong\FF_q[[X_1,\cdots,X_r]][[\Gamma]]$, where $\FF_q$ is the finite field of order $q$ and characteristic $p$. Consider the map
\begin{equation*}
\varphi:\FF_q[[X_1,\cdots,X_r]][[\Gamma]]\longrightarrow \FF_q[[X_1,\cdots,X_r]][[\Gamma]]
\end{equation*}
whose restriction to $\FF_q$ is the Frobenius, which maps $X_j$ to $X_j^{p}$ for all $j=1,\cdots,r$, and which is the $p$-th power map on $\Gamma$. (This is the reduction of the map, again denoted $\varphi$, on $\widehat{\mathbb I(\Gamma)_{(p)}}$ which is the Frobenius on $\cO$, sends $X_j$ to $X_j^p$, and is the $p$-th power map on $\Gamma$.)
\begin{lemma} Let $y\in\kone{\widehat{\mathbb I(\Gamma)_{(p)}}}$. Then
\begin{equation*}
\frac{y^p}{\varphi(y)}\equiv 1\pmod{p\widehat{\mathbb I(\Gamma)_{(p)}}}.
\end{equation*}
Therefore, $\mathrm{Log}(\frac{y^p}{\varphi(y)})$ is defined.
\end{lemma}
\begin{proof}
It is enough to show the congruence, and this follows by computing $y^p$ modulo $p$. Let $\bar y$ denote the image of $y$ in $\FF_q[[X_1,\cdots,X_r]][[\Gamma]]\cong\FF_q[[X_1,\cdots,X_r]][[T]]$. Then $\bar y=\sum_{i=0}^\infty a_iT^i$, with $a_i\in\FF_q[[X_1,\cdots,X_r]]$, and $\bar y^p=\left(\sum_{i=0}^\infty a_iT^i\right)^p=\sum_{i=0}^\infty a_i^pT^{ip}=\varphi(\bar y)$. Therefore $\frac{y^p}{\varphi(y)}\equiv 1\pmod{p\widehat{\mathbb I(\Gamma)_{(p)}}}.$
\end{proof}
\begin{defn}
Let $x\in\konep{\widehat{\mathbb I[[\cG]]_\sS}}$. Then $x=uy$, with $y\in\konep{\widehat{\mathbb I(\Gamma)_{(p)}}}$ and $u$ in the image of $\kone{\widehat{\mathbb I[[\cG]]_\sS},J}$ in $\konep{\widehat{\mathbb I[[\cG]]_\sS}}$. We define the \emph{integral} logarithm map on $\konep{\widehat{\mathbb I[[\cG]]_\sS}}$ by
\begin{equation*}
L(x)=L(uy)=L(u)+L(y)=\mathrm{Log}(u)-\frac{1}{p}\varphi(\mathrm{Log}(u))+\frac{1}{p}\mathrm{Log}\left(\frac{y^p}{\varphi(y)}\right).
\end{equation*}
\end{defn}
For this integral logarithm, we have the following result, which is proven exactly as in \cite[Prop 74]{kakde}.
\begin{propn} The integral logarithm map defined above induces a homomorphism
\begin{equation*}
L:\konep{\widehat{\mathbb I[[\cG]]_\sS}}\longrightarrow{\widehat{\mathbb I(Z)}_{(p)}}[\mathrm{Conj}(\overline{\mathcal G})]^\tau,
\end{equation*}
which is independent of the choice of the splitting of ${\mathcal G}\longrightarrow\Gamma$.
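\end{propn}
As a simple illustration of the definition (this observation is immediate from the conventions above, under which $\varphi$ acts on $\Gamma$ as the $p$-th power map, and it is not used elsewhere): for $y=\gamma\in\Gamma$, viewed in $\konep{\widehat{\mathbb I(\Gamma)_{(p)}}}$, we have $\varphi(\gamma)=\gamma^p$, and hence
\begin{equation*}
L(\gamma)=\frac{1}{p}\mathrm{Log}\left(\frac{\gamma^p}{\varphi(\gamma)}\right)=\frac{1}{p}\mathrm{Log}(1)=0.
\end{equation*}
Thus the integral logarithm annihilates the image of $\Gamma$, exactly as in the classical case $\mathbb I={\mathbb Z}_p$.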
\subsection{The Logarithm map under restriction maps} For any $P\leq\overline{\mathcal G}$, recall the map
\begin{equation*}
t_P^{\overline{\mathcal G}}:{\mathscr R}[\mathrm{Conj}{\overline{\mathcal G}}]^\tau\longrightarrow {\mathscr R}[P^{ab}]^\tau
\end{equation*}
defined by
\begin{equation*}
t_P^{\overline{\mathcal G}}(\bar g)=\sum_{\substack{x\in C(\overline{\mathcal G},P)\\ x^{-1}gx\in P}}({\bar x}^{-1})(\bar g)(\bar x),
\end{equation*}
where $C(\overline{\mathcal G},P)$ is the set of left coset representatives of $P$ in $\overline{\mathcal G}$. Let $\mathbb K=K[[X_1,\cdots,X_r]]$. Then the map $t_P^{\overline{\mathcal G}}$ can be naturally extended to a map
\begin{equation*}
t_P^{\overline{\mathcal G}}:\mathbb K[[Z]][\mathrm{Conj}{\overline{\mathcal G}}]^\tau\longrightarrow \mathbb K[[Z]][\mathrm{Conj}{P}]^\tau.
\end{equation*}
\begin{lemma}\label{k-restr} For any $P\leq\overline{\mathcal G}$, we have the commutative diagram
\begin{equation*}
\xymatrix{
\konep{\mathbb I[[\cG]]}\ar[r]^{\log}\ar[d]_{\Theta_P^{\overline{\mathcal G}}} & \mathbb K[[Z]][\mathrm{Conj}{\overline{\mathcal G}}]^\tau\ar[d]^{t^{\overline{\mathcal G}}_P}\\
\konep{\mathbb I(U_P)}\ar[r]^{\log} & \mathbb K[[Z]][\mathrm{Conj}{P}]^\tau }
\end{equation*}
Similarly, for $J=\mathrm{ker}\left[\widehat{\mathbb I[[\cG]]_\sS}\longrightarrow\widehat{\mathbb I(\Gamma)_{(p)}}\right]$, we also have
\begin{equation*}
\xymatrix{
\kone{\widehat{\mathbb I[[\cG]]_\sS},J}\ar[r]^{\mathrm{Log}}\ar[d]_{\widehat{\Theta_P^{\overline{\mathcal G}}}} & \widehat{\mathbb I(Z)_{(p)}}[\mathrm{Conj}{\overline{\mathcal G}}]^\tau[\frac{1}{p}]\ar[d]^{t^{\overline{\mathcal G}}_P}\\
\kone{\widehat{\mathbb I(U_P)},J}\ar[r]^{\mathrm{Log}} & \widehat{\mathbb I(Z)_{(p)}}[\mathrm{Conj}{P}]^\tau[\frac{1}{p}]. }
\end{equation*}
\end{lemma}
The proof of this lemma proceeds exactly as in \cite[Theorem 6.8]{ol}. For the second part, consider any $u\in\kone{\widehat{\mathbb I[[\cG]]_\sS},J}$; the commutativity then follows from the following equalities:
\begin{equation*}
\begin{split}
\mathrm{Log}(u) &=\lim_{n\to\infty}\frac{1}{p^n}(u^{p^n}-1)\\
\widehat{\Theta_P^{\overline{\mathcal G}}}(u)&=\lim_{n\to\infty}(1+t^{\overline{\mathcal G}}_P(u^{p^n}-1))^{1/{p^n}}.
\end{split}
\end{equation*}
Next, proceeding as in the proof of \cite[Lemma 77]{kakde}, we get the following commutative diagram.
\begin{lemma}\label{alpha-restr} Let $P\in C(\overline{\mathcal G})$ be a non-trivial subgroup. Then the following diagram is commutative:
\begin{equation*}
\xymatrix{
{\mathbb I(U_P)}^\times\ar[r]^{\log}\ar[d]_{\alpha_P} & \mathbb K[U_P]\ar[d]^{p\eta_P}\\
{\mathbb I(U_P)}^\times\ar[r]^{\log} & \mathbb K[U_P] }
\end{equation*}
Similarly, the following diagram is also commutative:
\begin{equation*}
\xymatrix{
\kone{\widehat{\mathbb I(U_P)_\sS},J}\ar[r]^{\mathrm{Log}}\ar[d]_{\alpha_P} & \widehat{\mathbb I[U_P]_{\sS}}[\frac{1}{p}]\ar[d]^{p\eta_P}\\
\kone{\widehat{\mathbb I(U_P)},J}\ar[r]^{\mathrm{Log}} & \widehat{\mathbb I[U_P]_{\sS}}[\frac{1}{p}]. }
\end{equation*}
\end{lemma}
To establish a compatibility between the subgroups, we will also consider the maps $v_P^{\overline{\mathcal G}}$ and $u_P^{\overline{\mathcal G}}$ defined below.
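First, as an orientation, we note two degenerate cases of the trace map $t_P^{\overline{\mathcal G}}$, both read off directly from its defining formula: for $P=\overline{\mathcal G}$, the coset space $C(\overline{\mathcal G},\overline{\mathcal G})$ consists of a single element, so that
\begin{equation*}
t_{\overline{\mathcal G}}^{\overline{\mathcal G}}(\bar g)=\bar g\in{\mathscr R}[\overline{\mathcal G}^{ab}]^\tau,
\end{equation*}
i.e., $t_{\overline{\mathcal G}}^{\overline{\mathcal G}}$ is induced by the projection $\overline{\mathcal G}\longrightarrow\overline{\mathcal G}^{ab}$; at the other extreme, if no conjugate of $g$ lies in $P$, then the defining sum is empty and $t_P^{\overline{\mathcal G}}(\bar g)=0$.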
\begin{defn}\label{def-v} Define a map $v_P^{\overline{\mathcal G}}:\prod_{C\leq\overline{\mathcal G}}\mathbb K[[Z]][C^{ab}]\longrightarrow\mathbb K[[Z]][P^{ab}]$ as follows: If $P$ is not cyclic, then
\begin{equation*}
v_P^{\overline{\mathcal G}}((x_C))=\sum_{P'}\frac{[P:P']}{[P:(P')^p]}\varphi(x_{P'}),
\end{equation*}
where $P'$ runs over all subgroups contained in $C(\overline{\mathcal G})$ such that $(P')^p\leq P$. If $P$ is cyclic, then
\begin{equation*}
v_P^{\overline{\mathcal G}}((x_C))=\sum_{P'}[P':(P')^p]\varphi(x_{P'})=p\sum_{P'}\varphi(x_{P'}),
\end{equation*}
where $P'$ runs over all the $P'\in C(\overline{\mathcal G})$ with $(P')^p=P$ but $P'\neq P$. We set $v^{\overline{\mathcal G}}=(v^{\overline{\mathcal G}}_P)_P$. Analogously, we define maps in the case of the $p$-adic completions, and we denote them again by $v^{\overline{\mathcal G}}$.
\end{defn}
Then we can show the following lemma as in \cite[Lemma 79]{kakde}.
\begin{lemma}\label{beta-restr} Let $P\neq1$. Then the following diagram is commutative:
\begin{equation*}
\xymatrix{
\mathbb K[[Z]][\mathrm{Conj}{\overline{\mathcal G}}]^\tau\ar[r]^\varphi\ar[d]_{\beta^{\overline{\mathcal G}}} &\mathbb K[[Z]][\mathrm{Conj}{\overline{\mathcal G}}]^\tau\ar[d]^{\beta^{\overline{\mathcal G}}_P}\\
\prod_{C\leq\overline{\mathcal G}}\mathbb K[[Z]][C^{ab}]\ar[r]_{v_P^{\overline{\mathcal G}}} &\mathbb K[[Z]][P^{ab}]. }
\end{equation*}
If $P=\{1\}$, then we have the following commutative diagram:
\begin{equation*}
\xymatrix{
\mathbb K[[Z]][\mathrm{Conj}{\overline{\mathcal G}}]^\tau\ar[r]^\varphi\ar[d]_{\beta^{\overline{\mathcal G}}} &\mathbb K[[Z]][\mathrm{Conj}{\overline{\mathcal G}}]^\tau\ar[d]^{\beta^{\overline{\mathcal G}}_P}\\
\prod_{C\leq\overline{\mathcal G}}\mathbb K[[Z]][C^{ab}]\ar[r]_{\varphi+v_1^{\overline{\mathcal G}}} &\mathbb K[[Z]]. }
\end{equation*}
Analogous results hold for the $p$-adic completions.
\end{lemma}
\begin{defn}\label{defn-u} Define the map $u_P^{\overline{\mathcal G}}:\prod_{C\leq\overline{\mathcal G}}\mathbb I(U^{ab}_{C})^\times\longrightarrow\mathbb I(U^{ab}_P)^\times$ as follows: If $P$ is not a cyclic subgroup of $\overline{\mathcal G}$, then
\begin{equation*}
u^{\overline{\mathcal G}}_P((x_C))=\prod_{P'}\varphi(x_{P'})^{|P'|},
\end{equation*}
where $P'$ runs over all subgroups contained in $C(\overline{\mathcal G})$ such that $(P')^p\leq P$. If $P$ is cyclic, then
\begin{equation*}
u^{\overline{\mathcal G}}_P((x_C))=\prod_{P'}\varphi(x_{P'}),
\end{equation*}
where $P'$ runs over all the $P'\in C(\overline{\mathcal G})$ with $(P')^p=P$ but $P'\neq P$. Then, we define the collection of maps by $u^{\overline{\mathcal G}}=(u^{\overline{\mathcal G}}_P)_P$. Analogously, we define maps in the case of the $p$-adic completions, and we denote them again by $u^{\overline{\mathcal G}}$.
\end{defn}
As the logarithm maps defined above respect the homomorphisms of Iwasawa algebras induced by group homomorphisms, we have the following lemma.
\begin{lemma}\label{u-restr} Let $P$ be a non-cyclic subgroup of $\overline{\mathcal G}$. Then the following diagram is commutative.
\begin{equation*}
\xymatrix{
\prod_{C\leq\overline{\mathcal G}}\mathbb I(U_C^{ab})^\times\ar[r]^{\log}\ar[d]_{u_P^{\overline{\mathcal G}}} &\prod_{C\leq\overline{\mathcal G}} \mathbb K[[Z]][C^{ab}]\ar[d]^{|P|v_P^{\overline{\mathcal G}}}\\
\mathbb I(U_P^{ab})^\times\ar[r]_\log &\mathbb K[[Z]][P^{ab}]. }
\end{equation*}
Let $P$ be a cyclic subgroup of $\overline{\mathcal G}$. Then the following diagram is commutative.
\begin{equation*}
\xymatrix{
\prod_{C\leq\overline{\mathcal G}}\mathbb I(U_C^{ab})^\times\ar[r]^{\log}\ar[d]_{u_P^{\overline{\mathcal G}}} &\prod_{C\leq\overline{\mathcal G}} \mathbb K[[Z]][C^{ab}]\ar[d]^{\frac{1}{p}v_P^{\overline{\mathcal G}}}\\
\mathbb I(U_P^{ab})^\times\ar[r]_\log &\mathbb K[[Z]][P^{ab}]. }
\end{equation*}
Recall that $\widehat{\mathscr R}=\widehat{\mathbb I(Z)_{(p)}}$ and let $J_P=\mathrm{ker}\left[\widehat{\mathscr R}[P^{ab}]\longrightarrow\widehat{\mathbb I(\Gamma)_{(p)}}\right]$. Let $P$ be a non-cyclic subgroup of $\overline{\mathcal G}$. Then we have the following commutative diagram.
\begin{equation*}
\xymatrix{
\prod_{C\leq\overline{\mathcal G}}1+J_C\ar[r]^{\mathrm{Log}}\ar[d]^{u_P^{\overline{\mathcal G}}} & \prod_{C\leq\overline{\mathcal G}}{\mathbb Q}_p\otimes J_C\ar[d]^{|P|v_P^{\overline{\mathcal G}}}\\
1+J_P\ar[r]_{\mathrm{Log}} & {\mathbb Q}_p\otimes J_P. }
\end{equation*}
Let $P$ be a cyclic subgroup of $\overline{\mathcal G}$. Then the following diagram is commutative.
\begin{equation*}
\xymatrix{
\prod_{C\leq\overline{\mathcal G}}1+J_C\ar[r]^{\mathrm{Log}}\ar[d]^{u_P^{\overline{\mathcal G}}} & \prod_{C\leq\overline{\mathcal G}}{\mathbb Q}_p\otimes J_C\ar[d]^{\frac{1}{p}v_P^{\overline{\mathcal G}}}\\
1+J_P\ar[r]_{\mathrm{Log}} & {\mathbb Q}_p\otimes J_P. }
\end{equation*}
\end{lemma}
\begin{lemma}\label{alpha-formula} Let $x\in\konep{{\mathscr R}[\overline{\mathcal G}]^\tau}$ or $\konep{\widehat{{\mathscr R}}[\overline{\mathcal G}]^\tau}$. Then for every non-cyclic subgroup $P\leq\overline{\mathcal G}$, we have
\begin{equation*}
\alpha_P(\theta_P^{\overline{\mathcal G}}(x))^{p|P|}\equiv u_P^{\overline{\mathcal G}}(\alpha(\theta^{\overline{\mathcal G}}(x)))\pmod{p}.
\end{equation*}
In particular, the logarithm $\log\left(\frac{\alpha_P(\theta_P^{\overline{\mathcal G}}(x))^{p|P|}}{u_P^{\overline{\mathcal G}}(\alpha(\theta^{\overline{\mathcal G}}(x)))}\right)$ is well-defined.
\end{lemma}
\begin{proof}
Let $C$ be a non-trivial cyclic subgroup. Then $\alpha_C(\theta_C^{\overline{\mathcal G}}(x))\equiv 1\pmod{p}$, and hence $u_P^{\overline{\mathcal G}}(\alpha(\theta^{\overline{\mathcal G}}(x)))\equiv\varphi(\alpha_{\{1\}}(\theta_{\{1\}}^{\overline{\mathcal G}}(x)))\pmod{p}$. The result then follows from the congruence $\theta_P^{\overline{\mathcal G}}(x)^{|P|}\equiv\theta_{\{1\}}^{\overline{\mathcal G}}(x)\pmod{p}$, which follows from a straightforward generalization of \cite[Prop 2.3]{sv}.
\end{proof}
We now give the relation between the multiplicative and the additive sides. For completeness, and to see how the lemmas proved above are used, we also give a proof of one of the formulas along the lines of \cite[Prop 84]{kakde}.
\begin{propn}\label{beta-formula} Let $x\in\konep{\mathbb I[[\cG]]}$. Then
\begin{equation*}
\beta_P^{\overline{\mathcal G}}(L(x))=\begin{cases}
\frac{1}{p^2|P|}\log\left(\frac{\alpha_P(\theta_P^{\overline{\mathcal G}}(x))^{p|P|}}{u_P^{\overline{\mathcal G}}(\alpha(\theta^{\overline{\mathcal G}}(x)))}\right), &\mbox{ if } P\notin C(\overline{\mathcal G}) \\
\frac{1}{p}\log\left(\frac{\alpha_P(\theta_P^{\overline{\mathcal G}}(x))^{p|P|}}{u_P^{\overline{\mathcal G}}(\alpha(\theta^{\overline{\mathcal G}}(x)))}\right), &\mbox{ if } P\in C(\overline{\mathcal G}), P\neq\{1\}\\
\frac{1}{p}\log\left(\frac{\alpha_{\{1\}}(\theta_{\{1\}}^{\overline{\mathcal G}}(x))^{p}} {\varphi(\theta_{\{1\}}^{\overline{\mathcal G}}(x))(u_{\{1\}}^{\overline{\mathcal G}}(\alpha(\theta^{\overline{\mathcal G}}(x))))}\right), &\mbox{ if } P=\{1\}.
\end{cases}
\end{equation*}
We also have analogous relations over the $p$-adic completions $\konep{\widehat{\mathbb I[[\cG]]_\sS}}$.
\end{propn}
\begin{proof}
Let $x\in\konep{\mathbb I[[\cG]]}$. In the first case, we consider a non-cyclic subgroup $P\notin C(\overline{\mathcal G})$. Then we have
\begin{align*}
\beta_P^{\overline{\mathcal G}}(L(x))=&\beta_P^{\overline{\mathcal G}}\left(\log(x)-\frac{\varphi}{p}(\log(x))\right) \\
=&\frac{1}{p}\log(\alpha_P(\Theta_P^{\overline{\mathcal G}}(x)))-\beta_P^{\overline{\mathcal G}}\left(\frac{\varphi}{p}(\log(x))\right), &(\mbox{ Lemmas } \ref{k-restr}, \ref{alpha-restr})\\
=&\frac{1}{p}\log(\alpha_P(\Theta_P^{\overline{\mathcal G}}(x)))-\frac{1}{p}v_P^{\overline{\mathcal G}}(\beta^{\overline{\mathcal G}}(\log(x))), &(\mbox{ Lemma } \ref{beta-restr})\\
=&\frac{1}{p}\log(\alpha_P(\Theta_P^{\overline{\mathcal G}}(x)))-\frac{1}{p^2}v_P^{\overline{\mathcal G}}(\log(\alpha(\Theta^{\overline{\mathcal G}}(x)))), &(\mbox{ Lemmas } \ref{k-restr}, \ref{alpha-restr}) \\
=&\frac{1}{p}\log(\alpha_P(\Theta_P^{\overline{\mathcal G}}(x)))-\frac{1}{p^2|P|}\log(u_P^{\overline{\mathcal G}}(\alpha(\Theta^{\overline{\mathcal G}}(x)))), &(\mbox{ Lemma } \ref{u-restr})\\
=&\frac{1}{p^2|P|}\log\left(\frac{\alpha_P(\theta_P^{\overline{\mathcal G}}(x))^{p|P|}}{u_P^{\overline{\mathcal G}}(\alpha(\theta^{\overline{\mathcal G}}(x)))}\right).
\end{align*}
In the remaining cases, i.e., $P\in C(\overline{\mathcal G})$ with $P\neq\{1\}$, or $P=\{1\}$, the formula for $\beta_P^{\overline{\mathcal G}}$ can be shown similarly, as in \cite[Prop 84]{kakde}. We now give a proof for $x\in\konep{\widehat{\mathbb I[[\cG]]_\sS}}$. For this, we first note that we can write $x=uy$, for some $u\in\mathrm{im}\left[\kone{\widehat{\mathbb I[[\cG]]_\sS},J}\longrightarrow\konep{\widehat{\mathbb I[[\cG]]_\sS}}\right]$ and $y\in\konep{\widehat{\mathbb I(\Gamma)_{(p)}}}$. (Recall that $J=\mathrm{ker}[\widehat{\mathbb I[[\cG]]_\sS}\longrightarrow\widehat{\mathbb I(\Gamma)_{(p)}}]$.) The proof for $u$ is the same as above, and in fact, we may show that the formula is true for any $u$ in the image of $\kone{\widehat{\mathbb I[[\cG]]_\sS},\widehat{J}_{\mathscr R}}$, where $\widehat{J}_{\mathscr R}$ is the Jacobson radical of $\widehat{\mathbb I[[\cG]]_\sS}$. We now show it for $y\in\widehat{\mathbb I(Z)_{(p)}}^\times$. Note that $\Theta_P^{\overline{\mathcal G}}(y)=y^{[\overline{\mathcal G}:P]}$, for all $P\leq\overline{\mathcal G}$. Further, if $P$ is a non-trivial cyclic subgroup of $\overline{\mathcal G}$, then $\alpha_P(\Theta_P^{\overline{\mathcal G}}(y))=1$. On the other hand, since $L(y)\in\widehat{\mathbb I(Z)_{(p)}}$, we have
\begin{equation*}
\beta_P^{\overline{\mathcal G}}(L(y))=\begin{cases}
[\overline{\mathcal G}:P]L(y), &\mbox{ if } P \mbox{ is noncyclic or } P=\{1\}\\
0, &\mbox{ if } P \mbox{ is a non-trivial cyclic subgroup}.
\end{cases}
\end{equation*}
Now, it is easy to see that the formula for $\beta_P^{\overline{\mathcal G}}(L(y))$ holds. We finally consider the case when $y\in\konep{\widehat{\mathbb I(\Gamma)_{(p)}}}$. In this case, $\varphi^r(y)\in\widehat{\mathbb I(Z)_{(p)}}^\times$ for some $r$. Then $\frac{y^{p^r}}{\varphi^r(y)}\equiv 1\pmod{p\widehat{\mathbb I(\Gamma)_{(p)}}}$, and hence $\frac{y^{p^r}}{\varphi^r(y)}$ lies in the image of $\kone{\widehat{\mathbb I[[\cG]]_\sS},\widehat{J}_{\mathscr R}}$. Therefore the formula holds for $\frac{y^{p^r}}{\varphi^r(y)}$ and also for $\varphi^r(y)$. Hence, the formula holds for $y^{p^r}$. Since the image of $\beta_P^{\overline{\mathcal G}}$ is a torsion-free abelian group, the formula therefore holds for $y$.
\end{proof}
\subsection{Congruences over $\mathbb I[[\cG]]$} For any subgroup $P$ of $\overline{\mathcal G}$, we write $\maptheta{\overline{\mathcal G},ab}{P}$ for the following natural composite homomorphism
\begin{equation*}
\konep{\mathbb I[[\cG]]}\stackrel{\maptheta{\overline{\mathcal G}}{P}}{\longrightarrow}\konep{\mathbb I(U_P)}\longrightarrow\kone{\mathbb I(U_P^{ab})}\cong\mathbb I(U_P^{ab})^\times,
\end{equation*}
where the isomorphism is induced by taking determinants over $\mathbb I(U_P^{ab})$. We now show that the image of the map $\maptheta{\overline{\mathcal G}}{}=(\maptheta{\overline{\mathcal G}}{P})$ lies in $\Phi^{\overline{\mathcal G}}$.
\begin{theorem}\label{cong} Let $\Xi\in\konep{\mathbb I[[\cG]]}$ and for all subgroups $P$ of $\overline{\mathcal G}$, put $\Xi_{U_P^{ab}}:=\maptheta{\overline{\mathcal G},ab}{P}(\Xi)\in\mathbb I(U_P^{ab})^\times$.
\begin{enumerate}
\item\label{C1} For all subgroups $P, P'$ of $\overline{\mathcal G}$ with $[P',P']\leq P\leq P'$, we have
\begin{equation*}
\mathrm{Nr}_P^{P'}(\Xi_{U_{P'}^{ab}})=\Pi_P^{P'}(\Xi_{U_{P'}^{ab}}).
\end{equation*}
\item\label{C2} For all subgroups $P$ of $\overline{\mathcal G}$ and all $g$ in $\overline{\mathcal G}$, we have $\Xi_{gU_{P}^{ab}g^{-1}}=g\Xi_{U_{P}^{ab}}g^{-1}$.
\item\label{C3} For all subgroups $P\leq P'\leq\overline{\mathcal G}$ with $[P':P]=p$ and $P\neq (1)$, we have
\begin{equation*}
\mathrm{ver}_P^{P'}(\Xi_{U_{P'}^{ab}})\equiv \Xi_{U_P^{ab}} \pmod{{\mathscr T}_{P,P'}} (\mbox{ resp. } {\mathscr T}_{P,P',\sS} \mbox{ and } \widehat{\mathscr T}_{P,P'}).
\end{equation*}
\item\label{C4} For all $P\in C(\overline{\mathcal G})$ we have $\alpha_P(\Xi_{U_{P}^{ab}})\equiv\prod_{P'\in C_P(\overline{\mathcal G})}\alpha_{P'}(\Xi_{U_{P'}^{ab}})\pmod{p{\mathscr T}_P}$.
\end{enumerate}
\end{theorem}
To prove this theorem, we recall an explicit description of the map $\maptheta{P',ab}{P}$. We write $n_{P'/P}:=[P':P]=[U_{P'}:U_P]$. Since $\mathbb I(U_{P'})$ is a local ring, the natural homomorphism
\begin{equation*}
q_{P'}:\mathbb I(U_{P'})^\times\longrightarrow\kone{\mathbb I(U_{P'})}
\end{equation*}
is \emph{surjective}. For any $\Xi\in\kone{\mathbb I(U_{P'})}$, let $\widetilde\Xi\in\mathbb I(U_{P'})^\times$ denote a pre-image under $q_{P'}$. We denote the set of left coset representatives of $U_P$ in $U_{P'}$ by $C(P',P):=\left\lbrace c_i:1\leq i\leq n_{P'/P}\right\rbrace$. Then as an $\mathbb I(U_P)$-module we have
\begin{equation*}
\mathbb I(U_{P'})\cong\bigoplus_{i=1}^{n_{P'/P}}\mathbb I(U_P)c_i.
\end{equation*}
Let $M_{C(P',P)}(\widetilde\Xi)$ denote the matrix in $M_{n_{P'/P}}(\mathbb I(U_P))$ of the automorphism given by multiplication by $\widetilde\Xi$ on the right, and let $\Pi_{P',P}:M_{n_{P'/P}}(\mathbb I(U_P))\longrightarrow M_{n_{P'/P}}(\mathbb I(U_P^{ab}))$ denote the natural projection. Then
\begin{equation*}
\maptheta{P',ab}{P}(\Xi)=\det\left(\Pi_{P',P}(M_{C(P',P)}(\widetilde\Xi))\right)\in\mathbb I(U_{P}^{ab})^\times.
\end{equation*}
\begin{proof}[Proof of Theorem \ref{cong}(\ref{C1}):] Consider the following diagram:
\begin{equation*}
\xymatrix{
\konep{\mathbb I[[\cG]]}\ar[r]^{\maptheta{\overline{\mathcal G}}{P}}\ar[d]^{\maptheta{\overline{\mathcal G}}{P'}} & \konep{\mathbb I(U_{P})}\ar[d]^{\pi_P} &\\
\konep{\mathbb I(U_{P'})}\ar[r]^{\maptheta{P'}{P}}\ar[d]^{\pi_{P'}} & \mathbb I(U_{P}^{ab})^\times\ar[dd]^{\Pi^{P'}_P} \\
\mathbb I(U_{P'})^\times\ar[rd]^{\mathrm{Nr}_P^{P'}} & \\
& \mathbb I\left(U_P/[U_{P'},U_{P'}]\right)^\times. }
\end{equation*}
The upper quadrilateral in the diagram is easily seen to be commutative. The lower quadrilateral is also commutative, since the coset space $C(P',P)$ can be regarded as an $\mathbb I(U_P/[U_{P'},U_{P'}])$-basis of $\mathbb I(U_{P'}^{ab})$. Therefore, we have
\begin{equation*}
\mathrm{Nr}_P^{P'}(\Xi_{P'})=\mathrm{Nr}_P^{P'}(\pi_{P'}(\widetilde\Xi))=\Pi^{P'}_P(\det\left(\Pi_{P',P}(M_{C(P',P)}(\widetilde\Xi))\right)) =\Pi^{P'}_P(\maptheta{P',ab}{P}(\Xi)).
\end{equation*}
\end{proof}
\begin{proof}[Proof of Theorem \ref{cong}(\ref{C2}):] Let $C:=C(\overline{{\mathcal G}},P)$. Then, for any $g\in\overline{\mathcal G}$, the set $gCg^{-1}:=\{gc_ig^{-1}\mid c_i\in C\}$ is a set of left coset representatives of $gU_Pg^{-1}=U_{gPg^{-1}}$ in ${\mathcal G}$.
By definition, we have
\begin{equation*}
\Xi_{gPg^{-1}}=\maptheta{\overline{\mathcal G},ab}{gPg^{-1}}(\Xi)=\det\left(\Pi_{\overline{\mathcal G},gPg^{-1}}(gM_{C}(\widetilde\Xi)g^{-1})\right)=g\det\left(\Pi_{\overline{\mathcal G},P}(M_{C}(\widetilde\Xi))\right)g^{-1}=g\Xi_{P}g^{-1}.
\end{equation*}
This proves the claimed equality.
\end{proof}
\begin{proof}[Proof of Theorem \ref{cong}(\ref{C3}):] The proof of (C3) is the same as the proof of (M3) in \cite[Lemma 85]{kakde}.
\end{proof}
For the proof of Theorem \ref{cong}(\ref{C4}), we need the following lemma.
\begin{lemma}\label{log-eta-res} For all $x\in\konep{\mathbb I[[\cG]]}$ and all $P\in C(\overline{\mathcal G})$, we have
\begin{equation*}
\log_P\left(\dfrac{\alpha_P(\maptheta{{\mathcal G}}{P}(x))}{\prod_{P'\in C_P(\overline{\mathcal G})}\alpha_{P'}(\maptheta{{\mathcal G}}{P'}(x))}\right)= p(\eta_P\circ\Res{\overline{\mathcal G}}{P})(\mathfrak{L}_{\overline{\mathcal G}}(x)).
\end{equation*}
\end{lemma}
\begin{proof}
The lemma follows from the commutativity of the following diagram
\begin{equation*}
\xymatrix{
\konep{\mathbb I[[\cG]]}\ar[r]^{\log_{\overline{\mathcal G}}}\ar[d]^{\maptheta{\overline{\mathcal G}}{P}} &\mathbb I[\mathrm{Conj}{\overline{\mathcal G}}]^\tau[\frac{1}{p}]\ar[d]_{\Res{\overline{\mathcal G}}{P}}\ar@/^1pc/[ddr]^a \\
\prod_{P\in C(\overline{\mathcal G})}\konep{\mathbb I(U_P)}\ar[r]^{\log_{P}}\ar@/_1pc/[rrd]_b &\prod_{P\in C(\overline{\mathcal G})}\mathbb I[P]^\tau[\frac{1}{p}]\ar[dr]^c \\
& &\prod_{P\in C(\overline{\mathcal G})}\mathbb I[P]^\tau[\frac{1}{p}], \\
}
\end{equation*}
where
\begin{eqnarray*}
a(x) &:=&\left(\eta_P(\Res{\overline{\mathcal G}}{P}((1-p^{-1}\varphi_{conj})(x)))\right)_P\\
b((x_P)_P)&:=&\left( p^{-1}\log_Q\left(\dfrac{\alpha_Q(\maptheta{{\mathcal G}}{Q}(x_Q))}{\prod_{P'\in C_Q(\overline{\mathcal G})}\alpha_{P'}(\maptheta{{\mathcal G}}{P'}(x_{P'}))}\right) \right)_Q.
\end{eqnarray*}
By Lemma \ref{log-theta-res}, the square in the diagram is commutative. To show that the triangles commute, as in \cite{kakde}, the map $c$ is chosen as
\begin{equation*}
c((x_P)_P):=\left((1-\delta_Pp^{-1}\varphi_{conj})(x_P)-\sum_{P'\in C_P({\mathcal G})}\varphi_{conj}(x_{P'})\right)_P,
\end{equation*}
where $\delta_P=1$ if $P$ is non-trivial, and $0$ otherwise. The commutativity of the triangles then follows analogously, as in \cite[Lemma 7.4]{kakde}.
\end{proof}
\begin{lemma}\label{res-TP} We have $\Res{\overline{\mathcal G}}{P}({\mathscr R}[\mathrm{Conj}{\overline{\mathcal G}}]^\tau)\subset {\mathscr T}_P$.
\end{lemma}
\begin{proof}
Let $x\in {\mathscr R}[\mathrm{Conj}{\overline{\mathcal G}}]^\tau$. Then $x=\sum_{(i_1,\cdots,i_n)\geq 0}(\sum_{g\in\overline{\mathcal G}}c_{g,(i_1,\cdots,i_n)}\kappa(g))X_1^{i_1}\cdots X_n^{i_n}$, where $c_{g,(i_1,\cdots,i_n)}\in {\mathscr R}$, for all $g$ and $(i_1,\cdots,i_n)$. Consider the normalizer $N_{\overline{\mathcal G}}(P)$ of $P$ in $\overline{\mathcal G}$. Then $\Res{\overline{\mathcal G}}{P}=\Res{N_{\overline{\mathcal G}}(P)}{P}\circ\Res{\overline{\mathcal G}}{N_{\overline{\mathcal G}}(P)}$. Therefore,
\begin{equation*}
\Res{\overline{\mathcal G}}{P}(x)=\sum_{(i_1,\cdots,i_n)\geq 0}\left(\sum_{g\in\overline{\mathcal G}}c_{g,(i_1,\cdots,i_n)}\Res{N_{\overline{\mathcal G}}(P)}{P}\left(\Res{\overline{\mathcal G}}{N_{\overline{\mathcal G}}(P)}(\kappa(g))\right)\right)X_1^{i_1}\cdots X_n^{i_n}.
\end{equation*}
Since for every $h,h'\in N_{\overline{\mathcal G}}(P)$, $h^{-1}h'h\in P$ if and only if $h'\in P$, by equation \eqref{kappa} we have
\begin{equation*}
\Res{N_{\overline{\mathcal G}}(P)}{P}(\kappa_{N_{\overline{\mathcal G}}(P)}(h'))=
\begin{cases}
\sum_{x\in W_{\overline{\mathcal G}}(P)}\kappa_P((x^\tau)^{-1}h'^\tau x^\tau), & \mbox{ if } h'\in P,\\
0, & \mbox{ otherwise}.
\end{cases}
\end{equation*}
It follows that each term $c_{g,(i_1,\cdots,i_n)}\Res{N_{\overline{\mathcal G}}(P)}{P}\left(\Res{\overline{\mathcal G}}{N_{\overline{\mathcal G}}(P)}(\kappa(g))\right)X_1^{i_1}\cdots X_n^{i_n}$ lies in ${\mathscr T}_P$.
\end{proof}
\begin{proof}[Proof of Theorem \ref{cong}(\ref{C4}):] Note that $p{\mathscr T}_P\subset p{\mathscr R}[\overline{\mathcal G}]^\tau$. Taking $I=p{\mathscr R}[\overline{\mathcal G}]^\tau$ in Lemma \ref{log-sub}, we have an isomorphism $I\stackrel{\log_{\overline{\mathcal G}}^{-1}}{\longrightarrow} 1+I$. It then follows from Proposition \ref{int-log} and Lemma \ref{log-eta-res} that the congruences follow if $\eta_P\circ\Res{\overline{\mathcal G}}{P}({\mathscr R}[\mathrm{Conj}{\overline{\mathcal G}}]^\tau)\subset {\mathscr T}_P$. Since $\eta_P$ preserves ${\mathscr T}_P$, it follows from Lemma \ref{res-TP} above that this containment holds, and hence the congruence follows.
\end{proof}
This finishes the proof of Theorem \ref{cong}, and of Theorem D. By this theorem, together with Theorem \ref{Theta-iso} below, to show that an element is in the image of $\konep{\mathbb I[[\cG]]}$ under the map $\Theta^{\overline{\mathcal G}}$, it is sufficient to verify the statements in Theorem \ref{cong}.
\begin{defn}
We now consider the map ${\mathcal L}=({\mathcal L}_P):\Phi^{\overline{\mathcal G}}\longrightarrow\Psi^{\overline{\mathcal G}}$, defined by
\begin{equation*}
{\mathcal L}_P((x_C))=\begin{cases}
\frac{1}{p^2|P|}\log\left(\frac{\alpha_P(x_P)^{p|P|}}{u_P^{\overline{\mathcal G}}(\alpha((x_C)))}\right), &\mbox{ if } P\notin C(\overline{\mathcal G}) \\
\frac{1}{p}\log\left(\frac{\alpha_P(x_P)^{p|P|}}{u_P^{\overline{\mathcal G}}(\alpha((x_C)))}\right), &\mbox{ if } P\in C(\overline{\mathcal G}), P\neq\{1\}\\
\frac{1}{p}\log\left(\frac{\alpha_{\{1\}}(x_{\{1\}})^{p}} {\varphi(x_{\{1\}})(u_{\{1\}}^{\overline{\mathcal G}}(\alpha((x_C))))}\right), &\mbox{ if } P=\{1\}.
\end{cases}
\end{equation*}
\end{defn}
\begin{lemma} The following sequence is exact:
\begin{equation*}
1\longrightarrow\mu(\cO)\times\mathbb W\times{\mathcal G}^{ab}\longrightarrow\Phi^{\overline{\mathcal G}}\stackrel{{\mathcal L}}{\longrightarrow}\Psi^{\overline{\mathcal G}}\stackrel{\omega}{\longrightarrow}\mathbb W\times{\mathcal G}^{ab}\longrightarrow 1.
\end{equation*}
More precisely, the map $\mu(\cO)\times\mathbb W\times{\mathcal G}^{ab}\longrightarrow\Phi^{\overline{\mathcal G}}\subset\prod_{P\leq\overline{\mathcal G}}\mathbb I(U_P^{ab})^\times$ is the composition
\begin{equation*}
\mu(\cO)\times\mathbb W\times{\mathcal G}^{ab}\longrightarrow\prod_{P\leq\overline{\mathcal G}}\mu(\cO)\times\mathbb W\times U_P^{ab}\hookrightarrow\prod_{P\leq\overline{\mathcal G}}\mathbb I(U_P^{ab})^\times,
\end{equation*}
where the first map is the identity on $\mu(\cO)$ and the transfer homomorphism from ${\mathcal G}^{ab}$ to $U_P^{ab}$ for each $P\leq\overline{\mathcal G}$.
\end{lemma}
\begin{proof}
Clearly the image of ${\mathcal L}$ is contained in $\prod_{P\leq\overline{\mathcal G}}{\mathbb Q}_p\otimes\mathbb I(U_P^{ab})$. To show that the image is contained in $\Psi^{\overline{\mathcal G}}$, we show that the conditions defining the set $\Psi^{\overline{\mathcal G}}$ are satisfied. Below we show how the first condition defining $\Psi^{\overline{\mathcal G}}$ can be verified; the rest of the conditions can be demonstrated easily from the conditions defining $\Phi^{\overline{\mathcal G}}$ (\cite[Lemma 88]{kakde}). Let $P\leq P'\leq\overline{\mathcal G}$ such that $[P',P']\leq P$, with $P$ a non-trivial cyclic group if $[P',P']\neq P$. We then have three cases to consider: (i) $P$ is not cyclic, (ii) $P$ is cyclic but $P'$ is not cyclic, and (iii) $P'$ is cyclic.

Case (i): Suppose $P$ is not cyclic. Letting $C'$ run through all cyclic subgroups of $\overline{\mathcal G}$ with ${C'}^p\leq P'$ and $C$ run through all cyclic subgroups of $\overline{\mathcal G}$ with $C^p\leq P$, we have
\begin{equation*}
\begin{split}
Tr_P^{P'}({\mathcal L}_{P'}((x_C)))&=Tr_P^{P'}\left(\frac{1}{p^2|P'|}\log\left(\frac{\alpha_{P'}(x_{P'})^{p|P'|}}{u_{P'}^{\overline{\mathcal G}}(\alpha((x_C)))}\right)\right)\\
&=Tr_P^{P'}\left(\frac{1}{p^2|P'|}\log\left(\frac{(x_{P'})^{p^2|P'|}}{\prod_{C'}\varphi(\alpha_{C'}(x_{C'}))^{|C'|}}\right)\right)\\
&=\frac{1}{p^2|P'|}\log\left(\frac{Nr_P^{P'}(x_{P'})^{p^2|P'|}}{Nr_P^{P'}(\prod_{C'}\varphi(\alpha_{C'}(x_{C'}))^{|C'|})}\right)\\
&=\frac{1}{p^2|P'|}\log\left(\frac{\Pi_P^{P'}(x_{P'})^{p^2|P'|}}{\prod_{C}\varphi(\alpha_{C}(x_{C}))^{p|C|}}\right), \mbox{ by the first condition defining }\Phi^{\overline{\mathcal G}} \\
&=\Pi_P^{P'}\left(\frac{1}{p^2|P'|}\log\left(\frac{\alpha_P(x_{P})^{p|P'|}}{\prod_{C}\varphi(\alpha_{C}(x_{C}))^{|C|}}\right)\right)\\
&=\Pi_P^{P'}\left({\mathcal L}_P((x_C))\right).
\end{split}
\end{equation*}
Case (ii): Suppose $P$ is cyclic but $P'$ is not cyclic. Let $C$ run through all cyclic subgroups of $\overline{\mathcal G}$ with $C^p\leq P$. Then
\begin{equation*}
\begin{split}
\eta_P(Tr_P^{P'}({\mathcal L}_{P'}((x_C))))&=\Pi_P^{P'}\left(\eta_P \left(\frac{1}{p^2|P|}\log\left(\frac{(x_{P})^{p^2|P|}}{\prod_{C}\varphi(\alpha_{C}(x_{C}))^{|C|}}\right)\right)\right).
\end{split}
\end{equation*}
Since $\alpha_P(\varphi(\alpha_C(x_C)))=\alpha_P(\alpha_C(x_C))^p$ (resp. $1$) if $C^p=P$ (resp. $C^p\neq P$), letting $C$ run through all cyclic subgroups of $\overline{\mathcal G}$ with $C^p=P$, we have
\begin{equation*}
\begin{split}
\eta_P(Tr_P^{P'}({\mathcal L}_{P'}((x_C))))&=\Pi_P^{P'}\left(\frac{1}{p}\log\left(\frac{\alpha_P(x_{P})}{\prod_{C}\varphi(\alpha_{C}(x_{C}))}\right)\right)\\
&=\Pi_P^{P'}({\mathcal L}_P((x_C))).
\end{split}
\end{equation*}
Case (iii): Suppose $P'$ is cyclic. This case follows from the following lemma:
\begin{lemma} Let $P\leq P'\leq\overline{\mathcal G}$ be such that $[P':P]=p$. Let $C\in C(\overline{\mathcal G})$ be such that $C^p$ is contained in $P'$ but not in $P$. Then $\mathrm{Nr}^{P'}_P(\varphi(\alpha_C(x_C)))=1$ in $\mathbb I(U_P/[U_{P'},U_{P'}])$.
\end{lemma}
\begin{proof}
By definition, we have $\alpha_C(x_C)=\frac{x_C^p}{\prod_{k=0}^{p-1}\omega_C^k(x_C)}$. Therefore, $\varphi(\alpha_C(x_C))=\frac{\varphi(x_C^p)}{\prod_{k=0}^{p-1}\varphi(\omega_C^k(x_C))}=\frac{\varphi(x_C^p)}{\prod_{k=0}^{p-1}\omega_{C^p}^k(\varphi(x_C))}$.
Since $\mathrm{Nr}^{P'}_P(\varphi(x_C))=\prod_{k=0}^{p-1}\omega_{C^p}^k(\varphi(x_C))$ by a straightforward generalization of \cite[Lemma 50]{kakde}, and the norm of an element of $\mathbb I(U_P/[U_{P'},U_{P'}])$ is its $p$-th power, the lemma follows.
\end{proof}
Regarding the second and third conditions defining $\Psi^{\overline{\mathcal G}}$, both follow easily from (C2) and (C4) respectively. Finally, to show that the image of ${\mathcal L}$ is contained in $\prod_{P\leq\overline{\mathcal G}}\mathbb I(U_P^{ab})$, it is enough to note that ${\mathcal L}_P((x_C))\in{\mathscr T}_P$ for all $P\in C(\overline{\mathcal G})$. Then, by Proposition \ref{id-beta}, it follows that $\mathrm{im}({\mathcal L})\subseteq\prod_{P\leq\overline{\mathcal G}}\mathbb I(U_P^{ab})$. The exactness of the four term sequence can also be shown as in \cite[Lemma 88]{kakde}. However, there is one crucial input, namely the fact that the torsion elements of $\mathbb I(U_P^{ab})^\times$ are all contained in $\mu(\cO)\times\mathbb W\times U_P^{ab}$; this is a generalization of a theorem of Higman \cite{higman}, and it is provided by Theorem \ref{k-tors} and Corollary \ref{sk-trivial}. We then have the following commutative diagram:
\begin{equation*}
\xymatrix{
\konep{\mathbb I[[\cG]]}\ar[r]^{\mathfrak L}\ar[d]_{\Theta^{\overline{\mathcal G}}} &\mathbb I(Z)[\mathrm{Conj}{\overline{\mathcal G}}]^\tau\ar[d]^{\beta^{\overline{\mathcal G}}}\\
\Phi^{\overline{\mathcal G}}\ar[r]_{{\mathcal L}} &\Psi^{\overline{\mathcal G}}. }
\end{equation*}
In other words, the image of $\Theta^{\overline{\mathcal G}}$ is contained in $\Phi^{\overline{\mathcal G}}$.
\end{proof}
In the same way, we can prove that the image of $\widehat{\Theta_\sS^{\overline{\mathcal G}}}$ is contained in $\widehat{\Phi_\sS^{\overline{\mathcal G}}}$, which we record below.
\begin{theorem}\label{hat-Theta} The image of $\konep{\widehat{\mathbb I[[\cG]]_\sS}}$ under $\widehat{\Theta_\sS^{\overline{\mathcal G}}}$ is contained in $\widehat{\Phi_\sS^{\overline{\mathcal G}}}$.
\end{theorem}
\begin{theorem}\label{Theta-iso} The map $\Theta^{\overline{\mathcal G}}:\konep{\mathbb I[[\cG]]}\longrightarrow\Phi^{\overline{\mathcal G}}$ is an isomorphism.
\end{theorem}
\begin{proof}
The lemmas regarding the restrictions under integral logarithms give us the following commutative diagram:
\begin{equation*}
\xymatrix{
1\ar[r] &\mu(\cO)\times\mathbb W\times{\mathcal G}^{ab}\ar[r]\ar[d]_{=} &\konep{\mathbb I[[\cG]]}\ar[r]^{\mathfrak L}\ar[d]^{\Theta^{\overline{\mathcal G}}} &\mathbb I(Z)[\mathrm{Conj}{\overline{\mathcal G}}]^\tau\ar[r]\ar[d]^{\beta^{\overline{\mathcal G}}}_\cong &\mathbb W\times{\mathcal G}^{ab}\ar[r]\ar[d]_{=} &1\\
1\ar[r] &\mu(\cO)\times\mathbb W\times{\mathcal G}^{ab}\ar[r] &\Phi^{\overline{\mathcal G}}\ar[r]_{\mathcal L} &\Psi^{\overline{\mathcal G}}\ar[r] &\mathbb W\times{\mathcal G}^{ab}\ar[r] &1. }
\end{equation*}
The Five Lemma then gives the result.
\end{proof}
\begin{theorem} The map $\Theta^{\overline{\mathcal G}}_\sS$ maps $\konep{\mathbb I[[\cG]]_\sS}$ into $\Phi^{\overline{\mathcal G}}_\sS$. Further,
\begin{equation*}
{\Phi_\sS^{\overline{\mathcal G}}}\cap\prod_{P\leq{\overline{\mathcal G}}}\mathbb I(U_P^{ab})^\times=\mathrm{im}(\Theta^{\overline{\mathcal G}}).
\end{equation*}
\end{theorem}
\begin{proof}
Note that $\widehat{\Phi_\sS^{\overline{\mathcal G}}}\cap\prod_{P\leq{\overline{\mathcal G}}}\mathbb I(U_P^{ab})_\sS^\times=\Phi_\sS^{\overline{\mathcal G}}$.
By Theorems \ref{cong} and \ref{hat-Theta}, it follows that
\begin{equation*}
\mathrm{im}(\Theta_\sS^{\overline{\mathcal G}})\subset\Phi_\sS^{\overline{\mathcal G}}.
\end{equation*}
Further, since ${\Phi_\sS^{\overline{\mathcal G}}}\cap\prod_{P\leq{\overline{\mathcal G}}}\mathbb I(U_P^{ab})^\times=\Phi^{\overline{\mathcal G}}$, we get from Theorem \ref{Theta-iso} that
\begin{equation*}
{\Phi_\sS^{\overline{\mathcal G}}}\cap\prod_{P\leq{\overline{\mathcal G}}}\mathbb I(U_P^{ab})^\times=\mathrm{im}(\Theta^{\overline{\mathcal G}}).
\end{equation*}
\end{proof}
\section{Relations between the congruences over $\mathbb I[[{\mathcal G}]]$ and ${\mathbb Z}_p[[{\mathcal G}]]$}
\subsection{Congruences over ${\mathbb Z}_p[[{\mathcal G}]]$} We first recall the main result of Kakde \cite{kakde}. As in the previous section, we fix a lift $\widetilde\Gamma$ of $\Gamma$ in ${\mathcal G}$. Then we can identify ${\mathcal G}$ with $H\rtimes\Gamma$. Fix $e\in\mathbb N$ such that $\widetilde\Gamma^{p^e}\subset Z({\mathcal G})$, and put $\overline{\mathcal G}:={\mathcal G}/{\widetilde\Gamma^{p^e}}$ and $R:=\Lambda_\cO(\widetilde\Gamma^{p^e})$. Then $\Lambda_\cO({\mathcal G})\cong R[\overline{\mathcal G}]^\tau$, the twisted group ring with multiplication
\begin{equation*}
(h\widetilde\gamma^a)^\tau(h'\widetilde\gamma^b)^\tau=\widetilde\gamma^{p^e\left[\frac{a+b}{p^e}\right]}(h\widetilde\gamma^a\cdot h'\widetilde\gamma^b)^\tau,
\end{equation*}
where $g^\tau$ is the image of $g\in {\mathcal G}$ in $R[\overline{\mathcal G}]^\tau$. Let $P$ be a subgroup of $\overline{\mathcal G}$ and $U_P$ be the inverse image of $P$ in ${\mathcal G}$. Recall that $N_{\overline{\mathcal G}}P$ denotes the normalizer of $P$ in $\overline{\mathcal G}$, $W_{\overline{\mathcal G}}(P):=N_{\overline{\mathcal G}}P/P$, and $C(\overline{\mathcal G})$ denotes the set of cyclic subgroups of $\overline{\mathcal G}$. If $P\in C(\overline{\mathcal G})$, then $U_P$ is a rank one abelian subgroup of ${\mathcal G}$, and for every $P\in C(\overline{\mathcal G})$, we set
\begin{equation*}
T_P:=\{\sum_{g\in W_{\overline{\mathcal G}}(P)}g^\tau x(g^\tau)^{-1}\mid x\in R[P]^\tau\}.
\end{equation*}
Let $P\leq P'\leq\overline{\mathcal G}$. Then consider the homomorphism ${\mathbb Z}_p[[{\mathcal G}]]\longrightarrow{\mathbb Z}_p[[{\mathcal G}]]$ given by $x\mapsto \sum_{g\in P'/P}\tilde{g}x\tilde{g}^{-1}$, where $\tilde{g}$ is a lift of $g$. We define $T_{P,P'}$ to be the image of this homomorphism. For two subgroups $P, P'$ of $\overline{\mathcal G}$ with $[P',P']\leq P\leq P'$, consider
\begin{equation}
\begin{split}
&\mbox{nr}_P^{P'}:\Lambda_\cO(U_{P'}^{ab})^\times\longrightarrow\Lambda_\cO(U_P/[U_{P'},U_{P'}])^\times,\quad\mbox{(the norm map)},\\
&\pi_P^{P'}:\Lambda_\cO(U_{P'}^{ab})\longrightarrow\Lambda_\cO(U_P/[U_{P'},U_{P'}]),\quad\mbox{(the projection map)}.
\end{split}
\end{equation}
For $P\in C(\overline{\mathcal G})$ with $P\neq (1)$, fix a homomorphism $\omega_P:P\longrightarrow\bar{\mathbb Q}_p^\times$ of order $p$, and also a homomorphism $\omega_1:=\omega_{\{1\}}:\widetilde\Gamma^{p^e}\longrightarrow\bar{\mathbb Q}_p^\times$ of order $p$. The homomorphism $\omega_P$ induces the following homomorphism, which we again denote by the same symbol:
\begin{equation}
\omega_P:\Lambda_\cO(U_P)^\times\longrightarrow\Lambda_\cO(U_P)^\times, g\mapsto \omega_P(g)g.
\end{equation}
For $P\leq \overline{\mathcal G}$, consider the homomorphism $\alpha_P:\Lambda_\cO(U_P)_{S}^\times\longrightarrow\Lambda_\cO(U_P)_{S}^\times$ defined by
\begin{equation}
\alpha_P(x):=\begin{cases}
x^p\varphi(x)^{-1} &\mbox{ if } P=\{1\}\\
x^p(\prod_{k=0}^{p-1}\omega_P^k(x))^{-1} &\mbox{ if } P\neq\{1\}\mbox{ and cyclic}\\
x^p &\mbox{ if } P \mbox{ is not cyclic}.
\end{cases}
\end{equation}
Note that, for all $P\leq\overline{\mathcal G}$, there is an action of ${\mathcal G}$ and $\overline{\mathcal G}$ on $U_P^{ab}$ by conjugation, since $\widetilde\Gamma^{p^e}$ is central. Now, for each $U^{ab}\in\Sigma({\mathcal G})$, consider the following map
\begin{equation}
\konep{{\mathbb Z}_p[[{\mathcal G}]]_S}\longrightarrow \konep{{\mathbb Z}_p[[U]]_S}\longrightarrow \konep{{\mathbb Z}_p[[U^{ab}]]_S}\longrightarrow{\mathbb Z}_p[[U^{ab}]]_S^\times\subset Q({\mathbb Z}_p[[U^{ab}]])^\times.
\end{equation}
Taking all the $U^{ab}$ in $\Sigma({\mathcal G})$, we get the following homomorphism
\begin{equation}
\theta_{\Sigma({\mathcal G})}:\konep{{\mathbb Z}_p[[{\mathcal G}]]_S}\longrightarrow\prod_{U^{ab}\in\Sigma({\mathcal G})}Q({\mathbb Z}_p[[U^{ab}]])^\times.
\end{equation}
For any subgroup $P$ of $\overline{\mathcal G}$, we write $\theta_{\overline{\mathcal G},ab}^{P}$ for the following natural composite homomorphism
\begin{equation*}
\konep{{\mathbb Z}_p[[{\mathcal G}]]}\stackrel{\theta_{\overline{\mathcal G}}^{P}}{\longrightarrow}\kone{{\mathbb Z}_p[[U_P]]}\longrightarrow\kone{{\mathbb Z}_p[[U_P^{ab}]]}\cong{\mathbb Z}_p[[U_P^{ab}]]^\times,
\end{equation*}
where the isomorphism is induced by taking determinants over ${\mathbb Z}_p[[U_P^{ab}]]$.
\begin{propn}\emph{\cite{kakde}} Let ${\mathcal G}$ be a rank one pro-$p$ group. Then the set $\Sigma({\mathcal G}):=\{U_P^{ab}:P\leq\overline{\mathcal G}\}$ satisfies the condition $(\ast)$. Further, an element $(\xi_\cA)_{\cA}\in\prod_{\cA\in\Sigma({\mathcal G})}\Lambda_\cO(\cA)^\times$ belongs to $\mathrm{im}(\theta_{\Sigma({\mathcal G})})$ if and only if it satisfies all of the following four conditions.
\begin{enumerate}
\item For all subgroups $P, P'$ of $\overline{\mathcal G}$ with $[P',P']\leq P\leq P'$, one has
\begin{equation}
\mathrm{nr}_P^{P'}(\xi_{U_{P'}^{ab}})=\pi_P^{P'}(\xi_{U_{P'}^{ab}}).
\end{equation}
\item For all subgroups $P$ of $\overline{\mathcal G}$ and all $g$ in $\overline{\mathcal G}$, one has $\xi_{gU_{P}^{ab}g^{-1}}=g\xi_{U_{P}^{ab}}g^{-1}$.
\item For all subgroups $P\leq P'\leq\overline{\mathcal G}$ with $[P':P]=p$ and $P\neq (1)$, we have
\begin{equation*}
\mathrm{ver}_P^{P'}(\xi_{U_{P'}^{ab}})\equiv \xi_{U_P^{ab}} \pmod{T_{P,P'}}.
\end{equation*}
\item For all $P\in C(\overline{\mathcal G})$ one has $\alpha_P(\xi_{U_{P}^{ab}})\equiv\prod_{P'\in C_P(\overline{\mathcal G})}\alpha_{P'}(\xi_{U_{P'}^{ab}})\pmod{pT_P}$.
\end{enumerate}
\end{propn}
We recall from \cite{kakde} the following set of congruences $\phi^{\overline{\mathcal G}}$, defined as follows.
\begin{defn}\label{phi-G} Let $\phi^{\overline{\mathcal G}}$ denote the subgroup of $\prod_{P\leq\overline{\mathcal G}}\Lambda_\cO(U_P^{ab})^\times$ consisting of tuples $(x_P)$ satisfying the conditions of the above proposition.
\end{defn}
\subsection{Relation between the congruences} Consider the following commutative diagram, which is easily seen to be induced by each specialization map:
\begin{equation}
\xymatrix{
\kone{\mathbb I[[{\mathcal G}]]}\ar[r]\ar[d] & \Phi_{\mathscr R}^{\overline{\mathcal G}}\ar[d] \\
\kone{{\mathbb Z}_p[[{\mathcal G}]]}\ar[r] & \phi^{\overline{\mathcal G}}.
}
\end{equation}
From this commutative diagram, it is easy to see that the congruences over $\mathbb I[[\cG]]$ imply the congruences over ${\mathbb Z}_p[[{\mathcal G}]]$. The following proposition easily follows from this commutative diagram and the interpolation formula of the $p$-adic L-function over $\mathbb I[[{\mathcal G}]]$.
\begin{propn} Let $\Xi\in\kone{\mathbb I[[{\mathcal G}]]}$ be a $p$-adic L-function over $\mathbb I[[{\mathcal G}]]$. Then, under every specialization map $\phi_k$, $\phi_k(\Xi)\in\kone{{\mathbb Z}_p[[{\mathcal G}]]}$ is a $p$-adic L-function over ${\mathbb Z}_p[[{\mathcal G}]]$.
\end{propn}
\section{Application to $p$-adic L-functions}\label{applications}
In this section, we generalize some of the results of Ritter-Weiss that have been used to prove the congruences, and hence the main conjecture, in some important cases. More precisely, we generalize the torsion congruences which have been used as a basic step in proving the congruences.
\subsection{Torsion Congruences and $p$-adic L-functions} Let $F^{ab,p}$ be the maximal pro-$p$ abelian extension of $F$ that is unramified outside $p$ and $\infty$. We set $G_F=\mathrm{Gal}(F^{ab,p}/F)$. Let $\{f_\kappa\}$ be a family of Hilbert modular forms over $F$ which is parametrized by an irreducible component $\mathbb I$ of the universal cyclotomic deformation ring $\mathcal R_F$. In fact, $\mathbb I$ is a finite flat algebra over ${\mathbb Z}_p[[\mathbb{W}]]$, where $\mathbb{W}$ is the torsion free part of $\cO_{\mathfrak p}$ as in Section \ref{hecke-algebras}. As in Definition \ref{arithmetic}, let $\phi_\kappa:\mathbb I\longrightarrow{\mathbb Z}_p$ denote an arithmetic point of weight $\kappa$, and set $\mathcal{P}_\kappa=\mathrm{ker}(\phi_\kappa)$. This induces an algebra homomorphism $\mathbb I[[G_F]]\longrightarrow{\mathbb Z}_p[[G_F]]$, which we again denote by $\phi_\kappa$. Furthermore, we extend $\phi_\kappa$ to $G_F\times\mathbb{W}$ by setting $\phi_\kappa(g)=1$ for all $g\in G_F$; the extended character is still denoted by $\phi_\kappa$. Further, note that ${\mathfrak m}=\langle p,\mathcal{P}_\kappa\rangle$ is the maximal ideal of $\mathbb I$, and we have a natural homomorphism $(\mathbb I/{\mathfrak m})[[G_F]]\longrightarrow\FF_p[[G_F]]$. For any integer $m\geq 0$, let $\psi_{m}:G_F\longrightarrow{\mathbb Z}_p^\times$ be a character of the form $\psi\chi_p^{m}$, where $\chi_p$ is the $p$-adic cyclotomic character and $\psi$ is a character of finite order. We again extend the character $\psi_m$ to the group $G_F\times\mathbb{W}$ by setting $\psi_m$ to be equal to 1 on $\mathbb W$. Suppose that there exists a measure $\mu_F$ in $\mathbb I[[G_F]]$ that interpolates the critical values of each of the representations $\ad{\rho_{f_\kappa}}\otimes\psi$ for characters $\psi$ of $G_F$. More precisely,
\begin{equation*}
\int_{G_F\times\mathbb{W}}\chi(g)\phi_\kappa(g)d\mu_F(g)=L^\ast(\ad{\rho_{f_\kappa}}\otimes\chi,0),
\end{equation*}
where $L^\ast(\ad{\rho_{f_\kappa}}\otimes\chi,0)$ denotes the critical value $L(\ad{\rho_{f_\kappa}}\otimes\chi,0)$, normalized by suitable archimedean periods attached to $\ad{\rho_{f_\kappa}}$ and with certain Euler factors removed, as in the interpolation formula \eqref{interpolate}. Now, we consider the measure defined by
\begin{equation*}
\mu_\kappa(g)=\int_{\gamma'\in \{g\}\times\mathbb{W}}\phi_\kappa(\gamma')d\mu_F(\gamma'), \mbox{ for all } g\in G_F.
\end{equation*}
Then $\mu_\kappa\in{\mathbb Z}_p[[G_F]]$, and
\begin{equation}
\int_{G_F}\psi_m(g)d\mu_\kappa(g)=\int_{G_F}\psi_m(g)\int_{\gamma'\in \{g\}\times\mathbb{W}}\phi_\kappa(\gamma')d\mu_F(\gamma') =\int_{G_F\times\mathbb{W}}\psi_m(x)\phi_\kappa(x)d\mu_F(x).
\end{equation}
Therefore, $\int_{G_F}\chi(g)d\mu_\kappa(g)=L^\ast(\ad{\rho_{f_\kappa}}\otimes\chi,0)$ for all finite order characters $\chi$. It follows that $\mu_\kappa\in{\mathbb Z}_p[[G_F]]$ is a $p$-adic L-function interpolating the special values of $\ad{f_\kappa}$ twisted by all finite order characters of $G_F$. Let $\delta^{(x)}$ be the characteristic function of a coset of an open subgroup $U$. Then $\delta^{(x)}(g)=\sum_j c_j\chi_j(g)$, for some $c_j\in{\mathbb Z}_p$ and finite order characters $\chi_j$. Let
\begin{equation}
L^\ast(\ad{\widetilde f_\kappa}/F',\chi)= e_p(\ad{\widetilde f_\kappa}/F',\chi)\mathcal{L}_p(\ad{\widetilde f_\kappa}/F',\chi)\dfrac{L_{(S,p)}(\ad{\widetilde f_\kappa}/F',\chi,0)}{\Omega_\infty(\ad{\widetilde f_\kappa}/F')},
\end{equation}
where $\mathcal{L}_p(\ad{\widetilde f_\kappa}/F',\chi):=\prod_{{\mathfrak p}\mid p}\mathcal{L}_{\mathfrak p}(\ad{\widetilde f_\kappa}/F',\chi)$ comes from the Euler factors at primes lying above $p$, and $e_p(\ad{\widetilde f_\kappa}/F',\chi)$ is the product of the local epsilon factors above $p$. We define
\begin{equation*}
L^\ast(\ad{f_\kappa},\delta^{(x)})=\sum_jc_jL^\ast(\ad{f_\kappa},\chi_j,0).
\end{equation*}
Consider the cyclotomic character $\mathcal N_F:G_F\longrightarrow{\mathbb Z}_p^\times$. Then an open subgroup $U$ of $G_F$ is said to be admissible if $\mathcal{N}_F(U)\subset 1+p{\mathbb Z}_p$, and we define $m_F(U)\geq1$ by $\mathcal N_F(U)=1+p^{m_F(U)}{\mathbb Z}_p$. Then we have the following lemma, which is a generalization of a result in \cite{ritter-weiss}; the proof presented is also a straightforward generalization of the proof due to Ritter and Weiss in loc. cit. (We note for later use that if $V$ is an admissible open subgroup of $G_{F'}$, for a totally real extension $F'$ of $F$ as below, and $U$ is an admissible open subgroup of $G_F$ contained in $ver^{-1}(V)$, then $m_F(U)\geq m_{F'}(V)-1$.)
\begin{lemma} $\mathbb I[[G_F]]$ is the inverse limit of the system $\mathbb I[G_F/U]/p^{m_F(U)}\mathbb I[G_F/U]$, with $U$ running over the cofinal system of admissible open subgroups of $G_F$.
\end{lemma}
\begin{proof}
Consider the natural map
\begin{equation*}
\mathbb I[[G_F]]\longrightarrow\varprojlim_U\mathbb I[G_F/U]/p^{m_F(U)}\mathbb I[G_F/U].
\end{equation*}
We first show that this map is surjective. Let $(x_U)_U\in\varprojlim_U\mathbb I[G_F/U]/p^{m_F(U)}\mathbb I[G_F/U]$. Then, for any $V\subseteq U$, consider the map
\begin{equation*}
\mathbb I[G_F/V]/p^{m_F(V)}\mathbb I[G_F/V]\longrightarrow \mathbb I[G_F/U]/p^{m_F(V)}\mathbb I[G_F/U]
\end{equation*}
and let the image of $x_V$ be denoted by $\overline{x_V}$. Note that, fixing $U$ and taking the projective limit over $m_F(V)$, we have
\begin{equation}
\mathbb I[G_F/U]\cong\varprojlim_V\mathbb I[G_F/U]/p^{m_F(V)}\mathbb I[G_F/U].
\end{equation}
Indeed, we have $\mathbb I[G_F/U]/p^k\mathbb I[G_F/U]\cong(\mathbb I/p^k)[G_F/U]$, for every nonnegative integer $k$; taking the projective limit with respect to $k$, we have $\mathbb I[G_F/U]=\varprojlim_k\mathbb I[G_F/U]/p^k\mathbb I[G_F/U]$. Hence the compatible system $(\overline{x_V})_V$ defines an element $y_U\in\mathbb I[G_F/U]$; as $U$ varies, these elements are compatible and so define an element of $\mathbb I[[G_F]]=\varprojlim_U\mathbb I[G_F/U]$ mapping to $(x_U)_U$. The lemma follows.
\end{proof}
\begin{lemma} The image of the measure $\mu_F\in\mathbb I[[G_F]]$ in $\mathbb I[G_F/U]/{{\mathfrak m}^k}$ is given by $\sum_{\bar g\in G_F/U}\mu_F(.,\delta^{(\bar g)})\bar g\mod{{\mathfrak m}^k}$.
\end{lemma}
\begin{proof}
Consider the measure $\mu_F\in\mathbb I[[G_F]]$. Then $\mu_F$ is a measure on $\mathbb{W}\times G_F$.
Then, it is a standard fact that the image of $\mu_F$ in $\mathbb I[G_F/U]$ is given by $\sum_{\bar g\in G_F/U}\mu_F(.,\delta^{(\bar g)})\bar g$, where $\mu_F(.,\delta^{(\bar g)})\in\mathbb I$.
\end{proof}
Now let $F'$ be a totally real extension of $F$ contained in $F^{ab,p}$, and consider the base change to $F'$: let $\widetilde f_\kappa$ denote the base-change of $f_\kappa$ to $F'$, and let $\mathbb{J}$ be the irreducible component to which $\widetilde f_\kappa$ belongs. Let $\mu_{F'}$ be the measure in $\mathbb{J}[[G_{F'}]]$ interpolating the special values of $\ad{\widetilde f_\kappa}$.
\begin{lemma}\label{invariance-measure} Let $y$ be a coset of a $\Delta$-stable admissible open subgroup of $G_{F'}$, where $\Delta=\mathrm{Gal}(F'/F)$. Then
\begin{equation}
L^\ast(\ad{\widetilde f_\kappa}/F',\delta^{(y)}_{F'})=L^\ast(\ad{\widetilde f_\kappa}/F',\delta^{(y^\gamma)}_{F'}),
\end{equation}
for all $\gamma\in\Delta$. Further, let $\widetilde\mu_{F'}$ be the image of $\mu_{F'}$ under the map $\mathbb J[[G_{F'}]]\longrightarrow\mathbb I[[G_{F'}]]$. Then $\widetilde\mu_{F'}\in\mathbb I[[G_{F'}]]^\Delta$.
\end{lemma}
\begin{proof}
Here $\gamma\in\Delta$ acts on $G_{F'}$ by conjugation and trivially on $\mathbb I$. It is enough to prove the statement for finite order characters $\chi$ of $G_{F'}$. Recall that
\begin{equation}
L^\ast(\ad{\widetilde f_\kappa}/F',\chi)= e_p(\ad{\widetilde f_\kappa}/F',\chi)\mathcal{L}_p(\ad{\widetilde f_\kappa}/F',\chi)\dfrac{L_{(S,p)}(\ad{\widetilde f_\kappa}/F',\chi,0)}{\Omega_\infty(\ad{\widetilde f_\kappa}/F')},
\end{equation}
where $L_{(S,p)}(\ad{\widetilde f_\kappa}/F',\chi,0)$ is the critical value at $s=0$ of the $L$-function $L(\ad{\widetilde f_\kappa}/F',\chi,s)$ with the Euler factors at $S$ and those above $p$ removed. Note that, by induction of $L$-functions, we have
\begin{equation}
\begin{split}
L_{(S,p)}(\ad{\widetilde f_\kappa}/F',\chi,s)=& L_{(S,p)}(\ad{\widetilde f_\kappa}/F,\mathrm{ind}_{F'}^{F}\chi,s)\\
=& L_{(S,p)}(\ad{\widetilde f_\kappa}/F,\mathrm{ind}_{F'}^{F}\chi^\gamma,s)\\
=&L_{(S,p)}(\ad{\widetilde f_\kappa}/F',\chi^\gamma,s).
\end{split}
\end{equation}
We also have $\mathcal{L}_p(\ad{\widetilde f_\kappa}/F',\chi)=\prod_{{\mathfrak p}\mid p}\mathcal{L}_{\mathfrak p}(\ad{\widetilde f_\kappa}/F',\chi)$, and therefore the equality in the lemma holds. Let $\kappa'$ be the weight of $\widetilde f_\kappa$ and let $\phi_{\kappa'}$ be any arithmetic specialization of weight ${\kappa'}$. Now
\begin{equation}
\begin{split}
(\mu_{F'})^\gamma(\phi_{\kappa'},\delta_{F'}^{(y)})=&\mu_{F'}(\phi_{\kappa'},\delta_{F'}^{(y^\gamma)})\\
=& L^\ast(\ad{\widetilde f_{\kappa'}}/F',\delta_{F'}^{(y^\gamma)})\\
=& L^\ast(\ad{\widetilde f_{\kappa'}}/F',\delta_{F'}^{(y)})\\
=&\mu_{F'}(\phi_{\kappa'},\delta_{F'}^{(y)}).
\end{split}
\end{equation}
In fact, we have $(\mu_{F'})^\gamma(\phi_{\kappa'},\chi)=\mu_{F'}(\phi_{\kappa'},\chi)$ for any finite order character $\chi$ of $G_{F'}$. Since this holds for all the arithmetic specializations $\phi_{\kappa'}$, the measures $(\mu_{F'})^\gamma(.,\chi)$ and $\mu_{F'}(.,\chi)$ are equal: indeed, these measures on $\mathbb{W}$ agree at infinitely many characters, so they are equal. This further implies that $(\mu_{F'})^\gamma=\mu_{F'}$, for all $\gamma\in\Delta$. Since the morphism $\mathbb J\longrightarrow\mathbb I$ is equivariant with respect to $\Delta$, we get $(\widetilde\mu_{F'})^\gamma=\widetilde\mu_{F'}$, for all $\gamma\in\Delta$. Therefore $\widetilde\mu_{F'}\in\mathbb I[[G_{F'}]]^\Delta$.
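\end{proof}
Before stating the next result, we record one elementary property of the trace ideal ${\mathscr T}$ recalled in it (this observation is immediate from the definition and will be used in the proof): taking $\alpha=1$ gives
\begin{equation*}
\sum_{\gamma\in\Delta}1^\gamma=|\Delta|=p\in{\mathscr T},
\end{equation*}
so that congruences modulo $p\mathbb I[[G_{F'}]]$ between $\Delta$-invariant elements imply congruences modulo ${\mathscr T}$.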
\end{proof} \begin{theorem} Let $\mu_F\in\mathbb I[[G_F]]$ be a measure interpolating all the critical values of each arithmetic specialization twisted by finite order characters of $G_F$. Similarly, let $\mu_{F'}\in\mathbb J[[G_{F'}]]$ be a measure interpolating the critical values of the base change of each arithmetic specialization. Recall the trace ideal ${\mathscr T}\subset\mathbb I[[G_{F'}]]^\Delta$ generated by the elements $\Sigma_{\gamma\in\Delta}\alpha^\gamma$, with $\alpha\in\mathbb I[[G_{F'}]]$. Then the congruence \begin{equation}\label{torsion-congruence} ver(\mu_F)\equiv\widetilde\mu_{F'}\mod{\mathscr T} \end{equation} holds if and only if for every locally constant ${\mathbb Z}_p$-valued function $\epsilon$ of $G_{F'}$ satisfying $\epsilon^\gamma=\epsilon$ for all $\gamma\in\Delta$ we have the congruence \begin{equation} \int_{G_F}\epsilon\circ ver(x)d\mu_F(x)\equiv \int_{G_{F'}}\epsilon(x)d\widetilde\mu_{F'}(x) \mod{p\mathbb I}. \end{equation} \end{theorem} \begin{proof} Necessity is clear, so we need only prove sufficiency. Consider the components of the images of $\widetilde\mu_{F'}$ and $ver(\mu_{F})$ in $\mathbb I[G_{F'}/V]/p^{m_{F'}(V)-1}$ for a $\Delta$-stable admissible open subgroup $V$ of $G_{F'}$. We denote the component obtained by evaluating $\widetilde\mu_{F'}$ at $\delta^{(y)}_{F'}$ by $\widetilde\mu_{F'}(.,\delta_{F'}^{(y)})$, and the component obtained by evaluating $\mu_F$ at $\delta^{(x)}_{F}$ by $\mu_F(.,\delta_{F}^{(x)})$. Let $U:=ver^{-1}(V)\subseteq G_F$; then $ver(\mu_{F})$ is the image under the transfer map of the $U$-component of $\mu_F$. These components are the images of \begin{enumerate}[(i)] \item $\sum_{y\in G_{F'}/V}\widetilde\mu_{F'}(.,\delta_{F'}^{(y)})y$, \item $\sum_{x\in G_{F}/U}\mu_{F}(.,\delta_{F}^{(x)})ver(x)$ \end{enumerate} in $(\mathbb I[G_{F'}/V]/p^{m_{F'}(V)-1})^\Delta$. Let ${\mathscr T}(V)$ be the image of the trace ideal ${\mathscr T}$ in $(\mathbb I[G_{F'}/V]/p^{m_{F'}(V)-1})^\Delta$; since $\sum_{\gamma\in\Delta}\alpha^\gamma=p\alpha$ for $\Delta$-invariant $\alpha$, $p$ times any $\Delta$-invariant element lies in ${\mathscr T}(V)$. We consider the following two cases: \paragraph{Case (i): $y$ is fixed by $\Delta$.} In this case, $\delta_{F'}^{(y)}$ is a locally constant function as in the statement of the theorem. If $y=ver(x)$ lies in the image of $ver$, then $\delta_{F'}^{(y)}\circ ver=\delta_{F}^{(x)}$, and the congruence condition gives $\mu_F(.,\delta_{F}^{(x)})\equiv\widetilde\mu_{F'}(.,\delta_{F'}^{(y)})\pmod{p\mathbb I}$, so the corresponding summands in (i) and (ii) above agree modulo ${\mathscr T}(V)$. If $y\notin \mathrm{im}(ver)$, then $\delta_{F'}^{(y)}\circ ver=0$, and the congruence condition gives $\widetilde\mu_{F'}(.,\delta_{F'}^{(y)})\equiv 0\pmod{p}$, so the corresponding summand in (i) vanishes modulo ${\mathscr T}(V)$. \paragraph{Case (ii): $y$ is not fixed by $\Delta$.} By Lemma \ref{invariance-measure}, we have \begin{equation} \widetilde\mu_{F'}(.,\delta_{F'}^{(y)})=\widetilde\mu_{F'}(.,\delta_{F'}^{(y^\gamma)}), \forall \gamma\in\Delta. \end{equation} Therefore the $\Delta$-orbit of $y$ contributes $\widetilde\mu_{F'}(.,\delta_{F'}^{(y)})\sum_{\gamma\in\Delta} y^\gamma$ to the sum, which belongs to ${\mathscr T}(V)$. \end{proof} Viewing the elements $\int_{G_F}\epsilon\circ ver(x)d\mu_F(x)$ and $ \int_{G_{F'}}\epsilon(x)d\widetilde\mu_{F'}(x)$ of $\mathbb I$ as measures on the weight space $\mathbb W$, they are determined by their values on characters in $\homs{\mathbb{W}}{\overline{\mathbb Q}_p}$. Let $\nu:=\int_{G_F}\epsilon\circ ver(x)d\mu_F(x)-\int_{G_{F'}}\epsilon(x)d\widetilde\mu_{F'}(x)$, and suppose that $\int_\mathbb{W}\chi(\gamma)d\nu(\gamma)\equiv 0 \pmod{p{\mathbb Z}_p}$ for every character $\chi:\mathbb{W}\longrightarrow\overline{\mathbb Q}_p^\times$. Then $\eta=\frac{1}{p}\nu$ defines a measure on $\mathbb{W}$, with $\eta(\chi)=\nu(\chi)/p$.
If this happens, then $\nu\equiv 0\pmod{p\mathbb I}$. We therefore have the following result. \begin{theorem} The congruence \begin{equation*} ver(\mu_F)\equiv\widetilde\mu_{F'}\mod{\mathscr T} \end{equation*} holds if and only if \begin{equation}\label{torsion-two-var} \int_\mathbb{W}\chi(y)\int_{G_F}\epsilon\circ ver(x)d\mu_F(x)\equiv\int_\mathbb{W}\chi(y)\int_{G_{F'}}\epsilon(x)d\widetilde\mu_{F'}(x)\pmod{p{\mathbb Z}_p} \end{equation} for all locally constant functions $\chi$ of $\mathbb{W}$, and for every locally constant ${\mathbb Z}_p$-valued function $\epsilon$ of $G_{F'}$ satisfying $\epsilon^\gamma=\epsilon$ for all $\gamma\in\Delta$. \end{theorem} We call the congruence that appears in equation \eqref{torsion-congruence} the torsion congruence over $\mathbb I$. The torsion congruences will be an important step towards proving the congruences in Theorem \ref{cong}. \subsection{Remarks on Torsion Congruence in Families}\label{example} We examine the torsion congruences in the case when the character $\epsilon$ of the group $G_{F'}$ is trivial. Recall that $F$ is a totally real field and $F'$ is a totally real extension of $F$ of degree $p$. The Galois group $\Delta=\mathrm{Gal}(F'/F)$ has order $p$. As before, $f$ is a Hilbert modular form of weight $\kappa=(0,I)$ defined over $F$, and $f'$ is the base-change of $f$ to $F'$. We assume that both these modular forms are ordinary at all the primes above $p$. Then $\int_{G_{F}}d\mu_F(\sigma)$ is the $p$-adic L-function of $f$ in $\mathbb I$, which we denote by $L_{p,F}$. These $p$-adic L-functions have been constructed in \cite[\S 5.3.6]{hida-mfg} when $F={\mathbb Q}$. A similar construction works over totally real fields under some conditions \cite{rosso}. Note that there is no cyclotomic variable in these $p$-adic L-functions. Further, $\int_{G_{F'}}d\widetilde\mu_{F'}(\sigma)$ is the image of the $p$-adic L-function of $f'$ in $\mathbb I$. We denote this image by $\widetilde L_{p,F'}$. These $p$-adic L-functions generate the characteristic ideals of the dual Selmer groups $\sg{F}{\ad{\boldsymbol\rho_f}\otimes\mathbb I}$ and $\sg{F'}{\ad{\boldsymbol\rho_{f'}}\otimes\mathbb I}$. The torsion congruence then takes the following form \begin{equation*} \int_{G_{F}}d\mu_F(\sigma)\equiv\int_{G_{F'}}d\widetilde\mu_{F'}(\sigma)\pmod{p\mathbb I}, \end{equation*} which can be rewritten as \begin{equation}\label{cong-trivial} L_{p,F}\equiv \widetilde L_{p,F'}\pmod{p\mathbb I}. \end{equation} The base-change morphism induces a map $\sg{F}{\ad{\boldsymbol\rho_f}\otimes\mathbb I}\longrightarrow\sg{F'}{\ad{\boldsymbol\rho_{f'}}\otimes\mathbb I}^\Delta$, which is an isomorphism. Since these Selmer groups have no non-trivial pseudonull submodules, we have an isomorphism $(\mathbb I/\widetilde L_{p,F'})_{\Delta}\cong\mathbb I/{L_{p,F}}$. In particular, we have a surjection $\mathbb I/\widetilde L_{p,F'}\longrightarrow\mathbb I/L_{p,F}$, and hence the ideals $(\widetilde L_{p,F'})$ and $(L_{p,F})$ of $\mathbb I$ are equal. Therefore $\widetilde L_{p,F'}= uL_{p,F}$ for some unit $u\in\mathbb I$, and in particular $\widetilde L_{p,F'}\equiv uL_{p,F}\pmod{p\mathbb I}$. The theorem above asserts that the stronger congruence $\widetilde L_{p,F'}\equiv L_{p,F}\pmod{p\mathbb I}$ might occur. That this congruence might occur is suggested by the following observation. Consider the specialization of these $p$-adic L-functions to weights $\kappa$ and $\kappa'$.
Then for the above congruence \eqref{cong-trivial}, we need the congruence \begin{equation} L_{p,F}(P_\kappa)\equiv \widetilde L_{p,F'}(P_{\kappa'})\pmod{p\cO}, \end{equation} where $P_\kappa$ and $P_{\kappa'}$ are the arithmetic points associated to $f$ and $f'$ respectively, and $\cO$ is a sufficiently large extension of ${\mathbb Z}_p$. In the case when $p$ divides the left hand side of this congruence, a theorem of Hida \cite{hida-cong}, generalized by Ghate \cite{ghate} to the case of totally real fields, shows that there is another Hilbert modular form $g$ over $F$ such that $f$ is congruent to $g$ modulo $p$; such primes are known as congruence primes. The base-changes of these Hilbert modular forms to the field $F'$ are also congruent modulo $p$. In the case $F={\mathbb Q}$, Hida also showed the converse: a congruence between modular forms modulo $p$ implies that $p$ divides $L_{p,F}(P_{\kappa})$. Therefore, an appropriate generalization of the theorem of Hida in \emph{loc.\ cit.} (generalized by Ghate to the case of Hilbert modular forms over real quadratic fields) will show that $p$ divides the value $\widetilde L_{p,F'}(P_{\kappa'})$. We hope to return to the remaining cases, and to the torsion congruences, at a later time.
\section{Introduction} Ultrasound (US) is a low-cost and real-time medical imaging technique in minimally invasive surgery and in percutaneous procedures. It observes information under the surface, so it is used to locate invisible details about vessels, nerves, or tumours. By tracking the pose of a 2D ultrasound probe (2D US) we can render 3D reconstructions from a collection of 2D slices \cite{Khamene2005}, while a tracked 3D probe (3D US) is able to build large and detailed 3D models from a set of 3D scans \cite{Brattain2011}. Both 2D US and 3D US can also be used to guide other tracked medical instruments, such as biopsy needles \cite{Stoll2012}, and to fuse data with other imaging modalities such as endoscopes. Freehand 3D ultrasound generally refers to the extrinsic calibration between a hand-held US probe and a pose tracking device. This calibration aims at determining the rigid transformation between the US scan and the tracked marker as well as the scale factor that converts the US scan to metric coordinates, i.e.\ a similarity transformation. This is usually achieved by scanning a known calibration object (phantom) immersed in either water or a tissue-mimicking gel. Since the speed of sound in water is different from that in tissue, sometimes an alcoholic solution is used to obtain a more realistic US scale. A multitude of calibration phantoms with different shapes has been proposed in the literature \cite{Mercier2005}, including intersecting wires \cite{Chen2009}, a single plane \cite{Prager1998,Najafi2015}, a stylus \cite{Muratore2001,Khamene2005,Hsu2008}, and 3D printed objects \cite{Najafi2014}. Although these methods focus on 2D US calibration, some extensions to 3D US using similar phantoms have been proposed as well \cite{Bergmeir2009,Hummel2013}. In this paper we focus on using a tracked needle as the calibration phantom. Our main motivation is assisted guidance and motion analysis in fetal interventions that require the extraction of \textit{in utero} samples with a biopsy needle. It thus becomes a practical solution to use the same needle as a calibration object, avoiding the need to introduce new objects into the operating room and the additional burden of their sterilization. The tracked needle is detected by the pose tracking system as a 3D line, and it is scanned either as a line (3D US) or as a point (2D US). By scanning the needle under different poses, we formulate the 3D US calibration as the similarity registration between two sets of 3D lines and the 2D US calibration as the similarity registration between co-planar 3D points and 3D lines. In this paper we propose a minimal solver for the similarity registration between two sets of 3D lines. We will also show that the registration between co-planar 3D points and 3D lines is a sub-problem of the same formulation and therefore the same minimal solver can be applied. Additionally, we show that this minimal solution can be easily generalised to the registration of any combination of plane, line, and point correspondences. We also present an alternative simplified minimal solver for the similarity registration between a set of co-planar points and a set of 3D lines. We apply the minimal solutions to the calibration of a 2D US and a 3D US with a pose tracking sensor and perform validation with both synthetic and real data.
\begin{figure}[t] \centering \subfigure[]{\includegraphics[width=0.315\textwidth]{./calibrationSetup3D-eps-converted-to} } \subfigure[]{\includegraphics[width=0.27\textwidth]{./scan2DUS.png} } \subfigure[]{\includegraphics[width=0.28\textwidth]{./scan3DUS.png} } \caption{(a) Scanning a tracked needle with a US probe; (b) a 2D US probe detects a cross section of the needle; (c) a 3D US detects a line segment.} \label{fig:NeedleScan} \end{figure} \section{Related Work} Freehand US calibration using a tracked linear target was proposed in \cite{Khamene2005}. However, this method is initialized with a non-minimal linear solution and is only meant for the calibration of 2D US probes. Furthermore, it assumes that the US probe produces an anisotropic image, i.e.\ it has different scaling factors along the $x$ and $y$ axes of the image. An alternative method \cite{Vasconcelos2016} extends this calibration procedure to 3D US and shows that assuming an isotropic model (single scale factor) produces better calibration results for curvilinear shaped probes. In this paper we assume that the US scans are isotropic. In some contexts, it is possible to assume that the scale factor is known, and the calibration problem becomes the Euclidean registration between the US probe and the phantom target. In the 2D US case this problem becomes equivalent to the extrinsic calibration between a camera and a laser \textit{rangefinder} \cite{Zhang2004}. In the 3D US case, with the appropriate phantom (e.g.\ 3 known 3D points) the absolute pose of the probe can be recovered in each calibration scan, and thus it can be formulated as the standard hand-eye problem \cite{Horaud1995,Thompson2016}. In this paper, however, we consider that the scale is always unknown. Estimating the similarity transformation (rigid pose and scale) between two coordinate frames gained recent attention due to its application in the registration of different Structure-from-Motion (SfM) sequences. If the same scene is recovered in two different monocular SfM runs, the scale of each reconstruction can be arbitrarily different. Therefore, to produce extended and more detailed 3D maps from independent SfM runs both the rigid pose and the scale must be recovered. If correspondences between SfM sequences are not available, one can use an extension of the ICP algorithm \cite{Zhang1994} to handle unknown scale \cite{Du2010}. If 2D-3D point correspondences are available, this is called the generalised pose and scale problem \cite{Ventura2014,Sweeney2014}, and is solved by extending the $PnP$ formulation \cite{Haralick1991,Quan1999,Lepetit2009} to handle the alignment of image rays from multiple view points. A closely related contribution estimates a similarity transformation from pairwise point correspondences between two generalised cameras \cite{Sweeney2015}. In the case of 3D US calibration, we are interested in the similarity registration between two sets of 3D lines. Different algorithms have been proposed for the Euclidean registration between sets of lines \cite{Zhang1991,Kamgar2004}. One possible approach to the similarity registration problem would be to first estimate the unknown scale factor independently, e.g.\ by computing the ratio of orthogonal distances between all pairs of lines in both sets, and then use any of the previously mentioned Euclidean registration algorithms. We found that this approach is extremely unstable with noisy measurements and thus we focus on the joint estimation of all similarity parameters.
Non-minimal linear algorithms and non-linear refinement methods have been proposed to solve the registration of two sets of 3D lines for different non-rigid configurations, including the similarity transformation \cite{Bartoli2001,Anonymous}. However, to the best of our knowledge, a minimal closed-form solution for the similarity registration of two sets of lines has not been proposed in the literature. The 2D US calibration problem is the similarity registration between a set of 3D lines and a set of co-planar points. This is a particular case of the pose and scale problem \cite{Ventura2014} when the 3D points are co-planar, and therefore this method could be adapted to solve this problem. However, the co-planarity of points introduces further simplifications, and as we will show in this paper, this problem can be minimally solved with a much more compact set of equations. Our strategy to solve both registration problems is to convert them to an equivalent registration between a set of 3D points and a set of 3D planes. Although this strategy has been described in the context of Euclidean registration \cite{Ramalingam2010}, it is also valid for non-rigid registration. Minimal solutions are a well established topic in the computer vision literature \cite{Nister2004,Stewenius2005,Stewenius2005b,Kukelova2008,Kukelova2012}. In most cases they require solving a system of polynomial equations, which can be achieved using Gr\"obner basis methods \cite{Stewenius2005,Byrod2009,Kukelova2012}. Although these methods provide a general framework to build numeric polynomial solvers, they require a certain amount of symbolic manipulation that often calls for a case-by-case analysis. To address this issue an automatic generator of polynomial solvers has been proposed \cite{Kukelova2008}. In this paper we develop minimal solutions using the action matrix method as presented in \cite{Byrod2009}. \section{2D/3D US Model} \begin{figure}[tp] \centering \subfigure[]{\includegraphics[width=0.235\textwidth]{./modelUSbeam-eps-converted-to} \label{fig:USbeam} } \subfigure[]{\includegraphics[width=0.235\textwidth]{./model2DUSlin-eps-converted-to} \label{fig:model2DUSlin} } \subfigure[]{\includegraphics[width=0.235\textwidth]{./model2DUScurv-eps-converted-to} \label{fig:model2DUScurv} } \subfigure[]{\includegraphics[width=0.235\textwidth]{./model3DUS-eps-converted-to} \label{fig:model3DUScurv} } \caption{US probe models: (a) US emitted beams have a varying width. If we assume that the scanned region is focused, beams are approximated by a straight line. (b) Linear 2D US (c) Curvilinear 2D US (d) Curvilinear 3D US.} \end{figure} US probes emit a set of acoustic beams that are partially reflected whenever they cross a medium interface with a change in acoustic impedance. The time response of the echo reflections enables the formation of a spatial grayscale map representing the different acoustic impedances within the US scanning field of view. Note that US beams have a varying width (Fig. \ref{fig:USbeam}) which might induce undesired out-of-focus distortions. In an analogous way to most camera calibration models, we assume that the scanned region is always focused and thus each scene point reflects a beam along a single straight line. US image formation depends on the probe construction. Linear 2D US probes (Fig.
\ref{fig:model2DUSlin}) emit parallel beams and thus there are two scale factors involved: $s_{y}$ depends on the speed of sound in the propagation medium, while $s_{x}$ is a fixed parameter that depends on the physical distance between beam emitters. These probes usually operate with high frequency acoustic signals (4 -- 16 MHz) and are used for short range scans (e.g.\ musculoskeletal imaging). The calibration of these probes cannot be represented with a similarity transformation, so we exclude them from further analysis in this paper. Curvilinear 2D US probes (Fig. \ref{fig:model2DUScurv}) emit beams in radial directions that intersect in a single point, forming a planar bundle of lines. In this case, the speed of sound in the propagation medium affects the scan scale isotropically. The curvilinear 3D US (Fig. \ref{fig:model3DUScurv}) is a generalization of the curvilinear 2D US, emitting a 3D bundle of beams. Curvilinear probes usually operate with lower frequency signals (2 -- 8 MHz) and are more suitable for long range scans (e.g.\ obstetrics, cardiac imaging). \section{Problem Statement} \begin{figure}[t] \centering \includegraphics[width=0.6\textwidth]{./modelFreehand3DUS-eps-converted-to} \caption{3D US calibration problem. The similarity transformation $\m A$ maps points $\v X_{i}$ in the 3D US volume (red) to points $\v P_{i}$ in the marker reference frame (green). $\m A$ can be decomposed as a uniform scaling transformation followed by a rigid transformation.} \label{fig:3DCalibSetup} \end{figure} Consider a hand-held curvilinear 3D US probe whose pose is tracked by a rigidly attached marker (Fig. \ref{fig:3DCalibSetup}). In each frame the tracking system determines the transformation $\m T_{\m M \rightarrow \m O}$ from the marker coordinate system ($\m M$) to a fixed frame $\m O$. The freehand 3D US calibration consists in determining the unknown similarity transformation $\m A$ that maps 3D points $\v X_{i}$ in the US volume to 3D points $\v P_{i}$ represented in $\m M$: \begin{equation} \label{eq:3DUSpointmapping} \v P_{i} = \m A \v X_{i} \end{equation} The similarity $\m A$ is defined by a rotation $\m R$, a translation $\v t$, and a scale factor $s$ that converts the 3D US volume to metric coordinates, and is represented as \begin{equation} \m A = \begin{pmatrix} \m S & \v t \\ 0 & 1 \end{pmatrix} \end{equation} where $\m S = s \m R$ is a scaled rotation matrix such that \begin{equation} \label{eq:similarityNonLin} \tr{\m S} \m S = \m S \tr{\m S} = \begin{pmatrix} s^{2} & 0 & 0 \\ 0 & s^{2} & 0 \\ 0 & 0 & s^{2} \end{pmatrix} \end{equation} The calibration procedure consists in capturing a tracked needle in the 3D US volume under different poses. The needle is previously calibrated by determining its two endpoints in the reference frame $\m M$, and then it is represented in each acquisition as a 3D line $\v L_{i}$. The needle is also detected as a 3D line $\v B_{i}$ in the 3D US volume. The calibration problem is thus formulated as the 3D similarity registration between two sets of lines (Fig. \ref{fig:lineline3D}). \section{3D US Calibration Solution} In this section we derive a minimal solution for the calibration of a 3D US probe. We start by re-stating it as the similarity registration between 3D points and 3D planes, and then derive a linear and a minimal solution for this problem. The calibration of a 2D US is presented in section \ref{2DUS} as a particular case of the 3D US problem.
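For concreteness, the similarity model above can be sketched in a few lines of Python/NumPy (an illustration with made-up values, not those used in our experiments):

\begin{verbatim}
import numpy as np

def make_similarity(s, R, t):
    # A = [[s*R, t], [0, 1]]: a 4x4 homogeneous similarity transform
    A = np.eye(4)
    A[:3, :3] = s * R
    A[:3, 3] = t
    return A

theta = 0.3                              # rotation about the z axis
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0,            0.0,           1.0]])
A = make_similarity(0.24, R, np.array([10.0, -5.0, 2.0]))

S = A[:3, :3]                            # scaled rotation block
assert np.allclose(S.T @ S, 0.24**2 * np.eye(3))  # eq. (3) holds

X = np.array([1.0, 2.0, 3.0, 1.0])       # homogeneous US-volume point
P = A @ X                                # corresponding point in frame M
\end{verbatim}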
\subsection{3D US Calibration as Point-Plane Registration} \begin{figure}[t] \centering \subfigure[]{ \includegraphics[width=0.47\textwidth]{./LineLineRegistration-eps-converted-to} \label{fig:lineline3D} } \subfigure[]{ \includegraphics[width=0.47\textwidth]{./LineLineRamalingam-eps-converted-to} \label{fig:ramalingam3D} } \caption{(a) The 3D US calibration is formulated as the similarity registration between lines $\v B_{i}$ and lines $\v L_{i}$; (b) Each line $\v L_{i}$ can be re-defined as two intersecting planes $\v \Pi_{i}$, $\v \Pi^{*}_{i}$, while each line $\v B_{i}$ can be redefined as two points $\v X_{i}$, $\v X^{*}_{i}$.} \end{figure} Ramalingam et al.\ showed that any 3D registration problem involving 3D planes, lines and/or points can be re-stated as the registration between 3D planes and 3D points \cite{Ramalingam2010}. In our calibration problem this can be achieved by defining each needle line $\v L_{i}$ as two intersecting planes $\v \Pi_{i}$, $\v \Pi^{*}_{i}$ and each line $\v B_{i}$ as two points $\v X_{i}$, $\v X^{*}_{i}$ (Fig. \ref{fig:ramalingam3D}). Given that both $\v P_{i} = \m A \v X_{i}$ and $\v P^{*}_{i} = \m A \v X^{*}_{i}$ are contained in planes $\v \Pi_{i}$ and $\v \Pi^{*}_{i}$, each line-line correspondence ($\v L_{i}$,$\v B_{i}$) puts 4 linear constraints on $\m A$ \begin{align} \tr{\v \Pi_{i}}{\m A} \v X_{i} = 0 \label{eq:linear3DUS1}\\ \tr{\v \Pi^{*}_{i}}{\m A} \v X_{i} = 0 \label{eq:linear3DUS2} \\ \tr{\v \Pi_{i}}{\m A} \v X^{*}_{i} = 0 \label{eq:linear3DUS3} \\ \tr{\v \Pi^{*}_{i}}{\m A} \v X^{*}_{i} = 0 \label{eq:linear3DUS4} \end{align} Note that the same reasoning can be applied to any combination of plane, point, and line correspondences (planes are defined by 3 points, and points are defined as the intersection of 3 planes), and thus the remainder of this section equally applies to these problems as well. \subsection{Linear Solution} \label{sec:linearsol} The similarity matrix $\m A$ has 13 linear parameters, and for $N$ line-line correspondences we can stack instances of equations (\ref{eq:linear3DUS1})--(\ref{eq:linear3DUS4}) to form a linear system with $4N$ equations and 13 unknowns. This linear system can be solved with SVD decomposition using at least 3 correspondences, determining $\m A$ up to a scale factor. The correct scale of $\m A$ can be recovered by setting the homogeneous parameter to 1. Note, however, that with noisy line measurements equation \ref{eq:similarityNonLin} is not satisfied and thus the linear solution for $\m A$ is generally not a similarity. The linear estimation can be projected to a similarity using a QR decomposition of matrix $\m S$ and forcing its upper triangular component to be a scaled identity matrix (using the mean of its diagonal elements as the scale $s$) \begin{equation} \m S = s\m R = \m R \begin{pmatrix} s & 0 & 0 \\ 0 & s & 0 \\ 0 & 0 & s \\ \end{pmatrix} \end{equation} \subsection{Minimal Solution} \label{sec:minimalSol} Equation \ref{eq:similarityNonLin} puts 5 quadratic constraints on matrix $\m A$ and therefore only 7 linear constraints are required to compute its 13 parameters. This can be achieved with a minimum of 2 line-line correspondences. Note that with 2 correspondences we have 8 linear constraints. To solve the problem minimally we should either discard one of the linear equations or partially solve the complete linear system, leaving 6 unknowns undetermined (up to scale). We found the latter option to be numerically more stable.
The linear system with 7 equations and 13 unknowns is partially solved using SVD decomposition, generating a 6D solution subspace for $\m A$ \begin{equation} \label{eq:Asubspace} \m A = a \m A_{a} + b \m A_{b} + c \m A_{c} + d \m A_{d} + e \m A_{e} + f \m A_{f} \end{equation} where $a$, $b$, $c$, $d$, $e$, $f$ are the remaining 6 unknowns. Equation \ref{eq:similarityNonLin} can be written as the following system of 10 quadratic equations \begin{equation} \label{eq:polySystem} \begin{aligned}[c] \tr{\v c_{1}} \v c_{1} - \tr{\v c_{2}} \v c_{2} &= 0 \\ \tr{\v c_{1}} \v c_{1} - \tr{\v c_{3}} \v c_{3} &= 0 \\ \tr{\v c_{1}} \v c_{2} &= 0 \\ \tr{\v c_{1}} \v c_{3} &= 0 \\ \tr{\v c_{2}} \v c_{3} &= 0 \end{aligned} \qquad \qquad \qquad \begin{aligned}[c] \tr{\v r_{1}} \v r_{1} - \tr{\v r_{2}} \v r_{2} &= 0 \\ \tr{\v r_{1}} \v r_{1} - \tr{\v r_{3}} \v r_{3} &= 0 \\ \tr{\v r_{1}} \v r_{2} &= 0 \\ \tr{\v r_{1}} \v r_{3} &= 0 \\ \tr{\v r_{2}} \v r_{3} &= 0 \end{aligned} \end{equation} where $\v c_{i}$ is the $i$th column of $\m S$ and $\v r_{i}$ is the $i$th row of $\m S$. Substituting equation \ref{eq:Asubspace} into equation \ref{eq:polySystem} generates a system of 10 quadratic equations in the 6 unknowns $a$, $b$, $c$, $d$, $e$, $f$. Note that this polynomial system is the same as the one solved in \cite{Ventura2014} for the Generalised Pose and Scale Problem. This polynomial system is solved with the action matrix method \cite{Byrod2009}. Since the quadratic constraints determine $\m A$ up to scale we set $f = 1$. We expand the polynomial system by multiplying all equations by $a$, $b$, $c$, $d$ and form a cubic system with 47 linearly independent equations and 55 monomials. Using LU decomposition, we reduce the system to 5 equations in 13 monomials \begin{equation} \begin{pmatrix}\m C_{5 \times 5} & \m B_{5 \times 8} \end{pmatrix} \begin{pmatrix} \v m_{C} \\ \v m_{B}\end{pmatrix} = 0 \end{equation} with \begin{align} \v m_{C} &= \tr{\begin{pmatrix} b^3 & ab^2 & be & bd & bc \end{pmatrix}} \\ \v m_{B} &= \tr{\begin{pmatrix} b^2 & ab & e & d & c & b & a & 1\end{pmatrix}} \end{align} When a polynomial system is presented in this format, it can be solved with the action matrix method if matrix $\m C_{5 \times 5}$ is invertible and also if there is a monomial $w$ such that $w \v m_{B}$ is a linear combination of $\v m_{B}$. In our calibration problem $\m C_{5 \times 5}$ is generally invertible, and for $w = b$ we can build an $8 \times 8$ matrix $\m M$ such that \begin{equation} \m M \v m_{B} = b \v m_{B} \end{equation} The 8 solutions to $\v m_{B}$ that verify this constraint are the eigenvectors of $\m M$, from which we can extract 8 solutions for $a$, $b$, $c$, $d$, $e$ and recover 8 solutions for $\m A$ using equation \ref{eq:Asubspace}. The correct scale of $\m A$ is recovered in the same way as explained in section \ref{sec:linearsol}.
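The linear solution of section \ref{sec:linearsol} is equally compact in code. The following NumPy sketch (illustrative only; the sign conventions of the QR factorization are glossed over) stacks the point-plane constraints, extracts the null vector by SVD, and projects the estimate onto a similarity:

\begin{verbatim}
import numpy as np

def constraint_row(Pi, X):
    # One equation Pi^T A X = 0, linear in the 13 unknowns:
    # the 12 entries of [S | t] and the homogeneous entry A[3,3].
    return np.hstack([Pi[0] * X, Pi[1] * X, Pi[2] * X, Pi[3] * X[3]])

def linear_calibration(planes, points):
    # planes, points: paired homogeneous 4-vectors (Pi_i, X_i);
    # >= 12 independent rows, i.e. 3 line-line correspondences.
    M = np.vstack([constraint_row(Pi, X)
                   for Pi, X in zip(planes, points)])
    u = np.linalg.svd(M)[2][-1]     # right singular vector, smallest sv
    u = u / u[-1]                   # set the homogeneous parameter to 1
    A = np.vstack([u[:12].reshape(3, 4), [0.0, 0.0, 0.0, 1.0]])
    # Project the noisy S block onto a scaled rotation via QR:
    Q, T = np.linalg.qr(A[:3, :3])
    s = np.mean(np.diag(T))         # scale = mean of the diagonal of T
    A[:3, :3] = s * Q
    return A
\end{verbatim}

With noise-free inputs from 3 or more correspondences in general position, the recovered $\m A$ is exact; with noise, the QR projection enforces equation \ref{eq:similarityNonLin}.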
\section{2D US Calibration Solution} \label{2DUS} \begin{figure}[tp] \centering \subfigure[]{ \includegraphics[width=0.47\textwidth]{./PointLineRegistration-eps-converted-to} \label{fig:pointline2D} } \subfigure[]{ \includegraphics[width=0.47\textwidth]{./PointLineRamalingam-eps-converted-to} \label{fig:ramalingam2D} } \caption{(a) The 2D US calibration is formulated as the similarity registration between co-planar points $\v P_{i}$ and lines $\v L_{i}$; (b) Each line $\v L_{i}$ can be re-defined as two intersecting planes $\v \Pi_{i}$, $\v \Pi^{*}_{i}$.} \end{figure} If we consider the same calibration problem with a curvilinear 2D US probe instead, each needle acquisition is detected as a single point $\v X_{i}$ that belongs to the US scanning plane. For the sake of continuity with the previous section, we still treat the image coordinates $\v X_{i}$ of the 2D US as 3D co-planar points, for an arbitrarily fixed scanning plane $\v \Delta$. Note that calibrating a curvilinear 2D US aims at determining the same 7 parameters as in the 3D US case. Therefore the calibration problem becomes the 3D similarity registration between a set of co-planar points $\v X_{i}$ and a set of lines $\v L_{i}$ (Fig.\ \ref{fig:pointline2D}). Each point-line correspondence puts 2 linear constraints on matrix $\m A$ (equations \ref{eq:linear3DUS1} and \ref{eq:linear3DUS2}), and therefore this problem can be minimally solved using 4 point-line correspondences. The same minimal solution described in section \ref{sec:minimalSol} can be used in this case, since the co-planarity of points $\v X_{i}$ is not a degenerate configuration. We observed that some particular choices for the scanning plane $\v \Delta$ (e.g.\ $z=0$) result in matrix $\m C_{5 \times 5}$ being singular, and thus the polynomial system becomes numerically unstable. We found through simulation that defining $\v \Delta$ as the plane $z=k$, with $k>0$, generally produces an invertible matrix $\m C_{5 \times 5}$, and the polynomial system is solvable. On the other hand, the linear solution described in section \ref{sec:linearsol} will not solve the 2D US problem, as the system will always be rank deficient. A linear solution can only be obtained with the additional elimination of parameters in the linear equations. These simplifications also lead to an alternative minimal solution for the 2D US case. Both methods are described in the remainder of this section. \subsection{Linear Solution} \label{sec:linearSol2D} If we define the scanning plane $\v \Delta$ as $z=0$, the 2D US points have the format $\v X_{i} = \tr{\begin{pmatrix} x_{i} & y_{i} & 0 & 1 \end{pmatrix}}$ and the linear equations do not put any constraints on the third column of $\m S$. The linear equations for each acquisition become \begin{align} \tr{\v \Pi_{i}} \bar{\m A} \tr{\begin{pmatrix} x_{i} & y_{i} & 1 \end{pmatrix}} &= 0 \\ \tr{\v \Pi^{*}_{i}} \bar{\m A} \tr{\begin{pmatrix} x_{i} & y_{i} & 1 \end{pmatrix}} &= 0 \end{align} with \begin{align} \bar{\m A} &= \begin{pmatrix} \bar{\m S} & \v t \\ 0 & 1 \end{pmatrix} \\ \bar{\m S} &= \begin{pmatrix} \v c_{1} & \v c_{2} \end{pmatrix} \end{align} where $\v c_{1}$ and $\v c_{2}$ are the first two columns of $\m S$. The linear system is thus reduced to 10 unknown parameters and can be solved with a minimum of 5 point-line correspondences.
Note that analogously to the 3D US case (equation \ref{eq:similarityNonLin}), $\bar{\m A}$ must satisfy the following constraint \begin{equation} \label{eq:similarityNonLin2D} \tr{\bar{\m S}} \bar{\m S} = \begin{pmatrix} s^{2} & 0 \\ 0 & s^{2} \end{pmatrix} \end{equation} and with noisy measurements the linear solution must be forced to this format using its QR decomposition \begin{equation} \bar{\m S} = \m R \begin{pmatrix} s & 0 \\ 0 & s \\ 0 & 0 \end{pmatrix} \end{equation} The third column of $\m S$ can then be extracted by multiplying the third column of the rotation $\m R$ by $s$. \subsection{Alternative Minimal Solution (2D US only)} \label{sec:minimalSol2D} This problem can be minimally solved with 7 linear constraints (4 point-line correspondences). Since in this case there are only 10 linear parameters, we can generate a 3D linear solution subspace \begin{equation} \label{eq:Asubspace2D} \bar{\m A} = a \bar{\m A}_{a} + b \bar{\m A}_{b} + c \bar{\m A}_{c} \end{equation} Equation \ref{eq:similarityNonLin2D} is re-written as the following system \begin{align} \label{eq:polySystem2D} \begin{split} \tr{\v c_{1}} \v c_{1} - \tr{\v c_{2}} \v c_{2} &= 0 \\ \tr{\v c_{1}} \v c_{2} &= 0 \end{split} \end{align} Substituting equation \ref{eq:Asubspace2D} into equation \ref{eq:polySystem2D} we generate a system of 2 homogeneous quadratic equations in the 3 unknowns $a$, $b$, $c$. Using the same procedure as in section \ref{sec:minimalSol}, we use monomial multiplication and LU decomposition to re-write this system as \begin{equation} \begin{pmatrix} \m C_{2 \times 2} & \m B_{2 \times 4} \end{pmatrix} \begin{pmatrix} \v m_{C} \\ \v m_{B} \end{pmatrix} = 0 \end{equation} with \begin{align} \v m_{C} &= \tr{\begin{pmatrix} ab^{2} & b^{2} \end{pmatrix}} \\ \v m_{B} &= \tr{\begin{pmatrix} ab & b & a & 1 \end{pmatrix}} \end{align} We solve this system using the eigendecomposition of the action matrix, yielding up to 4 solutions. \section{Degenerate Cases} The degenerate configurations for both 3D US and 2D US calibration are closely related to the ones described for the pose and scale problem \cite{Ventura2014}. If the needle is moved without rotation (lines $\v L_{i}$ are parallel) there is an ambiguity in translation. This implies that fixing the needle and scanning with the US probe in different positions is a degenerate case; however, the inverse scenario of fixing the US probe while moving the needle is generally not degenerate. If the needle motion is a pure rotation around itself (lines $\v L_{i}$ intersect in a single point) there is an ambiguity in scale. This is analogous to pose estimation with monocular pinhole cameras. If the lines $\v L_{i}$ are co-planar, the point detections $\v X_{i}$ of a 2D US are co-linear and there is a rotation ambiguity around the axis defined by these points. This, however, is not generally a degenerate case in 3D US calibration unless only 2 line correspondences are available, since it then falls under one of the two previously mentioned cases. Therefore, the similarity between two sets of co-planar lines can only be estimated from a minimum of 3 correspondences. \section{Iterative refinement} The closed-form solutions can be refined with Levenberg-Marquardt iterative optimization \cite{Marquardt1963}; however, there is no consensus on the most appropriate residual metric for 3D line registration \cite{Bartoli2001}.
In all our experiments we perform iterative refinement by minimizing the Euclidean orthogonal distance between the 3D lines $\v L_{i}$ and the projected 3D points from the US image $\v P_{i} = \m A \v X_{i}$. The refined solution is parametrised by the translation $\v t$, 3 rotation parameters ($\m R$ is represented as a unit norm quaternion), and the scale factor $s$. For the 3D US the optimization problem is \begin{equation} \min_{\m R, \v t, s} \sum_{i = 1}^{N} d(\v L_{i},\v P_{i})^{2} + d(\v L_{i},\v P^{*}_{i})^{2} \end{equation} where $d(\v L_{i},\v P_{i})$ represents the Euclidean distance between line $\v L_{i}$ and point $\v P_{i}$. For the 2D US problem the last term of the minimization is ignored. \section{Experimental Results} We test the calibration algorithms both in simulation and with real data. For the 3D US calibration we test the linear solution from section \ref{sec:linearsol} (\textbf{3line3D}) and the minimal solution from section \ref{sec:minimalSol} (\textbf{2line3D}), while for the 2D US we test the linear solution from section \ref{sec:linearSol2D} (\textbf{5point2D}), the general minimal solution from section \ref{sec:minimalSol} (\textbf{2line3D}), and the simplified minimal solution from section \ref{sec:minimalSol2D} (\textbf{4point2D}). All algorithms are tested within a RANSAC framework \cite{Fischler1981} with an outlier threshold of 5 mm, followed by iterative refinement. \begin{figure}[t] \centering \subfigure[]{\includegraphics[width=0.4\textwidth]{./SimSetup-eps-converted-to} } \hspace{0.6cm} \subfigure[]{\includegraphics[width=0.17\textwidth]{./detection3DUS-eps-converted-to} } \hspace{0.6cm} \subfigure[]{\includegraphics[width=0.17\textwidth]{./detection2DUS-eps-converted-to} } \caption{The needle detections in 3D are obtained by sampling 2D slices: (a) Simulated set-up with a fixed 3D US scanner and random needle poses in green; (b) 3D US line acquisition; (c) 2D US point acquisition.} \label{fig:usDetections} \end{figure} With both synthetic and real data, the 3D US lines $\v B_{i}$ are obtained by sampling points from 2D slices with different angles (Fig. \ref{fig:usDetections}). This is a practical solution since we can directly define the points $\v X_{i}$ and $\v X^{*}_{i}$ to input in equations \ref{eq:linear3DUS1} to \ref{eq:linear3DUS4}. The needle tracking measurements (lines $\v L_{i}$) are converted to two intersecting planes $\v \Pi_{i}$ and $\v \Pi^{*}_{i}$ such that $\v \Pi_{i}$ intersects both $\v L_{i}$ and the origin of the US-attached marker reference frame $\m M$, and $\v \Pi_{i}^{*}$ is orthogonal to $\v \Pi_{i}$. Note that this approach degenerates when the needle is aligned with the origin of $\m M$, and therefore this should be taken into account when positioning the needle during calibration. All plots in this section are represented with the Matlab boxplot function: the central mark is the median, the box limits are the 25th and 75th percentiles, the whiskers are the maximum and minimum inliers, and individual crosses are outliers.
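For reference, the refinement step of the previous section can be sketched with SciPy's Levenberg-Marquardt solver (shown for the 2D US objective; the 3D US case adds the analogous $\v P^{*}_{i}$ term):

\begin{verbatim}
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

def point_line_dist(P, p0, d):
    # Orthogonal distance from 3D point P to the line through p0
    # with unit direction d.
    v = P - p0
    return np.linalg.norm(v - np.dot(v, d) * d)

def residuals(params, X, lines):
    # params = (quaternion(4), translation(3), scale); X: 3D points
    # in the US frame; lines: tracked needle lines as (p0, d) pairs.
    q, t, s = params[:4], params[4:7], params[7]
    R = Rotation.from_quat(q / np.linalg.norm(q)).as_matrix()
    return [point_line_dist(s * R @ x + t, p0, d)
            for x, (p0, d) in zip(X, lines)]

def refine(params0, X, lines):
    # params0: a closed-form solution converted to the 8-vector above
    return least_squares(residuals, params0, args=(X, lines),
                         method="lm").x
\end{verbatim}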
\subsection{Simulation} \begin{figure}[tp] \centering \subfigure[]{\includegraphics[width=0.31\textwidth]{./SimRotErr2D2-eps-converted-to} } \subfigure[]{\includegraphics[width=0.31\textwidth]{./SimTransErr2D2-eps-converted-to} } \subfigure[]{\includegraphics[width=0.31\textwidth]{./SimScaleErr2D2-eps-converted-to} } \subfigure{\includegraphics[width=0.5\textwidth]{./legend2D-eps-converted-to} } \caption{2D US error distributions with synthetic data} \label{fig:2DSimResults} \end{figure} \begin{figure}[t] \centering \subfigure[]{\includegraphics[width=0.31\textwidth]{./SimRotErr-eps-converted-to} } \subfigure[]{\includegraphics[width=0.31\textwidth]{./SimTransErr-eps-converted-to} } \subfigure[]{\includegraphics[width=0.31\textwidth]{./SimScaleErr-eps-converted-to} } \subfigure{\includegraphics[width=0.35\textwidth]{./legend3D-eps-converted-to} } \caption{3D US error distributions with synthetic data} \label{fig:3DSimResults} \end{figure} We simulate a 2D/3D US probe with a scale factor $s=0.24$ in a fixed position. 50 line segments of length 400 mm are generated at random poses within the field of view of the US. Gaussian noise is added to the US points along the 2D slices ($\sigma = 1$ pixel) and also to the endpoints of the line segments $\v L_{i}$ ($\sigma = 1$ mm) to simulate tracking error. In each trial, we calibrate the US by sampling $N$ random line segments. $N$ varies between 3 and 10 for the 3D US case, and between 5 and 10 for the 2D US case. For each value of $N$ we perform 100 trials. The calibration results are compared against the ground truth rotation ($\m R_{GT}$), translation ($\v t_{GT}$), and scale factor ($s_{GT}$), and are presented in figures \ref{fig:2DSimResults} and \ref{fig:3DSimResults}. The rotation error is measured as the angle displacement of the residual rotation $\tr{\m R} \m R_{GT}$, the translation error as $||\v t_{GT} - \v t||$, and the scale error as $|s_{GT} - s|$. As expected, the minimal solutions perform better than the linear solutions with a low number of input acquisitions and converge to the same result as the number of acquisitions grows. Also not surprisingly, 3D US performs better than 2D US for the same number of acquisitions, given that it has twice as many linear constraints. In the case of 2D US, the two alternative minimal solutions have similar performance. \subsection{Real Data} Our calibration method is tested using the set-up displayed in Fig. \ref{fig:2D3DCalibSetup}, which includes a GE Voluson E10 machine with an eM6C probe (3D US) and a 333 mm long metal needle. Both instruments are tracked by the infrared camera system Optitrack V120 Trio. Experiments were conducted in a container filled with water at room temperature. We use the same probe for both 2D US and 3D US data acquisition. For the 2D US we simply choose a single 2D slice from the 3D US volume at a specified angle. The needle is manually segmented as a point in each 2D slice. Unlike in the simulation experiment, in this calibration procedure both the needle and the 3D US probe are moved between acquisitions.
\begin{figure}[tp] \centering \subfigure[]{\includegraphics[width=0.48\textwidth]{./overviewPhoto.png} \label{fig:2D3DCalibSetup} } \hspace{0.1\textwidth} \begin{minipage}[b]{0.33\textwidth} \subfigure{ \includegraphics[width=1.0\textwidth]{./wirePhantom4.jpg} } \addtocounter{subfigure}{-1} \subfigure[]{ \includegraphics[width=1.0\textwidth]{./wireScan.png} \label{fig:ValidSetup} } \end{minipage} \caption{(a) Calibration set-up with a tracked 3D US probe and a tracked needle; (b) Validation using a cross wire pattern that defines a known 3D point in the tracker reference frame.} \end{figure} To validate the calibration accuracy we use an x-shaped wire phantom (Fig.\ \ref{fig:ValidSetup}) whose intersection can be measured as a single point in the US scan. We use this phantom to measure the projection reconstruction accuracy (PRA) of our calibration results, i.e.\ the difference in mm between the intersection point $\m A \v X$ according to the calibrated US measurement and the same point $\v P$ measured by the tip of the tracked needle. We performed 10 acquisitions of the wire phantom in order to cover different regions of the US scan. Figs. \ref{fig:3DPRA} and \ref{fig:2DPRA} display the distribution of PRA results for all trials. Each distribution contains 200 error measurements (20 trials $\times$ 10 phantom scans). In the 2D US results we only display the results for one of the minimal solutions since, as seen in simulation, the results from both approaches are very similar. \begin{figure}[t] \centering \begin{minipage}[b]{0.35\textwidth} \centering \subfigure{\includegraphics[width=1.0\textwidth]{./PRA-eps-converted-to} \label{fig:3DPRA} } \addtocounter{subfigure}{-1} \subfigure[]{\includegraphics[width=0.8\textwidth]{./legend3D-eps-converted-to} } \end{minipage} \begin{minipage}[b]{0.35\textwidth} \centering \subfigure{\includegraphics[width=1.0\textwidth]{./PRA2D-eps-converted-to} \label{fig:2DPRA} } \addtocounter{subfigure}{-1} \subfigure[]{\includegraphics[width=0.8\textwidth]{./legend2D2-eps-converted-to} } \end{minipage} \subfigure[]{\includegraphics[width=0.45\textwidth]{./linelineProject-eps-converted-to} } \subfigure[]{\includegraphics[width=0.42\textwidth]{./pointLineProject-eps-converted-to} } \caption{Validation results: (a) Projection reconstruction accuracy (PRA) with 3D US; (b) PRA with 2D US; (c) Sample registration result with 3D US projected lines in red and needle tracking measurements in green; (d) Sample registration result with 2D US projected points in red and needle tracking measurements in green.} \end{figure} The US calibration converges to a solution with an error between 2 and 3 mm within a total scanning radius of 120 mm. Since in each trial we select random needle poses, near-degenerate configurations can be chosen, resulting in outliers. This can be avoided in practice by scanning the needle in a wide region of the US volume, and by exploring all 6 degrees of freedom while moving the needle. Overall, the difference in accuracy between the linear and minimal solutions is even more pronounced than in simulation. \section{Conclusions} We propose minimal solutions to the similarity registration between two sets of 3D lines, and between co-planar points and 3D lines. These solutions are used to calibrate a US probe with a tracked line target, using both 3D and 2D data. This is useful in medical imaging to guide a biopsy needle during US-based interventions.
The method can easily be extended to additional US calibration problems using other types of phantoms; e.g.\ scanning a single plane target leads to the similarity registration between co-planar lines and 3D planes (2D US) or between two sets of 3D planes (3D US). In other computer vision domains this algorithm can potentially be used as an extension of the pose and scale problem to the alignment of line-based and/or plane-based SfM sequences. \bibliographystyle{splncs}
\section{Introduction} Higgs bundles were first studied by Nigel Hitchin in 1987, and appeared as solutions of the Yang--Mills self-duality equations on a Riemann surface \cite{N1}. Classically, a {\it Higgs bundle} on a compact Riemann surface $\Sigma$ of genus $g\geq 2$ is a pair $(E,\Phi)$ where $E$ is a holomorphic vector bundle on $\Sigma$, and the {\it Higgs field} $\Phi: E\rightarrow E\otimes K$ is a holomorphic section of ${\rm End}(E)\otimes K$, for $K:=T^*\Sigma$ the canonical bundle. Higgs bundles can also be defined for complex semisimple groups $G_{\mathbb{C}}$ and their real forms, and through stability conditions one can construct their moduli spaces $\mathcal{M}_{G_\mathbb{C}}$ (e.g.\ see \cite{N2}). A natural way of studying the moduli space of Higgs bundles is through the {\it Hitchin fibration}, sending the class of a Higgs bundle $(E,\Phi)$ to the coefficients of the characteristic polynomial $\det(\eta I -\Phi )$. The generic fibre is an abelian variety which can be seen through line bundles on an algebraic curve $S$, the {\it spectral curve} associated to the Higgs field as introduced in \cite{N2}. The \textit{spectral data} is then given by a line bundle on $S$ satisfying certain conditions. In the case of classical Higgs bundles, the smooth fibres are Jacobian varieties of $S$. The Hitchin fibration was defined for classical complex Lie groups in \cite[Section 5]{N2}, and following \cite[Section 7]{N5} one may consider Higgs bundles with real structure group $G$ as fixed point sets in the moduli space of Higgs bundles for the complexified group $G_\mathbb{C}$, thereby obtaining $G$-Higgs bundles as real points inside the Hitchin fibration (e.g.\ see \cite{N5, gothen,thesis} and references therein). We dedicate this short note to the study of the geometry of the moduli space of $SO(m,m+1)$-Higgs bundles inside ${\mathcal M}_{SO(2m+1,\mathbb{C})}$. In particular, whilst in this non-hermitian symmetric case there is no Toledo invariant, one can consider the Langlands dual set-up of $Sp(2m,\mathbb{R})$-Higgs bundles $(E=W\oplus W^*,\Phi')$ to understand the role of the symplectic Toledo invariant from the orthogonal perspective, as well as to construct the spectral data. By considering real Higgs bundles as fixed points of an involution (e.g.\ see \cite[Section 3.3.1]{thesis}), we see the moduli space of $SO(m,m+1)$-Higgs bundles $(V=V_+\oplus V_-, \Phi)$ inside the $SO(2m+1,\mathbb{C})$-Hitchin fibration. The characteristic polynomials of $SO(m,m+1)$ and $Sp(2m,\mathbb{R})$-Higgs fields define a spectral curve $\pi:S\rightarrow \Sigma$ in the total space of the canonical bundle $K$ whose equation is \begin{eqnarray}\eta^{2m}+a_1\eta^{2m-2}+\ldots+a_{m-1}\eta^2+a_m=0,\label{yo}\end{eqnarray} for $\eta$ the tautological section of $\pi^*K$ and $a_i\in H^0(\Sigma, K^{2i})$. This is a $2m$-fold covering of the Riemann surface, generically smooth and ramified over $4m(g-1)$ points, the zeros of $a_m$. In order to understand the topological invariants associated to $SO(m,m+1)$-Higgs bundles, one has to consider a subdivisor $D$ of the ramification divisor, over which a natural involution $\sigma:\eta\mapsto -\eta$ acts as $-1$, and whose degree we denote by $M$ following the notation of \cite{umm}.
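To fix ideas, consider the lowest rank case $m=1$ (an illustrative remark): equation \eqref{yo} reduces to \begin{eqnarray} \eta^2+a_1=0, \qquad a_1\in H^0(\Sigma, K^{2}),\nonumber \end{eqnarray} so $\pi:S\rightarrow\Sigma$ is the classical 2-fold spectral cover, ramified over the $\deg K^2=4(g-1)$ zeros of $a_1$, and the quotient $S/\sigma$ is $\Sigma$ itself. The invariant $M$ then records the (even) number of zeros of $a_1$ over which $\sigma$ acts as $-1$ on the line bundle, with $0\leq M\leq 4(g-1)$.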
The value of $M$ is closely related to the {\it Toledo invariant} (see \cite[Section 6]{classes}), and in particular one can deduce the following: \vspace{0.05 in} \noindent {\bf Theorem \ref{comp-sp}.} {\it Each even invariant $0<M\leq 4m(g-1)$ labels a component of the moduli space of $Sp(2m,\mathbb{R})$-Higgs bundles which intersects the nonsingular fibres of the Hitchin fibration for $Sp(2m,\mathbb{C})$-Higgs bundles; this intersection is given by a fibration of a $\mathbb{Z}_2$-vector space over the total space of a vector bundle on the symmetric product $S^M\Sigma$.} \vspace{0.05 in} In the case of orthogonal Higgs bundles, one has the following: \vspace{0.05 in} \noindent {\bf Proposition \ref{fibreSO}.} {\it The intersection of the moduli space ${\mathcal M}_{SO(m,m+1)}$ with the regular fibres of the $SO(2m+1,\mathbb{C})$-Hitchin fibration is given by two copies of } \begin{eqnarray} \{~L\in {\rm Prym}(S,S/\sigma)~:~L^2\cong {\mathcal O}~\}~/~ H^1(S/\sigma, \mathbb{Z}_2).\nonumber \end{eqnarray} Each of the two copies corresponds to whether the orthogonal bundle lifts to a spin bundle or not. Moreover, there is a decomposition of the torsion-2 points in the Prym variety ${\rm Prym}(S,S/\sigma)[2] \cong H^1(S/\sigma, \mathbb{Z}_2)\oplus \mathbb{Z}_2([a_m])^{ev}/b_0,$ where $\mathbb{Z}_2([a_m])^{ev}$ denotes the subdivisors of the divisor $[a_m]$ with an even number of $+1$'s, and $b_0:=(1,\ldots, 1)$. Thus the spectral data of an $SO(m,m+1)$-Higgs bundle is given, up to equivalence, by \begin{itemize} \item a line bundle ${\mathcal F}\in H^1(S/\sigma, \mathbb{Z}_2)$, and \item a divisor $D\in \mathbb{Z}_2([a_m])^{ev}/b_0$ of degree $M$. \end{itemize} Since $SO(m,m+1)$ retracts onto $S(O(m)\times O(m+1))$, an $SO(m,m+1)$-Higgs bundle $(V_{+}\oplus V_{-}, \Phi)$ carries three topological invariants: the Stiefel-Whitney classes $\omega_1(V_+)$ (equivalently, the class of $\det(V_+)$) and $\omega_2(V_\pm)$. Through a $K$-theoretic approach following the methods of \cite{slice,classes}, in Section \ref{section-so} we can further classify these invariants in terms of their spectral data: \vspace{0.05 in} \noindent {\bf Theorem \ref{teo2}.} {\it The Stiefel-Whitney classes of an $SO(m,m+1)$-Higgs bundle $(V=V_-\oplus V_+, \Phi)$ with spectral data $(S/\sigma, {\mathcal F}, D)$ are given by} \begin{eqnarray} \omega_1(V_+)&=&{\rm Nm}({\mathcal F}) \in H^1(\Sigma, \mathbb{Z}_2);\\ \omega_2(V_+)&=&\varphi_{ S /\sigma}({\mathcal F})+\varphi_\Sigma({\rm Nm}({\mathcal F})) \in \mathbb{Z}_2;\\ \omega_2(V_-)&=&\left\{\begin{array} {ccc} \varphi_{ S /\sigma}({\mathcal F})+\varphi_\Sigma({\rm Nm}({\mathcal F})) &{\rm if} &\omega_2(V)=0\\ \varphi_{ S /\sigma}({\mathcal F})+\varphi_\Sigma({\rm Nm}({\mathcal F})) +1&{\rm if} &\omega_2(V)=1 \end{array}\right. \end{eqnarray} {\it for $\varphi_\Sigma$ and $\varphi_{ S/\sigma}$ the analytic mod 2 indices of the curves, and ${\rm Nm}({\mathcal F})$ the image of ${\mathcal F}$ under the norm map to $\Sigma$. } \vspace{0.05 in} By considering ${\mathcal F}\otimes K_{\bar S}$ as a new spin structure, one can see $\omega_{2}(V_{\pm})$ purely in terms of spin structures in Corollary \ref{corro}. Moreover, by analysing the spectral data through the induced 2-fold cover $\rho : S\rightarrow \bar S:=S/\sigma$, and recalling that the orthogonal vector bundle $V_+\oplus V_-$ is recovered as an extension defined through the divisor $D$ (see \cite[Section 4.2]{N3}), one obtains the number of points in each of the regular fibres of the Hitchin fibration for a fixed invariant $M$.
\vspace{0.05 in} \noindent {\bf Proposition \ref{numbers}.} {\it The number of points in a regular fibre of the $SO(2m+1,\mathbb{C})$ Hitchin fibration corresponding to $SO(m,m+1)$-Higgs bundles with even invariant $M$ is} \begin{small}$\left(\begin{array}{c} 4m(g-1)\\ M \end{array}\right)$.\end{small} \vspace{0.05 in} By considering the parametrisation of the moduli space through spectral data, we obtain a natural grading of the moduli space of $SO(m,m+1)$-Higgs bundles leading to a description of Zariski dense open sets in each component: \vspace{0.05 in} \noindent {\bf Theorem \ref{teo1}.} {\it Each fixed even invariant $0<M\leq 4m(g-1)$ labels a component of the moduli space of $SO(m,m+1)$-Higgs bundles whose intersection with the regular fibres of the Hitchin fibration is a covering of a vector space over the symmetric product $S^M\Sigma$.} \vspace{0.05 in} It is important to note that these components will possibly (and often do) meet over the discriminant locus of the Hitchin fibration, and thus one needs to do further analysis to understand the connectivity of the moduli space. An example of how to see the intersection of the components through the monodromy of the associated {\it Gauss-Manin connection} for $SO(2,3)$-Higgs bundles is discussed in \cite[Section 6.3]{mono}. A geometric description of the above covering is given in Section \ref{geometrytori}, recovering some of the results appearing in \cite[Section 6.4]{brian}. Moreover, in the so-called {\it maximal Toledo invariant case} on the symplectic side, which corresponds to $M=0$, one can deduce that when $m$ is odd the intersection of ${\mathcal M}_{SO(m,m+1)}$ with the smooth fibres of the $SO(2m+1,\mathbb{C})$ Hitchin fibration is given by $2^{2g}$ copies of ${\rm Prym}(\bar S,\Sigma)$ over a vector space. The moduli space of $SO(m,m+1)$-Higgs bundles considered in this paper is an example of what is known as a $(B,A,A)$-brane in the moduli space ${\mathcal M}_{SO(2m+1,\mathbb{C})}$ of complex Higgs bundles. As such, these branes have dual $(B,B,B)$-branes in the dual moduli space ${\mathcal M}_{Sp(2m,\mathbb{C})}$ (see \cite[Section 12]{LPS_Kap}). In \cite[Section 7]{slice} it was conjectured that the support of this dual brane should be the whole moduli space ${\mathcal M}_{Sp(2m,\mathbb{C})}$ of symplectic Higgs bundles. We conclude this note with some further comments on this duality in Section \ref{last}, as well as on the relation between the Hitchin components in both split symplectic and orthogonal $(B,A,A)$-branes in Langlands dual groups, and some implications of the geometric description of the spectral data given in this paper. \subsubsection*{Acknowledgements} This research was inspired by fruitful conversations with David Baraglia, \linebreak Steve Bradlow and Nigel Hitchin, and the author is also thankful for discussions with Ben Davidson and Alan Thompson. The paper was written with partial support of the U.S. National Science Foundation grants DMS 1107452, 1107263, 1107367: the GEAR Network for short research visits. The work of the author is also supported by NSF grant DMS-1509693.
\section{The Hitchin fibration}\label{fibration} Recall from \cite{N2} that an $Sp(2m,\mathbb{C})$-Higgs bundle is a pair $(E,\Phi')$ for $E$ a rank $2m$ vector bundle with a symplectic form $\omega(~,~)$, and the Higgs field $\Phi'\in H^{0}(\Sigma, {\rm End}(E)\otimes K)$ satisfying $\omega(\Phi' v,w)=-\omega(v,\Phi' w).$ Similarly, an $SO(2m+1,\mathbb{C})$-Higgs bundle is a pair $(V,\Phi)$ for $V$ a holomorphic vector bundle of rank $2m+1$ with a non-degenerate symmetric bilinear form $(v,w)$, and $\Phi$ a Higgs field in $H^{0}(\Sigma,{\rm End}_{0}(V)\otimes K)$ which satisfies $(\Phi v,w)=-(v,\Phi w).$ The spectral curves defined by $SO(2m+1,\mathbb{C})$-Higgs bundles and $Sp(2m,\mathbb{C})$-Higgs bundles have similar equations (e.g.\ see \cite[Sections 3--4]{N3}), and are given by a $2m$-fold cover $\pi:S\rightarrow \Sigma$ in the total space of $K$ whose equation is $\eta^{2m}+a_{1}\eta^{2m-2}+\ldots +a_{m-1}\eta^{2}+a_{m}=0$ as in \eqref{yo}. The curve $S$ has an involution $\sigma$ which acts as $\sigma(\eta)=-\eta$, and thus we may consider the quotient curve $ \overline{S}:=S/\sigma$ in the total space of $K^{2}$, for which $\rho: S\rightarrow \bar S$ is a double cover: \begin{eqnarray} \xymatrix{ S\ar[rd]_{2m:1}^{\pi}\ar[rr]^{2:1}_{\rho}&&\bar S\ar[ld]_{\bar \pi}^{m:1}\\ &\Sigma &} \end{eqnarray} The covers $S$ and $\bar S$ have, respectively, genus $ g_{S}= 1+4m^{2}(g-1)$ and $g_{\bar S}=(2m^2-m)(g-1)+1$. Moreover, by the adjunction formula, their canonical bundles can be written, respectively, as $K_S=\pi^*K^{2m}$ and $K_{\bar S}=\bar \pi^*K^{2m-1}. $ As shown in \cite{N2}, the Hitchin fibration for both moduli spaces ${\mathcal M}_{SO(2m+1,\mathbb{C})}$ and ${\mathcal M}_{Sp(2m,\mathbb{C})}$ is given over $\mathcal{A}=\bigoplus_{i=1}^{m}H^0(\Sigma, K^{2i}).$ From \cite[Section 3]{N3}, the generic fibres for $\mathcal{M}_{Sp(2m,\mathbb{C})}$ are given by \begin{eqnarray} {\rm Prym}(S,\bar S),\label{fibresp} \end{eqnarray} and from \cite[Section 4]{N3}, the generic fibres for $\mathcal{M}_{SO(2m+1,\mathbb{C})}$ are given by (two copies of) \begin{eqnarray} {\rm Prym}(S,\bar S)/\rho^*H^1(\bar S,\mathbb{Z}_2).\label{fibreso} \end{eqnarray} In what follows we shall study the components of the moduli space of Higgs bundles for split real forms by considering, from \cite[Theorem 4.12]{thesis}, points of order two in the generic fibres \eqref{fibresp} and \eqref{fibreso}. \section{$Sp(2m,\mathbb{R})$-Higgs bundles}\label{section-sp} We shall now consider $Sp(2m,\mathbb{R})$-Higgs bundles, which from \cite[Theorem 4.12]{thesis} can be seen in the generic fibres of the Hitchin fibration as points of order two in \eqref{fibresp}, and are given by $Sp(2m,\mathbb{C})$-Higgs bundles which decompose as $(E=W\oplus W^*, \Phi')$. Fixing a choice of $\Theta$-characteristic $L_0:=K^{1/2}$, it is shown in \cite{N3} that the vector bundle $E$ is recovered as $\pi_*U$ for $U:=L \otimes \pi^*K^{(2m-1)/2}. $ Note that the condition $L\in{\rm Prym}(S,\bar S)[2]:=\{ L\in {\rm Prym}(S,\bar S)~:~ L^2\cong {\mathcal O}\}$ is equivalent to requiring $U^2\cong K_S\pi^*K^*.$ Since points in ${\rm Prym}(S,\bar S)[2]$ are given by line bundles $L$ on $S$ for which $\sigma^*L\cong L^*\cong L$, following \cite[Theorem 3.5]{umm} they are classified by the action of the involution $\sigma$ on $L$ over its fixed point set (i.e., the ramification divisor of $S$).
The involution $\sigma$ acts as $\pm 1$ over the points of the ramification divisor $[a_m]$, acting as $-1$ over some subset of $M$ of them, and the number of Higgs bundles appearing in each fibre for a fixed invariant $M$ is described by Hitchin in \cite[Section 6]{classes}. Higgs bundles for the real symplectic group have an associated topological invariant, the Toledo invariant (e.g. see \cite{gothen}), defined as $ | \tau(W\oplus W^*, \Phi')|:=|c_1(W)|, $ and satisfy a Milnor-Wood type inequality $|c_1(W)|\leq m(g-1)$. Moreover, from \cite[Section 6]{classes} the class can be expressed as \begin{eqnarray} w_1(W):=c_1(W)=-\frac{M}{2}+m(g-1),\nonumber\label{degree} \end{eqnarray} and its mod 2 value defines the invariant $ c_1(W)~ ({\rm mod} ~2). $ Note that since the invariant $M$ is even, within the moduli space of $Sp(2m,\mathbb{R})$-Higgs bundles, the value of $c_1(W)~ ({\rm mod} ~2)$ differentiates components depending on the values of $M~({\rm mod~4})$. \begin{remark} For ${ Sp}(2m,\mathbb{C})$-Higgs bundles, the invariant $M$ appears as $n_-$ in \cite[Section 4]{N3}. In the case of $Sp(2m,\mathbb{R})$-Higgs bundles, it is the invariant $l$ of \cite[Section 6]{classes}. \end{remark} From \cite[Section 6]{classes}, the number of elements in ${\rm Prym}(S,\bar S)[2]$ corresponding to $M$ is \begin{eqnarray} \left(\begin{array}{c} 4m(g-1)\\ M \end{array}\right)\times 2^{2g_{\bar S}}.\nonumber \end{eqnarray} In order to describe the geometry of the components given by these Higgs bundles, recall from \cite[Proposition 4.15]{mono} that a convenient splitting can be chosen for the short exact sequence \begin{eqnarray} 0\rightarrow H^1(\bar S, \mathbb{Z}_2) \xrightarrow{\rho^*} {\rm Prym}(S,\bar S)[2]\rightarrow \mathbb{Z}_2([a_m])^{ev}/b_0\rightarrow 0,\nonumber \end{eqnarray} where $b_0$ is the divisor in $\Sigma$ which has $+1$ for all zeros of $a_m$. Then, one can write \begin{eqnarray} {\rm Prym}(S,\bar S)[2] \cong H^1(\bar S, \mathbb{Z}_2)\oplus \mathbb{Z}_2([a_m])^{ev}/b_0. \label{fibrespreal}\label{am} \end{eqnarray} Hence, over each point in the base $\mathcal{A}$, one has the set of divisors $D$ of degree $M$ over which the involution acts as $-1$ (as in \cite[Section 3]{umm}) given by a point in $ \mathbb{Z}_2([a_m])^{ev}/b_0$, together with the $\mathbb{Z}_2$ vector space $H^1(\bar S, \mathbb{Z}_2)$ of dimension $2g_{\bar S}= 2m(2m-1)(g-1)+2. $ \begin{lemma}\label{lemmaH} The space $H^1(\bar S, \mathbb{Z}_2)$ is given by \begin{eqnarray} {\rm Prym}(\bar S,\Sigma)[2]\oplus \bar\pi^*H^1(\Sigma, \mathbb{Z}_2)~&{\rm for~}& m~{\rm odd},\label{seq-3} \\ \bar\pi^*H^1(\Sigma, \mathbb{Z}_2)\oplus ({\rm Prym}(\bar S,\Sigma)[2]/ \bar\pi^*H^1(\Sigma, \mathbb{Z}_2))\oplus \bar\pi^*H^1(\Sigma, \mathbb{Z}_2)~&{\rm for~}& m~{\rm even}.\label{seq-4} \end{eqnarray} \end{lemma} \begin{proof} In order to understand the space $H^1(\bar S, \mathbb{Z}_2)$ one needs to take into account the parity of $m$, since $\bar S$ is an $m$-fold cover of the compact Riemann surface $\Sigma$. Considering the Norm map \begin{eqnarray} 0\rightarrow {\rm Prym}(\bar S,\Sigma)[2]\rightarrow H^1(\bar S, \mathbb{Z}_2) \xrightarrow{Nm} H^1(\Sigma, \mathbb{Z}_2)\rightarrow 0, \label{seq-1} \end{eqnarray} note that for a line bundle $L$ on $\Sigma$ one has that $Nm(\bar\pi^*L)=mL$, and hence when $m$ is odd the pullback $\bar \pi^*$ gives a splitting of the short exact sequence \eqref{seq-1}.
Therefore, in this case one has \begin{eqnarray} H^1(\bar S, \mathbb{Z}_2)\cong {\rm Prym}(\bar S,\Sigma)[2]\oplus \bar \pi^*H^1(\Sigma, \mathbb{Z}_2).\nonumber \end{eqnarray} When $m$ is even, the image of $\bar \pi^*: H^1(\Sigma, \mathbb{Z}_2)\rightarrow H^1(\bar S, \mathbb{Z}_2)$ is contained in ${\rm Prym}(\bar S,\Sigma)[2]$, thus giving a filtration $ \bar\pi^*H^1(\Sigma, \mathbb{Z}_2)\subset {\rm Prym}(\bar S,\Sigma)[2]\subset H^1(\bar S, \mathbb{Z}_2),$ which induces the splitting (via isomorphism theorems for $Nm$) \begin{eqnarray} H^1(\bar S, \mathbb{Z}_2) &\cong & \bar\pi^*H^1(\Sigma, \mathbb{Z}_2)\oplus ({\rm Prym}(\bar S,\Sigma)[2]/ \bar\pi^*H^1(\Sigma, \mathbb{Z}_2))\oplus \bar\pi^*H^1(\Sigma, \mathbb{Z}_2),\nonumber \end{eqnarray} and thus the lemma follows.\end{proof} From the above, one has the following description of Zariski open sets in components of the moduli spaces of $Sp(2m,\mathbb{R})$-Higgs bundles which intersect the smooth fibres of the $Sp(2m,\mathbb{C})$ Hitchin fibration: \begin{theorem}\label{comp-sp} Each even invariant $0<M\leq 4m(g-1)$ labels one component of the moduli space of $Sp(2m,\mathbb{R})$-Higgs bundles which intersects the nonsingular fibres of the Hitchin fibration for $Sp(2m,\mathbb{C})$-Higgs bundles. This component is given by a fibration of a $\mathbb{Z}_2$-vector space over the total space of a vector bundle on the symmetric product $S^M\Sigma$.\end{theorem} \begin{proof} We shall follow the proof of \cite[Theorem 4.2]{umm}, taking into consideration the structure of the intersection of $\mathcal{M}_{Sp(2m,\mathbb{R})}$ with the generic fibres of the Hitchin fibration given in \eqref{fibrespreal}. The invariant $M$ gives the degree of a divisor $D$ which corresponds to the choice of an element in $ \mathbb{Z}_2([a_m])^{ev}/b_0. $ Hence, it can be seen as a point in $S^M\Sigma$, and the choice of the differential $a_m$ is then given by a section in $H^0(\Sigma, K^{2m}(-D))$, or equivalently, a vector bundle over the symmetric product. Then, in order to define the spectral curve in the smooth loci of the Hitchin base one needs to make a choice of the remaining differentials, and thus a choice of a point in $\bigoplus_{i=1}^{m-1}H^0(\Sigma, K^{2i}).$ Finally, the Higgs bundles in the component are obtained by the remaining data in the fibre, which is a point in the $\mathbb{Z}_2$ vector space $H^1(\bar S, \mathbb{Z}_2)$.\end{proof} From the above theorem one can see that when $M=0$, the space is given by a $2^{2g_{\bar S}}$ cover of a vector space over a point, and it is the monodromy action which needs to be considered from this perspective in order to deduce the connectivity of this cover - this study, for low rank, can be found in \cite{mono}. In particular, for $Sp(4,\mathbb{R})$ it was shown by Gothen in \cite{gothen} that $M=0$ labels several connected components, and it is shown in \cite[Section 6.3]{mono} how these components appear as orbits of the monodromy action of the corresponding Gauss-Manin connection on the $Sp(4,\mathbb{C})$ Hitchin fibration. As in the case of the $U(m,m)$-Higgs bundles of \cite{umm}, the invariant and anti-invariant sections of $L\in {\rm Prym}(S,\bar S)[2]$ decompose the direct image bundle $\rho_*U:= U_+\oplus U_-$, leading to the symplectic decomposition $E=W\oplus W^*$ as $W:=\bar \pi_*U_+$ and $W^*:=\bar \pi_*U_-.$ \begin{lemma}\label{lemmaU} Let $D\in \mathbb{Z}_2[a_m]/b_0$ be the divisor of degree $M$ on which the involution $\sigma$ acts as $-1$.
Then, there exists a line bundle $L_0\in {\rm Prym}(\bar S, \Sigma)$ such that \begin{eqnarray} U_-= U_+ \otimes {\mathcal O}(D)\otimes K^*\otimes L_0 . \end{eqnarray} \end{lemma} \begin{proof}The symplectic structure of $E$ is obtained through relative duality (e.g. see \cite[Section 4]{classes}), and in particular it implies that $\bar \pi_*U_+ \cong (\bar \pi_* U_-)^*.$ Hence, as in \cite[Section 5]{umm}, the line bundles $U_\pm$ are not independent. Indeed, from \cite[Eq.~(15)]{umm} one has that \begin{eqnarray} {\rm Nm}(U_+) = -{\rm Nm}(U_{-}) + 2m(m - 1)K. \end{eqnarray} The above can also be written in terms of the divisor $D\in \mathbb{Z}_2[a_m]/b_0$ of degree $M$ on which the involution $\sigma$ acts as $-1$, as $ D = {\rm Nm}(U_+^*) + {\rm Nm}(U_-) + mK. $ Therefore, viewing $D$ as a divisor on $\bar S$ (since it is a subset of the ramification divisor of the $m$-fold cover $\bar \pi:\bar S\rightarrow \Sigma$), up to a line bundle $L_0\in {\rm Prym}(\bar S, \Sigma)$, the result follows. \end{proof} From \cite[Eq. (9) \& (10)]{umm} one can write the degrees of the line bundles $U_{\pm}$ in terms of $M$. In particular, recalling that $U=L \otimes K^{(2m-1)/2}$ one has that $ \deg(U)= 2m(2m-1)(g-1), $ and therefore the degrees of the invariant and anti-invariant line bundles on $\bar S$ can be expressed as \begin{eqnarray} \deg(U_+)&=&m(2m-1)(g-1)-\frac{M}{2},\label{U+}\\ \deg(U_-)&=& m(2m-3)(g-1)+\frac{M}{2}. \label{U-} \end{eqnarray} Equivalently, from \cite[Eq. (9)]{umm} one can write the degree of the rank $m$ bundle $W$ as \begin{eqnarray} \deg(W)= \frac{\deg(U)}{2}-\frac{M}{2} -(2m^2-2m)(g-1) =-\frac{M}{2} +m(g-1),\nonumber \end{eqnarray} recovering the result in \cite[Eq.~(7)]{N2}. The case of $M=0$ corresponds to the maximal Toledo invariant setting, for which it is known that within the covering space one has $2^{2g}$ connected components, the so-called {\it Hitchin components}, parametrising rich geometric structures. Moreover, when $m=2$ the $\mathbb{Z}_2$ vector space $H^1(\bar S, \mathbb{Z}_2)$ can be understood in terms of line bundles of order two over the Riemann surface $\Sigma$, and we shall comment on this case in Section \ref{last}. \section{$SO(m,m+1)$-Higgs bundles}\label{section-so} An $SO(m,m+1)$-Higgs bundle is a pair $(V,\Phi)$ where $V=V_{+}\oplus V_{-}$ for $V_{\pm}$ complex vector bundles with orthogonal structures, of rank $m$ and $m+1$ respectively. The Higgs field is a section in $H^{0}(\Sigma, ({\rm Hom}(V_{-}, V_{+})\oplus {\rm Hom}(V_{+},V_{-}))\otimes K)$ given by \[\Phi=\left(\begin{array}{cc} 0&\beta\\ \gamma&0 \end{array} \right)~{~\rm for~}~\gamma \equiv -\beta^{\rm T},\] where $\beta^{\rm T}$ is the orthogonal transpose of $\beta$. As mentioned previously, since $SO(m,m+1)$ retracts onto $S(O(m)\times O(m+1))$, the Higgs bundles $(V_{+}\oplus V_{-}, \Phi)$ carry three topological invariants, the Stiefel-Whitney classes $\omega_1(V_+)$ and $\omega_2(V_\pm)$ (note that $\omega_1(V_-)=\omega_1(V_+)$ since $\det V_- =\det V_+^*$). By further requiring the Higgs bundle to be in the connected component of the identity, i.e. taking $SO(m,m+1)_0$-Higgs bundles, one would obtain pairs with $\omega_1(V_{\pm})={\mathcal O}$ (as considered, for instance, in \cite{aparicio}).
In what follows we shall give a geometric description of these topological invariants, relate them to the ones for $Sp(2m,\mathbb{R})$-Higgs bundles obtained in \cite[Section 6]{classes}, and finally use this description to characterise Zariski dense open sets in each connected component of the moduli space of $SO(m,m+1)$-Higgs bundles. On several occasions, it will be important to distinguish when $m$ is even or odd, and we shall do so within this section. \subsection{KO-theory of $\Sigma$} In order to discuss the topology of orthogonal bundles on the surface $\Sigma$ we use KO-theory. For this, we shall recall some results from \cite[Section 6]{classes} and \cite{three}. The Stiefel-Whitney classes of $V_\pm$ can be seen as classes $[V_{\pm}]\in KO(\Sigma)$ where \begin{eqnarray} [V_{\pm}]\in KO(\Sigma)&\simeq& \mathbb{Z} \oplus H^{1}(\Sigma,\mathbb{Z}_2)\oplus \mathbb{Z}_2\nonumber\\ V_{\pm}&\mapsto&(rk(V_{\pm}), \omega_1(V_{\pm}), \omega_2(V_{\pm})).\nonumber \end{eqnarray} Taking the map given by the total Stiefel-Whitney class $\omega = 1 + \omega_1 + \omega_2 $ to the multiplicative group $\mathbb{Z} \oplus H^{1}(\Sigma,\mathbb{Z}_2)\oplus \mathbb{Z}_2$, we consider the generators given by holomorphic line bundles $L$ such that $L^2 \simeq {\mathcal O}$, and the class $\Omega = {\mathcal O}_p + {\mathcal O}_p^* - 2$ where ${\mathcal O}_p$ is the holomorphic line bundle given by a point $p \in \Sigma$. Then, for $\alpha(x)$ the class of a line bundle $x\in H^1(\Sigma,\mathbb{Z}_2)$ and $(x,y)$ the intersection form, one has $\alpha(x+y)=\alpha(x)+\alpha(y)-1+(x,y)\Omega.$ As in \cite[Section 5]{classes}, the isomorphism between the additive group $\widetilde{KO}(\Sigma)$ and the multiplicative group $KO(\Sigma)$ is determined by the relations \begin{eqnarray} \omega_1(\alpha(x))=x~,~ \omega_1(\Omega)=0 ~{~\rm ~and~}~ \omega_2(\Omega)=c_1({\mathcal O}_p)~({\rm mod}~2)~=[\Sigma]\in H^{2}(\Sigma,\mathbb{Z}_2). \nonumber \end{eqnarray} With this notation, the classes $[V_{\pm}]$ satisfy $ [V_\pm ] = rk(V_\pm) - 1 + \alpha(\omega_1(V_\pm )) + \omega_2(V_{\pm})\Omega. $ Choosing a $\Theta$ characteristic $K^{1/2}$, the classes $[V_{\pm}]$ have an associated analytic mod 2 index \begin{eqnarray}\label{hola} \varphi_{\Sigma}(V_{\pm} ) = \dim H^0(\Sigma, V_\pm \otimes K^{1/2}) ~({\rm mod} ~2), \end{eqnarray} and the characteristic class $\omega_2$ is independent of which spin structure $K^{1/2}$ is chosen. It follows from \cite[Theorem 1]{classes} that the classes $\omega_2(V_{\pm}) $ satisfy \begin{eqnarray}\omega_2(V_{\pm})= \varphi_{\Sigma}(V_{\pm})+\varphi_{\Sigma}(\det(V_{\pm})).\label{algo}\end{eqnarray} Moreover, $\varphi_{\Sigma}(\Omega)=1$ and the map can be seen as the map to a point \[\varphi_{\Sigma}:KO(\Sigma)\rightarrow KO^{-2}(pt)\cong \mathbb{Z}_{2}.\] Since we are interested in understanding Higgs bundles through their spectral data, we note that as in \cite[Section 5]{classes}, the spin structures together with the covers $\pi:S\rightarrow \Sigma$ and $\bar \pi:\bar S\rightarrow \Sigma$ define push forward maps $KO(S)\rightarrow KO(\Sigma)$ and $ KO(\bar S)\rightarrow KO(\Sigma)$. \subsection{Spectral data for $SO(m,m+1)$-Higgs bundles} \label{some} In order to give a geometric description of the characteristic classes, we shall define here the spectral data associated to $SO(m,m+1)$-Higgs bundles.
One should note that since $SO(m,m+1)$-Higgs bundles lie completely inside the singular fibres of the $SL(2m+1,\mathbb{C})$ Hitchin fibration, the analysis done in \cite[Section 5]{classes} cannot be directly applied. In this case we consider points of order two in the fibres \eqref{fibreso} of the $SO(2m+1,\mathbb{C})$-Hitchin fibration, which form two copies of the space \begin{eqnarray} {\rm Prym}(S,\bar S)[2]/\rho^*H^1(\bar S, \mathbb{Z}_2),\label{quotient} \end{eqnarray} a $\mathbb{Z}_2$ vector space of dimension $4m(g-1)-2.$ Moreover, the points in $\rho^*H^1(\bar S, \mathbb{Z}_2)$ are precisely those line bundles in ${\rm Prym}(S,\bar S)[2]$ with trivial action of $\sigma$ at all fixed points, i.e., with invariant $M=0$. From \cite[Theorem 4.12]{thesis} together with the structure of $\rho^*H^1(\bar S, \mathbb{Z}_2)$ from Lemma \ref{lemmaH}, the fibres \eqref{quotient} can be further described as follows. \begin{proposition}\label{fibreSO} The intersection of the moduli space ${\mathcal M}_{SO(m,m+1)}$ with the regular fibres of the $SO(2m+1,\mathbb{C})$-Hitchin fibration is given by two copies of ${\rm Prym}(S,\bar S)[2]~/~\rho^*H^1(\bar S, \mathbb{Z}_2) $ where the $\mathbb{Z}_2$ space $H^1(\bar S, \mathbb{Z}_2) $ is given by \begin{eqnarray} {\rm Prym}(\bar S,\Sigma)[2]\oplus \bar\pi^*H^1(\Sigma, \mathbb{Z}_2)~&{\rm for~}& m~{\rm odd}, \\ \bar\pi^*H^1(\Sigma, \mathbb{Z}_2)\oplus ({\rm Prym}(\bar S,\Sigma)[2]/ \bar\pi^*H^1(\Sigma, \mathbb{Z}_2))\oplus \bar\pi^*H^1(\Sigma, \mathbb{Z}_2)~&{\rm for~}& m~{\rm even}. \end{eqnarray} \end{proposition} From Proposition \ref{fibreSO}, the spectral data associated to an $SO(m,m+1)$-Higgs bundle, up to equivalences by \eqref{seq-3}-\eqref{seq-4}, is given by the intermediate spectral curve $\bar S$ together with a line bundle ${\mathcal F}\in H^1(\bar S, \mathbb{Z}_2)$, and a divisor $D\in \mathbb{Z}_2([a_m])^{ev}/b_0$ of degree $M$. \begin{remark}It is interesting to note that when $m=2$ the middle term in \eqref{seq-4} gives in fact the spectral data for a $K^2$-twisted $PGL(2,\mathbb{R})$-Higgs bundle. Moreover, the component $ {\rm Prym}(\bar S,\Sigma)[2]$ gives the spectral data for $K^2$-twisted $SL(m,\mathbb{R})$-Higgs bundles. \end{remark} In order to recover $SO(m,m+1)$-Higgs bundles from the above spectral data, we shall recall the relation between symplectic and orthogonal Higgs bundles as described in \cite[Section 4]{N3}. As mentioned in Section \ref{section-sp}, the line bundle $L\in {\rm Prym}(S,\bar S)[2]$ defines a symplectic vector bundle as $E:=\pi_*U$, for $U=L\otimes K^{1/2}_S\otimes \pi^* K^{-1/2}. $ Then, from \cite[Eq.~(7)]{N3} the orthogonal bundle $V$ is recovered as an extension \begin{eqnarray} 0\rightarrow E\otimes K^{-1/2}\rightarrow V\rightarrow K^{m}\rightarrow 0,\label{extO} \end{eqnarray} and therefore near the divisor defined by the section $a_m$, the orthogonal bundle $V$ of the $SO(m,m+1)$-Higgs pair $(V,\Phi)$ is recovered as $ V:=(E\otimes K^{-1/2})\oplus K^m.$ From \cite[Section 4.1]{N3}, the rank $2m+1$ vector bundle $V$ has trivial determinant and a nondegenerate symmetric bilinear form $g(v,w)$ for which $g(\Phi v,w)+g(v,\Phi w)=0$, related to the symplectic form on $E$.
Indeed, by considering the Higgs field $\Phi$ on $V/K^{-m}$, one has a non-degenerate skew form on $V/K^{-m}$, and by choosing a square root $K^{1/2}$, one obtains a skew form on $E=V/K^{-m}\otimes K^{-1/2}$ which is generically non-degenerate: $\omega(v,w)=g(\Phi v, w).$ Moreover, the extension class in \eqref{extO} can be seen as a choice of trivialization of the line bundle $L\in {\rm Prym}(S,\bar S)$ which depends on the action of the involution $\sigma$, that is, on the divisor \linebreak$D\in \mathbb{Z}_2[a_m]/b_0$ (see \cite[Section 4.3]{N3}). The orthogonal structure induced on the rank $m$ and $m+1$ vector bundles $V_+\oplus V_-$ obtained through the spectral data in the fibre \eqref{quotient} can be understood in terms of a decomposition of the symplectic bundle $E:=E_+\oplus E_-$, through which locally one has \begin{eqnarray} V_-&=&E_-\otimes K^{-1/2}\oplus K^m,\label{bundlev-}\\ V_+&=& E_+\otimes K^{-1/2}.\label{bundlev+} \end{eqnarray} One should note that it is not the symplectic decomposition $E=W\oplus W^*$ which leads to the decomposition $E=E_+\oplus E_-$ on the orthogonal side. This becomes evident, for instance, by considering the Hitchin components for both groups, and it is described in Section \ref{hitchin}. Furthermore, since $V_\pm$ form part of $GL(m,\mathbb{C})$ and $GL(m+1,\mathbb{C})$ Higgs bundles, from \cite{N2} and \cite{bnr} there is a line bundle on $\bar S$ whose direct image gives $V_+$ on $\Sigma$. Adopting the notation of \cite[Section 5]{classes} we define $\pi_!$ and $\bar \pi_!$ by \begin{eqnarray} \pi_!({\mathcal L})=\pi_*({\mathcal L} \otimes K^{1/2}_S \otimes \pi^*K^{-1/2}),~{\rm and ~} \bar\pi_!(\bar{\mathcal L})=\bar\pi_*(\bar{\mathcal L} \otimes K^{1/2}_{\bar S} \otimes \bar\pi^*K^{-1/2}),\label{orthogonal} \end{eqnarray} for ${\mathcal L}$ and $\bar {\mathcal L}$ line bundles on $S$ and $\bar S$, respectively. Then, as seen in Section \ref{section-sp}, the symplectic vector bundle $E$ is obtained as $E:= \pi_!(L)$ for $L\in {\rm Prym}(S,\bar S)$. When ${\mathcal L}^2\cong {\mathcal O}$ and $\bar {\mathcal L}^2\cong {\mathcal O} $, the bundles $\pi_!({\mathcal L})$ and $\bar \pi_!(\bar {\mathcal L})$ acquire orthogonal structures by relative duality, as shown in \cite[Section 4]{classes}. Hence, since $V_+$ has an orthogonal structure, following \cite[Section 4]{classes} and \cite{bnr} for $K^2$-twisted Higgs bundles, the vector bundle $V_+$ is obtained, for some ${\mathcal F}\in H^1(\bar S, \mathbb{Z}_2), $ as \begin{eqnarray} V_+= \bar \pi_!({\mathcal F}). \end{eqnarray} \begin{lemma}\label{lemmaF} For ${\mathcal F}\in H^1(\bar S, \mathbb{Z}_2)$, one has $\det (\bar \pi_!({\mathcal F}))={\rm Nm}( {\mathcal F})$. \end{lemma} \begin{proof} The determinant bundle of $\bar \pi_!({\mathcal F})$ can be obtained through \cite[Section 4]{bnr}, leading to $\det (\bar \pi_!({\mathcal F}))= {\rm Nm}({\mathcal F} \otimes K^{1/2}_{\bar S} \otimes \bar \pi^*K^{-(2m-1)/2} ) = {\rm Nm}( {\mathcal F}).$ \end{proof} In order to understand how the other orthogonal bundle $V_-$ is reconstructed, we shall now give a construction of $E_-$ via the spectral data ${\mathcal F}$ and $D$ modulo \eqref{seq-3}-\eqref{seq-4} (which in particular implies modulo ${\rm Prym}(\bar S,\Sigma)$). Since $\det(E_+)\otimes \det (E_-)={\mathcal O}$, from \eqref{bundlev-}-\eqref{bundlev+} one has that $ \det(E_-)= {\rm Nm}({\mathcal F})\otimes K^{-m/2}$.
Therefore, for some $L_0\in {\rm Prym}(\bar S,\Sigma)$ one may write \begin{eqnarray}V_-= \bar \pi_*(L_0\otimes K_{\bar S}^{-1/2}\otimes {\mathcal F})\otimes K^{-1/2}. \end{eqnarray} Note that the choice of $L_0$ is equivalent to the one done in Lemma \ref{lemmaU}, and the divisor $D$ gives the extension class as in the complex case described in \cite[Section 4.3]{N3}. \subsection{Characteristic classes for $SO(m,m+1)$-Higgs bundles} In what follows we shall see that the three Stiefel-Whitney classes of $SO(m,m+1)$-Higgs bundles $(V_+\oplus V_-, \Phi)$ can be described in terms of their spectral data, which from the previous sections is given modulo \eqref{seq-3}-\eqref{seq-4} by \[({\mathcal F},D)\in H^{1}(\bar S, \mathbb{Z}_2)\oplus \mathbb{Z}_2([a_m])/b_0.\] \begin{theorem}\label{teo2} The Stiefel-Whitney classes of an $SO(m,m+1)$-Higgs bundle $(V_-\oplus V_+, \Phi)$ with spectral data $({\mathcal F}, D)\in {\rm Prym}(S,\bar S)[2]/ \rho^*H^1(\bar S, \mathbb{Z}_2)$ are given by \begin{eqnarray} \omega_1(V_+)&=& {\rm Nm}({\mathcal F})\in H^1(\Sigma, \mathbb{Z}_2),\\ \omega_2(V_+)&=&\varphi_{\bar S}({\mathcal F}) +\varphi_\Sigma({\rm Nm}({\mathcal F})) \in \mathbb{Z}_2,\\ \omega_2(V_-)&=&\left\{\begin{array} {ccc} \omega_2(V_+)&{\rm if} &\omega_2(V)=0,\\ \omega_2(V_+)+1&{\rm if} &\omega_2(V)=1. \end{array}\right.\end{eqnarray} \end{theorem} \begin{proof} Recall that $ \varphi_{\Sigma}({\mathcal L}) = \dim H^0(\Sigma, {\mathcal L} \otimes K^{1/2}) ~({\rm mod} ~2),$ and from \cite[Theorem 1]{classes} that for an even spin structure $K^{1/2}$, the orthogonal bundles $V_{\pm}$ satisfy \begin{eqnarray}\omega_2(V_{\pm})= \varphi_{\Sigma}(V_{\pm})+\varphi_{\Sigma}(\det(V_{\pm})).\label{w2}\end{eqnarray} Moreover, since $\deg(\det(V_{\pm}))=0$, one has that $\varphi_\Sigma(V_-)=\varphi_\Sigma(V_+)~({\rm mod}~2).$ The above can also be seen in terms of the analytic mod 2 indices $\varphi_S$ and $\varphi_{\bar S}$ of the spectral line bundles producing $V_+$ and $V_-$. The three mod 2 indices can be related by considering the definition of the push forward of sheaves. Indeed, note that for ${\mathcal L}$ a torsion two line bundle on $S$, by definition of the direct image sheaf \begin{eqnarray} \varphi_S({\mathcal L})=\dim H^0(S, {\mathcal L} \otimes K^{1/2}_S)~({\rm mod} ~2) =\dim H^0(\Sigma, \pi_*({\mathcal L} \otimes K^{1/2}_S \otimes \pi^*K^{-1/2}) \otimes K^{1/2})~({\rm mod} ~2),\nonumber \end{eqnarray} and hence $\varphi_S({\mathcal L})=\varphi_\Sigma(\pi_!({\mathcal L})). $ An equivalent formula follows for $\bar S$, and therefore, by Lemma \ref{lemmaF}, \begin{eqnarray}\omega_1(V_+)={\rm Nm}({\mathcal F})\in H^1(\Sigma, \mathbb{Z}_2).\end{eqnarray} Moreover, since $\varphi_{\bar S}({\mathcal F})=\varphi_\Sigma(\bar \pi_!({\mathcal F})) $ it follows that \begin{eqnarray} \omega_2(V_+)=\varphi_{\bar S}({\mathcal F}) +\varphi_\Sigma({\rm Nm}({\mathcal F})). \end{eqnarray} In order to understand $\omega_2(V_-)$ through \eqref{w2}, we should recall that $\omega_2(V)=\omega_2(V_+)+\omega_2(V_-),$ and thus $ \omega_2(V_-)=\varphi_{\bar S}({\mathcal F})+\varphi_{\Sigma}({\rm Nm}({\mathcal F})) +\varphi_{\Sigma}(V). $ The value of $\varphi_\Sigma(V)=\omega_2(V)$ has been studied in \cite[Section 4]{N3} and indicates whether $V$ has a lift to a spin bundle or not. In particular, it is shown there that it is the identity component of the fibre that gives spin bundles, which corresponds to $\omega_2(V)=\varphi_\Sigma(V)=0$, and the theorem follows.
\end{proof} One should note that when further requiring $V_+$ to have trivial determinant, it becomes the vector bundle of an $SL(m,\mathbb{R})$-Higgs pair, and our result agrees with the description of $\omega_2(V_+)$ of \cite[Theorem 1]{classes} for a fixed even spin structure. When considering $SO(m,m+1)_0$-Higgs bundles, i.e. Higgs bundles in the component of the identity, both vector bundles $V_\pm$ satisfy $\det V_{\pm}={\mathcal O}$, and thus they are obtained by choosing a point $L_+\otimes \bar \pi^{*}K^{3/2}$ in the Prym variety ${\rm Prym}(\bar S, \Sigma)$, after fixing a choice of spin structure $K^{1/2}$. Moreover, in this case $M=4m(g-1)$ and thus $L^* \in {\rm Prym}(S,\Sigma)[2]$ is the pullback of a line bundle on $\bar S$, hence determined by $L_+$. Since the characteristic class $\omega_2$ is independent of which spin structure $K^{1/2}$ is chosen, we may use this fact to further deduce the following from Theorem \ref{teo2} by fixing an even $K^{1/2}$, for which $\varphi_{\Sigma}({\mathcal O})=0$, along the lines of \cite[Theorem 1]{classes}, purely in terms of spin structures: \begin{corollary}\label{corro} Let $\bar S$ be a smooth spectral curve in the total space of $K^2\rightarrow \Sigma$ given by an equation \[ \eta^m+a_1\eta^{m-1}+\ldots+a_{m-1}\eta+a_m=0,\] and let ${\mathcal F}$ be a line bundle on $\bar S$ such that ${\mathcal F}^2\cong {\mathcal O}$. Define $V_+:=\bar\pi_!({\mathcal F})$, the direct image bundle with the orthogonal structure induced from relative duality. Let $K^{1/2}$ be an even spin structure on $\Sigma$, and for $K_{\bar S}^{1/2}=\bar \pi^*K^{m-1/2}$ the corresponding one on $\bar S$, consider the spin structure ${\mathcal F}_{\bar S}={\mathcal F} \otimes K_{\bar S}^{1/2}$, so that $\varphi_{\bar S}({\mathcal F})=\dim H^0(\bar S, {\mathcal F}_{\bar S})~({\rm mod}~2)$. Then, the characteristic classes of the corresponding $SO(m,m+1)$-Higgs pair are \begin{eqnarray} \omega_1(V_+)&=& {\rm Nm}({\mathcal F})\in H^1(\Sigma, \mathbb{Z}_2),\\ \omega_2(V_+)&=&\left\{\begin{array} {ccc} \varphi_{\Sigma}({\rm Nm}({\mathcal F}))&{\rm if} &\varphi_{\bar S}({\mathcal F})=0 \\ 1+\varphi_{\Sigma}({\rm Nm}({\mathcal F}))&{\rm if} &\varphi_{\bar S}({\mathcal F})=1 \end{array}\right.,\\ \omega_2(V_-)&=&\left\{\begin{array} {ccc} \varphi_{\Sigma}({\rm Nm}({\mathcal F}))&{\rm if} &\varphi_{\bar S}({\mathcal F})=\varphi_\Sigma(V)\\ 1+\varphi_{\Sigma}({\rm Nm}({\mathcal F}))&{\rm if} & \varphi_{\bar S}({\mathcal F}) \neq\varphi_\Sigma(V)\end{array}\right..\end{eqnarray} \end{corollary} \subsection{The divisor $D\in \mathbb{Z}_2([a_m])/b_0$} We shall finally consider the geometric implications of the divisor $D\in \mathbb{Z}_2([a_m])/b_0$ appearing in the spectral data of the Higgs bundles studied in this paper. As mentioned previously, the extension class giving the orthogonal bundle $V$ is obtained through $D$. Moreover, its degree $M$ appears both at the level of complex $SO(2m+1,\mathbb{C})$-Higgs bundles (see \cite[Remark 2 p.14]{N3}) and real $SO(m,m+1)$-Higgs bundles. From \cite[Section 6]{classes}, and as recalled in Section \ref{section-sp}, the elements in ${\rm Prym}(S,\bar S)[2]$ can be distinguished by their associated invariant $M$, and in each regular fibre there are \begin{small} \begin{eqnarray} \left(\begin{array}{c} 4m(g-1)\\ M \end{array}\right)\times 2^{2g_{\bar S}}\nonumber \end{eqnarray}\end{small} points with invariant $M$, where the genus of $\bar S$ is as before, $g_{\bar S}=(2m^2-m)(g-1)+1$.
Hence, in order to differentiate the characteristic classes for $SO(m,m+1)$-Higgs bundles, one needs to understand the characteristic classes of $\rho^*H^1(\bar S, \mathbb{Z}_2)$. In particular, it should be noted that since the Prym variety ${\rm Prym}(S,\bar S)$ is defined as the set of line bundles $L\in {\rm Jac}(S)$ such that $\sigma^*L\cong L^*$, the pulled-back line bundles in $\rho^*H^1(\bar S, \mathbb{Z}_2)$ are acted on trivially by the involution $\sigma$ and thus carry invariant $M=0$. Therefore, recalling that the topological invariant $M$ associated to $SO(m,m+1)$-Higgs bundles can be seen from \eqref{am} as the degree of the subdivisor of $[a_m]$ giving an element in $\mathbb{Z}_2([a_m])^{ev}$, one has the following: \begin{proposition}\label{numbers} In each generic fibre there are \begin{small} $\left(\begin{array}{c} 4m(g-1)\\ M \end{array}\right) $ \end{small} points with even invariant $M$.\end{proposition} Since exchanging $\sigma$ by $-\sigma$ exchanges the values of $M$ and $4m(g-1)-M$, those two cases should be identified. Hence, the total number of points in each regular fibre is half of \begin{eqnarray} \left(\begin{array}{c} 4m(g-1)\\0 \end{array}\right)+ \left(\begin{array}{c} 4m(g-1)\\2 \end{array}\right)+\ldots+ \left(\begin{array}{c} 4m(g-1)\\4m(g-1)-2 \end{array}\right)+ \left(\begin{array}{c} 4m(g-1)\\4m(g-1) \end{array}\right), \end{eqnarray} which using series multisection (for $N\geq 1$ one has $\sum_{j~{\rm even}}\left(\begin{array}{c} N\\j \end{array}\right)=\frac{(1+1)^{N}+(1-1)^{N}}{2}=2^{N-1}$) gives, as expected, $ \frac{1}{2}\left[ 2^{4m(g-1)-1} \right]=2^{4m(g-1)-2}. $ \subsection{On the geometry of the moduli space}\label{geometrytori} From the above analysis, one has a natural grading of the moduli space leading to a geometric description of Zariski dense open sets in the moduli space of $SO(m,m+1)$-Higgs bundles: \begin{theorem}\label{teo1} For each fixed even invariant $0<M\leq 4m(g-1)$, the moduli space of \linebreak $SO(m,m+1)$-Higgs bundles intersects the regular fibres of the Hitchin fibration in a component given by a covering of a vector space over the symmetric product $S^M\Sigma$. \end{theorem} \begin{proof} Over a point in the Hitchin base, defining a spectral curve, one has $\mathbb{Z}_2([a_m])^{ev}/b_0 $. This consists of all choices of $\mathbb{Z}_2$-tuples $D$ of length $4m(g-1)$ with an even number of entries equal to $+1$, up to equivalence by the element $(1,\ldots, 1)$. As in \cite[Proposition 4]{classes}, this agrees with the previous section, asserting that the intersection of the space with the fibre is a $\mathbb{Z}_2$ vector space of dimension $ 4m(g-1)-2$. As in \cite[Theorem 4.2]{umm}, the choice of the divisor $D$ (or equivalently, a point in $\mathbb{Z}_2([a_m])^{ev}/b_0$) is given by a point in the symmetric product $S^M\Sigma$, for $M$ the degree of $D$. Then, the choice of $a_m$ is given by the choice of a section $s\in H^0(\Sigma, K^{2m}(-D))$, leading to a vector bundle $B$ of rank $(4m-1)(g-1)- M$ over $S^M\Sigma$. Finally, the choice of the spectral curve is completed by considering, as in the symplectic side, the space $\bigoplus_{i=1}^{m-1}H^0(\Sigma, K^{2i}), $ where the parametrisation is done up to $H^1(\bar S, \mathbb{Z}_2)$. \end{proof} One should keep in mind that the characteristic classes of the $SO(m,m+1)$-Higgs bundles are topological invariants, and thus are constant within connected components. On the other hand, the invariant $M$ labels components which often intersect over the singular locus of the Hitchin fibration.
An interesting comparison can be made with \cite[Section 6.4]{brian}, where it is shown how the invariant $M$ labels certain connected components of the moduli space. One should note also that the space $H^1(\bar S, \mathbb{Z}_2)$ is in fact the spectral data for $K^2$-twisted $GL(m,\mathbb{R})$-Higgs bundles, and thus over each point in the Hitchin base one has the fibre giving the spectral data for a corresponding $K^2$-twisted $GL(m,\mathbb{R})$-Higgs bundle, the Cayley partner. \begin{corollary}\label{cor1} When $M=0$ and $m$ is odd the intersection with smooth fibres is given by $2^{2g}$ copies of ${\rm Prym}(\bar S,\Sigma)$ over a vector space. \end{corollary} \section{Concluding remarks}\label{last} In what follows, we shall describe some applications of the above methods in the context of understanding the moduli spaces for other real groups. \subsection{The $Sp(2m,\mathbb{R})$ and $SO(m,m+1)$ Hitchin components} \label{hitchin}When considering the Hitchin components for both split real forms $Sp(2m,\mathbb{R})$ and $SO(m,m+1)$ as described in \cite{N5}, one can see that the vector bundle for $Sp(2m,\mathbb{R})$ is given by (e.g. see \cite[p.4]{classes}) \begin{eqnarray} E:=\bigoplus_{i=1}^{2m} \left (K^{-m+i}\otimes K^{-1/2} \right ). \end{eqnarray} Then, by considering $E\otimes K^{-1/2}\oplus K^m$ one obtains, as expected, the orthogonal bundle for $SO(m,m+1)$-Higgs bundles \begin{eqnarray} V:= \bigoplus_{i=0}^{2m}K^{-m+i}. \end{eqnarray} The pairing for the symplectic vector bundle $E=W\oplus W^*$ is obtained by considering the symplectic pairing between $K^{\pm a}$, leading to \begin{eqnarray} W= \bigoplus_{i=m+1}^{2m} \left (K^{-m+i}\otimes K^{-1/2} \right ). \end{eqnarray} On the other hand, the pairing for the orthogonal bundle $V=V_+ \oplus V_-$ is obtained by taking the natural orthogonal structure for each $K^a\oplus K^{-a}$ and thus one has for $m$ even \begin{eqnarray} V_-=\bigoplus_{i=0}^{m-1} K^{-m+2i+1};~{~\rm~and ~} V_+=\bigoplus_{i=0}^{m} K^{-m+2i} ~= ~K^m \oplus \bigoplus_{i=0}^{m-1} K^{-m+2i}, \label{v-} \end{eqnarray} and if $m$ is odd, the roles of $V_-$ and $V_+$ are interchanged. One should note that, in particular, separating the vector bundle $W=W_{+}\oplus W_{-}$ into the odd (-) and even (+) values of $i$, one has \begin{eqnarray}V_-&=&W_{-}\oplus W^*_{-} ~{\rm ~ and ~}~ V_+=W_{+}\oplus W^*_{+}, \end{eqnarray} and thus the relation between both decompositions of the symplectic and orthogonal bundles becomes apparent. In the case of $SL(m,\mathbb{R})$-Higgs bundles, the Hitchin component is given by Higgs bundles whose underlying vector bundle $\tilde {\mathcal V}$ has the form (see \cite[Section 3]{N5}) \begin{eqnarray} \tilde {\mathcal V}=\bigoplus_{i=0}^{m-1} K^{\frac{-m+1+2i}{2}}. \end{eqnarray} This bundle is obtained from the origin in the fibre of the Hitchin fibration, and a similar construction leads to the Hitchin component for $K^2$-twisted $SL(m,\mathbb{R})$-Higgs bundles, where ${\mathcal V}=\bigoplus_{i=0}^{m-1} K^{-m+1+2i}. $ In particular, this rank $m$ vector bundle coincides with $V_-$ in \eqref{v-}, which is not surprising, as it follows from the proof of Theorem \ref{teo2}. \subsection{Maximal $Sp(4,\mathbb{R})$-Higgs bundles and $SO(3,4)$-Higgs bundles}\label{mono} It is interesting to consider the description of the connected components given in Section \ref{section-sp} for $m=2$, or in other words, for $Sp(4,\mathbb{R})$-Higgs bundles. In this case, it was shown in Gothen's thesis (see \cite[p.
831-849]{gothen}) that for maximal Toledo invariant (i.e. $M=0$), the number of connected components is $3\cdot 2^{2g}+2g-4.$ As in the general case, the components are described by $H^1(\bar S, \mathbb{Z}_2)$ over a vector bundle over the symmetric product $S^M\Sigma$. But in the case of $m=2$ one has one more correspondence to consider. Indeed, $H^1(\bar S, \mathbb{Z}_2)$ becomes the spectral data for $K^2$-twisted $GL(2,\mathbb{R})$-Higgs bundles, the Cayley partner of $Sp(4,\mathbb{R})$-Higgs bundles. As in \cite[Theorem 6.8]{mono}, one has that $H^1(\bar S, \mathbb{Z}_2)=\Lambda_{\Sigma}[2]\oplus \mathbb{Z}_2([a_m])^{ev}/b_0 \oplus \Lambda_{\Sigma}[2] $ and as seen in \cite[Corollary 6.9]{mono}, one recovers the $3\cdot 2^{2g}+2g-4$ components as orbits of the monodromy action. Moreover, from the description in Section \ref{section-sp}, these components appear as the components of $K^2$-twisted Higgs bundles over the vector space $\mathcal{A}$. The geometry of these components can be studied by similar methods to those in Section \ref{section-sp} and Section \ref{section-so}, by noting that a choice in $\mathbb{Z}_2([a_m])^{ev}/b_0$ gives a point in a symmetric product labeled by the invariant $M$, and over that one has $2^{2g}$ covers coming from $H^1(\Sigma,\mathbb{Z}_2)$. \subsection{The dual $(B,B,B)$-branes} The smooth locus of the moduli space of $SO(2m+1,\mathbb{C})$-Higgs bundles on $\Sigma$ is a hyper-K\"ahler manifold, so there are natural complex structures $I,J,K$ obeying the same relations as the imaginary quaternions (following the notation of \cite{LPS_Kap}). Adopting physicists' language, a Lagrangian submanifold of a symplectic manifold is called an {\em A-brane} and a complex submanifold a {\em B-brane}. A submanifold of a hyper-K\"ahler manifold may be of type $A$ or $B$ with respect to each of the complex or symplectic structures, and thus choosing a triple of structures one may speak of branes of type $(B,B,B), (B,A,A), (A,B,A)$ and $(A,A,B)$. The moduli space of $SO(m,m+1)$-Higgs bundles is a $(B,A,A)$-brane in the moduli space ${\mathcal M}_{SO(2m+1,\mathbb{C})}$ of complex Higgs bundles. As such, it has a dual $(B,B,B)$-brane in the dual moduli space ${\mathcal M}_{Sp(2m,\mathbb{C})}$ (see \cite{LPS_Kap}). It was conjectured by D.~Baraglia and the author, in \cite[Section 7]{slice}, that the support of the dual $(B,B,B)$-brane should be the whole moduli space of $Sp(2m,\mathbb{C})$-Higgs bundles, which can now be understood through the spectral data description of the components of ${\mathcal M}_{SO(m,m+1)}$ given in this paper, and compared to the hyperholomorphic $(B,B,B)$-brane constructed by Hitchin in \cite[Section 7]{classes} dual to the $U(m,m)$-Higgs bundles studied in \cite{umm}. In particular, $SO(m,m+1)$ and $U(m,m)$-Higgs bundles provide examples of $(B,A,A)$-branes whose dual $(B,B,B)$-brane should have the same support, and the moduli spaces of split Higgs bundles in Section \ref{hitchin} provide examples of associated $(B,A,A)$-branes.
\section{Introduction} \IEEEPARstart{T}{ensors}, or multi-dimensional arrays, are generalizations of vectors and matrices, which have been commonly used in representing real-world data, such as videos \cite{jiang2017novel, kim2007tensor}, hyperspectral images \cite{du2016pltd, renard2008denoising}, multilinear signals \cite{de1997signal, comon2002tensor} and communication networks \cite{papalexakis2014spotting, nakatsuji2017semantic}, among numerous others. Common to these and many other data, the so-called low-rank structure can oftentimes be used to characterize the tensor data, and thus low-rank tensor approximation is becoming a fundamental tool in today's data analytics \cite{renard2008denoising, liu2012denoising, grasedyck2013literature}, \cite{liu2015generalized, liu2014trace}. However, it is oftentimes infeasible to find an accurate approximation because of the large size of these tensors. For example, as stated in~\cite{gilman2020grassmannian}, a hyperspectral video with hundreds of spectral bands and megapixel spatial resolution needs to be stored at a rate on the order of 10 gigabits per second. This clearly means that such a large tensor may not fit in the main memory of a computer, which brings difficulties to the subsequent low-rank approximation by a tensor decomposition. Also, calculating such an approximation requires performing the SVD (Singular Value Decomposition) or tensor SVD, which is usually time-consuming. To address this concern, several tensor sketching methods \cite{wang2015fast, battaglino2018practical,xia2017effective,malik2018low, zhang2018randomized} were designed to perform fast low-rank approximation based on specific decompositions with only a slight loss of precision, while significantly reducing the memory requirement. More precisely, analogous to the matrix sketching technique \cite{woodruff2014sketching} (such as random sampling \cite{boutsidis2014near} and random projection \cite{bingham2001random}), tensor sketching aims to compute a sketch tensor that is significantly smaller than the original one but still preserves its vital properties for further computations. In many real applications, however, tensor data, just like the aforementioned hyperspectral video, often arrive in a streaming manner, which naturally requires the sketching algorithm to be one-pass. This poses the challenge of how to develop a sketching algorithm that performs streaming low-rank tensor approximation efficiently. In this paper, we develop an effective sketching algorithm to compute the low-tubal-rank tensor approximation from streaming data using the tensor SVD (t-SVD) framework. Similar to the matrix SVD, a key property of the t-SVD is the optimality of the truncated t-SVD for tensor approximation in terms of the Frobenius norm. Another key property of the t-SVD framework is that the derived tubal rank can well characterize the inherent low-rank structure of a tensor. With these good properties, the t-SVD has been extensively studied in dealing with several low-rank tensor approximation problems, both theoretically and practically. See, e.g., \cite{hao2013facial}, \cite{lu2016tensor}, among others. Considering that most existing studies are focused on the batch setting that requires tensor data to fit in the main memory, very little is known about the performance of t-SVD based low-rank approximation in the streaming setting. Our work presented here tries to fill in this void.
By extending the simple deterministic sketching technique, Frequent Directions \cite{liberty2013simple}, we propose a new tensor sketching algorithm called tensor Frequent Directions (t-FD) to pursue an efficient streaming low-tubal-rank tensor approximation. The key idea of the proposed algorithm is to maintain a small sketch tensor that is dynamically updated as the data stream in. We shall summarize the main contributions of this paper as follows: \begin{itemize} \item This is the first attempt to apply the FD technique to deal with higher order tensors. Specifically, the proposed t-FD algorithm only requires a single pass over the tensor, and thus is applicable to the streaming setting. \item Considering that operations under the t-SVD are mainly processed in the Fourier domain, we analyze the relationship of the tensor norms between the original and Fourier domains, and further derive the tensor covariance and projection error bounds. The theoretical analysis shows that the proposed t-FD is within $1+\varepsilon$ of the best tubal-rank-$k$ approximation. \item Extensive experiments are carried out on both synthetic and real tensor data to illustrate the superiority of our t-FD algorithm over the matrix FD algorithm and two randomized tensor sketching algorithms (srt-SVD and NormSamp) in most cases. These experimental results also partially verify our theoretical findings. \end{itemize} \section{Related work} \subsection{Streaming low-rank tensor approximation} Finding the low-rank structure of streaming tensor data has been well studied in recent years. Most of the previous works focus on the Tucker and CP decompositions. The Tucker decomposition factorizes a tensor into the multiplication of a core tensor with orthogonal factor matrices along each mode, and the corresponding Tucker rank is defined as the tuple of the ranks of all unfolding matrices. Therefore, its computation relies heavily on the computation of the SVD, and thus the work \cite{hu2011incremental} considered incorporating incremental SVD to update the data dynamically. Subsequently, \cite{malik2018low, sun2019low} integrated randomized sketching techniques into the traditional HOSVD/HOOI algorithms for single-pass low Tucker rank decomposition. The CP decomposition factorizes a tensor into the sum of several rank-one tensors, and the corresponding CP rank is defined as the minimum number of such rank-one tensors, which is intuitive and similar to the matrix rank. There is also a series of works \cite{zhou2016accelerating,ma2018randomized} focusing on the online CP problem; however, tracking the CP decomposition of an online tensor often utilizes the alternating least squares method, namely CP-ALS, to update factor matrices in a nonconvex optimization manner, and thus the performance would be highly dependent on a good initialization, which may not be available in some real situations. More recently, the works \cite{kilmer2011factorization, martin2013order} proposed an alternative tensor decomposition named t-SVD, which is an elegant and natural extension of the matrix SVD. More specifically, t-SVD factorizes a tensor into three factor tensors based on a newly defined tensor-tensor product (t-product) operation, and could capture spatial-shifting correlations without losing intrinsic structures caused by matricization.
As further stated in~\cite{kilmer2013third}, t-SVD possesses both efficient computations and solid mathematical foundations, and thus has been widely used in a great number of low-rank tensor related problems, e.g.,~\cite{hao2013facial,zhang2016exact, lu2019tensor}. However, very little is known about the performance of t-SVD in dealing with streaming low-rank tensor data. \subsection{Frequent Directions} The main idea of the so-called matrix sketching technique~\cite{woodruff2014sketching} is to first construct a sketch matrix whose size is much smaller than that of the original matrix but which retains most of the information, and then use such a sketch instead of the original matrix for the subsequent operations, such as matrix multiplication and SVD. To get the sketch matrix, several randomized algorithms, such as random sampling~\cite{boutsidis2014near} and random projection~\cite{bingham2001random}, have drawn great attention. The random sampling technique obtains a precise representation of the original matrix by sampling a small number of rows or columns and reweighting them. The most well-known random sampling technique is leverage score sampling, in which the sampling probability is proportional to the leverage score of each column. This obviously poses the difficulty that the leverage score involves the calculation of the singular vectors of the original matrix, and thus it is hard to process streaming data. As for the random projection technique, its key is to find a random matrix used to project the original matrix to a much smaller one. This needs to load the original matrix completely in memory, which is obviously unsuitable for the streaming setting. As mentioned previously, these randomized sketching techniques have been extended to obtain fast low-rank tensor approximations based on specific decompositions, namely CP~\cite{battaglino2018practical}, Tucker~\cite{malik2018low} and t-SVD~\cite{zhang2018randomized}. Similar to the matrix case, such randomized tensor sketching algorithms cannot process streaming data directly. Recently, a deterministic matrix sketching technique named Frequent Directions (FD), which was introduced by \cite{liberty2013simple} and further analyzed by \cite{ghashami2016frequent}, is well suited for streaming data. Precisely, the sketch is first initialized to an all zero-valued matrix. Then FD inserts the rows of the original matrix into the sketch matrix until it is full. A subsequent shrinking procedure is conducted by computing the SVD of the sketch and subtracting the squared $\ell$-th singular value from all the squared singular values. Considering that the last row of the sketch is always all zero-valued after the shrinking procedure, FD inserts the rows continually until all rows are processed. It has been proved in~\cite{woodruff2014low} that FD achieves the optimal tradeoff between space and accuracy. Since FD can deal with streaming data without much sacrifice of accuracy, many online learning tasks have adopted it. Leng \cite{leng2015online} utilized FD to learn hashing functions efficiently in the online setting. Kuzborskij \cite{kuzborskij2019efficient} showed FD could accelerate two popular linear contextual bandit algorithms without losing much precision. In recent years, many subsequent attempts have been made to improve the precision and speed of FD. Luo \cite{luo2019robust} proposed Robust Frequent Directions (RFD) by introducing an additional variable to make FD more robust.
Huang \cite{huang2019near} considered sampling the removed part in the shrinking procedure and then concatenating it with the sketch as the final result. He theoretically proved that such a procedure yields a space-optimal algorithm with improved running time compared with the traditional FD. Besides, some papers considered the random projection technique to accelerate the original FD; that is, the subsampled randomized Hadamard transform and the Count Sketch matrix were considered in \cite{chen2017frosh} and \cite{teng2018fast}, respectively. \section{Notations and preliminaries} We use the symbols $a$, $\boldsymbol{a}$, $\boldsymbol{A}$, $\mathbf{\mathcal{A}}$ for scalars, vectors, matrices, and tensors, respectively. For an order-$p$ tensor $\mathbf{\mathcal{A}} \in \mathbb{R}^{n_{1} \times n_{2} \times\cdots \times n_{p}}\ (p\ge3)$, the $(i_1,i_2,\ldots,i_p)$-th entry is denoted by $\mathbf{\mathcal{A}}_{i_1i_2\ldots i_p}$, and matrix frontal slices of order-$p$ tensors can be referenced using linear indexing by reshaping the tensor into an $n_{1} \times n_{2} \times \rho$ third-order tensor and referring to the $k$-th frontal slice as $\boldsymbol{A}^{(k)}$, where $\rho=n_3n_4\ldots n_p$, and the corresponding relationship is as follows: $$ (i_1,i_2,i_3,\ldots,i_p)\rightarrow(i_1,i_2,\sum_{a=4}^{p}(i_a-1)\Pi_{b=3}^{a-1}n_b+i_3). $$ The Frobenius norm of $\mathcal{A}$ is denoted by $\|\mathbf{\mathcal{A}}\|_{F}=\sqrt{\sum_{i_1, i_2, \ldots, i_p }\left|\mathbf{\mathcal{A}}_{i_1 i_2 \ldots i_p}\right|^{2}}$. We represent $\mathbf{\mathcal{A}} $ as $[\mathbf{\mathcal{A}}_1, ..., \mathbf{\mathcal{A}}_{n_1} ]$, where $\mathcal{A}_i \in \mathbb{R}^{1 \times n_2 \times\cdots \times n_{p}}$ denotes the $i$-th horizontal tensor. Furthermore, $\mathcal{A}^{(i)}\in \mathbb{R}^{n_{1} \times \cdots \times n_{p-1}}$ for $i=1,\ldots,n_p$ denotes the $(p-1)$-order tensor created by holding the $p$-th index of $\mathcal{A}$ fixed at $i$. It is easy to see that when $p=3$, $\mathcal{A}^{(i)} $ is equivalent to the previously defined frontal slice $\boldsymbol{A}^{(i)}$. The mode-1 unfolding matrix $\boldsymbol{A}_{(1)}$ of $\mathbf{\mathcal{A}}$ is denoted as $$ \boldsymbol{A}_{(1)}=\left[\boldsymbol{A}^{(1)}\ \boldsymbol{A}^{(2)}\ \cdots\ \boldsymbol{A}^{(\rho)}\right]. $$ Moreover, $\mathbf{\bar{\mathcal{A}}}\in \mathbb{C}^{n_{1} \times n_{2} \times\cdots \times n_{p}}$ is obtained by applying FFTs along modes $3,\ldots,p$ of $\mathbf{\mathcal{A}}$, and $\boldsymbol{\bar{A}}$ is the block diagonal matrix composed of the frontal slices of $\bar{\mathbf{\mathcal{A}}}$, i.e., $$ \boldsymbol{\bar{A}}=\mathtt{bdiag}(\mathbf{\bar{\mathcal{A}}})=\left[\begin{array}{cccc} \boldsymbol{\bar{A}}^{(1)} & & & \\ & \boldsymbol{\bar{A}}^{(2)} & & \\ & & \ddots & \\ & & & \boldsymbol{\bar{A}}^{\left(\rho\right)} \end{array}\right]. $$ Note that \begin{align} \boldsymbol{\bar{A}}=\left(\boldsymbol{\tilde{F}} \otimes \boldsymbol{I}_{n_{1}}\right) \cdot \boldsymbol{\tilde{A}} \cdot \left(\boldsymbol{\tilde{F}}^{-1} \otimes \boldsymbol{I}_{n_{2}}\right), \label{orthop} \end{align} where $\boldsymbol{\tilde{F}}=\boldsymbol{F}_{n_{p}} \otimes\boldsymbol{F}_{n_{p-1}} \otimes \cdots \otimes \boldsymbol{F}_{n_{3}}$, $\boldsymbol{F}_{n_{i}}$ is the discrete Fourier transformation matrix, $\otimes$ denotes the Kronecker product, and $\boldsymbol{\tilde{A}}$ is the $n_1\rho \times n_2\rho$ block matrix formed from $\mathbf{\mathcal{A}}$ in the base level of recursion (see Fig. 3.2 in \cite{martin2013order} for details).
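To make the above construction concrete, the following is a minimal NumPy sketch for the third-order case, where (as spelled out right below) $\boldsymbol{\tilde{A}}$ reduces to the block circulant matrix $\mathtt{bcirc}(\mathbf{\mathcal{A}})$: it forms $\bar{\mathbf{\mathcal{A}}}$ by an FFT along the third mode, assembles $\mathtt{bcirc}(\mathbf{\mathcal{A}})$ and $\mathtt{bdiag}(\bar{\mathbf{\mathcal{A}}})$, and numerically verifies relation \eqref{orthop}. The variable names are ours and the snippet is purely illustrative.
\begin{verbatim}
import numpy as np

n1, n2, n3 = 3, 4, 5
A = np.random.randn(n1, n2, n3)

# bar{A}: FFT along the third mode (one complex frontal slice per frequency)
A_bar = np.fft.fft(A, axis=2)

# bcirc(A): block circulant matrix, (i, j)-th block is A[:, :, (i - j) mod n3]
bcirc = np.block([[A[:, :, (i - j) % n3] for j in range(n3)]
                  for i in range(n3)])

# unnormalized DFT matrix F_{n3}
F = np.exp(-2j * np.pi / n3) ** np.outer(np.arange(n3), np.arange(n3))

# bdiag(bar{A}): block diagonal matrix of the frontal slices of bar{A}
bdiag = np.zeros((n1 * n3, n2 * n3), dtype=complex)
for k in range(n3):
    bdiag[k * n1:(k + 1) * n1, k * n2:(k + 1) * n2] = A_bar[:, :, k]

# check: (F kron I_n1) bcirc(A) (F^{-1} kron I_n2) equals bdiag(bar{A})
lhs = np.kron(F, np.eye(n1)) @ bcirc @ np.kron(np.linalg.inv(F), np.eye(n2))
print(np.allclose(lhs, bdiag))  # True
\end{verbatim}
This block diagonalization is what allows the t-SVD operations reviewed below to be carried out slice-wise in the Fourier domain.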
Specifically, for the third-order tensor, we get that $\boldsymbol{\tilde{F}}= \boldsymbol{F}_{n_{3}}, \ \boldsymbol{\tilde{A}} =\mathtt{bcirc}(\mathbf{\mathcal{A}})$, and the block circulant matrix $\mathtt{bcirc}(\mathbf{\mathcal{A}})$ is denoted as $$ \mathtt{bcirc}(\mathbf{\mathcal{A}})=\left[\begin{array}{cccc} \boldsymbol{A}^{(1)} & \boldsymbol{A}^{\left(n_{3}\right)} & \cdots & \boldsymbol{A}^{(2)} \\ \boldsymbol{A}^{(2)} & \boldsymbol{A}^{(1)} & \cdots & \boldsymbol{A}^{(3)} \\ \vdots & \vdots & \ddots & \vdots \\ \boldsymbol{A}^{\left(n_{3}\right)} & \boldsymbol{A}^{\left(n_{3}-1\right)} & \cdots & \boldsymbol{A}^{(1)} \end{array}\right]. $$ Now we shall give a brief review of some related tensor definitions used in this paper. \begin{definition}[t-product for order-$p$ tensors ($p\ge3$), \cite{martin2013order}] \label{deftproduct-p} Let $\mathbf{\mathcal{A}} \in \mathbb{R}^{n_{1} \times n_{2} \times \cdots \times n_{p}}$ and $\mathbf{\mathcal{B}} \in \mathbb{R}^{n_{2} \times \ell \times \cdots \times n_{p}} .$ Then the t-product $\mathbf{\mathcal{A}} * \mathbf{\mathcal{B}}$ is the order-$p$ tensor defined recursively as $$ \mathbf{\mathcal{A}} * \mathbf{\mathcal{B}}=\mathtt {fold}(\mathtt{bcirc}(\mathbf{\mathcal{A}}) * \mathtt{unfold}(\mathbf{\mathcal{B}})). $$ The $(p-1)$-order tensor $\mathtt{bcirc}(\mathbf{\mathcal{A}})$ is defined as $$ \mathtt{bcirc}(\mathbf{\mathcal{A}})=\left[\begin{array}{ccccc} \mathcal{A}^{(1)} & \mathcal{A}^{(n_{p})} & \mathcal{A}^{(n_{p}-1)} & \cdots & \mathcal{A}^{(2)} \\ \mathcal{A}^{(2)} & \mathcal{A}^{(1)} & \mathcal{A}^{(n_{p})} & \cdots & \mathcal{A}^{(3)} \\ \vdots & \ddots & \ddots & \ddots & \vdots \\ \mathcal{A}^{(n_{p})} & \mathcal{A}^{(n_{p}-1)} & \cdots & \mathcal{A}^{(2)} & \mathcal{A}^{(1)} \end{array}\right]. $$ Define $\mathtt { unfold }(\cdot)$ by taking an $n_1 \times \cdots \times n_p$ tensor and returning an $n_1n_p \times n_2 \times \cdots \times n_{p-1}$ block tensor in the following way: $$ \mathtt{unfold}(\mathcal{A})=\left[\begin{array}{c} \mathcal{A}^{(1)} \\ \mathcal{A}^{(2)} \\ \vdots \\ \mathcal{A}^{(n_{p})} \end{array}\right]. $$ Thus, the operation $\mathtt { fold }(\cdot)$ takes an $n_{1} n_{p} \times n_{2} \times \cdots \times n_{p-1}$ block tensor and returns an $n_{1} \times \cdots \times n_{p}$ tensor. That is, $$ \mathtt{fold}\left(\mathtt{unfold}(\mathcal{A})\right)=\mathcal{A}. $$ \end{definition} \begin{definition}[Tensor transpose for order-$p$ tensors ($p\ge3$), \cite{martin2013order}] If $\mathcal{A}$ is $n_{1} \times \cdots \times n_{p}$, then $\mathcal{A}^{T}$ is the $n_{2} \times n_{1} \times n_{3} \times \cdots \times n_{p}$ tensor obtained by tensor transposing each $\mathcal{A}^{(i)}$ for $i=1, \ldots, n_{p}$, and then reversing the order of the $\mathcal{A}^{(i)}$ for $2$ through $n_{p} .$ In other words, $$ \mathcal{A}^{T}= \mathtt{ fold }\left(\left[\begin{array}{c} (\mathcal{A}^{(1)})^{T} \\ (\mathcal{A}^{(n_{p})})^{T} \\ (\mathcal{A}^{(n_{p-1})})^{T} \\ \vdots \\ (\mathcal{A}^{(n_{2})})^{T} \end{array}\right]\right). $$ For complex tensors, the tensor transpose is taken with conjugation. \end{definition} \begin{definition}[Identity tensor for third-order tensors, \cite{kilmer2013third}] The identity tensor $\mathbf{\mathcal{I}} \in \mathbb{R}^{n \times n \times n_{3}}$ is the tensor whose first frontal slice is the $n\times n$ identity matrix, and whose other frontal slices are all zeros.
\end{definition} \begin{definition}[Identity tensor for order-$p$ tensors ($p>3$), \cite{martin2013order}] The $n \times n \times \ell_{1}\times \cdots \times \ell_{p-2}$ order-$p$ identity tensor $ \mathcal{I}$ is the tensor such that $\mathcal{I}^{(1)}$ is the $n \times n \times \ell_{1}\times \cdots \times \ell_{p-3}$ order-$(p-1)$ identity tensor, and $\mathcal{I}^{(j)}$ is the order-$(p-1)$ zero tensor for $j=2, \ldots, \ell_{p-2}$. \end{definition} \begin{definition}[Orthogonal tensor for order-$p$ tensors ($p\ge3$), \cite{martin2013order}] A real-valued tensor $\mathbf{\mathcal{Q}} \in \mathbb{R}^{n \times n \times \ell_{1} \times \cdots \times \ell_{p-2}}$ is orthogonal if it satisfies $ \mathbf{\mathcal{Q}}^{T}*\mathbf{\mathcal{Q}}=\mathbf{\mathcal{Q}}*\mathbf{\mathcal{Q}}^{T}=\mathbf{\mathcal{I}} $. A real-valued tensor $\mathbf{\mathcal{Q}} \in \mathbb{R}^{m \times n \times \ell_{1} \times \cdots \times \ell_{p-2}}$ is partially orthogonal if it satisfies $ \mathbf{\mathcal{Q}}^{T}*\mathbf{\mathcal{Q}}=\mathbf{\mathcal{I}}. $ \end{definition} \begin{definition}[f-diagonal tensor for order-$p$ tensors ($p\ge3$), \cite{martin2013order}] The f-diagonal tensor $\mathcal{A}$ has the property that $\mathcal{A}_{i_{1} i_{2} \ldots i_{p}}=0$ unless $i_{1}=i_{2} .$ \end{definition} \begin{definition}[$\ell_{2^{*}}$ norm of tensor column for order-$p$ tensors ($p\ge3$), \cite{zhang2016exact}] Let $\vec{\boldsymbol{x}}$ be an $n_{1} \times 1 \times n_{3}\times \cdots\times n_{p}$ tensor column, the $\ell_{2^{*}}$ norm denotes $$\|\vec{\boldsymbol{x}}\|_{2^{*}}=\sqrt{\sum_{i_1=1}^{n_{1}} \sum_{i_3=1}^{n_{3}} \cdots \sum_{i_p=1}^{n_{p}}\vec{\boldsymbol{x}}_{i_1 1 i_3\ldots i_p}^{2}}.$$ \end{definition} \begin{definition}[Tensor spectral norm for order-$p$ tensors ($p\ge3$)]\label{def_tsn} Given $\mathcal{A} \in \mathbb{R}^{n_{1} \times n_{2} \times \cdots \times n_{p}}$ and $\mathcal{V} \in \mathbb{R}^{n_{2} \times 1 \times \cdots \times n_{p}} $, the tensor spectral norm is defined as \begin{align} \|\mathcal{A}\|: &=\sup _{\|\mathcal{V}\|_{F} \leq 1}\|\mathcal{A} * \mathcal{V}\|_{F} \notag\\ &=\sup _{\|\mathcal{V}\|_{F} \leq 1}\|\operatorname{bcirc}(\mathcal{A}) * \operatorname{unfold}(\mathcal{V})\|_{F} \notag \\ &=\sup _{\|\mathcal{V}\|_{F} \leq 1}\|\boldsymbol{\tilde{A}} \cdot \boldsymbol{\hat{V}} \|_{F}\notag\\ &=\|\boldsymbol{\tilde{A}}\| \label{msnp}\\ &=\|\boldsymbol{\bar{A}}\| ,\label{simprop} \end{align} where $\boldsymbol{\hat{V}}$ is the $ n_2\rho\times 1$ unfolded matrix formed from $\mathbf{\mathcal{V}}$ in the base level of recursion. \end{definition} It is not hard to check that equation (\ref{msnp}) holds by the definition of the matrix spectral norm, and that equation (\ref{simprop}) holds by combining (\ref{orthop}) and the property that $\left(\boldsymbol{\tilde{F}} \otimes \boldsymbol{I}_{n_{1}}\right) / \sqrt{\rho}$ is unitary. \begin{definition}[t-SVD for order-$p$ tensors ($p\ge3$), \cite{martin2013order}] Let $\mathcal{A}$ be an $n_{1} \times \cdots \times n_{p}$ real-valued tensor. Then $\mathcal{A}$ can be factored as $$ \mathcal{A}=\mathcal{U} * \mathcal{S} * \mathcal{V}^{T} $$ where $\mathcal{U}, \mathcal{V}$ are orthogonal $n_{1} \times n_{1} \times n_{3} \times n_{4} \times \cdots \times n_{p}$ and $n_{2} \times n_{2} \times n_{3} \times n_{4} \times \cdots \times n_{p}$ tensors respectively, and $\mathcal{S}$ is an $n_{1} \times n_{2} \times \cdots \times n_{p}$ f-diagonal tensor. The factorization is called the t-SVD.
\end{definition} Basically, the t-SVD can be computed efficiently by the following steps: \begin{itemize} \item[1.] Compute $\mathbf{\mathcal{A}}=\mathtt{fft}(\mathbf{\mathcal{A}},[], i)$, for $i=3,\ldots,p$; \item[2.] Set $\mathbf{\bar{\mathcal{A}}}:=\mathbf{\mathcal{A}}$; \item[3.] Compute the matrix SVD $\boldsymbol{\bar{A}}^{(i)}=\boldsymbol{\bar{U}}^{(i)} \boldsymbol{\bar{S}}^{(i)} \boldsymbol{\bar{V}}^{(i)*}$ for each frontal slice, $ i=1,\ldots,\rho$; \item[4.] Compute $\mathbf{\bar{\mathcal{U}}}=\mathtt{ifft}(\mathbf{\bar{\mathcal{U}}}, [], i)$, $\mathbf{\bar{\mathcal{S}}}=\mathtt{ifft}(\mathbf{\bar{\mathcal{S}}}, [], i),$ and $\mathbf{\bar{\mathcal{V}}}=\mathtt{ifft}(\mathbf{\bar{\mathcal{V}}}, [], i)$, for $i=3,\ldots,p$; \item[5.] Set $\mathbf{\mathcal{U}}:=\mathbf{\bar{\mathcal{U}}}$, $\mathbf{\mathcal{S}}:=\mathbf{\bar{\mathcal{S}}}$ and $\mathbf{\mathcal{V}}:=\mathbf{\bar{\mathcal{V}}}$. \end{itemize} \begin{definition}[Tensor tubal rank for third-order tensors, \cite{lu2019tensor}] \label{tubalrank3} For $\mathbf{\mathcal{A}} \in$ $\mathbb{R}^{n_{1} \times n_{2} \times n_{3}},$ the tensor tubal rank, denoted as $\operatorname{rank}_{t}(\mathbf{\mathcal{A}}),$ is defined as the number of nonzero singular tubes of $\mathbf{\mathcal{S}},$ where $\mathbf{\mathcal{S}}$ is from the t-SVD of $\mathbf{\mathcal{A}}=\mathbf{\mathcal{U}} * \mathbf{\mathcal{S} }* \mathbf{\mathcal{V}}^{T} .$ We can write $$ \operatorname{rank}_{t}(\mathbf{\mathcal{A}})=\#\{i, \mathbf{\mathcal{S}}(i, i, 1) \neq 0\}=\#\{i, \mathbf{\mathcal{S}}(i, i,:) \neq 0\}. $$ \end{definition} In the following, we extend the above definition to the more general order-$p$ case with $p>3$. \begin{definition}[Tensor tubal rank for order-$p$ tensors ($p>3$)] For $\mathbf{\mathcal{A}} \in$ $\mathbb{R}^{n_{1} \times \cdots \times n_{p}},$ the tensor tubal rank, denoted as $\operatorname{rank}_{t}(\mathbf{\mathcal{A}}),$ is defined as the number of nonzero singular scalars of $\mathbf{\mathcal{S}},$ where $\mathbf{\mathcal{S}}$ is from the t-SVD of $\mathbf{\mathcal{A}}=\mathbf{\mathcal{U}} * \mathbf{\mathcal{S} }* \mathbf{\mathcal{V}}^{T} .$ We can write \begin{align} \operatorname{rank}_{t}(\mathbf{\mathcal{A}})&=\#\{i, \mathbf{\mathcal{S}}(i, i, 1, \ldots ,1) \neq 0\}\notag\\ &=\#\{i, \mathbf{\mathcal{S}}(i, i,:,\ldots,:) \neq 0\}.\notag \end{align} \end{definition} \begin{lemma}[Best tubal-rank-$k$ approximation for third-order tensors, \cite{kilmer2011factorization}] Let the t-SVD of $\mathbf{\mathcal{A}} \in \mathbb{R}^{n_{1} \times n_{2} \times n_{3}}$ be $ \mathbf{\mathcal{A}}=\mathbf{\mathcal{U}} * \mathbf{\mathcal{S}} * \mathbf{\mathcal{V}}^{T} $. For a given positive integer $k$, define $\mathbf{\mathcal{A}}_{k}=\sum_{s=1}^{k} \mathbf{\mathcal{U}}(:, s,:) * \mathbf{\mathcal{S}}(s, s,:) * \mathbf{\mathcal{V}}^{T}(:, s,:)$. Then $\mathbf{\mathcal{A}}_{k}=\underset{\mathbf{\hat{\mathcal{A}}} \in \mathbb{A}}{\arg \min }\|\mathbf{\mathcal{A}}-\mathbf{\hat{\mathcal{A}}}\|_{F},$ where $\mathbb{A}=\left\{\mathcal{X} * \mathcal{Y}^{T} | \mathcal{X} \in\mathbb{R}^{n_{1} \times k \times n_{3}}, \mathcal{Y} \in \mathbb{R}^{n_{2} \times k \times n_{3}}\right\}$. This means that $\mathbf{\mathcal{A}}_{k}$ is the approximation of $\mathbf{\mathcal{A}}$ with the tubal rank at most $k$. \end{lemma} The extension of the above result to general order-$p$ tensors is presented as follows; its proof will be given in Section VI.
\begin{lemma}[Best tubal-rank-$k$ approximation for order-$p$ tensors ($p>3$)]\label{lemma2} Let the t-SVD of $\mathbf{\mathcal{A}} \in \mathbb{R}^{n_{1} \times \cdots \times n_{p}}$ be $ \mathbf{\mathcal{A}}=\mathbf{\mathcal{U}} * \mathbf{\mathcal{S}} * \mathbf{\mathcal{V}}^{T} $. For a given positive integer $k$, define $\mathbf{\mathcal{A}}_{k}=\sum_{s=1}^{k} \mathbf{\mathcal{U}}(:, s,:,\ldots,:) * \mathbf{\mathcal{S}}(s, s,:,\ldots,:) * \mathbf{\mathcal{V}}^{T}(:, s,:,\ldots,:)$. Then $\mathbf{\mathcal{A}}_{k}=\underset{\mathbf{\hat{\mathcal{A}}} \in \mathbb{A}}{\arg \min }\|\mathbf{\mathcal{A}}-\mathbf{\hat{\mathcal{A}}}\|_{F},$ where $\mathbb{A}=\left\{\mathcal{X} * \mathcal{Y}^{T} | \mathcal{X} \in\mathbb{R}^{n_{1} \times k \times n_{3} \times \cdots \times n_{p}}, \mathcal{Y} \in \mathbb{R}^{n_{2} \times k \times n_{3} \times \cdots \times n_{p}}\right\}$. This means that $\mathbf{\mathcal{A}}_{k}$ is the approximation of $\mathbf{\mathcal{A}}$ with the tubal rank at most $k$. \end{lemma} \section{The proposed t-FD algorithm} In this section, we shall first focus on deriving the algorithmic procedure and conducting the corresponding theoretical analysis for third-order tensors, and then extend these to more general order-$p$ tensors with $p>3$. \subsection{Algorithmic procedure} Here we first briefly describe the core idea of matrix FD. It receives the input matrix $\boldsymbol{A} \in \mathbb{R}^{n \times d}$ in a streaming fashion, and produces the sketch matrix $\boldsymbol{B} \in \mathbb{R}^{\ell \times d} $ which contains only $\ell \ll n$ rows but still approximates the original matrix $\boldsymbol{A}$ well. Specifically, $\boldsymbol{B}$ is first initialized to an all-zero matrix and then receives each row of $\boldsymbol{A}$ one after the other. Once $\boldsymbol{B}$ is full, we orthogonalize $\boldsymbol{B}$ via its SVD and then apply a shrinking procedure to construct the new sketch matrix; this process is repeated over the entire data stream (a MATLAB-style sketch of this update is given below). As shown in~\cite{ghashami2016frequent}, the algorithm guarantees that \begin{equation} \label{eq:fd} \left\|\boldsymbol{A}^{ T} \boldsymbol{A}-\boldsymbol{B}^{T} \boldsymbol{B}\right\|_{2} \leq \frac{ \left\|\boldsymbol{A}-\boldsymbol{A}_{k}\right\|_{F}^{2}}{\ell-k}. \end{equation} Note that setting \begin{small}$\ell=\lceil k+1/\varepsilon\rceil$\end{small} yields an error of \begin{small}$\varepsilon \left\|\boldsymbol{A}-\boldsymbol{A}_{k}\right\|_{F}^{2}$\end{small}; that is to say, the sketch matrix $\boldsymbol{B}$ is within a $(1+\varepsilon)$ factor of the best rank-$k$ approximation. For the higher-order tensor case, although there have been many explorations of Tucker/CP decompositions for streaming data, randomized techniques or complicated optimization strategies are required to obtain a good low-rank approximation. Motivated by the matrix FD, we propose a simple and deterministic tensor sketching algorithm (t-FD), as stated in Algorithm $\ref{tensor-FD}$, to get a low-tubal-rank approximation from streaming data. Our goal is to find a small sketch $\mathcal{B}$ that admits an error bound similar to (\ref{eq:fd}). Assume we have $n_1$ data samples $\mathcal{A}_1, ..., \mathcal{A}_{n_1}$ of size $n_2 \times n_3$ that are received sequentially; we arrange them into a third-order tensor $\mathcal{A} \in \mathbb{R}^{n_1 \times n_2 \times n_3}$.
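To fix ideas, the following is a minimal MATLAB sketch of the matrix FD update recalled above; it mirrors the per-slice steps of Algorithm \ref{tensor-FD} specialized to matrices. The function name \texttt{fd\_sketch}, the $2\ell$-row buffer, and the assumption $d \ge 2\ell$ are our own illustrative choices rather than part of the original algorithm description.
\begin{verbatim}
function B = fd_sketch(A, ell)
% Streaming matrix FD: keep a 2*ell-row buffer and shrink
% whenever it fills up (assumes d >= 2*ell for simplicity).
    [n, d] = size(A);
    B = zeros(2*ell, d);
    next = 1;                     % next zero-valued row of B
    for j = 1:n
        B(next, :) = A(j, :);     % insert one incoming row
        next = next + 1;
        if next > 2*ell           % buffer full: rotate and shrink
            [~, S, V] = svd(B, 'econ');
            s = diag(S);
            delta = s(ell)^2;     % squared ell-th singular value
            s = sqrt(max(s.^2 - delta, 0));
            B = diag(s) * V';     % last ell+1 rows are now zero
            next = ell;           % rows ell..2*ell are free again
        end
    end
    B = B(1:ell, :);              % return the ell-row sketch
end
\end{verbatim}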
By utilizing the well-defined algebraic structure of the tensor-tensor product, the idea of matrix FD can be extended naturally and used to update the components of the sketch $\mathcal{B}$. \begin{figure*} \centering \includegraphics[width=0.68\textwidth]{tfd} \caption{Illustration of t-FD.} \label{figtFD} \end{figure*} During the implementation of Algorithm 1, $\mathbf{\mathcal{B}}$ is first initialized to an all-zero tensor, and then the zero-valued horizontal slices in $\mathbf{\mathcal{B}}$ are simply replaced by the horizontal slices from $\mathbf{\mathcal{A}}$. Intuitively speaking, each frontal slice $\boldsymbol{B}^{(i)}\ ( i=1,\ldots,n_{3})$ receives one row of $\boldsymbol{A}^{(i)}$ at a time, as shown in step 4. Afterwards, the last $\ell+1$ horizontal slices are nullified by a four-stage process (steps 6-14 in Algorithm 1). First, we use the Fast Fourier Transform (FFT) to get $\mathbf{\bar{\mathcal{B}}}$. Then, each frontal slice $\boldsymbol{\bar{B}}^{(i)}$ is rotated (from the left) using its SVD such that its rows are orthogonal and in descending magnitude order. Next, a shrinking procedure is applied to the rotated frontal slice so that its last $\ell+1$ rows become zero. Finally, the sketch $\mathbf{\mathcal{B}}$ is computed from $\mathbf{\bar{\mathcal{B}}}$ via the inverse FFT. A more detailed illustration is given in Fig. \ref{figtFD}. \begin{algorithm}[!htb] \caption{tensor FD (t-FD) for third-order tensors} \label{tensor-FD} \begin{algorithmic}[1] \REQUIRE $\mathbf{\mathcal{A}} \in \mathbb{R}^{n_{1} \times n_{2} \times n_{3}}, \text { sketch size } \ell$\\ \ENSURE $\mathbf{\mathcal{B}} \in \mathbb{R}^{\ell \times n_{2} \times n_{3}}$\\ \STATE Initialize $\mathcal{B} \in \mathbb{R}^{2\ell \times n_2 \times n_3}$ as a tensor with all elements being zero \FOR{$j = 1, \ldots, n_{1}$} \STATE Initialize $\delta_{j}^{(i)}=0 , i=1,\ldots,n_3$ \STATE Insert $\mathcal{A}_j$ into a zero-valued horizontal slice of $\mathcal{B}$\\ \IF{$\mathcal{B}$ is full} \STATE Compute $\mathbf{\bar{\mathcal{B}}}=\mathtt{fft}(\mathbf{\mathcal{B}},[], 3)$\\ \FOR{$i = 1, \ldots, n_{3}$} \STATE $\left[\boldsymbol{\bar{U}}^{(i)}, \boldsymbol{\bar{S}}^{(i)}, \boldsymbol{\bar{V}}^{(i)}\right] \leftarrow \operatorname{svd}(\boldsymbol{\bar{B}}^{(i)})$\\ \STATE $\boldsymbol{\bar{C}}^{(i)} \leftarrow \boldsymbol{\bar{S}}^{(i)}\boldsymbol{\bar{V}}^{(i)T}$\\ \STATE $\delta_{j}^{(i)} \leftarrow s_{\ell}^{(i)2}$, where $s_{\ell}^{(i)}$ is the $\ell$-th diagonal entry of $\boldsymbol{\bar{S}}^{(i)}$\\ \STATE $\boldsymbol{\bar{B}}^{(i)} \leftarrow \sqrt{\max(\boldsymbol{\bar{S}}^{(i)2}-\delta_{j}^{(i)} \boldsymbol{I}_{2\ell},0)} \cdot \boldsymbol{\bar{V}}^{(i)T}$ \STATE $\mathbf{\bar{\mathcal{B}}}^{(i)} \leftarrow \boldsymbol{\bar{B}}^{(i)}$\\ \ENDFOR \STATE Compute $\mathbf{\mathcal{B}}=\mathtt{ifft}(\mathbf{\bar{\mathcal{B}}},[], 3)$\\ \ENDIF \ENDFOR \STATE Set $\mathbf{\mathcal{B}}\leftarrow \mathbf{\mathcal{B}}(1 :\ell, :, :)$ \end{algorithmic} \end{algorithm} The theoretical analysis of the new algorithm is more challenging. First, several of the tensor norms involved, e.g., the tensor spectral norm, are more complicated. Second, since the truncation procedure is implemented in the Fourier domain, the relationship between the original tensor and the sketch tensor is hard to derive explicitly. Hence the proof techniques cannot be carried over directly from the matrix FD algorithm to the new t-FD algorithm. \subsection{Error bounds} This subsection presents our main theoretical results for Algorithm \ref{tensor-FD}.
In the subsequent analysis, we use two different error metrics to evaluate the distance between the original tensor $\mathbf{\mathcal{A}}$ and the sketch tensor $\mathbf{\mathcal{B}}$. The first error metric is the tensor covariance error, $\left\|\mathbf{\mathcal{A}}^{T}*\mathbf{\mathcal{A}}-\mathbf{\mathcal{B}}^{T}*\mathbf{\mathcal{B}}\right\|$, which measures the maximal singular value gap between the block diagonal matrices $\boldsymbol{\bar{A}}$ and $\boldsymbol{\bar{B}}$. This can be easily verified by the definition of the tensor spectral norm (Def. \ref{def_tsn}). The tensor covariance error is given by the following theorem. \begin{theorem}[Tensor covariance error]\label{main1} Given $\mathbf{\mathcal{A}} \in \mathbb{R}^{n_{1} \times n_{2} \times n_{3}}$ and the sketch size $\ell$, the sketch $\mathbf{\mathcal{B}} \in \mathbb{R}^{\ell \times n_{2} \times n_{3}}$ is constructed by Algorithm \ref{tensor-FD}, then for any $k<\frac{\ell}{c}$, $$ \left\|\mathbf{\mathcal{A}}^{T}*\mathbf{\mathcal{A}}-\mathbf{\mathcal{B}}^{T}*\mathbf{\mathcal{B}}\right\|\le\frac{\left\|\mathbf{\mathcal{A}}-\mathbf{\mathcal{A}}_{k}\right\|_{F}^{2}}{\frac{\ell}{c}-k},$$ where $c=\frac{n_{3}\sum_{j=1}^{n_1}\max\limits_{i} \delta_{j}^{(i)} }{\sum_{j=1}^{n_1}\sum_{i=1}^{n_{3}} \delta_{j}^{(i)}}$. \end{theorem} The second error metric is the tensor projection error, $\left\|\mathbf{\mathcal{A}}-\mathbf{\mathcal{A}} * \mathbf{\mathcal{V}}_{k} * \mathbf{\mathcal{V}}_{k}^{T}\right\|_{F}^{2}$, where $\mathbf{\mathcal{A}} * \mathbf{\mathcal{V}}_{k} * \mathbf{\mathcal{V}}_{k}^{T}$ denotes the projection of $\mathbf{\mathcal{A}}$ onto the rank-$k$ right orthogonal tensor of $\mathbf{\mathcal{B}}$. Intuitively, the tensor projection error measures the deviation generated during the projection process, and further indicates how accurate the chosen subspace is. The detailed analysis is shown in the following theorem. \begin{theorem}[Tensor projection error]\label{main} Given $\mathbf{\mathcal{A}} \in \mathbb{R}^{n_{1} \times n_{2} \times n_{3}}$ and the sketch size $\ell$, the sketch tensor $\mathbf{\mathcal{B}} \in \mathbb{R}^{\ell \times n_{2} \times n_{3}}$ is constructed by Algorithm \ref{tensor-FD}. Let $\mathbf{\mathcal{B}}=\mathbf{\mathcal{U}}*\mathbf{\mathcal{S}}*\mathbf{\mathcal{V}}^{T}$, and the tubal-rank-$k$ approximation is $\mathbf{\mathcal{B}}_{k}=\mathbf{\mathcal{U}}_{k}*\mathbf{\mathcal{S}}_{k}*\mathbf{\mathcal{V}}_{k}^{T}$. For any $k<\frac{\ell}{c}$, we have that \begin{align}\label{tpecompare} \left\|\mathbf{\mathcal{A}}-\mathbf{\mathcal{A}} * \mathbf{\mathcal{V}}_{k} * \mathbf{\mathcal{V}}_{k}^{T}\right\|_{F}^{2} \le \frac{\ell}{\ell-ck}\left\|\mathbf{\mathcal{A}}-\mathbf{\mathcal{A}}_{k}\right\|_{F}^{2}, \end{align} where $c=\frac{n_{3}\sum_{j=1}^{n_1}\max\limits_{i} \delta_{j}^{(i)} }{\sum_{j=1}^{n_1}\sum_{i=1}^{n_{3}} \delta_{j}^{(i)}}$. \end{theorem} \begin{remark} If we set $\ell=c\lceil k+k / \varepsilon\rceil$, we get the standard $(1+\varepsilon)$ bound $$ \left\|\mathbf{\mathcal{A}}-\mathbf{\mathcal{A}} * \mathbf{\mathcal{V}}_{k} * \mathbf{\mathcal{V}}_{k}^{T}\right\|_{F}^{2} \le (1+\varepsilon)\left\|\mathbf{\mathcal{A}}-\mathbf{\mathcal{A}}_{k}\right\|_{F}^{2}.$$ Note that when the rank $k$ and the parameter $c$ are fixed, $\varepsilon$ decreases inversely with the sketch size, thus we can achieve a nearly optimal deterministic error bound.
Moreover, when the third dimension $n_3$ is $1$, our algorithm reduces to the matrix FD; that is to say, matrix FD is a special case of t-FD. Noting that $c = 1$ when $n_3 = 1$, the theoretical guarantee of matrix FD is a special case of our Theorems 1 and 2. \end{remark} \begin{algorithm}[!htb] \caption{tensor FD (t-FD) for order-$p$ tensors ($p > 3$)} \label{tensor-FDp} \begin{algorithmic}[1] \REQUIRE $\mathbf{\mathcal{A}} \in \mathbb{R}^{n_{1} \times n_{2} \times \cdots\times n_{p}}, \text { sketch size } \ell$\\ \ENSURE $\mathbf{\mathcal{B}} \in \mathbb{R}^{\ell \times n_{2} \times \cdots\times n_{p}}$\\ \STATE Initialize $\mathcal{B} \in \mathbb{R}^{2\ell \times n_{2} \times \cdots\times n_{p}}$ as a tensor with all elements being zero \FOR{$j=1, \ldots, n_{1}$} \STATE Initialize $\delta_{j}^{(i)}=0 , i=1,\ldots,\rho$ \STATE Insert $\mathcal{A}_j$ into a zero-valued horizontal slice of $\mathcal{B}$, where $\mathbf{\mathcal{A}} =[\mathbf{\mathcal{A}}_1, ..., \mathbf{\mathcal{A}}_{n_1} ]$ and $\mathcal{A}_j \in \mathbb{R}^{1 \times n_2 \times\cdots \times n_{p}}$\\ \IF{$\mathcal{B}$ is full} \FOR{$i=3,\ldots,p$} \STATE Compute $\mathbf{\mathcal{B}}=\operatorname{fft}(\mathbf{\mathcal{B}},[], i)$\\ \ENDFOR \STATE Set $\mathbf{\bar{\mathcal{B}}}=\mathcal{B}$ \FOR{$i=1,\ldots,\rho$} \STATE $\left[\boldsymbol{\bar{U}}^{(i)}, \boldsymbol{\bar{S}}^{(i)}, \boldsymbol{\bar{V}}^{(i)}\right] \leftarrow \operatorname{svd}(\boldsymbol{\bar{B}}^{(i)})$\\ \STATE $\boldsymbol{\bar{C}}^{(i)} \leftarrow \boldsymbol{\bar{S}}^{(i)}\boldsymbol{\bar{V}}^{(i)T}$\\ \STATE $\delta_{j}^{(i)} \leftarrow s_{\ell}^{(i)2}$, where $s_{\ell}^{(i)}$ is the $\ell$-th diagonal entry of $\boldsymbol{\bar{S}}^{(i)}$\\ \STATE $\boldsymbol{\bar{B}}^{(i)} \leftarrow \sqrt{\max(\boldsymbol{\bar{S}}^{(i)2}-\delta_{j}^{(i)} \boldsymbol{I}_{2\ell},0)} \cdot \boldsymbol{\bar{V}}^{(i)T}$ \STATE $\mathbf{\bar{\mathcal{B}}}^{(i)} \leftarrow \boldsymbol{\bar{B}}^{(i)}$\\ \ENDFOR \FOR{$i=3,\ldots,p$} \STATE Compute $\mathbf{\bar{\mathcal{B}}}=\operatorname{ifft}(\mathbf{\bar{\mathcal{B}}},[], i)$\\ \ENDFOR \STATE Set $\mathcal{B}=\mathbf{\bar{\mathcal{B}}}$ \ENDIF \ENDFOR \STATE Return $\mathbf{\mathcal{B}}$ \end{algorithmic} \end{algorithm} \begin{remark}It should be pointed out that, as shown in detail in Section VI, our proof techniques differ from the matrix FD case in two aspects. Firstly, the t-FD algorithm based on the t-SVD computes in the Fourier domain, which leads us to use tensor operators such as $\mathtt{bcirc}$ and $\mathtt{unfold}$ as a bridge between the original and Fourier domains. Secondly, bounding the information loss in each iteration requires us to utilize the relationship among the frontal slices. \end{remark} \subsection{Extension to order-$p$ tensors ($p > 3$)} We now state the algorithmic procedure for the general case of order-$p$ tensors ($p >3$) in Algorithm \ref{tensor-FDp}. Even though the above results are stated for third-order tensors, they can be generalized fairly easily to higher-order tensors by combining the well-defined algebraic framework of the order-$p$ t-SVD \cite{martin2013order} ($p > 3$) with the properties of the discrete Fourier transform (DFT) matrix.
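Structurally, the only new ingredient relative to the third-order case is the pair of mode-wise FFT/inverse-FFT loops in Algorithm \ref{tensor-FDp}. A minimal MATLAB sketch of this transform (the helper name \texttt{modewise\_fft} is our own) is:
\begin{verbatim}
function T = modewise_fft(T, inverse)
% Apply the DFT (or its inverse) along every mode from 3 to p,
% matching the fft/ifft loops of Algorithm 2.
    for i = 3:ndims(T)
        if inverse
            T = ifft(T, [], i);
        else
            T = fft(T, [], i);
        end
    end
    % In floating point, one may take real(T) after the inverse
    % pass when the result is known to be real.
end
\end{verbatim}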
It is worth mentioning that for the original tensor $\mathbf{\mathcal{A}} \in \mathbb{R}^{n_{1} \times n_{2} \times \cdots\times n_{p}}$ and the sketch tensor $\mathbf{\mathcal{B}} \in \mathbb{R}^{\ell \times n_{2} \times \cdots\times n_{p}}$, the tensor covariance and projection errors are similar to the third-order case, with the only difference being that $c$ changes from $\frac{n_{3}\sum_{j=1}^{n_1}\max\limits_{i} \delta_{j}^{(i)} }{\sum_{j=1}^{n_1}\sum_{i=1}^{n_{3}} \delta_{j}^{(i)}}$ to $\frac{\rho\sum_{j=1}^{n_1}\max\limits_{i} \delta_{j}^{(i)} }{\sum_{j=1}^{n_1}\sum_{i=1}^{\rho} \delta_{j}^{(i)}}$. The details of such error bounds are shown below. \begin{theorem}[Tensor covariance error for order-$p$ tensors ($p > 3$)]\label{tce-p-order} Given $\mathbf{\mathcal{A}} \in \mathbb{R}^{n_{1} \times n_{2} \times \cdots\times n_{p}}$ and the sketch size $\ell$, the sketch $\mathbf{\mathcal{B}} \in \mathbb{R}^{\ell \times n_{2} \times \cdots\times n_{p}}$ is constructed by Algorithm \ref{tensor-FDp}, then for any $k<\frac{\ell}{c}$, $$ \left\|\mathbf{\mathcal{A}}^{T}*\mathbf{\mathcal{A}}-\mathbf{\mathcal{B}}^{T}*\mathbf{\mathcal{B}}\right\|\le\frac{\left\|\mathbf{\mathcal{A}}-\mathbf{\mathcal{A}}_{k}\right\|_{F}^{2}}{\frac{\ell}{c}-k},$$ where $c=\frac{\rho\sum_{j=1}^{n_1}\max\limits_{i} \delta_{j}^{(i)} }{\sum_{j=1}^{n_1}\sum_{i=1}^{\rho} \delta_{j}^{(i)}}$ and $\rho=n_{3}n_{4}\ldots n_p$. \end{theorem} \begin{theorem}[Tensor projection error for order-$p$ tensors ($p > 3$)]\label{tpe-p-order} Given $\mathbf{\mathcal{A}} \in \mathbb{R}^{n_{1} \times n_{2} \times \cdots\times n_{p}}$ and the sketch size $\ell$, the sketch tensor $\mathbf{\mathcal{B}} \in \mathbb{R}^{\ell \times n_{2} \times \cdots\times n_{p}}$ is constructed by Algorithm \ref{tensor-FDp}. Let $\mathbf{\mathcal{B}}=\mathbf{\mathcal{U}}*\mathbf{\mathcal{S}}*\mathbf{\mathcal{V}}^{T}$, and the tubal-rank-$k$ approximation is $\mathbf{\mathcal{B}}_{k}=\mathbf{\mathcal{U}}_{k}*\mathbf{\mathcal{S}}_{k}*\mathbf{\mathcal{V}}_{k}^{T}$. For any $k<\frac{\ell}{c}$, we have that \begin{align} \left\|\mathbf{\mathcal{A}}-\mathbf{\mathcal{A}} * \mathbf{\mathcal{V}}_{k} * \mathbf{\mathcal{V}}_{k}^{T}\right\|_{F}^{2} \le \frac{\ell}{\ell-ck}\left\|\mathbf{\mathcal{A}}-\mathbf{\mathcal{A}}_{k}\right\|_{F}^{2}, \label{thm4111} \end{align} where $c=\frac{\rho\sum_{j=1}^{n_1}\max\limits_{i} \delta_{j}^{(i)} }{\sum_{j=1}^{n_1}\sum_{i=1}^{\rho} \delta_{j}^{(i)}}$ and $\rho=n_{3}n_{4}\ldots n_p$. \end{theorem} \subsection{Comparison to matricization FD} A naive approach for tackling tensor data is the so-called matricization technique, that is, vectorizing the horizontal slices separately and then regarding the result as a matrix. We thus set up the following comparison algorithm. For each incoming data sample $\mathcal{A}_{i}$, we convert it to its mode-1 unfolding and then use it to update the sketch matrix $\boldsymbol{B}$. Lastly, a folding procedure is used to obtain the tensor $\mathbf{\mathcal{B}}$. One can see Algorithm $\ref{Matricization-tensorFD}$ for more details; a short MATLAB-style sketch of this baseline is also given below.
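For concreteness, here is a hedged MATLAB sketch of this baseline; it relies on the \texttt{fd\_sketch} routine sketched earlier, and the function name \texttt{mtfd} is our own. Note that in MATLAB's column-major layout, \texttt{reshape(A, n1, [])} realizes the mode-1 unfolding.
\begin{verbatim}
function B = mtfd(A, ell)
% Matricization baseline: mode-1 unfold, matrix FD, fold back.
    sz = size(A);                       % [n1, n2, ..., np]
    A1 = reshape(A, sz(1), []);         % n1 x (n2*rho) unfolding
    B1 = fd_sketch(A1, ell);            % ell x (n2*rho) sketch
    B  = reshape(B1, [ell, sz(2:end)]); % ell x n2 x ... x np
end
\end{verbatim}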
\begin{algorithm}[!htb] \caption{Matricization-tensor-FD (MtFD)} \label{Matricization-tensorFD} \begin{algorithmic}[1] \REQUIRE $\mathbf{\mathcal{A}} \in \mathbb{R}^{n_{1} \times n_{2} \times\cdots\times n_{p}}, \text { sketch size } \ell$\\ \ENSURE $\mathbf{\mathcal{B}} \in \mathbb{R}^{\ell \times n_{2}\times\cdots \times n_{p}}$\\ \STATE Get the mode-1 unfolding matrix $\boldsymbol{A}_{(1)}\in \mathbb{R}^{n_{1} \times n_{2} \rho}$ \STATE Compute $\boldsymbol{B}_{(1)}\in \mathbb{R}^{\ell \times n_{2}\rho}$ = FD($\boldsymbol{A}_{(1)}$) \STATE Get the mode-1 folding tensor $\mathbf{\mathcal{B}} \in \mathbb{R}^{\ell \times n_{2} \times\cdots\times n_{p}}$ \end{algorithmic} \end{algorithm} For this matricization algorithm, we can also derive a simple covariance error bound. \begin{theorem}\label{thm-norm2-matrization} Given $\mathbf{\mathcal{A}} \in \mathbb{R}^{n_{1} \times n_{2} \times\cdots\times n_{p}}$ and sketch size $\ell$, the sketch $\mathbf{\mathcal{B}} \in \mathbb{R}^{\ell \times n_{2} \times \cdots \times n_{p}}$ is constructed by Algorithm \ref{Matricization-tensorFD}, then for any $k<\ell$, $$ \left\|\mathbf{\mathcal{A}}^{T}*\mathbf{\mathcal{A}}-\mathbf{\mathcal{B}}^{T}*\mathbf{\mathcal{B}}\right\| \le \frac{\rho}{\ell-k}\left\|\mathbf{\mathcal{A}}\right\|_{F}^{2}. $$ \end{theorem} Since the unfolding operation destroys the correlations among frontal slices, we cannot derive a tighter bound in terms of $\left\|\mathbf{\mathcal{A}}-\mathbf{\mathcal{A}}_{k}\right\|_{F}^{2}$. However, through the inequality $\left\|\mathbf{\mathcal{A}}-\mathbf{\mathcal{A}}_{k}\right\|_{F}^{2} \leq \left\|\mathbf{\mathcal{A}}\right\|_{F}^{2}$, we can relax the bound of Theorem \ref{tce-p-order} to the weaker form $\left\|\mathbf{\mathcal{A}}\right\|_{F}^{2} /( {\frac{\ell}{c}-k})$. In this case, when $\ell \ge \left(1+\frac{1-1/c}{1/c-1/\rho}\right)k$, we achieve a smaller covariance error bound. Even though the parameter $c$ cannot be calculated explicitly, extensive numerical experiments later demonstrate that $c$ is generally much smaller than $\rho$. We also notice that when the tensor $\mathcal{A}$ satisfies the low-tubal-rank assumption, $\left\|\mathbf{\mathcal{A}}-\mathbf{\mathcal{A}}_{k}\right\|_{F}^{2} $ is much smaller than $\left\|\mathbf{\mathcal{A}}\right\|_{F}^{2}$, which reveals the superiority of t-FD compared with MtFD. Here, we only give the tensor covariance error bound of MtFD. The following theorem gives the relationship between the tensor covariance error and the projection error; with it, the projection error bound can be obtained immediately.
\begin{theorem} \label{rem: rel} For any tensor $\mathcal{A} \in \mathbb{R}^{n_1 \times n_2\times\cdots \times n_p}$, we have the following relationship between the projection error and the covariance error bounds: $$\small \left\|\mathbf{\mathcal{A}}-\mathbf{\mathcal{A}} * \mathbf{\mathcal{V}}_{k} * \mathbf{\mathcal{V}}_{k}^{T}\right\|_{F}^{2}\le \left\|\mathbf{\mathcal{A}}-\mathbf{\mathcal{A}}_{k}\right\|_{F}^{2}+2k\left\|\mathbf{\mathcal{A}}^{T}*\mathbf{\mathcal{A}}-\mathbf{\mathcal{B}}^{T}*\mathbf{\mathcal{B}}\right\|.$$ \end{theorem} \begin{remark} By combining Theorems \ref{thm-norm2-matrization} and \ref{rem: rel}, we can derive the projection error bound for MtFD, i.e., \begin{align}\label{tpe} \left\|\mathbf{\mathcal{A}}-\mathbf{\mathcal{A}} * \mathbf{\mathcal{V}}_{k} * \mathbf{\mathcal{V}}_{k}^{T}\right\|_{F}^{2}\le \left\|\mathbf{\mathcal{A}}-\mathbf{\mathcal{A}}_{k}\right\|_{F}^{2}+\frac{2k\rho}{\ell-k}\left\|\mathbf{\mathcal{A}}\right\|_{F}^{2}. \end{align} Since $\rho$ is larger than $\ell$ and $c$ is usually small (the effect of $c$ is explained in detail in Section V), the right-hand side of (\ref{thm4111}) is much smaller than the second term on the right-hand side of (\ref{tpe}). This indicates a consistently better theoretical guarantee for t-FD compared with MtFD. \end{remark} \subsection{Complexity analysis} For the proposed t-FD algorithm, we only need to store the sketch $\mathcal{B} \in \mathbb{R}^{2 \ell \times n_2 \times\cdots\times n_p}$, and the memory requirement is $O(\ell n_2 \rho)$ with $\rho=n_3n_4\ldots n_p$, which is substantially smaller than loading the entire tensor into memory. In each update, one fast Fourier transform and one inverse Fourier transform are required, which costs $O\left(\ell n_{2}\rho\log \rho\right)$ time. Computing the singular value decomposition of all frontal slices takes $O\left(\ell^{2}n_{2}\rho\right)$. Since there are at most $\lceil \frac{n_{1}-\ell+1}{\ell+1} \rceil $ iterations in total, the computational cost is bounded by $O\left(n_{1}n_{2}\rho\left(\log \rho+\ell\right)\right)$. In the experiments, we assume $\frac{n_{1}-\ell+1}{\ell+1}$ is an integer; otherwise we can append zero horizontal slices to make it one. Specifically, for the third-order case, the computational cost is $O\left(n_{1}n_{2}n_3\left(\log n_3+\ell\right)\right)$ since $\rho=n_3$. \section{Experiments} In this section, we compare the efficiency and effectiveness of our proposed t-FD with three other streaming algorithms on both synthetic and real-world tensor data. All these algorithms are implemented in MATLAB R2020a and run on dual Intel(R) Xeon(R) Gold 5120 CPUs @ 2.20GHz with 256 GB memory. In each setting, we run each algorithm 10 times and report the average results. The detailed information on the compared algorithms is listed in the following. \begin{itemize} \item[1.] MtFD: As briefly illustrated in Algorithm \ref{Matricization-tensorFD}, the direct way of implementing FD for order-$p$ tensor data is to first unfold the tensor, then update a sketch matrix $\boldsymbol{B} \in \mathbb{R}^{\ell \times n_2 \ldots n_p}$ by the FD algorithm, and finally obtain the sketch tensor $\mathcal{B}$ by folding the matrix $\boldsymbol{B}$. \item[2.] srt-SVD: We adapt the rt-SVD algorithm proposed in \cite{zhang2018randomized} into a single-pass randomized algorithm.
The sketch $\mathcal{B}$ is calculated as $\mathcal{Q} * \mathcal{A}$, where $\mathcal{Q} \in \mathbb{R}^{\ell \times n_1 \times n_3 \times \cdots \times n_p}$ is a random Gaussian tensor whose first frontal slice $\boldsymbol{Q}^{(1)}$ has i.i.d. entries drawn from $\mathcal{N}(0,1)/\sqrt{\ell}$, and whose other slices are all zeros. The t-product between $\mathcal{A}$ and $\mathcal{Q}$ can easily be computed in a streaming way. \item[3.] Norm Sampling (NormSamp for short) \cite{drineas2006fast}, \cite{holodnak2015randomized}: We adopt a well-known random sampling method to deal with the tensor data. Precisely, the sketch is formed by sampling $\ell$ horizontal slices independently from the $n_1$ horizontal slices of $\mathcal{A}$ and rescaling. The $i$-th slice is chosen with probability $p_i = \|\mathcal{A}_i\|_F^2/ \|\mathcal{A}\|_F^2$ and rescaled to $\mathcal{A}_i / \sqrt{\ell p_i}$. Since the value of $\|\mathcal{A}\|_F$ is unknown in advance, we implement this method with two passes over the data, calculating the norm and sampling separately. \end{itemize} We consider three measures to compare the performance: \begin{itemize} \item The projection error: $\frac{\left\|\mathbf{\mathcal{A}}-\mathbf{\mathcal{A}} * \mathbf{\mathcal{V}}_{k} * \mathbf{\mathcal{V}}_{k}^{T}\right\|_{F}^{2}}{ \left\|\mathbf{\mathcal{A}}-\mathbf{\mathcal{A}}_{k}\right\|_{F}^{2}}$ \item The covariance error: $\frac{\left\|\mathbf{\mathcal{A}}^{T}*\mathbf{\mathcal{A}}-\mathbf{\mathcal{B}}^{T}*\mathbf{\mathcal{B}}\right\|}{\left\|\mathbf{\mathcal{A}}-\mathbf{\mathcal{A}}_{k}\right\|_{F}^{2}}$ \item Running time in seconds \end{itemize} \subsection{Synthetic data examples} In this set of experiments, we use third- and fourth-order synthetic tensor data for illustration. Inspired by the data generation process for testing FD in \cite{liberty2013simple}, we construct the third-order tensor data as $\mathcal{A} = \mathcal{S} * \hat{\mathcal{D}} * \mathcal{U}^{T} + \mathcal{N}/\eta$, where $\mathcal{S} \in \mathbb{R}^{n_1 \times k \times n_3}$ is generated with $\mathcal{ S }_{ijk} \sim \mathcal{N}(0,1)$ i.i.d. and $\mathcal{U} \in \mathbb{R}^{n_2 \times k \times n_3}$ is a partially orthogonal tensor. $ \hat{\mathcal{D}} $ is an f-diagonal tensor whose elements represent the tensor singular values. Here we consider singular values in the Fourier domain with different decaying spectra, including linearly, polynomially and exponentially decaying ones; for each slice, we randomly choose one of the three types. $\mathcal{N} \in \mathbb{R}^{n_1 \times n_2 \times n_3}$ represents Gaussian noise and $\eta$ determines the noise level. We adopt the same procedure to generate fourth-order tensors. In this experiment, we mainly measure the approximation power under different rank settings. We consider the cases of $\mathcal{A} \in \mathbb{R}^{10000 \times 1000 \times 10}$ and $\mathcal{A} \in \mathbb{R}^{10000 \times 1000 \times 10 \times 3}$, the noise level $\eta = 10$ and the true rank $k \in \{10, 20, 50\}$. For each setting, we generate three tensors and report the average performance. \begin{table*}[ht!]
\centering \begin{adjustbox}{center} \begin{tabular}{|c|c|c|c|c|c|} \hline \rotatebox{90}{\begin{tiny}$\mathsf{Projection\ error}$\end{tiny}} & \raisebox{-1mm}{\includegraphics[scale = 0.22]{ell_10_proj.pdf}} & \raisebox{-1mm}{\includegraphics[scale = 0.22]{ell_20_proj.pdf}} & \raisebox{-1mm}{\includegraphics[scale = 0.23]{ell_50_proj.pdf}} & \raisebox{-1mm}{\includegraphics[scale = 0.22]{highway_proj.pdf}} & \raisebox{-1mm}{\includegraphics[scale = 0.23]{uber_proj.pdf}}\\ \hline \rotatebox{90}{\begin{tiny}$\mathsf{Covariance\ error}$\end{tiny}} & \raisebox{-1mm}{\includegraphics[scale = 0.22]{ell_10_cov.pdf}} & \raisebox{-1mm}{\includegraphics[scale = 0.23]{ell_20_cov.pdf}} & \raisebox{-1mm}{\includegraphics[scale = 0.22]{ell_50_cov.pdf}} & \raisebox{-1mm}{\includegraphics[scale = 0.23]{highway_cov.pdf}} & \raisebox{-1mm}{\includegraphics[scale = 0.23]{uber_cov.pdf}}\\ \hline \rotatebox{90}{\begin{tiny}$\mathsf{Running\ time}$\end{tiny}} & \raisebox{-1mm}{\includegraphics[scale = 0.22]{ell_10_run.pdf}} & \raisebox{-1mm}{\includegraphics[scale = 0.23]{ell_20_run.pdf}} & \raisebox{-1mm}{\includegraphics[scale = 0.22]{ell_50_run.pdf}} & \raisebox{-1mm}{\includegraphics[scale = 0.23]{highway_run.pdf}} & \raisebox{-1mm}{\includegraphics[scale = 0.22]{uber_run.pdf}}\\ \hline \rotatebox{90}{\begin{tiny}$\mathsf{The\ value \ of \ c}$\end{tiny}} & \raisebox{-2mm}{\includegraphics[scale = 0.21]{ell_10_c.pdf}} & \raisebox{-2mm}{\includegraphics[scale = 0.21]{ell_20_c.pdf}} & \raisebox{-2mm}{\includegraphics[scale = 0.21]{ell_50_c.pdf}} & \raisebox{-2mm}{\includegraphics[scale = 0.21]{highway_c.pdf}} & \raisebox{-2mm}{\includegraphics[scale = 0.21]{uber_c.pdf}}\\ \hline & $\mathsf{k = 10}$ & $\mathsf{k = 20}$ & $\mathsf{k = 50}$ & $\mathsf{highway}$ & $\mathsf{Uber}$\\ \hline \end{tabular} \end{adjustbox} \vspace{10pt} \caption{Experimental results for third-order synthetic and real datasets} \label{tab:stimuli} \end{table*} \begin{table*}[ht!] 
\centering \begin{adjustbox}{center} \begin{tabular}{|c|c|c|c|c|c|} \hline \rotatebox{90}{\begin{tiny}$\mathsf{Projection\ error}$\end{tiny}} & \raisebox{-1mm}{\includegraphics[scale = 0.22]{high_ell_10_proj.pdf}} & \raisebox{-1mm}{\includegraphics[scale = 0.22]{high_ell_20_proj.pdf}} & \raisebox{-1mm}{\includegraphics[scale = 0.23]{high_ell_50_proj.pdf}} & \raisebox{-1mm}{\includegraphics[scale = 0.22]{high_v1_proj.pdf}} & \raisebox{-1mm}{\includegraphics[scale = 0.23]{high_v2_proj.pdf}}\\ \hline \rotatebox{90}{\begin{tiny}$\mathsf{Covariance\ error}$\end{tiny}} & \raisebox{-1mm}{\includegraphics[scale = 0.22]{high_ell_10_cov.pdf}} & \raisebox{-1mm}{\includegraphics[scale = 0.23]{high_ell_20_cov.pdf}} & \raisebox{-1mm}{\includegraphics[scale = 0.22]{high_ell_50_cov.pdf}} & \raisebox{-1mm}{\includegraphics[scale = 0.23]{high_v1_cov.pdf}} & \raisebox{-1mm}{\includegraphics[scale = 0.22]{high_v2_cov.pdf}}\\ \hline \rotatebox{90}{\begin{tiny}$\mathsf{Running\ time}$\end{tiny}} & \raisebox{-1mm}{\includegraphics[scale = 0.22]{high_ell_10_run.pdf}} & \raisebox{-1mm}{\includegraphics[scale = 0.23]{high_ell_20_run.pdf}} & \raisebox{-1mm}{\includegraphics[scale = 0.22]{high_ell_50_run.pdf}} & \raisebox{-1mm}{\includegraphics[scale = 0.23]{high_v1_run.pdf}} & \raisebox{-1mm}{\includegraphics[scale = 0.22]{high_v2_run.pdf}}\\ \hline \rotatebox{90}{\begin{tiny}$\mathsf{The\ value \ of \ c}$\end{tiny}} & \raisebox{-2mm}{\includegraphics[scale = 0.21]{high_ell_10_c.pdf}} & \raisebox{-2mm}{\includegraphics[scale = 0.21]{high_ell_20_c.pdf}} & \raisebox{-2mm}{\includegraphics[scale = 0.21]{high_ell_50_c.pdf}} & \raisebox{-2mm}{\includegraphics[scale = 0.21]{high_v1_c.pdf}} & \raisebox{-2mm}{\includegraphics[scale = 0.21]{high_v2_c.pdf}}\\ \hline & $\mathsf{k = 10}$ & $\mathsf{k = 20}$ & $\mathsf{k = 50}$ & $\mathsf{Tabby Cat}$ & $\mathsf{Park Bench}$\\ \hline \end{tabular} \end{adjustbox} \vspace{10pt} \caption{Experimental results for fourth-order synthetic and real datasets} \label{tab:stimuli:fou} \end{table*} The performance of t-FD is consistently much better than that of the other algorithms in terms of both error measures, especially the covariance error. For MtFD, even though the sketch tensor can capture a good subspace and achieve a low projection error, it fails to approximate the covariance well. For the third-order tensor, the covariance error decreases only subtly as the sketch size increases, and it remains nearly unchanged for the fourth-order tensor. We attribute this to the intrinsic structure being destroyed in the update process. In contrast, our algorithm shows an obvious decrease as the sketch size grows. Moreover, we notice that in the higher-rank settings our method is even more competitive. For the other two randomized algorithms, i.e., srt-SVD and NormSamp, there is little difference in their performance. As for the running time, all these algorithms show linear growth in the different settings. Clearly, the srt-SVD method is the slowest. Comparing MtFD and t-FD, t-FD is only slightly slower than MtFD; given the gains in precision shown by the performance analysis, we consider this overhead well justified. Even though NormSamp can be run in seconds, its performance is much worse than ours and it comes with no theoretical guarantee comparable to ours.\\ \subsection{Real data examples} We now test our algorithm on four real-world streaming datasets. The highway traffic data \cite{chen2020low} records the traffic speed time series over several weeks from 11160 sensors and can thus be treated as a dense tensor.
Here we choose four weeks of data and formulate it as a tensor $\mathcal{B} \in \mathbb{R}^{11160 \times 288 \times 28}$. Since we observe strong similarity in the sensor data, a lower rank of 10 is used for this comparison. The Uber data \cite{smith2017frostt} can be represented as an extremely sparse tensor $\mathcal{A} \in \mathbb{R}^{183 \times 24 \times 1140 \times 1717}$ with only $0.038\%$ non-zeros. The value at $(i,j,k,l)$ represents the number of pick-ups on day $i$, hour $j$, at latitude $k$ and longitude $l$. We aggregate the time dimension and subsample the location dimensions to obtain a tensor $\mathcal{A} \in \mathbb{R}^{4392 \times 500 \times 500}$. Due to the high sparsity, we set the rank to $50$. For the fourth-order tensor datasets, we consider two color video datasets studied in \cite{malik2021sampling}, namely Tabby Cat, represented as a tensor of size $1280\times720\times 3 \times286$, and Park Bench, represented as a tensor of size 1920 $\times$ 1280 $\times$ 3 $\times$ 364. The rank is set to $20$ for these two datasets. It can easily be seen from the last two columns of Tables \ref{tab:stimuli} and \ref{tab:stimuli:fou} that our algorithm is more accurate and stable, especially for the larger and sparser Uber dataset. We also notice that, even though the results are averaged over ten runs, srt-SVD and NormSamp do not achieve stable results in some cases. \subsection{Impact of parameter $c$} In our theoretical analysis, the parameter $c$ in Theorems \ref{main1} and \ref{main} is left unspecified; it is determined by the structure of the tensor. In our synthetic data as well as the real data, the parameter $c$ is much smaller than $\rho$, in which case our algorithm has a superior performance. Here we construct two extreme cases to verify the effect of the parameter $c$, in which the generated tensor has the form $\mathcal{A} = \mathcal{B} + \alpha \mathcal{U}$. Each frontal slice of $\mathcal{B}$ is identical, with entries sampled from $\mathcal{N}(0,1)$, and $\mathcal{U}$ is a random tensor uniformly distributed on $[0,1]$. The parameter $\alpha$ controls the difference among the slices: when $\alpha$ becomes smaller, the parameter $c$ gets closer to $\rho$. We consider $\alpha \in \{0.01, 100\}$ and a test tensor $\mathcal{A}$ of size $3000 \times 300 \times 20$. Figs. 2 and 3 show the comparison between MtFD and t-FD in these two extreme cases. It can be seen that a larger $c$ indeed deteriorates the performance of t-FD; however, it still performs comparably to MtFD. When $c$ is very small, our estimate becomes more accurate. Both findings further demonstrate the superiority of our tensor version of FD over the direct matricization technique for tackling tensor data. In Tables \ref{tab:stimuli} and \ref{tab:stimuli:fou}, we also plot the parameter $c$ for each setting. For the synthetic datasets, $c$ consistently decreases as the sketch size becomes larger, and its value stays close to 1. However, for the higher-order Park Bench dataset, even though the value of $c$ is larger than for the other datasets, it is still much smaller than $\rho$, and it decreases quickly with larger sketch sizes. Additionally, our algorithm still outperforms the other compared methods, which is consistent with the previous experimental results.
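As an implementation note, given the shrinkage values $\delta_j^{(i)}$ recorded while running Algorithm \ref{tensor-FD} (or Algorithm \ref{tensor-FDp}), the parameter $c$ can be evaluated directly. A minimal MATLAB sketch of this post-processing follows; the helper name \texttt{param\_c} is our own, and it assumes at least one shrinking step occurred so that the denominator is nonzero.
\begin{verbatim}
function c = param_c(delta)
% delta: n1 x rho matrix with delta(j,i) = delta_j^(i);
% rows of zeros (iterations without shrinkage) contribute nothing.
    rho = size(delta, 2);
    c = rho * sum(max(delta, [], 2)) / sum(delta(:));
end
\end{verbatim}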
\begin{figure}[t] \label{fig:com} \centering \subfigure{ \includegraphics[scale=0.25]{1_proj.eps} } \quad \subfigure{ \includegraphics[scale=0.25]{1_cov.eps} } \caption{$\alpha = 0.01, c \approx 19.75$} \subfigure{ \includegraphics[scale=0.25]{10000_proj.eps} } \quad \subfigure{ \includegraphics[scale=0.25]{10000_cov.eps} } \caption{$\alpha = 100, c \approx 1.01$} \end{figure} \vspace{-5mm} \subsection{Application to Video Scene Classification} In this subsection, we show how to use our algorithm to classify real video scenes. The video \cite{malik2018low} is recorded by a fixed camera, and a person appears in the scene twice. It consists of 2200 frames, each of size 1080 by 1980. Our aim is to identify the frames in which a person appears. We load the whole tensor sequentially along the second dimension, each time loading a slice $\mathcal{A}_j \in \mathbb{R}^{1080 \times 2200}$. We then choose the sketch size $\ell$ varying in $\{10,20,50\}$. As such, we obtain a sketch tensor $\mathcal{B} \in \mathbb{R}^{\ell \times 1080 \times 2200}$. By applying the t-SVD, we obtain the dominant space $\mathcal{U} \in \mathbb{R}^{\ell \times \ell \times 2200}$, and further compute the mean matrix $\boldsymbol{U} \in \mathbb{R}^{\ell \times 2200}$ along the second dimension. The $i$-th column of $\boldsymbol{U}$ represents the feature vector of the $i$-th frame. To identify the frames, we apply the $K$-means clustering algorithm to the feature vectors of all 2200 frames. In this real-world application, we can identify most of the frames containing a person using the proposed t-FD algorithm. Some typical results are shown in Fig. \ref{clf}. In this figure, when no person appears, the frames are classified into the green class; when a person is captured by the camera, the frames are marked as the orange class. Compared with the previous works \cite{malik2018low, sun2019low}, the classification results obtained by t-FD are similar, even though we select fewer clusters than those two works. Additionally, among the different sketch sizes, the orange frames are all classified correctly for the smaller sketch sizes $\ell \in \{10, 20\}$, while for $\ell = 50$ more frames containing a person are identified, but some frames are also misclassified. \begin{figure*}[htp!] \centering \includegraphics[scale=0.56]{cla.png} \vspace{-10pt} \caption{Classification results for different sketch sizes} \label{clf} \end{figure*} \section{Proofs} In this section, we prove that our proposed t-FD algorithm is within a $(1+\varepsilon)$ factor of the best tubal-rank-$k$ approximation. Meanwhile, we derive the error bounds of MtFD for comparison. To this end, we first need to prove the following auxiliary properties and lemmas. \subsection{Some useful lemmas and properties} Since Lemma \ref{lemma2} plays an important role in our theoretical analysis, we present its proof first. \begin{proof}[Proof of Lemma \ref{lemma2}] Set $n=\min \left(n_{1}, n_{2}\right)$; then, due to the property that $\left\|\mathcal{A}\right\|_{F}^{2}=\left\|\mathcal{S}\right\|_{F}^{2}=\frac{1}{\rho}\left\|\boldsymbol{\bar{S}}\right\|_{F}^{2}$, we get that $$ \begin{aligned} &\left\|\mathcal{A}-\mathcal{A}_{k}\right\|_{F}^{2} \notag\\ =&\|\mathcal{S}(k+1: n, k+1: n,:,\ldots,:)\|_{F}^{2} \\ =&\frac{1}{\rho}\left\|\boldsymbol{\bar{S}}^{(1)}(k+1: n, k+1: n)\right\|_{F}^{2}+\cdots\notag\\ &+\frac{1}{\rho}\left\|\boldsymbol{\bar{S}}^{(\rho)}(k+1: n, k+1: n)\right\|_{F}^{2}.
\end{aligned} $$ Now let $\mathcal{B} \in \mathbb{A}$, so that $\mathcal{B}=\mathcal{X} * \mathcal{Y}^{T}$. Then $$ \begin{aligned} &\|\mathcal{A}-\mathcal{B}\|_{F}^{2}\notag\\ =&\frac{1}{\rho}\left\|\boldsymbol{\bar{A}}^{(1)}-\boldsymbol{\bar{X}}^{(1)} \boldsymbol{\bar{Y}}^{(1)T}\right\|_{F}^{2}+\cdots+\frac{1}{\rho}\left\|\boldsymbol{\bar{A}}^{(\rho)}-\boldsymbol{\bar{X}}^{(\rho)} \boldsymbol{\bar{Y}}^{(\rho)T}\right\|_{F}^{2} \\ \geqslant& \frac{1}{\rho}\left\|\boldsymbol{\bar{S}}^{(1)}(k+1: n, k+1: n)\right\|_{F}^{2}+\cdots\notag\\ &+\frac{1}{\rho}\left\|\boldsymbol{\bar{S}}^{(\rho)}(k+1: n, k+1: n)\right\|_{F}^{2} . \end{aligned} $$ This finishes the proof of Lemma \ref{lemma2}. \end{proof} The block circulant operation on tensors acts as a bridge for relating tensor norms between the original domain and the Fourier domain. We first briefly review its relevant properties. \begin{lemma}\cite{lund2020tensor}\label{lund} Given tensors $\mathcal{A} \in \mathbb{C}^{n \times n \times p}$ and $\mathcal{B} \in \mathbb{C}^{n \times s \times p} .$ Then\\ (1) $\mathtt{bcirc}(\mathcal{A} * \mathcal{B})=\mathtt{bcirc}(\mathcal{A}) \mathtt{bcirc}(\mathcal{B})$;\\ (2) $(\mathcal{A} * \mathcal{B})^{\top}=\mathcal{B}^{\top} * \mathcal{A}^{\top}$;\\ (3) $\mathtt{bcirc}\left(\mathcal{A}^{\top}\right)=(\mathtt{bcirc}(\mathcal{A}))^{\top}$. \end{lemma} For proving our main theorems, we next need to prove the following three auxiliary properties of Algorithm \ref{tensor-FD}. In the subsequent analysis, the t-SVDs of $\mathbf{\mathcal{A}}$ and $\mathbf{\mathcal{B}}$ are expressed as $\mathbf{\mathcal{A}}=\mathbf{\mathcal{Z}}*\mathbf{\mathcal{W}}*\mathbf{\mathcal{Y}}^{T}$ and $\mathbf{\mathcal{B}}=\mathbf{\mathcal{U}}*\mathbf{\mathcal{S}}*\mathbf{\mathcal{V}}^{T}$, respectively. The corresponding tubal-rank-$k$ approximations are $\mathbf{\mathcal{A}}_{k}=\mathbf{\mathcal{Z}}_{k}*\mathbf{\mathcal{W}}_{k}*\mathbf{\mathcal{Y}}^{T}_{k}$ and $\mathbf{\mathcal{B}}_{k}=\mathbf{\mathcal{U}}_{k}*\mathbf{\mathcal{S}}_{k}*\mathbf{\mathcal{V}}_{k}^{T}$. Moreover, let $\vec{\boldsymbol{y}}_{i} \in \mathbb{R}^{n_{2} \times 1 \times n_{3}}$ be the $i$-th lateral slice of $\mathbf{\mathcal{Y}}_{k}$, and $\vec{\boldsymbol{v}}_{i} \in \mathbb{R}^{n_{2} \times 1 \times n_{3}}$ be the $i$-th lateral slice of $\mathbf{\mathcal{V}}_{k}$. Let $ \Delta = \sum_{j=1}^{n_1}\max\limits_{i} \delta_{j}^{(i)}$ be the sum of the maximum information losses in the truncation procedure. \begin{property}\label{pro1} For any tensor column $\vec{\boldsymbol{x}} \in \mathbb{R}^{n_{2} \times 1 \times n_{3}}$, if $\mathbf{\mathcal{B}}$ is the output of applying Algorithm \ref{tensor-FD} to the input $\mathbf{\mathcal{A}}$, then $\left\|\mathbf{\mathcal{A}} * \vec{\boldsymbol{x}}\right\|_{2^{*}} ^{2}-\left\|\mathbf{\mathcal{B}} * \vec{\boldsymbol{x}}\right\|_{2^{*}} ^{2} \ge0$. \end{property} \begin{proof} Let $\boldsymbol{x}\in \mathbb{R}^{n_{2}n_{3} \times 1 }$ be the vectorized column vector of $\vec{\boldsymbol{x}} $. According to the definition of the t-product (Def. 1), we obtain $$ \left\|\mathbf{\mathcal{A}} * \vec{\boldsymbol{x}}\right\|_{2^{*}} ^{2}-\left\|\mathbf{\mathcal{B}} * \vec{\boldsymbol{x}}\right\|_{2^{*}} ^{2} =\left\|\mathtt{bcirc}(\mathbf{\mathcal{A}})\boldsymbol{x}\right\|^{2}-\left\|\mathtt{bcirc}(\mathbf{\mathcal{B}})\boldsymbol{x}\right\|^{2}.
$$ \noindent Furthermore, from the t-FD algorithm it is clear that $\boldsymbol{C}_{j}^{(i)}$ consists of two parts: the $\boldsymbol{B}_{j-1}^{(i)}$ produced by the previous iteration, and the newly inserted row $\boldsymbol{A}_{j}^{(i)}$. Then, according to the definition of the block circulant matrix, we obtain $$\left\|\mathtt{bcirc}(\mathbf{\mathcal{A}})\boldsymbol{x}\right\|^{2}+\sum_{j=1}^{n_1}\left\|\mathtt{bcirc}(\mathbf{\mathcal{B}}_{j-1})\boldsymbol{x}\right\|^{2}=\sum_{j=1}^{n_1}\left\|\mathtt{bcirc}(\mathbf{\mathcal{C}}_{j})\boldsymbol{x}\right\|^{2}.$$ \noindent Therefore, \begin{align} &\left\|\mathbf{\mathcal{A}} * \vec{\boldsymbol{x}}\right\|_{2^{*}} ^{2}-\left\|\mathbf{\mathcal{B}} * \vec{\boldsymbol{x}}\right\|_{2^{*}} ^{2} \notag \\ =&\left\|\mathtt{bcirc}(\mathbf{\mathcal{A}})\boldsymbol{x}\right\|^{2}-\left\|\mathtt{bcirc}(\mathbf{\mathcal{B}})\boldsymbol{x}\right\|^{2} \notag\\ =&\left\|\mathtt{bcirc}(\mathbf{\mathcal{A}})\boldsymbol{x}\right\|^{2}+\sum_{j=1}^{n_1}\left(\left\|\mathtt{bcirc}(\mathbf{\mathcal{B}}_{j-1})\boldsymbol{x}\right\|^{2}-\left\|\mathtt{bcirc}(\mathbf{\mathcal{B}}_{j})\boldsymbol{x}\right\|^{2} \right) \notag\\ =&\sum_{j=1}^{n_1}\left(\left\|\mathtt{bcirc}(\mathbf{\mathcal{C}}_{j})\boldsymbol{x}\right\|^{2}-\left\|\mathtt{bcirc}(\mathbf{\mathcal{B}}_{j})\boldsymbol{x}\right\|^{2} \right) \notag\\ \ge&0. \notag \end{align} This finishes the proof of Property \ref{pro1}. \end{proof} \begin{property}\label{pro2} For any tensor column $\vec{\boldsymbol{x}} \in \mathbb{R}^{n_{2} \times 1 \times n_{3}}$ with $\|\vec{\boldsymbol{x}}\|_{2^{*}} = 1$, if $\mathbf{\mathcal{B}}$ is the output of applying Algorithm \ref{tensor-FD} to the input $\mathbf{\mathcal{A}}$, then $\left\|\mathbf{\mathcal{A}} * \vec{\boldsymbol{x}}\right\|_{2^{*}} ^{2}-\left\|\mathbf{\mathcal{B}} * \vec{\boldsymbol{x}}\right\|_{2^{*}} ^{2} \le \Delta $. \end{property} \begin{proof} For $\vec{\boldsymbol{x}} \in \mathbb{R}^{n_{2} \times 1 \times n_{3}}$, let $\boldsymbol{x}\in \mathbb{R}^{n_{2}n_{3} \times 1 }$ denote its vectorized column vector. Letting $\boldsymbol{x}$ be a unit vector, as explained in the proof of Property \ref{pro1} above, there holds $$ \left\|\mathbf{\mathcal{A}} * \vec{\boldsymbol{x}}\right\|_{2^{*}} ^{2}-\left\|\mathbf{\mathcal{B}} * \vec{\boldsymbol{x}}\right\|_{2^{*}} ^{2}=\sum_{j=1}^{n_1}\left(\left\|\mathtt{bcirc}(\mathbf{\mathcal{C}}_{j})\boldsymbol{x}\right\|^{2}-\left\|\mathtt{bcirc}(\mathbf{\mathcal{B}}_{j})\boldsymbol{x}\right\|^{2} \right).
$$ Since $\boldsymbol{x}$ is a unit vector, we obtain \begin{align} &\left\|\mathtt{bcirc}(\mathbf{\mathcal{C}}_{j})\boldsymbol{x}\right\|^{2}-\left\|\mathtt{bcirc}(\mathbf{\mathcal{B}}_{j})\boldsymbol{x}\right\|^{2} \notag\\ =&\boldsymbol{x}^{T}\left(\mathtt{bcirc}(\mathbf{\mathcal{C}}_{j})^{T}\mathtt{bcirc}(\mathbf{\mathcal{C}}_{j})-\mathtt{bcirc}(\mathbf{\mathcal{B}}_{j})^{T}\mathtt{bcirc}(\mathbf{\mathcal{B}}_{j})\right)\boldsymbol{x} \notag\\ \le&\left\|\mathtt{bcirc}(\mathbf{\mathcal{C}}_{j})^{T}\mathtt{bcirc}(\mathbf{\mathcal{C}}_{j})-\mathtt{bcirc}(\mathbf{\mathcal{B}}_{j})^{T}\mathtt{bcirc}(\mathbf{\mathcal{B}}_{j})\right\| .\notag \end{align} According to Lemma \ref{lund}, we further obtain \begin{align} &\left\|\mathtt{bcirc}(\mathbf{\mathcal{C}}_{j})^{T}\mathtt{bcirc}(\mathbf{\mathcal{C}}_{j})-\mathtt{bcirc}(\mathbf{\mathcal{B}}_{j})^{T}\mathtt{bcirc}(\mathbf{\mathcal{B}}_{j})\right\| \notag\\ =&\left\|\mathtt{bcirc}\left(\mathbf{\mathcal{C}}_{j}^{T}*\mathbf{\mathcal{C}}_{j}-\mathbf{\mathcal{B}}_{j}^{T}*\mathbf{\mathcal{B}}_{j}\right)\right\|, \notag \end{align} then due to $\left(\boldsymbol{F}_{n_{3}} \otimes \boldsymbol{I}_{n_{1}}\right) \cdot \mathtt{bcirc}(\mathbf{\mathcal{A}}) \cdot\left(\boldsymbol{F}_{n_{3}}^{-1} \otimes \boldsymbol{I}_{n_{2}}\right)=\boldsymbol{\bar{A}}$ and the fact that $\left(\boldsymbol{F}_{n_{3}} \otimes \boldsymbol{I}_{n_{1}}\right) / \sqrt{n_{3}}$ is unitary, we have $$ \left\|\mathtt{bcirc}\left(\mathbf{\mathcal{C}}_{j}^{T}*\mathbf{\mathcal{C}}_{j}-\mathbf{\mathcal{B}}_{j}^{T}*\mathbf{\mathcal{B}}_{j}\right)\right\| =\left\| \text{DFT}\left(\boldsymbol{C}_{j}^{T}\boldsymbol{C}_{j}- \boldsymbol{B}_{j}^{T}\boldsymbol{B}_{j} \right) \right\|. $$ Furthermore, since the DFT is a linear transform, the following equation holds $$ \left\| \text{DFT}\left(\boldsymbol{C}_{j}^{T}\boldsymbol{C}_{j}- \boldsymbol{B}_{j}^{T}\boldsymbol{B}_{j} \right) \right\| =\left\| \boldsymbol{\bar{C}}_{j}^{T}\boldsymbol{\bar{C}}_{j}- \boldsymbol{\bar{B}}_{j}^{T}\boldsymbol{\bar{B}}_{j} \right\|. $$ Therefore, \begin{align} \left\|\mathbf{\mathcal{A}} * \vec{\boldsymbol{x}}\right\|_{2^{*}} ^{2}-\left\|\mathbf{\mathcal{B}} * \vec{\boldsymbol{x}}\right\|_{2^{*}} ^{2} \le&\sum_{j=1}^{n_1}\left\| \boldsymbol{\bar{C}}_{j}^{T}\boldsymbol{\bar{C}}_{j}- \boldsymbol{\bar{B}}_{j}^{T}\boldsymbol{\bar{B}}_{j} \right\| \notag\\ =&\sum_{j=1}^{n_1}\max\limits_{i} \delta_{j}^{(i)}=\Delta. \notag \end{align} This completes the proof. \end{proof} \begin{lemma} \label{lemma:re} For a tensor column $\vec{\boldsymbol{x}} \in \mathbb{R}^{n_{2} \times 1 \times n_{3}}$, let $\boldsymbol{x}\in \mathbb{R}^{n_{2}n_{3} \times 1 }$ denote its vectorized column vector. Let $\boldsymbol{x}$ be the eigenvector of $\operatorname{bcirc}(\mathbf{\mathcal{A}})^{T}\operatorname{bcirc}(\mathbf{\mathcal{A}})-\operatorname{bcirc}(\mathbf{\mathcal{B}})^{T}\operatorname{bcirc}(\mathbf{\mathcal{B}})$ corresponding to its largest eigenvalue; then $$ \left\|\mathbf{\mathcal{A}}^{T}*\mathbf{\mathcal{A}}-\mathbf{\mathcal{B}}^{T}*\mathbf{\mathcal{B}}\right\| =\left\|\mathbf{\mathcal{A}} * \vec{\boldsymbol{x}}\right\|_{2^{*}} ^{2}-\left\|\mathbf{\mathcal{B}} * \vec{\boldsymbol{x}}\right\|_{2^{*}} ^{2} . $$ \end{lemma} \begin{proof} In what follows, we take $\boldsymbol{x}$ to be the eigenvector of $\mathtt{bcirc}(\mathbf{\mathcal{A}})^{T}\mathtt{bcirc}(\mathbf{\mathcal{A}})-\mathtt{bcirc}(\mathbf{\mathcal{B}})^{T}\mathtt{bcirc}(\mathbf{\mathcal{B}})$ corresponding to its largest eigenvalue.
According to the definition of the tensor spectral norm (Def. \ref{def_tsn}), we have \begin{align} &\left\|\mathbf{\mathcal{A}}^{T}*\mathbf{\mathcal{A}}-\mathbf{\mathcal{B}}^{T}*\mathbf{\mathcal{B}}\right\| \notag\\ =&\left\| \boldsymbol{\bar{A}}^{T}\boldsymbol{\bar{A}}- \boldsymbol{\bar{B}}^{T}\boldsymbol{\bar{B}}\right\| \notag \\ =&\left\| \mathtt{bcirc}(\mathbf{\mathcal{A}})^{T}\mathtt{bcirc}(\mathbf{\mathcal{A}})-\mathtt{bcirc}(\mathbf{\mathcal{B}})^{T}\mathtt{bcirc}(\mathbf{\mathcal{B}}) \right\| \notag \\ =&\left\|\mathtt{bcirc}(\mathbf{\mathcal{A}})\boldsymbol{x}\right\|^{2}-\left\|\mathtt{bcirc}(\mathbf{\mathcal{B}})\boldsymbol{x}\right\|^{2} \notag\\ =&\left\|\mathbf{\mathcal{A}} * \vec{\boldsymbol{x}}\right\|_{2^{*}} ^{2}-\left\|\mathbf{\mathcal{B}} * \vec{\boldsymbol{x}}\right\|_{2^{*}} ^{2} \notag. \end{align} This concludes the proof of Lemma \ref{lemma:re}. \end{proof} The first two properties bound the projected distance of a tensor column $\vec{\boldsymbol{x}} $ under $\mathcal{A}$ and $\mathcal{B}$, which indicates that the tensor sketch $\mathcal{B}$ really captures the principal subspace of $\mathcal{A}$. Lemma \ref{lemma:re} demonstrates the importance of building an upper bound on $\Delta$. \begin{property}\label{pro3} If $\mathbf{\mathcal{B}}$ is the output of applying Algorithm \ref{tensor-FD} to the input $\mathbf{\mathcal{A}}$ with prescribed sketch size $\ell$, then for any $\ell> ck$, we have $\Delta \le \frac{1}{\frac{\ell}{c}-k}\left\|\mathbf{\mathcal{A}}-\mathbf{\mathcal{A}}_{k}\right\|_{F}^{2}$, where $c=\frac{n_{3}\sum_{j=1}^{n_1}\max\limits_{i} \delta_{j}^{(i)} }{\sum_{j=1}^{n_1}\sum_{i=1}^{n_{3}} \delta_{j}^{(i)}}$. \end{property} \begin{proof} Noting that $\mathbf{\mathcal{B}}$ is initialized to an all-zero tensor, we have \begin{align} \left\|\mathbf{\mathcal{B}}\right\|_{F}^{2} =&\sum_{j=1}^{n_1}\left(\left\|\mathbf{\mathcal{B}}_{j}\right\|_{F}^{2}-\left\|\mathbf{\mathcal{B}}_{j-1}\right\|_{F}^{2} \right) \notag\\ =&\sum_{j=1}^{n_1}\left[\left(\left\|\mathbf{\mathcal{C}}_{j}\right\|_{F}^{2}-\left\|\mathbf{\mathcal{B}}_{j-1}\right\|_{F}^{2} \right) -\left(\left\|\mathbf{\mathcal{C}}_{j}\right\|_{F}^{2}-\left\|\mathbf{\mathcal{B}}_{j}\right\|_{F}^{2} \right) \right]. \notag \end{align} Since $\boldsymbol{C}_{j}^{(i)}$ is composed of $\boldsymbol{B}_{j-1}^{(i)}$ and $\boldsymbol{A}_{j}^{(i)}$, the following relationship holds $$ \sum_{j=1}^{n_1}\left(\left\|\mathbf{\mathcal{C}}_{j}\right\|_{F}^{2}-\left\|\mathbf{\mathcal{B}}_{j-1}\right\|_{F}^{2} \right) =\left\|\mathbf{\mathcal{A}}\right\|_{F}^{2}. $$ According to the definition and properties of the DFT tensor $\boldsymbol{\bar{C}}_{j}$, we obtain $$\left\|\mathbf{\mathcal{C}}_{j}\right\|_{F}^{2}=\frac{1}{n_{3}}\left\|\mathtt{bcirc}(\mathbf{\mathcal{C}}_{j})\right\|_{F}^{2}=\frac{1}{n_{3}}\left\|\boldsymbol{\bar{C}}_{j}\right\|_{F}^{2},$$ and by utilizing the relation between the Frobenius norm and the trace of a matrix, we further obtain $$ \left\|\boldsymbol{\bar{C}}_{j}\right\|_{F}^{2}=\text{tr}\left(\boldsymbol{\bar{C}}_{j}^{T}\boldsymbol{\bar{C}}_{j}\right).
$$ Therefore, \begin{align} \left\|\mathbf{\mathcal{B}}\right\|_{F}^{2} =&\sum_{j=1}^{n_1}\left[\left(\left\|\mathbf{\mathcal{C}}_{j}\right\|_{F}^{2}-\left\|\mathbf{\mathcal{B}}_{j-1}\right\|_{F}^{2} \right) -\left(\left\|\mathbf{\mathcal{C}}_{j}\right\|_{F}^{2}-\left\|\mathbf{\mathcal{B}}_{j}\right\|_{F}^{2} \right) \right] \notag\\ =&\left\|\mathbf{\mathcal{A}}\right\|_{F}^{2}-\frac{1}{n_{3}}\sum_{j=1}^{n_1}\text{tr}\left(\boldsymbol{\bar{C}}_{j}^{T}\boldsymbol{\bar{C}}_{j}- \boldsymbol{\bar{B}}_{j}^{T}\boldsymbol{\bar{B}}_{j}\right) \notag \\ \le&\left\|\mathbf{\mathcal{A}}\right\|_{F}^{2}-\frac{\ell}{n_{3}}\sum_{j=1}^{n_1}\sum_{i=1}^{n_{3}} \delta_{j}^{(i)} \notag \\ =&\left\|\mathbf{\mathcal{A}}\right\|_{F}^{2}-\frac{\ell}{c} \Delta, \notag \end{align} where $c=\frac{n_{3}\sum_{j=1}^{n_1}\max\limits_{i} \delta_{j}^{(i)} }{\sum_{j=1}^{n_1}\sum_{i=1}^{n_{3}} \delta_{j}^{(i)}}$. Furthermore, based on the property that $\|\mathbf{\mathcal{A}}\|_{F}^{2}=\|\mathbf{\mathcal{A}}*\mathbf{\mathcal{Y}}\|_{F}^{2}=\sum_{i=1}^{r}\left\|\mathbf{\mathcal{A}} * \vec{\boldsymbol{y}}_{i}\right\|_{2^{*}} ^{2}$, where $r$ is the tensor tubal rank of $\mathbf{\mathcal{A}}$, we have \begin{align} \frac{\ell}{c} \Delta & \le\|\mathbf{\mathcal{A}}\|_{F}^{2}-\|\mathbf{\mathcal{B}}\|_{F}^{2} \notag\\ &=\sum_{i=1}^{k}\left\|\mathbf{\mathcal{A}} * \vec{\boldsymbol{y}}_{i}\right\|_{2^{*}} ^{2}+\sum_{i=k+1}^{r}\left\|\mathbf{\mathcal{A}} * \vec{\boldsymbol{y}}_{i}\right\|_{2^{*}} ^{2}-\|\mathbf{\mathcal{B}}\|_{F}^{2} \notag\\ &=\sum_{i=1}^{k}\left\|\mathbf{\mathcal{A}} * \vec{\boldsymbol{y}}_{i}\right\|_{2^{*}} ^{2}+\left\|\mathbf{\mathcal{A}}-\mathbf{\mathcal{A}}_{k}\right\|_{F}^{2}-\|\mathbf{\mathcal{B}}\|_{F}^{2} \notag\\ & \leq\left\|\mathbf{\mathcal{A}}-\mathbf{\mathcal{A}}_{k}\right\|_{F}^{2}+\sum_{i=1}^{k}\left(\left\|\mathbf{\mathcal{A}} * \vec{\boldsymbol{y}}_{i}\right\|_{2^{*}} ^{2}-\left\|\mathbf{\mathcal{B}} * \vec{\boldsymbol{y}}_{i}\right\|_{2^{*}} ^{2}\right) \notag\\ & \leq\left\|\mathbf{\mathcal{A}}-\mathbf{\mathcal{A}}_{k}\right\|_{F}^{2}+k \Delta. \notag \end{align} Then, we can conclude that $\Delta \le \frac{1}{\frac{\ell}{c}-k}\left\|\mathbf{\mathcal{A}}-\mathbf{\mathcal{A}}_{k}\right\|_{F}^{2}$. \end{proof} \subsection{Error bounds of the proposed t-FD} \begin{proof}[Proof of Theorem \ref{main1}] By Lemma \ref{lemma:re}, when the vectorized column vector of $\vec{\boldsymbol{x}} $ is taken to be the eigenvector of $\mathtt{bcirc}(\mathbf{\mathcal{A}})^{T}\mathtt{bcirc}(\mathbf{\mathcal{A}})-\mathtt{bcirc}(\mathbf{\mathcal{B}})^{T}\mathtt{bcirc}(\mathbf{\mathcal{B}})$ corresponding to its largest eigenvalue, the tensor covariance error is equivalent to the following formulation: \begin{align} \left\|\mathbf{\mathcal{A}} * \vec{\boldsymbol{x}}\right\|_{2^{*}} ^{2}-\left\|\mathbf{\mathcal{B}} * \vec{\boldsymbol{x}}\right\|_{2^{*}} ^{2}. \label{equivalent} \end{align} So, to complete the proof, we only need to bound (\ref{equivalent}). Noting that the aforementioned tensor column $\vec{\boldsymbol{x}}$ satisfies the unit norm constraint, combining Properties \ref{pro2} and \ref{pro3} yields the desired bound. \end{proof} \begin{proof}[Proof of Theorem \ref{main}] By the Pythagorean theorem, $\left\|\mathbf{\mathcal{A}}-\mathbf{\mathcal{A}} * \mathbf{\mathcal{V}}_{k} * \mathbf{\mathcal{V}}_{k}^{T}\right\|_{F}^{2} =\left\|\mathbf{\mathcal{A}}\right\|_{F}^{2}-\left\|\mathbf{\mathcal{A}} * \mathbf{\mathcal{V}}_{k}\right\|_{F}^{2}$.
Since $\vec{\boldsymbol{v}}_{i} \in \mathbb{R}^{n_{2} \times 1 \times n_{3}}$ is the $i$-th lateral slice of $\mathbf{\mathcal{V}}_{k}$, we rewrite $\left\|\mathbf{\mathcal{A}} * \mathbf{\mathcal{V}}_{k} \right\|_{F}^{2}$ as $\sum_{i=1}^{k}\left\|\mathbf{\mathcal{A}} * \vec{\boldsymbol{v}}_{i}\right\|_{2^{*}} ^{2}$. So, we obtain $$ \left\|\mathbf{\mathcal{A}}-\mathbf{\mathcal{A}} * \mathbf{\mathcal{V}}_{k} * \mathbf{\mathcal{V}}_{k}^{T}\right\|_{F}^{2} =\left\|\mathbf{\mathcal{A}}\right\|_{F}^{2}-\sum_{i=1}^{k}\left\|\mathbf{\mathcal{A}} * \vec{\boldsymbol{v}}_{i}\right\|_{2^{*}} ^{2} . $$ According to Property \ref{pro1}, it is easy to see that $$ \left\|\mathbf{\mathcal{A}} * \vec{\boldsymbol{v}}_{i}\right\|_{2^{*}} ^{2}\ge\left\|\mathbf{\mathcal{B}} * \vec{\boldsymbol{v}}_{i}\right\|_{2^{*}} ^{2}. $$ Therefore, we can further obtain that $$ \left\|\mathbf{\mathcal{A}}-\mathbf{\mathcal{A}} * \mathbf{\mathcal{V}}_{k} * \mathbf{\mathcal{V}}_{k}^{T}\right\|_{F}^{2}\le \left\|\mathbf{\mathcal{A}}\right\|_{F}^{2}-\sum_{i=1}^{k}\left\|\mathbf{\mathcal{B}} * \vec{\boldsymbol{v}}_{i}\right\|_{2^{*}} ^{2} . $$ Note that $\mathbf{\mathcal{B}}_{k}=\mathbf{\mathcal{U}}_{k}*\mathbf{\mathcal{S}}_{k}*\mathbf{\mathcal{V}}_{k}^{T}$, i.e., the $\vec{\boldsymbol{v}}_{i}$ are the top-$k$ right singular slices of $\mathbf{\mathcal{B}}$, while the $\vec{\boldsymbol{y}}_{i}$ are those of $\mathbf{\mathcal{A}}$. Thus, $\sum_{i=1}^{k}\left\|\mathbf{\mathcal{B}} * \vec{\boldsymbol{v}}_{i}\right\|_{2^{*}} ^{2}\ge \sum_{i=1}^{k}\left\|\mathbf{\mathcal{B}} * \vec{\boldsymbol{y}}_{i}\right\|_{2^{*}} ^{2}.$ Therefore, $$ \left\|\mathbf{\mathcal{A}}-\mathbf{\mathcal{A}} * \mathbf{\mathcal{V}}_{k} * \mathbf{\mathcal{V}}_{k}^{T}\right\|_{F}^{2} \le\left\|\mathbf{\mathcal{A}}\right\|_{F}^{2}-\sum_{i=1}^{k}\left\|\mathbf{\mathcal{B}} * \vec{\boldsymbol{y}}_{i}\right\|_{2^{*}} ^{2} . $$ Then, it follows from the conclusion of Property \ref{pro2} that \begin{align} \left\|\mathbf{\mathcal{A}}-\mathbf{\mathcal{A}} * \mathbf{\mathcal{V}}_{k} * \mathbf{\mathcal{V}}_{k}^{T}\right\|_{F}^{2} \le&\|\mathbf{\mathcal{A}}\|_{F}^{2}-\sum_{i=1}^{k}\left(\left\|\mathbf{\mathcal{A}} * \vec{\boldsymbol{y}}_{i}\right\|_{2^{*}} ^{2}-\Delta\right) \notag \\ =&\|\mathbf{\mathcal{A}}\|_{F}^{2}-\left\|\mathbf{\mathcal{A}}_{k}\right\|_{F}^{2}+k \Delta \notag\\ \le&\frac{\ell}{\ell-ck}\left\|\mathbf{\mathcal{A}}-\mathbf{\mathcal{A}}_{k}\right\|_{F}^{2}, \label{fsimilar} \end{align} where the last inequality can be derived directly from Property \ref{pro3}. Moreover, if we set $\ell=c\lceil k+k / \varepsilon\rceil$, then $\ell-ck\ge ck/\varepsilon$, and hence $\frac{\ell}{\ell-ck}\le 1+\varepsilon$; we thus recover the standard bound $ \left\|\mathbf{\mathcal{A}}-\mathbf{\mathcal{A}} * \mathbf{\mathcal{V}}_{k} * \mathbf{\mathcal{V}}_{k}^{T}\right\|_{F}^{2} \le (1+\varepsilon)\left\|\mathbf{\mathcal{A}}-\mathbf{\mathcal{A}}_{k}\right\|_{F}^{2}.$ This completes the proof of Theorem \ref{main}. \end{proof} \begin{proof}[Proof of Theorem \ref{tce-p-order}] The proof is very similar to that of Theorem \ref{main1}.
Firstly, according to the relationship of the spectral norm between $\mathbf{\mathcal{A}}$, $\boldsymbol{\tilde{A}}$ and $\boldsymbol{\bar{A}}$, the tensor covariance error can be reformulated as: \begin{align} \left\|\mathbf{\mathcal{A}}^{T}*\mathbf{\mathcal{A}}-\mathbf{\mathcal{B}}^{T}*\mathbf{\mathcal{B}}\right\| =\left\|\mathbf{\mathcal{A}} * \vec{\boldsymbol{x}}\right\|_{2^{*}} ^{2}-\left\|\mathbf{\mathcal{B}} * \vec{\boldsymbol{x}}\right\|_{2^{*}} ^{2} .\label{combine1} \end{align} The second step is then to upper bound $\left\|\mathbf{\mathcal{A}} * \vec{\boldsymbol{x}}\right\|_{2^{*}} ^{2}-\left\|\mathbf{\mathcal{B}} * \vec{\boldsymbol{x}}\right\|_{2^{*}} ^{2}$ as follows: \begin{align} \left\|\mathbf{\mathcal{A}} * \vec{\boldsymbol{x}}\right\|_{2^{*}} ^{2}-\left\|\mathbf{\mathcal{B}} * \vec{\boldsymbol{x}}\right\|_{2^{*}} ^{2} \le \Delta \le \frac{1}{\frac{\ell}{c}-k}\left\|\mathbf{\mathcal{A}}-\mathbf{\mathcal{A}}_{k}\right\|_{F}^{2},\label{combine2} \end{align} where $ \Delta = \sum_{j=1}^{n_1}\max\limits_{i} \delta_{j}^{(i)}$ is the sum of the maximum information losses in the truncation steps, and $c=\frac{\rho\sum_{j=1}^{n_1}\max\limits_{i} \delta_{j}^{(i)} }{\sum_{j=1}^{n_1}\sum_{i=1}^{\rho} \delta_{j}^{(i)}}$. The derivation of (\ref{combine2}) depends on the relationship between the Frobenius norms in the original and Fourier domains, that is, $\left\|\mathbf{\mathcal{C}}_{j}\right\|_{F}^{2}=\frac{1}{\rho}\left\|\boldsymbol{\bar{C}}_{j}\right\|_{F}^{2}$. In contrast to the third-order case, the number of blocks of $\boldsymbol{\bar{C}}_{j}$ becomes $\rho$, which directly changes the constant $c$. Finally, combining (\ref{combine1}) with (\ref{combine2}) yields the desired bound. \end{proof} \begin{proof}[Proof of Theorem \ref{tpe-p-order}] The proof follows ideas similar to those in the proof of Theorem \ref{main}. By the Pythagorean theorem, we get $\left\|\mathcal{A}-\mathcal{A} * \mathcal{V}_{k} * \mathcal{V}_{k}^{T}\right\|_{F}^{2}=\|\mathcal{A}\|_{F}^{2}-\left\|\mathcal{A} * \mathcal{V}_{k}\right\|_{F}^{2}$. Since $\vec{\boldsymbol{v}}_{i} \in \mathbb{R}^{n_{2} \times 1 \times n_{3} \times \cdots \times n_{p}}$ is the $i$-th lateral slice of $\mathcal{V}_{k}$, we rewrite $\left\|\mathcal{A} * \mathcal{V}_{k}\right\|_{F}^{2}$ as $\sum_{i=1}^{k}\left\|\mathcal{A} * \vec{\boldsymbol{v}}_{\boldsymbol{i}}\right\|_{2^{*}}^{2} .$ Then, as in the third-order case, it is easy to get $ \left\|\mathcal{A} * \vec{\boldsymbol{v}}_{i}\right\|_{2^{*}}^{2} \geq\left\|\mathcal{B} * \vec{\boldsymbol{v}}_{i}\right\|_{2^{*}}^{2} $ and $\sum_{i=1}^{k}\left\|\mathcal{B} * \vec{\boldsymbol{v}}_{i}\right\|_{2^{*}}^{2} \geq \sum_{i=1}^{k}\left\|\mathcal{B} * \vec{\boldsymbol{y}}_{i}\right\|_{2^{*}}^{2} $. Thus, $$ \left\|\mathcal{A}-\mathcal{A} * \mathcal{V}_{k} * \mathcal{V}_{k}^{T}\right\|_{F}^{2} \leq\|\mathcal{A}\|_{F}^{2}-\sum_{i=1}^{k}\left\|\mathcal{B} * \vec{\boldsymbol{y}}_{i}\right\|_{2^{*}}^{2}.
$$ Finally, combining with (\ref{combine2}), we obtain $$ \begin{aligned} \left\|\mathcal{A}-\mathcal{A} * \mathcal{V}_{k} * \mathcal{V}_{k}^{T}\right\|_{F}^{2} & \leq\|\mathcal{A}\|_{F}^{2}-\sum_{i=1}^{k}\left(\left\|\mathcal{A} * \vec{\boldsymbol{y}}_{i}\right\|_{2^{*}}^{2}-\Delta\right) \\ &=\|\mathcal{A}\|_{F}^{2}-\left\|\mathcal{A}_{k}\right\|_{F}^{2}+k \Delta \\ & \leq \frac{\ell}{\ell-c k}\left\|\mathcal{A}-\mathcal{A}_{k}\right\|_{F}^{2}, \end{aligned} $$ where $c=\frac{\rho\sum_{j=1}^{n_1}\max\limits_{i} \delta_{j}^{(i)} }{\sum_{j=1}^{n_1}\sum_{i=1}^{\rho} \delta_{j}^{(i)}}$. This finishes the proof. \end{proof} \subsection{Error bounds of the compared algorithm MtFD} \begin{proof}[Proof of Theorem \ref{thm-norm2-matrization}] For the $n_1\rho \times n_2\rho$ block matrix $\boldsymbol{\tilde{A}}$, its explicit form could be very complicated for the higher-order case. For simplicity, we take the third-order case as an example to describe the proof process; the technique presented here also applies to the order-$p$ $(p>3)$ case. According to the proof framework of Theorem \ref{main1}, it is easy to see that $$ \left\|\mathbf{\mathcal{A}}^{T}*\mathbf{\mathcal{A}}-\mathbf{\mathcal{B}}^{T}*\mathbf{\mathcal{B}}\right\| =\left\|\mathbf{\mathcal{A}} * \vec{\boldsymbol{x}}\right\|_{2^{*}} ^{2}-\left\|\mathbf{\mathcal{B}} * \vec{\boldsymbol{x}}\right\|_{2^{*}} ^{2}, $$ thus, our core task is to bound $\left\|\mathbf{\mathcal{A}} * \vec{\boldsymbol{x}}\right\|_{2^{*}} ^{2}-\left\|\mathbf{\mathcal{B}} * \vec{\boldsymbol{x}}\right\|_{2^{*}}^{2}$. For $\vec{\boldsymbol{x}} \in \mathbb{R}^{n_{2} \times 1 \times n_{3}}$, let $\boldsymbol{x}\in \mathbb{R}^{n_{2}n_{3} \times 1 }$ denote the vectorized column vector of $\vec{\boldsymbol{x}} $ with unit norm. By the definition of t-product (Def. 1), the following equivalent relation is established: $$ \left\|\mathbf{\mathcal{A}} * \vec{\boldsymbol{x}}\right\|_{2^{*}} ^{2}-\left\|\mathbf{\mathcal{B}} * \vec{\boldsymbol{x}}\right\|_{2^{*}} ^{2} =\left\|\mathtt{bcirc}(\mathbf{\mathcal{A}})\boldsymbol{x}\right\|^{2}-\left\|\mathtt{bcirc}(\mathbf{\mathcal{B}})\boldsymbol{x}\right\|^{2}. $$ Divide $\boldsymbol{x}$ into $n_{3}$ parts, where each part $\boldsymbol{x^{i}}\in \mathbb{R}^{n_{2} \times 1 }$. The 2-norm of a vector is defined as the square root of the inner product of the vector with itself. Therefore, after simultaneously rearranging the blocks in the block circulant matrix and the positions of the corresponding parts $\boldsymbol{x^{i}}$, the vector 2-norm remains unchanged.
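As a concrete illustration of this rearrangement, take $n_{3}=2$, so that $\mathtt{bcirc}(\mathbf{\mathcal{A}})$ has the two block rows $[\boldsymbol{A}^{(1)} \ \boldsymbol{A}^{(2)}]$ and $[\boldsymbol{A}^{(2)} \ \boldsymbol{A}^{(1)}]$; then $$ \left\|\mathtt{bcirc}(\mathbf{\mathcal{A}})\boldsymbol{x}\right\|^{2} =\left\|\boldsymbol{A}^{(1)}\boldsymbol{x^{1}}+\boldsymbol{A}^{(2)}\boldsymbol{x^{2}}\right\|^{2} +\left\|\boldsymbol{A}^{(2)}\boldsymbol{x^{1}}+\boldsymbol{A}^{(1)}\boldsymbol{x^{2}}\right\|^{2} =\left\|\boldsymbol{A}_{(1)}\begin{pmatrix}\boldsymbol{x^{1}}\\ \boldsymbol{x^{2}}\end{pmatrix}\right\|^{2} +\left\|\boldsymbol{A}_{(1)}\begin{pmatrix}\boldsymbol{x^{2}}\\ \boldsymbol{x^{1}}\end{pmatrix}\right\|^{2}, $$ where $\boldsymbol{A}_{(1)}=[\boldsymbol{A}^{(1)} \ \boldsymbol{A}^{(2)}]$ and each cyclic shift of $\boldsymbol{x}$ is again a unit vector.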
More generally, \begin{align} &\left\|\mathtt{bcirc}(\mathbf{\mathcal{A}})\boldsymbol{x}\right\|^{2} \notag\\ =&\left\|\left[\begin{array}{cccc} \boldsymbol{A}^{(1)} & \boldsymbol{A}^{\left(n_{3}\right)} & \cdots & \boldsymbol{A}^{(2)} \\ \boldsymbol{A}^{(2)} & \boldsymbol{A}^{(1)} & \cdots & \boldsymbol{A}^{(3)} \\ \vdots & \vdots & \ddots & \vdots \\ \boldsymbol{A}^{\left(n_{3}\right)} & \boldsymbol{A}^{\left(n_{3}-1\right)} & \cdots & \boldsymbol{A}^{(1)} \end{array}\right] \left(\begin{array}{c} \boldsymbol{x^{1}} \\ \boldsymbol{x^{2}} \\ \vdots \\ \boldsymbol{x^{n_{3}}} \end{array}\right)\right\|^{2} \notag \\ =&\left\|\left[\begin{array}{cccc} \boldsymbol{A}^{(1)} & \boldsymbol{A}^{\left(n_{3}\right)} & \cdots & \boldsymbol{A}^{(2)} \end{array}\right] \left(\begin{array}{c} \boldsymbol{x^{1}} \\ \boldsymbol{x^{2}} \\ \vdots \\ \boldsymbol{x^{n_{3}}} \end{array}\right)\right\|^{2} + \cdots \notag \\ =&\left\|\left[\begin{array}{cccc} \boldsymbol{A}^{(1)} & \boldsymbol{A}^{\left(2\right)} & \cdots & \boldsymbol{A}^{(n_{3})} \end{array}\right] \left(\begin{array}{c} \boldsymbol{x^{1}} \\ \boldsymbol{x^{n_{3}}} \\ \vdots \\ \boldsymbol{x^{2}} \end{array}\right)\right\|^{2} + \cdots\notag \\ =&\sum_{i=1}^{n_{3}}\left\|\boldsymbol{A}_{(1)}\boldsymbol{x}_{i}\right\|^{2}, \notag \end{align} where each $\boldsymbol{x}_{i}\in \mathbb{R}^{n_{2}n_{3} \times 1 }$ is a cyclic rearrangement of $\boldsymbol{x}$ and hence a unit vector. Therefore, \begin{align} \left\|\mathbf{\mathcal{A}}^{T}*\mathbf{\mathcal{A}}-\mathbf{\mathcal{B}}^{T}*\mathbf{\mathcal{B}}\right\| =\sum_{i=1}^{n_{3}}\left(\left\|\boldsymbol{A}_{(1)}\boldsymbol{x}_{i}\right\|^{2}-\left\|\boldsymbol{B}_{(1)}\boldsymbol{x}_{i}\right\|^{2}\right). \label{tensor-matrix} \end{align} Then, it follows from the properties of FD proved in Theorem 1.1 of \cite{ghashami2016frequent} that \begin{align} &\sum_{i=1}^{n_{3}}\left(\left\|\boldsymbol{A}_{(1)}\boldsymbol{x}_{i}\right\|^{2}-\left\|\boldsymbol{B}_{(1)}\boldsymbol{x}_{i}\right\|^{2}\right) \notag\\ \le&\frac{n_{3}}{\ell-k}\left\|\boldsymbol{A}_{(1)}-\boldsymbol{A}_{(1)_{k}}\right\|_{F}^{2} \notag \\ \le& \frac{n_{3}}{\ell-k}\left\|\boldsymbol{A}_{(1)}\right\|_{F}^{2} \notag\\ =&\frac{n_{3}}{\ell-k}\left\|\mathbf{\mathcal{A}}\right\|_{F}^{2},\notag \end{align} which together with equation (\ref{tensor-matrix}) yields the desired result. \end{proof} \begin{proof}[Proof of Theorem \ref{rem: rel}] We again present the proof for third-order tensors; the general order-$p$ case follows analogously. The only difference is that the spectral norm of the block circulant operation $\mathtt{bcirc}(\mathbf{\mathcal{A}})$, which acts as the bridge between the tensor spectral norm and the $\ell_{2^{*}}$ norm of a tensor column, is replaced with the spectral norm of the general $\boldsymbol{\tilde{A}}$. Note that for any $\vec{\boldsymbol{x}} \in \mathbb{R}^{n_{2} \times 1 \times n_{3}}$ whose vectorized column vector $\boldsymbol{x}\in \mathbb{R}^{n_{2}n_{3} \times 1 }$ is a unit vector, we have \begin{align} &\left|\left\|\mathbf{\mathcal{A}} * \vec{\boldsymbol{x}}\right\|_{2^{*}} ^{2}-\left\|\mathbf{\mathcal{B}} * \vec{\boldsymbol{x}}\right\|_{2^{*}} ^{2}\right| \notag\\ =&\left|\left\|\mathtt{bcirc}(\mathbf{\mathcal{A}})\boldsymbol{x}\right\|^{2}-\left\|\mathtt{bcirc}(\mathbf{\mathcal{B}})\boldsymbol{x}\right\|^{2}\right| \notag\\ \le&\left\| \mathtt{bcirc}(\mathbf{\mathcal{A}})^{T}\mathtt{bcirc}(\mathbf{\mathcal{A}})-\mathtt{bcirc}(\mathbf{\mathcal{B}})^{T}\mathtt{bcirc}(\mathbf{\mathcal{B}}) \right\|_{2}.
\notag \end{align} Then, according to the definition of tensor spectral norm (Def. \ref{def_tsn}), we have \begin{align} &\left\| \mathtt{bcirc}(\mathbf{\mathcal{A}})^{T}\mathtt{bcirc}(\mathbf{\mathcal{A}})-\mathtt{bcirc}(\mathbf{\mathcal{B}})^{T}\mathtt{bcirc}(\mathbf{\mathcal{B}}) \right\|_{2} \notag \\ =&\left\| \boldsymbol{\bar{A}}^{T}\boldsymbol{\bar{A}}- \boldsymbol{\bar{B}}^{T}\boldsymbol{\bar{B}}\right\|_{2} \notag \\ =&\left\|\mathbf{\mathcal{A}}^{T}*\mathbf{\mathcal{A}}-\mathbf{\mathcal{B}}^{T}*\mathbf{\mathcal{B}}\right\| . \notag \end{align} That is to say, \begin{align} \left|\left\|\mathbf{\mathcal{A}} * \vec{\boldsymbol{x}}\right\|_{2^{*}} ^{2}-\left\|\mathbf{\mathcal{B}} * \vec{\boldsymbol{x}}\right\|_{2^{*}} ^{2}\right| \le \left\|\mathbf{\mathcal{A}}^{T}*\mathbf{\mathcal{A}}-\mathbf{\mathcal{B}}^{T}*\mathbf{\mathcal{B}}\right\|. \label{spectralnormand2*} \end{align} As analyzed in the proof of Theorem \ref{main}, $\left\|\mathbf{\mathcal{A}}-\mathbf{\mathcal{A}} * \mathbf{\mathcal{V}}_{k} * \mathbf{\mathcal{V}}_{k}^{T}\right\|_{F}^{2}$ is equal to $\left\|\mathbf{\mathcal{A}}\right\|_{F}^{2}-\sum_{i=1}^{k}\left\|\mathbf{\mathcal{A}} * \vec{\boldsymbol{v}}_{i}\right\|_{2^{*}} ^{2}$. Therefore, we obtain \begin{align} &\left\|\mathbf{\mathcal{A}}-\mathbf{\mathcal{A}} * \mathbf{\mathcal{V}}_{k} * \mathbf{\mathcal{V}}_{k}^{T}\right\|_{F}^{2} \notag \\ =&\left\|\mathbf{\mathcal{A}}\right\|_{F}^{2}-\sum_{i=1}^{k}\left\|\mathbf{\mathcal{A}} * \vec{\boldsymbol{v}}_{i}\right\|_{2^{*}} ^{2} \notag \\ \le &\left\|\mathbf{\mathcal{A}}\right\|_{F}^{2}-\sum_{i=1}^{k}\left\|\mathbf{\mathcal{B}} * \vec{\boldsymbol{v}}_{i}\right\|_{2^{*}} ^{2} +k\left\|\mathbf{\mathcal{A}}^{T}*\mathbf{\mathcal{A}}-\mathbf{\mathcal{B}}^{T}*\mathbf{\mathcal{B}}\right\| \text{(By Eq.(\ref{spectralnormand2*}))} \notag \\ \le&\left\|\mathbf{\mathcal{A}}\right\|_{F}^{2}-\sum_{i=1}^{k}\left\|\mathbf{\mathcal{B}} * \vec{\boldsymbol{y}}_{i}\right\|_{2^{*}} ^{2} +k\left\|\mathbf{\mathcal{A}}^{T}*\mathbf{\mathcal{A}}-\mathbf{\mathcal{B}}^{T}*\mathbf{\mathcal{B}}\right\| \notag \\ \le&\left\|\mathbf{\mathcal{A}}\right\|_{F}^{2}-\sum_{i=1}^{k}\left\|\mathbf{\mathcal{A}} * \vec{\boldsymbol{y}}_{i}\right\|_{2^{*}} ^{2} +2k\left\|\mathbf{\mathcal{A}}^{T}*\mathbf{\mathcal{A}}-\mathbf{\mathcal{B}}^{T}*\mathbf{\mathcal{B}}\right\| \text{(By Eq.(\ref{spectralnormand2*}))}\notag\\ =&\left\|\mathbf{\mathcal{A}}-\mathbf{\mathcal{A}}_{k}\right\|_{F}^{2}+2k\left\|\mathbf{\mathcal{A}}^{T}*\mathbf{\mathcal{A}}-\mathbf{\mathcal{B}}^{T}*\mathbf{\mathcal{B}}\right\|. \notag \end{align} This completes the proof. \end{proof} \section{Conclusion} In this paper, we propose a simple and effective sketching algorithm for obtaining a low-tubal-rank tensor approximation in the streaming setting. The main idea is to extend the matrix FD algorithm to the higher-order tensor case using the t-SVD framework. The theoretical analysis shows that our new algorithm provides a near-optimal low-tubal-rank tensor approximation in terms of both covariance and projection errors. Extensive experiments on both synthetic and real data also verify the efficiency and effectiveness of the proposed algorithm. In the future, we plan to incorporate this new algorithm into some popular tensor recovery models, namely tensor completion and tensor robust PCA, in the streaming setting.
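As a closing illustration of the main loop analyzed above, the following minimal NumPy sketch implements the t-FD idea for third-order tensors: one horizontal slice is appended at a time, and the matrix-FD shrinkage is applied independently to each frontal slice in the Fourier domain. This is an illustrative reconstruction from the proofs in this paper (assuming $n_{2}\ge \ell+1$), not a verbatim transcription of Algorithm \ref{tensor-FD}, whose buffering and implementation details may differ.
\begin{verbatim}
import numpy as np

def t_fd(A, ell):
    # Illustrative t-FD sketch for a third-order tensor A of shape
    # (n1, n2, n3), assuming n2 >= ell + 1. Returns B of shape
    # (ell, n2, n3).
    n1, n2, n3 = A.shape
    B = np.zeros((ell, n2, n3))
    for j in range(n1):
        # Append the j-th horizontal slice of A to the current sketch.
        C = np.concatenate([B, A[j:j + 1]], axis=0)
        C_hat = np.fft.fft(C, axis=2)          # to the Fourier domain
        B_hat = np.zeros((ell, n2, n3), dtype=complex)
        for i in range(n3 // 2 + 1):
            # Matrix-FD shrinkage on the i-th frontal slice; delta
            # plays the role of delta_j^{(i)} in Property 3.
            _, s, Vh = np.linalg.svd(C_hat[:, :, i],
                                     full_matrices=False)
            delta = s[-1] ** 2
            s_new = np.sqrt(np.maximum(s[:ell] ** 2 - delta, 0.0))
            B_hat[:, :, i] = s_new[:, None] * Vh[:ell, :]
            if 0 < i < (n3 + 1) // 2:
                # Mirror the conjugate slice so that the inverse FFT
                # below is real (up to numerical noise).
                B_hat[:, :, n3 - i] = np.conj(B_hat[:, :, i])
        B = np.real(np.fft.ifft(B_hat, axis=2))
    return B

A = np.random.default_rng(0).normal(size=(200, 50, 4))
B = t_fd(A, ell=20)   # sketch whose covariance approximates that of A
\end{verbatim}
Note that only the first $\lfloor n_{3}/2\rfloor+1$ Fourier slices are factorized, with the remaining slices filled in by conjugate symmetry; this mirrors the usual implementation trick for real tensors under the t-product.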
\section*{Acknowledgment} This work was supported in part by the National Key Research and Development Program of China under Grant 2018YFB1402600, in part by the National Natural Science Foundation of China under Grant 11971374 and Grant 11501440. \bibliographystyle{IEEEtran}
\section{Introduction} Ranking systems are ubiquitous across both online marketplaces (e-commerce, gig-economy, multimedia) and other socio-technical systems (admissions or labor platforms), playing a role in which products are bought, who is hired, and what media is consumed. In many of these systems, ranking algorithms form a core aspect of how a large search space is made manageable for \textit{consumers} (employers, buyers, admissions officers, etc). In turn, these algorithms are consequential to the \textit{providers} (sellers, workers, job seekers, content creators, media houses, etc.) who are being ranked. Much of the initial work on such ranking, recommendation, or retrieval systems (RS\footnote{While we often use ``RS'' or ranking systems as shorthand, in this work we often mean ranking, recommendation, retrieval, and constrained allocation algorithmic systems more broadly -- systems that select (and potentially order) a subset of providers from a larger available set.}) focused on learning to maximize \textit{relevance}---often measured through proxies like clickthrough rate---showing the most relevant items to the consumer, based solely on the consumer's objective \cite{liu2011learning,adomavicius2005toward}. However, like all machine learning techniques, such systems have been found to `unfairly' favor or discriminate against certain individuals or groups of individuals in various scenarios \cite{ekstrand2018all,BaezaYates2018,chen2020bias}. Thus, as part of the burgeoning algorithmic fairness literature \cite{mehrabi2019survey,Mitchell2021}, there have recently been many works on fairness in ranking, recommendation, and constrained allocation more broadly \cite{burke2017multisided,zehlike2017fa, zehlike2022fair, geyik2019fairness, celis2018ranking, asudeh2019designing,singh2018fairness, biega2018equity, surer2018multistakeholder,guo2021stereotyping,cai2020fair}. For example, suppose that the platform is deciding how to rank 10 items on a product search result page, and each item has demographic characteristics (such as those of the seller). Then---in addition to considering each item's relevance---how should the platform rank the items, in a manner that is ``fair'' to the providers, either on an individual or group level? This question is often considered on an abstract level, independent of the specific ranking context; moreover, the literature primarily focuses on fairness of one instance of the ranking \cite{zehlike2017fa, zehlike2020reducing, zehlike2022fair, singh2018fairness}, or multiple independent instances of rankings with an additive objective across instances \cite{biega2018equity, suhr2019two}. The goals of this paper are to synthesize the current state of the fair ranking and recommendation field, and to lay the agenda for future work. In line with recent papers \cite{Jannach2020,Selbst2018} on both broader fairness and recommendations systems, our view is that the fair ranking literature risks being ineffective for problems faced in real-world ranking and recommendation settings if it focuses too narrowly on abstract, static ranking settings. To combat this trend, we identify several pitfalls that have been overlooked in the literature and should be considered in context-specific ways, toward a broader, long-term view of the fairness implications of a particular ranking system.
Like much of the algorithmic fairness literature, fair ranking mechanisms typically are designed by abstracting away contextual specifics, under a ``reducibility'' assumption: that many fair ranking problems of interest can be reduced to a standard problem of ranking a set of items or individuals subject to a chosen notion of fairness, or optimizing a suitable fairness measure (or to multiple instances of such rankings over time with simple additive extensions). However, as \citet{Selbst2018} elucidate, the abstractions necessary for such a reduction often ``render technical interventions ineffective, inaccurate, and sometimes dangerously misguided.'' \begin{figure}[t!] \center{ \includegraphics[width=1\textwidth]{Arxiv_block_diagram.pdf}} \caption{This figure gives a high-level overview of the paper and summarizes our position on the field of fairness in retrieval systems: current fair RS mechanisms often fail to recognize several real-world nuances like delayed impacts, uncertainties in outcomes, and ecosystem behaviour (discussed in \Cref{sec:pitfalls}); thus we must design fairness interventions with an impact-oriented approach and a holistic, long-term view of RS in mind. In \Cref{sec:long_term_fairness}, we discuss how algorithmic impact assessment can be helpful in this regard. More specifically, in \Cref{subsec:simulations} we overview various applied modeling techniques and simulation frameworks which in tandem can be used for impact-oriented studies of fairness in RS. Following this, in \Cref{subsec:data_bottlenecks,subsec:legal_bottlenecks} we briefly discuss various data bottlenecks and legal hurdles which might challenge the efforts towards a holistic view of RS fairness.} \label{fig:block_diagram} \end{figure} \textbf{Overview and Contributions}. In this work, we outline the ways in which such a reduction abstracts away important aspects of the fair ranking context: the gap between position-based metrics and true provider utility, spillovers from one ranking to another across time and products, strategic incentives induced by the system, and the (differential) consequences of ranking noise. Studying fair ranking questions in such a reduced format and ignoring these issues might work in the ideal environment chosen during the problem reduction, but is likely insufficient to bring fairness to a real-world ranking system. For example, a ranking algorithm that does not consider how relevance or consumer discrimination affects outcomes, or how early popularity leads to compounding rewards on many platforms, is unlikely to achieve its fairness desiderata; furthermore, ignoring strategic manipulation (such as Sybil attacks where a provider creates multiple copies of their profile or items) may lead to fairness mechanisms amplifying rather than mitigating inequities on the platform. We believe that these aspects must be tackled by the fair ranking literature, in order for this literature to positively affect practice. We then overview methodological paths forward to incorporate these aspects into fair ranking research, as part of a broader long-term framework of algorithmic impact assessments---simulations, applied modeling, and data-driven approaches---along with their challenges. Finally, we conclude with a discussion on the broader regulatory, legal, and external audit landscape, necessary to translate the fair ranking literature into systems in practice. \Cref{fig:block_diagram} summarizes our paper at a high level.
\textbf{Outline.} \Cref{sec:ranking_n_fairness} contains an overview of the fair RS literature. \Cref{sec:pitfalls} presents the aspects of ranking systems that we believe should be most covered by future fair RS work. \Cref{sec:long_term_fairness} contains the discussion of the paths forward within the broader data and regulatory landscape. \section{Overview of Fair Ranking Literature}\label{sec:ranking_n_fairness} Designing effective ranking, recommendation, or retrieval systems (RSs) requires tackling many of the same challenges as building general machine learning algorithms---with additional challenges stemming from the characteristic that such systems make \textit{comparative} judgments across items; a high position in the ranking is a constrained resource. RSs often employ machine learned models to estimate the {\it relevance} (or {\it probability of relevance}) of the items to any search or recommendation query \cite{liu2011learning,adomavicius2005toward}. Historically, while user utility is the broader objective \cite{pu2011user}, the most popular guiding principle is the {\it Probability Ranking Principle} \cite{robertson1977probability}: items are ranked in descending order of their probability to be relevant to the user, often estimated through click-through rates. For a broad range of user utility metrics---such as mean average precision \cite{voorhees2000variations}, mean reciprocal rank \cite{voorhees1999trec}, and cumulative gain based metrics \cite{jarvelin2002cumulated,jarvelin2017ir}---this principle in turn maximizes the expected utility of users \cite{jarvelin2017ir}. However, not only are more (estimated to be) relevant items typically ranked higher, but also users tend to click more on higher-positioned items, even conditioned on relevance. Such a {\it position bias} \cite{craswell2008experimental} means that expected attention (\textit{exposure}) from users decreases significantly while moving from the top rank to the bottom one; for example, users may evaluate items sequentially from the top rank until they find a satisfactory one. It is thus important for providers to be ranked highly; a small difference in relevance estimation could result in a large difference in expected user attention (for example, see Appendix \Cref{tab:position_bias}). Depending on the ranking context, e.g., ranking products vs. ranking job candidates, high ranking positions directly translate to rewards, or at least increase their likelihood. (However, as we explain in the next section, the gap between exposure and true provider utility is an important one to understand.) \textbf{Fairness in Rankings.} Due to the importance of rankings for providers,\footnote{Note that despite the recent explorations into multi-sided fairness in online platforms \cite{burke2017multisided,patro2020fairrec,suhr2019two}, we restrict our discussion to provider fairness, which has been studied quite extensively.} and as part of the increased focus on machine learning injustices, there has been much recent interest in fairness and equity for providers rather than just ranking utility for consumers. There are numerous definitions, criteria and evaluation metrics to estimate a system's ability to be \textit{fair} \cite{corbett2018measure,mehrabi2019survey,Mitchell2021,ekstrand2019fairness,distributive,castillo2019fairness,yao2017beyond}.
Given heterogeneous settings, the complex environment in which retrieval systems are developed, and the multitude of stakeholders involved that may have differing moral goals \cite{finocchiaro2021bridging} and worldviews \cite{Friedler2021}, there is obviously no universal fairness definition; at a high level, however, many definitions can be classified into whether the objective is to treat similar individuals similarly (\textit{individual fairness}) \cite{dwork2012fairness}, or if different groups of individuals, defined by certain characteristics such as demographics, should be treated in a similar manner (\textit{group fairness}) \cite{speicher2018unified}. In the following, we overview the concepts and works most relevant for our critiques and the agenda that we advocate. Fairness notions from the domain of classification can---to a certain extent---be adapted to ranking settings. They typically only require additional consideration of the comparative nature of rankings and of how utility is modeled \cite{castillo2019fairness}. Compared to relevance-only ranking, adding fairness considerations often leads to a multi-objective (or constrained) optimization problem, where the usual utility (or relevance) objective comes along with a fairness constraint or objective focused on the providers \cite{Ribeiro2013,xiao2017fairness}. One branch of the literature \cite{zehlike2017fa, zehlike2022fair, geyik2019fairness, celis2018ranking, asudeh2019designing} reasons about probability-based fairness in the top-$k$ ranking positions, which puts the focus onto group fairness. These works commonly provide a minimum (and in some cases also maximum) number or proportion of items/individuals from protected groups, that are to be distributed evenly across the ranking. The methods do not usually allow later compensation if the fairness constraints are not met at any of the top-$k$ positions (e.g., by putting more protected than non-protected items at lower positions). Another set of works \cite{singh2018fairness, biega2018equity, surer2018multistakeholder,diaz2020evaluating, zehlike2020reducing} assign values (often referred to as {\it attention} or {\it exposure} scores) to each ranking position based on the expected user attention or click probability. These works argue that the total exposure is a limited resource on any platform (due to position bias), and advocate for fair distribution of exposure to ensure fairness for the providers. In contrast to the former line of work, using exposure as a metric to quantify provider utility has brought up not only group fairness notions~\cite{singh2018fairness,morik2020controlling}, but also definitions to enhance individual fairness~\cite{singh2018fairness,biega2018equity,bower2021individually}. Further, in contrast to probability-based methods, these methods balance the \emph{total} exposure across individuals or groups, and thus they do allow compensation in lower positions. Generally, the problem definitions in these works center around a single instance of ranking, i.e., at a particular point in time we are given a set of items or individuals, their sensitive or protected attribute(s) (e.g., race and gender), and their relevance scores; the task is to create a ranking which follows some notion of fairness (like demographic parity or equal opportunity) for the items or individuals, while maximizing the user utility.
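As a concrete, intentionally simplified sketch of such a single-instance formulation, the following Python snippet greedily fills a top-$k$ ranking by relevance, subject to a prefix-wise minimum share of protected items; it is a stand-in for the probability-based criteria discussed above, not a reproduction of any published algorithm, and the items and proportions are hypothetical.
\begin{verbatim}
import math

def fair_topk(items, k, p):
    # Greedy sketch: every prefix of length t contains at least
    # floor(p * t) protected items; otherwise pick by relevance.
    # items: list of (relevance, is_protected); assumes len(items) >= k.
    prot = sorted([i for i in items if i[1]], reverse=True)
    nonp = sorted([i for i in items if not i[1]], reverse=True)
    out, n_prot = [], 0
    for t in range(1, k + 1):
        need_prot = n_prot < math.floor(p * t)
        if prot and (need_prot or not nonp or prot[0] > nonp[0]):
            out.append(prot.pop(0))
            n_prot += 1
        else:
            out.append(nonp.pop(0))
    return out

example = [(0.9, False), (0.8, False), (0.7, True),
           (0.6, False), (0.5, True)]
print(fair_topk(example, k=4, p=0.5))
# [(0.9, False), (0.7, True), (0.8, False), (0.5, True)]
\end{verbatim}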
Some exceptions to this single-instance focus are \citet{biega2018equity}, \citet{suhr2019two} and \citet{surer2018multistakeholder}, which propose to deterministically ensure fairness through equity in amortized exposure, i.e., exposure summed over time or over multiple instances of ranking. In the next section, we argue that both these broad approaches (probability-based and exposure-based) may be incomplete in many applications, due to their exclusive focus (either directly or indirectly) on ranking positions. \section{Pitfalls of existing fair ranking models}\label{sec:pitfalls} In this section, we enumerate several crucial aspects of ranking and recommendation systems that substantially influence their fairness properties, but are ignored when considering an abstract fair ranking setting. The left-hand side of \Cref{fig:block_diagram} summarizes this section. We begin in \Cref{subsec:beyond_exposure} by noting that exposure (or more generally, equating higher positions with higher utility) often does not translate to provider utility. \Cref{subsec:temporal_significance} discusses spillovers across rankings, either over time, across different rankings on the same user interface, or competition across platforms. \Cref{subsec:strategic_behavior} discusses strategic provider responses, and how they may counter-act (or worsen) the effects of a fair ranking mechanism. Finally, \Cref{subsec:uncertainty} illustrates how noise---either in demographic variables or in other aspects---may differentially affect providers within a fair ranking mechanism. Note that these issues are also present in other aspects of ranking, and in algorithmic fairness literature more generally; in fact, we also discuss if and how such issues have been studied in related settings. However, we believe that the intersection of fairness and ranking challenges amplifies these concerns; for example, the naturally comparative aspect of rankings worsens the effects of competitive behavior and differential uncertainties. Finally, while these pitfalls may not be the only ones, we believe they are the major reasons why proposed fair ranking frameworks may fail to deliver fair outcomes in several real-world scenarios. In the next section (\cref{sec:long_term_fairness}), we elaborate on how to tackle these challenges. \subsection{Provider Utility beyond Position-based Exposure}\label{subsec:beyond_exposure} As discussed above, the fair ranking literature often uses \textit{exposure} as a proxy for provider utility\footnote{Note that here we are referring to the utility gained by a provider as a result of being ranked; thus, provider utility is not the same as user utility.} \cite{ekstrand2019fairness, singh2018fairness, castillo2019fairness, zehlike2020reducing}. For example, well-known fair ranking mechanisms like {\it equity of attention} \cite{biega2018equity} and {\it fairness of exposure} \cite{singh2018fairness,zehlike2020reducing} emphasize fairly allocating exposure among providers. Such works often implicitly assume that exposure is measured solely through a provider's position in the ranking; i.e., each position is assigned a value, independent of context. While such ranking-position-based exposure is often a useful measure of provider utility, such a focus misses context-specific factors: higher exposure does not necessarily lead to increased user attention, and increased user attention may not directly translate into provider utility as measured through, e.g., sales or long-term satisfaction.
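To see why exposure alone can be a leaky proxy, consider the following stylized simulation (all numbers hypothetical): an amortized equity-of-exposure mechanism equalizes exposure between two equally relevant providers, yet expected clicks remain unequal when users discriminate against one provider conditional on position.
\begin{verbatim}
import numpy as np

def exposure_weight(rank):                 # 1-indexed position bias
    return 1.0 / np.log2(rank + 1)

providers = ["A", "B"]                     # equally relevant providers
exposure = {p: 0.0 for p in providers}
clicks = {p: 0.0 for p in providers}
click_bias = {"A": 1.0, "B": 0.6}          # hypothetical user-side bias

for _ in range(10000):
    # Amortized equity of exposure: rank the under-exposed provider
    # first in each round.
    order = sorted(providers, key=lambda p: exposure[p])
    for rank, p in enumerate(order, start=1):
        w = exposure_weight(rank)
        exposure[p] += w
        clicks[p] += w * click_bias[p]     # expected clicks given exposure

print(exposure)   # near-equal: the exposure target is met
print(clicks)     # B still earns ~40% fewer clicks: exposure != outcome
\end{verbatim}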
This measurement-construct gap---between exposure as a measurement and provider utility as the construct of interest---is not a challenge unique to fairness-related questions in ranking. For example, not distinguishing between varying levels of attention from users could affect the performance of algorithms designed to maximize sales, as it would affect the predictions of algorithms using exposure to calculate sales probabilities \cite{moe2004dynamic} or information diffusion on a social network \cite{bakshy2012role}. However, this gap may be especially important to consider in a research direction that often seeks algorithmic solutions to inequities stemming from multiple causes, including the actions of other platform participants; for example, much work has analyzed (statistical or taste-based) discrimination on online platforms in which, even conditional on exposure, one type of stakeholder is treated inequitably by other stakeholders (see, e.g., racial discrimination by employers \cite{edelman2017racial,monachou2019discrimination}). In such settings, fair-exposure based algorithms may not uniformly or even substantially improve outcomes (we give an example in Appendix \Cref{tab:fair_exposure_gone_wrong}); this was recently underscored by \citet{suhr2020does}, who found through a user survey that such algorithms' effectiveness substantially depends on context such as job description and candidate profiles. Another especially relevant contextual factor beyond position is \textit{time}: in fast-moving domains like media, items may only be relevant for a short period of time \cite{campos2014time,yuan2013time}. In such scenarios, the stakeholders (both users and providers) benefit most from immediate exposure. For example, recency is an important aspect of relevance in breaking news \cite{chakraborty2017optimizing}, job candidates should be shown before vacancies are filled, and restaurants get more orders if recommended during peak hours to nearby customers \cite{yuan2013time, Banerjee2020AnalyzingM}. More broadly, one should consider \textit{which providers} are being exposed to \textit{which users} and \textit{when}, as the value of a ranking position depends substantially on such match relevance and participant characteristics. Fair ranking models focusing solely on position, and thus oblivious to such context, may not have the desired downstream effects and may fail to deliver on fairness. We illustrate this consequence in an example in Appendix \Cref{tab:temporal_significance}. \subsection{Spillover effects: compounding popularity, related items, and competition}\label{subsec:temporal_significance} While the immediate effect of an item's position in the ranking (e.g., an immediate sale) may be first-order, there are often substantial \textit{spillover} effects or \textit{externalities}, which should be incorporated in fair RS models. Here, we discuss three such effects: compounding popularity or first-exposed-advantage, spillovers across products and ranking types, and competition effects. Perhaps the most important spillover is a \textit{compounding popularity} or \textit{first-exposed-advantage},\footnote{The phrase is used to indicate its similarity to the {\it first-mover-advantage} phenomenon \cite{kerin1992first}.} in which the exposure an item receives during its early stages can significantly affect its long-term popularity \cite{figueiredo2014dynamics}. For example, early feedback in terms of clicks, sales, etc.
could improve an item's estimated relevance scores, raising its future rankings; there may further be a popularity bias or herding phenomenon in which users are more likely to select an item, if they observe that others have selected it before them \cite{steck2011item,abdollahpouri2017controlling,salganik2008leading}. Similarly, as reflected in re-targeting in advertising, user preferences may change with exposure to an item. Thus, past exposure plays a huge role in determining the long-term effects of future exposure; denial of early exposure could risk the viability of small providers \cite{mladenov2020optimizing}. Though one may intuitively think that continuous re-balancing of exposure through fairness-enhancing methods may overcome (or at least reduce) this problem, this remains to be demonstrated in the real world, and early evidence suggests otherwise (see \citet{suhr2020does}). Second, ranking systems---such as product recommendations---are rarely deployed as stand-alone services. They are often accompanied by associated services such as sponsored advertisements \cite{hillard2010improving}, similar or complementary item recommendations on individual item pages on e-commerce, media-streaming platforms and other marketplaces \cite{pazzani2007content,lai2021understanding}, non-personalized trending items \cite{cremonesi2010performance,benhardus2013streaming,platt2015international}, and other quality endorsements like editor's choice \cite{holly2012play}. Due to the presence of these associated services, user attention reaching an item may spill over to other items \cite{liang2019spillover,raj2021friends}. For example, items complementary or similar to a recommended item may receive spillover exposure via `you may also be interested' or `items similar to' recommendations, thereby increasing their exposure levels and potentially leading to undesirable inequalities even under a fair RS model; we give such an example in Appendix \Cref{tab:spillover_example}. Finally, there are competition and cross-platform spillover effects \cite{krijestorac2020cross,farahat2016app}: users may reach an item, not through the recommendation engine on the platform, but, e.g., via a search engine \cite{jansen2006effectiveness}, product or price comparison sites \cite{jung2014online}, or other platforms like social media \cite{hoffman2010can,saravanakumar2012social}. In these instances, the recommendation engine at the user entry-point, e.g., the search engine's recommendation system, will have a downstream effect on the exposure of items on the end site where the items are listed. These spillover effects could be important to analyze when designing potential `entry-point' recommendation systems. Perhaps more importantly---since a platform does not have control over all the off-platform systems that may influence item exposure on its own platform---one should consider how such external sources affect both the goals and the behavior of a fair RS system. In this regard, a major question remains understudied and largely unanswered: should a fair RS consider the inequities induced via external systems and seek to counteract them through interventions, or should it ignore these effects for the sake of free-market competition?
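To make the compounding-popularity dynamic concrete, the following stylized simulation (with hypothetical parameters) ranks twenty equally relevant items by their empirical click-through estimates; the items that happen to be shown first lock in a lasting exposure advantage.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
n_items = 20
true_rel = 0.5                 # every item equally relevant (assumption)
clicks = np.zeros(n_items)
views = np.ones(n_items)       # add-one smoothing

for _ in range(50000):
    est_rel = clicks / views               # empirical click-through rate
    top = np.argsort(-est_rel)[:5]         # show the estimated-best five
    for rank, item in enumerate(top, start=1):
        if rng.random() < 1.0 / np.log2(rank + 1):   # position bias
            views[item] += 1
            if rng.random() < true_rel:
                clicks[item] += 1

# Items shown (and clicked) early lock in the top slots permanently.
print(np.sort(views)[::-1][:5])   # a few items with huge view counts
print(np.sort(views)[:5])         # the rest were essentially never shown
\end{verbatim}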
Together, these spillover effects suggest that fairness in RS (especially in recommendations) should not be modeled in isolation from associated and external services, and must take into account how the recommendations may have downstream consequences over time and space, for either the same provider or other providers. We note that these spillover effects are analogous to the \textit{Ripple Effect trap} as described by \citet{Selbst2018}, in which harmful effects often stem from the failure to understand how the introduction of new technologies could alter behaviours and values in existing social systems. \subsection{Strategic Behavior}\label{subsec:strategic_behavior} Current fair ranking mechanisms often fail to consider that the providers themselves could be strategic players who might try to \emph{actively} maximize their utilities \cite{tennenholtz2019rethinking,bahar2015economic}. Providers often have an incentive to suitably strategize their offerings, e.g., content creators on media platforms could leave their own area of expertise and try to copy other popular creators or follow the popular trends \cite{ben2018game,ben2020content}, sellers could perform data poisoning attacks (through fake reviews, views, etc.) on the RS to improve their ranking \cite{zhang2020practical}, influencers on social network sites could try to hijack popular trends \cite{goga2015doppelganger,chakraborty2019equality}. Providers can even strategically exploit the deployed fair ranking mechanisms to extract more benefits \cite{frobe2020effect,diincentives}. Not factoring in such strategic behavior could impact ranking and recommendation systems, and especially the performance of fair ranking mechanisms. In the following, we overview some examples of strategic behavior and their consequences. As in the measurement-construct gap between exposure and provider utility, strategic behavior as a reaction to ranking models is not just a question of fairness. Numerous works suggest that relevance estimation models are highly vulnerable to various types of adversarial attacks: \begin{inparaenum} \item \emph{shilling attacks}, in which a provider gets associated with a group of users who then add supportive reviews, feedbacks, clicks, etc. to manipulate rankings in favor of the provider \cite{lam2004shilling}; \item \emph{data poisoning attacks}, where a provider strategically generates malicious data and feeds it into the system through a set of manipulated interactions \cite{li2016data,zhang2020practical}; or \item \emph{doppelganger bot attacks}, where a number of fake users or bots are created and then strategically placed in a social network to hijack news feed ranking systems in favor of the malicious party \cite{goga2015doppelganger,chakraborty2019equality,molavi2013iolaus}. \end{inparaenum} However, some strategic behavior may specifically exploit characteristics of fair ranking algorithms. For example, fair ranking mechanisms may incentivize \emph{content duplication attacks} \cite{frobe2020effect}. Strategic providers can create duplicates or near-duplicates---possibly hard to automatically identify---of their existing offerings in a ranking system. Since certain fair ranking mechanisms may try to ensure benefits for all listed items, providers with more copies of the same items stand to gain more benefits \cite{frobe2020effect,diincentives}. We give such an example in Appendix \Cref{tab:duplication_attack}.
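The following back-of-the-envelope sketch (with hypothetical numbers) illustrates why duplication can pay off against a mechanism that guarantees every listed item an equal share of exposure.
\begin{verbatim}
# Stylized duplication attack on a mechanism granting each *listed item*
# an equal share of total exposure (hypothetical numbers throughout).
def provider_share(n_items, duplicates):
    # One provider replaces its single item with `duplicates` copies.
    return duplicates / (n_items - 1 + duplicates)

for d in [1, 2, 5, 10]:
    print(d, round(provider_share(10, d), 3))
# 1 0.1 | 2 0.182 | 5 0.357 | 10 0.526 -- duplication pays off
\end{verbatim}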
Other `undesirable' strategic behavior includes the purposeful provision or withholding of information, which may help some participants maximize their ranking; for example, in admissions settings, test-optional admissions policies that aim to be fair to students without test access may inadvertently be susceptible to strategic behavior by students with access but low test scores~\cite{liutestoptional21}. Strategic behavior by providers need not always be malicious; rather, it could also represent a sincere effort for improvement (e.g., effort to improve a restaurant's quality \cite{luca2016reviews}) or just a change in content offering strategy (e.g., strategic selection of topics for future content production \cite{halvorson2012content,raifer2017information}). However, such `legitimate' strategic behavior may nevertheless affect the efficacy of fair ranking mechanisms over time, as such behavior may affect the relative performance of marketplace participants. For example, \citet{vonderau2019spotify} shows that providers on various content sharing platforms may partly or completely change their content production strategy to cater to the taste of a ranking algorithm (instead of the taste of users). Studies by \citet{Chaney2018} and \citet{ben2020content} suggest that ranking mechanisms which are unaware of such behavior could cause homogenization of a platform's item-space and degrade user utility over time; such behavior could also risk the long-term viability and welfare of small-scale providers \cite{mladenov2020optimizing}. Theoretically, \citet{liu2021strategic} extend the strategic classification literature to the ranking setting, to show that such effort (and its differential cost) could have substantial equity implications for the ultimate ranking. Fair ranking mechanisms which seek to equalize exposure affect such incentives, both for desirable and undesirable strategic behavior, and it is necessary to take them into account when designing fair ranking mechanisms for real-world settings. Designing fairness mechanisms which can distinguish between such desirable and undesirable behavior may be further challenging (cf. \cite{liutestoptional21}). Finally, we note that the above discussion---that of strategic behavior of individual providers---does not consider the setting in which the platform---a seemingly neutral player and deployer of a ranking algorithm---also plays the role of a competitive provider (through a subsidiary or partner). Since such providers have access to private platform data and control over their algorithms, they may be able to deploy undetectable strategic manipulations (e.g., Amazon's private label of products on its marketplace \cite{dash2021umpire}) which the other providers cannot match, leading to an unfair playing field for the remaining providers. The design and auditing of ranking algorithms robust to such behavior is an important direction for future work. \subsection{Consequences of Uncertainty}\label{subsec:uncertainty} Fairness-aware ranking mechanisms proposed for exposure- and probability-based fairness often assume knowledge of the true relevance of providers or items, of the demographic characteristics on which to remain fair, and of the value of each position in the ranking. However, such quantities are rarely available in real-world settings.
For example, machine-learned models or other statistical techniques used to estimate relevance scores are often uncertain about the relevance of items, due to reasons such as biased or noisy feedback, the initial unavailability of data \cite{morik2020controlling,yang2021maximizing}, and platform updates in dynamic settings \cite{patro2020incremental}. While such estimation noise (or bias) is important for all algorithmic ranking or recommendation challenges, it is especially important to consider for fair ranking algorithms, as we illustrate below. Current fair ranking mechanisms assume the availability of the demographic data of individuals to be ranked. Whilst such assumptions help algorithmic developments for fair ranking, the availability of demographic data cannot be taken for granted. Demographic data such as race and gender is often hard to obtain due to reasons like legal prohibitions or privacy concerns on their collection in various domains \cite{andrus2021we,bogen2020awareness}. To overcome the data gap, platform designers often resort to data-driven inference of demographic information \cite{lahoti2020fairness}, which usually involves huge uncertainty and errors \cite{andrus2021we}; the use of such uncertain estimates of demographic data in fair ranking mechanisms can cause significant harm to vulnerable groups, and ultimately fail to ensure fairness \cite{ghosh2021fair}. Moreover, in dynamic market settings where protected groups of providers or items are often set based on popularity levels, the protected group membership changes over time, thereby adding temporal variations in demographics along with the uncertainty issues \cite{ge2021towards}. To tackle such variations, \citet{ge2021towards} propose to use constrained reinforcement learning algorithms which can dynamically adjust the recommendation policy to nevertheless maintain long-term fairness. However, incorporating such demographic uncertainty into broader fair ranking algorithms remains an open question. Another crucial part of ranking systems is the estimation of position bias \cite{agarwal2019estimating,chandar2018estimating}, which acts as a proxy measure for click-through probability and helps quantify the possible utilities of providers based on their ranks \cite{bar2009presentation}. Fairness-aware ranking mechanisms need these position bias estimates to ensure fair randomized or amortized click-through utility (exposure) for the providers. While these estimates are often assumed to be readily available in most of the recent fair ranking systems works \cite{singh2018fairness,biega2018equity,diaz2020evaluating}, they also carry substantial uncertainty, since they heavily depend on the specifics of the user interface. Dynamic and interactive user interfaces \cite{mesbah2012crawling} used on many platforms usually go through automatic changes, which affect the attention bias (position and vertical bias) through changes in web-page layout \cite{oosterhuis2018ranking}. Furthermore, factors like the presence of attractive summaries and highlighted evidence of relevance---often generated automatically---alongside ranking results also differentially affect click-through probabilities over time and across items \cite{yue2010beyond,joachims2017accurately}. Finally, the presence of relevant images, their sizes, text fonts, and other design constraints also play a huge role \cite{liu2015influence, wang2016beyond,granka2004eye}.
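A small worked example (with hypothetical position weights) shows how such misestimation propagates into an exposure-based fairness target: a randomized two-slot policy tuned to estimated weights misses its intended exposure ratio under the true weights.
\begin{verbatim}
def top_prob(w1, w2, ratio):
    # Probability of placing provider A in the first of two slots so
    # that expected exposures satisfy E[A] / E[B] = ratio, under
    # position weights (w1, w2); solves
    # p*w1 + (1-p)*w2 = ratio * (p*w2 + (1-p)*w1).
    return (ratio * w1 - w2) / ((1 + ratio) * (w1 - w2))

est_w, true_w = (0.9, 0.2), (0.8, 0.4)   # hypothetical weights
p = top_prob(*est_w, ratio=2.0)          # policy tuned to the estimates
exp_a = p * true_w[0] + (1 - p) * true_w[1]
exp_b = p * true_w[1] + (1 - p) * true_w[0]
print(round(p, 3), round(exp_a / exp_b, 3))   # 0.762 and 1.423, not 2.0
\end{verbatim}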
Together, as also discussed in \citet{wang2018position} and \citet{sapiezynski2019quantifying}, inaccuracies in position bias estimation and their consequences remain important challenges in fair RS. Finally, we note that uncertainties, including the above, may be \textit{differential}, affecting some participants more than others, even within the same protected groups. Such differential informativeness, for example, might occur in ranking settings where the platform has more information on some participants (through longer histories, or other access differences) than others \cite{emelianov2020fair,garg2021standardized}. Such differential informativeness may cause downstream disparate impacts, such as privileging longer-serving providers over newer and smaller ones. Together, these sources and areas of uncertainty should be an important aspect of future work in fair ranking. \vspace{1em} \noindent{\textbf{Fair ranking desiderata. }} What should a comprehensive and long-term view of fairness in RS and its dynamics be composed of? First, the provider utility measure should look beyond mere exposure, and account for user beliefs, perceptions, preferences and effects over time (as discussed in \Cref{subsec:beyond_exposure}). Second, fair RS works should consider not just immediate impacts but also their spillovers, whether over time for the same item or spillover effects on other items (as discussed in \Cref{subsec:temporal_significance}). Third, strategic behavior and system incentives should also be modeled to anticipate manipulation concerns and their adverse effects (as discussed in \Cref{subsec:strategic_behavior}). Finally, fair RS mechanisms should incorporate the (potentially differential) effects of estimation noise (as discussed in \Cref{subsec:uncertainty}). Putting things together, this section illustrated various challenges and downstream effects of developing and deploying algorithms from the fair RS literature. As we discuss in the next section, overcoming these challenges requires both longer-term thinking---beyond the immediate effect of a ranking position---and moving beyond studying general RS settings to modeling and analyzing specific settings and their context-specific dynamics. \subsection{\bf Data Bottlenecks}\label{subsec:data_bottlenecks} A major challenge faced by researchers outside industry working on long-term comprehensive evaluations of fair RS is the unavailability of suitable data. The traditional RS datasets \cite{harper2015movielens,mcfee2012million,bennett2007netflix,TREC_data} often used in the literature were collected at a time when goals like accuracy or click-through rates dominated, and so they may not be a good fit for today's impact-oriented research \cite{Jannach2020}. For example, a set of user-item ratings data such as the canonical MovieLens dataset \cite{harper2015movielens} may not capture how a user may value the item differently at different points in time, how a user's preferences evolve over time, or the user's or item's associated demographics. Similarly, such data gives little insight into fake reviews or ratings \cite{luca2016fake,he2021market,li2016data,zhang2020practical}, or other strategic manipulations as discussed above.
More broadly, such datasets do not include vital information such as interface design changes that may have a behavioural impact on user choice (as discussed in \cref{subsec:uncertainty}), associated services like complementary recommender systems or embedded advertisement blocks (as elaborated in \cref{subsec:beyond_exposure}) that work alongside the one being audited, and the type and time of provider interactions and changes in their behaviour. Such missing components of standard ranking and recommendation system datasets are a major bottleneck to studying the questions from \Cref{sec:pitfalls}. On the other hand, the flourishing of the algorithmic fairness literature has contributed to the spread of several experimental datasets covering a wide range of scenarios such as school admission, credit score, house listings, news articles, and much more (see \cite{Zehlike2021survey, mehrabi2019survey} for a list of datasets used in fair ranking and ML research). Datasets such as \textit{COMPAS} or the \textit{German Credit} datasets, originally classification tasks, have been adapted to ranking settings. A major issue related to the use of these datasets in fair ranking research is that they are often far from the contexts in which fair ranking algorithms would be used. While potentially useful in advancing the conceptual state of the art in algorithmic fairness research, reliance on such datasets may raise significant concerns about the ecological validity of such research. Therefore, a more detailed analysis of the use and characteristics of such datasets is much-needed future work, similar to what has been done in the context of Computer Vision research \cite{Miceli2021, Koch2021, Scheuerman2021}. Here, we detail the characteristics that an RS dataset would need to be suitable for impact-oriented fairness analysis, in addition to the traditional indicators of user preference or experience (precision or click-through rates). One recurring theme is that ranking and recommendation systems operate within a broader socio-technical environment (that they themselves shape), and existing datasets do not allow researchers to understand this broader environment and the underlying dynamics.\footnote{We note that while \textit{more} data is not always better (e.g., see the case of NLP models discussed by \citet{Bender2021}), we believe that a certain level of {\it completeness and richness of data} is required to perform more comprehensive and long-term impact analysis.} \begin{enumerate}[(1)] \item Most easily, it would be useful to complement existing datasets with past data on the same platform, such as user-provider interactions and their behaviour; on RS's associated services and related rankings; on other contextual details such as user interface, page layout and design; and on past results from rankings, such as whether the user selected a custom sorting criterion like date or price instead of the platform's default ranking, whether the user was redirected to a product from an external or affiliate link, and whether the user's behaviour follows the platform's guidelines. Such complementary data would allow researchers to understand how the broader environment affects and is affected by a fair ranking algorithm. \item More broadly, a move from static datasets to temporal datasets -- with timestamps on ratings and displayed recommendations/ratings -- would allow finding temporal variations in RS and its stakeholders.
It would further allow studying fairness beyond demographic characteristics, such as that related to new providers. For example, as discussed in \Cref{subsec:temporal_significance}, higher ranked results can often lead to increased user attention and conversion rates \cite{craswell2008experimental}, i.e., results initially ranked higher could then have a greater chance of being ranked highly in subsequent rankings. Since such biased feedback could easily creep into temporal datasets, one must factor this into their RS impact analysis (see, e.g., the unbiased learning method of \citet{joachims2017unbiased} in the presence of biased feedback). Studying such dynamics and their fairness implications in the real world requires observing such interactions. \item Finally, as discussed in \Cref{subsec:uncertainty}, a key aspect of fairness in rankings is uncertainty, especially differential uncertainty. While some datasets may allow researchers to infer certain components of recommendation system uncertainty (such as by numbers of ratings for a provider), other uncertainties are hidden. External to such companies, it is unclear how best to reflect the correctness of provided user attributes (such as race and gender, so as to avoid uncertainties in a platform's compliance with fairness), the genuineness of ratings and reviews when feedback is given (so as to account for manipulations in fair RS analysis \cite{trustpilotrankeligibility,youtuberankeligibility}), and other model uncertainties. While it may be difficult for companies to quantify their uncertainties when releasing datasets, one beneficial step would be to release more information on the origin of the data, i.e., dataset datasheets as described by \citet{Gebru2018}. \end{enumerate} Unfortunately, as might be expected, there are several challenges to such comprehensive datasets. The most important challenges are from the legal domain, which might even affect researchers and developers within a company. For example, the data minimization principle in GDPR \cite{data_min_gdpr} could restrict platforms from collecting sensitive information like gender or race, thereby indirectly closing the door on the implementation of fairness interventions; inferred attributes, in turn, would carry substantial uncertainty, which may render fairness interventions useless (as discussed in \cref{subsec:uncertainty}). In fact, a study by \citet{biega2020operationalizing} finds that the performance might not substantially decrease due to data minimization, but it might disparately impact different users. Additional legal principles which might present challenges are other privacy regulations, data retention policies, intellectual property rights of platforms, etc. We discuss these challenges in the next section. Furthermore, while a comprehensive and long-term view on fair RS may be of huge societal need and expectation, the creation of suitable datasets and their availability to external researchers heavily rely on the interests of platform owners. Such external access, even if restricted in various ways, is an important aspect of regulation and auditing. We now turn to discussing such legal and regulatory concerns. \subsection{\bf Legal Bottlenecks}\label{subsec:legal_bottlenecks} In the previous section we discussed issues of missing data and the challenges of obtaining necessary information due to platform interests and legal regulations on privacy.
Regulations and other legal interventions by governments are helpful in some aspects of ensuring external audits, while hindering fair ranking and recommendation in other contexts. Legal provisions will vary across jurisdictions, causing different challenges in data access and algorithmic disclosure depending on the location of: the data requested, the users of platforms that implement RS’s, the individuals impacted by the rankings, and the researchers seeking access to RS information. For example, data protection laws may potentially restrict access to data located in the EU, for non-EU based researchers or vice versa. In this section we give an overview of legal hurdles that prevent researchers of fair RS from assessing the impact of their methods, along with information on specific laws and guidelines that can be used as a starting point for discussions to shape a more robust set of legal provisions for long term fair RS. There are existing laws/guidance that could be applied to long term fairness in RS. But the wording of some of these laws/guidance leaves them open to interpretation, such that a platform could reasonably argue that it is fulfilling its obligations under the guidance, without taking into account long term fairness in RS. The European Commission Ethics Guidelines for Trustworthy AI~\cite{EUEthicsGuidelines} state that a system should be tested and validated to ensure it is working as intended throughout its entire life cycle, both during development and after deployment. The guidelines list fairness as well as societal well-being as a requirement of trustworthy AI. However, if the word ``intended'' is interpreted narrowly, as point in time and in isolation from the dynamic and interconnected nature of recommendations, platforms could demonstrate that their systems are working as ``intended,'' considering both fairness and societal impact---even if in practice the platform may not be evaluating for long-term fairness or modelling various spillover effects. In addition, the European Commission Guidelines on Ranking Transparency~ \cite{EURankingTransparency} reflect hesitancy that platforms have to be fully transparent on the details of their ranking; they recognise that providers are ``not required to disclose algorithms or any information that, with reasonable certainty, would result in the enabling of deception of consumers or consumer harm through the manipulation of search results.'' This privacy-transparency trade-off may cause the problem of missing data for algorithmic impact assessments to continue. On the other hand, there is a push from regulators to make data from algorithmic systems available---if not to the general public, at least to independent third party auditors---to mitigate conflicts of interest when platforms audit their own systems. In the US, the FTC’s Algorithmic Accountability Act \cite{FTCAlgorithmicAccountability} provides that if reasonably possible, impact assessments are to be performed in consultation with external third parties, including independent auditors and technology experts. However, the EU harmonised rules for AI \cite{EUHarmonisedAIRules} acknowledge that given the early phase of the regulatory intervention and the fact the AI sector is very innovative, expertise for auditing is only now being accumulated. 
In the absence of underlying data and full knowledge of the ranking algorithm, researchers could still adopt a forward looking approach of implementing simulations, based on what they do know about the ranking, to help predict the longer term effects of a ranking algorithm (as already explained in Section~\ref{subsec:simulations}). It remains to be seen however, whether the advised disclosure of ``meaningful explanations'' of the main parameters of ranking algorithms---referred to in the European Commission Guidelines on Ranking Transparency \cite{EURankingTransparency}---provide enough information upon which to base an evaluation of the long term fairness of the RS. There is also uncertainty over whether these meaningful explanations reduce sufficiently the impact of information asymmetry between users of the platform, and the platform itself, particularly where the platform both controls the RS, and includes its own items to be eligible in ranking results, alongside those of third party providers. Further consideration also needs to be given to the timing of the release of the explanations when an RS method is updated, to give stakeholders sufficient opportunity to challenge reliance on these parameters, from a long term fairness perspective, pre-implementation of the RS update. Applying laws to, or developing laws for, long term fairness scenarios in RS is in its infancy. Those involved in shaping this legal framework should consider for long term fairness evaluation purposes: data access for different stakeholders, timings for this access, and level of detail that needs to be given; as well as providing actionable guidance on a platform’s responsibility for developing RS with long term fairness goals in mind. \section{Towards Impact-oriented Fairness in Ranking and Recommender Systems}\label{sec:long_term_fairness} In order to avoid the pitfalls discussed in the last section and to design `truly' fair RS, one must understand and assess the full range and long-term effects of various RS mechanisms. In this regard, we apply recent lessons from and critiques of Algorithmic Impact Assessment (AIA), both within and beyond the FAccT community. Algorithmic Impact Assessment (AIA) can be described as a set of practices and measurements with the purpose of establishing the (direct or indirect) impacts of algorithmic systems, identifying the accountability of those causing harms, and designing effective solutions \cite{Metcalf2021,Reisman2018}. More specifically to ranking and recommendation systems, \citet{Jannach2020} introduces a comprehensive collection of issues related to impact-oriented research in RS. There are two broad lessons from this literature, that we explain and apply to the design of fair RS, in a manner that involves integrated effort from different actors and a comprehensive view of their effects. First, as discussed by \citet{Vecchione2021}, a key point when assessing or auditing algorithmic systems is to move \textit{beyond discrete moments of decision making}, i.e., to understand how those decision-points affect the long-run system evolution; this point is particularly true for fairness interventions in ranking and recommender systems, as discussed in \Cref{sec:pitfalls}. \citet{Jannach2020} also highlight the limitations and unsuitability of traditional research in RS, which focused solely on accurately predicting user ratings for items (``leaderboard chasing") or optimizing click-through rates. 
Thus, in \Cref{subsec:simulations}, we begin with a discussion of methodologies that can be used to study such long-run effects of fair RS mechanisms, that have been used to study other questions in RS fields -- mainly, simulation and applied modeling. We detail not only the useful frameworks but also potential limitations and challenges when studying fairness-specific questions. Second, a key aspect of effective assessments is the participation of every suitable stakeholder, including systems developers, affected communities, external experts, and public agencies; otherwise, a danger is that the research community focuses on impacts most measurable by its preferred methods and ignores others \cite{Metcalf2021}. However, there are bottlenecks to such holistic work, especially for RS used in private or sensitive contexts. We discuss data availability challenges in \Cref{subsec:data_bottlenecks}. Then, in \Cref{subsec:legal_bottlenecks}, we overview various regulatory frameworks -- along with their limitations -- designed to govern RS or algorithmic systems in general, and hold them accountable. Researchers should contribute to tackling these challenges as well. \subsection{Simulation and Applied Modeling to Study Long-term Effects and Context-specific Dynamics}\label{subsec:simulations} Many of the challenges discussed in \Cref{sec:pitfalls} are regarding impacts that do not appear in the short-term, immediately after a given ranking; for example, it may take time for strategic agents to respond to a ranking systems. These long-term impacts are difficult to capture without considering a specific context, or with solely relying on ``traditional'' metrics that assess instantaneous precision-fairness trade-offs. Outside of fair ranking, the recommendations literature has investigated such long-term and indirect effects using \textit{simulation and applied modeling} methods, motivated for example by the observation that offline (and commonly, precision-driven) recommendation experiments are not always predictive of long-term simulation or online A/B testing outcomes \cite{gomez2015netflix,bodapati2008recommendation, krauth2020offline}. However, surprisingly, such an approach has been relatively rare in the fair rankings and recommendations literature; to spur such work, here we overview various simulation and modeling tools along that are advantageous in our context. First, {\bf simulations} have already been used in the past to demonstrate long-term effects of recommender systems and search engines---although unrelated to fairness, in ways that static precision-based analyses can not. Examples are the demonstration of the {\it performance paradox} (users' higher reliance on recommendations may lead to lower RS performance accuracy and discovery) by \citet{Zhang2020}, the study of {\it homogenization} effects on RS users by \citet{Chaney2018}, a study on the emergence of {\it filter bubbles} \cite{Nguyen2014} in collaborative filtering recommendation systems and its impacts by \citet{Aridor2020}, the evaluation of reinforcement learning to rank for search engines by \citet{hu2018reinforcement}, and a study on {\it popularity bias} in search engines by \citet{fortunato2006topical}. All relied on context-specific simulations of RS. Many other works also leverage simulations \cite{Hazrati2020, Ferraro2020, patro2020incremental, Banerjee2020AnalyzingM, Bountouridis2019, DAmour2020, Yao2020, Mansoury2020, patro2020towards} to study various dynamics in recommender systems. 
In summary, these works illustrate how simulation-based environments can help in {\it (i)} studying various hypothesized relationships between the usage of systems and individual and collective behavior and effects, {\it (ii)} detecting new forms of relationships, and {\it (iii)} replicating results obtained in empirical studies. Given the usefulness of simulations, many simulation frameworks have been developed to study various fairness approaches for information retrieval systems; just to mention a few: MARS-Gym \cite{MARSGYM}, ML-fairness-gym \cite{DAmour2020}, Accordion \cite{McInerney2021}, RecLab \cite{krauth2020offline}, RecSim NG \cite{Mladenov2021}, SIREN \cite{Bountouridis2019}, T-RECS \cite{lucherini2021t}, RecoGym \cite{Rohde2018}, AESim \cite{gao2021imitate}, Virtual-Taobao \cite{shi2019virtual}. Note however, that the simulated environments are created under certain assumptions on the interactions between the stakeholders and the system, which may not always hold in real-world. As emphasized by \citet{Friedler2021}, it is important to question how different value assumptions may be influential on the simulated environments, and which worldviews have been modeled while developing such frameworks. On a positive note, simulation frameworks can be designed to be flexible enough to give freedom in (de)selecting or changing the fundamental value assumptions in fair RS; for example RecoGym \cite{Rohde2018} and MARS-Gym \cite{MARSGYM} provide freedom in setting various types of user behaviours and interactions with the system. This flexibility allows impact and efficacy assessment under different ethical scenarios, and the study of fair RS mechanisms under various delayed effects and user biases (as discussed in \cref{subsec:beyond_exposure,subsec:temporal_significance}) -- we believe that leveraging such simulation frameworks is an important path forward to studying the various effects discussed above in a context-specific manner. Second, various {\bf temporal, behavioural and causal models} have traditionally been used to formally define, understand and study complex dynamical systems in fields like social networks \cite{handcock2010modeling,hanneke2010discrete,farajtabar2017coevolve}, game theory and economics \cite{camerer2003behavioural,ariely2008predictably}, machine learning \cite{yao2021survey,guo2020survey}, and epidemiology \cite{grenfell2001travelling}. These models often rely on real-world observations of individual behaviour, extract broader insights, and then try to formally represent both individual and system dynamics through mathematical modeling. While the simulation frameworks can function as technical tools to study RS dynamics, suitable temporal, behavioural and causal models can be integrated within the simulation to ensure that the eco-system parametrization, stakeholder behaviour and system pay-offs are representative of the real-world. A good example: \citet{radinsky2013behavioral} try to improve search engine performance with the use of suitable behavioural and temporal models in their framework. Similarly, simulation frameworks with suitable applied modeling can be used to design and evaluate fair RS mechanisms which can withstand strategic user behaviour and other temporal environment variations. Causal models can be utilized to study the impact of fair RS \cite{sharma2015estimating, schnabel2016recommendations,wang2020causal} in presence or absence of uncertainties and various associated services. 
Applied modeling tools are further an effective way to study strategic concerns in ranking, along with their fairness implications \citep{liu2021strategic}. Even though simulations along with applied modeling may not exactly mirror the real world effects of fair RS, they could give enough of a basis to highlight likely risks, which could then be taken into account while designing and optimizing fair RS mechanisms. They also bring an opportunity to model the effects of proposed fairness interventions, so that their long-term and indirect effects can be better understood and compared. However, these approaches would further benefit from availability of certain data and the resolution of related legal bottlenecks. For example, studies on spillover effects can not proceed without the data on complementary and associated services. These data and legal bottlenecks might have also contributed to the fact that there are very few works exploring this direction, and out of the limited works, some are limited to either theoretical analysis \cite{mladenov2020optimizing,ben2020content} or simulations with assumed parametrizations \cite{Zhang2020,ge2021towards,xue2019enhancing} in absence of complementary data.\footnote{Note that a few recent works look into long-term assessment of fair machine learning \cite{liu2018delayed,zhang2020long,DAmour2020}, which we overlook so as not to divert from the primary focus of our discussion.} We discuss these bottlenecks in \cref{subsec:data_bottlenecks} and \cref{subsec:legal_bottlenecks}. \input{Arxiv_4.2_data} \input{Arxiv_4.3_legal} \section{Conclusion} In this paper we provided a critical overview of the current state of research on fairness in ranking, recommendations, and retrieval systems, and especially the aspects often abstracted away in existing research. Much of the existing research has focused on instant-based, static fairness definitions that are prone to oversimplifying real-world ranking systems and their environments. Such a focus may do more harm than good and result in `fair-washing,' if those methods are deployed without continuous critical investigation on their outcomes. Guidelines and methods to consider the effects of the entire ranking system through its life cycle, including effects from interactions with the outside world, are urgently needed. We discussed various aspects beyond the actual ordering of items that affect rankings, such as spillover effects, temporal variations, and varying user characteristics ranging from their levels of activity. We further examined the effects of strategic behaviors and uncertainties in an RS. These effects play an important role for the successful creation and assessment of fair rankings, and yet they are rarely considered in state-of-the-art fair ranking research. Finally, we proposed next steps to overcome these research gaps. As a promising first step we have identified simulations frameworks and applied-modeling methods, which can reflect the complexity of ranking systems and their environments. However, in order to create meaningful impact analysis, concerns around datasets for fair ranking research, certain data bottlenecks and legal hurdles are yet to be resolved. Our analysis concerning existing research gaps is of course by no means exhaustive, and many other issues of high complexity remain to be discussed. 
In this paper, we focused on fair ranking methods that try to enhance fairness for a single side of stakeholders, mostly the individuals being ranked, or the providers of items that are ranked. Research that is concerned with multi-stakeholder problems has recently started to emerge---finding, for example, that fairness objectives for providers and consumers in conflict to each other. Similarly, we also did not explicitly discuss ranking platforms as two-sided markets, in which both sides may receive rankings for the other side. While it is a promising direction with a vast corpus of economic research on the topic, it is important to understand that \begin{inparaenum}[(1)] \item not all ranking platforms and their environments are two-sided in a literal sense: e.g., Amazon is a platform and a provider at the same time; and \item depending on what is happening on the platform, different justice frameworks have to be applied: e.g., school choice, LinkedIn, and Amazon can all be seen as two-sided markets in a broader sense, but they need very different approaches when it comes to the question on what it means for them to be fair. \end{inparaenum} Depending on whether people or products are ranked, one might expect different user bias manifestations, as well as different requirements on data privacy and minimization policies. These differences have to be taken into account when designing fair ranking methods. Finally, we note that, to the best of our knowledge, all known definitions of fairness in ranking are drawn from an understanding of fairness as distributive justice: (limited) \textit{primary goods}---these are goods essential for a person's life, such as housing, access to job opportunities, health care, etc.---are to be distributed fairly across a set of individuals. Fair ranking definitions of this kind may be a good fit for hiring or admissions, because we distribute a limited number of primary goods, namely jobs and education, among a set of individuals. However, fairness definitions based on the distributive justice framework may not make sense in other scenarios. For instance, e-commerce platforms may not qualify for properties of distributive justice, because they lack the aspect to distribute \emph{primary} goods: e-commerce settings, e.g., whether a single item is sold, may not qualify as immediately life-changing. Overall, we conclude that there is still a long way ahead of us; many more aspects from the ranking systems' universe have to be considered before we achieve substantive and robust algorithmic justice in rankings, recommendations, and retrieval systems. \section*{Appendix: Comprehensive Examples} Here we give some toy examples relevant to our discussion in the paper. \Cref{tab:position_bias} gives an example on how position bias in ranking could further widen the already existing inequalities. In \Cref{tab:fair_exposure_gone_wrong}, we give an example where the traditional fair ranking would fail to ensure equity in presence of user biases. \Cref{tab:temporal_significance} gives an example where fair ranking mechanisms would fail in presence of temporal variations. \Cref{tab:duplication_attack} and \Cref{tab:spillover_example} give examples on how duplication attacks and spillovers could cause the failure of fair ranking mechanisms. 
\begin{table*}[h] \small \subfloat[An optimal ranking]{ \begin{tabular}{|c|c|c|c|c|} \hline {\bf Rank} & {\bf Expected} & {\bf Individual} & {\bf Relevance} & {\bf Group} \\ & {\bf attention} & & & {\bf membership}\\\cline{1-5} $1$ & $0.5$ & \textcolor{blue}{A} & $0.92$ & \multirow{2}*{\textcolor{blue}{blue}} \\\cline{1-4} $2$ & $0.25$ & \textcolor{blue}{B} & $0.91$ & \\\cline{1-5} $3$ & $0.125$ & \textcolor{red}{C} & $0.90$ & \multirow{2}*{\textcolor{red}{red}}\\\cline{1-4} $4$ & $0.0625$ & \textcolor{red}{D} & $0.89$ & \\\cline{1-5} \end{tabular}\label{tab:position_bias_ranking}} \hfil \subfloat[Group-level analysis]{ \begin{tabular}{|c|c|c|} \hline {\bf Group} & {\bf Mean} & {\bf Exposure}\\ & {\bf relevance} & \\\cline{1-3} \textcolor{blue}{blue} & $0.915$ & $0.75$ \\\cline{1-3} \textcolor{red}{red} & $0.895$ & $0.1875$ \\\cline{1-3} \end{tabular}\label{tab:position_bias_inequality}} \caption{\textmd{Here we give an example (inspired by \citet{singh2018fairness}) on how position bias could further widen the existing inequalities. On a gig-economy platform there are four workers: A, B from the blue group, and C, D from the red group. For a certain employer, the platform wants to create a ranking of the workers. Let us assume that, in reality, all the workers are equally relevant to the employer. However, due to a pre-existing bias in historical training data, the relevance scores estimated by the platform's model are: $0.92$, $0.91$, $0.90$, $0.89$ for A, B, C, D respectively. Using the probability ranking principle \cite{robertson1977probability} we can optimize the user utility by ranking them in descending order of their relevance: i.e., A$\succ$B$\succ$C$\succ$D as given in table (a). The second column of table (a) has the expected user attention for each rank (this follows from real world observations of position or rank bias indicating close to an exponential decrease of attention while moving from the top to bottom ranks \cite{craswell2008experimental}). Next we give a group-level analysis in table (b). The mean relevance scores of the blue group (A \& B) and the red group (C \& D) were $0.915$ and $0.895$ respectively which are not so different. On the other hand the exposure (sum of expected attention) of the blue and red groups---in the optimal ranking--- were $0.75$ and $0.1875$ respectively which are very different. 
We can clearly see that how the optimal ranking in presence of position bias, could significantly widen the gap in exposure even for a small difference in relevance estimation.}}\label{tab:position_bias} \end{table*} \begin{table*}[h] \small \subfloat[Expected attention]{ \begin{tabular}{|c|c|} \hline {\bf Rank} & {\bf Attention} \\\cline{1-2} $1$ & $0.6$ \\\cline{1-2} $2$ & $0.3$ \\\cline{1-2} $3$ & $0.1$ \\\cline{1-2} \end{tabular}\label{tab:avg_attention}} \hfil \subfloat[Non-discriminatory employer]{ \begin{tabular}{|c|c|c|} \hline {\bf Rank} & {\bf Individual} & {\bf Group}\\\cline{1-3} $1$ & \textcolor{red}{A} & \textcolor{red}{red}\\\cline{1-3} $2$ & \textcolor{blue}{D} & \textcolor{blue}{blue} \\\cline{1-3} $3$ & \textcolor{blue}{E} & \textcolor{blue}{blue} \\\cline{1-3} \end{tabular}\label{tab:ranking1}} \hfil \subfloat[Discriminatory employer]{ \begin{tabular}{|c|c|c|} \hline {\bf Rank} & {\bf Individual} & {\bf Group} \\\cline{1-3} $1$ & \textcolor{blue}{F} & \textcolor{blue}{blue} \\\cline{1-3} $2$ & \textcolor{red}{B} & \textcolor{red}{red} \\\cline{1-3} $3$ & \textcolor{red}{C} & \textcolor{red}{red} \\\cline{1-3} \end{tabular}\label{tab:ranking2}} \caption{\textmd{Here, we give a simple example of ranking in hiring or gig-economy platform setting where exposure, if used as a measure of producer utility, may fail to deliver desired fairness even after satisfying fairness of exposure. We have six workers (A, B, and C from the red group while D, E, and F from the blue group) on the platform. The platform's RS presents a ranked list (size $3$) of workers to the consumers i.e., the employers. In table (a), we give a sample distribution of expected attention from employer over the ranks (i.e., on average there are $0.6$, $0.3$, $0.1$ chances of an employer clicking on individual ranked $1$, $2$, $3$ respectively). Tables (b) and (c) show the rankings given to two different employers. Now the overall exposure of red group will be exposure$(A)+$ exposure$(B)+$ exposure$(C)= 0.6+0.3+0.1=1$. Similarly for blue group's exposure will be exposure$(D)+$ exposure$(E)+$ exposure$(F)= 0.3+0.1+0.6=1$. It is clear that this set of rankings follow the notions of fairness of exposure \cite{singh2018fairness} and equity of attention \cite{biega2018equity}. However, if we look more closely, one employer (in table (b)) is a non-discriminatory employer while the other one (in table (c)) is a discriminatory employer biased against the blue group. The second employer ignores the top ranked individual $F$ from blue group, and treats $B$, $C$ as if they are ranked at the first and second positions. 
Thus under these circumstances, the expected impact on the red group increases while that of the blue group decreases even though the rankings are fair in terms of exposure distribution.}} \label{tab:fair_exposure_gone_wrong} \end{table*} \begin{table*}[h] \small \subfloat[Expected attention]{ \begin{tabular}{|c|c|} \hline {\bf Rank} & {\bf Attention} \\\cline{1-2} $1$ & $0.6$ \\\cline{1-2} $2$ & $0.4$ \\\cline{1-2} \end{tabular}\label{tab:exp_exposure}} \hfil \subfloat[At Time $t$]{ \begin{tabular}{|c|c|c|c|} \hline {\bf Rank} & {\bf Item} & {\bf Exposure} & {\bf Overall}\\ & & & {\bf interest}\\\cline{1-4} $1$ & $a$ & $0.6$ & \multirow{2}*{$1$} \\\cline{1-3} $2$ & $b$ & $0.4$ & \\\cline{1-4} \end{tabular}\label{tab:with_temporal_sig_1}} \hfil \subfloat[Time $t+1$ ($50\%$ reduction in overall interest)]{ \begin{tabular}{|c|c|c|c|} \hline {\bf Rank} & {\bf Item} & {\bf Exposure} & {\bf Overall} \\ & & & {\bf interest}\\\cline{1-4} $1$ & $b$ & $0.6$ & \multirow{2}*{$0.5$} \\\cline{1-3} $2$ & $a$ & $0.4$ & \\\cline{1-4} \end{tabular}\label{tab:with_temporal_sig_2}} \caption{\textmd{Here we give an example on how temporal variations in the significance of rankings could fail the fair ranking mechanisms. Consider a scenario where two news agencies namely A and B regularly publish their articles on a news aggregator platform which then ranks the news articles while recommending to the readers (users). In table (a), we give a sample distribution of expected attention from readers over the ranks. At some time just before $t$, a big event happens, and both A and B quickly report on this through equally good articles $a$ and $b$, both published at time $t$. The table (b) and (c) show the rankings of articles on the platform at time $t$ and $t+1$. If we sum up the total exposure of each agency, we get exposure$(A)=0.6+0.4=1$ and exposure$(B)=0.4+0.6=1$. However, if we look more closely, the overall interest of readers on the breaking news at time $t$ is $1$ which decreases to $0.5$ at time $t+1$. This is because the readers who have already read the news on the particular event at $t$, will be less likely to read the same news again from a different agency at $t+1$. Thus, even though the exposure metric of the news agencies are the same in this case, they end up getting disparate impact due to the temporal degradation of user interest. 
A way to avoid such outcomes, would be to design and use suitable context-specific weighting mechanisms for rankings which can anticipate and account for such temporal variations.}} \label{tab:temporal_significance} \end{table*} \begin{table*}[h] \small \subfloat[List of relevant items]{ \begin{tabular}{|c|c|c|} \hline {\bf Relevant} & {\bf Provider} & {\bf \% times}\\ {\bf items} & & {\bf recommended}\\\cline{1-3} $a_1$ & \multirow{2}*{$A$} & \multirow{2}*{$50\%$}\\\cline{1-1} $a_2$ & &\\\cline{1-3} $b_1$ & \multirow{2}*{$B$} & \multirow{2}*{$50\%$}\\\cline{1-1} $b_2$ & &\\\cline{1-3} \end{tabular}\label{tab:normal_catalogue}} \hfil \subfloat[List with item duplication]{ \begin{tabular}{|c|c|c|} \hline {\bf Relevant} & {\bf Provider} & {\bf \% times}\\ {\bf items} & & {\bf recommended}\\\cline{1-3} $a_1$ & \multirow{3}*{$A$} & \multirow{2}*{$60\%$}\\\cline{1-1} $a_1$\_copy & & \\\cline{1-1} $a_2$ & &\\\cline{1-3} $b_1$ & \multirow{2}*{$B$} & \multirow{2}*{$40\%$}\\\cline{1-1} $b_2$ & &\\\cline{1-3} \end{tabular}\label{tab:manipulated_catalogue}} \caption{\textmd{An example of duplication attack (inspired by \citet{diincentives} and the Sybil attacks in networks \cite{goga2015doppelganger}): Here, the table (a) has a list of relevant items for certain information need. The list contains two items each from providers $A$ and $B$. Let us consider a recommender system which recommends exactly one item every time. In such a recommendation setting, the fairness notions which advocate for fair allocation of exposure, visibility, or impact \cite{singh2018fairness,biega2018equity,surer2018multistakeholder}, would try to allocate $25\%$ to each item, i.e., each item is recommended $25\%$ of the time; thus each provider gets $50\%$ of the exposure or visibility. Now, if provider $A$ tries to manipulate by introducing a copy of its own item $a_1$ as a new item $a_1$\_copy (as shown in table (b)) potentially undetectable by the platform, then it is highly likely that the machine learned relevance scoring model would assign same or similar relevance to the copied item. In this scenario, due to the fairness notion, provider $A$ potentially increases her share of exposure to $60\%$ while reducing it to $40\%$ for provider $B$. Allocation-based fair ranking methods can create incentives for providers to do such strategic manipulations. 
Possible ways to dis-incentivise such duplication would be to actively include those item features in the relevance scoring model which are particularly harder to duplicate (e.g., \#views on YouTube videos, \#reviews on Amazon).}} \label{tab:duplication_attack} \end{table*} \begin{table*}[h] \small \subfloat[Items and recommendations]{ \begin{tabular}{|c|c|c|} \hline {\bf Relevant} & {\bf \% times}\\ {\bf items} & {\bf recommended}\\\cline{1-2} $a$ & $20\%$\\\cline{1-2} $b$ & $20\%$\\\cline{1-2} $c$ & $20\%$\\\cline{1-2} $d$ & $20\%$\\\cline{1-2} $e$ & $20\%$\\\cline{1-2} \end{tabular}\label{tab:item_catalogue}} \hfil \subfloat[Similar items]{ \begin{tabular}{|c|c|c|} \hline {\bf Item} & {\bf Similar items}\\ {\bf page} & {\bf recommended}\\\cline{1-2} $a$ & $b,c$\\\cline{1-2} $b$ & $c,d$\\\cline{1-2} $c$ & $a,b$\\\cline{1-2} $d$ & $b,c$\\\cline{1-2} $e$ & $b,c$\\\cline{1-2} \end{tabular}\label{tab:similar_items_list}} \hfil \subfloat[Resultant exposure distribution]{ \begin{tabular}{|c|c|c|} \hline {\bf Item} & {\bf Resultant exposure}\\ & {\bf (with $20\%$ spillover)}\\\cline{1-2} $a$ & $20-4+1\times 2=18\%$\\\cline{1-2} $b$ & $20-4+4\times 2=24\%$\\\cline{1-2} $c$ & $20-4+4\times 2=24\%$\\\cline{1-2} $d$ & $20-4+1\times 2=18\%$\\\cline{1-2} $e$ & $20-4+0\times 2=16\%$\\\cline{1-2} \end{tabular}\label{tab:resultant_exposure}} \caption{\textmd{An example on exposure spillover: Here we consider an e-commerce setting where there are five items relevant to a certain type of users. Following fairness of exposure \cite{singh2018fairness} or equity of attention \cite{biega2018equity}, each of the five items gets recommended same number of times as shown in table (a). Apart from this regular recommendation, e-commerce platforms often have similar or complementary item recommendations \cite{amazon_reco,sharma2015estimating} towards the bottom of individual item pages. Table (b) shows the similar items shown on each individual item's page. Assuming that there is 20\% spillover, i.e., 20\% of the user crowd coming to any item page moves to the similar items shown on the page, the resultant expected exposure of the items after one step of user spillovers is given in table (c). It can be clearly seen that even though the regular recommender system ensures fairness (as in table (a)), the resultant effects may not be fair due to spillover effects.}} \label{tab:spillover_example} \end{table*}
2024-02-18T23:40:31.434Z
2022-02-01T02:19:51.000Z
algebraic_stack_train_0000
2,652
11,329
proofpile-arXiv_065-12902
\section{Introduction}\label{sec:introduction}} Consider a positive semi-definite matrix ${\mathbf{A}}$. The principle square root ${\mathbf{A}}^{\frac{1}{2}}$ and the inverse square root ${\mathbf{A}}^{-\frac{1}{2}}$ are mathematically of practical interests, mainly because some desired spectral properties can be obtained by such transformations. An exemplary illustration is given in Fig.~\ref{fig:cover}. As can be seen, the matrix square root can shrink/stretch the feature variances along with the direction of principle components, which is known as an effective spectral normalization for covariance matrices. The inverse square root, on the other hand, can be used to whiten the data, \emph{i.e.,} make the data has a unit variance in each dimension. These appealing spectral properties are very useful in many computer vision applications. In Global Covariance Pooling (GCP)~\cite{lin2017improved,li2017second,li2018towards,song2021approximate} and other related high-order representation methods~\cite{xie2021so,gao2021temporal}, the matrix square root is often used to normalize the high-order feature, which can benefit some classification tasks like general visual recognition~\cite{li2017second,li2018towards,xie2021so}, fine-grained visual categorization~\cite{song2022eigenvalues}, and video action recognition~\cite{gao2021temporal}. The inverse square root is used as the whitening transform to eliminate the feature correlation, which is widely applied in decorrelated Batch Normalization (BN)~\cite{huang2018decorrelated,huang2019iterative,huang2020investigation} and other related models that involve the whitening transform~\cite{siarohin2018whitening,ermolov2021whitening}. In the field of neural style transfer, both the matrix square root and its inverse are adopted to perform successive Whitening and Coloring Transform (WCT) to transfer the style information for better generation fidelity~\cite{li2017universal,cho2019image,choi2021robustnet}. \begin{figure}[tbp] \centering \includegraphics[width=0.8\linewidth]{imgs/MatSqrt_Cover.jpg} \caption{Exemplary visualization of the matrix square root and its inverse. Given the original data ${\mathbf{X}}{\in}\mathbb{R}^{2{\times}n}$, the matrix square root performs an effective spectral normalization by stretching the data along the axis of small variances and squeezing the data in the direction with large variances, while the inverse square root transforms the data into the uncorrelated structure that has unit variance in all directions.} \label{fig:cover} \end{figure} To compute the matrix square root, the standard method is via Singular Value Decomposition (SVD). Given the real symmetric matrix ${\mathbf{A}}$, its matrix square root is computed as: \begin{equation} {\mathbf{A}}^{\frac{1}{2}} =({\mathbf{U}}{\mathbf{\Lambda}}{\mathbf{U}}^{T})^{\frac{1}{2}} = {\mathbf{U}}{\mathbf{\Lambda}}^{\frac{1}{2}}{\mathbf{U}}^{T} \end{equation} where ${\mathbf{U}}$ is the eigenvector matrix, and ${\mathbf{\Lambda}}$ is the diagonal eigenvalue matrix. 
As derived by Ionescu~\emph{et al.}~\cite{ionescu2015training}, the partial derivative of the eigendecomposition is calculated as: \begin{equation} \frac{\partial l}{\partial {\mathbf{A}}} = {\mathbf{U}}\Big({\mathbf{K}}^{T}\odot({\mathbf{U}}^{T}\frac{\partial l}{\partial{\mathbf{U}}})+(\frac{\partial l}{\partial{\mathbf{\Lambda}}})_{\rm diag}\Big){\mathbf{U}}^{T} \label{svd_back} \end{equation} where $l$ is the loss function, $\odot$ denotes the element-wise product, and $()_{\rm diag}$ represents the operation of setting the off-diagonal entries to zero. Despite the long-studied theories and well-developed algorithms of SVD, there exist two obstacles when integrating it into deep learning frameworks. One issue is the back-propagation instability. For the matrix ${\mathbf{K}}$ defined in~\cref{svd_back}, its off-diagonal entry is $K_{ij}{=}\nicefrac{1}{(\lambda_{i}-\lambda_{j})}$, where $\lambda_{i}$ and $\lambda_{j}$ are involved eigenvalues. When the two eigenvalues are close and small, the gradient is very likely to explode, \emph{i.e.,} $K_{ij}{\rightarrow}{\infty}$. This issue has been solved by some methods that use approximation techniques to estimate the gradients~\cite{wang2019backpropagation,wang2021robust,song2021approximate}. The other problem is the expensive time cost of the forward eigendecomposition. As the SVD is not supported well by GPUs~\cite{lahabar2009singular}, performing the eigendecomposition on the deep learning platforms is rather time-consuming. Incorporating the SVD with deep models could add extra burdens to the training process. Particularly for batched matrices, modern deep learning frameworks, such as Tensorflow and Pytorch, give limited optimization for the matrix decomposition within the mini-batch. They inevitably use a for-loop to conduct the SVD one matrix by another. However, how to efficiently perform the SVD in the context of deep learning has not been touched by the research community. To avoid explicit eigendecomposition, one commonly used alternative is the Newton-Schulz iteration (NS iteration)~\cite{schulz1933iterative,higham2008functions} which modifies the ordinary Newton iteration by replacing the matrix inverse but preserving the quadratic convergence. Compared with SVD, the NS iteration is rich in matrix multiplication and more GPU-friendly. Thus, this technique has been widely used to approximate the matrix square root in different applications~\cite{lin2017improved,li2018towards,huang2019iterative}. The forward computation relies on the following coupled iterations: \begin{equation} {\mathbf{Y}}_{k+1}=\frac{1}{2}{\mathbf{Y}}_{k} (3{\mathbf{I}} - {\mathbf{Z}}_{k}{\mathbf{Y}}_{k}), {\mathbf{Z}}_{k+1}=\frac{1}{2}(3{\mathbf{I}}-{\mathbf{Z}}_{k}{\mathbf{Y}}_{k}){\mathbf{Z}}_{k} \label{eq:ns_fp} \end{equation} where ${\mathbf{Y}}_{k}$ and ${\mathbf{Z}}_{k}$ converge to ${\mathbf{A}}^{\frac{1}{2}}$ and ${\mathbf{A}}^{-\frac{1}{2}}$, respectively. Since the NS iteration only converges locally (\emph{i.e.,} $||{\mathbf{A}}||_{2}{<}1$), we need to pre-normalize the initial matrix and post-compensate the resultant approximation as ${\mathbf{Y}}_{0}{=}\frac{1}{||{\mathbf{A}}||_{\rm F}}{\mathbf{A}}$ and$\ {\mathbf{A}}^{\frac{1}{2}}{=}\sqrt{||{\mathbf{A}}||_{\rm F}}{\mathbf{Y}}_{k}$. Each forward iteration involves $3$ matrix multiplications, which is more efficient than the forward pass of SVD. However, the backward pass of the NS iteration takes $14$ matrix multiplications per iteration. 
Consider that the NS iteration often takes $5$ iterations to achieve reasonable performances~\cite{li2018towards,huang2019iterative}. The backward pass is much more time-costing than the backward algorithm of SVD. The speed improvement could be larger if a more efficient backward algorithm is developed. To address the drawbacks of SVD and NS iteration, \emph{i.e.} the low efficiency in either the forward or backward pass, we derive two methods \textbf{that are efficient in both forward and backward propagation} to compute the differentiable matrix square root and its inverse. In the forward pass (FP), we propose using Matrix Taylor Polynomial (MTP) and Matrix Pad\'e Approximants (MPA) for approximating the matrix square root. The former approach is slightly faster but the latter is more numerically accurate. Both methods yield considerable speed-up compared with the SVD or the NS iteration in the forward computation. The proposed MTP and MPA can be also used to approximate the inverse square root without any additional computational cost. For the backward pass (BP), we consider the gradient function as a Lyapunov equation and propose an iterative solution using the matrix sign function. The backward pass costs fewer matrix multiplications and is more computationally efficient than the NS iteration. Our proposed iterative Lyapunov solver applies to both the matrix square root and the inverse square root. The only difference is that deriving the gradient of inverse square root requires $3$ more matrix multiplications than computing that of matrix square root. Through a series of numerical tests, we show that the proposed MTP-Lya and MPA-Lya deliver consistent speed improvement for different batch sizes, matrix dimensions, and some hyper-parameters (\emph{e.g.,} degrees of power series to match and iteration times). Moreover, our proposed MPA-Lya consistently gives a better approximation of the matrix square root and its inverse than the NS iteration. Besides the numerical tests, we conduct extensive experiments in a number of computer vision applications, including decorrelated batch normalization, second-order vision transformer, global covariance pooling for large-scale and fine-grained image recognition, attentive global covariance pooling for video action recognition, and neural style transfer. Our methods can achieve competitive performances against the SVD and the NS iteration with the least amount of time overhead. Our MPA is suitable in use cases where the high precision is needed, while our MTP works in applications where the accuracy is less demanded but the efficiency is more important. The contributions of the paper are twofold: \begin{itemize} \item We propose two fast methods that compute the differentiable matrix square root and the inverse square root. The forward propagation relies on the matrix Taylor polynomial or matrix Pad\'e approximant, while an iterative backward gradient solver is derived from the Lyapunov equation using the matrix sign function. \item Our proposed algorithms are validated by a series of numerical tests and several real-world computer vision applications. The experimental results demonstrate that our methods have a faster calculation speed and also have very competitive performances. \end{itemize} This paper is an expanded version of~\cite{song2022fast}. In the conference paper~\cite{song2022fast}, the proposed fast algorithms only apply to the matrix square root ${\mathbf{A}}^{\frac{1}{2}}$. 
For the application of inverse square root ${\mathbf{A}}^{-\frac{1}{2}}$, we have to solve the linear system or compute the matrix inverse. However, both techniques are not GPU-efficient enough and could add extra computational burdens to the training. In this extended manuscript, we target the drawback and extend our algorithm to the case of inverse square root, which avoids the expensive computation and allows for faster calculation in more application scenarios. Compared with computing the matrix square root, computing the inverse square root consumes the same time complexity in the FP and requires 3 more matrix multiplications in the BP. The paper thus presents a complete solution to the efficiency issue of the differentiable spectral layer. Besides the algorithm extension, our method is validated in more computer vision applications: global covariance pooling for image/video recognition and neural style transfer. We also shed light on the peculiar incompatibility of NS iteration and Lyapunov solver discussed in Sec.~\ref{sec:lya_backward}. The rest of the paper is organized as follows: Sec.~\ref{sec:related} describes the computational methods and applications of differentiable matrix square root and its inverse. Sec.~\ref{sec:method} introduces our method that computes the end-to-end matrix square root, and Sec.~\ref{sec:method_inverse} presents the extension of our method to the inverse square root. Sec.~\ref{sec:exp} provides the experimental results, the ablation studies, and some in-depth analysis. Finally, Sec.~\ref{sec:conclusion} summarizes the conclusions. \section{Experiments}\label{sec:exp} In the experimental section, we first perform a series of numerical tests to compare our proposed method with SVD and NS iteration. Subsequently, we evaluate our methods in several real-world applications, including decorrelated batch normalization, second-order vision transformer, global covariance pooling for image/video recognition, and neural style transfer. The implementation details are kindly referred to the Supplementary Material. \subsection{Baselines} In the numerical tests, we compare our two methods against SVD and NS iteration. For the various computer vision experiments, our methods are compared with more differentiable SVD baselines where each one has its specific gradient computation. These methods include (1) Power Iteration (PI), (2) SVD-PI~\cite{wang2019backpropagation}, (3) SVD-Taylor~\cite{wang2021robust,song2021approximate}, and (4) SVD-Pad\'e~\cite{song2021approximate}. We put the detailed illustration of baseline methods in the Supplementary Material. \subsection{Numerical Tests} To comprehensively evaluate the numerical performance and stability, we compare the speed and error for the input of different batch sizes, matrices in various dimensions, different iteration times of the backward pass, and different polynomial degrees of the forward pass. In each of the following tests, the comparison is based on $10,000$ random covariance matrices and the matrix size is consistently $64{\times}64$ unless explicitly specified. The error is measured by calculating the Mean Absolute Error (MAE) and Normalized Root Mean Square Error (NRMSE) of the matrix square root computed by the approximate methods (NS iteration, MTP, and MPA) and the accurate method (SVD). For our algorithm of fast inverse square root, since the theory behind the algorithm is in essence the same with the matrix square root, they are expected to have similar numerical properties. 
The difference mainly lie in the forward error and backward speed. Thereby, we conduct the FP error analysis and the BP speed analysis for the inverse square root in Sec.~\ref{sec:fp_err_speed} and Sec.~\ref{sec:bp_speed}, respectively. For the error analysis, we compute the error of whitening transform by $||\sigma({\mathbf{A}}^{-\frac{1}{2}}{\mathbf{X}}){-}{\mathbf{I}}||_{\rm F}$ where $\sigma(\cdot)$ denotes the extracted eigenvalues. In the other numerical tests, we only evaluate the properties of the algorithm for the matrix square root. \begin{figure}[htbp] \centering \includegraphics[width=0.99\linewidth]{imgs/fp_sped_err.jpg} \caption{The comparison of speed and error in the FP for the matrix square root (\emph{left}) and the inverse square root (\emph{right}). Our MPA computes the more accurate and faster solution than the NS iteration, and our MTP enjoys the fastest calculation speed. } \label{fig:fp_sped_err} \end{figure} \subsubsection{Forward Error versus Speed} \label{sec:fp_err_speed} Both the NS iteration and our methods have a hyper-parameter to tune in the forward pass, \emph{i.e.,} iteration times for NS iteration and polynomial degrees for our MPA and MTP. To validate the impact, we measure the speed and error of both matrix square root and its inverse for different hyper-parameters. The degrees of our MPA and MTP vary from $6$ to $18$, and the iteration times of NS iteration range from $3$ to $7$. As can be observed from Fig.~\ref{fig:fp_sped_err}, our MTP has the least computational time, and our MPA consumes slightly more time than MTP but provides a closer approximation. Moreover, the curve of our MPA consistently lies below that of the NS iteration, demonstrating our MPA is a better choice in terms of both speed and accuracy. \begin{figure}[htbp] \centering \includegraphics[width=0.49\linewidth]{imgs/bp_speed.jpg} \caption{The speed comparison in the backward pass. Our Lyapunov solver is more efficient than NS iteration as fewer matrix multiplications are involved. Our solver for inverse square root only slightly increases the computational cost.} \label{fig:bp_speed} \end{figure} \subsubsection{Backward Speed versus Iteration} \label{sec:bp_speed} Fig.~\ref{fig:bp_speed} compares the speed of our backward Lyapunov solver and the NS iteration versus different iteration times. The result is coherent with the complexity analysis in Table~\ref{tab:backward_complexiy}: our Lyapunov solver is much more efficient than NS iteration. For the NS iteration of $5$ times, our Lyapunov solver still has an advantage even when we iterate $8$ times. Moreover, the extension of our Lyapunov solver for inverse square root only marginally increases the computational cost and is sill much faster than the NS iteration. \begin{figure}[htbp] \centering \includegraphics[width=0.99\linewidth]{imgs/speed_bs.jpg} \caption{Speed comparison for each method versus different batch sizes. Our methods are more batch-efficient than the SVD or NS iteration. } \label{fig:speed_bs} \end{figure} \subsubsection{Speed versus Batch Size} In certain applications such as covariance pooling and instance whitening, the input could be batched matrices instead of a single matrix. To compare the speed for batched input, we conduct another numerical test. The hyper-parameter choices follow our experimental settings in decorrelated batch normalization. As seen in Fig.~\ref{fig:speed_bs}, our MPA-Lya and MTP-Lya are consistently more efficient than the NS iteration and SVD. 
To give a concrete example, when the batch size is $64$, our MPA-Lya is $2.58$X faster than NS iteration and $27.25$X faster than SVD, while our MTP-Lya is $5.82$X faster than the NS iteration and $61.32$X faster than SVD. \begin{table*}[htbp] \centering \caption{Validation error of ZCA whitening methods. The covariance matrix is of size $1{\times}64{\times}64$. The time consumption is measured for computing the inverse square root (BP+FP). For each method, we report the results based on five runs.} \resizebox{0.8\linewidth}{!}{ \begin{tabular}{r|c|c|c|c|c|c|c} \hline \multirow{3}*{Methods} & \multirow{3}*{ Time (ms)} & \multicolumn{4}{c|}{ResNet-18} & \multicolumn{2}{c}{ResNet-50}\\ \cline{3-8} & & \multicolumn{2}{c|}{CIFAR10} & \multicolumn{2}{c|}{CIFAR100} & \multicolumn{2}{c}{CIFAR100}\\ \cline{3-8} && mean$\pm$std & min & mean$\pm$std & min & mean$\pm$std & min \\ \hline SVD-Clip &3.37 & 4.88$\pm$0.25 &4.65 & 21.60$\pm$0.39 &21.19 & 20.50$\pm$0.33 &20.17\\ SVD-PI (GPU) &5.27 &4.57$\pm$0.10 &4.45 &21.35$\pm$0.25 &21.05 &19.97$\pm$0.41 & 19.27 \\ SVD-PI & 3.49 & 4.59$\pm$0.09 &4.44 &21.39$\pm$0.23 &21.04 &19.94$\pm$0.44 &19.28 \\ SVD-Taylor &3.41 &4.50$\pm$0.08 &4.40 &21.14$\pm$0.20 &\textbf{20.91} &19.81$\pm$0.24&19.26\\ SVD-Pad\'e &3.39 & 4.65$\pm$0.11 &4.50 &21.41$\pm$0.15 &21.26 &20.25$\pm$0.23&19.98\\ NS Iteration & 2.96 & 4.57$\pm$0.15 &4.37&21.24$\pm$0.20 &21.01&\textbf{19.39$\pm$0.30}&\textbf{19.01}\\ \hline Our MPA-Lya & 2.61 &\textbf{4.39$\pm$0.09} &\textbf{4.25} & \textbf{21.11}$\pm$\textbf{0.12}&20.95& \textbf{19.55$\pm$0.20} &19.24\\ Our MTP-Lya & \textbf{2.56} & 4.49$\pm$0.13 &4.31 & 21.42$\pm$0.21 &21.24 &20.55$\pm$0.37&20.12\\ \hline \end{tabular} } \label{tab:zca_whitening} \end{table*} As discussed before, the current SVD implementation adopts a for-loop to compute each matrix one by one within the mini-batch. This accounts for why the time consumption of SVD grows almost linearly with the batch size. For the NS iteration, the backward pass is not as batch-friendly as our Lyapunov solver. The gradient calculation requires measuring the trace and handling the multiplication for each matrix in the batch, which has to be accomplished ineluctably by a for-loop. Our backward pass can be more efficiently implemented by batched matrix multiplication. \begin{figure}[htbp] \centering \includegraphics[width=0.99\linewidth]{imgs/speed_err_dim.png} \caption{The speed comparison (\emph{left}) and the error comparison (\emph{middle and right}) for matrices in different dimensions. Our MPA-Lya is consistently faster and more accurate than NS iteration for different matrix dimensions. Since the SVD is accurate by default, other approximate methods are compared with SVD to measure the error.} \label{fig:speed_err_dim} \end{figure} \subsubsection{Speed and Error versus Matrix Dimension} In the last numerical test, we compare the speed and error for matrices in different dimensions. The hyper-parameter settings also follow our experiments of ZCA whitening. As seen from Fig.~\ref{fig:speed_err_dim} left, our proposed MPA-Lya and MTP-Lya consistently outperform others in terms of speed. In particular, when the matrix size is very small (${<}32$), the NS iteration does not hold a speed advantage over the SVD. By contrast, our proposed methods still have competitive speed against the SVD. Fig.~\ref{fig:speed_err_dim} right presents the approximation error using metrics MAE and NRMSE. 
Both metrics agree well with each other and demonstrate that our MPA-Lya always has a better approximation than the NS iteration, whereas our MTP-Lya gives a worse estimation but takes the least time consumption, which can be considered as a trade-off between speed and accuracy. \subsection{Decorrelated Batch Normalization} As a substitute of ordinary BN, the decorrelated BN~\cite{huang2018decorrelated} applies the ZCA whitening transform to eliminate the correlation of the data. Consider the reshaped feature map ${\mathbf{X}}{\in}\mathbb{R}^{C{\times} BHW}$. The whitening procedure first computes its sample covariance as: \begin{equation} {\mathbf{A}}{=}({\mathbf{X}}-\mu({\mathbf{X}}))({\mathbf{X}}-\mu({\mathbf{X}}))^{T}{+}\epsilon{\mathbf{I}} \label{zca_cov} \end{equation} where ${\mathbf{A}}{\in}\mathbb{R}^{C{\times}C}$, $\mu({\mathbf{X}})$ is the mean of ${\mathbf{X}}$, and $\epsilon$ is a small constant to make the covariance strictly positive definite. Afterwards, the inverse square root is calculated to whiten the feature map: \begin{equation} {\mathbf{X}}_{whitend}={\mathbf{A}}^{-\frac{1}{2}}{\mathbf{X}} \end{equation} By doing so, the eigenvalues of ${\mathbf{X}}$ are all ones, \emph{i.e.,} the feature is uncorrelated. During the training process, the training statistics are stored for the inference phase. We insert the decorrelated BN layer after the first convolutional layer of ResNet~\cite{he2016deep}, and the proposed methods and other baselines are used to compute ${\mathbf{A}}^{-\frac{1}{2}}$. Table~\ref{tab:zca_whitening} displays the speed and validation error on CIFAR10 and CIFAR100~\cite{krizhevsky2009learning}. The ordinary SVD with clipping gradient (SVD-Clip) is inferior to other SVD baselines, and the SVD computation on GPU is slower than that on CPU. Our MTP-Lya is $1.16$X faster than NS iteration and $1.32$X faster than SVD-Pad\'e, and our MPA-Lya is $1.14$X and $1.30$X faster. Furthermore, our MPA-Lya achieves state-of-the-art performances across datasets and models. Our MTP-Lya has comparable performances on ResNet-18 but slightly falls behind on ResNet-50. We guess this is mainly because the relatively large approximation error of MTP might affect little on the small model but can hurt the large model. On CIFAR100 with ResNet-50, our MPA-Lya slightly falls behind NS iteration in the average validation error. As a larger and deeper model, ResNet-50 is likely to have worse-conditioned matrices than ResNet-18. Since our MPA involves solving a linear system, processing a very ill-conditioned matrix could lead to some round-off errors. In this case, NS iteration might have a chance to slightly outperform our MPA-Lya. However, this is a rare situation; our MPA-Lya beats NS iteration in most following experiments. \subsection{Global Covariance Pooling} For the application of global covariance pooling, we evaluate our method in three different tasks, including large-scale visual recognition, fine-grained visual categorization, and video action recognition. Since the GCP method requires the very accurate matrix square root~\cite{song2021approximate}, our MTP-Lya cannot achieve reasonable performances due to the relatively large approximation error. Therefore, we do not take it into account for comparison throughout the GCP experiments. 
\subsubsection{Large-scale Visual Recognition} \begin{figure}[htbp] \centering \includegraphics[width=0.99\linewidth]{imgs/arch_gcp.jpg} \caption{Overview of the GCP network~\cite{li2017second,li2018towards,song2021approximate} for large-scale and fine-grained visual recognition.} \label{fig:arch_gcp} \end{figure} Fig.~\ref{fig:arch_gcp} displays the architecture of a typical GCP network. Different from the standard CNNs, the covariance square root of the last convolutional feature is used as the global representation. Considering the final convolutional feature ${\mathbf{X}}{\in}\mathbb{R}^{B{\times}C{\times}HW}$, a GCP meta-layer first computes the sample covariance as: \begin{equation} \mathbf{P}=\mathbf{X}\Bar{\mathbf{I}}\mathbf{X}^{T},\ \Bar{\mathbf{I}}=\frac{1}{N}(\mathbf{I}-\frac{1}{N}\mathbf{1}\mathbf{1}^{T}) \label{covariance} \end{equation} where $\Bar{\mathbf{I}}$ represents the centering matrix, $\mathbf{I}$ denotes the identity matrix, and $\mathbf{1}$ is a column vector whose values are all ones, respectively. Afterwards, the matrix square root is conducted for normalization: \begin{equation} \mathbf{Q}\triangleq\mathbf{P}^{\frac{1}{2}}=(\mathbf{U}\mathbf{\Lambda}\mathbf{U}^{T})^{\frac{1}{2}}={\mathbf{U}}\mathbf{\Lambda}^{\frac{1}{2}}\mathbf{U}^{T} \label{matrix_power} \end{equation} where the normalized covariance matrix $\mathbf{Q}$ is fed to the FC layer. Our method is applied to calculate ${\mathbf{Q}}$. \begin{table}[htbp] \caption{Comparison of validation accuracy (\%) on ImageNet~\cite{deng2009imagenet} and ResNet-50~\cite{he2016deep}. The covariance is of size {$256{\times}256{\times}256$}, and the time consumption is measured for computing the matrix square root (FP+BP).} \centering \resizebox{0.99\linewidth}{!}{ \begin{tabular}{r|c|c|c} \hline Methods & Time (ms)& Top-1 Acc. & Top-5 Acc. \\ \hline SVD-Taylor &2349.12 &77.09 &93.33 \\ SVD-Pad\'e &2335.56 &\textbf{77.33} &\textbf{93.49} \\ NS iteration &164.43 & 77.19 & 93.40\\ \hline Our MPA-Lya & \textbf{110.61} &77.13 & 93.45\\ \hline \end{tabular} } \label{tab:performances_GCP_CNN} \end{table} Table~\ref{tab:performances_GCP_CNN} presents the speed comparison and the validation error of GCP ResNet-50~\cite{he2016deep} models on ImageNet~\cite{deng2009imagenet}. Our MPA-Lya not only achieves very competitive performance but also has the least time consumption. The speed of our method is about $21$X faster than the SVD and $1.5$X faster than the NS iteration. \subsubsection{Fine-grained Visual Recognition } \begin{table}[htbp] \caption{Comparison of validation accuracy on fine-grained benchmarks and ResNet-50~\cite{he2016deep}. The covariance is of size {$10{\times}64{\times}64$}, and the time consumption is measured for computing the matrix square root (FP+BP).} \centering \resizebox{0.99\linewidth}{!}{ \begin{tabular}{r|c|c|c|c} \hline Methods & Time (ms)& Birds & Aircrafts & Cars \\ \hline SVD-Taylor &32.13 &86.9 &89.9 &92.3 \\ SVD-Pad\'e &31.54 &87.2 &90.5 &\textbf{92.8} \\ NS iteration &5.79 & 87.3 & 89.5 & 91.7 \\ \hline Our MPA-Lya & \textbf{3.89} &\textbf{87.8} &\textbf{91.0} &92.5 \\ \hline \end{tabular} } \label{tab:performances_GCP_fgvc} \end{table} In line with other GCP works~\cite{li2017second,li2018towards,song2021approximate}, after training on ImageNet, the model is subsequently fine-tuned on each fine-grained dataset. 
Table~\ref{tab:performances_GCP_fgvc} compares the time consumption and validation accuracy on three commonly used fine-grained benchmarks, namely Caltech-UCSD Birds (Birds)~\cite{WelinderEtal2010}, FGVC Aircrafts (Aircrafts)~\cite{maji2013fine}, and Stanford Cars (Cars)~\cite{KrauseStarkDengFei-Fei_3DRR2013}. As can be observed, our MPA-Lya is about $1.5$X faster than the NS iteration and about $8$X faster than the SVD. Moreover, the performance of our method is slightly better than the other baselines on Birds~\cite{WelinderEtal2010} and Aircrafts~\cite{maji2013fine}. The evaluation result on Cars~\cite{KrauseStarkDengFei-Fei_3DRR2013} is also comparable.

\subsubsection{Video Action Recognition}

\begin{figure}[htbp]
    \centering
    \includegraphics[width=0.99\linewidth]{imgs/arch_video_gcp.jpg}
    \caption{Architecture of the temporal-attentive GCP network for video action recognition~\cite{gao2021temporal}. The channel and spatial attention are used to make the covariance more attentive.}
    \label{fig:arch_gcp_video}
\end{figure}

Besides the application of image recognition, the GCP methods can also be used for the task of video recognition~\cite{gao2021temporal}. Fig.~\ref{fig:arch_gcp_video} displays the overview of the temporal-attentive GCP model for video action recognition. The temporal covariance is computed in a sliding window manner by involving both intra- and inter-frame correlations. Supposing the kernel size of the sliding window is $3$, the temporal covariance is computed as:
\begin{equation}
    \begin{gathered}
    Temp.Cov.(\mathbf{X}_{l})=\underbrace{{\mathbf{X}}_{l-1}{\mathbf{X}}_{l-1}^{T} + {\mathbf{X}}_{l}{\mathbf{X}}_{l}^{T} + {\mathbf{X}}_{l+1}{\mathbf{X}}_{l+1}^{T}}_{intra-frame\ covariance}\\
    +\underbrace{{\mathbf{X}}_{l-1}{\mathbf{X}}_{l}^{T} + {\mathbf{X}}_{l}{\mathbf{X}}_{l-1}^{T} + \cdots + {\mathbf{X}}_{l+1}{\mathbf{X}}_{l}^{T}}_{inter-frame\ covariance}
    \end{gathered}
\end{equation}
Finally, the matrix square root of the attentive temporal covariance $Temp.Cov.(\mathbf{X}_{l})$ is computed and passed to the FC layer; the compared spectral methods are used to perform this computation.

\begin{table}[htbp]
\centering
\caption{Validation top-1/top-5 accuracy (\%) on HMDB51~\cite{Kuehne11} and UCF101~\cite{soomro2012ucf101} with backbone TEA R50~\cite{li2020tea}. The covariance matrix is of size $16{\times}128{\times}128$, and the time consumption is measured for computing the matrix square root (BP+FP).}
\resizebox{0.99\linewidth}{!}{
\begin{tabular}{r|c|c|c}
\hline
Methods & Time (ms) & HMDB51 & UCF101 \\
\hline
SVD-Taylor &76.17 &73.79/93.84 &\textbf{95.00}/\textbf{99.60} \\
SVD-Pad\'e &75.25 &73.89/93.79 &94.13/99.47 \\
NS Iteration &12.11 &72.75/93.86 &94.16/99.50 \\
\hline
Our MPA-Lya &\textbf{6.95} &\textbf{74.05}/\textbf{93.99} &94.24/99.58 \\
\hline
\end{tabular}
}
\label{tab:video_gcp}
\end{table}

We present the validation accuracy and time cost for video action recognition in Table~\ref{tab:video_gcp}. In terms of computation speed, our MPA-Lya is about $1.74$X faster than the NS iteration and about $10.82$X faster than the SVD. Furthermore, our MPA-Lya achieves the best performance on HMDB51, while the result on UCF101 is also very competitive.

To sum up, our MPA-Lya has demonstrated its general applicability in the GCP models for different tasks. In particular, without sacrificing performance, our method can bring considerable speed improvements. This could be beneficial for faster training and inference.
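To make the GCP meta-layer of~\cref{covariance} and~\cref{matrix_power} concrete, the following minimal PyTorch sketch shows the two steps for a batch of convolutional features. The helper name \texttt{gcp\_pool} is hypothetical, and the eigendecomposition-based square root stands in for whichever method (SVD, NS iteration, or our MPA-Lya) is being compared.
\begin{verbatim}
import torch

def gcp_pool(X):
    # X: final convolutional feature of size (B, C, HW)
    B, C, N = X.shape
    eye = torch.eye(N, device=X.device, dtype=X.dtype)
    Ibar = (eye - torch.ones_like(eye) / N) / N   # centering matrix
    P = X @ Ibar @ X.transpose(1, 2)              # sample covariance (B, C, C)
    # matrix square root for normalization; replace with SVD / NS / MPA-Lya
    L, U = torch.linalg.eigh(P)
    Q = U @ torch.diag_embed(L.clamp_min(0).sqrt()) @ U.transpose(1, 2)
    return Q.flatten(1)                           # fed to the FC layer
\end{verbatim}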
In certain experiments such as fine-grained classification, the approximate methods (MPA-Lya and NS iteration) can marginally outperform the accurate SVD. This phenomenon has been similarly observed in related studies~\cite{li2018towards,huang2019iterative,song2021approximate}, and one likely reason is that the SVD does not have as healthy gradients as the approximate methods, which might negatively influence the optimization process and consequently degrade the performance.

\subsection{Neural Style Transfer}

\begin{figure}[htbp]
    \centering
    \includegraphics[width=0.9\linewidth]{imgs/style_transfer_arch.jpg}
    \caption{The architecture overview of our model for neural style transfer. Two encoders take the style and content images as input, respectively, and generate the multi-scale content/style features. A decoder absorbs the features and performs the WCT process at $5$ different scales, outputting a pair of images that exchange their styles. Finally, a discriminator is adopted to judge the authenticity of the images.}
    \label{fig:arch_style_transfer}
\end{figure}

We adopt the WCT process in the network architecture proposed by Cho~\emph{et al.}~\cite{cho2019image} for neural style transfer. Fig.~\ref{fig:arch_style_transfer} displays the overview of the model. The WCT performs successive whitening and coloring transforms on the content and style features. Consider the reshaped content feature $\mathbf{X}_{c}{\in}\mathbb{R}^{B{\times}C{\times}HW}$ and the style feature $\mathbf{X}_{s}{\in}\mathbb{R}^{B{\times}C{\times}HW}$. The style information is first removed from the content as:
\begin{equation}
    \begin{gathered}
    \mathbf{X}_{c}^{whitened} = \Big((\mathbf{X}_{c}-\mu(\mathbf{X}_{c}))(\mathbf{X}_{c}-\mu(\mathbf{X}_{c}))^{T}\Big)^{-\frac{1}{2}}\mathbf{X}_{c}
    \end{gathered}
\end{equation}
Then we extract the desired style information from the style feature $\mathbf{X}_{s}$ and transfer it to the whitened content feature:
\begin{equation}
    \mathbf{X}_{c}^{colored} = \Big((\mathbf{X}_{s}-\mu(\mathbf{X}_{s}))(\mathbf{X}_{s}-\mu(\mathbf{X}_{s}))^{T}\Big)^{\frac{1}{2}}\mathbf{X}_{c}^{whitened}
\end{equation}
The resultant feature $\mathbf{X}_{c}^{colored}$ is compensated with the mean of the style feature and combined with the original content feature:
\begin{equation}
    \mathbf{X} = \alpha (\mathbf{X}_{c}^{colored}+\mu(\mathbf{X}_{s})) + (1-\alpha)\mathbf{X}_{c}
\end{equation}
where $\alpha$ is a weight bounded in $[0,1]$ that controls the strength of the style transfer. In this experiment, both the matrix square root and the inverse square root are computed.

\begin{table}[htbp]
\caption{The LPIPS~\cite{zhang2018perceptual} score and user preference (\%) on the Artworks~\cite{isola2017image} dataset. The covariance is of size $4{\times}256{\times}256$.
We measure the time consumption of the whitening and coloring transform, which is conducted $10$ times to exchange the style and content features at different network depths.}
\centering
\setlength{\tabcolsep}{1.5pt}
\resizebox{0.99\linewidth}{!}{
\begin{tabular}{r|c|c|c}
\hline
Methods & Time (ms) & LPIPS~\cite{zhang2018perceptual} ($\uparrow$) & Preference ($\uparrow$) \\
\hline
SVD-Taylor &447.12 & 0.5276 & 16.25\\
SVD-Pad\'e &445.23 & 0.5422 & 19.25\\
NS iteration &94.37 & 0.5578 & 17.00\\
\hline
Our MPA-Lya &69.23 &\textbf{0.5615} & \textbf{24.75}\\
Our MTP-Lya &\textbf{40.97} &0.5489 & 18.50\\
\hline
\end{tabular}
}
\label{tab:style_transfer_sum}
\end{table}

Table~\ref{tab:style_transfer_sum} presents the quantitative evaluation using the LPIPS~\cite{zhang2018perceptual} score and the user preference. Our MPA-Lya and MTP-Lya are significantly faster than the other methods. Specifically, our MTP-Lya is $2.3$X faster than the NS iteration and $10.9$X faster than the SVD, while our MPA-Lya is $1.4$X faster than the NS iteration and $6.4$X faster than the SVD. Moreover, our MPA-Lya achieves the best LPIPS score and user preference. The performance of our MTP-Lya is also very competitive.

Fig.~\ref{fig:style_transfer_visual} displays exemplary visual comparisons. Our methods can effectively transfer the style information and preserve the original content, leading to transferred images with a more coherent style and better visual appeal. We give detailed evaluation results on each subset and more visual examples in the Supplementary Material.

\begin{table*}[htbp]
\centering
\caption{Validation top-1/top-5 accuracy of the second-order vision transformer on ImageNet~\cite{deng2009imagenet}. The covariance is of size $64{\times}48{\times}48$, where $64$ is the mini-batch size. The time cost is measured for computing the matrix square root (BP+FP).}
\resizebox{0.79\linewidth}{!}{
\begin{tabular}{r|c|c|c|c}
\hline
\multirow{2}*{Methods} & \multirow{2}*{ Time (ms)} & \multicolumn{3}{c}{Architecture} \\
\cline{3-5}
& & So-ViT-7 & So-ViT-10 & So-ViT-14 \\
\hline
PI & \textbf{1.84} & 75.93/93.04 & 77.96/94.18 & 82.16/96.02 (303 epoch)\\
SVD-PI & 83.43 & 76.55/93.42 & 78.53/94.40 & 82.16/96.01 (278 epoch)\\
SVD-Taylor & 83.29 & 76.66/\textbf{93.52} & 78.64/94.49 & 82.15/96.02 (271 epoch)\\
SVD-Pad\'e & 83.25 & 76.71/93.49 & 78.77/94.51 & 82.17/96.02 (265 epoch)\\
NS Iteration & 10.38 & 76.50/93.44 & 78.50/94.44 & 82.16/96.01 (280 epoch)\\
\hline
Our MPA-Lya & 3.25 & \textbf{76.84}/93.46 & \textbf{78.83}/\textbf{94.58} & 82.17/96.03 (\textbf{254} epoch)\\
Our MTP-Lya & 2.39 & 76.46/93.26 & 78.44/94.33 & 82.16/96.02 (279 epoch)\\
\hline
\end{tabular}
}
\label{tab:vit_imagenet}
\end{table*}

\begin{figure}[htbp]
    \centering
    \includegraphics[width=0.9\linewidth]{imgs/style_transfer_visual_small.png}
    \caption{Visual examples of the neural style transfer on the Artworks~\cite{isola2017image} dataset. Our methods generate sharper images with more coherent styles and better visual appeal. The red rectangles indicate regions with subtle details.}
    \label{fig:style_transfer_visual}
\end{figure}

\subsection{Second-order Vision Transformer}

\begin{figure}
    \centering
    \includegraphics[width=0.9\linewidth]{imgs/arch_sovit.jpg}
    \caption{The scheme of So-ViT~\cite{xie2021so}. The covariance square root of the visual tokens is computed to assist the classification.
In the original vision transformer~\cite{dosovitskiy2020image}, only the class token is utilized for the class predictions.}
    \label{fig:arch_sovit}
\end{figure}

The ordinary vision transformer~\cite{dosovitskiy2020image} attaches an empty class token to the sequence of visual tokens and only uses the class token for prediction, which may not exploit the rich semantics embedded in the visual tokens. Instead, the Second-order Vision Transformer (So-ViT)~\cite{xie2021so} proposes to leverage the high-level visual tokens to assist the task of classification:
\begin{equation}
    y = {\rm FC}(c) + {\rm FC}\Big(({\mathbf{X}}\mX^{T})^{\frac{1}{2}}\Big)
\end{equation}
where $c$ is the output class token, ${\mathbf{X}}$ denotes the visual tokens, and $y$ is the combined class prediction. We show the model overview in Fig.~\ref{fig:arch_sovit}. Equipped with the covariance pooling layer, So-ViT removes the need for pre-training on ultra-large-scale datasets and achieves competitive performance even when trained from scratch. To reduce the computational budget, So-ViT further proposes to use Power Iteration (PI) to approximate the dominant eigenpair. We use our methods to compute the matrix square root of the covariance ${\mathbf{X}}\mX^{T}$.

Table~\ref{tab:vit_imagenet} compares the speed and performances on three So-ViT architectures with different depths. Our proposed methods significantly outperform the SVD and the NS iteration in terms of speed. To be more specific, our MPA-Lya is $3.19$X faster than the NS iteration and $25.63$X faster than SVD-Pad\'e, and our MTP-Lya is $4.34$X faster than the NS iteration and $34.85$X faster than SVD-Pad\'e. For So-ViT-7 and So-ViT-10, our MPA-Lya achieves the best evaluation results and even slightly outperforms the SVD-based methods. Moreover, on the So-ViT-14 model where the performances are saturated, our method converges faster and requires fewer training epochs. The performance of our MTP-Lya is also on par with the other methods. The PI suggested in So-ViT only computes the dominant eigenpair but neglects the rest. In spite of its fast speed, its performance is not comparable with that of the other methods.

\subsection{Ablation Studies}

We conduct three ablation studies to illustrate the impact of the degree of the power series in the forward pass, the termination criterion during the back-propagation, and the possibility of combining our Lyapunov solver with the SVD and the NS iteration.

\subsubsection{Degree of Power Series to Match for Forward Pass}

Table~\ref{tab:forward_degree} displays the performance of our MPA-Lya for different degrees of the power series. As we use more terms of the power series, the approximation error becomes smaller and the performance improves steadily from degree $[3,3]$ to $[5,5]$. When the degree of our MPA is increased from $[5,5]$ to $[6,6]$, there are only marginal improvements. We hence set the forward degree to $[5,5]$ for our MPA and to $11$ for our MTP as a trade-off between speed and accuracy.
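For illustration, the chosen $[5,5]$ forward pass amounts to the following PyTorch sketch. The helper name \texttt{mpa\_sqrt} is hypothetical; the hard-coded values are the $[5,5]$ Pad\'e coefficients of $(1-z)^{\frac{1}{2}}$ listed in the Supplementary Material, and the linear system is solved directly instead of forming ${\mathbf{Q}}_{N}^{-1}$ explicitly.
\begin{verbatim}
import torch

# [5,5] Pade coefficients of (1 - z)^(1/2)
P_COEF = [2.75, -2.75, 1.203125, -0.21484375, 0.0107421875]
Q_COEF = [2.25, -1.75, 0.54675, -0.05859375, 0.0009765625]

def mpa_sqrt(A):
    # A: batch of SPD matrices of size (B, C, C)
    norm = A.flatten(1).norm(dim=1).view(-1, 1, 1)   # Frobenius norm
    I = torch.eye(A.size(-1), device=A.device, dtype=A.dtype)
    Y = I - A / norm                                 # pre-normalization
    P = I.expand_as(A).clone()                       # P_M polynomial
    Q = I.expand_as(A).clone()                       # Q_N polynomial
    Yk = Y
    for p, q in zip(P_COEF, Q_COEF):
        P = P - p * Yk
        Q = Q - q * Yk
        Yk = Yk @ Y                                  # next power of Y
    return norm.sqrt() * torch.linalg.solve(Q, P)    # Q_N^{-1} P_M, rescaled
\end{verbatim}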
\begin{table}[htbp]
\centering
\setlength{\tabcolsep}{1.5pt}
\caption{Performance of our MPA-Lya versus different degrees of the power series to match.}
\resizebox{0.99\linewidth}{!}{
\begin{tabular}{r|c|c|c|c|c|c|c}
\hline
\multirow{3}*{Degrees} & \multirow{3}*{ Time (ms)} & \multicolumn{4}{c|}{ResNet-18} & \multicolumn{2}{c}{ResNet-50}\\
\cline{3-8}
& & \multicolumn{2}{c|}{CIFAR10} & \multicolumn{2}{c|}{CIFAR100} & \multicolumn{2}{c}{CIFAR100}\\
\cline{3-8}
&& mean$\pm$std & min & mean$\pm$std & min & mean$\pm$std & min \\
\hline
$[3,3]$ &0.80 &4.64$\pm$0.11&4.54 &21.35$\pm$0.18&21.20 &20.14$\pm$0.43 & 19.56\\
$[4,4]$ &0.86 &4.55$\pm$0.08&4.51 &21.26$\pm$0.22&21.03 &19.87$\pm$0.29 & 19.64\\
$[6,6]$ &0.98 &\textbf{4.45$\pm$0.07}&4.33 &\textbf{21.09$\pm$0.14}&21.04 &\textbf{19.51$\pm$0.24}&19.26\\
\hline
$[5,5]$ &0.93 &\textbf{4.39$\pm$0.09} &\textbf{4.25} & \textbf{21.11$\pm$0.12} &\textbf{20.95} & \textbf{19.55$\pm$0.20} & \textbf{19.24} \\
\hline
\end{tabular}
}
\label{tab:forward_degree}
\end{table}

\subsubsection{Termination Criterion for Backward Pass}
\label{sec:gradient_error}

\begin{table*}[htbp]
\centering
\setlength{\tabcolsep}{1.5pt}
\caption{Performance of our MPA-Lya versus different iteration times. The residual errors $||{\mathbf{B}}_{k}{-}{\mathbf{I}}||_{\rm F}$ and $||0.5{\mathbf{C}}_{k}-{\mathbf{X}}||_{\rm F}$ are measured based on $10,000$ randomly sampled matrices.}
\resizebox{0.8\linewidth}{!}{
\begin{tabular}{r|c|c|c|c|c|c|c|c|c}
\hline
\multirow{3}*{Methods} & \multirow{3}*{ Time (ms)} & \multirow{3}*{$||{\mathbf{B}}_{k}{-}{\mathbf{I}}||_{\rm F}$} & \multirow{3}*{$||0.5{\mathbf{C}}_{k}{-}{\mathbf{X}}||_{\rm F}$} & \multicolumn{4}{c|}{ResNet-18} & \multicolumn{2}{c}{ResNet-50}\\
\cline{5-10}
& & & & \multicolumn{2}{c|}{CIFAR10} & \multicolumn{2}{c|}{CIFAR100} & \multicolumn{2}{c}{CIFAR100}\\
\cline{5-10}
& & & & mean$\pm$std & min & mean$\pm$std & min & mean$\pm$std & min \\
\hline
BS algorithm &2.34 &-- &-- & 4.57$\pm$0.10&4.45 &21.20$\pm$0.23&21.01 &\textbf{19.60$\pm$0.16}&19.55 \\
\#iter 5 &1.14& ${\approx}0.3541$ &${\approx}0.2049$ & 4.48$\pm$0.13&4.31 & 21.15$\pm$0.24&\textbf{20.84} &20.03$\pm$0.19&19.78 \\
\#iter 6 &1.33& ${\approx}0.0410$ &${\approx}0.0231$ & 4.43$\pm$0.10 &4.28 & 21.16$\pm$0.19 &20.93 & 19.83$\pm$0.24 &19.57 \\
\#iter 7 &1.52& ${\approx}7e{-}4$ &${\approx}3.5e{-}4$ & 4.45$\pm$0.11&4.29 & 21.18$\pm$0.20&20.95 &19.69$\pm$0.20&19.38 \\
\#iter 9 &1.83& ${\approx}2e{-}7$ &${\approx}7e{-}6$ & \textbf{4.40$\pm$0.07} &4.28 & \textbf{21.08$\pm$0.15} &20.89 & \textbf{19.52$\pm$0.22} &19.25 \\
\hline
\#iter 8 &1.62& ${\approx}3e{-}7$ & ${\approx}7e{-}6$& \textbf{4.39$\pm$0.09}&\textbf{4.25} & \textbf{21.11$\pm$0.12}&20.95 & \textbf{19.55$\pm$0.20}&\textbf{19.24} \\
\hline
\end{tabular}
}
\label{tab:back_iteration}
\end{table*}

Table~\ref{tab:back_iteration} compares the performance of backward algorithms with different termination criteria as well as the exact solution computed by the Bartels-Stewart algorithm (BS algorithm)~\cite{bartels1972solution}. Since the NS iteration has the property of quadratic convergence, the errors $||{\mathbf{B}}_{k}{-}{\mathbf{I}}||_{\rm F}$ and $||0.5{\mathbf{C}}_{k}-{\mathbf{X}}||_{\rm F}$ decrease at a larger rate as the number of iterations grows. When we iterate more than $7$ times, the error becomes negligible, \emph{i.e.,} the NS iteration almost converges. Moreover, from $8$ iterations to $9$ iterations, there are no obvious performance improvements. We thus terminate the iteration after $8$ iterations.
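For concreteness, the backward iteration evaluated in Table~\ref{tab:back_iteration} can be sketched in PyTorch as follows; it is shown for the matrix square root, the helper name \texttt{lyapunov\_backward} is hypothetical, and the iteration count is the chosen $T{=}8$.
\begin{verbatim}
import torch

def lyapunov_backward(sqrt_A, grad_sqrt, T=8):
    # sqrt_A: A^{1/2} from the forward pass; grad_sqrt: dl/dA^{1/2}
    norm = sqrt_A.flatten(1).norm(dim=1).view(-1, 1, 1)
    B = sqrt_A / norm                                # B_0
    C = grad_sqrt / norm                             # C_0
    I = torch.eye(B.size(-1), device=B.device, dtype=B.dtype)
    for _ in range(T):                               # coupled iteration
        B2 = B @ B
        B_new = 0.5 * B @ (3 * I - B2)
        C = 0.5 * (-B2 @ C + B @ C @ B + C @ (3 * I - B2))
        B = B_new
    return 0.5 * C                                   # dl/dA
\end{verbatim}
Note that the update of $C$ must use the $B$ from the previous step, which is why the new $B$ is held in a temporary variable.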
The exact gradient calculated by the BS algorithm does not yield the best results. Instead, it only achieves the least fluctuation on ResNet-50, and the other results are inferior to those of our iterative solver. This is because the formulation of our Lyapunov equation is based on the assumption that the accurate matrix square root is computed, but in practice we only compute the approximate one in the forward pass. In this case, calculating \textit{the accurate gradient of the approximate matrix square root} might not necessarily work better than \textit{the approximate gradient of the approximate matrix square root}.

\subsubsection{Lyapunov Solver as a General Backward Algorithm}
\label{sec:lya_backward}

We note that our proposed iterative Lyapunov solver is a general backward algorithm for computing the matrix square root. That is to say, it should also be compatible with the SVD and the NS iteration as the forward pass. For the NS-Lya, our previous conference paper~\cite{song2022fast} shows that the NS iteration used in~\cite{higham2008functions,li2017second} does not converge on any dataset. In this extended manuscript, we find that the underlying reason is the inconsistency between the FP and the BP. The NS iteration of~\cite{higham2008functions,li2017second} is a coupled iteration that uses two variables ${\mathbf{Y}}_{k}$ and ${\mathbf{Z}}_{k}$ to compute the matrix square root. For the BP algorithm, the NS iteration is defined to compute the matrix sign and only uses one variable ${\mathbf{Y}}_{k}$. The term ${\mathbf{Z}}_{k}$ is not involved in the BP and we have no control over the gradient back-propagating through it, which results in the non-convergence of the model. To resolve this issue, we propose to change the forward coupled NS iteration to a variant that uses one variable:
\begin{equation}
    {\mathbf{Z}}_{k+1}=\frac{1}{2}(3{\mathbf{Z}}_{k}-{\mathbf{Z}}_{k}^{3}\frac{{\mathbf{A}}}{||{\mathbf{A}}||_{\rm F}})
\end{equation}
where ${\mathbf{Z}}_{k+1}$ converges to the inverse square root ${\mathbf{A}}^{-\frac{1}{2}}$. This variant of the NS iteration is often used to directly compute the inverse square root~\cite{huang2019iterative,bini2005algorithms}. The iteration is initialized with ${\mathbf{Z}}_{0}{=}{\mathbf{I}}$, and the post-compensation is calculated as ${\mathbf{Z}}_{k}=\frac{1}{\sqrt{||{\mathbf{A}}||_{\rm F}}} {\mathbf{Z}}_{k}$. Although the modified NS iteration uses only one variable, we note that it is an equivalent representation of the previous NS iteration. More formally, we have:
\begin{prop}
The one-variable NS iteration of~\cite{huang2019iterative,bini2005algorithms} is equivalent to the two-variable NS iteration of~\cite{li2017second,lin2017improved,higham2008functions}.
\end{prop}
We give the proof in the Supplementary Material. The modified forward NS iteration is compatible with our iterative Lyapunov solver. Table~\ref{tab:abla_combination} compares the performance of different methods that use the Lyapunov solver as the backward algorithm. Both the SVD-Lya and the NS-Lya achieve competitive performances.
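A minimal PyTorch sketch of this one-variable forward iteration is given below; the helper name \texttt{ns\_inv\_sqrt} is hypothetical, and $5$ iterations are used as in our other NS iteration experiments.
\begin{verbatim}
import torch

def ns_inv_sqrt(A, T=5):
    # one-variable NS iteration converging to A^{-1/2}
    norm = A.flatten(1).norm(dim=1).view(-1, 1, 1)
    An = A / norm                                    # pre-normalization
    Z = torch.eye(A.size(-1), device=A.device,
                  dtype=A.dtype).expand_as(A).clone()
    for _ in range(T):
        Z = 0.5 * (3 * Z - Z @ Z @ Z @ An)
    return Z / norm.sqrt()                           # post-compensation
\end{verbatim}
Because the whole forward pass is expressed through a single variable, the gradient of every intermediate quantity is well defined, which is the property that makes this variant compatible with the Lyapunov backward solver.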
\begin{table}[htbp]
\centering
\setlength{\tabcolsep}{1.5pt}
\caption{Performance comparison of SVD-Lya and NS-Lya.}
\resizebox{0.99\linewidth}{!}{
\begin{tabular}{r|c|c|c|c|c|c|c}
\hline
\multirow{3}*{Methods} & \multirow{3}*{ Time (ms)} & \multicolumn{4}{c|}{ResNet-18} & \multicolumn{2}{c}{ResNet-50}\\
\cline{3-8}
& & \multicolumn{2}{c|}{CIFAR10} & \multicolumn{2}{c|}{CIFAR100} & \multicolumn{2}{c}{CIFAR100}\\
\cline{3-8}
&& mean$\pm$std & min & mean$\pm$std & min & mean$\pm$std & min \\
\hline
SVD-Lya &4.47 &4.45$\pm$0.16 &\textbf{4.20} &21.24$\pm$0.24 &21.02 &\textbf{19.41$\pm$0.11}&19.26 \\
NS-Lya &2.88 & 4.51$\pm$0.14 & 4.34 & 21.16$\pm$0.17 & 20.94 &19.65$\pm$0.35 & 19.39\\
\hline
MPA-Lya & 2.61 &\textbf{4.39$\pm$0.09} &4.25 & \textbf{21.11$\pm$0.12} &20.95 & \textbf{19.55$\pm$0.20} &\textbf{19.24}\\
MTP-Lya & \textbf{2.46} & 4.49$\pm$0.13 &4.31 & 21.42$\pm$0.21 &21.24 & 20.55$\pm$0.37&20.12\\
\hline
\end{tabular}
}
\label{tab:abla_combination}
\end{table}
\label{sec:abla}

\section*{Supplementary Material}

\section{Summary of Algorithm}

Algorithm~\ref{alg:fp} and Algorithm~\ref{alg:bp} summarize the forward pass (FP) and the backward pass (BP) of our proposed methods, respectively. The hyper-parameter $K$ in Algorithm~\ref{alg:fp} denotes the degree of the power series, and $T$ in Algorithm~\ref{alg:bp} denotes the number of iterations.

\begin{algorithm}
\SetAlgoLined
\KwIn{ ${\mathbf{A}}$ and $K$}
\KwOut{ ${\mathbf{A}}^{\frac{1}{2}}$ or ${\mathbf{A}}^{-\frac{1}{2}}$}
\eIf{MTP}{
\tcp{FP method is MTP}
\eIf{Matrix Square Root}{
${\mathbf{A}}^{\frac{1}{2}}{\leftarrow} {\mathbf{I}} {-} \sum_{k=1}^{K} \Big|\dbinom{\frac{1}{2}}{k}\Big| ({\mathbf{I}}-\frac{{\mathbf{A}}}{||{\mathbf{A}}||_{\rm F}})^{k}$\;}
{
${\mathbf{A}}^{-\frac{1}{2}}{\leftarrow} {\mathbf{I}} {+} \sum_{k=1}^{K} \Big|\dbinom{-\frac{1}{2}}{k}\Big| ({\mathbf{I}}-\frac{{\mathbf{A}}}{||{\mathbf{A}}||_{\rm F}})^{k}$\;
}
}
{\tcp{FP method is MPA}
$M{\leftarrow}\frac{K-1}{2}$, $N{\leftarrow}\frac{K-1}{2}$\;
${\mathbf{P}}_{M}{\leftarrow} {\mathbf{I}} {-} \sum_{m=1}^{M} p_{m} ({\mathbf{I}}-\frac{{\mathbf{A}}}{||{\mathbf{A}}||_{\rm F}})^{m}$\;
${\mathbf{Q}}_{N}{\leftarrow} {\mathbf{I}} {-} \sum_{n=1}^{N} q_{n} ({\mathbf{I}}-\frac{{\mathbf{A}}}{||{\mathbf{A}}||_{\rm F}})^{n}$\;
\eIf{Matrix Square Root}{${\mathbf{A}}^{\frac{1}{2}}{\leftarrow}{\mathbf{Q}}_{N}^{-1}{\mathbf{P}}_{M}$\;}
{${\mathbf{A}}^{-\frac{1}{2}}{\leftarrow}{\mathbf{P}}_{M}^{-1}{\mathbf{Q}}_{N}$\;}
}
\eIf{Matrix Square Root}{Post-compensate ${\mathbf{A}}^{\frac{1}{2}}{\leftarrow}\sqrt{||{\mathbf{A}}||_{\rm F}}\cdot{\mathbf{A}}^{\frac{1}{2}}$}
{Post-compensate ${\mathbf{A}}^{-\frac{1}{2}}{\leftarrow}\frac{1}{\sqrt{||{\mathbf{A}}||_{\rm F}}}\cdot{\mathbf{A}}^{-\frac{1}{2}}$}
\caption{FP of our MTP and MPA for the matrix square root and the inverse square root.}
\label{alg:fp}
\end{algorithm}

\begin{algorithm}
\SetAlgoLined
\KwIn{$\frac{\partial l}{\partial {\mathbf{A}}^{\frac{1}{2}}}$ or $\frac{\partial l}{\partial {\mathbf{A}}^{-\frac{1}{2}}}$, ${\mathbf{A}}^{\frac{1}{2}}$ or ${\mathbf{A}}^{-\frac{1}{2}}$, and $T$}
\KwOut{$\frac{\partial l}{\partial {\mathbf{A}}}$}
\eIf{Matrix Square Root}
{${\mathbf{B}}_{0}{\leftarrow}{\mathbf{A}}^{\frac{1}{2}}$, ${\mathbf{C}}_{0}{\leftarrow}\frac{\partial l}{\partial {\mathbf{A}}^{\frac{1}{2}}}$, $k{\leftarrow}0$ \;}
{${\mathbf{B}}_{0}{\leftarrow}{\mathbf{A}}^{-\frac{1}{2}}$, ${\mathbf{C}}_{0}{\leftarrow}-{\mathbf{A}}^{-1}\frac{\partial l}{\partial {\mathbf{A}}^{-\frac{1}{2}}}{\mathbf{A}}^{-1}$, $k{\leftarrow}0$\;}
Normalize
${\mathbf{B}}_{0}{\leftarrow}\frac{{\mathbf{B}}_{0}}{||{\mathbf{B}}_{0}||_{\rm F}}$, ${\mathbf{C}}_{0}{\leftarrow}\frac{{\mathbf{C}}_{0}}{||{\mathbf{B}}_{0}||_{\rm F}}$\;
\While{$k<T$}{
\tcp{Coupled iteration}
${\mathbf{B}}_{k+1}{\leftarrow}\frac{1}{2} {\mathbf{B}}_{k} (3{\mathbf{I}}-{\mathbf{B}}_{k}^2)$ \;
${\mathbf{C}}_{k+1}{\leftarrow}\frac{1}{2} \Big(-{\mathbf{B}}_{k}^{2}{\mathbf{C}}_{k} + {\mathbf{B}}_{k}{\mathbf{C}}_{k}{\mathbf{B}}_{k} + {\mathbf{C}}_{k}(3{\mathbf{I}}-{\mathbf{B}}_{k}^2)\Big)$ \;
$k{\leftarrow}k+1$\;
}
$\frac{\partial l}{\partial {\mathbf{A}}}{\leftarrow}\frac{1}{2}{\mathbf{C}}_{k}$ \;
\caption{BP of our Lyapunov solver for the matrix square root and the inverse square root.}
\label{alg:bp}
\end{algorithm}

\section{Theoretical Derivation and Proof}

\subsection{Iterative Lyapunov Function Solver}

\begin{dup}[Matrix Sign Function~\cite{higham2008functions}]
\label{sign_2}
For a given matrix ${\mathbf{H}}$ with no eigenvalues on the imaginary axis, its sign function has the following properties: 1) $sign({\mathbf{H}})^2={\mathbf{I}}$; 2) if ${\mathbf{H}}$ has the Jordan decomposition ${\mathbf{H}}{=}{\mathbf{T}}{\mathbf{M}}{\mathbf{T}}^{-1}$, then its sign function satisfies $sign({\mathbf{H}}){=}{\mathbf{T}} sign({\mathbf{M}}) {\mathbf{T}}^{-1}$.
\end{dup}

\begin{proof}
The first property is easy to prove. Consider the eigendecomposition ${\mathbf{H}}{=}{\mathbf{U}}{\mathbf{S}}{\mathbf{U}}^{-1}$. As the sign depends on the positiveness of the eigenvalues, the square of the sign function is computed as:
\begin{equation}
sign({\mathbf{H}})^2= {\mathbf{U}} sign({\mathbf{S}})^2 {\mathbf{U}}^{-1}
\end{equation}
Since all the eigenvalues are real and non-zero, we have $sign({\mathbf{S}})^2{=}{\mathbf{I}}$, and the first property is proved. An alternative definition of the matrix sign function is given by:
\begin{equation}
sign({\mathbf{H}}) = {\mathbf{H}}({\mathbf{H}}^{2})^{-\frac{1}{2}}
\end{equation}
Injecting the Jordan decomposition ${\mathbf{H}}{=}{\mathbf{T}}{\mathbf{M}}{\mathbf{T}}^{-1}$ into the above equation leads to
\begin{equation}
\begin{aligned}
sign({\mathbf{H}}) &= {\mathbf{T}}{\mathbf{M}}{\mathbf{T}}^{-1}({\mathbf{T}}{\mathbf{M}}^2{\mathbf{T}}^{-1})^{-\frac{1}{2}}\\
&= {\mathbf{T}}{\mathbf{M}}{\mathbf{T}}^{-1} {\mathbf{T}} sign({\mathbf{M}}){\mathbf{M}}^{-1}{\mathbf{T}}^{-1} \\
&= {\mathbf{T}} sign({\mathbf{M}}) {\mathbf{T}}^{-1}
\end{aligned}
\end{equation}
This proves the second property.
\end{proof}

Now we show in detail how to derive the iterative solver for the matrix sign function. Lemma~\ref{sign_2}.1 shows that $sign({\mathbf{H}})$ is the matrix square root of the identity matrix.
We use the Newton-Schulz iteration to compute $sign({\mathbf{H}})$ as:
\begin{equation}
\begin{aligned}
{\mathbf{H}}_{k+1} &= \frac{1}{2}{\mathbf{H}}_{k}(3{\mathbf{I}}-{\mathbf{H}}_{k}^2)\\
&=\frac{1}{2}\begin{bmatrix}
{\mathbf{B}}_{k}{(}3{\mathbf{I}}{-}{\mathbf{B}}_{k}^2{)} & 3{\mathbf{C}}_{k}-{\mathbf{B}}_{k}{(}{\mathbf{B}}_{k}{\mathbf{C}}_{k}{-}{\mathbf{C}}_{k}{\mathbf{B}}_{k}{)}{-}{\mathbf{C}}_{k}{\mathbf{B}}_{k}^2 \\
\mathbf{0} & -{\mathbf{B}}_{k}{(}3{\mathbf{I}}{-}{\mathbf{B}}_{k}^2{)}
\end{bmatrix}
\end{aligned}
\end{equation}
Lemma~\ref{sign_2}.2 indicates an alternative approach to compute the sign function as:
\begin{equation}
\begin{aligned}
sign({\mathbf{H}}) &= sign\Big(\begin{bmatrix}
{\mathbf{B}} & {\mathbf{C}}\\
\mathbf{0} & -{\mathbf{B}}
\end{bmatrix}\Big)\\
& = \begin{bmatrix}
{\mathbf{I}} & -{\mathbf{X}}\\
\mathbf{0} & {\mathbf{I}}
\end{bmatrix}
sign\Big(
\begin{bmatrix}
{\mathbf{B}} & \mathbf{0}\\
\mathbf{0} & -{\mathbf{B}}
\end{bmatrix}
\Big)
\begin{bmatrix}
{\mathbf{I}} & -{\mathbf{X}}\\
\mathbf{0} & {\mathbf{I}}
\end{bmatrix}^{-1} \\
& = \begin{bmatrix}
{\mathbf{I}} & -{\mathbf{X}}\\
\mathbf{0} & {\mathbf{I}}
\end{bmatrix}
\begin{bmatrix}
{\mathbf{I}} & \mathbf{0}\\
\mathbf{0} & -{\mathbf{I}}
\end{bmatrix}
\begin{bmatrix}
{\mathbf{I}} & {\mathbf{X}}\\
\mathbf{0} & {\mathbf{I}}
\end{bmatrix} \\
&=\begin{bmatrix}
{\mathbf{I}} & 2 {\mathbf{X}}\\
\mathbf{0} & -{\mathbf{I}}
\end{bmatrix}
\end{aligned}
\end{equation}
where ${\mathbf{X}}$ is the solution of the Lyapunov equation ${\mathbf{B}}{\mathbf{X}}{+}{\mathbf{X}}{\mathbf{B}}{=}{\mathbf{C}}$. The above two equations define the coupled iterations and their convergence.

\subsection{Equivalence of Two Sets of MPA}

\begin{duplicate}
The diagonal MPA $\frac{1}{\sqrt{||{\mathbf{A}}||_{\rm F}}}{\mathbf{S}}_{N}^{-1}{\mathbf{R}}_{M}$ is equivalent to the diagonal MPA $\frac{1}{\sqrt{||{\mathbf{A}}||_{\rm F}}}{\mathbf{P}}_{M}^{-1}{\mathbf{Q}}_{N}$, and the relations $p_{m}{=}-s_{n}$ and $q_{n}{=}-r_{m}$ hold for any $m{=}n$.
\end{duplicate}
\begin{proof}
Though Pad\'e approximants are derived from a finite Taylor series, they are asymptotic to their infinite Taylor series~\cite{van2006pade}. Let $f(z){=}(1-z)^{\frac{1}{2}}$ and $f(z)^{-1}{=}(1-z)^{-\frac{1}{2}}$. We have the relation:
\begin{equation}
\begin{gathered}
\frac{1+\sum_{m=1}^{M}r_{m}z^{m}}{1+\sum_{n=1}^{N}s_{n}z^{n}} = f(z)^{-1} +R(z^{M+N+1})\\
\frac{1-\sum_{m=1}^{M}p_{m}z^{m}}{1-\sum_{n=1}^{N}q_{n}z^{n}} =f(z) +R(z^{M+N+1})\\
\end{gathered}
\end{equation}
where $R(z^{M+N+1})$ is the discarded higher-order term. Since $f(z)=\frac{1}{f(z)^{-1}}$, we have:
\begin{equation}
\frac{1+\sum_{m=1}^{M}r_{m}z^{m}}{1+\sum_{n=1}^{N}s_{n}z^{n}}=\frac{1-\sum_{n=1}^{N}q_{n}z^{n}}{1-\sum_{m=1}^{M}p_{m}z^{m}}.
\end{equation}
Now we have two sets of Pad\'e approximants on both sides. Since the numerator and denominator of Pad\'e approximants are relatively prime to each other by definition~\cite{baker1970pade}, the two sets of Pad\'e approximants are equivalent and we have:
\begin{equation}
p_{m}=-s_{n},\ q_{n}=-r_{m}
\end{equation}
Generalized to the matrix case, this leads to:
\begin{equation}
{\mathbf{P}}_{M}={\mathbf{S}}_{N},\ {\mathbf{Q}}_{N}={\mathbf{R}}_{M}.
\end{equation}
Therefore, we also have ${\mathbf{S}}_{N}^{-1}{\mathbf{R}}_{M}{=}{\mathbf{P}}_{M}^{-1}{\mathbf{Q}}_{N}$. The two sets of MPA are actually the same representation when $m{=}n$.
\end{proof}

\subsection{Equivalence of Newton-Schulz Iteration}

\begin{duplicate}
The one-variable NS iteration of~\cite{huang2019iterative,bini2005algorithms} is equivalent to the two-variable NS iteration of~\cite{li2017second,lin2017improved,higham2008functions}.
\end{duplicate}
\begin{proof}
For the two-variable NS iteration, the coupled iteration is computed as:
\begin{equation}
    {\mathbf{Y}}_{k+1}=\frac{1}{2}{\mathbf{Y}}_{k} (3{\mathbf{I}} - {\mathbf{Z}}_{k}{\mathbf{Y}}_{k}), {\mathbf{Z}}_{k+1}=\frac{1}{2}(3{\mathbf{I}}-{\mathbf{Z}}_{k}{\mathbf{Y}}_{k}){\mathbf{Z}}_{k}
    \label{prop_ns_two}
\end{equation}
where ${\mathbf{Y}}_{k}$ and ${\mathbf{Z}}_{k}$ converge to ${\mathbf{A}}^{\frac{1}{2}}$ and ${\mathbf{A}}^{-\frac{1}{2}}$, respectively. The two variables are initialized as ${\mathbf{Y}}_{0}{=}\frac{{\mathbf{A}}}{||{\mathbf{A}}||_{\rm F}}$ and ${\mathbf{Z}}_{0}{=}{\mathbf{I}}$. Since the two variables satisfy the relation ${\mathbf{Z}}_{k}^{-1}{\mathbf{Y}}_{k}{=}\frac{{\mathbf{A}}}{||{\mathbf{A}}||_{\rm F}}$, we can replace ${\mathbf{Y}}_{k}$ in~\cref{prop_ns_two} with ${\mathbf{Z}}_{k}\frac{{\mathbf{A}}}{||{\mathbf{A}}||_{\rm F}}$:
\begin{equation}
    {\mathbf{Z}}_{k+1}=\frac{1}{2}(3{\mathbf{I}}-{\mathbf{Z}}_{k}^{2}\frac{{\mathbf{A}}}{||{\mathbf{A}}||_{\rm F}}){\mathbf{Z}}_{k}
\end{equation}
Notice that ${\mathbf{A}}$ and ${\mathbf{Z}}_{k}$ have the same eigenspace and their matrix product commutes, \emph{i.e.,} ${\mathbf{A}}{\mathbf{Z}}_{k}{=}{\mathbf{Z}}_{k}{\mathbf{A}}$. Therefore, the above equation can be further simplified as:
\begin{equation}
    {\mathbf{Z}}_{k+1}=\frac{1}{2}(3{\mathbf{Z}}_{k}-{\mathbf{Z}}_{k}^{3}\frac{{\mathbf{A}}}{||{\mathbf{A}}||_{\rm F}})
\end{equation}
As indicated above, the two seemingly different NS iterations are in essence equivalent.
\end{proof}

\section{Baselines}
\label{app:baselines}

In the experiment section, we compare our two proposed methods with the following baselines:
\begin{itemize}
    \item Power Iteration (PI). It is suggested in the original So-ViT to compute only the dominant eigenpair.
    \item SVD-PI~\cite{wang2019backpropagation} that uses PI to compute the gradients of the SVD.
    \item SVD-Taylor~\cite{wang2021robust,song2021approximate} that applies the Taylor polynomial to approximate the gradients.
    \item SVD-Pad\'e~\cite{song2021approximate} that proposes to closely approximate the SVD gradients using Pad\'e approximants. Notice that our MTP/MPA used in the FP is fundamentally different from the Taylor polynomial or Pad\'e approximants used in the BP of SVD-Pad\'e. For our method, we use the Matrix Taylor Polynomial (MTP) and the Matrix Pad\'e Approximants (MPA) to derive the matrix square root in the FP. For SVD-Pad\'e, the scalar Taylor polynomial and scalar Pad\'e approximants are used to approximate the gradient $\frac{1}{\lambda_{i}-\lambda_{j}}$ in the BP. That is to say, the technique is only used to approximate the gradient, and no back-propagation through the Taylor polynomial or Pad\'e approximants is involved.
    \item NS iteration~\cite{schulz1933iterative,higham2008functions} that uses the Newton-Schulz iteration to compute the matrix square root. It has been widely applied in different tasks, including covariance pooling~\cite{li2018towards} and ZCA whitening~\cite{huang2018decorrelated}. We note that although~\cite{huang2019iterative} and~\cite{higham2008functions} use different forms of the NS iteration, the two representations are equivalent to each other (see the proof in the paper).
The modified NS iteration in~\cite{huang2019iterative} simply replaces ${\mathbf{Y}}_{k}$ with ${\mathbf{Z}}_{k}{\mathbf{A}}$ and re-formulates the iteration using one variable. The computational complexity is still the same.
\end{itemize}
As the ordinary differentiable SVD suffers from the gradient explosion issue and easily causes the program to fail, we do not take it into account for comparison.

Unlike previous methods such as the SVD and the NS iteration, our MPA-Lya/MTP-Lya does not have a consistent FP and BP algorithm. However, we do not think this brings any caveat to the stability or the performance. Our MTP and MPA do not need a coupled iteration in the FP and always have the gradient back-propagating through $\mathbf{A}^{\frac{1}{2}}$ or $\mathbf{A}^{-\frac{1}{2}}$ in the BP, which helps to guarantee the training stability. Moreover, our ablation study implies that our BP Lyapunov solver approximates the real gradient very well (\emph{i.e.,} $||{\mathbf{B}}_{k}{-}{\mathbf{I}}||_{\rm F}{<}3e{-}7$ and $||0.5{\mathbf{C}}_{k}{-}{\mathbf{X}}||_{\rm F}{<}7e{-}6$). Also, our extensive experiments demonstrate the superior performance. In light of these experimental results, we argue that as long as the BP algorithm is accurate enough, the inconsistency between the BP and the FP is not an issue.

\section{Experimental Settings}
\label{app:exp_set}

All the source code is implemented in PyTorch. For the SVD methods, the forward eigendecomposition is performed on the CPU using the official PyTorch function \textsc{torch.svd}, which calls LAPACK's routine \textit{gesdd} that uses the divide-and-conquer algorithm for the fast calculation. All the numerical tests are conducted on a single workstation equipped with a Tesla K40 GPU and a 6-core Intel(R) Xeon(R) CPU @ 2.20GHz.

For our method throughout all the experiments, in the forward pass, we match the MTP to the power series of degree $11$ and set the degree of both the numerator and the denominator of our MPA to $5$. We keep iterating $8$ times for our backward Lyapunov solver. Now we turn to the implementation details for each experiment in the paper.

\subsection{Decorrelated Batch Normalization}

\begin{figure}[htbp]
    \centering
    \includegraphics[width=0.5\linewidth]{imgs/arch_deBN.jpg}
    \caption{The architecture changes of the ResNet models in the experiment of ZCA whitening. The decorrelated batch normalization layer is inserted after the first convolutional layer. The kernel sizes, the stride of the first convolutional layer, and the stride of the first ResNet block are changed correspondingly.}
    \label{fig:arch_BN}
\end{figure}

Fig.~\ref{fig:arch_BN} displays the detailed architecture changes of the ResNet. As suggested by~\cite{wang2020deep}, we truncate the Taylor polynomial to degree $20$ for SVD-Taylor. To make the Pad\'e approximant match the same degree as the Taylor polynomial, we set the degree of both the numerator and the denominator to $10$ for SVD-Pad\'e. For SVD-PI, the number of iterations is also set to $20$. For the NS iteration, according to the setting in~\cite{li2018towards,huang2018decorrelated}, we set the number of iterations to $5$. The other experimental settings follow the implementation in~\cite{wang2021robust}. We use the workstation equipped with a Tesla K40 GPU and a 6-core Intel(R) Xeon(R) CPU @ 2.20GHz for training.

Notice that in our previous conference paper, we first calculate the matrix square root ${\mathbf{A}}^{\frac{1}{2}}$ and then compute ${\mathbf{X}}_{whitened}$ by solving the linear system ${\mathbf{A}}^{\frac{1}{2}}{\mathbf{X}}_{whitened}{=}{\mathbf{X}}$.
Thanks to the algorithmic extension to the inverse square root, we can directly compute ${\mathbf{A}}^{-\frac{1}{2}}$ in this paper.

\subsection{Second-order Vision Transformer}

We use 8 Tesla K40 GPUs for the distributed training, and the NVIDIA Apex mixed-precision trainer is used. Except for the spectral layer, which uses single precision (\emph{i.e.,} float32), the other layers use half precision (\emph{i.e.,} float16) to accelerate the training. Other implementation details follow the experimental setting of the original So-ViT~\cite{xie2021so}.

Following the experiment of covariance pooling for CNNs~\cite{song2021approximate}, the Taylor polynomial is truncated to degree $100$ for SVD-Taylor, and the degree of both the numerator and the denominator of the Pad\'e approximants is set to $50$ for SVD-Pad\'e. The number of iterations of SVD-PI is set to $100$. In the experiment of covariance pooling, more terms of the Taylor series are used because the covariance pooling meta-layer requires more accurate gradient estimation~\cite{song2021approximate}.

For the SVD-based methods, double precision is usually required to ensure an effective numerical representation of the eigenvalues. Using a lower precision would make the model fail to converge at the beginning of the training~\cite{song2021approximate}. This is particularly severe for vision transformers, which are known to be slow and hard to converge in the early training stage. One may consider casting the tensor into double precision (64 bits) to alleviate this issue. However, this would trigger much larger gradients and introduce round-off errors when the gradient is passed to the previous layer in half precision (16 bits). To avoid this caveat, we first apply the NS iteration to train the network for $50$ epochs, then switch to the corresponding SVD method and continue the training till the end. This hybrid approach can avoid the non-convergence of the SVD methods at the beginning of the training phase.

\subsection{Global Covariance Pooling}

For the experiments on large-scale and fine-grained image recognition, we refer to~\cite{song2021approximate} for all the experimental settings. In the video action recognition experiment~\cite{gao2021temporal}, the number of iterations for the NS iteration is set to $5$. Other implementation details are unchanged.
\subsection{Neural Style Transfer}

\begin{table*}[htbp]
\caption{The detailed LPIPS~\cite{zhang2018perceptual} score and user preference (\%) on each subset of the Artworks dataset.}
\centering
\resizebox{0.9\linewidth}{!}{
\begin{tabular}{r|c|c|c|c|c|c|c|c|c|c}
\hline
\multirow{2}*{Methods} & \multicolumn{5}{c|}{LPIPS~\cite{zhang2018perceptual} Score ($\uparrow$)} & \multicolumn{5}{c}{User Preference ($\uparrow$)} \\
\cline{2-11}
& Cezanne & Monet & Vangogh & Ukiyoe & Average & Cezanne & Monet & Vangogh & Ukiyoe & Average \\
\hline
SVD-Taylor &0.4937 &0.4820 &\textbf{0.6074} &0.5274 & 0.5276 & 15 &16 &\textbf{25} &9 &16.25\\
SVD-Pad\'e &0.6179 &0.4783 &0.5307 &0.5419 &0.5422 & \textbf{28} &13 &15 & 21& 19.25\\
NS iteration &0.5328 &\textbf{0.5329} &0.5386 &0.6270 & 0.5578&11 & 18 &21 & 18 & 17.00\\
\hline
Our MPA-Lya &\textbf{0.6332} &0.5291 &0.4511 &\textbf{0.6325} & \textbf{0.5615} & 25 &\textbf{29} &18 &\textbf{27} & \textbf{24.75}\\
Our MTP-Lya &0.6080 &0.4826 &0.4796 &{0.6253} & 0.5489 & 17&21 &17 & 19 & 18.50 \\
\hline
\end{tabular}
}
\label{tab:style_transfer_full}
\end{table*}

\begin{figure*}[htbp]
    \centering
    \includegraphics[width=0.85\linewidth]{imgs/style_transfer_visual_large.png}
    \caption{More exemplary visualizations on the Artworks~\cite{isola2017image} dataset. Our methods generate sharper images with more coherent styles and better visual appeal. The red rectangles indicate regions with subtle details.}
    \label{fig:style_transfer_visual_large}
\end{figure*}

For the loss functions, we follow the settings in~\cite{cho2019image} and use the cycle-consistent reconstruction loss in both the latent and the pixel space. The image is resized to the resolution of $216{\times}216$ before being passed to the network, and the model is trained for $100,000$ iterations. The batch size is set to $4$. Table~\ref{tab:style_transfer_full} and Fig.~\ref{fig:style_transfer_visual_large} present the detailed quantitative evaluation and more visual comparisons, respectively.

As suggested in~\cite{li2017universal,wang2020diversified}, we use the LPIPS~\cite{zhang2018perceptual} score and the user preference as the evaluation metrics. For the LPIPS metric, we compute the score between each pair of the transferred image and the content image. A higher LPIPS score implies that the image carries less content information but more style information. For the user study, we randomly select $100$ images from each dataset and ask $20$ volunteers to vote for the image that best characterizes the style information. In cases where a volunteer thinks that none of the images correctly carries the style, he/she can abstain and not vote for any of them.

\section{Comparison of Lyapunov Solver against Implicit Function and Automatic Differentiation}

Besides our proposed custom Lyapunov gradient solver, one may consider alternative gradient computation schemes, such as reverse-mode automatic differentiation (RMAD) and the implicit function (IF). For the RMAD, the backward pass indeed takes roughly the same operation costs as the forward pass. Considering that our MPA uses two sets of matrix power polynomials and one matrix inverse, using the RMAD for the gradient computation would be less efficient than the Lyapunov solver, which only involves matrix multiplications. Moreover, the gradients of some intermediate variables of the MPA would also be calculated in the RMAD, which would further incur unnecessary memory costs.
For the IF, the function for the matrix square root can be defined as $f({\mathbf{A}},{\mathbf{A}}^\frac{1}{2})=({\mathbf{A}}^{\frac{1}{2}})^2-{\mathbf{A}}$, where ${\mathbf{A}}^{\frac{1}{2}}$ can be regarded as a function of ${\mathbf{A}}$. Performing implicit differentiation and multiplying both sides with $\frac{\partial l}{\partial {\mathbf{A}}^{\frac{1}{2}}}$ leads to the gradient equation $\frac{\partial l}{\partial {\mathbf{A}}}=-(\frac{\partial f}{\partial {\mathbf{A}}^{\frac{1}{2}}})^{-1}\frac{\partial f}{\partial {\mathbf{A}}}\frac{\partial l}{\partial {\mathbf{A}}^{\frac{1}{2}}}$. The memory usage of the IF should be small since only the gradient of $f$ is introduced in the computation. However, the time cost can be high due to the evaluation of the function gradients $\frac{\partial f}{\partial {\mathbf{A}}}$ and $\frac{\partial f}{\partial {\mathbf{A}}^{\frac{1}{2}}}$ as well as the matrix inverse computation.

\begin{table}[htbp]
    \centering
     \caption{Backward speed and memory comparison for batched matrices of size $64{\times}64{\times}64$. We use the MPA for the forward pass, and the evaluation is averaged over $1,000$ randomly generated matrices.}
    \begin{tabular}{c|c|c}
    \toprule
         Method & Time (ms) & Memory (MB) \\
         \hline
         Lyapunov &\textbf{2.19} & \textbf{1.99}\\
         RMAD &5.69 & 3.08 \\
         IF &4.71 & 2.03\\
    \bottomrule
    \end{tabular}
    \label{tab:rmad_if}
\end{table}

Table~\ref{tab:rmad_if} compares the speed and the memory consumption. Our Lyapunov solver outperforms both schemes in terms of speed and memory. The memory usage of the IF is competitive, which also meets our expectation. In general, our Lyapunov-based solver can be viewed as a well-optimized RMAD compiler with the least memory and time consumption.

\section{Stability of Pad\'e Approximants}
\label{app:stability_pade}

In the presence of spurious poles~\cite{stahl1998spurious,baker2000defects}, the Pad\'e approximants are very likely to suffer from the well-known defects of instability. Spurious poles mean that when the approximated function has very close poles and zeros, the corresponding Pad\'e approximants will also have close poles and zeros. Consequently, the Pad\'e approximants become very unstable in the region of defects (\emph{i.e.,} when the input is in the neighborhood of the poles and zeros). Generalized to the matrix case, the spurious poles can happen when the determinant of the matrix denominator is zero (\emph{i.e.,} $\det{({\mathbf{Q}}_{N})}=0$). However, in our case, the approximated function for the matrix square root is $(1-z)^{\frac{1}{2}}$ for $|z|<1$, which only has one zero at $z=1$ and does not have any poles. For the inverse square root, the approximated function $(1-z)^{-\frac{1}{2}}$ has one pole but does not have any zeros. Therefore, spurious poles do not exist in our approximation, and our Pad\'e approximants have no defects. Now we briefly prove this claim for the matrix square root. The proof for the inverse square root can be given similarly, and we omit it here for conciseness.
Consider the denominator of our Pad\'e approximants:
\begin{equation}
    {\mathbf{Q}}_{N}= {\mathbf{I}} - \sum_{n=1}^{N} q_{n} ({\mathbf{I}}-\frac{{\mathbf{A}}}{||{\mathbf{A}}||_{\rm F}})^{n}
    \label{eq:QN_deno}
\end{equation}
Its determinant is calculated as:
\begin{equation}
    \det{({\mathbf{Q}}_{N})}=\prod_{i=1}(1-\sum_{n=1}^{N} q_{n}(1-\frac{\lambda_{i}}{\sqrt{\sum_{i}\lambda_{i}^{2}}})^{n})
    \label{eq:QN_det}
\end{equation}
The coefficients $q_{n}$ of our $[5,5]$ Pad\'e approximant are pre-computed as $[2.25,-1.75,0.54675,-0.05859375,0.0009765625]$. Let $x_{i}$ denote $(1-\frac{\lambda_{i}}{\sqrt{\sum_{i}\lambda_{i}^{2}}})$. Then $x_{i}$ is in the range of $[0,1]$, and we have:
\begin{equation}
    \begin{gathered}
    f(x_{i})=1-2.25x_{i}+1.75x^2_{i}-0.54675x^3_{i}\\+0.05859375x^{4}_{i}-0.0009765625x^{5}_{i}; \\
    \det{({\mathbf{Q}}_{N})}=\prod_{i=1}(f(x_{i})).
    \end{gathered}
    \label{eq:QN_nonzero}
\end{equation}
The polynomial $f(x_{i})$ does not have any zeros in the range of $x_{i}{\in}[0,1]$. The minimum is $0.0108672$, attained at $x_{i}=1$. This implies that $\det{({\mathbf{Q}}_{N})}\neq0$ always holds for any ${\mathbf{Q}}_{N}$, and our Pad\'e approximants do not have any poles. Accordingly, there will be no spurious poles or defects. Hence, our MPA is deemed stable. Throughout our experiments, we do not encounter any instability issue with our MPA.

\iffalse
\subsection{Pad\'e Coefficients}
To better illustrate the stability of our Pad\'e approximants, here we attach the Pad\'e coefficients of the matrix square root from degree $[3,3]$ to degree $[6,6]$. The numerator $p_{m}$ is:
\begin{equation}
\begin{split}
p_{3}=[1.75,-0.875,0.109375].\\
p_{4}=[2.25,-1.6875,0.46875,-0.03515625].\\
p_{5}=[2.75,-2.75,1.203125,-0.21484375,0.0107421875].\\
p_{6}=[3.25,-4.0625,2.4375,-0.7109375,\\
0.0888671875,-0.003173828125].
\end{split}
\end{equation}
And the corresponding denominator $q_{n}$ is:
\begin{equation}
\begin{split}
q_{3}=[1.25,-0.375,0.015625].\\
q_{4}=[1.75,-0.9375,0.15625,-0.00390625].\\
q_{5}=[2.25,-1.75,0.54675,-0.05859375,0.0009765625].\\
q_{6}=[2.75,-2.8125,1.3125,-0.2734375, \\
0.0205078125,-0.000244140625].\\
\end{split}
\end{equation}
Notice that for the inverse square root, we have the relation:
\begin{equation}
    p_{m}=-s_{n}, q_{n}=-r_{m}
\end{equation}
Similar to the deduction in~\cref{eq:QN_deno,eq:QN_det,eq:QN_nonzero}, we can get the polynomial for deriving $\det{({\mathbf{Q}}_{N})}$ as:
\begin{equation}
\begin{split}
f(x_{i})_{q_{3}}=1-1.25x_{i}+0.375x^2_{i}-0.015625x^3_{i}\\
f(x_{i})_{q_{4}}=1-1.75x_{i}+0.9375x^2_{i}-0.15625x^3_{i}\\+0.00390625x^{4}_{i}\\
f(x_{i})_{q_{5}}=1-2.25x_{i}+1.75x^2_{i}-0.54675x^3_{i}\\+0.05859375x^{4}_{i}-0.0009765625x^{5}_{i}\\
f(x_{i})_{q_{6}}{=}1{-}2.75x_{i}{+}2.8125x^2_{i}{-}1.3125x^3_{i}\\{+}0.2734375x^{4}_{i}{-}0.0205078125x^{5}_{i}{+}0.000244140625x^{6}_{i}
\end{split}
\end{equation}
It can be easily verified that all of the polynomials monotonically decrease in the range of $x_{i}{\in}[0,1]$ and have their minimal at $x_{i}{=}1$ as:
\begin{equation}
\begin{gathered}
\min_{x_{i}\in[0,1]} f(x_{i})_{q_{3}} = 0.109375. \\
\min_{x_{i}\in[0,1]} f(x_{i})_{q_{4}} = 0.03515625. \\
\min_{x_{i}\in[0,1]} f(x_{i})_{q_{5}} = 0.0108672. \\
\min_{x_{i}\in[0,1]} f(x_{i})_{q_{6}} = 0.003173828125.
\end{gathered}
\end{equation}
As indicated above, all the polynomials are positive and we have $\det{({\mathbf{Q}}_{N})}\neq0$ consistently holds for the degree $N{=}{3,4,5,6}$.
\fi

\section{Introduction}\label{sec:introduction}}
Consider a positive semi-definite matrix ${\mathbf{A}}$.
The principal square root ${\mathbf{A}}^{\frac{1}{2}}$ and the inverse square root ${\mathbf{A}}^{-\frac{1}{2}}$ are of practical interest mainly because some desired spectral properties can be obtained by such transformations. An exemplary illustration is given in Fig.~\ref{fig:cover}. As can be seen, the matrix square root can shrink/stretch the feature variances along the directions of the principal components, which is known as an effective spectral normalization for covariance matrices. The inverse square root, on the other hand, can be used to whiten the data, \emph{i.e.,} make the data have unit variance in each dimension. These appealing spectral properties are very useful in many computer vision applications. In Global Covariance Pooling (GCP)~\cite{lin2017improved,li2017second,li2018towards,song2021approximate} and other related high-order representation methods~\cite{xie2021so,gao2021temporal}, the matrix square root is often used to normalize the high-order feature, which can benefit classification tasks like general visual recognition~\cite{li2017second,li2018towards,xie2021so}, fine-grained visual categorization~\cite{song2022eigenvalues}, and video action recognition~\cite{gao2021temporal}. The inverse square root is used as the whitening transform to eliminate the feature correlation, which is widely applied in decorrelated Batch Normalization (BN)~\cite{huang2018decorrelated,huang2019iterative,huang2020investigation} and other related models that involve the whitening transform~\cite{siarohin2018whitening,ermolov2021whitening}. In the field of neural style transfer, both the matrix square root and its inverse are adopted to perform the successive Whitening and Coloring Transform (WCT) to transfer the style information for better generation fidelity~\cite{li2017universal,cho2019image,choi2021robustnet}.

\begin{figure}[tbp]
    \centering
    \includegraphics[width=0.8\linewidth]{imgs/MatSqrt_Cover.jpg}
    \caption{Exemplary visualization of the matrix square root and its inverse. Given the original data ${\mathbf{X}}{\in}\mathbb{R}^{2{\times}n}$, the matrix square root performs an effective spectral normalization by stretching the data along the axes of small variances and squeezing the data in the directions with large variances, while the inverse square root transforms the data into an uncorrelated structure that has unit variance in all directions.}
    \label{fig:cover}
\end{figure}

To compute the matrix square root, the standard method is via the Singular Value Decomposition (SVD). Given the real symmetric matrix ${\mathbf{A}}$, its matrix square root is computed as:
\begin{equation}
    {\mathbf{A}}^{\frac{1}{2}} =({\mathbf{U}}{\mathbf{\Lambda}}{\mathbf{U}}^{T})^{\frac{1}{2}} = {\mathbf{U}}{\mathbf{\Lambda}}^{\frac{1}{2}}{\mathbf{U}}^{T}
\end{equation}
where ${\mathbf{U}}$ is the eigenvector matrix, and ${\mathbf{\Lambda}}$ is the diagonal eigenvalue matrix. As derived by Ionescu~\emph{et al.}~\cite{ionescu2015training}, the partial derivative of the eigendecomposition is calculated as:
\begin{equation}
    \frac{\partial l}{\partial {\mathbf{A}}} = {\mathbf{U}}\Big({\mathbf{K}}^{T}\odot({\mathbf{U}}^{T}\frac{\partial l}{\partial{\mathbf{U}}})+(\frac{\partial l}{\partial{\mathbf{\Lambda}}})_{\rm diag}\Big){\mathbf{U}}^{T}
    \label{svd_back}
\end{equation}
where $l$ is the loss function, $\odot$ denotes the element-wise product, and $()_{\rm diag}$ represents the operation of setting the off-diagonal entries to zero.
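As a point of reference, a minimal PyTorch sketch of this SVD-based computation is given below; the helper name \texttt{svd\_sqrt} is hypothetical, and the eigendecomposition is used since the input is symmetric. Back-propagating through it involves exactly the $\nicefrac{1}{(\lambda_{i}-\lambda_{j})}$ terms discussed next.
\begin{verbatim}
import torch

def svd_sqrt(A):
    # matrix square root of a symmetric PSD matrix via eigendecomposition;
    # autograd through eigh contains 1/(lambda_i - lambda_j) factors that
    # can explode when two eigenvalues are close
    L, U = torch.linalg.eigh(A)
    return U @ torch.diag_embed(L.clamp_min(0).sqrt()) @ U.transpose(-1, -2)
\end{verbatim}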
Despite the long-studied theories and well-developed algorithms of the SVD, there exist two obstacles when integrating it into deep learning frameworks. One issue is the back-propagation instability. For the matrix ${\mathbf{K}}$ defined in~\cref{svd_back}, its off-diagonal entry is $K_{ij}{=}\nicefrac{1}{(\lambda_{i}-\lambda_{j})}$, where $\lambda_{i}$ and $\lambda_{j}$ are the involved eigenvalues. When the two eigenvalues are close and small, the gradient is very likely to explode, \emph{i.e.,} $K_{ij}{\rightarrow}{\infty}$. This issue has been solved by some methods that use approximation techniques to estimate the gradients~\cite{wang2019backpropagation,wang2021robust,song2021approximate}. The other problem is the expensive time cost of the forward eigendecomposition. As the SVD is not supported well by GPUs~\cite{lahabar2009singular}, performing the eigendecomposition on deep learning platforms is rather time-consuming. Incorporating the SVD into deep models could thus add an extra burden to the training process. Particularly for batched matrices, modern deep learning frameworks, such as TensorFlow and PyTorch, give limited optimization for the matrix decomposition within the mini-batch. They inevitably use a for-loop to conduct the SVD one matrix after another. However, how to efficiently perform the SVD in the context of deep learning has received little attention from the research community.

To avoid explicit eigendecomposition, one commonly used alternative is the Newton-Schulz iteration (NS iteration)~\cite{schulz1933iterative,higham2008functions}, which modifies the ordinary Newton iteration by replacing the matrix inverse but preserving the quadratic convergence. Compared with the SVD, the NS iteration is rich in matrix multiplication and more GPU-friendly. Thus, this technique has been widely used to approximate the matrix square root in different applications~\cite{lin2017improved,li2018towards,huang2019iterative}. The forward computation relies on the following coupled iterations:
\begin{equation}
    {\mathbf{Y}}_{k+1}=\frac{1}{2}{\mathbf{Y}}_{k} (3{\mathbf{I}} - {\mathbf{Z}}_{k}{\mathbf{Y}}_{k}), {\mathbf{Z}}_{k+1}=\frac{1}{2}(3{\mathbf{I}}-{\mathbf{Z}}_{k}{\mathbf{Y}}_{k}){\mathbf{Z}}_{k}
    \label{eq:ns_fp}
\end{equation}
where ${\mathbf{Y}}_{k}$ and ${\mathbf{Z}}_{k}$ converge to ${\mathbf{A}}^{\frac{1}{2}}$ and ${\mathbf{A}}^{-\frac{1}{2}}$, respectively. Since the NS iteration only converges locally (\emph{i.e.,} $||{\mathbf{A}}||_{2}{<}1$), we need to pre-normalize the initial matrix and post-compensate the resultant approximation as ${\mathbf{Y}}_{0}{=}\frac{1}{||{\mathbf{A}}||_{\rm F}}{\mathbf{A}}$ and ${\mathbf{A}}^{\frac{1}{2}}{=}\sqrt{||{\mathbf{A}}||_{\rm F}}{\mathbf{Y}}_{k}$. Each forward iteration involves $3$ matrix multiplications, which is more efficient than the forward pass of the SVD. However, the backward pass of the NS iteration takes $14$ matrix multiplications per iteration. Considering that the NS iteration often takes $5$ iterations to achieve reasonable performances~\cite{li2018towards,huang2019iterative}, the backward pass is much more time-consuming than the backward algorithm of the SVD. The speed improvement could be larger if a more efficient backward algorithm were developed.

To address the drawbacks of the SVD and the NS iteration, \emph{i.e.,} the low efficiency in either the forward or the backward pass, we derive two methods \textbf{that are efficient in both forward and backward propagation} to compute the differentiable matrix square root and its inverse.
In the forward pass (FP), we propose using the Matrix Taylor Polynomial (MTP) and the Matrix Pad\'e Approximants (MPA) for approximating the matrix square root. The former approach is slightly faster, but the latter is more numerically accurate. Both methods yield considerable speed-ups compared with the SVD or the NS iteration in the forward computation. The proposed MTP and MPA can also be used to approximate the inverse square root without any additional computational cost. For the backward pass (BP), we consider the gradient function as a Lyapunov equation and propose an iterative solution using the matrix sign function. The backward pass costs fewer matrix multiplications and is more computationally efficient than the NS iteration. Our proposed iterative Lyapunov solver applies to both the matrix square root and the inverse square root. The only difference is that deriving the gradient of the inverse square root requires $3$ more matrix multiplications than computing that of the matrix square root.

Through a series of numerical tests, we show that the proposed MTP-Lya and MPA-Lya deliver consistent speed improvements for different batch sizes, matrix dimensions, and hyper-parameters (\emph{e.g.,} degrees of the power series to match and iteration times). Moreover, our proposed MPA-Lya consistently gives a better approximation of the matrix square root and its inverse than the NS iteration. Besides the numerical tests, we conduct extensive experiments in a number of computer vision applications, including decorrelated batch normalization, second-order vision transformer, global covariance pooling for large-scale and fine-grained image recognition, attentive global covariance pooling for video action recognition, and neural style transfer. Our methods can achieve competitive performances against the SVD and the NS iteration with the least amount of time overhead. Our MPA is suitable in use cases where high precision is needed, while our MTP works in applications where the accuracy is less critical but the efficiency is more important.

The contributions of the paper are twofold:
\begin{itemize}
    \item We propose two fast methods that compute the differentiable matrix square root and the inverse square root. The forward propagation relies on the matrix Taylor polynomial or the matrix Pad\'e approximant, while an iterative backward gradient solver is derived from the Lyapunov equation using the matrix sign function.
    \item Our proposed algorithms are validated by a series of numerical tests and several real-world computer vision applications. The experimental results demonstrate that our methods have a faster calculation speed and also achieve very competitive performances.
\end{itemize}

This paper is an expanded version of~\cite{song2022fast}. In the conference paper~\cite{song2022fast}, the proposed fast algorithms only apply to the matrix square root ${\mathbf{A}}^{\frac{1}{2}}$. For the application of the inverse square root ${\mathbf{A}}^{-\frac{1}{2}}$, we had to solve a linear system or compute the matrix inverse. However, neither technique is GPU-efficient enough, and both could add extra computational burdens to the training. In this extended manuscript, we address this drawback and extend our algorithm to the case of the inverse square root, which avoids the expensive computation and allows for faster calculation in more application scenarios.
Compared with computing the matrix square root, computing the inverse square root has the same time complexity in the FP and requires $3$ more matrix multiplications in the BP. The paper thus presents a complete solution to the efficiency issue of the differentiable spectral layer. Besides the algorithm extension, our method is validated in more computer vision applications: global covariance pooling for image/video recognition and neural style transfer. We also shed light on the peculiar incompatibility of the NS iteration and the Lyapunov solver discussed in Sec.~\ref{sec:lya_backward}. The rest of the paper is organized as follows: Sec.~\ref{sec:related} describes the computational methods and applications of the differentiable matrix square root and its inverse. Sec.~\ref{sec:method} introduces our method that computes the end-to-end matrix square root, and Sec.~\ref{sec:method_inverse} presents the extension of our method to the inverse square root. Sec.~\ref{sec:exp} provides the experimental results, the ablation studies, and some in-depth analysis. Finally, Sec.~\ref{sec:conclusion} summarizes the conclusions. \section{Related Work}\label{sec:related} In this section, we recap the previous approaches that compute the differentiable matrix square root and the inverse square root, followed by a discussion of their usage in some applications of deep learning and computer vision. \subsection{Computational Methods} Ionescu~\emph{et al.}~\cite{ionescu2015training,ionescu2015matrix} first formulate the theory of matrix back-propagation, making it possible to integrate a spectral meta-layer into neural networks. Existing approaches that compute the differentiable matrix square root and its inverse are mainly based on the SVD or the NS iteration. The SVD calculates the accurate solution but suffers from backward instability and expensive time cost, whereas the NS iteration computes an approximate solution but is more GPU-friendly. For the backward algorithm of SVD, several methods have been proposed to resolve the gradient explosion issue~\cite{wang2019backpropagation,Dang18a,Dang20a,wang2021robust,song2021approximate}. Wang~\emph{et al.}~\cite{wang2019backpropagation} propose to apply Power Iteration (PI) to approximate the SVD gradient. Recently, Song~\emph{et al.}~\cite{song2021approximate} propose to rely on Pad\'e approximants to closely estimate the backward gradient of SVD. To avoid explicit eigendecomposition, Lin~\emph{et al.}~\cite{lin2017improved} propose to substitute the SVD with the NS iteration. Following this work, Li~\emph{et al.}~\cite{li2017second} and Huang~\emph{et al.}~\cite{huang2018decorrelated} adopt the NS iteration in the tasks of global covariance pooling and decorrelated batch normalization, respectively. For the backward pass of the differentiable matrix square root, Lin~\emph{et al.}~\cite{lin2017improved} also suggest viewing the gradient function as a Lyapunov equation. However, their proposed exact solution is computationally infeasible in practice, and the suggested Bartels-Stewart algorithm~\cite{bartels1972solution} requires explicit eigendecomposition or Schur decomposition, which is again not GPU-friendly. By contrast, our proposed iterative solution using the matrix sign function is more computationally efficient and achieves comparable performance to the Bartels-Stewart algorithm (see the ablation study in Sec.~\ref{sec:abla}).
\subsection{Applications} \subsubsection{Global Covariance Pooling} One successful application of the differentiable matrix square root is Global Covariance Pooling (GCP), a meta-layer inserted before the FC layer of deep models to compute the matrix square root of the feature covariance. Equipped with GCP meta-layers, existing deep models have achieved state-of-the-art performances on both generic and fine-grained visual recognition~\cite{lin2015bilinear,li2017second,lin2017improved,li2018towards,wang2019deep,wang2020deep,song2021approximate,song2022eigenvalues}. Inspired by recent advances of transformers~\cite{vaswani2017attention}, Xie~\emph{et al.}~\cite{xie2021so} integrate the GCP meta-layer into the vision transformer~\cite{dosovitskiy2020image} to exploit the second-order statistics of the high-level visual tokens, which alleviates the need of vision transformers for pre-training on ultra-large-scale datasets. More recently, Gao~\emph{et al.}~\cite{gao2021temporal} propose an attentive and temporal-based GCP model for video action recognition. \subsubsection{Decorrelated Batch Normalization} Another line of research proposes to use ZCA whitening, which applies the inverse square root of the covariance to whiten the feature, as an alternative scheme to standard batch normalization~\cite{ioffe2015batch}. The whitening procedure, \emph{a.k.a.} decorrelated batch normalization, not only standardizes the feature but also eliminates the data correlation. Decorrelated batch normalization can improve both the optimization efficiency and the generalization ability of deep neural networks~\cite{huang2018decorrelated,siarohin2018whitening,huang2019iterative,pan2019switchable,huang2020investigation,ermolov2021whitening,huang2021group,zhang2021stochastic,cho2021improving}. \subsubsection{Whitening and Coloring Transform} The WCT~\cite{li2017universal} is another active research field where the differentiable matrix square root and its inverse are widely used. In general, the WCT successively performs the whitening transform (using the inverse square root) and the coloring transform (using the matrix square root) on the multi-scale features to preserve the content of the current image while carrying the style of another image. During the past few years, WCT methods have achieved remarkable progress in universal style transfer~\cite{li2017universal,li2018closed,wang2020diversified}, domain adaptation~\cite{abramov2020keep,choi2021robustnet}, and image translation~\cite{ulyanov2017improved,cho2019image}. Besides the three main applications discussed above, there are also some smaller applications, such as semantic segmentation~\cite{sun2021second} and super resolution~\cite{Dai_2019_CVPR}. \section{Fast Differentiable Matrix Square Root}\label{sec:method} Table~\ref{tab:notation} summarizes the notation we will use from now on. This section presents the forward pass and the backward propagation of our fast differentiable matrix square root. For the inverse square root, we introduce the derivation in Sec.~\ref{sec:method_inverse}. \subsection{Forward Pass} \subsubsection{Matrix Taylor Polynomial} We begin by motivating the Taylor series for the scalar case.
Consider the following power series: \begin{equation} (1-z)^{\frac{1}{2}} = 1 - \sum_{k=1}^{\infty} \Big|\dbinom{\frac{1}{2}}{k}\Big| z^{k} \label{taylor_scalar} \end{equation} where $\dbinom{\frac{1}{2}}{k}$ denotes the binomial coefficients that involve fractions, and the series converges for $|z|{<}1$ according to the Cauchy root test. For the matrix case, the power series can be similarly defined by: \begin{equation} ({\mathbf{I}}-{\mathbf{Z}})^{\frac{1}{2}}={\mathbf{I}} - \sum_{k=1}^{\infty} \Big|\dbinom{\frac{1}{2}}{k}\Big| {\mathbf{Z}}^{k} \label{taylor_matrix} \end{equation} where ${\mathbf{I}}$ is the identity matrix. Substituting ${\mathbf{Z}}$ with $({\mathbf{I}}{-}{\mathbf{A}})$, we obtain: \begin{equation} {\mathbf{A}}^{\frac{1}{2}} = {\mathbf{I}} - \sum_{k=1}^{\infty} \Big|\dbinom{\frac{1}{2}}{k}\Big| ({\mathbf{I}}-{\mathbf{A}})^k \label{mtp_unnorm} \end{equation} Similar to the scalar case, the power series converges only if $||{\mathbf{I}}-{\mathbf{A}}||_{p}{<}1$, where $||\cdot||_{p}$ denotes any vector-induced matrix norm. To circumvent this issue, we can first pre-normalize the matrix ${\mathbf{A}}$ by dividing by $||{\mathbf{A}}||_{\rm F}$. This guarantees the convergence, as $||{\mathbf{I}}{-}\frac{{\mathbf{A}}}{||{\mathbf{A}}||_{\rm F}}||_{p}{<}1$ is always satisfied. Afterwards, the matrix square root ${\mathbf{A}}^{\frac{1}{2}}$ is post-compensated by multiplying by $\sqrt{||{\mathbf{A}}||_{\rm F}}$. Integrated with these two operations, \cref{mtp_unnorm} can be re-formulated as: \begin{equation} {\mathbf{A}}^{\frac{1}{2}} =\sqrt{||{\mathbf{A}}||_{\rm F}}\cdot \Big({\mathbf{I}} - \sum_{k=1}^{\infty} \Big|\dbinom{\frac{1}{2}}{k}\Big| ({\mathbf{I}}-\frac{{\mathbf{A}}}{||{\mathbf{A}}||_{\rm F}})^{k} \Big) \label{mtp_square} \end{equation} Truncating the series at a certain degree $K$ yields the MTP approximation of the matrix square root. For the MTP of degree $K$, $K{-}1$ matrix multiplications are needed. \subsubsection{Matrix Pad\'e Approximant} \begin{figure}[htbp] \centering \includegraphics[width=0.9\linewidth]{imgs/approx_mpa_mtp_ns.png} \caption{The function $(1-z)^{\frac{1}{2}}$ in the range of $|z|<1$ and its approximations, including the Taylor polynomial, the Newton-Schulz iteration, and the Pad\'e approximants. The Pad\'e approximants consistently achieve a better estimation than the other approximation schemes for any possible input value. } \label{fig:approx_mpa_mtp} \end{figure} The MTP enjoys fast calculation, but it converges slowly and sometimes suffers from the so-called ``hump phenomenon'', \emph{i.e.,} the intermediate terms of the series grow quickly but cancel each other in the summation, which results in a large approximation error. Expanding the series to a higher degree does not solve this issue either. The MPA, which adopts two polynomials of smaller degrees to construct a rational approximation, is able to avoid this caveat. To visually illustrate this impact, we depict the approximation of the scalar square root in Fig.~\ref{fig:approx_mpa_mtp}. The Pad\'e approximants consistently deliver a better approximation than the NS iteration and the Taylor polynomial. In particular, when the input is close to the convergence boundary ($z{=}1$), where the NS iteration and the Taylor polynomial suffer from a larger approximation error, our Pad\'e approximants still present a reasonable estimation. This superior property also generalizes to the matrix case.
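Before detailing the MPA computation, the truncated MTP forward pass of \cref{mtp_square} can be summarized by the following minimal sketch; the function name, the default degree, and the SPD-input assumption are ours for illustration.
\begin{verbatim}
import torch

def mtp_sqrt(A, degree=11):
    """MTP approximation of A^{1/2}: truncate Eq. (mtp_square) at `degree`."""
    n = A.shape[-1]
    I = torch.eye(n, dtype=A.dtype, device=A.device)
    norm = torch.linalg.norm(A, ord='fro')
    Z = I - A / norm                     # pre-normalized residual
    # |binom(1/2,k)| via the recurrence binom(1/2,k) = binom(1/2,k-1)*(3/2-k)/k
    coefs, c = [], 1.0
    for k in range(1, degree + 1):
        c *= (0.5 - (k - 1)) / k
        coefs.append(abs(c))
    S, Zk = coefs[0] * Z, Z              # degree-1 term; K-1 matmuls in total
    for ck in coefs[1:]:
        Zk = Zk @ Z
        S = S + ck * Zk
    return torch.sqrt(norm) * (I - S)    # post-compensation
\end{verbatim}
Note that the coefficients depend only on the degree, so in practice they can be computed once and cached.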
\begin{figure} \centering \includegraphics[width=0.7\linewidth]{imgs/pseudo_code.png} \caption{Python-like pseudo-code for computing the Pad\'e coefficients.} \label{fig:pade_code} \end{figure} The MPA is computed as the ratio of two polynomials: a denominator polynomial based on $\sum_{n=1}^{N}q_{n}z^{n}$ and a numerator polynomial based on $\sum_{m=1}^{M}p_{m}z^{m}$. The coefficients $q_{n}$ and $p_{m}$ are pre-computed by matching them to the corresponding Taylor series. Given the scalar power series in~\cref{taylor_scalar}, the coefficients of an $[M,N]$ scalar Pad\'e approximant are computed by matching the first $M{+}N{+}1$ coefficients of the series: \begin{equation} \frac{1-\sum_{m=1}^{M}p_{m}z^{m}}{1-\sum_{n=1}^{N}q_{n}z^{n}} = 1 - \sum_{k=1}^{M+N} \Big|\dbinom{\frac{1}{2}}{k}\Big| z^{k} \label{pade_match} \end{equation} where $p_{m}$ and $q_{n}$ also apply to the matrix case. This matching gives rise to a system of linear equations: \begin{equation} \begin{cases} - \Big|\dbinom{\frac{1}{2}}{1}\Big| - q_{1} = -p_{1},\\ - \Big|\dbinom{\frac{1}{2}}{2}\Big| + \Big|\dbinom{\frac{1}{2}}{1}\Big|q_{1} - q_{2} = -p_{2}, \\ \cdots\cdots \\ - \Big|\dbinom{\frac{1}{2}}{M}\Big| + \Big|\dbinom{\frac{1}{2}}{M-1}\Big|q_{1} + \cdots - q_{M} = -p_{M}. \end{cases} \label{eq:pade_linearsystem} \end{equation} Solving these equations directly determines the coefficients. We give Python-like pseudo-code in Fig.~\ref{fig:pade_code}. The numerator and denominator polynomials of the MPA are then given by: \begin{equation} \begin{gathered} {\mathbf{P}}_{M}= {\mathbf{I}} - \sum_{m=1}^{M} p_{m} ({\mathbf{I}}-\frac{{\mathbf{A}}}{||{\mathbf{A}}||_{\rm F}})^{m},\\ {\mathbf{Q}}_{N}= {\mathbf{I}} - \sum_{n=1}^{N} q_{n} ({\mathbf{I}}-\frac{{\mathbf{A}}}{||{\mathbf{A}}||_{\rm F}})^{n}. \end{gathered} \label{ration_app} \end{equation} Then the MPA approximation of the matrix square root is computed as: \begin{equation} {\mathbf{A}}^{\frac{1}{2}} = \sqrt{||{\mathbf{A}}||_{\rm F}}{\mathbf{Q}}_{N}^{-1}{\mathbf{P}}_{M}. \label{mpa_square} \end{equation} Compared with the MTP, the MPA trades half of the matrix multiplications for one matrix inverse, which slightly increases the computational cost but converges more quickly and delivers a better approximation. Moreover, we note that the matrix inverse can be avoided, as \cref{mpa_square} can be computed more efficiently and in a more numerically stable manner by solving the linear system ${\mathbf{Q}}_{N}{\mathbf{A}}^{\frac{1}{2}}{=} \sqrt{||{\mathbf{A}}||_{\rm F}}{\mathbf{P}}_{M}$. According to Van~\emph{et al.}~\cite{van2006pade}, diagonal Pad\'e approximants (\emph{i.e.,} ${\mathbf{P}}_{M}$ and ${\mathbf{Q}}_{N}$ of the same degree) usually yield a better approximation than non-diagonal ones. Therefore, to match the MPA and MTP of the same degree, we set $M{=}N{=}\frac{K-1}{2}$. \begin{table} \centering \caption{Comparison of forward operations. For the matrix square root and its inverse, our MPA/MTP consumes the same complexity. The cost of $1$ NS iteration is about that of an MTP of $4$ degrees and an MPA of $2$ degrees.}\label{tab:forward_complexity} \begin{tabular}{c|c|c|c} \hline Op. & MTP & MPA & NS iteration \\ \hline Mat. Mul. & $K{-}1$ & $\nicefrac{(K{-}1)}{2}$ & 3 $\times$ \#iters \\ Mat. Inv. & 0 & 1 & 0 \\ \hline \end{tabular} \end{table} Table~\ref{tab:forward_complexity} summarizes the forward computational complexity.
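As a concrete reference, the following sketch first solves the coefficient-matching system of \cref{eq:pade_linearsystem} (written here in its standard Toeplitz form) and then evaluates \cref{ration_app,mpa_square}; it assumes a single SPD matrix and the diagonal case $M{=}N$, and all names are illustrative rather than the paper's exact implementation.
\begin{verbatim}
import numpy as np
import torch

def pade_coefficients(M, N):
    """Coefficients a_m = -p_m and b_n = -q_n (a_0 = b_0 = 1) of the
    [M,N] Pade approximant of (1-z)^{1/2}, assuming the diagonal case M = N."""
    c = np.zeros(M + N + 1)              # Taylor coefficients of (1-z)^{1/2}
    c[0], binom = 1.0, 1.0
    for k in range(1, M + N + 1):
        binom *= (0.5 - (k - 1)) / k
        c[k] = -abs(binom)
    # Toeplitz system: sum_n b_n c_{M+i-n} = 0 for i = 1..N.
    T = np.array([[c[M + i - j] for j in range(1, N + 1)]
                  for i in range(1, N + 1)])
    b = np.concatenate(([1.0], np.linalg.solve(T, -c[M + 1:M + N + 1])))
    # Numerator coefficients by convolution: a_m = sum_n b_n c_{m-n}.
    a = np.array([sum(b[n] * c[m - n] for n in range(min(m, N) + 1))
                  for m in range(M + 1)])
    return a, b

def mpa_sqrt(A, M=4, N=4):
    """MPA approximation of A^{1/2} via Eq. (mpa_square), solving a
    linear system instead of explicitly inverting Q_N."""
    a, b = pade_coefficients(M, N)
    I = torch.eye(A.shape[-1], dtype=A.dtype, device=A.device)
    norm = torch.linalg.norm(A, ord='fro')
    Z = I - A / norm
    P, Q, Zk = float(a[0]) * I, float(b[0]) * I, I
    for k in range(1, max(M, N) + 1):
        Zk = Zk @ Z                      # powers of Z shared by P and Q
        if k <= M:
            P = P + float(a[k]) * Zk
        if k <= N:
            Q = Q + float(b[k]) * Zk
    return torch.linalg.solve(Q, torch.sqrt(norm) * P)
\end{verbatim}
Since the Pad\'e coefficients do not depend on the input matrix, they are computed once offline and reused across the whole training run.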
As suggested by Li~\emph{et al.}~\cite{li2018towards} and Huang~\emph{et al.}~\cite{huang2019iterative}, the number of NS iterations is often set to $5$ so that reasonable performances can be achieved. That is, at the same complexity as the NS iteration, our MTP and MPA can match the power series up to degree $16$. However, as illustrated in Fig.~\ref{fig:fp_sped_err}, our MPA achieves better accuracy than the NS iteration even at degree $8$. This observation implies that our MPA is a better option in terms of both accuracy and speed. \subsection{Backward Pass} Though one can manually derive the gradients of the MPA and MTP, their backward algorithms are computationally expensive as they involve matrix powers up to degree $K$, where $K$ can be arbitrarily large. Relying on the AutoGrad package of deep learning frameworks can be both time- and memory-consuming, since the gradients of intermediate variables would be computed and the matrix inverse of the MPA is involved. To attain a more efficient backward algorithm, we propose to iteratively solve the gradient equation using the matrix sign function. Given the matrix $\mathbf{A}$ and its square root ${\mathbf{A}}^{\frac{1}{2}}$, since we have ${\mathbf{A}}^{\frac{1}{2}}{\mathbf{A}}^{\frac{1}{2}}{=}{\mathbf{A}}$, a perturbation on ${\mathbf{A}}$ leads to: \begin{equation} {\mathbf{A}}^{\frac{1}{2}} d {\mathbf{A}}^{\frac{1}{2}} + d{\mathbf{A}}^{\frac{1}{2}} {\mathbf{A}}^{\frac{1}{2}} = d {\mathbf{A}} \end{equation} Using the chain rule, the gradient function of the matrix square root satisfies: \begin{equation} {\mathbf{A}}^{\frac{1}{2}} \frac{\partial l}{\partial {\mathbf{A}}} + \frac{\partial l}{\partial {\mathbf{A}}} {\mathbf{A}}^{\frac{1}{2}} = \frac{\partial l}{\partial {\mathbf{A}}^{\frac{1}{2}}} \label{gradient_function} \end{equation} As pointed out by Lin~\emph{et al.}~\cite{lin2017improved}, \cref{gradient_function} actually defines the continuous-time Lyapunov equation (${\mathbf{B}}{\mathbf{X}} {+} {\mathbf{X}}{\mathbf{B}} {=} {\mathbf{C}}$), a special case of the Sylvester equation (${\mathbf{B}}{\mathbf{X}} {+} {\mathbf{X}}{\mathbf{D}} {=} {\mathbf{C}}$). The closed-form solution is given by: \begin{equation} vec(\frac{\partial l}{\partial {\mathbf{A}}}) = \Big({\mathbf{A}}^{\frac{1}{2}}\otimes {\mathbf{I}} + {\mathbf{I}} \otimes {\mathbf{A}}^{\frac{1}{2}} \Big)^{-1} vec(\frac{\partial l}{\partial {\mathbf{A}}^{\frac{1}{2}}}) \end{equation} where $vec(\cdot)$ denotes unrolling a matrix into a vector, and $\otimes$ is the Kronecker product. Although the closed-form solution exists in theory, it cannot be computed in practice due to the huge memory consumption of the Kronecker product. Supposing that both ${\mathbf{A}}^{\frac{1}{2}}$ and ${\mathbf{I}}$ are of size $256{\times}256$, the Kronecker product ${\mathbf{A}}^{\frac{1}{2}}{\otimes}{\mathbf{I}}$ would have dimension $256^{2}{\times}256^{2}$, which is infeasible to compute or store. Another approach to solving~\cref{gradient_function} is the Bartels-Stewart algorithm~\cite{bartels1972solution}. However, it requires explicit eigendecomposition or Schur decomposition, which is not GPU-friendly and computationally expensive. To attain a GPU-friendly gradient solver, we propose to use the matrix sign function and iteratively solve the Lyapunov equation. Solving the Sylvester equation via the matrix sign function has long been studied in the literature of numerical analysis~\cite{roberts1980linear,kenney1995matrix,benner2006solving}.
One notable line of research uses the family of Newton iterations. Consider the following continuous Lyapunov equation: \begin{equation} {\mathbf{B}}{\mathbf{X}} + {\mathbf{X}}{\mathbf{B}} = {\mathbf{C}} \label{lyapunov} \end{equation} where ${\mathbf{B}}$ refers to ${\mathbf{A}}^{\frac{1}{2}}$ in~\cref{gradient_function}, ${\mathbf{C}}$ represents $\frac{\partial l}{\partial {\mathbf{A}}^{\frac{1}{2}}}$, and ${\mathbf{X}}$ denotes the sought solution $\frac{\partial l}{\partial {\mathbf{A}}}$. Eq.~(\ref{lyapunov}) can be represented by the following block matrix and its Jordan decomposition: \begin{equation} {\mathbf{H}}=\begin{bmatrix} {\mathbf{B}} & {\mathbf{C}}\\ \mathbf{0} & -{\mathbf{B}} \end{bmatrix} = \begin{bmatrix} {\mathbf{I}} & {\mathbf{X}}\\ \mathbf{0} & {\mathbf{I}} \end{bmatrix} \begin{bmatrix} {\mathbf{B}} & \mathbf{0}\\ \mathbf{0} & -{\mathbf{B}} \end{bmatrix} \begin{bmatrix} {\mathbf{I}} & {\mathbf{X}}\\ \mathbf{0} & {\mathbf{I}} \end{bmatrix}^{-1} \label{block_lyapunov} \end{equation} The matrix sign function is compatible with the Jordan canonical form and the spectral decomposition, in the sense made precise below. This property allows the use of Newton's iterations for iteratively solving the Lyapunov equation. Specifically, we have: \begin{lemma}[Matrix Sign Function~\cite{higham2008functions}] \label{sign_1} For a given matrix ${\mathbf{H}}$ with no eigenvalues on the imaginary axis, its sign function has the following properties: 1) $sign({\mathbf{H}})^2={\mathbf{I}}$; 2) if ${\mathbf{H}}$ has the Jordan decomposition ${\mathbf{H}}{=}{\mathbf{T}}{\mathbf{M}}{\mathbf{T}}^{-1}$, then its sign function satisfies $sign({\mathbf{H}}){=}{\mathbf{T}} sign({\mathbf{M}}) {\mathbf{T}}^{-1}$. \end{lemma} We give the complete proof in the Supplementary Material. Lemma~\ref{sign_1}.1 shows that $sign({\mathbf{H}})$ is a matrix square root of the identity matrix, which indicates the possibility of using Newton's root-finding method to derive the solution~\cite{higham2008functions}. Here we again adopt the Newton-Schulz iteration, an inverse-free and multiplication-rich modification of the Newton iteration, to compute $sign({\mathbf{H}})$ iteratively. This leads to the coupled iteration: \begin{equation} \begin{gathered} {\mathbf{B}}_{k+1} =\frac{1}{2} {\mathbf{B}}_{k} (3{\mathbf{I}}-{\mathbf{B}}_{k}^2), \\ {\mathbf{C}}_{k+1} =\frac{1}{2} \Big(-{\mathbf{B}}_{k}^{2}{\mathbf{C}}_{k} + {\mathbf{B}}_{k}{\mathbf{C}}_{k}{\mathbf{B}}_{k} + {\mathbf{C}}_{k}(3{\mathbf{I}}-{\mathbf{B}}_{k}^2)\Big). \label{lya_iterations} \end{gathered} \end{equation} The equation above defines two coupled iterations for solving the Lyapunov equation. Since the NS iteration converges only locally, \emph{i.e.,} when $||{\mathbf{H}}_{k}^{2}{-}{\mathbf{I}}||{<}1$, we divide ${\mathbf{H}}_{0}$ by $||{\mathbf{B}}||_{\rm F}$ to meet the convergence condition. This normalization defines the initialization ${\mathbf{B}}_{0}{=}\frac{{\mathbf{B}}}{||{\mathbf{B}}||_{\rm F}}$ and ${\mathbf{C}}_{0}{=}\frac{{\mathbf{C}}}{||{\mathbf{B}}||_{\rm F}}$.
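A minimal sketch of this backward solver is given below; it assumes the matrix square root and the incoming gradient are available from the forward pass, and the function name and fixed iteration count are ours for illustration.
\begin{verbatim}
import torch

def lyapunov_backward(sqrt_A, grad_sqrt, num_iters=8):
    """Solve B X + X B = C for X = dl/dA via Eq. (lya_iterations),
    with B = A^{1/2} and C = dl/dA^{1/2}; 6 matmuls per iteration."""
    n = sqrt_A.shape[-1]
    I3 = 3.0 * torch.eye(n, dtype=sqrt_A.dtype, device=sqrt_A.device)
    norm = torch.linalg.norm(sqrt_A, ord='fro')
    B, C = sqrt_A / norm, grad_sqrt / norm      # B_0 and C_0
    for _ in range(num_iters):
        B2 = B @ B                              # B_k^2
        T = I3 - B2                             # 3I - B_k^2
        C = 0.5 * (-B2 @ C + B @ C @ B + C @ T) # C update uses the old B_k
        B = 0.5 * (B @ T)                       # B_{k+1}
    return 0.5 * C                              # C_k -> 2X, so X = C_k / 2
\end{verbatim}
Note that \texttt{C} is updated before \texttt{B} so that both updates use ${\mathbf{B}}_{k}$, and the loop involves only matrix multiplications, matching the complexity reported in Table~\ref{tab:backward_complexiy}.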
Relying on Lemma~\ref{sign_1}.2, the sign function of \cref{block_lyapunov} can also be calculated as: \begin{equation} \begin{aligned} sign({\mathbf{H}})&= sign\Big(\begin{bmatrix} {\mathbf{B}} & {\mathbf{C}}\\ \mathbf{0} & -{\mathbf{B}} \end{bmatrix}\Big)=\begin{bmatrix} {\mathbf{I}} & 2 {\mathbf{X}}\\ \mathbf{0} & -{\mathbf{I}} \end{bmatrix} \end{aligned} \label{sign_block_lya} \end{equation} As indicated above, the iterations in~\cref{lya_iterations} converge as: \begin{equation} \lim_{k\rightarrow\infty}{\mathbf{B}}_{k} = \mathbf{I}, \lim_{k\rightarrow\infty}{\mathbf{C}}_{k} = 2{\mathbf{X}} \end{equation} After iterating $k$ times, we obtain the approximate solution ${\mathbf{X}}{=}\frac{1}{2}{\mathbf{C}}_{k}$. Instead of fixing the number of iterations, one can also set a termination criterion by checking the convergence $||{\mathbf{B}}_{k}-{\mathbf{I}}||_{\rm F}{<}\tau$, where $\tau$ is a pre-defined tolerance. Table~\ref{tab:backward_complexiy} compares the backward computational complexity of the iterative Lyapunov solver and the NS iteration. Our proposed Lyapunov solver requires fewer matrix multiplications and is thus more efficient than the NS iteration. Even if we iterate the Lyapunov solver more times (\emph{e.g.,} $7$ or $8$), it still costs less time than the backward calculation of the NS iteration that iterates $5$ times. \begin{table} \centering \caption{Comparison of backward operations. For the inverse square root, our Lyapunov solver uses only $3$ more matrix multiplications. The cost of $1$ NS iteration is about that of $2$ iterations of the Lyapunov solver.}\label{tab:backward_complexiy} \begin{tabular}{c|c|c|c} \hline Op. & Lya (Mat. Sqrt.) & Lya (Inv. Sqrt.) & NS iteration\\ \hline Mat. Mul. & 6 $\times$ \#iters & 3 + 6 $\times$ \#iters& 4 + 10 $\times$ \#iters \\ Mat. Inv. & 0 & 0 & 0 \\ \hline \end{tabular} \end{table} \section{Fast Differentiable Inverse Square Root}\label{sec:method_inverse} In this section, we introduce the extension of our algorithm to the inverse square root. \subsection{Forward Pass} \subsubsection{Matrix Taylor Polynomial} To derive the MTP of the inverse square root, we need to match the following power series: \begin{equation} (1-z)^{-\frac{1}{2}} = 1 + \sum_{k=1}^{\infty}\Big|\dbinom{-\frac{1}{2}}{k}\Big| z^{k} \label{taylor_inv_scalar} \end{equation} Similar to the procedure for the matrix square root in~\cref{taylor_matrix,mtp_unnorm}, the MTP approximation can be computed as: \begin{equation} {\mathbf{A}}^{-\frac{1}{2}} ={\mathbf{I}} + \sum_{k=1}^{\infty} \Big|\dbinom{-\frac{1}{2}}{k}\Big| ({\mathbf{I}}-\frac{{\mathbf{A}}}{||{\mathbf{A}}||_{\rm F}})^{k} \end{equation} Instead of post-compensating the matrix square root by multiplying by $\sqrt{||{\mathbf{A}}||_{\rm F}}$ as done in~\cref{mtp_square}, we need to divide by $\sqrt{||{\mathbf{A}}||_{\rm F}}$ for the inverse square root: \begin{equation} {\mathbf{A}}^{-\frac{1}{2}} =\frac{1}{\sqrt{||{\mathbf{A}}||_{\rm F}}}\cdot \Big({\mathbf{I}} + \sum_{k=1}^{\infty} \Big|\dbinom{-\frac{1}{2}}{k}\Big| ({\mathbf{I}}-\frac{{\mathbf{A}}}{||{\mathbf{A}}||_{\rm F}})^{k} \Big) \end{equation} Compared with the MTP of the matrix square root of the same degree, the inverse square root has the same computational complexity. \subsubsection{Matrix Pad\'e Approximant} Our MPA calculates the matrix square root ${\mathbf{A}}^{\frac{1}{2}}$ as $\sqrt{||{\mathbf{A}}||_{\rm F}}{\mathbf{Q}}_{N}^{-1}{\mathbf{P}}_{M}$.
For the inverse square root, we can directly compute the inverse as: \begin{equation} {\mathbf{A}}^{-\frac{1}{2}}=(\sqrt{||{\mathbf{A}}||_{\rm F}}{\mathbf{Q}}_{N}^{-1}{\mathbf{P}}_{M})^{-1}=\frac{1}{\sqrt{||{\mathbf{A}}||_{\rm F}}}{\mathbf{P}}_{M}^{-1}{\mathbf{Q}}_{N} \label{mpa_square_inverse1} \end{equation} The extension to the inverse square root comes for free, as it does not require any additional computation: for both the matrix square root and the inverse square root, the matrix polynomials ${\mathbf{Q}}_{N}$ and ${\mathbf{P}}_{M}$ need to be computed first, and then one matrix inverse (or the solution of one linear system) is required. Another approach to deriving the MPA of the inverse square root is to match the power series in~\cref{taylor_inv_scalar} and construct the MPA again. The matching is calculated as: \begin{equation} \frac{1+\sum_{m=1}^{M}r_{m}z^{m}}{1+\sum_{n=1}^{N}s_{n}z^{n}} = 1 + \sum_{k=1}^{M+N} \Big|\dbinom{-\frac{1}{2}}{k}\Big| z^{k} \end{equation} where $r_{m}$ and $s_{n}$ denote the new Pad\'e coefficients. Then the matrix polynomials are computed as: \begin{equation} \begin{gathered} {\mathbf{R}}_{M}= {\mathbf{I}} + \sum_{m=1}^{M} r_{m} ({\mathbf{I}}-\frac{{\mathbf{A}}}{||{\mathbf{A}}||_{\rm F}})^{m},\\ {\mathbf{S}}_{N}= {\mathbf{I}} + \sum_{n=1}^{N} s_{n} ({\mathbf{I}}-\frac{{\mathbf{A}}}{||{\mathbf{A}}||_{\rm F}})^{n}. \end{gathered} \end{equation} The MPA for approximating the inverse square root is then calculated as: \begin{equation} {\mathbf{A}}^{-\frac{1}{2}} = \frac{1}{\sqrt{||{\mathbf{A}}||_{\rm F}}}{\mathbf{S}}_{N}^{-1}{\mathbf{R}}_{M}. \label{mpa_square_inverse2} \end{equation} This way of deriving the MPA leads to the same complexity. Notice that the two computation methods are equivalent to each other. Specifically, we have: \begin{prop} The diagonal MPA $\frac{1}{\sqrt{||{\mathbf{A}}||_{\rm F}}}{\mathbf{S}}_{N}^{-1}{\mathbf{R}}_{M}$ is equivalent to the diagonal MPA $\frac{1}{\sqrt{||{\mathbf{A}}||_{\rm F}}}{\mathbf{P}}_{M}^{-1}{\mathbf{Q}}_{N}$, and the relations $p_{m}{=}-s_{n}$ and $q_{n}{=}-r_{m}$ hold for any $m{=}n$. \end{prop} We give the detailed proof in the Supplementary Material. Since the two sets of MPA are equivalent, we adopt the implementation of the inverse square root in~\cref{mpa_square_inverse1} throughout our experiments, as it shares the same ${\mathbf{P}}_{M}$ and ${\mathbf{Q}}_{N}$ with the matrix square root. \subsection{Backward Pass} For the inverse square root, we can also rely on the iterative Lyapunov solver for the gradient computation. Consider the following relation: \begin{equation} {\mathbf{A}}^{\frac{1}{2}}{\mathbf{A}}^{-\frac{1}{2}}={\mathbf{I}}. \end{equation} A perturbation on both sides leads to: \begin{equation} d{\mathbf{A}}^{\frac{1}{2}}{\mathbf{A}}^{-\frac{1}{2}} + {\mathbf{A}}^{\frac{1}{2}}d{\mathbf{A}}^{-\frac{1}{2}} = d{\mathbf{I}}. \end{equation} Using the chain rule, we obtain the gradient equation after some rearrangement: \begin{equation} \frac{\partial l}{\partial {\mathbf{A}}^{\frac{1}{2}}} = -{\mathbf{A}}^{-\frac{1}{2}}\frac{\partial l}{\partial {\mathbf{A}}^{-\frac{1}{2}}}{\mathbf{A}}^{-\frac{1}{2}}.
\end{equation} Injecting this equation into~\cref{gradient_function} leads to the re-formulation: \begin{equation} \begin{gathered} {\mathbf{A}}^{\frac{1}{2}} \frac{\partial l}{\partial {\mathbf{A}}} + \frac{\partial l}{\partial {\mathbf{A}}} {\mathbf{A}}^{\frac{1}{2}} = -{\mathbf{A}}^{-\frac{1}{2}}\frac{\partial l}{\partial {\mathbf{A}}^{-\frac{1}{2}}}{\mathbf{A}}^{-\frac{1}{2}} \\ {\mathbf{A}}^{-\frac{1}{2}}\frac{\partial l}{\partial {\mathbf{A}}} + \frac{\partial l}{\partial {\mathbf{A}}}{\mathbf{A}}^{-\frac{1}{2}} = -{\mathbf{A}}^{-1}\frac{\partial l}{\partial {\mathbf{A}}^{-\frac{1}{2}}}{\mathbf{A}}^{-1}. \label{inverse_gradient} \end{gathered} \end{equation} As can be seen, the gradient function now resembles the continuous Lyapunov equation again. The only difference from~\cref{gradient_function} is the r.h.s. term, which can be easily computed as $-({\mathbf{A}}^{-\frac{1}{2}})^2\frac{\partial l}{\partial {\mathbf{A}}^{-\frac{1}{2}}}({\mathbf{A}}^{-\frac{1}{2}})^2$ with $3$ matrix multiplications. For the new iterative solver of the Lyapunov equation ${\mathbf{B}}{\mathbf{X}}{+}{\mathbf{X}}{\mathbf{B}}{=}{\mathbf{C}}$, we have the following initialization: \begin{equation} \begin{gathered} {\mathbf{B}}_{0}=\frac{{\mathbf{A}}^{-\frac{1}{2}}}{||{\mathbf{A}}^{-\frac{1}{2}}||_{\rm F}}, \quad {\mathbf{C}}_{0}=\frac{-{\mathbf{A}}^{-1}\frac{\partial l}{\partial {\mathbf{A}}^{-\frac{1}{2}}}{\mathbf{A}}^{-1}}{||{\mathbf{A}}^{-\frac{1}{2}}||_{\rm F}}. \end{gathered} \end{equation} Then we use the coupled NS iteration to compute the gradient $\frac{\partial l}{\partial {\mathbf{A}}}{=}\frac{1}{2}{\mathbf{C}}_{k}$. Table~\ref{tab:backward_complexiy} presents the complexity of the backward algorithms. Compared with the gradient of the matrix square root, this extension only increases the computational complexity by $3$ matrix multiplications, which is more efficient than a matrix inverse or solving a linear system. \section{Conclusion}\label{sec:conclusion} In this paper, we propose two fast methods to compute the differentiable matrix square root and the inverse square root. In the forward pass, the MTP and MPA are applied to approximate the matrix square root, while an iterative Lyapunov solver is proposed to solve the gradient function for back-propagation. A number of numerical tests and computer vision applications demonstrate that our methods achieve both fast speed and competitive performances.
\section{Experiments}\label{sec:exp} In the experimental section, we first perform a series of numerical tests to compare our proposed methods with the SVD and the NS iteration.
Subsequently, we evaluate our methods in several real-world applications, including decorrelated batch normalization, second-order vision transformer, global covariance pooling for image/video recognition, and neural style transfer. We refer the reader to the Supplementary Material for the implementation details.
\subsection{Baselines}
In the numerical tests, we compare our two methods against the SVD and the NS iteration. For the various computer vision experiments, our methods are compared with more differentiable SVD baselines, each of which has its own gradient computation. These methods include (1) Power Iteration (PI), (2) SVD-PI~\cite{wang2019backpropagation}, (3) SVD-Taylor~\cite{wang2021robust,song2021approximate}, and (4) SVD-Pad\'e~\cite{song2021approximate}. A detailed description of the baseline methods is given in the Supplementary Material.
\subsection{Numerical Tests}
To comprehensively evaluate the numerical performance and stability, we compare the speed and error for inputs of different batch sizes, matrices in various dimensions, different iteration times of the backward pass, and different polynomial degrees of the forward pass. In each of the following tests, the comparison is based on $10,000$ random covariance matrices, and the matrix size is consistently $64{\times}64$ unless explicitly specified. The error is measured by calculating the Mean Absolute Error (MAE) and Normalized Root Mean Square Error (NRMSE) between the matrix square root computed by the approximate methods (NS iteration, MTP, and MPA) and by the accurate method (SVD). For our algorithm of the fast inverse square root, since the theory behind the algorithm is in essence the same as for the matrix square root, the two are expected to have similar numerical properties. The differences mainly lie in the forward error and the backward speed. We therefore conduct the FP error analysis and the BP speed analysis for the inverse square root in Sec.~\ref{sec:fp_err_speed} and Sec.~\ref{sec:bp_speed}, respectively. For the error analysis, we compute the error of the whitening transform by $||\sigma({\mathbf{A}}^{-\frac{1}{2}}{\mathbf{X}}){-}{\mathbf{I}}||_{\rm F}$ where $\sigma(\cdot)$ denotes the extracted eigenvalues. In the other numerical tests, we only evaluate the properties of the algorithm for the matrix square root.
\begin{figure}[htbp]
    \centering
    \includegraphics[width=0.99\linewidth]{imgs/fp_sped_err.jpg}
    \caption{The comparison of speed and error in the FP for the matrix square root (\emph{left}) and the inverse square root (\emph{right}). Our MPA computes a more accurate solution than the NS iteration at a faster speed, and our MTP enjoys the fastest calculation speed. }
    \label{fig:fp_sped_err}
\end{figure}
\subsubsection{Forward Error versus Speed}
\label{sec:fp_err_speed}
Both the NS iteration and our methods have a hyper-parameter to tune in the forward pass, \emph{i.e.,} the iteration times for the NS iteration and the polynomial degrees for our MPA and MTP. To assess their impact, we measure the speed and error of both the matrix square root and its inverse for different hyper-parameters. The degrees of our MPA and MTP vary from $6$ to $18$, and the iteration times of the NS iteration range from $3$ to $7$. As can be observed from Fig.~\ref{fig:fp_sped_err}, our MTP takes the least computation time, and our MPA consumes slightly more time than the MTP but provides a closer approximation. Moreover, the curve of our MPA consistently lies below that of the NS iteration, demonstrating that our MPA is a better choice in terms of both speed and accuracy.
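To make the benchmarked pipeline concrete, the following is a minimal PyTorch sketch of the two routines compared here: the MPA forward pass and the iterative Lyapunov backward solver. The function names and structure are our own illustration rather than the released implementation; the $[5,5]$ Pad\'e coefficients below are precomputed offline, and the denominator values $q_{n}$ match those quoted in the appendix on stability.
\begin{verbatim}
import torch

# Precomputed [5,5] Pade coefficients of (1-z)^{1/2} (assumed values;
# the q_n list matches the appendix on stability).
P5 = [2.75, -2.75, 1.203125, -0.21484375, 0.0107421875]  # numerator p_m
Q5 = [2.25, -1.75, 0.54675, -0.05859375, 0.0009765625]   # denominator q_n

def mpa_sqrt(A):
    """Forward pass: A^{1/2} for one SPD matrix via the [5,5] MPA."""
    norm = torch.norm(A, p='fro')
    I = torch.eye(A.shape[-1], dtype=A.dtype, device=A.device)
    Y = I - A / norm                 # pre-normalization keeps the series convergent
    P, Q, Y_pow = I.clone(), I.clone(), I.clone()
    for p, q in zip(P5, Q5):
        Y_pow = Y_pow @ Y
        P = P - p * Y_pow            # P_M = I - sum_m p_m Y^m
        Q = Q - q * Y_pow            # Q_N = I - sum_n q_n Y^n
    # A^{1/2} ~ sqrt(||A||_F) * Q_N^{-1} P_M, via a solve instead of an inverse
    return torch.sqrt(norm) * torch.linalg.solve(Q, P)

def lyapunov_backward(A_sqrt, grad_sqrt, T=8):
    """Backward pass: solve A^{1/2} X + X A^{1/2} = dl/dA^{1/2} for X = dl/dA."""
    norm = torch.norm(A_sqrt, p='fro')
    B, C = A_sqrt / norm, grad_sqrt / norm
    I = torch.eye(B.shape[-1], dtype=B.dtype, device=B.device)
    for _ in range(T):               # coupled NS iteration: B_k -> I, 0.5*C_k -> dl/dA
        B2 = B @ B
        C = 0.5 * (-B2 @ C + B @ C @ B + C @ (3 * I - B2))
        B = 0.5 * B @ (3 * I - B2)
    return 0.5 * C
\end{verbatim}
Wrapped in a custom \textsc{torch.autograd.Function}, \texttt{mpa\_sqrt} would serve as the \texttt{forward} method and \texttt{lyapunov\_backward} as the \texttt{backward} method.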
\begin{figure}[htbp]
    \centering
    \includegraphics[width=0.49\linewidth]{imgs/bp_speed.jpg}
    \caption{The speed comparison in the backward pass. Our Lyapunov solver is more efficient than the NS iteration as fewer matrix multiplications are involved. Our solver for the inverse square root only slightly increases the computational cost.}
    \label{fig:bp_speed}
\end{figure}
\subsubsection{Backward Speed versus Iteration}
\label{sec:bp_speed}
Fig.~\ref{fig:bp_speed} compares the speed of our backward Lyapunov solver and the NS iteration versus different iteration times. The result is consistent with the complexity analysis in Table~\ref{tab:backward_complexiy}: our Lyapunov solver is much more efficient than the NS iteration. Even when our Lyapunov solver iterates $8$ times, it still holds a speed advantage over the NS iteration with $5$ iterations. Moreover, the extension of our Lyapunov solver to the inverse square root only marginally increases the computational cost and is still much faster than the NS iteration.
\begin{figure}[htbp]
    \centering
    \includegraphics[width=0.99\linewidth]{imgs/speed_bs.jpg}
    \caption{Speed comparison for each method versus different batch sizes. Our methods are more batch-efficient than the SVD or the NS iteration. }
    \label{fig:speed_bs}
\end{figure}
\subsubsection{Speed versus Batch Size}
In certain applications such as covariance pooling and instance whitening, the input could be batched matrices instead of a single matrix. To compare the speed for batched input, we conduct another numerical test. The hyper-parameter choices follow our experimental settings in decorrelated batch normalization. As seen in Fig.~\ref{fig:speed_bs}, our MPA-Lya and MTP-Lya are consistently more efficient than the NS iteration and the SVD. To give a concrete example, when the batch size is $64$, our MPA-Lya is $2.58$X faster than the NS iteration and $27.25$X faster than the SVD, while our MTP-Lya is $5.82$X faster than the NS iteration and $61.32$X faster than the SVD.
\begin{table*}[htbp]
    \centering
    \caption{Validation error of ZCA whitening methods. The covariance matrix is of size $1{\times}64{\times}64$. The time consumption is measured for computing the inverse square root (BP+FP).
For each method, we report the results based on five runs.}
    \resizebox{0.8\linewidth}{!}{
    \begin{tabular}{r|c|c|c|c|c|c|c}
    \hline
    \multirow{3}*{Methods} & \multirow{3}*{ Time (ms)} & \multicolumn{4}{c|}{ResNet-18} & \multicolumn{2}{c}{ResNet-50}\\
    \cline{3-8}
    & & \multicolumn{2}{c|}{CIFAR10} & \multicolumn{2}{c|}{CIFAR100} & \multicolumn{2}{c}{CIFAR100}\\
    \cline{3-8}
    && mean$\pm$std & min & mean$\pm$std & min & mean$\pm$std & min \\
    \hline
    SVD-Clip &3.37 & 4.88$\pm$0.25 &4.65 & 21.60$\pm$0.39 &21.19 & 20.50$\pm$0.33 &20.17\\
    SVD-PI (GPU) &5.27 &4.57$\pm$0.10 &4.45 &21.35$\pm$0.25 &21.05 &19.97$\pm$0.41 & 19.27 \\
    SVD-PI & 3.49 & 4.59$\pm$0.09 &4.44 &21.39$\pm$0.23 &21.04 &19.94$\pm$0.44 &19.28 \\
    SVD-Taylor &3.41 &4.50$\pm$0.08 &4.40 &21.14$\pm$0.20 &\textbf{20.91} &19.81$\pm$0.24&19.26\\
    SVD-Pad\'e &3.39 & 4.65$\pm$0.11 &4.50 &21.41$\pm$0.15 &21.26 &20.25$\pm$0.23&19.98\\
    NS Iteration & 2.96 & 4.57$\pm$0.15 &4.37&21.24$\pm$0.20 &21.01&\textbf{19.39$\pm$0.30}&\textbf{19.01}\\
    \hline
    Our MPA-Lya & 2.61 &\textbf{4.39$\pm$0.09} &\textbf{4.25} & \textbf{21.11}$\pm$\textbf{0.12}&20.95& \textbf{19.55$\pm$0.20} &19.24\\
    Our MTP-Lya & \textbf{2.56} & 4.49$\pm$0.13 &4.31 & 21.42$\pm$0.21 &21.24 &20.55$\pm$0.37&20.12\\
    \hline
    \end{tabular}
    }
    \label{tab:zca_whitening}
\end{table*}
As discussed before, the current SVD implementation adopts a for-loop to compute each matrix one by one within the mini-batch. This accounts for why the time consumption of the SVD grows almost linearly with the batch size. For the NS iteration, the backward pass is not as batch-friendly as our Lyapunov solver. The gradient calculation requires measuring the trace and handling the multiplication for each matrix in the batch, which inevitably has to be accomplished by a for-loop. Our backward pass can be implemented more efficiently by batched matrix multiplication.
\begin{figure}[htbp]
    \centering
    \includegraphics[width=0.99\linewidth]{imgs/speed_err_dim.png}
    \caption{The speed comparison (\emph{left}) and the error comparison (\emph{middle and right}) for matrices in different dimensions. Our MPA-Lya is consistently faster and more accurate than the NS iteration for different matrix dimensions. Since the SVD is accurate by default, the other approximate methods are compared with the SVD to measure the error.}
    \label{fig:speed_err_dim}
\end{figure}
\subsubsection{Speed and Error versus Matrix Dimension}
In the last numerical test, we compare the speed and error for matrices in different dimensions. The hyper-parameter settings also follow our ZCA whitening experiments. As seen from Fig.~\ref{fig:speed_err_dim} left, our proposed MPA-Lya and MTP-Lya consistently outperform the others in terms of speed. In particular, when the matrix size is very small (${<}32$), the NS iteration does not hold a speed advantage over the SVD. By contrast, our proposed methods still have competitive speed against the SVD.
Fig.~\ref{fig:speed_err_dim} right presents the approximation error using the metrics MAE and NRMSE. Both metrics agree well with each other and demonstrate that our MPA-Lya always yields a better approximation than the NS iteration, whereas our MTP-Lya gives a coarser estimate but takes the least time, which can be considered a trade-off between speed and accuracy.
\subsection{Decorrelated Batch Normalization}
As a substitute for the ordinary BN, the decorrelated BN~\cite{huang2018decorrelated} applies the ZCA whitening transform to eliminate the correlation of the data.
Consider the reshaped feature map ${\mathbf{X}}{\in}\mathbb{R}^{C{\times} BHW}$. The whitening procedure first computes its sample covariance as:
\begin{equation}
    {\mathbf{A}}{=}({\mathbf{X}}-\mu({\mathbf{X}}))({\mathbf{X}}-\mu({\mathbf{X}}))^{T}{+}\epsilon{\mathbf{I}}
    \label{zca_cov}
\end{equation}
where ${\mathbf{A}}{\in}\mathbb{R}^{C{\times}C}$, $\mu({\mathbf{X}})$ is the mean of ${\mathbf{X}}$, and $\epsilon$ is a small constant to make the covariance strictly positive definite. Afterwards, the inverse square root is calculated to whiten the feature map:
\begin{equation}
    {\mathbf{X}}_{whitened}={\mathbf{A}}^{-\frac{1}{2}}{\mathbf{X}}
\end{equation}
By doing so, the eigenvalues of the covariance of ${\mathbf{X}}_{whitened}$ are all ones, \emph{i.e.,} the feature is uncorrelated. During the training process, the training statistics are stored for the inference phase. We insert the decorrelated BN layer after the first convolutional layer of ResNet~\cite{he2016deep}, and the proposed methods and other baselines are used to compute ${\mathbf{A}}^{-\frac{1}{2}}$.
Table~\ref{tab:zca_whitening} displays the speed and validation error on CIFAR10 and CIFAR100~\cite{krizhevsky2009learning}. The ordinary SVD with gradient clipping (SVD-Clip) is inferior to the other SVD baselines, and the SVD computation on the GPU is slower than that on the CPU. Our MTP-Lya is $1.16$X faster than the NS iteration and $1.32$X faster than SVD-Pad\'e, and our MPA-Lya is $1.14$X and $1.30$X faster, respectively. Furthermore, our MPA-Lya achieves state-of-the-art performances across datasets and models. Our MTP-Lya has comparable performances on ResNet-18 but slightly falls behind on ResNet-50. We conjecture that this is mainly because the relatively large approximation error of the MTP has little effect on the small model but can hurt the larger one.
On CIFAR100 with ResNet-50, our MPA-Lya slightly falls behind the NS iteration in the average validation error. As a larger and deeper model, ResNet-50 is likely to have worse-conditioned matrices than ResNet-18. Since our MPA involves solving a linear system, processing a very ill-conditioned matrix could lead to some round-off errors. In this case, the NS iteration might have a chance to slightly outperform our MPA-Lya. However, this is a rare situation; our MPA-Lya beats the NS iteration in most of the following experiments.
\subsection{Global Covariance Pooling}
For the application of global covariance pooling, we evaluate our method in three different tasks, including large-scale visual recognition, fine-grained visual categorization, and video action recognition. Since the GCP method requires a very accurate matrix square root~\cite{song2021approximate}, our MTP-Lya cannot achieve reasonable performances due to its relatively large approximation error. Therefore, we do not take it into account for comparison throughout the GCP experiments.
\subsubsection{Large-scale Visual Recognition}
\begin{figure}[htbp]
    \centering
    \includegraphics[width=0.99\linewidth]{imgs/arch_gcp.jpg}
    \caption{Overview of the GCP network~\cite{li2017second,li2018towards,song2021approximate} for large-scale and fine-grained visual recognition.}
    \label{fig:arch_gcp}
\end{figure}
Fig.~\ref{fig:arch_gcp} displays the architecture of a typical GCP network. Different from standard CNNs, the covariance square root of the last convolutional feature is used as the global representation.
Considering the final convolutional feature ${\mathbf{X}}{\in}\mathbb{R}^{B{\times}C{\times}HW}$, a GCP meta-layer first computes the sample covariance as:
\begin{equation}
    \mathbf{P}=\mathbf{X}\Bar{\mathbf{I}}\mathbf{X}^{T},\ \Bar{\mathbf{I}}=\frac{1}{N}(\mathbf{I}-\frac{1}{N}\mathbf{1}\mathbf{1}^{T})
    \label{covariance}
\end{equation}
where $N{=}HW$, $\Bar{\mathbf{I}}$ represents the centering matrix, $\mathbf{I}$ denotes the identity matrix, and $\mathbf{1}$ is a column vector whose values are all ones. Afterwards, the matrix square root is computed for normalization:
\begin{equation}
    \mathbf{Q}\triangleq\mathbf{P}^{\frac{1}{2}}=(\mathbf{U}\mathbf{\Lambda}\mathbf{U}^{T})^{\frac{1}{2}}={\mathbf{U}}\mathbf{\Lambda}^{\frac{1}{2}}\mathbf{U}^{T}
    \label{matrix_power}
\end{equation}
where the normalized covariance matrix $\mathbf{Q}$ is fed to the FC layer. Our method is applied to calculate ${\mathbf{Q}}$.
\begin{table}[htbp]
    \caption{Comparison of validation accuracy (\%) on ImageNet~\cite{deng2009imagenet} and ResNet-50~\cite{he2016deep}. The covariance is of size {$256{\times}256{\times}256$}, and the time consumption is measured for computing the matrix square root (FP+BP).}
    \centering
    \resizebox{0.99\linewidth}{!}{
    \begin{tabular}{r|c|c|c}
    \hline
    Methods & Time (ms)& Top-1 Acc. & Top-5 Acc. \\
    \hline
    SVD-Taylor &2349.12 &77.09 &93.33 \\
    SVD-Pad\'e &2335.56 &\textbf{77.33} &\textbf{93.49} \\
    NS iteration &164.43 & 77.19 & 93.40\\
    \hline
    Our MPA-Lya & \textbf{110.61} &77.13 & 93.45\\
    \hline
    \end{tabular}
    }
    \label{tab:performances_GCP_CNN}
\end{table}
Table~\ref{tab:performances_GCP_CNN} presents the speed comparison and the validation accuracy of GCP ResNet-50~\cite{he2016deep} models on ImageNet~\cite{deng2009imagenet}. Our MPA-Lya not only achieves very competitive performance but also has the least time consumption. Our method is about $21$X faster than the SVD and $1.5$X faster than the NS iteration.
\subsubsection{Fine-grained Visual Recognition}
\begin{table}[htbp]
    \caption{Comparison of validation accuracy on fine-grained benchmarks and ResNet-50~\cite{he2016deep}. The covariance is of size {$10{\times}64{\times}64$}, and the time consumption is measured for computing the matrix square root (FP+BP).}
    \centering
    \resizebox{0.99\linewidth}{!}{
    \begin{tabular}{r|c|c|c|c}
    \hline
    Methods & Time (ms)& Birds & Aircrafts & Cars \\
    \hline
    SVD-Taylor &32.13 &86.9 &89.9 &92.3 \\
    SVD-Pad\'e &31.54 &87.2 &90.5 &\textbf{92.8} \\
    NS iteration &5.79 & 87.3 & 89.5 & 91.7 \\
    \hline
    Our MPA-Lya & \textbf{3.89} &\textbf{87.8} &\textbf{91.0} &92.5 \\
    \hline
    \end{tabular}
    }
    \label{tab:performances_GCP_fgvc}
\end{table}
In line with other GCP works~\cite{li2017second,li2018towards,song2021approximate}, after training on ImageNet, the model is subsequently fine-tuned on each fine-grained dataset. Table~\ref{tab:performances_GCP_fgvc} compares the time consumption and validation accuracy on three commonly used fine-grained benchmarks, namely Caltech-UCSD Birds (Birds)~\cite{WelinderEtal2010}, FGVC Aircrafts (Aircrafts)~\cite{maji2013fine}, and Stanford Cars (Cars)~\cite{KrauseStarkDengFei-Fei_3DRR2013}. As can be observed, our MPA-Lya is about $1.5$X faster than the NS iteration and about $8$X faster than the SVD. Moreover, the performance of our method is slightly better than the other baselines on Birds~\cite{WelinderEtal2010} and Aircrafts~\cite{maji2013fine}. The evaluation result on Cars~\cite{KrauseStarkDengFei-Fei_3DRR2013} is also comparable.
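As a concrete illustration of the GCP meta-layer in~\cref{covariance,matrix_power}, a hedged PyTorch sketch is given below; the function name and the \texttt{sqrt\_fn} hook (\emph{e.g.,} the MPA forward sketch shown earlier) are our own assumptions, not the reference implementation:
\begin{verbatim}
import torch

def gcp_meta_layer(X, sqrt_fn):
    """X: (B, C, N) features with N = HW; returns Q = P^{1/2} per sample."""
    B, C, N = X.shape
    ones = torch.ones(N, N, dtype=X.dtype, device=X.device)
    I_bar = (torch.eye(N, dtype=X.dtype, device=X.device) - ones / N) / N
    P = X @ I_bar @ X.transpose(1, 2)   # sample covariance P = X I_bar X^T
    return sqrt_fn(P)                   # matrix square root normalization

# usage sketch: Q = gcp_meta_layer(features, sqrt_fn); logits = fc(Q.flatten(1))
\end{verbatim}
Note that \texttt{sqrt\_fn} must handle the batched input, \emph{e.g.,} by computing per-sample Frobenius norms.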
\subsubsection{Video Action Recognition}
\begin{figure}[htbp]
    \centering
    \includegraphics[width=0.99\linewidth]{imgs/arch_video_gcp.jpg}
    \caption{Architecture of the temporal-attentive GCP network for video action recognition~\cite{gao2021temporal}. The channel and spatial attention are used to make the covariance more attentive.}
    \label{fig:arch_gcp_video}
\end{figure}
Besides image recognition, the GCP methods can also be used for the task of video recognition~\cite{gao2021temporal}. Fig.~\ref{fig:arch_gcp_video} displays the overview of the temporal-attentive GCP model for video action recognition. The temporal covariance is computed in a sliding-window manner by involving both intra- and inter-frame correlations. Supposing the kernel size of the sliding window is $3$, the temporal covariance is computed as:
\begin{equation}
\begin{gathered}
    Temp.Cov.(\mathbf{X}_{l})=\underbrace{{\mathbf{X}}_{l-1}{\mathbf{X}}_{l-1}^{T} + {\mathbf{X}}_{l}{\mathbf{X}}_{l}^{T} + {\mathbf{X}}_{l+1}{\mathbf{X}}_{l+1}^{T}}_{intra-frame\ covariance}\\
    +\underbrace{{\mathbf{X}}_{l-1}{\mathbf{X}}_{l}^{T} + {\mathbf{X}}_{l}{\mathbf{X}}_{l-1}^{T} + \cdots + {\mathbf{X}}_{l+1}{\mathbf{X}}_{l}^{T}}_{inter-frame\ covariance}
\end{gathered}
\end{equation}
Finally, the matrix square root of the attentive temporal covariance $Temp.Cov.(\mathbf{X}_{l})$ is computed by the spectral methods and passed to the FC layer.
\begin{table}[htbp]
    \centering
    \caption{Validation top-1/top-5 accuracy (\%) on HMDB51~\cite{Kuehne11} and UCF101~\cite{soomro2012ucf101} with backbone TEA R50~\cite{li2020tea}. The covariance matrix is of size $16{\times}128{\times}128$, and the time consumption is measured for computing the matrix square root (BP+FP).}
    \resizebox{0.99\linewidth}{!}{
    \begin{tabular}{r|c|c|c}
    \hline
     Methods & Time (ms) & HMDB51 & UCF101 \\
     \hline
     SVD-Taylor &76.17 &73.79/93.84 &\textbf{95.00}/\textbf{99.60} \\
     SVD-Pad\'e &75.25 &73.89/93.79 &94.13/99.47 \\
     NS Iteration &12.11 &72.75/93.86 &94.16/99.50 \\
     \hline
     Our MPA-Lya &\textbf{6.95} &\textbf{74.05}/\textbf{93.99} &94.24/99.58 \\
     \hline
    \end{tabular}
    }
    \label{tab:video_gcp}
\end{table}
We present the validation accuracy and time cost for video action recognition in Table~\ref{tab:video_gcp}. For the computation speed, our MPA-Lya is about $1.74$X faster than the NS iteration and about $10.82$X faster than the SVD. Furthermore, our MPA-Lya achieves the best performance on HMDB51, while the result on UCF101 is also very competitive.
To sum up, our MPA-Lya has demonstrated its general applicability in the GCP models for different tasks. In particular, without sacrificing performance, our method can bring considerable speed improvements. This could be beneficial for faster training and inference. In certain experiments such as fine-grained classification, the approximate methods (MPA-Lya and NS iteration) can marginally outperform the accurate SVD. This phenomenon has been similarly observed in related studies~\cite{li2018towards,huang2019iterative,song2021approximate}, and one likely reason is that the SVD does not have as healthy gradients as the approximate methods. This might negatively influence the optimization process and consequently degrade the performance.
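As a side note on the video case, the sliding-window temporal covariance above is simply the sum of all ordered frame-pair products; the attention weighting of~\cite{gao2021temporal} is omitted in this hedged sketch:
\begin{verbatim}
def temporal_covariance(frames):
    """frames: [X_{l-1}, X_l, X_{l+1}], each of shape (C, N).
    Summing over all ordered pairs covers the intra-frame (i == j)
    and inter-frame (i != j) terms of the equation above."""
    return sum(Xi @ Xj.transpose(-1, -2) for Xi in frames for Xj in frames)
\end{verbatim}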
\subsection{Neural Style Transfer}
\begin{figure}[htbp]
    \centering
    \includegraphics[width=0.9\linewidth]{imgs/style_transfer_arch.jpg}
    \caption{The architecture overview of our model for neural style transfer. Two encoders take the style and content images as input, respectively, and generate the multi-scale content/style features. A decoder absorbs the features and performs the WCT process at $5$ different scales, outputting a pair of images with exchanged styles. Finally, a discriminator is further adopted to judge the authenticity of the images.}
    \label{fig:arch_style_transfer}
\end{figure}
We adopt the WCT process in the network architecture proposed by Cho~\emph{et al.}~\cite{cho2019image} for neural style transfer. Fig.~\ref{fig:arch_style_transfer} displays the overview of the model. The WCT performs successive whitening and coloring transforms on the content and style features. Consider the reshaped content feature $\mathbf{X}_{c}{\in}\mathbb{R}^{B{\times}C{\times}HW}$ and the style feature $\mathbf{X}_{s}{\in}\mathbb{R}^{B{\times}C{\times}HW}$. The style information is first removed from the content as:
\begin{equation}
\begin{gathered}
    \mathbf{X}_{c}^{whitened} = \Big((\mathbf{X}_{c}-\mu(\mathbf{X}_{c}))(\mathbf{X}_{c}-\mu(\mathbf{X}_{c}))^{T}\Big)^{-\frac{1}{2}}\mathbf{X}_{c}
\end{gathered}
\end{equation}
Then we extract the desired style information from the style feature $\mathbf{X}_{s}$ and transfer it to the whitened content feature:
\begin{equation}
    \mathbf{X}_{c}^{colored} = \Big((\mathbf{X}_{s}-\mu(\mathbf{X}_{s}))(\mathbf{X}_{s}-\mu(\mathbf{X}_{s}))^{T}\Big)^{\frac{1}{2}}\mathbf{X}_{c}^{whitened}
\end{equation}
The resultant feature $\mathbf{X}_{c}^{colored}$ is compensated with the mean of the style feature and combined with the original content feature:
\begin{equation}
    \mathbf{X} = \alpha (\mathbf{X}_{c}^{colored}+\mu(\mathbf{X}_{s})) + (1-\alpha)\mathbf{X}_{c}
\end{equation}
where $\alpha$ is a weight bounded in $[0,1]$ that controls the strength of the style transfer. In this experiment, both the matrix square root and the inverse square root are computed.
\begin{table}[htbp]
    \caption{The LPIPS~\cite{zhang2018perceptual} score and user preference (\%) on Artworks~\cite{isola2017image} dataset. The covariance is of size $4{\times}256{\times}256$. We measure the time consumption of the whitening and coloring transform that is conducted $10$ times to exchange the style and content features at different network depths.}
    \centering
    \setlength{\tabcolsep}{1.5pt}
    \resizebox{0.99\linewidth}{!}{
    \begin{tabular}{r|c|c|c}
    \hline
    Methods & Time (ms) & LPIPS~\cite{zhang2018perceptual} ($\uparrow$) & Preference ($\uparrow$) \\
    \hline
    SVD-Taylor &447.12 & 0.5276 & 16.25\\
    SVD-Pad\'e &445.23 & 0.5422 & 19.25\\
    NS iteration &94.37 & 0.5578 & 17.00\\
    \hline
    Our MPA-Lya &69.23 &\textbf{0.5615} & \textbf{24.75}\\
    Our MTP-Lya &\textbf{40.97} &0.5489 & 18.50\\
    \hline
    \end{tabular}
    }
    \label{tab:style_transfer_sum}
\end{table}
Table~\ref{tab:style_transfer_sum} presents the quantitative evaluation using the LPIPS~\cite{zhang2018perceptual} score and user preference. Our MPA-Lya and MTP-Lya are significantly faster than the other methods. Specifically, our MTP-Lya is $2.3$X faster than the NS iteration and $10.9$X faster than the SVD, while our MPA-Lya is $1.4$X faster than the NS iteration and $6.4$X faster than the SVD. Moreover, our MPA-Lya achieves the best LPIPS score and user preference. The performance of our MTP-Lya is also very competitive.
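Chaining the equations above, a single-image WCT step can be sketched as follows; the $\epsilon$ jitter, the default $\alpha$, and the function names are our own additions for illustration:
\begin{verbatim}
import torch

def wct(Xc, Xs, sqrt_fn, inv_sqrt_fn, alpha=0.6, eps=1e-5):
    """Xc, Xs: (C, HW) reshaped content/style features."""
    C = Xc.shape[0]
    I = torch.eye(C, dtype=Xc.dtype, device=Xc.device)
    mu_c = Xc.mean(dim=1, keepdim=True)
    mu_s = Xs.mean(dim=1, keepdim=True)
    cov_c = (Xc - mu_c) @ (Xc - mu_c).T + eps * I  # jitter keeps the matrix SPD
    cov_s = (Xs - mu_s) @ (Xs - mu_s).T + eps * I
    whitened = inv_sqrt_fn(cov_c) @ Xc             # whitening transform
    colored = sqrt_fn(cov_s) @ whitened            # coloring transform
    return alpha * (colored + mu_s) + (1 - alpha) * Xc
\end{verbatim}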
Fig.~\ref{fig:style_transfer_visual} displays exemplary visual comparisons. Our methods can effectively transfer the style information and preserve the original content, leading to transferred images with a more coherent style and better visual appeal. We give detailed evaluation results on each subset and more visual examples in the Supplementary Material.
\begin{table*}[htbp]
    \centering
    \caption{Validation top-1/top-5 accuracy of the second-order vision transformer on ImageNet~\cite{deng2009imagenet}. The covariance is of size $64{\times}48{\times}48$, where $64$ is the mini-batch size. The time cost is measured for computing the matrix square root (BP+FP).}
    \resizebox{0.79\linewidth}{!}{
    \begin{tabular}{r|c|c|c|c}
    \hline
    \multirow{2}*{Methods} & \multirow{2}*{ Time (ms)} & \multicolumn{3}{c}{Architecture} \\
    \cline{3-5}
    & & So-ViT-7 & So-ViT-10 & So-ViT-14 \\
    \hline
    PI & \textbf{1.84} & 75.93/93.04 & 77.96/94.18 & 82.16/96.02 (303 epoch)\\
    SVD-PI & 83.43 & 76.55/93.42 & 78.53/94.40 & 82.16/96.01 (278 epoch)\\
    SVD-Taylor & 83.29 & 76.66/\textbf{93.52} & 78.64/94.49 & 82.15/96.02 (271 epoch)\\
    SVD-Pad\'e & 83.25 & 76.71/93.49 & 78.77/94.51 & 82.17/96.02 (265 epoch)\\
    NS Iteration & 10.38 & 76.50/93.44 & 78.50/94.44 & 82.16/96.01 (280 epoch)\\
    \hline
    Our MPA-Lya & 3.25 & \textbf{76.84}/93.46 & \textbf{78.83}/\textbf{94.58} & 82.17/96.03 (\textbf{254} epoch)\\
    Our MTP-Lya & 2.39 & 76.46/93.26 & 78.44/94.33 & 82.16/96.02 (279 epoch)\\
    \hline
    \end{tabular}
    }
    \label{tab:vit_imagenet}
\end{table*}
\begin{figure}[htbp]
    \centering
    \includegraphics[width=0.9\linewidth]{imgs/style_transfer_visual_small.png}
    \caption{Visual examples of the neural style transfer on Artworks~\cite{isola2017image} dataset. Our methods generate sharper images with a more coherent style and better visual appeal. The red rectangles indicate regions with subtle details.}
    \label{fig:style_transfer_visual}
\end{figure}
\subsection{Second-order Vision Transformer}
\begin{figure}
    \centering
    \includegraphics[width=0.9\linewidth]{imgs/arch_sovit.jpg}
    \caption{The scheme of So-ViT~\cite{xie2021so}. The covariance square root of the visual tokens is computed to assist the classification. In the original vision transformer~\cite{dosovitskiy2020image}, only the class token is utilized for class predictions.}
    \label{fig:arch_sovit}
\end{figure}
The ordinary vision transformer~\cite{dosovitskiy2020image} attaches an empty class token to the sequence of visual tokens and only uses the class token for prediction, which may not exploit the rich semantics embedded in the visual tokens. Instead, the Second-order Vision Transformer (So-ViT)~\cite{xie2021so} proposes to leverage the high-level visual tokens to assist the task of classification:
\begin{equation}
    y = {\rm FC}(c) + {\rm FC}\Big(({\mathbf{X}}\mX^{T})^{\frac{1}{2}}\Big)
\end{equation}
where $c$ is the output class token, ${\mathbf{X}}$ denotes the visual tokens, and $y$ is the combined class prediction. We show the model overview in Fig.~\ref{fig:arch_sovit}. Equipped with the covariance pooling layer, So-ViT removes the need for pre-training on ultra-large-scale datasets and achieves competitive performance even when trained from scratch. To reduce the computational budget, So-ViT further proposes to use the Power Iteration (PI) to approximate the dominant eigenvector. We use our methods to compute the matrix square root of the covariance ${\mathbf{X}}\mX^{T}$.
Table~\ref{tab:vit_imagenet} compares the speed and performances on three So-ViT architectures with different depths.
Our proposed methods significantly outperform the SVD and the NS iteration in terms of speed. To be more specific, our MPA-Lya is $3.19$X faster than the NS iteration and $25.63$X faster than SVD-Pad\'e, and our MTP-Lya is $4.34$X faster than the NS iteration and $34.85$X faster than SVD-Pad\'e.
For So-ViT-7 and So-ViT-10, our MPA-Lya achieves the best evaluation results and even slightly outperforms the SVD-based methods. Moreover, on the So-ViT-14 model where the performances are saturated, our method converges faster and requires fewer training epochs. The performance of our MTP-Lya is also on par with the other methods. The PI suggested in the original So-ViT only computes the dominant eigenpair but neglects the rest. Despite its fast speed, its performance is not comparable with the other methods.
\subsection{Ablation Studies}
We conduct three ablation studies to illustrate the impact of the degree of the power series to match in the forward pass, the termination criterion during back-propagation, and the possibility of combining our Lyapunov solver with the SVD and the NS iteration.
\subsubsection{Degree of Power Series to Match for Forward Pass}
Table~\ref{tab:forward_degree} displays the performance of our MPA-Lya for different degrees of the power series. As we use more terms of the power series, the approximation error gets smaller and the performance improves steadily from degree $[3,3]$ to $[5,5]$. When the degree of our MPA is increased from $[5,5]$ to $[6,6]$, there are only marginal improvements. We hence set the forward degrees to $[5,5]$ for our MPA and to $11$ for our MTP as a trade-off between speed and accuracy.
\begin{table}[htbp]
    \centering
    \setlength{\tabcolsep}{1.5pt}
    \caption{Performance of our MPA-Lya versus different degrees of power series to match.}
    \resizebox{0.99\linewidth}{!}{
    \begin{tabular}{r|c|c|c|c|c|c|c}
    \hline
    \multirow{3}*{Degrees} & \multirow{3}*{ Time (ms)} & \multicolumn{4}{c|}{ResNet-18} & \multicolumn{2}{c}{ResNet-50}\\
    \cline{3-8}
    & & \multicolumn{2}{c|}{CIFAR10} & \multicolumn{2}{c|}{CIFAR100} & \multicolumn{2}{c}{CIFAR100}\\
    \cline{3-8}
    && mean$\pm$std & min & mean$\pm$std & min & mean$\pm$std & min \\
    \hline
    $[3,3]$ &0.80 &4.64$\pm$0.11&4.54 &21.35$\pm$0.18&21.20 &20.14$\pm$0.43 & 19.56\\
    $[4,4]$ &0.86 &4.55$\pm$0.08&4.51 &21.26$\pm$0.22&21.03 &19.87$\pm$0.29 & 19.64\\
    $[6,6]$ &0.98 &\textbf{4.45$\pm$0.07}&4.33 &\textbf{21.09$\pm$0.14}&21.04 &\textbf{19.51$\pm$0.24}&19.26\\
    \hline
    $[5,5]$ &0.93 &\textbf{4.39$\pm$0.09} &\textbf{4.25} & \textbf{21.11$\pm$0.12} &\textbf{20.95} & \textbf{19.55$\pm$0.20} & \textbf{19.24} \\
    \hline
    \end{tabular}
    }
    \label{tab:forward_degree}
\end{table}
\subsubsection{Termination Criterion for Backward Pass}
\label{sec:gradient_error}
\begin{table*}[htbp]
    \centering
    \setlength{\tabcolsep}{1.5pt}
    \caption{Performance of our MPA-Lya versus different iteration times.
The residual errors $||{\mathbf{B}}_{k}{-}{\mathbf{I}}||_{\rm F}$ and $||0.5{\mathbf{C}}_{k}-{\mathbf{X}}||_{\rm F}$ are measured based on $10,000$ randomly sampled matrices.}
    \resizebox{0.8\linewidth}{!}{
    \begin{tabular}{r|c|c|c|c|c|c|c|c|c}
    \hline
    \multirow{3}*{Methods} & \multirow{3}*{ Time (ms)} & \multirow{3}*{$||{\mathbf{B}}_{k}{-}{\mathbf{I}}||_{\rm F}$} & \multirow{3}*{$||0.5{\mathbf{C}}_{k}{-}{\mathbf{X}}||_{\rm F}$} & \multicolumn{4}{c|}{ResNet-18} & \multicolumn{2}{c}{ResNet-50}\\
    \cline{5-10}
    & & & & \multicolumn{2}{c|}{CIFAR10} & \multicolumn{2}{c|}{CIFAR100} & \multicolumn{2}{c}{CIFAR100}\\
    \cline{5-10}
    & & & & mean$\pm$std & min & mean$\pm$std & min & mean$\pm$std & min \\
    \hline
    BS algorithm &2.34 &-- &-- & 4.57$\pm$0.10&4.45 &21.20$\pm$0.23&21.01 &\textbf{19.60$\pm$0.16}&19.55 \\
    \#iter 5 &1.14& ${\approx}0.3541$ &${\approx}0.2049$ & 4.48$\pm$0.13&4.31 & 21.15$\pm$0.24&\textbf{20.84} &20.03$\pm$0.19&19.78 \\
    \#iter 6 &1.33& ${\approx}0.0410$ &${\approx}0.0231$ & 4.43$\pm$0.10 &4.28 & 21.16$\pm$0.19 &20.93 & 19.83$\pm$0.24 &19.57 \\
    \#iter 7 &1.52& ${\approx}7e{-}4$ &${\approx}3.5e{-}4$ & 4.45$\pm$0.11&4.29 & 21.18$\pm$0.20&20.95 &19.69$\pm$0.20&19.38 \\
    \#iter 9 &1.83& ${\approx}2e{-}7$ &${\approx}7e{-}6$ & \textbf{4.40$\pm$0.07} &4.28 & \textbf{21.08$\pm$0.15} &20.89 & \textbf{19.52$\pm$0.22} &19.25 \\
    \hline
    \#iter 8 &1.62& ${\approx}3e{-}7$ & ${\approx}7e{-}6$& \textbf{4.39$\pm$0.09}&\textbf{4.25} & \textbf{21.11$\pm$0.12}&20.95 & \textbf{19.55$\pm$0.20}&\textbf{19.24} \\
    \hline
    \end{tabular}
    }
    \label{tab:back_iteration}
\end{table*}
Table~\ref{tab:back_iteration} compares the performance of backward algorithms with different termination criteria, as well as the exact solution computed by the Bartels-Stewart algorithm (BS algorithm)~\cite{bartels1972solution}. Since the NS iteration converges quadratically, the errors $||{\mathbf{B}}_{k}{-}{\mathbf{I}}||_{\rm F}$ and $||0.5{\mathbf{C}}_{k}-{\mathbf{X}}||_{\rm F}$ shrink at an increasingly large rate as the iteration count grows. When we iterate more than $7$ times, the error becomes negligible, \emph{i.e.,} the NS iteration has almost converged. Moreover, from $8$ iterations to $9$ iterations, there are no obvious performance improvements. We thus terminate the iteration after $8$ steps.
The exact gradient calculated by the BS algorithm does not yield the best results. It only achieves the least fluctuation on ResNet-50, while its other results are inferior to those of our iterative solver. This is because the formulation of our Lyapunov equation is based on the assumption that the accurate matrix square root is computed, whereas in practice we only compute an approximate one in the forward pass. In this case, calculating \textit{the accurate gradient of the approximate matrix square root} might not necessarily work better than \textit{the approximate gradient of the approximate matrix square root}.
\subsubsection{Lyapunov Solver as a General Backward Algorithm}
\label{sec:lya_backward}
We note that our proposed iterative Lyapunov solver is a general backward algorithm for computing the matrix square root. That is to say, it should also be compatible with the SVD and the NS iteration as the forward pass. For the NS-Lya, our previous conference paper~\cite{song2022fast} shows that the NS iteration used in~\cite{higham2008functions,li2017second} does not converge on any dataset. In this extended manuscript, we find that the underlying reason is the inconsistency between the FP and the BP.
The NS iteration of~\cite{higham2008functions,li2017second} is a coupled iteration that uses two variables ${\mathbf{Y}}_{k}$ and ${\mathbf{Z}}_{k}$ to compute the matrix square root. For the BP algorithm, the NS iteration is defined to compute the matrix sign and only uses one variable ${\mathbf{Y}}_{k}$. The term ${\mathbf{Z}}_{k}$ is not involved in the BP, and we have no control over the gradient back-propagating through it, which results in the non-convergence of the model. To resolve this issue, we propose to change the forward coupled NS iteration to a variant that uses one variable:
\begin{equation}
    {\mathbf{Z}}_{k+1}=\frac{1}{2}(3{\mathbf{Z}}_{k}-{\mathbf{Z}}_{k}^{3}\frac{{\mathbf{A}}}{||{\mathbf{A}}||_{\rm F}})
\end{equation}
where ${\mathbf{Z}}_{k+1}$ converges to the inverse square root ${\mathbf{A}}^{-\frac{1}{2}}$. This variant of the NS iteration is often used to directly compute the inverse square root~\cite{huang2019iterative,bini2005algorithms}. ${\mathbf{Z}}_{0}$ is initialized with ${\mathbf{I}}$, and the post-compensation is calculated as ${\mathbf{Z}}_{k}=\frac{1}{\sqrt{||{\mathbf{A}}||_{\rm F}}} {\mathbf{Z}}_{k}$. Although the modified NS iteration uses only one variable, we note that it is an equivalent representation of the previous NS iteration. More formally, we have:
\begin{prop}
The one-variable NS iteration of~\cite{huang2019iterative,bini2005algorithms} is equivalent to the two-variable NS iteration of~\cite{li2017second,lin2017improved,higham2008functions}.
\end{prop}
We give the proof in the Supplementary Material. The modified forward NS iteration is compatible with our iterative Lyapunov solver. Table~\ref{tab:abla_combination} compares the performance of different methods that use the Lyapunov solver as the backward algorithm. Both SVD-Lya and NS-Lya achieve competitive performances.
\begin{table}[htbp]
    \centering
    \setlength{\tabcolsep}{1.5pt}
    \caption{Performance comparison of SVD-Lya and NS-Lya.}
    \resizebox{0.99\linewidth}{!}{
    \begin{tabular}{r|c|c|c|c|c|c|c}
    \hline
    \multirow{3}*{Methods} & \multirow{3}*{ Time (ms)} & \multicolumn{4}{c|}{ResNet-18} & \multicolumn{2}{c}{ResNet-50}\\
    \cline{3-8}
    & & \multicolumn{2}{c|}{CIFAR10} & \multicolumn{2}{c|}{CIFAR100} & \multicolumn{2}{c}{CIFAR100}\\
    \cline{3-8}
    && mean$\pm$std & min & mean$\pm$std & min & mean$\pm$std & min \\
    \hline
    SVD-Lya &4.47 &4.45$\pm$0.16 &\textbf{4.20} &21.24$\pm$0.24 &21.02 &\textbf{19.41$\pm$0.11}&19.26 \\
    NS-Lya &2.88 & 4.51$\pm$0.14 & 4.34 & 21.16$\pm$0.17 & 20.94 &19.65$\pm$0.35 & 19.39\\
    \hline
    MPA-Lya & 2.61 &\textbf{4.39$\pm$0.09} &4.25 & $\textbf{21.11$\pm$0.12}$ &20.95 & \textbf{19.55$\pm$0.20} &\textbf{19.24}\\
    MTP-Lya & \textbf{2.46} & 4.49$\pm$0.13 &4.31 & 21.42$\pm$0.21 &21.24 & 20.55$\pm$0.37&20.12\\
    \hline
    \end{tabular}
    }
    \label{tab:abla_combination}
\end{table}
\label{sec:abla}
\section{Conclusion}\label{sec:conclusion}
In this paper, we propose two fast methods to compute the differentiable matrix square root and inverse square root. In the forward pass, the MTP and the MPA are applied to approximate the matrix square root, while an iterative Lyapunov solver is proposed to solve the gradient function for back-propagation. A number of numerical tests and computer vision applications demonstrate that our methods achieve both fast speed and competitive performance.
\section*{Supplementary Material}
\section{Summary of Algorithm}
Algorithm~\ref{alg:fp} and Algorithm~\ref{alg:bp} summarize the forward pass (FP) and the backward pass (BP) of our proposed methods, respectively. The hyper-parameter $K$ in Algorithm~\ref{alg:fp} denotes the degree of the power series, and $T$ in Algorithm~\ref{alg:bp} denotes the number of iterations.
\begin{algorithm}
\SetAlgoLined
\KwIn{ ${\mathbf{A}}$ and $K$}
\KwOut{ ${\mathbf{A}}^{\frac{1}{2}}$ or ${\mathbf{A}}^{-\frac{1}{2}}$}
\eIf{MTP}{
\tcp{FP method is MTP}
\eIf{Matrix Square Root}{
${\mathbf{A}}^{\frac{1}{2}}{\leftarrow} {\mathbf{I}} {-} \sum_{k=1}^{K} \Big|\dbinom{\frac{1}{2}}{k}\Big| ({\mathbf{I}}-\frac{{\mathbf{A}}}{||{\mathbf{A}}||_{\rm F}})^{k} $\;}
{
${\mathbf{A}}^{-\frac{1}{2}}{\leftarrow} {\mathbf{I}} {+} \sum_{k=1}^{K} \Big|\dbinom{-\frac{1}{2}}{k}\Big| ({\mathbf{I}}-\frac{{\mathbf{A}}}{||{\mathbf{A}}||_{\rm F}})^{k}$\;
}
}
{\tcp{FP method is MPA}
$M{\leftarrow}\frac{K-1}{2}$, $N{\leftarrow}\frac{K-1}{2}$\;
${\mathbf{P}}_{M}{\leftarrow} {\mathbf{I}} {-} \sum_{m=1}^{M} p_{m} ({\mathbf{I}}-\frac{{\mathbf{A}}}{||{\mathbf{A}}||_{\rm F}})^{m}$\;
${\mathbf{Q}}_{N}{\leftarrow} {\mathbf{I}} {-} \sum_{n=1}^{N} q_{n} ({\mathbf{I}}-\frac{{\mathbf{A}}}{||{\mathbf{A}}||_{\rm F}})^{n}$\;
\eIf{Matrix Square Root}{${\mathbf{A}}^{\frac{1}{2}}{\leftarrow}{\mathbf{Q}}_{N}^{-1}{\mathbf{P}}_{M}$\;}
{${\mathbf{A}}^{-\frac{1}{2}}{\leftarrow}{\mathbf{P}}_{M}^{-1}{\mathbf{Q}}_{N}$\;}
}
\eIf{Matrix Square Root}{Post-compensate ${\mathbf{A}}^{\frac{1}{2}}{\leftarrow}\sqrt{||{\mathbf{A}}||_{\rm F}}\cdot{\mathbf{A}}^{\frac{1}{2}}$}
{Post-compensate ${\mathbf{A}}^{-\frac{1}{2}}{\leftarrow}\frac{1}{\sqrt{||{\mathbf{A}}||_{\rm F}}}\cdot{\mathbf{A}}^{-\frac{1}{2}}$}
\caption{FP of our MTP and MPA for the matrix square root and the inverse square root.}
\label{alg:fp}
\end{algorithm}
\begin{algorithm}
\SetAlgoLined
\KwIn{$\frac{\partial l}{\partial {\mathbf{A}}^{\frac{1}{2}}}$ or $\frac{\partial l}{\partial {\mathbf{A}}^{-\frac{1}{2}}}$, ${\mathbf{A}}^{\frac{1}{2}}$ or ${\mathbf{A}}^{-\frac{1}{2}}$, and $T$}
\KwOut{$\frac{\partial l}{\partial {\mathbf{A}}}$}
\eIf{Matrix Square Root}
{${\mathbf{B}}_{0}{\leftarrow}{\mathbf{A}}^{\frac{1}{2}}$, ${\mathbf{C}}_{0}{\leftarrow}\frac{\partial l}{\partial {\mathbf{A}}^{\frac{1}{2}}}$, $k{\leftarrow}0$\;}
{${\mathbf{B}}_{0}{\leftarrow}{\mathbf{A}}^{-\frac{1}{2}}$, ${\mathbf{C}}_{0}{\leftarrow}-{\mathbf{A}}^{-1}\frac{\partial l}{\partial {\mathbf{A}}^{-\frac{1}{2}}}{\mathbf{A}}^{-1}$, $k{\leftarrow}0$\;}
Normalize ${\mathbf{B}}_{0}{\leftarrow}\frac{{\mathbf{B}}_{0}}{||{\mathbf{B}}_{0}||_{\rm F}}$, ${\mathbf{C}}_{0}{\leftarrow}\frac{{\mathbf{C}}_{0}}{||{\mathbf{B}}_{0}||_{\rm F}}$\;
\While{$k<T$}{
\tcp{Coupled iteration}
${\mathbf{B}}_{k+1}{\leftarrow}\frac{1}{2} {\mathbf{B}}_{k} (3{\mathbf{I}}-{\mathbf{B}}_{k}^2)$ \;
${\mathbf{C}}_{k+1}{\leftarrow}\frac{1}{2} \Big(-{\mathbf{B}}_{k}^{2}{\mathbf{C}}_{k} + {\mathbf{B}}_{k}{\mathbf{C}}_{k}{\mathbf{B}}_{k} + {\mathbf{C}}_{k}(3{\mathbf{I}}-{\mathbf{B}}_{k}^2)\Big)$ \;
$k{\leftarrow}k+1$\;
}
$\frac{\partial l}{\partial {\mathbf{A}}}{\leftarrow}\frac{1}{2}{\mathbf{C}}_{k}$ \;
\caption{BP of our Lyapunov solver for the matrix square root and the inverse square root.}
\label{alg:bp}
\end{algorithm}
\section{Theoretical Derivation and Proof}
\subsection{Iterative Lyapunov Function Solver}
\begin{dup}[Matrix Sign Function~\cite{higham2008functions}]
\label{sign_2}
For a given matrix ${\mathbf{H}}$ with no eigenvalues on the imaginary axis, its sign function has the following
properties: 1) $sign({\mathbf{H}})^2={\mathbf{I}}$; 2) if ${\mathbf{H}}$ has the Jordan decomposition ${\mathbf{H}}{=}{\mathbf{T}}{\mathbf{M}}{\mathbf{T}}^{-1}$, then its sign function satisfies $sign({\mathbf{H}}){=}{\mathbf{T}} sign({\mathbf{M}}) {\mathbf{T}}^{-1}$.
\end{dup}
\begin{proof}
The first property is easy to prove. Consider the eigendecomposition ${\mathbf{H}}={\mathbf{U}}{\mathbf{S}}{\mathbf{U}}^{-1}$. As the sign depends on the positiveness of the eigenvalues, the square of the sign function is computed as:
\begin{equation}
    sign({\mathbf{H}})^2= {\mathbf{U}} sign({\mathbf{S}})^2 {\mathbf{U}}^{-1}
\end{equation}
Since all eigenvalues are real, we have $sign({\mathbf{S}})^2{=}{\mathbf{I}}$, and the first property is proved.
The alternative definition of the matrix sign function is given by:
\begin{equation}
    sign({\mathbf{H}}) = {\mathbf{H}}({\mathbf{H}}^{2})^{-\frac{1}{2}}
\end{equation}
Injecting the Jordan decomposition ${\mathbf{H}}{=}{\mathbf{T}}{\mathbf{M}}{\mathbf{T}}^{-1}$ into the above equation leads to
\begin{equation}
    \begin{aligned}
    sign({\mathbf{H}}) &= {\mathbf{T}}{\mathbf{M}}{\mathbf{T}}^{-1}({\mathbf{T}}{\mathbf{M}}^2{\mathbf{T}}^{-1})^{-\frac{1}{2}}\\
    &= {\mathbf{T}}{\mathbf{M}}{\mathbf{T}}^{-1} {\mathbf{T}} sign({\mathbf{M}}){\mathbf{M}}^{-1}{\mathbf{T}}^{-1} \\
    &= {\mathbf{T}} sign({\mathbf{M}}) {\mathbf{T}}^{-1}
    \end{aligned}
\end{equation}
This proves the second property.
\end{proof}
Now we show in detail how to derive the iterative solver for the matrix sign function. Lemma~\ref{sign_2}.1 shows that $sign({\mathbf{H}})$ is a matrix square root of the identity matrix. We use the Newton-Schulz iteration to compute $sign({\mathbf{H}})$ as:
\begin{equation}
    \begin{aligned}
    {\mathbf{H}}_{k+1} &{=} \frac{1}{2}{\mathbf{H}}_{k}(3{\mathbf{I}}-{\mathbf{H}}_{k}^2)\\
    &{=}\frac{1}{2}\begin{bmatrix}
    {\mathbf{B}}_{k}{(}3{\mathbf{I}}{-}{\mathbf{B}}_{k}^2{)} & 3{\mathbf{C}}_{k}-{\mathbf{B}}_{k}{(}{\mathbf{B}}_{k}{\mathbf{C}}_{k}{-}{\mathbf{C}}_{k}{\mathbf{B}}_{k}{)}{-}{\mathbf{C}}_{k}{\mathbf{B}}_{k}^2 \\
    \mathbf{0} & -{\mathbf{B}}_{k}{(}3{\mathbf{I}}{-}{\mathbf{B}}_{k}^2{)}
    \end{bmatrix}
    \end{aligned}
\end{equation}
Lemma~\ref{sign_2}.2 indicates an alternative approach to compute the sign function. Let ${\mathbf{X}}$ denote the solution of the Lyapunov equation ${\mathbf{B}}{\mathbf{X}}{+}{\mathbf{X}}{\mathbf{B}}{=}{\mathbf{C}}$, which block-diagonalizes ${\mathbf{H}}$ via the similarity transform $\begin{bmatrix} {\mathbf{I}} & -{\mathbf{X}}\\ \mathbf{0} & {\mathbf{I}} \end{bmatrix}$. Then:
\begin{equation}
    \begin{aligned}
    sign({\mathbf{H}}) &= sign\Big(\begin{bmatrix}
    {\mathbf{B}} & {\mathbf{C}}\\
    \mathbf{0} & -{\mathbf{B}}
    \end{bmatrix}\Big)\\
    & = \begin{bmatrix}
    {\mathbf{I}} & -{\mathbf{X}}\\
    \mathbf{0} & {\mathbf{I}}
    \end{bmatrix}
    sign\Big(
    \begin{bmatrix}
    {\mathbf{B}} & \mathbf{0}\\
    \mathbf{0} & -{\mathbf{B}}
    \end{bmatrix}
    \Big)
    \begin{bmatrix}
    {\mathbf{I}} & -{\mathbf{X}}\\
    \mathbf{0} & {\mathbf{I}}
    \end{bmatrix}^{-1} \\
    & = \begin{bmatrix}
    {\mathbf{I}} & -{\mathbf{X}}\\
    \mathbf{0} & {\mathbf{I}}
    \end{bmatrix}
    \begin{bmatrix}
    {\mathbf{I}} & \mathbf{0}\\
    \mathbf{0} & -{\mathbf{I}}
    \end{bmatrix}
    \begin{bmatrix}
    {\mathbf{I}} & {\mathbf{X}}\\
    \mathbf{0} & {\mathbf{I}}
    \end{bmatrix} \\
    &=\begin{bmatrix}
    {\mathbf{I}} & 2 {\mathbf{X}}\\
    \mathbf{0} & -{\mathbf{I}}
    \end{bmatrix}
    \end{aligned}
\end{equation}
The above two equations define the coupled iteration and its convergence: ${\mathbf{B}}_{k}$ converges to ${\mathbf{I}}$ and ${\mathbf{C}}_{k}$ converges to $2{\mathbf{X}}$, which is why the solution of the Lyapunov equation is recovered as ${\mathbf{X}}{=}\frac{1}{2}\lim_{k\to\infty}{\mathbf{C}}_{k}$.
\subsection{Equivalence of Two Sets of MPA}
\begin{duplicate}
The diagonal MPA $\frac{1}{\sqrt{||{\mathbf{A}}||_{\rm F}}}{\mathbf{S}}_{N}^{-1}{\mathbf{R}}_{M}$ is equivalent to the diagonal MPA $\frac{1}{\sqrt{||{\mathbf{A}}||_{\rm F}}}{\mathbf{P}}_{M}^{-1}{\mathbf{Q}}_{N}$, and the relations $p_{m}{=}-s_{n}$ and $q_{n}{=}-r_{m}$ hold for any $m{=}n$.
\end{duplicate}
\begin{proof}
Though Pad\'e approximants are derived from a finite Taylor series, they are asymptotic to their infinite Taylor series~\cite{van2006pade}. Let $f(z){=}(1-z)^{\frac{1}{2}}$ and $f(z)^{-1}{=}(1-z)^{-\frac{1}{2}}$. We have the relations:
\begin{equation}
    \begin{gathered}
    \frac{1+\sum_{m=1}^{M}r_{m}z^{m}}{1+\sum_{n=1}^{N}s_{n}z^{n}} = f(z)^{-1} +R(z^{M+N+1})\\
    \frac{1-\sum_{m=1}^{M}p_{m}z^{m}}{1-\sum_{n=1}^{N}q_{n}z^{n}} =f(z) +R(z^{M+N+1})\\
    \end{gathered}
\end{equation}
where $R(z^{M+N+1})$ is the discarded higher-order term. Since $f(z)=\frac{1}{f(z)^{-1}}$, we have:
\begin{equation}
    \frac{1+\sum_{m=1}^{M}r_{m}z^{m}}{1+\sum_{n=1}^{N}s_{n}z^{n}}=\frac{1-\sum_{n=1}^{N}q_{n}z^{n}}{1-\sum_{m=1}^{M}p_{m}z^{m}}.
\end{equation}
Now we have two sets of Pad\'e approximants on both sides. Since the numerator and denominator of a Pad\'e approximant are relatively prime to each other by definition~\cite{baker1970pade}, the two sets of Pad\'e approximants are equivalent and we have:
\begin{equation}
    p_{m}=-s_{n},\ q_{n}=-r_{m}
\end{equation}
Generalized to the matrix case, this leads to:
\begin{equation}
    {\mathbf{P}}_{M}={\mathbf{S}}_{N},\ {\mathbf{Q}}_{N}={\mathbf{R}}_{M}.
\end{equation}
Therefore, we also have ${\mathbf{S}}_{N}^{-1}{\mathbf{R}}_{M}{=}{\mathbf{P}}_{M}^{-1}{\mathbf{Q}}_{N}$. The two sets of MPA are actually the same representation when $m{=}n$.
\end{proof}
\subsection{Equivalence of Newton-Schulz Iteration}
\begin{duplicate}
The one-variable NS iteration of~\cite{huang2019iterative,bini2005algorithms} is equivalent to the two-variable NS iteration of~\cite{li2017second,lin2017improved,higham2008functions}.
\end{duplicate}
\begin{proof}
For the two-variable NS iteration, the coupled iteration is computed as:
\begin{equation}
    {\mathbf{Y}}_{k+1}=\frac{1}{2}{\mathbf{Y}}_{k} (3{\mathbf{I}} - {\mathbf{Z}}_{k}{\mathbf{Y}}_{k}), {\mathbf{Z}}_{k+1}=\frac{1}{2}(3{\mathbf{I}}-{\mathbf{Z}}_{k}{\mathbf{Y}}_{k}){\mathbf{Z}}_{k}
    \label{prop_ns_two}
\end{equation}
where ${\mathbf{Y}}_{k}$ and ${\mathbf{Z}}_{k}$ converge to ${\mathbf{A}}^{\frac{1}{2}}$ and ${\mathbf{A}}^{-\frac{1}{2}}$, respectively. The two variables are initialized as ${\mathbf{Y}}_{0}{=}\frac{{\mathbf{A}}}{||{\mathbf{A}}||_{\rm F}}$ and ${\mathbf{Z}}_{0}{=}{\mathbf{I}}$. Since the two variables satisfy the relation ${\mathbf{Z}}_{k}^{-1}{\mathbf{Y}}_{k}{=}\frac{{\mathbf{A}}}{||{\mathbf{A}}||_{\rm F}}$, we can replace ${\mathbf{Y}}_{k}$ in~\cref{prop_ns_two} with ${\mathbf{Z}}_{k}\frac{{\mathbf{A}}}{||{\mathbf{A}}||_{\rm F}}$:
\begin{equation}
    {\mathbf{Z}}_{k+1}=\frac{1}{2}(3{\mathbf{I}}-{\mathbf{Z}}_{k}^{2}\frac{{\mathbf{A}}}{||{\mathbf{A}}||_{\rm F}}){\mathbf{Z}}_{k}
\end{equation}
Notice that ${\mathbf{A}}$ and ${\mathbf{Z}}_{k}$ have the same eigenspace and their matrix product commutes, \emph{i.e.,} ${\mathbf{A}}{\mathbf{Z}}_{k}{=}{\mathbf{Z}}_{k}{\mathbf{A}}$. Therefore, the above equation can be further simplified as:
\begin{equation}
    {\mathbf{Z}}_{k+1}=\frac{1}{2}(3{\mathbf{Z}}_{k}-{\mathbf{Z}}_{k}^{3}\frac{{\mathbf{A}}}{||{\mathbf{A}}||_{\rm F}})
\end{equation}
As indicated above, the two seemingly different NS iterations are in essence equivalent.
\end{proof}
\section{Baselines}
\label{app:baselines}
In the experimental section, we compare our two proposed methods with the following baselines:
\begin{itemize}
    \item Power Iteration (PI). It is suggested in the original So-ViT to compute only the dominant eigenpair.
\item SVD-PI~\cite{wang2019backpropagation} that uses the PI to compute the gradients of the SVD.
    \item SVD-Taylor~\cite{wang2021robust,song2021approximate} that applies the Taylor polynomial to approximate the gradients.
    \item SVD-Pad\'e~\cite{song2021approximate} that proposes to closely approximate the SVD gradients using Pad\'e approximants. Notice that our MTP/MPA used in the FP is fundamentally different from the Taylor polynomial or Pad\'e approximants used in the BP of SVD-Pad\'e. For our method, we use the Matrix Taylor Polynomial (MTP) and the Matrix Pad\'e Approximants (MPA) to derive the matrix square root in the FP. For SVD-Pad\'e, they use the scalar Taylor polynomial and scalar Pad\'e approximants to approximate the gradient $\frac{1}{\lambda_{i}-\lambda_{j}}$ in the BP. That is to say, they use these techniques only to compute the gradient, which does not involve back-propagating through the Taylor polynomial or Pad\'e approximants.
    \item NS iteration~\cite{schulz1933iterative,higham2008functions} that uses the Newton-Schulz iteration to compute the matrix square root. It has been widely applied in different tasks, including covariance pooling~\cite{li2018towards} and ZCA whitening~\cite{huang2018decorrelated}. We note that although~\cite{huang2019iterative} and~\cite{higham2008functions} use different forms of the NS iteration, the two representations are equivalent to each other (see the proof in the previous section). The modified NS iteration in~\cite{huang2019iterative} just replaces ${\mathbf{Y}}_{k}$ with ${\mathbf{Z}}_{k}{\mathbf{A}}$ and re-formulates the iteration using one variable. The computational complexity is still the same.
\end{itemize}
As the ordinary differentiable SVD suffers from the gradient explosion issue and easily causes the program to fail, we do not take it into account for comparison.
Unlike previous methods such as the SVD and the NS iteration, our MPA-Lya/MTP-Lya does not have a consistent FP and BP algorithm. However, we do not think this brings any caveats to the stability or performance. Our MTP and MPA do not need a coupled iteration in the FP and always have the gradient back-propagating through $\mathbf{A}^{\frac{1}{2}}$ or $\mathbf{A}^{-\frac{1}{2}}$ in the BP, which guarantees the training stability. Moreover, our ablation study implies that our BP Lyapunov solver approximates the real gradient very well (\emph{i.e.,} $||{\mathbf{B}}_{k}{-}{\mathbf{I}}||_{\rm F}{<}3e{-}7$ and $||0.5{\mathbf{C}}_{k}{-}{\mathbf{X}}||_{\rm F}{<}7e{-}6$). Also, our extensive experiments demonstrate superior performances. In light of these experimental results, we argue that as long as the BP algorithm is accurate enough, the inconsistency between the BP and FP is not an issue.
\section{Experimental Settings}
\label{app:exp_set}
All the source code is implemented in PyTorch. For the SVD methods, the forward eigendecomposition is performed on the CPU using the official PyTorch function \textsc{torch.svd}, which calls LAPACK's routine \textit{gesdd} that uses the Divide-and-Conquer algorithm for fast calculation. All the numerical tests are conducted on a single workstation equipped with a Tesla K40 GPU and a 6-core Intel(R) Xeon(R) CPU @ 2.20GHz.
For our method throughout all the experiments, in the forward pass, we match the MTP to the power series of degree $11$ and set the degree for both the numerator and denominator of our MPA to $5$. We keep iterating $8$ times for our backward Lyapunov solver. Now we turn to the implementation details for each experiment in the paper.
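Under these settings, the degree-$11$ MTP forward pass can be sketched in a few lines of PyTorch; the binomial coefficients $\binom{1/2}{k}$ are generated by a simple recurrence, and the function name is our own illustration:
\begin{verbatim}
import torch

def mtp_sqrt(A, K=11):
    """Matrix Taylor Polynomial of degree K for A^{1/2} (our default K = 11)."""
    norm = torch.norm(A, p='fro')
    I = torch.eye(A.shape[-1], dtype=A.dtype, device=A.device)
    Y = I - A / norm
    out, Y_pow, coef = I.clone(), I.clone(), 1.0
    for k in range(1, K + 1):
        coef *= (0.5 - (k - 1)) / k    # recurrence for binom(1/2, k)
        Y_pow = Y_pow @ Y
        out = out - abs(coef) * Y_pow  # A^{1/2} ~ I - sum_k |binom(1/2,k)| Y^k
    return torch.sqrt(norm) * out      # post-compensation
\end{verbatim}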
\subsection{Decorrelated Batch Normalization}
\begin{figure}[htbp]
    \centering
    \includegraphics[width=0.5\linewidth]{imgs/arch_deBN.jpg}
    \caption{The architecture changes of ResNet models in the experiment of ZCA whitening. The decorrelated batch normalization layer is inserted after the first convolutional layer. The kernel sizes, the stride of the first convolutional layer, and the stride of the first ResNet block are changed correspondingly.}
    \label{fig:arch_BN}
\end{figure}
Fig.~\ref{fig:arch_BN} displays the detailed architecture changes of ResNet. As suggested by~\cite{wang2020deep}, we truncate the Taylor polynomial to degree $20$ for SVD-Taylor. To make the Pad\'e approximant match the same degree as the Taylor polynomial, we set the degree of both the numerator and denominator to $10$ for SVD-Pad\'e. For SVD-PI, the iteration count is also set to $20$. For the NS iteration, according to the setting in~\cite{li2018towards,huang2018decorrelated}, we set the iteration count to $5$. The other experimental settings follow the implementation in~\cite{wang2021robust}. We use the workstation equipped with a Tesla K40 GPU and a 6-core Intel(R) Xeon(R) CPU @ 2.20GHz for training.
Notice that in our previous conference paper, we first calculate the matrix square root ${\mathbf{A}}^{\frac{1}{2}}$ and then compute ${\mathbf{X}}_{whitened}$ by solving the linear system ${\mathbf{A}}^{\frac{1}{2}}{\mathbf{X}}_{whitened}{=}{\mathbf{X}}$. Thanks to the extension of the algorithm to the inverse square root, we can directly compute ${\mathbf{A}}^{-\frac{1}{2}}$ in this paper.
\subsection{Second-order Vision Transformer}
We use 8 Tesla K40 GPUs for distributed training, and the NVIDIA Apex mixed-precision trainer is used. Except for the spectral layer, which uses single-precision (\emph{i.e.,} float32), the other layers use half-precision (\emph{i.e.,} float16) to accelerate the training. Other implementation details follow the experimental setting of the original So-ViT~\cite{xie2021so}.
Following the experiment of covariance pooling for CNNs~\cite{song2021approximate}, the degree of the Taylor polynomial is truncated to $100$ for SVD-Taylor, and the degree of both the numerator and denominator of the Pad\'e approximants is set to $50$ for SVD-Pad\'e. The iteration count of SVD-PI is set to $100$. In the experiment of covariance pooling, more terms of the Taylor series are used because the covariance pooling meta-layer requires more accurate gradient estimation~\cite{song2021approximate}.
For the SVD-based methods, double-precision is usually required to ensure an effective numerical representation of the eigenvalues. Using a lower precision would make the model fail to converge at the beginning of the training~\cite{song2021approximate}. This is particularly severe for vision transformers, which are known to be slow and hard to converge in the early training stage. One may consider casting the tensor into double-precision (64 bits) to alleviate this issue. However, this will trigger much larger gradients and introduce round-off errors when the gradient is passed to the previous layer in half-precision (16 bits). To avoid this caveat, we first apply the NS iteration to train the network for $50$ epochs, then switch to the corresponding SVD method and continue the training till the end. This hybrid approach avoids the non-convergence of the SVD methods at the beginning of the training phase.
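One hedged way to realize this precision split is to cast only the spectral computation to float32 and return the result in the ambient dtype; the module below is a sketch reusing the \texttt{mpa\_sqrt} example from the experiments section, not the released trainer code:
\begin{verbatim}
import torch

class SpectralSqrt(torch.nn.Module):
    """Keeps the matrix square root in single-precision under
    mixed-precision training."""
    def forward(self, cov):
        out = mpa_sqrt(cov.float())  # spectral layer runs in float32
        return out.to(cov.dtype)     # cast back (e.g., float16) for later layers
\end{verbatim}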
\subsection{Global Covariance Pooling}
For the experiments on large-scale and fine-grained image recognition, we refer to~\cite{song2021approximate} for all the experimental settings. In the video action recognition experiment~\cite{gao2021temporal}, the iteration count for the NS iteration is set to $5$. Other implementation details are unchanged.
\subsection{Neural Style Transfer}
\begin{table*}[htbp]
    \caption{The detailed LPIPS~\cite{zhang2018perceptual} score and user preference (\%) on each subset of Artworks dataset.}
    \centering
    \resizebox{0.9\linewidth}{!}{
    \begin{tabular}{r|c|c|c|c|c|c|c|c|c|c}
    \hline
    \multirow{2}*{Methods} & \multicolumn{5}{c|}{LPIPS~\cite{zhang2018perceptual} Score ($\uparrow$)} & \multicolumn{5}{c}{User Preference ($\uparrow$)} \\
    \cline{2-11}
    & Cezanne & Monet & Vangogh & Ukiyoe & Average & Cezanne & Monet & Vangogh & Ukiyoe & Average \\
    \hline
    SVD-Taylor &0.4937 &0.4820 &\textbf{0.6074} &0.5274 & 0.5276 & 15 &16 &\textbf{25} &9 &16.25\\
    SVD-Pad\'e &0.6179 &0.4783 &0.5307 &0.5419 &0.5422 & \textbf{28} &13 &15 & 21& 19.25\\
    NS iteration &0.5328 &\textbf{0.5329} &0.5386 &0.6270 & 0.5578&11 & 18 &21 & 18 & 17.00\\
    \hline
    Our MPA-Lya &\textbf{0.6332} &0.5291 &0.4511 &\textbf{0.6325} & \textbf{0.5615} & 25 &\textbf{29} &18 &\textbf{27} & \textbf{24.75}\\
    Our MTP-Lya &0.6080 &0.4826 &0.4796 &{0.6253} & 0.5489 & 17&21 &17 & 19 & 18.50 \\
    \hline
    \end{tabular}
    }
    \label{tab:style_transfer_full}
\end{table*}
\begin{figure*}[htbp]
    \centering
    \includegraphics[width=0.85\linewidth]{imgs/style_transfer_visual_large.png}
    \caption{More exemplary visualizations on Artworks~\cite{isola2017image} dataset. Our methods generate sharper images with a more coherent style and better visual appeal. The red rectangles indicate regions with subtle details.}
    \label{fig:style_transfer_visual_large}
\end{figure*}
For the loss functions, we follow the settings in~\cite{cho2019image} and use the cycle-consistent reconstruction loss in both the latent and the pixel space. The image is resized to the resolution of $216{\times}216$ before being passed to the network, and the model is trained for $100,000$ iterations. The batch size is set to $4$. Table~\ref{tab:style_transfer_full} and Fig.~\ref{fig:style_transfer_visual_large} present the detailed quantitative evaluation and more visual comparisons, respectively.
As suggested in~\cite{li2017universal,wang2020diversified}, we use the LPIPS~\cite{zhang2018perceptual} score and the user preference as the evaluation metrics. For the LPIPS metric, we compute the score between each transferred image and its content image. A higher LPIPS score implies that the image carries less content information but more style information. For the user study, we randomly select $100$ images from each dataset and ask $20$ volunteers to vote for the image that better characterizes the style information. In cases where a volunteer thinks none of the images correctly carries the style, he/she can abstain and not vote for any of them.
\section{Comparison of Lyapunov Solver against Implicit Function and Automatic Differentiation}
Besides our proposed custom Lyapunov gradient solver, one may consider alternative gradient computation schemes, such as reverse-mode automatic differentiation (RMAD) and the implicit function (IF). For the RMAD, the backward pass indeed takes roughly the same operation cost as the forward pass.
Considering that our MPA uses two sets of matrix power polynomials and one matrix inverse, using RMAD for the gradient computation would be less efficient than the Lyapunov solver, which only involves matrix multiplications. Moreover, the gradients of some intermediate variables of the MPA would be calculated in the RMAD, which would further increase unnecessary memory costs. For the IF, the function for the matrix square root can be defined as $f({\mathbf{A}},{\mathbf{A}}^\frac{1}{2})=({\mathbf{A}}^{\frac{1}{2}})^2-{\mathbf{A}}$, where ${\mathbf{A}}^{\frac{1}{2}}$ can be regarded as a function of ${\mathbf{A}}$. Performing implicit differentiation and multiplying both sides with $\frac{\partial l}{\partial {\mathbf{A}}^{\frac{1}{2}}}$ leads to the gradient equation $\frac{\partial l}{\partial {\mathbf{A}}}=-(\frac{\partial f}{\partial {\mathbf{A}}^{\frac{1}{2}}})^{-1}\frac{\partial f}{\partial {\mathbf{A}}}\frac{\partial l}{\partial {\mathbf{A}}^{\frac{1}{2}}}$. The memory usage of the IF should be small since only the gradient of $f$ is introduced in the computation. However, the time cost can be high due to the evaluation of the function gradients $\frac{\partial f}{\partial {\mathbf{A}}}$ and $\frac{\partial f}{\partial {\mathbf{A}}^{\frac{1}{2}}}$ as well as the matrix inverse computation. \begin{table}[htbp] \centering \caption{Backward time and memory consumption for batched matrices of size $64{\times}64{\times}64$. We use MPA for the forward pass, and the evaluation is averaged over $1,000$ randomly generated matrices.} \begin{tabular}{c|c|c} \toprule Method & Time (ms) & Memory (MB) \\ \hline Lyapunov &\textbf{2.19} & \textbf{1.99}\\ RMAD &5.69 & 3.08 \\ IF &4.71 & 2.03\\ \bottomrule \end{tabular} \label{tab:rmad_if} \end{table} Table~\ref{tab:rmad_if} compares the speed and memory consumption. Our Lyapunov solver outperforms both schemes in terms of speed and memory. The memory usage of the IF is competitive, which also meets our expectation. In general, our Lyapunov-based solver can be viewed as a well-optimized RMAD compiler with the least memory and time consumption. \section{Stability of Pad\'e Approximants} \label{app:stability_pade} In the presence of spurious poles~\cite{stahl1998spurious,baker2000defects}, Pad\'e approximants are very likely to suffer from the well-known defects of instability. Spurious poles mean that when the approximated function has very close poles and zeros, the corresponding Pad\'e approximants will also have close poles and zeros. Consequently, the Pad\'e approximants become very unstable in the region of defects (\emph{i.e.,} when the input is in the neighborhood of the poles and zeros). Generalized to the matrix case, spurious poles can occur when the determinant of the matrix denominator is zero (\emph{i.e.,} $\det{({\mathbf{Q}}_{N})}=0$). However, in our case, the approximated function for the matrix square root is $(1-z)^{\frac{1}{2}}$ for $|z|<1$, which only has one zero at $z=1$ and does not have any poles. For the inverse square root, the approximated function $(1-z)^{-\frac{1}{2}}$ has one pole but does not have any zeros. Therefore, spurious poles do not exist in our approximation, and our Pad\'e approximants have no defects. Now we briefly prove this claim for the matrix square root. The proof for the inverse square root can be given similarly, and we omit it here for conciseness. 
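Before the formal argument, the absence of zeros can also be checked numerically; the following is a minimal sketch using \texttt{numpy} with the $[5,5]$ denominator coefficients quoted in the proof below:

\begin{verbatim}
import numpy as np

# Denominator polynomial of the [5,5] Pade approximant evaluated on
# x in [0,1]; the proof below shows analytically that it has no zeros.
q = [2.25, -1.75, 0.54675, -0.05859375, 0.0009765625]
x = np.linspace(0.0, 1.0, 100001)
f = 1.0 - sum(qn * x**n for n, qn in enumerate(q, start=1))
print(f.min())  # ~0.0108672, attained at x = 1, so det(Q_N) != 0
\end{verbatim}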
Consider the denominator of our Pad\'e approximants: \begin{equation} {\mathbf{Q}}_{N}= {\mathbf{I}} - \sum_{n=1}^{N} q_{n} ({\mathbf{I}}-\frac{{\mathbf{A}}}{||{\mathbf{A}}||_{\rm F}})^{n} \label{eq:QN_deno} \end{equation} Its determinant is calculated as: \begin{equation} \det{({\mathbf{Q}}_{N})}=\prod_{i}(1-\sum_{n=1}^{N} q_{n}(1-\frac{\lambda_{i}}{\sqrt{\sum_{i}\lambda_{i}^{2}}})^{n}) \label{eq:QN_det} \end{equation} The coefficients $q_{n}$ of our $[5,5]$ Pad\'e approximant are pre-computed as $[2.25,-1.75,0.54675,-0.05859375,0.0009765625]$. Let $x_{i}$ denote $(1-\frac{\lambda_{i}}{\sqrt{\sum_{i}\lambda_{i}^{2}}})$. Then $x_{i}$ is in the range of $[0,1]$, and we have: \begin{equation} \begin{gathered} f(x_{i})=1-2.25x_{i}+1.75x^2_{i}-0.54675x^3_{i}\\+0.05859375x^{4}_{i}-0.0009765625x^{5}_{i}, \\ \det{({\mathbf{Q}}_{N})}=\prod_{i}f(x_{i}). \end{gathered} \label{eq:QN_nonzero} \end{equation} The polynomial $f(x_{i})$ does not have any zeros in the range $x_{i}{\in}[0,1]$. Its minimum is $0.0108672$, attained at $x_{i}=1$. This implies that $\det{({\mathbf{Q}}_{N})}\neq0$ always holds, and our Pad\'e approximants do not have any poles. Accordingly, there will be no spurious poles and defects. Hence, our MPA is deemed stable. Throughout our experiments, we did not encounter any instability issues with our MPA. \iffalse \subsection{Pad\'e Coefficients} To better illustrate the stability of our Pad\'e approximants, here we attach the Pad\'e coefficients of the matrix square root from degree $[3,3]$ to degree $[6,6]$. The numerator $p_{m}$ is: \begin{equation} \begin{split} p_{3}=[1.75,-0.875,0.109375].\\ p_{4}=[2.25,-1.6875,0.46875,-0.03515625].\\ p_{5}=[2.75,-2.75,1.203125,-0.21484375,0.0107421875].\\ p_{6}=[3.25,-4.0625,2.4375,-0.7109375,\\ 0.0888671875,-0.003173828125]. \end{split} \end{equation} And the corresponding denominator $q_{n}$ is: \begin{equation} \begin{split} q_{3}=[1.25,-0.375,0.015625].\\ q_{4}=[1.75,-0.9375,0.15625,-0.00390625].\\ q_{5}=[2.25,-1.75,0.54675,-0.05859375,0.0009765625].\\ q_{6}=[2.75,-2.8125,1.3125,-0.2734375, \\ 0.0205078125,-0.000244140625].\\ \end{split} \end{equation} Notice that for the inverse square root, we have the relation: \begin{equation} p_{m}=-s_{n}, q_{n}=-r_{m} \end{equation} Similar to the deduction in~\cref{eq:QN_deno,eq:QN_det,eq:QN_nonzero}, we can get the polynomial for deriving $\det{({\mathbf{Q}}_{N})}$ as: \begin{equation} \begin{split} f(x_{i})_{q_{3}}=1-1.25x_{i}+0.375x^2_{i}-0.015625x^3_{i}\\ f(x_{i})_{q_{4}}=1-1.75x_{i}+0.9375x^2_{i}-0.15625x^3_{i}\\+0.00390625x^{4}_{i}\\ f(x_{i})_{q_{5}}=1-2.25x_{i}+1.75x^2_{i}-0.54675x^3_{i}\\+0.05859375x^{4}_{i}-0.0009765625x^{5}_{i}\\ f(x_{i})_{q_{6}}{=}1{-}2.75x_{i}{+}2.8125x^2_{i}{-}1.3125x^3_{i}\\{+}0.2734375x^{4}_{i}{-}0.0205078125x^{5}_{i}{+}0.000244140625x^{6}_{i} \end{split} \end{equation} It can be easily verified that all of the polynomials monotonically decrease in the range of $x_{i}{\in}[0,1]$ and attain their minima at $x_{i}{=}1$: \begin{equation} \begin{gathered} \min_{x_{i}\in[0,1]} f(x_{i})_{q_{3}} = 0.109375. \\ \min_{x_{i}\in[0,1]} f(x_{i})_{q_{4}} = 0.03515625. \\ \min_{x_{i}\in[0,1]} f(x_{i})_{q_{5}} = 0.0108672. \\ \min_{x_{i}\in[0,1]} f(x_{i})_{q_{6}} = 0.003173828125. \end{gathered} \end{equation} As indicated above, all the polynomials are positive, and $\det{({\mathbf{Q}}_{N})}\neq0$ holds consistently for degrees $N{=}3,4,5,6$. \fi \section*{Acknowledgments} \ifCLASSOPTIONcaptionsoff \newpage \fi \bibliographystyle{IEEEtran}
\section{Introduction}\label{sec.intro} Astronomical observations suggest the expansion of our universe. In a relativistic theory of gravity, the expanding spacetime is described by the de Sitter solution \cite{Peebles:2002gy,Padmanabhan:2002ji} that solves the Einstein equations of motion with a positive cosmological constant \cite{Griffiths:2009dfa}. In the de Sitter spacetime, there exists the so-called cosmological horizon that behaves similarly to a black hole horizon in many ways. For example, one can compute the Hawking temperature associated to this horizon \cite{Gibbons:1977mu}. Consequently, there also exists a first law of thermodynamics relation for this horizon, as proposed in \cite{Teitelboim:2001skl,Gomberoff:2003ea,Sekiwa:2006qj}. It is then understood that if a de Sitter spacetime contains a black hole, the spacetime is equipped with multiple horizons. The Kerr/CFT correspondence is used to understand some aspects of the black hole horizon \cite{Guica:2008mu,Hartman:2008pb,Compere:2012jk}. The warped AdS structure in the near horizon geometry of an extremal black hole opens the possibility to use some two dimensional conformal field theory (CFT$_2$) methods in understanding several aspects of black holes. For example, the Cardy formula for entropy in a CFT$_2$ can recover the Bekenstein-Hawking entropy of the black hole. Another one is the scattering near the horizon, which can also be described by using a two point function calculation in a CFT$_2$. The Kerr/CFT correspondence can also be extended to the non-extremal case, where the conformal symmetry is hidden in the wave equation of a test scalar. Related to the Cardy formula in the non-extremal setup, we assume that the corresponding central charge does not change as the geometry evolves from the extremal state. In the literature, Kerr/CFT has been discussed for many cases of black holes that can exist in various theories of gravity \cite{Compere:2012jk,Ghezelbash:2012qn,Ghezelbash:2014aqa,Sakti:2019zix,Sakti:2020jpo}. The various similar properties between the black hole and cosmological horizons motivate us to establish the Kerr/CFT correspondence for the cosmological horizon. In the literature, there exists a particular holography for de Sitter spacetime \cite{Strominger:2001pn} that relates quantum gravity on D-dimensional de Sitter space to conformal field theory on a single (D-1)-sphere. However, in this work, we would like to employ the Kerr/CFT correspondence to the cosmological horizon in Kerr-Newman-Taub-NUT-de Sitter spacetime, which has several parameters, namely the rotation, electric charge, NUT parameter, and a positive cosmological constant. Here we also anticipate the newly proposed first law of thermodynamics for a NUT spacetime \cite{Wu:2019pzr,Wu:2022rmx}, where some new conserved charges are introduced as thermodynamic parameters in the system. Following \cite{Sekiwa:2006qj}, we also consider a cosmological constant that can vary \cite{Caldarelli:1999xj}, which modifies the first law of cosmological horizon mechanics. On the other hand, pair production can occur near the horizon of a near extremal black hole as a Schwinger effect \cite{Chen:2012zn,Siahaan:2019ysk,Chen:2020mqs,Zhang:2020apg,Chen:2021jwy}, in analogy to such an effect in a strong electromagnetic field. It can be shown that the violation of the Breitenlohner-Freedman bound associated with a massive scalar field near the horizon of the near extremal black hole leads to the conclusion that pair production can exist. 
The resemblances between black hole and cosmological horizons suggest that we investigate such an effect for the cosmological horizon in Kerr-Newman-Taub-NUT-de Sitter spacetime. To the best of our knowledge, such a study has not appeared in the literature. The organization of this paper is as follows. In the next section we provide a short review of Kerr-Newman-Taub-NUT-de Sitter spacetime. The twofold hidden conformal symmetry is constructed in section \ref{s3.hidden}. The microscopic entropy calculation is performed in section \ref{s4.microentropy}, whereas the supporting holographic scattering discussion is worked out in section \ref{s5.scattering}. The study of pair production near the cosmological horizon is performed in section \ref{sec.pair}. Finally, a conclusion is given. In this paper we consider the natural units $G = {\hbar} = k_B = c = 1$.  \section{Kerr-Newman-Taub-NUT-de Sitter spacetime}\label{sec.KNTNdSreview} Kerr-Newman-Taub-NUT-de Sitter (KNTNdS) spacetime solves the Einstein-Maxwell equations of motion with a positive cosmological constant, namely \be \label{eq.Einstein} R_{\mu \nu }  - \frac{1}{2}g_{\mu \nu } R + \frac{3}{l^2} g_{\mu \nu }  = 2F_{\mu \alpha } F_\nu ^\alpha   - \frac{1}{2}g_{\mu \nu } F_{\alpha \beta } F^{\alpha \beta } \,, \ee where $F_{\mu\nu}=\partial_\mu A_\nu -\partial_\nu A_\mu $. The vector field obeys the source-free condition $\nabla _\mu  F^{\mu \nu }  = 0$ and the Bianchi identity $\nabla _\mu  F_{\alpha \beta }+\nabla _\beta  F_{\mu \alpha }+\nabla _\alpha  F_{\beta \mu }=0$. The KNTNdS spacetime metric can be expressed as \be \label{metricKNTNdS} ds^2  =  - \frac{{\Delta _r }}{{\rho ^2 }}\left( {dt - \left( {a\Delta _x  - 2nx} \right)d\phi } \right)^2  + \rho ^2 \left( {\frac{{dr^2 }}{{\Delta _r }} + \frac{{dx^2 }}{{\Delta _x \Delta _l }}} \right) + \frac{{\Delta _x \Delta _l }}{{\rho ^2 }}\left( {adt - \left( {r^2  + a^2  + n^2 } \right)d\phi } \right)^2 \ee where $\Delta_x = 1-x^2$, \be \Delta _r  = r^2  - 2Mr + Q^2  + a^2  - n^2  - \frac{3}{{l^2 }}\left( {\left( {a^2  - n^2 } \right)n^2  + \left( {\frac{{a^2 }}{3} + 2n^2 } \right)r^2  + \frac{{r^4 }}{3}} \right)\,, \ee \be \Delta _l  = 1 + \frac{ax\left(4n+ax\right)}{l^2} \,, \ee and $\rho ^2  = r^2  + (n+ax)^2 $. It is understood that $M$, $a$, $n$ and $Q$ are the mass, rotation, NUT charge, and electric charge parameters, respectively. The corresponding cosmological constant in the equation above is $\Lambda = 3 l^{-2}$. The accompanying vector components that solve the Einstein equations (\ref{eq.Einstein}) are \be \label{eq.vector} A_\mu dx^\mu = \frac{Qr}{\rho^2} \left( dt + \left(2n x - a \Delta_x \right)d\phi\right)\,. \ee \begin{figure} \begin{center} \includegraphics[scale=0.5]{extreme.eps} \end{center} \caption{Case $l=M$. Black curves represent $r_c=r_b$, whereas blue ones for $r_b=r_i$.}\label{fig.extreme} \end{figure} \begin{figure} \begin{center} \includegraphics[scale=0.5]{extremeQ.eps} \end{center} \caption{Case $l=M$. Black curves represent $r_c=r_b$, whereas blue ones for $r_b=r_i$.}\label{fig.extremeQ} \end{figure} \begin{figure} \begin{center} \includegraphics[scale=0.5]{extremen.eps} \end{center} \caption{Case $l=M$. Black curves represent $r_c=r_b$, whereas blue ones for $r_b=r_i$. The dashed blue line does not exist since the case of $a=Q=0$ describes the Taub-NUT-de Sitter spacetime.}\label{fig.extremen} \end{figure} It is known that multiple horizons with positive radii can exist in Kerr-Newman-Taub-NUT-de Sitter spacetime. 
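As a concrete illustration, the horizon radii can be obtained numerically as the positive real roots of the quartic equation $\Delta_r=0$. Below is a minimal sketch using \texttt{numpy}; the parameter values are arbitrary choices for illustration only.

\begin{verbatim}
import numpy as np

# Delta_r written as a polynomial in r (descending powers); the
# parameter values are illustrative only, in units where M = 1.
M, a, n, Q, l = 1.0, 0.5, 0.3, 0.4, 10.0
coeffs = [
    -1.0 / l**2,                                             # r^4
    0.0,                                                     # r^3
    1.0 - (a**2 + 6.0 * n**2) / l**2,                        # r^2
    -2.0 * M,                                                # r^1
    Q**2 + a**2 - n**2 - 3.0 * (a**2 - n**2) * n**2 / l**2,  # r^0
]
roots = np.roots(coeffs)
horizons = np.sort(roots[np.abs(roots.imag) < 1e-9].real)
print(horizons)  # the positive entries are r_i < r_b < r_c
\end{verbatim}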
The largest one is acknowledged as the cosmological horizon $r_c$, the outer black hole horizon is $r_b$, and the inner one is denoted by $r_i$, where $r_c > r_b > r_i$. The coincidence of the cosmological and outer black hole horizons is known as the extremal state for the cosmological horizon, whereas the other coincidence, $r_b=r_i$, is understood as the extremal configuration for black holes in KNTNdS spacetime. Some illustrations of these extremal states are given in Figs. \ref{fig.extreme}, \ref{fig.extremeQ}, and \ref{fig.extremen}. These plots confirm the several extremalities that can exist in KNTNdS spacetime. The area of the cosmological horizon can be computed by using the standard formula (with $x=\cos\theta$) \be A_c  = \int\limits_{\phi  = 0}^{2\pi } {\int\limits_{\theta  = 0}^\pi  {\sqrt {g_{\theta \theta } g_{\phi \phi } } d\theta d\phi } }  = 4\pi \left( {r_c^2  + a^2 + n^2 } \right)\,, \ee whereas the surface gravity associated to the cosmological horizon is given by \cite{Wu:2019pzr,Wu:2022rmx} \be \kappa_c  = \frac{{2\pi }}{{A_c }}\left. {\frac{{d\Delta _r }}{{dr}}} \right|_{r_c } \,, \ee which gives \be \kappa_c  = \frac{{\left( {r_c  - M} \right) l^2  - r_c \left( {2r_c^2  + 6n^2  + a^2 } \right)}}{{\left( {r_c^2  + a^2 + n^2 } \right)l^2  }}\,. \ee Accordingly, the Hawking temperature that corresponds to the cosmological horizon can be computed by using the relation $T_c = \kappa_c/2\pi$. The angular velocity and electric potential at the cosmological horizon can be written as \be \Omega _c  = \frac{a}{{r_c^2  + a^2  + n^2 }}\,, \ee and \be \Phi _c  = \frac{{Qr_c }}{{r_c^2  + a^2  + n^2 }}\,, \ee respectively. The thermodynamics of Taub-NUT spacetime has become a lively topic of discussion in recent years \cite{Wu:2019pzr,Wu:2022rmx,Awad:2022jgn,BallonBordo:2020mcs,Rodriguez:2021hks,Pradhan:2020ofm}. One of the proposals is to associate an entropy with the Misner string in the Taub-NUT spacetime \cite{BallonBordo:2020mcs}. Another is to introduce some new conserved charges, i.e. $J_n = Mn$ and $N=n$, so that the resulting first law of horizon mechanics reads \cite{Wu:2019pzr} \be \label{eq.firstlaw} \delta M = T\delta S + \Omega \delta J + \Phi \delta Q + \Omega_n \delta J_n  + \Psi \delta N + \Theta \delta \Lambda \,. \ee The last term in eq. (\ref{eq.firstlaw}) represents the change of the total energy inside the cosmological horizon due to the variation of the cosmological constant. In section \ref{s4.microentropy}, where we compute the microscopic formula for the entropy, eq. (\ref{eq.firstlaw}) will be used to obtain the corresponding conjugate charges. Note that when $\delta \Lambda$, $\delta N$, and $\delta J_n$ vanish, this first law of horizon mechanics simply reduces to the one related to the Kerr-Newman black hole \cite{Chen:2010ywa}. \section{Hidden conformal symmetries}\label{s3.hidden} The hidden conformal symmetry can be shown from the corresponding massless test scalar field in the near region of the cosmological horizon. Here we consider a test charged massless scalar with the equation of motion \be \label{KGeqtn} \left( {\nabla _\mu + iqA_\mu } \right)\left( {\nabla ^\mu + iqA^\mu } \right)\Psi = 0\,. \ee Note that the KNTNdS spacetime possesses the stationary and axial Killing symmetries, which allow us to express the test scalar wave function in the following separable way, \be \Psi = e^{ - i\omega t + im\phi } X\left( x \right)R\left( r \right)\,. 
\ee Using this ansatz, the Klein-Gordon equation above can be written as \[ \frac{1}{R\left( r \right)}\frac{d}{{dr}}\left( {\Delta _r \frac{{dR\left( r \right)}}{{dr}}} \right) + \frac{1}{X\left( x \right)}\frac{d}{{dx}}\left( {\Delta _x \Delta _l \frac{{dX\left( x \right)}}{{dx}}} \right) \] \be + \left\{ {\frac{{\left( {\left( {r^2 + a^2 + n^2 } \right)\omega + qQr - ma} \right)^2 }}{{\Delta _r }} - \frac{{\left( {a\omega x^2 + 2n\omega x + m - a\omega } \right)^2 }}{{\Delta _l \Delta _x }}} \right\} = 0\,. \ee It turns out that the last equation can be separated into two equations, namely the radial part \be \label{eq.rad} \frac{d}{{dr}}\left( {\Delta _r \frac{{dR\left( r \right)}}{{dr}}} \right) + \left( {\frac{{\left( {\left( {r^2 + a^2 + n^2 } \right)\omega + qQr - ma} \right)^2 }}{{\Delta _r }} - \lambda } \right)R\left( r \right) = 0\,, \ee and the angular one \be \label{eq.ang} \frac{d}{{dx}}\left( {\Delta _x \Delta _l \frac{{dX\left( x \right)}}{{dx}}} \right) - \left( {\frac{{\left( {a\omega x^2 + 2n\omega x + m - a\omega } \right)^2 }}{{\Delta _l \Delta _x }} - \lambda } \right)X\left( x \right) = 0\,, \ee with $\lambda$ as the separation constant. The hidden conformal symmetry is related to the radial equation (\ref{eq.rad}) under some circumstances. However, before we proceed to show the hidden conformal symmetry, we need to consider an approximation to $\Delta_r$ in the radial equation (\ref{eq.rad}) so that it becomes quadratic in $r$. Such an approximation has been repeatedly performed for the hidden conformal symmetry in (anti)-de Sitter spacetimes \cite{Sakti:2019zix,Sakti:2020jpo,Chen:2010bh}. The approximation is basically a Taylor expansion of $\Delta_r$ near the cosmological horizon radius, \be \Delta _r \simeq \left. {\Delta _r } \right|_{r_c } + \left. {\frac{{d\Delta _r }}{{dr}}} \right|_{r_c } \left( {r - r_c } \right) + \left. {\frac{{d^2 \Delta _r }}{{dr^2 }}} \right|_{r_c } \frac{{\left( {r - r_c } \right)^2 }}{2}\,, \ee which gives us \be \Delta _r \simeq K\left( {r - r_c } \right)\left( {r - r_c^* } \right)\,. \ee Related to the KNTNdS spacetime (\ref{metricKNTNdS}), we have \be K = 1 - l^{ - 2} \left( {a^2 + 6n^2 + 6r_c^2 } \right)\,, \ee and \be r_c^* = \frac{{2r_c^3 + r_c \left( {l^2 - a^2 - 6n^2 } \right) - 2Ml^2 }}{{a^2 + 6n^2 + 6r_c^2 - l^2 }}\,. \ee Now let us impose the near horizon region condition for the test scalar, given by $\omega r \ll 1$. We also consider the test scalar to be of low frequency, $\omega M \ll 1$, and its charge to be extremely small, $qQ \ll 1$. Subsequently, the low frequency condition can be related to the conditions $\omega a \ll 1$, $\omega Q \ll 1$, and $\omega n \ll 1$. Under such considerations, the radial equation (\ref{eq.rad}) can be rewritten approximately as \be \label{eq.rad.app} \frac{d}{{dr}}\left( {\left( {r - r_c } \right)\left( {r - r_c^* } \right)\frac{{dR}}{{dr}}} \right) + {\frac{{F_1 R}}{{\left( {r - r_c } \right)}} + \frac{{F_2 R}}{{\left( {r - r_c^* } \right)}}} + F_3 R = 0\,, \ee with \be F_1 = \frac{{\left( {\omega \left( {r_c^{2} + a^2 + n^2 } \right)+ qQr_c - ma } \right)^2 }}{{K^2 \left( {r_c - r_c^* } \right) }}\,, \ee \be F_2 = - \frac{{\left( {\omega \left( {r_c^{*2} + a^2 + n^2 } \right) + qQr_c^*- ma } \right)^2 }}{{K^2 \left( {r_c - r_c^* } \right) }}\,, \ee and \be F_3 = \frac{{q^2 Q^2 -\lambda }}{{K }}\,. \ee
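The quality of this quadratic approximation can be checked numerically; below is a minimal sketch using \texttt{numpy}, where the parameter values are again arbitrary and serve for illustration only.

\begin{verbatim}
import numpy as np

# Compare Delta_r with the quadratic K (r - r_c)(r - r_c^*) close to
# the cosmological horizon (illustrative parameters, M = 1).
M, a, n, Q, l = 1.0, 0.5, 0.3, 0.4, 10.0

def delta_r(r):
    return (r**2 - 2*M*r + Q**2 + a**2 - n**2
            - 3.0/l**2 * ((a**2 - n**2)*n**2
                          + (a**2/3.0 + 2*n**2)*r**2 + r**4/3.0))

# cosmological horizon: the largest real root of Delta_r = 0
roots = np.roots([-1/l**2, 0.0, 1 - (a**2 + 6*n**2)/l**2, -2*M,
                  Q**2 + a**2 - n**2 - 3*(a**2 - n**2)*n**2/l**2])
rc = max(roots[np.abs(roots.imag) < 1e-9].real)

K = 1 - (a**2 + 6*n**2 + 6*rc**2)/l**2
rcs = ((2*rc**3 + rc*(l**2 - a**2 - 6*n**2) - 2*M*l**2)
       / (a**2 + 6*n**2 + 6*rc**2 - l**2))

r = rc + 0.05  # a point slightly outside the horizon
print(delta_r(r), K*(r - rc)*(r - rcs))  # the two values nearly coincide
\end{verbatim}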
Furthermore, to show the symmetry property of eq. (\ref{eq.rad.app}), we consider the following conformal coordinates \be\label{wp} \omega^ + = \sqrt {\frac{{r - r_c }}{{r - r_c^* }}} \exp \left( {2\pi T_R \phi + 2n_R t} \right)\,, \ee \be \label{wm} \omega^ - = \sqrt {\frac{{r - r_c }}{{r - r_c^* }}} \exp \left( {2\pi T_L \phi + 2n_L t} \right)\,, \ee \be \label{w0} \omega^0 = \sqrt {\frac{{r_c - r_c^* }}{{r - r_c^* }}} \exp \left( {\pi \left( {T_R + T_L } \right)\phi + \left( {n_R + n_L } \right)t} \right)\,. \ee By using these coordinates, together with the notations \be \partial _{+ } = \frac{\partial }{{\partial \omega^ + }}~~,~~\partial _{ - } = \frac{\partial }{{\partial \omega^ - }}~~,~~\partial _{0} = \frac{\partial }{{\partial \omega^0 }}\,, \ee one can define two copies of $SL(2,\mathbb{R})$ generators that read \be H_{+ } = i\partial _{+ } \,, \ee \be H_{0} = i\left( {\omega^ + \partial _{ + } + \frac{1}{2}\omega^0 \partial _{0} } \right)\,, \ee \be H_{- } = i\left( {\left( {\omega^ + } \right)^2 \partial _{ + } + \omega^ + \omega^0 \partial _{0} - \left( {\omega^0 } \right)^2 \partial _{- } } \right)\,, \ee and \be \bar H_{+ } = i\partial _{ - } \,, \ee \be \bar H_{0} = i\left( {\omega^ - \partial _{- } + \frac{1}{2}\omega^0 \partial _{0} } \right)\,, \ee \be \bar H_{- } = i\left( {\left( {\omega^ - } \right)^2 \partial _{- } + \omega^ - \omega^0 \partial _{0} - \left( {\omega^0 } \right)^2 \partial _{ + } } \right)\,. \ee The generators above satisfy the $SL(2,\mathbb{R})$ algebra, \be \left[ {H_{0} ,H_{ \pm } } \right] = \mp iH_{ \pm }~~,~~ \left[ {H_{ + } ,H_{ - } } \right] = 2iH_{0} \,, \ee and \be \left[ {\bar H_{0} ,\bar H_{ \pm } } \right] = \mp i\bar H_{ \pm } ~~,~~\left[ {\bar H_{ + } ,\bar H_{ - } } \right] = 2i\bar H_{0} \,. \ee Moreover, based on the generators above, one can also construct the $SL(2,\mathbb{R})$ Casimir operators \be {\cal H}^2 = - H_{0}^2 + \frac{1}{2}\left( {H_{ + } H_{ - } + H_{ - } H_{ + } } \right)\,, \ee and \be \bar {\cal H}^2 = - \bar H_{0}^2 + \frac{1}{2}\left( {\bar H_{ + } \bar H_{ - } + \bar H_{ - } \bar H_{ + } } \right)\,. \ee These Casimir operators commute with all generators in the group. It turns out that although the Casimir operators ${\cal H}^2$ and $\bar {\cal H}^2$ are constructed from different sets of operators, they take exactly the same form in terms of the conformal coordinates \be \label{Casimir} {\cal H}^2 = \bar {\cal H}^2 = \frac{1}{4}\left( {\left( {\omega ^0 } \right)^2 \partial _{0}^2 - \omega ^0 \partial _{0} } \right) + \left( {\omega ^0 } \right)^2 \partial _{ + } \partial _{ - } \,. \ee Furthermore, in terms of the coordinates $(t,r,\phi)$, the Casimir operator (\ref{Casimir}) can explicitly be written as \[ {\cal H}^2 = \left( {r - r_c } \right)\left( {r - r_c^* } \right)\frac{{\partial ^2 }}{{\partial r^2 }} + \left( {2r - r_c^* - r_c } \right)\frac{\partial }{{\partial r}} + \frac{{\left( {r_c - r_c^* } \right)}}{{r - r_c^* }}\left( {\frac{{n_L - n_R }}{{4\pi {\Xi} }}\frac{\partial }{{\partial \phi }} - \frac{{T_L - T_R }}{{4{\Xi} }}\frac{\partial }{{\partial t}}} \right)^2 \] \be \label{eq.Casimir} - \frac{{\left( {r_c - r_c^* } \right)}}{{r - r_c }}\left( {\frac{{n_L + n_R }}{{4\pi {\Xi} }}\frac{\partial }{{\partial \phi }} - \frac{{T_L + T_R }}{{4{\Xi} }}\frac{\partial }{{\partial t}}} \right)^2 \,, \ee where $\Xi = n_L T_R - n_R T_L$. The hidden conformal symmetry is stated from the fact that the radial equation (\ref{eq.rad.app}) is just the eigen equation of the Casimir operator ${\cal H}^2$ in eq. 
(\ref{eq.Casimir}), with the separation constant $\lambda$ as the corresponding eigenvalue. In this fashion, it is said that the radial equation (\ref{eq.rad.app}) exhibits the $SL(2,\mathbb{R})_L \times SL(2,\mathbb{R})_R$ symmetry. In later developments, the hidden conformal symmetry for the black hole/CFT correspondence was found to exist in several pictures \cite{Chen:2010ywa,Chen:2011kt}. The first one is known as the $J$-picture, which uses a neutral test scalar to probe the symmetry. This is actually the original proposal of the hidden conformal symmetry in the Kerr/CFT correspondence \cite{Castro:2010fd}, where the behavior of a low frequency neutral test scalar in the near region of the Kerr black hole is explored. The second one is called the $Q$-picture \cite{Chen:2010as}, where a $U(1)$ internal space is introduced and the test scalar is considered to be in the state of zero magnetic quantum number. Subsequently, these $J$ and $Q$ pictures can be combined into a single treatment known as the general picture \cite{Chen:2011kt}, where the CFT duals are generated by the modular group $SL(2,{\mathbb Z})$. In the following we show that the twofold, or even the general, pictures of hidden conformal symmetry that apply to the black hole horizon \cite{Chen:2010ywa,Chen:2011kt} can also be found near the cosmological horizon. The $J$ picture is obtained by considering a neutral scalar test probe in the near horizon region. With this consideration, eq. (\ref{eq.rad.app}) reduces to \be \label{eq.rad.app.Jpic} \frac{d}{{dr}}\left( {\left( {r - r_c } \right)\left( {r - r_c^* } \right)\frac{{dR}}{{dr}}} \right) + {\frac{{G_1^J R}}{{\left( {r - r_c } \right)}} + \frac{{G_2^J R}}{{\left( {r - r_c^* } \right)}}} + G_3^J R = 0\,, \ee where \be \label{C1J} G_1^J = \frac{{\left( {\omega \left( {r_c^{2} + a^2 + n^2 } \right) - ma } \right)^2 }}{{K^2 \left( {r_c - r_c^* } \right) }}\,, \ee \be \label{C2J} G_2^J = - \frac{{\left( {\omega \left( {r_c^{*2} + a^2 + n^2} \right) - ma } \right)^2 }}{{K^2 \left( {r_c - r_c^* } \right) }}\,, \ee and \be \label{C3J} G_3^J = - \frac{\lambda }{K}\,. \ee To have an agreement between the radial equation (\ref{eq.rad.app.Jpic}) and the general form (\ref{eq.Casimir}), we need to set the corresponding parameters as \be n_R^J = 0~~,~~n_L^J = - \frac{K}{{2\left( {r_c + r_c^* } \right)}}\,, \ee and \be \label{TLTRJ} T_R^J = \frac{{K \left( {r_c - r_c^* } \right)}}{{4\pi a}}~~,~~T_L^J = \frac{{K \left( {r_c^2 + {r_c^*}^2 + 2a^2 + 2n^2 } \right)}}{{4\pi a \left( {r_c + r_c^* } \right)}}\,. \ee In this way, we have constructed the $J$ picture hidden conformal symmetry near the cosmological horizon of the KNTNdS spacetime. If the $J$-picture corresponds to the neutral scalar test field, the $Q$-picture can be probed by using the scalar field in the $m=0$ state. However, the $Q$-picture hidden conformal symmetry requires the existence of a $U(1)$ internal space $\chi$. The dependence of the scalar wave $\Psi$ on this new coordinate is given by the eigen equation \be \partial _\chi \Psi = i q \Psi \,. \ee Indeed, it has an analogy with the equation for the angular coordinate $\phi$, where the scalar probe dependence is given by $\partial _\phi \Psi = i m \Psi$. 
For the $m=0$ state, the radial equation (\ref{eq.rad.app}) takes the form \be \label{eq.rad.app.Qpic} \frac{d}{{dr}}\left( {\left( {r - r_c } \right)\left( {r - r_c^* } \right)\frac{{dR}}{{dr}}} \right) + {\frac{{G_1^Q R}}{{\left( {r - r_c } \right)}} + \frac{{G_2^Q R}}{{\left( {r - r_c^* } \right)}}}+ G_3^Q R = 0\,, \ee where \be \label{C1Q} G_1^Q = \frac{{\left( {\omega \left( {r_c^{2} + a^2 +n^2} \right) + qQr_c } \right)^2 }}{{K^2 \left( {r_c - r_c^* } \right) }}\,, \ee \be \label{C2Q} G_2^Q = - \frac{{\left( {\omega \left( {r_c^{*2} + a^2+n^2 } \right) + qQr_c^* } \right)^2 }}{{K^2 \left( {r_c - r_c^* } \right) }}\,, \ee and \be \label{C3Q} G_3^Q = \frac{{q^2 Q^2 - \lambda }}{{K }}\,. \ee In order to have an agreement between (\ref{eq.Casimir}) and (\ref{eq.rad.app.Qpic}), the corresponding parameters should be set as \be n_L^Q = - \frac{{K\left( {r_c + r_c^* } \right)}}{{4 \left( {r_cr_c^* - a^2-n^2 } \right) }}~~,~~n_R^Q =- \frac{{K\left( {r_c - r_c^* } \right)}}{{4 \left( {r_cr_c^* - a^2-n^2 } \right) }}\,, \ee and \be \label{TLTRQ} T_L^Q = -\frac{{K \left( {r_c^2 + r_c^{*2} +2 a^2 + 2 n^2 } \right)}}{{4\pi Q\left( {r_c r_c^* - a^2 -n^2} \right)}}~~,~~T_R^Q = -\frac{{K \left( {r_c^2 - r_c^{*2} } \right)}}{{4\pi Q\left( {r_c r_c^* - a^2 - n^2 } \right)}}\,. \ee Here we have shown that the $Q$-picture of the hidden conformal symmetry for the cosmological horizon of KNTNdS spacetime exists as well. Now let us discuss the general picture, which combines both the $J$ and $Q$-pictures in a single formulation. The similarity between eqs. (\ref{eq.rad.app.Jpic}) and (\ref{eq.rad.app.Qpic}) is obvious. It then allows us to write an equation that utilizes the $SL(2,{\mathbb Z})$ modular group and reduces to both eqs. (\ref{eq.rad.app.Jpic}) and (\ref{eq.rad.app.Qpic}) in special cases. The $SL(2,{\mathbb Z})$ modular group itself acts on the torus with coordinates $(\phi, \chi)$ that appear in eqs. (\ref{eq.rad.app.Jpic}) and (\ref{eq.rad.app.Qpic}). The corresponding $SL(2,{\mathbb Z})$ transformation is given by \be \left( {\begin{array}{*{20}c} {\phi '} \\ {\chi '} \\ \end{array}} \right) = \left( {\begin{array}{*{20}c} \alpha & \beta \\ \gamma & \delta \\ \end{array}} \right)\left( {\begin{array}{*{20}c} \phi \\ \chi \\ \end{array}} \right)\,. \ee Intuitively, related to the new coordinates $\phi '$ and $\chi '$, the corresponding eigenvalues of the operators $i\partial_{\phi '}$ and $i\partial_{\chi '}$ are $m'$ and $q'$, respectively. Therefore, one can write the relation between the parameters $(m,q)$ and $(m',q')$ as \be m = \alpha m' + \gamma q'~~,~~q = \beta m' + \delta q'\,. \ee In terms of the $(m',q')$ parameters, the radial equation (\ref{eq.rad.app}) can be rewritten as \be \label{eq.rad.app.gen} \frac{d}{{dr}}\left( {\left( {r - r_c } \right)\left( {r - r_c^* } \right)\frac{{dR}}{{dr}}} \right) + {\frac{{G_1^G R}}{{\left( {r - r_c } \right)}} + \frac{{G_2^G R}}{{\left( {r - r_c^* } \right)}}} + G_3^G R = 0\,, \ee where \be G_1^G = \frac{{\left( {\omega \left( {r_c^{2} + a^2 + n^2 } \right) - \left( {a\alpha - Q\beta r_c } \right)m' - \left( {a\gamma - Q\delta r_c } \right)q' } \right)^2 }}{{K^2 \left( {r_c - r_c^* } \right) }}\,, \ee \be G_2^G = - \frac{{\left( {\omega \left( {r_c^{*2} + a^2 + n^2 } \right) - \left( {a\alpha - Q\beta r_c^* } \right)m' - \left( {a\gamma - Q\delta r_c^* } \right)q' } \right)^2 }}{{K^2 \left( {r_c - r_c^* } \right) }}\,, \ee and \be G_3^G = \frac{{(\beta m' + \delta q')^2 Q^2 -\lambda }}{{K }}\,. 
\ee In this general picture, the $J'$-picture can be obtained by setting $q' = 0$, and the $Q'$-picture is achieved at $m'=0$. Note that these two $J'$ and $Q'$ pictures are equivalent to each other. In the $Q'$-picture, the hidden conformal symmetry exists if we set the corresponding parameters in eq. (\ref{eq.Casimir}) to be \be n_L^G = - \frac{{K\left( {2a\gamma - \delta Q\left( {r_c + r_c^* } \right)} \right)}}{{4\left( {a\gamma \left( {r_c + r_c^* } \right) + \delta Q\left( {a^2 + n^2 - r_c r_c^* } \right)} \right)}}\,, \ee \be n_R^G = \frac{{K\delta Q\left( {r_c - r_c^* } \right)}}{{4\left( {a\gamma \left( {r_c + r_c^* } \right) + \delta Q\left( {a^2 + n^2 - r_c r_c^* } \right)} \right)}}\,, \ee \be T_L^G = \frac{{K\left( {r_c^2 + \left( {r_c^* } \right)^2 + 2a^2 + 2n^2 } \right)}}{{4\pi \left( {a\gamma \left( {r_c + r_c^* } \right) + \delta Q\left( {a^2 + n^2 - r_c r_c^* } \right)} \right)}}\,, \ee and \be T_R^G = \frac{{K\left( {r_c^2 - \left( {r_c^* } \right)^2 } \right)}}{{4\pi \left( {a\gamma \left( {r_c + r_c^* } \right) + \delta Q\left( {a^2 + n^2 - r_c r_c^* } \right)} \right)}}\,. \ee \section{Microscopic entropy}\label{s4.microentropy} In the previous section, we have established the hidden conformal symmetries associated to the cosmological horizon in KNTNdS spacetime. These symmetries open the possibility to apply some CFT$_2$ methods to understand several aspects related to the cosmological horizon. In this section, we show that the cosmological horizon entropy has a dual description from a CFT$_2$ point of view by using the Cardy formula. The same argument has been used in the Kerr/CFT holography to recover the entropy of black holes \cite{Guica:2008mu,Compere:2012jk}. The dual calculations for the cosmological horizon entropy are performed separately in the $J$ and $Q$-pictures, where each of them requires a particular approach to obtain the corresponding near horizon geometry. Let us first consider the $J$-picture. The appropriate near horizon coordinate transformation in the near extremal state is \be \label{eq.transNearKerrCFT} r = \frac{{r_c  + r_c^* }}{2} + \varepsilon r_0 \tilde r~~,~~r_c  - r_c^*  = \mu \varepsilon r_0 ~~,~~t = \tilde t\frac{{r_0}}{\varepsilon }~~,~~\phi  = \tilde \phi  + \Omega _c \tilde t \frac{{r_0}}{\varepsilon }\,. \ee Applying this transformation to the metric (\ref{metricKNTNdS}) gives \be \label{metric.nearJ} ds^2  = \Gamma \left( x  \right)\left\{ { - {\left( {\tilde r - \frac{{\mu }}{2}} \right)\left( {\tilde r + \frac{{\mu }}{2}} \right) } d\tilde t^2  + \frac{{d\tilde r^2 }}{\left( {\tilde r - \frac{{\mu }}{2}} \right)\left( {\tilde r + \frac{{\mu }}{2}} \right)} + \alpha \left( x  \right)dx^2 } \right\} + \gamma \left( x  \right)\left( {d\tilde \phi  + \tilde k^J {\tilde r} d\tilde t} \right)^2 \,, \ee where \be \Gamma \left( x  \right) = \frac{{\rho _ c ^2 }}{{K}}\,, \ee \be \alpha \left( x \right) = \frac{{K }}{{\Delta _l \Delta_x }}\,, \ee \be \gamma \left( x  \right) = \frac{{\Delta _l \Delta_x \left( {r_c^2  + a^2 + n^2 } \right)^2 }}{{\rho _c^2 }}\,, \ee \be \tilde k^J = \frac{{2 ar_c }}{{ \left( {r_ c ^2  + a^2 + n^2 } \right) K }}\,, \ee and \be \rho _c^2  = r_c^2  + \left( n + ax \right)^2 \,. \ee A general formula to compute the central charge associated to the near horizon of a near extremal geometry in the family of Einstein-Maxwell theory has been derived in \cite{Hartman:2008pb}, and the obtained formula can be applied to the near horizon geometry that appears in eq. (\ref{metric.nearJ}). 
The central charges are \cite{Hartman:2008pb} \be \label{c.formula} c_L^J  = c_R^J  = 3\tilde k^J \int\limits_{-1}^1  {dx \sqrt {\Gamma \left( x \right)\gamma \left( x  \right)\alpha \left( x  \right)} } \,, \ee which for the cosmological horizon in KNTNdS spacetime gives \be \label{centralchargeJ} c_L^J  = c_R^J = \frac{6a(r_c + r_c^*)}{K}\,. \ee Using the central charge (\ref{centralchargeJ}), the Cardy formula \be S_{Cardy}^J = \frac{{\pi ^2 }}{3}\left( {c_L^J T_L^J  + c_R^J T_R^J } \right) \ee gives us the entropy of the cosmological horizon in KNTNdS spacetime, namely \be \label{entropy} S_{Cardy}^J = {\pi \left( {r_c^2  + a^2 +n^2 } \right)}\,. \ee In the next section, we will provide the holographic calculation of the absorption cross section by using a two point function of the operator that is dual to the scalar field. For that calculation, one needs to identify the conjugate charges $\delta E_L^J$ and $\delta E_R^J$ that satisfy \be \delta S_{Cardy}^J  = \frac{{\delta E_L^J }}{{T_L^J }} + \frac{{\delta E_R^J }}{{T_R^J }}\,. \ee The conjugate charges can be obtained by using the first law of thermodynamics for the cosmological horizon (\ref{eq.firstlaw}), and the corresponding equation is \be\label{eq.charges} \delta S  = \frac{{\delta M  - \Omega \delta J - \Phi \delta Q - \Psi \delta N - \Omega_n \delta J_n  - \Theta  \delta \Lambda  }}{{T}} = \frac{{\delta E_L^J }}{{T_L^J }} + \frac{{\delta E_R^J }}{{T_R^J }}\,. \ee Recall that for the Kerr-Newman black hole, the related first law of thermodynamics can be written as \be \label{eq.deltaMnormal} \delta M = T\delta S + \Omega \delta J + \Phi \delta Q\,. \ee To obtain the conjugate charges in the Kerr/CFT correspondence for the Kerr-Newman black hole, the variations of the black hole parameters in eq. (\ref{eq.deltaMnormal}) are related to some of the scalar probe parameters. The change in angular momentum is related to the magnetic quantum number $m$, whereas the change in electric charge is connected with the probe's electric charge. However, the first law of thermodynamics (\ref{eq.charges}) contains several terms that do not have a direct counterpart in the scalar probe's properties. Therefore, here we propose some generalizations for the variations related to the rotational and electromagnetic work terms, i.e. \be \label{Omprimed} \Omega ' \delta J'  = \Omega \delta J + \Omega _n \delta J_n  + \Theta \delta \Lambda \,. \ee and \be \label{Qprimed} \Phi ' \delta Q'  = \Phi  \delta Q + \Psi \delta N \,, \ee respectively. With such generalizations, we can have $\delta M = \omega$ and $\delta J' = m$. In the $J$ picture, we can have \be \label{dEJ} \delta E_L^J  = \omega _L^J  ~~{\rm and}~~\delta E_R^J  = \omega _R^J  \,, \ee where \be \label{wLRJ} \omega _L^J  = \frac{{r_c^2  + \left( {r_c^* } \right)^2  + 2a^2 + 2n^2 }}{{2a }}\omega ~~,~~ \omega _R^J  = \frac{{r_c^2  + \left( {r_c^* } \right)^2  + 2a^2 + 2n^2 }}{{2a }}\omega - m\,, \ee \be \label{qLJ} q_L^J = q_R^J=0\,,~~{\rm and}~~\mu_L^J = \mu_R^J = 0\,. \ee Now let us turn to the microscopic entropy calculation associated with the $Q$-picture hidden conformal symmetry. As has been mentioned previously, we need to add a $U(1)$ internal extra dimension to the theory, denoted by the coordinate $\chi$. The corresponding five dimensional spacetime metric then reads \be ds^2  = ds_4^2  + \left( {d\chi  + \tilde {\bf A}} \right)^2 \,, \ee where $ds_4^2$ is the near horizon metric of the near extremal horizon (\ref{metric.nearJ}) and $\tilde {\bf A}$ is the associated gauge field. 
Now let us consider the fluctuation of the five dimensional metric, with the set of coordinates $\{ \tilde t,\tilde \phi, x ,\tilde r ,\chi \} $, in the following way, \be \label{eq.BCqpic} h_{\mu \nu }  \sim \left( {\begin{array}{*{20}c} {{\cal O}\left( {\tilde r^2 } \right)} & {{\cal O}\left( {\tilde r} \right)} & {{\cal O}\left( {1/\tilde r} \right)} & {{\cal O}\left( {1/\tilde r^2 } \right)} & {{\cal O}\left( 1 \right)}  \\ {} & {{\cal O}\left( {1/\tilde r} \right)} & {{\cal O}\left( 1 \right)} & {{\cal O}\left( {1/\tilde r} \right)} & {{\cal O}\left( 1 \right)}  \\ {} & {} & {{\cal O}\left( {1/\tilde r} \right)} & {{\cal O}\left( {1/\tilde r^2 } \right)} & {{\cal O}\left( {1/\tilde r} \right)}  \\ {} & {} & {} & {{\cal O}\left( {1/\tilde r^3 } \right)} & {{\cal O}\left( {1/\tilde r} \right)}  \\ {} & {} & {} & {} & {{\cal O}\left( 1 \right)}  \\ \end{array}} \right)\,. \ee The most general diffeomorphism that preserves the boundary condition (\ref{eq.BCqpic}) is \be \zeta ^{\left( \chi  \right)}  = \varepsilon \left( \chi  \right)\frac{\partial }{{\partial \chi }} - \tilde r\frac{{d\varepsilon \left( \chi  \right)}}{{d\chi }}\frac{\partial }{{\partial \tilde r}}\,, \ee where $\varepsilon \left( \chi  \right) = \exp \left(-iq\chi\right)$. To compute the central charge, we can generalize the treatment developed in \cite{Hartman:2008pb} and obtain \be \label{centralchargeQ} c_L^Q  = c_R^Q  = 3 {\tilde k^Q} \int\limits_{-1}^{1}  {dx \sqrt {\Gamma \left( x  \right)\gamma \left( x  \right)\alpha \left( x  \right)} } = \frac{6Q \left(r_c r_c^* -a^2 - n^2\right)}{K}\,. \ee Note that a negative value of this central charge leads to a non-unitary CFT$_2$. Therefore, to guarantee unitarity, we have to impose $r_c r_c^* > a^2 +n^2$, together with positive values of $Q$ and $K$. However, sometimes a non-unitary dual CFT$_2$ is unavoidable; this occurs, for example, in the case of a strong magnetic field in the magnetized Kerr/CFT holography \cite{Siahaan:2015xia}. This central charge can then be used to compute the microscopic entropy by using the Cardy formula \be S = \frac{{\pi ^2 }}{3}\left( {c_L^Q T_L^Q  + c_R^Q T_R^Q } \right)\,, \ee which recovers the entropy of the cosmological horizon in KNTNdS spacetime in eq. (\ref{entropy}). Similar to the $J$-picture, the conjugate charges in the $Q$-picture can be extracted from the first law of thermodynamics, i.e. \be\label{eq.chargesQ} \delta S  = \frac{{\delta M  - \Omega' \delta J' - \Phi ' \delta Q'  }}{{T }} = \frac{{\delta E_L^Q }}{{T_L^Q }} + \frac{{\delta E_R^Q }}{{T_R^Q }}\,, \ee where $\Omega' \delta J'$ is given in (\ref{Omprimed}) and $\Phi ' \delta Q'$ is given in (\ref{Qprimed}). However, since in the $Q$ picture we consider the case of $m=0$ only, the term $\Omega' \delta J'$ can be neglected. The last equation can be solved by setting \be \delta E_L^Q  = \omega _L^Q  - q_L^Q \mu _L^Q    ~~,~~\delta E_R^Q  = \omega _R^Q  - q_R^Q \mu _R^Q \,, \ee where \be\label{wLRQ} \omega _L^Q  = \omega _R^Q  = \frac{{n\left( {r_c  + r_c^* } \right)\left( {r_c^2  + \left( {r_c^* } \right)^2  + 2a^2 + 2n^2} \right)}}{{2Q\left( {r_c r_c^*  - a^2 - n^2} \right)}}\omega  \,, \ee \be\label{qLRQ} q_L^Q  = q_R^Q  = \delta Q'  = q\,, \ee and \be\label{muLRQ} \mu _L^Q  = \frac{{n\left( {r_c^2  + \left( {r_c^* } \right)^2  + 2a^2+2n^2 } \right)}}{{2\left( {r_c r_c^*  - a^2 -n^2 } \right)}}~~,~~\mu _R^Q  = \frac{{n\left( {r_c  + r_c^* } \right)^2 }}{{2\left( {r_c r_c^*  - a^2-n^2 } \right)}}\,. \ee
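As a quick consistency check, one can verify symbolically that the $J$-picture temperatures (\ref{TLTRJ}) and the central charge (\ref{centralchargeJ}) reproduce the entropy (\ref{entropy}) exactly. Below is a minimal sketch using \texttt{sympy}; the variable names are ours.

\begin{verbatim}
import sympy as sp

rc, rcs, a, n, K = sp.symbols('rc rcs a n K', positive=True)

cJ = 6*a*(rc + rcs)/K                       # central charge (centralchargeJ)
TL = K*(rc**2 + rcs**2 + 2*a**2 + 2*n**2)/(4*sp.pi*a*(rc + rcs))
TR = K*(rc - rcs)/(4*sp.pi*a)               # temperatures (TLTRJ)

S = sp.pi**2/3 * (cJ*TL + cJ*TR)            # Cardy formula
print(sp.simplify(S))                       # -> pi*(rc**2 + a**2 + n**2)
\end{verbatim}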
Before proceeding to the next section, let us give some remarks on the results above. As we have mentioned in the introduction, a holography for de Sitter geometry has been reported in \cite{Strominger:2001pn}, where the author established a connection between the gravity theory in D-dimensional de Sitter space and a conformal field theory living on a (D-1)-sphere. In particular, for the three dimensional de Sitter (dS$_3$) background, the corresponding dual theory is a two dimensional conformal field theory, which is exactly the type of dual theory encountered in this paper. Moreover, the technique to compute the central charge associated with the asymptotic symmetries of dS$_3$ is analogous to the calculation by Brown and Henneaux in \cite{Brown:1986nw}. In fact, the method by Brown and Henneaux for the central charge is also the method adopted in the Kerr/CFT correspondence \cite{Guica:2008mu}. As we have mentioned previously, the work presented in this paper is an application of the Kerr/CFT correspondence to the cosmological horizon of KNTNdS spacetime. Therefore, one can say that the work in this paper and the dS/CFT correspondence reported in \cite{Strominger:2001pn} basically start from the same basis. Note that the author of \cite{Strominger:2001pn} is also one of the authors of \cite{Guica:2008mu} and \cite{Castro:2010fd} that establish the Kerr/CFT correspondence. However, the construction of the dS/CFT holography in \cite{Strominger:2001pn} and the one presented here differ in several aspects. First, the dual CFT proposed in \cite{Strominger:2001pn} is (D-1)-dimensional, where D is the dimension of the gravitational theory in de Sitter space. Typically in the Kerr/CFT holography, the dual CFT is two dimensional regardless of the dimension of the bulk spacetime under consideration \cite{Strominger:2001pn,Compere:2012jk}. Second, the asymptotic geometry considered in \cite{Strominger:2001pn} is still de Sitter, whereas in this paper the corresponding near horizon geometry of the (near-)extremal state, as it appears in eq. (\ref{metric.nearJ}), is warped anti-de Sitter. Consequently, these distinct asymptotic geometries require different types of boundary conditions for the diffeomorphism generators, which lead to distinct central charges. Third, the central charge obtained in this work fails to reduce smoothly to the non-rotating or neutral cases. It can be observed that in general the central charges (\ref{centralchargeJ}) and (\ref{centralchargeQ}) vanish in the case of $a=0$ and $Q=0$, respectively. This limitation is well known for the Kerr/CFT correspondence, as it works for the rotating and/or charged case only, i.e. the black hole/CFT correspondence for the Schwarzschild black hole in the style of \cite{Guica:2008mu} is not well established. On the other hand, the holography for de Sitter spacetime established in \cite{Strominger:2001pn} can be extended to the case that incorporates black holes \cite{Klemm:2001ea}. \section{CFT$_2$ correlators and superradiant scattering}\label{s5.scattering} Normally, in Kerr/CFT correspondence discussions for black holes, another piece of evidence supporting the holography is the dual calculation of the scalar absorption near the horizon. Similarly, here we also provide such a calculation in the near region of the cosmological horizon, and show that the dictionary of the Kerr/CFT correspondence still applies. 
To start, let us consider a frequency near the superradiant bound for the scalar field, \be \omega - m\Omega_c - q \Phi_c = {\bar \omega} \frac{\varepsilon}{r_0}\,. \ee The scattering analysis can be done in the $J$ or $Q$ picture, and consequently it should be feasible in the general picture as well. Here let us consider the $J$-picture only, where the corresponding radial equation is eq. (\ref{eq.rad.app.Jpic}). The solution to this equation reads \be \label{eq.RadSolScattering} R\left( \zeta \right) = \zeta^\alpha \left( {1 - \zeta} \right)^\beta F\left( {a^*,b^*,c^*;\zeta} \right)\,, \ee where $\zeta = \left( {r - r_c } \right)\left( {r - r_c^* } \right)^{ - 1} $, \be i\alpha = \sqrt {G_1^J } \,, \ee \be 2 \beta = 1 - \sqrt {1 - 4 G_3^J } \,, \ee and \be a^{*} = \alpha + \beta + i\sqrt { - G_2^J }~~,~~ b^{*} = \alpha + \beta - i\sqrt { - G_2^J } ~~,~~c^{*} = 1 + 2\alpha \,. \ee Moreover, the radial solution (\ref{eq.RadSolScattering}) in the near horizon of the near extremal setup can be read as \be\label{eq.RadSolNear} R\left( {\bar \zeta} \right) = {\bar \zeta}^{\bar \alpha } \left( {1 - {\bar \zeta}} \right)^{\bar \beta } F\left( {\bar a^* ,\bar b^* ,\bar c^* ;{\bar \zeta}} \right)\,, \ee where \be {\bar \zeta} = \frac{\left( {\tilde r - \mu /2} \right)}{\left( {\tilde r + \mu /2} \right)}\,, \ee and the accompanying parameters are \be i\bar \alpha = \sqrt {\bar G_1^J } \,, \ee \be 2\bar \beta = 1 - \sqrt {1 - 4\bar G_3^J } \,, \ee \be \bar a^* = \bar \alpha + \bar \beta + i\sqrt { - \bar G_2^J } ~~,~~\bar b^* = \bar \alpha + \bar \beta - i\sqrt { - \bar G_2^J }~~,~~ \bar c^* = 1 + 2\bar \alpha \,, \ee and \be \bar G_1^J = \frac{{\left( {\bar \omega } \right)^2 }}{{\bar \lambda ^2 }}~~,~~ \bar G_2^J = - \left( {\frac{{\bar \omega }}{{\bar \lambda }} - \frac{{2r_c m\Omega _c }}{K}} \right)^2 ~~,~~ \bar G_3^J = - \frac{{\bar\lambda}}{K}\,. \ee Now let us consider the asymptotics of the radial solution (\ref{eq.RadSolNear}) above. If one takes $\tilde r \gg \mu$, i.e. a large $\tilde r$, the radial solution above reduces to \be R \sim C_ + {\tilde r}^{h - 1} + C_ - {\tilde r}^{ - h} \,. \ee In this asymptotic region, one can read off the corresponding retarded Green function in the form \be G_R \sim \frac{{C_ - }}{{C_ + }}\,. \ee Explicitly, in our near horizon of near extremal setup, this retarded Green function reads \be \label{GreenR} G_R \sim \frac{{\Gamma \left( {1 - 2h} \right)}}{{\Gamma \left( {2h - 1} \right)}}\frac{{\Gamma \left( {h - i\left( {{\bar G}_1^J - {\bar G}_2^J } \right)} \right)\Gamma \left( {h - i\left( {{\bar G}_1^J + {\bar G}_2^J } \right)} \right)}}{{\Gamma \left( {1 - h - i\left( {{\bar G}_1^J - {\bar G}_2^J } \right)} \right)\Gamma \left( {1 - h - i\left( {{\bar G}_1^J + {\bar G}_2^J } \right)} \right)}}\,. \ee Interestingly, the Green function above agrees with the form of a two point function of a scalar operator in a CFT$_2$. For a scalar operator $\hat O$ that is dual to the scalar test probe, the two point function can be written as \be \label{twopt1} G\left( {{\tau} ^ + ,{\tau} ^ - } \right) = \left\langle {\hat O^\dag \left( {{\tau} ^ + ,{\tau} ^ - } \right),\hat O\left( {0,0} \right)} \right\rangle \,, \ee where the left and right moving coordinates on a two dimensional worldsheet are given by ${\tau} ^ +$ and ${\tau} ^ -$, respectively. 
The two point function (\ref{twopt1}) is dictated by the conformal invariance and can be read as \cite{Chen:2010as,Chen:2010bh} \be G\left( {{\tau} ^ + ,{\tau} ^ - } \right) \sim \left( { - 1} \right)^{h_L + h_R } \left( {\frac{{\pi T_L^J }}{{\sinh \left( {\pi T_L^J {\tau} ^ + } \right)}}} \right)^{2h_L } \left( {\frac{{\pi T_R^J }}{{\sinh \left( {\pi T_R^J {\tau} ^ - } \right)}}} \right)^{2h_R } e^{i\left( {q_L^J \mu _L^J {\tau} ^ + + q_R^J \mu _R^J {\tau} ^ - } \right)} \,, \ee where $\left(h_L,h_R\right)$ are the conformal dimensions, and $h_L = h_R =h$. It is understood that $\left(q_L^J,q_R^J\right)$ are the charges, $\left(T_L^J,T_R^J\right)$ are the temperatures, and $\left(\mu_L^J,\mu_R^J \right)$ are the chemical potentials associated to the operator $\hat O$. In the equation above, we have incorporated the parameters in the $J$-picture. The Euclidean correlator can be evaluated along the positive imaginary $\left\{\omega_{L}^J,\omega_{R}^J\right\}$ axis, i.e. \be G_E \left( {\omega _{L,E}^J ,\omega _{R,E}^J } \right) = G_R \left( {i\omega _{L,E}^J ,i\omega _{R,E}^J } \right)\,, \ee for positive $\left\{\omega _{L,E}^J ,\omega _{R,E}^J\right\}$. In a theory at finite temperature, $\omega _{L,E}^J $ and $\omega _{R,E}^J$ are discrete and take the values of the Matsubara frequencies, \be \frac{{\omega _{L,E}^J }}{{2\pi T_L^J }} = m_L ~~{\rm and}~~\frac{{\omega _{R,E}^J }}{{2\pi T_R^J }} = m_R \,, \ee where $m_L$ and $m_R$ are integers. We then perform a Wick rotation of the 2D worldsheet coordinates, i.e. \be \tau^ + \to i\tau _L ~~{\rm and}~~\tau^ - \to i\tau _R \,, \ee where the Euclidean times $\tau_L$ and $\tau_R$ have the periodicities $2\pi/T_L^J$ and $2\pi/T_R^J$, respectively. Accordingly, the Euclidean correlator obtained from the two point function (\ref{twopt1}) can be written as \[ G_E \left( {\omega _{L,E}^J ,\omega _{R,E}^J } \right) \sim {T_L^J}^{2h_L - 1} {T_R^J}^{2h_R - 1} \exp \left[ {i\left( {\frac{{\tilde \omega _{L,E}^J }}{{2T_L^J }} + \frac{{\tilde \omega _{R,E}^J }}{{2T_R^J }}} \right)} \right]\Gamma \left( {h_L + \frac{{\tilde \omega _{L,E}^J }}{{2\pi T_L^J }}} \right)\Gamma \left( {h_L - \frac{{\tilde \omega _{L,E}^J }}{{2\pi T_L^J }}} \right) \] \be \label{twopt2} \times \Gamma \left( {h_R + \frac{{\tilde \omega _{R,E}^J }}{{2\pi T_R^J }}} \right)\Gamma \left( {h_R - \frac{{\tilde \omega _{R,E}^J }}{{2\pi T_R^J }}} \right)\,, \ee where \be \tilde \omega _{L,E}^J = \omega _{L,E}^J - iq_L^J \mu _L^J ~~{\rm and}~~\tilde \omega _{R,E}^J = \omega _{R,E}^J - iq_R^J \mu _R^J \,. \ee The agreement between (\ref{twopt2}) and (\ref{GreenR}) establishes the correspondence between a two dimensional conformal field theory and the cosmological horizon in terms of the scalar absorption calculation. It can be viewed as another piece of supporting evidence for the Kerr/CFT correspondence of the cosmological horizon. \section{Pair production}\label{sec.pair} It has been shown that there exists a strong correlation between the existence of the non-extremal Kerr/CFT correspondence and the pair production, known as the Schwinger effect, near the horizon in the near extremal state \cite{Chen:2012zn,Siahaan:2019ysk}. In the previous sections, we have shown that the Kerr/CFT holography dictionary can also be applied to the cosmological horizon. This motivates us to investigate the possibility for the Schwinger effect to take place near the KNTNdS cosmological horizon. However, the required near horizon transformation is slightly different compared to the one in eq. (\ref{eq.transNearKerrCFT}) that we used when computing the central charge. 
The appropriate one to show the Schwinger effect is \be \label{eq.nearhorizonpair} r \to {\tilde r}_0 + \varepsilon \hat r ~~,~~dt \to \frac{{\left( {{\tilde r}_0^2 + a^2 + n^2 } \right)}}{{\Delta _0 \varepsilon }}d\hat t ~~,~~d\phi \to d\hat \phi + \frac{{\hat \Omega \left( {{\tilde r}_0^2 + a^2 + n^2 } \right)}}{{\Delta _0 \varepsilon }}d\hat t\,. \ee Note that we use the ``hat'' notation for the near horizon coordinates to distinguish them from the ``tilde'' ones in (\ref{metric.nearJ}). In addition to the coordinate changes above, the mass is also shifted as \be \label{eq.MassShift} M \to M_0 + \frac{{\Delta _0 B^2 }}{{2{\tilde r}_0 }}\varepsilon ^2 \,. \ee Applying the transformations above to the spacetime metric (\ref{metricKNTNdS}) and the vector solution (\ref{eq.vector}), followed by taking $\varepsilon \to 0$, yields \be\label{metricnearPair} ds^2 = - \frac{{\rho _0^2 }}{{\Delta _0 }}\left( {{\hat r}^2 - B^2 } \right)d\hat t^2 + \frac{{\rho _0^2 }}{{\Delta _0 \left( {{\hat r}^2 - B^2 } \right)}}d\hat r^2 + \frac{{\rho _0^2 }}{{\Delta _l \Delta _x }}dx^2 + \frac{{Z^2 \Delta _l \Delta _x }}{{\rho _0^2 }}\left( {d\hat \phi + \frac{{2a{\tilde r}_0 \hat r}}{{Z\Delta _0 }}d\hat t} \right)^2 \ee where $Z={\tilde r}_0^2 + a^2 + n^2$, \be \Delta _0 = 1 - \frac{{a^2 + 6\left( {{\tilde r}_0^2 + n^2 } \right)}}{{l^2 }} \,, \ee \be \rho_0^2 = {\tilde r}_0^2 + (n+ax)^2 \,, \ee and the corresponding gauge field is \be \label{vectornear} A_\mu dx^\mu = - \frac{{Q\left( {{\tilde r}_0^2 - \left( {n + ax} \right)^2 } \right)\hat r}}{{\rho _0^2 \Delta _0 }}d\hat t - \frac{{Q{\tilde r}_0 Z}}{{a\rho _0^2 }}d\hat \phi\,. \ee Interestingly, the resulting near horizon geometry (\ref{metricnearPair}) looks similar to the near horizon metric (\ref{metric.nearJ}) that gives the central charge (\ref{centralchargeJ}). However, a closer look shows that these near horizon metrics are actually different from one another. This is understandable, since the associated coordinate transformations that produce these near horizon metrics are slightly different. The coordinate transformation (\ref{eq.transNearKerrCFT}) incorporates the variables $r_c^*$ and $K$, which are basically the outcome of approximating $\Delta_r$ in the KNTNdS metric (\ref{metricKNTNdS}) by a quadratic in $r$. Recall that such a quadratic property is required to reveal the hidden conformal symmetry of the scalar probe. Accordingly, the associated central charge for the near horizon geometry (\ref{metric.nearJ}) is the one needed to produce the Cardy entropy (\ref{entropy}) after employing the left and right moving temperatures in the related hidden conformal symmetry generators. On the other hand, the near horizon geometry (\ref{metricnearPair}) and the corresponding coordinate transformation (\ref{eq.nearhorizonpair}) do not incorporate $r_c^*$ and $K$, as we consider the KNTNdS line element without any approximation for the $\Delta_r$ function. However, if one computes the central charge corresponding to the asymptotic symmetry of the near horizon geometry (\ref{metricnearPair}), the resulting entropy, after using the Cardy formula with the suitable temperatures obtained in section \ref{s3.hidden}, will not match the expected entropy of the KNTNdS cosmological horizon. Nevertheless, for the case without a cosmological constant, the near horizon of extremal black holes obtained by using the coordinate transformation (\ref{eq.nearhorizonpair}) can give the appropriate central charge to reproduce the related Bekenstein-Hawking entropy. 
For example, using the general central charge formula (\ref{c.formula}) for the near horizon of the extremal Kerr-Newman black hole in \cite{Chen:2016caa} gives the correct central charge $c=12 Ma$ for the microscopic entropy calculation in the $J$-picture \cite{Chen:2010ywa}. Note that ref. \cite{Chen:2016caa} discusses the Schwinger effect near the Kerr-Newman black hole. Now let us consider the Klein-Gordon equation for a charged massive test scalar in the near horizon of the near extremal geometry constructed above. The general form reads \be \label{eq.KGnear1} \left( {\nabla _\mu + iqA_\mu } \right)\left( {\nabla ^\mu + iqA^\mu } \right)\Psi = \mu^2 \Psi \,, \ee where $\mu$ is the mass of the scalar field. To proceed, we employ an ansatz similar to the one used to show the hidden conformal symmetry in section \ref{s3.hidden}. In the appropriate coordinates, it reads \be \hat \Phi = \exp \left( { - i\hat \omega \hat t + i\hat m\hat \phi } \right)X\left( x \right)\hat R\left( {\hat r} \right) \,. \ee Subsequently, the resulting equations are \be \label{eq.rad.pair} \Delta _0 \partial _{\hat r} \left\{ {\left( {\hat r^2 - B^2 } \right)\partial _{\hat r} \hat R\left( {\hat r} \right)} \right\} + \left\{ {\frac{{\left( {Z\Delta _0 \hat \omega - qQZ\hat r + 2{\tilde r}_0 a\hat m\hat r} \right)^2 }}{{Z^2 \Delta _0 \left( {\hat r^2 - B^2 } \right)}} - \mu ^2 Z - \lambda _l } \right\}\hat R\left( {\hat r} \right) = 0 \,, \ee and \be \label{eq.ang.pair} \partial _x \left\{ {\Delta _l \Delta _x \partial _x X\left( x \right)} \right\} - \left\{ {\frac{{\left( {qQ{\tilde r}_0 Z - a\rho _0^2 \hat m} \right)^2 }}{{a^2 \Delta _x Z^2 \Delta _l }} - \mu ^2 \left( {a\Delta _x - 2nx} \right) - \lambda _l } \right\}X\left( x \right) = 0\,. \ee Accordingly, the effective mass in the radial equation (\ref{eq.rad.pair}) can be written as \cite{Chen:2012zn,Siahaan:2019ysk,Chen:2020mqs} \be m_{\rm eff}^2 = \mu^2 - \frac{{\left( {2{\tilde r}_0 a\hat m - qQZ} \right)^2 }}{{Z^3 \Delta _0 }} + \frac{{\lambda _l }}{Z}\,. \ee The corresponding Breitenlohner-Freedman bound, associated with the stability condition in the near horizon AdS space, i.e. $m_{\rm eff}^2 \ge -\tfrac{1}{4}L_{AdS}^{-2}$, can be expressed as \be \label{eq.BF} m_{{\rm{eff}}}^2 \ge - \frac{{\Delta _0 }}{{4Z}}\,. \ee The violation of the Breitenlohner-Freedman bound as stated in (\ref{eq.BF}) leads to the Schwinger effect near the cosmological horizon in KNTNdS spacetime. Similar to the pair production from the black hole horizon in the near extremal state, such an effect can take place near the cosmological horizon as well. Indeed, the horizon must be in the near extremal condition, so that it has the warped AdS structure needed in the prescription to show the Schwinger effect \cite{Chen:2012zn,Siahaan:2019ysk}. We do not present here the thermal interpretation of the Schwinger effect above, but it can be done straightforwardly by following the prescription given in \cite{Chen:2012zn,Chen:2020mqs}. Illustrations of the Breitenlohner-Freedman bound violation are given in fig. \ref{fig.BF}, where the corresponding ${\tilde r}_0$ values are shown in fig. \ref{fig.r0}. Note that we require ${\tilde r}_0$ to be real so that the corresponding plots in fig. \ref{fig.BF} are well defined. The plots in fig. \ref{fig.BF} confirm that pair production can exist, and we can observe how the NUT parameter affects the magnitude of $\Upsilon =m_{{\rm{eff}}}^2 + \tfrac{{\Delta _0 }}{{4Z}} <0$; we can infer that the presence of the NUT parameter yields a stronger violation of the Breitenlohner-Freedman bound. 
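To illustrate the bound quantitatively, the following minimal sketch evaluates $\Upsilon$ using \texttt{numpy}; the parameter values, the degenerate horizon radius ${\tilde r}_0$, the angular eigenvalue $\lambda_l$, and the probe quantum numbers are arbitrary choices for illustration only.

\begin{verbatim}
import numpy as np

# Evaluate Upsilon = m_eff^2 + Delta_0/(4 Z); a negative value signals
# the violation of the Breitenlohner-Freedman bound and hence the
# possibility of pair production. All values are illustrative.
a, n, Q, l = 0.5, 0.3, 0.4, 10.0
r0 = 1.0                        # degenerate horizon radius (illustrative)
mu, q, m_hat = 0.0, 10.0, 1.0   # scalar mass, charge, magnetic number
lam = 0.5                       # angular eigenvalue lambda_l (illustrative)

Z = r0**2 + a**2 + n**2
Delta0 = 1 - (a**2 + 6*(r0**2 + n**2))/l**2
m_eff2 = mu**2 - (2*r0*a*m_hat - q*Q*Z)**2/(Z**3*Delta0) + lam/Z

Upsilon = m_eff2 + Delta0/(4*Z)
print(Upsilon)  # negative here: the large probe charge drives the violation
\end{verbatim}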
\begin{figure}[h] \begin{center} \includegraphics[scale=0.5]{BF.eps}\caption{Plots of $\Upsilon =m_{{\rm{eff}}}^2 + \tfrac{{\Delta _0 }}{{4Z}}$.}\label{fig.BF} \end{center} \end{figure} \begin{figure}[h] \begin{center} \includegraphics[scale=0.5]{r0.eps}\caption{The values of ${\tilde r}_0$ that correspond to the plots in fig. \ref{fig.BF}.}\label{fig.r0} \end{center} \end{figure} \section{Conclusion}\label{s6.conc} In this work, we have presented the hidden conformal symmetry for the cosmological horizon in Kerr-Newman-Taub-NUT-de Sitter spacetime. As expected, the symmetry does exist for the cosmological horizon in a similar way as in the black hole case. The construction of the symmetry is straightforward, using the prescription that is normally employed for the black hole horizon. The obtained hidden conformal symmetry allows one to consider Kerr/CFT holography for the cosmological horizon, such as in reproducing the entropy and the absorption cross section. The compatibility of Kerr/CFT holography for the cosmological horizon is expected due to the similarities between black hole and cosmological horizons, for example the associated entropy and Hawking radiation. Interestingly, the $AdS_2 \times S^2$ or warped $AdS_3$ structure of the near horizon of a near extremal black hole can be used in showing the Schwinger effect near the black hole horizon. In fact, these structures of the horizon play an important role in computing the central charge in the Cardy formula. After showing the compatibility of Kerr/CFT holography for the cosmological horizon in KNTNdS spacetime, we have shown that the Schwinger effect can also occur near this horizon. Obviously, such an effect can take place near the cosmological horizon of the special cases in the KNTNdS family as well. These include the Reissner-Nordstr\"om-NUT-de Sitter and the Kerr-NUT-de Sitter spacetimes, as well as the case of vanishing NUT parameter. As future work, one can investigate the hidden conformal symmetry and pair production near the cosmological horizon of dilatonic de Sitter black holes \cite{Gao:2004tu}. A similar study for the de Sitter geometry in low energy heterotic string theory \cite{Wu:2020cgf} can be attractive as well, considering that Kerr/CFT can be applied to this case \cite{Ghezelbash:2012qn}. Studying pair production near the Kaluza-Klein black hole can also be interesting, due to its resemblance to the Kerr-Newman case. \section*{Acknowledgement} This work is supported by LPPM UNPAR under contract no. III/LPPM/2022-02/79-P. I thank the anonymous referee for his/her useful comments.
\section{INTRODUCTION}\label{Intro} Let $\V$ be a variety of algebras of some type $\Om$ and $\Th (\V)$ be the category of all $\V$-algebras and their homomorphisms. For an infinite set $X\sb0$, let $\Th \sp 0 (\V )$ denote the full subcategory of $\Th (\V )$ which is determined by all free $\V$-algebras over finite subsets of the set $X\sb0$. The problem is to describe all automorphisms of the category $\Th \sp 0 (\V )$. Motivations for this research can be found in the papers \cite{MPP, MPP2, Seven, AlgGeom}. The most important case is when automorphisms of this category are inner or close to inner in some sense. An automorphism $\Phi$ of a category $\C$ is called inner if it is isomorphic to the identity functor $Id \sp {\C}$ in the category of all endofunctors of $\C$. This means that for every object $A$ of the given category there exists an isomorphism $\sigma\sb A : A\to \Phi (A)$ such that for every morphism $\mu: A\to B$ we have $\Phi (\mu)= \sigma\sb B \circ \mu \circ \sigma\sb A \sp {-1}$. Thus an automorphism $\Phi$ may be inner only in the case when there is an isomorphism between $A$ and $\Phi (A)$ for every $\C$-object $A$. As regards the converse, it was shown in the papers \cite{ZhPlVar,ZhPlUnivAlg} that if the objects $\Phi (A)$ and $A$ are isomorphic for every $\C$-object $A$, then $\Phi$ is inner or so-called potentially inner. The latter means that $\Phi $ is inner in a category obtained by adding some new morphisms to $\C$. More precisely, there exists a family $(s\sb A :A \to \Phi (A))\sb {A \in Ob \C}$ of morphisms of an extended category such that for every morphism $\mu : A\to B$ of the category $\C$ we have $\Phi (\mu)=s\sb B \circ \mu \circ s\sb A \sp {-1}$. Thus, the problem is to describe these new morphisms $s\sb A$. A general method for doing this was suggested in the mentioned papers for categories $\C =\Th \sp 0 (\V )$ where $\V$ is a suitable variety. Below is a summary of the essence of this method. Consider a $\C$-algebra $A$ freely generated by the set $\{x\sb 1,\dots , x\sb n\}$ of variables, where $n$ is greater than the arities of all $\Om$-operators. We can assume that the algebras $\Phi (A) $ and $A$ have a common base set $|A|$ and the same free generators. Then for every $k$-ary operation symbol $\om \in \Om$ we have two $k$-ary operations on $|A|$: $\om \sb A$ and $\om \sb {\Phi (A)}$. It is obvious that every element of the algebra $A$ can be considered as a term in the language corresponding to the algebra $\Phi (A)$ and vice versa. In other words, every operation $\om \sb A$ is a derived operation in $\Phi (A) $, and vice versa, every operation $\om \sb {\Phi (A)}$ is a derived operation in $A$. Thus the mappings $s\sb A:|A|\to|A|$ such that $s\sb A (x\sb i)=x\sb i$ and $s\sb A (\om \sb A (x\sb 1, \dots ,x\sb k))=\om \sb {\Phi (A)} (x\sb 1, \dots ,x\sb k)$ for every $\om \in \Om$ determine the automorphism $\Phi$. The problem is reduced to finding terms $\tilde{\om }$ in the algebra $\Phi (A)$ such that the derived algebra $(|A|, (\tilde{\om })\sb {\om \in \Om}) $ belongs to the variety $\V$. Some new results were obtained in this way. In particular, A. Tsurkov successfully applied this method to many-sorted algebras (for example in \cite{Tsurkov}). It is known that if a variety $\V$ satisfies the IBN property then every automorphism of the category $\Th \sp 0(\V ) $ takes every object to an isomorphic one. But there are varieties without the IBN property. 
If it is unknown whether $A$ and $\Phi (A) $ are isomorphic or not, the method described above does not work. The aim of the current investigation is to fill this gap. Since an automorphism may fail to be inner in the general case, we use a new type of automorphism, called a {\it quasi-inner automorphism} (Definition \ref{inner}), and it turns out that this notion suffices to characterize arbitrary automorphisms. The further reasoning essentially follows the ideas of the papers \cite{ZhPlVar,ZhPlUnivAlg} for the case when $A$ and $\Phi (A)$ are isomorphic, but instead of the method outlined above, a new method is proposed that, in the author's opinion, is more successful. This method reduces the problem to the case when the underlying set $|A|$ of an algebra $A$ is a subset of the underlying set of the algebra $\Phi (A)$, every endomorphism $\mu$ of the algebra $A$ is the restriction of the endomorphism $\Phi (\mu)$ of the algebra $\Phi (A)$ to the set $|A|$, and every restriction to $|A|$ of an endomorphism $\nu $ of $\Phi (A) $ is an endomorphism $\mu$ of $A$ such that $\Phi (\mu)=\nu$. This circumstance gives us the opportunity to describe the action of the automorphism $\Phi$. The main results are formulated in two theorems. Theorem \ref{allAuto} states that every automorphism of a category $\C$ supplied with a forgetful functor $\Q:\C \to \Set$ which satisfies two acceptable conditions is potentially quasi-inner. Theorem \ref{product} states that every automorphism $\Phi$ of the category $\Th \sp 0 (\V )$ for an arbitrary variety $\V$ is the product of two functors $\Phi =\G \circ \Psi$. The first of them is an inner isomorphism $\Psi :\Th \sp 0 (\V ) \to \D$, where $\D$ is a full subcategory of $\Th \sp 0 (\V )$ which is described explicitly. The second functor $\G: \D \to \Th \sp 0 (\V )$ is a so-called extension functor, that is, $|A| \subseteq |\G (A)|$ for every $\C$-algebra $A$ and $\mu \subseteq \G (\mu) $ for every $\C$-morphism $\mu$. The last part of the paper contains two examples. The first of them demonstrates the advantage of our method in a known situation (the variety of all semigroups), and the second one presents a new result for the variety of all modules over an arbitrary ring with unit. As a consequence of the latter result, we obtain that in the case when the ring does not contain zero divisors, all automorphisms are semi-inner. \section{Quasi-inner automorphisms}\label{quasi} We consider only categories $\C$ which are supplied with a faithful functor $\Q: \C \to Set$, where $Set$ is the category of all sets. As is customary, we call $\Q$ the forgetful functor. \begin{defn}\label{inner} An automorphism $\Phi$ of a category $\C$ is called inner if there is an isomorphism between $\Phi$ and the identity functor $Id \sp{\C}$ in the category of all endofunctors of $\C$. An automorphism $\Phi$ is said to be {\it quasi-inner} if there is a bimorphism $Id \sp{\C}\to \Phi$. \end{defn} This definition means that $\Phi$ is inner if for every object $A$ of the given category there exists an isomorphism $\sigma\sb A : A\to \Phi (A)$ such that for every morphism $\mu: A\to B$ the following diagram commutes: \[ \CD A @> \sigma \sb A>> \Phi (A)\\ @V\mu VV @VV\Phi (\mu) V \\ B @> \sigma \sb B >> \Phi (B) \endCD \] Hence we get that $\Phi (\mu)=\sigma\sb B \circ \mu \circ \sigma\sb A \sp {-1}$. 
As a generalization of this notion, $\Phi$ is quasi-inner if for every object $A$ of the given category there exists a {\it bimorphism} $\sigma\sb A : A\to \Phi (A)$ such that for every morphism $\mu: A\to B$ the diagram above is commutative. Hence we only have the condition $\sigma\sb B \circ\mu =\Phi (\mu) \circ \sigma\sb A $, but not an expression for $\Phi (\mu)$ as in the case of an inner automorphism, when the morphism $\sigma\sb A $ is not invertible. Nevertheless, we have $\Q (\mu)=(\Q (\sigma\sb B ))\sp {-1}\circ \Q (\Phi (\mu)) \circ \Q (\sigma\sb A) $. \begin{lem}\label{square} Let $\Phi$ be a quasi-inner automorphism of $\C$ and $(\sigma\sb A)_{A\in Ob\C}:Id^\C \to \Phi$ be a bimorphism. If $\sigma\sb B \circ \mu =\nu \circ \sigma\sb A $ for $\mu:A\to B, \; \nu : \Phi (A) \to \Phi (B) $, then $\nu=\Phi (\mu)$. Hence $\Phi (\mu)$ is uniquely determined by the following inclusion: $\Q (\sigma\sb B )\circ \Q (\mu) \circ (\Q (\sigma\sb A))\sp {-1} \subseteq \Q (\Phi (\mu)) $. \end{lem} \begin{proof} It is obvious because $\sigma\sb A $ is an epimorphism. \end{proof} \section{Potentially quasi-inner automorphisms}\label{PotQuasi} In this section, we assume that every category $\C$ under consideration is supplied with a forgetful functor $\Q$ such that there exist a $\C$-object $A\sb 0$ and an element $x\sb 0 \in \Q (A\sb 0)$ which satisfy the following conditions: \begin{enumerate} \item[1Q)] $\Q$ is represented by the pair $(A\sb 0 ,x\sb 0)$, i.e., for every object $A$ of $\C$ and for every element $a\in \Q(A)$ there is exactly one morphism $\alpha:A\sb 0 \to A$ such that $ \Q(\alpha) (x\sb 0)=a$. \item[2Q)] for every $\C$-object $A$ there exists a morphism $\a:A\to A\sb 0$ such that for every element $x\in \Q (A\sb 0 )$ we have $x=\Q (\a) (a)$ for some element $a\in \Q(A)$, in other words, $\Q (\a):\Q (A)\to \Q (A\sb 0)$ is surjective. \end{enumerate} Some simple properties of such categories need to be noted: \begin{lem}\label{mono_and_epi} \item[1.] Let $\mu: A\to B$ be a $\C$-morphism. If $\Q(\mu):\Q(A) \to \Q(B)$ is surjective then $\mu$ is an epimorphism. \item[2.] A morphism $\mu: A\to B$ is a monomorphism if and only if the map $\Q(\mu):\Q(A) \to \Q(B)$ is injective. \item[3.] Let $\Phi$ be an automorphism of the category $\C$. There exists an epimorphism $\eta :A\sb 0 \to \Phi (A\sb 0)$. If $\eta$ is an isomorphism then $\Phi$ is potentially inner. \end{lem} \begin{proof} \item[1.] It is obvious. \item[2.] It is also obvious that if $\Q (\mu)$ is injective then $\mu$ is a monomorphism. Let $\mu: A\to B$ be a monomorphism. Suppose that $\Q (\mu)$ is not injective, i.e., there exist elements $a\sb 1, a\sb 2 \in \Q (A)$ such that $a\sb 1 \not =a\sb 2 $ but $\Q (\mu) (a\sb 1)=\Q (\mu) (a\sb 2)$. According to property 1Q, there are morphisms $\a\sb 1, \a\sb 2 :A\sb 0 \to A$ such that $\Q (\a\sb 1) (x\sb 0) =a\sb 1$ and $\Q (\a\sb 2) (x\sb 0) =a\sb 2$. We have $\Q(\mu) ( \Q(\a\sb 1) (x\sb 0))=\Q (\mu) ( \Q(\a\sb 2) (x\sb 0))$ and hence $\mu\circ \a\sb 1=\mu\circ \a\sb 2$. Since $\mu$ is a monomorphism, we obtain that $\a\sb 1 =\a\sb 2$, which contradicts the supposition $a\sb 1 \not =a\sb 2$. \item[3.] Consider the object $\Phi \sp {-1} (A\sb 0)$. According to property 2Q there is a morphism $\eta \sb 0 : \Phi \sp {-1} (A\sb 0) \to A\sb 0$ such that the mapping $\Q (\eta \sb 0) :\Q ( \Phi \sp {-1} (A\sb 0)) \to \Q (A\sb 0)$ is surjective. Thus $\eta \sb 0 : \Phi \sp {-1} (A\sb 0) \to A\sb 0$ is an epimorphism. 
Hence $\eta =\Phi (\eta \sb 0) : A\sb 0 \to \Phi (A\sb 0)$ is an epimorphism too. If $\eta $ is an isomorphism, then $\Phi$ is potentially inner according to Theorem 1 in \cite{ZhPlVar}. \end{proof} \begin{defn}\label{main_epi} The epimorphism $\eta \sb 0 : \Phi \sp {-1} (A\sb 0) \to A\sb 0$ and the epimorphism $\eta :A\sb 0 \to \Phi (A\sb 0)$ introduced in the proof of \lemref{mono_and_epi} are fixed as the main epimorphisms connected with $\Phi$ and are denoted by $\eta \sb 0\sp \Phi$ and $\eta \sp \Phi$, respectively. If $A\sb 0 = \Phi (A\sb 0)$ we assume that $ \eta \sp \Phi= \eta \sb 0 \sp \Phi = 1\sb {A\sb 0} $. \end{defn} \begin{defn}\label{bijection} Let $A,B\in Ob\C$. A mapping $s:\Q(A)\to \Q(B)$ is called a $\C$-bijection if $s$ is injective and for any two $\C$-morphisms $\a\sb 1, \a\sb 2: B\to C$ the equality $\Q(\a\sb 1)\circ s =\Q(\a\sb 2)\circ s$ implies $\a\sb 1=\a\sb 2$. Roughly speaking, the mapping $s$ has the epimorphism property only in relation to $\C$-morphisms. \end{defn} It is obvious that, on the one hand, the notion of a $\C$-bijection is a generalization of the notion of a bijection and, on the other hand, if for a $\C$-morphism $\st :A\to B$ the mapping $\Q(\st):\Q(A)\to \Q(B)$ is a $\C$-bijection then $\st :A\to B$ is a bimorphism. \begin{defn}\label{potentially} An automorphism $\Phi$ of a category $\C$ is called potentially quasi-inner if there exists a family of $\C$-bijections $(s\sb A :\Q(A) \to \Q(\Phi (A)))\sb {A\in Ob\C}$ such that for every $\C$-morphism $\mu :A\to B$ the following diagram commutes: $$ \CD \Q (A) @> s\sb A>>\Q (\Phi (A))\\ @V\Q(\mu )VV @VV\Q(\Phi (\mu)) V \\ \Q(B) @> s\sb B >>\Q ( \Phi (B)) \endCD $$ It is easy to see that for potentially quasi-inner automorphisms a fact analogous to \lemref{square} is true too, i.e., if $s\sb B \circ \Q(\mu) =\Q (\nu) \circ s\sb A $ for some morphism $\nu : \Phi (A) \to \Phi (B)$ then $\nu =\Phi (\mu)$. Thus the morphism $\Phi (\mu)$ is uniquely determined by the commutativity of the diagram above. \end{defn} \begin{defn}\label{alpha} Let $A$ be a $\C$-object and $a\in \Q (A)$. We denote by $\a \sb a \sp A$ the unique morphism $A\sb 0 \to A$ such that $\Q (\a \sb a \sp A) (x\sb 0)=a$. \end{defn} Now we are ready to prove the first of the two main theorems. \begin{thm}\label{allAuto} Every automorphism of a category which satisfies conditions 1Q and 2Q is potentially quasi-inner. \end{thm} \begin{proof} Let $\Phi$ be an automorphism of a category $\C$ and $A$ be a $\C$-object. Define the mapping $s\sb A :\Q (A) \to \Q (\Phi (A))$ by the formula: \begin{equation}\label{s} s\sb A (a)=\Q(\Phi (\a \sb a \sp A)\circ \eta ) (x\sb 0) \end{equation} for every $a\in A$. Here $\eta =\eta \sp \Phi $ according to Definition \ref{main_epi}. We start by checking that $s\sb A$ is a $\C$-bijection. \item{1)}. Let $s\sb A (u)=s\sb A (v)$ for $u,v\in \Q(A)$. Then $\Q (\Phi (\a \sb u \sp A)\circ \eta ) (x\sb 0) =\Q (\Phi (\a \sb v \sp A)\circ \eta )(x\sb 0)$. Hence $\Phi (\a \sb u \sp A)\circ \eta =\Phi (\a \sb v \sp A)\circ \eta $. Since $\eta $ is an epimorphism, we obtain that $\Phi (\a \sb u \sp A)=\Phi (\a \sb v \sp A)$, which implies $\a \sb u \sp A =\a \sb v \sp A$, i.e., $u=v$. Hence $s\sb A$ is injective. \item{2)}. Let $\Q (\g)\circ s\sb A =\Q (\dl) \circ s\sb A$ for some $\C$-morphisms $\g , \dl :\Phi (A) \to B$. 
Using \eqref{s}, we have for all $a\in \Q (A)$ $$\Q (\g)\circ \Q(\Phi (\a \sb a \sp A)\circ \eta)(x\sb 0)=\Q (\dl) \circ \Q(\Phi (\a \sb a \sp A)\circ \eta)(x\sb 0)$$ and hence $$ \g\circ \Phi (\a \sb a \sp A)\circ \eta=\dl\circ \Phi (\a \sb a \sp A)\circ \eta $$ for all $a\in \Q (A)$. Since $\eta$ is an epimorphism, we obtain $\g\circ \Phi (\a \sb a \sp A)=\dl\circ \Phi (\a \sb a \sp A)$. Applying $\Phi\sp {-1}$ to both sides of this equation, we get that $\Phi\sp {-1}(\g)\circ \a \sb a \sp A =\Phi\sp {-1}(\dl)\circ \a \sb a \sp A$ for all $a\in \Q (A)$. Hence $\Phi\sp {-1}(\g) =\Phi\sp {-1}(\dl)$, which gives $\g =\dl$. Thus $s\sb A$ is $\C$-bijective. It remains to check that the corresponding square in Definition \ref{potentially} is commutative. According to \eqref{s}, we have for every $a\in \Q (A)$ \begin{gather*} \Q (\Phi (\mu))\circ s\sb A (a) =\Q (\Phi (\mu))\circ \Q(\Phi (\a \sb a \sp A)\circ \eta)(x\sb 0)=\\ =\Q(\Phi (\mu)\circ \Phi (\a \sb a \sp A)\circ \eta)(x\sb 0) =\Q(\Phi (\mu \circ \a \sb a \sp A) \circ \eta)(x\sb 0) =\\ =\Q(\Phi (\a \sb {\Q (\mu) (a)} \sp B) \circ \eta)(x\sb 0) =s\sb B (\Q (\mu ) (a))=s\sb B\circ \Q (\mu) (a). \end{gather*} Thus $\Q (\Phi (\mu))\circ s\sb A =s\sb B \circ \Q (\mu)$. \end{proof} \begin{defn}\label{main_function} The family of $\C$-bijections $(s\sb A :\Q(A) \to \Q(\Phi (A)))\sb {A\in Ob\C}$ defined by \eqref{s} is called the {\bf main function} corresponding to $\Phi$. \end{defn} It is easy to see that this notion is a generalization of the notion of the main function defined in \cite{ZhPlVar}. In fact, if the main epimorphism $\eta : A\sb 0 \to \Phi (A\sb 0)$ is an isomorphism, we get the same formula for $s\sb A$ (see \cite{ZhPlVar}, formula 2.2). Of course, the main function is not the unique function $(\Q(A) \to \Q(\Phi (A)))\sb {A\in Ob\C}$ which makes the squares in Definition \ref{potentially} commutative. For example, the main function for the identity functor is equal (according to \eqref{s} and Definition \ref{main_epi}) to $s\sb A (a)=\Q (\a \sb a \sp A\circ 1\sb {A\sb 0} ) (x\sb 0)=a$, that is, $s\sb A =\Q (1\sb A)$. Hence in that case we have $ s\sb B \circ \Q (\mu ) =\Q (\mu )\circ s\sb A$ for every morphism $\mu :A\to B$. Having in mind this property, we use the following notion (see \cite{ZhPlVar,ZhPlUnivAlg}). \begin{defn}\label{cntr} The family $(c\sb A)\sb {A\in Ob\C}$ of $\C$-bijective mappings $c\sb A :\Q(A) \to \Q(A)$ having the following property: $c\sb B \circ \Q (\mu )=\Q (\mu )\circ c\sb A$ for every morphism $\mu :A\to B$ is called a {\it central function}. In other words, a function $A \mapsto c\sb A $ is a central function if it determines the identity automorphism of the category $\C$. \end{defn} Let $\Phi$ be an automorphism of $\C$. \thmref{allAuto} states that both $\Phi$ and $\Phi \sp{-1}$ are potentially quasi-inner. Therefore we have two main functions $(s\sp {\Phi}\sb A :A \to \Phi (A))\sb {A\in Ob\C}$ and $(s\sp {\Phi \sp {-1}}\sb A :A \to \Phi \sp {-1} (A))\sb {A\in Ob\C}$ of $\C$-bijections and two main epimorphisms $\eta \sp {\Phi}: A\sb 0\to \Phi (A\sb 0)$ and $\eta \sp {\Phi \sp {-1}}: A\sb 0\to \Phi \sp{-1} (A\sb 0)$. Although in the general case the main functions do not satisfy many of the conditions which are valid when $\eta$ is an isomorphism, they have some similar properties. 
In particular, we get the following commutative diagram for every morphism $\mu :A\to B$: $$ \CD \Q (A) @> s\sp {\Phi}\sb A>>\Q ( \Phi (A))@>s\sp {\Phi \sp {-1}}\sb {\Phi (A)}>>\Q (A)\\ @V\Q(\mu )VV @VV\Q(\Phi (\mu)) V @VV\Q(\mu )V\\ \Q (B) @>s \sp {\Phi}\sb B >>\Q (\Phi (B))@>s\sp {\Phi \sp {-1}}\sb {\Phi (B)}>>\Q(B) \endCD $$ We see that the family of mappings $c\sb A \sp {\Phi}=s\sp {\Phi \sp {-1}}\sb {\Phi (A)}\circ s\sp {\Phi}\sb A:\Q (A)\to \Q (A)$ is a central function. This central function will be called the {\it main central function} for the automorphism $\Phi$. It is obvious that for every function $A\mapsto s\sb A $ that determines an automorphism $\Phi$ and any two central functions $A \mapsto e\sb A$ and $A\mapsto d\sb A$, the function $A \mapsto (e\sb {\Phi (A)} \circ s\sb A \circ d\sb A )$ determines the same automorphism. The converse proposition is also true. \begin{prop}\label{twofunctions} Let a function $A\mapsto t\sb A$, where $A\in Ob\C$, determine an automorphism $\Phi$ of a category $\C$. Then there exists a central function $A\mapsto d\sb A$ such that $t\sb A =(c\sb{\Phi (A)}\sp {\Phi \sp{-1}}) \sp {-1}\circ s\sp {\Phi}\sb A \circ d\sb A $ for all $\C$-objects $A$. \end{prop} \begin{proof} Let $d\sb A = s\sb {\Phi (A)}\sp {\Phi \sp {-1}}\circ t\sb A$. It is obvious that the function $A\mapsto d\sb A$ is central. We get $s\sb{\Phi (A)}\sp {\Phi} \circ s\sb {\Phi (A)}\sp {\Phi \sp {-1}}\circ t\sb A=s\sb {\Phi (A)}\sp {\Phi} \circ d\sb A$. Since $s\sb{\Phi (A)}\sp {\Phi} \circ s\sb {\Phi (A)}\sp {\Phi \sp {-1}}=c\sb{\Phi (A)}\sp {\Phi \sp{-1}}$ is injective, we get $t\sb A =(c\sb{\Phi (A)}\sp {\Phi \sp{-1}}) \sp {-1}\circ s\sp {\Phi}\sb A \circ d\sb A $. \end{proof} \section{The categories of free universal algebras}\label{Free} From now until the end of the article, we deal with an arbitrary variety $\V$ of universal algebras of some signature $\Om$ and with the corresponding category $\Th \sp 0 (\V)$ of all free finitely generated $\V$-algebras and their homomorphisms. This category and all its subcategories are supplied with the usual forgetful functor $\Q$ that assigns to every algebra $A$ its underlying set $|A|$ and to every homomorphism $A\to B$ the corresponding mapping $|A|\to |B|$. If there is no risk of misunderstanding, we denote an algebra by the same letter as its underlying set and do the same for homomorphisms. Let $A\sb 0$ be a monogenic algebra freely generated by an element $x\sb 0$. We consider full subcategories of $\Th \sp 0 (\V)$ that contain the object $A\sb 0$. It is obvious that every such category satisfies the conditions 1Q and 2Q formulated in \secref{PotQuasi} because the pair $(A\sb 0, x\sb 0)$ represents the forgetful functor $\Q$. Let $\C$ be a full subcategory of $\Th \sp 0 (\V)$ and $A\sb 0$ be an object of $\C$. Let $\Phi$ be an automorphism of $\C$. \thmref{allAuto} states that both $\Phi$ and $\Phi \sp{-1}$ are potentially quasi-inner. Therefore we have two main functions $(s\sp {\Phi}\sb A :A \to \Phi (A))\sb {A\in Ob\C}$ and $(s\sp {\Phi \sp {-1}}\sb A :A \to \Phi \sp {-1} (A))\sb {A\in Ob\C}$ of $\C$-bijections and four main epimorphisms: $$\eta \sp {\Phi}\sb 0: \Phi \sp {-1} (A\sb 0) \to A\sb 0,\; \eta \sp {\Phi}: A\sb 0\to \Phi (A\sb 0),$$ $$\eta \sb 0\sp {\Phi \sp {-1}}: \Phi (A\sb 0) \to \ A\sb 0,\; \eta \sp {\Phi \sp {-1}}: A\sb 0\to \Phi \sp{-1} (A\sb 0).$$ By definition, $\eta \sp {\Phi} =\Phi (\eta \sp {\Phi}\sb 0)$. 
The epimorphism $\eta \sp {\Phi}\sb 0$ was introduced in the proof of \lemref{mono_and_epi} as an epimorphism $\Phi \sp {-1} (A\sb 0) \to A\sb 0$, but now we can specify it. Let $X$ be a basis of $\Phi \sp {-1} (A\sb 0)$. Then we set $\eta \sp {\Phi}\sb 0 (x) =x\sb 0$ for all $x\in X$. It is obvious that some arbitrariness remains in the definition of $\eta \sp {\Phi}\sb 0$, which depends on the choice of $X$. Specifically, if there is an isomorphism between $\Phi \sp {-1} (A\sb 0)$ and $A\sb 0$, we prefer to assume that $\eta \sp {\Phi}\sb 0$ is equal to the corresponding isomorphism. All this concerns the epimorphism $\eta \sb 0\sp {\Phi \sp {-1}}$ too. Just as in the previous section, we have the main central function $(c\sb A\sp{\Phi})\sb {A\in Ob\C}$. Now we describe the images of this function. \begin{prop}\label{c-work} Let $A$ be a $\C$-algebra and $a\in A$. Let $c\sb A\sp {\Phi} =s\sp {\Phi \sp {-1}}\sb {\Phi (A)}\circ s\sp {\Phi}\sb A$. Denote by $w\sp {\Phi}(x\sb 0)$ the term $\eta \sp {\Phi} \sb 0\circ \eta \sp {\Phi \sp {-1}} (x\sb 0)\in A\sb 0$. Then $c\sb A\sp {\Phi} (a)$ is an element of the subalgebra of $A$ generated by $a$, namely, $ c\sb A\sp {\Phi} (a)=w\sp {\Phi}(a)$. If $w\sp {\Phi}(x\sb 0)=x\sb 0$ then all mappings $c\sb A \sp {\Phi}$ are the identity mappings and therefore $\Phi$ is a potentially inner automorphism. \end{prop} \begin{proof}We have $$c\sb A \sp {\Phi}(a)=s\sp {\Phi \sp {-1}}\sb {\Phi (A)}(s\sp {\Phi}\sb A (a))=\Phi \sp {-1}(\a \sp {\Phi (A)}\sb {s\sp {\Phi}\sb A (a)})\circ \eta \sp {\Phi \sp {-1}}(x\sb 0) =\Phi \sp {-1}(\a \sp {\Phi (A)}\sb {s\sp {\Phi}\sb A (a)}\circ \eta \sp {\Phi \sp {-1}}\sb 0) (x\sb 0). $$ Now let us calculate the mapping in parentheses. Let $x\sb 1, \dots ,x\sb n$ be a basis of $\Phi (A)$. By definition, $\eta \sp {\Phi \sp {-1}}\sb 0 (x\sb i )=x\sb 0$. Thus \begin{gather*} \a \sp {\Phi (A)}\sb {s\sp {\Phi}\sb A (a)}\circ \eta \sp {\Phi \sp {-1}}\sb 0 (x\sb i)= \a \sp {\Phi (A)}\sb {s\sp {\Phi}\sb A (a)}(x\sb 0)=s\sp {\Phi}\sb A (a) =\Phi (\a\sp A \sb a)\circ \eta \sp {\Phi} (x\sb 0)=\\ =\Phi (\a\sp A \sb a \circ \eta \sp {\Phi} \sb 0) (x\sb 0)=\Phi (\a\sp A \sb a \circ \eta \sp {\Phi} \sb 0)\circ\eta \sp {\Phi \sp {-1}}\sb 0 (x\sb i). \end{gather*} Since $x\sb i$ is an arbitrary element of the basis, we obtain that $$\a \sp {\Phi (A)}\sb {s\sp {\Phi}\sb A (a)}\circ \eta \sp {\Phi \sp {-1}}\sb 0 =\Phi (\a\sp A \sb a \circ \eta \sp {\Phi} \sb 0)\circ\eta \sp {\Phi \sp {-1}}\sb 0. $$ Substituting the obtained value for the mapping in parentheses, we get \begin{gather*} c\sb A \sp {\Phi}(a)=\Phi \sp {-1}(\Phi (\a\sp A \sb a \circ \eta \sp {\Phi} \sb 0)\circ\eta \sp {\Phi \sp {-1}}\sb 0) (x\sb 0)=\\ =\a\sp A \sb a \circ \eta \sp {\Phi} \sb 0\circ \Phi \sp {-1}(\eta \sp {\Phi \sp {-1}} \sb 0) (x\sb 0)=\a\sp A \sb a \circ \eta \sp {\Phi} \sb 0\circ \eta \sp {\Phi \sp {-1}} (x\sb 0) =\a\sp A \sb a (w\sp {\Phi}(x\sb 0))=w\sp {\Phi}(a). \end{gather*} Thus $c\sb A \sp {\Phi}(a)\in \langle a\rangle$. If all mappings $c\sb A \sp {\Phi}$ are identity mappings then all $s\sp {\Phi}\sb A$ are bijections and therefore $\Phi$ is a potentially inner automorphism. \end{proof} \begin{prop} Let $A$ be a $\C$-algebra and $X$ be a basis of $A$. For any central function $(c\sb A)\sb {A\in Ob\C}$, either none of the elements of $X$ belongs to $c\sb A(A)$ or $c\sb A$ is surjective. \end{prop} \begin{proof} Let $x \in c\sb A(A)$ for some $x\in X$ and $a\in A$. Let $\mu$ be an endomorphism of $A$ such that $\mu (x) =a$. Then $a\in \mu (c\sb A (A))$. 
Since $c\sb A$ is central, we have $a\in c\sb A (\mu (A))$, which implies $a\in c\sb A (A)$. Thus $c\sb A$ is surjective because $a$ is an arbitrary element of $A$. \end{proof} We now show that the same fact holds for $s\sp {\Phi} \sb A$. Hereinafter we write $s\sb A$ instead of $s\sp {\Phi}\sb A$ if it is clear what we have in mind. \begin{prop}\label{notsurjective} Let $A$ be a $\C$-algebra and $X$ be a basis of $\Phi (A)$. Then either none of the elements of $X$ belongs to $s\sb A (A)$ or $s\sb A$ is surjective. \end{prop} \begin{proof} Let $x\in s\sb A (A)$ for some $x\in X$. Let $u$ be an arbitrary element of $\Phi (A)$. There is an endomorphism $\mu$ of $\Phi (A)$ such that $\mu (x)=u$. We have the equation $s\sb A \circ \Phi \sp{-1} (\mu) =\mu \circ s\sb A $. Since $x\in s\sb A (A)$, there is $a\in A$ such that $x=s\sb A (a)$. We have $s\sb A ( \Phi \sp{-1} (\mu) (a))=\mu (s\sb A (a)) =\mu (x)=u$. Thus $u\in s\sb A (A)$. Since $u$ is an arbitrary element of $\Phi (A)$, $s\sb A $ is surjective. \end{proof} Let $X=\{x\sb 1,\dots ,x\sb n\}$ be a basis of a $\C$-algebra $A$. Let $y\sb i=s\sb A\sp {\Phi }(x\sb i)$ for $i=1,\dots , n$. Since $s\sb A \sp {\Phi }$ is injective, all elements of the set $Y=\{y\sb 1,\dots ,y\sb n\}$ are pairwise distinct. On the other hand, \propref{notsurjective} shows that (unless $s\sb A$ is surjective) none of the elements of $Y$ belongs to any basis of $\Phi (A)$. Nevertheless, the set $Y$ has the following interesting property. \begin{prop}\label{uniq} Let $B$ be a $\C$-algebra and $\b, \g :\Phi (A) \to B$ be two homomorphisms. If $\b (y\sb i)=\g (y\sb i)$ for all $i=1,\dots , n$ then $\b =\g$. \end{prop} \begin{proof} We have $\b (y\sb i)=\b (s\sb A (x\sb i)) =\b (\Phi (\a \sb {x\sb i})\circ \eta (x\sb 0))=\b \circ \Phi (\a \sb {x\sb i})\circ \eta (x\sb 0)$. Similarly, for $\g$ we have $\g (y\sb i)=\g \circ \Phi (\a \sb {x\sb i})\circ \eta (x\sb 0)$. Under the assumption, we have $$\b \circ \Phi (\a \sb {x\sb i})\circ \eta (x\sb 0)=\g \circ \Phi (\a \sb {x\sb i})\circ \eta (x\sb 0), $$ which implies $$\b \circ \Phi (\a \sb {x\sb i}) =\g \circ \Phi (\a \sb {x\sb i}) $$ and then $$\Phi (\Phi \sp{-1}(\b) \circ \a \sb {x\sb i}) =\Phi (\Phi \sp{-1}(\g) \circ \a \sb {x\sb i}). $$ Since $X$ is a basis of $A$, after some obvious steps we obtain that $\b =\g $. \end{proof} Now our aim is to describe the images of the main function, that is, the sets $ s\sb A\sp {\Phi} (A)$ for every object $A$. \begin{thm}\label{image} There exists a term ${\bf t}(x\sb 0) $ containing only one variable $x\sb 0$ such that for every $\C$-algebra $A$ the following holds: $$ s\sb A \sp {\Phi} (A) =\{{\bf t}(u) \vert u\in \Phi (A)\}.$$ \end{thm} \begin{proof} Let $x$ be a member of some basis of the algebra $\Phi \sp {-1} (A\sb 0)$. Then $s\sb {\Phi \sp {-1} (A\sb 0)}\sp {\Phi}(x)$ is an element of $A\sb 0$ which we consider as a term ${\bf t}(x\sb 0) $. Let $a\in A$ and $\mu :\Phi \sp {-1} (A\sb 0) \to A$ be a homomorphism such that $\mu (x) = a$. We have the following commutative diagram: $$ \CD \Phi \sp {-1} (A\sb 0) @> s\sb {\Phi \sp {-1} (A\sb 0)}\sp {\Phi}>>A\sb 0\\ @V\mu VV @VV\Phi (\mu) V \\ A @> s\sb A\sp {\Phi} >> \Phi (A) \endCD $$ Hence $s\sb A\sp {\Phi}(a)=\Phi (\mu) ( s\sb {\Phi \sp {-1} (A\sb 0)}\sp {\Phi}(x))=\Phi (\mu) ({\bf t}(x\sb 0)) ={\bf t}(\Phi (\mu) (x\sb 0))$. Thus there exists an element $u\in \Phi (A)$ such that $s\sb A\sp {\Phi} (a)={\bf t}(u)$. It is obvious that the element ${\bf t}(u)$ belongs to $s\sb A \sp {\Phi}(A)$ for every $u\in \Phi (A)$. 
\end{proof} \section{Full description of automorphisms of categories of free algebras}\label{description} The aim of this section is to give a description of the automorphisms of arbitrary full subcategories of the category $\Th \sp 0 ({\V})$ which contain the object $A\sb 0$. To this end, we select two kinds of functors. Since we may have two different algebraic structures on the same set, it makes sense to use different notations for an algebra and for its underlying set, namely $A$ and $|A|$, respectively. \begin{defn}\label{functors} Let $\C$ and $\D$ be full subcategories of the category $\Th \sp 0 ({\V})$. \item{1.} An isomorphism $\Psi :\C \to \D$ is called inner if there is a family of isomorphisms $(\st\sb {A} :A\to \Psi (A))\sb {A\in Ob \C}$ such that $\Psi (\mu) =\st\sb B \circ \mu \circ \st\sb A \sp {-1}$ for every morphism $\mu :A\to B$ of the category $\C$. That is, $\Psi$ is isomorphic to the embedding functor $Id\sb {\C} \sp{\Th \sp 0 ({\V})} :\C \to \Th \sp 0 ({\V})$. \item{2.} An injective functor $\G :\C \to \D$ is called an extension functor if \item{(i)} $|A| \subseteq |\G (A)|$ for every $\C$-object $A$, \item{(ii)} $\mu \subseteq \G (\mu)$ for every $\C$-morphism $\mu$, \item{(iii)} if $\nu :\G (A) \to \G (B)$, $\mu :A\to B$ and $\mu \subseteq \nu$ then $\nu =\G (\mu)$ for any $\C$-objects $A,B$ and morphisms $\mu ,\nu$. \end{defn} Below we show that any automorphism of a category $\C$ under consideration is a product of two functors, the first of which is an inner isomorphism and the second of which is an extension functor. Let $\Phi$ be an automorphism of a given category $\C$. We follow an idea used in \cite{ZhPlVar,ZhPlUnivAlg}. Since $s\sp {\Phi}\sb A :|A|\to |\Phi (A)|$ and $s\sp {\Phi \sp{-1}}\sb A : |A|\to |\Phi \sp {-1} (A)|$ are $\C$-bijections, they determine algebraic structures on the sets $\Im s\sp {\Phi}\sb A=s\sp {\Phi}\sb A (|A|) \subseteq |\Phi (A)|$ and $\Im s\sp {\Phi \sp{-1}}\sb A=s\sp {\Phi \sp{-1}}\sb A (|A|) \subseteq |\Phi \sp {-1} (A)|$, respectively. Let $\om$ be a symbol of a $k$-ary operation and $\om \sb A$ be the corresponding operation of $A$. Define new $\om$-operations on $\Im s\sp {\Phi}\sb A $ and $\Im s\sp {\Phi \sp{-1}}\sb A $ by setting for all $a\sb 1,\dots ,a\sb k \in |A|$ $$\om\sb A \sp * (s\sp {\Phi}\sb A (a\sb 1),\dots , s\sp {\Phi}\sb A (a\sb k)) =s\sp {\Phi}\sb A (\om \sb A (a\sb 1,\dots ,a\sb k)),$$ $$\om\sb A \sp \# (s\sp {\Phi \sp{-1}}\sb A (a\sb 1),\dots , s\sp {\Phi \sp{-1}}\sb A (a\sb k)) =s\sp {\Phi \sp{-1}}\sb A (\om \sb A (a\sb 1,\dots ,a\sb k)).$$ The set $\Im s\sb A \sp {\Phi}$ supplied with the operations $\om\sb A \sp *$ for all $\om \in \Om$ is an $\Om$-algebra which we denote by $A\sp *$. It is obvious that the bijective mapping $(s\sb A \sp {\Phi}) \sp*: |A|\to \Im s\sb A \sp {\Phi}$, which is the mapping $s\sb A \sp {\Phi}: |A| \to |\Phi (A)|$ considered as a mapping onto $\Im s\sb A \sp {\Phi}$, is an isomorphism $(s\sb A\sp {\Phi}) \sp*: A\to A\sp *$. Thus $A\sp *$ and $A$ are isomorphic. Similarly, we have the algebra $A\sp \# $ on the set $\Im s\sp {\Phi \sp{-1}}\sb A$ and the isomorphism $s\sp\#\sb A: A\to A\sp\#$. \begin{thm}\label{restriction} Let $\mu : \Phi (A) \to \Phi (B)$ be a homomorphism. The restriction of $\mu$ to $A \sp *$ is a homomorphism $A \sp * \to B \sp *$. Conversely, any homomorphism $\nu : A \sp * \to B \sp *$ is the restriction of exactly one homomorphism $\mu : \Phi (A)\to \Phi (B)$. \end{thm} \begin{proof} Let $\mu : \Phi (A) \to \Phi (B)$. 
We have $s\sp {\Phi}\sb B \circ \Phi \sp {-1} (\mu) = \mu \circ s\sp {\Phi}\sb A$. Then $ \mu (s\sp {\Phi}\sb A (A)) \subseteq s\sp {\Phi}\sb B (B)$, and it is correct to consider the restriction of $\mu$ to $|A \sp *|$ as a mapping $|A \sp *|\to |B \sp *|$. Denoting this restriction by $\nu$, we get $s\sp*\sb B \circ \Phi \sp {-1} (\mu) = \nu \circ s\sp *\sb A$. Consequently, $\nu =s\sp*\sb B \circ \Phi \sp {-1} (\mu) \circ (s\sp *\sb A)\sp {-1}$ and hence $\nu$ is a homomorphism $ A \sp * \to B \sp *$. Conversely, let $\nu : A \sp * \to B \sp *$. Then $\g =(s\sp*\sb B )\sp {-1}\circ \nu \circ s\sp *\sb A$ is a homomorphism $\g :A\to B$. Let $\mu =\Phi (\g)$. It is obvious that $\mu :\Phi (A) \to \Phi (B)$ and $\nu \subseteq \mu$. The uniqueness of such a homomorphism follows from \propref{uniq}. \end{proof} The next fact may be useful in some cases. \begin{prop}\label{condQuasi} An automorphism $\Phi$ of the category $\C$ is quasi-inner if and only if there exists a central function $(c\sb A)\sb {A\in Ob\C}$ such that for each $ \C$-object $A$ the mapping $c\sb A$ is a bimorphism $A\to \Phi (A)\sp \# $. \end{prop} \begin{proof} Let $\Phi $ be quasi-inner and $(\st \sb A:A\to \Phi (A)) \sb {A \in Ob \C}$ be the corresponding family of bimorphisms (Definition \ref{inner}). Let $c\sb A =s\sp {\Phi \sp {-1}}\sb {\Phi (A)}\circ \st \sb A$. Obviously, $(c\sb A )\sb A$ is a central function. Since the mapping $s\sp {\Phi \sp {-1}}\sb {\Phi (A)}$ can be considered as the isomorphism $(s\sp {\Phi \sp {-1}}\sb {\Phi (A)})\sp\#: \Phi (A)\to \Phi (A)\sp \# $, we get that $c\sb A: A\to \Phi (A)\sp \# $ is a bimorphism. Now suppose that there exists a central function $(c\sb A) \sb {A \in Ob \C}$ such that every $c\sb A$ is a bimorphism $A\to \Phi (A)\sp \# $. Let us define $\st \sb A :A\to \Phi (A)$ as $$\st \sb A =(s\sb {\Phi (A)}\sp {\Phi \sp {-1}} )\sp{-1} \circ c \sb A =(s\sb {\Phi (A)}\sp {\Phi \sp {-1}\#} )\sp{-1}\circ c \sb A$$ for every $\C$-algebra $A$. This definition is correct because $c\sb A (|A|)\subseteq |\Phi (A)\sp\#|=\Im s\sb {\Phi (A)}\sp {\Phi \sp {-1}\#}$, and hence $\st \sb A :A\to \Phi (A)$ is a bimorphism. Then we have for every $\mu :A\to B$ that $\mu \circ s\sb {\Phi (A)}\sp {\Phi \sp {-1}}=s\sb {\Phi (B)}\sp {\Phi \sp {-1}}\circ \Phi (\mu)$, which implies $$(s\sb {\Phi (B)}\sp {\Phi \sp {-1}})\sp {-1}\circ \mu \circ s\sb {\Phi (A)}\sp {\Phi \sp {-1}}\circ (s\sb {\Phi (A)}\sp {\Phi \sp {-1}})\sp {-1}\circ c\sb A =(s\sb {\Phi (B)}\sp {\Phi \sp {-1}})\sp {-1}\circ s\sb {\Phi (B)}\sp {\Phi \sp {-1}}\circ \Phi (\mu)\circ \st \sb A$$ and hence $\st \sb B \circ \mu =\Phi (\mu) \circ \st \sb A$. Thus $\Phi$ is quasi-inner. \end{proof} \begin{rem} It may happen that an automorphism $\Phi$ is quasi-inner but $\Phi \sp {-1}$ is not. It is clear that for $\Phi \sp {-1}$ a statement like \propref{condQuasi} holds in which ``$*$'' is used instead of ``$\#$''. \end{rem} Now we proceed to describing, for each $\C$-object $A$, the algebra $A\sp *$, taking into account that $A\sp *$ may not be an object of $\C$. Suppose that the arities of the operations of our variety $\V$ are less than some number $n$. Let $F$ be a $\C$-algebra and $X=\{x\sb 1 ,\dots , x\sb n\}$ be a basis of $F$. Let $\om $ be a symbol of a $k$-ary operation and $\om \sb A$ be the corresponding operation on an algebra $A$. We consider the expression ${\bf w}=\om (x\sb 1 ,\dots , x\sb k)$ as a term in the language corresponding to our variety. 
For every $k$-tuple $(a\sb1 ,\dots , a\sb k)$ of elements of an algebra $A$, the value $\om \sb A (a\sb 1, \dots ,a\sb k)$ is equal to $\th (\om (x\sb 1, \dots ,x\sb k))$, where $\th : F\to A$ is a homomorphism such that $\th (x\sb i)=a\sb i$ for $i=1,\dots, k$. Now consider the element $\tilde {\bf w}=s\sb F ({\bf w})$ of the free algebra $\Phi (F)$. It turns out that the element $\tilde {\bf w}$ determines an $\om$-operation on the set $s\sb A (|A|)$ for every $\C$-algebra $A$. Let $(u\sb1 ,\dots , u\sb k)$ be a $k$-tuple of elements of $s\sb A (|A|)$. Then we have a unique $k$-tuple $(a\sb1 ,\dots , a\sb k)$ of elements of the algebra $A$ such that $s\sb A (a\sb i) =u\sb i$ for $i=1,\dots, k$. Let $\th : F\to A$ be a homomorphism such that $\th (x\sb i)=a\sb i$ for $i=1,\dots, k$. We define the new operation $\widetilde \om \sb {\Phi (A )}$ by the value of the term $\tilde{\bf w}$ as follows: \begin{equation}\label{derivedoper} \widetilde \om \sb {\Phi (A )}(u\sb1 ,\dots , u\sb k)=\Phi (\th) (\tilde {\bf w}) \end{equation} On the other hand, we have the commutative diagram: \[ \CD F @> s\sb F>> \Phi (F)\\ @V\th VV @VV\Phi (\th) V \\ A @> s\sb A >>\Phi (A) \endCD \] We obtain $u\sb i=s\sb A (a\sb i)=s\sb A(\th (x\sb i)) =\Phi (\th ) (s\sb F (x\sb i))$ and \begin{gather*} \Phi (\th )(\widetilde {\bf w})= \Phi (\th ) \circ s\sb F (\om (x\sb 1,\dots ,x\sb k))= s\sb A (\om \sb A (a\sb1, \dots , a\sb k))=\\ = \om \sb A\sp * (s\sb A (a\sb 1),\dots ,s\sb A (a\sb k))=\om \sb A\sp * (u\sb 1,\dots ,u\sb k). \end{gather*} Hence $\widetilde \om\sb {\Phi (A)} (u\sb1, \dots , u\sb k)=\om \sb A\sp * (u\sb1, \dots , u\sb k)$, and the operation $\om \sb A\sp *$ is the derived operation $ \widetilde \om \sb {\Phi (A)}$ on $\Im s\sb A$. Incidentally, this result shows that the operation defined by \eqref{derivedoper} does not depend on the choice of an algebra $F$ having the assumed property. Thus we have proved the following result. \begin{thm} For every $\C$-algebra $A$ the derived operation $\widetilde{\omega }\sb {\Phi (A)} $ defined by \eqref{derivedoper} coincides with the induced operation $\omega \sb A \sp *$, that is, $$s\sb A (\omega \sb A (a\sb 1 ,\dots ,a\sb k))=\widetilde{\omega }\sb {\Phi (A )}(s\sb A (a\sb 1 ),\dots , s\sb A (a\sb k))$$ for all $a\sb 1,\dots ,a\sb k \in A$. \end{thm} Now we are ready to prove the second main theorem. \begin{thm}\label{product} Let $\C$ be a full subcategory of $\Th \sp 0 (\V)$ containing a monogenic algebra $A\sb 0$. Let $\Phi$ be an automorphism of $\C$. Then \item {(i)} there is a full subcategory $\D$ of $\Th\sp 0 (\V)$ such that $\Phi$ is a product $\Phi =\G \circ \Psi$, where $\Psi : \C \to \D$ is an inner isomorphism and $\G : \D \to \C$ is an extension functor. \item {(ii)} there is a term ${\bf t}(x\sb 0) \in |A\sb 0|$ such that $|\Psi (A)|=\{ {\bf t}(u)\vert u\in |\Phi (A)|\}$ for every $\C$-object $A$. \end{thm} \begin{proof} Let $\D$ be the full subcategory of $\Th \sp 0 (\V)$ whose objects are all algebras $A\sp *$, where $A$ is an arbitrary $\C$-object. Let $\Psi (A)=A\sp *$ for every $\C$-object $A$ and $\Psi (\mu) =s\sp *\sb B \circ \mu \circ (s\sp * \sb A)\sp {-1}$ for every $\C$-morphism $\mu :A\to B$. It is clear that $\Psi : \C \to \D $ is a functor and, moreover, $\Psi$ is an inner isomorphism between $\C$ and $\D$. Now let $\G =\Phi \circ \Psi \sp{-1}$. Then $\G$ is a functor $\D\to \C$. Of course, $\G$ is an isomorphism between these categories and $\Phi =\G \circ \Psi$. We have $\G (A\sp *) =\Phi (A)$ and therefore $|A\sp *| \subseteq |\G (A\sp *)|$. 
Let $\mu :A\sp* \to B\sp *$. According to the definition of $\Psi$, we have that $\nu =\Psi \sp{-1}(\mu) =(s\sp *\sb B)\sp {-1} \circ \mu \circ s\sp * \sb A$ is a homomorphism $A\to B$. Thus $\G (\mu ) =\Phi (\nu)$. On the other hand, $s\sb B \circ (s\sp *\sb B)\sp{-1} \circ \mu \circ s\sp * \sb A \circ (s\sb A)\sp {-1}=\mu$, that is, $\mu$ is the restriction of $\G (\mu)$ to $A\sp*$. In view of Theorem \ref{restriction} and Definition \ref{functors}, we obtain that $\G$ is an extension functor. The second statement follows from \thmref{image}. \end{proof} \begin{cor}\label{Corollary} The last theorem shows that the process of describing an automorphism $\Phi$ of a category $\C$ under consideration is reduced to the following steps: \begin{enumerate} \item We examine the elements of $A\sb 0$ and find the general form of the term ${\bf t}(x\sb 0) \in |A\sb 0|$. Then we describe the subset $s\sb A (|A|)$ of an algebra $\Phi (A)$ according to the formula $s\sb A (|A|)=\{ {\bf t}(u): u\in |\Phi (A)|\}$. In the case ${\bf t}(x\sb 0) =x\sb 0$ this set is equal to the underlying set of $\Phi (A)$. \item In order to describe the operations of $A\sp *$, we use the fact that the restriction of any endomorphism of $\Phi (A)$ to $s\sb A (|A|)$ must be an endomorphism of the algebra $A\sp *$, and vice versa, any endomorphism of $A\sp *$ can be extended to a uniquely determined endomorphism of $\Phi (A)$. \item We use the fact that the correspondence described above is an isomorphism between the semigroups $\End (A\sp *)$ and $\End (\Phi (A))$. \end{enumerate} If the requirements of these steps are fulfilled, it remains to describe the kind of embedding of $A\sp *$ in $\Phi (A)$, which may be an isomorphism or some new kind of correspondence, for example, a mirror homomorphism or a screw-homomorphism. \end{cor} \begin{examples*} \item{1.} First we show how the method suggested above can be applied in a simple case when the result is already known, namely for the category $SEM$ of all finitely generated free semigroups. Let $\C$ be a full subcategory of $SEM$ containing a monogenic semigroup $A\sb 0$. Let $\Phi$ be an automorphism of $\C$. Any term ${\bf t}(x\sb 0) \in A\sb 0$ in our case has the form ${\bf t}(x\sb 0) =x\sb 0 \sp k$, where $k\geq 1$ is an integer. Let $ F=F(x\sb 1,\dots , x\sb n)$ be the free semigroup generated by the variables $x\sb1,\dots , x\sb n$ and $A=\Phi \sp{-1}( F)$. Thus $A\sp *=\{u\sp k\vert u\in F\}$. Let $||w||$ denote the length of a word $w\in F$. Let $y$ be an element of a basis of $A\sp *$; then there exists an endomorphism $\g$ of $A\sp *$ such that $x\sb 1\sp k =\g (y)$. Let the endomorphism $\tilde{\g}$ of $ F$ be the extension of $\g$. Thus $\tilde{\g} (y)=x\sb 1\sp k$. Since $y=u\sp k$ for some $u\in F$, we get $k\geq ||y||\geq k$ (endomorphisms of $F$ do not decrease the length of a word, while $||y||=k||u||\geq k$), and hence $y\in \{x\sb 1\sp k ,\dots , x\sb n\sp k\}$. Applying this result to $A\sb 0$ ($n=1$) we obtain that $(\Phi \sp {-1}(A\sb 0))\sp *$ and $A\sb 0$ are isomorphic and hence $\Phi \sp {-1}(A\sb 0)$ and $A\sb 0$ are isomorphic. We know (\secref{Free}) that in this case all mappings $s\sb A$ are surjective and hence $k=1$. Therefore $(x\sb 1, x\sb 2)$ is the common basis of the semigroups $ F=F(x\sb 1, x\sb2)$ and $A\sp *$, which are isomorphic. Since $(x\sb 1, x\sb 2)$ is the unique basis of $F$ and the unique basis of $A\sp*$, there are only two isomorphisms $A\sp* \to F$, namely $\varphi (x\sb 1) =x\sb 1 , \varphi (x\sb2 ) = x\sb 2$ or $\varphi (x\sb 1) =x\sb 2, \varphi (x\sb 2 )=x\sb1$. 
In the former case we have $\varphi (x\sb 1 *x\sb 2) =x\sb 1x\sb 2 $ and hence $\varphi$ is the identity mapping. In the latter case, we have $\varphi (x\sb 1 *x\sb 2) =x\sb 2x\sb 1 $ and therefore $\varphi$ maps every word $u$ to the word $\underline{u}$, in which all letters are written in the reverse order. We obtain that either $\Phi$ is produced by isomorphisms, that is, $\Phi$ is inner, or all mappings $s \sb A : A\to \Phi (A)$ are anti-isomorphisms. \item{2.} Now we apply our method to the variety of modules. As far as the author knows, there are no results in the general case; some essential results on this topic can be found in \cite{KLP} and \cite{Lipyan}. Let $ R$ be an arbitrary ring with unit and let ${\bf Mod - R }$ denote the category of all finitely generated free left $R$-modules. We consider a full subcategory $\C$ of the category ${\bf Mod - R }$ which satisfies the accepted conditions. In this case $A\sb 0 =Rx\sb 0$. Let $\Phi$ be an automorphism of $\C$. Consider a free left $R$-module ${\bf M} =(M, +, 0, F)$, where $F: R\times M \to M$ is the left action of $R$ on $M$. Assume that $\{x\sb 1 ,x\sb 2\}$ is a basis of ${\bf M}$, that is, $M = Rx\sb 1 \oplus Rx\sb 2$. Let $A=\Phi \sp {-1} ({\bf M})$. We have to describe the finitely generated free left $R$-module $A\sp *=(|A\sp *|,+\sp *, 0\sp *, F\sp *)$. Since any term ${\bf t}(x\sb 0) \in A\sb 0$ in our case has the form ${\bf t}(x\sb 0) =rx\sb 0 $, where $r\in R ,\; r\not = 0$, we get $|A\sp *|=rM$. It is clear that $0\in |A\sp*|$. On the other hand, for every endomorphism $\varphi$ of $A\sp *$ we have $\varphi (0\sp *)=0\sp *$. Then, according to (2) in \corref{Corollary}, $\g (0\sp *)=0\sp *$ for every endomorphism $\g$ of $\bf M$, and since $0$ is the only element of $M$ fixed by all endomorphisms of $\bf M$, we get $0\sp * =0$. Further, $r x\sb 1 +\sp* r x\sb 2 =i x\sb 1 +j x\sb 2 $ for some $i,\; j \in R$. Let $\g :{\bf M}\to {\bf M} $ be such that $\g ( x\sb 1)=x\sb 1,\;\g ( x\sb 2) =0$. Since the restriction of $\g$ to $|A\sp*|$ is an endomorphism of $A\sp*$ too, we get $$\g (r x\sb 1 +\sp* r x\sb 2)=\g (r x\sb 1) +\sp* \g (r x\sb 2) =r\g (x\sb 1)+\sp* r\g ( x\sb 2)=r x\sb 1+\sp* 0=rx\sb 1.$$ On the other hand, $\g (i x\sb 1 +j x\sb 2) =ix\sb 1$. Thus $rx\sb 1=ix\sb 1$ and hence $i=r$. In exactly the same way we get $j=r$ and hence $r x\sb 1 +\sp* r x\sb 2 =r x\sb 1 +r x\sb 2 $. Let $a\sb1 , a\sb 2 \in M$ and $\g$ be an endomorphism of $\bf M$ such that $\g (x\sb1)=a\sb 1$ and $\g (x\sb 2)=a\sb 2$. Just as above, we get $$ \g (r x\sb 1 +\sp* r x\sb 2)=\g (r x\sb 1)+\sp* \g (r x\sb 2)=r\g (x\sb 1)+\sp* r\g (x\sb 2)=ra\sb1+\sp* ra\sb 2.$$ On the other hand, we get $$\g (r x\sb 1 + r x\sb 2)=\g (r x\sb 1) +\g (r x\sb 2)=r\g (x\sb 1) +r\g(x\sb 2)=ra\sb 1+ra\sb 2 .$$ As a result we obtain $$ra\sb1+\sp* ra\sb 2 =ra\sb 1+ra\sb 2,$$ which leads, after obvious calculations, to the fact that the operations $+\sp *$ and $+$ coincide on $|A\sp*|$. Thus the difference between the structures $\bf M$ and $ A\sp *$ can only be in the actions of the ring $R$. It is obvious that for every $k\in R$ there is an element $k\sp*\in R$ such that $k*rx\sb1=rk\sp* x\sb1$. Such an element $k\sp*$ may not be uniquely determined. But there exists a function $\a :R\to R$ such that $k*rx\sb1=r\a (k)x\sb1$. For two such functions $\a\sb 1$ and $\a\sb 2$ we have $r\a\sb 1 =r\a\sb 2$. In the same way as above we get $k*ra=r\a (k)a$ for every $a\in M$. 
Further, $$( k\sb 1 k\sb 2)* ra =k\sb 1 * (k\sb 2 *ra)=k\sb 1* r\a (k\sb 2)a =r(\a (k\sb 1)\a (k\sb 2))a .$$ On the other hand, we get $$ ( k\sb 1 k\sb 2)* ra=r\a (k\sb 1 k\sb 2)a.$$ Since $a$ is an arbitrary element of $M$, we obtain that \begin{equation}\label{function1} r\a (k\sb 1 k\sb 2)=r(\a (k\sb 1)\a (k\sb 2)). \end{equation} In the same way we get that \begin{equation}\label{function2} r\a (k\sb 1+k\sb 2) =r\a (k\sb 1)+r\a (k\sb 2) \end{equation} and \begin{equation}\label{function3} r\a (1)=r. \end{equation} Summing up these investigations, we obtain the following description of the automorphisms of $\C$. For every automorphism $\Phi$ of $\C$ the main function $(s\sb A: A \to \Phi (A))\sb {A\in Ob\C}$ satisfies the following conditions: \item (1) there exists an element $r\in R$ such that for every $a\in A$ we have $s\sb A (a)=ru$ for some $u\in \Phi (A)$, \item (2) every $\C$-bijection $ s\sb A: A \to \Phi (A)$ is an additive mapping, \item (3) there exists a function $\a : R\to R$ such that for every $k\in R$ and every $ a\in A $ we have $s\sb A (ka)=r \a (k)u$, where $ru=s\sb A (a)$, $u\in \Phi (A)$, \item (4) the function $\a $ satisfies equations \eqref{function1}--\eqref{function3}. Consider the case when the ring $R$ does not contain zero divisors. Let $B=\Phi\sp {-1}(Rx\sb 0)$. Suppose that a basis of the module $B\sp *$ contains two different elements $y\sb 1$ and $y\sb 2$. Let $\g$ be the automorphism of $B\sp *$ such that $\g (y\sb 1) =y\sb 2$ and $\g (y\sb 2) =y\sb 1$, and let $\tilde {\g}$ be the extension of $\g$ to an automorphism of $Rx\sb 0$. Since $\g \sp 2 =1\sb {B\sp *}$, the same property is valid for $\tilde {\g}$. Therefore we get $\tilde{\g}\sp 2 (x\sb 0) =x\sb 0$. Since $\tilde {\g}(x\sb 0) =kx\sb 0$ for some $k\in R$, we get that $k\sp 2 =1$ and hence, as $R$ has no zero divisors, $k=1$ or $k=-1$. Thus $y\sb 1 = y\sb 2$ or $y\sb 1 =- y\sb 2$, which contradicts the assumption. We obtain that $\Phi\sp {-1}(Rx\sb 0)$ and $Rx\sb 0$ are isomorphic. In this case all mappings $s\sb A :A\to \Phi (A) $ are surjective and $r=1$. Therefore all automorphisms of $\C$ are semi-inner, that is, the mappings $s\sb A : A\to \Phi (A)$ are additive bijections and there exists an endomorphism $\a$ of $R$ such that $s\sb A (ka)=\a (k) s\sb A (a)$ for all $k\in R$ and $a\in A$. Since the same is true for the automorphism $\Phi \sp {-1}$, we conclude that $\a$ is an automorphism of the ring $R$. \end{examples*} {\bf Acknowledgments} The author is pleased to thank B. Plotkin and E. Plotkin, R. Lipjansky and G. Mashevitsky for useful discussions and interesting suggestions.
\section{Introduction} \label{sec:intro} End-to-end automatic speech recognition (ASR) has become a promising approach for the speech recognition community. It simplifies model design, training, and decoding procedures compared to conventional approaches like hybrid systems using hidden Markov models (HMMs). However, the improvement comes with a computational cost: many state-of-the-art ASR architectures employ an attention-based deep encoder-decoder architecture~\cite{chorowski2015attention, 7472621, 8462506, Karita2019, gulati2020conformer}, which incurs heavy computational cost and large model size. Also, the decoder runs in an autoregressive fashion and requires sequential computation, i.e., the generation of an output token can be started only after the completion of the previous token. Compared to encoder-decoder modeling, Connectionist Temporal Classification (CTC)~\cite{graves2006connectionist} does not require a separate decoder and thus allows designing more compact and faster models. Also, CTC provides a greedy decoding algorithm for generating sentences in a fast and parallel way, especially compared to the autoregressive decoder of encoder-decoder models. Although recent advances in architectural design~\cite{Pratap2020,quartznet} and pre-training methods~\cite{wav2vec2} have improved the performance of CTC, it is usually weaker than encoder-decoder models, which is often attributed to its strong conditional independence assumption; closing the gap often requires an external language model (LM) and a beam search algorithm \cite{miao2015eesen,ueno2018acoustic}, which demand extra computational cost and effectively make the model autoregressive. Therefore, it is important to improve CTC modeling to reduce the overall computational overhead, ideally without the help of an LM and beam search. There has also been great interest in non-autoregressive speech recognition, aiming to reach the performance of autoregressive models~\cite{chen2019non,higuchi2020mask,chan2020imputer,fujita2020insertion,tian2020spike}, inspired by the success of non-autoregressive models in neural machine translation \cite{gu2018non,ghazvininejad2019mask, ma2019flowseq,Shu2020LaNMT}. Non-autoregressive ASR allows faster token generation than autoregressive ASR, as the generation of a token does not directly depend on the previous token. CTC itself can be viewed as an early instance of non-autoregressive ASR, and recently proposed methods, Mask CTC~\cite{higuchi2020mask} and Imputer~\cite{chan2020imputer}, use CTC as a part of non-autoregressive modeling: they first generate an initial output from CTC, then refine it via another network. Therefore, improving CTC is also important for improving non-autoregressive methods in general. In this work, we show that the performance of CTC can be improved with a proposed auxiliary task. The proposed task, named intermediate CTC loss, is constructed by first obtaining an intermediate representation of the model and then computing its corresponding CTC loss. The model is trained with the original CTC loss in conjunction with the proposed loss, with a very small computational overhead. During inference, the usual CTC decoding algorithm is used, so there is no overhead. We show the proposed method can improve Transformer~\cite{NIPS2017_7181} with various depths, and also Conformer~\cite{gulati2020conformer}, a recently proposed architecture combining self-attention and convolution layers. 
Also, we show the method can be combined with another regularization method, stochastic depth~\cite{huang2016deep, Pham2019}, for further improvement. The contributions of this paper are as follows: \begin{itemize} \setlength{\itemsep}{0pt}\setlength{\parskip}{0pt}\setlength{\parsep}{0pt} \item We present a simple yet efficient auxiliary loss, called intermediate CTC loss, for improving the performance of CTC ASR networks. \item We combine the intermediate CTC loss and stochastic depth regularization to achieve better performance than using either of them alone. \item We show an application to the Conformer encoder, a recently proposed architecture, and demonstrate that the proposed method is also effective for Conformer. \item We achieve results comparable to the state of the art, specifically a word error rate (WER) of 9.9\% on Wall Street Journal (WSJ) and a character error rate (CER) of 5.2\% on AISHELL-1, using CTC modeling and greedy decoding only. \end{itemize} \section{Architecture} \label{sec:architecture} We consider a multi-layer architecture with the CTC loss function. For a given input $x_0 \in \mathbb{R}^{T \times D}$ of length $T$ and dimension $D$, the encoder consists of $L$ layers as follows: \begin{equation} \label{equation:encoder} x_l = \textbf{EncoderLayer}_l(x_{l-1}), \end{equation} where $\textbf{EncoderLayer}_l$ is the $l$-th layer of the network, explained in Section~\ref{sec:encoder}. \subsection{Connectionist Temporal Classification} CTC~\cite{graves2006connectionist} computes the likelihood of the target sequence $y$ by considering all possible alignments for the label and the input length $T$. For the encoder output $x_L$ and target sequence $y$, the likelihood is defined as: \begin{equation} \label{equation:ctc_prob} P_\mathsf{CTC}(y|x_L) := \sum_{a \in \beta^{-1}(y)} P(a|x_L) \end{equation} where $\beta^{-1}(y)$ is the set of alignments $a$ of length $T$ compatible with $y$, including the special blank token. The alignment probability $P(a|x_L)$ is factorized with the following conditional independence assumption: \begin{equation} \label{equation:alignment_probability} P(a|x_L) = \prod_{t} P(a[t]|x_L[t]) \end{equation} where $a[t]$ and $x_L[t]$ denote the $t$-th symbol of $a$ and the $t$-th representation vector of $x_L$, respectively. At training time, we minimize the negative log-likelihood induced by CTC, using $P_\mathsf{CTC}(y|x_L)$ in Eq.~\eqref{equation:ctc_prob}: \begin{equation} {\mathcal{L}_{\text{CTC}}} := - \log P_\mathsf{CTC}(y|x_L). \end{equation} At test time, we use greedy search to find the most probable alignment for fast inference. \subsection{Encoder} \label{sec:encoder} We use two encoder architectures: Transformer~\cite{NIPS2017_7181} and Conformer~\cite{gulati2020conformer}. Transformer uses self-attention ($\text{SelfAttention}(\cdot)$ shown in Eq.~\eqref{equation:transformer}) for learning global representations, and layer normalization~\cite{ba2016layer} and residual connections~\cite{7780459} for stabilizing learning. With Transformer, \textbf{EncoderLayer}$(\cdot)$ in Eq.~\eqref{equation:encoder} consists of: \begin{align} \label{equation:transformer} x^\text{MHA}_l &= \text{SelfAttention}(x_{l-1}) + x_{l-1}, \\ x_l &= \text{FFN}(x^\text{MHA}_l) + x^\text{MHA}_l, \end{align} where $\text{FFN}(\cdot)$ denotes the feed forward layers. 
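To make the layer structure concrete, a minimal PyTorch sketch of Eq.~\eqref{equation:transformer} might look as follows. This sketch is ours, not code from the paper: the hyper-parameters are illustrative, and layer normalization is omitted exactly as in the equations above.
\begin{verbatim}
import torch.nn as nn

class TransformerEncoderLayer(nn.Module):
    # Self-attention and a feed-forward block, each wrapped with a
    # residual connection, as in the equations above. Sizes are
    # illustrative; layer norm is omitted as in the equations.
    def __init__(self, d_model=256, n_heads=4, d_ff=1024):
        super().__init__()
        self.attn = nn.MultiheadAttention(d_model, n_heads,
                                          batch_first=True)
        self.ffn = nn.Sequential(nn.Linear(d_model, d_ff), nn.ReLU(),
                                 nn.Linear(d_ff, d_model))

    def forward(self, x):              # x: (batch, T, d_model)
        x = self.attn(x, x, x)[0] + x  # x^MHA = SelfAttention(x) + x
        return self.ffn(x) + x         # x_l = FFN(x^MHA) + x^MHA
\end{verbatim}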
\subsection{Encoder} \label{sec:encoder} We use two encoder architectures: Transformer~\cite{NIPS2017_7181} and Conformer~\cite{gulati2020conformer}. Transformer uses self-attention ($\text{SelfAttention}(\cdot)$, shown in Eq.~\eqref{equation:transformer}) for learning global representations, and layer normalization~\cite{ba2016layer} and residual connections~\cite{7780459} for stabilizing training. With Transformer, \textbf{EncoderLayer}$(\cdot)$ in Eq.~\eqref{equation:encoder} consists of: \begin{align} \label{equation:transformer} x^\text{MHA}_l &= \text{SelfAttention}(x_{l-1}) + x_{l-1}, \\ x_l &= \text{FFN}(x^\text{MHA}_l) + x^\text{MHA}_l, \end{align} where $\text{FFN}(\cdot)$ denotes the feed-forward layers. Conformer combines self-attention and convolutional layers for efficient learning of both global and local representations. With Conformer, \textbf{EncoderLayer}$(\cdot)$ in Eq.~\eqref{equation:encoder} consists of: \begin{align} x^\text{FFN}_l &= \frac{1}{2} \text{FFN}(x_{l-1}) + x_{l-1} \\ x^\text{MHA}_l &= \text{SelfAttention}(x^\text{FFN}_l) + x^\text{FFN}_l \\ x^\text{Conv}_l &= \text{Convolution}(x^\text{MHA}_l) + x^\text{MHA}_l \\ x_l &= \text{LayerNorm}(\frac{1}{2} \text{FFN}(x^\text{Conv}_l) + x^\text{Conv}_l). \end{align} \subsection{Stochastic Depth} \label{sec:stochastic_depth} Stochastic depth~\cite{huang2016deep, Pham2019} is a regularization technique for residual networks. It helps the training of very deep networks by randomly skipping some layers, and can be viewed as training an ensemble of $2^L$ sub-models induced by removing some layers of the model. Consider \textbf{EncoderLayer}$(\cdot)$ in Eq.~\eqref{equation:encoder} with a residual connection: \begin{equation} \label{equation:encoder_residual} x_l = x_{l-1} + f_l(x_{l-1}) \end{equation} for some layer $f_l(\cdot)$. Let $b_l$ be a Bernoulli random variable which takes value 1 with probability $p_l$. During training, the layer is computed as: \begin{equation} \label{equation:stochastic_depth} x_l = \begin{cases} x_{l-1} & \text{if } b_l = 0, \\ x_{l-1} + \frac{1}{p_l} \cdot f_l(x_{l-1}) & \text{otherwise.}\\ \end{cases} \end{equation} Thus, with probability $1 - p_l$, the layer skips the $f_l(x_{l-1})$ part. The scaling factor $\frac{1}{p_l}$ ensures the expectation matches Eq.~\eqref{equation:encoder_residual}. During testing, we do not skip any layers and use Eq.~\eqref{equation:encoder_residual}. The per-layer survival probability is given as $p_l = 1 - \frac{l}{L}(1 - p_L)$ with hyper-parameter $p_L$. This assigns a higher skipping probability to higher layers, as skipping lower layers may harm the overall performance~\cite{huang2016deep}. We use $p_L = 0.7$ for all experiments. \section{Intermediate CTC Loss} \label{sec:intermediate_ctc} Stochastic depth aims to improve the training of multi-layer networks using a stochastic ensemble approach, but experiments show the improvement only comes with sufficiently deep networks, e.g., with 24 or more layers~\cite{Pham2019}. We hypothesize that while stochastic depth is effective for regularizing higher layers, it is not effective for regularizing lower layers, due to its ensemble strategy. As each layer has its own random variable for skipping, the probability of skipping all higher layers is very low. Therefore, in most cases, the lower layers may rely on the remaining higher layers rather than learn regularized representations by themselves. In this context, we propose to skip the higher layers as a whole. We choose a layer, called the ``intermediate layer'', and induce a sub-model by skipping all layers after the intermediate layer. The sub-model relies on the lower layers rather than the higher layers, so training the sub-model regularizes the lower part of the full model. For the position of the intermediate layer, this paper mainly uses ${\left\lfloor L/2 \right\rfloor}$, as it seems a safe choice between lower and higher layers. We discuss other choices in Section~\ref{sec:position_variants}. As the sub-model and the full model share the lower structure, the output of the sub-model is simply $x_{\left\lfloor L/2 \right\rfloor}$, the intermediate representation of the full model. Like the full model, we use a CTC loss for the sub-model: \begin{equation} \label{equation:inter_loss} {\mathcal{L}_{\text{InterCTC}}} := - \log P_\mathsf{CTC}(y|x_{\left\lfloor L/2 \right\rfloor}).
\end{equation} Note that the sub-model representation $x_{\left\lfloor L/2 \right\rfloor}$ is naturally obtained when we compute the full model. Thus, after computing the CTC loss of the full model, we can compute the CTC loss of the sub-model with very small overhead. The proposed training objective is the weighted sum of the two losses: \begin{equation} \label{equation:total_loss} \mathcal{L} := (1-w) {\mathcal{L}_{\text{CTC}}} + w {\mathcal{L}_{\text{InterCTC}}}, \end{equation} where we use $w = 0.3$ for all experiments. During testing, we do not use the intermediate prediction and only use the final representation $x_L$ for decoding. The intermediate loss can also be used jointly with stochastic depth. We expect the intermediate loss to regularize the lower layers and stochastic depth to regularize the higher layers, so combining them further improves the whole model. We show the empirical results in Section~\ref{sec:experiments}. \subsection{Position variants} \label{sec:position_variants} We also consider different sub-model configurations and investigate their effects. We consider the following variants: \begin{itemize} \setlength{\itemsep}{0pt}\setlength{\parskip}{0pt}\setlength{\parsep}{0pt} \item {\bf Lower than the middle}. Depending on the number of layers $L$, the optimal ratio of lower layers to higher layers may differ. To find the effect of the position of the intermediate loss, we consider a position lower than the middle, e.g., $\left\lfloor L/4 \right\rfloor$, for the sub-model. \item {\bf Multiple sub-models}. We consider multiple sub-models rather than only one. For $K$ sub-models, we compute the following loss: \begin{equation} - \frac{1}{K} \sum_{k=1}^{K} \log P_\mathsf{CTC}(y|x_{\left\lfloor \frac{k L}{K + 1} \right\rfloor}). \end{equation} For $K = 1$, the loss corresponds to Eq.~\eqref{equation:inter_loss}. \item {\bf Random position}. We also consider randomly choosing the sub-model among multiple candidates. We introduce a uniform random variable $u$ ranging from ${\left\lfloor L/2 \right\rfloor}$ to $L - 1$, and choose the $u$-th layer for the intermediate representation. \end{itemize} We show the experimental results in Section~\ref{sec:exp_variants}. \subsection{Stochastic variant of Intermediate Loss} \label{sec:stoch_variant} In Eq.~\eqref{equation:total_loss}, we compute the weighted sum of the two losses. Instead, we may compute a stochastic variant of the loss, in the spirit of stochastic depth, as follows. Let $b$ be a Bernoulli random variable which takes value 1 with probability $w$. The stochastic intermediate CTC objective is: \begin{equation} \label{equation:stoch_inter_ctc} \mathcal{L}' := \begin{cases} {\mathcal{L}_{\text{CTC}}} & \text{if } b = 0,\\ {\mathcal{L}_{\text{InterCTC}}} & \text{otherwise}. \end{cases} \end{equation} The loss coincides with Eq.~\eqref{equation:total_loss} in expectation. We argue the deterministic version is better than the stochastic one for gradient-based learning even though they have the same expected value. For the stochastic variant, the loss and its gradient only have access to ${\mathcal{L}_{\text{InterCTC}}}$ if $b = 1$, and the model may forget features useful for ${\mathcal{L}_{\text{CTC}}}$ but not for ${\mathcal{L}_{\text{InterCTC}}}$. The deterministic variant, on the other hand, always computes both losses at the same time; therefore, the risk of forgetting features is low.
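To make the objective concrete, the following minimal PyTorch-style sketch computes the final and intermediate CTC losses of Eq.~\eqref{equation:total_loss} in a single forward pass. The layer list, the shared CTC projection, and the tensor shapes are illustrative assumptions rather than the paper's exact implementation:
\begin{verbatim}
# One forward pass yields both losses; only the extra projection and
# softmax at the middle layer are added to the usual computation.
import torch.nn.functional as F

def intermediate_ctc_objective(x, layers, ctc_head,
                               ys, x_lens, y_lens, w=0.3):
    # x: (T, B, D) features; layers: list of L encoder layers;
    # ctc_head: linear projection to the vocabulary (assumed shared
    # between the final and intermediate losses).
    mid = len(layers) // 2
    x_mid = None
    for l, layer in enumerate(layers, start=1):
        x = layer(x)
        if l == mid:
            x_mid = x  # intermediate representation x_{floor(L/2)}
    def ctc(z):
        log_probs = F.log_softmax(ctc_head(z), dim=-1)
        return F.ctc_loss(log_probs, ys, x_lens, y_lens)
    return (1 - w) * ctc(x) + w * ctc(x_mid)
\end{verbatim}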
In Section~\ref{sec:exp_variants}, we show experimentally that while the stochastic variant also improves the model, it is not as effective as the deterministic one. \subsection{Application to other non-autoregressive ASR: Mask CTC} \label{sec:mask_ctc} Mask CTC~\cite{higuchi2020mask} consists of an encoder, a CTC layer on top of the encoder, and a conditional masked language model (CMLM) \cite{ghazvininejad2019mask}. During decoding, the model first generates initial hypotheses from the CTC layer, and replaces any token of low probability (below a given threshold) with the special token \texttt{<MASK>}. The CMLM then predicts the tokens at the masked positions given the masked hypothesis. During training, the target $y$ is randomly masked and fed to the CMLM, which predicts the tokens at the masked positions. Let $y_\mathsf{obs}$ be the masked input and $y_\mathsf{mask}$ the tokens to be predicted at the masked positions. The training objective is: \begin{multline} - w_\mathsf{CTC} \log P_\mathsf{CTC}(y|x_L) - (1 - w_\mathsf{CTC}) \log P_\mathsf{CMLM}(y_\mathsf{mask}|y_\mathsf{obs}, x_L) \end{multline} with hyper-parameter $w_\mathsf{CTC}$. As the initial hypothesis is predicted by the CTC layer, its quality is crucial for the overall performance. We aim to improve the CTC layer using the proposed intermediate loss. We take the intermediate output $x_{\left\lfloor L/2 \right\rfloor}$ from the encoder and compute the intermediate CTC probability $P_\mathsf{CTC}(y|x_{\left\lfloor L/2 \right\rfloor})$. The extended training objective is: \begin{multline} - w_\mathsf{CTC} \log P_\mathsf{CTC}(y|x_L) - w_\mathsf{InterCTC} \log P_\mathsf{CTC}(y|x_{\left\lfloor L/2 \right\rfloor}) \\ - (1 - w_\mathsf{CTC} - w_\mathsf{InterCTC}) \log P_\mathsf{CMLM}(y_\mathsf{mask}|y_\mathsf{obs}, x_L). \end{multline} We present the experimental results for Mask CTC in Section~\ref{sec:exp_mask_ctc}. \subsection{Related work} \label{sec:comparison_hctc} Hierarchical CTC~\cite{10.5555/1625275.1625400,DBLP:journals/corr/abs-1807-06234,Toshniwal2017} (HCTC) introduced an auxiliary CTC task based on the assumption that different layers learn different levels of abstraction. While HCTC looks similar to the intermediate loss, it requires additional labeling effort (e.g., phonemes) or multiple tokenizations (e.g., sub-words for the high level and characters for the low level), which may not be applicable in certain cases, e.g., when character-based tokenization is the best available option, as for Mandarin and Japanese \cite{watanabe2018espnet}. In contrast, intermediate CTC is based on sub-model regularization, so it does not require additional low-level labels, and it is natural to combine intermediate CTC with stochastic depth. \cite{9052964} and \cite{liu2020improving} introduced additional networks to train the intermediate layers of the encoder for CTC and RNN-Transducer~\cite{graves2012sequence}, respectively. In contrast to~\cite{liu2020improving}, intermediate CTC does not require an additional network and has very little overhead at training time, due to the structure of the CTC architecture. \section{Experiments} \label{sec:experiments} We evaluate the performance of the intermediate CTC loss on three corpora: Wall Street Journal (WSJ)~\cite{paul1992design} (English, 81 hours), TED-LIUM2~\cite{rousseau-etal-2014-enhancing} (English, 207 hours), and AISHELL-1~\cite{bu2017aishell} (Chinese, 170 hours). We use ESPnet~\cite{watanabe2018espnet} for all experiments.
We use 80-dimensional log-mel features and 3-dimensional pitch features as input, and apply SpecAugment~\cite{park2019specaugment} during training. For WSJ and AISHELL-1, we tokenize label sentences as characters. For TED-LIUM2, we tokenize label sentences as sub-words with SentencePiece~\cite{kudo-richardson-2018-sentencepiece}. For WSJ, the model is trained for 100 epochs. For TED-LIUM2 and AISHELL-1, the model is trained for 50 epochs. After training, the model parameters are obtained by averaging the models from the last 10 epochs. Note that we do not use any external language model (LM) or beam search, and only use greedy decoding for CTC. Thus, all experiments are based on the \textit{non-autoregressive} setup in order to keep the benefit of fast and parallel inference of CTC. \subsection{Results} \label{sec:experimental_results} We show the experimental results for the Transformer and Conformer architectures. For each architecture, we compare four regularization configurations: \begin{itemize} \setlength{\itemsep}{0pt}\setlength{\parskip}{0pt}\setlength{\parsep}{0pt} \item Baseline (no regularization) \item Intermediate CTC (``InterCTC'') \item Stochastic depth (``StochDepth'') \item Intermediate CTC + stochastic depth (``both'') \end{itemize} \input{exp_table} For Transformer, we use 12-layer, 24-layer and 48-layer models. Table~\ref{table:transformer} shows the word error rates (WERs) for WSJ and TED-LIUM2, and the character error rates (CERs) for AISHELL-1. In all of the experiments, intermediate CTC gives an improvement over the baseline model. Stochastic depth improves the 24-layer and 48-layer models, but does not improve the 12-layer models much for WSJ and AISHELL-1. Using both the intermediate loss and stochastic depth gives better results than using only one of them. Thus, we conclude the two methods have complementary effects. Additionally, we apply intermediate CTC to a 6-layer Transformer for WSJ, and obtain a WER improvement from 21.1\% to 18.3\%. This suggests that intermediate CTC is also beneficial for smaller networks. For Conformer, we use a 12-layer model. The results are in Table~\ref{table:conformer}. Again, intermediate CTC gives a consistent improvement over the baseline. Stochastic depth gives an improvement for WSJ and AISHELL-1, but not for TED-LIUM2. The combination of the intermediate loss and stochastic depth achieves WER \textbf{9.9\%} for WSJ and CER \textbf{5.2\%} for AISHELL-1. For WSJ, this outperforms previously published non-autoregressive results~\cite{higuchi2020mask,chan2020imputer,chi2020align} and is close to the state-of-the-art autoregressive result (9.3\%)~\cite{sabour2018optimal}. For AISHELL-1, it outperforms Transformer-based encoder-decoder models~\cite{karita2019comparative,gao2020sanm} and is close to the state-of-the-art autoregressive result (5.1\%)~\cite{zhou2020selfandmixed}. Note that the cited state-of-the-art results use an autoregressive decoder, and \cite{zhou2020selfandmixed} also uses an external LM. Our result, on the other hand, is based solely on CTC with greedy decoding, without an LM or beam search. \subsection{Study on the intermediate loss design} \label{sec:exp_variants} \begin{table}[t] \centering \caption{Word error rates (WERs) of the intermediate loss variants for WSJ.
See Section~\ref{sec:exp_variants} for details.} \label{table:variants} \scalebox{0.8}{ \begin{tabular}{l l|c c} \toprule & & dev93 & eval92 \\ \midrule \bf{12-layer} & Default & 17.5 & 13.6 \\ & Random & 17.4 & 14.3 \\ & Stochastic & 19.0 & 15.0 \\ \midrule \bf{24-layer} & Default & 15.3 & 12.4 \\ & Lower & 15.8 & 12.9 \\ & Multiple & 15.1 & 12.0 \\ & Random & 15.4 & 12.4 \\ \midrule \bf{48-layer} & Default & 14.9 & 12.6 \\ & Multiple & 15.4 & 12.1 \\ & Random & 14.7 & 12.0 \\ \bottomrule \end{tabular} } \vspace{-4mm} \end{table} To compare the proposed intermediate loss with the position variants (Section~\ref{sec:position_variants}) and the stochastic variant (Section~\ref{sec:stoch_variant}), we conduct additional experiments on the WSJ corpus. The results are in Table~\ref{table:variants}. We conduct the following experiments: \begin{itemize} \setlength{\itemsep}{0pt}\setlength{\parskip}{0pt}\setlength{\parsep}{0pt} \item {\bf Lower position}. We conduct this variant for the 24-layer model, which is sufficiently deep to consider a lower position; we use the 6th layer. Despite the depth of the network, the variant performs slightly worse than the default. \item {\bf Multiple positions}. We conduct this variant for the 24-layer and 48-layer models, which are very deep, so more sub-models may help. We use $K = 3$ for the 24-layer model and $K = 7$ for the 48-layer model, so that all selected layer positions are multiples of 6. It gives a small improvement for the 24-layer model, but a mixed result for the 48-layer model. \item {\bf Random position}. We conduct this variant for all models. The results are mixed: it gives no improvement for the 12-layer and 24-layer models, but a small improvement for the 48-layer model. \item {\bf Stochastic variant}. We conduct this variant for the 12-layer model. As discussed in Section~\ref{sec:stoch_variant}, the stochastic variant is worse than the deterministic one, although it is still better than no regularization. \end{itemize} From these experimental results, we conclude that the proposed design is a simple yet reasonable choice among the variants. \subsection{Application to other non-autoregressive ASR} \label{sec:exp_mask_ctc} \begin{table}[t] \centering \caption{Word error rates (WERs) of Mask CTC-based non-autoregressive ASR for WSJ. See Section~\ref{sec:exp_mask_ctc} for details.} \label{table:mask_ctc} \scalebox{0.8}{ \begin{tabular}{l|l|c c} \toprule & threshold & dev93 & eval92 \\ \midrule \textbf{12enc-6dec} & 0.0 & 16.5 & 13.5 \\ & 0.999 & 15.7 & 12.9 \\ \midrule + InterCTC & 0.0 & 14.4 & 11.6 \\ & 0.999 & \textbf{14.1} & \textbf{11.3} \\ \midrule Mask CTC~\cite{higuchi2020mask} & 0.999 & 15.4 & 12.1 \\ Align-Refine~\cite{chi2020align} & - & 13.7 & 11.4 \\ \bottomrule \end{tabular} } \vspace{-4mm} \end{table} We present experimental results for Mask CTC-based non-autoregressive ASR with the intermediate loss, as described in Section~\ref{sec:mask_ctc}. The WSJ corpus is used for the experiment. We use $w_\text{CTC} = 0.3$, and for the intermediate CTC variant, we also use $w_\text{InterCTC} = 0.3$. We use a Transformer model with a 12-layer encoder and a 6-layer decoder. The model is trained for 500 epochs and the parameters of the last 60 epochs are averaged. Table~\ref{table:mask_ctc} shows the WERs for Mask CTC. The second column indicates the probability threshold for the CTC prediction; Mask CTC uses 0.999 by default. If the threshold is 0.0, the model does not use the decoder and simply treats the CTC result as the final prediction.
We see that intermediate CTC improves the performance of the CTC prediction, from 13.5\% to 11.6\%. We also see that the improvement of CTC leads to an overall improvement of Mask CTC, as the WER is reduced from 12.9\% to 11.3\%. This is also lower than Align-Refine~\cite{chi2020align} (11.4\%), which improves Mask CTC by modifying the role of the CMLM. This shows the intermediate loss helps the training of Mask CTC. \section{Conclusion} We present intermediate CTC loss, an auxiliary task for improving CTC-based speech recognition. The proposed loss is easy to implement, has a small overhead at training time and no overhead at test time. We empirically show that the intermediate CTC loss improves the Transformer and Conformer architectures, and that combining the loss with stochastic depth further improves training, reaching word error rate (WER) 9.9\% on WSJ and character error rate (CER) 5.2\% on AISHELL-1, without an autoregressive decoder or external language model. \bibliographystyle{IEEEbib}
\section{Introduction} The notable advances in wireless technology and the availability of low-cost hardware have led to the emergence of real-time monitoring services. In these systems, a monitor needs to know the status of one or multiple processes observed by remote sources. Specifically, the sources send packets containing information about the process of interest to the monitor so that it can perform a given task. The main goal in these applications is therefore to keep the monitor up to date by receiving fresh information from the different sources. This concept of freshness is captured by the Age of Information (AoI), introduced for the first time in \cite{kaul2012real}. Since then, the AoI has become a hot research topic, and a considerable number of research works have been published on the subject \cite{maatouk2020optimality,hsu2019scheduling,kadota2018scheduling}. Although this metric quantifies the information time lag at the monitor, it fails to capture the correctness of the information at the monitor side: its evolution does not take into consideration the actual content of the information held by the monitor. This has been confirmed in \cite{jiang2019unified}, where the authors establish that minimizing the AoI gives a sub-optimal policy for minimizing the status error when remotely estimating Markovian sources. To deal with this issue, some works propose to minimize the estimation error or the mean square error \cite{sun2019sampling,kam2018towards}. However, the metrics developed in these works are unable to capture the concept of freshness; in other words, no penalty is incurred by the monitor or the central entity for being in an incorrect state for a long time. To meet the timeliness requirement in the process estimation framework, the authors in \cite{maatouk2019age} designed a new metric, dubbed the Age of Incorrect Information (AoII), that captures the freshness of the information while taking into account the information content acquired from the transmitter. This metric is adopted in a context where a given source is represented as a process denoted by $X(t)$ and the transmitter sends status updates to the receiver to inform it about the current state of $X(t)$. Under an energy or transmission rate constraint, the transmitter cannot use the channel to transmit a packet at each time slot. In this case, as long as the transmitter is idle, the monitor keeps the last received information, which may be erroneous compared to the current state of the process $X(t)$. Denoting by $\hat{X}(t)$ the estimated state at the monitor side, being in an incorrect state, i.e., $\hat{X}(t) \neq X(t)$, is clearly an undesirable situation for the monitor, and therefore a penalty should be paid. The AoII metric developed in \cite{maatouk2019age} matches this notion of penalty. Specifically, unlike the AoI, the AoII evolves only if the estimated state $\hat{X}(t)$ at the monitor side is different from the real state of the process of interest $X(t)$; if $X(t)=\hat{X}(t)$, the AoII does not evolve. In \cite{maatouk2019age,maatouk2020age}, the authors accordingly consider the problem of minimizing the average AoII in a transmitter-receiver pair scenario where packets are sent over an unreliable channel subject to a transmission rate constraint, and they derive the optimal solution, which is a threshold-based policy.
The work in \cite{kam2020age} studies the AoII metric in the simple context of monitoring a symmetric binary information source over a delay system with feedback. The authors propose a dynamic programming algorithm to compute the optimal sampling policy. However, \cite{maatouk2019age,maatouk2020age,kam2020age} assume that the scheduler has perfect knowledge of the process $X(t)$ at each time slot $t$, and they restrict the analysis to a single transmitter-receiver pair. In contrast, in this paper we tackle a realistic case in which a scheduler tracks the states of multiple remote sources and selects, at each time, a subset of them to send their updates, in such a way as to minimize the Mean Age of Incorrect Information (MAoII). Furthermore, the scheduler does not know the instantaneous state of the remote sources until it receives their updates. Specifically, our contributions can be summarized as follows: \begin{itemize} \item Since the scheduler cannot know the current states of the sources before it receives their updates, it cannot know the exact value of the MAoII and has to track/predict its evolution. To that end, we introduce a belief state at the monitor, which can be interpreted as the probability that the state at the monitor side is correct, i.e., $\hat{X}(t)=X(t)$. We then describe how this belief state can be derived and used in the development of the scheduling policy. \item We then formulate the MAoII-based scheduling problem and show that it belongs to the family of Restless Multi-Armed Bandit (RMAB) problems. The optimal solution of this type of problem is known to be out of reach. To circumvent this difficulty, we develop a low-complexity and efficient policy, called the Whittle's index policy (WIP), using the Lagrangian relaxation approach. \end{itemize} \section{System Model}\label{sec:Syst_mod} \subsection{Network description}\label{subsec:Net_descrip} We consider $N_u$ users that generate and send status updates about their processes of interest to a central entity over unreliable channels. Time is discrete and normalized to the time slot duration. More specifically, each user $i$ observes an information process of interest $X_i(t)$ and, at the request of the monitor, samples the process $X_i(t)$ and sends the sample to the monitor over an unreliable channel. Based on the last received update, the monitor constructs an estimate of the process, denoted by $\hat{X}_i(t)$. Given that the transmission of a packet takes one time slot, if the monitor allows user $i$ to transmit at time $t$, it receives the value of $X_i(t)$ at time slot $t+1$ in the case where the packet is successfully transmitted.
Therefore, it updates the estimated process as $\hat{X}_i(t+1)=X_i(t)$. In any other case, namely when user $i$ is not authorized to transmit or when the packet transmission fails, the monitor keeps the same value as at time slot $t$, specifically $\hat{X}_i(t+1)=\hat{X}_i(t)$. As for the unreliable channel, we suppose that for user $i$, at each time slot $t$, the probability of a successful transmission is $\rho_i$, and that of a failure is $1-\rho_i$. The channel realizations, denoted $c_i(t)$, are independent and identically distributed (i.i.d.) over time slots, i.e., $c_i(t)=1$ if the packet is successfully transmitted and $c_i(t)=0$ otherwise. The next aspect of our model is the nature of the process $X_i(t)$. For each user $i$, the information process of interest $X_i(t)$ evolves according to a Markov chain: the probability of remaining in the same state in the next time slot is $p_i$, and the probability of transitioning to any other given state is $r_i$. Denoting by $N_i$ the number of possible states of $X_i(t)$, the following always holds: \begin{equation}\label{eq:relation_p_r} p_i+(N_i-1)r_i=1 \end{equation} In this paper, we study the case where $p_i \geq r_i$. \begin{figure}[H] \centering \includegraphics[width=0.4\textwidth]{Markovian-source.png} \caption{Illustration of process $X_i(t)$} \end{figure} \subsection{Penalty function dynamics} In this paper, we study the mean age of incorrect information (MAoII) penalty function and compare it with the age of information (AoI) metric. We show that considering the MAoII metric is more relevant, accurate, and realistic than considering the AoI metric in order to achieve good performance with regard to the empirical age of incorrect information. For that purpose, we start by recalling the age of information in the next section to emphasize its shortcomings; then we propose the MAoII as an alternative metric. \subsubsection{Age of information penalty function} The standard metric (AoI) that captures the freshness of information for user $i$ is: \begin{equation} \delta_{AoI}(t)=t-g_i(t) \end{equation} where $g_i(t)$\footnote{Considering our system model detailed in \ref{subsec:Net_descrip}, $g_i(t)$ also refers to the sampling time of the information of interest contained in the last successfully received packet.} is the time-stamp of the last packet successfully received by the monitor. This metric captures the lifetime of the last update at the monitor without taking into account the correctness of the information, which makes it fall short in some applications: for instance, the age may keep increasing while the information of interest remains in the same state. To further emphasize the shortcoming of this metric, we consider the Whittle's index policy for it, which was already derived in \cite{maatouk2019age}, and give numerical results that expose its limitations. \subsubsection{Mean Age of incorrect information penalty function}\label{subsubsec:MAoII_pf} The age of incorrect information was introduced for the first time in \cite{maatouk2019age}. This metric captures the freshness of informative updates: if the monitor has acquired the information about the process $X_i(t)$, then as long as $X_i(t)$ remains in the same state in the subsequent time slots, the age of incorrect information does not increase, since there is no new information unknown to the monitor.
In \cite{maatouk2019age}, the authors presume that the scheduler has perfect knowledge of the process at each time slot and restrict their analysis to a transmitter-receiver pair. In our case, in contrast, the monitor, which plays the role of the scheduler, knows only the state contained in the last successfully received packet, and we extend the analysis to a communication system in which several users can transmit at each time slot. Accordingly, the explicit expression of the MAoII metric is: \begin{equation} \delta_{MAoII}(t)=\mathbb{E}_{V_i}[t-V_i(t)] \end{equation} where $V_i(t)$ is the last time instant such that $\mathbf{1}_{\{X_i(V_i(t))= \hat{X}_i(g_i(t)+1)\}}=1$. \begin{remark} It is worth mentioning that, as explained in Section \ref{subsec:Net_descrip}, the reception of the successfully transmitted packet takes place at time slot $g_i(t)+1$. This means that $\hat{X}_i(g_i(t)+1)=X_i(g_i(t))$. \end{remark} In order to use this metric effectively in a partially observable Markov decision process problem, we need to take into consideration the Markovian nature of the process $X_i(t)$. To that end, we introduce in the next section the notion of belief, which represents the probability that $\hat{X}_i(t)$ is in the correct state.\\ \subsection{Metrics evolution}\label{subsec:metrics_evolution} In this section, we describe mathematically the evolution of each metric depending on the system parameters and the action taken. We denote by $d_i(t)$ the action prescribed to user $i$ at time slot $t$, and by $a_i$ and $b_i$ the age of information and the mean age of incorrect information penalty functions, respectively. \subsubsection{AoI}\label{subsubsec:AoI_evolution} Considering our system model, the age of information of user $i$ evolves as follows. If user $i$ is scheduled ($d_i(t)=1$), the AoI drops to $1$ if the packet is successfully transmitted ($c_i(t)=1$); otherwise ($c_i(t)=0$), the AoI is increased by one. If user $i$ is not scheduled ($d_i(t)=0$), the AoI is increased by one. Accordingly, the evolution of the age of user $i$ can be summarized as follows: \begin{equation} a_i(t+1)=\left\{ \begin{array}{ll} 1 & \text{if } d_i(t)=1,\ c_i(t)=1 \\ a_i(t)+1 & \text{else} \end{array} \right. \end{equation} As for the second metric, to highlight the notion of correctness, the monitor maintains a belief value $\pi_i(t)$, defined as the probability that the information state at the monitor, $\hat{X}_i(t)=\hat{X}_i(g_i(t)+1)=X_i(g_i(t))$\footnote{We recall that as long as the monitor has not received any new update from the source by time instant $t$, it keeps the last update successfully received at time instant $g_i(t)+1$; in other words, $\hat{X}_i(t)=\hat{X}_i(g_i(t)+1)$.}, is correct at time $t$. Explicitly, $\pi_i(t)=\Pr(\hat{X}_i(t)=X_i(t))$. One can show that $\pi_i(t)$ evolves as follows: \begin{lemma}\label{lem:pi_evolution} \begin{equation} \pi_i(t+1)=\left\{ \begin{array}{ll} p_i & \text{if } d_i(t)=1,\ c_i(t)=1 \\ \pi_i(t)p_i+r_i(1-\pi_i(t)) & \text{else} \end{array} \right. \end{equation} \end{lemma} \begin{IEEEproof} See Appendix \ref{app:lem:pi_evolution}.
\end{IEEEproof} \subsubsection{MAoII}\label{subsubsec:MAoII_evolution} According to the expression of MAoII given in Section \ref{subsubsec:MAoII_pf}, $(t-V_i(t))$ is a random variable, denoted $A_i(t)$, that satisfies: \begin{lemma}\label{lem:random_variable} \begin{align} A_i(t)=\left\{ \begin{array}{lll} 0 & w.p & \pi_i(t)\\ 1 & w.p & \pi_i(t-1).(1-p_i)\\ 2 & w.p & \pi_i(t-2).(1-p_i).(1-r_i)\\ \vdots & & \\ t-g_i(t)-1& w.p & \pi_i(g_i(t)+1).(1-p_i)\\ & & \ .(1-r_i)^{t-g_i(t)-2} \\ \\ t-g_i(t)& w.p & (1-p_i).(1-r_i)^{t-g_i(t)-1} \end{array} \right. \end{align} \end{lemma} \begin{IEEEproof} See Appendix \ref{app:lem:random_variable}. \end{IEEEproof} Therefore, the mean age of incorrect information at slot $t$ equals the mean of $A_i(t)$, i.e. \begin{align} n_i(t)=&\mathbb{E}[A_i(t)] \nonumber \\ =&\sum_{k=0}^{t-g_i(t)-1} k(1-p_i)(1-r_i)^{k-1} \pi_i(t-k) \nonumber \\ &+(t-g_i(t)).(1-p_i).(1-r_i)^{t-g_i(t)-1} \nonumber \\ =&\sum_{k=1}^{t-g_i(t)} (t-g_i(t)-k)(1-p_i)(1-r_i)^{t-g_i(t)-k-1} \pi_i(g_i(t)+k)\nonumber \\ &+ (t-g_i(t)).(1-p_i).(1-r_i)^{t-g_i(t)-1} \end{align} One can establish, using the definition of $g_i(t)$, that for all $t$, $\pi_i(g_i(t)+1)=p_i$. Hence, according to the evolution of $\pi_i(\cdot)$ in Lemma \ref{lem:pi_evolution}, for all $k \leq t$, $\pi_i(g_i(t)+k)$ depends only on $k$ and $i$. More precisely, for each $k\leq t$, $\pi_i(g_i(t)+k)=\pi_i^k$, where $\pi_i^k$ is the sequence defined by induction as follows: \begin{equation} \pi^k_i=\left\{ \begin{array}{ll} 1 & \text{if } k=0 \\ p_i \pi_i^{k-1}+r_i(1-\pi_i^{k-1}) & \text{if } k>0 \end{array} \right. \end{equation} In light of that fact, we have: \begin{equation} n_i(t)=\sum_{k=0}^{t-g_i(t)} (t-g_i(t)-k)(1-p_i)(1-r_i)^{t-g_i(t)-k-1} \pi_i^k \end{equation} We conclude that $n_i(t)$ depends only on $t-g_i(t)$ and $i$. Therefore, with a slight abuse of notation, we let $n_i(t)\overset{\Delta}{=}n_i(t-g_i(t))$.\\ To that extent, at time slot $t$, if user $i$ is scheduled and the packet is successfully transmitted, then $g_i(t+1)=t$. Accordingly, at time slot $t+1$, the MAoII equals $n_i(t+1-g_i(t+1))=n_i(1)$. If user $i$ is not scheduled, or if the packet transmission fails, then $g_i(t+1)=g_i(t)$, and the MAoII transitions to $n_i(t+1-g_i(t+1))=n_i(t-g_i(t)+1)$. Based on this, and denoting by $j(t)$ the index such that $n_i(j(t))$ is the value of the MAoII at time slot $t$, the MAoII transitions to the value $n_i(j(t)+1)$ at time $t+1$ in that case. To sum up, the evolution of the MAoII can be summarized as follows: \begin{equation} b_i(t+1)=\left\{ \begin{array}{ll} n_i(1) & \text{if } d_i(t)=1,\ c_i(t)=1 \\ n_i(j(t)+1) & \text{else} \end{array} \right. \end{equation} where $b_i(t)=n_i(j(t))$.
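For concreteness, the values $n_i(j)$ can be evaluated directly from the recursion for $\pi_i^k$; a minimal Python sketch (the parameter values are hypothetical):
\begin{verbatim}
def maoii_values(p, r, j_max):
    # pi^k: probability that the estimate is still correct k slots
    # after a successful update (pi^0 = 1 by convention).
    pi = [1.0]
    for _ in range(j_max):
        pi.append(p * pi[-1] + r * (1.0 - pi[-1]))
    # n(j) = sum_{k=0}^{j} (j-k)(1-p)(1-r)^{j-k-1} pi^k; the k = j
    # term vanishes, so the sum runs over k < j.
    return [sum((j - k) * (1 - p) * (1 - r) ** (j - k - 1) * pi[k]
                for k in range(j))
            for j in range(1, j_max + 1)]

# Example with N = 10 states and r = 0.05, so p = 1 - 9*0.05 = 0.55:
print(maoii_values(p=0.55, r=0.05, j_max=5))
\end{verbatim}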
\section{Problem formulation}\label{sec:prob_form} In this section, we consider a given metric $m$, where $m$ can be either AoI or MAoII. We denote by $N_u$ the total number of users in the system, and we let the vector $\boldsymbol{m}$ at time $t$ be $\boldsymbol{m}(t)=(m_{1}(t),\ldots,m_{N_u}(t))$, where $m_i(t)$ is the penalty function at the central entity for user $i$ with respect to the metric $m$ at time slot $t$. Our aim is to find a scheduling policy that allocates, at each time slot, the $M$ available channels to a subset of $M$ users ($M \leq N_u$) in such a way as to minimize the total expected average penalty of the considered metric. A scheduling policy $\phi$ is defined as a sequence of actions $\phi=(\boldsymbol{d}^{\phi}(0),\boldsymbol{d}^{\phi}(1),\ldots)$ where $\boldsymbol{d}^{\phi}(t)=(d_1^{\phi}(t),d_2^{\phi}(t),\ldots,d_{N_u}^{\phi}(t))$ is a binary vector such that $d_i^{\phi}(t)=1$ if user $i$ is scheduled at time $t$. Denoting by $\Phi$ the set of all causal scheduling policies, our scheduling problem can be formulated as follows: \begin{equation} \setlength{\belowdisplayskip}{0pt} \setlength{\belowdisplayshortskip}{0pt} \setlength{\abovedisplayskip}{0pt} \setlength{\abovedisplayshortskip}{0pt} \begin{aligned} & \underset{\phi\in \Phi}{\text{minimize}} & & \lim_{T\to+\infty} \text{sup}\:\frac{1}{T}\mathbb{E}^{\phi}\Big(\sum_{t=0}^{T-1}\sum_{i=1}^{N_u}m_i^{\phi}(t)|\boldsymbol{m}(0)\Big)\\ & \text{subject to} & & \sum_{i=1}^{N_u}d_{i}^{\phi}(t)\leq\alpha N_u \quad t=1,2,\ldots \end{aligned} \label{eq:original_problem} \end{equation} where $\alpha N_u=M$. The problem in (\ref{eq:original_problem}) falls within the multi-armed bandit framework, and more specifically the restless bandit framework. RMAB problems are in general difficult to solve, as they are PSPACE-hard \cite{papadimitriou1999complexity}. To circumvent this complexity, a well-known heuristic called the Whittle's index policy \cite{weber1990index} has been proposed for this type of problem. This policy is based on a Lagrangian relaxation and has been shown to have remarkable performance in real-life applications. To that end, we detail in the next section the Lagrangian relaxation approach applied to our RMAB problem. Then, we provide the theoretical analysis leading to the low-complexity Whittle's index policy, denoted WIP, for the two metrics, namely AoI and MAoII. \section{Lagrangian Relaxation and Whittle's Index}\label{sec:lag_relax_whi_ind} \subsection{Relaxed problem} The Lagrangian relaxation technique is the key component for defining the Whittle's index scheduling policy. It consists of relaxing the constraint on the available resources by letting it be satisfied on average rather than in every time slot. More specifically, we define our Relaxed Problem (\textbf{RP}) as follows: \begin{equation}\label{eq:relaxed_problem} \setlength{\belowdisplayskip}{0pt} \setlength{\belowdisplayshortskip}{0pt} \setlength{\abovedisplayskip}{0pt} \setlength{\abovedisplayshortskip}{0pt} \begin{aligned} & \underset{\phi\in \Phi}{\text{minimize}} & & \lim_{T\to+\infty} \text{sup}\:\frac{1}{T}\mathbb{E}^{\phi}\Big(\sum_{t=0}^{T-1}\sum_{i=1}^{N_u}m_i^{\phi}(t)|\boldsymbol{m}(0)\Big)\\ & \text{subject to} & & \lim_{T\to+\infty}\text{sup}\frac{1}{T}\mathbb{E}^{\phi}\Big(\sum_{t=0}^{T-1}\sum_{i=1}^{N_u}d_{i}^{\phi}(t)\Big)\leq\alpha N_u \end{aligned} \end{equation} The Lagrangian function $f(W,\phi)$ of the problem \eqref{eq:relaxed_problem} is defined as: \begin{equation} \lim_{T\to+\infty} \text{sup}\:\frac{1}{T}\mathbb{E}^{\phi}\Big(\sum_{t=0}^{T-1}\sum_{i=1}^{N_u}m_i^{\phi}(t)+Wd_{i}^{\phi}(t)|\boldsymbol{m}(0)\Big)-W\alpha N_u \end{equation} where $W \geq 0$ can be seen as a penalty for scheduling users. Thus, following the Lagrangian approach, our next objective is to solve the following problem: \begin{equation}\label{eq:dual_problem} \underset{\phi\in \Phi}{\text{min}} f(W,\phi) \end{equation} As the term $W\alpha N_u$ is independent of $\phi$, it can be eliminated from the analysis.
Bearing that in mind, we present the steps to obtain the Whittle's index policy: \begin{enumerate} \item We focus on the one-dimensional version of the problem in (\ref{eq:dual_problem}). Indeed, it can be shown that the $N_u$-dimensional problem can be decomposed into $N_u$ one-dimensional problems that can be solved independently \cite{kriouile2018asymptotically}. Accordingly, we drop the user index from all user parameters for ease of notation, and we deal with the one-dimensional problem: \begin{equation}\label{eq:individual_dual_problem} \setlength{\belowdisplayskip}{0pt} \setlength{\belowdisplayshortskip}{0pt} \setlength{\abovedisplayskip}{0pt} \setlength{\abovedisplayshortskip}{0pt} \begin{aligned} & \underset{\phi\in \Phi}{\text{min}} & & \lim_{T\to+\infty} \text{sup}\:\frac{1}{T}\mathbb{E}^{\phi}\Big(\sum_{t=0}^{T-1}m^{\phi}(t)+Wd^{\phi}(t)|m(0)\Big) \end{aligned} \end{equation} \item We give structural results on the optimal solution of the one-dimensional problem. \item We establish the indexability property, which ensures the existence of the Whittle's indices. \item We derive a closed-form expression of the Whittle's index and, thereby, define the proposed scheduling policy (WIP) for the original problem (\ref{eq:original_problem}). \end{enumerate} \subsection{Structural results} The problem in (\ref{eq:individual_dual_problem}) can be viewed as an infinite-horizon average-cost Markov decision process defined as follows: \begin{itemize} \item \textbf{States}: The state of the MDP at time $t$ is the penalty function $m(t)$. \item \textbf{Actions}: The action at time $t$, denoted by $d(t)$, specifies whether the user is scheduled (value $1$) or not (value $0$). \item \textbf{Transition probabilities}: The transition probabilities between the different states. \item \textbf{Cost}: We let the instantaneous cost of the MDP, $C(m(t),d(t))$, be equal to $m(t)+Wd(t)$. \end{itemize} The optimal policy $\phi^*$ of the one-dimensional problem \eqref{eq:individual_dual_problem} can be obtained by solving the following Bellman equation for each state $m$: \begin{align} \theta &+ V(m) \nonumber \\ &=\min_{d\in\{0,1\}}\big\{m+Wd+\sum_{m'\in A^m }\Pr(m\rightarrow m'|d)V(m')\big\} \label{eq:bellman_general} \end{align} where $\Pr(m\rightarrow m'|d)$ is the transition probability from state $m$ to $m'$ under action $d$, $\theta$ is the optimal value of the problem, $V(m)$ is the differential cost-to-go function, and $A^m$ is the state space of the metric $m$. Several numerical algorithms have been developed to solve (\ref{eq:bellman_general}), such as the value iteration algorithm. The latter consists of updating, at each iteration, the value function $V_t(\cdot)$ following the recurrence relation, for each state $m$: \begin{align}\label{eq:bellman_equation_time_t} \theta &+ V_{t+1}(m) \nonumber \\ &=\min_{d\in\{0,1\}}\big\{m+Wd+\sum_{m'\in A^m }\Pr(m\rightarrow m'|d)V_t(m')\big\} \end{align} given that $V_0(\cdot)=0$, and then obtaining $V(\cdot)$ by exploiting the fact that $\underset{t \rightarrow +\infty}{\text{lim}} V_t(m)=V(m)$.
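As an illustration, here is a minimal Python sketch of this value iteration in its relative form, specialized to the AoI chain (whose Bellman equation is given in the next subsection) and truncated at a maximum age $m_{\max}$ (a tractability assumption, since the true state space is unbounded):
\begin{verbatim}
# Relative value iteration for the one-dimensional AoI problem,
# truncated at age m_max; action "active" resets the age to 1 w.p. rho.
def relative_value_iteration(W, rho, m_max=200, iters=5000):
    V = [0.0] * (m_max + 1)          # V[j]: value at age j (j >= 1)
    for _ in range(iters):
        Q = [0.0] * (m_max + 1)
        for j in range(1, m_max + 1):
            nxt = min(j + 1, m_max)
            passive = j + V[nxt]
            active = j + W + rho * V[1] + (1 - rho) * V[nxt]
            Q[j] = min(passive, active)
        V = [q - Q[1] for q in Q]    # normalize (relative values)
    return V
\end{verbatim}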
The main shortcoming of this algorithm is that it requires high memory and computational complexity. To overcome this, rather than computing the value of $V(\cdot)$ for all states, we limit ourselves to studying the structure of the optimal scheduling policy, again exploiting the fact that $\underset{t \rightarrow +\infty}{\text{lim}} V_t(m)=V(m)$. In that way, we show that the optimal solution of Problem \eqref{eq:bellman_general} is a threshold-based policy: \begin{definition} A threshold policy is a policy $\phi \in \Phi$ for which there exists $n$ such that when the current state $m < n$, the prescribed action is $d^- \in \{0,1\}$, and when $ m \geq n$, the prescribed action is $d^+ \in \{0,1\}$, bearing in mind that $d^- \neq d^+$. \end{definition} We show that for both metrics considered in our paper, namely AoI and MAoII, the optimal policy of \eqref{eq:bellman_general} is a threshold-based policy. To that end, we first specify the state space $A^m$ for each metric, then provide the expression of the corresponding Bellman equation \eqref{eq:bellman_general}, and finally establish the desired result. \subsubsection{AoI} According to Section \ref{subsubsec:AoI_evolution}, $a(t)$ evolves in the state space: \begin{equation} A^a=\{a^{j}: j>0, a^{j}=j\} \end{equation} The corresponding Bellman equation is: \begin{align} \theta_a + V(a^{j})=\min\big\{&a^{j}+V(a^{j+1}); \nonumber \\ &a^{j}+W+\rho V(a^{1})+(1-\rho)V(a^{j+1})\big\} \end{align} The analysis for this metric has already been carried out in \cite{maatouk2020optimality}, where the authors demonstrate that the optimal policy of Problem \eqref{eq:individual_dual_problem} has a threshold structure and further prove that this policy is increasing with the age, i.e.: \begin{proposition} When $m=a$, the optimal solution of the problem in (\ref{eq:individual_dual_problem}) is an increasing threshold policy. Explicitly, there exists $n$ such that when the current state $a^{j} < a^{n}$, the prescribed action is the passive action, and when $ a^{j} \geq a^{n}$, the prescribed action is the active action. \end{proposition} One can see \cite{maatouk2020optimality} for the detailed proof. \subsubsection{MAoII} According to Section \ref{subsubsec:MAoII_evolution}, $b(t)$ evolves in the state space: \begin{equation} A^b=\{b^{j}: j>0, b^{j}=\sum_{k=0}^{j} k(1-p)(1-r)^{k-1} \pi^{j-k}\} \end{equation} Therefore, the expression of the Bellman equation at state $b^{j}$ is: \begin{align} \theta_b + V(b^{j})=\min\big\{&b^{j}+V(b^{j+1});\nonumber \\ &b^{j}+W+\rho V(b^{1})+(1-\rho)V(b^{j+1})\big\} \end{align} \begin{theorem}\label{theo:threshold_policy_2} When $m=b$, the optimal solution of the problem in (\ref{eq:individual_dual_problem}) is an increasing threshold policy. Explicitly, there exists $n$ such that when the current state $b^{j} < b^{n}$, the prescribed action is the passive action, and when $ b^{j} \geq b^{n}$, the prescribed action is the active action. \end{theorem} \begin{IEEEproof} The proof can be found in Appendix \ref{app:theo:threshold_policy_2}. \end{IEEEproof} \subsection{Indexability and Whittle's index expressions} In order to establish the indexability of the problem and find the Whittle's index expressions, we provide the steady-state form of the problem in (\ref{eq:individual_dual_problem}) under a threshold policy $n$. Explicitly: \begin{equation} \begin{aligned} & \underset{n\in \mathbb{N}^*}{\text{minimize}} & & \overline{m^{n}}+W\overline{d^n} \end{aligned} \label{thresholdobjective} \end{equation} where $\overline{m^{n}}$ is the average value of the penalty function with respect to the metric $m$, and $\overline{d^n}$ is the average active time under the threshold policy $n$.
Specifically: \begin{align} \overline{m^{n}}&=\lim_{T\to+\infty} \text{sup}\:\frac{1}{T}\mathbb{E}^{n}\Big(\sum_{t=0}^{T-1}m(t)|m(0),tp(n)\Big)\label{eq:average_age}\\ \overline{d^n}&=\lim_{T\to+\infty} \text{sup}\:\frac{1}{T}\mathbb{E}^{n}\Big(\sum_{t=0}^{T-1}d(t)|m(0),tp(n)\Big)\label{eq:average_active_time} \end{align} where $tp(n)$ denotes the threshold policy with threshold $n$. In order to compute $\overline{m^{n}}$ and $\overline{d^n}$, we derive the stationary distribution of the discrete-time Markov chain (DTMC) that represents the evolution of the metric under the threshold policy $n$. One can show that the steady-state distribution in question is the same for both metrics, AoI and MAoII. Specifically: \begin{proposition}\label{prop:stationary_distribution} For $m=a,b$ and a given threshold $n$, the DTMC admits $u^n(m^{j})$ as its stationary distribution: \begin{equation} u^n(m^{j})=\left\{ \begin{array}{ll} \frac{\rho }{n\rho +1-\rho } & \text{if} \ 1 \leq j \leq n \\ (1-\rho )^{j-n} \frac{\rho }{n\rho +1-\rho } & \text{if} \ j \geq n\\ \end{array} \right. \end{equation} \label{stationarydistribution} \end{proposition} \vspace{-10pt} \begin{IEEEproof} The proof can be found in Appendix \ref{app:prop:stationary_distribution}. \end{IEEEproof} By exploiting the above results, we can now derive a closed-form expression of the average cost of any threshold policy. \begin{proposition} For a given threshold $n$, the average cost of the threshold policy is $\overline{m^{n}}$: \begin{itemize} \item $m=a$: \begin{align} \overline{a^{n}}=&\frac{[(n-1)^2+(n-1)]\rho ^2+2\rho (n-1)}{2\rho ((n-1)\rho +1)}\nonumber\\&+\frac{2}{2\rho ((n-1)\rho+1)} \label{costgamma} \end{align} \item $m=b$: \begin{align} \overline{b^{n}}=&\frac{\rho }{n\rho +1-\rho }[\frac{n(N-1)}{Nr}-\frac{(1-Nr)^{n+2}}{(Nr)^2}\nonumber\\ &+\frac{(1-r)^{n+2}}{r^2}+\frac{(1-\rho )(1-Nr)^{n+2}}{Nr(1-(1-\rho )(1-r))}-\nonumber\\ &\frac{(1-\rho )(1-r)^{n+2}}{r(1-(1-\rho )(1-r))}+C] \end{align} \end{itemize} where $C=\frac{(1-Nr)^2}{(Nr)^2}-\frac{(1-r)^2}{r^2}+\frac{(N-1)(1-\rho )}{Nr\rho}$. \end{proposition} \begin{IEEEproof} By leveraging the results of Proposition \ref{prop:stationary_distribution} and using the expression of $m^{j}$ for $j>0$ in the definition of $\overline{m^{n}}$ given in \eqref{eq:average_age}, we obtain the desired results after algebraic manipulations. \end{IEEEproof} \begin{proposition} For any given threshold $n$, the average active time is: \begin{align} \overline{d^n}=&\frac{1}{n\rho +1-\rho } \end{align} \end{proposition} \begin{IEEEproof} Likewise, exploiting the results of Proposition \ref{prop:stationary_distribution} together with the expression \eqref{eq:average_active_time}, we obtain the desired result. \end{IEEEproof}
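As a quick sanity check, the closed form of $\overline{d^n}$ can be recovered numerically by summing the stationary mass of the active states $\{j \geq n\}$ from Proposition \ref{prop:stationary_distribution}; a minimal sketch (the truncation point and parameter values are arbitrary):
\begin{verbatim}
# Sum u^n(m^j) over the active states j >= n and compare with the
# closed form 1/(n*rho + 1 - rho).
def avg_active_time(n, rho, j_max=100000):
    u = rho / (n * rho + 1 - rho)
    return sum(u * (1 - rho) ** (j - n) for j in range(n, j_max + 1))

print(avg_active_time(n=5, rho=0.5))  # ~0.3333 = 1/(5*0.5 + 1 - 0.5)
\end{verbatim}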
To ensure the existence of the Whittle's indices, we first need to establish the indexability property for all users. To that end, we formalize indexability and the Whittle's index in the following definitions. Note that in the sequel, we reintroduce the user indices in order to differentiate between users. \begin{definition} Considering Problem~\eqref{eq:individual_dual_problem} for a given $W$ and a given user $i$, we define $D_i^m(W)$ as the set of states in which the optimal action (with respect to the optimal solution of Problem~\eqref{eq:individual_dual_problem} considering the metric $m$) is the passive one. In other words, $m_i^{n} \in D_i^m(W)$ if and only if the optimal action at state $m_i^{n}$ is the passive one. \end{definition} $D_i^m(W)$ is well defined for both metrics, as the optimal solution of Problem~\eqref{eq:individual_dual_problem} is a stationary policy, more precisely a threshold-based policy. \begin{definition}\label{def:Whitt_index} A class is indexable if the set of states in which the passive action is optimal increases with $W$, that is, $W' < W \Rightarrow D_i^m(W') \subseteq D_i^m(W)$. When the class is indexable, the Whittle's index in state $m_i^{n}$ is defined as: \begin{equation} W(m_i^{n})=\min \{W |m_i^{n} \in D_i^m(W)\} \end{equation} \end{definition} \begin{proposition} For each user $i$, the one-dimensional problem is indexable for both metrics. \end{proposition} \begin{IEEEproof} The proof rests on the fact that $\overline{d_i^n}$ decreases with $n$. See \cite{maatouk2020optimality} for a detailed proof. \end{IEEEproof} As the indexability property has been established in the above proposition, we can now assert the existence of the Whittle's indices. \begin{theorem}\label{theo:Whittle_index_expressions} For any user $i$ and state $m_i^{n}$, the Whittle's index is: \begin{itemize} \item $m=a$ \begin{equation} W_i(a_i^{n})=\frac{n(n-1)\rho_i}{2}+n \end{equation} \item $m=b$ \begin{align} W_i(b_i^{n})=&\frac{(1-r_i)^2 \rho_i}{r_i^2}-\frac{(1-N_ir_i)^2 \rho_i}{(N_ir_i)^2} \nonumber \\ &+(1-N_ir_i)^{n+2}(n \rho_i+1+\frac{\rho_i(1-N_i r_i)}{N_i r_i}) \nonumber \\ & \ \ \ \times [\frac{1-(1-\rho_i)(1+(N_i-1)r_i)}{N_i r_i (1-(1-\rho_i)(1-r_i))} ]\nonumber \\ &-(1-r_i)^{n+2}(n \rho_i+1+\frac{\rho_i(1-r_i)}{r_i}) \nonumber \\ & \ \ \ \times [\frac{\rho_i}{ r_i (1-(1-\rho_i)(1-r_i))}] \end{align} \end{itemize} \end{theorem} \begin{IEEEproof} The proof can be found in Appendix \ref{app:theo:Whittle_index_expressions}. \end{IEEEproof} Based on the above theorem, we provide in the following the Whittle's index scheduling policy for the original problem \eqref{eq:original_problem}. \begin{algorithm} \caption{Whittle's index scheduling policy}\label{euclid} \begin{algorithmic}[1] \State At each time slot $t$, compute the Whittle's indices of all users in the system using the expressions given in Theorem \ref{theo:Whittle_index_expressions}. \State Allocate the $M$ channels to the $M$ users having the highest Whittle's index values at time $t$. \end{algorithmic} \end{algorithm}\\ \section{Numerical Results}\label{sec:num_reslt} Our goal in this section is to compare the average empirical age of incorrect information under the developed Whittle's index policy, WIP-MAoII, to the baseline policy, denoted WIP-AoI, that considers the standard AoI metric. More precisely, we plot $C^{\phi,N_u}=\frac{1}{N_u}\lim_{T\to+\infty} \text{sup}\:\frac{1}{T}\mathbb{E}^{\phi}\Big(\sum_{t=0}^{T-1}\sum_{i=1}^{N_u}m_i^{emp,\phi}(t)|\boldsymbol{m}^{emp}(0),\phi\Big)$ for $\phi$ equal to WIP-MAoII and WIP-AoI, as a function of $N_u$, where $m^{emp,\phi}_i(\cdot)$ evolves as follows: \begin{itemize} \item If $m_i^{emp,\phi}(t)=0$, then $\hat{X}_i(t)=X_i(t)$. Therefore: \begin{equation} m_i^{emp,\phi}(t+1)=\left\{ \begin{array}{lll} 0 & w.p & p_i \\ 1 & w.p & 1-p_i \\ \end{array} \right. \end{equation} \item If $m_i^{emp,\phi}(t) \neq 0$, then $\hat{X}(t) \neq X(t)$. Therefore: \begin{itemize} \item If $\phi_i(t)=1$, \begin{equation} m_i^{emp,\phi}(t+1)=\left\{ \begin{array}{lll} 0 & w.p & \rho_i p_i \\ 1 & w.p & \rho_i (1-p_i) \\ m_i^{emp,\phi}(t)+1 & w.p & (1-\rho_i) \\ & & \times (1-r_i) \\ 0 & w.p & (1-\rho_i)r_i \\ \end{array} \right.
\end{equation} \item If $\phi_i(t)=0$, \begin{equation} m_i^{emp,\phi}(t+1)=\left\{ \begin{array}{lll} m_i^{emp,\phi}(t)+1 & w.p & (1-r_i) \\ 0 & w.p & r_i \\ \end{array} \right. \end{equation} \end{itemize} \end{itemize} We consider two scenarios for the network settings: \begin{enumerate} \item For the first scenario, we consider two classes with the respective parameters: \begin{itemize} \item Class 1: $\rho_1=0.7$, $N_1=8$, $r_1=0.1$\footnote{The value of $p_i$ can be directly deduced from Equation \ref{eq:relation_p_r}.}. \item Class 2: $\rho_2=0.5$, $N_2=2$, $r_2=0.4$. \end{itemize} \item For the second scenario, to shed light on the importance of taking into account the source parameters, namely $p_i$, $r_i$ and $N_i$, in the derivation of the Whittle's indices, we consider two classes that share the same channel statistics, specifically $\rho_1=\rho_2$, while they do not have the same source parameters. To that end, we consider the following case: \begin{itemize} \item Class 1: $\rho_1=0.4$, $N_1=10$, $r_1=0.05$ \item Class 2: $\rho_2=0.4$, $N_2=3$, $r_2=0.3$ \end{itemize} \end{enumerate} \begin{figure}[H] \centering \includegraphics[scale=0.6]{comparison_average_age_MAoII_AoI_0,7_0,5_8_2_0,1_0,4} \caption{Comparison between WIP-MAoII and WIP-AoI in terms of the empirical average age: different channel statistics} \label{fig:comp_aoi_maoii_diff} \end{figure} \begin{figure}[H] \centering \includegraphics[scale=0.6]{comparison_average_age_MAoII_AoI_0,4_0,4_10_3_0,05_0,3} \caption{Comparison between WIP-MAoII and WIP-AoI in terms of the empirical average age: same channel statistics} \label{fig:comp_aoi_maoii_same} \end{figure} One can observe that WIP-MAoII indeed gives better performance than WIP-AoI in terms of minimizing the average empirical age of incorrect information. As a consequence, our derivation of the Whittle's indices in the Markovian source framework turns out to be relevant for tracking the real state of remote sources.
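To make the evaluation procedure concrete, the following minimal Python sketch implements Algorithm 1 together with the empirical-age dynamics above. For brevity it uses the closed-form AoI index of Theorem \ref{theo:Whittle_index_expressions} (the MAoII index can be substituted), and all parameter values are hypothetical:
\begin{verbatim}
import random

def whittle_index_aoi(n, rho):
    # W_i(a_i^n) = n(n-1)rho_i/2 + n (Theorem 2, case m = a)
    return n * (n - 1) * rho / 2.0 + n

def step_empirical(m, ok, p, r, rng):
    # One step of m_i^{emp,phi}; `ok` is the channel success c_i(t)
    # of a scheduled user (False if not scheduled or transmission fails).
    if m == 0 or ok:
        return 0 if rng.random() < p else 1
    return m + 1 if rng.random() < 1 - r else 0

def simulate(rho, p, r, M, T=100000, seed=0):
    rng = random.Random(seed)
    N_u = len(rho)
    age = [1] * N_u    # AoI states driving the index policy
    emp = [0] * N_u    # empirical ages of incorrect information
    total = 0.0
    for _ in range(T):
        order = sorted(range(N_u), reverse=True,
                       key=lambda i: whittle_index_aoi(age[i], rho[i]))
        chosen = set(order[:M])
        for i in range(N_u):
            ok = i in chosen and rng.random() < rho[i]
            age[i] = 1 if ok else age[i] + 1
            emp[i] = step_empirical(emp[i], ok, p[i], r[i], rng)
            total += emp[i]
    return total / (T * N_u)

# Scenario 2 parameters (p_i deduced from p_i + (N_i-1) r_i = 1):
print(simulate(rho=[0.4, 0.4], p=[0.55, 0.4], r=[0.05, 0.3], M=1))
\end{verbatim}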
\section{Conclusion}\label{sec:concl} In this paper, we considered the problem of remote monitoring of multiple sources, where a central entity selects at each time slot a subset of the sources to send their updates, in such a way as to minimize the MAoII metric. Since the scheduler is unaware of the current states of the sources, we introduced a belief state at the monitor in order to predict the evolution of the states of the sources and to derive an estimate of the MAoII. We then developed an efficient scheduling policy based on the Whittle's index framework. Finally, we provided numerical results that highlight the performance of our policy.
\bibliographystyle{IEEEtran}
\section{Introduction and Background} Sound design is the process of using a synthesizer and audio effects to craft a desired output sound, typically by leveraging virtual studio technology (VST) on a computer. Often, the audio effects applied to the synthesizer play the biggest role in producing a desired sound. Sound design for the music industry is a very difficult task done by professionals with years of experience. Educational tools are limited and beginners are usually forced to learn via trial and error or from online resources created by others more experienced who typically also learned in a similar way. Prior work leveraging AI to program audio VSTs uses genetic algorithms [1; 2; 3], genetic programming [3], k-means clustering + tree-search [4], and deep convolutional neural networks [2; 5] to achieve this objective. There has also been research on using deep learning to model or apply audio effects directly [6; 7; 8; 9]. These systems typically suffer from one or more of the following problems: they are applied to toy VSTs with little practical use, they are incompatible with existing VSTs, their inference time is prohibitively long, or they are black-boxes with uninterpretable results. When using an AI assisted system, the user's sense of ownership over their work should be preserved. Our system is inspired by white-box automatic image post-processing systems [10] and collaborative production tools [11; 12] that can educate and augment a user rather than aiming to replace them. \section{System Overview} Our system iteratively nudges an input audio towards the same timbre of a desired target audio and provides interpretable intermediate steps. It uses, to the best of our knowledge, a novel approach consisting of an ensemble of models working together: a recurrent neural network (RNN) to select which effect to apply next and then a collection of convolutional neural networks (CNN), one per supported effect, to apply the correct effect parameters. An example sequence of spectrograms and steps output by our system is shown in Figure 1. We collect training data using five timbre-changing audio effects from Serum's effects rack: multi-band compression, distortion, equalizer (EQ), phaser, and hall reverb. We also use 12 different synthesizer presets split into three groups: \emph{basic shapes} (sine, triangle, saw, and square wave), \emph{advanced shapes} (four presets), and \emph{advanced modulating shapes} (four presets). Since our system focuses on reproducing a desired timbre, we represent input audio as power dB Mel spectrograms. Using a modified automated VST rendering tool [13], \textasciitilde120k one second long audio clips are generated for each synthesizer preset and are sampled from all possible combinations of the five supported effects. The CNN effect models take as input two spectrograms stacked together (two channels total): the target spectrogram and the input spectrogram. Their outputs vary depending on the effect they are modeling, but consist of some combination of binary, categorical, and continuous outputs. A Cartesian product is used for selecting current and target spectrograms to train on, thus resulting in \textasciitilde1.2M available training data points for each effect. The RNN model takes as input an arbitrary length sequence consisting of the CNN effect model spectrogram input and a sequence of one hot vectors of the same length representing the used effects. 
Its output is a 5-dimensional softmax layer indicating the probability of the next effect to be applied. More details about data collection, model architectures, and training can be found in supplemental Figures 2 and 3 and supplemental Table 2. \section{Evaluation and Discussion} \begin{table}[h] \caption{Mean errors and $\Delta$s for input and output audio from the \emph{basic shapes} preset group.} \centering \begin{tabular}{l} \toprule \multicolumn{1}{c}{} \\ \cmidrule(r){1-1} Metric \\ \midrule MSE \\ MAE \\ MFCC \\ LSD \\ \bottomrule \end{tabular} \begin{tabular}{lll} \toprule \multicolumn{3}{c}{Mean Error against Target Audio} \\ \cmidrule(r){1-3} Init. Audio & Final Audio & $\Delta$ \\ \midrule 0.055 & 0.012 & -0.043 \\ 0.172 & 0.074 & -0.098 \\ 157.15 & 70.06 & -87.09 \\ 16.16 & 7.62 & -8.54 \\ \bottomrule \end{tabular} \begin{tabular}{lllll} \toprule \multicolumn{5}{c}{Mean Error $\Delta$ per Step} \\ \cmidrule(r){1-5} 1 & 2 & 3 & 4 & 5 \\ \midrule -0.024 & -0.013 & -0.008 & -0.004 & -0.002 \\ -0.052 & -0.034 & -0.020 & -0.008 & -0.004 \\ -45.97 & -28.64 & -20.13 & -7.30 & -4.28 \\ -4.62 & -3.04 & -1.64 & -0.60 & -0.32 \\ \bottomrule \end{tabular} \end{table} The CNN effect models are evaluated individually against their parameter reconstruction ability and how closely their output matches the target audio. Audio similarity is measured via four different metrics: MAE and MSE between the two power dB spectrograms, and the mean Euclidean distance between the first 20 Mel frequency cepstral coefficients (MFCC) and the log-spectral distance (LSD) between the two power spectrograms. The RNN model is evaluated against its prediction accuracy for the next effect and the entire ensemble of models is evaluated against changes in audio similarity as steps are taken by the system. Evaluation results for our entire system are shown in Table 1 and additional results can be found in supplemental Tables 3, 4, 5, and 6. The results indicate that our system is consistently able to produce intermediate steps that bring the input audio significantly closer to the target audio.\footnote{Audio examples can be listened to at \url{https://bit.ly/serum_rnn}} Our system also provides near real-time, quantitative feedback about which effects are the most important. The user can pick and choose which intermediate steps they would like to use and can feed tweaked versions back into the system for additional fine-tuning or to learn more. We also noticed fun creative applications when our system produced unexpected results or was given significantly out of domain target audio. \section{Ethical Implications} Combining artificial intelligence with creativity carries with it various different ethical considerations, one of which is a potential future decrease in demand for professional audio producers due to an increasing ability to replace them with technology. We believe the best approach to this is to build systems that are collaborative and can augment people rather than replacing them entirely. While we believe our research is just the tip of the iceberg for building AI powered sound design tools, we can imagine a future where tools like ours might be able to find more efficient and simpler methods of creating sounds, thus educating students more effectively and democratizing sound design. We compare this to a similar situation that occurred when beat-matching technology was invented and added to DJ systems (to the disgruntlement of some DJing "purists"). 
However, this sometimes controversial technology democratized DJing and enabled a new generation of artists to focus on new creative applications, thus progressing the community as a whole. \section{Acknowledgements} We would like to thank Erwin Wu for providing additional computing resources. \section{References} \small [1] Tatar, K., Matthieu Macret and P. Pasquier. “Automatic Synthesizer Preset Generation with PresetGen.” \emph{Journal of New Music Research 45} (2016). [2] Yee-King, Matthew, Leon Fedden and Mark d'Inverno. “Automatic Programming of VST Sound Synthesizers Using Deep Networks and Other Techniques.” \emph{IEEE Transactions on Emerging Topics in Computational Intelligence 2} (2018). [3] Macret, Matthieu and P. Pasquier. “Automatic design of sound synthesizers as pure data patches using coevolutionary mixed-typed cartesian genetic programming.” \emph{GECCO '14} (2014). [4] Cáceres, Juan-Pablo. “Sound Design Learning for Frequency Modulation Synthesis Parameters.” (2007). [5] Barkan, Oren, David Tsiris, O. Katz and Noam Koenigstein. “InverSynth: Deep Estimation of Synthesizer Parameter Configurations From Audio Signals.” \emph{IEEE/ACM Transactions on Audio, Speech, and Language Processing 27} (2019). [6] Ramírez, M. A. M. and J. Reiss. “End-to-end equalization with convolutional neural networks.” \emph{International Conference on Digital Audio Effects} (2018). [7] Damskägg, Eero-Pekka, Lauri Juvela, Etienne Thuillier and V. Välimäki. “Deep Learning for Tube Amplifier Emulation.” \emph{ICASSP 2019 - 2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)} (2019). [8] Sheng, Di and Gyorgy Fazekas. “A Feature Learning Siamese Model for Intelligent Control of the Dynamic Range Compressor.” \emph{2019 International Joint Conference on Neural Networks (IJCNN)} (2019). [9] Engel, J., Lamtharn Hantrakul, Chenjie Gu and Adam Roberts. “DDSP: Differentiable Digital Signal Processing.” \emph{2020 International Conference on Learning Representations (ICLR)} (2020). [10] Hu, Yuanming \& He, Hao \& Xu, Chenxi \& Wang, Baoyuan \& Lin, Stephen. "Exposure: A White-Box Photo Post-Processing Framework." \emph{ACM Transactions on Graphics.} (2017). [11] Sommer, Nathan and A. Ralescu. “Developing a Machine Learning Approach to Controlling Musical Synthesizer Parameters in Real-Time Live Performance.” \emph{MAICS} (2014). [12] Thio, Vibert, and Chris Donahue. "Neural Loops." \emph{2019 NeurIPS Workshop on Machine Learning for Creativity and Design} (2019). [13] Fedden, Leon. "RenderMan". GitHub. \url{https://github.com/fedden/RenderMan} (accessed 2020). 
\pagebreak \normalsize \section{Supplemental Material} \subsection{Data Collection} \begin{table}[h] \caption{Parameters sampled from the Serum VST synthesizer.} \centering \begin{tabular}{llll} \toprule Effect & Parameter Name & Type & Sampled Values \\ \midrule Compressor & Low-band Compression & Continuous & [0.0, 1.0] \\ Compressor & Mid-band Compression & Continuous & [0.0, 1.0] \\ Compressor & High-band Compression & Continuous & [0.0, 1.0] \\ Distortion & Mode & Categorical & 12 classes \\ Distortion & Drive & Continuous & [0.3, 1.0] \\ Equalizer & High Frequency Cutoff & Continuous & [0.50, 0.95] \\ Equalizer & High Frequency Resonance & Continuous & [0.0, 1.0] \\ Equalizer & High Frequency Gain & Continuous & [0.0, 0.4] and [0.6, 1.0] \\ Phaser & LFO Depth & Continuous & [0.0, 1.0] \\ Phaser & Frequency & Continuous & [0.0, 1.0] \\ Phaser & Feedback & Continuous & [0.0, 1.0] \\ Hall Reverb & Mix & Continuous & [0.3, 0.7] \\ Hall Reverb & Low Frequency Cutoff & Continuous & [0.0, 1.0] \\ Hall Reverb & High Frequency Cutoff & Continuous & [0.0, 1.0] \\ \bottomrule \end{tabular} \end{table} Data collection and processing systems represent a significant portion of the software engineering effort required for this project. Table 2 summarizes which Serum synthesizer parameters are sampled for each supported effect. Parameter sampling value ranges are occasionally limited to lie within practical, everyday use regions. The \emph{basic shapes} preset group consists of the single oscillator sine, triangle, saw, and square wave default Serum presets. The \emph{advanced shapes} preset group consists of the dry (no effects) dual oscillator \texttt{"LD Power 5ths"}, \texttt{"SY Mtron Saw"}, \texttt{"SY Shot Dirt Stab"}, and \texttt{"SY Vintage Bells"} default Serum presets. The \emph{advanced modulating shapes} preset group consists of the dry dual oscillator \texttt{"LD Iheardulike5ths"}, \texttt{"LD Postmodern Talking"}, \texttt{"SQ Busy Lines"}, and \texttt{"SY Runtheharm"} default Serum presets. All of these presets also use intense time varying modulations. Audio samples are played and rendered for one second using a MIDI pitch of C4, maximum velocity, and a sampling rate of 44100 Hz. In the future we would like to include audio pitch, attack, decay, sustain, and release features directly into our system by modifying these values. Mel spectrograms are calculated using a hop length of 512 samples, a FFT window length of 4096, and 128 Mel filters. \subsection{Modeling} \begin{figure}[ht] \centering \includegraphics[width=0.85\linewidth]{images/cnn_2x.png} \caption{CNN effect model architecture (not to scale).} \end{figure} All five CNN effect models use ELU activations and a 50\% dropout rate for each of their fully connected (FC) layers. Their architecture is shown in Figure 2. They are trained using a batch size of 128, mean squared error loss for continuous parameters, binary cross-entropy loss for binary parameters, and categorical cross-entropy loss for categorical parameters. \begin{figure}[ht] \centering \includegraphics[width=0.72\linewidth]{images/rnn_cnn.png} \caption{CNN model architecture used in the RNN next effect prediction model (not to scale).} \end{figure} The RNN model consists of a bi-directional, 128-dimensional LSTM layer followed by a 128-dimensional FC layer and lastly the 5-dimensional softmax output layer. The FC layer uses ELU activation units and a 50\% dropout rate. 
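As a rough illustration (our own sketch, not the authors' code; the exact input shapes, the per-second frame count of roughly 87, and the optimizer wiring are assumptions based on the stated hyperparameters), the described next-effect model could be assembled in Keras as follows, with the time-distributed feature extractor of Figure 3, described next, passed in as a sub-model:
\begin{verbatim}
import tensorflow as tf
from tensorflow.keras import layers, models

N_EFFECTS = 5                # compressor, distortion, EQ, phaser, reverb

def build_next_effect_model(feature_cnn):
    # feature_cnn: the smaller CNN of Figure 3, mapping one two-channel
    # Mel spectrogram (128 Mel bins x ~87 frames x {target, input}) to a
    # 128-d feature vector.  Sequence length is left variable (None).
    specs = layers.Input(shape=(None, 128, 87, 2))
    used_fx = layers.Input(shape=(None, N_EFFECTS))  # one-hot used effects
    feats = layers.TimeDistributed(feature_cnn)(specs)
    x = layers.Concatenate()([feats, used_fx])
    x = layers.Bidirectional(layers.LSTM(128))(x)    # bi-directional LSTM
    x = layers.Dense(128, activation="elu")(x)       # FC layer, ELU units
    x = layers.Dropout(0.5)(x)                       # 50% dropout
    out = layers.Dense(N_EFFECTS, activation="softmax")(x)
    model = models.Model([specs, used_fx], out)
    model.compile(optimizer=tf.keras.optimizers.Adam(1e-3),
                  loss="categorical_crossentropy")   # batch size 32 at fit
    return model
\end{verbatim}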
Features are extracted from the Mel spectrogram sequence input using a smaller, time-distributed CNN with an architecture displayed in Figure 3. This sequence of extracted, 128-dimensional Mel spectrogram features is concatenated with the sequence of one hot vectors representing which effects have been used and is then fed as input to the LSTM layer. The RNN model is trained with a batch size of 32. All models are trained for 100 epochs with early stopping and a validation and test split of 0.10 and 0.05 respectively. The Adam optimizer is used with a learning rate of 0.001. \subsection{Evaluation} \begin{table}[h] \caption{CNN effect models mean error reduction for all three preset groups.} \centering \begin{tabular}{lllll} \toprule && \multicolumn{3}{c}{Mean Error $\Delta$ against Target Audio} \\ \cmidrule(r){3-5} Effect & Metric & \emph{Basic Shapes} & \emph{Adv. Shapes} & \emph{Adv. Mod. Shapes} \\ \midrule Compressor & MSE & -0.012 & -0.013 & -0.007 \\ Compressor & MAE & -0.050 & -0.049 & -0.030 \\ Compressor & MFCC & -53.19 & -54.66 & -37.42 \\ Compressor & LSD & -4.20 & -4.15 & -2.42 \\ \midrule Distortion & MSE & -0.036 & -0.019 & -0.037 \\ Distortion & MAE & -0.062 & -0.056 & -0.082 \\ Distortion & MFCC & -60.50 & -60.70 & -88.40 \\ Distortion & LSD & -5.30 & -5.11 & -7.18 \\ \midrule Equalizer & MSE & -0.004 & -0.009 & -0.005 \\ Equalizer & MAE & -0.018 & -0.038 & -0.019 \\ Equalizer & MFCC & -24.63 & -45.20 & -29.93 \\ Equalizer & LSD & -1.31 & -3.27 & -1.54 \\ \midrule Phaser* & MSE & 0.002 & 0.000 & 0.002 \\ Phaser* & MAE & 0.005 & -0.002 & 0.008 \\ Phaser* & MFCC & 1.23 & -5.98 & 1.96 \\ Phaser* & LSD & 0.64 & -0.11 & 0.99 \\ \midrule Hall Reverb & MSE & -0.016 & -0.005 & -0.007 \\ Hall Reverb & MAE & -0.064 & -0.029 & -0.033 \\ Hall Reverb & MFCC & -47.16 & -26.61 & -31.59 \\ Hall Reverb & LSD & -6.27 & -2.46 & -3.10 \\ \bottomrule \end{tabular} \end{table} * It's important to note that Mel spectrograms do not represent phase information well. As a result, the error metrics used are less representative of the phaser effect's error. A positive error $\Delta$ may occur even when the predicted audio sample clearly sounds much closer to the target audio sample when compared to the initial audio sample. We plan to include phase information in future iterations of our system. \bigskip \bigskip \begin{table}[h] \caption{RNN model next effect prediction accuracy for all three preset groups.} \centering \begin{tabular}{llll} \toprule & \multicolumn{3}{c}{Mean Next Effect Prediction Accuracy} \\ \cmidrule(r){2-4} Step & \emph{Basic Shapes} & \emph{Adv. Shapes} & \emph{Adv. Mod. Shapes} \\ \midrule 1 & 0.997 & 0.993 & 0.997 \\ 2 & 0.983 & 0.985 & 0.989 \\ 3 & 0.981 & 0.979 & 0.983 \\ 4 & 0.973 & 0.988 & 0.977 \\ 5 & 0.999 & 1.000 & 0.997 \\ \midrule All & 0.983 & 0.985 & 0.986 \\ \bottomrule \end{tabular} \end{table} \bigskip \bigskip \begin{table}[h] \caption{Mean errors and $\Delta$s for input and output audio from the \emph{advanced shapes} preset group.} \centering \begin{tabular}{l} \toprule \multicolumn{1}{c}{} \\ \cmidrule(r){1-1} Metric \\ \midrule MSE \\ MAE \\ MFCC \\ LSD \\ \bottomrule \end{tabular} \begin{tabular}{lll} \toprule \multicolumn{3}{c}{Mean Error against Target Audio} \\ \cmidrule(r){1-3} Init. 
Audio & Final Audio & $\Delta$ \\ \midrule 0.039 & 0.009 & -0.030 \\ 0.150 & 0.067 & -0.083 \\ 146.79 & 62.25 & -84.54 \\ 14.13 & 6.71 & -7.42 \\ \bottomrule \end{tabular} \begin{tabular}{lllll} \toprule \multicolumn{5}{c}{Mean Error $\Delta$ per Step} \\ \cmidrule(r){1-5} 1 & 2 & 3 & 4 & 5 \\ \midrule -0.017 & -0.010 & -0.007 & -0.003 & 0.000 \\ -0.042 & -0.030 & -0.022 & -0.009 & -0.001 \\ -43.45 & -30.36 & -21.87 & -9.00 & -0.40 \\ -3.91 & -2.66 & -1.83 & -0.70 & -0.05 \\ \bottomrule \end{tabular} \end{table} \bigskip \bigskip \begin{table}[h] \caption{Mean errors and $\Delta$s for input and output audio from the \emph{advanced mod. shapes} preset group.} \centering \begin{tabular}{l} \toprule \multicolumn{1}{c}{} \\ \cmidrule(r){1-1} Metric \\ \midrule MSE \\ MAE \\ MFCC \\ LSD \\ \bottomrule \end{tabular} \begin{tabular}{lll} \toprule \multicolumn{3}{c}{Mean Error against Target Audio} \\ \cmidrule(r){1-3} Init. Audio & Final Audio & $\Delta$ \\ \midrule 0.049 & 0.013 & -0.036 \\ 0.181 & 0.077 & -0.104 \\ 176.37 & 72.52 & -103.85 \\ 16.90 & 7.74 & -9.16 \\ \bottomrule \end{tabular} \begin{tabular}{lllll} \toprule \multicolumn{5}{c}{Mean Error $\Delta$ per Step} \\ \cmidrule(r){1-5} 1 & 2 & 3 & 4 & 5 \\ \midrule -0.022 & -0.010 & -0.008 & -0.000 & -0.002 \\ -0.070 & -0.026 & -0.018 & -0.004 & -0.004 \\ -68.97 & -24.92 & -18.50 & -4.26 & -3.49 \\ -6.17 & -2.22 & -1.49 & -0.33 & -0.32 \\ \bottomrule \end{tabular} \end{table} \end{document}
\section{Introduction} Zero-knowledge proof systems have established their importance in strengthening blockchain security through mathematical proofs. The applicability of zero-knowledge proofs in modern blockchains enhances trust among participants by providing verifiable proofs without revealing the data itself. For protocols such as Bulletproofs, the proof requires multiexponential prover time with linear verification time; the advantage of Bulletproofs is their short cryptographic proof size. SNARKs are another blockchain proof system, with fast verification owing to their short proof size and an initially required trusted setup that assures the verifier of the veracity of the prover. For this paper, the protocol chosen for establishing a correlation between zero-knowledge proofs and VDFs is zk-SNARKs, for the reason that the initial trusted setup required by SNARKs gives them an advantage over Bulletproofs and helps achieve a much lower verification time. SNARKs efficiently establish successful participant verification using Zokrates, a toolbox for deploying zero-knowledge proof verification on Ethereum. The definition of a probabilistic zero-knowledge digital proof extends to the domain of blockchain transaction data: the prover (P) can individually verify and validate the transaction data, providing cogent evidence to the verifier with probability at least $1-\epsilon$ (completeness) and at most $\epsilon$ (soundness).$^{[1][2]}$ We use this verifiable non-interactive probabilistic proof together with deterministic recursive delay functions to offer real-time applicability with a Nash equilibrium, providing security against arbitrary attacks by participants. In a proof system with a large number of susceptible processors running in parallel, the security of the system is contingent upon the inability of the adversary to compute the output of the function, under arbitrary circumstances, in fewer than $t$ sequential steps for a given verifiable delay $t$. VDFs are sequential, setting apart an honest prover who evaluates the input in $t$ steps. That is, even given a polynomial number of parallel processors, a verifiable delay function makes it impossible for an adversary attempting a clandestine attack in significantly fewer steps to distinguish the function output $y$ corresponding to an input $x$ from random data. VDFs are unique and verifiable. A VDF consists of a setup $Setup(\lambda, t) \rightarrow pp$ which, for a security parameter $\lambda$ and a delay $t$, produces a public parameter $pp$. This public parameter is required for the evaluation of an input $x$, $Eval(x, pp) \rightarrow (y, \pi)$, yielding the correct output $y$ corresponding to the input $x$ together with a proof $\pi$ required for the verification $Verify(pp, y, x, \pi)$. This successive order of the protocol makes it efficiently verifiable. A VDF also imposes a uniqueness constraint: for a given verifiable output $y$ on $x$, it is not possible to hold a second value different from $y$ in the preceding evaluation with respect to the same input $x$.$^{[3]}$ By design, the evaluation of the input $x$ runs in parallel time $\sigma(t)$ on $p(t)$ polynomial processors for the given delay $t$. The sequentiality $\sigma(t)$ of the system becomes vulnerable, security-wise, to parallel processors: the ideal sequentiality is achieved at $\sigma(t) = t - 1$, which can be taken as $\sigma(t) = t - \epsilon t$, where system security becomes more vulnerable as $\epsilon \rightarrow 1$.
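To fix notation, the following toy Python sketch (our illustration only, not a construction from the literature) spells out the $(Setup, Eval, Verify)$ interface just described, instantiated with the naive iterated-hash evaluation revisited below; note that, lacking a succinct proof, its verification is as slow as its evaluation, which is exactly the shortcoming discussed next:
\begin{verbatim}
import hashlib

def setup(lam, t):
    # Setup(lambda, t) -> pp.  Here pp only records the delay t; a real
    # construction would also derive group/hash descriptions from lambda.
    return {"t": t}

def eval_vdf(pp, x):
    # Eval(x, pp) -> (y, pi): t strictly sequential SHA-256 iterations;
    # step k needs the digest of step k-1, so the loop resists parallelism.
    y = x
    for _ in range(pp["t"]):
        y = hashlib.sha256(y).digest()
    return y, None  # the trivial construction carries no succinct proof

def verify(pp, y, x, pi):
    # Verify(pp, y, x, pi): with no proof to check, the only option is to
    # re-run Eval; verification as slow as evaluation makes this a weak VDF.
    return eval_vdf(pp, x)[0] == y
\end{verbatim}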
For achieving the sequentiality $\sigma(t)$ needed to build a VDF, there are several approaches, with significant as well as insignificant outcomes. Verifying the outcome through the proof points on the elliptic curve would violate the time bounds: $Eval$ must run in parallel time on $O(polylog(t))$ processors and, similarly, the restrictions on $t$ include that it must be subexponential in $\lambda$. It is possible to obtain a trivial VDF by iteratively hashing SHA-256; given the above constraints, this can only build a weak VDF. For a deterministic, iteratively sequential verifiable function, let $g: X \rightarrow X$ be the root function that is sequentially iterated in $t$ steps, and let $f : \mathbb{N} \times X \rightarrow X$; then, by definition, $f(n, x) = g^{(n)}(x)$, i.e., the root function iterated $n$ times. The mathematical definition of a continuous verifiable delay function is similar and more comprehensive, although it takes into consideration that $g$ is a verifiable delay function with polynomial processing; subsequently iterating $n$ such functions under the same constraints achieves a CVDF. For a single VDF, such sequentiality trivially achieves a VDF, but it does not meet the ideal constraint of exponentially faster verification than the prover, despite having soundness. The later sections of this paper elucidate this mechanism and expound our original contribution regarding the incrementally verifiable computation approach for zero-knowledge proof systems. \section{Literature Review} Verifiable delay functions, as discussed in the introduction of this paper, carry a uniqueness constraint in the protocol. This constraint was lacking in the work of Mahmoody$^{[4]}$, which showed that an iterated-hashing scheme, together with depth-robust directed acyclic graphs, achieves time-lock puzzles in the random oracle model, and which established the same architecture using the Fiat-Shamir non-interactive verification scheme. His model also exposed the gap that it is impossible to simultaneously use a generator function for solving a puzzle and for generating another solution. Mahmoody's publicly verifiable sequential proof for random oracles using time-lock puzzles is useful against denial-of-service attacks. Rivest, Shamir, and Wagner$^{[5]}$ assume sequentiality of this time-lock model based on repeated modular exponentiation; the same time-lock concept was applied to a coin-flipping scheme by Boneh and Naor$^{[6]}$. Cohen and Pietrzak$^{[7]}$ construct a more efficient binary tree with extra edges in place of the collision-resistant hashes and DAGs for random oracles. However, [7] could not resolve the problem of valid outputs for different polynomials: for a given solution $y$, their construction also proved valid for some $y'$, failing the uniqueness constraint. The previously discussed work of [4] yields a proof size linear in the delay $T$, i.e., $O(N)$, whereas [7] reduces it to $O(\log N)$ space. Meena Jagadeesan's$^{[8]}$ proof of sequentiality is based on the aforementioned authors' contributions: she combines [4]'s construction and [7]'s assumptions to bring together a fair possibility of implementing the coin-flipping scheme proposed by [6]. It takes into account the possibility of a shared random bit between two participants and provides each with a forced commitment in case one participant aborts, so as to complete the entire protocol, given that it is immune to second-preimage attacks.
She proposes various improvements on [4]'s conclusions and mitigates the research gaps in [7]'s design model and its assumptions. Lenstra and Wesolowski$^{[9]}$ propose the nearest thing to a proof of sequential work (PoSW) by chaining puzzles together into the sloth hash-chain function. They prove that a sloth can be verified with about a hundred times less computation than is required to sequentially evaluate it; their work came closest to a pseudo-VDF. However, the verification time of their construction still does not run in total $polylog(t)$, failing the asymptotic constraint of a VDF. Benjamin Wesolowski$^{[10]}$ later proposes more efficient verifiable functions, constructed over groups of unknown order, which result in smaller outputs that are single group elements, together with streamlined correctness verification. The author posits the potential of these for decentralized blockchain networks, random beacons, peer-to-peer trust thanks to verifiable proofs, and resource-efficient distributed applications. However, [9] and [10] do not provide any practical formulation of these posits and do no work in that regard. Our approach is to build a proof system that employs prover simulators and a verifier setup, and that provides a more secure and trustable system, with both on-chain and off-chain computation of the authenticated proof and efficiently verifiable delays. As shown by Jin-Yi Cai$^{[11]}$, the primitive sequential function was taken to be repeated squaring, a certain number of times, in a finite group of unknown order, giving iterative sequentiality. Similar to this approach was Burt Kaliski's$^{[12]}$ contribution, which superseded it by noting the utility of hash functions for this purpose: [12] proposed SHA-256 as the hash for achieving this, and [4] illustrated it. Under these conventions, given $t$ iterations of a VDF, successive squaring of this computation, together with constant-round proof-system heuristics (a variant of the Fiat-Shamir protocol), results in a continuous verifiable delay function (CVDF). Ephraim et al.$^{[13]}$ proposed this from the complexity perspective, as this model supersedes the traditional VDF. Since at any point of the iteration the current state is verifiable in polynomial time $O(polylog\ t)$, and the system does not need to be re-iterated up to the designated point, a CVDF is better at solving harder Nash equilibrium problem instances and can handle more complex computations, which for a simple VDF seems quixotic. Consequently, the challenges arising from more Byzantine problem instances are better solved by CVDFs, as stated by [13]; this research, however, is a computation of its bisimulation with zero-knowledge proofs for a single VDF. [4] suggests that such verifiable computation for SNARKs, achieving sequentiality $\sigma(t) = \frac{(1 - \epsilon)\cdot t}{(k+1)\cdot \log(t)}$, is nonfunctional. The author arrives at this dichotomy because such a computation is 100,000 times more expensive on a SNARG than on a VDF, making the approach highly quixotic under the specified constraints; for this reason, we have adopted the incrementally verifiable approach for achieving a $\sigma(t)$ that is asymptotically close to the numerator, i.e., $(1-\epsilon)\cdot t$, and is independent of $t$. Unlike [13], our approach is iterative sequentiality for zero-knowledge proof systems. Incrementally verifiable computation is the novel approach of our model in this research. Boneh$^{[14]}$ mentions the post-quantum security problems posed by any attack from an adversary with access to a quantum computer.
The author's illustration, which only theoretically quantifies the quotients in exponents in an attempt to accelerate prover time, together with the meaningful assimilation of MiMC with STARKs for quantum security, addresses some concerns for traditional VDFs. However, our SNARK setup uses a different approach for addressing some relevant security concerns of zero-knowledge proof applications on the blockchain, and, by combining these with the incrementally verifiable approach to computation, we arrive at our model with the aim of eliminating the research gaps. For an incremental proof of sequential work (IPoS), Döttling$^{[15]}$ proposes that provers individually compute a defined number of steps of the computation, to be carried on by subsequent provers from the last computed step, without curtailing any prover's resources. Döttling proposed to achieve this with a two-type construction, where for the first type he had the relatively unique mechanism of considering only a single processor and relying only on hash functions. Mahmoody and Smith$^{[16][18]}$ propose the construction of a VDF via black-box constructions that treat ideal hash functions as random oracles. Concurrently, Döttling$^{[17]}$ addresses inefficient prover time with tight verifiable delay functions (TVDFs), combining the result of a non-interactive argument with iterative hashing, while also stating that a black-box construction from random oracles can be successfully evaluated in $T + O(T^{\delta})$ for any constant $\delta < 1$. Prior to this work, the existence of such TVDFs in the random oracle model was dubious. [18] eliminates these distinguished research gaps for privately and publicly verifiable PoSW. Unlike [15], which applies a polylogarithmic factor to the proof size, our work has a subexponential constraint factor in each characteristic step of the sequential computation. The characteristic steps of each subsequent iteration enumerate the subexponential factor, summing up to the final subexponential running time of the SNARK compilation. By doing this, we qualify our SNARK-with-computable-delay approach for more intricate problem domains requiring more collective interoperability of proof systems and, as proposed by [13], for more Byzantine Nash equilibrium problem instances. Landerreche$^{[19]}$, unlike our correlation, proposes the basic idea of non-interactive cryptographic timestamping and some applications in decentralization; the paper revisits the impossibility of proving elapsed time in non-interactive timestamping in a UC framework and also limits the time window for any forged-timestamp attack by an adversary. Our correlation of non-interactive cryptography in SNARKs with elapsed time delays for decentralization aims at developing better and more secure proof systems. To establish circuit verifiability, the circuit must be adequately tested, so the correctness of timing can be inferred using delay tests$^{[20]}$; such delay tests are applied to the iterative argument of knowledge for arithmetic circuits in our work. Modeling an arithmetic circuit for a non-interactive proof using a zero-knowledge protocol amounts to a list of multiplication gates together with sets of linear consistency equations describing the corresponding inputs and outputs.$^{[21]}$ These gates are further represented with bilinear consistency equations on those inputs and outputs, such that the equations are satisfied by the arithmetic circuit. These arithmetic circuits are iteratively computed in subexponential iterable time.
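To make the gate-and-consistency-equation view concrete, here is a small rank-1-style illustration of ours (a simplification, not the exact formulation of [21]): each multiplication gate constrains three linear combinations of the witness vector, and the circuit is satisfied exactly when every bilinear equation holds.
\begin{verbatim}
def dot(coeffs, w):
    # Linear combination <coeffs, w> of the witness vector w.
    return sum(c * v for c, v in zip(coeffs, w))

def satisfies(constraints, w):
    # Each (a, b, c) encodes one multiplication gate as the bilinear
    # consistency equation <a, w> * <b, w> = <c, w>.
    return all(dot(a, w) * dot(b, w) == dot(c, w)
               for a, b, c in constraints)

# Toy circuit y = x * x over the witness layout w = (1, x, y):
gates = [((0, 1, 0), (0, 1, 0), (0, 0, 1))]
assert satisfies(gates, (1, 3, 9))        # valid witness accepted
assert not satisfies(gates, (1, 3, 10))   # broken witness rejected
\end{verbatim}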
Benedikt Bünz's$^{[22]}$ work is on DARK compilers for NP relations, with linear preprocessing, succinct communication, and logarithmic verification circuit complexity. Unlike his work, ours requires a trusted setup: their work emphasizes transparency, whereas we detail security for decentralization concerns; through our approach, by analysing our proof system, we establish a relatively different evaluation method and a verifiable delay. The formulation of the most optimal evaluation scheme for our model, in order to assimilate it with delayed verification for SNARKs, is the key motive behind this research. To summarize this section, none of the aforementioned research works achieve a much safer decentralised economy against adversaries mounting arbitrary attacks in cross-domain proof verification systems using a verifiable delay for traditional SNARKs. To the best of our knowledge, this is the first research work that achieves this. The next section expounds the research work and related proofs in detail. \section{Methodology} IVC modelling for SNARKs is a three-step protocol, as in traditional SNARKs. The non-interactive succinct proof begins with a precomputation instantiated by the verifier. The precomputation is a trusted setup used as a problem instance to be unravelled by the prover; it is a cryptographically encoded parameter scheme which is later used to verify the points of the quadratic polynomial on the elliptic curve for a multivariate proof. The reason a verifiable delay comes into existence for this model is our trusted setup: in a trusted setup there may be collusion among the parties, which engenders the need to mitigate the risks incumbent in a trusted setup. For a given cryptographic scheme of a prover P with knowledge of the solutions to a polynomial equation over a group of unknown order, say H, such that it mathematically convinces a verifier V (with an arbitrary computational challenge in a trusted setup $\lambda$, using a proof $\pi$) for some given input x over the set X corresponding to a unique output y of the polynomial equation over the set Y, we can say \begin{center} $\forall x \in X$ and $y \in Y$, $\exists\, \pi$ s.t. $Verify(\lambda,x,y,\pi) = accept$ \end{center} As discussed in Section 2, building a VDF model for SNARKs follows an incrementally iterative approach for the Zokrates evaluation scheme (PoSW). Before delineating it, we adopt the assumption that, in the computational model of a non-interactive ZKP for a language L, where P, V, S are Turing machines and V is a PPT (probabilistic polynomial time$^{[23]}$) Turing machine, \begin{center} $\forall j \in L$ \& $z \in \{0,1\}^{*}:\ View_{V}[P(j) \leftrightarrow V(j,z)]\ \exists\ S(j,z)$ \end{center} That is, for even a single interaction $View_{V}$ between P(j) and V(j,z), there exists an efficient simulator S that reproduces the result from the given input. \subsection{Model} This follows the iterative hashing approach for achieving PoSW. We use SHA-256 as the hashing algorithm. Evaluating a SNARK proof on a single processor involves a series of mathematical computations. SNARKs have many variants; we consider the QAP variant, in which arithmetic circuits, represented as directed acyclic graphs (DAGs), are simply a representation of the Zokrates proof. While evaluating a Zokrates proof, the protocol encodes it into a quadratic arithmetic program (QAP) over polynomials.
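As a gloss (ours, following the standard QAP formulation rather than anything specific to this construction): the per-gate constraints are interpolated into polynomials $A_k, B_k, C_k$, so that a witness $(a_k)$ is valid exactly when a fixed target polynomial $t(x)$, vanishing on the gate indices, divides
\[ p(x) \;=\; \Big(\sum_k a_k A_k(x)\Big)\cdot\Big(\sum_k a_k B_k(x)\Big) - \sum_k a_k C_k(x) \;=\; h(x)\cdot t(x) \quad \text{for some polynomial } h, \]
so the identity can only hold when the witness values satisfy every gate.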
The significance of the problem is that it holds only when it is instantiated with correct values, so that a prover can convince a verifier of its veracity when the equation turns out to be true. While a Turing machine does the computation, the rank-1 constraint system allows a connection between a proof and a property, using succinctness by random sampling to ensure an unbiased secret evaluation by the verifier. A homomorphic encoding scheme is part of this series of protocols for performing the final computation. Our IVC approach for evaluating a VDF for SNARKs follows the traditional steps of $Setup, Gen, Eval, Verify$. \subsubsection{Model Definition} A VDF $V_{IVC}\rightarrow \{Setup, Gen, Eval, Verify\}$ for SNARKs can, in our model, be encapsulated in the four stages \{TrustedSetup, RandGen, Eval, Verify\}. $TrustedSetup(\lambda) \rightarrow \{pk,vk\}$: for a given input circuit $C: F_{L} \times F_{R} \rightarrow F_{O}$ and output set $\{pk,vk\}$, the trusted setup generates a publicly verifiable structured reference string for a given linear relation $R_{l}$ over a secret parameter $\lambda$; here $F_{L}$ and $F_{R}$ are the functions responsible for evaluating the left and right parts of the circuit in the precomputation, resulting in $F_{O}$. The proving key pk and verifying key vk are later used by P and V in the proof generation and verification phases. The $V_{IVC}$ in this setup encodes an input domain X with an output Y, defined by a relation over the language of the Zokrates proof; in our model we use a polynomial equation for the proof, as an iterated polynomial permutation. $RandGen(G) \rightarrow R$: a generator G generates a random number R in a finite range, where R determines the number of further iterable steps of computation during evaluation. $Eval(Hash_{IVC}(P(pk,x,w)), T) \rightarrow (Er, y, \pi)$: the SNARK inputs of the proof evaluation are iteratively hashed for a time delay T; the input x, proving key pk, and witness w are the constituents of the prover function, which results in a proof $\pi$ and an output y. $V_{IVC}$'s Eval hashes (Er) these iteratively computed proofs, where the proof corresponding to each step is unique and sequential with respect to the next step. To see that iterative hashing is unique, suppose that $Hash_{IVC}$ over a function f admits a collision; the mathematical probability of such a case remains negligibly small. It is worth mentioning that a collision for a 256-bit hash function requires, on average, $2^{128}$ operations, and iteration produces a large cycle of average length $2^{128}$; this length is large enough to consider iterative hashing unique and the possibility of a collision improbable in practice. $V_{IVC}$ establishes this iterative hashing in subexponential time, where the deterministic number of steps of the delay T is a calculable, quantified result of some random number R, so that Eval runs in subexponential time. \begin{algorithm} \caption{Pre-computation in subexponential time}\label{alg:cap} \begin{algorithmic}\\ \begin{itemize} \item For some $R: RandGen(G) \rightarrow R$, the double-log function D computes $D(R) \rightarrow R_{a}$ from the random R, and $\alpha$ is set so that $2^{\alpha} = \log(R)^{R_{a}}$. \end{itemize} \While{$\alpha \neq 1 $}\\ \begin{itemize} \item Common input: encode the CRS into an SRS using non-public randomness; the trusted party generates $p_{k}$, $v_{k}$.
\item The prover computes a random permutation $\pi$ for input $x \in L$, $\pi \leftarrow (p_{k}, x, w)$. \item The prover sends this $Com(p_{k},x,w)$ to the verifier (with uniform randomness). \item The verifier uses this to compute $decom(\pi,v_{k})$; it outputs 1 if valid, otherwise it halts and outputs 0. \end{itemize} \EndWhile \end{algorithmic} \end{algorithm} For $c = com(x)$ with a random string r, we know$^{[27]}$ that $decom(c) = x$ over $r$. $Verify(pk, x,(y, \pi)) \rightarrow \{0,1\}^{*}$: a SNARK has a verifier machine that is Turing complete and uses the proving key pk, input x, and proof $\pi$ to output \{accept, reject\}. Verify for our $V_{IVC}$ model consists of both on-chain and off-chain computation; each step of the sequentially generated proof is verifiable on-chain. So, for T sequential steps, which are checked in $polylog(\lambda)$ queries, we have a deterministically calculable delay T. This deterministic delay is secure against an adversary with $polylog(\lambda)$ processors$^{[3],[17]}$: during the computation for evaluating a VDF, SHA-256 is parallelizable, which allows the adversary to try hundreds of combinations with a speed factor q, say q = 100, on cheap hardware, but repetition does not help much here. In our model, take the precomputation P with q > 0 and let $t = \lfloor P^{1/q}\rfloor$. Then, for a given group operation function G, such that \begin{center} $g_{i}^{P} \in G$ \ and \ $g(x) = \prod g_{i}^{P}(x)$, \end{center} then, \begin{equation} \label{eq1} g(x) = \prod g_{i}^{(t)^{q(q+1)/2}}(x) \; \in \; \text{G} \end{equation} \begin{center} $\forall\ i = 1,2, \dots, T$ \end{center} \begin{center} $\Rightarrow$ $g_{i}^{t} \cdot g_{i}^{t^{2}} \cdot g_{i}^{t^{3}} \cdots g_{i}^{t^{q}}(x)$ $\in$ G \end{center} For any individual $g_{i}^{t^{q}} \in G$ subexponential in T over the input x, we can say that $g_{i}^{t^{(n-3)}} \cdot g_{i}^{t^{(n-2)}} \cdot g_{i}^{t^{(n-1)}} \cdot g_{i}^{t^{(n)}}(x)$ is subexponential in T; so g(x) in Equation 1 is subexponential in T. All T sequential steps of the function are comparable to solving T challenges on a single Turing machine. The model's uniqueness is that each step of the sequential computation is verifiable, which makes it comparable to a CVDF model. So the machine P can prove that a certain state is the current state of the computation using a zero-knowledge proof. Being parallelizable, the expression can be computed in parallel by an adversary with a factor-q speedup over a normal sequential solver. But this type of attack violates the sequentiality property of a VDF, and, to a first-order approximation, the time taken to solve the challenges increases linearly in q. For practical values of q, this expression is on the order of $q \cdot 2^{-256}$. To mount this attack, the adversary must compute and solve all T challenges; otherwise it can gain only a factor-of-two speedup, with probability nearly negligible in $\lambda$. Hence, the scheme is secure for at most T challenges, after which new public parameters must be generated. The entropy of a random variable is the average amount of uncertainty inherent in the variable's possible outcomes. Our model uses a factor D, an entropy difference for entering the accept state, as the result of publicly generated randomness$^{[24]}$, so for an adversary computing in parallel time $\sigma(t)$ with at most $polylog(t)$ processors, the probability that it wins is nearly negl($\lambda$), as discussed above, irrespective of the probability that such a computation is possible without $\pi$ in the case of a VDF$^{[3]}$.
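To summarize the sequential structure of Eval in code, the following sketch (ours; \texttt{prove} abstracts the Zokrates-style prover $P(pk,x,w)$ and is not a real API, and witness handling is elided) chains the $T$ per-step proofs through $Hash_{IVC}$:
\begin{verbatim}
import hashlib

def eval_ivc(prove, pk, x, T):
    # T sequential steps: step k cannot start before step k-1 has
    # produced its proof, because its input is that proof's hash.
    proofs, cur = [], x
    for _ in range(T):
        y, pi = prove(pk, cur)               # one SNARK proof per step
        proofs.append((cur, y, pi))          # each step verifiable on-chain
        cur = hashlib.sha256(pi).digest()    # Hash_IVC chains the steps
    return proofs
\end{verbatim}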
Our model construction, which achieves proof of sequential work for a zero-knowledge proof system, eliminates the gap just described: only an adversary that can correctly verify $\pi$ on the elliptic curve and obtain the calculable output D, the result emitted after the verifier successfully validates the proof, can enter the accept state. \\ We can say that, for a valid D, \\ \begin{equation} Verify(pk,x,(y,\pi)) \rightarrow D \Rightarrow 1 \tag{Accept} \end{equation} Otherwise, \begin{equation} Verify(pk,x,(y,\pi)) \rightarrow Garbage \Rightarrow 0 \tag{Reject} \end{equation} \textbf{NP-Complete}: This model is said to be NP-complete over the round function \begin{equation} g^{(t)}(x) = g \circ g \circ \dots \circ g(x) \tag{t times} \end{equation} where $g^{(t)}(x)$ is said to be an iterated sequential function, so for $\lambda \in \mathbb{N}$, $g_{\lambda}: X \rightarrow X$, $\exists$ $(\epsilon,t)$ such that $t = polylog(\lambda)$. $V_{IVC} = \{TrustedSetup, RandGen, Eval, Verify\}$ is said to be in NP as it satisfies the following properties: \textbf{Completeness}: For a language l in the linear relation $R_{l}$, where $x \in l$ iff $\exists w$ s.t. $R_{l}(x,w) = 1$, the verification algorithm, with $polylog(t)$ queries, accepts only for $(x, y, \pi)$, which is unique. So, for SNARKs, $V_{IVC}$ satisfies \[ Pr \left[ \begin{array}{c|c} \begin{aligned} Verify(pk, x, (y, \pi)) \rightarrow D = 1 \end{aligned} & \begin{aligned} \{pk,vk\} \leftarrow TrustedSetup (\lambda) \\ R \leftarrow RandGen(G) \\ (Er,y,\pi) \leftarrow Eval(Hash_{IVC}(P(pk,x,w)), T) \end{aligned}\\ \end{array} \right] = 1 \] \textbf{Soundness}: The verification algorithm rejects an adversary A given $(x, y', \pi')$ for any $y \neq y'$ and $\pi \neq \pi'$ over k iterations. So, for SNARKs, $V_{IVC}$ satisfies \[ Pr \left[ \begin{array}{c|c} \begin{aligned} Verify(pk, x, (y', \pi')) \rightarrow D = 1 \\ f(t,x) \neq y \end{aligned} & \begin{aligned} R \leftarrow RandGen(G) \\ (Er,y',\pi') \leftarrow A(H(pk,x,w)) \end{aligned}\\ \end{array} \right] \leq negl(\lambda) \] \section{Conclusion and Future Work} Concluding with a holistic view of the analysed outcomes, our model is close to a VDF for the current proof system. This research aims to coalesce on-chain and off-chain computation to stimulate practical applications of $V_{IVC}$, currently only through SNARKs; our future work aims to widen the domain of VDF applications with further hashes such as the Pedersen hash, and with more high-level proof systems. In the current system, the monolithic process requires careful memory control, as running the circuit on a single machine quickly hits memory bounds; using a shared-memory cluster with distributed zero knowledge (DIZK)$^{[25]}$, enabling the computation to leverage aggregated cluster memory, is one of the possibilities along which our current work aims to extend. Furthermore, the Poseidon hash is applicable over the same circuit with 8 times fewer constraints required per message bit than the Pedersen hash$^{[26]}$, so integrating our existing model with the Poseidon hash is under consideration. We also plan to refine our model with more efficient evaluation mechanisms in future work. \section{References} \small [1]: Barak, Boaz. "Zero Knowledge Proofs", 250-262, https://files.boazbarak.org/crypto/lec\_14\_zero\_knowledge.pdf [2]: Blum, Manuel, et al. "Non-Interactive Zero-Knowledge", SIAM Journal on Computing 20.6 (1991): 1084-1118. [3]: Boneh, Dan, et al.
“Verifiable Delay Functions”, (2019) [4] : M. Mahmoody, T. Moran, and S. Vadhan. “Publicly verifiable proofs of sequential work.” In Proceedings of the 4th conference on Innovations in Theoretical Computer Science. ACM, 2013. [5]: Ronald L. Rivest, Adi Shamir, and David A. Wagner, “Time-lock puzzles and timed release crypto”, Tech. Report MIT/LCS/TR-684, MIT, February 1996. [6]: Dan Boneh and Moni Naor, Timed commitments, CRYPTO (Mihir Bellare, ed.), Lecture Notes in Computer Science, vol. 1880, Springer, 2000, pp. 236–254. [7]: Bram Cohen and Krzysztof Pietrzak. “Simple proofs of sequential work.” In Advances in Cryptology - EUROCRYPT 2018 - 37th Annual International Conference on the Theory and Applications of Cryptographic Techniques, Tel Aviv, Israel, April 29 - May 3, 2018 Proceedings, Part II, pages 451–467, 2018. [8]: Jagadeesan Meena et al. “Proof of Sequential Work,” 2018 [9]: A. K. Lenstra and B. Wesolowski. “A random zoo: sloth, unicorn, and trx.” IACR Cryptology ePrint Archive, 2015, 2015. [10]: Wesolowski B. (2019) “Efficient Verifiable Delay Functions.” In: Ishai Y., Rijmen V. (eds) Advances in Cryptology – EUROCRYPT 2019. EUROCRYPT 2019. Lecture Notes in Computer Science, vol 11478. Springer, Cham [11]: Jin-yi Cai, Richard J. Lipton, Robert Sedgewick, and Andrew Chi-Chih Yao. “Towards uncheatable benchmarks.” In 8th Structure in Complexity Theory Conference, pages 2–11. IEEE Computer Society, 1993. [12]: Burt Kaliski. Pkcs \#5: Password-based cryptography specification version 2.0, 2000. [13]: Ephraim N., Freitag C., Komargodski I., Pass R. (2020) “Continuous Verifiable Delay Functions.” In: Canteaut A., Ishai Y. (eds) Advances in Cryptology – EUROCRYPT 2020. EUROCRYPT 2020. Lecture Notes in Computer Science, vol 12107. Springer, Cham. https://doi.org/10.1007/978-3-030-45727-3\_5 [14]: Boneh, Dan, Benedikt Bünz, and Ben Fisch. "A Survey of Two Verifiable Delay Functions." IACR Cryptol. ePrint Arch. 2018 (2018): 712. [15]: Döttling, Nico, Russell WF Lai, and Giulio Malavolta. "Incremental Proofs of Sequential Work." [16]: Mahmoody, Mohammad, Caleb Smith, and David J. Wu. "A Note on the (Im) possibility of Verifiable Delay Functions in the Random Oracle Model." IACR Cryptol. ePrint Arch. 2019 (2019): 663. [17]: Döttling, Nico, et al. "Tight Verifiable Delay Functions." IACR Cryptol. ePrint Arch. 2019 (2019): 659. [18]: Mahmoody, Mohammad, Caleb Smith, and David J. Wu. "Can Verifiable Delay Functions be Based on Random Oracles?." ICALP, 2020. [19]: Landerreche, Esteban, Marc Stevens, and Christian Schaffner. "Non-interactive cryptographic timestamping based on verifiable delay functions." International Conference on Financial Cryptography and Data Security. Springer, Cham, 2020. [20]: Ke, Wuudiann, and Premachandran R. Menon. "Synthesis of delay-verifiable combinational circuits." IEEE Transactions on Computers 44.2 (1995): 213-222. [21]: Bootle J., Cerulli A., Chaidos P., Groth J., Petit C. (2016) “Efficient Zero-Knowledge Arguments for Arithmetic Circuits in the Discrete Log Setting.” In: Fischlin M., Coron JS. (eds) Advances in Cryptology – EUROCRYPT 2016. EUROCRYPT 2016. Lecture Notes in Computer Science, vol 9666. Springer, Berlin, Heidelberg [22]: Bünz, Benedikt, Ben Fisch, and Alan Szepieniec. "Transparent SNARKs from DARK Compilers." [23]: Datta A., Derek A., Mitchell J.C., Shmatikov V., Turuani M. (2005) “Probabilistic Polynomial-Time Semantics for a Protocol Security Logic.” In: Caires L., Italiano G.F., Monteiro L., Palamidessi C., Yung M. 
(eds) Automata, Languages and Programming. ICALP 2005. Lecture Notes in Computer Science, vol 3580. Springer, Berlin, Heidelberg. https://doi.org/10.1007/11523468\_2 [24]: J. Bonneau, J. Clark, and S. Goldfeder. “On bitcoin as a public randomness source.”, https://eprint.iacr.org/2015/1015.pdf [25]: H. Wu, W. Zheng, A. Chiesa, R. A. Popa, and I. Stoica. “DIZK: A distributed zero knowledge proof system.” In 27th USENIX Security Symposium, USENIX Security 2018, Baltimore, MD, USA, August 15--17, 2018., pages 675--692, 2018. [26]: Grassi, L., Khovratovich, D., Roy, A., Rechberger, C., Schofnegger, M., “Poseidon: A new hash function for zero-knowledge proof systems.” In: USENIX (2020). [27]: Lindell, Y., “How to simulate it - a tutorial on the simulation proof technique.” In: Tutorials on the Foundations of Cryptography, pp. 277–346 (2017) \end{document}
\section{Introduction} A record is a basic data structure that provides a flexible way to aggregate data and which is present in several programming and specification languages. Record polymorphism has been studied using approaches based on subtyping~\cite{CardelliW85,Cardelli90}, kinded quantification~\cite{Ohori95,OhoriB88} and, most commonly, on the mechanism of row variables~\cite{Wand87,Jategaonkar93,Remy92,Wand89}, among others. Row variables range over finite sets of field types, which are constructed by extension starting from the empty row $\{||\}$. With the aim of developing a sound polymorphic programming language that supported labelled records and variants, while providing efficient compilation, Ohori~\cite{Ohori95} followed an approach based on the notion of a kind. In Ohori's system, variables ranging over record types are annotated with a specification that represents the fields the record is expected to contain. This refines ML-type quantification to what is called kinded quantification, of the form $\forall \vtype :: \gkind.\ttype$, where type variable $\vtype$ is constrained to range only over the set of types denoted by kind $\gkind$. A kind $\gkind$ is either the universal kind $\ukind$, denoting the set of all types, or a record kind of the form $\kind{\glab_1 : \mtype_1, \dots, \glab_n : \mtype_n}$, denoting the set of all record types that contain fields $\glab_1, \dots, \glab_n$, with types $\mtype_1, \dots, \mtype_n$, respectively. The type inference algorithm in~\cite{Ohori95} provides a sound extension of ML's let-polymorphism~\cite{DamasM82}, which allows for a polymorphic treatment of record-based operations such as field selection and modification, but with the limitation of lacking support for extensible records. This limitation is often accepted in practical implementations of languages with record types, as a trade-off for efficiency, or due to the difficulty in guaranteeing the correctness of types for more flexible operations. Nevertheless, record types, and in particular polymorphic record types, are most relevant not only in the context of ML-style programming languages, but also in areas where data aggregation and manipulation are key features. The style of record polymorphism developed by Ohori was recently explored in the context of event processing, through the development of a domain specific higher-order functional language for events~\cite{AlvesFR20}, with a typing system that was both a restriction and an extension of Ohori's polymorphic record calculus. Although this domain specific language proved to be adequate to deal with the notion of generic events, with the potential of providing a formal semantics to Complex Event Processing (CEP) systems, the lack of support for extensibility was once more a limitation, since the ability to extend a record with a new field or remove an existing field from a record is often useful in the context of CEP. In this paper we address this limitation and develop a polymorphic record calculus with extensible records, that is, records that can have new fields added to them, or preexisting fields removed from them. To that end, we refine the notion of record kind, such that a kind is of the form $$\rkind{\glab_1 : \mtype_1, \dots, \glab_n : \mtype_n}{\glab'_1 : \mtype_1', \dots, \glab'_m : \mtype'_m}$$ and denotes the set of all record types that contain the fields before $\mid\!\mid$ and do not contain the fields after $\mid\!\mid$.
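For instance (an illustration of ours, not an example taken from the formal development below), the kind $\rkind{name : string}{age : int}$ denotes the set of all record types that contain a field $name$ of type $string$ and do not contain a field $age$ of type $int$; a record whose type is constrained by this kind can therefore safely have $name$ selected or removed, and can safely be extended with $age$.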
This system allows us to represent polymorphic versions of various types of record operations, such as field selection and modification, just as before, but is also powerful enough to represent field extension (the operation that adds a new field to a record) and field removal (the operation that removes a preexisting field from a record). The notion of record types is also extended to accommodate the notion of extensible record types. We refine the notion of kinded restrictions, using the refined notion of record kind, so that we can impose conditions on the extension and removal operations: a record can only be extended with a field that it does not already contain, and one can only remove existing fields from records. We extend Ohori's calculus with two new operations on records: $\cnt{\gterm}{\glab}$ removes field $\glab$ from term $\gterm$, provided that $\gterm$ is a record with that field, and $\ext{\gterm_1}{\glab}{\gterm_2}$ extends term $\gterm_1$ with a field labelled $\glab$ with value $\gterm_2$, provided that $\gterm_1$ is a record not containing $\glab$. These restrictions are imposed by the type system. We present a sound and complete ML-style type inference algorithm for our extended calculus. The main contributions of this paper are: \begin{itemize} \item The design definition of an ML-style record calculus with operations for field extension and field removal. \item An ML-style type system for this calculus based on the notion of extensible types. \item A sound and complete type inference algorithm, that extends the ML-style record calculus in~\cite{Ohori95}. \end{itemize} \paragraph{Overview} In Section~\ref{sec:calculus} we define our ML-style record calculus with extensible records. In Section~\ref{sec:ta} we define a type system for our calculus and in Section~\ref{sec:ti} we present a type inference algorithm, which is proved to be sound and complete. We discuss related work in Section~\ref{sec:rw} and we finally conclude and discuss further work in Section~\ref{sec:conc}. \section{An ML-style Record Calculus with Extensible Records} \label{sec:calculus} In this section we introduce an ML-style record calculus with extensible records. Our set of terms follows the one used by Ohori in~\cite{Ohori95}, except for the exclusion of variants and the addition of two new terms: one for adding a new field to a record, and another for removing a preexisting field from a record. We assume some familiarity with the $\lambda$-calculus (see~\cite{Barendregt85} for a detailed reference). \subsection{Terms} We start by formally defining the set of terms. In the following, let $k$ range over a countable set of constants, $x,y,z,\dots$ range over a countable set of variables and $\glab,\glab_1,\dots$ range over a countable set $\slab$ of labels. Additionally, we assume a finite set of base types $\mathbb{B}$, ranged over by $\btype$. \begin{definition} The set of terms is given by the following grammar: % \[ \begin{array}{lcl} \gterm & ::= & \vterm \mid \cterm^{\btype} \mid \abs{\vterm}{\gterm} \mid \gterm \gterm \mid \letin{\vterm}{\gterm}{\gterm} \\ & & \mid\ \{\glab = \gterm, \dots, \glab = \gterm\} \mid \gterm.\glab \mid \modif{\gterm}{\glab}{\gterm} \mid \cnt{\gterm}{\glab} \mid \ext{\gterm}{\glab}{\gterm} \end{array} \] \end{definition} \subsection{Types and Kinds} We now define the set of types and kinds. Following Damas and Milner's presentation of ML~\cite{DamasM82}, we divide the set of types into monotypes (ranged over by $\mtype$) and polytypes (ranged over by $\ttype$).
\subsection{Types and Kinds} We now define the set of types and kinds. Following Damas and Milner's presentation of ML~\cite{DamasM82}, we divide the set of types into monotypes (ranged over by $\mtype$) and polytypes (ranged over by $\ttype$). Monotypes can be base types (represented by $\btype$, which is obtained from a given set of base types), extensible types (ranged over by $\xtype$), and arrow types (of the form $\atype{\mtype}{\mtype}$). Polytypes can be monotypes, or quantified types of the form $\forall \vtype::\gkind.\ttype$, where $\gkind$ is a kind restriction and $\vtype$ is a type variable quantified over the set of types denoted by $\gkind$. Extensible types can be type variables (represented by $\vtype$, which is obtained from a given countably infinite set of type variables), record types (of the form $\{\glab : \mtype, \dots, \glab : \mtype\}$, where $\glab$ is obtained from a given set of labels), type extensions (of the form $\extype{\xtype}{\glab}{\mtype}$), or type contractions (of the form $\cntype{\xtype}{\glab}{\mtype}$). A type extension $\extype{\xtype}{\glab}{\mtype}$ is the type of records of type $\xtype$ that are extended with a new field with label $\glab$ and type $\mtype$, and a type contraction $\cntype{\xtype}{\glab}{\mtype}$ is the type of records of type $\xtype$ from which a preexisting field with label $\glab$ and type $\mtype$ has been removed. \begin{definition}The set of types $\ttype$ and kinds $\gkind$ are specified by the following grammar: % \begin{align*} \ttype & ::= \mtype \mid \forall \vtype::\gkind.\ttype \\ \mtype & ::= \btype \mid \xtype \mid \atype{\mtype}{\mtype} \\ \xtype & ::= \vtype \mid \{\glab:\mtype, \dots, \glab:\mtype\} \mid \extype{\xtype}{\glab}{\mtype} \mid \cntype{\xtype}{\glab}{\mtype}\\ \gkind & ::= \ukind \mid \rkind{\glab : \ttype, \dots, \glab : \ttype}{\glab : \ttype, \dots, \glab : \ttype} \end{align*} % \end{definition}
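This grammar, too, has a direct concrete reading. The following OCaml sketch is again our own; in particular, modelling field sets as association lists is an assumption made for readability.
\begin{verbatim}
(* A sketch of monotypes and kinds as OCaml ASTs (our own naming). *)
type label = string

type ty =
  | Base of string                  (* b *)
  | TyVar of string                 (* t, also the base case of extensible types *)
  | Arrow of ty * ty                (* t1 -> t2 *)
  | RecordTy of (label * ty) list   (* {l1 : t1, ..., ln : tn} *)
  | ExtendTy of ty * label * ty     (* type extension *)
  | ContractTy of ty * label * ty   (* type contraction *)

type kind =
  | Universal                       (* U *)
  | RecordKind of (label * ty) list * (label * ty) list
      (* fields the record must have || fields it must not have *)
\end{verbatim}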
All empty records are typed with the same type: the empty record type $\{\}$. Note that a type variable kinded with the universal kind can be instantiated with any type, while a type variable kinded with the empty kind restriction $\rkind{}{}$ can only be instantiated with a record type. Also, note that extensible types are defined recursively and can either have type variables or record types as base cases. We call the type that appears as the base case of an extensible type the base type of the extensible type (or just the base type, if the context makes this clear). Also, if $\xtype$ is an extensible type, then $\xbase{\xtype}$ is its base type. Labels that appear in types and kinds must always be pairwise distinct and each label can only be assigned one type during its existence. Also, the order in which labels occur is insignificant. Finally, if two extensible types have the same base type and the same kind restrictions, we will consider them equal. This will be made precise in Section~\ref{sec:ti}, when we introduce a type reduction mechanism for extensible types. \begin{example} \label{ex:equalityofexttype} The following extensible types are equal: \begin{align} \cntype{\extype{\vtype}{\glab_1}{\mtype_1}}{\glab_2}{\mtype_2} & \equiv \extype{\cntype{\vtype}{\glab_2}{\mtype_2}}{\glab_1}{\mtype_1} \\ \cntype{\cntype{\extype{\vtype}{\glab_1}{\mtype_1}}{\glab_2}{\mtype_2}}{\glab_1}{\mtype_1} & \equiv \cntype{\cntype{\extype{\vtype}{\glab_1}{\mtype_1}}{\glab_1}{\mtype_1}}{\glab_2}{\mtype_2} \end{align} \begin{itemize} \item[(1)] Clearly, despite the order of their type extensions and contractions, both types have the same base type and tell us that records with type $\vtype$ must not have the field $\{\glab_1 : \mtype_1\}$ and must have the field $\{\glab_2 : \mtype_2\}$. \item[(2)] Here, we know that both types are equal because, despite the order of their type extensions and contractions, both have the same base type and tell us that records with type $\vtype$ must have the field $\{\glab_2 : \mtype_2\}$ and must not have the field $\{\glab_1 : \mtype_1\}$. \end{itemize} The two following extensible types are not equal: \begin{align} \cntype{\extype{\vtype_1}{\glab_1}{\mtype_1}}{\glab_2}{\mtype_2} & \not\equiv \extype{\cntype{\vtype_2}{\glab_1}{\mtype_1}}{\glab_2}{\mtype_2} \\ \extype{\extype{\cntype{\vtype_1}{\glab_1}{\mtype_1}}{\glab_2}{\mtype_2}}{\glab_1}{\mtype_1} & \not\equiv \extype{\cntype{\extype{\vtype_2}{\glab_1}{\mtype_1}}{\glab_1}{\mtype_1}}{\glab_2}{\mtype_2} \end{align} \begin{itemize} \item[(3)] Clearly, these two types are not equal, and not just because they have syntactically different base types, but because the type on the left tells us that $\vtype_1$ must not have the field $\{\glab_1 : \mtype_1\}$ and must have the field $\{\glab_2 : \mtype_2\}$, while the type on the right tells us that $\vtype_2$ must have the field $\{\glab_1 : \mtype_1\}$ and must not have the field $\{\glab_2 : \mtype_2\}$. \item[(4)] Here, we also have two types that are not equal, because they must have two syntactically different base types. The reason is that the type on the left tells us that its base type must have the field $\{\glab_1 : \mtype_1\}$ and the type on the right tells us that its base type must not have that same field. \end{itemize} \end{example} Note that we can only ignore the order of type extensions and contractions with \emph{different labels}, since swapping a type extension and a type contraction over the same label will render the type ill-formed, as can be seen in Example~\ref{ex:equalityofexttype}. This property is enforced by the kinding rules in Definition~\ref{def:kindrules}. \begin{definition} The set of free type variables of a type $\ttype$ or a kind $\gkind$ is denoted by $\ftv{\ttype}$ and $\ftv{\gkind}$, respectively. For second-order types, it is defined as $\ftv{\forall \vtype :: \gkind.\ttype} = \ftv{\gkind} \cup (\ftv{\ttype} \setminus \{\vtype\})$. \textit{FTV} for other types and kinds are defined as expected. \end{definition} We say that a type $\ttype$ is closed if $\ftv{\ttype} = \emptyset$. We assume that all bound type variables are distinct and different from any free type variables, and that this property is preserved by substitution through $\alpha$-equivalence. The type construct $\forall \vtype :: \gkind.\ttype$ binds the type variable $\vtype$ in $\ttype$, but not in $\gkind$. \begin{definition} We write $\{\vtype_1 :: \gkind_1, \dots, \vtype_n :: \gkind_n\}$ for the kind assignment that binds $\vtype_i$ to $\gkind_i$, $(1 \leq i \leq n)$ and $\emptyset$ for the empty kind assignment. We will also write $\kenv \{\vtype :: \gkind\}$ for $\kenv \cup \{\vtype :: \gkind\}$ provided that $\kenv$ is well formed, $\vtype \not\in \dom{\kenv}$, and $\ftv{\gkind} \subseteq \dom{\kenv}$. Note that $\kenv \{\vtype_1 :: \gkind_1, \dots, \vtype_n :: \gkind_n\}$ implies that $\vtype_i \not\in \ftv{\gkind_j}$ for any $1 \leq j < i \leq n$. \end{definition} Any type variables that appear in $\kenv$ must also be properly kinded by $\kenv$ itself. \begin{definition}\label{def:wfkindassign} A kind assignment $\kenv$ is well formed if for all $\vtype \in \dom{\kenv}, \ftv{\kenv(\vtype)} \subseteq \dom{\kenv}$, where $\dom{f}$ denotes the domain of a function $f$.
\end{definition} From now on, we assume that every kind assignment is well formed. \begin{definition}\label{def:wftuk} A type $\ttype$ is well formed under a kind assignment $\kenv$ if $\ftv{\ttype} \subseteq \dom{\kenv}$. \end{definition} Definition~\ref{def:wftuk} is naturally extended to other syntactic constructs containing types except substitutions (see Definition~\ref{def:wfsubs}). \subsection{Kind Restrictions} \begin{definition}\label{def:kindrules} Type $\mtype$ has kind restriction $\gkind$, which we denote as $\kenv \Vdash \mtype :: \gkind$, if it can be derived from the following kinding rules: % \begin{align*} i) \ \kenv & \Vdash \mtype :: \ukind \ \text{for any} \ \mtype \ \text{well formed under} \ \kenv \\ ii) \ \kenv & \Vdash \{l^{l}_1 : \mtype^{l}_1, \dots, l^{l}_n : \mtype^{l}_n, \dots\} :: \rkind{l^{l}_1 : \mtype^{l}_1, \dots, l^{l}_n : \mtype^{l}_n}{l^{r}_1 : \mtype^{r}_1, \dots, l^{r}_m : \mtype^{r}_m} \\ & \qquad \text{if} \ \{l^{l}_1, \dots, l^{l}_n, \dots\} \cap \{l^{r}_1, \dots, l^{r}_m\} = \emptyset, \\ & \qquad \quad \text{both }\{l^{l}_1 : \mtype^{l}_1, \dots, l^{l}_n : \mtype^{l}_n, \dots\} \ \ \text{and} \ \mtype^{r}_i \ (1 \leq i \leq m) \ \text{are well formed under} \ \kenv \\ iii) \ \kenv & \Vdash \vtype :: \rkind{l^{l}_1 : \mtype^{l}_1, \dots, l^{l}_n : \mtype^{l}_n}{l^{r}_1 : \mtype^{r}_1, \dots, l^{r}_m : \mtype^{r}_m} \\ & \qquad \text{if} \ \kenv(\vtype) = \rkind{l^{l}_1 : \mtype^{l}_1, \dots, l^{l}_n : \mtype^{l}_n, \dots}{l^{r}_1 : \mtype^{r}_1, \dots, l^{r}_m : \mtype^{r}_m, \dots} \\ iv) \ \kenv & \Vdash \extype{\xtype}{\glab}{\mtype} :: \rkind{l^{l}_1 : \mtype^{l}_1, \dots, l^{l}_n : \mtype^{l}_n, [\glab : \mtype]}{l^{r}_1 : \mtype^{r}_1, \dots, l^{r}_m : \mtype^{r}_m} \\ & \qquad \text{if} \ \kenv \Vdash \xtype :: \rkind{l^{l}_1 : \mtype^{l}_1, \dots, l^{l}_n : \mtype^{l}_n}{l^{r}_1 : \mtype^{r}_1, \dots, l^{r}_m : \mtype^{r}_m, \glab : \mtype} \\ v) \ \kenv & \Vdash \cntype{\xtype}{\glab}{\mtype} :: \rkind{l^{l}_1 : \mtype^{l}_1, \dots, l^{l}_n : \mtype^{l}_n}{l^{r}_1 : \mtype^{r}_1, \dots, l^{r}_m : \mtype^{r}_m, [\glab : \mtype]} \\ & \qquad \text{if} \ \kenv \Vdash \xtype :: \rkind{l^{l}_1 : \mtype^{l}_1, \dots, l^{l}_n : \mtype^{l}_n, \glab : \mtype}{l^{r}_1 : \mtype^{r}_1, \dots, l^{r}_m : \mtype^{r}_m} \end{align*} where $[\glab : \mtype]$ means that the inclusion of $\glab : \mtype$ in its respective kind is optional. \end{definition} Note that if $\kenv \Vdash \mtype :: \gkind$, then both $\gkind$ and $\mtype$ are well formed under $\kenv$. \begin{example} Consider the type $\mtype = \cntype{\extype{\{\glab_1 : \mtype_1\}}{\glab_2}{\mtype_2}}{\glab_1}{\mtype_1}$ and the empty kind assignment $\kenv = \emptyset$. These are some possible derivations of kind restrictions for $\mtype$ and its subtypes: \begin{align*} & \emptyset \Vdash \{\glab_1 : \mtype_1\} :: \rkind{}{} \\ & \emptyset \Vdash \{\glab_1 : \mtype_1\} :: \rkind{\glab_1 : \mtype_1}{} \\ & \emptyset \Vdash \extype{\{\glab_1 : \mtype_1\}}{\glab_2}{\mtype_2} :: \rkind{\glab_2 : \mtype_2}{} \\ & \emptyset \Vdash \extype{\{\glab_1 : \mtype_1\}}{\glab_2}{\mtype_2} :: \rkind{}{\glab_3 : \mtype_3} \\ & \emptyset \Vdash \cntype{\extype{\{\glab_1 : \mtype_1\}}{\glab_2}{\mtype_2}}{\glab_1}{\mtype_1} :: \rkind{\glab_2 : \mtype_2}{\glab_1 : \mtype_1} \end{align*} Note that we cannot derive any kind restriction for $\mtype$ where $\glab_1 : \mtype_1$ appears on the left of the $\mid\!\mid$. \end{example}
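As an executable reading of kinding rule \textit{ii)}, the following OCaml sketch (our own, building on the AST sketches above) checks whether a record type satisfies a record kind; comparing types structurally is an assumption made here for simplicity.
\begin{verbatim}
(* Sketch of rule ii): does record type {fields} have kind
   <<present || absent>>?  Type equality is structural. *)
let record_has_kind
    (fields  : (label * ty) list)
    (present : (label * ty) list)
    (absent  : (label * ty) list) : bool =
  (* every required field occurs in the record, with the same type *)
  List.for_all (fun (l, t) -> List.mem (l, t) fields) present
  (* and no forbidden label occurs in the record *)
  && List.for_all (fun (l, _) -> not (List.mem_assoc l fields)) absent
\end{verbatim}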
\begin{proposition}\label{prop:kindrules} The kinding rules ensure the following two properties: \begin{enumerate} \item If $\kenv \Vdash \mtype :: \rkind{\glab : \mtype', \dots}{\dots}$, then $\kenv \not\Vdash \mtype :: \rkind{\dots}{\glab : \mtype', \dots}$; \item If $\kenv \Vdash \mtype :: \rkind{\dots}{\glab : \mtype', \dots}$, then $\kenv \not\Vdash \mtype :: \rkind{\glab : \mtype', \dots}{\dots}$. \end{enumerate} \end{proposition} \subsection{Kinded Substitutions} \begin{definition}\label{def:typesubs} A type substitution is a function from a finite set of type variables to types. We write $[\ttype_1/\vtype_1, \dots, \ttype_n/\vtype_n]$ for the substitution that maps each $\vtype_i$ to $\ttype_i$. A substitution $S$ is extended to the set of all type variables by letting $S(\vtype) = \vtype$ for all $\vtype \not\in \dom{S}$, where $\dom{f}$ denotes the domain of a function $f$, and is extended uniquely to polytypes, record types, function types and extensible types as follows: \begin{align*} S(\forall\vtype :: \gkind.\ttype) & = \forall\vtype :: S(\gkind).S(\ttype) \\ S(\{\glab_1 : \mtype_1, \dots, \glab_n : \mtype_n\}) & = \{\glab_1 : S(\mtype_1), \dots, \glab_n : S(\mtype_n)\} \\ S(\atype{\mtype_1}{\mtype_2}) & = \atype{S(\mtype_1)}{S(\mtype_2)} \\ S(\cntype{\xtype}{\glab}{\mtype}) & = \cntype{S(\xtype)}{\glab}{S(\mtype)} \\ S(\extype{\xtype}{\glab}{\mtype}) & = \extype{S(\xtype)}{\glab}{S(\mtype)} \end{align*} \end{definition} \begin{definition}\label{def:wfsubs} A substitution $S$ is well formed under a kind assignment $\kenv$ if for any $\vtype \in \dom{S}$, $S(\vtype)$ is well formed under $\kenv$. \end{definition} Since all type variables are kinded by a kind assignment, the conventional notion of substitution (see Definition~\ref{def:typesubs}) must be refined by also taking kind constraints into consideration. \begin{definition} A kinded substitution is a pair $(\kenv, S)$ of a kind assignment $\kenv$ and a substitution $S$ that is well formed under $\kenv$. A kinded substitution $(\kenv, S)$ is ground if $\kenv = \emptyset$. We will write $S$ for a ground kinded substitution $(\emptyset, S)$. \end{definition} \begin{definition}\label{def:ksuskenv} A kinded substitution $(\kenv_1, S)$ respects a kind assignment $\kenv_2$ if for any $\vtype \in \dom{\kenv_2}$, $\kenv_1 \Vdash S(\vtype) :: S(\kenv_2(\vtype))$. \end{definition} This notion specifies the condition under which a substitution can be applied, i.e., if $(\kenv_1, S)$ respects $\kenv$ then it can be applied to a type $\ttype$ kinded by $\kenv$, yielding a type $S(\ttype)$ kinded by $\kenv_1$.
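Operationally, applying a substitution is a straightforward traversal of the type AST. The OCaml sketch below follows Definition~\ref{def:typesubs} and reuses the \texttt{ty} representation sketched earlier; representing a substitution as an association list is our own assumption.
\begin{verbatim}
(* Sketch: applying a substitution, given as an association list from
   type variables to types, to a monotype (Definition typesubs). *)
let rec apply_subst (s : (string * ty) list) (t : ty) : ty =
  match t with
  | Base _ -> t
  | TyVar a -> (try List.assoc a s with Not_found -> t)
  | Arrow (t1, t2) -> Arrow (apply_subst s t1, apply_subst s t2)
  | RecordTy fields ->
      RecordTy (List.map (fun (l, t') -> (l, apply_subst s t')) fields)
  | ExtendTy (x, l, t') -> ExtendTy (apply_subst s x, l, apply_subst s t')
  | ContractTy (x, l, t') -> ContractTy (apply_subst s x, l, apply_subst s t')
\end{verbatim}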
\begin{lemma}\label{lem:kindsubs} If $\kenv \Vdash \mtype :: \gkind$ and a kinded substitution $(\kenv_1, S)$ respects $\kenv$, then $\kenv_1 \Vdash S(\mtype) :: S(\gkind)$. \end{lemma} \begin{corollary}\label{cor:kindsubs} If $(\kenv_1, S_1)$ respects $\kenv$ and $(\kenv_2, S_2)$ respects $\kenv_1$, then $(\kenv_2, S_2 \circ S_1)$ respects $\kenv$. \end{corollary} \begin{proof} Since $(\kenv_1, S_1)$ respects $\kenv$, we know that $\forall\vtype \in \dom{\kenv}, \kenv_1 \Vdash S_1(\vtype) :: S_1(\kenv(\vtype))$. Since $(\kenv_2, S_2)$ respects $\kenv_1$, Lemma~\ref{lem:kindsubs} yields $\kenv_2 \Vdash S_2(S_1(\vtype)) :: S_2(S_1(\kenv(\vtype)))$, that is, $\kenv_2 \Vdash S_2 \circ S_1(\vtype) :: S_2 \circ S_1(\kenv(\vtype))$. Therefore, $\forall \vtype \in \dom{\kenv}, \kenv_2 \Vdash S_2 \circ S_1(\vtype) :: S_2 \circ S_1(\kenv(\vtype))$ and $(\kenv_2, S_2 \circ S_1)$ respects $\kenv$. \end{proof} A kind assignment is a constraint on possible substitutions of type variables. Furthermore, since types may depend on type variables other than their own free type variables, we need to take kind assignments into consideration and extend the notion of free type variables of a type. \begin{definition}\label{def:eftv} For a type $\ttype$ well formed under $\kenv$, the set of essentially-free type variables of $\ttype$ under $\kenv$, denoted $\eftv{\kenv}{\ttype}$, is the smallest set satisfying: \begin{enumerate} \item [$(i)$] $\ftv{\ttype} \subseteq \eftv{\kenv}{\ttype}$. \item [$(ii)$] If $\vtype \in \eftv{\kenv}{\ttype}$, then $\ftv{\kenv(\vtype)} \subseteq \eftv{\kenv}{\ttype}$. \end{enumerate} % \end{definition} \section{Type System} \label{sec:ta} Before going into the typing rules, we first need to define generic instances of polytypes and closures of monotypes. To define the closure of a type, we first need to define what a type assignment is. \begin{definition} We write $\{\vterm_1 : \ttype_1, \dots, \vterm_n : \ttype_n\}$ for the type assignment that binds $\vterm_i$ to $\ttype_i$, $(1 \leq i \leq n)$ and $\emptyset$ for the empty type assignment. We will also write $\tenv \{\vterm : \ttype\}$ for $\tenv \cup \{\vterm : \ttype\}$ provided that $\vterm \not\in \dom{\tenv}$. \end{definition} \begin{definition} We say that a type assignment $\tenv$ is well formed under a kind assignment $\kenv$, if $\forall \vterm \in \dom{\tenv}$, $\tenv(\vterm)$ is well formed under $\kenv$. \end{definition} \begin{definition}\label{def:typecls} Let $\tenv$ and $\mtype$ be well formed under $\kenv$. The closure of $\mtype$ under $\tenv$, $\kenv$, denoted by $\cls{\kenv}{\tenv}{\mtype}$, is a pair $(\kenv', \forall \vtype_1 :: \gkind_1 \cdots \forall \vtype_n :: \gkind_n.\mtype)$ such that $\kenv' \{\vtype_1 :: \gkind_1, \dots, \vtype_n :: \gkind_n \} = \kenv$ and $\{\vtype_1, \dots, \vtype_n\} = \eftv{\kenv}{\mtype} \setminus \eftv{\kenv}{\tenv}$. \end{definition} \begin{definition}\label{def:typeinst} Let $\ttype_1$ be a polytype well formed under $\kenv$. We say that $\ttype_2$ is a generic instance of $\ttype_1$ under $\kenv$, written $\kenv \Vdash \ttype_1 \geq \ttype_2$, if $\ttype_1 = \forall \vtype^{1}_{1} :: \gkind^{1}_{1} \cdots \vtype^{1}_{n} :: \gkind^{1}_{n}.\mtype_1$, $\ttype_2 = \forall \vtype^{2}_{1} :: \gkind^{2}_{1} \cdots \vtype^{2}_{m} :: \gkind^{2}_{m}.\mtype_2$, and there is a substitution $S$ such that $\dom{S} = \{\vtype^{1}_{1}, \dots, \vtype^{1}_{n}\}$, $(\kenv \{\vtype^{2}_{1} :: \gkind^{2}_{1}, \dots, \vtype^{2}_{m} :: \gkind^{2}_{m}\}, S)$ respects $\kenv \{\vtype^{1}_{1} :: \gkind^{1}_{1}, \dots, \vtype^{1}_{n} :: \gkind^{1}_{n}\}$ and $\mtype_2 = S(\mtype_1)$. \end{definition}
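For instance (a small example of our own), $\emptyset \Vdash \forall \vtype :: \rkind{\glab : \btype}{}.\ \atype{\vtype}{\btype} \geq \atype{\{\glab : \btype\}}{\btype}$, taking $S = [\{\glab : \btype\}/\vtype]$; the instantiation is admissible because $\emptyset \Vdash \{\glab : \btype\} :: \rkind{\glab : \btype}{}$ holds by kinding rule \textit{ii)}.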
\begin{lemma}\label{lem:transinst} If $\kenv \Vdash \ttype_1 \geq \ttype_2$ and $\kenv \Vdash \ttype_2 \geq \ttype_3$, then $\kenv \Vdash \ttype_1 \geq \ttype_3$. \end{lemma} \begin{proof} Without loss of generality, let us assume that $\ttype_1 = \forall \vtype^{1}_{1} :: \gkind^{1}_{1} \cdots \vtype^{1}_{n} :: \gkind^{1}_{n}.\mtype_1$, $\ttype_2 = \forall \vtype^{2}_{1} :: \gkind^{2}_{1} \cdots \vtype^{2}_{m} :: \gkind^{2}_{m}.\mtype_2$, and $\ttype_3 = \forall \vtype^{3}_{1} :: \gkind^{3}_{1} \cdots \vtype^{3}_{k} :: \gkind^{3}_{k}.\mtype_3$. Since $\kenv \Vdash \ttype_1 \geq \ttype_2$, we know that there exists a substitution $S_1$ such that $\dom{S_1} = \{\vtype^1_{1}, \dots, \vtype^1_{n}\}$, $(\kenv\{\vtype^2_{1} :: \gkind^2_{1}, \dots, \vtype^2_{m} :: \gkind^2_{m}\}, S_1)$ respects $\kenv\{\vtype^1_{1} :: \gkind^1_{1}, \dots, \vtype^1_{n} :: \gkind^1_{n}\}$ and $\mtype_2 = S_1(\mtype_1)$. Since $\kenv \Vdash \ttype_2 \geq \ttype_3$, we know that there exists a substitution $S_2$ such that $\dom{S_2} = \{\vtype^2_{1}, \dots, \vtype^2_{m}\}$, $(\kenv\{\vtype^3_{1} :: \gkind^3_{1}, \dots, \vtype^3_{k} :: \gkind^3_{k}\}, S_2)$ respects $\kenv\{\vtype^2_{1} :: \gkind^2_{1}, \dots, \vtype^2_{m} :: \gkind^2_{m}\}$ and $\mtype_3 = S_2(\mtype_2)$. To show that $\kenv \Vdash \ttype_1 \geq \ttype_3$, we just need to find a substitution $S_3$ such that $\dom{S_3} = \{\vtype^1_{1}, \dots, \vtype^1_{n}\}$, $(\kenv\{\vtype^3_{1} :: \gkind^3_{1}, \dots, \vtype^3_{k} :: \gkind^3_{k}\}, S_3)$ respects $\kenv\{\vtype^1_{1} :: \gkind^1_{1}, \dots, \vtype^1_{n} :: \gkind^1_{n}\}$ and $\mtype_3 = S_3(\mtype_1)$. If we choose $S_3 = S_2 \circ S_1$, then $\dom{S_3} = \dom{S_1} = \{\vtype^1_{1}, \dots, \vtype^1_{n}\}$; by Corollary~\ref{cor:kindsubs}, $(\kenv\{\vtype^3_{1} :: \gkind^3_{1}, \dots, \vtype^3_{k} :: \gkind^3_{k}\}, S_2 \circ S_1)$ respects $\kenv\{\vtype^1_{1} :: \gkind^1_{1}, \dots, \vtype^1_{n} :: \gkind^1_{n}\}$; and $\mtype_3 = S_3(\mtype_1) = S_2 \circ S_1(\mtype_1) = S_2(S_1(\mtype_1)) = S_2(\mtype_2)$. \end{proof} The type assignment system is given in Figure~\ref{fig:typesystem}. We use $\kenv,\tenv \vdash \gterm: \sigma$ to denote that term $\gterm$ has type $\sigma$ given the type and kind assignments $\tenv$ and $\kenv$, respectively. \begin{figure} \begin{prooftree} \AxiomC{$\tenv \ \text{is well formed under} \ \kenv$} \AxiomC{$\kenv \Vdash \tenv(\vterm) \geq \mtype$} \RightLabel{(Var)} \BinaryInfC{$\kenv, \tenv \vdash \vterm : \mtype$} \end{prooftree} % \begin{prooftree} \AxiomC{$\tenv \ \text{is well formed under} \ \kenv$} \RightLabel{(Const)} \UnaryInfC{$\kenv, \tenv \vdash \cterm^\btype : \btype$} \end{prooftree} % \begin{prooftree} \AxiomC{$\kenv, \tenv \{\vterm : \mtype_1\} \vdash \gterm : \mtype_2$} \RightLabel{(Abs)} \UnaryInfC{$\kenv, \tenv \vdash \abs{\vterm}{\gterm} : \atype{\mtype_1}{\mtype_2}$} \end{prooftree} % \begin{prooftree} \AxiomC{$\kenv, \tenv \vdash \gterm_1 : \atype{\mtype_1}{\mtype_2}$} \AxiomC{$\kenv, \tenv \vdash \gterm_2 : \mtype_1$} \RightLabel{(App)} \BinaryInfC{$\kenv, \tenv \vdash \app{\gterm_1}{\gterm_2} : \mtype_2$} \end{prooftree} % \begin{prooftree} \AxiomC{$\kenv, \tenv \vdash \gterm_1 : \ttype$} \AxiomC{$\kenv, \tenv \{\vterm : \ttype\} \vdash \gterm_2 : \mtype$} \RightLabel{(Let)} \BinaryInfC{$\kenv, \tenv \vdash \letin{\vterm}{\gterm_1}{\gterm_2} : \mtype$} \end{prooftree} % \begin{prooftree} \AxiomC{$\kenv, \tenv \vdash \gterm_i : \mtype_i \ (1 \le i \le n)$} \RightLabel{(Rec)} \UnaryInfC{$\kenv, \tenv \vdash \{l_1 = \gterm_1, \dots, l_n = \gterm_n\} : \{l_1 : \mtype_1, \dots, l_n : \mtype_n\}$} \end{prooftree} % \begin{prooftree} \AxiomC{$\kenv, \tenv \vdash \gterm : \mtype_1$} \AxiomC{$\kenv \Vdash \mtype_1 :: \rkind{\glab : \mtype_2}{}$} \RightLabel{(Sel)} \BinaryInfC{$\kenv, \tenv \vdash \sel{\gterm}{\glab} : \mtype_2$} \end{prooftree} % \begin{prooftree} \AxiomC{$\kenv, \tenv \vdash \gterm_1 : \mtype_1$} \AxiomC{$\kenv, \tenv \vdash \gterm_2 : \mtype_2$} \AxiomC{$\kenv \Vdash \mtype_1 :: \rkind{\glab : \mtype_2}{}$} \RightLabel{(Modif)} \TrinaryInfC{$\kenv, \tenv \vdash \modif{\gterm_1}{\glab}{\gterm_2} : \mtype_1$} \end{prooftree} % \begin{prooftree} \AxiomC{$\kenv, \tenv \vdash \gterm : \mtype$}
\AxiomC{$\cls{\kenv}{\tenv}{\mtype} = (\kenv', \ttype)$} \RightLabel{(Gen)} \BinaryInfC{$\kenv', \tenv \vdash \gterm : \ttype$} \end{prooftree} % \begin{prooftree} \AxiomC{$\kenv, \tenv \vdash \gterm : \mtype_1$} \AxiomC{$\kenv \Vdash \mtype_1 :: \rkind{\glab : \mtype_2}{}$} \RightLabel{(Contr)} \BinaryInfC{$\kenv, \tenv \vdash \cnt{\gterm}{\glab} : \cntype{\mtype_1}{\glab}{\mtype_2}$} \end{prooftree} % \begin{prooftree} \AxiomC{$\kenv, \tenv \vdash \gterm_1 : \mtype_1$} \AxiomC{$\kenv, \tenv \vdash \gterm_2 : \mtype_2$} \AxiomC{$\kenv \Vdash \mtype_1 :: \rkind{}{\glab : \mtype_2}$} \AxiomC{$\xbase{\mtype_1} \not\in \ftv{\mtype_2}$} \RightLabel{(Ext)} \QuaternaryInfC{$\kenv, \tenv \vdash \ext{\gterm_1}{\glab}{\gterm_2} : \extype{\mtype_1}{\glab}{\mtype_2}$} \end{prooftree} \caption{Typing rules for ML-style record calculus with extensible records} \label{fig:typesystem} \end{figure} \begin{example} Let $\kenv = \{\vtype_1 :: \rkind{}{\glab : \vtype_2}, \vtype_2 :: \ukind\}$ and $\tenv = \{\vterm : \vtype_1, y : \vtype_2\}$. Then we can construct the following type derivation for $\sel{\ext{\vterm}{\glab}{y}}{\glab}$: \begin{prooftree} \AxiomC{$\kenv \Vdash \tenv(\vterm) \geq \vtype_1$} \RightLabel{(Var)} \UnaryInfC{$\kenv, \tenv \vdash \vterm : \vtype_1$} \AxiomC{$\kenv \Vdash \tenv(y) \geq \vtype_2$} \RightLabel{(Var)} \UnaryInfC{$\kenv, \tenv \vdash y : \vtype_2$} \AxiomC{$\kenv \Vdash \vtype_1 :: \rkind{}{\glab : \vtype_2}$} \AxiomC{$\vtype_1 \not\in \ftv{\vtype_2}$} \RightLabel{(Ext)} \LeftLabel{$\Delta =$} \QuaternaryInfC{$\kenv, \tenv \vdash \ext{\vterm}{\glab}{y} : \extype{\vtype_1}{\glab}{\vtype_2}$} \end{prooftree} \begin{prooftree} \AxiomC{$\Delta$} \AxiomC{$\kenv \Vdash \extype{\vtype_1}{\glab}{\vtype_2} :: \rkind{\glab : \vtype_2}{}$} \RightLabel{(Sel)} \BinaryInfC{$\kenv, \tenv \vdash \sel{\ext{\vterm}{\glab}{y}}{\glab} : \vtype_2$} \end{prooftree} \end{example} The following lemma allows us to strengthen the type assignment. \begin{lemma}\label{lem:eqinst} If $\kenv, \tenv \{\vterm : \ttype_1\} \vdash \gterm : \mtype$ and $\kenv \Vdash \ttype_2 \geq \ttype_1$, then $\kenv, \tenv \{\vterm : \ttype_2\} \vdash \gterm : \mtype$. \end{lemma} The following lemma shows that typings are closed under kind-respecting kinded substitutions. \begin{lemma}\label{lem:closedksubs} If $\kenv_1, \tenv \vdash \gterm : \ttype$ and $(\kenv_2, S)$ respects $\kenv_1$, then $\kenv_2, S(\tenv) \vdash \gterm : S(\ttype)$. \end{lemma} \section{Type Inference} \label{sec:ti} If we add a field to a record and then immediately remove it, we end up with a record with the same set of fields and the same kind restrictions, but with a different type. The same thing happens if we remove a preexisting field from a record and then immediately add it again. This means that extensible types can have different forms, but represent the same set of record types and records. This induces a set of identities on extensible types that can be interpreted as rewrite rules and used to define a type reduction system. \begin{definition} \label{def:rewriterules}Let $\xtype$ be well formed under some kind assignment $\kenv$. 
The rewrite rules for the type reduction system depend on the form of $\xtype$ and are the following: \begin{align*} i) & \ \{\glab_1 : \mtype_1, \dots, \glab_i : \mtype_i, \dots, \glab_n : \mtype_n\} - \{\glab_i : \mtype_i\} \pm \{\glab' : \mtype'\} \cdots\ \ \red \ \ \{\glab_1 : \mtype_1, \dots, \glab_{i-1} : \mtype_{i-1}, \glab_{i+1} : \mtype_{i+1}, \dots, \glab_n : \mtype_n\} \pm \{\glab' : \mtype'\} \cdots \\\\ ii) & \ \{\glab_1 : \mtype_1, \dots, \glab_n : \mtype_n\} + \{\glab : \mtype\} \pm \{\glab' : \mtype'\} \cdots\ \ \red \ \ \{\glab_1 : \mtype_1, \dots, \glab_n : \mtype_n, \glab : \mtype\} \pm \{\glab' : \mtype'\} \cdots \\\\ iii) & \ \vtype \pm_1 \{\glab_1 : \mtype_1\} \cdots -_i \{\glab : \mtype\} \cdots +_j \{\glab : \mtype\} \cdots\ \ \red \\ & \ \vtype \pm_1 \cdots \pm_{i-1} \{\glab_{i-1} : \mtype_{i-1}\} \pm_{i+1} \{\glab_{i+1} : \mtype_{i+1}\} \cdots \pm_{j-1} \{\glab_{j-1} : \mtype_{j-1}\} \pm_{j+1} \{\glab_{j+1} : \mtype_{j+1}\} \cdots \\ \\ iv) & \ \vtype \pm_1 \{\glab_1 : \mtype_1\} \cdots +_i \{\glab : \mtype\} \cdots -_j \{\glab : \mtype\} \cdots \ \ \red \\ & \ \vtype \pm_1 \cdots \pm_{i-1} \{\glab_{i-1} : \mtype_{i-1}\} \pm_{i+1} \{\glab_{i+1} : \mtype_{i+1}\} \cdots \pm_{j-1} \{\glab_{j-1} : \mtype_{j-1}\} \pm_{j+1} \{\glab_{j+1} : \mtype_{j+1}\} \cdots \end{align*} To ensure that $\red$ is confluent without having to consider the order of type extensions and/or contractions, we assume that, in rules \textit{iii)} and \textit{iv)}, $i$ and $j$ are the positions of the first two occurrences of such a same-labelled type contraction and extension, in the case of rule \textit{iii)}, or type extension and contraction, in the case of rule \textit{iv)}. \end{definition} We can show that the type reduction system is convergent, i.e. that normal forms are unique. This fact, in conjunction with the fact that each reduction step preserves the set of kind restrictions of a type (see Proposition~\ref{prop:preserve}), allows us to replace any type with its normal form. Having types in their normal forms allows us to identify types that correspond to records that are constructed from a particular record by adding and removing fields in different orders but that end up having the same set of fields. \begin{proposition}\label{prop:convergence} $\red$ is convergent. \end{proposition} It is obvious that $\red$ is terminating, since the number of type extensions and type contractions diminishes with each reduction step. Also, note that $\red$ is confluent by construction. \begin{definition}\label{def:normform}Let $\red^*$ be the reflexive and transitive closure of $\red$. $(i)$ We say that $\ttype_2$ is a normal form of $\ttype_1$ if, and only if, $\ttype_1 \red^* \ttype_2$ and there is no $\ttype_3$ such that $\ttype_2 \red \ttype_3$. $(ii)$ We write $\normof{\ttype_1}$ for the (unique) normal form of $\ttype_1$. $(iii)$ We write $\ttype_1 \norm \ttype_2$ if $\normof{\ttype_1} = \normof{\ttype_2}$. \end{definition} Now we show that each reduction step $\red$ preserves kind restrictions. \begin{proposition}\label{prop:preserve} Every rewrite rule $\red$ transforms a type $\xtype_1$ into a type $\xtype_2$ such that $\kenv \Vdash \xtype_1 :: \gkind$ if, and only if, $\kenv \Vdash \xtype_2 :: \gkind$. \end{proposition} Every field that appears in an extensible type that is a subtype of a type in normal form occurs exactly once. \begin{proposition}\label{prop:canonicity} If $\xtype$ is in normal form, then every label that appears in it occurs exactly once. \end{proposition}
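Viewed operationally, normalization repeatedly cancels an extension and a contraction on the same label, as in rules \textit{iii)} and \textit{iv)}. The OCaml sketch below is our own reading of these rules for the case where the base type is a type variable; it assumes well-kinded input and represents an extensible type as a list of signed field operations over the base.
\begin{verbatim}
(* Sketch of type normalization over a list of signed field operations. *)
type sign = Plus | Minus

(* One step: drop the first extension/contraction pair on the same
   label and type (rules iii/iv); None if nothing cancels. *)
let rec cancel_once = function
  | [] -> None
  | (s, l, t) :: rest ->
      let opp = match s with Plus -> Minus | Minus -> Plus in
      if List.exists (fun (s', l', t') -> s' = opp && l' = l && t' = t) rest
      then
        let dropped = ref false in
        Some (List.filter
                (fun (s', l', t') ->
                   if not !dropped && s' = opp && l' = l && t' = t
                   then (dropped := true; false) else true)
                rest)
      else
        Option.map (fun r -> (s, l, t) :: r) (cancel_once rest)

(* Iterate to a fixed point; this terminates because each successful
   step removes two operations. *)
let rec normalize ops =
  match cancel_once ops with
  | Some ops' -> normalize ops'
  | None -> ops
\end{verbatim}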
Note that if a type is in normal form, then every extensible type that appears as its subtype will have a type variable as base type and each field that appears in it will occur exactly once. \begin{proposition}\label{prop:iden_norm} Let $S$ be a substitution and $\mtype$ be a type. Then $\normof{S(\normof{\mtype})} = \normof{S(\mtype)}$. \end{proposition} \begin{proof} If $\normof{\mtype} = \mtype$, the result is immediate, so suppose that $\normof{\mtype} \neq \mtype$. Then there exists an extensible type $\xtype$ in $\mtype$ that can be reduced to its normal form $\normof{\xtype}$. Let $\vtype \in \dom{S}$ appear in $\xtype$, but not in $\normof{\xtype}$. Then $\vtype$ only appears in type operations of $\xtype$ that were removed by normalization. Clearly, $S(\vtype)$ will appear in $S(\xtype)$, but not in $S(\normof{\xtype})$. But then, since $\vtype$ only appeared in type operations of $\xtype$ that were removed by normalization, $S(\vtype)$ will only appear in those same type operations. Therefore, $S(\vtype)$ will not appear in $\normof{S(\xtype)}$. Thus $S(\vtype)$ will appear neither in $\normof{S(\normof{\xtype})}$ nor in $\normof{S(\xtype)}$. Let $\vtype \in \dom{S}$ appear in $\xtype$ and in $\normof{\xtype}$. Then $\vtype$ must not appear in any type operations of $\xtype$ that can be removed by normalization. Then $S(\vtype)$ will appear both in $S(\xtype)$ and $S(\normof{\xtype})$: \begin{itemize} \item If $S(\normof{\xtype})$ is in normal form, then $S(\vtype)$ is in normal form and will appear in both $\normof{S(\xtype)}$ and $\normof{S(\normof{\xtype})}$; \item If $S(\normof{\xtype})$ is not in normal form, then either $S(\vtype)$ is not in normal form, or $S(\vtype)$ is in normal form but type operations that can be cancelled out by preexisting type operations in $\normof{\xtype}$ were added to $\normof{\xtype}$, or both. In any case, these operations will not appear in $\normof{S(\normof{\xtype})}$ or $\normof{S(\xtype)}$. \end{itemize} This means we can conclude that the type operations that appear in $\normof{S(\normof{\xtype})}$ and $\normof{S(\xtype)}$ are precisely the same, \textit{i.e.} $\normof{S(\normof{\xtype})} = \normof{S(\xtype)}$. \end{proof} To account for normal forms, we need to redefine what it means for two substitutions to be equal. \begin{definition}\label{def:eqsubs} Let $S_1$ and $S_2$ be two substitutions. We say that $S_1 = S_2$ if, for any type $\mtype$, $S_1(\mtype) \norm S_2(\mtype)$. \end{definition} Note that if two substitutions are syntactically equal, then they are also equal in the sense of Definition~\ref{def:eqsubs}. \begin{proposition} Let $S_1$, $S_2$, $S_1'$, and $S_2'$ be substitutions, such that $S_1 = S_2$ and $S_1' = S_2'$. Then $S_1 \circ S_1' = S_2 \circ S_2'$. \end{proposition} \begin{proof}Since $S_1 = S_2$ and $S_1' = S_2'$, we know that $S_1(\tau) \norm S_2(\tau)$ and $S_1'(\tau) \norm S_2'(\tau)$ for any type $\tau$. In particular, $\normof{S_1'(\tau)} = \normof{S_2'(\tau)}$, and therefore $S_1(\normof{S_1'(\tau)}) \norm S_2(\normof{S_2'(\tau)})$. By Proposition~\ref{prop:iden_norm}, $\normof{S_1(\normof{S_1'(\tau)})} = \normof{S_1(S_1'(\tau))}$ and $\normof{S_2(\normof{S_2'(\tau)})} = \normof{S_2(S_2'(\tau))}$, so $S_1(S'_1(\tau)) \norm S_2(S'_2(\tau))$. Thus, $S_1 \circ S'_1 = S_2 \circ S'_2$. \end{proof} \subsection{Kinded Unification} We are going to extend the kinded unification algorithm used by Ohori in~\cite{Ohori95} with rules for unifying extensible types with type variables and other extensible types. \begin{definition} A kinded set of equations is a pair $(\kenv, \geqs)$ consisting of a kind assignment $\kenv$ and a set $\geqs$ of pairs of types such that $\geqs$ is well formed under $\kenv$.
\end{definition} \begin{definition} A substitution $S$ satisfies $\geqs$ if $S(\mtype_1) \norm S(\mtype_2)$, for all $(\mtype_1, \mtype_2) \in \geqs$. \end{definition} \begin{definition} A kinded substitution $(\kenv_1, S)$ is a unifier of a kinded set of equations $(\kenv, \geqs)$ if it respects $\kenv$ and if $S$ satisfies $\geqs$. \end{definition} \begin{definition} $(\kenv_1, S)$ is a most general unifier of $(\kenv_2, \geqs)$ if it is a unifier of $(\kenv_2, \geqs)$ and if for any unifier $(\kenv_3, S_2)$ of $(\kenv_2, \geqs)$, there is some substitution $S_3$ such that $(\kenv_3, S_3)$ respects $\kenv_1$ and $S_2 = S_3 \circ S$. \end{definition} The unification algorithm $\mathcal{U}$ is defined by transformation. Each rule transforms a 4-tuple of the form $(\geqs, \kenv, S, \skenv)$ consisting of a set $\geqs$ of type equations, a kind assignment $\kenv$, a substitution $S$, and a (not necessarily well-formed) kind assignment $\skenv$. $\geqs$ keeps the set of equations to be unified; $\kenv$ specifies kind constraints to be verified; $S$ records ``solved'' equations in the form of a substitution; and $\skenv$ records ``solved'' kind constraints that have already been verified for $S$. In specifying rules, we treat $\kenv$, $\skenv$, and $S$ as sets of pairs. We also use the following notations. Let $\mathit{F}$ range over functions from a finite set of labels to types. We write $\{\mathit{F}\}$ and $\rkind{\fields^{l}}{\fields^{r}}$ to denote the record type identified by $\mathit{F}$ and the record kind identified by $\fields^{l}$ and $\fields^{r}$, respectively. For two functions $\mathit{F}_1$ and $\mathit{F}_2$ we write $\mathit{F}_1 + \mathit{F}_2$ for the function $\mathit{F}$ such that $\dom{\mathit{F}} = \dom{\mathit{F}_1} \cup \dom{\mathit{F}_2}$ and, for $\glab \in \dom{\mathit{F}}$, if $\glab \in \dom{\mathit{F}_1}$, then $\mathit{F}(\glab) = \mathit{F}_1(\glab)$; otherwise, $\mathit{F}(\glab) = \mathit{F}_2(\glab)$. For those same two functions, we write $\mathit{F}_1 - \mathit{F}_2$ for the function $\mathit{F}$ such that $\dom{\mathit{F}} = \dom{\mathit{F}_1} \setminus \dom{\mathit{F}_2}$ and, for $\glab \in \dom{\mathit{F}}$, $\mathit{F}(\glab) = \mathit{F}_1(\glab)$. For an extensible type $\xtype$, we write $\efields{\xtype}$ and $\cfields{\xtype}$ for the functions that represent the set of type extensions and contractions of $\xtype$, respectively. \begin{example} Let $\xtype = \extype{\cntype{\extype{\vtype}{\glab_1}{\mtype_1}}{\glab_2}{\mtype_2}}{\glab_3}{\mtype_3}$. Then $\efields{\xtype}$ is the function (of domain $\{\glab_1, \glab_3\}$) that sends $\glab_1$ to $\mtype_1$ and $\glab_3$ to $\mtype_3$ and $\cfields{\xtype}$ is the function (of domain $\{\glab_2\}$) that sends $\glab_2$ to $\mtype_2$. \end{example} \begin{definition}\label{def:unifalg} Let $(\kenv, \geqs)$ be a given kinded set of equations. Algorithm $\mathcal{U}$ first transforms the system $(\geqs, \kenv, \emptyset, \emptyset)$ into $(\geqs', \kenv', S, \skenv)$ until no more rules can be applied. It then returns the pair $(\kenv', S)$, if $\geqs' = \emptyset$; otherwise, it fails. Its rules can be found in Figure~\ref{fig:kindunif}. \end{definition}
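The field-function operations $\mathit{F}_1 + \mathit{F}_2$ and $\mathit{F}_1 - \mathit{F}_2$ used by the rules in Figure~\ref{fig:kindunif} have a direct concrete reading. The following OCaml sketch is our own, and models a field function as an association list from labels to types.
\begin{verbatim}
(* F1 + F2: union of field functions, preferring F1 on common labels. *)
let plus f1 f2 =
  f1 @ List.filter (fun (l, _) -> not (List.mem_assoc l f1)) f2

(* F1 - F2: restriction of F1 to labels not in the domain of F2. *)
let minus f1 f2 =
  List.filter (fun (l, _) -> not (List.mem_assoc l f2)) f1
\end{verbatim}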
\begin{figure} {\small \begin{align*} i) & \ (\geqs \cup \{(\mtype_1, \mtype_2)\}, \kenv, S, \skenv) \Rightarrow (\geqs, \kenv, S, \skenv) \ \text{if} \ \mtype_1 \norm \mtype_2 \\ ii) & \ (\geqs \cup \{(\vtype, \mtype)\}, \kenv \cup \{(\vtype, \ukind)\}, S, \skenv) \Rightarrow ([\mtype/\vtype] \geqs, [\mtype/\vtype] \kenv, [\mtype/\vtype](S) \cup \{(\vtype, \mtype)\}, [\mtype/\vtype](\skenv) \cup \{(\vtype, \ukind)\}) \\ & \qquad \text{if} \ \vtype \not\in \ftv{\mtype} \\ iii) & \ (\geqs \cup \{(\vtype_1, \vtype_2)\}, \kenv \cup \{(\vtype_1, \rkind{\fields^{l}_1}{\fields^{r}_1}), (\vtype_2, \rkind{\fields^{l}_2}{\fields^{r}_2})\}, S, \skenv) \Rightarrow \\ & \qquad ([\vtype_2/\vtype_1](\geqs \cup \{(\fields^{l}_1(\glab), \fields^{l}_2(\glab)) \mid \glab \in \dom{\fields^{l}_1} \cap \dom{\fields^{l}_2}\} \cup \{(\fields^{r}_1(\glab), \fields^{r}_2(\glab)) \mid \glab \in \dom{\fields^{r}_1} \cap \dom{\fields^{r}_2}\}), \\ & \qquad \ [\vtype_2/\vtype_1](\kenv) \cup \{(\vtype_2, [\vtype_2/\vtype_1](\rkind{\fields^{l}_1 + \fields^{l}_2}{\fields^{r}_1 + \fields^{r}_2}))\}, \\ & \qquad \ [\vtype_2/\vtype_1](S)\cup \{(\vtype_1, \vtype_2)\}, [\vtype_2/\vtype_1](\skenv) \cup \{(\vtype_1, \rkind{\fields^{l}_1}{\fields^{r}_1})\}) \\ & \qquad \text{if} \ \dom{\fields^{l}_1} \cap \dom{\fields^{r}_2} = \emptyset \ \text{and} \ \dom{\fields^{r}_1} \cap \dom{\fields^{l}_2} = \emptyset \\ iv) & \ (\geqs \cup \{(\vtype, \{F_2\})\}, \kenv \cup \{(\vtype, \rkind{\fields^{l}_1}{\fields^{r}_1})\}, S, \skenv) \Rightarrow \\ & \qquad ([\{\mathit{F}_2\}/\vtype](\geqs \cup \{(\fields^{l}_1(\glab), \mathit{F}_2(\glab)) \mid \glab \in \dom{\fields^{l}_1}\}), \\ & \qquad \ [\{\mathit{F}_2\}/\vtype](\kenv), [\{\mathit{F}_2\}/\vtype](S) \cup \{(\vtype, \{\mathit{F}_2\})\}, [\{\mathit{F}_2\}/\vtype](\skenv) \cup \{(\vtype, \rkind{\fields^{l}_1}{\fields^{r}_1})\}) \\ & \qquad \text{if} \ \dom{\fields^{l}_1} \subseteq \dom{\mathit{F}_2}, \dom{\fields^{r}_1} \cap \dom{\mathit{F}_2} = \emptyset, \ \text{and} \ \vtype \not \in \ftv{\{F_2\}} \\ v) & \ (\geqs \cup \{(\{F_1\}, \{F_2\})\}, \kenv, S, \skenv) \Rightarrow (\geqs \cup \{(F_1(l), F_2(l)) \mid l \in \dom{F_1}\}, \kenv, S, \skenv) \\ & \qquad \text{if} \ \dom{F_1} = \dom{F_2} \\ vi) & \ (\geqs \cup \{(\atype{\mtype^1_1}{\mtype^2_1}, \atype{\mtype^1_2}{\mtype^2_2})\}, \kenv, S, \skenv) \Rightarrow (\geqs \cup \{(\mtype^1_1, \mtype^1_2), (\mtype^2_1, \mtype^2_2)\}, \kenv, S, \skenv) \\ vii) & \ (\geqs \cup \{(\vtype, \xtype)\}, \kenv \cup \{(\vtype, \rkind{\fields^{l}_1}{\fields^{r}_1}), (\xbase{\xtype}, \rkind{\fields^{l}_2}{\fields^{r}_2})\}, S, \skenv) \Rightarrow \\ & \qquad ([\xtype/\vtype](\geqs \cup \{(\fields^{l}_1(\glab), (\fields^{r}_2 + (\fields^{l}_2 - \cfields{\normof{\xtype}}))(\glab)) \mid \glab \in \dom{\fields^{l}_1} \cap \dom{\fields^{r}_2 + (\fields^{l}_2 - \cfields{\normof{\xtype}})}\} \\ & \qquad \qquad \qquad \cup \{(\fields^{r}_1(\glab), \cfields{\normof{\xtype}}(\glab)) \mid \glab \in \dom{\fields^{r}_1} \cap \dom{\cfields{\normof{\xtype}}}\}), \\ & \qquad \ [\xtype/\vtype](\kenv) \cup \{(\xbase{\xtype}, [\xtype/\vtype](\rkind{\fields^{l}_2 + (\fields^{l}_1 - (\fields^{r}_2 + (\fields^{l}_2 - \cfields{\normof{\xtype}})))}{\fields^{r}_2 + (\fields^{r}_1 - \cfields{\normof{\xtype}})}))\}, \\ & \qquad \ [\xtype/\vtype](S)\cup \{(\vtype, \xtype)\}, [\xtype/\vtype](\skenv) \cup \{(\vtype, \rkind{\fields^{l}_1}{\fields^{r}_1})\}) \\ & \qquad \text{if} \ \dom{\fields^{l}_1} \cap \dom{\cfields{\normof{\xtype}}} = \emptyset, \dom{\fields^{r}_1} \cap
\dom{\fields^{r}_2 + (\fields^{l}_2 - \cfields{\normof{\xtype}})} = \emptyset, \ \text{and} \ \vtype \not\in \ftv{\xtype} \\ viii) & \ (\geqs \cup \{(\vtype_1 \pm^1_1 \{\glab^1_1 : \mtype^1_1\} \cdots \pm^1_i \{\glab^1_i : \mtype^1_i\} \cdots \pm^1_n \{\glab^1_n : \mtype^1_n\}, \\ & \qquad \quad \ \ \vtype_2 \pm^2_1 \{\glab^2_1 : \mtype^2_1\} \cdots \pm^2_j \{\glab^2_j : \mtype^2_j\} \cdots \pm^2_m \{\glab^2_m : \mtype^2_m\})\}, \kenv, S, \skenv) \Rightarrow \\ & \qquad (\geqs \cup \{(\mtype^1_i, \mtype^2_j), (\vtype_1 \pm^1_1 \{\glab^1_1 : \mtype^1_1\} \cdots \pm^1_{i-1} \{\glab^1_{i-1} : \mtype^1_{i-1}\} \pm^1_{i+1} \cdots \pm^1_n \{\glab^1_n : \mtype^1_n\}, \\ & \qquad \qquad \qquad \qquad \quad \vtype_2 \pm^2_1 \{\glab^2_1 : \mtype^2_1\} \cdots \pm^2_{j-1} \{\glab^2_{j-1} : \mtype^2_{j-1}\} \pm^2_{j+1} \cdots \pm^2_m \{\glab^2_m : \mtype^2_m\})\}, \kenv, S, \skenv) \\ & \qquad \text{if} \ (\pm^1_i = \pm^2_j \wedge \glab^1_i = \glab^2_j), \forall i < k \leq n : \glab^1_k \not= \glab^1_i, \ \text{and} \ \forall j < r \leq m : \glab^2_r \not=\glab^2_j \\ ix) & \ (\geqs \cup \{(\vtype_1 \pm^1_1 \{\glab^1_1 : \mtype^1_1\} \cdots \pm^1_n \{\glab^1_n : \mtype^1_n\}, \vtype_2 \pm^2_1 \{\glab^2_1 : \mtype^2_1\} \cdots \pm^2_m \{\glab^2_m : \mtype^2_m\})\}, \\ & \qquad \kenv \cup \{(\vtype_1, \rkind{\fields^{l}_1}{\fields^{r}_1}), (\vtype_2, \rkind{\fields^{l}_2}{\fields^{r}_2})\}, S, \skenv) \Rightarrow \\ & \qquad (S_{ix)}(\geqs \cup \{(\fields^{l}_1(\glab), \fields^{l}_2(\glab)) \mid \glab \in \dom{\fields^{l}_1} \cap \dom{\fields^{l}_2}\} \cup \{(\fields^{r}_1(\glab), \fields^{r}_2(\glab)) \mid \glab \in \dom{\fields^{r}_1} \cap \dom{\fields^{r}_2}\}), \\ & \qquad \ S_{ix)}(\kenv) \cup \{(\vtype, S_{ix)}(\rkind{\fields^{l}_1 + \fields^{l}_2}{\fields^{r}_1 + \fields^{r}_2}))\}, \\ & \qquad \ S_{ix)}(S) \cup \{(\vtype_1, \vtype \pm^2_1 \{\glab^2_1 : \mtype^2_1\} \cdots \pm^2_m \{\glab^2_m : \mtype^2_m\}), (\vtype_2, \vtype \pm^1_1 \{\glab^1_1 : \mtype^1_1\} \cdots \pm^1_n \{\glab^1_n : \mtype^1_n\})\}, \\ & \qquad \ S_{ix)}(\skenv) \cup \{(\vtype_1, \rkind{\fields^{l}_1}{\fields^{r}_1}), (\vtype_2, \rkind{\fields^{l}_2}{\fields^{r}_2})\}) \\ & \qquad \ \text{where} \ S_{ix)} = [\vtype \pm^2_1 \{\glab^2_1 : \mtype^2_1\} \cdots \pm^2_m \{\glab^2_m : \mtype^2_m\}/\vtype_1, \vtype \pm^1_1 \{\glab^1_1 : \mtype^1_1\} \cdots \pm^1_n \{\glab^1_n : \mtype^1_n\}/\vtype_2] \\ & \qquad \text{if} \ \dom{\fields^{l}_1} \cap \dom{\fields^{r}_2} = \emptyset, \dom{\fields^{r}_1} \cap \dom{\fields^{l}_2} = \emptyset, \vtype_1 \not\in \ftv{\vtype_2 \pm^2_1 \{\glab^2_1 : \mtype^2_1\} \cdots \pm^2_m \{\glab^2_m : \mtype^2_m\}}, \\ & \qquad \quad \vtype_2 \not\in \ftv{\vtype_1 \pm^1_1 \{\glab^1_1 : \mtype^1_1\} \cdots \pm^1_n \{\glab^1_n : \mtype^1_n\}}, \forall 1 \leq i \leq n, 1 \leq j \leq m, \glab^1_i \not= \glab^2_j, \ \text{and} \ \vtype \ \text{is fresh} \end{align*}} \vspace{-0.2in} \caption{Kinded Unification} \label{fig:kindunif} \end{figure} Since the constructions present in rule \textit{vii)} from Definition~\ref{def:unifalg} are somewhat intricate, we will state the following facts in the hope that they will help the reader better understand this rule. Let $\vtype$ be a type variable and $\xtype$ an extensible type, both well formed under some kind assignment $\kenv$, such that $\xbase{\xtype} = \vtype_{\xtype}$, $\kenv(\vtype) = \rkind{\fields^{l}_1}{\fields^{r}_1}$, and $\kenv(\vtype_{\xtype}) = \rkind{\fields^{l}_2}{\fields^{r}_2}$.
Also, let us assume that $\dom{\fields^{l}_1} \cap \dom{\cfields{\normof{\xtype}}} = \emptyset$, $\dom{\fields^{r}_1} \cap \dom{\fields^{r}_2 + (\fields^{l}_2 - \cfields{\normof{\xtype}})} = \emptyset$ and $\vtype \not\in \ftv{\xtype}$: \begin{itemize} \item Any field that appears in $\fields^{l}_2$ was introduced by the (Sel), (Modif), or (Contr) rule. This means that the set of labels that appear in type contractions of $\xtype$ satisfies $\dom{\cfields{\normof{\xtype}}} \subseteq \dom{\fields^{l}_2}$. \item Any field that appears in $\fields^{r}_2$ was introduced by the (Ext) rule. This means that the set of labels that appear in type extensions of $\xtype$ satisfies $\dom{\fields^{r}_2} \subseteq \dom{\efields{\normof{\xtype}}}$. \item The set of fields that are guaranteed to appear in a record typed with $\xtype$ is represented by the function $\fields^{r}_2 + (\fields^{l}_2 - \cfields{\normof{\xtype}})$ and the set of fields that are guaranteed to \textit{not} appear in a record typed with $\xtype$ is represented by the function $\cfields{\normof{\xtype}}$. \item The set of fields that are guaranteed to exist by $\kenv(\vtype)$, but not by $\xtype$, is represented by the function $\fields^{l}_1 - (\fields^{r}_2 + (\fields^{l}_2 - \cfields{\normof{\xtype}}))$ and the set of fields that are guaranteed \textit{not} to exist by $\kenv(\vtype)$, but not by $\xtype$, is represented by the function $\fields^{r}_1 - \cfields{\normof{\xtype}}$. \end{itemize} Note that we use $\normof{\xtype}$ instead of $\xtype$ in $\cfields{\normof{\xtype}}$ so that constructing this function is more straightforward. That being said, to keep proofs simpler, we do not normalize extensible types during unification. \begin{theorem}\label{thm:unifalg} Algorithm $\mathcal{U}$ takes any kinded set of equations and computes a most general unifier, if one exists; otherwise it fails. \end{theorem} \begin{example} Let $\kenv = \{\vtype :: \rkind{}{\glab : \vtype''}, \vtype' :: \rkind{\glab : \vtype''}{}, \vtype'' :: \ukind\}$. Then $\mathcal{U}(\kenv, \{(\cntype{\extype{\vtype}{\glab}{\vtype''}}{\glab}{\vtype''}, \cntype{\vtype'}{\glab}{\vtype''})\}) = (\{\vtype :: \rkind{}{\glab : \vtype''}, \vtype'' :: \ukind\}, \{(\vtype', \extype{\vtype}{\glab}{\vtype''})\})$: \begin{align*} & (\{(\cntype{\extype{\vtype}{\glab}{\vtype''}}{\glab}{\vtype''},\cntype{\vtype'}{\glab}{\vtype''})\}, \kenv, \emptyset, \emptyset) \\ \overset{\textit{viii)}}{\Rightarrow} & (\{(\extype{\vtype}{\glab}{\vtype''}, \vtype')\}, \kenv, \emptyset, \emptyset) \\ \overset{\textit{vii)}}{\Rightarrow} & (\emptyset, \{\vtype :: \rkind{}{\glab : \vtype''}, \vtype'' :: \ukind\}, \{(\vtype', \extype{\vtype}{\glab}{\vtype''})\}, \{(\vtype', \rkind{\glab : \vtype''}{})\}) \end{align*} Note that the first transformation is given by rule \textit{viii)} and not by rule \textit{ix)}. This is the case because the latter rule can only be applied whenever two extensible types have no matching type extensions or contractions. \end{example} It is true that our unification algorithm introduces an overhead, associated with the search for matching type extensions and contractions (see rules \textit{viii)} and \textit{ix)}), when compared with Ohori's original algorithm. That being said, we believe that with some care during the algorithm's implementation this overhead should be negligible. \subsection{Type Inference} Using kinded unification, we extend Ohori's type inference algorithm to the cases of record field extension and record field removal.
The \emph{type inference algorithm}, $\infer{\kenv}{\tenv}{\gterm}$, is defined in Figure~\ref{fig:typeinf}. Given a kind assignment $\kenv$, a type assignment $\tenv$, and a term $\gterm$, the Type Inference Algorithm $\infer{\kenv}{\tenv}{\gterm}$ returns a tuple $(\kenv', S, \ttype)$. It is implicitly assumed that the algorithm fails if unification or any of the recursive calls fail. \begin{figure}[htbp] {\small \begin{align*} i) & \ \infer{\kenv}{\tenv}{\vterm} = \text{if} \ \vterm \not \in \dom{\tenv} \ \text{then} \ \textit{fail} \\ & \ \quad \qquad \qquad \qquad \text{else let} \ \forall \vtype_1::\gkind_1 \cdots \forall \vtype_n::\gkind_n.\mtype = \tenv(\vterm), \\ & \ \quad \quad \qquad \qquad \qquad \qquad \ S = [\beta_1/\vtype_1, \dots, \beta_n/\vtype_n] \ (\beta_1, \dots, \beta_n \ \text{are fresh}) \\ & \qquad \qquad \qquad \quad \ \text{in} \ (\kenv \{\beta_1::S(\gkind_1), \dots, \beta_n::S(\gkind_n)\}, \textit{id}, \normof{S(\mtype)}) \\ ii) & \ \infer{\kenv}{\tenv}{\abs{\vterm}{\gterm}} = \text{let} \ (\kenv_1, S_1, \mtype_1) = \infer{\kenv \{\vtype::\ukind\}}{\tenv \{\vterm : \vtype\}}{\gterm} \ (\vtype \ \text{fresh}) \\ & \qquad \qquad \qquad \qquad \ \ \ \ \text{in} \ (\kenv_1, S_1, \atype{\normof{S_1(\vtype)}}{\mtype_1}) \\ iii) & \ \infer{\kenv}{\tenv}{\app{\gterm_1}{\gterm_2}} = \text{let} \ (\kenv_1, S_1, \mtype_1) = \infer{\kenv}{\tenv}{\gterm_1} \\ & \qquad \qquad \qquad \qquad \qquad \ \ (\kenv_2, S_2, \mtype_2) = \infer{\kenv_1}{S_1(\tenv)}{\gterm_2} \\ & \qquad \qquad \qquad \qquad \qquad \ \ (\kenv_3, S_3) = \unify{\kenv_2 \{\vtype :: \ukind\}}{\{(S_2(\mtype_1), \atype{\mtype_2}{\vtype})\}} \ (\vtype \ \text{fresh}) \\ & \qquad \qquad \qquad \qquad \qquad \text{in} \ (\kenv_3, S_3 \circ S_2 \circ S_1, \normof{S_3(\vtype)}) \\ iv) & \ \infer{\kenv}{\tenv}{\letin{x}{\gterm_1}{\gterm_2}} = \text{let} \ (\kenv_1, S_1, \mtype_1) = \infer{\kenv}{\tenv}{\gterm_1} \\ & \qquad \qquad \qquad \qquad \qquad \qquad \qquad \ \ \ \ (\kenv'_1, \sigma) = \cls{\kenv_1}{S_1(\tenv)}{\mtype_1} \\ & \qquad \qquad \qquad \qquad \qquad \qquad \qquad \ \ \ \ (\kenv_2, S_2, \mtype_2) = \infer{\kenv'_1}{(S_1(\tenv))\{\vterm : \ttype\}}{\gterm_2} \\ & \qquad \qquad \qquad \qquad \qquad \qquad \qquad \ \ \text{in} \ (\kenv_2, S_2 \circ S_1, \mtype_2) \\ v) & \ \infer{\kenv}{\tenv}{\{l_1 = \gterm_1, \dots, l_n = \gterm_n\}} = \\ & \qquad \text{let} \ (\kenv_1, S_1, \mtype_1) = \infer{\kenv}{\tenv}{\gterm_1} \\ & \qquad \qquad (\kenv_i, S_i, \mtype_i) = \infer{\kenv_{i-1}}{S_{i-1} \circ \cdots \circ S_1 (\tenv)}{\gterm_i} \ (2 \le i \le n) \\ & \qquad \text{in} \ (\kenv_n, S_n \circ \cdots \circ S_2 \circ S_1, \{l_1 : \normof{S_n \circ \cdots \circ S_2(\mtype_1)}, \dots, l_i : \normof{S_n \circ \cdots \circ S_{i+1}(\mtype_i)}, \dots, l_n : \mtype_n\}) \\ vi) & \ \infer{\kenv}{\tenv}{\sel{\gterm}{l}} = \text{let} \ (\kenv_1, S_1, \mtype_1) = \infer{\kenv}{\tenv}{\gterm} \\ & \qquad \qquad \qquad \qquad \ \ \ (\kenv_2, S_2) = \unify{\kenv_1 \{\vtype_1 :: \ukind, \vtype_2 :: \rkind{l : \vtype_1}{}\}}{\{(\vtype_2, \mtype_1)\}} \ (\vtype_1, \vtype_2 \ \text{fresh}) \\ & \qquad \qquad \qquad \qquad \ \text{in} \ (\kenv_2, S_2 \circ S_1, \normof{S_2(\vtype_1)}) \\ vii) & \ \infer{\kenv}{\tenv}{\modif{\gterm_1}{l}{\gterm_2}} = \\ & \qquad \text{let} \ (\kenv_1, S_1, \mtype_1) = \infer{\kenv}{\tenv}{\gterm_1} \\ & \qquad \qquad (\kenv_2, S_2, \mtype_2) = \infer{\kenv_1}{S_1(\tenv)}{\gterm_2} \\ & \qquad \qquad (\kenv_3, S_3) = \unify{\kenv_2 \{\vtype_1 :: \ukind, \vtype_2 :: \rkind{l : 
\vtype_1}{}\}}{\{(\vtype_1, \mtype_2), (\vtype_2, S_2(\mtype_1))\}} \ (\vtype_1, \vtype_2 \ \text{fresh}) \\ & \qquad \text{in} \ (\kenv_3, S_3 \circ S_2 \circ S_1, \normof{S_3(\vtype_2)}) \\ viii) & \ \infer{\kenv}{\tenv}{\cnt{\gterm}{\glab}} = \text{let} \ (\kenv_1, S_1, \mtype_1) = \infer{\kenv}{\tenv}{\gterm} \\ & \qquad \qquad \qquad \qquad \quad \ \ \ (\kenv_2, S_2) = \unify{\kenv_1 \{\vtype_1 :: \ukind, \vtype_2 :: \rkind{\glab : \vtype_1}{}\}}{\{(\vtype_2, \mtype_1)\}} \ (\vtype_1, \vtype_2 \ \text{fresh}) \\ & \qquad \qquad \qquad \qquad \quad \ \text{in} \ (\kenv_2, S_2 \circ S_1, \normof{S_2(\cntype{\vtype_2}{\glab}{\vtype_1})}) \\ ix) & \ \infer{\kenv}{\tenv}{\ext{\gterm_1}{\glab}{\gterm_2}} = \\ & \qquad \text{let} \ (\kenv_1, S_1, \mtype_1) = \infer{\kenv}{\tenv}{\gterm_1} \\ & \qquad \qquad (\kenv_2, S_2, \mtype_2) = \infer{\kenv_1}{S_1(\tenv)}{\gterm_2} \\ & \qquad \text{in} \ \text{if} \ \xbase{\mtype_1} \in \ftv{\mtype_2} \ \text{then} \ \textit{fail} \\ & \quad \qquad \ \text{else let} \ (\kenv_3, S_3) = \unify{\kenv_2 \{\vtype_1 :: \ukind, \vtype_2 :: \rkind{}{\glab : \vtype_1}\}}{\{(\vtype_1, \mtype_2), (\vtype_2, S_2(\mtype_1))\}} \ (\vtype_1, \vtype_2 \ \text{fresh}) \\ & \ \quad \qquad \qquad \text{in} \ (\kenv_3, S_3 \circ S_2 \circ S_1, \normof{S_3(\extype{\vtype_2}{\glab}{\vtype_1})}) \\ \end{align*}} \vspace{-0.35in} \caption{Type inference algorithm} \label{fig:typeinf} \end{figure} \begin{proposition}\label{prop:normalform} If $\infer{\kenv}{\tenv}{\gterm} = (\kenv', S, \mtype)$, then $\mtype$ is in normal form. \end{proposition} \begin{theorem}\label{thm:typeinfalg} The type inference algorithm is sound and complete: \begin{itemize} \item (Soundness) If $\infer{\kenv}{\tenv}{\gterm} = (\kenv', S, \mtype)$, then $(\kenv', S)$ respects $\kenv$ and $\kenv', S(\tenv) \vdash \gterm : \mtype$. \item (Completeness) \begin{itemize} \item If $\infer{\kenv}{\tenv}{\gterm} = (\kenv', S, \mtype)$ and $\kenv_0, S_0(\tenv) \vdash \gterm : \mtype_0$, for some $(\kenv_0, S_0)$ and $\mtype_0$ such that $(\kenv_0, S_0)$ respects $\kenv$, then there is some $S'$, such that $(\kenv_0, S')$ respects $\kenv'$, $S'(\mtype) \norm \mtype_0$, and $S_0(\tenv) = S' \circ S(\tenv)$; \item If $\infer{\kenv}{\tenv}{\gterm} = \textit{fail}$, then there is no $(\kenv_0, S_0)$ and $\mtype_0$ such that $(\kenv_0, S_0)$ respects $\kenv$ and $\kenv_0, S_0(\tenv) \vdash \gterm : \mtype_0$. \end{itemize} \end{itemize} \end{theorem} \begin{example} Let $\kenv = \{\vtype_1 :: \rkind{}{\glab : \vtype_2}, \vtype_2 :: \ukind\}$ and $\tenv = \{\vterm : \vtype_1, y : \vtype_2\}$. Then we can apply the type inference algorithm to $\sel{\ext{\vterm}{\glab}{y}}{\glab}$ and get the following results: \begin{align*} & \infer{\kenv}{\tenv}{\sel{\ext{\vterm}{\glab}{y}}{\glab}} = (\kenv, \{(\vtype_3, \vtype_2), (\vtype_4, \vtype_1), (\vtype_5, \vtype_2), (\vtype_6, \extype{\vtype_1}{\glab}{\vtype_2})\}, \vtype_2) \\ & \infer{\kenv}{\tenv}{\ext{\vterm}{\glab}{y}} = (\kenv, \{(\vtype_3, \vtype_2), (\vtype_4, \vtype_1)\}, \extype{\vtype_1}{\glab}{\vtype_2}) \\ & \infer{\kenv}{\tenv}{\vterm} = (\kenv, \textit{id}, \vtype_1) \\ & \infer{\kenv}{\tenv}{y} = (\kenv, \textit{id}, \vtype_2) \end{align*} \end{example}
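As a further illustration (examples of our own), the checked nature of the two new operations shows up directly in inference: $\infer{\emptyset}{\emptyset}{\ext{\{\glab = \cterm^{\btype}\}}{\glab}{\cterm^{\btype}}}$ fails, because unification would have to give $\{\glab : \btype\}$ a kind of the form $\rkind{}{\glab : \btype}$, which no kinding rule allows; dually, $\infer{\emptyset}{\emptyset}{\cnt{\{\}}{\glab}}$ fails, because the empty record type cannot be given a kind of the form $\rkind{\glab : \vtype_1}{}$.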
\section{Related Work} \label{sec:rw} There are several alternative type systems in the literature that deal with polymorphic records with some form of extensibility. The most common approaches are based on subtyping~\cite{Cardelli90,Jategaonkar93} or row variables~\cite{Wand87,HarperM93,Remy92,Wand89}, but there are also others based on flags~\cite{Remy89,Remy94}, predicates~\cite{HarperP91,Gaster98} and scope variables~\cite{Leijen05}, to name a few. The approaches using subtyping~\cite{Cardelli88,CardelliW85,Jategaonkar93,PT1994} have been widely used to build polymorphic type systems with records, in particular for object-oriented programming languages. However, there are several issues that arise when combining record polymorphism with a strong mechanism of subtyping. In the presence of a subtyping relation $(r_1 \leq r_2)$, meaning that $r_1$ contains at least the fields in $r_2$, a selection operator $(\_.l)$ can have a type $\forall \alpha.\forall \beta \leq \{l:\alpha\}. \beta \rightarrow \alpha$, meaning that a label $l$ can be selected for a record with type $\beta$, if $\beta$ is a subtype of $\{l:\alpha\}$. This causes the information on the remaining fields of $\beta$ to be lost, making it harder to define operations dealing with extensibility. This was overcome by moving to a second-order type system~\cite{Cardelli90,CardelliM91}; however, the resulting type system relied on explicitly typed extensible records, yielding a system where type-checking and subtyping are decidable, but type inference is not addressed. The existence of a subtyping relation also complicates compilation (again) because information on the exact type of a record can be lost, leading to the need of incorporating some degree of dynamic typing at runtime. Several approaches dealing with extensibility use Wand's notion of row variables~\cite{Wand87}, which are variables that range over sets of field types, allowing for incremental construction of records. However, unlike the approach followed in our paper, operations in~\cite{Wand87} are unchecked, meaning that, when extending a row with a field $l$, one does not check if this introduces a new field or replaces an existing one, leading to programs for which a principal type does not exist. Flexible systems with extensible records have been constructed over the mechanism of row variables~\cite{Remy89,Remy94}, extended with the notion of flags, yielding a system with extensible records and principal types. Flags are used to provide information on which fields the record must have, and which it cannot have. However, despite the flexibility to define various powerful operations on records, compilation is not dealt with efficiently in the presence of flags, due in part to the ability to support some unchecked operations. Harper and Pierce~\cite{HarperP91} have studied type systems for extensible records where presence and absence of fields are given by predicates on types, thus leading to a system with checked operations, but without dealing with type inference or compilation. The use of predicates was further developed by Jones in his general theory of qualified types~\cite{Jones94,Jones94a}, where extensible records are presented as a special case. Building on that is the approach by Gaster and Jones~\cite{Gaster96,Gaster98}, which combines the notion of row variables with the notion of qualified types, and is perhaps the work most closely related to ours. In this approach, row extension is used to capture positive information about the fields, while predicates are used to capture negative information, thus avoiding duplicated labels.
In our approach both negative and positive information is given by the kind restrictions, resulting in a type system where constraints on label addition and label removal are treated in a uniform way. Building on the work of Wand, Rémy, and Gaster and Jones~\cite{Wand87,Remy94,Gaster96}, Leijen has developed a polymorphic type system with extensible records based on scoped labels~\cite{Leijen05}. In this approach, duplicate labels are allowed and retained, and an appropriate scoping mechanism is provided to prevent ambiguity and still allow for safe operations. This provides a notion of free extension where update and extension operations are clearly separated, yielding a system that is flexible from the user's point of view. Our approach does not allow for duplicated labels and uses the kinding restrictions to implement a strict notion of extensibility. In addition to record extension, there are several systems that deal with other powerful record operations such as concatenation~\cite{HarperP91,Remy92,Wand89}, or the natural join~\cite{BunemanO96} (an operation largely used in database programming, where labelled records play an important role). We choose not to include these operations as they tend to complicate both the implementation and the typing analysis, and simply follow Ohori's approach~\cite{Ohori95}, which efficiently supports the basic operations on records, extending it with two basic operations that support extensibility. Nevertheless, Rémy~\cite{Remy92} has developed an encoding of record concatenation via record extension, thus proving that a system supporting checked record extension can also support some form of record concatenation. Our decision to build our work on Ohori's record calculus was highly motivated by the existence of an efficient compilation method for such a calculus. This was achieved by translating the polymorphic record calculus into an implementation calculus, in which records are represented as vectors whose elements are accessed by direct indexing based on a total order $\ll$ on the set of labels. Polymorphic functions containing polymorphic record operations are compiled through the insertion of appropriate index abstractions, indicated by the kinded quantifiers given by the type of the function. Ohori then shows that this algorithm preserves types and that the implementation calculus has the subject reduction property, thus showing that the compilation algorithm preserves the operational behaviour of the original polymorphic record calculus. We believe that an efficient compilation algorithm can also be defined for the calculus developed in this work, because variables still range over complete record types and the negative information that was added to kinds only affects type inference, not compilation. For these reasons, it should be possible to extend the compilation method in~\cite{Ohori95} to our extensible records. \section{Conclusions and Future Work} \label{sec:conc} We have presented an ML-style polymorphic record calculus with extensible records, developed a typing system based on the notion of kinded quantification, and a sound and complete type inference algorithm based on kinded unification. While records are a basic commodity in a variety of programming languages, when it comes to the use of more powerful operations on records there is still a gap between theory and practice, with no consensus on what is the best approach. With this work, we hope to contribute to that discussion.
Ohori's main goal was to support the most common operations dealing with polymorphic records, while maintaining an efficient compilation method. As already stated in the related work, although we do not deal with compilation in this paper, this is something that we would like to address in future work. \bibliographystyle{eptcs}
\section{Analysis Plan} \label{sec:analysis} To obtain the results from the study, we follow the recommendations by Robbins and Heiberger~\cite{Robbins2011}. To plot the demographic questions (Q1 to Q5), we use a normal bar chart. To plot the results for the questions Q6 to Q18, FQ1 to FQ6, and FE1 to FE8, we intend to use a diverging stacked bar chart with counts (see Figure~10 in \cite{Robbins2011}). The X-axis label of the graph shows counts and percentages, the Y-axis label shows the demographic answers. To present the pretest and posttest experiment results in a comparative way, we use grouped bar charts. For the comparative graph of the task questions (TQ1 to TQ9), the X-axis labels are the individual questions, and the Y-axis shows the number of correct answers. Likewise, for the questions on understanding the results (PRQ1 to PRQ4 and POQ1 to POQ4), the X-axis labels are the scale values (\cf \tab{likert_scale}), and the Y-axis shows the count for every scale value. We do not associate demographic answers with the comparative graphs when plotting them. Instead, to make a reliable argument, we use the demographic answers when discussing the comparative graphs. For example: \emph{"2 out of 10 participants who have more than ten years of experience answered TQx correctly"}. Qualitative statements received from participants are gathered, organized, and summarized individually for every question. We summarize the qualitative statements through the following three steps: \textbf{(i)~Microanalysis:} The first author goes through the participants' answers individually and assigns labels to the statements. The remaining authors validate the initial labels and provide feedback for improvement. At the end of this step, all authors reach a mutual agreement on the initial labels. \textbf{(ii)~Categorization:} Based on this feedback, the first author performs a second iteration. As a result, a set of themes deemed essential is extracted. \textbf{(iii)~Saturation:} This is the final step, where all authors come to a final agreement on labels, themes, and summarized statements. Since qualitative statements are a medium to express individual opinions, the categorized labels are associated with the demographic answers. For example: \emph{"an engineer who has seven years of experience states that the counterexample explanation approach can promote the usage of model checkers among system engineers"}. \section{Design of the User Study} \label{sec:design} In this section, we describe the design, questionnaires, and tools used for both the \emph{online survey (Part\,1)} and the \emph{one-group pretest-posttest user study (Part\,2)}. \subsection{Part\,1: Online Survey} \label{sec:part1} For \emph{Part\,1}, we use a cross-sectional survey~\cite{KitchenhamP08} to collect data from engineers to achieve the objective of \goal{1}. For planning and conducting this online survey, we primarily follow the guidelines of Neuman~\cite[Chapter~7]{Neuman14}, complemented by Kitchenham and Pfleeger~\cite{KitchenhamP08} and Fink~\cite{Fink03}. In addition to Neuman~\cite{Neuman14}, we follow Robson and McCartan~\cite[Chapter~11]{RobsonM16}, and Babbie~\cite[Chapter~9]{Babbie16} for the questionnaire construction. Further, we refer to and adapt some of the questionnaires from existing user surveys by Gleirscher and Marmsoler~\cite{GleirscherM20}, and Garavel \etal~\cite{GaravelBP20}. 
Gleirscher and Marmsoler~\cite{GleirscherM20} perform the largest cross-sectional survey with 216 participants to study the existing and intended use of formal methods. Similarly, Garavel \etal~\cite{GaravelBP20} conduct a user survey with 130 participants and 30 questions to collect information on the past, present, and future of formal methods in research, industry, and education. Our main contributions with respect to similar surveys are: (1) we particularly focus on identifying challenges that engineers face in identifying inconsistent specifications, not general challenges of using formal methods, and (2) the study is performed with engineers who work on real-world automotive projects. \begin{table*}[bth] \centering \caption{Online survey questions for study Part\,1.} \label{tab:survey_questions} \begin{tabular}{p{0.55cm}|m{11cm}|l|m{2cm}} \hline \rowcolor{gray!10} \textbf{Label} & \textbf{Questions} & \textbf{Scale} & \textbf{Label (\cf \tab{likert_scale})} \\ \hline \rowcolor{gray!10} \multicolumn{4}{l}{\textbf{Demographic Questions}} \\ \hline Q1 & Rate your knowledge of formal methods & Ordinal & LS1 \\ \hline Q2 & How many years have you used formal methods in your daily work? (answer the duration separately for academia and industry) & Ordinal & LS2 \\ \hline Q3 & List the application(s) you worked on using formal methods (if available) & Nominal & Not Applicable \\ \hline Q4 & How many years have you worked in the safety domain? (only in industries) & Ordinal & LS2 \\ \hline Q5 & List the application(s) you worked on focusing on safety aspects (if available) & Nominal & Not Applicable \\ \hline \rowcolor{gray!10} \multicolumn{4}{l}{\textbf{Main Survey Questions}} \\ \hline Q6 & How easy is it for you to identify inconsistent specifications? & Nominal and Ordinal & LS3 \\ \hline Q7 & What sort of methods do you use to identify inconsistent specifications? & Nominal & Not Applicable \\ \hline Q8 & How fast could you identify inconsistent specifications? & Nominal and Ordinal & LS4 \\ \hline Q9 & What are the challenges that you face in order to identify inconsistent specifications? & Nominal & Not Applicable \\ \hline Q10 & How easy is it for you to maintain consistency when refining requirements for sub-components? & Nominal and Ordinal & LS3 \\ \hline Q11 & In your opinion, how beneficial is the identification of inconsistent specifications for a safety analysis? & Nominal and Ordinal & LS5 \\ \hline Q12 & How hard is it for you to check the consistency of requirements that are associated with components? & Nominal and Ordinal & LS3 \\ \hline Q13 & How easy is it for you to understand formal notations? & Nominal and Ordinal & LS3 \\ \hline Q14 & What is your opinion on using formal verification? & Nominal & Not Applicable \\ \hline Q15 & In your opinion, will the usage of formal verification make systems safer? & Nominal and Ordinal & LS5 \\ \hline Q16 & In your opinion, can the formal verification be an add-on to the functional safety methods to ensure safety? & Nominal and Ordinal & LS5 \\ \hline Q17 & Can you imagine using formal methods if the understanding of formal notations is made easier? & Nominal and Ordinal & LS5 \\ \hline Q18 & Do you think formal methods are usable in real-world development processes? & Nominal and Ordinal & LS5 \\ \hline \end{tabular} \end{table*} \tab{survey_questions} presents the questionnaire prepared for our online survey. 
The response for each question is captured either as qualitative statements, a set of predefined scale answers, or a combination of both. We use a 7-point scale as it increases the reliability of answers from participants over a 5-point scale according to Joshi \etal~\cite{JoshiKCP15}. The answer scales used in this survey are listed in \tab{likert_scale}. \begin{table*}[!tbh] \centering \caption{Likert and answer scales used for this study.} \label{tab:likert_scale} \begin{tabular}{p{0.5cm}|m{1.2cm}|m{14.7cm}} \hline \rowcolor{gray!10} \textbf{Label} & \textbf{Type} & \textbf{Likert \& Answer Scales} \\ \hline LS1 & Expertise & Novice, Advanced Beginner, Competent, Proficient, Expert, Mastery, Practical Wisdom, No Opinion \\ \hline LS2 & Experience & $>=$\,1, $<$\,1 to 2, $<$\,2 to 4, $<$\,4 to 6, $<$\,6 to 8, $<$\,8 to 10, $<$\,10, No Experience \\ \hline LS3 & Agreement & Extremely Hard, Hard, Slightly Hard, Neither Hard nor Easy, Slightly Easy, Easy, Extremely Easy, No Opinion \\ \hline LS4 & Agreement & Extremely Fast, Fast, Slightly Fast, Neither Fast nor Slow, Slightly Slow, Slow, Extremely Slow, No Opinion \\ \hline LS5 & Likelihood & Definitely, Very Probably, Probably, Neither Probably nor Possibly, Possibly, Probably Not, Definitely Not, No Opinion \\ \hline LS6 & Agreement & Strongly agree, Agree, Somewhat agree, Neither agree nor disagree, Somewhat disagree, Disagree, Strongly disagree, No Opinion \\ \hline LS7 & Usefulness & Exceptional, Excellent, Very Good, Good, Fair, Poor, Very Poor, No Opinion \\ \hline \end{tabular} \end{table*} \subsection{Part\,2: One-Group Pretest-Posttest Design} \label{sec:part2} \emph{Part\,2} of our study is an exploratory pre-experimental user study following a \emph{one-group pretest-posttest design} to attain goal \goal{2}. We follow the guidelines by Campbell and Stanley~\cite{CampbellS63} to conduct this part of our study. One of the main drawbacks of using a one-group pretest-posttest design is that it does not meet the scientific standards of an experimental design. For example, pre-experimental study designs do not have a control group like a true experiment~\cite{WohlinRHO12}. Thus, comparison and generalization of the results based on the provided intervention/stimulus may not be possible. However, we intend to use this pre-experimental user study design because of the scarcity of participants. Finding a considerable number of participants (30 to 40) with knowledge of formal methods and model checkers inside an industrial organization is ambitious, and performing a true experiment with a low number of participants raises threats to external validity. Therefore, we intend to perform a one-group pretest-posttest experiment with Bosch automotive engineers that allows us to capture results from real-world user behavior, even with a limited number of participants. However, the pre-experimental study has several internal and external threats to be considered. We discuss the handling of the threats listed by Campbell and Stanley~\cite[Table 1]{CampbellS63} in \sect{tov}. Along with the guidelines by Campbell and Stanley, we refer to the protocol by Zaidman \etal~\cite{ZaidmanMSD13} for a one-group pretest-posttest experiment. They evaluate a tool called \emph{FireDetective} that supports understanding of Ajax applications on both the client side (browser) and the server side. 
Their evaluation is performed using two user study variants: (i)~a pretest-posttest user study performed with eight participants, and (ii)~a field user study performed with two participants. We plan to perform the one-group pretest-posttest experiment with Bosch automotive engineers and discard the field user study for our evaluation. The questionnaire presented in \tab{onegroup} is used for the one-group pretest-posttest study (\emph{Part\,2} of our overall study). Similar to \emph{Part\,1}, the response for each question is either a qualitative statement, a set of predefined scale answers on a 7-point scale, or a combination of both. \begin{table*}[!tbh] \centering \caption{One-group pretest and posttest questions.} \label{tab:onegroup} \begin{tabular}{p{0.5cm}|m{11.5cm}|l|m{2cm}} \hline \rowcolor{gray!10} \textbf{Label} & \textbf{Questions} & \textbf{Scale} & \textbf{Label (\cf \tab{likert_scale})} \\ \hline \rowcolor{gray!10} \multicolumn{4}{l}{\textbf{Task Questions}} \\ \hline TQ1 & How difficult was this use case for you to understand? & Nominal and Ordinal & LS5 \\ \hline TQ2 & Do you think this use case is difficult? & Nominal and Ordinal & LS5 \\ \hline TQ3 & Do you think you have understood results from the model checker? (This question is only for the pretest) & Nominal and Ordinal & LS5 \\ \hline TQ4 & Do you think you have understood the explanations? (This question is only for the posttest) & Nominal and Ordinal & LS5 \\ \hline TQ5 & Of the following list, please select the inconsistent components. & Nominal & Not Applicable \\ \hline TQ6 & Of the following list, please select the inconsistent specifications. & Nominal & Not Applicable \\ \hline TQ7 & Please explain the reason that makes the specifications inconsistent from your understanding. & Nominal & Not Applicable \\ \hline TQ8 & Please provide a solution to fix the inconsistency from your understanding. & Nominal & Not Applicable \\ \hline TQ9 & Please provide a nominal behavior that is expected in the counterexample's erroneous states from your understanding. & Nominal & Not Applicable \\ \hline \rowcolor{gray!10} \multicolumn{4}{l}{\textbf{Understanding Model Checker Results (Pretest)}} \\ \hline PRQ1 & The results from the model checker allow me to understand the inconsistencies. & Nominal and Ordinal & LS6 \\ \hline PRQ2 & The results from the model checker make me confident that I really understand the inconsistencies that I am investigating. & Nominal and Ordinal & LS6 \\ \hline PRQ3 & The value added by such a result from the model checker will be minimal. & Nominal and Ordinal & LS6 \\ \hline PRQ4 & Such a result from the model checker could save me time. & Nominal and Ordinal & LS6 \\ \hline \rowcolor{gray!10} \multicolumn{4}{l}{\textbf{Understanding our Counterexample Explanation Approach Results (Posttest)}} \\ \hline POQ1 & The value added by an approach like counterexample explanation is minimal. & Nominal and Ordinal & LS6 \\ \hline POQ2 & An approach like counterexample explanation saves me time. & Nominal and Ordinal & LS6 \\ \hline POQ3 & An approach like counterexample explanation allows me to better understand inconsistencies. & Nominal and Ordinal & LS6 \\ \hline POQ4 & An approach like counterexample explanation makes me more confident that I really understand the inconsistencies that I am investigating. 
& Nominal and Ordinal & LS6 \\ \hline \rowcolor{gray!10} \multicolumn{4}{l}{\textbf{Counterexample Explanation Features (Ratings)}} \\ \hline FQ1 & Translation of specifications from formal temporal format to natural language-like format. & Nominal and Ordinal & LS7 \\ \hline FQ2 & Listing inconsistent specifications. & Nominal and Ordinal & LS7 \\ \hline FQ3 & Highlighting sub-parts of the inconsistent specifications that lead to an inconsistency. & Nominal and Ordinal & LS7 \\ \hline FQ4 & Providing the component name that belongs to the inconsistent specifications. & Nominal and Ordinal & LS7 \\ \hline FQ5 & Providing an expected nominal behavior in the explanation for the corresponding erroneous states and variables of the counterexample. & Nominal and Ordinal & LS7 \\ \hline FQ6 & Highlighting the erroneous states and variables in the counterexample. & Nominal and Ordinal & LS7 \\ \hline \rowcolor{gray!10} \multicolumn{4}{l}{\textbf{Feedback (After completion of the experiment)}} \\ \hline FE1 & Are inconsistencies easier to understand with the results created by the counterexample explanation approach in comparison to those of the original model checker? & Nominal and Ordinal & LS5 \\ \hline FE2 & What challenges did you face while analyzing inconsistencies in a specification with the proposed approach? & Nominal & Not Applicable \\ \hline FE3 & Do you think it is easy to maintain consistency with the proposed counterexample explanation while refining requirements into requirements for sub-components? & Nominal and Ordinal & LS3 \\ \hline FE4 & Do you think the proposed counterexample explanation approach is usable in real-world development processes? & Nominal and Ordinal & LS5 \\ \hline FE5 & Would you consider using formal methods with our approach in real-world projects? & Nominal and Ordinal & LS5 \\ \hline FE6 & Would you consider using the presented approach in your project? If so, please name the project and a contact person. & Nominal and Ordinal & LS5 \\ \hline FE7 & Do you think presenting a list of possible suggestions/fixes would be helpful to understand and fix inconsistencies? & Nominal and Ordinal & LS5 \\ \hline FE8 & Suggestions for further improvements. & Nominal & Not Applicable \\ \hline \end{tabular} \end{table*} \subsection{Tools used for the Study} \label{sec:survey_tool} Since we do not require time recording, we will use Microsoft Forms for Excel\footnote{\url{https://support.microsoft.com/en-us/office/surveys-in-excel-hosted-on-the-web-5fafd054-19f8-474c-97ec-b606fcda0ff9}} for the user study, which provides the required features for performing a survey. Further, it is easily accessible within the company and already familiar to the participants. Later, we plan to transfer the results to Microsoft Excel to perform the analysis. All content-wise explanations for both \emph{Part\,1} and \emph{Part\,2} of the study are provided as videos and are accessible via an online platform, \eg YouTube or the Bosch-internal equivalent called \emph{BoschTube}. \section{Execution Plan} \label{sec:execution} In this section, we describe the execution plan of both the \emph{online survey (Part\,1)} and the \emph{one-group pretest-posttest user study (Part\,2)}, depicted in \fig{steps}. \subsection{Execution Plan of Part\,1} \label{sec:plan_part1} With the accepted participants from the sampling process described in \sect{participants}, we perform an \emph{online survey (Part\,1)} that comprises four steps (\cf \fig{steps}). 
First, we notify participants regarding the data processing agreement. Additionally, we state explicitly that their names as well as project- and product-related information will be removed when results are shared for evaluation. Then we show a video welcoming the participants and explaining the background and motivation of this survey. Next, we ask participants to answer the demographic questions (Q1 to Q5 in \tab{survey_questions}) and subsequently the main survey questions (Q6 to Q18 in \tab{survey_questions}). Finally, we conclude the survey with a note of thanks. \subsection{Execution Plan of Part\,2} \label{sec:plan_part2} For the one-group pretest-posttest user study, we invite participants from \emph{Part\,1} who indicated knowledge of formal methods. Similar to \emph{Part\,1}, \emph{Part\,2} starts with a data processing agreement, followed by a background and motivation video. Our one-group pretest-posttest user study is executed with the invited participants as follows: a pretest experiment, then the intervention, and finally the posttest experiment. \paragraph{Pretest} The pretest experiment starts with a video demonstrating the pretest experiment with a simple example of an OR-gate behavior. After that, another video explains the system model and specification of an airbag system that serves as a use case for the pretest experiment. During the actual experiment, the participant analyzes the violated specification and the counterexample returned by the model checker to understand the inconsistent parts of the specification. Further, based on her understanding, the participant answers the task questions (TQ1 to TQ9 except TQ4 in \tab{onegroup}). Finally, the pretest is concluded by answering the pre-questionnaire survey questions PRQ1 to PRQ4. \paragraph{Intervention} After the pretest experiment, a video explains the counterexample explanation approach~\cite{KaleeswaranNVG20}. This serves as an intervention in our study. \paragraph{Posttest} Like the steps followed for the pretest experiment, the posttest experiment starts with a demonstration video with the same use case of the OR-gate behavior, but this time with the counterexample explanation approach. This is followed by a video that explains the system model and specification of the electronic power steering system (EPS), a commercial Bosch product. Then the participants interpret the explanation provided by the counterexample explanation approach to understand the inconsistency. Based on the explanation, participants answer the task questions (TQ1 to TQ9 except TQ3 in \tab{onegroup}). Subsequently, they answer the post-questionnaire survey questions POQ1 to POQ4. After completing the posttest experiment, participants rate the features (FQ1 to FQ6 in \tab{onegroup}) provided by the counterexample explanation approach and respond to the feedback questions (FE1 to FE8 in \tab{onegroup}). Finally, \emph{Part\,2} of our study concludes with a note of thanks to the participants. \section{Implications} \label{sec:implications} \emph{Part\,1} of our study will reveal the importance and difficulty of identifying inconsistent specifications introduced during the refinement of top-level specifications, as well as the necessity, acceptance, and challenges of using formal methods at Bosch. Further, with \emph{Part\,2} of our study, we will evaluate whether our counterexample explanation approach helps address the difficulties and challenges identified in \emph{Part\,1} and thereby supports the adoption of formal methods in industrial projects at Bosch. 
\section{Introduction} \label{sec:introduction} \begin{figure}[!tbh] \centering \includegraphics[width=\linewidth]{fig/Overview.pdf} \caption{ \small Overview of the proposed study. \emph{Part\,1} is an online survey performed with a wide range of participants; \emph{Part\,2} is a one-group pretest-posttest experiment performed with formal methods experts. Gray boxes indicate the main tasks, \eg survey questionnaire and pretest and posttest experiments.} \label{fig:steps} \vspace{-1.5em} \end{figure} During the development of safety-critical systems, as and when the requirements or the system specification change, the consistency of the system specification must be verified. In an industrial setting, where this re-verification is almost always done manually~\cite{WaliaC09}, contract-based design (CBD)~\cite{CimattiT12} can substitute this manual work by automating the verification process using a model checker (\cf~Fig.\xspace1 and Sect.\xspace2 in~\cite{KaleeswaranNVG20} for an example). Whenever an inconsistency is found during the verification, the model checker exemplifies the violation by generating a counterexample. It is then up to an engineer to understand the counterexample and to identify the root cause of the violation by manually tracing the violation back from the counterexample to the original system specification. Identifying the inconsistent specifications from a set of specifications is challenging, though, because specifications of real-world use cases can comprise hundreds of pages~\cite{Schuppan16}. Further, identifying faults from a counterexample is error-prone and time-consuming, especially for non-experts in formal methods, because counterexamples are lengthy and cryptic~\cite{BergSJ07,LeueB12,MuramTZ15,BarbonLS19,OvsiannikovaBPV21}. Thus, an automated method for explaining counterexamples is highly desirable to assist engineers in understanding counterexamples and thereby in identifying faults in their models. Formal methods are not new to Bosch. They are used to specify requirements as pattern-based specifications to support verification during product development~\cite{PostMHP12,PostH12}. Additionally, we have presented a \emph{counterexample explanation approach} that attempts to ease the use of formal methods by reducing the manual work and difficulty of interpreting the verification results generated by model checkers~\cite{KaleeswaranNVG20}. Particularly, we target refinements in system design and the verification of their consistency. Usually, engineers refine a top-level component and its specification into sub-components and their respective individual specifications. A model checker can verify the consistency of such a refinement. If an inconsistency was introduced during the refinement by an engineer, the model checker returns a counterexample. Our approach provides an additional explanation of this counterexample to engineers in order to ease understanding the model checking result and identifying the inconsistency in the refinement. To explore whether our counterexample explanation approach~\cite{KaleeswaranNVG20} does improve the understanding of the model checking results, we intend to perform a \emph{one-group pretest-posttest experiment}. Since we want to perform the study with professional engineers working at Bosch, our study requires working time from these engineers and thus implicitly incurs costs. Hence, the number of engineers able to participate in the study will be limited. 
The one-group pretest-posttest experiment supports conducting the study with a limited number of participants. Further, to explore the general acceptance of formal methods and to examine the challenges and complexity faced by engineers in identifying inconsistent specifications, we intend to perform an \emph{online survey}. In this paper, we summarize the research questions we evaluate, the design and execution plan of the study, the target participants, the analysis plan, and threats to validity. \section{Participants} \label{sec:participants} Our counterexample explanation approach focuses on enhancing safety analysis for automotive systems~\cite{KaleeswaranNVG20}. Thus, we are interested in performing this user study only with Bosch automotive engineers, particularly engineers working on system development, requirement elicitation, and safety analysis. The target population for our study is very specific, and thus it is hard to compile a finite list of participants using probabilistic sampling. As per Kitchenham and Pfleeger~\cite{KitchenhamP08}, when a target population is very specific and limited, non-probabilistic sampling can be used to identify the participants. Therefore, we intend to use two non-probabilistic sampling methods for \emph{Part\,1} of our study, namely, \emph{convenience sampling} and \emph{snowball sampling}. Further, we invite participants with knowledge of formal methods for \emph{Part\,2} of our study by filtering the participants of \emph{Part\,1} based on the responses to the demographic questions Q1 to Q3. First, we start with convenience sampling for \emph{Part\,1}. We send e-mails with the survey link to participants collected through department mailing lists and community mailing lists of all relevant Bosch business units. We perform snowball sampling with the accepted participants by asking for further potential participants at the end of the survey. In the e-mail invitation, we will explicitly mention that the anonymity of results will be preserved. Thus, when publishing results or sharing the survey responses for evaluation, we will remove all personal, product-, and project-related information. \section*{Reviewer 1} \noindent We thank the reviewer for the detailed comments and for highlighting the portions that need to be discussed explicitly. \vspace{7pt} \noindent\textbf{R1.1)} The two parts of the study (corresponding to goals G1 and G2) are at quite different levels of generality. G1 is about challenges of identifying inconsistent specifications in a broader context, whereas G2 focuses on the understandability of model-checking counterexamples. The two levels are an effective way of organizing the research questions. However, the gap between the two levels remains fairly wide. Maybe you can think about adding a research question that explicitly connects the two levels. Perhaps this is already implicit in answering RQ1 and RQ2, but the design would benefit from a more explicit discussion of this aspect. \vspace{3pt} \noindent\add{Thanks for pointing out this gap. 
We rephrased RQ3 in the revised draft under the section \textit{Research Questions}.} \vspace{3pt} \noindent\add{RQ3: To what extent do engineers prefer to use formal methods (model checkers particularly) if the difficulty is reduced for understanding verification results to identify inconsistent specifications?} \vspace{7pt} \noindent\textbf{R1.2)} For Part 2, have you considered a different experimental design where a participant is assigned a different treatment (in this case, only the counterexample, or the counterexample plus the additional explanation) on different tasks, and the tasks are chosen to be sufficiently different (for example, they refer to different parts of the system)? This way, provided the risk of a learning effect can be reduced, there would be a more robust design. Perhaps this is not feasible in terms of required resources; but you could explicitly discuss it and give reasons for why it may or may not be infeasible. \vspace{3pt} \noindent\add{First, let us state that we are fully aware of the limitations of our experimental design choice. As discussed in the subsection Part 2: One-Group Pretest-Posttest Design, one of the main drawbacks of using a one-group pretest-posttest design is that it does not meet the scientific standards of an experimental design. For example, pre-experimental study designs do not have a control group like a true experiment. Thus, comparison and generalization of the results based on the provided intervention/stimulus may not be possible.} \vspace{3pt} \noindent\add{However, our decision for a pre-experimental user study design is based on the scarcity of potential participants. To find a considerable number of participants (30 to 40) with knowledge of formal methods and model checkers inside an industrial organization is ambitious. Yet, performing a true experiment with a lower number of participants would raise the threat to the external validity of our study. Therefore, we intend to perform a one-group pretest-posttest experiment with Bosch automotive engineers that allows us to capture results from real-world user behavior with the limited number of potential participants.} \vspace{7pt} \noindent\textbf{R1.3)} Do you plan to do only visualization for the data in Part 2 of the study? Do you think statistics would not be justified given the limited number of participants? Here too, I think these points are worth discussing even if they would confirm my guess about the reasons behind them. \vspace{3pt} \noindent\add{In order to support the results with statistics, we would need two different groups of participants. With these two groups, it would be possible to perform a true experiment and obtain a statistical analysis. However, since we follow a one-group pretest-posttest design due to the limited number of industrial participants, we prefer to choose qualitative analysis and resort to descriptive statistics and visualisation to help the reader to understand our qualitative results.} \vspace{7pt} \noindent\textbf{R1.4)} Section III.A: I'm not sure I understand what the expression "add-on to system safety" is supposed to mean as a variable. Could you find a clearer expression (perhaps "increase in confidence in system safety/correctness"?) \vspace{3pt} \noindent\add{We revised the name of the variable according to the suggestion, i.e., “increase in confidence in system safety”.} \vspace{7pt} \section*{Reviewer 2} \noindent We thank the reviewer for the detailed comments and for highlighting weak spots in our design. 
We agree with the reviewer’s comments and have already considered these points as threats, as well as discussed how to handle these threats in the \textit{Threats to Validity} section. \vspace{7pt} \noindent\textbf{R2.1)} To be honest, I find them a bit unambitious: Increasing preference is frail evidence. I could argue that any additional information might increase the subjective preference. Why do you avoid measuring any kind of actual improvement in understanding? \vspace{3pt} \noindent\add{In order to measure an actual improvement, a true experiment with a control group would need to be performed. However, due to the lack of a significantly large number of Bosch engineers with formal methods expertise, we would not find enough participants to perform a true experiment. The pre-experimental design is therefore a compromise to yield meaningful results with this limited number of participants. These results will help the field and provide a stepping stone for future investigations and experiments.} \noindent\add{We provide measures to mitigate potential threats to validity. The threat of history/maturation is mitigated to a certain degree by explicitly stating to participants that the study results will serve as a reference in the future to use our counterexample explanation approach for real-world projects at Bosch. Additionally, to avoid overly positive responses and accept only valid responses, we will cross-check the answers provided for the task questions (TQ1 to TQ9). Further, to reduce biasing between the pretest and the posttest experiment, the use case of an airbag system (a toy example) used in the pretest is significantly less complex than the use case of the Bosch EPS system.} \vspace{7pt} \noindent\textbf{R2.2)} In Tables I, II and III: Only LS6 is actually a Likert Scale. All the others are simply ordinal scales. \vspace{3pt} \noindent\add{Agreed, we use a Likert scale and 7-point answer scales. As a result, we have changed the term to “Likert and answer scales”.} \vspace{7pt} \noindent\textbf{R2.3)} I was wondering a bit why you need an entirely new questionnaire. What about reusing the TAM2? \vspace{3pt} \noindent\add{We agree with the reviewer’s suggestion of considering using the TAM2 model since part of the study also deals with usability. However, since we were able to adapt the structure of the pretest-posttest experiment and questionnaire from Zaidman et al.~[ZaidmanMSD13], we decided to reuse that rather than using the TAM2 model.} \vspace{3pt} \noindent\add{[ZaidmanMSD13] A. Zaidman, N. Matthijssen, M. D. Storey, and A. van Deursen, “Understanding ajax applications by connecting client and server-side execution traces,” Empir. Softw. Eng., vol. 18, no. 2, pp. 181–218, 2013.} \vspace{7pt} \noindent\textbf{R2.4)} I think one can argue for the pre-post-test design. Yet, I think a design with a control group comparing differences in understanding would be much stronger, even if the statistical power might be weak. I would rather argue that replications could ramify this. Instead, with your design, we only learn possible changes in preference that might be caused by other effects. \vspace{3pt} \noindent\add{We thank the reviewer for highlighting this option. During our early phase of design, we had the same opinion. However, considering the threats due to the limited number of participants for a controlled experiment (see responses R1.2 and R2.1), we believe the threats listed by Campbell and Stanley [CampbellS63] for the one-group pretest-posttest experiment can be handled. 
We came to this conclusion also by considering Zaidman et al.'s work~[ZaidmanMSD13] as a reference.} \vspace{3pt} \noindent\add{[CampbellS63] D. T. Campbell and J. C. Stanley, Experimental and quasi-experimental designs for research. Rand McNally Chicago, 1963.} \vspace{3pt} \noindent\add{[ZaidmanMSD13] A. Zaidman, N. Matthijssen, M. D. Storey, and A. van Deursen, “Understanding ajax applications by connecting client and server-side execution traces,” Empir. Softw. Eng., vol. 18, no. 2, pp. 181–218, 2013.} \vspace{7pt} \section*{Reviewer 3} \noindent We thank the reviewer for the comments and for highlighting the portions that need to be discussed explicitly. \vspace{7pt} \noindent\textbf{R3.1)} Some previous studies have identified challenges or difficulties in using formal methods to develop software systems. What is the special significance of this work designed for Bosch automotive engineers? Therefore, the authors should highlight the difference between this work and previous studies. \vspace{3pt} \noindent\add{The following details are added in the revised draft under the subsection \textit{Part 1: Online Survey}. We refer to and adapt some of the questionnaires from recent user surveys by Gleirscher and Marmsoler [GleirscherM20], and Garavel et al.~[GaravelBP20]. Gleirscher and Marmsoler [GleirscherM20] perform the largest cross-sectional survey with 216 participants to study the existing and intended use of formal methods. Similarly, Garavel et al. [GaravelBP20] conduct a user survey with 130 participants and 30 questions to collect information on the past, present, and future of formal methods in research, industry, and education. The major differences between our study and existing surveys are: (1) most surveys focus on general challenges of using formal methods, whereas we specifically focus on the challenges that engineers face in identifying inconsistent specifications and in understanding counterexamples, and (2) the study is performed with engineers who work on real-world projects.} \vspace{3pt} \noindent\add{[GleirscherM20] M. Gleirscher and D. Marmsoler, “Formal methods in dependable systems engineering: a survey of professionals from europe and north america,” Empir. Softw. Eng., vol. 25, no. 6, pp. 4473–4546, 2020.} \vspace{3pt} \noindent\add{[GaravelBP20] H. Garavel, M. H. ter Beek, and J. van de Pol, “The 2020 expert survey on formal methods,” in Formal Methods for Industrial Critical Systems - 25th International Conference, FMICS 2020, pp. 3–69.} \vspace{7pt} \noindent\textbf{R3.2)} As the authors stated, Part 2 of this study is an exploratory pre-experimental user study. Why not design controlled experiments by offering a training course for selected engineers? The reviewer believes that this experiment may be more valuable to answer RQ3. \vspace{3pt} \noindent\add{We agree that a true experiment would be beneficial. The study design is a compromise due to the limited number of potential participants with formal methods expertise within Bosch, see responses R2.1 and R2.4. Additionally, as a small side note, the didactic skills of the people running the training course and several other aspects would be confounding factors for the experiment, limiting the trustworthiness of a fully controlled experiment. As a result, we made this design choice.} \vspace{7pt} \noindent\textbf{R3.3)} In the reviewer's opinion, this work may output more qualitative analysis results. Why not use quantitative analysis methods? 
\vspace{3pt} \noindent\add{The qualitative results of the study with engineers working on real-world projects will be valuable for the community to better understand the problem and its context. We decided against heavy use of quantitative analysis methods because of the nature of our experimental design. However, we will provide quantitative results collected from our engineers as far as possible to enrich the qualitative evaluation.} \vspace{7pt} \noindent\textbf{R3.4)} Do the authors plan to design and develop a formal verification-oriented translator for end engineers to facilitate the development process using formal methods? The reviewer believes that this work may make more sense. \vspace{3pt} \noindent\add{Absolutely! Modeling the system and formalizing specifications for the counterexample explanation approach discussed in Part 2 are performed using the framework FASTEN~[RatiuNMCV21]. This framework has explicit and extensive support for domain-specific (or even project-specific) abstractions and visualizations. We started working on domain-specific explanations and will continue to do so.} \vspace{3pt} \noindent\add{[RatiuNMCV21] Ratiu D., Nordmann A., Munk P., Carlan C., Voelter M. (2021) FASTEN: An Extensible Platform to Experiment with Rigorous Modeling of Safety-Critical Systems. In: Domain-Specific Languages in Practice. Springer, Cham.} \vspace{7pt} \noindent\textbf{R3.5)} The apparent limitation of a specific domain (automobile) and a single company (Bosch). \vspace{3pt} \noindent\add{While the focus of our survey might look like a single domain and company, automotive is an important safety-critical domain, where complexity is rising. We argue that automotive is therefore a domain particularly suited to investigate our research questions. Additionally, Bosch is the world’s biggest automotive supplier. Thus, we believe our study results will be valuable to the community for understanding the potential challenges of using formal methods, specifically regarding counterexample interpretation and the identification of inconsistent specifications in safety-critical applications.} \newpage \twocolumn \section{Research Questions} \label{sec:rq} Our study aims to explore and understand the challenges in identifying inconsistent specifications, and the acceptance of formal methods by Bosch automotive engineers. Therefore, this user study has two significant goals: \textbf{(G1)}~to understand the challenges faced by Bosch engineers in identifying inconsistent specifications, along with their opinions on using formal verification or formal methods in real-world development processes, and \textbf{(G2)}~to explore whether Bosch engineers are interested in using formal methods, particularly model checking, in real-world development processes if the difficulty of understanding model checking results is reduced by our counterexample explanation approach. Considering these two goals, we formulate the following research questions: \noindent\emph{\rqs{1}: To what extent do engineers face challenges in identifying inconsistent specifications in formal models that are introduced during a refinement of a system?}\\ \noindent With this RQ we want to investigate whether: \begin{itemize}[leftmargin=*] \item Understanding formal notations is difficult for engineers. \item Identifying inconsistent specifications that are introduced during a refinement of a top-level specification is difficult. 
\end{itemize} \noindent\emph{\rqs{2}: To what extent is identifying inconsistent specifications and using formal methods beneficial to a real-world development process?} \\ \noindent With this RQ we want to investigate whether: \begin{itemize}[leftmargin=*] \item Usage of formal verification or formal methods is beneficial in a real-world development process. \item Identifying inconsistent specifications is beneficial in a real-world development process. \end{itemize} \noindent\emph{\rqs{3}: To what extent do engineers prefer to use formal methods (model checkers particularly) if the difficulty is reduced for understanding verification results to identify inconsistent specifications?} \\ \noindent With this RQ we want to investigate whether: \begin{itemize}[leftmargin=*] \item The counterexample explanation approach eases comprehension compared to interpreting the raw model checker output for engineers with a formal methods background. \item The counterexample explanation approach is understandable by engineers with a background in formal methods. \item It is possible for engineers with a background in formal methods to identify and fix inconsistent specifications based on the counterexample explanation approach. \item The counterexample explanation approach can promote formal verification and usage of model checking in real-world development processes. \end{itemize} \section{Threats to Validity} \label{sec:tov} In this section, we discuss threats that may jeopardize the validity of our study results as well as measures we take to reduce these threats. We consider threats to validity as discussed by Wohlin \etal~\cite{WohlinRHO12}, Kitchenham and Pfleeger~\cite{KitchenhamP08}, and Campbell and Stanley~\cite{CampbellS63}. In the following, we structure them according to construct validity, internal validity, and external validity. \paragraph{Construct Validity} The prime threats to construct validity are related to the completeness of the questionnaire and to phrasing questions in a way that is understood by all participants in the same way. To mitigate these threats, we have taken the following measures during our survey preparation: (i)~we incorporated feedback from two senior engineers with a background in formal methods and model checking, (ii)~we incorporated feedback regarding unbiased questions from a psychologist, and (iii)~we intend to perform a pilot test with five research engineers to check for completeness and understandability. \paragraph{Internal Validity} The critical internal threat to be considered for the online survey is the selection of participants. Since we follow snowball sampling for participant selection, there could be a possibility of several participants working in the same project, which could bias the final result. Therefore, we will consider only a small number of participants from each project and neglect further project members. We consider threats to internal validity listed by Campbell and Stanley~\cite{CampbellS63} for the pretest-posttest experiment. To mitigate the \emph{history} and \emph{maturation} threats, we plan to perform both the pretest and the posttest experiments on the same day. The most severe threats to be considered in this experimental design are \emph{testing} and \emph{instrumentation}. Those threats arise because participants get overwhelmed with the intervention. Consequently, participants could answer more positively in the posttest experiment than their actual assessment would warrant. 
To mitigate these threats, we state to the participants explicitly that the obtained study results will serve as a reference in the future to use our counterexample explanation approach for real-world projects at Bosch. Additionally, to avoid overly positive responses and accept only valid responses, we will cross-check the answers provided for the task questions (TQ1 to TQ9). Further, to reduce biasing between the pretest and the posttest experiment, the use case of an airbag system (a toy example) used in the pretest is significantly less complex than the use case of the Bosch EPS system. However, to adjust the difficulty level of the systems used for the experiment, we plan to perform a pilot study with five research engineers. Adjustment of difficulty will be done by increasing or decreasing the number of components and the size of the specifications that need to be understood by the participants. \paragraph{External Validity} One of the severe drawbacks of the one-group pretest-posttest experiment is its limited generalizability. However, the benefit of our study is that we use a real-world system for the posttest experiment, and the participants are professional engineers who work on real-world automotive projects at Bosch. \section{Variables} \label{sec:variables} To attain the goals \textbf{G1} and \textbf{G2}, we perform two different types of exploratory user studies as shown in \fig{steps}. The first study is an \emph{online survey (Part\,1)}; the second study is a \emph{one-group pretest-posttest user study (Part\,2)}. \subsection{Variables of Part\,1: Online Survey} \label{sec:var_part1} Our online survey evaluates the research questions \rqs{1} and \rqs{2}. The independent variables of \emph{Part\,1} are \emph{participants' professional background and experience}. The dependent variables are different for each research question. For the research question \rqs{1}, the dependent variable is the \emph{difficulty in understanding}, which captures whether understanding formal notations and identifying inconsistent specifications is difficult for engineers. Similarly, the dependent variable for \rqs{2} is the \emph{increase in confidence in system safety}, that is, whether the identification of inconsistent specifications and the use of formal methods in real-world development processes can make systems safer. \subsection{Variables of Part\,2: One-Group Pretest-Posttest Design} \label{sec:var_part2} As per Babbie~\cite{Babbie16}, an experimental stimulus (also called an intervention) is the independent variable. In the one-group pretest-posttest design, we use our counterexample explanation approach as the intervention. Therefore, it serves as the independent variable of \emph{Part\,2}. Further, the research question \rqs{3} is evaluated based on the following four attributes that serve as dependent variables for \emph{Part\,2} of our study: \begin{enumerate}[leftmargin=*] \item \emph{Better understanding:} Does the counterexample explanation approach allow engineers to understand model checking results and identify inconsistencies more effectively? \item \emph{Quicker understanding:} Does the counterexample explanation approach allow engineers to understand model checking results and identify inconsistencies more efficiently? \item \emph{Confidence:} Does the counterexample explanation approach make engineers more confident in their understanding of the system and its inconsistency resp. safety? \item \emph{No value:} This attribute is inversely related to the above attributes. 
Will the counterexample explanation approach provide no or only minimal value to real-world projects? \end{enumerate}
\section{Introduction} A central challenge for the realization of a universal quantum computer is the loss of entanglement and coherence due to the detrimental effects of uncontrolled interactions between the system storing quantum information and its environment \cite{Unruh95}. To understand the underlying error processes and predict the performance of quantum processor designs, a realistic model for the manipulation by control hardware and the noise introduced by the environment is required. Such a model is also essential for optimizing the performance, for example using quantum optimal control (QOC) techniques, which adapt control pulses such that a suitably chosen metric is maximized \cite{TrainSchroedCat, qoc_outlook_2010}. QOC methods will be essential to achieve the best possible gate accuracy for quantum devices, both for noisy intermediate-scale systems and future universal quantum computers \cite{optimizedPulsesTransmon, OQCwithrandomizedbenchmarking, Preskill2018, Wang2014, Yang2016, Anderson2015}. For increasing the number of qubits in a quantum processing unit, QOC techniques will likely be needed to address effects like crosstalk of control fields, frequency crowding and unintended coupling between qubits. Furthermore, a well-founded performance assessment for benchmarking, platform selection and system design is only possible if the best possible control approaches are considered. In a physical system, the time-dependent control fields implementing a desired quantum gate, which we refer to as control pulses, are applied to a qubit using classical control hardware (such as arbitrary waveform generators or lasers). In numerical QOC, a qubit's evolution is simulated for a given control pulse, and the pulse is then optimized prior to its application in the experiment. To achieve the best possible qubit performance, this optimization requires a model of the whole system including any effects that influence its evolution. These factors can include the intrinsic properties of the qubits themselves, their coupling, the unintended interaction with the environment \cite{Floether_2012MarkVsNonMark}, as well as their response to the control hardware and its properties. Incorporating the control hardware into the models is also essential for the design of tailored control electronics for quantum computation, e.g., cryoelectronic systems operated in close vicinity to the qubits for achieving a high wiring density. While such systems must provide at least the minimally required control capabilities, additional constraints like a heat dissipation compatible with cryogenic cooling need to be considered \cite{Geck_2019, CharbonCryoCMOS}. Even if the models used for pulse optimization are not sufficiently accurate for the direct ("open loop") experimental application of the results, they can be a useful starting point for further fine-tuning in a closed loop with feedback from an actual experiment \cite{PascalNatureComms, PascalGSCTheo, WilhelmAdHOC2014}. Even then, they should behave as similarly as possible to the physical system and at least qualitatively capture all relevant effects. Simulation frameworks are also helpful for simulating and developing such experimental tuning procedures \cite{PascalHighFidGates, PascalHighFidSingleQubitGatesTheo, WilhelmAdHOC2014}, or they can be combined with the characterization of a qubit in an optimization loop \cite{C3wittler2020integrated}. The desire to achieve realistic models leads to a number of required features. 
The signal transduction from the hardware to the qubit must be modeled including all relevant technological limitations, such as the finite bandwidth of arbitrary waveform generators \cite{TransferFunc} and signal pathways, which affects the actually implemented gate and can also cause gate bleedthrough \cite{KellyBT2014} and crosstalk between different control signals. Furthermore, nonlinear effects may arise, either due to the control hardware itself, or because of a nonlinear relation between the physical control fields and the effective control Hamiltonian, e.g., due to a truncation of the Hilbert space involving a Schrieffer-Wolff transformation. Regarding decoherence effects, it is important to realistically consider all relevant noise sources and their properties. An important aspect is the correlation of noise, which can be incorporated by nontrivial noise spectral densities \cite{Floether_2012MarkVsNonMark}. Correlated noise is a limiting factor for high-fidelity gates in many systems \cite{ChargeNoiseSpectrPhysRevLett.110.146804, PascalHighFidGates, 999Yoneda2018}, but correlations can also be exploited to reduce decoherence effects in dynamically corrected gates. The performance quantification of quantum operations can be extended to include such a mitigation of noise effects, which is often termed robust optimal control \cite{Floether_2012MarkVsNonMark}. In the case of nonlinear relations between the control fields and the qubit Hamiltonian, the effect of noise in a control field automatically depends on the control field itself, thus representing drive-dependent noise. From a performance point of view, it is important to be able to use efficient algorithms that are well-suited for the problem at hand. Many algorithms for the numerical optimization discretize control pulses in time to yield a finite-dimensional parameter space. The discrete elements of the control pulses can then be updated simultaneously using gradient-ascent methods such as GRAPE \cite{grape}, sequentially with Krotov's method \cite{Krotov_Schirmer_2011}, or with gradient-free methods \cite{Huang2017}. More advanced gradient-based algorithms include the use of second-order derivatives \cite{KuprovSecondOrderGradient}, gradient optimization of analytical controls (GOAT) \cite{goat} and the application of the Kalman filter for the estimation of gradients \cite{TeskeKalman}. Another approach is to parameterize control pulses in terms of a randomly chosen subspace using CRAB \cite{crab} or the remote version RedCRAB \cite{RedCRABHeck}. To ease the application of advanced models and algorithms, general-purpose, flexible and easily usable software implementations are highly advantageous. An early example is the unifying algorithmic framework DYNAMO \cite{DYNAMO}, which implements GRAPE and Krotov's method in Matlab and inspired the implementation of an optimal control package in the Quantum Toolbox in Python (QuTiP) \cite{QUTIP}, an open-source software for the simulation of open quantum systems. An additional package introduces Krotov's method to QuTiP as described by Goerz \textit{et al.} \cite{Krotov}. QuTiP's subpackage for quantum information processing introduces the capability to simulate quantum gates on the pulse level, with the option to include noise, but without optimal control techniques. There are also special-purpose optimization frameworks like QEngine \cite{SORENSEN2019135QEngine} for ultracold atoms or Pulser \cite{pulser} for neutral-atom arrays. 
Some of these implementations can be generalized to noisy systems if an open-system description based on master equations is adopted \cite{Koch_2016robustopensys}, thus readily treating Markovian noise. One possibility to deal with non-Markovian noise is the use of ancillary qubits \cite{Floether_2012MarkVsNonMark, Pawela2016methodsdecoherence}, which however is computationally very costly as it substantially increases the Hilbert space dimension. A methodology combining open- and closed-loop optimization is the C3 tool set for integrated control, calibration and characterization \cite{C3wittler2020integrated}. While these simulation frameworks are widely and successfully used in their respective domains, we found them to be less suited and difficult to extend to address the above requirements for realistic, hardware-aware simulation. We thus implemented the new Python package \qopt{} \cite{qopt}, which was in many ways inspired by QuTiP's optimal control subpackage, but has in some aspects a different structure. Specifically, the performance of sequential optimization algorithms like Krotov's method is based on the possibility to efficiently update pulses independently in each time step, which is incompatible with our current implementation of parameterized pulses and transfer functions. Concurrently with our work, the startup Q-CTRL developed software with similar methods but pursued a different strategy and targeted a commercial audience \cite{QCTRLball2020software}. As in \qopt{}, these methods include generalized filter functions and the simulation of noise by explicitly sampling noise distributions in Monte Carlo simulations. The imperfections of control hardware are modeled by transfer functions. Additionally, Q-CTRL provides methods for the noise characterization of a given system. While one may expect that the commercial multi-purpose software of Q-CTRL leads to a feature-rich and easy-to-use solution, the closed-source approach reduces transparency and flexibility, which are often important for research use. As a GPL3-licensed open-source package, \qopt{} complements this approach, targeting mainly the scientific community in academia and industry. Besides low barriers to entry, the modular structure and complete API documentation \cite{qopt-docs} provide full flexibility in the implementation of new techniques that expand the application to unsolved problems. The user can supply her own optimization algorithm or replace any other relevant part. Multi-processing is also supported. This paper gives an overview of \qopt{}'s capabilities, while full documentation and numerous introductory examples can be found online on readthedocs \cite{qopt-docs}. Section \ref{sec:problem and methods} describes the mathematical formulation used for the experimentally oriented simulation of qubits and the application in QOC. The actual implementation of the \qopt{} package is portrayed in Section \ref{sec::implementation}, including a practical example. Finally, an outlook is given in Section \ref{sec:summary and outlook}. 
\section{Problem formulation and simulation methods} \label{sec:problem and methods}

\subsection{Problem formulation}

A rather general model for a driven qubit system subject to (classical) noise can be described by a Hamiltonian of the form \begin{align} H(t) &= H_c(t) + H_d(t) + H_n(t) \\ H_c(t) &= \sum_k u_k(t) C_k \label{eq::hamiltonian} \\ H_n(t) &= \sum_k b_k(t) s_k(t) C_k \end{align} with the control Hamiltonian $H_c$, the drift Hamiltonian $H_d$ and the noise Hamiltonian $H_n$. The control Hamiltonian models the manipulation of the system with time-dependent control amplitudes $u_k$ and Hermitian operators $C_k$. The drift Hamiltonian $H_d$ incorporates any effects that cannot be freely controlled but still affect the dynamics. It describes the natural evolution of the system in the absence of any control, e.g., due to a fixed energy splitting. The noise Hamiltonian $H_n$ models unintentional interactions of the system with the control hardware or the environment, like electrical noise on the control amplitudes or the interaction with electromagnetic fields from the host material. The noise amplitudes $b_k$ describe the strength of the perturbation, while the noise susceptibilities $s_k$ describe the coupling strength to the noise source. $s_k$ can depend on the control amplitudes $u_k$ to model noise originating from the control mechanism. We assume the noise to be classical, zero-mean and (wide-sense) stationary. Auto-correlated classical noise is characterized (up to higher-order correlations for non-Gaussian processes) by its spectral density $S_k(\omega)$ defined as the Fourier transform of the correlator $\langle b_k(t_1)b_k(t_2) \rangle = \frac{1}{2 \pi} \int_{- \infty}^{\infty} S_k(\omega) e^{-i\omega (t_1 - t_2)} d\omega$.

In many experiments the control amplitudes $u_k(t)$ in the Hamiltonian from Eq.~\eqref{eq::hamiltonian} are not directly controllable by the experimentalist. Rather, they are functions of physical control fields $v_i(t)$. We call the mapping of $v_i$ to $u_k$ the \emph{amplitude function} (see Fig. \ref{fig:activity diagram}). An example is control by Rabi driving, where the envelope $v_1(t) = A(t)$ and phase offset $v_2(t) = \delta(t)$ of a control signal appear in the Hamiltonian as $u_1(t) = A(t) \sin(\omega t + \delta(t))$. Nonlinear amplitude functions can also arise from the truncation of the Hilbert space. For example, the exchange interaction of two electron spins depends nonlinearly on the detuning between different orbital states of the electrons when the orbital states are truncated.

Furthermore, imperfections of the control electronics can be modeled by the use of linear transfer functions \cite{TransferFunc} acting on the controllable optimization parameters $v_i$. This can be done by oversampling and smoothing the control pulse, e.g., by convolution with a Gaussian kernel, or by using a realistic transfer function measured for a specific device. Our implementation also allows the user to add boundary conditions by padding the beginning and end of each pulse with appropriate values. A common use case is an additional idle time at the end of each pulse in order to avoid transients across pulses \cite{PascalHighFidGates}. Note that the amplitude and transfer functions have similar roles but different constraints: the amplitude function can be nonlinear but must be local in time, whereas the transfer function must be linear but can be nonlocal in time. The transfer function is applied before the amplitude function.
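To illustrate this pipeline, the following minimal NumPy sketch (an illustration only, not \qopt{}'s actual API; the kernel width, oversampling factor, and Rabi parameters are assumptions made for the example) composes a Gaussian-smoothing transfer function with the Rabi amplitude function from above:
\begin{lstlisting}
import numpy as np

def transfer_function(v, oversampling=8, sigma=4.):
    # Linear but nonlocal in time: oversample the
    # piecewise-constant pulse and smooth it with a
    # Gaussian kernel to mimic finite bandwidth.
    v_fine = np.repeat(v, oversampling)
    radius = int(4 * sigma)
    t = np.arange(-radius, radius + 1)
    kernel = np.exp(-t**2 / (2 * sigma**2))
    kernel /= kernel.sum()
    return np.convolve(v_fine, kernel, mode='same')

def amplitude_function(A, delta, omega, times):
    # Nonlinear but local in time:
    # u_1(t) = A(t) * sin(omega * t + delta(t)).
    return A * np.sin(omega * times + delta)

rng = np.random.default_rng(0)
A = transfer_function(rng.random(20))
delta = transfer_function(rng.random(20))
times = np.linspace(0., 10., len(A))
u_1 = amplitude_function(A, delta, 2 * np.pi, times)
\end{lstlisting}
Applying the transfer function to the raw optimization parameters before evaluating the amplitude function mirrors the ordering stated above.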
\subsection{Noise Simulation}

For the numerical solution of the Schroedinger equation, we assume piecewise constant control during $n_t$ time steps of length $(\Delta t_1, \dots, \Delta t_{n_t})$. The total unitary propagator of an evolution is calculated as a product of matrix exponentials of the time-independent Hamiltonians $U = e^{-i H(t_{n_t})\Delta t_{n_t}} \cdots e^{-i H(t_2)\Delta t_2} \, e^{-i H(t_1)\Delta t_1}$ using the convention $\hbar = 1$. Noise can be taken into account with several methods, which might be more or less appropriate and numerically efficient depending on the noise properties.

Explicitly generating numerous noise traces whose Fourier transform converges (on average) to the noise spectral density $S$ is one of the simplest methods. In this approach the highest relevant noise frequency sets the required time step for the numerical integration, so that additional oversampling is required if it exceeds the bandwidth of the pulse itself. Since the numerical complexity of the simulation grows proportionally with the oversampling, such a Monte Carlo approach becomes computationally inefficient if the noise spectral density cannot be neglected at frequencies much higher than the bandwidth of the control electronics. Even if this is not an issue, many repetitions are required to gather statistics. If the main noise contribution occurs at frequencies much lower than those of the simulated dynamics, it is sufficient to consider the noise amplitude to be static during the pulse. For few noise sources, explicit numerical integration over the (typically Gaussian) probability distribution of these noise values is more efficient than Monte Carlo sampling in small dimensions \cite{CaflischMonteCarloGeneral}. The user can choose between both methods in \qopt{}.

If high noise frequencies are relevant, more efficient methods than Monte Carlo or numerical integration are available. They are based on master equations and filter functions for white and auto-correlated noise described by spectral densities, respectively \cite{ConvertLindblad, Green_2013, PascalHighFidSingleQubitGatesTheo, PascalHighFidGates}. In the special case of Markovian (uncorrelated) noise, the influence on the qubit system can be described by an effective master equation in Lindblad form \cite{ConvertLindblad}. In this master equation, the von-Neumann equation is complemented by a dissipation term leading to non-unitary dynamics. The Lindblad form is \begin{align} \dot{\rho} = -\frac{i}{\hbar}\left[ H, \rho \right] + \sum_n \gamma_n \left(L_n \rho L_n^\dagger - \frac{1}{2}\{ L_n^\dagger L_n, \rho \}\right), \end{align} where the Lindblad operators $L_n$ describe dissipation effects and can themselves depend on the control amplitudes. The master equation is written as a linear system of equations with the Kronecker matrix product $\otimes$ and solved as a matrix exponential: \begin{align} \text{vec}(\rho) (t) & = \exp[(-i\mathcal{H} + \mathcal{G})t] \; \text{vec}(\rho)(0), \\ \mathcal{H} &= I \otimes H - H^T \otimes I, \\ \mathcal{G} &= \sum^K_{k=0} \mathcal{D}(L_k), \\ \mathcal{D}(L) &= L^\ast \otimes L - \frac{1}{2} I \otimes (L^\dag L) - \frac{1}{2} (L^TL^\ast) \otimes I, \end{align} where $\text{vec}(\rho)$ denotes the density matrix written as a vector in column-wise order and the rates $\gamma_k$ are absorbed into the operators $L_k$. The derivation of a master equation in Lindblad form requires the assumption of Markovian (uncorrelated) noise.
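The vectorized propagation above can be reproduced directly with dense linear algebra; the following sketch (a plain NumPy/SciPy illustration rather than \qopt{} code, assuming a single time-independent Lindblad operator with its rate absorbed) builds $\mathcal{H}$ and $\mathcal{G}$ and propagates a density matrix:
\begin{lstlisting}
import numpy as np
from scipy.linalg import expm

def lindblad_propagator(H, Ls, t):
    # vec(rho)(t) = expm[(-i Hcal + Gcal) t] vec(rho)(0)
    # with column-wise (Fortran-order) vectorization.
    d = H.shape[0]
    I = np.eye(d)
    Hcal = np.kron(I, H) - np.kron(H.T, I)
    Gcal = sum(
        np.kron(L.conj(), L)
        - .5 * np.kron(I, L.conj().T @ L)
        - .5 * np.kron(L.T @ L.conj(), I)
        for L in Ls)
    return expm((-1j * Hcal + Gcal) * t)

# Example: decay of the excited state under an X drive.
s_x = np.array([[0, 1], [1, 0]], dtype=complex)
s_m = np.array([[0, 1], [0, 0]], dtype=complex)
rho_0 = np.diag([0., 1.]).astype(complex)
U_s = lindblad_propagator(.5 * s_x, [.1 * s_m], t=1.)
rho_t = (U_s @ rho_0.reshape(-1, order='F')
         ).reshape(2, 2, order='F')
\end{lstlisting}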
This Markovian assumption does not hold for many experimentally relevant noise sources such as flux noise in superconducting qubits and charge noise in many types of solid-state qubits, which typically have a $1/f$-like spectrum. The filter function formalism provides a mathematical tool which can (perturbatively) model the decoherence caused by arbitrary classical noise \cite{Green_2013}. Both master equations and filter functions have already been employed for numerical pulse optimization \cite{ffoptPhysRevA.99.042310, PascalHighFidSingleQubitGatesTheo}.

In the filter function formalism, a so-called filter function $F_\alpha$ can be calculated from the evolution of the system for each noise source $\alpha$. For a given control pulse, $F_\alpha$ captures the susceptibility of the resulting quantum channel to the noisy quantity as a function of frequency. The noise contribution to the entanglement infidelity or other figures of merit can be calculated as the frequency integral of the product of the filter function and the spectral noise density \begin{align} \label{eq:entanginfidff} \mathcal{I}_{\text{ff}} = \int_{-\infty}^{\infty} \frac{d\omega}{2 \pi} S_\alpha(\omega) F_\alpha(\omega). \end{align} The filter function formalism is an approximate and efficient method to calculate the infidelity caused by fast non-Markovian noise of small amplitude. It outperforms Monte Carlo simulations for small systems, while Monte Carlo methods scale better with an increasing number of qubits \cite{hangleiter2021filter, cerfontaine2021filter}. The numerical routines for the calculation of filter functions and their derivatives with respect to the control amplitudes $\frac{\partial F_\alpha}{\partial u_k(t)}$ are provided by the software package filter\_functions \cite{filter_func_package}.

\subsection{Fidelity measures}

\qopt{} implements various fidelity measures to quantify the accuracy of quantum operations. For state transfers, the state fidelity between the initial and final quantum states described by the density matrices $\rho_1$ and $\rho_2$ can be used, which is defined as $ \mathcal{F}_{\text{st}} (\rho_1, \rho_2) = \left[\text{tr}\left (\sqrt{\sqrt{\rho_1}\rho_2 \sqrt{\rho_1}}\right)\right]^2. $ One commonly used measure for the closeness of two quantum gates is the entanglement infidelity $\mathcal{I}_{\text{e}}(V^\dag \circ U)$. If both the simulated propagator $U$ and the target gate $V$ are unitary, the entanglement fidelity $\mathcal{F}_{\text{e}} = 1- \mathcal{I}_{\text{e}}$ is given in terms of the Hilbert--Schmidt inner product as \begin{align} \mathcal{F}_{\text{e}}(V^\dag \circ U) = \frac{1}{d^2} |\text{tr}(V^\dag U)|^2, \end{align} where $d$ is the Hilbert space dimension. This fidelity can also be generalized for open quantum systems \cite{Grace_2010openfidelity} and we calculate it as \begin{align} \mathcal{F}_o(V, U_s) &= \frac{1}{d^2} \text{tr}\left( \left(V^T \otimes V^\dag \right) U_s \right), \end{align} where $U_s$ is the simulated noisy quantum process.

Leakage occurs in a quantum processor if states outside the computational subspace are populated. To simulate this error source, we extend the Hilbert space of computational states $\mathcal{H}_c$ as the direct sum $\mathcal{H} = \mathcal{H}_c \oplus \mathcal{H}_l$ with the space $\mathcal{H}_l$ spanned by the leakage states. We quantify the amount of information lost into the leakage subspace by projecting the unitary evolution of the entire system onto the computational states and calculating the distance to unitarity of the projected propagator $U_c$ as \begin{align} \mathcal{L} = 1 - \text{tr}(U_c^\dag U_c) / d.
\end{align} In order to investigate the amount of incoherent leakage, i.e., leakage caused by noise, this cost function can be combined with a noise simulation.

\subsection{Optimization Procedures}

The minimization of such an infidelity over the optimization space spanned by all possible control pulses $u(t)$ formally defines QOC as the minimization problem: \begin{align} \min_{u(t)} \mathcal{I}(u(t)) = \mathcal{I} (u^\ast(t)). \end{align} Although \qopt{} provides a general interface for cost functions that can be used with any optimization algorithm, we set a special focus on the use of gradient-based optimization algorithms. They are widely used and also applicable in QOC \cite{grape}. For this purpose we implement the analytic calculation of exact gradients, which require no assumptions about the time discretization or the control strength and represent an alternative to the use of automatic differentiation \cite{C3wittler2020integrated, TrajectoriesAutodiff}. When multiple cost functions are evaluated, the software supplies them as a vector to the optimization algorithm. This leaves more options for the optimization and allows each cost function to be weighted individually.

\section{Implementation} \label{sec::implementation} \begin{figure*} \centering \includegraphics[width=\textwidth]{QoptStructure.pdf} \caption{Activity diagram of a pulse optimization with \qopt{}. The solid and encircled dots show the start and end point, respectively. The arrows mark the flow of activity and the rounded rectangles the classes of the \qopt{} package, with base class names written in bold, while the parallelograms contain direct input from the model. The box of the simulator is dashed because it defines an interface between the simulation and the optimization. Concrete examples or explanations are given in brackets below the class names, e.g., the example for a transfer function is the convolution with a Gaussian kernel.} \label{fig:activity diagram} \end{figure*} In this section we present \qopt{}'s object-oriented implementation by first discussing the structure of an optimal control setup with \qopt{} and then outlining the optimization of an $X_{\pi / 2}$ gate for a single qubit controlled by Rabi driving as a simple illustrative example.

\subsection{Program Flow}

The intended setup is plotted as an activity diagram in Fig.~\ref{fig:activity diagram} giving a full picture, although not every feature and every class needs to be used in practice; only the solver class is essential to every simulation. The diagram shows the modular software structure and the flow of information between the classes.

The optimization commences with a set of initial optimization parameters $v_k$, which can be chosen randomly or based on some insight. The package also features convenience functions to run the simulation with many different initial conditions in parallel, exploiting this embarrassingly parallel problem structure. First, the ideal pulse parameters are mapped to the actual pulse seen by the qubits as defined by a transfer function class. Then, the control amplitudes $u_k$ are calculated by the amplitude function class. The control amplitudes enter the Schroedinger or master equation in Lindblad form together with the drift dynamics and the noise. The appropriate differential equation to describe the system is chosen by selecting the solver algorithm class.
The solution of the differential equation is subsequently passed to the cost function class to calculate the figure of merit for the optimization. The simulator class is encircled in Fig.~\ref{fig:activity diagram} by a dashed box because it provides the interface between this simulation and the optimization algorithm. Furthermore, it gathers runtime statistics such as the time spent in each cost function and in the calculation of the gradient. The optimizer class uses this interface to run the simulation in a loop until the internal termination criteria are met. Then it saves the final state in a result class and passes it to the data container class. The analyzer class can be used to visualize the results of several optimization runs stored in the data container class.

The object-oriented modular implementation of the code allows the user to easily replace single parts of the optimization framework. Among other things, this allows the user to make changes to the cost function (e.g., to use a different fidelity metric) or use a specific transfer or amplitude function. With the interface provided by the simulation class it is possible to use most standard optimization algorithms. Currently, the \texttt{minimize} and \texttt{least\_squares} functions of scipy's optimization subpackage are supported \cite{scipy}.

Numeric operations are encapsulated in an operator class. The computationally most expensive single operation during the simulation is the calculation of a matrix exponential, as required for the numeric solution of the differential equations. The encapsulation allows the exchange of algorithms for the calculation of this matrix exponential (see \cite{19matrixexponentials} for examples). The Qobj class from QuTiP can be converted automatically into the \qopt{} operator class to improve the compatibility between both packages. This makes it easy to transfer simulations to \qopt{}.

More information about \qopt{} can be found in the online documentation \cite{qopt-docs}. It features a complete API reference documentation and two series of IPython notebooks. The former explains each feature of \qopt{} in detail, while the latter discusses practical examples, including some information about the physics and numerics of qubit simulation, how noise sources can be characterized, which type of noise simulation is the most efficient in each case, and what effects noise has on the system. These notebooks also serve as integration tests by demonstrating the consistency of different methods and the comparison with analytic calculations. Together with various unit tests for the critical parts of the implementation, they ensure the reliability of \qopt{}.

\subsection{Example}

\begin{figure} \centering \includegraphics{Optimization.eps} \caption{Infidelities as cost functions of the iteration during the optimization. (A) During the first optimization, we minimize the infidelity in a Monte Carlo simulation of quasi-static noise $\mathcal{I}_{\text{qs}}$ to find a pulse which is not susceptible to slow noise. (B) Subsequently, we use the final parameters of the first optimization to optimize the pulse for pink noise. We can see that the infidelity $\mathcal{I}_{\text{ff}}$ of Eq.~\eqref{eq:entanginfidff} decreases during the second optimization by about a factor of six.
Thus pulses that mitigate slow noise are far from perfect in mitigating pink noise.} \label{fig:optimization_example} \end{figure} \begin{figure} \centering \includegraphics{QsPulse.eps}\\ \includegraphics{FastPulse.eps} \caption{Plots of the optimized control amplitudes as functions of time. (A) In the first optimization run, the control pulse is optimized for quasi-static noise. (B) The control amplitudes of the first run are used as starting point for an optimization in the presence of pink noise.} \label{fig:pulse_plots} \end{figure}

We illustrate the usage of \qopt{} for pulse optimization with the example of an $X_{\pi/2}$ single-qubit gate and optimize the pulse separately for quasi-static and auto-correlated fast noise, so that we can demonstrate how different noise models are implemented in \qopt{}'s API. The example system is a single qubit manipulated by Rabi driving in the rotating frame, with Hamiltonian $H = u_x(t) \sigma_x + u_y(t) \sigma_y + \delta_\omega(t) \sigma_z$, where $u_x(t)$ and $u_y(t)$ are the control amplitudes (corresponding to the in-phase amplitude I and the quadrature amplitude Q of quadrature amplitude control), $\delta_\omega(t)$ is the deviation of the driving frequency from the resonance frequency, and $\sigma_i$ for $i \in \{x,y,z\}$ are the Pauli matrices. The detuning $\delta_\omega(t)$ enters the Schroedinger equation as a stochastic variable.

In a first optimization we assume that the resonance frequency changes on timescales much longer than our gate time, so that it can be taken as constant during the pulse ($\delta_\omega(t) =\delta_\omega$). We therefore integrate $\delta_\omega$ over a Gaussian distribution and calculate the corresponding infidelity $\mathcal{I}_{\text{qs}}$ using a Monte Carlo method. In the second optimization we use the final parameters of the first optimization as the initial pulse and assume that the resonance frequency is subject to pink noise, so that the power spectral density of $\delta_\omega(t)$ has the form $S(f) = S_0 / f$. In this case we calculate the entanglement infidelity caused by systematic deviations $\mathcal{I}_{\text{e}}$ separately from the one caused by noise, which we calculate with filter functions $\mathcal{I}_{\text{ff}}$ as in Eq.~\eqref{eq:entanginfidff}.

\lstset{firstnumber=last} We optimize a pulse with 20 time steps of equal length:
\begin{lstlisting}
import qopt as qo
import numpy as np
import matplotlib.pyplot as plt

n_time_steps = 20
delta_t = .5 * np.pi
\end{lstlisting}
We start setting up the first simulation with quasi-static noise. The noise trace generator (NTG) provides the noise samples for the solver algorithm:
\begin{lstlisting}
noise_gen = qo.NTGQuasiStatic(
    n_samples_per_trace=n_time_steps,
    n_traces=10,
    standard_deviation=[.05, ]
)
\end{lstlisting}
The solver class holds the information about the Hamiltonian, including the corresponding noise trace generator. The quantum operators are represented by the dense operator class, which is based on a numpy array. We can also choose the exponential method, which is the algorithm used to calculate the matrix exponential and, on demand, its derivative (usually in combination). \qopt{} implements, among other methods, a spectral decomposition or, as in this example, the default scipy method for the calculation of the Fréchet derivative of the matrix exponential. The solver class also interfaces to the filter functions package \cite{filter_func_package}.
\begin{lstlisting}
solver = qo.SchroedingerSMonteCarlo(
    h_ctrl=[.5 * qo.DenseOperator.pauli_x(),
            .5 * qo.DenseOperator.pauli_y()],
    h_drift=[0 * qo.DenseOperator.pauli_x()],
    tau=delta_t * np.ones(n_time_steps),
    exponential_method='Frechet',
    h_noise=[.5 * qo.DenseOperator.pauli_z()],
    noise_trace_generator=noise_gen,
    filter_function_h_n=[
        [qo.DenseOperator.pauli_z().data,
         np.ones(n_time_steps)]
    ]
)
\end{lstlisting}
The target operation is an $X_{\pi/2}$-gate and the cost function for the first simulation evaluates the mean deviation between the simulated propagator and the target gate. We can also choose to neglect the systematic errors to calculate the average entanglement fidelity between the unperturbed simulation and the simulations including noise. The optimization algorithm is by default the gradient-based L-BFGS-B algorithm implemented in scipy, and the bounds restrict the search space.
\begin{lstlisting}
target = (
    qo.DenseOperator.pauli_x()
).exp(.25j * np.pi)

cost_func_qs = qo.OperationNoiseInfidelity(
    solver=solver,
    target=target,
    neglect_systematic_errors=False,
    label=[r'$\mathcal{I}_{\mathrm{qs}}$', ]
)

optimizer_qs = qo.ScalarMinimizingOptimizer(
    system_simulator=qo.Simulator(
        solvers=[solver, ],
        cost_funcs=[cost_func_qs, ]
    ),
    bounds=[[-1, 1], ] * 2 * n_time_steps
)
\end{lstlisting}
In the second simulation we use one cost function for the systematic deviations, calculated by the standard entanglement fidelity, and a second cost function calculating the infidelity caused by pink noise based on filter functions. The sampling frequencies for the integral in equation \eqref{eq:entanginfidff} are set via the keyword argument \texttt{omega}.
\begin{lstlisting}
cost_func_plain = qo.OperationInfidelity(
    solver=solver,
    target=target,
    label=[r'$\mathcal{I}_{\mathrm{e}}$']
)

def noise_psd(f):
    return 1e-3 / f

total_time = n_time_steps * delta_t
start = np.log10(1 / total_time)
end = np.log10(1 / delta_t)

cost_func_ff = \
    qo.OperatorFilterFunctionInfidelity(
        solver=solver,
        noise_power_spec_density=noise_psd,
        omega=np.logspace(start, end, 200),
        label=[r'$\mathcal{I}_{\mathrm{ff}}$']
    )

optimizer_ff = qo.ScalarMinimizingOptimizer(
    system_simulator=qo.Simulator(
        solvers=[solver, ],
        cost_funcs=[cost_func_plain,
                    cost_func_ff]
    ),
    bounds=[[-1, 1], ] * 2 * n_time_steps
)
\end{lstlisting}
The simulation is executed with the following commands:
\begin{lstlisting}
np.random.seed(0)
result = optimizer_qs.run_optimization(
    initial_control_amplitudes=
        np.random.rand(20, 2))
result_ff = optimizer_ff.run_optimization(
    initial_control_amplitudes=
        result.final_parameters)
\end{lstlisting}
The resulting data can be stored with the data container class and plotted with the analyzer class.
\begin{lstlisting}
data_qs = qo.DataContainer()
data_qs.append_optim_result(result)
analyser_qs = qo.Analyser(data_qs)

data_ff = qo.DataContainer()
data_ff.append_optim_result(result_ff)
analyser_ff = qo.Analyser(data_ff)
\end{lstlisting}
Some cosmetic plotting commands are omitted for the sake of brevity. We can plot a final pulse (compare to Fig.~\ref{fig:pulse_plots}) with:
\begin{lstlisting}
solver.transfer_function.plot_pulse(
    result.final_parameters)
\end{lstlisting}
And the infidelity during the optimization (compare to Fig.~\ref{fig:optimization_example}) with:
\begin{lstlisting}
fig, axes = plt.subplots(2)
analyser_qs.plot_costs(ax=axes[0])
analyser_ff.plot_costs(ax=axes[1])
\end{lstlisting}
Each optimization took less than $\SI{10}{\s}$ on a desktop PC.
The decrease in the infidelities during the optimizations can be seen in Fig.\,\ref{fig:optimization_example} and the final pulses are plotted in Fig.\,\ref{fig:pulse_plots}. With the second optimization run, the infidelity as calculated in Eq.~\eqref{eq:entanginfidff} is decreased by about a factor of six, demonstrating the benefit of explicitly considering fast auto-correlated noise.

\section{Summary and Outlook} \label{sec:summary and outlook}

Our open-source software package \qopt{} \cite{qopt} provides a general platform for the realistic simulation of qubit systems and QOC. We set the focus on the accurate and efficient simulation of arbitrary classical noise by including three noise simulation methods with distinct application areas. Quasi-static noise is efficiently simulated with Monte Carlo methods or numerical integration, Markovian noise is described best by a master equation in Lindblad form, while fast non-Markovian noise can be treated with filter functions. In the implementation of each method, the noise model can be drive-dependent. The limitations of control hardware are accounted for by the use of transfer functions. In addition to the example provided in this paper, the online documentation \cite{qopt-docs} contains a complete API documentation and numerous IPython notebooks discussing \qopt{}'s features and an introduction to practical simulations. We also provide an open-source repository of public simulation and optimal control projects called \texttt{qopt-applications} \cite{qopt-applications}. It can serve as a starting point for new simulations and facilitates the transfer of knowledge and new optimal control techniques.

QOC will continue to play a role in the search for the optimal qubit system for the construction of universal quantum computers. The increasing number of qubits in quantum processors leads to new challenges like the mitigation of crosstalk between adjacent qubits, robustness of quantum gates towards qubit inhomogeneities, or increased noise levels in quantum processors operated at higher temperatures. Since the improvements from adopting QOC can be dramatic, any assessment of a quantum computation platform should take the applicable control techniques into account. The clean interface between the simulation and the optimization algorithm makes \qopt{} ideal for the comparison of various optimization algorithms in the context of QOC. Novel AI-based optimization algorithms are interesting candidates.

While we have found \qopt{} to be very useful in a number of applications, there is certainly much room for extensions. One such feature could be the application of \qopt{} for spectral noise analysis. This could, for example, be achieved by the introduction of a cost function class measuring the sensitivity of a pulse towards noise of a specific frequency with the help of filter functions. If performance becomes a bottleneck, \qopt{} could profit from high-performance compilation with numba \cite{numba}, the use of other algorithms for the calculation of matrix exponentials, or highly optimized implementations of performance-critical functions in a compiled language. For even more general modeling and pulse parameterization capabilities, the amplitude and transfer function classes could be generalized, allowing, for example, the application of the amplitude function before the transfer function. We published \qopt{} with the vision of establishing a new community standard for qubit simulations.
The application of a common simulation platform makes simulations less time-consuming and more reproducible compared to the use of special-purpose simulation code. Reproducibility increases the trust in simulation results and facilitates the transfer of simulation and optimal control techniques between different qubit systems. We thus encourage users to upload new simulation code to \texttt{qopt-applications} \cite{qopt-applications} to increase its visibility and contribute to the advancement of the state of the art. We encourage participation in the development and welcome feedback on which new features would be useful.

\acknowledgements{We thank Alexander Pitchford, Eric Giguère, Neill Lambert and Franco Nori for helpful discussions and their advice in the design of \qopt{}. We also thank Alexander Willmes, Christian Gorjaew, Paul Surrey, Frederike Butt and Jiaqi Ai for testing \qopt{} and providing feedback. We acknowledge support from the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme (grant agreement No 679342) and from the Impulse and Networking Fund of the Helmholtz Association.}

\bibliographystyle{apsrev4-1}
\section{Introduction}

Data-driven algorithms have altered the way we approach problem-solving in computer graphics. Machine learning tools garner top performance for tasks like image editing, user interaction, image synthesis, and layout, supported by large, well-curated datasets. Yet, while learning tools for areas like computational photography and rendering are widely adopted, another branch of graphics has been resistant to change: mesh-based shape analysis.

Numerous technical challenges preclude modern learning methods from being adopted for meshes. Deep learning---arguably the most popular recent learning methodology---relies on \emph{regularity} of the data and \emph{differentiability} of the objective function for efficiency. For example, convolutional neural network (CNN) training is built on high-throughput processing of images through convolution and per-pixel computations to obtain gradients with respect to network weights, required for stochastic gradient descent.

Meshes, a primary means of representing geometry in graphics, defy the considerations above. They come as sparse, irregular networks of vertices varying in number; the same piece of geometry easily can be represented by multiple meshes and at multiple resolutions/densities. Advances in graph neural networks (GNNs) have as a byproduct helped advance mesh processing, but typical graphs in geometry processing are fundamentally different from those in network science---vertices have low valence, are related through long chains of edges, can be connected in many roughly-equivalent ways, and can be deformed through rigid motions and isometries. The end result is that mesh-based learning architectures often contort input data to make it compatible with existing learning toolkits.

Restricting to GPU-parallel, regularly-structured computation is a severe limitation for mesh analysis. For example, while geometry processing algorithms frequently rely on inversion and eigenanalysis of sparse matrices, these operations are hardly compatible with deep learning. Instead, mesh-based learning algorithms differ from successful non-learning geometry processing algorithms, relying on easily differentiated/parallelized local operations.

In this paper, we ask whether we can invert this relationship: Rather than inventing new data streams for geometry processing to suit existing learning algorithms, can we develop learning methodologies from successful geometry processing techniques? We target applications in shape analysis using a prevalent tool in that domain, spectral geometry. Myriad shape analysis algorithms follow a similar template, building a positive (semi)definite matrix whose sparsity pattern is inherited from the mesh and then using its spectral structure to infer information about meshed geometry. Some examples include the Laplacian operator, the bilaplacian operator, the Dirac operator, and modal analysis. Our broad goal---also explored in some past work (see \S\ref{sec:related})---is to \emph{learn} the entries of this operator as functions of local geometry. Unlike past work, however, we observe that classical shape analysis relies on \emph{near-zero eigenvalues} of these operators (and the corresponding eigenvectors); high-frequency information is discarded. This is a key reason why classical geometry processing algorithms involve sparse matrix inversion and partial computation of eigenvalues.
Partial eigenvector computation from a sparse matrix, however, is incompatible with most existing learning pipelines, so learning algorithms that use low-order eigenanalysis typically precompute the relevant eigenvectors from a fixed operator. Approaches to operator learning work with operator-vector products (rather than inverting the operator), restrict to a pre-computed basis, or compute the full spectrum as a dense matrix, which is prohibitive for large meshes. In this paper, we approximately differentiate through sparse operator construction for one class of operators motivated by discrete differential geometry. As a result, we can learn operators whose entries are functions of local geometry, which together modify the spectral structure of the operator---a global computation. Our method is competitive with existing mesh-based learning tools while being implemented from standard components of the geometry processing toolkit, and we show how to handle boundary conditions and vector-valued data. We make some unconventional design decisions that resemble geometry processing rather than deep learning. For instance, our spectral computation and operator construction are implemented using sparse linear algebra on the CPU, and we implement geometry processing-specific strategies for data augmentation that promote resolution independence. These decisions do not hamper efficiency of our method relative to past work. \paragraph*{Contributions.} We present a lightweight model for learning from triangle meshes, with or without boundary. Contributions include: \begin{itemize} \item a learnable class of sparse operators on meshes built from standard constructions in discrete exterior calculus; \item parallelizable algorithms for differentiating eigencomputation from these operators, including approximate backpropagation without sparse computation; \item end-to-end architectures for learning per-element or per-mesh features starting from mesh geometry without additional features; \item simple strategies for data augmentation and other practical techniques to improve performance of our method; and \item experiments demonstrating effectiveness in shape analysis tasks, including \changed{the generalization of our model} to high-resolution meshes that are too dense to be compatible with related methods. \end{itemize} \section{Related Work}\label{sec:related} Machine learning from geometry is becoming a popular subfield of graphics and vision. \citet{bronstein2017geometric} provide a broad overview of challenges in this discipline; here, we focus on work directly related to our task of learning from meshed geometry. \subsection{Spectral Shape Analysis} Our method is built on ideas from spectral geometry, which captures shape properties through the lens of spectral (eigenvalue/eigenvector) problems. \citet{wang2019intrinsic} provide a comprehensive introduction to this approach to geometry processing. The \emph{Laplace--Beltrami} (or, \emph{Laplacian}) operator is ubiquitous in spectral geometry processing. Most relevant to our work, numerous per-vertex and per-mesh features have been built from Laplacian eigenvalues and eigenvectors, including the global point signature \citep{rustamov2007laplace}, the heat kernel signature \citep{sun2009concise}, the wave kernel signature \citep{aubry2011wave}, and the heat kernel map \citep{ovsjanikov2010one}. 
These descriptors underlie algorithms for tasks as varied as symmetry detection \citep{ovsjanikov2008global}, correspondence \citep{ovsjanikov2012functional}, shape recognition \citep{reuter2006laplace,bronstein2010shape}, and shape retrieval \citep{bronstein2011shape}---among countless others.

The Laplacian is popular given its multiscale sensitivity to intrinsic geometry, but recent work proposes replacements sensitive to other aspects of geometry like extrinsic deformation. Examples include the Dirac operator \citep{liu2017dirac,ye2018unified}, modal analysis \citep{hildebrandt2012modal,huang2009shape}, the Hamiltonian \citep{choukroun2018sparse}, the curvature Laplacian \citep{liu2007mesh}, the concavity-aware Laplacian \citep{au2011mesh,wang2014spectral}, the volumetric Laplacian \citep{raviv2010volumetric}, and the Dirichlet-to-Neumann operator \citep{wang2018steklov}. Other works add invariances to the Laplacian, e.g., to local scaling \citep{bronstein2010scale} or affine deformation \citep{raviv2011affine}, while others incorporate local features like photometric information \citep{kovnatsky2011photometric,spagnuolo2012affine}.

Nearly all these algorithms---with the notable exception of volumetric methods \citep{raviv2010volumetric,wang2018steklov}---follow the same outline: Build an operator matrix whose sparsity pattern is inherited from the edges of a triangle mesh and construct features from its eigenvectors and eigenvalues; a widely-used strategy of \emph{truncation} approximates spectral features using partial eigeninformation, usually the eigenvalues closest to $0$.

Other spectral methods use or produce \emph{vectorial} data, working with operators that manipulate tangential fields. Vector diffusion operators move information along a manifold or surface while accounting for parallel transport \citep{singer2012vector,sharp2019vector}. The Killing operator has also been applied to intrinsic symmetry detection \citep{ben2010discrete}, segmentation \cite{solomon2011discovery}, deformation \cite{solomon2011killing,claici2017isometry}, level set tracking \citep{tao2016near}, and registration/reconstruction \citep{chan2013reconstruction,slavcheva2017killingfusion}. These methods again analyze a sparse operator built from local features and mesh structure, although there is less agreement on the discretization of operators acting on vector-valued data \citep{de2016vector}.

Spectral representations of geometry can be ``complete'' in the sense that a shape's intrinsic structure or embedding can be reconstructed from the eigenvalues and eigenvectors of certain operators. For example, the discrete Laplacian determines mesh edge lengths \cite{zeng2012discrete}, and a modified operator adds the extrinsic information needed to obtain an embedding \cite{corman2017functional}. Several works \citep{boscaini2015shape,corman2017functional,cosmo2019isospectralization} solve related inverse problems in practice.

Transitioning to the next section, an early machine learning method by \citet{litman2013learning} uses regression to learn spectral descriptors on meshes through learnable functions of Laplacian eigenvalues. This method does not learn the operator but rather the way per-vertex features are constructed from Laplacian eigenvalues. \citet{henaff2015deep} propose a similar approach on graphs.

We attempt to generalize many of the methods above. Rather than defining a ``bespoke'' operator and mapping from eigeninformation to features for each new task, however, we learn an operator from data.
\subsection{Neural Networks on Meshes} Many papers propose algorithms for learning from meshes and other geometric representations. Here, we summarize past approaches for learning features from meshes, although specialized methods for mesh-based learning appear in tasks like generative modeling \citep{liu2020neural,hertz2020deep}, meshing \cite{sharp2020ptn}, and reconstruction \citep{gao2020learning,hanocka2020point}. \paragraph*{Learning from graphs.} Since triangle meshes are structured graphs, algorithms for learning from graphs inspired approaches to learning from meshes. Indeed, graph neural networks (GNNs) \citep{kipf2017semi} are often used as baselines for geometric learning. The graph analog of spectral geometry employs Laplacian matrices that act on per-vertex functions. Graph Laplacians provide a linear model for aggregating information between neighboring vertices. Spectral networks \citep{bruna2013spectral} project per-vertex features onto a low-frequency Laplacian eigenbasis before applying a learned linear operator, followed by a per-vertex nonlinearity in the standard basis; convolution on images can be understood as a spectral filter, so these networks generalize image-based convolutional neural networks (CNNs). Subsequent work accelerated learning and inference from spectral networks, often using matrix functions in lieu of computing a Laplacian eigenbasis, e.g., via Chebyshev polynomials \citep{defferrard2016convolutional}, random walks \citep{atwood2016diffusion}, or rational functions \citep{levie2018cayleynets}. \paragraph*{Spatial domain.} Many mesh-based learning methods operate in the ``spatial domain,'' relating vertices to their neighbors through constructions like local parameterization or tangent plane approximation. These methods often can be understood as GNNs with geometrically-motivated edge weights. Starting with \citep{masci2015geodesic}, many methods define convolution-like operations within local neighborhoods by parameterizing vertices and their $k$-ring neighborhoods. A challenge is how to orient the convolution kernel, since the tangent plane is different at every point; strategies include taking a maximum over all possible orientations \citep{masci2015geodesic, sun2020zernet}, dynamically computing weights from neighboring features \citep{verma2018feastnet}, aligning to principal curvatures \citep{BoscainiMRB16}, learning pseudo-coordinate functions represented as mixtures of Gaussians \citep{Monti_2017_CVPR}, projecting onto tangent planes \citep{tatarchenko2018tangent}, sorting nearby vertices based on feature similarity \citep{wang20183d}, aligning to a 4-symmetry field \citep{huang2019texturenet}, and weighting by normal vector similarity \citep{song2020meshgraphnet} or directional curvature \citep{he2020curvanet}. These and other methods must also define a means of representing localized convolution kernels. Many choices are available, including localized spectral filters \citep{boscaini2015learning}, B-splines \citep{fey2018splinecnn}, Zernike polynomials \citep{sun2020zernet}, wavelets \citep{schonsheck2018parallel}, and extrinsic Euclidean convolution \citep{schult2020dualconvmesh}. Additional machinery is needed to compute vectorial features or relate tangent kernels at different vertices---a problem related to choosing a canonical orientation per vertex. 
Parallel transport is a choice motivated by differential geometry \citep{pan2018convolutional}, which can be combined with circular harmonics \cite{wiersma2020cnns} or pooling over multiple coordinates \citep{poulenard2018multi} to avoid dependence on a local coordinate system. \citet{yang2020pfcnn} employ locally flat connections for a similar purpose. Simple GNN layers like \citep{kipf2017semi} communicate information only among neighboring vertices. This small receptive field---inherited by several methods above---is a serious challenge for learning from meshes, which are sparse graphs for which a single such layer becomes more and more local as resolution increases. This issue creates dependency of performance on mesh resolution. \paragraph*{Mesh-based constructions.} While it is valid to interpret meshes as graphs, this neglects the fact that meshes are highly-structured relative to graphs in other disciplines; a few learning algorithms leverage this additional structure to engineer mesh-specific convolutional-style layers. The popular MeshCNN architecture \citep{hanocka2019meshcnn} learns edge features and performs pooling based on edge collapse operations. PD-MeshNet \citep{milano2020primal} augments the graph of mesh edges with the graph of dual edges capturing triangle adjacency, with pooling techniques inspired by mesh simplification and dynamic local aggregation using attention. \paragraph*{Global parameterization.} Surface parameterization is a standard technique for texture mapping; some methods parameterize meshes into an image domain on which standard CNNs can be used for learning and inference from pushed-forward features. \citet{sinha2016deep} pioneered this approach using geometry images \citep{gu2002geometry} for parameterization. \citet{maron2017convolutional} use seamless toric covers, conformally mapping four copies of a surface into a flat torus; this work was extended by \citet{haim2019surface} to general covers to reduce distortion. Rendering-based techniques can also be understood as simple parameterizations onto the image plane, e.g., using panoramic \citep{shi2015deeppano,sfikas2017exploiting} or multi-view \citep{su2015multi,wei2016dense,kalogerakis20173d} projections. \paragraph*{Fixed operator methods.} Some methods use operators on surfaces to construct convolution-like operations. Surface Networks \cite{kostrikov2018surface} use discrete Laplacian and Dirac operators as edge weights in GNNs. \citet{yi2017syncspeccnn} define kernels in Laplacian eigenbases, including spectral parameterizations of dilated convolutional kernels and transformer networks. \citet{qiao2020learning} use Laplacian spectral clustering to define neighborhoods for pooling. \paragraph*{Learned operators.} Some past methods learn relevant differential operators to a geometric learning task. Closest to ours, \citet{wang2019learning} learn a parameterized sparse operator for geometry processing; see \S\ref{sec:operatorconstruction} for comparison of our operator to theirs. Their layers simulate iterations of algorithms like conjugate gradients by applying their operator, limiting its receptive field to the number of layers. \changed{In contrast, we explicitly perform eigendecomposition in our differentiable pipeline, allowing us to engineer the inductive bias inspired by the Hodge Laplacian.} Similar discretizations are found in methods like \citep{eliasof2020diffgcn} for learning PDEs from data; this method uses algebraic multigrid to increase the receptive field. 
\paragraph*{Other.} We mention a few other methods for learning from meshes that do not fall into the categories above. \citet{xu2017directionally} present a pipeline that combines purely local and mesh-wise global features; \citet{feng2019meshnet} also propose extracting purely local features. \citet{lim2018simple} apply recurrent neural networks (RNNs) to compute vertex features after unrolling local neighborhoods into prescribed spiral patterns. Deep functional maps \citep{litany2017deep} largely rely on precomputed features for geometric information, although some recent efforts bring this correspondence method closer to end-to-end \citep{donati2020deep,sharma2020weakly}. \paragraph*{Concurrent and unreviewed work.} Machine learning is a fast-paced discipline, with new papers released daily. Here, we acknowledge some ``late-breaking'' \changed{concurrent work}. \citet{sharp2020diffusion} propose a ``learned diffusion layer'' in which features are diffused along a geometric domain via the isotropic heat equation with learned amount of diffusion; they include diffusion time as a learnable parameter. Similarly to \citep{bruna2013spectral}, their diffusion is implemented in a fixed low-frequency Laplacian eigenbasis, computed during learning/inference. Additional features incorporate anisotropy via inner products of spatial gradients. Unlike our work, they use a prescribed Laplacian operator. Other methods include \citep{de2020gauge}, which proposes anisotropic gauge-invariant kernels using a message passing scheme built from parallel transport; \citep{lahav2020meshwalker}, an RNN-based approach employing random walks; \citep{schneider2020medmeshcnn}, which improves MeshCNN's memory efficiency and resilience to class imbalance for medical applications; \changed{and \citep{budninskiy2020laplacian}, which optimizes for a graph Laplacian parameterized by edge features}. \section{Overview} \begin{figure} \includegraphics[trim=0 25 0 0,clip,width=\linewidth]{figures/flowchart/flowchart.pdf} \caption{Data flow in HodgeNet; yellow boxes contain learnable parameters.}\label{fig:data} \end{figure} Figure \ref{fig:data} gives an overview of our HodgeNet architecture for learning from triangle meshes. The boxes highlighted in yellow have learnable parameters, while the remaining boxes are fixed computations. Our goal is to learn an operator and associated spectral descriptor for a given learning-from-meshes task. As with most methods, the learning stage uses stochastic gradient descent to optimize for model parameters, which are fixed during inference. Our model inputs a triangle mesh $M=(V,E,T)$ and constructs three objects: \begin{itemize} \item a combinatorial \emph{differential} matrix $d\in\{-1,0,1\}^{|E|\times|V|}$, \item a diagonal \emph{0-form Hodge star} matrix $\star_0\in\mathbb R^{|V|\times|V|}$, and \item a diagonal \emph{1-form Hodge star} matrix $\star_1\in\mathbb R^{|E|\times|E|}$. \end{itemize} The matrix $d$ is a fixed function of $M$, while $\star_0,\star_1$ are learnable functions of local neighborhoods around mesh vertices. HodgeNet then computes the $k$ eigenvectors $x^i$ of the semidefinite Laplacian-type matrix $L=\star_0^{-1}d^\top\star_1 d$ whose eigenvalues $\lambda^i$ are closest to zero. Finally, per-vertex or per-mesh features are gathered from $\{(x^i,\lambda^i)\}$ using learnable formulas that generalize the form of popular spectral features like the heat kernel signature. During training, we need to differentiate our loss function through the steps above. 
Most of the operations above are simple nonlinearities that can be differentiated using standard backpropagation methods. We show in \S\ref{sec:derivatives} how to obtain approximate derivatives of the eigenproblem efficiently. \section{Operator Construction}\label{sec:operatorconstruction} HodgeNet relies on a parameterized class of learnable operators whose entries are functions of local geometry. The basis of our construction, designed to encapsulate operator constructions in spectral geometry, resembles that proposed by \citet{wang2019learning}, with the key difference that we expose per-edge and per-vertex features using diagonal Hodge star operators; this difference greatly simplifies our backpropagation procedure in \S\ref{sec:derivatives}. In \S\ref{sec:vectorial}, we also show how to generalize this construction for vectorial operators. \subsection{Operator} Given an oriented manifold mesh $M=(V,E,T)$ (optionally with boundary) with vertices $V\subset \mathbb R^3$, edges $E\subset V\times V$, and oriented triangles $T\subset V\times V\times V$, HodgeNet constructs a positive (semi)definite operator matrix $L\in\mathbb R^{|V|\times|V|}$ whose spectral structure will be used for a mesh-based learning task. Inspired by the factorization of the Laplacian in discrete exterior calculus \citep{desbrun2005discrete}, we parameterize $L$ as a product: \begin{equation}\label{eq:operator} L=\star_0^{-1} d^\top \star_1 d. \end{equation} Here, $d\in\{-1,0,1\}^{|E|\times|V|}$ is the differential operator given by $$ d_{ev}=\left\{ \begin{array}{rl} 1 & \textrm{ if }v=v_2\\ -1 & \textrm{ if }v=v_1\\ 0 & \textrm{ otherwise}, \end{array} \right. $$ where $e=(v_1,v_2)$ is an oriented edge. While $d$ is determined by mesh topology, the diagonal Hodge star matrices $\star_0\in\mathbb R^{|V|\times|V|}$ and $\star_1\in\mathbb R^{|E|\times|E|}$ are learnable functions of local mesh geometry. To construct $\star_0,\star_1$, we input $D$ per-vertex features $F\in\mathbb R^{|V|\times D}$. \changed{In our experiments, we use positions and normals as the per-vertex features, except when noted otherwise.} We detail the construction of these operators $\star_0(F),\star_1(F)$ from $F$ below. \paragraph{Per-vertex features $\star_0$.} Our construction of $\star_0\changed{(F)}$ imitates area weight computation for discrete Laplacians. It takes place in two steps. First, we compute \emph{per-triangle} features using a learnable function $f_\Phi:\mathbb R^{3D}\to\mathbb R$, where $\Phi$ contains the parameters of our model. To ensure positive (semi)definiteness for $\star_0\changed{(F)}$ we square $f_\Phi$. Finally, we gather features from triangles to vertices by summing and optionally adding a small constant $\epsilon$ \changed{(in practice, $\epsilon=10^{-4}$)} to improve conditioning of the eigensystem. Overall, we can write our expression as follows: \begin{equation}\label{eq:star0vv} (\star_0\changed{(F)})_{vv}=\epsilon+\sum_{t\sim v} f_\Phi(F_{v_1},F_{v_2},F_{v_3})^2 \end{equation} where $t=(v_1,v_2,v_3)\in T$ is a triangle with vertices $v_1,v_2,v_3$ in counterclockwise order. This sum over $t$ has a potentially different number of terms for each vertex, equal to the valence. If $\epsilon=0$ and $f_\Phi^2$ measures triangle area scaled by $\nicefrac13$, then $\star_0$ becomes the barycentric area weights matrix often used in finite elements and discrete exterior calculus. We give the details of our choice of functions $f_\Phi$ in \S\ref{sec:detailsandparameters}. 
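To make the gather in \eqref{eq:star0vv} concrete, the following sketch (an illustration, not our implementation; \texttt{f\_phi} stands in for the learned per-triangle function and \texttt{tris} is assumed to be an integer array of triangle corner indices) scatter-adds squared per-triangle values to their corner vertices:
\begin{lstlisting}
import numpy as np

def star0_diagonal(F, tris, f_phi, eps=1e-4):
    # diag(star_0): each triangle t = (v1, v2, v3)
    # adds f_phi(F[v1], F[v2], F[v3])^2 to each of
    # its three corner vertices.
    vals = np.array([f_phi(F[t[0]], F[t[1]], F[t[2]])
                     for t in tris]) ** 2
    diag = np.full(F.shape[0], eps)
    for corner in range(3):
        np.add.at(diag, tris[:, corner], vals)
    return diag

# Special case from the text: f_phi^2 = area / 3
# recovers barycentric area weights (with eps = 0).
def f_area_third(p1, p2, p3):
    area = .5 * np.linalg.norm(
        np.cross(p2 - p1, p3 - p1))
    return np.sqrt(area / 3.)
\end{lstlisting}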
Squaring the inner part of \eqref{eq:star0vv} is one of many ways to make sure $(\star_0)_{vv}\geq0$ and could be replaced, e.g., by ReLU activation, but we found empirically that this simple expression led to the best performance.

\begingroup \paragraph{Per-edge features $\star_1$.} The diagonal matrix $\star_1\changed{(F)}$ contains per-edge features on its diagonal. Unlike \eqref{eq:star0vv}, to compute $\star_1\changed{(F)}$ we do not need to gather features from a variable-sized ring. Instead, we learn a function $g_\Phi : \mathbb R^{4D} \to \mathbb R$ and, for an interior edge $e=(v_1,v_2)$, compute \begin{equation}\label{eq:star1ee} (\star_1\changed{(F)})_{ee} = \epsilon + g_\Phi(F_{v_1},F_{v_2},F_{v_3},F_{v_4})^2, \end{equation} \setlength{\columnsep}{.1in} \begin{wrapfigure}[5]{r}{.4\linewidth}\centering\vspace{-.3in} \includegraphics[width=\linewidth]{figures/triangle/triangle.pdf} \end{wrapfigure} where $v_3$ and $v_4$ are \emph{opposite} the edge $e$ as shown to the right. We order $v_3$ and $v_4$ so that $(v_1,v_2,v_3)$ and $(v_2,v_1,v_4)$ are all consistently oriented. We learn a separate function $\bar g_\Phi(v_1,v_2,v_3)$ for boundary edges, since there is only one opposite \changed{vertex} in this case. \endgroup

If $\epsilon=0$ and $g_\Phi^2$ gives the sum of interior angle cotangents at $v_3$ and $v_4$, then $L$ becomes the famous cotangent Laplacian matrix common in geometry processing. While we have chosen to square the function $g_\Phi$, thanks to conjugation by $d$ in \eqref{eq:operator} this is sufficient but not necessary for positive (semi)definiteness of $L$, and indeed this design choice prevents us from exactly reproducing the cotangent Laplacian in the presence of obtuse triangles. Our architecture could easily be adjusted to allow for negative $(\star_1)_{ee}$ values and hence to reproduce the cotangent Laplacian operator, but the stability and ease of squaring $g_\Phi$ to ensure that $L$ has no negative eigenvalues outweighed this largely theoretical consideration.

\paragraph{Discussion.} Our parameterizations of $L$, $\star_0$, and $\star_1$ largely imitate the flow of information used to construct discrete Laplacian operators and related objects. They are readily incorporated into geometry processing pipelines and have familiar sparsity patterns encountered in this discipline. It is worth acknowledging a few design decisions intended to simplify our framework at the cost of mathematical structure: \begin{itemize} \item Squaring $g_\Phi$ in \eqref{eq:star1ee} means we cannot reproduce the cotangent Laplacian operator for poorly-conditioned meshes with negative cotangent weights. \item We arbitrarily choose one of three possible cyclic orderings of the inputs to $f_\Phi$ in \eqref{eq:star0vv}. \item Similarly, we arbitrarily choose among two orderings of the inputs to $g_\Phi$ in \eqref{eq:star1ee}: $(v_1,v_2,v_3,v_4)$ and $(v_2,v_1,v_4,v_3)$. \end{itemize} All three items above could be addressed at the cost of increasing the complexity of $f_\Phi,g_\Phi$, but building more general semidefiniteness conditions and/or order invariance did not bring practical benefit.

\subsection{Vectorial operators}\label{sec:vectorial}

$L$ discretizes operators that act on functions discretized using one value per vertex of a triangle mesh. We also can discretize operators acting on \emph{vector-valued} functions with a value in $\mathbb R^k$ per vertex by adjusting our construction.
For example, for planar triangle meshes and $k=2$ we can reproduce the Killing operator described in \citep{solomon2011killing,claici2017isometry}; for $k=4$ we can mimic the Dirac operator used for shape analysis by \citet{liu2017dirac}.

To extend to vectorial operators, we use a $k|E|\times k|V|$ block version of $d$ whose blocks are given as follows: $$ d_{ev}=\left\{ \begin{array}{ll} I_{k\times k} & \textrm{ if }v=v_2\\ -I_{k\times k} & \textrm{ if }v=v_1\\ 0 & \textrm{ otherwise}, \end{array} \right. $$ where $I_{k\times k}$ denotes the $k\times k$ identity matrix. We generalize $f_\Phi:\mathbb R^{3D}\to\mathbb R^{k\times k}$ and $g_\Phi:\mathbb R^{4D}\to\mathbb R^{k\times k}$ to output $k\times k$ matrices. Then, we compute $\star_0$ and $\star_1$ as block diagonal matrices whose elements are as follows: \begin{align} (\star_0)_{vv}&=\epsilon I_{k\times k}+\sum_{t\sim v} f_\Phi(F_{v_1},F_{v_2},F_{v_3})^\top f_\Phi(F_{v_1},F_{v_2},F_{v_3})\\ (\star_1)_{ee}&= \epsilon I_{k\times k} + g_\Phi(F_{v_1},F_{v_2},F_{v_3},F_{v_4})^\top g_\Phi(F_{v_1},F_{v_2},F_{v_3},F_{v_4}). \end{align} These definitions generalize our scalar construction for the case of $k=1$ and still lead to a semidefinite matrix $L=\star_0^{-1}d^\top \star_1 d\in\mathbb R^{k|V|\times k|V|}.$

\section{Differentiable Spectral Analysis}\label{sec:derivatives}

Now that we have determined the form of our operator $L$, we turn to the task of using its low-order eigenvalues and eigenvectors for learning tasks. The key challenge will be to differentiate through eigenvalue/eigenvector computation, a task we consider below. While general eigencomputation is extremely difficult for learning, we show how our particular form \eqref{eq:operator} for $L$ facilitates backpropagation and reduces dependence on random-access computation.

Recall that a step of training requires evaluating the loss function $\L$ and its gradients with respect to the parameters $\Phi$ of the model. The loss function is evaluated in a \emph{forward pass}, and the gradients are evaluated during \emph{backpropagation}. We will perform both the forward and backward pass of our model on the CPU so as to take advantage of a sparse solver to \changed{compute} a set of eigenvalues/eigenvectors for the operator $L$ \changed{efficiently}. While our feature computation and the entirety of backpropagation could be performed efficiently on the GPU, our model has sufficiently few parameters that we find it unnecessary to transfer data between GPU and CPU.

\subsection{The HodgeNet \changed{Generalized} Eigenproblem}\label{sec:eigenproblem}

Our architecture outputs features built from eigenvectors of $L$ in \eqref{eq:operator}. Recall that $L$---and, in particular, the Hodge stars $\star_0,\star_1$---is built from a matrix $F\in\mathbb R^{|V|\times D}$ of per-vertex features: $$L=\star_0(F)^{-1} d^\top \star_1(F) d.$$ Hence, our features are built from eigenvectors $x^i\in\mathbb R^{k|V|}$ satisfying \begin{equation}\label{eq:eigenproblem} Lx^i=\lambda^i x^i\iff d^\top\star_1 dx^i=\lambda^i\star_0x^i. \end{equation} By construction, $d^\top\star_1 d\succeq0$ and $\star_0\succeq0$, so $\lambda^i\geq0$. By convention, we normalize eigenvectors to satisfy the condition $(x^i)^\top \star_0 x^j=\delta_{ij}$, possible thanks to symmetry of our operators.
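Concretely, the forward pass can be realized with off-the-shelf sparse linear algebra; the sketch below (an illustration for the scalar case $k=1$, not our actual implementation; the helper inputs \texttt{edges}, \texttt{star0\_diag}, and \texttt{star1\_diag} are assumed) builds $d$ and solves the generalized eigenproblem for the eigenpairs nearest zero:
\begin{lstlisting}
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def hodgenet_eigs(edges, star0_diag,
                  star1_diag, n_eig):
    # Solve d^T star1 d x = lambda star0 x for the
    # n_eig eigenpairs closest to zero (k = 1).
    n_v = star0_diag.shape[0]
    n_e = edges.shape[0]
    # Signed incidence d: -1 at v1, +1 at v2.
    rows = np.repeat(np.arange(n_e), 2)
    cols = edges.reshape(-1)
    vals = np.tile([-1., 1.], n_e)
    d = sp.csr_matrix((vals, (rows, cols)),
                      shape=(n_e, n_v))
    A = d.T @ sp.diags(star1_diag) @ d
    M = sp.diags(star0_diag)
    # Shift-invert about a slightly negative sigma;
    # sigma = 0 exactly would require factoring the
    # singular matrix A (constants lie in its kernel).
    lam, x = spla.eigsh(A.tocsc(), k=n_eig, M=M,
                        sigma=-1e-8)
    return lam, x  # x satisfies x.T @ M @ x = I
\end{lstlisting}
The solver returns $\star_0$-orthonormal eigenvectors, matching the normalization convention above.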
To differentiate our per-vertex features, we need to differentiate the eigenvectors $x^i$ and eigenvalues $\lambda^i$ with respect to the parameters $\Phi$ of our learned functions $f_\Phi, g_\Phi$. The expressions in \S\ref{sec:operatorconstruction} for $(\star_0)_{vv}(F)$ and $(\star_1)_{ee}(F)$ are readily differentiated. Hence, for compatibility with the backpropagation algorithm for differentiation, we need to solve the following problem involving our loss function $\L$: \begin{center} \textbf{Given the partial derivatives $\nicefrac{\partial\L}{\partial \lambda^i}$ and $\nicefrac{\partial\L}{\partial x_j^i}$ for all $i,j$, compute the partial derivatives $\nicefrac{\partial\L}{\partial(\star_0)_{vv}}$ and $\nicefrac{\partial\L}{\partial(\star_1)_{ee}}$ for all $v\in V,e\in E$.} \end{center} In words, given derivatives of the loss function with respect to the eigenvalues and eigenvectors of $L$, compute the derivatives of the loss function with respect to the Hodge stars. In general, differentiating through eigenvalue problems is expensive. Libraries like TensorFlow and PyTorch allow for differentiation through computation of the \emph{full} spectrum of a matrix, but their implementations (1) cannot account for the sparsity structure of our mesh and (2) cannot target a few eigenvalues close to $0$, which are typically the meaningful eigenvalues to compute in geometry processing applications. Solving the full eigenvalue problem is extremely expensive computationally, and storing a $k|V|\times k|V|$ matrix of eigenvectors is prohibitive. Our pipeline addresses the issues above. We rely on CPU-based sparse eigensolvers during the forward pass of our network, solving \eqref{eq:eigenproblem} only for a subset of eigenvalues. This alleviates dependence on $k|V|\times k|V|$ dense matrices; instead, only the $O(k|V|)$ nonzero entries are stored. \subsection{Derivative Formulas} The vectorial operator $L$ operates on vectors in $\mathbb R^k$ per vertex on a mesh. Following \S\ref{sec:vectorial}, we will use $x^{i}_{v\ell}$ to refer to the $\ell$-th element ($\ell\in\{1,\ldots,k\}$) of entry $v$ ($v\in V$) of the $i$-th eigenvector of $L$. We use $\star_{0v\ell m}$ to refer to the element $(\ell,m)$ of the $k\times k$ block of $\star_0$ at vertex $v\in V$. More generally, we will use subscripts to refer to matrix elements and superscripts to index over eigenvalues. Define the following tensors: \begin{align*} y_{e\ell}^i &:=\sum_{v\in V} d_{ev} x_{v\ell}^i\\ M_{ij} &:= \left\{ \begin{array}{ll} (\lambda^i-\lambda^j)^{-1} & \textrm{ if }i\neq j\\ 0 & \textrm{ otherwise.} \end{array} \right.\\ N_{ij} &:=\left\{ \begin{array}{ll} \nicefrac{\lambda^i}{(\lambda^j-\lambda^i)} & \textrm{ if }i\neq j\\ -\nicefrac{1}{2} & \textrm{ otherwise.} \end{array} \right. \end{align*} We compute $y$ during the forward pass as $dx$ and cache the result for use during backpropagation, since $d$ is a sparse matrix.
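For illustration, these cached quantities follow directly from the eigenpairs; a short sketch, assuming a nondegenerate spectrum (\texttt{lam}: the $m$ eigenvalues, \texttt{x}: eigenvector columns, \texttt{d}: the sparse incidence matrix):
\begin{verbatim}
# Sketch: the tensors y, M, N from m cached eigenpairs (lam, x).
import numpy as np

def cache_tensors(lam, x, d):
    """lam: (m,), x: (k|V|, m), d: sparse (k|E|, k|V|)."""
    y = d @ x                           # column i holds d x^i
    diff = lam[:, None] - lam[None, :]  # diff[i, j] = lam_i - lam_j
    off = diff != 0                     # nondegenerate spectrum assumed
    M = np.zeros_like(diff)
    M[off] = 1.0 / diff[off]            # M_ij = 1 / (lam_i - lam_j)
    lam_i = np.broadcast_to(lam[:, None], diff.shape)
    N = np.full_like(diff, -0.5)        # N_ii = -1/2
    N[off] = lam_i[off] / (-diff[off])  # N_ij = lam_i / (lam_j - lam_i)
    return y, M, N
\end{verbatim}
With these tensors in hand, each sum in the proposition that follows reduces to a single \texttt{einsum} call.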
Our algorithm relies on the following proposition: \begin{proposition}\label{prop:derivatives} We can backpropagate derivatives of our loss function as follows: {\allowdisplaybreaks \begin{equation}\label{eq:grad} \begin{array}{r@{\,}l} \displaystyle\frac{\partial \L}{\partial \star_{0w\ell m}} &= \displaystyle -\sum_i\frac{\partial \L}{\partial \lambda^i}\lambda^i x_{w\ell}^i x_{wm}^i + \sum_{ivnj} \frac{\partial \L}{\partial x^i_{vn}} N_{ij} x_{w\ell}^j x_{wm}^i x^j_{vn}\\ \displaystyle\frac{\partial \L}{\partial \star_{1e\ell m}} &\displaystyle= \sum_i\frac{\partial \L}{\partial \lambda^i}y_{e\ell}^i y_{em}^i + \sum_{ivnj} \frac{\partial \L}{\partial x^i_{vn}} M_{ij} x_{vn}^j y_{e\ell}^j y_{em}^i \end{array} \end{equation} } Here, $i,j$ index over the eigenvectors of $L$; $\ell,n,m$ index over vector elements from $1$ to $k$; $v,w$ are vertices of the mesh; and $e$ is an edge. \end{proposition} \noindent We defer the proof to the supplemental document, since it requires a fairly involved computation. That said, this proposition is roughly an application of standard derivative-of-eigenvalue formulae to our operator $L$ in \eqref{eq:operator}, which benefits from the fact that our differentiable parameters are in \emph{diagonal} matrices $\star_0,\star_1$. The expressions in \eqref{eq:grad} may appear complicated, but in reality they are efficiently computable. We have eliminated all sparse matrices and inverses from these formulas, which are readily implemented using a one-line call to Einstein summation functions in deep learning toolkits (e.g., \texttt{einsum} in PyTorch). \subsection{Derivative Approximation}\label{sec:derivapprox} Here we briefly address one challenge in using Proposition \ref{prop:derivatives} to differentiate HodgeNet. Recall from \S\ref{sec:eigenproblem} that we compute an incomplete set of eigenvectors of $L$, far fewer than the largest possible number. This choice is reasonable for constructing a loss function, which will only depend on this low-order eigenstructure. However, \eqref{eq:grad} requires \emph{all} eigenvectors of $L$ to evaluate the sums over $j$. We use a simple strategy to address this issue. During the forward pass we compute and cache more eigenvalues/eigenvectors than are needed to evaluate $\L$; in practice, we use $2\times$ as many (see \S\ref{sec:ablation} for validation). Then, in backpropagation we truncate the sums over $j$ in \eqref{eq:grad} to include only these terms. A straightforward argument reveals that the resulting gradient approximation still yields a descent direction for $\L$. The first term in each sum is computable exclusively from the partial set of eigenpairs, implying we can exactly differentiate $\L$ with respect to the eigenvalues $\lambda^i$; our approximation is only relevant to the eigenvectors. \section{From Eigenvectors to Features}\label{sec:eigtofeature} Recall that our broad task is to design a learnable mapping from meshes to task-specific features. So far, we have designed a learnable operator from mesh geometry and provided a means of differentiating through its eigenvectors/eigenvalues. It is tempting to use the eigenvectors as per-vertex features, but this is not a suitable choice: the choice of sign $\pm x^i$ for each eigenvector is arbitrary. We return to geometry processing for inspiration. Classical shape descriptors built from operator eigenfunctions circumvent the sign issue by \emph{squaring} the Laplacian eigenfunctions pointwise.
For instance, the heat kernel signature \citep{sun2009concise}, wave kernel signature \citep{aubry2011wave}, and general learned kernels \cite{litman2013learning} take the form $$\sum_i f(\bar\lambda^i) \psi^i(p)^2,$$ where $\bar\lambda^i$ is the $i$-th eigenvalue and $\psi^i$ is the $i$-th eigenvector of the Laplacian. The fact that $\psi^i$ is squared alleviates sign dependence. Similarly, for eigenfunctions of vectorial operators such as the \emph{vector Laplacian}, sign-agnostic features can be computed from the outer product of the pointwise vector with itself, $\psi^i(p)\psi^i(p)^\top\in\mathbb R^{k\times k}$ (see, e.g., \cite[eq.\ (3.13)]{singer2012vector}). Generalizing the hand-designed features above, we construct a sign-agnostic learnable per-vertex feature as follows. Take $m$ to be the number of eigenvectors of $L$ we will use to compute features, and take $n$ to be the number of output features. We learn a function $h_\Phi:\mathbb R\to\mathbb R^n$ and construct a matrix $H\in\mathbb R^{m\times n}$ whose rows are $h_\Phi(\lambda^i)$ for $i\in\{1,\ldots,m\}$. Then, for $j\in\{1,\ldots,n\}$, the $j$-th output feature at vertex $v\in V$, notated $G_v^j$, is given by: $$G_v^j :=\sum_{i} H_{ij}\cdot (x^i_v)(x^i_v)^\top,$$ where $x^i_v$ denotes the $i$-th eigenvector of $L$ evaluated at vertex $v$ as a $k\times 1$ column vector. We omit the 0 eigenvalue corresponding to the constant eigenfunction. We give our form for $h_\Phi$ in \S\ref{sec:detailsandparameters}. Having computed per-vertex features $G_v$, we optionally follow a standard max pooling approach to obtain per-face features $$G_f = \max_{v \sim f} G_v,$$ or per-mesh features $$G_M = \max_{v \in V} G_v,$$ depending on the learning task at hand. We map these features to the desired output dimension $d$ using a learned function $o_\Phi : \mathbb R^n \to \mathbb R^d$. \section{Additional Details and Parameters}\label{sec:detailsandparameters} We model each of $f_\Phi$, $g_\Phi$, $h_\Phi$, and $o_\Phi$ as an MLP with batch normalization and Leaky ReLU nonlinearity \cite{maasrectifier} before each hidden layer. $f_\Phi$ and $g_\Phi$ each consist of four hidden layers, each of size 32; $h_\Phi$ consists of four hidden layers, each of size $n$; and $o_\Phi$ consists of two hidden layers, each of size 32, except for the classification task, where the layers have 64 units. In all our experiments, we set the vector dimensionality $k=4$, output feature size $n=32$, and number of eigenpairs used $m=32$. We use an additional 32 eigenpairs for improved derivative approximation, as described in \S\ref{sec:derivapprox}. We train our network using the AdamW optimizer \cite{loshchilov2018decoupled} with a batch size of 16 and a learning rate of 0.0001. We use gradient clipping with a maximum norm of 1.0 to stabilize training. We implement our pipeline in PyTorch, using SciPy \texttt{eigsh} with ARPACK for solving our sparse eigenproblem and \texttt{libigl} for mesh processing. We train our models on 128 2.5 GHz CPUs. \section{Experiments} We demonstrate the efficacy of our method on several shape analysis tasks and provide experiments justifying some of our parameter and design choices. \changed{We also compare to state-of-the-art methods developed for learning on meshes.
Other geometric deep learning approaches tend to use GNNs, ignoring the structure and relying on multiple layers to aggregate global data, whereas our method uses spectral geometry to infer global information from local features.} \subsection{Mesh Segmentation} We train our network for the task of mesh segmentation on four datasets---the Human Body dataset \cite{maron2017convolutional} and the vase, chair, and alien categories of the Shape COSEG dataset \cite{wang2012active}---optimizing the standard cross entropy loss. We use the same version of the Human Body dataset as in \cite{hanocka2019meshcnn, milano2020primal}, which is downsampled to 2000 faces per mesh. We evaluate on the test set of the Human Body dataset, and generate a random 85\%-15\% train-test split for each Shape COSEG category, as in \cite{hanocka2019meshcnn, milano2020primal}. We train for 100 epochs (about 3 hours), randomly decimating each input mesh to a resolution of 1000-2000 faces and randomly applying anisotropic scaling of up to 5\% in each dimension. We then fine-tune by training for 100 more epochs without decimation or scaling. In the case of the Human Body dataset, where meshes are not canonically rotated, we also apply random rotations as data augmentation to the training set. We center each mesh about the vertex center of mass and rescale to fit inside the unit sphere. \begin{table}[h] \centering \begin{tabular}{ccc} \toprule Method & \# Parameters & Accuracy \\ \midrule Ours & 31,720 & 85.03\% \\ PD-MeshNet \cite{milano2020primal} & 173,728 & 85.61\%\\ MeshCNN \cite{hanocka2019meshcnn} & 2,279,720 & 85.39\%\\ \bottomrule \end{tabular} \caption{Segmentation accuracy on the Human Body test set.} \label{tab:human_seg} \end{table} \begin{table}[h] \centering \begin{tabular}{cc} \toprule Method & Accuracy \\ \midrule Ours & 86.48\% \\ PD-MeshNet \cite{milano2020primal} & 86.45\%\\ \bottomrule \end{tabular} \caption{Area-weighted segmentation accuracy on Human Body test set.} \label{tab:human_seg_weighted} \end{table} \begin{figure}[h] \centering \includegraphics[width=\linewidth]{figures/vases/vases} \vspace{0.1in} \includegraphics[width=\linewidth]{figures/chairs/chairs} \vspace{0.1in} \includegraphics[width=\linewidth]{figures/aliens/aliens} \caption{Segmentation results on the Shape COSEG dataset. 
Meshes shown above are randomly selected from the test set for each category.} \label{fig:coseg} \end{figure} \begin{figure*}[h] \centering \includegraphics[width=\linewidth]{figures/humans/humans} \caption{Mesh segmentation results on the Human Body test set.} \label{fig:human_seg} \end{figure*} \begin{table}[h] \centering \begin{tabular}{cccc} \toprule Method & Vases & Chairs & Aliens \\ \midrule Ours & 90.30\% & 95.68\% & 96.03\% \\ PD-MeshNet \cite{milano2020primal} & 95.36\% & 97.23\% & 98.18\% \\ MeshCNN \cite{hanocka2019meshcnn} & 92.36\% & 92.99\% & 96.26\% \\ \bottomrule \end{tabular} \caption{Test segmentation accuracy on Shape COSEG.} \label{tab:coseg} \end{table} \begin{table}[h] \centering \begin{tabular}{cccc} \toprule Method & Vases & Chairs & Aliens \\ \midrule Ours & 94.38\% & 99.22\% & 97.97\% \\ PD-MeshNet \cite{milano2020primal} & 97.49\% & 97.86\% & 98.66\% \\ \bottomrule \end{tabular} \caption{Area-weighted test segmentation accuracy on Shape COSEG.} \vspace{-0.2in} \label{tab:coseg_weighted} \end{table} We report segmentation accuracies in Tables~\ref{tab:human_seg} and~\ref{tab:coseg} \changed{and area-weighted segmentation accuracies in Tables~\ref{tab:human_seg_weighted} and \ref{tab:coseg_weighted}. For fair comparison, as in \cite{milano2020primal}, we report accuracies based on ``hard'' ground-truth segmentation face labels for MeshCNN \cite{hanocka2019meshcnn} rather than ``soft'' edge labels; see \cite[Supplementary Material, Section H]{milano2020primal} for details regarding the segmentation metrics.} Our method obtains results comparable to the state of the art for each dataset while requiring significantly fewer learnable parameters. We also show our learned segmentations on the entire Human Body test set in Figure~\ref{fig:human_seg} and on a sampling of the Shape COSEG test sets in Figure~\ref{fig:coseg}. \subsection{High-Resolution Mesh Segmentation} \begingroup \setlength{\columnsep}{0.1in} \begin{wraptable}[11]{r}{.4\linewidth}\centering\vspace{-0.15in} \centering \begin{tabular}{cc} \toprule Split \# & Accuracy \\ \midrule 1 & 90.57\% \\ 2 & 86.90\% \\ 3 & 90.02\% \\ 4 & 89.07\% \\ 5 & 90.15\% \\ \bottomrule \end{tabular} \caption{Mesh segmentation accuracies for five random test splits of the full-resolution MIT animation dataset.} \label{tab:humans_highres} \end{wraptable} In contrast to earlier works, our method is capable of training on dense, non-decimated mesh data. We demonstrate this by training a segmentation model on the MIT animation dataset \cite{vlasic2008articulated}, where each mesh consists of 20,000 faces. We pre-initialize our model with the segmentation model trained on the Human Body dataset above and train for an additional 30 epochs, approximately 4 hours. \changed{The pre-initialization allows us to save training time by avoiding training a model from scratch: our model trained on low-resolution mesh data is able to capture some triangulation-invariant features, making this transfer learning possible.} We train on five randomly sampled 95\%-5\% train-test splits and achieve a mean accuracy of 89.34\%. We report segmentation accuracies for each split in Table~\ref{tab:humans_highres} and render Split 1 in Figure~\ref{fig:humans_highres} and Split 2 in Figure~\ref{fig:humans_highres2}.
\begin{figure}[h] \centering \includegraphics[width=\linewidth]{figures/humans_dense/humans_dense2} \caption{Test set mesh segmentations of Split 2 of the full-resolution MIT animation dataset.} \label{fig:humans_highres2} \end{figure} \endgroup \subsection{Mesh Classification} We evaluate our method on mesh classification on the downsampled version of the SHREC dataset \cite{3DOR:3DOR11:079-088}, as in \cite{hanocka2019meshcnn, milano2020primal}, optimizing the standard cross entropy loss. We report our results on two different splits of the dataset---\emph{Split 10}, where each of the 30 shape categories is randomly split into 10 test and 10 training examples, and \emph{Split 16}, where each category is split into 4 test and 16 training examples---in Table~\ref{tab:shrec}. We train for 100 epochs using decimation to 400-500 faces, anisotropic scaling, and random rotations as data augmentation and then fine-tune for another 100 epochs for \emph{Split 16} and 200 epochs for \emph{Split 10}. \begin{table}[h] \centering \begin{tabular}{ccc} \toprule Method & Split 16 & Split 10 \\ \midrule Ours & 99.17\% & 94.67\% \\ PD-MeshNet \cite{milano2020primal} & 99.7\% & 99.1\% \\ MeshCNN \cite{hanocka2019meshcnn} & 98.6\% & 91.0\% \\ \bottomrule \end{tabular} \caption{Classification accuracy on the SHREC test set.} \label{tab:shrec} \end{table} \subsection{Dihedral Angle Prediction} As a stress test, we demonstrate that our method is capable of learning an operator that is sensitive to extrinsic geometry. To this end, we propose a synthetic dataset for dihedral angle regression. Previous methods that rely on computing a Laplacian would necessarily fail at this task, as they are only aware of intrinsic structure. \begingroup \setlength{\columnsep}{0.1in} \begin{wrapfigure}[5]{r}{.3\linewidth}\centering\vspace{-.15in} \includegraphics[width=\linewidth]{figures/fold/fold} \end{wrapfigure} We take a regular mesh of a flat square consisting of 100 faces and crease it down the center at a random angle $\theta \in [0, 2\pi]$, as shown. Our network learns a two-dimensional vector per mesh, and we optimize cosine distance to the ground truth $\theta$. We use the same hyperparameters as for the other experiments with a batch size of 32. For this experiment, we only use vertex positions as the input features---we do not provide normals. After training for just 15 minutes, we are able to predict the angle with an average error of $0.17\degree$. \endgroup \subsection{Ablation}\label{sec:ablation} We perform an ablation study to justify some of the design and parameter choices in our architecture. In Table~\ref{tab:ablation}, we report test accuracy on the Shape COSEG vases dataset after 100 epochs of training (without fine-tuning). The accuracy degrades when we do not provide normals as part of the input mesh features, when we do not cache any additional eigenpairs for improved derivative approximation, when we reduce the vector dimensionality $k$, when we reduce the learned feature size $n$, or when we use fewer eigenpairs $m$ for feature computation. \begin{table}[h] \centering \begin{tabular}{cc} \toprule Model & Accuracy \\ \midrule full & 87.78\% \\ no normals & 86.26\% \\ no additional eig.
& 87.44\% \\ $k=2$ & 79.02\% \\ $m=16$ & 86.34\% \\ $n=8$ & 87.08\% \\ \bottomrule \end{tabular} \caption{Ablation study of our parameter choices on segmentation of the Shape COSEG vases dataset.} \label{tab:ablation} \end{table} \section{Discussion and Conclusion} HodgeNet has many features that make it an attractive alternative for learning from meshes. During inference, its structure resembles that of most spectral geometry processing algorithms: construct a useful operator, and compute features from its spectrum. Our model is lightweight in the sense that the learnable functions act only on local neighborhoods, yet it has a global receptive field thanks to the eigenvector computation. It has relatively few parameters and can be evaluated efficiently on the CPU. This exploratory work suggests many avenues for future research. The most obvious next step is to extend our model to tetrahedral meshes for volumetric problems; we do not anticipate any major issues with this extension. We also can use our method's connection to DEC to make learnable versions of other discrete differential operators, e.g.\ ones acting on $k$-forms for $k\geq1$, and we can consider other applications of learning on meshes like generative modeling. Our work also reveals some insight into other learning problems. Our architecture could easily be applied to graphs rather than triangle meshes, essentially by mildly changing the parameterization of $\star_1$ and taking $\star_0$ to be the identity matrix; we hence anticipate that there may be some applications to network analysis and other graph learning problems. Our lightweight differentiation strategy for eigenvectors may also prove useful in other contexts demanding eigenstructure of large matrices. From the broadest perspective, our work demonstrates one of many potential applications of differentiable sparse linear algebra. While our derivative approximations and specially-formulated operator provide one way to circumvent development of a general framework for combining deep learning and linear algebra, a framework coupling sparse linear algebra to deep learning toolkits would enable a vast set of modeling choices and applications currently hamstrung by available architectures. \begin{acks} The MIT Geometric Data Processing group acknowledges the generous support of Army Research Office grant W911NF2010168, of Air Force Office of Scientific Research award FA9550-19-1-031, of National Science Foundation grant IIS-1838071, from the CSAIL Systems that Learn program, from the MIT–IBM Watson AI Laboratory, from the Toyota--CSAIL Joint Research Center, from a gift from Adobe Systems, from an MIT.nano Immersion Lab/NCSOFT Gaming Program seed grant, and from the Skoltech--MIT Next Generation Program. This work was also supported by the National Science Foundation Graduate Research Fellowship under Grant No. 1122374. \end{acks} \bibliographystyle{ACM-Reference-Format}
\section{Introduction} \label{sec:Intro} The statistical hadronization model (SHM) is the standard tool to predict and describe hadron abundances produced in relativistic nuclear collisions~\cite{Andronic:2017pug}. The main physics assumption underlying the SHM is that, near the phase boundary between the quark-gluon plasma (QGP) at high temperature and confined hadronic matter at lower temperature, the fireball formed in such collisions is close to thermal equilibrium. In the large-volume limit, applicable for Pb-Pb collisions at LHC energies or Au-Au collisions at RHIC energies, the produced hadrons can then be precisely described by using a grand canonical partition function based on the hadron-resonance gas (HRG), with residual interactions deduced using the S-matrix approach of \cite{Andronic:2018qqt}. We note that this HRG statistical operator provides an equation of state that is very close to that emerging from lattice QCD (lQCD) studies in the hadronic phase~\cite{Bazavov:2014pvz}. Furthermore, the pseudo-critical temperature $T_{pc}$ at \ensuremath{\mu_{\rm B}} = 0, which is now determined in lQCD calculations \cite{Bazavov:2018mes,Borsanyi:2020fev} with great precision: $T_{pc} = 156.5 \pm 1.5$ MeV~\cite{Bazavov:2018mes}, agrees within (small) uncertainties with the chemical freeze-out temperature obtained from the SHM analysis of light-flavour hadron production data~\cite{Andronic:2017pug,Andronic:2018qqt}. How to extend the SHM to the charm sector, i.e., to the SHMc, was outlined more than 20 years ago~\cite{BraunMunzinger:2000px} and further developed in~\cite{Andronic:2003zv,Becattini:2005hb,Andronic:2006ky,Andronic:2007zu}. The main idea behind this development is as follows: the charm quark mass $m_c$ is much larger than $T_{pc}$, and hence thermal production of charm quarks or hadrons is strongly Boltzmann suppressed. However, with increasing center-of-mass energy the total charm production cross section, which results from initial hard collisions, increases strongly. If the charm quarks produced in these hard collisions thermalize in the hot fireball, they participate in the thermal evolution as 'impurities', their total yield being determined by the charm cross section, not by the fireball temperature. Quantitatively, this is described by the charm balance equation~\cite{BraunMunzinger:2000px,Andronic:2006ky}, leading to a fugacity $g_c$. Roughly from $\sqrt{s_{NN}} = 15$ GeV on, this leads to an enhancement of hadrons with charm compared to a purely thermal description, see, e.g., Fig.~1 in~\cite{Andronic:2006ky} and the discussion below. Apart from canonical corrections~\cite{Andronic:2003zv,Andronic:2006ky}, the enhancement scales $\propto (g_c)^{\alpha}$, where $\alpha$ is the number of charm quarks in a given hadron. Evidence for the thermalization of charm quarks in the fireball is discussed in~\cite{Andronic:2018qqt}. Charm quarks are deconfined inside the QGP, thermalize within the QGP and hadronize at the QCD phase boundary into open and hidden charm hadrons. This SHMc was used to predict~\cite{Andronic:2003zv,BraunMunzinger:2007zz} charmonium yields in Pb-Pb collisions at LHC energies long before the LHC turned on. It provides an excellent description of charmonium production~\cite{Andronic:2006ky,Andronic:2007bi,Andronic:2018vqh,Andronic:2019wva} without any new parameters, and this success represents compelling evidence for this new production mechanism on the hadronizing QGP phase boundary.
In the present paper we explore the predictions of the SHMc for the production of open charm mesons and baryons. Early predictions for open charm hadrons were made already in~\cite{Andronic:2003zv}, and in~\cite{Becattini:2005hb} for baryons with $\alpha > 1$, but in the absence of experimental data in the relevant low transverse momentum region these early investigations were not pursued further. The situation changed recently when the STAR collaboration at RHIC~\cite{Adam:2019hpq} as well as the ALICE~\cite{Adam:2015sza,Acharya:2018ckj,Acharya:2020lrg,Acharya:2020uqi} and CMS~\cite{Sirunyan:2019fnc} collaborations at the LHC published first results with Au and Pb beams. It is therefore timely to provide a concise description of the SHMc in the charm sector, to compare results based on this approach to the newly available data, and to extend the predictions to the multi-charm sector. We note that the only additional information needed for SHMc predictions is the total open charm cross section and information, as complete as possible, on the mass spectrum of states in the charm sector. Apart from those there are no free parameters in our approach. In Section~\ref{sec:SHM_hq} we discuss the SHMc formalism including the charm-balance equation and fugacities, the information on the total open charm cross section, and the hadron mass spectrum in the charm sector. In addition, we will lay out the framework for extending our results to the lighter colliding systems Xe-Xe, Kr-Kr, Ar-Ar and O-O, which could be studied in future runs of the LHC. For the study of the system-size dependence of $D$-meson $R_\text{AA}$ in a dynamical heavy-flavour framework see ref.~\cite{Katz:2019qwv}. For these systems, and in particular for the evaluation of production yields of multi-charm hadrons, a detailed description in terms of canonical thermodynamics is required and is outlined below. This leads to thermal predictions for rapidity densities of all charmed hadrons in all colliding systems investigated here. In section~\ref{sec:SHMc_spec} we discuss the most up-to-date information on the hadron mass spectrum in the charm sector. In particular we review the theoretical and experimental motivation for additional, yet undiscovered charmed hadron states. In section~\ref{sec:SHMc_pt} we present the description of transverse momentum spectra for charmed hadrons using a blast-wave approach. This includes a comparison of results for different freeze-out surfaces. An integral part of this approach is the incorporation of resonance decays into the calculation of spectra. In this section we also outline the 'core-corona' picture, which is important for describing the high transverse momentum and centrality dependence of charm hadron production. Results and comparisons to data are discussed in section~\ref{sec:results1}. In this section we first compare SHMc predictions to data for D mesons and make a prediction for $\Lambda_c$ baryons. With the same approach, and no new inputs aside from masses and quantum numbers of charm hadrons, we show how a whole hierarchy of predictions emerges depending on whether we deal with single-, double-, or triple-charm hadrons. Because of the above-discussed enhancement of production yields for states with multiple charm quarks, these predictions will be tested in the upcoming LHC Run3 and Run4, at least for a selected number of states with $\alpha \le 2$.
These experiments can therefore bring completely new information on the degree of deconfinement and mechanism of hadronization of charm quarks in the hot fireball. We conclude this paper with a brief summary and outlook. \section{Heavy quarks in the statistical hadronization model} \label{sec:SHM_hq} Here we recapitulate the physics ideas and formalism behind the SHMc with special focus on the multi-charm sector. For more detail on the original development see~\cite{Andronic:2003zv,Andronic:2006ky,Andronic:2017pug}. Our main emphasis will be on the description of yields and transverse momentum spectra for open charm hadrons with $\alpha \le 3$, produced in Pb-Pb collisions at LHC energy. We will also provide expressions to describe the change of yields when going to lighter collision systems including Ar-Ar and O-O and discuss briefly what can be expected. The production of charmonia or charmonium-like states has recently been investigated, see~\cite{Andronic:2017pug,Andronic:2019wva} and will not be discussed here. Our approach can also be used to make predictions for open charm hadron production at lower energies such as at the RHIC, SPS and FAIR facilities and for higher energies expected at a possible Future Circular Collider~\cite{Dainese:2016gch}. The model can be straightforwardly extended to the beauty sector without conceptual changes or new parameters except for the total open beauty cross section and the corresponding hadronic mass spectrum. However, SHM might need to be modified for beauty hadrons, if future data reveal only partial thermalization of beauty quarks in the QCD medium. \subsection{Multi-charm hadrons, charm balance equation and the charm fugacity factor} \label{sec:balance} Our starting point is the charm balance equation~\cite{BraunMunzinger:2000px} \begin{equation} \begin{aligned} N_{\ensuremath{\text{c}\overline{\text{c}}}\xspace} = \frac{1}{2} & g_c V \sum_{h_{oc,1}^i} n^{{\rm th}}_i \, + \, g_c^2 V \sum_{h_{hc}^j} n^{{\rm th}}_j \, + \, \frac{1}{2} g_c^2 V \sum_{h_{oc,2}^k} n^{{\rm th}}_k, \end{aligned} \label{eq:balance} \end{equation} where $N_{\ensuremath{\text{c}\overline{\text{c}}}\xspace}\equiv \mathrm{d} N_{\ensuremath{\text{c}\overline{\text{c}}}\xspace}/\mathrm{d} y$ denotes the rapidity density of charm quark pairs produced in early, hard collisions and the (grand-canonical) thermal densities for open and hidden charm hadrons are given by $n_{i,j,k}^{{\rm th}}$. The index $i$ runs over all open charm states $h_{oc,1}^i = D, D_s, \Lambda_c, \Xi_c, \cdots, \bar{\Omega}_c$ with one valence charm or anti-charm quark, the index $j$ over all hidden charm states $h_{hc}^j = J/\psi, \chi_c, \psi',\cdots$, and the index $k$ over open charm states $h_{oc,2}^k = \Xi_{cc} \cdots, \bar{\Omega}_{cc}$ with two charm or anti-charm quarks. We leave out here states with 3 charm or anti-charm quarks as their contribution to the sum is negligible for realistic masses and values of $g_c$ and they have yet to be discovered. These thermal densities are computed using the latest version of the SHMc~\cite{Andronic:2017pug,Andronic:2019wva} with the chemical freeze-out temperature $T_{cf}= 156.5$ MeV and the fireball volume per unit rapidity at mid-rapidity $V = 4997\pm 455\,\text{fm}^3$ as appropriate for the most central 10\% Pb-Pb collisions at LHC energy $\sqrt{s_{NN}}= 5.02$ TeV. In the appendix we also give results for the 30-50\% centrality interval and at mid-rapidity. 
Scaling with the measured charged-particle pseudo-rapidity density, the corresponding volume in this centrality bin is $V = 1238\pm 113\,\text{fm}^3$. For the results shown below, the uncertainties in the volume were not propagated, because they are sub-leading compared to the uncertainty in $g_c$ discussed below. The total number of charm quark pairs $N_{\ensuremath{\text{c}\overline{\text{c}}}\xspace}$ produced in a Pb-Pb collision is a quantity that should be determined by measurement of all hadrons with open or hidden charm. Following this prescription, the only (additional) input parameter of the SHMc, $N_{\ensuremath{\text{c}\overline{\text{c}}}\xspace}$, is determined by experiment. In particular, we note that $N_{\ensuremath{\text{c}\overline{\text{c}}}\xspace}$ already includes all nuclear effects in charm production as compared to pp collisions, takes into account potential additions to the charm yield from thermal production in the QGP, as well as potential losses due to charm quark annihilation. In practice, using this prescription is, however, difficult, since the measurement of all open and hidden charm hadrons needs to be performed without cuts in transverse momentum. Achieving a precision measurement of $N_{\ensuremath{\text{c}\overline{\text{c}}}\xspace}$ is one of the priorities for the upgraded ALICE experiment in LHC Run3 and Run4. In the absence of a measured charm production cross section in Pb-Pb collisions we obtain $N_{\ensuremath{\text{c}\overline{\text{c}}}\xspace}$ at mid-rapidity from the measured charm cross section $\text{d} \sigma_{c\bar{c}}/\text{d} y$ in pp collisions by multiplication with the appropriate nuclear thickness function for Pb-Pb collisions and taking into account nuclear modifications. The procedure is described in detail below. The pp data were measured at $\sqrt{s}= 5.02$ and 7 TeV at mid-rapidity~\cite{Adam:2016ich,Acharya:2017jgo,Acharya:2019mgn,Acharya:2019mno}. To apply to Pb-Pb collisions, the cross sections are multiplied by the nuclear thickness function and folded with a factor accounting for nuclear modification effects such as shadowing, energy loss or saturation effects. The estimate of this factor is based on the analysis of prompt $D^0$ and \ensuremath{\text{J}/\psi}\xspace production in p-Pb collisions at 5.02 and 8.16 TeV. We used data from the LHCb collaboration~\cite{Aaij:2016jht,Aaij:2017cqq,Aaij:2017gcy} at forward rapidity, and \ensuremath{\text{J}/\psi}\xspace production data at mid-rapidity measured by the ALICE collaboration in pp and p-Pb collisions at 5.02 TeV~\cite{Acharya:2019mgn,Acharya:2019mno}. The $\sqrt{s}= 8.16$ and 7.0 TeV data are interpolated to 5.02 TeV using the measured data at other center-of-mass energies and the functional form obtained from perturbative QCD (FONLL)~\cite{Cacciari:2015fta}. For mid-rapidity, we obtain a reduction factor of $0.65 \pm 0.12$, resulting in a value of $\text{d} \sigma_{\ensuremath{\text{c}\overline{\text{c}}}\xspace}/ \text{d} y = 0.532 \pm 0.096$~mb. The corresponding factor for $y$ = 2.0-4.5 is $0.70 \pm 0.08$, leading to a differential charm production cross section of $\text{d} \sigma_{\ensuremath{\text{c}\overline{\text{c}}}\xspace}/ \text{d} y = 0.334 \pm 0.053$~mb. To obtain the charm quark rapidity density for Pb-Pb collisions of a given centrality, the pp cross section is then multiplied by the mean nuclear thickness function $\left<T_\text{AA}\right>$ as described in~\cite{Abelev:2013qoq}.
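As a one-line illustration of this bookkeeping (the $\left<T_\text{AA}\right>$ value here is inferred from the numbers quoted in this section rather than being an independent input):
\begin{verbatim}
# Sketch: charm rapidity density at mid-rapidity for 0-10% Pb-Pb.
sigma_pp = 0.532 / 0.65   # mb; interpolated pp cross section dsigma/dy
shadowing = 0.65          # nuclear modification factor at mid-rapidity
TAA = 24.3                # 1/mb; mean nuclear thickness (inferred value)
dN_ccbar_dy = sigma_pp * shadowing * TAA   # ~12.9 charm pairs per unit y
\end{verbatim}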
We neglect, in the procedure based on results from pp and p-Pb collisions, potential contributions to the differential charm cross section in Pb-Pb collisions from thermal charm production as well as reductions from charm quark annihilation. At LHC energies both contributions were estimated to be very small, and they are negligible at lower energies~\cite{BraunMunzinger:2000dv,Andronic:2006ky}. We note here that the charm balance equation should contain canonical corrections for more peripheral collisions or for lighter collision systems, i.e., whenever the number of charm pairs is not large compared to 1~\cite{Gorenstein:2000ck,BraunMunzinger:2003zd}. The charm balance Eq.~\ref{eq:balance} then needs to be modified accordingly. To that end we define \begin{equation} \begin{aligned} N_{oc,1} = \frac{1}{2} g_c V \sum_{h_{oc,1}^i} n^{{\rm th}}_i,\\ N_{oc,2} = \frac{1}{2} g_c^2 V \sum_{h_{oc,2}^k} n^{{\rm th}}_k,\\ N_{hc} = g_c^2 V \sum_{h_{hc}^j} n^{{\rm th}}_j, \label{eq:charm_numbers} \end{aligned} \end{equation} where $N_{oc,1}$ is the rapidity density of charm quarks bound in hadrons $h_{oc,1}^i$ with one valence charm quark, $N_{oc,2}$ is the rapidity density of charm quarks bound in hadrons $h_{oc,2}^k$ with two valence charm quarks, and $N_{hc}$ is the rapidity density of charm-(anti-charm) quark pairs bound in hidden charm hadrons $h_{hc}^j$. This defines the total rapidity density of charm quarks, neglecting triply charmed states, as $N_c^\text{tot} = N_{oc,1} + N_{oc,2} + N_{hc}$. Note that the value of $N_c^\text{tot}$ itself depends on the charm fugacity $g_c$. The modified charm balance equation including the canonical corrections then reads: \begin{equation} N_{c\bar{c}}= \sum_{\alpha = 1,2} N_{oc,\alpha} \frac{I_\alpha(N_c^\text{tot})} {I_0(N_c^\text{tot})} \, + \, N_{hc}. \label{eq:canonical} \end{equation} Here, the $I_\alpha$ are modified Bessel functions. For hadrons with 2 or 3 charm quarks there are generally additional terms which are, however, very small because of the small charm densities and are neglected here (see, e.g., sect.~3.2 in~\cite{BraunMunzinger:2003zd}). Solving Eq.~\ref{eq:canonical} for $g_c$ then determines the charm fugacity factor at 5.02 TeV. For central (0-10\%) Pb-Pb collisions and the above-discussed differential charm cross section at mid-rapidity (implying $\mathrm{d} N_{c\bar{c}}/\mathrm{d} y$=12.95$\pm$2.27) this leads to $g_c = 29.6 \pm 5.2$, with the uncertainty determined by the uncertainty in the open charm cross section for Pb-Pb collisions. The rapidity density of open charm hadrons of type $h_{oc,\alpha}^i$ with $\alpha=1,2$ charm quarks can then be obtained from the computed thermal densities $n_{i}^{\rm th}$ as: \begin{equation} \frac{\mathrm{d} N(h_{oc,\alpha}^i)}{\mathrm{d} y} =g_c^\alpha \, V \, n^{{\rm th}}_i \frac{I_{\alpha}(N_c^\text{tot})}{I_0(N_c^\text{tot})}. \label{eq:yieldsoc} \end{equation} The large value of $g_c = 29.6 \pm 5.2$ for central Pb-Pb collisions for charm production at mid-rapidity (see Fig.~\ref{fig:gc-scaling} in the following section) implies very large enhancements for charmed hadrons compared to what is obtained in the purely thermal case. In the absence of canonical corrections the enhancement factor is (nearly) 900 for doubly charmed, and $ 2.6 \cdot 10^4$ for triply charmed hadrons.
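Numerically, solving Eq.~\ref{eq:canonical} for $g_c$ amounts to one-dimensional root finding; a minimal sketch with SciPy, where the summed thermal densities are placeholder inputs to be taken from the SHMc hadron list:
\begin{verbatim}
# Sketch: solve the canonical charm-balance equation for g_c.
from scipy.special import ive
from scipy.optimize import brentq

def solve_gc(N_ccbar, V, S_oc1, S_oc2, S_hc):
    """S_oc1, S_oc2, S_hc: summed thermal densities (fm^-3) of hadrons
    with one open charm quark, two open charm quarks, hidden charm."""
    def balance(gc):
        N1 = 0.5 * gc * V * S_oc1
        N2 = 0.5 * gc**2 * V * S_oc2
        Nhc = gc**2 * V * S_hc
        Ntot = N1 + N2 + Nhc
        # ive(a, x) = I_a(x) exp(-x); the exponentials cancel in ratios.
        return (N1 * ive(1, Ntot) / ive(0, Ntot)
                + N2 * ive(2, Ntot) / ive(0, Ntot)
                + Nhc - N_ccbar)
    return brentq(balance, 1e-6, 1e4)
\end{verbatim}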
For central Pb-Pb collisions at 5.02 TeV the canonical correction factors are in fact close to 1: 0.98, 0.92, and 0.84 for $\alpha = 1, 2, 3$ charm quarks, respectively, for the central value of the differential charm cross section at mid-rapidity, see Fig.~\ref{fig:canonical} below. If these enhancement factors are realized in nature, then even very massive triply charmed hadrons may come into reach experimentally. For hidden charm states Eq.~\ref{eq:yieldsoc} reduces to \begin{equation} \frac{\mathrm{d} N(h_{hc}^j)}{\mathrm{d} y} = g_c^2 \, V \, n^{{\rm th}}_j. \label{eq:yieldshc} \end{equation} The enhancement factors expressed in Eqs.~\ref{eq:yieldsoc} and \ref{eq:yieldshc} come about because of the assumption that all charm quarks reach thermal equilibrium at least for temperatures close to $T_{cf}$. In that case the heavy quarks are completely uncorrelated and the resulting statistical weight is just $g_c^\alpha$. We note that this implies deconfinement of the heavy quarks over the volume $V$, as discussed below. We also stress that all hadron rapidity densities discussed above are computed as rapidity densities for a volume, and hence rapidity window, of width $\Delta y =1$. The rationale behind this is that one cannot combine charm quarks into hadrons over large rapidity distances, as they are causally disconnected: hadrons have a finite formation time $\tau_f \approx 1$ fm and correlations over large rapidity intervals can only be established at very early times $\tau \ll 1$ fm~\cite{Acharya:2019izy,Dumitru:2008wn}. The value of $\Delta y$ is somewhat arbitrary; a range of $\Delta y = 1 - 3$ was explored in the past, and for colliders a weak dependence was found \cite{Andronic:2003zv}. We finally note the asymptotic forms of the modified Bessel functions $I_\alpha(x)$. For small argument $x$ and order $\alpha$ one has: \begin{equation} I_{\alpha}(x) \approx \frac{1}{\Gamma(\alpha + 1)} (x/2)^{\alpha}, \label{eq:bessel} \end{equation} where $\Gamma$ is the Euler Gamma function. For large $x$ the modified Bessel functions approach \begin{equation} I_\alpha(x) \approx \frac{e^x}{\sqrt{2\pi x}}. \label{eq:bessel1} \end{equation} This implies that the canonical suppression disappears for large arguments $x$, i.e., the system has reached the grand-canonical limit. For small $x$, $I_0 \approx 1$ and the canonical suppression factor approaches $\frac{1}{\Gamma(\alpha + 1)} (x/2)^{\alpha}$.
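For orientation, the correction factors quoted above follow directly from these Bessel-function ratios; in the short check below the argument is set to $N_c^\text{tot} \approx 2\,\mathrm{d} N_{c\bar{c}}/\mathrm{d} y \approx 26$, our reading of the convention for central Pb-Pb collisions at mid-rapidity:
\begin{verbatim}
# Sketch: canonical correction factors I_alpha(N) / I_0(N).
from scipy.special import ive

N_tot = 25.9   # Bessel argument for central Pb-Pb at mid-rapidity
for alpha in (1, 2, 3):
    print(alpha, ive(alpha, N_tot) / ive(0, N_tot))
# prints roughly 0.98, 0.92, 0.84, matching the values quoted above
\end{verbatim}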
To understand the charm hadron yield dependence on the mass number A of the colliding nuclei we first determine the A dependence of $g_c$. From the charm balance Eqs.~\ref{eq:balance} and~\ref{eq:canonical} we note that $N_{c\bar{c}} \propto {\rm A^{4/3}}$, since charm is produced in hard collisions and we are interested in central nuclear collisions~\cite{dEnterria:2003xac}. Noting further that the volume $V \propto$ A, we immediately obtain that $g_c \propto {\rm A^{1/3}}$ in the grand-canonical limit. In the canonical limit, i.e., for small charm densities, one obtains $g_c \propto {\rm A^{-1/3}}$ using the properties of the modified Bessel functions near the origin (see Eqs.~\ref{eq:bessel} and \ref{eq:bessel1}). However, at LHC energies charm densities are not so small: the grand-canonical description is a good approximation for the heavier systems Xe-Xe and Kr-Kr, while for Ar-Ar canonical effects amount to a 20\% correction. The correction becomes large for the O-O system. In Fig.~\ref{fig:gc-scaling} we show the A dependence of $g_c$ as obtained by numerical solution of Eq.~\ref{eq:canonical}. The rather strong deviation from the ${\rm A^{1/3}}$ dependence observed for the O-O system is caused by the changes in the canonical correction factor due to the transition from grand-canonical to canonical thermodynamics, where the A dependence of $g_c$ is expected to approach the ${\rm A^{-1/3}}$ scaling as discussed above. For the rapidity range 2.5-4 the non-monotonic feature of the curves is more pronounced, as the system is deeper in the canonical regime, see Fig.~\ref{fig:canonical}. \begin{figure} \centering \includegraphics[scale=0.35]{./figs/gc_3cc_y0.pdf} \includegraphics[scale=0.35]{./figs/gc_3cc_y3.pdf} \vskip -0.4 cm \caption{The system-size (expressed as $\mathrm{A}^{1/3}$) dependence of the charm fugacity factor $g_c$ for the five different collision systems Pb-Pb, Xe-Xe, Kr-Kr, Ar-Ar, and O-O for rapidity $|y| < 0.5$ (left plot) and rapidity 2.5-4 (right plot). The band reflects the uncertainties of $\mathrm{d}\sigma_{c \bar c}/\mathrm{d} y$ indicated in the plots. For details see text.} \label{fig:gc-scaling} \end{figure} In Fig.~\ref{fig:canonical} we present the dependence on mass number A of the canonical correction factors $f_{can}$ for the production of a charm hadron $h^i$ in A-A collisions. They are defined as: \begin{equation} f_{can}(\alpha,{\rm A}) = \frac{I_{\alpha}(N_c^\text{tot}({\rm A}))}{I_0(N_c^\text{tot}({\rm A}))}. \label{eq:f_can} \end{equation} The curves on the left and right side are again obtained at rapidity $|y| < 0.5$ and rapidity 2.5-4, respectively. They are evaluated for charm hadrons with the expression given in equation~\ref{eq:canonical}. The A dependence of $g_c$ needs to be obtained numerically and is displayed in Fig.~\ref{fig:gc-scaling} above. \begin{figure} \centering \includegraphics[scale=0.35]{./figs/In2I0_charm_3cc_y0.pdf} \includegraphics[scale=0.35]{./figs/In2I0_charm_3cc_y3.pdf} \vskip -0.4 cm \caption{Canonical correction factors for the five different collision systems Pb-Pb, Xe-Xe, Kr-Kr, Ar-Ar, and O-O at mid-rapidity $|y| < 0.5$ (left panel) and forward rapidity 2.5-4 (right panel) for open flavor hadrons with charm quantum number C. The bands reflect the uncertainties of $\text{d} \sigma_{c \bar c}/\text{d} y$ as indicated in the figure.
For details see text.} \label{fig:canonical} \end{figure} With the A-dependence of $g_c$ and of the canonical correction factors at hand we can now compute the yield of any charmed hadron in the SHMc as a function of mass number A. In section~\ref{sec:results1} below we will present our results on yields and transverse momentum distributions. To get a more intuitive understanding of these results we assume, in the following, that the A dependence of $g_c$ can be described by the above grand-canonical relation $g_c \propto {\rm A^{1/3}}$. As can be seen from Fig.~\ref{fig:gc-scaling}, this is well fulfilled, at the better than 10\% (1\%) level, for A $\ge$ 40 (80). Keeping these small deviations in mind, we can provide a good estimate of the A dependence of charm hadron yields provided we stay with A $\ge$ 40, i.e., Ar-Ar collisions, by making use of Eq.~\ref{eq:yieldsoc} and the above-defined canonical suppression factors $f_{can}$. This leads to the scaling relation \begin{equation} \frac{\text{d} N^{\rm AA}}{\text{d} y}(h^i)=\frac{\text{d} N^{\rm PbPb}}{\text{d} y}(h^i) \left(\frac{{\rm A}}{208}\right)^{(\alpha+3)/3} \frac{f_{can}(\alpha,{\rm A})}{f_{can}(\alpha,{\rm Pb})} \label{eq:scaling} \end{equation} for the production of a hadron $h^i$ with $\alpha$ charm quarks in A-A collision systems. Using this relation and the yields for charm hadrons produced in Pb-Pb collisions as displayed in Table~\ref{tab:yields_tot} (see section~\ref{sec:results1} below), the yields of charm hadrons can be computed for lighter systems from Ar-Ar to Xe-Xe. For very light systems such as O-O the full approach as discussed above should always be used. In Fig.~\ref{fig:yields_a} the system size dependence of selected hadron yields is displayed for mid-rapidity (left panel) and forward rapidity (right panel). The band for each hadron species corresponds to the different charm production cross sections indicated in the figure. Note the change in A dependence for open and hidden charm states as a consequence of the absence of canonical suppression for the latter (compare Eqs.~\ref{eq:yieldshc} and \ref{eq:yieldsoc} above). \subsection{The canonical volume} \label{sec:can_vol} The volume $V$ appearing in Eq.~\ref{eq:balance} is usually set equal to the fireball volume at chemical freeze-out, determined by the requirement that the measured rapidity density of charged particles divided by $V$ equals the thermal density of charged particles after strong decays at chemical freeze-out~\cite{Andronic:2017pug}. Employing a connection between momentum rapidity and space-time rapidity, this volume, corresponding to one unit of rapidity, is a fraction of the entire fireball. To consider such a sub-volume is meaningful since, at high collision energies, equilibration is achieved only locally and not globally. This leads to the picture at freeze-out of a string of fireballs lined up in rapidity and filling the entire gap between the rapidities of the two beams (or between beam and target in fixed-target mode). The thermal parameters of these fireballs could differ, albeit at the LHC we expect a slow variation with rapidity. Only at low collision energies (AGS energy and below) should one think of one globally thermalized system. We note in this context that in~\cite{Becattini:2005hb} it was assumed that the fireball volume comprises all rapidities up to but excluding beam and target rapidities, and hence is significantly larger than what is discussed here.
\begin{figure} \centering \includegraphics[scale=0.35]{./figs/Yields_charm_A_3cc_y0.pdf} \includegraphics[scale=0.35]{./figs/Yields_charm_A_3cc_y3.pdf} \vskip -0.4 cm \caption{System size dependence of selected hadron species for mid-rapidity $|y| < 0.5$ (left panel) and forward rapidity 2.5-4 (right panel).} \label{fig:yields_a} \end{figure} When computing the canonical suppression factor $f_{can}$ defined in Eq.~\ref{eq:f_can}, a new scale enters the problem. To obtain the argument of the Bessel functions, the differential cross section or multiplicity needs to be multiplied by the width of a rapidity interval $\Delta y$, which can then be associated with a canonical volume $V_{can}$ over which the relevant quantum number is conserved. For the conservation of baryon number we have recently learned, in the context of net-proton fluctuations, that this volume $V_{can}$ may be significantly larger, not smaller, than $V$~\cite{Braun-Munzinger:2019yxj,Acharya:2019izy}. Very recent results concerning canonical strangeness suppression~\cite{Cleymans:2020fsc} at the LHC also point in that direction. Since charm quarks are all produced in the very early phase of the collision, we could expect that the canonical volume for charm $V_{can}$ is similarly large, implying a reduced role of canonical suppression and yields larger than those computed with $V_{can} = V$. This would affect in particular the predicted yields for multi\nobreakdash-charm hadrons from lighter collision systems such as Ar-Ar or O-O. In the numbers given below for (multiple) charm production yields, canonical suppression is included. To stay on the conservative side, and in the absence of measurements of $V_{can}$ for charm, we have in the following employed only one volume setting, $V_{can} = V$, implying that the canonical corrections for the smallest collision systems could turn out less severe when more information on $V_{can}$ becomes available. \subsection{Charm hadron production and deconfinement of charm quarks} \label{sec:deconfinement} Early on it was realized~\cite{Andronic:2003zv,Andronic:2007bi,BraunMunzinger:2009ih} that a successful description of the measured yields of charmonia in the SHMc would imply deconfinement for charm quarks. The measurements at RHIC and, in particular, LHC energy lend support to this interpretation~\cite{Andronic:2017pug}. Here we briefly discuss what could be learned about deconfinement from an analysis of multi-charm meson and, in particular, baryon production data. In the SHMc the production of hadrons with $\alpha$ charm quarks is enhanced by a factor $(g_c)^{\alpha}$ compared to what is expected in a purely thermal approach, see Eq.~\ref{eq:yieldsoc}. Since $g_c \approx 30$ for central Pb-Pb collisions, the expected enhancements for multi-charm hadron production are very substantial and produce a distinctive hierarchy in their yield pattern, as shown below. That pattern results only if the charm quarks making up the final hadron are uncorrelated prior to hadronization, as is expected for fully deconfined ('no strings attached') charm quarks. We note that even the residual correlation imposed by overall baryon number and charm conservation will be very small if the measurement window is of order one unit in rapidity~\cite{Acharya:2019izy}. Production of multi-charm hadrons in the (confined) hadronic phase would also be very small, as it would necessarily have to involve exotic multi-particle collisions.
To illustrate this point, the following estimates are based on energy conservation and on masses of 4.8 GeV for $\Omega_{ccc}$~\cite{Zhao:2020jqu} and 3.62 GeV for $\Xi_{cc}$~\cite{Zyla:2020zbs}. For the most exotic case of $\Omega_{ccc}$ production a possible production path is via collisions such as $3D + m\pi \rightarrow \bar{p} + \Omega_{ccc}$ with $m$ = 3. For the $\Xi_{cc}$ baryon the analogous rate equation reads $2D + m\pi \rightarrow \bar{p} + \Xi_{cc}$ with $m$ = 7. But many other processes such as $\Lambda_c + D\rightarrow \Xi_{cc} + \pi$ or $\Lambda_c+ 2D\rightarrow \Omega_{ccc} + \pi$ are imaginable. While the rates for all these processes will be enhanced compared to purely thermal estimates by fugacity factors $(g_c)^{\alpha}$, they will, nevertheless, be very small because of the low $D$-meson and $\Lambda_c$ densities of $1.2 \cdot 10^{-3}\,\text{fm}^{-3}$ (for $D^0$, the highest among the $D$ mesons) and $2.6 \cdot 10^{-4}\,\text{fm}^{-3}$, respectively, for $g_c = 29.6$ at chemical freeze-out, which enter at the same power of $\alpha$. These rates will fall very rapidly with temperature during the hadronic expansion~\cite{BraunMunzinger:2003zz}. Also, the phase after chemical freeze-out is by construction not in equilibrium. How to constrain the rate for such multi-particle collisions is totally unclear due to the unknown amplitudes for these different possible many-body collision processes. Similar arguments apply for charmonia, where the dominant channel would be $D + \bar{D} \rightarrow J/\psi + \pi$. Here, even the extension to $\psi'$ involves at least one more unknown parameter. This is to be contrasted with the SHMc approach, where there are no free parameters. The experimental observation of a significant number of hadrons with multiple charm in relativistic nuclear collisions hence provides a unique opportunity to test the 'deconfinement' prediction and to get quantitative information on the degree of deconfinement achieved in the hot fireball. The full predictions of the model, including the contribution from the low-density corona, are presented for a selection of species in Table~\ref{tab:yields_tot} for Pb-Pb collisions at 5.02 TeV, for the 0-10\% and 30-50\% centralities (mid-rapidity values). For these hadrons, the production cross sections in pp collisions have recently been measured by ALICE at mid-rapidity \cite{Acharya:2021cqv,Acharya:2019mgn,Acharya:2020lrg,Acharya:2019lkw}, and those are employed for the calculation of the corona component (we have employed the ratio $\psi(2S)/(J/\psi)$=0.15 \cite{Andronic:2017pug}). The model predictions for the core part for all systems and the two rapidity ranges are available in numerical form as an auxiliary file with the arXiv version of the publication. \section{Charm hadron spectrum and SHMc} \label{sec:SHMc_spec} The spectrum of open charm hadrons incorporated in the SHMc includes all mesons and baryons established experimentally as given by the PDG \cite{Zyla:2020zbs}. This includes 27 D mesons and their anti-particles with angular momenta from 0 to 3 and masses up to 3 GeV. There are 36 established singly-charmed baryons and as many anti-baryons in the mass range up to 3.12 GeV. The known angular momenta are low, mostly 1/2 and 3/2, with one established 5/2 state. The thermal population of excited charmed hadrons is strong enough that the density of the ground state $D^0$ is quadrupled due to feeding from strong decays, while the $\Lambda_c$ density is increased by a factor of 5 due to feeding.
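For orientation, the densities quoted above can be cross-checked against the primary Boltzmann densities $n^{\rm th} = g\, m^2 T K_2(m/T)/(2\pi^2)$ scaled by $g_c$; a short sketch (PDG mass for $D^0$; the factor of 4 from feeding is the one quoted above):
\begin{verbatim}
# Sketch: primary thermal density of the D0 at T_cf, scaled by g_c.
import numpy as np
from scipy.special import kn

hbarc = 197.327        # MeV fm
T, gc = 156.5, 29.6    # freeze-out temperature (MeV), charm fugacity

def n_boltzmann(m, g_spin=1):
    """Boltzmann density g m^2 T K_2(m/T) / (2 pi^2), in fm^-3."""
    return g_spin * m**2 * T * kn(2, m / T) / (2 * np.pi**2 * hbarc**3)

n_D0 = gc * n_boltzmann(1864.84)   # ~3e-4 fm^-3 primary density
# with the factor ~4 from strong-decay feeding: ~1.2e-3 fm^-3
\end{verbatim}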
There has been discussion recently that the number of charmed baryons, in particular, could be significantly larger. Fourth-order susceptibilities were constructed and evaluated in lQCD calculations \cite{Bazavov:2014yba} and compared to results from HRG calculations of the same quantities in the temperature range up to the pseudo-critical temperature. The ratios were chosen such that they are particularly sensitive to contributions from the charmed baryon sector in the HRG. It was found that the lQCD results are significantly (at least 40\%) above the HRG calculation based on the states established by the PDG in 2012, while adding to the HRG the charmed baryon states obtained from a lQCD calculation \cite{Padmanath:2013bla} resulted in good agreement up to the pseudo-critical temperature. The authors of \cite{Bazavov:2014yba} view this as evidence for so far unobserved charmed hadrons contributing to the thermodynamics in the crossover region. Indeed, while the spectrum of \cite{Padmanath:2013bla} is consistent with the number of known states in the mass range above the respective ground state, about 200 additional baryons with total angular momenta up to 7/2 are predicted. Most of these states are significantly higher in mass. For the positive-parity states there is a mass gap of about 500-600 MeV, while the gap is only of the order of 400 MeV for the negative-parity states (which are generally about 300 MeV higher in mass). The situation is only different for the negative-parity $\Xi_c$ states, where the new states start right at the mass of the highest experimentally established state at 3123 MeV. Accordingly, at a freeze-out temperature $T_{cf}= 156.5$ MeV the thermal weights are significantly lower. Still, due to their large number and in part also higher degeneracy factors, the feeding of ground-state charmed baryons could be significantly affected. In this context it is interesting to note that a wealth of new XYZ states was found at the LHC, while only 1 additional $\Lambda_c$, 2 $\Xi_c$ and 5 $\Omega_c$ states were newly discovered (compare, e.g., the PDG2012 and PDG2020 compilations). Triggered by the surprisingly large fragmentation of charm into $\Lambda_c$ measured in pp collisions at 7 and 5.02 TeV by the ALICE collaboration \cite{Acharya:2017kfy,Acharya:2020uqi,Acharya:2020lrg}, He and Rapp \cite{He:2019tik} incorporated into a SHM calculation a hadron spectrum resulting from a relativistic quark model (RQM) calculation \cite{Ebert:2011kk}, exhibiting a very large number of additional charmed baryons with angular momenta up to 11/2 and both parities. The additional charmed baryons from the RQM calculation have by and large smaller masses than those resulting from lQCD \cite{Padmanath:2013bla}, falling in part even into the mass range of the known states. Using this charmed baryon spectrum and a temperature of 170 MeV, the authors of \cite{He:2019tik} find a doubling of the $\Lambda_c$ ground-state population as compared to the PDG spectrum and predict a yield in line with the ALICE experimental data. It should be noted that this poses a conceptual problem, because it implies that charmed baryons exist at a temperature significantly above the pseudo-critical temperature for the chiral phase transition, while this is explicitly not supported by lQCD calculations.
In \cite{Bazavov:2014yba} it is argued that cumulants of net charm fluctuations indicate that above $T_{pc}$ the charm degrees of freedom are no longer described by an uncorrelated gas of charmed hadrons, but that rather the emergence of deconfined charm states sets in just near the chiral cross-over transition. On the other hand, Petreczky \cite{Petreczky:2020olb} notes that the ratio of fourth order baryon-charm susceptibilities around and above the pseudo-critical temperature of the chiral transition is much above the HRG values but still below the free quark gas value, a fact that could be understood if charm-hadron-like excitations still existed above $T_{pc}$, possibly up to 200 MeV. This is not the baseline of the predictions of this publication, where deconfinement of all flavors at $T_{pc}$ is assumed. The predictions presented below will provide a stringent test of charm deconfinement and settle this discussion once a large enough dynamic range in mass and charm quantum number is covered by experimental data. Finally, we quote recent lQCD results \cite{Lorenz:2020uik}, where comparisons of Euclidean correlators to perturbative spectral functions were found to be indicative of charmonium melting very close to $T_{pc}$. While the questions raised here are debated in the community, we want to give an indication in this publication of how the SHMc predictions given below would be affected by a large number of yet undiscovered charmed baryons behaving like simple resonances. To this end we have also performed calculations in which the statistical weight of all excited charmed baryons was tripled; the corresponding change in the SHMc predictions is given in section \ref{sec:results1}, where hadron yields are presented. It should also be noted that, even if the above plethora of charmed baryons exists, a treatment as simple resonances in the SHMc could be too naive, and a situation could arise similar to that in the light quark sector. In a recent study~\cite{Andronic:2020iyg}, the SHM was augmented by 180 non-strange and 300 strange baryons predicted by lQCD. When they were treated as simple additional resonances, their presence had a significant impact on the proton yield in particular, strongly deteriorating the agreement with experimental data. Proper treatment of the pion-nucleon interaction by the S-matrix approach, using all measured phase shifts \cite{Andronic:2018qqt}, completely cancelled the effect of these additional states. This strong effect of the S-matrix approach could be traced \cite{Lo:2017lym} to non-resonant and repulsive components in the pion-nucleon interaction for some partial waves. Whether such a situation could arise in the charm baryon sector depends, among other things, on the widths of the additional states, and is currently completely unexplored. Here, we have assumed that all additional resonances are narrow Breit-Wigner-type resonances. \section{Transverse momentum spectra of charm hadrons} \label{sec:SHMc_pt} In the SHM, which is fitted to integrated particle yields, no assumption is made about the form of the momentum spectra of produced particles. The transverse momentum dependence must therefore be supplied by additional modelling of the particle freeze-out. 
In hydrodynamical modelling of heavy ion collisions the soft momentum part of the particle spectra is obtained from the Cooper-Frye~\cite{Cooper:1974mv} integral over the freeze-out surface, followed by a hadronic afterburner that performs resonance decays and possible hadronic rescattering. The blast-wave model~\cite{Schnedermann:1993ws,Florkowski:2010zz} is motivated by the same physics picture, but realized in a simpler, approximate way to generate the $\ensuremath{p_{\text{T}}}\xspace $ spectra. The thermal particle spectra are obtained from a simple freeze-out surface with a given freeze-out temperature and a parametrized radial velocity profile. This thermal blast-wave model has been used extensively in the past to fit and characterize experimentally measured identified particle spectra~\cite{Abelev:2013vea,Acharya:2019yoi,Acharya:2020zji,Acharya:2018orn}. For boost-invariant and azimuthally symmetric freeze-out surfaces $d\sigma_\mu$, the Cooper-Frye integral can be reduced to a one-dimensional integral along the freeze-out contour in the $\tau$-$r$ plane~\cite{Schnedermann:1993ws,Florkowski:2010zz}: \begin{align}\label{eq:Cooper-Frye} & \frac{\mathrm{d}^2 N}{2\pi \ensuremath{p_{\text{T}}}\xspace d\ensuremath{p_{\text{T}}}\xspace dy} =\frac{2J+1}{(2\pi)^3}\int \mathrm{d} \sigma_\mu p^\mu f(p)\nonumber\\ &= \frac{2J+1}{(2\pi)^3} \int_0^{r_\text{max}}\!\! \mathrm{d} r \; \tau(r) r \left[ K^\text{eq}_1(\ensuremath{p_{\text{T}}}\xspace ,u^r) - \frac{\partial \tau}{\partial r} K^\text{eq}_2(\ensuremath{p_{\text{T}}}\xspace ,u^r) \right], \end{align} where $2J+1$ accounts for the spin degeneracy. Here we consider a freeze-out surface defined by a single-valued function $\tau(r)$ in the range $0<r<r_\text{max}$. The freeze-out kernels $K^\text{eq}_{1,2}(\ensuremath{p_{\text{T}}}\xspace ,u^r)$ can be calculated analytically for the Boltzmann distribution $f(p) = \exp(-\sqrt{m^2+p^2}/T)$ of initial particles on the freeze-out surface and take the well-known form in terms of modified Bessel functions~\cite{Schnedermann:1993ws,Florkowski:2010zz} \begin{align}\label{eq:thkernel} \begin{split} K^\text{eq}_1(\ensuremath{p_{\text{T}}}\xspace , u^r)& = 4\pi m_\text{T} I_0\left(\frac{\ensuremath{p_{\text{T}}}\xspace u^r}{T}\right)K_1\left(\frac{m_\text{T} u^\tau}{T}\right)\\ K^\text{eq}_2(\ensuremath{p_{\text{T}}}\xspace , u^r)& = 4\pi \ensuremath{p_{\text{T}}}\xspace I_1\left(\frac{\ensuremath{p_{\text{T}}}\xspace u^r}{T}\right)K_0\left(\frac{m_\text{T} u^\tau}{T}\right), \end{split} \end{align} where $m_\text{T}=\sqrt{m^2+\ensuremath{p_{\text{T}}}\xspace ^2}$ and $T$ is the (constant) freeze-out temperature. The 4-velocity $u^r = \beta/\sqrt{1-\beta^2}$ is given in terms of the radial velocity $\beta(r)$, which is commonly parametrized by a power function with two parameters $\beta_\text{max}$ and $n$ \begin{equation} \beta(r) = \beta_\text{max}\frac{r^n}{r_\text{max}^n}.\label{eq:beta} \end{equation} In this paper the spectra of charmed hadrons formed in the core, i.e. by hadronization of the hot QGP fireball, are evaluated using the velocity profile from the (3+1)D viscous hydrodynamics code MUSIC with IP-Glasma initial conditions, tuned to the light flavor hadron observables~\cite{Schenke:2010nt,Schenke:2012wb}. The velocity profile and the best fit with $\beta_\text{max}=0.62$ and $n=0.85$ for the 0-10\% centrality bin are shown in Fig.~\ref{fig:plotv} (we use $\beta_\text{max}=0.60$ and $n=0.85$ for the 30-50\% centrality bin). 
The fit uncertainties of the parameters $\beta_{\text{max}}$ and $n$ are 0.005 and 0.05, respectively. \begin{figure} \centering \includegraphics[scale=0.35]{./figs/Beta_R_midy_v2.pdf} \caption{Radial velocity profile on the freeze-out surface extracted from hydrodynamic simulations of central Pb-Pb collisions.} \label{fig:plotv} \end{figure} Different types of freeze-out surfaces have been used in the past, for example the constant Bjorken time freeze-out surface introduced in ref.~\cite{Schnedermann:1993ws} \begin{align} \tau(r) = \tau_\text{fo} \end{align} or the constant proper time surface of~\cite{Broniowski:2001uk} \begin{align} \tau(r)&=\sqrt{\tau_\text{fo}^2+r^2}. \end{align} In ref.~\cite{Broniowski:2001uk} the velocity flow was restricted to be Hubble-like, $u^\mu=x^\mu/\tau_\text{fo}$, and parallel to the normal of the surface. For the parametrized velocity in Eq.~\ref{eq:beta}, $u^\mu$ is no longer proportional to $d\sigma^\mu$. However, one can consider a third type of surface for which this condition still holds: $\tau(r) = \tau_\text{fo}+\int_0^r dr'\beta(r')$, and using Eq.~\ref{eq:beta} we get \begin{equation} \tau(r)=\tau_\text{fo} + \frac{r\beta(r)}{n+1}. \end{equation} The three freeze-out surfaces are depicted in Fig.~\ref{fig:contour} (left). Without loss of generality, the freeze-out time is taken to be equal to $\tau_\text{fo}=r_\text{max}$, and $r_\text{max}$ itself can be determined by requiring the freeze-out volume per unit rapidity \begin{align} V &= 2\pi\int_0^{r_\text{max}}\! \mathrm{d} r \; r \tau(r)u^\tau\left[1 - \beta(r)\frac{\partial \tau}{\partial r}\right] \end{align} to be equal to a given value, e.g. $V=4997\,\text{fm}^3$ in central Pb-Pb collisions. Note, however, that the integration variable $r$ can be rescaled to $ x = r/r_\text{max}$, with the result that $r_\text{max}^3$ appears as a normalization in front of the integral. Since we replace the overall normalization by that obtained from the SHMc, knowledge of $r_\text{max}$ is not required, and the only parameters left are the dimensionless parameters $\beta_{\text{max}}$ and $n$, as discussed above. As we did in a previous publication for the J/$\psi$ spectrum~\cite{Andronic:2019wva}, the spectra for various charmed hadrons are computed using this velocity profile as input to a blast-wave parameterization in terms of the temperature, the flow velocity profile and the mass of the hadron. The temperature we use is the chemical freeze-out temperature $T_{cf} = 156.5\,\text{MeV}$ obtained from fitting the yields of light flavor hadrons and nuclei as measured by ALICE for Pb-Pb collisions at $\sqrt{s_{\mathrm{NN}}}$ = 2.76 TeV~\cite{Andronic:2017pug,Andronic:2018qqt}. We studied the effects of the uncertainties of the blast-wave parameters $\beta_{\text{max}}$ and $n$ on the hadron spectra; the resulting variations in the spectra are less than 10\%, and in the ratios to $D^0$ less than 3\%. In Fig.~\ref{fig:contour} (right) we show the $D^{0}$ spectra for the three freeze-out surfaces. We see that the difference in the absolute spectra is small and lies within the uncertainty band, which at these low momenta is mostly due to the uncertainty in $g_c$. In addition, given the still large experimental uncertainties, we do not expect the precise form of the freeze-out surface to be the most important factor, and we will use the constant freeze-out time surface as the default choice. We emphasize here that for particle ratios, e.g. $\ensuremath{\Lambda_{\text{c}}}\xspace/D^0$, even this small difference mostly cancels. 
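For concreteness, the blast-wave spectrum of Eqs.~(\ref{eq:Cooper-Frye})--(\ref{eq:beta}) for the default constant freeze-out time surface (for which $\partial\tau/\partial r=0$, so only $K^\text{eq}_1$ contributes) can be coded in a few lines. The sketch below is an illustration rather than the analysis code used here; the radial scale is arbitrary because the overall normalization is replaced by the SHMc yield, as just explained.
\begin{verbatim}
import numpy as np
from scipy.special import iv, kv     # modified Bessel I_n and K_n
from scipy.integrate import quad

T, beta_max, n = 0.1565, 0.62, 0.85  # GeV; 0-10% values from the text

def blast_wave(pT, m, J=0, r_max=1.0, tau_fo=1.0):
    """dN/(2 pi pT dpT dy) up to an overall constant, for a
    constant-tau freeze-out surface (d tau / d r = 0)."""
    mT = np.sqrt(m**2 + pT**2)
    def integrand(r):
        beta = beta_max * (r / r_max)**n
        ur   = beta / np.sqrt(1.0 - beta**2)     # u^r
        utau = 1.0 / np.sqrt(1.0 - beta**2)      # u^tau
        K1eq = 4*np.pi * mT * iv(0, pT*ur/T) * kv(1, mT*utau/T)
        return tau_fo * r * K1eq
    return (2*J + 1) / (2*np.pi)**3 * quad(integrand, 0.0, r_max)[0]

# normalized D0 spectral shape at a few pT values (GeV)
s = [blast_wave(pT, m=1.8648) for pT in (0.5, 1.0, 2.0, 3.0)]
print([x / s[0] for x in s])
\end{verbatim}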
\begin{figure} \centering \includegraphics[width=0.49\linewidth]{figs/Cent010/SurfaceComparison_Model.pdf} \includegraphics[width=0.49\linewidth]{figs/Cent010/SurfaceComparison_Spectra.pdf} \caption{Left: freeze-out surface comparison, where $\tau_\text{fo}=r_\text{max}$. Right: $D^0$ spectra for the different freeze-out surfaces. The shaded band is due to the normalization uncertainty in $g_c$. Experimentally measured points and their uncertainties~\cite{Acharya:2018hre} are shown for reference. } \label{fig:contour} \end{figure} One of the limitations of the standard blast-wave model is that it does not include the momentum modification of particle spectra due to the feed-down caused by resonance decays. Recently, a very efficient way of computing such modifications was derived~\cite{Mazeliauskas:2018irt} and applied in blast-wave fits with resonance decay feed-down~\cite{Mazeliauskas:2019ifr} and in hydrodynamic simulations~\cite{Devetak:2019lsk}. Here we compute the momentum resolved decay feed-down to long lived charmed mesons and baryons using the \texttt{FastReso} computer code~\cite{FastReso}. In total we perform calculations for 76 two-body and 10 three-body decays of charmed mesons and baryons. In practice, this procedure replaces the thermal Boltzmann freeze-out kernels in Eq.~\ref{eq:Cooper-Frye} with numerically computed total final particle kernels. We use the same temperature and radial velocity profiles as in the standard blast-wave model. In Fig.~\ref{fig:FdCorrection} (left) we show the ratios of the full decay spectra of charmed hadrons to their initial thermal spectra. In addition, in Fig.~\ref{fig:FdCorrection}~(right) we show selected decay-channel contributions to the $\ensuremath{\Lambda_{\text{c}}}\xspace$ spectrum. The feed-down contributions accumulate preferentially at low momentum and can be as large as 5 times the thermal spectrum for $\ensuremath{\Lambda_{\text{c}}}\xspace$. The dashed lines in Fig.~\ref{fig:FdCorrection} (left) show the ratio of the full over the thermal $\ensuremath{p_{\text{T}}}\xspace $-integrated yields in the SHMc. These feed-down factors were used previously to scale the thermal spectra without accounting for the $\ensuremath{p_{\text{T}}}\xspace $ dependence of the feed-down. One can see rather good agreement between the naive and the exact scaling of the spectra for $\ensuremath{p_{\text{T}}}\xspace \lesssim 3\,\text{GeV}$, where most of the particles are. As low momentum is the only region where core charmed hadron production is dominant, we find in practice very small differences between the full decay spectra and the scaled thermal spectra in this momentum range. Nevertheless, in the plots below we will use the spectra obtained with the decay kernels from \texttt{FastReso}. \begin{figure} \centering \includegraphics[width=.49\linewidth]{./figs/RatiosToThermal.pdf} \includegraphics[width=.49\linewidth]{./figs/LcPartial.pdf} \caption{Left: ratios of different particle spectra with feed-down contribution to thermal spectra (note that the corona contribution is not included here). Dashed lines correspond to the ratio of integrated yields (these ratios were previously used to scale thermal spectra in the SHMc). Right: feed-down contribution to $\Lambda_c^+$ from different decay channels. For details see text.} \label{fig:FdCorrection} \end{figure} Finally, the high momentum power-law tail observed in experimental particle spectra is not described by hydrodynamics. Instead, it can be modelled using a core-corona picture~\cite{Andronic:2019wva}. 
Even in nucleus-nucleus collisions at small impact parameter, a number of nucleon-nucleon collisions take place in the so-called corona region, where the overlap density is a small fraction of the maximum density achieved in the collision. In this overlap volume, where nucleons undergo on average one collision or less, we assume that no QGP is formed and hence treat the collisions as $\ensuremath{\rm pp}$-like. In the core part, on the contrary, we assume full thermalization of the produced charm quarks. We define the corona as the region where the density is below 10\% of the central density $\rho_0$; in a heavy nucleus at rest the central nucleon number density is $\rho_0 = 0.16\,{\rm fm}^{-3}$. The \ensuremath{p_{\text{T}}}\xspace shape of the cross section measured in $\ensuremath{\rm pp}$ collisions is parametrized by \begin{equation} \frac{\mathrm{d}^2\sigma^{\ensuremath{\rm pp}}}{\mathrm{d} y \mathrm{d}\ensuremath{p_{\text{T}}}\xspace } = C \times \frac{\ensuremath{p_{\text{T}}}\xspace}{(1+(\ensuremath{p_{\text{T}}}\xspace/p_0)^2)^n}, \label{eq:ppFit} \end{equation} where the coefficients $C$, $p_0$ and $n$ are obtained from a fit to the experimental distributions for each particle species~\cite{Acharya:2021cqv, Acharya:2019mgn, Acharya:2020lrg}, and the total integral of the function is set to the experimentally measured integrated cross section $\mathrm{d}\sigma/\mathrm{d} y$. The fit is found to describe the measured cross sections well within the uncertainties over the whole \ensuremath{p_{\text{T}}}\xspace range considered. We then scale the $\ensuremath{\rm pp}$ differential cross section by the overlap function $T_\text{AA}^\text{corona}$ to account for the number of binary nucleon-nucleon collisions in the corona. In summary, for each of the charmed hadrons under consideration the \ensuremath{p_{\text{T}}}\xspace{} spectra are obtained by summing the soft momentum spectrum from the blast-wave model with resonance decays and the high momentum tail from the corona part. The uncertainty bands are obtained by varying $g_c$. In addition, the uncertainty on the corona part also includes the uncertainty of the fit to the pp data~\cite{Acharya:2021cqv,Acharya:2019mgn,Acharya:2020lrg}. This uncertainty is assumed to be uncorrelated between different particle species and is the dominant source of uncertainty for the particle spectra and their ratios at high \ensuremath{p_{\text{T}}}\xspace, although it cancels in $R_\text{AA}$. \section{Results for Pb-Pb and lighter collision systems} \label{sec:results1} \begin{figure*} \centering \includegraphics[width=.49\linewidth]{figs/Cent010/Spectra_D0AndLambda.pdf} \includegraphics[width=.49\linewidth]{figs/Cent010/Raa_D0AndLambda.pdf} \caption{Spectra (left) and $R_{\rm AA}$ (right) of $\ensuremath{\text{D}^{\text{0}}}\xspace$ mesons (top) and $\Lambda_{\rm c}$ baryons (bottom) in Pb-Pb collisions at \cme{5.02} and 0-10\% centrality. Pb-Pb data for D-meson distributions taken from~\cite{Acharya:2018hre}. The pp data needed to compute the corona part are taken from~\cite{Acharya:2021cqv,Acharya:2020lrg}. The widths of the model bands at low and high \ensuremath{p_{\text{T}}}\xspace are driven by the uncertainties of $g_{c}$ and of the pp spectra fits, respectively, as described in the text.} \label{fig:spectra_1} \end{figure*} In the following we describe predictions from the SHMc as well as the comparison of SHMc results with the currently available data. 
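Before turning to the comparisons, we note that the corona component just described can be illustrated compactly: integrating Eq.~(\ref{eq:ppFit}) over $\ensuremath{p_{\text{T}}}\xspace$ gives $C p_0^2/[2(n-1)]$, so the normalization is fixed in closed form by $C = 2(n-1)\,(\mathrm{d}\sigma/\mathrm{d} y)/p_0^2$. A minimal sketch (an illustration only; the values of $p_0$ and $n$ below are placeholders, not fit results):
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

TAA_corona = 0.90   # mb^-1 for 0-10%, as quoted in the text

def corona_dN_dy_dpT(pT, dsig_dy, p0, n):
    """Corona contribution d^2N/(dy dpT): the pp fit of Eq. (ppFit),
    normalized to dsigma/dy (in mb) and scaled by T_AA^corona."""
    C = 2.0 * (n - 1.0) * dsig_dy / p0**2   # closed-form normalization
    return TAA_corona * C * pT / (1.0 + (pT / p0)**2)**n

# consistency check: the fit integrates back to T_AA * dsigma/dy
# (p0 = 3 GeV and n = 3.5 are placeholder values)
val, _ = quad(lambda x: corona_dN_dy_dpT(x, 1.0, 3.0, 3.5), 0.0, np.inf)
print(val, TAA_corona * 1.0)
\end{verbatim}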
For simplicity, we will only consider Pb-Pb collisions at \cme{5.02} and 0-10\% centrality; predictions for 30-50\% centrality are given in Appendix~\ref{sec:SemiCentralPredictions}. The model predictions for all particle species and the two centrality bins are available in numerical form as an auxiliary file with the arXiv version of the publication. By far the best series of experiments exists for $D$ mesons produced in Pb-Pb collisions, see~\cite{Acharya:2018hre}. \subsection{Transverse momentum distributions} In Fig.~\ref{fig:spectra_1} we show the comparison between the SHMc predictions and data for the spectra and the nuclear modification factor $R_{AA}$ as functions of transverse momentum $p_{\rm T}$. The transverse momentum dependence is obtained as explained in detail in section~\ref{sec:SHMc_pt} above. Note that no new parameters are used here apart from the hydrodynamics input discussed in section~\ref{sec:SHMc_pt}. The transverse momentum spectrum of $D^0$ mesons is very well described, in particular in the purely thermal (``core") region $p_{\rm T} \le 4$ GeV. In the transition region between core and corona, as well as for the high momentum tail, we notice that the data are under-predicted, for both the $p_{\rm T}$ spectrum and the $R_{AA}$. This suggests that the corona description is somewhat schematic and could be further optimized. The corresponding distributions for the $\Lambda_c$ baryon are displayed in the lower panels of Fig.~\ref{fig:spectra_1}. We note that these spectra and distributions are obtained with the unmodified charm resonance spectrum discussed below. \begin{figure*} \centering \includegraphics[width=0.9\textwidth]{./figs/Cent010/RatiosToD0_MultiPanels.pdf} \caption{Ratios of charmed hadron spectra, normalized to the $D^0$ spectrum, from SHMc + FastReso + corona in Pb-Pb collisions at \cme{5.02} and 0-10\% centrality, in comparison to ALICE data~\cite{Acharya:2018hre}. The pp data needed to compute the corona part are taken from~\cite{Acharya:2021cqv,Acharya:2019mgn,Acharya:2020lrg}. The widths of the model bands at low and high \ensuremath{p_{\text{T}}}\xspace are driven by the uncertainties of $g_{c}$ and of the pp spectra fits, respectively, as described in the text.} \label{fig:spectra_2} \end{figure*} In Fig.~\ref{fig:spectra_2} we show the corresponding distributions for $D^+$, $D^{*+}$, $D^{+}_{s}$ and $\Lambda_c$, plotted as ratios to the $D^0$ spectrum. In this normalized plot the charm cross section, which determines the charm fugacity parameter $g_c$, is eliminated. For the three D-mesons we observe very good agreement with the experimental observations. For the $\Lambda_c$ baryon the structure of the distribution changes quite strongly: a clear maximum appears near $p_{\rm T} = 4.5$ GeV. Within the framework of the SHMc this maximum appears as a consequence of the superposition of collective flow (hydrodynamic expansion) and the change of hadronization regime from bulk (statistical hadronization) to jets, much as is observed for the $\Lambda/K$ ratio in the (u,d,s) sector~\cite{Abelev:2013xaa}. 
\subsection{Integrated yields} In this section we discuss results for momentum integrated particle yields, which, for the constant temperature freeze-out assumed in the SHMc, do not depend on the details of the freeze-out surface and velocity parametrizations discussed in section~\ref{sec:SHMc_pt}. \begin{figure} \centering \includegraphics[width=0.49\textwidth]{./figs/Yields_charm_Pb-Pb_y0.pdf} \includegraphics[width=0.49\textwidth]{./figs/Yields2df_charm_Pb-Pb_y0.pdf} \vskip -0.4 cm \caption{Mass dependence of yields \ensuremath{\der N / \der y}~ for various hadron species for Pb-Pb collisions at mid-rapidity. The left panel is for absolute yields and the right panel is for yields per degree of freedom ($2J+1$). In this plot the primordial (prior to decays) values are also shown as lines, corresponding to hadrons with charm-quark or anti-quark content of 0, 1, 2, and 3 (respective powers of $g_c$).} \label{fig:yields_m} \end{figure} \begin{figure} \centering \includegraphics[scale=0.4]{./figs/YieldsToT_charm_Pb-Pb_y0.pdf} \vskip -0.4 cm \caption{Total (core+corona) yields \ensuremath{\der N / \der y}~ for various hadron species for central (0-10\%) Pb-Pb collisions at mid-rapidity. Red points correspond to the standard mass spectrum and total open charm cross section as discussed in the text. The open points were obtained with an enhanced total open charm cross section, implemented via tripled statistical weights for excited charmed baryons. For more details see text.} \label{fig:yields_tot} \end{figure} \begin{table*} \begin{tabular}{l | l l l } Particle & $\mathrm{d} N/\mathrm{d} y$ core (SHMc) & \, $\mathrm{d} N/\mathrm{d} y$ corona & $\mathrm{d} N/\mathrm{d} y$ total \\ \hline \hline & \multicolumn{3}{c}{0-10\%} \\ \hline $D^{0}$ & 6.02 $\pm$ 1.07 & \, 0.396 $\pm$ 0.032 & 6.42 $\pm$ 1.07 \\ $D^{+}$ & 2.67 $\pm$ 0.47 & \, 0.175 $\pm$ 0.026 & 2.84 $\pm$ 0.47 \\ $D^{*+}$ & 2.36 $\pm$ 0.42 & \, 0.160 +0.048$-$0.022 & 2.52 $\pm$ 0.42 \\ $D_{s}^{+}$ & 2.15 $\pm$ 0.38 & \, 0.074 +0.024$-$0.015 & 2.22 $\pm$ 0.38 \\ $\Lambda_{c}^{+}$ & 1.30 $\pm$ 0.23 & \, 0.250 $\pm$ 0.028 & 1.55 $\pm$ 0.23 \\ $\Xi_{c}^{0}$ & 0.263 $\pm$ 0.047 & \, 0.090 $\pm$ 0.035 & 0.353 $\pm$ 0.058 \\ J/$\psi$ & 0.108 +0.041$-$0.035 & \, (5.08$\pm$0.37)$\cdot$10$^{-3}$ & 0.113 +0.041$-$0.035 \\ $\psi(2S)$ & (3.04 +1.2$-$1.0)$\cdot$10$^{-3}$ & \, (7.61$\pm$0.55)$\cdot$10$^{-4}$ & (3.80 +1.2$-$1.0)$\cdot$10$^{-3}$ \\ \hline & \multicolumn{3}{c}{30-50\%} \\ \hline $D^{0}$ & 0.857 $\pm$ 0.153 & \, 0.207 $\pm$ 0.017 & 1.06 $\pm$ 0.154 \\ $D^{+}$ & 0.379 $\pm$ 0.068 & \, 0.092 $\pm$ 0.014 & 0.471 $\pm$ 0.069 \\ $D^{*+}$ & 0.335 $\pm$ 0.060 & \, 0.084 +0.025$-$0.011 & 0.419 +0.065$-$0.061 \\ $D_{s}^{+}$ & 0.306 $\pm$ 0.055 & \, 0.039 +0.013$-$0.008 & 0.344 $\pm$ 0.056 \\ $\Lambda_{c}^{+}$ & 0.185 $\pm$ 0.033 & \, 0.131 $\pm$ 0.015 & 0.316 $\pm$ 0.036 \\ $\Xi_{c}^{0}$ & 0.038 $\pm$ 0.007 & \, 0.047 $\pm$ 0.018 & 0.084 $\pm$ 0.020 \\ J/$\psi$ & (1.12 +0.37$-$0.32)$\cdot$10$^{-2}$ & \, (2.65$\pm$0.19)$\cdot$10$^{-3}$ & (1.39 +0.37$-$0.32)$\cdot$10$^{-2}$ \\ $\psi(2S)$ & (3.16 +1.04$-$0.89)$\cdot$10$^{-4}$ & \, (3.98$\pm$0.29)$\cdot$10$^{-4}$ & (7.14 +1.08$-$0.94)$\cdot$10$^{-4}$ \\ \end{tabular} \caption{Summary of the calculations of yields at mid-rapidity for open charm and charmonia in Pb-Pb at 5.02 TeV, 0-10\% (upper part) and 30-50\% (lower part) centralities. 
For the corona, we used as inputs the production cross sections $\mathrm{d} \sigma/\mathrm{d} y$ as measured by ALICE in pp collisions \cite{Acharya:2019mgn,Acharya:2021cqv,Acharya:2020lrg,Acharya:2019lkw} (and assumed for $\Xi_c^0$ $\mathrm{d} \sigma/\mathrm{d} y$=0.10$\pm$0.04 mb and $\psi(2S)/\mathrm{J}/\psi = 0.15$), together with $T_\text{AA}^\text{corona}$=0.90 mb$^{-1}$ and 0.47 mb$^{-1}$ for the two centralities, respectively (the corona corresponding to $\rho<0.1\rho_0$). For details see text.} \label{tab:yields_tot} \end{table*} In Fig.~\ref{fig:yields_m} we show the mass dependence of the rapidity distributions \ensuremath{\der N / \der y}~ for selected charm hadrons at mid-rapidity. The selection includes the $D^0$ meson at the low mass end and many multi-charm states, including the hypothetical $\Omega_{ccc}$ baryon, at the high mass end of the plot. All are stable against decays via the strong interaction. Already the left plot exhibits clear structures, whose origin becomes clear with the plot on the right hand side, where the yields are divided by the angular momentum degeneracy. Since we are in the 'Boltzmann' regime, where all masses $M$ are much larger than the temperature $T_{cf} = 156.5$ MeV, the degeneracy-normalized particle yields scale in the SHMc as $\propto M^{3/2} \exp({-M/T_{cf}})$. In a log plot spanning 7 decades this function looks essentially like a straight line for fixed charm quark number. The color code separates particles with $\alpha = 1, 2, 3$ charm quarks. The line at the far left corresponds to $\alpha =0$ and coincides with that determined for (u,d,s) hadrons in~\cite{Andronic:2017pug}. The deviation clearly visible for $\alpha = 1$ is due to feeding from hadronically unstable resonances. The grouping into three distinct regions is what is called in the introduction 'the charm hadron hierarchy'. In Fig.~\ref{fig:yields_tot} we show the total yields, the sum of core and corona components, for selected hadron species for which the pp data used for the calculation of the corona component are available. We include in the plot a scenario of charm baryon enhancement, implemented via tripled statistical weights for excited charmed baryons, which leads to an increase of the total thermal charm densities by 18\%. Note that the additional charmed baryon resonances are all assumed to be narrow Breit-Wigner-type resonances, as discussed in section~\ref{sec:SHMc_spec}. We demonstrate that the equivalent increase in the input charm cross section (from 0.53 to 0.63 mb) leads to a significant increase in the predicted yields for the charmed baryons, while the yields of all the other species remain unchanged\footnote{After the completion of this work, the ALICE collaboration released~\cite{Acharya:2021set} a charm cross section at mid-rapidity for pp collisions at 5.02 TeV based on the measurement of charmed mesons and baryons. Due to a significantly larger fragmentation into charmed baryons as compared to measurements in $\rm{e}^+\rm{e}^-$ and ep collisions, a charm cross section is obtained that is increased by 40\% compared to the value on which the current calculations are based.}. The numerical values for the case of the PDG hadron spectrum are shown in Table~\ref{tab:yields_tot}. One notices that some of the uncertainties are asymmetric; this originates either from the SHMc, as the $g_c$ values are characterized by (slightly) asymmetric uncertainties, or from the corona component via the experimental production cross sections for pp collisions. 
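The hierarchy can be made quantitative with the scaling just given: per degree of freedom, primordial yields behave as $g_c^{\alpha} M^{3/2} e^{-M/T_{cf}}$. A minimal sketch (our illustration, using masses quoted in this paper):
\begin{verbatim}
import numpy as np
T, g_c = 0.1565, 29.6                 # GeV; 0-10% Pb-Pb fugacity

def weight(M, alpha):
    """SHMc scaling of the primordial yield per degree of freedom."""
    return g_c**alpha * M**1.5 * np.exp(-M / T)

ref = weight(1.8648, 1)               # D0 as reference
for name, M, a in [("D0", 1.8648, 1), ("Xi_cc", 3.62, 2),
                   ("Omega_ccc", 4.80, 3)]:
    # hierarchy relative to D0, and enhancement over g_c = 1
    print(name, weight(M, a) / ref, g_c**a)
\end{verbatim}
The output reproduces the grouping into three well-separated bands seen in Fig.~\ref{fig:yields_m}, while the factors $g_c^{\alpha}$ quantify the enhancement over purely thermal production.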
In Table~\ref{tab:yields_canonical12} we have compiled the expected luminosity, the rapidity density for $\Omega_{ccc}$ production, the inelastic cross section corresponding to the 10\% most central collisions, and the expected yields of $\Omega_{ccc}$ baryons in 5 different collision systems at top LHC energy, for a run time of $10^6$ s. The beam parameters are from~\cite{Citron:2018lsq}; the rapidity densities and yields for $\Omega_{ccc}$ production are our predictions. The predictions are per unit rapidity for the 10\% most central collisions but contain no efficiency and acceptance corrections. Nevertheless, substantial yields can be expected. Even though the expected luminosity increases by 4 orders of magnitude when moving from Pb-Pb to O-O, the yield in O-O is comparable to that for Pb-Pb, and this at the price of about 10 collisions per bunch crossing for O-O~\cite{Citron:2018lsq}. Furthermore, corona effects will be much larger in such a small system. Which of the systems is optimal for QGP-related research will have to be evaluated carefully. \setlength{\tabcolsep}{4pt} \begin{table*} \begin{tabular}{l|ccccc} & O-O & Ar-Ar & Kr-Kr & Xe-Xe & Pb-Pb\\ \hline\hline $\sigma_{\text{inel}}(10\%)\ (\text{mb})$ & 140 & 260 & 420 & 580 & 800 \\ $T_{\text{AA}}(0-10\%)\ (\text{mb}^{-1})$ & 0.63 & 2.36 & 6.80 & 13.0 & 24.3 \\ $ \mathcal{L} ({\text{cm}^{-2}\text{s}^{-1}}) $ & $4.5 \cdot 10^{31}$ & $2.4 \cdot 10^{30} $ & $1.7\cdot 10^{29}$& $3.0 \cdot 10^{28} $& $3.8 \cdot 10^{27}$ \\ \hline &&&$\mathrm{d} \sigma_{\ensuremath{\text{c}\overline{\text{c}}}\xspace}/\mathrm{d} y = 0.53\,\text{mb}$ & &\\ \hline $\mathrm{d} N_{\Omega_{ccc}}/\mathrm{d} y$ & $8.38 \cdot 10^{-8} $ & $1.29 \cdot 10^{-6} $ & $1.23 \cdot 10^{-5} $& $4.17 \cdot 10^{-5}$ & $1.25 \cdot 10^{-4}$ \\ $\Omega_{ccc}$ Yield & $5.3 \cdot 10^{5}$& $8.05 \cdot 10^5 $& $8.78 \cdot 10^5$ & $7.26 \cdot 10^5$ & $3.80 \cdot 10^5 $ \\ \hline &&&$\mathrm{d} \sigma_{\ensuremath{\text{c}\overline{\text{c}}}\xspace}/\mathrm{d} y = 0.63\,\text{mb}$ & &\\ \hline $\mathrm{d} N_{\Omega_{ccc}}/\mathrm{d} y$ & $1.44 \cdot 10^{-7} $ & $2.33 \cdot 10^{-6} $ & $2.14 \cdot 10^{-5} $& $7.03 \cdot 10^{-5}$ & $2.07 \cdot 10^{-4}$ \\ $\Omega_{ccc}$ Yield & $9.2 \cdot 10^{5}$& $1.45 \cdot 10^6 $& $1.53 \cdot 10^6$ & $1.22 \cdot 10^6$ & $6.29 \cdot 10^5 $ \end{tabular} \caption{Expected $\Omega_{ccc}$ yields for a run of $10^6$ s for various collision systems at the LHC energy $\sqrt{s_{\mathrm{NN}}}=5.02$ TeV with full canonical suppression. All calculations are for mid-rapidity with $\Delta y = 1$. } \label{tab:yields_canonical12} \end{table*} \section{Conclusions and Outlook} In the present paper we have explored a range of predictions made within the framework of the SHMc, with a focus on hadrons with open charm. Most important is the comparison to recent ALICE measurements of $D$ mesons~\cite{Acharya:2018hre} and the predictions for $\Lambda_c$ baryons. As the baseline for the SHMc predictions we kept the chemical freeze-out temperature $T_{cf} = 156.5$ MeV determined from the analysis of (u,d,s) hadrons. As the only additional input we used the open charm cross section, based on pp measurements from the ALICE and LHCb collaborations and extrapolated to the Pb-Pb system using hard collision scaling and a correction for nuclear modifications obtained from an analysis of recently measured p-Pb open and hidden charm data. The transverse momentum distributions were obtained in a novel, hydro-inspired approach including resonance decays. 
Without any further assumptions and parameters, all $D$ meson yields and low transverse momentum distributions in Pb-Pb collisions are well described. The situation is less well settled in the $\Lambda_c$ baryon sector. Recent ALICE measurements in pp and p-Pb collisions~\cite{Acharya:2020uqi} indicate enhanced production of $\Lambda_c$ baryons compared to what was expected based on $e^+e^-$ and $ep$ data on fragmentation into charmed baryons. For an account of ALICE preliminary data, including those from Pb-Pb collisions, see Fig.~4 in~\cite{Loizides:2020tey}. These preliminary data have led to new charm baryon production models including ``missing" charm baryons~\cite{He:2019vgs}. We have therefore provided predictions for $\Lambda_c$ production in Pb-Pb collisions using the current experimental information on the charm baryon resonance spectrum~\cite{Zyla:2020zbs} as well as with an increased number of charm baryons. New data on this puzzling situation are expected soon from both the CERN ALICE and LHCb collaborations. The success of the description of the yields and low transverse momentum spectra of open charm hadrons by the SHMc also demonstrates that the hadronization of open and hidden charm takes place at or close to the QCD phase boundary. It further demonstrates that open and hidden charm data can be reproduced with one common hadronization mechanism. Our predictions for Pb-Pb collisions imply very large enhancements for hadrons with 2 or 3 charm quarks compared to pure thermal production with charm fugacity $g_c =1$. The enhancement will be predominantly visible at low transverse momentum \ensuremath{p_{\text{T}}}\xspace{}, see, e.g., Fig.~\ref{fig:spectra_1}. For multi-charmed baryons these enhancements lead to a quite spectacular hierarchy, see Fig.~\ref{fig:yields_m}. Testing these predictions is a challenge for future charm production experiments in LHC Run 3 and Run 4 and ultimately one of the important goals for the ALICE3 'all Silicon' experiment~\cite{Adamova:2019vkf}. Fundamental new information on the hadronization and deconfinement of charm quarks should be the reward for the efforts to build such a detector. \section{Acknowledgments} \label{sec:Acknowledgments} This work is part of and supported by the DFG (German Research Foundation) -- Project-ID 273811115 -- SFB 1225 ISOQUANT. K.R. acknowledges the support of the Polish National Science Center (NCN) under the Opus grant no. 2018/31/B/ST2/01663, and of the Polish Ministry of Science and Higher Education. V.V. is supported by a research grant (Grant No. 00025462) from VILLUM FONDEN, the Danish National Research Foundation (Danmarks Grundforskningsfond), and the Carlsberg Foundation (Carlsbergfondet). \bibliographystyle{utphys}
\section{Introduction} One of the simplest formulas to emerge from general relativity (GR) is the expression for the bending angle $\hat{\alpha}$ of a light ray that passes by an object of mass $M$ at a minimum distance $r_0$: \begin{equation} \label{alphahat-R} \hat{\alpha} = \frac{4GM}{c^2 r_0} \equiv 2 \, \frac{R_S}{r_0} \,, \end{equation} where $R_S = 2GM/c^2$ is the object's {\it Schwarzschild radius}. The straightforward interpretation of Eq.~(\ref{alphahat-R}) is that the amount of bending is proportional to the mass of the deflector, and inversely proportional to the distance of closest approach of the light ray to the deflector. The appearance of $c$ and $G$ is not surprising since light and gravity are involved. Apart from a factor of two, this result can be derived from Newton's law of universal gravitation, provided that we treat a light ray as a stream of discrete particles. However, the Newtonian derivation relies on canceling out the mass of the photon, which we know to be zero. Even so, one of the great advantages of the Newtonian picture of gravity is that light bending can be understood as just one more example of classical orbital mechanics. This formulation has led to a rich literature in both astrophysics and mathematical physics (see the books \cite{Schneider-gravlenses, Petters-singularity, Mollerach-Roulet, Dodelson-glbook, Congdon-Keeton} and references therein). Yet explaining gravitational lensing in purely classical terms misses the essentially relativistic character of the phenomenon. Consider Eq.~(\ref{alphahat-R}), for example. In order for a Newtonian description to be physically meaningful, we would expect, at the very least, that the relativistic expression for the bending angle would reduce to Eq.~(\ref{alphahat-R}) in some appropriate limit, say, $r_0 \gg R_S$. Instead, we find a weak-field bending angle that is twice the Newtonian prediction. Explaining this discrepancy requires us to write down the exact relativistic bending angle, and expand the result in a Taylor series. The first-order term in $R_S/r_0$ yields Eq.~(\ref{alphahat-R}). A fully relativistic treatment of light bending shows that, in addition to the two lensed images that arise from the Newtonian bending angle, there is an infinite sequence of fainter images (see the review by \citet{Bozza-BHrev} and references therein, especially the original discovery by \citet{Darwin-exact}). If the deflector is compact enough to have a shadow border, which can be thought of as an event horizon for light, a photon that passes close to this region without falling into it can execute any number of orbits before continuing on to the observer. For a rotating compact object such as a black hole or ultra-dense neutron star, an asymmetry develops between {\it prograde} photons, which co-rotate with the deflector, and {\it retrograde} photons, which counter-rotate. The upshot for lensing is that the bending angle for prograde and retrograde photons is not the same. Both exact and approximate expressions for the bending angle in the Kerr metric have appeared in the literature, but questions of interpretation remain. After summarizing what is known about the bending angle in the Kerr metric, we identify and seek to resolve a subtle but important discrepancy that has appeared in the literature. 
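As a quick numerical orientation before proceeding, Eq.~(\ref{alphahat-R}) applied to a light ray grazing the solar limb ($M = M_\odot$, $r_0 = R_\odot$) gives the classic deflection of about $1.75''$. A short script (an illustration, using standard SI values):
\begin{verbatim}
import math

G, c  = 6.674e-11, 2.998e8   # SI units
M_sun = 1.989e30             # kg
R_sun = 6.957e8              # m

alpha = 4 * G * M_sun / (c**2 * R_sun)        # Eq. (alphahat-R), radians
print(alpha * 180 / math.pi * 3600)           # ~1.75 arcseconds
\end{verbatim}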
\section{Light Bending in the Equatorial Kerr Metric} \label{sec:bending} \begin{figure}[h] \centering \includegraphics{bending-geometry.pdf} \caption{Lensing geometry for prograde motion in the equatorial plane of the Kerr metric. Arrows indicate the direction of propagation of the lensed photon and sense of rotation of the deflector. Retrograde motion is obtained by reflecting the photon's trajectory about the horizontal axis ($b \to -b$). Adapted from Fig. 3.1 of \citep{Congdon-Keeton}.} \label{fig:lensing-geometry} \end{figure} The spacetime around a rotating compact object is described by the Kerr metric \cite{Kerr-bh}, which is typically written in Boyer-Lindquist \cite{Boyer-Lindquist} coordinates ($t,r,\theta,\phi$), where $\theta$ is measured with respect to the axis of rotation. For simplicity, we work in the equatorial plane ($\theta=\pi/2$), where the line element takes the form \begin{equation} ds^2 = g_{tt}\,c^2\,dt^2 + g_{rr}\,dr^2 \ + \ g_{\phi \phi}\,d\phi^2 + 2 g_{t \phi}\, c \, dt \, d\phi. \end{equation} The metric coefficients are given by \begin{subequations} \begin{align} &g_{tt}(r)=-\left(1-\frac{2m}{r}\right)\\ &g_{rr}(r)=\left(1-\frac{2m}{r}+\frac{a^2}{r^2}\right)^{-1}\\ &g_{\phi\phi}(r)=r^2+a^2+\frac{2m a^2}{r}\\ &g_{t\phi}(r)=-\frac{2m a}{r} \end{align} \end{subequations} where $m \equiv GM/c^2$ is the mass scaled by constants to have dimensions of length. The {\em spin parameter}, which has the dimension of length, is defined by $a \equiv j/c$, where $j$ is the angular momentum per unit mass of the compact object. Without loss of generality, we take $j\geq0$. It is convenient to define the {\it dimensionless} spin parameter $\hat{a} \equiv a/m$, which is restricted to the interval $0 \leq \hat{a} \leq 1$. The Schwarzschild limit corresponds to $\hat{a}=0$, while maximal rotation corresponds to $\hat{a}=1$. The case $\hat{a}>1$, where an event horizon with some $r>0$ gives way to a naked singularity at $r=0$, is not considered in this Note. \subsection{Equations of Motion} The equations of motion for a particle subject to a given metric $ds$ follow from the Lagrangian $L = (ds/d \lambda)^2 / 2$, where $\lambda$ is an {\em affine} parameter. For a massive particle, we can identify $\lambda$ as the proper time. For a massless particle such as a photon, where the proper time vanishes, we can define $\lambda$ to be any parameter that specifies the spacetime coordinates everywhere along the trajectory. The Lagrangian for a particle in the equatorial plane of the Kerr metric is \begin{equation} L(r,\dot{t},\dot{r},\dot{\phi})= \frac{1}{2} \left(c^2 g_{tt} \dot{t}^2 + g_{r r} \dot{r}^2 + g_{\phi \phi} \dot{\phi}^2 + 2 g_{t \phi} c \dot{t} \dot{\phi} \right) \,,\label{eqn:Lagrangian-equatorial-Kerr} \end{equation} where an overdot denotes differentiation with respect to $\lambda$. We can identify two constants of motion: \begin{subequations} \begin{alignat}{1} \varepsilon &\equiv - \frac{\partial L} {\partial \dot{t}} = - c^2 g_{tt} \dot{t} - c g_{t \phi} \dot{\phi} \\ \ell &\equiv \frac{\partial L} {\partial \dot{\phi}} = c g_{t \phi} \dot{t} + g_{\phi \phi} \dot{\phi} \,\, . \end{alignat} \label{eqn:vareps-ell-Kerr}% \end{subequations} For a massive particle, $\varepsilon$ and $\ell$ reduce to the energy per unit mass and angular momentum per unit mass, respectively, when $r \gg m$. Since these quantities do not depend on the mass of the particle, they can be straightforwardly extended to include the massless case. 
It follows from our sign convention for $j$ that $\ell > 0$ for a prograde trajectory and $\ell < 0$ for a retrograde trajectory. Solving for $\dot{t}$ and $\dot{\phi}$ in terms of $\varepsilon$ and $\ell$ yields \begin{equation} \dot{t} = \frac{1}{c^2} \left( \frac{ c \ell g_{t \phi} + \varepsilon g_{\phi \phi} }{ g_{t \phi}^2 - g_{t t} g_{\phi \phi} }\right) = \frac{\varepsilon}{c^2} \frac{\left[r^2 + a^2\left( 1 + \frac{2m}{r}\right) - \frac{2ma}{r}\frac{ c \ell}{\varepsilon}\right]}{\left(r^2 - 2mr +a^2\right)} \end{equation} and \begin{equation} \dot{\phi} = - \, \frac{1}{c} \left( \frac{ c \ell g_{t t} + \varepsilon g_{t \phi} }{ g_{t \phi}^2 - g_{t t} g_{\phi \phi} } \right) = \ell \frac{\left(1 - \frac{2m}{r} + \frac{2ma}{r} \frac{\varepsilon}{c \ell}\right)}{\left(r^2 - 2mr +a^2\right)} \, . \label{Kerr-ang} \end{equation} Substituting these expressions into Eq.~(\ref{eqn:Lagrangian-equatorial-Kerr}), and setting $L=0$ along a null geodesic, we obtain \begin{equation} \dot{r}^2 = \frac{\varepsilon^2}{c^2} + \frac{1}{ r^2}\left(\frac{\varepsilon^2 a^2}{c^2} - \ell^2 \right) + \frac{2 m}{r^3}\left(\frac{\varepsilon^2 a^2}{c^2} - \frac{ 2 a \varepsilon \ell}{c} + \ell^2\right) \, . \label{eqn:Kerr-rad} \end{equation} \subsection{Bending Angle} In terms of the change in azimuthal angle $\phi$ as the light ray travels from the source to the observer, the bending angle is given by \begin{align} \hat{\alpha} &= -\pi + \left( \int_{\rm{r_{src}}}^{r_0} + \int_{r_0}^{{\rm r_{obs}}} \right) \frac{d \phi}{dr}\,dr \nonumber \\ &= -\pi + \left( - \int_{\rm{r_{src}}}^{r_0} + \int_{r_0}^{{\rm r_{obs}}} \right) \abs{\frac{d \phi}{dr}} dr \, . \end{align} The signs of the integrals correspond to a photon traveling {\it inward} from the source at distance $r_{\rm_{src}}$ to the distance of closest approach $r_0$, and then {\it outward} from $r_0$ to the observer at $r_{\rm_{obs}}$. Since the observer and source are assumed to be far from the lens, we let $r_{\rm_{src}}, r_{\rm_{obs}} \to \infty$. (See Fig. \ref{fig:lensing-geometry}.) This leads to \begin{equation} \hat{\alpha} = -\pi + 2 \int_{r_0}^\infty \abs{\frac{d \phi}{dr}} \,dr = -\pi + 2 \int_{r_0}^\infty \abs{\frac{\dot{\phi}}{ \dot{r}}} \,dr \,.\label{eqn:alphahat-def} \end{equation} To obtain the bending angle in the Kerr metric, we rewrite Eqs.~(\ref{Kerr-ang}) and (\ref{eqn:Kerr-rad}) as \begin{subequations} \begin{eqnarray} \dot{\phi} &=& \frac{\ell}{br} \left[\frac{br - 2m(b-a)}{r^2 - 2mr +a^2}\right] \\ \dot{r}^2 &=& \frac{\ell^2}{b^2 r^3} P(r) \,, \label{eqn:phidot-rdot-b} \end{eqnarray} \label{eqn:phidot-rdot} \end{subequations} where \begin{equation} P(r) \equiv r^3 - (b^2 - a^2)r + 2m (b - a)^2 \,, \label{eqn:Pdef} \end{equation} and $b \equiv c \ell / \varepsilon$ is the impact parameter. Since a lensed photon follows an unbound orbit, its energy is positive ($\varepsilon>0$). Thus, $b > 0$ for a prograde trajectory ($\ell > 0$), and $b < 0$ for a retrograde trajectory ($\ell <0$). Substituting Eqs.~(\ref{eqn:phidot-rdot}) into Eq.~(\ref{eqn:alphahat-def}) yields the Kerr bending angle, \begin{equation} \hat{\alpha} = - \pi + 2 \int_{r_0}^\infty \sqrt{\frac{r}{P(r)}} \, \frac{b r - 2 m (b - a)}{r^2 -2mr +a^2} dr \,.\label{Kerr-bending-integral} \end{equation} The integral above can be expressed in terms of elliptic integrals of the third kind \citep{Iyer-Hansen-equat}. In the Schwarzschild case ($a=0$), these reduce to elliptic integrals of the first kind \citep{Darwin-exact,Iyer-Petters}. 
We can express $r_0$ in terms of $a$, $b$, and $m$ by noting that $\dot{r} = 0$ when $r=r_0$. At such a {\it turning point}, $P(r)$ in Eq.~(\ref{eqn:phidot-rdot-b}) vanishes for $r \neq 0$. Depending on the value of $b$, the equation $P(r) = 0$ has either two or zero solutions with $r > 0$. First, consider the case of two solutions, denoted by $r_1$ and $r_2$ ($r_1 < r_2$). As long as $r_1$ and $r_2$ lie outside the event horizon at $r_H \equiv m+\sqrt{m^2-a^2}$, we interpret $r_1$ as the maximum distance of an outward-moving photon, and $r_2$ as the minimum distance of an inward-moving photon. In other words, $r_0 \equiv r_2$. Since $P(r) < 0$ for $r_1 < r < r_2$, $\dot{r}$ would be imaginary in this region. Thus motion between $r_1$ and $r_2$ is not possible. If $P(r)=0$ has no solutions for $r > r_H$, an incoming photon will fall inside the event horizon. The critical impact parameter $b_c$ separates the cases in which $P(r)$ admits two ($\abs{b} > \abs{b_c}$) or zero ($\abs{b} < \abs{b_c}$) turning points. As $\abs{b} \to \abs{b_c}$ from above, the two positive zeros of $P(r)$ approach a common value $r_c$. Thus $P'(r_c) = 0$ when $b=b_c$. Solving for $r_c$ yields \begin{equation} r_c = \sqrt{\frac{b_c^2-a^2}{3}}\,. \label{eqn:rc-bc} \end{equation} To understand what $r_c$ means physically, we differentiate Eq.~(\ref{eqn:phidot-rdot-b}) and solve for $\ddot{r}$: \begin{equation} \ddot{r} = \frac{\ell^2}{2 b^2} \left[ -\frac{3}{r^4} P(r) + \frac{1}{r^3}P'(r) \right] \, . \end{equation} Since $P(r_c) = P'(r_c) = 0$, we conclude that $\ddot{r}=0$ when $r=r_c$. Thus a photon with $b = b_c$ approaches a circular orbit at $r_c$. The circle of radius $r_c$ is known as the {\it shadow border} because an observer at $r>r_c$ cannot detect a photon emitted from any $r \leq r_c$. If we allow for photon trajectories that are not restricted to the equatorial plane, the shadow border becomes a two-dimensional surface. \begin{figure} \begin{center} \includegraphics[width=3.0in]{AsymmetricKerr.pdf} \caption{Paths of prograde (upper curve) and retrograde (lower curve) photons for a deflector with spin parameter $\hat{a}=0.9$. The rays start out far to the right of the figure with the {\it same} unsigned impact parameter $|b|=7.5m$. The axes are labeled in units of $m$.} \label{fig:Asymmetry} \end{center} \end{figure} To solve for $b_c$ in terms of $a$ and $m$, we substitute Eq.~(\ref{eqn:rc-bc}) into Eq.~(\ref{eqn:Pdef}), and set $P(r_c)=0$. This leads to the cubic equation \begin{equation} (b_c + a)^3 = 27 m^2 (b_c - a) \,, \end{equation} which has two solutions with $b_c > 0$ (only the larger of which corresponds to a circular photon orbit outside the event horizon) and a single solution with $b_c < 0$. The respective critical impact parameters for prograde and retrograde motion are denoted by $b^+_c$ and $b^-_c$, and are given by \begin{equation} \label{bcrit} b^{\pm}_c = -a \pm 6m \cos{\left[\frac{1}{3}\cos^{-1}{\left(\mp \frac{a}{m}\right)}\right]} \,. \end{equation} In the non-rotating case ($a=0$), we recover $\abs{b_c} = 3\sqrt{3}\,m$, corresponding to $r_c = 3m$. In the case of maximal rotation ($a=m$), $b_{c}^{+} = 2m$ and $b_{c}^{-} = -7m$. \section{Asymmetry between Prograde and Retrograde Motion} \label{sec:conclusion} It is not obvious from Eq.~(\ref{Kerr-bending-integral}), or from the representation of the bending angle in terms of elliptic integrals, that prograde and retrograde photons are deflected by different amounts. This asymmetry becomes apparent when we consider the bending angle in two asymptotic cases: the {\it weak deflection limit} (WDL) and the {\it strong deflection limit} (SDL). 
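The asymmetry can also be verified directly: Eq.~(\ref{Kerr-bending-integral}) is straightforward to evaluate by numerical quadrature once $r_0$ is found as the largest root of $P(r)$. The sketch below is our own illustration, not the code behind the figures of this Note; the substitution $r = r_0 + t^2$ removes the integrable endpoint singularity, and the absolute value implements Eq.~(\ref{eqn:alphahat-def}) so that both senses of rotation yield a positive deflection.
\begin{verbatim}
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

def bending_angle(b, a, m=1.0):
    """Quadrature of Eq. (Kerr-bending-integral); requires |b| > |b_c|.
    b > 0: prograde, b < 0: retrograde (deflection magnitude returned)."""
    P = lambda r: r**3 - (b**2 - a**2) * r + 2.0 * m * (b - a)**2
    r_min = np.sqrt((b**2 - a**2) / 3.0)     # location of the minimum of P
    r0 = brentq(P, r_min, 10.0 * abs(b))     # distance of closest approach
    def f(t):                                # r = r0 + t^2 regularizes the
        r = r0 + t * t                       # 1/sqrt(P) endpoint behavior
        return (np.sqrt(r / P(r)) * abs(b * r - 2.0 * m * (b - a))
                / (r**2 - 2.0 * m * r + a**2) * 2.0 * t)
    # tiny lower limit avoids evaluating the 0/0 form exactly at t = 0
    return -np.pi + 2.0 * quad(f, 1e-4, np.inf)[0]

print(bending_angle(100.0, 0.0))   # ~0.0412; leading WDL term 4m/b = 0.04
print(bending_angle(+7.5, 0.9))    # prograde ray of the figure above
print(bending_angle(-7.5, 0.9))    # retrograde ray: larger deflection
\end{verbatim}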
In the WDL, $\abs{b} \gg m$, so the bending angle can be written as a power series in $m/\abs{b}$. Starting from the elliptic integral representation of the bending angle and from Eq.~(\ref{Kerr-bending-integral}), respectively, \citet{Iyer-Hansen-arXiv} and \citet{Aazami-2011b} find \begin{equation} \begin{split} \label{AlphaKWeak} \hat{\alpha}_{\pm}&=4\left(\frac{m}{|b|}\right)+\left(\frac{15\pi}{4} \mp4\hat{a}\right) \left(\frac{m}{|b|}\right)^2\\ &\quad\quad+\left(\frac{128}{3}\mp 10 \pi\hat{a}+4\hat{a}^2\right) \left(\frac{m}{|b|}\right)^3 \\ &\quad\quad\quad+\left(\frac{3465\pi}{64} \mp192\hat{a}+\frac{285\pi}{16}\hat{a}^2 \mp4\hat{a}^3\right) \left(\frac{m}{|b|}\right)^4\\ &\quad\quad\quad\quad+O\left[\left(\frac{m}{|b|}\right)^5\right] \, \end{split} \end{equation} in the equatorial plane of the Kerr metric for a fixed value of $\hat{a}$. Apart from the first-order term in $m/\abs{b}$, which is independent of spin, all other terms increase (decrease) the bending angle of a retrograde (prograde) photon relative to the non-rotating case. In the SDL, where $\abs{b} \sim m$, a power series representation of the bending angle would require one to identify a different expansion parameter. However, this would prove to be an exercise in futility, since the bending angle diverges logarithmically \citep{Ohanian-BH} as $\abs{b} \to \abs{b_c} \sim m$. The same behavior is seen for the Schwarzschild bending angle \citep{Darwin-exact}. Thus, we examine the bending angle numerically in this case. Figure \ref{fig:Asymmetry} shows the trajectories of two photons with the same value of $\abs{b}$, but on opposite sides of the horizontal axis. Notice that the bending occurs mostly within a few gravitational radii of the deflector. To understand why the retrograde photon is bent more than its prograde counterpart, we must consider an additional asymmetry in Kerr lensing. The critical radius for photon capture takes on the single value $r_c = 3m$ in the Schwarzschild metric, but has two different values, $r_{c}^{+}$ (prograde) and $r_{c}^{-}$ (retrograde), in the Kerr metric, which depend on the mass and spin of the deflector. They approach the Schwarzschild value $r_c = 3m$ as $\hat{a} \to 0$. At maximal rotation ($\hat{a} = 1$), $r_{c}^{+} = m$ and $r_{c}^{-} = 4m$. Since $r_{c}^{+} < r_{c}^{-}$, a prograde photon can get closer to the deflector than a retrograde photon without being captured. A less intuitive but more useful equivalent statement is that, for a fixed value of $\abs{b}$, a retrograde photon gets closer to its critical radius than a prograde photon gets to its own. Thus, the bending angle of the retrograde photon is the larger of the two. For the sake of plotting the bending angle in the Kerr and Schwarzschild metrics, for both prograde and retrograde motion, over the full range $\abs{b_c^{\pm}} < \abs{b} < \infty$, \citet{Iyer-Hansen-equat} introduce the parameter $b' \equiv 1-\abs{b_{c}^{\pm}/b}$, which ranges from zero in the extreme SDL to unity in the extreme WDL. They plot the bending angle against $b'$ in their Figs.~4 and 5, finding greater deflection for prograde photons than for retrograde photons with the same value of $b^{\prime}$. Yet this conclusion runs counter to our discussion above. The apparent contradiction is a direct consequence of the dependence of $b^{\prime}$ on $b_c^{\pm}$, which takes distinct values for prograde and retrograde motion. 
In other words, prograde and retrograde photons with the same value of $b^{\prime}$ have different values of $b$, and vice versa. The most natural way around this difficulty is to plot the bending angle against the physical impact parameter, separately for the WDL and the SDL (see Fig.~\ref{fig:ExactBendingAngle}). In both limits, we see that the magnitude of the bending angle is indeed larger for retrograde photons than for prograde photons, with the Schwarzschild value lying in between. It is tempting to interpret the bending angle in the prograde and retrograde cases by means of an analogy with a rotating fluid. A retrograde photon, which moves ``upstream," has to overcome the ``gravitational current" induced by the deflector on its way to the observer, while a prograde photon, which moves ``downstream," is swept along with the current. This results in a larger bending angle for a retrograde photon than for a prograde photon. Unfortunately, the same analogy can lead to the opposite conclusion \cite{Iyer-Hansen-equat}. \citet{Aazami-2011b}, who find a larger bending angle for retrograde photons, use their analysis of time delays to argue that retrograde photons ``spend more time" in the gravitational field of the deflector than prograde photons do. The apparent tension between \citet{Iyer-Hansen-equat} and \citet{Aazami-2011b} disappears when the mathematical results of the two papers are compared directly, without reference to analogies or figures. Thus, analogies that are intended to explain physical results can instead become an accidental exercise in confirmation bias. Finding intuitive explanations for results in a field as non-intuitive as GR is a particularly risky undertaking. \begin{figure*} \begin{center} \includegraphics[height=6.0cm]{KerrBending-SDL-exact.pdf} \includegraphics[height=6.0cm]{KerrBending-WDL-exact.pdf} \caption{ Exact bending angle $\hat{\alpha}$ as a function of $\abs{b}/m$ for different values of the spin parameter $\hat{a}$. For the Kerr cases, prograde rays are shown as solid lines and retrograde rays as dashed lines. The left panel shows the strong deflection limit; the vertical dotted lines indicate the critical impact parameter $b_c/m = 3\sqrt{3}$ for the Schwarzschild case (black) as well as the limits $b_c^+/m= 2$ and $\abs{b_c^-}/m=7$ that correspond to prograde and retrograde photons, respectively, for $\hat{a} = 1$ (gray). The right panel shows the weak deflection limit. (The colors and line styles are the same in both panels.) } \label{fig:ExactBendingAngle} \end{center} \end{figure*} \section*{Acknowledgments} ABC would like to thank Ted Burkhardt, Jerod Caligiuri, Carl Droms, Tim Jones, Allan Moser, and Erik Nordgren for their helpful comments on the manuscript. SVI gratefully acknowledges Robert Sinesi, who prepared Figure \ref{fig:Asymmetry} and was supported by the NASA NY State Space Grant.
\section{Introduction} \label{sec:intro} Low-Reynolds-number aerodynamics has been studied extensively over the past few decades, driven by interest in the design of small-scale air vehicles and the understanding of biological flight. At these scales, flows over wings exhibit complex flow physics comprising unsteady separation, vortex formation, and wake interactions that differ from their high-Reynolds-number counterparts. In fact, the aerodynamic characteristics of low-Reynolds-number flows exhibit strong nonlinearity arising from the rich vortex dynamics. \citet{liu2012numerical} and \citet{kurtulus2015unsteady} numerically investigated two-dimensional unsteady flows over a NACA 0012 airfoil at a Reynolds number of 1000. Both studies highlighted the nonlinear behavior of the aerodynamic characteristics. \citet{rossi2018multiple} assessed the Reynolds number effects ($Re = 100 - 3000$) on the two-dimensional flows over a NACA 0010 airfoil and an ellipse at a fixed angle of attack of $30^{\circ}$, and reported the presence of multiple bifurcations in the flow behavior and aerodynamic characteristics. More recently, \citet{menon2019aerodynamic} conducted a comprehensive study on the effects of two-dimensional airfoil shape and Reynolds number ($Re= 500- 2000$) on the aerodynamic characteristics. Their results showed sizeable and rapid changes in the aerodynamic quantities with angle of attack, due to the presence of distinct flow phenomena such as Kármán vortex shedding and the formation of the leading-edge vortex. Low-Reynolds-number flows around wings are further enriched by the end effects of finite-aspect-ratio wings. The nonlinear interactions among the tip vortices and the leading-/trailing-edge vortices lead to three-dimensional and aperiodic wakes \citep{winkelman1980flowfield,freymuth1987further,taira2009three,zhang2020formation}. Such flow physics, vastly different from that of the analogous two-dimensional flows, suggests that fully three-dimensional analysis is necessary for practical wing design at low Reynolds numbers. In our previous study, the wake dynamics of NACA 0015 finite-aspect-ratio wings was examined for a range of aspect ratios and angles of attack at $Re=400$ \citep{zhang2020formation}. The aerodynamic force coefficients of the finite-aspect-ratio wings were observed to be significantly lower than those of the two-dimensional airfoils, even for wings with large aspect ratios. As we recently studied the flow over finite-aspect-ratio swept wings \citep{zhang2020laminar}, we observed that the sweep-induced midspan effects add another source of three-dimensionality to the wake dynamics. However, the aerodynamic characteristics of the swept wings were not systematically reported in that study. In this Technical Note, we present a database of aerodynamic force coefficients for finite-aspect-ratio swept wings at a low Reynolds number of 400. The aerodynamic data are obtained from three-dimensional unsteady direct numerical simulations over a range of aspect ratios, angles of attack, and sweep angles. These data provide an improved understanding of the low-Reynolds-number aerodynamic characteristics of canonical swept wings. \section{Computational methods} \begin{figure} \centering \includegraphics[width=0.75\textwidth]{figures/Scheme.png} \caption{Case setup for $(a)$ unswept wing and $(b)$ swept wing. 
$(c)$ cross-sectional mesh at the locations indicated by the dashed lines in $(a)$ and $(b)$.}
\label{fig:scheme}
\end{figure}
For the present aerodynamic characterization, we simulate incompressible flows over finite-aspect-ratio swept wings with a NACA 0015 cross section. A schematic of the wing geometry is shown in figure \ref{fig:scheme}. The wings are subjected to uniform flow with velocity $U_{\infty}$ in the $x$ direction. The $z$ axis aligns with the spanwise direction of the unswept wing, and the $y$ axis points in the lift direction. For the swept cases, the wings are sheared towards the streamwise direction, and the sweep angle $\Lambda$ is defined as the angle between the $z$ axis and the leading edge of the wing. We consider a range of sweep angles from $0^{\circ}$ to $45^{\circ}$. The symmetry boundary condition is prescribed along the midspan (wing root). Denoting the half wing span as $b$, the semi aspect ratio is defined as $sAR=b/c$, where $c$ is the chord length, and is varied from 0.5 to 2. The Reynolds number, defined as $Re\equiv U_{\infty}c/\nu$ ($\nu$ is the kinematic viscosity of the fluid), is fixed at 400, at which the flow remains laminar. The lift and drag coefficients are defined as $C_L=F_L/(\rho U_{\infty}^2bc/2)$ and $C_D=F_D/(\rho U_{\infty}^2bc/2)$, where $F_L$ and $F_D$ are the aerodynamic forces in the $y$ and $x$ directions, respectively, and $\rho$ is the fluid density.
The incompressible solver \emph{Cliff} (in the \emph{CharLES} software package, Cascade Technologies, Inc.) is used to perform direct numerical simulations of the flows over the wings. This solver employs a collocated, node-based finite-volume method to simulate the flows with second-order spatial and temporal accuracies \citep{ham2004energy,ham2006accurate}. The computational domain and mesh set-ups in this study follow our previous works \citep{zhang2020formation,zhang2020laminar}, which have been extensively validated.
\section{Results}
\subsection{Wake dynamics} \label{sec:wake}
\begin{figure}
\centering
\includegraphics[width=0.9\textwidth]{figures/3Dregime2.png}
\caption{Classification of flows around finite-aspect-ratio wings. \protect\raisebox{0.0pt}{\tikz{\node[scale=0.6,regular polygon, circle, draw = {rgb,255:red,0; green,114; blue,189},line width=0.3mm, fill={rgb,255:red,0; green,114; blue,189}](){};}}: steady flows; \protect\raisebox{0.0pt}{\tikz{\node[scale=0.45, regular polygon, regular polygon sides=3, fill={rgb,255:red,126; green,47; blue,142}](){};}}: steady flow due to tip effects; $\MyDiamond[draw={rgb,255:red,217; green,83; blue,25},line width=0.3mm, fill=white]$: unsteady shedding near midspan; \protect\raisebox{0.0pt}{\tikz{\node[scale=0.45,regular polygon, regular polygon sides=3,fill={rgb,255:red,119; green,172; blue,48},rotate=-90](){};}}: steady flow due to midspan effects; \protect\raisebox{0.0pt}{\tikz{\node[scale=0.55,regular polygon, regular polygon sides=4, draw = {rgb,255:red,237; green,177; blue,32}, line width=0.3mm, fill={rgb,255:red,255; green,255; blue,255},rotate=0](){};}}: unsteady shedding near wing tip; \protect\raisebox{0.0pt}{\tikz{\node[scale=0.45,regular polygon, regular polygon sides=3,fill={rgb,255:red,77; green,190; blue,238},rotate=-180](){};}}: steady flow with streamwise vortices. The dashed lines denote the approximate boundaries between steady (filled symbols) and unsteady (empty symbols) flows. The vortical structures are visualized by isosurfaces of $Qc^2/U_{\infty}^2=1$ for representative cases.
}
\label{fig:regime}
\end{figure}
We begin the discussion by presenting an overview of the wake dynamics of the swept finite-aspect-ratio wings. A classification of the wakes is presented in figure \ref{fig:regime}, with representative vortical structures shown for selected cases. For $\alpha\lesssim 12^{\circ}$, the wake of a NACA 0015 airfoil at $Re=400$ remains steady. Steady flows without significant formation of tip vortices (\protect\raisebox{0.0pt}{\tikz{\node[scale=0.6,regular polygon, circle, draw = {rgb,255:red,0; green,114; blue,189},line width=0.3mm, fill={rgb,255:red,0; green,114; blue,189}](){};}}) are observed regardless of the aspect ratio and sweep angle. For higher angles of attack, the wake dynamics are influenced by the complex interplay between the tip effects and the midspan effects. The tip effects are responsible for the formation of steady wakes at low aspect ratios and low sweep angles (\protect\raisebox{0.0pt}{\tikz{\node[scale=0.45, regular polygon, regular polygon sides=3, fill={rgb,255:red,126; green,47; blue,142}](){};}}). In these cases, the downwash induced by the tip vortices suppresses the roll-up of the vortex sheet on the suction side of the wing \citep{taira2009three,devoria2017mechanism}. With an increase in aspect ratio, the effects of the tip vortices become relatively weaker away from the tip. This allows for the roll-up of the leading-edge vortex sheets, resulting in unsteady vortex shedding near the midspan ($\MyDiamond[draw={rgb,255:red,217; green,83; blue,25},line width=0.3mm, fill=white]$). Compared with $sAR=0.5$, the stability boundaries of the wake for $sAR=1$ and 2 shift toward lower angles of attack for low-sweep wings.
For wings with larger aspect ratios and larger sweep angles, the tip vortices are weaker than those of lower-sweep wings, and the midspan effects become profound in shaping the wake dynamics. The midspan effects are associated with the formation of a pair of vortical structures on the suction side of the midspan. These vortical structures are aligned at an angle of $180^{\circ}-2\Lambda$. For $\Lambda\neq 0^{\circ}$, each of the vortical structures is subjected to the downward velocity induced by its symmetric peer on the other side of the midspan. Such a mechanism stabilizes the wake over a considerable number of cases (\protect\raisebox{0.0pt}{\tikz{\node[scale=0.45,regular polygon, regular polygon sides=3,fill={rgb,255:red,119; green,172; blue,48},rotate=-90](){};}}). The formation of the vortical structures near the midspan is also beneficial to the aerodynamic performance, as will be discussed in detail in the following sections.
The downward velocity described above is strong near the midspan and weak towards the outboard sections of the wing. For swept wings with large aspect ratios, unsteady vortex shedding develops locally near the tip region, while the midspan region still remains steady. The resulting flows resemble the ``tip stall'' phenomenon \citep{black1956flow,visbal2019effect}, and prevail for swept wings of $sAR=1-2$ at high angles of attack (\protect\raisebox{0.0pt}{\tikz{\node[scale=0.55,regular polygon, regular polygon sides=4, draw = {rgb,255:red,237; green,177; blue,32}, line width=0.3mm, fill={rgb,255:red,255; green,255; blue,255},rotate=0](){};}}).
For wings of $sAR=2$ with high sweep angles ($\Lambda=37.5^{\circ}-45^{\circ}$), the unsteady tip shedding further transitions to another type of steady flow, with the formation of streamwise vortices (\protect\raisebox{0.0pt}{\tikz{\node[scale=0.45,regular polygon, regular polygon sides=3,fill={rgb,255:red,77; green,190; blue,238},rotate=-180](){};}}). We refer the readers to our previous study \citep{zhang2020laminar} for a thorough discussion of the wake dynamics of swept wings.
\subsection{Lift coefficients} \label{sec:lift}
\begin{figure}
\includegraphics[width=1\textwidth]{figures/CL_All.png}
\caption{Time-averaged lift coefficients for $(a)$ $sAR=0.5$, $(b)$ $sAR=1$ and $(c)$ $sAR=2$.}
\label{fig:lift}
\end{figure}
The lift force coefficients of the wings with $sAR=0.5$, 1 and 2 are presented in figure \ref{fig:lift}. Also plotted is the inviscid limit for the lift of low-aspect-ratio unswept wings in incompressible flow \citep{helmbold1942unverwundene}:
\begin{equation}
C_L = \displaystyle{\frac{2\pi \alpha}{\sqrt{1+(1/sAR)^2}+1/sAR}}.
\end{equation}
The lift coefficients of the finite-aspect-ratio wings are significantly smaller than the inviscid limit for all three aspect ratios. Compared to a similar characterization \citep{taira2009three} for flat-plate wings at $Re=300$, we find that the airfoil shape is influential even at these low Reynolds numbers.
For $sAR=0.5$, the sweep has a positive effect on the lift coefficients for $\alpha \lesssim 26^{\circ}$. The vortical lift enhancement with the sweep angle is due to the vortical structures near the midspan, as discussed in section \ref{sec:wake}. As the flow transitions to unsteady shedding at $\alpha=30^{\circ}$, the lift coefficients undergo an abrupt jump for wings with low sweep. For these unsteady flows, the lift coefficients decrease slightly with the sweep angle.
As the aspect ratio increases to $sAR=1$, for $\alpha\lesssim 12^{\circ}$, the lift coefficients across different sweep angles remain close to each other, and increase almost linearly with the angle of attack with a steeper slope than that of the analogous cases with $sAR=0.5$. For $\alpha\approx 16^{\circ}-20^{\circ}$, the favorable effect of sweep on the lift coefficients becomes more noticeable. The increase of $\overline{C_L}$ with $\Lambda$ saturates at high sweep angles. For higher angles of attack ($\alpha\approx 26^{\circ}-30^{\circ}$), the lift coefficients no longer exhibit a monotonic relationship with the sweep angle. Instead, high $\overline{C_L}$ is observed for $\Lambda=15^{\circ}$ at $\alpha=26^{\circ}$, and $\Lambda=22.5^{\circ}$ at $\alpha=30^{\circ}$.
For $sAR=2$, the lift coefficients decrease with increasing sweep angle for wings at low angles of attack ($\alpha\lesssim 16^{\circ}$). The adverse effect of sweep on lift at $sAR=2$ (contrary to the positive effect at $sAR=0.5$) is due to the fact that the additional generation of vortical lift is limited to the midspan region, while the elongated outboard region features lower sectional lift. However, at higher angles of attack, the lift coefficients for $\Lambda=45^{\circ}$ surpass those of moderate sweep angles ($\Lambda=15^{\circ}-37.5^{\circ}$), although they are significantly smaller than those for $\Lambda=0^{\circ}-7.5^{\circ}$. Compared with $sAR=0.5$ and 1, the lift coefficients for $sAR=2$ wings are generally higher.
An exception to this trend is observed for the $\Lambda=45^{\circ}$ wings for $\alpha=0^{\circ}-16^{\circ}$, where the lift coefficients remain almost the same as those of the $sAR=1$ wings.
\subsection{Drag coefficients}
\begin{figure}
\includegraphics[width=1\textwidth]{figures/CD_All.png}
\caption{Time-averaged drag coefficients for $(a)$ $sAR=0.5$, $(b)$ $sAR=1$ and $(c)$ $sAR=2$.}
\label{fig:drag}
\end{figure}
The drag coefficients of the wings exhibit a quadratic growth with angle of attack over the studied range, as shown in figure \ref{fig:drag}. For the low aspect ratio of $sAR=0.5$, the difference in drag coefficients among cases with different sweep angles is not noticeable until $\alpha=20^{\circ}$. At $\alpha=26^{\circ}$, $\overline{C_D}$ increases with the sweep angle. As the flow destabilizes at $\alpha=30^{\circ}$, similar to the lift coefficients shown in figure \ref{fig:lift}($a$), the drag coefficients decrease with increasing sweep angle.
Compared to the drag coefficients at $sAR=0.5$, those at $sAR=1$ are generally smaller for $\alpha\lesssim 20^{\circ}$. For these cases, the drag decreases with increasing $\Lambda$, although the difference among different sweep angles remains small. At higher angles of attack ($\alpha=26^{\circ}-30^{\circ}$), the drag coefficients of wings with $\Lambda=0^{\circ}-22.5^{\circ}$ are significantly higher than those with $\Lambda=30^{\circ}-45^{\circ}$. Similar to $sAR=1$, the drag coefficients at $sAR=2$ also decrease with increasing sweep angle. However, the difference in $\overline{C_D}$ among different sweep angles becomes larger even at low angles of attack. At higher angles of attack, the drag coefficients of wings with low sweep angles grow much faster with $\alpha$ than those with high sweep angles.
\subsection{Lift-to-drag ratios}
\begin{figure}
\includegraphics[width=1\textwidth]{figures/CLCD_All.png}
\caption{Time-averaged lift-to-drag ratio for $(a)$ $sAR=0.5$, $(b)$ $sAR=1$ and $(c)$ $sAR=2$.}
\label{fig:ratio}
\end{figure}
The time-averaged lift-to-drag ratios are compiled in figure \ref{fig:ratio}. The ratio $\overline{C_L/C_D}$ generally improves with increasing aspect ratio. However, due to the low-Reynolds-number nature of the flow, the lift-to-drag ratio remains below 1.5 for the cases considered herein. At $sAR=0.5$, the lift-to-drag ratio increases with the angle of attack up to $\alpha\approx 20^{\circ}$, at which the maximum $\overline{C_L/C_D}$ is achieved. The lift-to-drag ratio increases with the sweep angle, due to the lift enhancement mechanism of the midspan effects discussed in \S \ref{sec:wake}. The positive effect of the sweep angle on $\overline{C_L/C_D}$ is also observed for wings with $sAR=1$. The maximum $\overline{C_L/C_D}$ for $sAR=1$ is achieved at $\alpha\approx 16^{\circ}$ for $\Lambda=0^{\circ}-37.5^{\circ}$, and at $\alpha\approx 20^{\circ}$ for $\Lambda=45^{\circ}$. For $sAR=2$ at low angles of attack ($\alpha\lesssim 12^{\circ}$), the lift-to-drag ratios of the $\Lambda=45^{\circ}$ wings are significantly lower than those with lower sweep angles. At higher angles of attack ($\alpha=20^{\circ}-30^{\circ}$), $\overline{C_L/C_D}$ of the $\Lambda=45^{\circ}$ wing is only slightly higher than for the rest of the cases. This suggests that care should be taken in selecting the wing geometry if the midspan lift enhancement of finite-aspect-ratio swept wings is to be exploited.
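
For reproducibility, the inviscid Helmbold limit plotted in figure \ref{fig:lift} can be evaluated directly from the expression given above; the short Python sketch below is illustrative only (the variable names are ours and not part of the simulation toolchain).
\begin{verbatim}
import numpy as np

def helmbold_cl(alpha_deg, sAR):
    # Inviscid lift limit for low-aspect-ratio unswept wings
    # (Helmbold 1942), with alpha in degrees.
    alpha = np.deg2rad(alpha_deg)
    return 2.0 * np.pi * alpha / (np.sqrt(1.0 + (1.0 / sAR)**2)
                                  + 1.0 / sAR)

for sAR in (0.5, 1.0, 2.0):
    print(sAR, helmbold_cl(10.0, sAR))  # limit grows with aspect ratio
\end{verbatim}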
\section{Conclusions} \label{conclusions}
We have performed unsteady three-dimensional direct numerical simulations to study the aerodynamic characteristics of finite-aspect-ratio swept wings with a NACA 0015 cross-section at a chord-based Reynolds number of 400. The effects of the sweep angle ($\Lambda=0^{\circ}-45^{\circ}$) on the aerodynamic force coefficients were examined for finite-aspect-ratio wings ($sAR=0.5$, 1, and 2) over a wide range of angles of attack ($\alpha=0^{\circ}-30^{\circ}$). The unsteady laminar separated flows exhibit complex aerodynamic characteristics with respect to these parameters. The introduction of sweep enhances lift for wings with low aspect ratios of $sAR=0.5$ and 1, due to the lift generated by the vortical structures near the midspan. For these cases, the dependence of the drag coefficients on the sweep angle is less noticeable, particularly at lower angles of attack. The lift-to-drag ratios for low-aspect-ratio wings increase with the sweep angle. However, such favorable effects of the sweep angle on the lift coefficients and lift-to-drag ratios are not observed for wings with higher aspect ratios, where the midspan effects are relatively weaker over the wing span. The results herein provide a laminar aerodynamic characterization of low-aspect-ratio wings and complement the unsteady low-Reynolds-number aerodynamic database with emphasis on the effect of sweep.
\section*{Acknowledgments}
We acknowledge the US Air Force Office of Scientific Research (Program Managers: Dr.~Gregg Abate and Dr.~Douglas Smith, Grant number: FA9550-17-1-0222) for funding this project. We thank Ms.~Shelby Hayostek, Prof.~Michael Amitay, Dr.~Wei He, Mr.~Anton Burtsev and Prof.~Vassilios Theofilis for insightful discussions.
\section{Introduction} \label{sec:introduction}
Solid particles pervade the interstellar medium at all scales. Although they represent a small amount of its total mass, they deeply influence its evolution by setting the local chemical, thermal and charge balances. Dust also plays a key role in the formation of planets, since solid bodies grow over thirty orders of magnitude in mass to form the cores of planets. Spatially resolved observations of young stellar objects strongly suggest that at least some planets have to form in less than one million years (e.g. \citealt{Alma2015,Avenhaus2018,Pinte2020}). The key is to understand how dust growth can be so efficient. However, planet formation is an out-of-equilibrium, non-linear, multi-scale and multi-physics process. For example, dust grains differentiate from the gas as they settle vertically and drift radially in the disc (e.g. \citealt{Testi2014} and references therein). This creates instabilities which concentrate the solids even more, affecting the collisional rate of the grains, and thus their growth or fragmentation. Since dust dynamics strongly depends on the grain size, growth exerts a strong feedback on the spatial distribution of the particles.
\begin{figure}
\includegraphics[width=0.95\columnwidth]{figures/paper_kconst_brauer_nbins=15tend=10000_2.pdf}
\caption{An illustration of the growth over-diffusion problem: numerical schemes of order 0 over-estimate the formation of large grains at low resolution. The plot has been realised with the scheme presented in \citet{Kovetz1969} for the case of a constant kernel $K = 1$ with $N=15$ logarithmically-spaced dust bins.}
\label{fig:overdiff}
\end{figure}
Hence, 3D dust/gas simulations that include growth and fragmentation are essential to understand dust evolution during the early stages of planet formation (e.g. \citealt{Safronov1972,Hayashi1975,Weidenschilling1980,Ohtsuki1990,Wetherill1990,Tanaka1996,Dominik2007,Ormel2007,Birnstiel2010}). The simplest way to formalise the evolution of a local mass distribution of dust grains is by means of the deterministic mean-field Smoluchowski equation, which assumes binary collisions \citep{Smolu1916}. This equation does not have generic analytic solutions. \textit{Integrated non-linearities challenge numerical solvers to obtain accurate solutions} (see Fig.~\ref{fig:overdiff}). As such, this equation has been thoroughly studied for a century (e.g. \citealt{Muller1928,Schumann1940,Chandrasekhar1943,Melzak1953,McLeod1962,Golovin1963,Berry1967,Scott1968,Trubnikov1971,Hidy1972,Drake1972a,Gillespie1975b,SW1978,ST1979,Gelbard1980,Aldous1999,Friedlander2000,Ramkrishna2000,FL2004,Jacobson2005,Pruppacher2010}), and applied extensively to several fields such as aerosol science, chemistry, meteorology, biology and astrophysics. It has been shown that classical solvers require a sufficient resolution in mass to avoid the artificial formation of aggregates of large masses \citep{Soong1974,Berry1974,Trautmann1999,Khain2018}. This artificial diffusion may become particularly important when the mass interval considered is large (Fig.~\ref{fig:overdiff}). Typically, for planet formation, a few hundred mass bins are required to compute dust growth from interstellar sizes to pebbles. Usually, this is of no importance given current computational capacities. However, 3D hydrodynamical simulations can hardly handle more than (a few) ten(s) of mass bins in practice. Compromises have therefore been made, either by simplifying the growth or the dynamics.
However, 1D and 2D hydrodynamical codes integrating the Smoluchowski equation (e.g. \citealt{Birnstiel2010}) provide different results compared to 3D hydrodynamical codes with monodisperse growth models (e.g. \citealt{Gonzalez2017}), showing the necessity of a comprehensive approach. \textit{This calls for a solver that accurately solves the Smoluchowski equation with a number of bins small enough to be tractable by 3D hydrodynamical codes.} Reaching high accuracy with a low number of bins while conserving the mass over a finite mass interval is a characteristic property of high-order finite volume solvers, which therefore stand out as a natural way to address the growth over-diffusion problem. In this study, we present a high-order solver for the Smoluchowski equation based on the Discontinuous Galerkin method, following the pioneering work of \citet{Liu2019}. Important properties of the Smoluchowski equation discussed in the astrophysical context are presented in Sect.~\ref{sec:coag}. The novel Discontinuous Galerkin numerical scheme is presented in Sect.~\ref{sec:dg}. The performance of the solver regarding the over-diffusion problem is studied in Sect.~\ref{sec:num}. The applicability of the algorithm to young stellar objects or to other astrophysical contexts is discussed in Sect.~\ref{sec:discussions}.
\section{Smoluchowski equation} \label{sec:coag}
\subsection{Short summary}
The Smoluchowski equation describes mass conservation for a distribution of aggregates between which mass transfers are allowed. This equation exists in a discrete form (monomers forming polymers) or in a continuous form, valid in the limit where mass quantization becomes negligible \citep{Muller1928}. The Smoluchowski equation is a non-linear integro-differential hyperbolic equation that depends on a collision function called the growth kernel (or kernel), which quantifies the collision rate between two grains. Explicit solutions exist only for the so-called constant \citep{Smolu1916,Schumann1940,Scott1968}, additive \citep{Golovin1963,Scott1968} and multiplicative kernels \citep{McLeod1962,Scott1968}, implying numerical resolution for physical problems. Among the known solutions, self-similar solutions are particularly important since they provide the asymptotic behaviour of the mass distribution at large times \citep{Schumann1940,Friedlander1966,Wang1966,Menon2004,Niethammer2016a,Laurencot2018}. A generic feature of these solutions is the exponentially fast decay of the solution at large masses. Gelation, i.e. the formation of aggregates of infinite mass in a finite time, occurs for kernels sustaining explosive growth \citep{Leyvraz1981}. In astrophysics, collisions occur essentially through ballistic impacts modulated by focusing due to long-range interactions \citep{Safronov1972,Dullemond2005}. Kernels are non-explosive and mass remains rigorously conserved during the growth process.
\subsection{Conservative form}
Mass conservation for a distribution of growing grains was originally formalised by \citet{Smolu1916}. Growth is modelled via binary collisions between spheres having known mean probabilities. The by-products of collisions are called aggregates or polymers. In \citet{Smolu1916}, aggregates are assumed to also have spherical shapes. Spatial correlations are neglected. The smallest colliding elements are referred to as monomers. For physical systems involving aggregates made of large numbers of monomers, it is often convenient to assume continuous mass distributions.
The population density of grains within an elementary mass range $\mathrm{d}m$ is characterised by its number density $n\!\left( m \right)$. The continuous Smoluchowski equation is given by
\begin{equation}
\begin{aligned}
\frac{\partial n\left(m,t\right)}{\partial t} = & \frac{1}{2} \int\limits_0^{m} \! K\!\left(m-m',m'\right)n \!\left(m-m',t\right)n\!\left(m',t\right) \mathrm{d}m' \\
& -n\!\left(m,t\right) \int\limits_0^{\infty} \! K\!\left(m,m'\right)n\!\left(m',t\right) \mathrm{d}m',
\end{aligned}
\label{eq:smolu_cont}
\end{equation}
where $t$ denotes time and $m$ and $m'$ the \textit{masses} of two colliding polymers. The averaged probabilities of collision are encoded in the coagulation kernel $K\left(m,m'\right)$, which is a symmetric function of $m$ and $m'$ for binary collisions (see Sect.~\ref{sec:kernels}). Fig.~\ref{fig:scheme_smolu} illustrates the physical meaning of the non-linear integro-differential equation Eq.~\ref{eq:smolu_cont}. The number of grains encompassed within a given interval of masses varies since i) binary collisions of aggregates of appropriate masses can increase this population (first term of the right-hand side of Eq.~\ref{eq:smolu_cont}), but ii) those grains may themselves collide with other grains to form larger aggregates (second term of the right-hand side of Eq.~\ref{eq:smolu_cont}). This equation can be put under a convenient dimensionless form by introducing \citep{Scott1968,Drake1972a}
\begin{equation}
\left\{
\begin{aligned}
& x \equiv m/m_0,\,y \equiv m'/m_0,\, \mathcal{K}(x,y) = K(m,m')/K_0, \\
& \tau = (K_0 N_0) t,\, f(x,\tau) = m_0 \, n(m,t)/N_0 .
\end{aligned}
\right.
\end{equation}
$N_0$ is the initial total number density of particles, $m_0$ is the initial mean mass of the particles and $K_0$ is a normalising constant with dimensions $[\mathrm{length}]^3/\mathrm{time}$. We adopt the variables $x$ and $\tau$ for the sake of clarity and homogeneity with the existing literature (e.g. \citealt[and references therein]{Friedlander2000,Jacobson2005}); the variable $x$ therefore denotes mass. Eq.~\ref{eq:smolu_cont} transforms into
\begin{equation}
\begin{aligned}
\frac{\partial f(x,\tau)}{\partial \tau} = & \frac{1}{2} \int\limits_0^x \! \mathcal{K}(y,x-y) f(y,\tau) f(x-y,\tau) \mathrm{d}y \\
& - f(x,\tau) \int\limits_0^{\infty} \! \mathcal{K}(y,x) f(y,\tau) \mathrm{d}y.
\end{aligned}
\label{eq:smolu_cont_DL}
\end{equation}
Eq.~\ref{eq:smolu_cont_DL} is physically ill-posed, since the probability to form aggregates of mass larger than the total initial mass of the system may be non-zero. \citet{Tanaka1996} have shown that Eq.~\ref{eq:smolu_cont_DL} can be equivalently written under the conservative form
\begin{equation}
\left\{
\begin{aligned}
&\frac{\partial g \left( x,\tau \right) }{\partial \tau} + \frac{\partial F_{\mathrm{coag}} \left[ g \right] \left( x,\tau \right)}{\partial x} = 0 \\
&F_{\mathrm{coag}} \left[ g \right] \left( x,\tau \right) = \int\limits_0^x \! \! \int\limits_{x-u}^{\infty} \mathcal{K} \left( u,v \right) g \left( u,\tau \right) \frac{g \left( v,\tau \right)}{v} \mathrm{d}u \mathrm{d}v ,
\end{aligned}
\right.
\label{eq:smol_cons_DL}
\end{equation}
where $g\left(x,\tau \right) \equiv x f\left(x,\tau \right)$ is the mass density of polymers per unit mass, and $F_{\mathrm{coag}} \left[ g \right] \left( x,\tau \right)$ is the flux of mass density across the mass $x$ triggered by coagulation \citep{FL2004}.
Under this conservative form, the infinite upper bound of the second integral in $F_{\mathrm{coag}}$ can simply be replaced by $x_{\rm max} - u$. This prevents the formation of aggregates of masses larger than $x_{\rm max}$ by setting the corresponding outgoing mass flux rigorously to zero.
\subsection{Kernels} \label{sec:kernels}
Physically, the coagulation kernel is defined according to
\begin{equation}
K\! \left(m,m' \right) \equiv \beta\! \left(m,m', \Delta v\right) \Delta v \!\left(m,m'\right) \sigma \! \left(m,m'\right),
\end{equation}
where $\Delta v$ is the mean relative velocity between two aggregates of masses $m$ and $m'$, $\sigma$ is the mean effective cross section of collision and $\beta$ denotes the mean sticking probability of the grains. The coagulation kernel encodes the microphysics of collisions inside $\beta$, $\sigma$ and $\Delta v$, those parameters depending \textit{a priori} on the sizes of the colliding grains, or on the kinetic and thermodynamical parameters of an eventual surrounding flow. A kernel of particular importance for physical problems is the ballistic kernel (Table~\ref{table:kernels}). In this case, $\sigma$ corresponds simply to the geometric cross-section of the grains (focusing effects due to electrostatic or gravitational forces being neglected), and $\beta$ and $\Delta v$ are treated as constants (which may be a relevant approximation at least over moderate ranges of masses). Coagulation kernels can also be seen as mathematical objects useful to study the properties of the Smoluchowski equation under various conditions or to derive explicit analytic solutions. The expressions of the four kernels discussed in this work are given in Table~\ref{table:kernels}.
\begin{figure}
\includegraphics[width=\columnwidth,trim=100 200 100 100, clip]{./figures/scheme_smolu_2.pdf}
\caption{Illustration of the Smoluchowski equation Eq.~\ref{eq:smolu_cont}. Polymers of mass $m_i$ are represented in orange. The green and blue polymers have masses lower than $m_i$. Creation (resp. growth) of polymers of mass $m_i$ increases (resp. decreases) their number density.}
\label{fig:scheme_smolu}
\end{figure}
\subsection{Analytic solutions} \label{sec:analytic}
Explicit analytic solutions exist in the case of simple kernels and specific initial conditions. We review these solutions hereafter since they will be used in Sect.~\ref{sec:num} to benchmark the numerical algorithms.
\subsubsection{Constant kernel} \label{sec:kconst}
For the constant kernel $\mathcal{K} \! \left( x,y \right)=1$ and the initial condition $f \left( x,0 \right) = \exp \left(-x\right)$, the solution of Eq.~\ref{eq:smolu_cont_DL} is \citep{Muller1928,Schumann1940,Melzak1957,Rajagopal1959,Scott1968,ST1979}
\begin{equation}
\left\{
\begin{aligned}
&f_1(\tau) \equiv \frac{4}{(2+\tau)^2},\,f_2(\tau) \equiv \frac{\tau}{2+\tau},\\
&f(x,\tau) = f_1(\tau) \exp\left(-\left\lbrace1-f_2(\tau)\right\rbrace x\right).\\
\end{aligned}
\right.
\label{eq:sol_kconst}
\end{equation}
Physically, a constant kernel $\mathcal{K}=1$ implies that the frequency of collisions between two aggregates is independent of their size.
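
As an illustrative cross-check (not part of the original derivation), the solution Eq.~\ref{eq:sol_kconst} can be evaluated numerically to verify that the total mass $M_1 = \int x f\,\mathrm{d}x$ remains equal to unity at all times. The short Python sketch below uses a trapezoidal quadrature on a logarithmic grid; the grid bounds and resolution are arbitrary choices.
\begin{verbatim}
import numpy as np

def f_kconst(x, tau):
    # Analytic solution for K = 1 and f(x,0) = exp(-x).
    f1 = 4.0 / (2.0 + tau)**2
    f2 = tau / (2.0 + tau)
    return f1 * np.exp(-(1.0 - f2) * x)

x = np.logspace(-8, 3, 200000)   # mass grid (log-spaced)
for tau in (0.0, 1.0, 10.0, 100.0):
    y = x * f_kconst(x, tau)
    m1 = np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x))  # trapezoid rule
    print(tau, m1)   # ~1 up to quadrature error at all times
\end{verbatim}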
\begin{table}
\begin{center}
\begin{tabular}{cc}
\hline
Kernel & $\mathcal{K}(x,y)$ \\
\hline
Size-independent & $1$ \\
Sum & $x+y$ \\
Product & $xy$ \\
Ballistic & $\pi\left(x^{1/3}+y^{1/3}\right)^2 \Delta v$
\end{tabular}
\caption{Functional forms of the different coagulation kernels $\mathcal{K}$ considered in this study.}
\label{table:kernels}
\end{center}
\end{table}
\subsubsection{Additive kernel} \label{sec:kadd}
The solution for the additive kernel $\mathcal{K}(x,y)=x+y$ with the initial condition $f(x,0)=\exp(-x)$ was derived by \citet{Golovin1963}, and \citet{Scott1968} extended the derivation to general initial conditions. For the initial condition $f(x,0) = \exp(-x)$, the solution of Eq.~\ref{eq:smolu_cont_DL} is
\begin{equation}
\left\{
\begin{aligned}
& T \equiv 1-\exp(-\tau), \\
&f(x,\tau) = \frac{\left(1-T\right)\exp\left(-x\left\lbrace1+T\right\rbrace\right)}{xT^{1/2}}I_1 \! \left(2xT^{1/2}\right),\\
\end{aligned}
\right.
\label{eq:sol_kadd}
\end{equation}
where $I_1$ is the modified Bessel function of the first kind. Physically, the additive kernel implies that the frequency of collisions increases with the size of the grains. Large aggregates form faster compared to the case of a constant kernel, leading to broader dust distributions at large masses. The asymptotic tail therefore decays more smoothly than in the case $\mathcal{K}=1$.
\subsubsection{Multiplicative kernel} \label{sec:kmul}
Originally, \citet{McLeod1964} derived a solution for the multiplicative kernel $\mathcal{K}(x,y) = xy$ with the initial condition $f(x,0)=x^{-1}\exp(-x)$, valid only for a small interval of time. The general solution for this problem was later found by \citet{Ernst1984}
\begin{equation}
\left\{
\begin{aligned}
& T \equiv \left\{
\begin{aligned}
& 1+\tau \quad \mathrm{if} \, \,\,\, \tau \leq 1 \\
& 2\tau^{1/2} \quad \mathrm{otherwise}
\end{aligned}, \right. \\
&f(x,\tau) = \frac{\exp\left(-Tx\right) I_1 \! \left(2x \tau^{1/2} \right)}{x^2 \tau^{1/2}}.\\
\end{aligned}
\right.
\label{eq:sol_kmul}
\end{equation}
The multiplicative kernel is the typical kernel used to study the occurrence of gelation, since at $\tau=1$, aggregates with infinite masses form and mass conservation is mathematically no longer satisfied. Physically, the multiplicative kernel implies an explosive increase of the collision frequency with grain size. Massive grains form faster compared to the case of the additive kernel. At the same time, the mass density of small grains decreases quickly.
\subsection{Numerical methods} \label{sec:numerical_methods}
No analytic solutions are known for the Smoluchowski coagulation equation with physical kernels, implying numerical resolution. Various numerical schemes have been developed for this purpose; they fall into two classes. A first class of solvers consists of Monte-Carlo simulations (e.g. \citealt{Gillespie1975a,Liffman1992,Smith1998,Lee2000,Debry2003,Sheng2006,Ormel2007,Zsom2008}). Although convenient, these methods have two principal drawbacks. Firstly, a large number of particles is required to ensure appropriate accuracy of the number density distribution $f$. Secondly, the scheme is not deterministic and simulations can be reproduced only in a statistical sense, which is not satisfying when interfacing with hydrodynamics. A second class of solvers consists of deterministic algorithms. These methods have been summarised in \cite{Kostoglou1994,Kumar1996a,Ramkrishna2000,Pruppacher2010,Khain2018}.
A short but comprehensive summary is given hereafter.
\subsubsection{Method of moments} \label{sec:moment_method}
The method of moments seems to be the first numerical method proposed to solve the Smoluchowski equation \citep{Hulburt1964}. A system of ordinary differential equations is written over the $k$-th moments $M_k \equiv \int_0^{\infty} x^k f(x,\tau) \mathrm{d}x$ of the number density function. Approximations either for the reconstruction of $f$ \citep{Hulburt1964} or for the derivation of fractional moments \citep{Estrada2008} are then required to close this system of ordinary differential equations. The Standard Moment Method (SMM) requires an analytical integration of the kernel. To avoid this difficulty, Quadrature Moment Methods (QMM), where integrals are approximated by Gaussian quadrature methods, have been developed. Solutions of moments can be used directly to derive the total number of particles $M_0$, the total mass $M_1$ or other physical quantities such as dust opacities \citep{Marchisio2003,Estrada2008}. Number densities $f$ are reconstructed using polynomials \citep{Pruppacher1980,Piskunov2002}.
\subsubsection{Point-based methods} \label{sec:point_based_method}
The number density function $f$ is sampled over a mass grid. The main difficulty lies in representing the continuous distribution $f$ as accurately as possible using the values of $f$ at the sampling points. Different algorithms have been developed using this approach:
\paragraph{Interpolation method} This method was developed by \citet{Berry1967,Berry1974}. The continuous Smoluchowski equation is written in terms of $g(x,\tau) \equiv x f(x,\tau)$, the mass density function. The mass interval is discretised using a logarithmic grid. A system of ordinary differential equations is derived with respect to the variable $g$ evaluated on the grid points. Gain and loss terms are evaluated separately, and integrals are calculated by using high-order Lagrangian interpolations. \citet{Middleton1976,Suck1979} improved this method by using Simpson's rules for the integrals and cubic spline interpolations.
\paragraph{Method of orthogonal collocation} The method of weighted residuals \citep{Finlayson1972} is a general method for obtaining numerical solutions to differential equations. The unknown solution is tested over a set of weight functions and is adapted to give the best approximate solution to the differential equation. The Smoluchowski equation is multiplied by the weight function $\phi$ and integrated over the whole mass domain to form the residual
\begin{equation}
\begin{aligned}
R \equiv \int_0^{\infty} & \left(\frac{\partial f(x,\tau)}{\partial \tau} \right. - \frac{1}{2} \int_0^x \mathcal{K}(x-y,y)f(x-y,\tau)f(y,\tau) \mathrm{d}y \\
& \left. + \int_0^{\infty} \mathcal{K}(x,y) f(x,\tau)f(y,\tau) \mathrm{d}y \right) \phi(x) \mathrm{d}x =0.
\end{aligned}
\end{equation}
The number density $f$ is approximated by polynomials. The collocation method corresponds to the case where $\phi(x) = \delta (x-x_0)$. The coagulation equation is evaluated at the collocation points $x_0$. This gives a set of ordinary differential equations equal in number to the degrees of freedom of the polynomials used. Integrals are usually performed using Gaussian quadrature rules \citep{Eyre1988}.
\begin{figure}
\centering
\includegraphics[width=\columnwidth,trim=0 200 0 150, clip]{./figures/scheme_pair_bins.pdf}
\caption{Illustration of the pair interaction methods.
A particle of mass $x_{n+l}=x_n +x_l$ forms from the collision between particles of masses $x_l$ and $x_n$. The resulting mass $x_{n+l}$ is distributed onto adjacent bins, generating numerical over-diffusion towards large masses.}
\label{fig:scheme_pair_bins}
\end{figure}
\paragraph{Pair interaction methods} Numerical integration of the Smoluchowski equation consists of summing the contributions of pairwise collisions between all grid points of different masses. For non-regular mass samplings, aggregates usually do not have masses corresponding to an existing grid point. To ensure mass conservation, the mass of the aggregate is distributed over the two relevant adjacent grid points (Fig.~\ref{fig:scheme_pair_bins}). The first pair-interaction solver was developed by \citet{Kovetz1969}. In this algorithm, a system of ordinary differential equations is obtained over the quantities $N(x_i)=\int_{a_i}^{b_i} \! f(x)\mathrm{d}x$, where $x_i$ denotes the mass of individual particles of the $i$-th point, and $a_i \equiv (x_{i-1}+x_{i})/2$ and $b_i\equiv(x_{i}+x_{i+1})/2$ are the midpoints between neighbouring grid points. In practice, logarithmic grids are used to cover wide ranges of masses. In the context of planet formation, widely used solvers follow this approach (e.g. \citealt{Brauer2008,Birnstiel2010}). The principal drawback of this method is that the redistribution of mass towards large grains tends to over-predict the number of large aggregates, triggering the artificial formation of large bodies (Fig.~\ref{fig:overdiff}). A large number of grid points is therefore required to avoid an artificial broadening of the number density of particles $f$ \citep{Berry1974,Soong1974,Khain2018}. Moreover, a sufficient number of grid points is also needed to avoid difficulties related to collisions that form aggregates of masses larger than the largest mass point. \citet{Jacobson2005} extended the \citet{Kovetz1969} algorithm by distributing the mass between grid points and writing the scheme in a semi-implicit form. This solver ensures mass conservation to machine precision. \citet{Bott1998,Simmel2002,Wang2007} also developed binary-pair interaction methods, in which mass is advected towards adjacent grid points by a mass flux expressed with a high-order scheme. These methods do not introduce a significant numerical broadening. Other methods have been developed by \citet{Hounslow1988,Lister1995}, where four binary interaction mechanisms of gain and loss of particles are considered to treat correctly the rates of change of particle number and mass.
\subsubsection{Finite element methods} \label{sec:finite_element_method}
In these methods, the continuous mass distribution is discretised over a finite number of mass elements (intervals, cells, bins).
\paragraph{Moments with finite elements} The first finite element scheme for coagulation was developed by \citet{Bleck1970} by discretising mass distributions over logarithmic bins. $f$ is approximated by its moment of order zero over each bin to obtain a system of ordinary differential equations. Over-diffusion for large grains is observed with this piecewise constant approximation. A change of variable $x \rightarrow x^{-3}$ is performed to reduce diffusivity at large masses. The method of \citet{Soong1974} follows \citet{Bleck1970}. The Smoluchowski equation is written in terms of the mass density distribution $g$ and approximated by piecewise exponential functions. This drastically reduces the diffusive effect at large masses.
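
To make the splitting rule of the pair-interaction methods (Fig.~\ref{fig:scheme_pair_bins}) concrete, the following Python sketch distributes an aggregate of mass $x_n+x_l$ onto the two adjacent grid points so that both number and mass are conserved; it is a schematic illustration, not the implementation of any of the solvers cited above.
\begin{verbatim}
import numpy as np

def split_mass(x_new, grid):
    # Return (j, eps): a fraction eps of the aggregate is assigned
    # to grid[j] and 1 - eps to grid[j+1], so that both the number
    # (eps + (1 - eps) = 1) and the mass
    # (eps*grid[j] + (1 - eps)*grid[j+1] = x_new) are conserved.
    j = np.searchsorted(grid, x_new) - 1
    eps = (grid[j + 1] - x_new) / (grid[j + 1] - grid[j])
    return j, eps

grid = np.logspace(0, 6, 15)          # 15 log-spaced mass points
j, eps = split_mass(grid[3] + grid[7], grid)
print(eps * grid[j] + (1 - eps) * grid[j + 1], grid[3] + grid[7])
\end{verbatim}
The fraction $1-\epsilon$ deposited on the upper grid point, at a mass noticeably larger than $x_n+x_l$ on a coarse logarithmic grid, is precisely what seeds the over-diffusion discussed above.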
\citet{Gelbard1980,Landgrebe1990} proposed a method similar to that of \citet{Soong1974}, where the Smoluchowski equation is decomposed over bins of indices $j$ in terms of $Q_j=\int_{I_j} x f(x,\tau) \mathrm{d}x$. A precise account of the gain and loss of particles in terms of fluxes of $Q$ is performed. \citet{Trautmann1999} extend the work of \citet{Gelbard1980}, also finding numerical diffusion when using a piecewise constant approximation, and addressing it by using piecewise exponential approximations. Another moment method, which involves polynomial approximations for the first two moments $M_0$ and $M_1$ of $f$, has been proposed by \citet{Enukashvily1980,Kumar1996a,Tzivion1999}.
\paragraph{Discontinuous Galerkin method} The discontinuous Galerkin method is a weighted residual method where the weights $\phi(x)$ consist of orthogonal polynomials (Lagrange polynomials, Legendre polynomials, cubic splines). The numerical solution of the Smoluchowski equation is decomposed on each bin over this basis and a system of ordinary differential equations is obtained for the coefficients (e.g. \citealt{Pilinis1990,Erasmus1994,Mahoney2002}, see Sect.~\ref{sec:dg}). Generally, the integrals are performed by Gaussian quadrature rules \citep{Gelbard1978,Rigopoulos2003,Sandu2006}.
\subsubsection{Finite element schemes in the conservative form} \label{sec:fem_cons_form}
The conservative form Eq.~\ref{eq:smol_cons_DL} has been exploited for numerical simulations only recently. \citet{FL2004} derived a finite volume scheme of order zero where volume integrals over flux divergences are replaced by flux terms at the interfaces by means of the divergence theorem. This scheme conserves mass exactly and has been further extended by \citet{Filbet2008,Bourgade2008,Forestier2012}. The mass interval can be sampled uniformly or non-uniformly. Finite volume schemes of higher orders solving the conservative form have been investigated recently \citep{Gabriel2010,Liu2019}. \citet{Gabriel2010} used a WENO reconstruction \citep{Jiang2000} to approximate the coagulation flux at the interfaces. \citet{Liu2019} developed a numerical scheme based on the discontinuous Galerkin method. This method provides the further advantage of allowing the order of the scheme to be chosen in a flexible manner. Integrals are calculated using Gaussian quadrature rules, which implies sub-sampling of the mass intervals.
\begin{figure}
\centering
\includegraphics[width=0.9 \columnwidth]{./figures/scheme_DG.pdf}
\caption{Sketch of the discontinuous Galerkin method. In each cell, the solution is approximated by polynomials of order $k$ to increase accuracy.}
\label{fig:scheme_DG}
\end{figure}
\subsection{Requirements from hydrodynamical simulations} \label{sec:hydro}
Densities must remain strictly positive and the total mass must be rigorously conserved to ensure the stability of hydrodynamical simulations. These two properties are naturally ensured by finite volume methods based on the conservative form Eq.~\ref{eq:smol_cons_DL}. The double-integral formulation makes it simple to quench the formation of aggregates with unphysical masses, by setting the integral bound to the maximum mass allowed. These constraints may not always be satisfied with simple integral formulations. On the other hand, observational constraints on young stellar objects are essentially provided by high-contrast spectro-polarimetry at infrared wavelengths (SPHERE/VLT, GPI, Subaru/HiCIAO) and millimetre interferometry (ALMA).
These observations probe (sub)micron-to-millimetre dust distributions in discs, which corresponds to 4 orders of magnitude in size, i.e. 12 orders of magnitude in mass for compact grains. With current computational capacities, 3D dust/gas simulations of dusty discs can handle $\sim$10-20 dust species simultaneously (e.g. \texttt{PHANTOM}, \citealt{Price2018} or \texttt{RAMSES}, \citealt{Lebreuilly2020}). The global accuracy of second-order hydrodynamical solvers is of order $\sim 10^{-3}$. We therefore aim to design a versatile algorithm for coagulation of accuracy $\sim 10^{-3}$ with $\sim15$ dust bins distributed over 12 orders of magnitude in mass, which allows tractable simulations. We therefore face the issue of over-diffusion associated with piecewise constant reconstructions with few mass bins, and high-order schemes appear as a natural way to overcome this difficulty.
It is much preferable for hydrodynamics to handle a fixed grid of sizes, to avoid interpolations when updating forces. We therefore seek a growth algorithm that works efficiently on a fixed grid. Additionally, we seek an algorithm that allows for convergence analysis in 3D hydrodynamical simulations. As explained above, multiplying the number of dust bins incurs prohibitive computational costs. Instead, the order of the scheme may be varied, should it be parametrised in a flexible manner. This requirement tends to favour discontinuous Galerkin schemes over WENO schemes, although the two provide in theory equivalent accuracies. Compared to regular Galerkin schemes, discontinuous Galerkin solvers decompose the solution over several mass bins. This helps to better capture the exponential decay of the solution at large masses and to avoid over-diffusion biases. For these reasons, we have chosen to focus on the discontinuous Galerkin method to solve the Smoluchowski equation in astrophysical contexts, an approach recently pioneered by \cite{Liu2019}.
Monofluid dust/gas hydrodynamical solvers provide a natural architecture in which to include a coagulation equation. Indeed, relative drifts between grains of different sizes are computed natively, possibly in the terminal velocity approximation (e.g. \citealt{Laibe2014a,Hutchison2018,Lebreuilly2019}). The monofluid formalism also ensures exact conservation of momentum, i.e. no thrust due to mass transfer propels the mixture. Sub-grid fluctuations should be prescribed by an accurate model that describes local turbulence or Brownian motion.
\section{Discontinuous Galerkin algorithm} \label{sec:dg}
\subsection{Discontinuous Galerkin method} \label{sec:DG_method}
The discontinuous Galerkin method is presented for the general scalar hyperbolic conservative equation
\begin{equation}
\left\{
\begin{aligned}
&\frac{\partial g(x,\tau)}{\partial \tau} + \frac{\partial F[g](x,\tau)}{\partial x} =0,\\
&x \in \mathbb{R}_+,\; \tau \in \mathbb{R}_+,
\end{aligned}
\right.
\label{eq:general_cons_law_eq}
\end{equation}
where $g$ is the density of a conserved quantity and $F\!\left[g \right]$ the associated flux.\\
Let us partition the domain of interest $[x_{\mathrm{min}},x_{\mathrm{max}}] \subset \mathbb{R}$ into $N$ subintervals (alternatively, cells or bins), not necessarily of equal sizes. Each cell is defined by $I_j=(x_{j-1/2},x_{j+1/2}], j \in [\![1,N]\!]$. The size of the $j$-th cell is defined as $h_j = x_{j+1/2}-x_{j-1/2}$. The cell is centred around the position $x_j=\left(x_{j+1/2}+x_{j-1/2}\right)/2$.
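
As a concrete illustration of such a partition, the Python sketch below builds a logarithmically spaced grid with the hypothetical values of Sect.~\ref{sec:hydro} ($N=15$ bins spanning 12 orders of magnitude in mass), together with the affine map onto the reference interval $[-1,1]$ introduced below. It is illustrative only, the solver itself being written in \texttt{Fortran}.
\begin{verbatim}
import numpy as np

N = 15                                    # number of bins (illustrative)
edges = np.logspace(-12, 0, N + 1)        # interfaces x_{j -/+ 1/2}
h = np.diff(edges)                        # bin sizes h_j
centers = 0.5 * (edges[:-1] + edges[1:])  # cell centres x_j

def xi(x, j):
    # Map a mass x in cell I_j onto the reference interval [-1, 1].
    return 2.0 * (x - centers[j]) / h[j]

print(xi(edges[3], 3), xi(edges[4], 3))   # -1.0 and 1.0 at the faces
\end{verbatim}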
We define $\mathcal{V}^k$ as the space of polynomials of degree at most $k$ in each cell $I_j$
\begin{equation}
\mathcal{V}^k = \left\{ v:v|_{I_j} \in P^k \left( I_j \right),j \in [\![1,N]\!] \right\}.
\label{eq:basis}
\end{equation}
We denote by $g_j \in \mathcal{V}^k$ the approximate solution of $g$ in the bin $I_j$. The terminology \textit{discontinuous Galerkin (DG)} comes from the fact that in $\mathcal{V}^k$, the functions are allowed to have jumps at the interfaces $x_{j+1/2}$. One obtains a weak formulation of Eq.~\ref{eq:general_cons_law_eq} by multiplying by a test function $\phi \in \mathcal{V}^k$, integrating over $I_j$ and finally integrating by parts \citep{Cockburn1989}
\begin{equation}
\begin{aligned}
\int_{I_j} \frac{\partial g_j}{\partial t} \phi \mathrm{d}x &- \int_{I_j} F[g]\left(x,t\right) \frac{\partial \phi}{\partial x} \mathrm{d}x \\
&+ F[g]\left(x_{j+1/2},t\right) \phi(x_{j+1/2}) \\
& - F[g]\left(x_{j-1/2},t\right) \phi(x_{j-1/2}) = 0 .
\end{aligned}
\label{eq:DG_eq}
\end{equation}
Eq.~\ref{eq:DG_eq} allows one to fix unequivocally the degrees of freedom of the function $g_j$. The residual of Eq.~\ref{eq:general_cons_law_eq} on bin $I_j$ is defined as
\begin{equation}
\begin{aligned}
R_j \equiv \int_{I_j} \frac{\partial g_j}{\partial t} \phi \mathrm{d}x & - \int_{I_j} F[g]\left(x,t\right) \frac{\partial \phi}{\partial x} \mathrm{d}x \\
& +F[g]\left(x_{j+1/2},t\right) \phi(x_{j+1/2}) \\
& - F[g]\left(x_{j-1/2},t\right) \phi(x_{j-1/2}).
\end{aligned}
\end{equation}
DG schemes consist of choosing a local orthogonal polynomial basis on $I_j$ to replace the test function and to approximate the solution. The residuals $R_j$ are therefore null in the sense of orthogonality with respect to the basis. In practice, Legendre polynomials are used \citep{Cockburn1989}. We denote hereafter the $i$-th Legendre polynomial by $\phi_i\left(\xi\right)$, where $\xi \in [-1,1]$. The polynomials $\phi_i\left(\xi\right)$ are orthogonal in $L^2\left([-1,1]\right)$ with respect to the inner product with weight unity. Fig.~\ref{fig:scheme_DG} shows a sketch of the DG method. In each cell, the function $g$ is approximated by Legendre polynomials. The accuracy of the approximation increases with the order of the polynomials. The approximation of $g$ in cell $I_j$ writes
\begin{equation}
\begin{aligned}
& \forall x \in I_j,\; g(x) \approx g_j\left(x,t\right)=\sum_{i=0}^k g_j^i\left(t\right) \phi_i(\xi_j\left(x\right)), \\
& g_j\left(x,t\right)= \bm{g}^T_j(t) \cdot \bm{\phi}(\xi_j (x)), \; \mathrm{with} \; \bm{g}_j = \begin{bmatrix} g_j^0 \\ \vdots \\ g_j^k \end{bmatrix} \mathrm{and} \; \bm{\phi} = \begin{bmatrix} \phi_0 \\ \vdots \\ \phi_k \end{bmatrix},
\end{aligned}
\label{eq:proj_basis}
\end{equation}
where $g_j^i$ is the component of $g_j$ on the Legendre polynomial basis. The function $\xi_j\left(x\right) \equiv \frac{2}{h_j}\left(x-x_j\right)$ is used to map the interval $I_j$ onto the interval $[-1,1]$. Normalising the Legendre basis gives
\begin{equation}
\int\limits_{-1}^1 \bm{\phi}(\xi) \bm{\phi}^T (\xi) \mathrm{d}\xi = \mathrm{diag}\left(d_0,\dots,d_k\right) \; \mathrm{with} \;d_i \equiv \frac{2}{2i+1},
\label{eq:normalisation_leg}
\end{equation}
where the $d_i$ are the normalisation coefficients.
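
This normalisation is easy to check numerically. The Python sketch below (illustrative only) assembles the Gram matrix of the first $k+1$ Legendre polynomials with a Gauss--Legendre rule, which is exact here, and recovers $\mathrm{diag}(2, 2/3, 2/5, 2/7)$ for $k=3$.
\begin{verbatim}
import numpy as np
from numpy.polynomial import legendre as leg

k = 3
xi_q, w_q = leg.leggauss(k + 1)   # exact for degree <= 2k + 1
# phi[i, q] = P_i(xi_q): each Legendre polynomial at the nodes.
phi = np.array([leg.legval(xi_q, np.eye(k + 1)[i])
                for i in range(k + 1)])
gram = (phi * w_q) @ phi.T        # entries int_{-1}^{1} phi_i phi_k
print(np.round(gram, 12))         # diag(2, 2/3, 2/5, 2/7)
\end{verbatim}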
By combining Eqs.~\ref{eq:DG_eq}, \ref{eq:proj_basis} and \ref{eq:normalisation_leg}, one obtains
\begin{equation}
\begin{aligned}
&\frac{\mathrm{d} \bm{g}_j\left(t \right)}{\mathrm{d} t} = \bm{L}[g] \; \mathrm{with} \\
& \begin{aligned} \bm{L}[g] \equiv & \frac{2}{h_j} \begin{bmatrix} 1/d_0 & & \\ & \ddots & \\ & & 1/d_k \end{bmatrix} \\
& \begin{aligned} \Bigg( \Bigg. \int_{I_j} & F\left[ g \right]\left( x,t \right) \partial_x \bm{\phi} \left(\xi_j\left(x \right)\right) \mathrm{d}x \\
& \begin{aligned} - \bigg[ \bigg. & F\left[ g \right]\left(x_{j+1/2},t\right) \bm{\phi} \left(\xi_j \left( x_{j+1/2} \right) \right) \\
& \Bigg. \bigg. - F\left[ g \right]\left(x_{j-1/2},t\right) \bm{\phi} \left(\xi_j \left( x_{j-1/2} \right) \right) \bigg] \Bigg) , \end{aligned} \end{aligned} \end{aligned}
\end{aligned}
\label{eq:DG_ode}
\end{equation}
where $\bm{L}$ is the operator that results from applying the DG procedure to Eq.~\ref{eq:general_cons_law_eq} with a Legendre polynomial basis. With the procedure described above, the original partial differential equation (PDE) Eq.~\ref{eq:general_cons_law_eq} is transformed into a system of ordinary differential equations (ODE), Eq.~\ref{eq:DG_ode}, for the coefficients $g_j^i(t)$. The initial condition $g_j\left(x,0\right)$ is generated by the piecewise $L^2$ projection of an initial mass density distribution $g_0(x)$ on each bin, i.e.
\begin{equation}
\begin{aligned}
& \forall j \in [\![1,N]\!], \\
& \int_{I_j} \left( g_j\left(x,0\right) - g_0\left(x\right) \right) \bm{\phi}^T (\xi_j(x)) \mathrm{d}x = \bm{0}.
\end{aligned}
\end{equation}
Orthogonality of the Legendre polynomials ensures
\begin{equation}
\begin{aligned}
\int_{I_j} g_j \bm{\phi}^T \mathrm{d}x & = \frac{h_j}{2} \int_{-1}^1 \bm{\phi}(\xi) \bm{\phi}^T (\xi) \mathrm{d}\xi \bm{g}_j (t) \\
& = \frac{h_j}{2} \mathrm{diag}[d_0,...,d_k] \bm{g}_j (t).
\end{aligned}
\end{equation}
Then, the components of $\bm{g}_j$ are given by
\begin{equation}
\begin{aligned}
&\forall j \in [\![1,N]\!], \forall i \in [\![0,k]\!],\\
&g_j^i(0) = \frac{2}{h_j d_i} \int\limits_{-1}^1 g_0 \left(\frac{h_j}{2} \xi_j + x_j \right) \phi_i (\xi_j) \mathrm{d} \xi_j.
\end{aligned}
\label{eq:g_components_initial}
\end{equation}
Hence, the DG method consists in solving the following Cauchy problem
\begin{equation}
\left\{
\begin{aligned}
&\forall j \in [\![1,N]\!], \forall i \in [\![0,k]\!], \\
&\frac{\mathrm{d} \bm{g}_j\left(t \right)}{\mathrm{d} t} = \bm{L}[g],\\
&g_j^i(0) = \frac{2}{h_j d_i} \int\limits_{-1}^1 g_0 \left(\frac{h_j}{2} \xi_j + x_j \right) \phi_i (\xi_j) \mathrm{d} \xi_j,
\end{aligned}
\right.
\end{equation}
where $\bm{L}$ is detailed in Eq.~\ref{eq:DG_ode}.
\subsection{Evaluation of the flux} \label{sec:fluxes}
\subsubsection{Regularised flux}
The continuous Smoluchowski coagulation equation Eq.~\ref{eq:smolu_cont_DL} is defined over an unbounded interval of masses $x \in \mathbb{R}_+$. Before applying the DG procedure, Eq.~\ref{eq:smolu_cont_DL} is restricted to a physical mass interval. Moreover, growth from a gaseous reservoir is excluded, meaning that $x>0$. The mass interval is therefore reduced to $[x_{\mathrm{min}},x_{\mathrm{max}}]$ with $x_{\mathrm{min}} > 0$ and $x_{\mathrm{max}} < + \infty$ \citep{FL2004,Liu2019}. The coagulation flux can be truncated according to two procedures \citep{FL2004}. On the one hand
\begin{equation}
\begin{aligned}
& F_{\mathrm{coag}}^{\mathrm{c}} \left[ g \right] \left( x,\tau \right) = \\
& \qquad \int\limits_{x_{\mathrm{min}}}^x \!
\int\limits_{x-u+x_{\mathrm{min}}}^{x_{\mathrm{max}}-u+x_{\mathrm{min}}} \mathcal{K} \left( u,v \right) g \left( u,\tau \right) \frac{g \left( v,\tau \right)}{v} \mathrm{d}v \mathrm{d}u,
\end{aligned}
\end{equation}
where $F_{\mathrm{coag}}^{\mathrm{c}}$ is the conservative flux, meaning that no particle of mass larger than $x_{\mathrm{max}}$ is allowed to form. On the other hand
\begin{equation}
F_{\mathrm{coag}}^{\mathrm{nc}} \left[ g \right] \left( x,\tau \right) = \int\limits_{x_{\mathrm{min}}}^x \! \int\limits_{x-u+x_{\mathrm{min}}}^{x_{\mathrm{max}}} \mathcal{K} \left( u,v \right) g \left( u,\tau \right) \frac{g \left( v,\tau \right)}{v} \mathrm{d}v \mathrm{d}u,
\end{equation}
where $F_{\mathrm{coag}}^{\mathrm{nc}}$ is the non-conservative flux, which allows the formation of particles of mass $x>x_{\mathrm{max}}$. $F_{\mathrm{coag}}^{\mathrm{c}}$ is useful in realistic simulations of growth, whereas $F_{\mathrm{coag}}^{\mathrm{nc}}$ should be used to compare numerical solutions to the analytic solutions of Eq.~\ref{eq:smolu_cont}.
\subsubsection{Method for evaluating the flux}
A crucial difference between this scheme and usual DG solvers is that the coagulation flux $F_{\mathrm{coag}}^{\mathrm{nc}}$ is non-local: the evaluation of the numerical flux $F_{\mathrm{coag}}^{\mathrm{nc}}[g]$ at the interface $x_{j+1/2}$ depends on the evaluation of $g_j$ in all cells. Mathematically, $F_{\mathrm{coag}}^{\mathrm{nc}}$ is a double integral of a product of polynomials. The flux is thus a continuous function of the mass $x$. At the interface $x_{j+1/2}$, the numerical flux reduces to $F_{\mathrm{coag}}^{\mathrm{nc}}\left[ g \right] = F_{\mathrm{coag}}^{\mathrm{nc}}\left[ g \right] \left(x_{j+1/2},t\right)$. In usual DG solvers, the numerical flux is a discontinuous function and must be reconstructed at the interfaces (e.g \citealt{Cockburn1989,Zhang2010}). The principal difficulty lies in carefully evaluating the flux at the interfaces. This relies on handling the numerical integration of the polynomials $g_j$ in every relevant cell. \citet{Liu2019} use a Gaussian quadrature method with a Legendre polynomial basis to approximate the flux. The lower bound of the inner integral, $x-u$, usually does not correspond to a grid point. To perform the Gauss quadrature accurately, some grid elements must be sub-divided, drastically increasing the cost of the numerical procedure, especially for high-order polynomials.
To avoid prohibitive computational costs due to cell oversampling, we take advantage of the polynomial approximation by calculating the integrals analytically. This requires integrable kernels, which is the case for the four kernels presented in this study. This approach maintains a reasonable computational cost by not multiplying the number of sampling points. It also avoids introducing errors from the numerical integration and from approximating the kernels by piecewise constant functions.
\subsubsection{Mathematical procedure}
To integrate the numerical flux analytically, let us define the function $\tilde{g}$ that approximates the function $g$ over the entire mass interval
\begin{equation}
\begin{aligned}
&\forall x \in [x_{\mathrm{min}},x_{\mathrm{max}}],\\
&\tilde{g}\left(x,\tau \right) \equiv \\
& \sum_{l=1}^N \sum_{i=0}^k g_l^i\left(\tau\right) \phi_i(\xi_l(x)) [ \theta(x-x_{l-1/2}) - \theta(x-x_{l+1/2})].
\end{aligned}
\end{equation}
We assume that the kernel function is explicitly integrable and can be written as a sum of separable terms of the form $\mathcal{K}_1(u)\, \mathcal{K}_2(v)$, which is effectively the case for the three simple kernels and for the ballistic kernel (see Sect.~\ref{sec:kernels}). For instance, the additive kernel writes $\mathcal{K}_{\mathrm{kadd}}(u,v) = u+v = \mathcal{K}_1^1(u) \mathcal{K}_2^1(v) + \mathcal{K}_1^2(u) \mathcal{K}_2^2(v)$, where $\mathcal{K}_1^1(u)=u$, $\mathcal{K}_2^1(v)=1$, $\mathcal{K}_1^2(u)=1$ and $\mathcal{K}_2^2(v)=v$. The numerical flux is split into one contribution per separable term of the kernel. It writes
\begin{equation}
\begin{aligned}
& F_{\mathrm{coag}}^{\mathrm{nc}}[\tilde{g}](x,t) = \sum_{l'=1}^{N} \sum_{i'=0}^k \sum_{l=1}^{N} \sum_{i=0}^k g_{l'}^{i'}(t) g_l^i(t)\\
& \begin{aligned} \int\limits_{x_{\mathrm{min}}}^x \int\limits_{x-u+x_{\mathrm{min}}}^{x_{\mathrm{max}}} & \frac{\mathcal{K}(u,v)}{v} \\
& \phi_{i'}(\xi_{l'}(u)) [\theta(u-x_{l'-1/2})-\theta(u-x_{l'+1/2})] \\
& \phi_{i}(\xi_l(v)) [\theta(v-x_{l-1/2})-\theta(v-x_{l+1/2})] \mathrm{d}v \mathrm{d}u. \end{aligned}
\end{aligned}
\label{eq:numerical_flux}
\end{equation}
In the DG Eq.~\ref{eq:DG_eq}, the numerical flux is evaluated on the grid points $x_{j+1/2}$ and $x_{j-1/2}$ with $j \in [\![1,N]\!]$, and $k$ is the order of the Legendre polynomials used to approximate the solution. Therefore, $F_{\mathrm{coag}}^{\mathrm{nc}}$ depends on $j$ and $k$. The flux is sampled over a 2D array of size $(N,k+1)$ in order to use vectorised operations and reduce the computational time. The numerical flux is
\begin{equation}
\left\{
\begin{aligned}
& F_{\mathrm{coag}}^{\mathrm{nc}}[\tilde{g}](x,t) = \\
& \sum_{l'=1}^{N} \sum_{i'=0}^k \sum_{l=1}^{N} \sum_{i=0}^k g_{l'}^{i'}(t) g_l^i(t) T(x,x_{\mathrm{min}},x_{\mathrm{max}},i',i,l',l),\\
& T(x,x_{\mathrm{min}},x_{\mathrm{max}},i',i,l',l) = \\
& \begin{aligned} &\int\limits_{x_{\mathrm{min}}}^x \mathcal{K}_1(u)\phi_{i'}(\xi_{l'}(u)) [\theta(u-x_{l'-1/2})-\theta(u-x_{l'+1/2})] \\
& \int\limits_{x-u+x_{\mathrm{min}}}^{x_{\mathrm{max}}} \frac{\mathcal{K}_2(v)}{v} \phi_{i}(\xi_l(v)) \\
& \qquad \qquad [\theta(v-x_{l-1/2})-\theta(v-x_{l+1/2})] \mathrm{d}v \mathrm{d}u. \end{aligned}
\end{aligned}
\right.
\label{eq:numerical_flux_split}
\end{equation}
\textit{A priori}, the boundaries of the intervals of integration can be arbitrarily large. We therefore rescale these intervals to avoid numerical issues related to large numbers when calculating the terms $T$ in the variables $\xi_{l'}$ and $\xi_l$. To avoid critical typos, the term $T$ is derived with \textsc{Mathematica}, starting with the inner integral over $\xi_l$ and then the integral over $\xi_{l'}$; the order in which these integrals are performed matters. Further details about the derivation of the algorithm are given in supplementary material on GitHub (see Data Availability Sect.~\ref{data_github}) for reproducibility. The high-order solver is written in \texttt{Fortran}. Reducing the number of integrals is key to avoiding numerical issues with differences of large numbers. For this purpose, the expression of $T$ is split into several terms provided on GitHub (see Data Availability Sect.~\ref{data_github}). For robustness, all these integrals are calculated with \textsc{Mathematica}. The \textsc{Mathematica} function \texttt{FortranForm} is used to translate the integral expressions to \texttt{Fortran}. For large expressions, it is necessary to split them with the function \texttt{MonomialList}.
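
As an explicit example of the separable decomposition assumed above, the ballistic kernel of Table~\ref{table:kernels} expands into three separable terms, $\pi \Delta v\,(x^{1/3}+y^{1/3})^2 = \pi \Delta v\,(x^{2/3} + 2x^{1/3}y^{1/3} + y^{2/3})$. The short Python sketch below (illustrative only) checks this expansion numerically.
\begin{verbatim}
import numpy as np

dv = 1.0   # constant relative velocity (arbitrary units)
K1 = [lambda u: u**(2.0/3.0), lambda u: 2.0 * u**(1.0/3.0),
      lambda u: np.ones_like(u)]
K2 = [lambda v: np.ones_like(v), lambda v: v**(1.0/3.0),
      lambda v: v**(2.0/3.0)]

u, v = np.random.rand(2, 1000)
K_direct = np.pi * dv * (u**(1.0/3.0) + v**(1.0/3.0))**2
K_split = np.pi * dv * sum(k1(u) * k2(v) for k1, k2 in zip(K1, K2))
print(np.max(np.abs(K_direct - K_split)))   # ~1e-16
\end{verbatim}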
The scheme to evaluate $T(x,x_{\mathrm{min}},x_{\mathrm{max}},i',i,l',l)$ in \texttt{Fortran} is given on GitHub (see Data Availability Sect.~\ref{data_github}). A $4\mathrm{D}$ array with element $T(x,x_{\mathrm{min}},x_{\mathrm{max}},i',i,l',l)$ and a $4\mathrm{D}$ array with element $g_{l'}^{i'}(t) g_l^i(t)$ are computed. The element $(j,k)$ of the $2$D array corresponding to the flux is obtained by multiplying these two $4\mathrm{D}$ arrays and summing over all elements. $F_{\mathrm{coag}}^{\mathrm{nc}}[\tilde{g}]$ is then evaluated in $x_{j-1/2}$ and $x_{j+1/2}$ for all $j$. \subsection{Evaluation of the integral of the flux} \label{sec:intflux} Let us denote by $\mathcal{F}_{\mathrm{coag}}^{\mathrm{nc}}$ the term of Eq.~\ref{eq:DG_eq} corresponding to the integral of the numerical flux. $\mathcal{F}_{\mathrm{coag}}^{\mathrm{nc}}$ writes \begin{equation} \left\{ \begin{aligned} & \mathcal{F}_{\mathrm{coag}}^{\mathrm{nc}} [\tilde{g},j,k](t) = \\ & \sum_{l'=1}^{N} \sum_{i'=0}^k \sum_{l=1}^{N} \sum_{i=0}^k \, g_{l'}^{i'}(t) \, g_l^i(t) \mathcal{T}\left( x_{\mathrm{min}},x_{\mathrm{max}},j,k,i',i,l',l\right)\\ & \begin{aligned} &\mathcal{T}\left( x_{\mathrm{min}},x_{\mathrm{max}},j,k,i',i,l',l\right) \equiv \\ &\int\limits_{I_j} \int\limits_{x_{\mathrm{min}}}^x \int\limits_{x-u+x_{\mathrm{min}}}^{x_{\mathrm{max}}} \frac{\mathcal{K}(u,v)}{v} \, \partial_x \phi_k (\xi_j (x)) \\ & \qquad \phi_{i'}(\xi_{l'}(u)) [\theta(u-x_{l'-1/2})-\theta(u-x_{l'+1/2})] \\ & \qquad \phi_{i}(\xi_l(v)) [\theta(v-x_{l-1/2})-\theta(v-x_{l+1/2})] \mathrm{d}v\, \mathrm{d}u\, \mathrm{d}x. \end{aligned} \end{aligned} \right. \label{eq:numerical_intflux_split} \end{equation} $\mathcal{F}_{\mathrm{coag}}^{\mathrm{nc}}[\tilde{g}]$ is evaluated similarly to the flux. A triple integral is derived with \textsc{Mathematica} with the changes of variables \begin{equation} \xi_l \equiv \frac{2}{h_l}\left(v-x_l\right),\, \xi_{l'} \equiv \frac{2}{h_{l'}}\left( u-x_{l'}\right),\, \xi_j \equiv \frac{2}{h_j}\left(x-x_j\right). \end{equation} To derive tractable equations for the integrals involving Heaviside distributions, we first compute the integral over the variable $\xi_l$, then the integral over $\xi_{l'}$, and finally the integral over $x$. The details of the calculations and the scheme in \texttt{Fortran} to evaluate $\mathcal{T}\left( x_{\mathrm{min}},x_{\mathrm{max}},j,k,i',i,l',l\right)$ are given in supplementary material on GitHub (see Data Availability Sect.~\ref{data_github}) for completeness. $\mathcal{F}_{\mathrm{coag}}^{\mathrm{nc}}$ is computed as a product of 4D arrays similarly to $F_{\mathrm{coag}}^{\mathrm{nc}}$. The accuracy of $T$ and $\mathcal{T}$ depends only on the quality of the polynomial approximation of $g$ by $\tilde{g}$, since the integrals corresponding to $F_{\mathrm{coag}}^{\mathrm{nc}}[\tilde{g}]$ and $\mathcal{F}_{\mathrm{coag}}^{\mathrm{nc}}[\tilde{g}]$ are calculated analytically. \subsection{Slope limiter} \label{sec:slope_limiter} For most astrophysical kernels, the solution of the Smoluchowski coagulation equation has been mathematically shown to decay with an exponential tail at large masses \citep{Schumann1940,Menon2004}. This part is challenging to approximate with polynomials, and numerical estimates $g_j$ of $g$ in the bin $I_j$ can lead to negative values, which is not physically acceptable.\\ To preserve the positivity of the solution, the requirement $g_j(x,t) \geq 0$ for $x \in I_j$ needs to be enforced.
The idea is to use a scaling limiter which controls the maximum/minimum of the reconstructed polynomials \citep{Liu1996,Zhang2010,Liu2019}. This is achieved by a reconstruction step based on cell averaging. Let us consider the polynomial $g_j(x)$ of order $k$ that approximates $g(x)$ on $I_j$. Let $m$ and $M$ be two positive reals, denote $M_j\equiv \underset{x \in I_j}{\mathrm{max}}\, g_j(x)$ and $m_j \equiv \underset{x \in I_j}{\mathrm{min}}\, g_j(x)$, and define the scaled polynomials \begin{equation} \begin{aligned} & p_j \left(x\right) \equiv \gamma_j \left(g_j(x) - \overline{g}_j \right) + \overline{g}_j, \\ &\gamma_j = \mathrm{min} \left\{ \left|\frac{M-\overline{g}_j}{M_j-\overline{g}_j}\right|,\left| \frac{m-\overline{g}_j}{m_j-\overline{g}_j}\right|,1 \right\} , \end{aligned} \end{equation} where $\overline{g}_j$ refers to the cell average of $g$ in $I_j$ \begin{equation} \overline{g}_j \equiv \frac{1}{h_j} \int_{I_j} g_j(x,t) \mathrm{d}x. \end{equation} For all $j$, we assume $\overline{g}_j \in [m,M]$. $p_j(x)$ is a polynomial of order $k$ such that $\overline{p}_j=\overline{g}_j$. \citet{Liu1996} proved that $\forall x \in I_j,\; p_j(x) \in [m,M]$. This scaling limiter allows one to build a maximum-principle-satisfying DG scheme, in the sense that the numerical solution never goes out of the range $[m,M]$. The main difficulty is to ensure the property $\overline{g}_j \in [m,M]$ during the evolution without losing high accuracy. \\ In the DG scheme given by Eq.~\ref{eq:DG_ode}, the polynomials $g_j(x)$ are replaced by the scaled polynomials $p_j(x)$ such that \begin{equation} \begin{aligned} p_j \left(x\right) & = \gamma_j \left(g_j(x) - \overline{g}_j \right) + \overline{g}_j \\ & = \sum_{i=0}^k \gamma_j g_j^i(t) \phi_{1,i}(\xi_j(x)) + \sum_{i=0}^k g_j^i(t) \phi_{2,i} (\xi_j(x)) \end{aligned} \label{eq:scaling_polynomials} \end{equation} with \begin{equation} \begin{aligned} & \phi_{1,i}(\xi_j(x)) \equiv \left(\phi_i(\xi_j(x))- \frac{1}{2} \int_{I_j} \phi_i(\xi_j(x)) \mathrm{d}x \right), \\ & \phi_{2,i}(\xi_j(x)) \equiv \frac{1}{2} \int_{I_j} \phi_i(\xi_j(x)) \mathrm{d}x. \end{aligned} \end{equation} Replacing $g_j$ by $p_j$ in Eq.~\ref{eq:numerical_flux_split} gives four terms for the function $T$: $T_{11}[\phi_{1,i'}\phi_{1,i}]$, $T_{12}[\phi_{1,i'}\phi_{2,i}]$, $T_{21}[\phi_{2,i'}\phi_{1,i}]$ and $T_{22}[\phi_{2,i'}\phi_{2,i}]$. To each term a corresponding coefficient is associated, namely $\gamma_{l'}\gamma_{l} g_{l',i'}(t)g_{l,i}(t)$, $\gamma_{l'} g_{l',i'}(t)g_{l,i}(t)$, $\gamma_l g_{l',i'}(t)g_{l,i}(t)$ and $g_{l',i'}(t)g_{l,i}(t)$ (no $\gamma$ in the last term). $F_{\mathrm{coag}}^{\mathrm{nc}}$ is evaluated by summing these four terms. The same procedure is applied for $\mathcal{F}_{\mathrm{coag}}^{\mathrm{nc}}$. Therefore, the positivity of $\tilde{g}$ is ensured in each cell. \subsection{High-order time stepping} \label{sec:time_solver} \subsubsection{CFL condition} \label{sec:timestepping} Forward Euler discretisation of Eq.~\ref{eq:DG_eq} gives \begin{equation} \begin{aligned} & \overline{g}_j^{n+1} = \\ & \overline{g}_j^n - \frac{\Delta t}{\Delta x_j} \left[ F_{\mathrm{coag}}^{\mathrm{nc}}\left[ g_j \right] \left(x_{j+1/2},t\right)-F_{\mathrm{coag}}^{\mathrm{nc}}\left[ g_j \right] \left(x_{j-1/2},t\right)\right], \end{aligned} \label{eq:DG_discrete_mean} \end{equation} for the $n$-th time step. The Courant-Friedrichs-Lewy condition (CFL) of the scheme is chosen to guarantee the positivity of the cell average $\overline{g}_j^{n+1}>0$ \citep{FL2004}, i.e.
\begin{equation} \Delta t < \frac{\Delta x_j \overline{g}_j^n }{| F_{\mathrm{coag}}^{\mathrm{nc}}\left[ g_j \right] \left(x_{j+1/2},t\right)-F_{\mathrm{coag}}^{\mathrm{nc}}\left[ g_j \right] \left(x_{j-1/2},t\right)|}. \label{eq:CFL_euler} \end{equation} This CFL condition, associated with the slope limiter (see Sect.~\ref{sec:slope_limiter}), ensures the positivity of the global scheme. The CFL condition is initially dominated by small grains and softens as grains grow. \subsubsection{Strong Stability Preserving Runge-Kutta method} \label{sec:SSP} In Eq.~\ref{eq:smol_cons_DL}, the spatial derivative $\partial_x F_{\mathrm{coag}}[g]$ is approximated by the nonlinearly stable operator $-L[g]$ given in Eq.~\ref{eq:DG_ode}. For hyperbolic conservation laws, nonlinear stability is characterised by the total variation diminishing (TVD) semi-norm \begin{equation} TV\left(g \right) \equiv \sum_j |\overline{g}_{j+1} - \overline{g}_j|. \end{equation} The spatial discretisation $-L[g]$ has the property that the total variation of the numerical solution does not increase for a forward Euler integration \begin{equation} g^{n+1} = g^n + \Delta t L[g], \; \Delta t \leq \Delta t_{\mathrm{FE}}, \end{equation} where $\Delta t_{\mathrm{FE}}$ is the time-step given by the CFL condition determined in Eq.~\ref{eq:CFL_euler}, i.e. $TV\left( g^{n+1} \right) \leq TV\left( g^n \right)$. The TVD property can be generalised to high-order time discretisation with a Strong Stability Preserving (SSP) scheme \citep{Shu1988,Gottlieb2001,Zhang2010,Liu2019}. The method is SSP if the following condition holds \begin{equation} TV\left( g^{n+1} \right) \leq TV\left( g^n \right), \end{equation} and the timestep satisfies \begin{equation} \Delta t_{\mathrm{SSP}} \leq c \Delta t_{\mathrm{FE}}, \end{equation} where $c$ is a positive coefficient. Stability arguments are based on a convex decomposition of high-order methods in terms of first-order forward Euler steps. This ensures that the SSP scheme remains stable with respect to any convex functional (e.g. $TV$) while preserving high-order accuracy in time. In practice, errors are dominated by the mass discretisation. We use a third-order SSP Runge-Kutta (SSPRK) method \citep{Gottlieb2009,Zhang2010,Liu2019} which writes, with $c=1$, \begin{equation} \begin{aligned} \bm{g}_j^{(1)} &= \bm{g}_j^n + \Delta t_{\mathrm{SSP}}\bm{L}[g_j^n], \\ \bm{g}_j^{(2)} &= \frac{3}{4} \bm{g}_j^n +\frac{1}{4} \left( \bm{g}_j^{(1)} + \Delta t_{\mathrm{SSP}} \bm{L}[g_j^{(1)}]\right), \\ \bm{g}_j^{n+1} &= \frac{1}{3} \bm{g}_j^n + \frac{2}{3} \left( \bm{g}_j^{(2)} + \Delta t_{\mathrm{SSP}} \bm{L}[g_j^{(2)}] \right). \end{aligned} \label{eq:SSPRK3} \end{equation} This third-order SSPRK method ensures that $\overline{g}_j \in [m,M]$ for $(m,M) \in \mathbb{R}_+$ at all times. Hence, under a suitable CFL condition, SSP high-order time discretisation preserves the property $\overline{g}_j \in [m,M]$ of the DG scheme and the linear scaling presented in Sect.~\ref{sec:slope_limiter} satisfies a maximum principle. \subsection{Algorithm flowchart} Associating SSPRK with a DG scheme provides a high-order scheme that maintains a uniform high-order accuracy of the solution \citep{Zhang2010,Liu2019}. We use the third-order SSPRK scheme given by Eq.~\ref{eq:SSPRK3}.
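For concreteness, the scaling limiter and one SSPRK3 update can be sketched as follows in \textsc{Python} (a minimal illustration, not the \texttt{Fortran} production code; the right-hand-side operator \texttt{L}, implementing the spatial discretisation of Eq.~\ref{eq:DG_ode}, is an assumption supplied by the caller).
\begin{verbatim}
import numpy as np

def limiter_gamma(g_samples, g_bar, m, M):
    """Limiter coefficients gamma_j of Sect. (slope limiter).
    g_samples[j, :] samples g_j on cell I_j (to estimate the cell
    extrema M_j and m_j); g_bar[j] is the cell average."""
    Mj, mj = g_samples.max(axis=1), g_samples.min(axis=1)
    tiny = 1e-300  # guards against division by zero for constant g_j
    return np.minimum.reduce([
        np.abs((M - g_bar) / (Mj - g_bar + tiny)),
        np.abs((m - g_bar) / (mj - g_bar + tiny)),
        np.ones_like(g_bar)])

def ssprk3_step(g, L, dt):
    """One SSPRK3 step (Eq. SSPRK3): a convex combination of
    forward Euler steps, so the TVD/positivity properties of the
    Euler step are inherited."""
    g1 = g + dt * L(g)
    g2 = 0.75 * g + 0.25 * (g1 + dt * L(g1))
    return g / 3.0 + (2.0 / 3.0) * (g2 + dt * L(g2))
\end{verbatim}
The coefficients $\gamma_j$ then rescale the non-constant Legendre modes of $g_j$, as in Eq.~\ref{eq:scaling_polynomials}.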
Splitting the algorithm into the following steps ensures positivity: \begin{enumerate} \item Initialisation: From the initial data $g_0(x)$, \begin{enumerate} \item generate $\forall j \in [\![1,N]\!],\; g_j(x,0) \in \mathcal{V}^k$ by piecewise $L^2$ projection and obtain the components on the Legendre basis from Eq.~\ref{eq:g_components_initial}, \\ \item define $[m,M]$ for which $\overline{g}_j(0) \in [m,M]$, \item replace $g_j$ by $p_j$, \end{enumerate} \item Evolution: Use the scheme Eq.~\ref{eq:SSPRK3} to compute $\forall j \in [\![1,N]\!], \forall i \in [\![0,k]\!], \;(g_j^i)^{n+1}$,\\ \item Reconstruction: Use Eq.~\ref{eq:scaling_polynomials} to reconstruct $p_j(x,t)$. \end{enumerate} \section{Numerical tests} \label{sec:num} The high-order solver presented in Sect.~\ref{sec:dg} is benchmarked against the analytical solutions presented in Sect.~\ref{sec:analytic}, similarly to \citet{Liu2019}. Accuracy tests are performed with a small number of bins, consistently with hydrodynamical requirements. \subsection{Error measurements} \label{sec:errors} Numerical simulations are carried out to i) investigate the \textit{experimental order of convergence} (EOC; \citealt{Kumar2014,Liu2019}), and ii) determine the efficiency of the algorithm. Relative errors are measured using a continuous norm and a discrete norm. The $L^1$ norm is a natural choice for conservation equations. The continuous $L^1$ norm can be approximated by using a high-order Gaussian quadrature rule \begin{equation} \begin{aligned} \left\|f\right\|_1 & \equiv \int_{x_{\mathrm{min}}}^{x_{\mathrm{max}}} |f(x)| \mathrm{d}x \\ & = \sum_{j=1}^{N} \int_{I_j} |f(x)| \mathrm{d}x \approx \sum_{j=1}^N \frac{h_j}{2} \sum_{\alpha=1}^R \omega_{\alpha} |f(x_j^{\alpha})|, \end{aligned} \label{eq:norm_L1} \end{equation} where $N$ is the number of bins, $h_j$ is the size of bin $I_j$, $\omega_{\alpha}$ are the weights and $x_j^{\alpha}$ are the corresponding Gauss points in cell $I_j$. We use $R=16$ for sufficient accuracy. The numerical error $e_{\mathrm{c},N}$ is measured with the continuous $L^1$ norm as \begin{equation} e_{\mathrm{c},N}(\tau) \equiv \sum_{j=1}^N \frac{h_j}{2} \sum_{\alpha=1}^R \omega_{\alpha} |g_j(x_j^{\alpha},\tau) - g(x_j^{\alpha},\tau)| , \label{eq:errL1_continuous} \end{equation} where $g$ and $g_j$ are the analytic and the numerical solutions of the Smoluchowski equation. Eq.~\ref{eq:errL1_continuous} is computed with \textsc{Mathematica} using $16$ digits for sufficient precision. The discrete $L^1$ norm is defined by evaluating $g_j$ and $g$ at the geometric mean $\hat{x}_j\equiv \sqrt{x_{j-1/2}x_{j+1/2}}$ of the bin $I_j$. The numerical error measured with this discrete $L^1$ norm is \begin{equation} e_{\mathrm{d},N}(\tau) \equiv \sum_{j=1}^N h_j |g_j(\hat{x}_j,\tau) - g(\hat{x}_j,\tau)|. \label{eq:errL1_discrete} \end{equation} We follow \citet{Liu2019} to calculate the \textit{experimental order of convergence} (EOC) \begin{equation} \mathrm{EOC} \equiv \frac{\ln\left( \frac{e_N(\tau)}{e_{2N}(\tau)}\right)}{\ln(2)}, \label{eq:EOC} \end{equation} where $e_N$ is the error evaluated for $N$ cells and $e_{2N}$ for $2N$ cells. For the calculation of the EOC, the numerical errors are calculated at time $\tau =0.01$, so that the order of convergence of the DG scheme is not altered by time-stepping errors.
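As an indicative example, the discrete error and the EOC can be evaluated with a few lines of \textsc{Python} (schematic only; \texttt{g\_num} and \texttt{g\_exact} are assumed to be callables returning the numerical and analytic solutions at given masses and time).
\begin{verbatim}
import numpy as np

def discrete_L1_error(edges, g_num, g_exact, tau):
    """Discrete L1 error (Eq. errL1_discrete): edges holds the N+1
    interfaces x_{j+1/2}; the solutions are evaluated at the
    geometric mean of each bin."""
    h = np.diff(edges)
    x_hat = np.sqrt(edges[:-1] * edges[1:])
    return np.sum(h * np.abs(g_num(x_hat, tau) - g_exact(x_hat, tau)))

def eoc(e_N, e_2N):
    """Experimental order of convergence (Eq. EOC), from the errors
    obtained with N and 2N bins."""
    return np.log(e_N / e_2N) / np.log(2.0)
\end{verbatim}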
The moments of the numerical solution are defined according to \begin{equation} \begin{aligned} M_{p,N}\left(\tau\right) & = \int\limits_{x_{\mathrm{min}}}^{x_{\mathrm{max}}} x^{p-1} \tilde{g}(x,\tau) \mathrm{d}x \\ & = \sum_{j=1}^N \int_{I_j} x^{p-1} g_j(x,\tau) \mathrm{d}x \\ & = \sum_{j=1}^N \sum_{i=0}^k g_j^i(\tau) \int_{I_j} x^{p-1} \phi_i\left(\xi_j (x) \right) \mathrm{d}x. \end{aligned} \end{equation} The total mass of the system writes \begin{equation} M_{1,N}(\tau) = \sum_{j=1}^N \sum_{i=0}^k g_j^i(\tau) \frac{h_j}{2} \underbrace{\int\limits_{-1}^1 \phi_i\left(\xi_j \right) \mathrm{d}\xi_j}_{=\, 2\delta_{i0}} = \sum_{j=1}^N h_j g_j^0(\tau) . \end{equation} Errors on the moments are measured by \begin{equation} e_{M_{p,N}}(\tau) \equiv \frac{|M_{p,N}(\tau) - M_p(\tau)|}{M_p(\tau)}, \end{equation} where $M_p(\tau)$ is the moment of order $p$ at time $\tau$ for the exact solution. In usual convergence tests, errors are normalised with respect to the number of degrees of freedom of the algorithm. This is not the case here, since we compare absolute gains with a view to interfacing the solver with a hydrodynamical code. \subsection{Practical implementation of the tests} \label{sec:benchmark_coag} Numerical tests are performed by comparing numerical solutions for the constant, additive and multiplicative kernels to the solutions given in Eqs.~\ref{eq:sol_kconst}, \ref{eq:sol_kadd} and \ref{eq:sol_kmul}. Solutions are integrated over the intervals $x \in [10^{-3},10^6]$ for the constant and the additive kernels, and $x \in [10^{-3},10^3]$ for the multiplicative kernel. Tests are performed in \texttt{Fortran}; errors are calculated with \textsc{Mathematica} at machine precision. Quadruple precision is required for the additive kernel with $k = 2$, and for all kernels with $k = 3$. The results are shown for Legendre polynomials of order $k = 0, 1, 2, 3$. Above order $3$, numerical errors due to the arithmetic of large numbers are no longer negligible. A safety coefficient of $1/2$ is applied on the CFL condition, i.e. the coagulation time-step used in practice is $\mathrm{d}\tau_{\mathrm{coag}} = 1/2 \, \mathrm{d}\tau_{\mathrm{CFL}}$. Initial conditions are set to satisfy the analytic solution at initial time $\tau=0$. The analytical and numerical solutions are compared at final times $\tau$ that depend on the kernel, once particles of large masses have formed. Simulations are performed by dividing $\tau$ into a constant number of dumps of duration $\mathrm{d}\tau$ (300 for the constant and additive kernels, 10000 for the multiplicative kernel). Each dump is subdivided into several coagulation steps satisfying the CFL condition. The analytical derivation of the coagulation flux allows the algorithm to be efficient, i.e. to reach the desired accuracy at low computational time. To quantify efficiency, the computational time is compared to the one obtained with the scheme of \citet{Liu2019} with a number of Gauss points $Q=k+1$ on a simulation in double precision with $N=20$ bins, $k=1$ for the additive kernel and $k=2$ for the constant and multiplicative kernels. The Liu scheme is implemented by following the description of \citet{Liu2019} step-by-step, without additional optimisations. Simulations are performed sequentially on an Intel Core i$7$ at $2.8\,\mathrm{GHz}$. We use the \texttt{gfortran v9.2.0} compiler. Such a comparison is delicate to perform and interpret, since it is implementation-dependent.
Should the number of Gauss points in the Liu algorithm be increased to better approximate the integral terms calculated here analytically, this may result in an increase of computational time by several orders of magnitude, giving the false impression that the Liu algorithm performs poorly. Hence the choice $Q=k+1$. Qualitatively, our scheme is faster by a factor of a few at equal precision, and without requiring sub-binning, except for the additive kernel for which the Liu scheme exhibits serendipitous super-convergence \citep{Liu2019}. \subsection{Constant kernel} \label{sec:kconst_tests} \subsubsection{Positivity and mass conservation} \label{sec:kconst_positivity} Fig.~\ref{fig:kconst_linlog} shows the numerical solutions obtained for $N=20$ bins, varying the order of the polynomials $k$. The analytical and numerical solutions are compared at time $\tau=30000$. As expected, the solution remains positive, as a result of combining the slope limiter (see Sect.~\ref{sec:slope_limiter}) and the SSP Runge-Kutta time stepping (see Sect.~\ref{sec:SSP}). The piecewise linear solution ($k=1$) appears curved due to the logarithmic scale of the $x$-axis. Fig.~\ref{fig:mass_cons_kconst} shows the numerical error $e_{M_1,N}$ on the moment $M_{1,N}$ for $N=20$ bins from $\tau=0$ to $\tau=30000$. The total mass remains conserved to machine precision until $\tau=10^4$.\\ \begin{figure} \centering \includegraphics[width=\columnwidth]{./figures/paper_kconst_err_moments.pdf} \caption{Test case, constant kernel: evolution of the numerical error $e_{M_1,N}$ on the moment $M_{1,N}$ for $N=20$ bins. The divergence at long times is explained by accumulation of errors due to numerical diffusion for even orders $k = 0$ and $k = 2$. Total mass is conserved at machine precision until $\tau=10^4$.} \label{fig:mass_cons_kconst} \end{figure} \subsubsection{Accuracy of the numerical solution} \label{sec:kconst_accuracy} As expected, the accuracy of the numerical solution improves with the order of the scheme. Fig.~\ref{fig:kconst_loglog_xmeanlog} shows the numerical solution obtained at $\tau=30000$ (note the \textit{16 orders of magnitude in mass density} on the logarithmic $y$-axis). The major part of the total mass of the system is located around the maximum of the curve. Fig.~\ref{fig:kconst_loglog_xmeanlog} shows that around this maximum, schemes of order $k = 1,2,3$ provide errors of order $\sim 0.1 - 1\%$, whereas $k = 0$ generates errors of order $\sim 30\%$. Fig.~\ref{fig:kconst_loglog_xmeanlog} also shows that numerical diffusion is drastically reduced in the exponential tail as the order of the scheme increases, since a gain of a factor $\sim 100$ is obtained with order $3$ compared to order $0$. \begin{figure*} \centering \includegraphics[width=0.8\textwidth]{./figures/paper_kconst_linlog.pdf} \caption{Test case, constant kernel: the numerical solution $g_j(x,\tau)$ is plotted for $N=20$ bins and $k = 0,1,2,3$ from $\tau=0$ to $\tau=30000$, and compared to the analytic solution $g(x,\tau)$. Vertical grey lines delimit the bins. The accuracy improves for larger values of $k$. Order $3$ approximates the bump where the major part of the mass is concentrated with accuracy of order $\sim 0.1 \%$.} \label{fig:kconst_linlog} \end{figure*} \begin{figure} \centering \includegraphics[width=\columnwidth]{./figures/paper_kconst_tend_loglog_xmeanlog.pdf} \caption{Test case, constant kernel: numerical solution $g_j(x,\tau)$ evaluated with the geometric mean $\hat{x}_j$ over each bin $I_j$.
At the location of the maximum, orders $k = 1,2,3$ achieve an absolute error of $\sim 0.1 - 1\%$, to be compared with $30\%$ obtained with $k = 0$. Accuracy in the exponential tail is improved by a factor $100$ with $k = 3 $ compared to $k = 0$.} \label{fig:kconst_loglog_xmeanlog} \end{figure} \subsubsection{Convergence analysis} \label{sec:kconst_convergence} Numerical errors introduced in Sect.~\ref{sec:errors} are shown in Fig.~\ref{fig:kconst_errL1_convergence} at $\tau=0.01$. $e_{\mathrm{c},N}$ and $e_{\mathrm{d},N}$ are plotted as functions of the number of bins per decade $N_{\mathrm{bin}/\mathrm{dec}}$, to infer the EOC independently of the global mass interval. With the continuous $L^1$ norm, the EOC is of order $k+1$ on a geometric grid, similarly to \citet{Liu2019}. With the discrete $L^1$ norm, the EOC is of order $k+2$ for polynomials of even order, and $k+1$ for polynomials of odd order. We recover second order of convergence (EOC=$2$) for the finite volume scheme with $k = 0$, as predicted by \citet{FL2004}. Fig.~\ref{fig:kconst_errL1_convergence} shows that the expected accuracy of order $\sim 0.1\%$ on $e_{d,N}$ is achieved with more than $10$ bins/decade for orders $0$ and $1$, with $\sim 9$ bins/decade for order $2$ and with $\sim 5$ bins/decade for order $3$. Accuracy of order $\sim 1\%$ is achieved with $\sim 9$ bins/decade for orders $0$ and $1$, with $\sim 5$ bins/decade for order $2$, and with $\sim 2$ bins/decade for order $3$. \begin{figure} \centering \includegraphics[width=\columnwidth]{./figures/paper_kconst_errL1_convergence.pdf} \includegraphics[width=\columnwidth]{./figures/paper_kconst_errL1_xmeanlog_convergence.pdf} \caption{Test case, constant kernel: the continuous $L^1$ error $e_{c,N}$ and the discrete $L^1$ error $e_{d,N}$ are plotted as functions of the number of bins per decade. With $e_{c,N}$, the experimental order of convergence is EOC = $k+1$. With $e_{d,N}$, EOC = $k+1$ for polynomials of odd orders and EOC = $k+2$ for polynomials of even orders. The DG scheme achieves on $e_{d,N}$ an accuracy of $0.1\%$ with more than $10$ bins/decade for $k=0,1$, with $\sim 9$ bins/decade for $k = 2$ and with $\sim 5$ bins/decade for $k = 3$. An accuracy of $1\%$ is achieved with $\sim 9$ bins/decade for $k = 0,1$, with $\sim 5$ bins/decade for $k = 2$ and $\sim 2$ bins/decade for $k = 3$.} \label{fig:kconst_errL1_convergence} \end{figure} \subsubsection{Stability in time} \label{sec:kconst_stability_time} The time evolution of the numerical errors $e_{c,N}$ and $e_{d,N}$ is shown in Fig.~\ref{fig:kconst_err_L1cont_L1dis}. The results are shown for $N=20$ bins and $k = 0,1,2,3$ at time $\tau = 30000$, when particles of large masses have formed. We verify that $e_{c,N}$ and $e_{d,N}$ remain bounded. \begin{figure} \centering \includegraphics[width=\columnwidth]{./figures/paper_kconst_errL1cont.pdf} \includegraphics[width=\columnwidth]{./figures/paper_kconst_errL1dis.pdf} \caption{Test case, constant kernel: numerical errors $e_{c,N}$ with the $L^1$ continuous norm, $e_{d,N}$ with the discrete $L^1$ norm. All these errors are calculated for $N=20$. Errors remain bounded at large times.} \label{fig:kconst_err_L1cont_L1dis} \end{figure} \subsubsection{Computational efficiency} \label{sec:kconst_computational_performance} Fig.~\ref{fig:kconst_loglog_xmeanlog_DGvsDGGQ} shows that similar accuracies are obtained with this scheme and the scheme described in \cite{Liu2019}.
Computational time is compared on a simulation with $N=20$ bins, $k=2$ and a final time $\tau = 30000$ after $\sim 10^{3}$ timesteps. The computational time for the \citet{Liu2019} scheme is around 16 seconds (real time). The computational time for this scheme is around 4 seconds (real time). An improvement of a factor 4 in computational time is therefore achieved by evaluating the integrals analytically. \begin{figure} \centering \includegraphics[width=\columnwidth]{./figures/paper_kconst_tend_loglog_xmeanlog_DGvsDGGQ.pdf} \caption{Test case, constant kernel: comparison with the scheme of \citet{Liu2019}. Similar accuracies are reached, our scheme being $\sim 4\times$ faster due to analytical integration.} \label{fig:kconst_loglog_xmeanlog_DGvsDGGQ} \end{figure} \subsection{Additive kernel} \label{sec:kadd_tests} \subsubsection{Positivity and mass conservation} Fig.~\ref{fig:kadd_linlog} shows numerical solutions obtained for $N=20$ bins and $k = 0,1,2,3$ at time $\tau=3$. The numerical solutions remain positive as grains grow. Fig.~\ref{fig:mass_cons_kadd} shows the evolution of the numerical error $e_{M_1,N}$ on the first moment $M_{1,N}$. The total mass remains conserved to machine precision until $\tau=1$. \begin{figure*} \centering \includegraphics[width=0.8\textwidth]{./figures/paper_kadd_linlog.pdf} \caption{Test case, additive kernel: the numerical solution $g_j(x,\tau)$ is plotted for $N=20$ bins and $k = 0,1,2,3$ from $\tau=0$ to $\tau=3$, and compared to the analytic solution $g(x,\tau)$. Vertical grey lines delimit the bins. The accuracy improves for larger values of $k$. Order $3$ approximates the bump where the major part of the mass is concentrated with accuracy of order $\sim 0.1 \%$. } \label{fig:kadd_linlog} \end{figure*} \begin{figure} \centering \includegraphics[width=0.95\columnwidth]{./figures/paper_kadd_err_moments.pdf} \caption{Test case, additive kernel: evolution of the numerical error $e_{M_1,N}$ on the moment $M_{1,N}$ for $N=20$ bins. The divergence at long times is explained by accumulation of errors due to numerical diffusion for orders $k = 0$, $k = 2$ and $k = 3$. Total mass is conserved at machine precision until $\tau=1$. } \label{fig:mass_cons_kadd} \end{figure} \subsubsection{Accuracy of the numerical solution} \label{sec:kadd_accuracy} Fig.~\ref{fig:kadd_loglog_xmeanlog} shows numerical solutions obtained at $\tau=3$ on a logarithmic scale. Fig.~\ref{fig:kadd_loglog_xmeanlog} reveals a strong numerical diffusion for order $0$. Numerical errors are indeed integrated and diffused extremely efficiently towards large masses by the additive kernel. In this case, the mass density for large-mass particles is over-estimated by several orders of magnitude. High-order schemes reduce this numerical diffusion as expected. Fig.~\ref{fig:kadd_loglog_xmeanlog} shows that around the maximum, schemes of order $k = 1,2,3$ provide errors of order $\sim 0.1 - 1\%$, whereas $k = 0$ generates errors of order $\sim 10\%$. Numerical diffusion is reduced in the exponential tail as the order of the scheme increases, with a gain of up to a factor $\sim 10000$ for order $3$ compared to order $0$. \begin{figure} \centering \includegraphics[width=\columnwidth]{./figures/paper_kadd_tend_loglog_xmeanlog.pdf} \caption{ Test case, additive kernel: At the location of the maximum, orders $k = 1,2,3$ achieve an absolute error of $\sim 0.1 - 1\%$, to be compared with $10\%$ obtained with $k = 0$.
Accuracy in the exponential tail is improved by a factor $10000$ with $k = 3$ compared to $k = 0$. } \label{fig:kadd_loglog_xmeanlog} \end{figure} \subsubsection{Convergence analysis} \label{sec:kadd_convergence} Numerical errors are shown in Fig.~\ref{fig:kadd_errL1_convergence} at $\tau=0.01$. An accuracy of order $\sim 0.1\%$ on $e_{d,N}$ is achieved with more than $10$ bins/decade for orders $0$ and $1$, and with $\sim 9$ bins/decade for orders $2$ and $3$. Accuracy of order $\sim 1\%$ is achieved with $\sim 9$ bins/decade for orders $0$ and $1$, with $\sim 5$ bins/decade for order $2$ and $\sim 2$ bins/decade for order $3$. \begin{figure} \centering \includegraphics[width=\columnwidth]{./figures/paper_kadd_errL1cont_convergence.pdf} \includegraphics[width=\columnwidth]{./figures/paper_kadd_errL1dis_convergence.pdf} \caption{Test case, additive kernel: similar to Fig.~\ref{fig:kconst_errL1_convergence}. The DG scheme achieves on $e_{d,N}$ an accuracy of order $0.1\%$ with more than $10$ bins/decade for $k=0,1$, with $\sim 5$ bins/decade for $k = 2,3$. An accuracy of order $1\%$ is achieved with $\sim 9$ bins/decade for $k = 0,1$, with $\sim 5$ bins/decade for $k = 2$ and with $\sim 2$ bins/decade for $k = 3$.} \label{fig:kadd_errL1_convergence} \end{figure} \subsubsection{Stability in time} \label{sec:kadd_stability_time} The evolution of the numerical errors $e_{c,N}$ and $e_{d,N}$ is shown in Fig.~\ref{fig:kadd_err_L1cont_L1dis}. The results are shown for $N=20$ bins and $k = 0,1,2,3$ at $\tau = 3$, when particles with large masses have formed. At order $0$, $e_{c,N}$ (resp. $e_{d,N}$) increases significantly after $\tau \approx 5\cdot 10^{-1}$ (resp. $\tau \approx 10^{-1}$). By contrast, $e_{c,N}$ and $e_{d,N}$ remain bounded for longer times at orders $1$, $2$ and $3$. \begin{figure} \centering \includegraphics[width=\columnwidth]{./figures/paper_kadd_errL1cont.pdf} \includegraphics[width=\columnwidth]{./figures/paper_kadd_errL1dis.pdf} \caption{Test case, additive kernel: numerical errors $e_{c,N}$ with the $L^1$ continuous norm, $e_{d,N}$ with the discrete $L^1$ norm. All these errors are calculated for $N=20$. Errors remain bounded at large times for orders $k = 1,2,3$.} \label{fig:kadd_err_L1cont_L1dis} \end{figure} \subsubsection{Computational efficiency} \label{sec:kadd_computational_performance} Computational time is compared to \citet{Liu2019} on a simulation with $N=20$ bins, $k=1$ and a final time $\tau=3$. Fig.~\ref{fig:kadd_loglog_xmeanlog_DGvsDGGQ} shows similar accuracy for both schemes. The computational time for the \citet{Liu2019} scheme is around 3 seconds (real time) for a number of Gauss quadrature points $Q=2$. The computational time for this scheme is 1 second, providing an improvement by a factor 3. Fig.~\ref{fig:kadd_loglog_xmeanlog_DGvsDGGQ} also shows that for the additive kernel, the Liu scheme with $Q=2$ is counter-intuitively more accurate than with $Q=16$ and than the DG scheme. This result can be explained by a serendipitous compensation of errors when approximating the integrals with a Gauss quadrature of low order. \begin{figure} \centering \includegraphics[width=\columnwidth]{./figures/paper_kadd_tend_loglog_xmeanlog_DGvsDGGQ.pdf} \caption{Test case, additive kernel: comparison with the scheme of \citet{Liu2019}. Unexpected accuracy occurs for integral estimates with $Q =2$ Gauss points due to serendipitous error compensations. Our algorithm is $\sim 3\times$ faster due to analytical integration compared to the Liu scheme with $Q=2$.
} \label{fig:kadd_loglog_xmeanlog_DGvsDGGQ} \end{figure} \subsection{Multiplicative kernel} \label{sec:kmul_tests} \subsubsection{Positivity and mass conservation} Fig.~\ref{fig:kmul_linlog} shows the numerical solutions obtained for $N=20$ bins and $k = 0,1,2,3$ at $\tau=100$. The numerical solutions remain positive as grains grow. Fig.~\ref{fig:mass_cons_kmul} shows the evolution of $e_{M_1,N}$. Total mass remains conserved to machine precision for $\tau <1$. At $\tau=1$, gelation occurs, particles with infinite mass are formed \citep{McLeod1964,Ernst1984,FL2004} and total mass is no longer conserved. \begin{figure*} \centering \includegraphics[width=0.8\textwidth]{./figures/paper_kmul_linlog.pdf} \caption{Test case, multiplicative kernel: numerical solution $g_j(x,\tau)$ is plotted for $N=20$ bins for $k = 0,1,2,3$ from $\tau=0$ to $\tau=100$ and compared to the analytic solution $g(x,\tau)$. Vertical grey lines delimit the bins. Accuracy of order $\sim 0.1 \%$ is achieved at all orders. } \label{fig:kmul_linlog} \end{figure*} \begin{figure} \centering \includegraphics[width=\columnwidth]{./figures/paper_kmul_err_moments.pdf} \includegraphics[width=\columnwidth]{./figures/paper_kmul_M1.pdf} \caption{ Test case, multiplicative kernel: evolution of the numerical error $e_{M_1,N}$ on the moment $M_{1,N}$ for $N=20$ bins. Mass is no longer conserved once gelation occurs at $\tau = 1$. } \label{fig:mass_cons_kmul} \end{figure} \subsubsection{Accuracy of the numerical solution} \label{sec:kmul_accuracy} Fig.~\ref{fig:kmul_loglog_xmeanlog} shows the numerical solution for the multiplicative kernel at $\tau= 100$. Accuracy of order $\sim 0.1 \%$ is obtained at all orders, even $k = 0$. Physically, growth is effective enough for advection in the mass space to be more efficient than numerical diffusion. \begin{figure} \centering \includegraphics[width=\columnwidth]{./figures/paper_kmul_tend_loglog_xmeanlog.pdf} \caption{Test case, multiplicative kernel: Accuracy of order $\sim 0.1 \%$ is achieved at any order. Growth is so efficient that it overtakes numerical diffusion. } \label{fig:kmul_loglog_xmeanlog} \end{figure} \subsubsection{Convergence analysis} \label{sec:kmul_convergence} Numerical errors are shown in Fig.~\ref{fig:kmul_errL1_convergence} at $\tau=0.01$. An accuracy of order $\sim 0.1\%$ on $e_{d,N}$ is achieved with $\sim 15$ bins/decade for orders $0$ and $1$, with $\sim 7$ bins/decade for order $2$, and with $\sim 4$ bins/decade for order $3$. Accuracy of order $\sim 1\%$ is achieved with $\sim 7$ bins/decade for orders $0$ and $1$, with $\sim 2$ bins/decade for order $2$, and with $\sim 1$ bin/decade for order $3$. \begin{figure} \centering \includegraphics[width=\columnwidth]{./figures/paper_kmul_errL1cont_convergence.pdf} \includegraphics[width=\columnwidth]{./figures/paper_kmul_errL1dis_convergence.pdf} \caption{Test case, multiplicative kernel: the continuous $L^1$ error $e_{c,N}$ and the discrete $L^1$ error $e_{d,N}$ are plotted as functions of the number of bins per decade. With $e_{c,N}$, the experimental order of convergence is EOC = $k+1$. With $e_{d,N}$, EOC = $k+1$ for polynomials of odd orders and EOC = $k+2$ for polynomials of even orders. The DG scheme achieves on $e_{d,N}$ an accuracy of $0.1\%$ with $\sim 15$ bins/decade for $k = 0,1$, with $\sim 7$ bins/decade for $k = 2$ and with $\sim 4$ bins/decade for $k = 3$.
An accuracy of $1\%$ is achieved with $\sim 7$ bins/decade for $k = 0,1$, with $\sim 2$ bins/decade for $k = 2$ and with $\sim 1$ bin/decade for $k = 3$.} \label{fig:kmul_errL1_convergence} \end{figure} \subsubsection{Stability in time} \label{sec:kmul_stability_time} The evolution of the numerical errors $e_{c,N}$ and $e_{d,N}$ is shown in Fig.~\ref{fig:kmul_err_L1cont_L1dis}. The results are shown for $N=20$ bins and $k = 0,1,2,3$ at time $\tau = 100$, when particles with large masses have formed. We observe that $e_{c,N}$ and $e_{d,N}$ remain bounded, even after the occurrence of gelation at $\tau=1$. \begin{figure} \centering \includegraphics[width=\columnwidth]{./figures/paper_kmul_errL1cont.pdf} \includegraphics[width=\columnwidth]{./figures/paper_kmul_errL1dis.pdf} \caption{Test case, multiplicative kernel: numerical errors $e_{c,N}$ with the $L^1$ continuous norm, $e_{d,N}$ with the discrete $L^1$ norm. All these errors are calculated for $N=20$. Errors remain bounded at large times. } \label{fig:kmul_err_L1cont_L1dis} \end{figure} \subsubsection{Computational efficiency} \label{sec:kmul_computational_performance} Fig.~\ref{fig:kmul_loglog_xmeanlog_DGvsDGGQ} shows similar accuracies for the \citet{Liu2019} scheme and our implementation. With $k = 2$, the computational time for the \citet{Liu2019} scheme is around 8 minutes for a number of Gauss quadrature points $Q=3$. The computational time for this scheme is 1 minute and 40 seconds, providing an improvement by a factor 5. \begin{figure} \centering \includegraphics[width=\columnwidth]{./figures/paper_kmul_tend_loglog_xmeanlog_DGvsDGGQ.pdf} \caption{Test case, multiplicative kernel: comparison between the numerical solutions provided by this scheme and the scheme of \citet{Liu2019}. Similar accuracies are reached, our scheme being $\sim 5\times$ faster due to analytical integration.} \label{fig:kmul_loglog_xmeanlog_DGvsDGGQ} \end{figure} \section{Discussion} \label{sec:discussions} The Discontinuous Galerkin scheme presented in Sect.~\ref{sec:dg} involves high-order polynomials, implying issues with differences of large real numbers. In practice, order $k = 3$ appears to be the maximum order usable with the scheme in its current form. So far, the cost ratio between $10^{6}$ coagulation time-steps and one hydrodynamical time-step with \texttt{PHANTOM} using $10^{6}$ SPH particles is of order $\sim 10-100$. We are confident that we can reach a one-to-one ratio by i) taking advantage of more ingenious time-stepping (e.g. \citealt{Carrillo2004,Goudon2013}), ii) adopting a more relevant choice of basis (e.g. \citealt{Soong1974}), and iii) using GPU parallelisation, since the calls to the coagulation solver by the hydrodynamical code are independent. These strategies to further gain accuracy and computational efficiency will be tested in the near future. The most relevant kernel for astrophysics is the ballistic kernel (Sect.~\ref{sec:kernels}). Large-scale values of $\Delta v$ are provided by 2D piecewise constant functions from hydrodynamic codes. In discs, the $\Delta v$ function encompasses radial drift, vertical settling and turbulence at large scales.
The ballistic kernel splits into three terms \begin{equation} \begin{aligned} \mathcal{K}_{b}(u,v) &= \pi (u^{2/3}+2u^{1/3}v^{1/3} + v^{2/3}) \Delta v(u,v) \\ & = \mathcal{K}_{b,1}(u,v) + \mathcal{K}_{b,2}(u,v) + \mathcal{K}_{b,3}(u,v), \end{aligned} \end{equation} where $\mathcal{K}_{b,1}(u,v) \equiv \pi u^{2/3} \Delta v(u,v)$, $\mathcal{K}_{b,2}(u,v) \equiv 2 \pi u^{1/3}v^{1/3} \Delta v(u,v) $ and $\mathcal{K}_{b,3}(u,v) \equiv \pi v^{2/3} \Delta v(u,v)$. The numerical flux is also split into three terms that are evaluated analytically. Models of differential velocities are also used to model sub-grid small-scale values of $\Delta v$ (Brownian motion, dusty turbulence at small scales). Should these kernels not be integrable, we will approximate them with an appropriate interpolation. Moreover, the algorithm presented above solves the Smoluchowski equation with pure growth. Although fragmentation plays a key role in regulating the number of small grains and preventing the formation of large bodies, it has not yet been included in the solver. The algorithm presented in Sect.~\ref{sec:dg} has been designed to incorporate fragmentation naturally by adding the extra fragmentation flux \citep{Paul2018} \begin{equation} \begin{aligned} & F_{\mathrm{frag}} \left[ g \right] \left( x,\tau \right) \equiv \\ & \int\limits_0^{\infty} \int\limits_x^{\infty} \int\limits_0^x \frac{w}{yz}b(w,y,z)\mathcal{K}(y,z)g(y,\tau)g(z,\tau)\mathrm{d}w \mathrm{d}y \mathrm{d}z, \end{aligned} \label{eq:frag_cont_cons} \end{equation} similarly to e.g. \citet{Birnstiel2010}. The kernel $\mathcal{K}$ provides the fragmentation rate between two particles of masses $y$ and $z$. The function $b$ is the breakage rate related to the formation of a fragment of mass $w$ from particles of masses $y$ and $z$. Known functional forms of the fragmentation kernel should allow direct analytic integration, similarly to the derivations performed in Sect.~\ref{sec:fluxes}. For particular regimes, fragmentation kernels can alternatively be interpolated. Astrophysical mass distributions are expected to be dominated by large grains. Hence, the CFL condition for fragmentation should be similar to the one for growth \citep{Vericel2020}. If so, numerical integration will be performed explicitly. If not, implicit time-stepping can be implemented in a manageable way, since the number of dust bins has been kept minimal with analytic integrations (i.e. linear algebra with $\sim 15\times15$ matrices). Eq.~\ref{eq:smolu_cont} restricts dust interactions to binary collisions between aggregates of spherical shape. Multiple collisions are not expected to play a critical role in astrophysics, since dust volume densities are extremely low. On the other hand, dust aggregates are expected to be porous or to have fractal structures. In particular, small bodies that have not been recompacted by collisions are expected to be fluffy. Eq.~\ref{eq:smolu_cont} also reduces probability distributions of velocities to their mean values. This approximation may quench grain growth occurring through rare collisional events, e.g. between large bodies having low relative velocities \citep{Windmark2012,Garaud2013}. Finally, growth is in essence stochastic, but fluctuations of the solution cannot be computed with Eq.~\ref{eq:smolu_cont}. This is not critical, since such fluctuations are hardly constrained by observations. Although the solver presented in Sect.~\ref{sec:dg} cannot be used directly to treat the additional physical processes described above, the method could be adapted to do so.
Lastly, extending Eq.~\ref{eq:smolu_cont} to multiple compositions, with or without change of state, has been done in other communities. This comes at the cost of multiplying the number of variables by the number of materials considered. The algorithm presented in Sect.~\ref{sec:dg} is a first step towards reducing the number of dust bins to allow for solving for multiple compositions in 3D. This would have strong implications for planet formation, e.g. by handling snow lines consistently and providing constraints for meteoritic data. \section{Conclusion} \label{sec:conclusion} We have presented a high-order algorithm that accurately solves the coagulation equation with a limited number of dust bins ($\sim 15$). Specifically: \begin{enumerate} \item Mass is conserved to machine precision for astrophysical kernels, \item Positivity is guaranteed by combining an appropriate slope limiter with a Total Variation Diminishing time-stepping, \item Creating aggregates of masses larger than the mass of the reservoir is mathematically excluded by a control of the growth flux, \item Errors of order $0.1 - 1 \%$ are achieved by high-order discretisation in time and space that can be modulated for convergence purposes. They should not dominate the error budget compared to hydrodynamics, \item Combining a low number of bins and analytic integrations allows manageable costs in memory and time, \item Additional physics should be implementable in a versatile way. \end{enumerate} The next step consists of performing 3D hydrodynamical simulations of star and planet formation with accurate dust growth. The design of the algorithm allows additional processes such as fragmentation to be implemented in a natural way. This solver also encourages the reduction of CO$_{2}$ emissions related to computational astrophysics. \section*{Acknowledgements} GL acknowledges funding from the ERC CoG project PODCAST No 864965. This project has received funding from the European Union's Horizon 2020 research and innovation programme under the Marie Sk\l odowska-Curie grant agreement No 823823. This project was partly supported by the IDEXLyon project (contract n$^{\circ}$ANR-16-IDEX-0005) under the auspices of the University of Lyon. We acknowledge financial support from the national programs (PNP, PNPS, PCMI) of CNRS/INSU, CEA, and CNES, France. We used \textsc{Mathematica} \citep{Mathematica}. We thank L. Tine, E. D\'el\'eage, D. Price, D. Mentiplay, T. Guillet, S. Charnoz, R. Teyssier and the anonymous referee for useful comments and discussions. \newpage \section*{Data availability} \label{data_github} The data and supplementary material underlying this article are available in the repository \texttt{growth} on GitHub at \url{https://github.com/mlombart/growth.git}. Figures can be reproduced following the file \texttt{README.md}. The repository contains data and Python scripts used to generate figures. \label{lastpage} \bibliographystyle{mnras}
\section{Introduction} The stability and dynamics of spatially localized spike and spot patterns in activator-inhibitor reaction-diffusion systems have been the subject of many studies. These patterns deviate significantly from the uniform state and arise in parameter regimes well beyond the Turing stability threshold, and cannot be well-described by amplitude equations obtained from weakly nonlinear theory \cite{matkowsky1970nonlinear, borckmans1995turing, de1999spatial}. Motivated largely by the 1998 review article \cite{ni1998diffusion}, numerous studies have focused on the so-called semistrong interaction regime \cite{doelman2003semistrong} in which the activator-inhibitor diffusivity ratio $\eps^2 \ll 1$ is asymptotically small. Early works developed matched asymptotic and geometric singular perturbation techniques to characterize the existence, dynamics, and stability of localized patterns of the Gray-Scott and Gierer-Meinhardt systems in this regime \cite{doelman1998stability, doelman2001large, doelman2000slowly, doelman2003semistrong, iron2001stability, ward2003hopf}. Slow drift dynamics of quasi-equilibrium spot patterns have been determined both asymptotically and numerically in one, two, and three dimensions (see, e.g., \cite{doelman2000slowly, ward2002dynamics, sherratt2013pattern, bastiaansen2019dynamics, carter2018traveling, kolokolnikov2009spot, tzou2017stability, tzou2019spot, bastiaansen2020pulse, wong2021spot}). It is shown in such works how the drift dynamics are impacted by combinations of advection, interaction with domain boundaries and other spots, domain heterogeneities, and/or curvature. Instabilities of spot patterns can occur both in the spot profiles (amplitude instabilities) as well as in the spot locations (translational instabilities). Monotonic (in time) amplitude instabilities come in the form of competition or overcrowding instabilities leading to spot annihilation \cite{doelman2001slowly, sun2005slow, chen2011stability}, or self-replication instabilities \cite{doelman2001slowly, doelman1998stability, muratov2001spike, kolokolnikov2005existence, kolokolnikov2009spot} leading to additional spots. For the former, it has recently been shown for the 1-D Gierer-Meinhardt and Schnakenberg systems that such competition instabilities are subcritical \cite{kolokolnikov2021competition}. Monotonic translational instabilities of equilibrium spot configurations lead to a rearrangement of spot locations, and may trigger amplitude instabilities in the process. Recently, slow monotonic translation instabilities were analyzed in \cite{kolokolnikov2022ring} for spots in ring equilibrium configurations of the two-dimensional Schnakenberg model. Analysis of these instabilities is often more intricate than that for amplitude instabilities due to the asymptotically small eigenvalues associated with translation instabilities \cite{WGM1, iron2001stability, ward2002existence}. Results for oscillatory amplitude instabilities of one-dimensional spot patterns have been established for various reaction-diffusion systems; see, e.g., \cite{doelman2002stability, doelman2001large, ward2003hopf, kolokolnikov2005existence, chen2009oscillatory, doelman2015explicit, tzou2, tzou2019spot}. This instability is typically associated with a pair of complex eigenvalues that remain $\mathcal{O}(1)$ as $\eps \to 0$.
In \cite{veerman2015breathing}, a weakly nonlinear theory is developed to characterize oscillations beyond the linear stability regime and determine whether the Hopf bifurcation is subcritical or supercritical. In two dimensions, \cite{wei2001pattern, WGM1, WGS2, WGS1, WS1} established existence and stability results for oscillatory amplitude instabilities, while \cite{tzou2018anomalous} determined an anomalous scaling for the Hopf stability threshold. Recently, Hopf bifurcations for amplitude instabilities were determined for the three-dimensional Gierer-Meinhardt model \cite{gomez2021asymptotic}. Hopf bifurcations leading to temporal oscillations in spot locations, particularly in dimensions greater than one, have not been analyzed in as much depth as oscillatory amplitude instabilities. In one dimension, \cite{chen2009oscillatory, wei2006slow} obtained Hopf stability thresholds for oscillatory drift instabilities of spots in the Gray-Scott model, and established the $\mathcal{O}(\eps^2)$ scaling of the associated eigenvalue. Oscillatory instabilities in spot widths, referred to as breathing pulses, were analyzed for three-component systems in \cite{doelman2009pulse, gurevich2013moving, gurevich2006breathing}. In \cite{xie2021complex}, oscillatory motion of multiple spots was investigated for an extended three-component Schnakenberg model, where multiple modes were excited slightly beyond the Hopf bifurcation. In two dimensions, \cite{xie2017moving} determines oscillatory translational instabilities of a one-spot equilibrium of the Schnakenberg reaction-diffusion system \cite{schnakenberg1979simple} on the unit disk. The symmetry of the disk, however, meant that the results shed very little light on effects of domain geometry. Furthermore, as the analysis was specific to a one-spot pattern on the unit disk, it did not account for effects that arise from spot interactions. In this paper, we perform the analysis on a general bounded 2-D planar domain $\Omega$ and analyze how its geometry impacts the properties of the instability. In particular, we demonstrate that asymmetries of the domain shape lead to preferred directions of oscillation at the onset of instability, which then saturate into orbits that are far from circular. This is in contrast to the behavior observed in \cite{xie2017moving} for the unit disk. There, it was shown that the Hopf bifurcation of an equilibrium spot at the center of a unit disk was not associated with a preferred direction of motion, and subsequent spot trajectories about the center were nearly circular. We also further generalize the analysis of \cite{xie2017moving} to the case of multi-spot equilibria, and deduce how spot orientation affects the mode of oscillations.
For simplicity, we will consider the now well-studied (dimensionless) Schnakenberg reaction-diffusion model for activator $v(\mathbf{x},t)$ and inhibitor $u(\mathbf{x},t)$ concentrations \begin{subequations} \label{schnak} \begin{equation} \label{schnakv} v_t = \eps^2\Delta v - v + uv^2 \,, \qquad t > 0 \,, \quad \mathbf{x} = \left(\begin{array}{c} x_1 \\ x_2 \end{array}\right) \in \Omega \,, \end{equation} \begin{equation} \label{schnaku} \tau u_t = \Delta u + A - \eps^{-2} uv^2 \,, \qquad t > 0 \,, \quad \mathbf{x} \in \Omega \,, \end{equation} \begin{equation} \label{schnakbc} \partial_n v = 0 \enspace \mbox{and} \enspace \partial_n u = 0 \,, \quad \mathbf{x} \in \partial\Omega \,, \end{equation} \end{subequations} where $\partial_n$ denotes the normal derivative on $\partial \Omega$. We note that any non-unity diffusion coefficient $D$ on the inhibitor component can be scaled out independently of domain size. That is, with a diffusivity $D$ for the inhibitor, we recover \eqref{schnaku} by setting $v \to \sqrt{D}v$, $u \to u/\sqrt{D}$, $\tau \to D\tau$, $A \to \sqrt{D} A$. In \eqref{schnak}, $\eps^2 \ll 1$ is the small diffusivity of the activator, $A$ is a (constant) feed rate, and the Hopf bifurcation parameter $\tau$ is a measure of the rate at which the inhibitor responds to perturbations in the concentrations of the activator and inhibitor. As $\tau$ is increased, an increasingly sluggish response of the inhibitor leads to oscillatory instabilities via Hopf bifurcations \cite{kerner2013autosolitons}. The outline of the paper is as follows. In \S \ref{Nspoteqbm}, we asymptotically construct an $N$-spot equilibrium solution of the Schnakenberg PDE \eqref{schnak}. In contrast to the $\mathcal{O}(1)$ construction presented in \cite{kolokolnikov2009spot}, we require correction terms up to $\mathcal{O}(\eps^2)$ in order to facilitate the subsequent stability analysis. In \S \ref{stab} we analyze the stability of the $N$-spot equilibrium to oscillatory translation instabilities. We derive a $2N\times 2N$ complex matrix-eigenvalue problem of the form $P\mathbf{a} = \lambda\mathbf{a}$ that characterizes the Hopf bifurcation of a translational perturbation of an $N$-spot equilibrium. The Hopf bifurcation threshold for $\tau$ is obtained by requiring that the $2N\times 2N$ matrix $P$ have a pure imaginary eigenvalue $\lambda$, which yields the frequency of oscillations. The eigenvector $\mathbf{a}$ will yield initial directions along which spot oscillations occur.
Effects of domain geometry are encoded in the entries of $P$, which involve the quadratic terms of the local behavior of the Helmholtz Green's function $G_\mu(\mathbf{x};\mathbf{x}_0)$ satisfying (see \cite{paquin2020asymptotics} for a derivation of \eqref{Gkloc}) \begin{subequations} \label{Gk} \begin{equation} \label{Gkeq} \Delta G_\mu - \mu G_\mu = -\delta(\mathbf{x}-\mathbf{x}_j) \,, \quad \mathbf{x},\mathbf{x}_j \in \Omega \,, \qquad \partial_n G_\mu = 0 \,, \quad \mathbf{x} \in \partial \Omega \,, \end{equation} \begin{multline} \label{Gkloc} G_\mu \sim -\frac{1}{2\pi}\log|\mathbf{x}-\mathbf{x}_j| + R_\mu(\mathbf{x}_j;\mathbf{x}_j) + \nabla_\mathbf{x} R_\mu (\mathbf{x};\mathbf{x}_j)\left.\right|_{\mathbf{x}=\mathbf{x}_j} \cdot (\mathbf{x}-\mathbf{x}_j) \\ - \frac{\mu}{8\pi}|\mathbf{x}-\mathbf{x}_j|^2 \log|\mathbf{x}-\mathbf{x}_j| + \frac{1}{2}(\mathbf{x}-\mathbf{x}_j)^T H_{\mu j}(\mathbf{x}-\mathbf{x}_j) \,, \enspace \mbox{as} \enspace \mathbf{x}\to\mathbf{x}_j \,, \end{multline} and \begin{multline} \label{Gklocij} G_\mu \sim G_\mu(\mathbf{x}_i;\mathbf{x}_j) + \nabla_{\mathbf{x}}G_\mu(\mathbf{x};\mathbf{x}_j)\left.\right|_{\mathbf{x}=\mathbf{x}_i} \cdot (\mathbf{x}-\mathbf{x}_i) + \frac{1}{2}(\mathbf{x}-\mathbf{x}_i)^T H_{\mu ij}(\mathbf{x}-\mathbf{x}_i) \,, \enspace \mbox{as} \enspace \mathbf{x}\to\mathbf{x}_i \,, \end{multline} \end{subequations} where $H_{\mu j}$ and $H_{\mu ij}$ are the respective $2\times 2$ Hessian matrices. Their entries depend on the geometry of the domain (and also on $\mu$) and thus cannot be obtained from a local analysis. With the exception of special geometries such as disks and rectangles, they must be computed numerically. In \S \ref{onespot}, we reduce our $2N\times 2N$ eigenvalue problem to the case $N=1$. We compare the resulting eigenvalue problem to the one derived for the special case of one spot inside the unit disk in \cite{xie2017moving}. We use this comparison to highlight extra terms in the analysis that arise due to asymmetries of the domain geometry. In \S \ref{perturbeddisk}, we analyze how perturbations of the unit disk break the symmetry and give rise to two distinct thresholds corresponding to two distinct modes of oscillation. We show that the lower of these thresholds, and therefore the preferred mode of oscillation, is determined by the mode-2 coefficient of the Fourier series of the perturbation. In \S \ref{numerics}, we perform detailed numerical investigations to confirm our theoretical results for both the $1$- and $N$-spot cases by solving the full Schnakenberg PDE \eqref{schnak}. In these computations, we consider domains that are highly symmetric (e.g., the half disk, unit disk, rectangles) and domains that have little symmetry (e.g., rectangular domains containing circular holes). \section{Equilibrium and stability analysis} In this section, we investigate the impact of domain geometry on the preferred initial direction of oscillation of the oscillatory translational instability of $N$-spot equilibrium solutions to \eqref{schnak}. We begin with a brief construction of the equilibrium solutions before performing the stability analysis. \subsection{$N$-spot equilibrium} \label{Nspoteqbm} For completeness, we begin with a very brief outline of the construction of an $N$-spot equilibrium; more details can be found in, e.g., \cite{kolokolnikov2009spot}.
In the inner region near the $j$-th spot centered at $\mathbf{x} = \mathbf{x}_j$, we have the inner variables \begin{subequations}\label{innervar} \begin{equation} \label{innervar1} \mathbf{x} = \mathbf{x}_j + \eps\mathbf{y}_j \,, \quad \quad \mathbf{y}_j = \left(\begin{array}{c} y_{1j} \\ y_{2j} \end{array}\right) = \rho_j \mathbf{e}_j \,, \quad \mathbf{e}_j \equiv \left(\begin{array}{c} \cos \theta_j \\ \sin\theta_j \end{array}\right) \,, \end{equation} \begin{equation} \label{innervar2} u_e\sim U_{0j}(\rho_j) + \eps^2 U_{2j}(\mathbf{y}_j)\,, \quad v_e \sim V_{0j}(\rho_j) + \eps^2 V_{2j}(\mathbf{y}_j) \,, \end{equation} \end{subequations} where the $\mathcal{O}(\eps)$ terms in \eqref{innervar2} are absent under the assumption that each spot is stationary in time. Note also that the leading order spot profile is radially symmetric; the asymmetry due to the geometry of the problem is captured at $\mathcal{O}(\eps^2)$. Substituting \eqref{innervar} into \eqref{schnak} and collecting leading order terms, we obtain the following core problem for the radially symmetric functions $U_{0j}$ and $V_{0j}$, \begin{subequations} \label{core} \begin{equation} \label{coreeq} \Delta_{\rho_j} V_{0j} - V_{0j} + U_{0j}V_{0j}^2 = 0 \,, \quad \Delta_{\rho_j} U_{0j} - U_{0j}V_{0j}^2 = 0 \,, \qquad \rho_j > 0 \end{equation} \begin{equation} \label{corebc} V_{0j}^\prime(0) = U_{0j}^\prime(0) = 0 \,, \quad V_{0j} \to 0 \enspace \mbox{and} \enspace U_{0j} \sim S_j\log\rho_j + \chi(S_j) \,, \enspace \mbox{as} \enspace \rho_j \to \infty \,. \end{equation} \end{subequations} In \eqref{coreeq}, $\Delta_{\rho_j} \equiv \partial_{\rho_j\rho_j} + \rho_j^{-1}\partial_{\rho_j}$ denotes the radially symmetric Laplacian in the polar coordinates $(\rho_j, \theta_j)$. Numerical solutions of \eqref{core} are depicted in Fig. 2 of \cite{kolokolnikov2009spot} for various $S_j$, including the nonlinear function $\chi(S_j)$. We assume that $S_j \lessapprox 4.3$ so that each spot is stable to the local (mode-2) self-replication instability. The divergence theorem applied to \eqref{core} yields \begin{equation} \label{Sdiv} 2\pi S_j = \int_{\mathbb{R}^2}\! U_{0j} V_{0j}^2 \, d\mathbf{y} \,. \end{equation} In the outer region, the $\eps^{-2}uv^2$ term in \eqref{schnaku} behaves in the distributional sense as a sum of delta functions located at each $\mathbf{x}_j$ with weight $2\pi S_j$, as given in \eqref{Sdiv}. As such, the leading order equilibrium solution $u_e(\mathbf{x})$ satisfies \begin{equation} \label{ueeq} \Delta u_e + A = 2\pi \sum_{j=1}^N S_j \delta(\mathbf{x}-\mathbf{x}_j) \,, \qquad \mathbf{x} \in \Omega \,; \quad \partial_n u_e = 0 \enspace \mbox{on} \enspace \mathbf{x} \in \partial\Omega \,, \end{equation} with solution given by \begin{equation} \label{uout} u_e(\mathbf{x}) \sim -2\pi \sum_{j = 1}^N S_j G(\mathbf{x};\mathbf{x}_j) + \bar{u} \,, \end{equation} where, by the zero-integral condition on $G$ below, the constant $\bar{u}$ is the average of $u_e$ over $\Omega$. Integrating \eqref{ueeq} over $\Omega$ and applying the divergence theorem yields the solvability condition \begin{equation} \label{Sj} 2\pi\sum_{j = 1}^N S_j = A|\Omega| \,, \end{equation} where $|\Omega|$ denotes the area of the domain $\Omega$.
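In practice, the core problem \eqref{core} must be solved numerically to tabulate $\chi(S_j)$. The following Python sketch indicates one way to do so with a standard boundary-value solver; it is an illustration only, and the truncation radius \texttt{R}, the value of \texttt{S}, and the initial guess are hypothetical choices that may require tuning for convergence.
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_bvp

S, R = 2.0, 15.0   # illustrative source strength and outer radius

def rhs(rho, y):
    # y = (V, V', U, U'); the 1/rho terms come from the radial Laplacian
    V, dV, U, dU = y
    return np.vstack((dV, -dV/rho + V - U*V**2,
                      dU, -dU/rho + U*V**2))

def bc(ya, yb):
    # V'(0) = U'(0) = 0; V -> 0 and U' ~ S/rho in the far field
    return np.array([ya[1], ya[3], yb[0], yb[3] - S/R])

rho = np.linspace(1e-6, R, 400)
y_guess = np.zeros((4, rho.size))
y_guess[0] = 3.0*np.exp(-rho**2/2.0)   # localized activator bump
y_guess[2] = S*np.log(rho + 1.0)       # slowly growing inhibitor
sol = solve_bvp(rhs, bc, rho, y_guess, max_nodes=50000)
chi = sol.y[2, -1] - S*np.log(R)       # chi(S) ~ U(R) - S log(R)
\end{verbatim}
A useful consistency check on such a computation is that $\int_0^R U_{0j}V_{0j}^2\,\rho_j\,\mathrm{d}\rho_j \approx S_j$, by \eqref{Sdiv}.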
In \eqref{uout}, $G(\mathbf{x};\boldsymbol\xi)$ is the modified Neumann Green's function satisfying \begin{subequations} \label{GNall} \begin{equation} \label{GNeq} \Delta G(\mathbf{x};\mathbf{x}_j) = -\delta(\mathbf{x}-\mathbf{x}_j) + \frac{1}{|\Omega|} \,, \quad \mathbf{x},\mathbf{x}_j \in \Omega \,, \qquad \partial_n G = 0 \,, \quad \mathbf{x} \in \partial\Omega \,; \qquad \int_\Omega \! G (\mathbf{x};\mathbf{x}_j) \,d\mathbf{x} = 0 \,, \end{equation} \begin{equation} \label{GNloc} G \sim -\frac{1}{2\pi}\log|\mathbf{x}-\mathbf{x}_j| + R_{jj} + \nabla R_{jj} \cdot (\mathbf{x}-\mathbf{x}_j) + \frac{1}{2}(\mathbf{x}-\mathbf{x}_j)^T H_{jj}(\mathbf{x}-\mathbf{x}_j) \enspace \mbox{as} \enspace \mathbf{x} \to \mathbf{x}_j \,, \end{equation} where $R_{jj}$ is the regular part of $G(\mathbf{x};\mathbf{x}_j)$ evaluated on the diagonal, $\nabla R_{jj} = \nabla_\mathbf{x} R(\mathbf{x};\mathbf{x}_j) \left.\right|_{\mathbf{x}=\mathbf{x}_j}$ is its gradient, and $H_{jj}$ its Hessian matrix. For $\mathbf{x}_i \neq \mathbf{x}_j$, we have that \begin{equation} \label{Gij} G(\mathbf{x};\mathbf{x}_i) \sim G_{ji} + \nabla G_{ji} \cdot (\mathbf{x}-\mathbf{x}_j) + \frac{1}{2} (\mathbf{x}-\mathbf{x}_j)^T H_{ji}(\mathbf{x}-\mathbf{x}_j) \,, \enspace \mbox{as} \enspace \mathbf{x} \to \mathbf{x}_j \,, \end{equation} where $G_{ji} = G(\mathbf{x}_j;\mathbf{x}_i)$, while $\nabla G_{ji}$ and $H_{ji}$ are, respectively, the gradient and Hessian terms of $G(\mathbf{x};\mathbf{x}_i)$ at $\mathbf{x}_j$ in the Taylor expansion. Principal Result 3.4 in \cite{kolokolnikov2009spot} gives the equation of motion $d\mathbf{x}_j/dt$ of the $j$-th spot in terms of the gradient terms $\nabla R_{jj}$ and $\nabla G_{ji}$ with $i \neq j$. The condition $d\mathbf{x}_j/dt = \mathbf{0}$ for all $j = 1,\ldots,N$ yields the $2N$ equations for the equilibrium strengths $S_j$ and locations $\mathbf{x}_j$ \end{subequations} \begin{equation} \label{eqbm1} S_j\nabla R_{jj} + \sum_{i\neq j}^N S_i \nabla G_{ji} = \mathbf{0} \,, \quad j = 1,\ldots, N \,. \end{equation} The equations \eqref{Sj} and \eqref{eqbm1} constitute $2N+1$ equations for $S_j$, $\mathbf{x}_j$, and $\bar{u}$. To determine the remaining $N$ equations that fix the parameters of an $N$-spot equilibrium, we match the far-field behavior of the $j$-th inner region \eqref{corebc} to the leading order terms of the local behavior of the outer solution \eqref{uout} near $\mathbf{x}_j$, yielding the $N$ nonlinear equations \begin{subequations} \label{matchjj} \begin{equation} \label{matchj} \left(2\pi \mathcal{G}\nu + \mathcal{I}_N\right) \mathbf{s} + \nu\boldsymbol\chi = \nu\bar{u} \mathbf{e} \,; \end{equation} with \begin{equation} \label{matchj2} \mathcal{G} \equiv \left(\begin{array}{cccc} R_{11} & G_{12} & \cdots & G_{1N} \\ G_{21} & \ddots & \ddots & \vdots \\ \vdots & \ddots & \ddots & G_{N-1, N} \\ G_{N1} & \hdots & G_{N,N-1} & R_{NN} \end{array} \right) \,, \quad \mathbf{s} = \left(\begin{array}{c} S_1 \\ \vdots \\ S_N \end{array}\right) \,, \quad \mathbf{e} = \left(\begin{array}{c} 1 \\ \vdots \\ 1 \end{array}\right) \,, \quad \boldsymbol\chi = \left(\begin{array}{c} \chi(S_1)\\ \vdots \\ \chi(S_N) \end{array}\right) \,, \end{equation} \end{subequations} where $\mathcal{I}_m$ is the $m\times m$ identity matrix. Equation \eqref{matchjj} along with \eqref{Sj} and \eqref{eqbm1} determine the spot strengths $S_j$ and the spot locations $\mathbf{x}_j$, along with $\bar{u}$, that define a leading order equilibrium $N$-spot configuration.
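To illustrate how \eqref{matchjj} and \eqref{Sj} might be solved in practice for the strengths and $\bar{u}$ once candidate locations (and hence the Green's interaction matrix $\mathcal{G}$) are in hand, consider the following schematic Python sketch. The matrix \texttt{Gmat} and the linear stand-in for $\chi(S)$ are placeholders for domain-specific numerics, not values from this paper, and the locations are assumed to already satisfy \eqref{eqbm1}.
\begin{verbatim}
# Schematic sketch: solve the N+1 equations (matching + solvability)
# for the strengths S_1..S_N and ubar.  Gmat and chi are placeholders.
import numpy as np
from scipy.optimize import fsolve

nu, A, area, N = 0.1, 1.0, np.pi, 2
Gmat = np.array([[0.1, -0.05], [-0.05, 0.1]])   # placeholder [R_jj, G_ji]
chi = lambda s: 1.5*s - 0.5                      # placeholder for chi(S)

def F(z):
    s, ubar = z[:N], z[N]
    r1 = (2*np.pi*nu*Gmat + np.eye(N)) @ s + nu*chi(s) - nu*ubar
    r2 = 2*np.pi*np.sum(s) - A*area
    return np.append(r1, r2)

z = fsolve(F, np.append(np.full(N, A*area/(2*np.pi*N)), 1.0))
print("strengths:", z[:N], "ubar:", z[N])
\end{verbatim}
In a full implementation, $\chi(S)$ would be tabulated from the core problem and the equilibrium conditions \eqref{eqbm1} appended to the residual so that the locations are solved for simultaneously.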
The $\mathcal{O}(\eps^2)$ correction terms $U_{2j}(\mathbf{y}_j)$ and $V_{2j}(\mathbf{y}_j)$ satisfy the system for $\mathbf{y}_j \in \mathbb{R}^2$, \begin{subequations} \begin{equation} \label{coreeq2} \Delta_{\mathbf{y}_j} V_{2j} - V_{2j} + 2U_{0j}V_{0j}V_{2j} + V_{0j}^2U_{2j} = 0 \,, \quad \Delta_{\mathbf{y}_j} U_{2j} - 2U_{0j}V_{0j}V_{2j} - V_{0j}^2U_{2j} = 0 \,, \qquad \mathbf{y}_j \in \mathbb{R}^2 \,. \end{equation} The far-field conditions for $U_{2j}$ and $V_{2j}$ come from the quadratic terms in the local behavior of $u_e$ near $\mathbf{x}_j$. That is, from $u_e$ in \eqref{uout} and the local behaviors of $G(\mathbf{x};\mathbf{x}_j)$ in \eqref{GNall}, as $\mathbf{x} \to \mathbf{x}_j$, \begin{multline*} u_e \sim -2\pi \left[ -\frac{S_j}{2\pi} \log|\mathbf{x}-\mathbf{x}_j| + S_j R_{jj} + S_j\nabla R_{jj} \cdot (\mathbf{x}-\mathbf{x}_j) + \frac{1}{2}S_j(\mathbf{x}-\mathbf{x}_j)^TH_{jj}(\mathbf{x}-\mathbf{x}_j) \right. \\ \left. + \sum_{i\neq j} G_{ji} S_i + \sum_{i\neq j} S_i \nabla G_{ji} \cdot (\mathbf{x}-\mathbf{x}_j) + \frac{1}{2} \sum_{i\neq j} S_i (\mathbf{x}-\mathbf{x}_j)^T H_{ji} (\mathbf{x}-\mathbf{x}_j) \right] + \bar{u} \,. \end{multline*} All terms not involving $(\mathbf{x}-\mathbf{x}_j)$ are matched at $\mathcal{O}(1)$, yielding \eqref{matchjj}. Terms linear in $(\mathbf{x}-\mathbf{x}_j)$ sum to zero due to the equilibrium condition \eqref{eqbm1}, which is why the inner expansion \eqref{innervar2} has no $\mathcal{O}(\eps)$ term, while the quadratic terms are matched by the far field of $U_{2j}$. With the inner polar coordinates $\rho_j$ and $\mathbf{e}_j$ as defined in \eqref{innervar1}, we have the far-field conditions \begin{equation} \label{innervar2far} U_{2j} \sim -\pi\rho_j^2 \mathbf{e}_j^T \mathcal{H}_j \mathbf{e}_j \,, \qquad V_{2j} \to 0 \,, \enspace \mbox{as} \enspace \rho_j \to \infty \,, \end{equation} where the $2\times 2$ matrix $\mathcal{H}_j$ is defined as \begin{equation} \label{mHj} \mathcal{H}_j \equiv \sum_{i=1}^N S_i H_{ji} \,. \end{equation} \end{subequations} This completes the construction of the $N$-spot equilibrium solution to \eqref{schnak}, with the inner solution given to $\mathcal{O}(\eps^2)$ by \eqref{innervar} along with \eqref{core}, \eqref{coreeq2}, and \eqref{innervar2far}. In the outer region, the leading order equilibrium for $u$ is given in \eqref{uout}, while $v_e \sim 0$. \subsection{Stability analysis} \label{stab} With $|\psi|, |\phi| \ll 1$, we perturb the equilibrium solution \begin{equation} \label{pertmain} u \sim u_e + e^{\lambda t} \psi + c.c. \,, \qquad v \sim v_e + e^{\lambda t} \phi + c.c. \,, \end{equation} which yields the eigenvalue problem \begin{equation} \label{eigprob} \lambda\phi = \eps^2 \Delta\phi - \phi + 2u_ev_e\phi + v_e^2\psi \,, \quad \tau \lambda \psi = \Delta \psi -\frac{1}{\eps^2}\left[ 2u_ev_e\phi + v_e^2\psi \right] \,. \end{equation} In \eqref{pertmain}, $c.c.$ denotes the complex conjugate of the term immediately preceding it. In the inner region, we let $\psi \sim \Psi_j(\mathbf{y}_j)$ and $\phi \sim \Phi_j(\mathbf{y}_j)$, where \begin{equation} \label{pert} \Psi_j \sim \Psi_{0j} + \eps\Psi_{1j} + \eps^2\Psi_{2j} \,, \qquad \Phi_j \sim \Phi_{0j} + \eps\Phi_{1j} + \eps^2\Phi_{2j} \,. \end{equation} Since drift velocities of spots in quasi-equilibrium patterns are $\mathcal{O}(\eps^2)$ (see e.g., \cite{kolokolnikov2009spot}), we assume that $\lambda \sim \mathcal{O}(\eps^2)$ and that $\tau\lambda \sim \mathcal{O}(1)$ when $\tau$ is at or near the Hopf bifurcation threshold.
Substituting \eqref{pert} into \eqref{eigprob} and collecting leading order terms, we have for $\Psi_{0j}$ and $\Phi_{0j}$ \begin{subequations} \label{pertsys} \begin{equation} \label{O1pert} \Delta_{\mathbf{y}_j} \left(\begin{array}{c} \Phi_{0j} \\ \Psi_{0j} \end{array}\right) + \mathcal{M}_j\left(\begin{array}{c} \Phi_{0j} \\ \Psi_{0j} \end{array}\right) = \mathbf{0} \,, \end{equation} where \begin{equation} \label{M} \mathcal{M}_j \equiv \left(\begin{array}{cc} -1 + 2U_{0j}V_{0j} & V_{0j}^2 \\ -2U_{0j}V_{0j} & - V_{0j}^2 \end{array}\right) \,. \end{equation} We observe that any linear combination of $\partial_{y_{1j}} V_{0j}$ and $\partial_{y_{2j}} V_{0j}$ satisfies the first equation in \eqref{O1pert}, while any linear combination of $\partial_{y_{1j}} U_{0j}$ and $\partial_{y_{2j}} U_{0j}$ satisfies the second. That is, the perturbation is the translation mode given by \begin{equation} \label{O1pertsol} \Phi_{0j} = \mathbf{a}_j^T\nabla_{\mathbf{y}_j} V_{0j} = \partial_{\rho_j} V_{0j} (\mathbf{a}_j^T\mathbf{e}_j) \,, \qquad \Psi_{0j} = \mathbf{a}_j^T\nabla_{\mathbf{y}_j} U_{0j} = \partial_{\rho_j} U_{0j} (\mathbf{a}_j^T\mathbf{e}_j) \,; \qquad \mathbf{a}_j \equiv \left(\begin{array}{c} a_{1j} \\ a_{2j} \end{array}\right) \,. \end{equation} In \eqref{O1pertsol}, the possibly complex vector $\mathbf{a}_j$ determines the nature of the oscillations of the $j$-th spot at the onset of instability. If $\mathbf{a}_j$ is real, the $j$-th spot oscillates along a line passing through the equilibrium location $\mathbf{x}_j$ in the direction $\mathbf{a}_j$; if $\mathbf{a}_j$ is complex, the motion of the $j$-th spot is rotational about the point $\mathbf{x}_j$. From \eqref{corebc} and \eqref{O1pertsol}, the far-field behaviors of $\Phi_{0j}$ and $\Psi_{0j}$ are \begin{equation} \label{O1pertfar} \Phi_{0j} \to 0 \,, \qquad \Psi_{0j} \sim (\mathbf{a}_j^T\mathbf{e}_j)\frac{S_j}{\rho_j} \,, \enspace \mbox{as} \enspace \rho_j \to \infty \,. \end{equation} \end{subequations} We contrast \eqref{pertsys} with the stability analysis of amplitude instabilities; i.e., the self-replication (peanut) instability (e.g., \cite{kolokolnikov2009spot}), the competition instability (e.g., \cite{chen2011stability}), and amplitude oscillation instabilities (e.g., \cite{tzou2018anomalous}). These amplitude instabilities all occur on an $\mathcal{O}(1)$ time-scale so that $\lambda \sim \mathcal{O}(1)$, giving rise to a $\lambda \Phi_{0j}$ term on the right-hand side of the equation for $\Phi_{0j}$ in \eqref{O1pert}. Furthermore, the competition and amplitude oscillation instabilities are radially symmetric to leading order with $\Psi_{0j} \sim \log\rho_j$ in the far field, which leads to a strong coupling through the inhibitor component in the outer region. On the other hand, the self-replication eigenfunction is a mode-2 instability with $\Psi_{0j} \sim \rho_j^{-2}$ in the far field. The fast decay leads to a very weak coupling between the different spots; it is therefore a strictly local instability, to leading order. The translation instability is mode-1 with $\Psi_{0j} \sim \rho_j^{-1}$. This decay leads to a weak coupling through an $\mathcal{O}(\eps)$ outer solution for the inhibitor eigenfunction. In contrast to the local self-replication instability, we show below that the nature of this weak coupling between the spots must be determined in order to characterize the translation instability.
The $1/\rho_j$ behavior of $\Psi_{0j}$ in the far field gives rise to a singular behavior in the outer region near $\mathbf{x}_j$ that must be matched by the leading order term of $\psi$, which is $\mathcal{O}(\eps)$. The regular part of the behavior of $\psi$ at $\mathbf{x}_j$ must then be matched by a constant term in the far field of $\Psi_{1j}$. At $\mathcal{O}(\eps)$ in the inner region, we have for $\Phi_{1j}$ and $\Psi_{1j}$, \begin{subequations} \label{Oepspert} \begin{equation} \label{Oepsperteq} \Delta_{\mathbf{y}_j} \left(\begin{array}{c} \Phi_{1j} \\ \Psi_{1j} \end{array}\right) + \mathcal{M}_j\left(\begin{array}{c} \Phi_{1j} \\ \Psi_{1j} \end{array}\right) = \mathbf{0} \,. \end{equation} Since $\Psi_{1j}$ must have a constant term in the far field, we must have \begin{equation} \label{Oepspertfar} \Phi_{1j} \to 0 \,, \qquad \Psi_{1j} \sim \kappa_j(\nu) \left[\log\rho_j + B_j(S_j) \right] \,, \enspace \mbox{as} \enspace \rho_j \to \infty \,. \end{equation} \end{subequations} In \eqref{Oepspertfar}, $\kappa_j(\nu)$ is a scaling constant to be found by matching to the regular part of $\psi_1$ at the spot locations. The solution to \eqref{Oepspert} is (see e.g., \cite{tzou2017stability}) \begin{equation} \label{Phi1Psi1sol} \Phi_{1j} = \kappa_j\partial_{S_j} V_{0j} \,, \qquad \Psi_{1j} = \kappa_j\partial_{S_j} U_{0j} \,, \end{equation} where $\partial_{S_j} U_{0j} \sim \log\rho_j + \chi^\prime(S_j)$ as $\rho_j \to \infty$, so that $B_j(S_j) = \chi^\prime(S_j)$, with $\chi(S_j)$ the nonlinear function that must be computed from the core problem \eqref{core}. The far-field behavior of $\Psi_{1j}$ is thus \begin{equation} \label{Psi1jfar} \Psi_{1j} \sim \kappa_j \left[\log|\mathbf{y}_j| + \chi^\prime_j \right] \,, \enspace \mbox{as} \enspace |\mathbf{y}_j| \to \infty \,, \end{equation} where $\chi^\prime_j \equiv \chi^\prime(S_j)$. Observe that both the dipole term from $\Psi_{0j}$ and the logarithmic term of $\Psi_{1j}$ must be contained in the singularity structure for $\psi$ in the outer region. Thus, in the outer region, we let $\psi \sim \eps \psi_1$, where $\psi_1$ satisfies \begin{subequations} \label{psi1} \begin{equation} \label{psi1eq} \Delta \psi_1 = \tau\lambda\psi_1 \,, \qquad \mathbf{x} \in \Omega \,, \qquad \partial_n \psi_1 = 0 \,, \quad \mathbf{x} \in \partial\Omega \end{equation} with the singular behavior \begin{equation} \label{psi1sing} \psi_1 \sim S_j \frac{\mathbf{a}_j^T(\mathbf{x}-\mathbf{x}_j)}{|\mathbf{x}-\mathbf{x}_j|^2} + \kappa_j\log|\mathbf{x}-\mathbf{x}_j| + \kappa_j\left[\frac{1}{\nu} + \chi^\prime_j \right] \,, \enspace \mbox{as} \enspace \mathbf{x} \to \mathbf{x}_j \,, \quad j = 1, \ldots, N \,. \end{equation} \end{subequations} In terms of the Helmholtz Green's function of \eqref{Gk} and its gradient with respect to the second variable, we determine $\psi_1$ to be \begin{equation} \label{psi1sol} \psi_1 = 2\pi\sum_{i = 1}^N \left[ S_i \mathbf{a}_i^T \nabla_{\mathbf{x}_i} G_{\lambda\tau}(\mathbf{x};\mathbf{x}_i) - \kappa_i G_{\lambda\tau}(\mathbf{x};\mathbf{x}_i) \right] \,. \end{equation} Notice that $\nabla_{\mathbf{x}_i} G_{\lambda\tau}(\mathbf{x};\mathbf{x}_i)$ produces the dipole term of \eqref{psi1sing} near $\mathbf{x}_i$ while still satisfying the no-flux boundary condition of \eqref{psi1eq}, since the gradient is taken with respect to the second variable.
Its local behaviors near $\mathbf{x}_i$ and $\mathbf{x}_j \neq \mathbf{x}_i$ are \begin{subequations} \label{gradGk} \begin{multline} \label{gradGKii} \nabla_{\mathbf{x}_i} G_{\lambda\tau}(\mathbf{x};\mathbf{x}_i) \sim \frac{1}{2\pi}\frac{\mathbf{x}-\mathbf{x}_i}{|\mathbf{x}-\mathbf{x}_i|^2} + \mathbf{F}_{{\lambda\tau} i} + \mathcal{F}_{{\lambda\tau} i}(\mathbf{x}-\mathbf{x}_i) \\ + \frac{{\lambda\tau}}{4\pi}(\mathbf{x}-\mathbf{x}_i)\log|\mathbf{x}-\mathbf{x}_i| + \left[\frac{{\lambda\tau} }{8\pi} \mathcal{I}_2 - H_{\lambda\tau i} \right](\mathbf{x}-\mathbf{x}_i) \,, \enspace \mbox{as} \enspace \mathbf{x} \to \mathbf{x}_i \,; \end{multline} \begin{equation} \label{gradGKji} \nabla_{\mathbf{x}_i} G_{\lambda\tau}(\mathbf{x};\mathbf{x}_i) \sim \mathbf{E}_{{\lambda\tau} ji} + \mathcal{E}_{{\lambda\tau} ji}(\mathbf{x}-\mathbf{x}_j) \,, \enspace \mbox{as} \enspace \mathbf{x} \to \mathbf{x}_j \neq \mathbf{x}_i \,; \end{equation} where we have defined the quantities \begin{equation} \label{gradGKdef} \begin{gathered} \mathbf{F}_{\mu i} = \left(\begin{array}{c} F_{\mu i}^{(1)} \\ F_{\mu i}^{(2)} \end{array}\right) \equiv \nabla_\mathbf{x} R_\mu(\mathbf{x};\mathbf{x}_i)\left.\right|_{\mathbf{x}=\mathbf{x}_i} \,, \qquad \mathcal{F}_{\mu i} \equiv \left(\nabla_{\mathbf{x}_i} F_{\mu i}^{(1)} \enspace\enspace \nabla_{\mathbf{x}_i} F_{\mu i}^{(2)} \right) \,, \\ \mathbf{E}_{\mu ji} \equiv \nabla_{\mathbf{x}_i} G_\mu(\mathbf{x}_j;\mathbf{x}_i) \,, \qquad \mathcal{E}_{\mu ji} \equiv \left(\nabla_{\mathbf{x}_i}\partial_{x_1} G_\mu(\mathbf{x};\mathbf{x}_i) \left.\right|_{\mathbf{x} = \mathbf{x}_j} \enspace\enspace \nabla_{\mathbf{x}_i}\partial_{x_2} G_\mu(\mathbf{x};\mathbf{x}_i) \left.\right|_{\mathbf{x} = \mathbf{x}_j} \right) \,. \end{gathered} \end{equation} \end{subequations} The scaling constant $\kappa_j$ of \eqref{Oepspertfar} is then found by matching the far field of $\Psi_{1j}$ in \eqref{Psi1jfar} to the constant terms of the local behavior of $\psi_1$ near $\mathbf{x}_j$. Using \eqref{psi1sol}, \eqref{gradGk}, and \eqref{Gk}, we match the constant terms in \eqref{psi1sing} with those contained in \eqref{psi1sol} near $\mathbf{x}_j$ to obtain the matching condition \begin{equation} \label{kappaj} \kappa_j \left[\frac{1}{\nu} + \chi_j^\prime + 2\pi R_{\lambda\tau}(\mathbf{x}_j;\mathbf{x}_j)\right] + 2\pi \sum_{i\neq j}\kappa_i G_{\lambda\tau_{ji}} = 2\pi S_j\mathbf{a}_j^T \mathbf{F}_{\lambda\tau_j} + 2\pi \sum_{i \neq j} S_{i} \mathbf{a}_i^T \mathbf{E}_{\lambda\tau_{ji}} \,, \quad j = 1, \ldots, N \,.
\end{equation} In matrix-vector form, \eqref{kappaj} becomes \begin{subequations} \label{kappajvec} \begin{equation} \label{kappajeq} \boldsymbol\kappa = \mathcal{K}_{\lambda\tau} \mathbf{a} \,, \end{equation} where we have defined the $N\times 1$ vector $\boldsymbol\kappa$, the $2N\times 1$ vector $\mathbf{a}$, the $N\times N$ matrices $\mathcal{G}_{\lambda\tau}$ and $\boldsymbol\chi^\prime$, the $N\times 2N$ matrices $(\nabla_2 \mathcal{G}_{\lambda\tau})$ and $\mathcal{K}_{\lambda\tau}$, and the diagonal $2N \times 2N$ matrix $\mathcal{S}$ \begin{equation} \label{kappadef} \begin{gathered} \boldsymbol\kappa \equiv \left(\begin{array}{c} \kappa_1 \\ \vdots \\ \kappa_N\end{array}\right) \,, \quad \mathbf{a} \equiv \left(\begin{array}{c} \mathbf{a}_1 \\ \vdots \\ \mathbf{a}_N \end{array}\right) \,, \quad \mathcal{G}_{\lambda\tau} \equiv \left(\begin{array}{cccc} R_{\lambda\tau}(\mathbf{x}_1;\mathbf{x}_1) & G_{\lambda\tau}(\mathbf{x}_1;\mathbf{x}_2) & \cdots & G_{\lambda\tau}(\mathbf{x}_1;\mathbf{x}_N) \\ G_{\lambda\tau}(\mathbf{x}_2;\mathbf{x}_1) & R_{\lambda\tau}(\mathbf{x}_2;\mathbf{x}_2) & \cdots & \vdots \\ \vdots & \cdots & \ddots & \vdots \\ G_{\lambda\tau}(\mathbf{x}_N;\mathbf{x}_1) & \cdots & \cdots & R_{\lambda\tau}(\mathbf{x}_N;\mathbf{x}_N) \end{array}\right) \\ \boldsymbol\chi^\prime \equiv \left(\begin{array}{ccc} \chi_1^\prime & & \\ & \ddots & \\ & & \chi_N^\prime \end{array}\right) \,, \quad (\nabla_2 \mathcal{G}_{\lambda\tau}) \equiv \left(\begin{array}{cccc} \mathbf{F}_{\lambda\tau_1}^T & \mathbf{E}_{\lambda\tau_{12}}^T & \cdots & \mathbf{E}_{\lambda\tau_{1N}}^T \\ \mathbf{E}_{\lambda\tau_{21}}^T & \mathbf{F}_{\lambda\tau_2}^T & \cdots & \vdots \\ \vdots & \cdots & \ddots & \vdots \\ \mathbf{E}_{\lambda\tau_{N1}}^T & \cdots & \cdots & \mathbf{F}_{\lambda\tau_N}^T \end{array}\right) \,, \quad \mathcal{S} \equiv \left(\begin{array}{ccccc} S_1 & & & & \\ & S_1 & & & \\ & & \ddots & & \\ & & & S_N & \\ & & & & S_N \end{array}\right) \,, \\ \mathcal{K}_{\lambda\tau} \equiv 2\pi \left[ \frac{1}{\nu} \mathcal{I}_N + \boldsymbol\chi^\prime + 2\pi \mathcal{G}_{\lambda\tau}\right]^{-1} (\nabla_2 \mathcal{G}_{\lambda\tau}) \mathcal{S} \,. \end{gathered} \end{equation} \end{subequations} In \eqref{kappadef}, the vectors $\mathbf{E}_{\mu ij}$ and $\mathbf{F}_{\mu j}$ are gradients of the Green's function and its regular part, respectively, defined in \eqref{gradGKdef}. Also, we note that as $\nu \to 0$, the matrix to be inverted in the computation of $\mathcal{K}_{\lambda\tau}$ in \eqref{kappadef} is invertible, since the $\nu^{-1}$ terms render it diagonally dominant. The linear system for $\kappa_j$ in \eqref{kappajvec}, along with \eqref{psi1sol} and \eqref{Phi1Psi1sol}, determines the leading order outer solution for $\psi$, up to the oscillation directions $\mathbf{a}_j$, and the $\mathcal{O}(\eps)$ inner solutions for $\phi$ and $\psi$. The Hopf stability threshold for $\tau$, the frequency of oscillations at onset $\lambda$, and the direction of oscillations $\mathbf{a}_j$ will be determined via a solvability condition at $\mathcal{O}(\eps^2)$ for $\Phi_{2j}$ and $\Psi_{2j}$. To proceed, we must first obtain the far-field condition for $\Psi_{2j}$ from the linear and $|\mathbf{x}-\mathbf{x}_j|\log|\mathbf{x}-\mathbf{x}_j|$ terms in the local behavior of $\psi_1$ near $\mathbf{x}_j$.
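As a concrete illustration of the linear algebra in \eqref{kappajvec}, the following Python sketch assembles $\mathcal{K}_{\lambda\tau}$ from precomputed Green's function data. The randomly generated arrays below are placeholders that merely fix the shapes; in practice they would be produced by a domain-specific Helmholtz solver.
\begin{verbatim}
# Minimal linear-algebra sketch: assemble the matrix K defined above.
# G (N x N), grad2G (N x 2N), chip (N,), and the strengths are dummies.
import numpy as np

N, nu = 3, 0.08
rng = np.random.default_rng(0)
G      = rng.standard_normal((N, N)) + 1j*rng.standard_normal((N, N))
grad2G = rng.standard_normal((N, 2*N)) + 0j   # rows: F_j^T, E_ji^T blocks
chip   = rng.standard_normal(N)               # chi'(S_j)
Svec   = np.repeat(rng.uniform(1.0, 3.0, N), 2)  # diag of S: S_j repeated

Amat = np.eye(N)/nu + np.diag(chip) + 2*np.pi*G
K = 2*np.pi*np.linalg.solve(Amat, grad2G) @ np.diag(Svec)
# kappa = K @ a for any 2N x 1 vector a of oscillation directions
print(K.shape)   # (N, 2N)
\end{verbatim}
Note that each strength $S_j$ appears twice on the diagonal of $\mathcal{S}$, reflecting the two components of each $\mathbf{a}_j$.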
Recalling that $\psi_1$ is an $\mathcal{O}(\eps)$ term, while $\mathbf{x}-\mathbf{x}_j = \eps\rho_j\mathbf{e}_j$, we use \eqref{psi1sol} with \eqref{Gk} and \eqref{gradGk} to compute that as $\rho_j \to \infty$, \begin{multline} \label{Psi2jfar1} \Psi_{2j} \sim \frac{1}{2} S_j\lambda\tau (\mathbf{a}_j^T\mathbf{e}_j)\rho_j\log\rho_j + 2\pi\rho_j \left\lbrace S_j\mathbf{a}_j^T\left[\mathcal{F}_{\lambda\tau_j} - H_{\lambda\tau j} + \frac{\lambda\tau}{8\pi}\left(1-\frac{2}{\nu} \right)\mathcal{I}_2\right] - \right. \\ \left. \kappa_j \mathbf{F}_{\lambda\tau_j}^T - \sum_{i \neq j}\left[ \kappa_i \nabla_{\mathbf{x}}G_{\lambda\tau}(\mathbf{x};\mathbf{x}_i)\mid_{\mathbf{x}=\mathbf{x}_j}^T - S_i\mathbf{a}_i^T \mathcal{E}_{\lambda\tau_{ji}}\right] \right\rbrace \mathbf{e}_j \,, \quad j = 1,\ldots,N \,. \end{multline} Next, we substitute \eqref{pert} into \eqref{eigprob} while recalling the expansion for $u_e$ and $v_e$ in \eqref{innervar2}, and with $\lambda = \eps^2\lambda_0$ and $\tau = \eps^{-2}\tau_0$, we collect $\mathcal{O}(\eps^2)$ terms to obtain \begin{equation} \label{Oeps2eq} \begin{gathered} \lambda_0\Phi_{0j} = \Delta_{\mathbf{y}_j} \Phi_{2j} - \Phi_{2j} + 2U_{0j}V_{0j} \Phi_{2j} + V_{0j}^2 \Psi_{2j} + 2\left(U_{0j}V_{2j} + U_{2j} V_{0j}\right) \Phi_{0j} + 2V_{0j}V_{2j} \Psi_{0j} \,, \\ \lambda_0\tau_0 \Psi_{0j} = \Delta_{\mathbf{y}_j} \Psi_{2j} - 2U_{0j}V_{0j} \Phi_{2j} - V_{0j}^2 \Psi_{2j} - 2\left(U_{0j}V_{2j} + U_{2j} V_{0j}\right) \Phi_{0j} - 2V_{0j}V_{2j} \Psi_{0j} \,. \end{gathered} \end{equation} To write \eqref{Oeps2eq} and \eqref{Psi2jfar1} more compactly, we define the quantities \begin{equation} \label{Oeps2def} \begin{gathered} \mathbf{W}_j \equiv \left(\begin{array}{c} \Phi_{2j} \\ \Psi_{2j} \end{array}\right) \,, \qquad \mathbf{f}_{1j} \equiv \left(\begin{array}{c} \Phi_{0j} \\ \Psi_{0j} \end{array}\right) = \mathbf{a}_j^T\mathbf{e}_j \left(\begin{array}{c} \partial_{\rho_j} V_{0j} \\ \partial_{\rho_j} U_{0j} \end{array}\right) \,, \qquad \omega \equiv \lambda_0\tau_0 \\ \mathbf{f}_{2j} \equiv \left(\begin{array}{c} \lambda_0 \Phi_{0j} \\ \omega \Psi_{0j} \end{array}\right) = \mathbf{a}_j^T\mathbf{e}_j \left(\begin{array}{c} \lambda_0 \partial_{\rho_j} V_{0j} \\ \omega \partial_{\rho_j} U_{0j} \end{array}\right) \,, \qquad \mathcal{N}_j \equiv \left(\begin{array}{cc} -2(U_{0j}V_{2j} + U_{2j}V_{0j}) & -2V_{0j}V_{2j} \\ 2(U_{0j}V_{2j} + U_{2j}V_{0j}) & 2V_{0j}V_{2j} \end{array}\right) \,.
\end{gathered} \end{equation} where we have used \eqref{O1pertsol} for $\Phi_{0j}$ and $\Psi_{0j}$ to obtain \begin{subequations} \label{Oeps2inn} \begin{equation} \label{Oeps2inneq} \Delta_{\mathbf{y}_j} \mathbf{W}_j + \mathcal{M}_j \mathbf{W}_j = \mathcal{N}_j \mathbf{f}_{1j} + \mathbf{f}_{2j} \,, \qquad \mathbf{y}_j \in \mathbb{R}^2 \end{equation} with the far-field condition \begin{equation} \label{Psi2jfar2} \mathbf{W}_j \sim \left(\begin{array}{c} 0 \\ \left[\frac{1}{2} S_j\omega\log\rho_j \mathbf{a}_j^T + 2\pi\mathbf{a}_{Q_j}^T\right]\rho_j\mathbf{e}_j \end{array}\right) \,, \enspace \mbox{as} \enspace \rho_j \to \infty \,, \quad j = 1, \ldots, N \,, \end{equation} where we have defined the $N\times 2$ matrix $(\nabla_1\mathcal{G}_\omega)_j$, the $2N\times 2$ matrices $M_{\omega j}$ and $Q_{\omega j}$, and the $2\times 1$ vector $\mathbf{a}_{Qj}$, \begin{equation} \label{LjMj} \begin{gathered} (\nabla_1\mathcal{G}_\omega)_j \equiv \left(\begin{array}{c} \nabla_{\mathbf{x}}G_{\omega}(\mathbf{x};\mathbf{x}_1)\mid_{\mathbf{x}=\mathbf{x}_j}^T \\ \nabla_{\mathbf{x}}G_{\omega}(\mathbf{x};\mathbf{x}_2)\mid_{\mathbf{x}=\mathbf{x}_j}^T \\ \vdots \\ \mathbf{F}_{\omega j}^T \\ \vdots \\ \nabla_{\mathbf{x}}G_{\omega}(\mathbf{x};\mathbf{x}_N)\mid_{\mathbf{x}=\mathbf{x}_j}^T \end{array} \right) \,, \qquad M_{\omega_j} \equiv \left(\begin{array}{c} S_1 \mathcal{E}_{\omega_{j1}} \\ S_2\mathcal{E}_{\omega_{j2}} \\ \vdots \\ S_j\left[\mathcal{F}_{\omega_j} - H_{\omega_j} + \frac{\omega}{8\pi}\left(1-\frac{2}{\nu}\right)\mathcal{I}_2 \right] \\ \vdots \\ S_N\mathcal{E}_{\omega_{jN}} \end{array} \right) \,, \\ Q_{\omega_j} \equiv M_{\omega_j} - \mathcal{K}_\omega^T (\nabla_1\mathcal{G}_\omega)_j \,, \qquad \mathbf{a}_{Qj} \equiv \left(\begin{array}{c} a_{Q_{1j}} \\ a_{Q_{2j}} \end{array}\right) = Q_{\omega j}^T \mathbf{a} \,. \end{gathered} \end{equation} \end{subequations} In \eqref{Oeps2inneq} and \eqref{Psi2jfar2} for $\mathbf{W}_j$, the coupling of the $j$-th inner region to the others is contained only in the $\mathbf{a}_{Qj}$ term defined in \eqref{LjMj}. All other terms are local to the $j$-th inner region. From \eqref{pertsys}, the linear operator in \eqref{Oeps2inneq} admits a nontrivial nullspace of dimension at least two. The nonhomogeneous terms of \eqref{Oeps2inneq} and \eqref{Psi2jfar2} must therefore satisfy an orthogonality condition involving the solution to the homogeneous adjoint problem.
Before applying this condition, we observe that $\mathbf{W}_j$ can be decomposed into two components proportional to $\cos\theta_j$ and $\sin\theta_j$, respectively, as \begin{subequations} \label{bW} \begin{equation} \label{bWsol} \mathbf{W}_j = \mathbf{W}_{cj}\cos\theta_j + \mathbf{W}_{sj}\sin\theta_j \,, \end{equation} where $\mathbf{W}_{cj}$ and $\mathbf{W}_{sj}$ are $2\times 1$ vectors satisfying \begin{equation} \label{bWeq} \begin{gathered} \left(\partial_{\rho_j\rho_j} + \frac{1}{\rho_j}\partial_{\rho_j} - \frac{1}{\rho_j^2} \right) \mathbf{W}_{cj} + \mathcal{M}_j\mathbf{W}_{cj} = a_{1j}\left[\mathcal{N}_j \left(\begin{array}{c} \partial_{\rho_j} V_{0j} \\ \partial_{\rho_j} U_{0j} \end{array}\right) + \left(\begin{array}{c} \lambda_0 \partial_{\rho_j} V_{0j} \\ \omega \partial_{\rho_j} U_{0j} \end{array}\right) \right] \,, \\ \left(\partial_{\rho_j\rho_j} + \frac{1}{\rho_j}\partial_{\rho_j} - \frac{1}{\rho_j^2} \right) \mathbf{W}_{sj} + \mathcal{M}_j\mathbf{W}_{sj} = a_{2j}\left[\mathcal{N}_j \left(\begin{array}{c} \partial_{\rho_j} V_{0j} \\ \partial_{\rho_j} U_{0j} \end{array}\right) + \left(\begin{array}{c} \lambda_0 \partial_{\rho_j} V_{0j} \\ \omega \partial_{\rho_j} U_{0j} \end{array}\right) \right] \,, \end{gathered} \end{equation} with the far-field conditions \begin{equation} \label{bWfar} \begin{gathered} \mathbf{W}_{cj} \sim \left(\begin{array}{c} 0 \\ \frac{1}{2}S_j\omega a_{1j}\rho_j\log\rho_j + 2\pi\rho_j a_{Q_{1j}} \end{array}\right) \,, \enspace \mbox{as} \enspace \rho_j \to \infty\,,\\ \mathbf{W}_{sj} \sim \left(\begin{array}{c} 0 \\ \frac{1}{2}S_j\omega a_{2j}\rho_j\log\rho_j + 2\pi\rho_j a_{Q_{2j}} \end{array}\right) \,, \enspace \mbox{as} \enspace \rho_j \to \infty \,. \end{gathered} \end{equation} \end{subequations} The nonhomogeneous terms of \eqref{Oeps2inn} must be orthogonal to the nullspace of the homogeneous adjoint operator, given by \begin{equation} \label{adjeq} \Delta_{\mathbf{y}_j} \mathbf{P}_j + \mathcal{M}_j^T \mathbf{P}_j = \mathbf{0} \,, \qquad \mathbf{y}_j \in \mathbb{R}^2 \,. \end{equation} We seek two linearly independent mode-1 solutions to \eqref{adjeq} of the form $\mathbf{P}_j = \mathbf{P}_{cj}$ and $\mathbf{P}_j = \mathbf{P}_{sj}$, where $\mathbf{P}_{cj}$ and $\mathbf{P}_{sj}$ are given by \begin{equation} \label{adjdecomp} \mathbf{P}_{cj} \equiv \tilde{\mathbf{P}}_j(\rho_j)\cos\theta_j \,, \qquad \mathbf{P}_{sj} \equiv \tilde{\mathbf{P}}_j(\rho_j)\sin\theta_j \,, \qquad \tilde{\mathbf{P}}_j(\rho_j) \equiv \left(\begin{array}{c} \tilde{P}_{1j} \\ \tilde{P}_{2j} \end{array}\right) \end{equation} and the radially symmetric $\tilde{\mathbf{P}}_j$ satisfies \begin{subequations} \label{Ptilde} \begin{equation} \label{Ptildeeq} \left(\partial_{\rho_j\rho_j} + \frac{1}{\rho_j}\partial_{\rho_j} - \frac{1}{\rho_j^2} \right) \tilde{\mathbf{P}}_j + \mathcal{M}_j^T\tilde{\mathbf{P}}_j = \mathbf{0}\,, \qquad 0 < \rho_j < \infty \end{equation} with boundary and far-field conditions \begin{equation} \label{Ptildefar} \tilde{\mathbf{P}}_j(0) = \mathbf{0} \,, \qquad \tilde{\mathbf{P}}_j \sim \left(\begin{array}{c} 0 \\ 1/\rho_j \end{array}\right) \,, \enspace \mbox{as} \enspace \rho_j \to \infty \,. \end{equation} \end{subequations} Note that the far-field normalization in \eqref{Ptildefar} uniquely specifies $\tilde{\mathbf{P}}_j$, while the condition at the origin ensures continuity of $\mathbf{P}_{cj}$ and $\mathbf{P}_{sj}$.
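For readers implementing the theory, $\tilde{\mathbf{P}}_j$ can be approximated with the same collocation approach used for the core problem. The sketch below, again our own illustration rather than code from the paper, continues the earlier core-problem sketch by reusing its mesh \texttt{rho}, truncation radius \texttt{R}, and solution \texttt{sol}; the conditions imposed at $\rho_j = R$ are a crude stand-in for the exact far-field behavior \eqref{Ptildefar}.
\begin{verbatim}
# Continuation of the core-problem sketch: solve for Ptilde = (P1, P2)
# with the mode-1 radial operator and P2 ~ 1/rho in the far field.
from scipy.interpolate import interp1d
U0 = interp1d(sol.x, sol.y[2], fill_value="extrapolate")
V0 = interp1d(sol.x, sol.y[0], fill_value="extrapolate")

def rhs_adj(r, p):
    # p = [P1, P1', P2, P2']; operator L = d_rr + d_r/r - 1/r^2,
    # system L*P + M^T*P = 0 rearranged for the second derivatives
    P1, P1p, P2, P2p = p
    UV, VV = U0(r)*V0(r), V0(r)**2
    return np.vstack([P1p,
                      -P1p/r + P1/r**2 + (1 - 2*UV)*P1 + 2*UV*P2,
                      P2p,
                      -P2p/r + P2/r**2 - VV*P1 + VV*P2])

def bc_adj(pa, pb):
    # P1(0) = P2(0) = 0; P1(R) ~ 0 and P2(R) = 1/R fix the normalization
    return np.array([pa[0], pa[2], pb[0], pb[2] - 1.0/R])

p0 = np.vstack([np.zeros_like(rho), np.zeros_like(rho),
                rho/(1.0 + rho**2), (1.0 - rho**2)/(1.0 + rho**2)**2])
sol_adj = solve_bvp(rhs_adj, bc_adj, rho, p0, max_nodes=20000)
\end{verbatim}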
To apply the orthogonality condition, we multiply \eqref{Oeps2inn} on the left by $\mathbf{P}_{cj,sj}^T$ and integrate over a disk $B_{R}$ of radius $R \gg 1$ centered at the origin to obtain \begin{equation} \label{orthog} \iint_{B_R} \! \mathbf{P}_{cj,sj}^T \left[\Delta_{\mathbf{y}_j} \mathbf{W}_j + \mathcal{M}_j \mathbf{W}_j \right] \, d\mathbf{y}_j \ = \iint_{B_R} \! \mathbf{P}_{cj,sj}^T \mathcal{N}_j \mathbf{f}_{1j} \, d\mathbf{y}_j + \iint_{B_R} \! \mathbf{P}_{cj,sj}^T \mathbf{f}_{2j} \, d\mathbf{y}_j \,. \end{equation} We now compute each of the integrals in \eqref{orthog} in the limit $R \gg 1$. For the term on the left-hand side, we use Green's identity along with \eqref{adjeq} to obtain \begin{equation} \label{orthogleft} \iint_{B_R} \! \mathbf{P}_{cj,sj}^T \left[\Delta_{\mathbf{y}_j} \mathbf{W}_j + \mathcal{M}_j \mathbf{W}_j \right] \, d\mathbf{y}_j = \int_{\theta_j = 0}^{2\pi}\! \left[\mathbf{P}_{cj,sj}^T \partial_{\rho_j}\mathbf{W}_j - \mathbf{W}_j^T\partial_{\rho_j} \mathbf{P}_{cj,sj} \right] R \, d\theta_j \,. \end{equation} For the right-hand side of \eqref{orthogleft}, we use the far-field conditions of \eqref{bWfar} along with \eqref{adjdecomp} and \eqref{Ptildefar} to obtain for the cosine term \begin{subequations} \label{orthogleftcossin} \begin{equation} \label{orthogleftcos} \int_{\theta_j = 0}^{2\pi}\! \left[\mathbf{P}_{cj}^T \partial_{\rho_j}\mathbf{W}_j - \mathbf{W}_j^T\partial_{\rho_j} \mathbf{P}_{cj} \right] R \, d\theta_j \sim \pi\left[2c_{1j}\log R + 2c_{2j} + c_{1j} \right] \,, \qquad R \gg 1 \,, \end{equation} while for the sine term, we have \begin{equation}\label{orthogleftsin} \int_{\theta_j = 0}^{2\pi}\! \left[\mathbf{P}_{sj}^T \partial_{\rho_j}\mathbf{W}_j - \mathbf{W}_j^T\partial_{\rho_j} \mathbf{P}_{sj} \right] R \, d\theta_j \sim \pi\left[2s_{1j}\log R + 2s_{2j} + s_{1j} \right] \,, \qquad R \gg 1 \,, \end{equation} where we have defined \begin{equation} \label{orthogleftdef} c_{1j} \equiv \frac{1}{2}S_j\omega a_{1j} \,, \qquad c_{2j} \equiv 2\pi a_{Q_{1j}} \,, \qquad s_{1j} \equiv \frac{1}{2}S_j\omega a_{2j} \,, \qquad s_{2j} \equiv 2\pi a_{Q_{2j}} \,. \end{equation} \end{subequations} For the second term on the right-hand side of \eqref{orthog}, we use \eqref{Oeps2def} for $\mathbf{f}_{2j}$ and perform an integration by parts to obtain \begin{equation} \label{orthogPf2} \begin{gathered} \iint_{B_R} \! \mathbf{P}_{cj}^T \mathbf{f}_{2j} \, d\mathbf{y}_j \sim \pi a_{1j}\omega S_j\log R + \pi a_{1j}\lambda_0\left[-\tau_0 k_{2j} + k_{1j}\right] \,,\\ \iint_{B_R} \! \mathbf{P}_{sj}^T \mathbf{f}_{2j} \, d\mathbf{y}_j \sim \pi a_{2j}\omega S_j\log R + \pi a_{2j}\lambda_0\left[-\tau_0 k_{2j} + k_{1j}\right] \,, \end{gathered} \end{equation} where $k_{1j}$ and $k_{2j}$ are defined by the integrals (see \cite{xie2017moving}) \begin{equation} \label{k1jk2j} k_{1j} \equiv \int_0^\infty \! V_{0j}^\prime \tilde{P}_{1j} \rho_j \, d\rho_j \,, \qquad k_{2j} \equiv \int_0^\infty \! \left[U_{0j} - \chi_j\right] (\tilde{P}_{2j} \rho_j)^\prime \, d\rho_j \,. \end{equation} Note that $k_{1j}$ and $k_{2j}$ are both functions of $S_j$ through their dependence on $V_{0j}$ and $\tilde{P}_{1j}$, as well as on $U_{0j}$, $\tilde{P}_{2j}$, and $\chi_j$, respectively. For completeness, we reproduce plots of $k_{1j}$ and $k_{2j}$ versus $S_j$ in Fig.~\ref{kappafig}.
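Continuing the numerical sketches above, $k_{1j}$ and $k_{2j}$ can be approximated by quadrature once the core and adjoint profiles are available; the following lines are illustrative only and assume the objects \texttt{sol}, \texttt{sol\_adj}, and \texttt{S} from the earlier sketches.
\begin{verbatim}
# Evaluate the solvability integrals k1 and k2 on the core-problem mesh;
# chi(S) is read off from the log-subtracted far field of U0.
r    = sol.x
V0p  = sol.y[1]
U0r  = sol.y[2]
P1   = np.interp(r, sol_adj.x, sol_adj.y[0])
P2   = np.interp(r, sol_adj.x, sol_adj.y[2])
chiS = U0r[-1] - S*np.log(r[-1])

k1 = np.trapz(V0p*P1*r, r)
k2 = np.trapz((U0r - chiS)*np.gradient(P2*r, r), r)
print("k1 =", k1, " k2 =", k2)
\end{verbatim}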
\renewcommand{\thesubfigure}{\alph{subfigure}} \begin{figure}[!ht] \centering \begin{subfigure}[b]{0.48\textwidth} \centering \includegraphics[width=\textwidth]{kappa_1.eps} \caption{$k_{1j}$} \label{kappa_1} \end{subfigure} \begin{subfigure}[b]{0.48\textwidth} \centering \includegraphics[width=\textwidth]{kappa_2.eps} \caption{$k_{2j}$} \label{kappa_2} \end{subfigure} \caption{Plots of (a) $k_{1j}$ and (b) $k_{2j}$ as defined in \eqref{k1jk2j}. Note that a spot with strength $S_j \gtrsim 4.31$ is unstable to a self-replication instability.} \label{kappafig} \end{figure} To compute the first term on the right-hand side of \eqref{orthog}, we seek to rewrite $\mathcal{N}_j \mathbf{f}_{1j}$ in terms of the linear operator $\Delta_{\mathbf{y}_j} + \mathcal{M}_j$ so that, upon multiplication by $\mathbf{P}_{cj,sj}^T$ and integration over $B_{R}$, we may apply the divergence theorem. To proceed, we use \eqref{Oeps2def} to compute \begin{equation} \label{Nf1_1} \mathcal{N}_j \mathbf{f}_{1j} = \mathcal{N}_j (a_{1j} \cos\theta_j\partial_{\rho_j} + a_{2j}\sin\theta_j\partial_{\rho_j}) \left(\begin{array}{c} V_{0j} \\ U_{0j} \end{array}\right) = \mathcal{N}_j \, \mathbf{a}_j \cdot \nabla_{\mathbf{y}_j}\left(\begin{array}{c} V_{0j} \\ U_{0j} \end{array}\right) \,. \end{equation} Expanding \eqref{Nf1_1}, we have \begin{equation} \label{Nf1_2} \mathcal{N}_j \mathbf{f}_{1j} = \left(\begin{array}{c} -U_{2j}\mathbf{a}_j\cdot\nabla_{\mathbf{y}_j} V_{0j}^2 - 2V_{2j}\mathbf{a}_j\cdot\nabla_{\mathbf{y}_j}(U_{0j}V_{0j}) \\ U_{2j}\mathbf{a}_j\cdot\nabla_{\mathbf{y}_j} V_{0j}^2 + 2V_{2j}\mathbf{a}_j\cdot\nabla_{\mathbf{y}_j}(U_{0j}V_{0j}) \end{array}\right) \,. \end{equation} Next, we add and subtract terms in each component of \eqref{Nf1_2} to obtain full derivatives of $U_{2j}V_{0j}^2$ and $2U_{0j}V_{0j}V_{2j}$, and find \begin{equation} \label{Nf1_3} \mathcal{N}_j \mathbf{f}_{1j} = \left(\begin{array}{c} -\mathbf{a}_j\cdot\nabla_{\mathbf{y}_j}(U_{2j} V_{0j}^2) - \mathbf{a}_j\cdot\nabla_{\mathbf{y}_j}(2V_{2j}U_{0j}V_{0j}) + V_{0j}^2 \mathbf{a}_j\cdot\nabla_{\mathbf{y}_j} U_{2j} + 2U_{0j}V_{0j}\mathbf{a}_j\cdot\nabla_{\mathbf{y}_j} V_{2j} \\ \mathbf{a}_j\cdot\nabla_{\mathbf{y}_j}(U_{2j} V_{0j}^2) + \mathbf{a}_j\cdot\nabla_{\mathbf{y}_j}(2V_{2j}U_{0j}V_{0j}) - V_{0j}^2 \mathbf{a}_j\cdot\nabla_{\mathbf{y}_j} U_{2j} - 2U_{0j}V_{0j}\mathbf{a}_j\cdot\nabla_{\mathbf{y}_j} V_{2j} \end{array}\right) \,. \end{equation} Passing the operator $\mathbf{a}_j\cdot\nabla_{\mathbf{y}_j}$ through the system for $U_{2j}$ and $V_{2j}$ in \eqref{coreeq2}, we observe that \begin{equation*} \mathbf{a}_j\cdot\nabla_{\mathbf{y}_j}(U_{2j} V_{0j}^2) + \mathbf{a}_j\cdot\nabla_{\mathbf{y}_j}(2V_{2j}U_{0j}V_{0j}) = -\Delta_{\mathbf{y}_j} \left(\mathbf{a}_j\cdot\nabla_{\mathbf{y}_j} V_{2j}\right) + \mathbf{a}_j\cdot\nabla_{\mathbf{y}_j} V_{2j} \,, \end{equation*} and \begin{equation*} \mathbf{a}_j\cdot\nabla_{\mathbf{y}_j}(U_{2j} V_{0j}^2) + \mathbf{a}_j\cdot\nabla_{\mathbf{y}_j}(2V_{2j}U_{0j}V_{0j}) = \Delta_{\mathbf{y}_j} \left(\mathbf{a}_j\cdot\nabla_{\mathbf{y}_j} U_{2j}\right) \,. \end{equation*} We thus obtain from \eqref{Nf1_3} that \begin{equation} \label{Nf1_4} \mathcal{N}_j \mathbf{f}_{1j} = \left[\Delta_{\mathbf{y}_j} + \mathcal{M}_j\right]\left(\mathbf{a}_j\cdot\nabla_{\mathbf{y}_j} \mathbf{n}_j\right) \,; \qquad \mathbf{n}_j \equiv \left(\begin{array}{c} V_{2j} \\ U_{2j} \end{array}\right) \,, \end{equation} where $\mathcal{M}_j$ is the matrix of the linearized reaction terms defined in \eqref{M}.
Multiplying \eqref{Nf1_4} by $\mathbf{P}_{cj,sj}^T$ satisfying \eqref{adjeq} and applying the divergence theorem, we have \begin{equation} \label{PNf1_1} \iint_{B_R} \! \mathbf{P}_{cj,sj}^T \mathcal{N}_j \mathbf{f}_{1j} \, d\mathbf{y}_j = \int_0^{2\pi}\left(\mathbf{P}_{cj,sj}^T \partial_{\rho_j}(\mathbf{a}_j\cdot\nabla_{\mathbf{y}_j} \mathbf{n}_j) - (\mathbf{a}_j\cdot\nabla_{\mathbf{y}_j} \mathbf{n}_j)^T\partial_{\rho_j} \mathbf{P}_{cj,sj}\right)R\,d\theta_j \,, \end{equation} where each term of the integrand on the right-hand side is evaluated on the circle $\rho_j = R$. For $\mathbf{n}_j$ defined in \eqref{Nf1_4}, we use the far-field condition for $V_{2j}$ and $U_{2j}$ in \eqref{innervar2far} to compute \begin{equation} \label{njfar} \begin{gathered} \mathbf{a}_j\cdot\nabla_{\mathbf{y}_j} \mathbf{n}_j \sim \left(\begin{array}{c} 0\\ -2\pi\rho_j \mathbf{a}_j^T\mathcal{H}_j \mathbf{e}_j \end{array}\right) \,; \qquad \rho_j \gg 1 \,, \\ \partial_{\rho_j} \mathbf{a}_j\cdot\nabla_{\mathbf{y}_j} \mathbf{n}_j \sim \left(\begin{array}{c} 0\\ -2\pi\mathbf{a}_j^T\mathcal{H}_j \mathbf{e}_j \end{array}\right) \,; \qquad \rho_j \gg 1 \,. \end{gathered} \end{equation} Substituting \eqref{njfar} and \eqref{adjdecomp}-\eqref{Ptildefar} into \eqref{PNf1_1}, we obtain \begin{equation} \label{PNf1_2} \begin{gathered} \iint_{B_R} \! \mathbf{P}_{cj}^T \mathcal{N}_j \mathbf{f}_{1j} \, d\mathbf{y}_j \sim -4\pi^2\left[a_{1j}\mathcal{H}_{j}^{(11)} + a_{2j}\mathcal{H}_{j}^{(21)}\right] \,, \\ \iint_{B_R} \! \mathbf{P}_{sj}^T \mathcal{N}_j \mathbf{f}_{1j} \, d\mathbf{y}_j \sim -4\pi^2\left[a_{1j}\mathcal{H}_{j}^{(12)} + a_{2j}\mathcal{H}_{j}^{(22)}\right] \,, \end{gathered} \end{equation} where $\mathcal{H}_j^{(mn)}$ denotes the $(m,n)$-th entry of the matrix $\mathcal{H}_j$ defined in \eqref{mHj}. Now we substitute \eqref{orthogleftcossin}, \eqref{orthogPf2}, and \eqref{PNf1_2} for their respective terms in the orthogonality condition \eqref{orthog} to obtain for $R \gg 1$ and $j = 1,\ldots,N$, \begin{equation} \label{orthog2} \begin{gathered} \pi\left[2c_{1j}\log R + 2c_{2j} + c_{1j} \right] = \pi a_{1j}\omega S_j\log R + \pi a_{1j}\lambda_0\left[-\tau_0 k_{2j} + k_{1j}\right] - 4\pi^2\left[a_{1j}\mathcal{H}_{j}^{(11)} + a_{2j}\mathcal{H}_{j}^{(21)}\right] \,, \\ \pi\left[2s_{1j}\log R + 2s_{2j} + s_{1j} \right] = \pi a_{2j}\omega S_j\log R + \pi a_{2j}\lambda_0\left[-\tau_0 k_{2j} + k_{1j}\right] -4\pi^2\left[a_{1j}\mathcal{H}_{j}^{(12)} + a_{2j}\mathcal{H}_{j}^{(22)}\right] \,. \end{gathered} \end{equation} Observe that, with $c_{1j}$ and $s_{1j}$ defined in terms of $a_{1j}$ and $a_{2j}$ in \eqref{orthogleftdef}, the $\log R$ terms in \eqref{orthog2} cancel, yielding \begin{equation} \label{mateig} \begin{gathered} \left[\frac{1}{2}S_j\omega + \omega k_{2j} + 4\pi\mathcal{H}_j^{(11)}\right] a_{1j} + 4\pi\mathcal{H}_j^{(21)}a_{2j} + 4\pi a_{Q_{1j}} = k_{1j}\lambda_0a_{1j} \,, \\ 4\pi\mathcal{H}_j^{(12)} a_{1j} + \left[\frac{1}{2}S_j\omega + \omega k_{2j} + 4\pi\mathcal{H}_j^{(22)}\right]a_{2j} + 4\pi a_{Q_{2j}} = k_{1j}\lambda_0a_{2j} \,. \end{gathered} \end{equation} Here $a_{Q_{1j}}$ and $a_{Q_{2j}}$ are the first and second components, respectively, of the vector $\mathbf{a}_{Qj}$ defined in \eqref{LjMj}; it is this term that gives rise to the coupling between the different inner regions.
Substituting for $\mathbf{a}_{Qj}$ in \eqref{mateig}, we arrive at the $2N\times 2N$ matrix-eigenvalue problem for the oscillation modes $\mathbf{a}$, Hopf bifurcation frequency $\lambda_0$, and threshold $\tau_0 \equiv \omega/\lambda_0$: \begin{subequations} \label{finaleig} \begin{equation} \label{finaleigeq} \bK_1^{-1}\left[\omega\left(\frac{1}{2}\mathcal{S} + \bK_2\right) + 4\pi\mathcal{H} + 4\pi \mathcal{Q}_\omega \right] \mathbf{a} = \lambda_0\mathbf{a} \,, \end{equation} where $\mathcal{S}$ is the $2N\times 2N$ matrix of spot strengths \eqref{kappadef}, while the $2N\times 2N$ matrices $\bK_1$, $\bK_2$, $\mathcal{H}$, and $\mathcal{Q}_\omega$ are defined as \begin{equation} \label{finaleigdef} \begin{gathered} \bK_1 \equiv \left(\begin{array}{ccccc} k_{11} & & & & \\ & k_{11} & & & \\ & & \ddots & & \\ & & & k_{1N} & \\ & & & & k_{1N} \end{array}\right) \,, \qquad \bK_2 \equiv \left(\begin{array}{ccccc} k_{21} & & & & \\ & k_{21} & & & \\ & & \ddots & & \\ & & & k_{2N} & \\ & & & & k_{2N} \end{array}\right) \,, \\ \mathcal{Q}_\omega \equiv \left(\begin{array}{c} Q_{\omega 1}^T \\ Q_{\omega 2}^T \\ \vdots \\ Q_{\omega N}^T \end{array}\right) \,, \qquad \mathcal{H} \equiv \left(\begin{array}{cccc} \mathcal{H}_1 & & & \\ & \mathcal{H}_2 & & \\ & & \ddots & \\ & & & \mathcal{H}_N \end{array}\right) \,. \end{gathered} \end{equation} \end{subequations} In \eqref{finaleigdef}, $k_{1j}$ and $k_{2j}$ are the integrals defined in \eqref{k1jk2j}, $Q_{\omega j}$ are the $2N \times 2$ matrices defined in \eqref{LjMj}, and $\mathcal{H}_{j}$ is the $2\times 2$ matrix defined in \eqref{mHj}. Rewriting in terms of previously defined matrices, we obtain that the final matrix-eigenvalue problem of \eqref{finaleigeq} takes the form \begin{subequations} \label{finaleigblock} \begin{equation} \bK_1^{-1}\left[ B(\omega) - M(\omega) \right] \mathbf{a} = \lambda_0\mathbf{a} \,, \end{equation} where the $2N\times 2N$ matrices $B(\omega)$ and $M(\omega)$ are given by \begin{equation} \label{finaleigblockdef} \begin{gathered} B(\omega) \equiv \omega\left[\left(1-\frac{1}{\nu}\right)\mathcal{S} + \bK_2 \right] + 4\pi\mathcal{H} + 4\pi \left(\nabla^2 \mathcal{G}_\omega\right)\mathcal{S} \,, \\ M(\omega) \equiv 8\pi^2\left(\nabla_1 \mathcal{G}_\omega\right)^T\left[\frac{1}{\nu}\mathcal{I}_N + \boldsymbol\chi^\prime + 2\pi\mathcal{G}_\omega\right]^{-1} \left(\nabla_2\mathcal{G}_\omega\right)\mathcal{S} \,, \end{gathered} \end{equation} with the $N\times 2N$ and $2N\times 2N$ matrices $\left(\nabla_1\mathcal{G}_\omega\right)$ and $\left(\nabla^2\mathcal{G}_\omega\right)$, respectively, given by \begin{equation} \label{finaleigblockdef2} \begin{gathered} \left(\nabla_1\mathcal{G}_\omega\right) \equiv \left(\left(\nabla_1\mathcal{G}_\omega\right)_1 \enspace\enspace \left(\nabla_1\mathcal{G}_\omega\right)_2 \enspace\enspace \cdots \enspace\enspace \left(\nabla_1\mathcal{G}_\omega\right)_N \right) \,, \\ (\nabla^2 \mathcal{G}_{\omega}) \equiv \left(\begin{array}{cccc} [\mathcal{F}_{\omega_1} - H_{\omega_1}]^T & \mathcal{E}_{\omega_{12}}^T & \cdots & \mathcal{E}_{\omega_{1N}}^T \\ \mathcal{E}_{\omega_{21}}^T & [\mathcal{F}_{\omega_2} - H_{\omega_2}]^T & \cdots & \vdots \\ \vdots & \cdots & \ddots & \vdots \\ \mathcal{E}_{\omega_{N1}}^T & \cdots & \cdots & [\mathcal{F}_{\omega_N} - H_{\omega_N}]^T \end{array}\right) \,.
\end{gathered} \end{equation} \end{subequations} In \eqref{finaleigblock}, $\mathcal{S}$ is the diagonal matrix of spot strengths defined in \eqref{kappadef}, $\mathcal{H}$ is the block diagonal matrix, each block of which is a linear combination of Hessian matrices of the Neumann Green's function weighted by spot strengths (see \eqref{mHj}), $\left(\nabla^2 \mathcal{G}_\omega\right)$ is a block matrix involving second-order derivative terms of the Helmholtz Green's function and its regular part (see \eqref{gradGKdef}), $\left(\nabla_1\mathcal{G}_\omega\right)$ and $\left(\nabla_2\mathcal{G}_\omega\right)$ are $N\times 2N$ matrices involving first derivatives with respect to the first and second arguments, respectively, of the Helmholtz Green's function $G_\omega(\mathbf{x};\mathbf{x}_0)$ (see \eqref{finaleigblockdef2} and \eqref{kappadef}), $\mathcal{I}_N$ is the $N\times N$ identity matrix, $\boldsymbol\chi^\prime$ is the diagonal matrix whose $j$-th diagonal entry is $\chi^\prime(S_j)$, and $\mathcal{G}_\omega$ is the Green's interaction matrix defined in \eqref{kappadef}. The diagonal matrices $\bK_1$ and $\bK_2$ are defined in \eqref{finaleigdef}; the terms along the diagonal are the nonzero constants defined in \eqref{k1jk2j} that must be computed numerically. The $(i,j)$-th entry or block in each matrix with $i \neq j$ accounts for the interaction between the $i$-th and $j$-th spots. To find pure imaginary eigenvalues of \eqref{finaleigblock}, we set $\lambda_0 = i\lambda_I$ and solve the two transcendental equations \begin{subequations} \label{det0} \begin{equation} \label{det01} \textrm{Re}\left\lbrace\det\left(B(i\hat{\omega}) - M(i\hat{\omega}) - i\hat{\lambda}_I\mathcal{I}_{2N}\right)\right\rbrace = 0 \,, \end{equation} \begin{equation} \label{det02} \textrm{Im}\left\lbrace\det\left(B(i\hat{\omega}) - M(i\hat{\omega}) - i\hat{\lambda}_I\mathcal{I}_{2N}\right)\right\rbrace = 0 \,, \end{equation} \end{subequations} for $\omega = \hat{\omega}$ and $\lambda_I = \hat{\lambda}_I$. Each solution corresponds to a different mode of oscillation with frequency $\hat{\lambda}_I/(2\pi)$ and Hopf bifurcation value $\hat{\tau} = \hat{\omega}/\hat{\lambda}_I$. The corresponding nullspace $\hat{\mathbf{a}}$ of the matrix $B(i\hat{\omega}) - M(i\hat{\omega}) - i\hat{\lambda}_I\mathcal{I}_{2N}$ then indicates the mode of oscillation of each of the $N$ spots. The Hopf stability threshold $\hat{\tau}^*$ is then the minimum of all $\hat{\tau}$, with the dominant mode of oscillation given by the corresponding eigenvector $\hat{\mathbf{a}}^*$ and frequency $\hat{\lambda}_I^*$. If the direction vector $\hat{\mathbf{a}}_k$ of the $k$-th spot is real, then at onset, the $k$-th spot oscillates about the point $\mathbf{x}_k \in \Omega$ along the direction $\hat{\mathbf{a}}_k$. If $\hat{\mathbf{a}}_k$ is complex, the trajectory of the $k$-th spot at onset is that of a rotated ellipse centered at $\mathbf{x}_k$, with the angle of rotation and the minor and major axes determined by the $2\times 2$ matrix $(\textrm{Re}(\hat{\mathbf{a}}_k) \enspace -\textrm{Im}(\hat{\mathbf{a}}_k))$. We give numerical examples of both types of oscillation in \S \ref{numerics}. \section{Single-spot analysis} \label{onespot} In this section, we consider the special case of the eigenvalue problem \eqref{finaleigblock} when the pattern consists of only $N=1$ spot.
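Before doing so, we remark that \eqref{det0} is amenable to any standard root finder. The schematic Python sketch below is purely illustrative: \texttt{build\_B} and \texttt{build\_M} are hypothetical placeholders for routines that assemble $B(\omega)$ and $M(\omega)$ from numerically computed Green's function data, and the dummy matrices they return serve only to make the snippet run.
\begin{verbatim}
# Schematic root-finding sketch for the two transcendental equations.
import numpy as np
from scipy.optimize import fsolve

N = 2
def build_B(omega):   # hypothetical placeholder assembly of B(omega)
    return np.diag(np.full(2*N, omega - omega**2))
def build_M(omega):   # hypothetical placeholder assembly of M(omega)
    return 0.05*np.ones((2*N, 2*N), dtype=complex)

def resid(z):
    w_hat, lamI = z
    D = np.linalg.det(build_B(1j*w_hat) - build_M(1j*w_hat)
                      - 1j*lamI*np.eye(2*N))
    return [D.real, D.imag]   # real and imaginary parts must vanish

w_hat, lamI = fsolve(resid, [0.5, 0.5])
print("tau_hat =", w_hat/lamI, " frequency =", lamI/(2*np.pi))
\end{verbatim}
The nullspace of the resulting matrix at the root, obtained, e.g., from a singular value decomposition, then supplies the oscillation mode $\hat{\mathbf{a}}$.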
We compare our result to that given in \cite{xie2017moving} for the special case of the unit disk, showing that we recover it from \eqref{finaleigblock} when $N=1$ and $\Omega$ is the unit disk. In doing so, we highlight the extra terms that arise in the eigenvalue problem due to asymmetries of the domain, which were absent in the analysis of the unit disk. We also perform an analysis of a perturbed unit disk and show how the perturbation affects the bifurcation threshold and corresponding frequency, as well as the preferred direction of oscillation. In the case of a single spot of strength $S$, the only terms that remain in \eqref{finaleigblock} are the $(1,1)$ entries and blocks of the matrices in the formula. The eigenvalue problem for $N=1$ is then \eqref{finaleigblock}, where the $2\times 2$ matrices $B(\omega)$ and $M(\omega)$ are given by \begin{equation} \label{oneeigdef} \begin{gathered} B(\omega) \equiv \omega\left[\left(1-\frac{1}{\nu}\right)S + k_{21} \right]\mathcal{I}_2 + 4\pi S H_{11} + 4\pi S\left[\mathcal{F}_{\omega_1} - H_{\omega_1} \right]^T \,, \\ M(\omega) \equiv 8\pi^2 S \left[\frac{1}{\nu} + \chi^\prime(S) + 2\pi R_\omega(\mathbf{x}_1;\mathbf{x}_1)\right]^{-1} \left(\nabla_\mathbf{x} R_\omega(\mathbf{x};\mathbf{x}_1)\left.\right|_{\mathbf{x}=\mathbf{x}_1}\right)\left(\nabla_\mathbf{x} R_\omega(\mathbf{x};\mathbf{x}_1)\left.\right|_{\mathbf{x}=\mathbf{x}_1}\right)^T\,. \end{gathered} \end{equation} Recall that $\nabla_\mathbf{x} R(\mathbf{x};\mathbf{x}_1)\mid_{\mathbf{x}=\mathbf{x}_1} = \mathbf{0}$ since $\mathbf{x}_1$ is an equilibrium location of the one-spot pattern. In highly symmetric geometries such as rectangles and the unit disk, the zero of $\nabla_\mathbf{x} R(\mathbf{x};\mathbf{x}_1)\mid_{\mathbf{x}=\mathbf{x}_1}$ coincides exactly with that of $\nabla_\mathbf{x} R_\omega(\mathbf{x};\mathbf{x}_1)\left.\right|_{\mathbf{x}=\mathbf{x}_1}$. That is, the gradients of the regular parts of the Neumann \eqref{GNall} and Helmholtz \eqref{Gk} Green's functions vanish at the same value of $\mathbf{x}_1$. In these geometries, the matrix $M(\omega)$ vanishes while the Hessian matrices $H_{11}$ and $H_{\omega_1}$ along with $\mathcal{F}_{\omega_1}$ can be made diagonal. The eigenvalue problem \eqref{finaleigblock} then decouples to form \begin{equation} \label{oneeigsymm1} \begin{gathered} k_{11}^{-1}\left\lbrace \omega\left[\left(1-\frac{1}{\nu}\right)S + k_{21} \right] + 4\pi \left[H_{11}^{(1,1)} + \mathcal{F}_{\omega_1}^{(1,1)} - H_{\omega_1}^{(1,1)} \right]S \right\rbrace a_1 = \lambda_0 a_1 \,, \\ k_{11}^{-1}\left\lbrace \omega\left[\left(1-\frac{1}{\nu}\right)S + k_{21} \right] + 4\pi \left[H_{11}^{(2,2)} + \mathcal{F}_{\omega_1}^{(2,2)} - H_{\omega_1}^{(2,2)} \right]S \right\rbrace a_2 = \lambda_0 a_2 \,, \end{gathered} \end{equation} where $H_{11}^{(jj)}$ and $H_{\omega_1}^{(jj)}$ are the $(j,j)$ components of the Hessian matrices of the Neumann and Helmholtz Green's functions, respectively. The dominant mode of oscillation would be either along the $(1,0)$ or the $(0,1)$ direction, depending on which pair of Hessian terms in \eqref{oneeigsymm1} yields the lower Hopf threshold $\hat{\tau}$.
In even more symmetric geometries such as the square and the unit disk, the latter of which we analyze in detail below, $H_{11}$ and $H_{\omega_1}$ are multiples of $\mathcal{I}_2$, so that \eqref{finaleigblock} reduces to the scalar problem given by \begin{equation} \label{oneeigsymm2} \omega\left[\left(1-\frac{1}{\nu}\right)S + k_{21} \right] + 4\pi \left[H_{11}^{(1,1)} + \mathcal{F}_{\omega_1}^{(1,1)} - H_{\omega_1}^{(1,1)} \right]S - k_{11}\lambda_0 = 0 \,. \end{equation} In such geometries, the vector $\mathbf{a}$ indicating the direction of spot oscillation at onset becomes arbitrary, and there is no preferred direction of oscillation. Observe from \eqref{kappaj} with $N = 1$ that the coincidence of the zeros of $\nabla_\mathbf{x} R(\mathbf{x};\mathbf{x}_1)\mid_{\mathbf{x}=\mathbf{x}_1}$ and $\nabla_\mathbf{x} R_\omega(\mathbf{x};\mathbf{x}_1)\left.\right|_{\mathbf{x}=\mathbf{x}_1}$ implies that $\kappa_1 = 0$. With $\kappa_1 = 0$, the extra $G_{\lambda\tau}$ term in the outer solution for the eigenfunction $\psi_1$ in \eqref{psi1sol} vanishes, while the $\mathcal{O}(\eps)$ terms $\Phi_1$ and $\Psi_1$ are also absent from the inner expansions of the eigenfunctions. As such, we may attribute the presence of these two terms to the asymmetry of the domain. Their effects are encoded in the matrix $M(\omega)$ in \eqref{oneeigdef}. In the next section, we consider \eqref{oneeigsymm2} for the case of the unit disk and show that it is equivalent to the eigenvalue problem derived in \cite{xie2017moving}. \subsection{The unit disk} \label{unitdisk} We first require the Hessian terms of the Neumann and Helmholtz Green's functions, $H_{11}^{(1,1)}$ and $H_{\omega_1}^{(1,1)}$, respectively, along with $\mathcal{F}_{\omega_1}^{(1,1)}$, the gradient with respect to the source location of the gradient of the regular part of the Helmholtz Green's function. We begin by computing $\mathcal{F}_{\omega_1}^{(1,1)}$. In polar coordinates $\mathbf{x} = (x,y) = \rho(\cos\theta, \sin\theta)$, \cite{chen2011stability} gives the series solution for the Helmholtz Green's function $G_\omega(\mathbf{x};\mathbf{x}_0)$ satisfying \eqref{Gk} with source at $\mathbf{x}_0 = (x_0, y_0) = \rho_0(\cos\theta_0, \sin\theta_0)$ as \begin{equation} \label{Gunit} G_\omega(\rho,\theta;\rho_0,\theta_0) = \frac{1}{2\pi}K_0\left(\sqrt{\omega}|\mathbf{x}-\mathbf{x}_0|\right) - \frac{1}{2\pi}A_0I_0(\sqrt{\omega}\rho) - \frac{1}{\pi}\sum_{n =1}^\infty \cos(n(\theta-\theta_0))A_n I_n(\sqrt{\omega}\rho) \,, \end{equation} \noindent where $|\mathbf{x}-\mathbf{x}_0| = \sqrt{\rho^2 + \rho_0^2 - 2\rho\rho_0\cos(\theta-\theta_0)}$ and $A_n = K_n^\prime(\sqrt{\omega})I_n(\sqrt{\omega}\rho_0)/I_n^\prime(\sqrt{\omega})$ for $n = 0, 1, \ldots$. To compute $\mathcal{F}_{\omega_1}^{(1,1)}$ in \eqref{oneeigsymm2}, we first obtain $R_\omega(\mathbf{x};\mathbf{x}_0)$ from the definition in \eqref{Gkloc} along with the small-argument asymptotics of $K_0(z)$, \begin{multline} \label{Gunitregular} R_\omega(\rho,\theta;\rho_0,\theta_0) = \frac{1}{2\pi}\left[-\gamma - \log\frac{\sqrt{\omega}}{2} + \frac{\omega}{4}\left( -\log|\mathbf{x}-\mathbf{x}_0| + 1 - \gamma -\log\frac{\sqrt{\omega}}{2} \right)|\mathbf{x}-\mathbf{x}_0|^2 \right] \\ - \frac{1}{2\pi}A_0I_0(\sqrt{\omega}\rho) - \frac{1}{\pi}\sum_{n =1}^\infty \cos(n(\theta-\theta_0))A_n I_n(\sqrt{\omega}\rho) \,. \end{multline} \noindent In \eqref{Gunitregular}, $\gamma$ is Euler's constant.
We next use \eqref{Gunitregular} to compute $\mathbf{F}_{\omega_1}$, the gradient of the regular part with respect to $\mathbf{x}$ evaluated at $\mathbf{x} = \mathbf{x}_0$. We have \begin{multline} \label{Gunitregulargrad} \nabla_\mathbf{x} R_\omega(\mathbf{x};\mathbf{x}_0) = \frac{\omega}{4\pi}\left(-\log|\mathbf{x}-\mathbf{x}_0| + \frac{1}{2} - \gamma - \log\frac{\sqrt{\omega}}{2} \right)(\mathbf{x}-\mathbf{x}_0) \\ - \frac{\sqrt{\omega}}{2\pi}A_0I_0^\prime(\sqrt{\omega}\rho)\mathbf{e}_\theta - \frac{\sqrt{\omega}}{\pi}\sum_{n =1}^\infty \cos(n(\theta-\theta_0))A_n I_n^\prime(\sqrt{\omega}\rho)\mathbf{e}_\theta \,, \end{multline} where $\mathbf{e}_\theta \equiv (\cos\theta, \sin\theta)^T$; here we have omitted the terms proportional to $\mathbf{e}_\theta^\prime \equiv (-\sin\theta, \cos\theta)^T$ arising from the $\theta$-derivatives of the series, since they vanish at $\theta = \theta_0$, where the gradient is evaluated next. Setting $\mathbf{x} = \mathbf{x}_0$ in \eqref{Gunitregulargrad}, we obtain \begin{equation} \label{Gunitregulargradself} \mathbf{F}_{\omega_1} \equiv \nabla_\mathbf{x} R_\omega(\mathbf{x};\mathbf{x}_0)\mid_{\mathbf{x}=\mathbf{x}_0} = - \frac{\sqrt{\omega}}{\pi}\sum_{n =0}^\infty c_n A_n I_n^\prime(\sqrt{\omega}\rho_0)\mathbf{e}_{\theta_0} \,, \end{equation} where $c_n = 1/2$ ($c_n = 1$) when $n = 0$ ($n > 0$). Observe in \eqref{Gunitregulargradself} that setting $\mathbf{x}_0 = \mathbf{0}$ results in the gradient being $\mathbf{0}$, since $\mathbf{x}_0 = \mathbf{0}$ is the equilibrium location of the spot. Finally, to compute $\mathcal{F}_{\omega_1}^{(1,1)}$ in \eqref{oneeigsymm2}, we use \eqref{gradGKdef} and take the gradient of the first component of \eqref{Gunitregulargradself} with respect to $\mathbf{x}_0$ using $\nabla_{\mathbf{x}_0} = \mathbf{e}_{\theta_0}\partial_{\rho_0} + \rho_0^{-1}\mathbf{e}_{\theta_0}^\prime\partial_{\theta_0}$ to obtain \begin{multline} \label{Fomega1} \nabla_{\mathbf{x}_0} F_{\omega_1}^{(1)} = -\mathbf{e}_{\theta_0} \cos\theta_0 \frac{\omega}{\pi} \sum_{n = 0}^\infty c_n \frac{K_n^\prime(\sqrt{\omega})}{I_n^\prime(\sqrt{\omega})}\left[\left( I_n^\prime(\sqrt{\omega}\rho_0) \right)^2 + I_n(\sqrt{\omega}\rho_0)I_n^{\prime\prime}(\sqrt{\omega}\rho_0) \right] \\ + \frac{1}{\rho_0}\mathbf{e}_{\theta_0}^\prime\sin\theta_0\frac{\sqrt{\omega}}{\pi}\sum_{n =0}^\infty c_n \frac{K_n^\prime(\sqrt{\omega})}{I_n^\prime(\sqrt{\omega})}I_n(\sqrt{\omega}\rho_0)I_n^\prime(\sqrt{\omega}\rho_0) \,, \end{multline} where $\mathbf{e}_{\theta_0}^\prime = (-\sin\theta_0, \cos\theta_0)^T$. Taking the limit as $\rho_0 \to 0^+$ in \eqref{Fomega1} yields \begin{equation} \label{Fomega1_2} \left(\begin{array}{c} \mathcal{F}_{\omega_1}^{(1,1)} \\ \mathcal{F}_{\omega_1}^{(1,2)} \end{array}\right) = \lim_{\rho_0 \to 0^+} \nabla_{\mathbf{x}_0} F_{\omega_1}^{(1)} = -\frac{\omega}{4\pi}\left\lbrack \frac{K_0^\prime(\sqrt{\omega})}{I_0^\prime(\sqrt{\omega})} + \frac{K_1^\prime(\sqrt{\omega})}{I_1^\prime(\sqrt{\omega})} \right\rbrack \left(\begin{array}{c} 1 \\ 0 \end{array}\right) \,. \end{equation} \noindent As expected, the second component of \eqref{Fomega1_2} is zero, since the matrix $\mathcal{F}_{\omega_1}$ must be diagonal due to the symmetry of the disk. It remains now to compute the Hessian term of the Neumann Green's function, $H_{11}^{(1,1)}$, and that of the Helmholtz Green's function, $H_{\omega_1}^{(1,1)}$. The Neumann Green's function $G(\rho)$ satisfying \eqref{GNall} with source at the origin is given in polar coordinates by \begin{equation} \label{GNunit} G(\rho) = -\frac{1}{2\pi}\log\rho + \frac{\rho^2}{4\pi} - \frac{3}{8\pi} \,. \end{equation} From \eqref{GNunit} and \eqref{GNloc}, we obtain that $H_{11}^{(1,1)} = (2\pi)^{-1}$.
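Numerically, \eqref{Fomega1_2} involves only Bessel quotients. A minimal Python sketch, with an arbitrary placeholder value of $\omega$, is
\begin{verbatim}
# Minimal sketch: evaluate the (1,1) entry of F_omega_1 for the unit
# disk from the limiting formula above.  omega is a placeholder value.
import numpy as np
from scipy.special import ivp, kvp   # derivatives of I_n and K_n

omega = 0.8 + 0.3j
rt = np.sqrt(omega)
F11 = -omega/(4*np.pi)*(kvp(0, rt)/ivp(0, rt) + kvp(1, rt)/ivp(1, rt))
print("F_omega_1^(1,1) =", F11)
\end{verbatim}
Both \texttt{ivp} and \texttt{kvp} accept complex arguments, which is required once $\omega = i\lambda_I\tau_0$ is purely imaginary at the Hopf point.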
The Helmholtz Green's function $G_\omega(\rho)$ satisfying \eqref{Gk} with source at the origin is given in polar coordinates by \begin{equation} \label{GHelmunit} G_\omega(\rho) = \frac{1}{2\pi}\left[K_0(\sqrt{\omega}\rho) - \frac{K_0^\prime(\sqrt{\omega})}{I_0^\prime(\sqrt{\omega})}I_0(\sqrt{\omega}\rho) \right] \,. \end{equation} Using the small-argument asymptotics of $K_0(z)$ and $I_0(z)$ in \eqref{GHelmunit}, we obtain from \eqref{Gkloc} the Hessian term \begin{equation} \label{GHelmunithess} H_{\omega_1}^{(1,1)} = \frac{1}{\pi}\left[ -\frac{\omega}{4}\log\frac{\sqrt{\omega}}{2} + \frac{1-\gamma}{4}\omega \right] - \frac{\omega}{4\pi}\frac{K_0^\prime(\sqrt{\omega})}{I_0^\prime(\sqrt{\omega})} \,. \end{equation} With $H_{11}^{(1,1)} = (2\pi)^{-1}$ and using \eqref{GHelmunithess} for $H_{\omega_1}^{(1,1)}$ and \eqref{Fomega1_2} for $\mathcal{F}_{\omega_1}^{(1,1)}$, we obtain from \eqref{oneeigsymm2} \begin{equation} \label{eigunit} 2 + \omega\left[\log\left(\frac{e^\gamma \eps\sqrt{\omega}}{2}\right) - \frac{K_1^\prime(\sqrt{\omega})}{I_1^\prime(\sqrt{\omega})}\right] = \frac{\lambda_0k_{11} - \omega k_{21}}{S} \,. \end{equation} Setting $\lambda_0 = i\lambda_I$ and replacing $\omega$ with $i\lambda_I\tau_0$ in \eqref{eigunit}, we recover the complex equation given in (4.51) of \cite{xie2017moving} for the Hopf stability threshold $\tau_0$ and corresponding bifurcation frequency $\lambda_I$. \subsection{The perturbed disk} \label{perturbeddisk} In this section, we compute the leading order correction to the Neumann and Helmholtz Green's functions when $\Omega$ is the perturbed disk with boundary parametrized by $(x_1, x_2) = (1 + \sigma f(\theta))(\cos\theta,\sin\theta)$, with $\eps \ll \sigma \ll 1$ and some function $f(\theta)$ periodic on the interval $[0, 2\pi)$. We use this calculation to obtain analytic insight into how domain geometry impacts the preferred direction of oscillation at the onset of instability. Since $f$ can be expressed as a Fourier series, and the leading order effects of the perturbation are linear, it suffices to perform the calculation for the individual modes $f(\theta) = \cos n\theta$ with $n \in \mathbb{N}$. Analysis of perturbations $\sin n\theta$ can be recovered by replacing $\theta \to \theta - \pi/(2n)$. We show in the following analysis that the $n=2$ mode impacts the bifurcation threshold, the oscillation frequency, as well as the oscillation mode at $\mathcal{O}(\sigma)$. We then use this analysis to briefly show that the $n \neq 2$ effects enter only at higher order in $\sigma$. We thus focus our calculations on the $n=2$ case, for which the perturbed disk $\Omega$ is slightly elliptical in shape with the major axis aligned along the $x_1$ direction. For $n=2$, we show that the mode of oscillation along the $x_1$-axis is the first to become unstable as $\tau$ is increased. That is, we show that the threshold corresponding to the $(1,0)$ mode of oscillation is smaller than that for the unit disk, and that the threshold of the $(0,1)$ mode is larger than that of the unit disk. The effect of the boundary perturbation on the localized variable $v(\mathbf{x})$ is exponentially small. As such, we need only expand \begin{equation} \label{usigma} u \sim u_0(\rho) + \sigma u_1(\rho, \theta) \,, \end{equation} and compute the boundary conditions for $u_1$ in terms of $u_0$.
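Before carrying out the perturbation calculation, we note that the unit-disk condition \eqref{eigunit} is itself easily solved numerically. The Python sketch below is illustrative only: the values of $S$, $\eps$, $k_{11}$, and $k_{21}$ are placeholders (in practice $k_{11}$ and $k_{21}$ would come from the quadratures of the previous section), and convergence of the root finder depends on the initial guess.
\begin{verbatim}
# Sketch: solve the scalar transcendental equation for (tau0, lamI)
# with lambda0 = i*lamI and omega = i*lamI*tau0.  Inputs are placeholders.
import numpy as np
from scipy.optimize import fsolve
from scipy.special import ivp, kvp

S, eps, k11, k21 = 3.0, 0.02, -2.0, 1.0   # placeholder values
gamma = np.euler_gamma

def eig_eq(z):
    tau0, lamI = z
    omega = 1j*lamI*tau0
    rt = np.sqrt(omega)
    lhs = 2.0 + omega*(np.log(np.exp(gamma)*eps*rt/2.0)
                       - kvp(1, rt)/ivp(1, rt))
    rhs = (1j*lamI*k11 - omega*k21)/S
    r = lhs - rhs
    return [r.real, r.imag]

tau0, lamI = fsolve(eig_eq, [0.5, 1.0])
print("Hopf threshold tau0 =", tau0, " frequency =", lamI/(2*np.pi))
\end{verbatim}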
Proceeding, we find that an outward pointing normal vector on $\partial\Omega$ is \begin{equation} \label{normvec} \mathbf{n} = (1+\sigma f(\theta))\mathbf{e}_\theta - \sigma f^\prime(\theta)\mathbf{e}_\theta^\prime \,. \end{equation} In polar coordinates, the homogeneous boundary condition $\nabla u \cdot \mathbf{n} = 0$ becomes \begin{equation} \label{pertbc} u_\rho - \frac{\sigma f^\prime}{(1+\sigma f)^2} u_\theta = 0 \,. \end{equation} Substituting \eqref{usigma} into \eqref{pertbc} and expanding to first order in $\sigma$, we obtain the boundary conditions \begin{equation} \label{pertbc2} u_{0\rho}(1) = 0 \,, \qquad u_{1\rho}(1,\theta) = - f(\theta) u_{0\rho\rho}(1) \,. \end{equation} We begin with the expansion of the Neumann Green's function $G \sim G_0 + \sigma G_1$, where $G_0$ is the solution given in \eqref{GNunit} for the unperturbed unit disk $\Omega_0$ with source at the origin. From \eqref{pertbc2}, the boundary value problem for $G_1$ is then \begin{equation} \label{G1bvp} \Delta G_{1} = 0 \,, \qquad G_{1\rho}(1,\theta) = -f(\theta)\frac{1}{\pi} \,; \qquad \int_{\Omega_0} \! G_1 \, d\Omega_0 = 0 \,, \end{equation} with regularity at $\rho = 0$. When $f(\theta) = \cos 2\theta$, the solution to \eqref{G1bvp} is \begin{equation} \label{G1sol} G_{1}(\rho,\theta) = -\frac{1}{2\pi}\rho^2\cos 2\theta \,, \end{equation} or $G_1(x_1,x_2) = (2\pi)^{-1}(x_2^2-x_1^2)$ in Cartesian coordinates. To leading order in $\sigma$, we therefore have that the Hessian matrix $H_{11}$ in \eqref{oneeigdef} is given by \begin{equation} \label{H11sigma} H_{11} \sim \frac{1}{2\pi}\mathcal{I}_2 + \sigma \frac{1}{\pi}\left(\begin{array}{cc} -1 & 0 \\ 0 & 1\end{array}\right) \,, \end{equation} where the leading order term in \eqref{H11sigma} is computed from the Neumann Green's function of the unperturbed disk given in \eqref{GNunit}. For the Helmholtz Green's function $G_\omega$ of \eqref{Gkeq}, we expand $G_{\omega} = G_{\omega_0} + \sigma G_{\omega_1}$, where $G_{\omega_0}$ is the leading order solution when $\rho > \rho_0$ given by \cite{chen2011stability} \begin{equation} \label{Gomega0} G_{\omega_0}(\rho,\theta; \rho_0, \theta_0) = \frac{1}{2\pi} \sum_{n = -\infty}^\infty e^{-in(\theta-\theta_0)} F_n(\rho) I_n(\sqrt{\omega}\rho_0) \,, \end{equation} where $F_n(\rho) \equiv K_n(\sqrt{\omega}\rho) - K_n^\prime(\sqrt{\omega})I_n(\sqrt{\omega}\rho)/I_n^\prime(\sqrt{\omega})$. We note that \eqref{Gomega0} is equivalent to \eqref{Gunit} when $\rho > \rho_0$. While \eqref{Gunit} has the singularity extracted analytically from the sum, \eqref{Gomega0} is more convenient to work with when $\mathbf{x}$ is not near $\mathbf{x}_0$. From \eqref{pertbc2}, the boundary value problem for $G_{\omega_1}$ is then \begin{equation} \label{Gw1bvp} \Delta G_{\omega_1} - \omega G_{\omega_1} = 0 \,, \qquad G_{\omega_1 \rho}(1,\theta) = \frac{1}{2\pi} \textrm{Re}\left[\sum_{n = -\infty}^\infty e^{-i(n-2)\theta + in\theta_0} F_n^{\prime\prime}(1) I_n(\sqrt{\omega}\rho_0) \right] \,. \end{equation} The solution to \eqref{Gw1bvp} is \begin{equation} \label{Gw1sol} G_{\omega_1}(\rho,\theta;\rho_0,\theta_0) = \frac{1}{2\pi\sqrt{\omega}}\textrm{Re}\left[\sum_{n = -\infty}^\infty e^{-i(n-2)\theta + in\theta_0} F_n^{\prime\prime}(1) I_n(\sqrt{\omega}\rho_0)\frac{I_{n-2}(\sqrt{\omega}\rho)}{I_{n-2}^\prime(\sqrt{\omega})} \right] \,.
\end{equation} Taking only the real part for $G_{\omega_1}$, we obtain \begin{equation} \label{Gw1sol2} G_{\omega_1}(\rho,\theta;\rho_0,\theta_0) = \frac{1}{2\pi\sqrt{\omega}}\left[\sum_{n = -\infty}^\infty \cos\left[(n-2)\theta - n\theta_0\right] F_n^{\prime\prime}(1) I_n(\sqrt{\omega}\rho_0)\frac{I_{n-2}(\sqrt{\omega}\rho)}{I_{n-2}^\prime(\sqrt{\omega})} \right] \,. \end{equation} We now compute the contributions of the domain perturbation to the quantities $\mathcal{F}_{\omega_1}^{(k,k)}$ and $H_{\omega_1}^{(k,k)}$, $k = 1, 2$, of \eqref{oneeigsymm1}. We first compute the correction to $\mathbf{F}_{\omega}$, the gradient of the regular part evaluated at $\mathbf{x}_0$. Since $G_{\omega_1}$ contains no singularities, we use $\nabla_{\mathbf{x}} = \mathbf{e}_{\theta}\partial_{\rho} + \rho^{-1}\mathbf{e}_{\theta}^\prime\partial_{\theta}$ to obtain \begin{subequations} \begin{equation} \label{gradGw1} \mathbf{g} \equiv \nabla_{\mathbf{x}}G_{\omega_1} \mid_{\mathbf{x}=\mathbf{x}_0} = \mathbf{e}_{\theta_0} \cos 2\theta_0 A(\rho_0) + \mathbf{e}_{\theta_0}^\prime \sin 2\theta_0 B(\rho_0) \,, \end{equation} where \begin{equation} \label{gradGw1def} \begin{gathered} A(\rho_0) \equiv \frac{1}{2\pi} \sum_{n=-\infty}^\infty \frac{F_n^{\prime\prime}(1)}{I_{n-2}^\prime(\sqrt{\omega})}I_n(\sqrt{\omega}\rho_0)I_{n-2}^\prime(\sqrt{\omega}\rho_0) \,, \\ B(\rho_0) \equiv \frac{1}{2 \pi \sqrt{\omega} \rho_0} \sum_{n=-\infty}^\infty (n-2) \frac{F_n^{\prime\prime}(1)}{I_{n-2}^\prime(\sqrt{\omega})}I_n(\sqrt{\omega}\rho_0)I_{n-2}(\sqrt{\omega}\rho_0) \,. \end{gathered} \end{equation} \end{subequations} The leading order correction to $\mathcal{F}_{\omega_1}$ is given by $\lim_{\rho_0 \to 0^+} \left(\begin{array}{cc} \nabla_{\mathbf{x}_0} \mathbf{g}^{(1)} & \nabla_{\mathbf{x}_0} \mathbf{g}^{(2)} \end{array}\right)$, where $\mathbf{g}^{(1)}$ and $\mathbf{g}^{(2)}$ are the first and second components of the vector $\mathbf{g}$ defined in \eqref{gradGw1}. We next use the fact that for $|z| \ll 1$, we have \begin{equation} \label{limbes} \begin{gathered} I_n(z)I_{n-2}^\prime(z) \sim \begin{cases} \frac{z}{4} & n = 0, 1 \\ \mo(z^2) & \textrm{else} \end{cases} \,; \\ I_n(z)I_{n-2}(z) \sim \begin{cases} \frac{z^2}{8} & n = 0, 2 \\ \frac{z^2}{4} & n = 1 \\ \mo(z^3) & \textrm{else} \end{cases} \,. \end{gathered} \end{equation} We thus obtain that \begin{equation} \label{Fcoor2} \lim_{\rho_0 \to 0^+} \left(\begin{array}{cc} \nabla_{\mathbf{x}_0} \mathbf{g}^{(1)} &\nabla_{\mathbf{x}_0} \mathbf{g}^{(2)} \end{array}\right) = \frac{1}{8\pi}\sqrt{\omega}\left[\frac{F_0^{\prime\prime}(1)}{I_{-2}^\prime(\sqrt{\omega})} + \frac{F_1^{\prime\prime}(1)}{I_{-1}^\prime(\sqrt{\omega})}\right]\left(\begin{array}{cc} 1 & 0 \\ 0 & -1\end{array}\right) \,. \end{equation} To leading order in $\sigma$, the matrix $\mathcal{F}_{\omega_1}$ in \eqref{oneeigdef} is given by \begin{equation} \label{Fwsigma} \mathcal{F}_{\omega_1} \sim -\frac{\omega}{4\pi}\left\lbrack \frac{K_0^\prime(\sqrt{\omega})}{I_0^\prime(\sqrt{\omega})} + \frac{K_1^\prime(\sqrt{\omega})}{I_1^\prime(\sqrt{\omega})} \right\rbrack \mathcal{I}_2 + \sigma \frac{1}{8\pi}\sqrt{\omega}\left[\frac{F_0^{\prime\prime}(1)}{I_{-2}^\prime(\sqrt{\omega})} + \frac{F_1^{\prime\prime}(1)}{I_{-1}^\prime(\sqrt{\omega})}\right]\left(\begin{array}{cc} 1 & 0 \\ 0 & -1\end{array}\right) \,, \end{equation} where the leading order term of \eqref{Fwsigma} was computed in \eqref{Fomega1_2}.
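As an independent consistency check on \eqref{pertbc}, \eqref{G1bvp}, and \eqref{G1sol}, one can verify symbolically that the two-term Neumann expansion $G_0 + \sigma G_1$ satisfies the no-flux condition on the perturbed boundary to $\mathcal{O}(\sigma)$; a short SymPy sketch (an analogous numerical check applies to the Helmholtz correction \eqref{Gw1sol2}):
\begin{verbatim}
import sympy as sp

rho, theta, sigma = sp.symbols('rho theta sigma', positive=True)
f = sp.cos(2*theta)

G0 = -sp.log(rho)/(2*sp.pi) + rho**2/(4*sp.pi) - sp.Rational(3, 8)/sp.pi  # (GNunit)
G1 = -rho**2*sp.cos(2*theta)/(2*sp.pi)                                    # (G1sol)
G = G0 + sigma*G1

# Left-hand side of the boundary condition (pertbc) on rho = 1 + sigma*f.
bc = G.diff(rho) - sigma*f.diff(theta)/(1 + sigma*f)**2*G.diff(theta)
bc = bc.subs(rho, 1 + sigma*f)
print(sp.simplify(sp.series(bc, sigma, 0, 2).removeO()))   # prints 0
\end{verbatim}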
Finally, for the correction to the Hessian term $H_{\omega_1}$ of \eqref{oneeigdef}, we set $(\rho,\theta) = (\rho_0,\theta_0)$ in \eqref{Gw1sol2} and let $\rho_0 \to 0^{+}$ to obtain \begin{equation} \label{Gw1sol2hess1} G_{\omega_1}(\mathbf{x}_0;\mathbf{x}_0) \sim \frac{\sqrt{\omega}}{8\pi}\left[\frac{F_0^{\prime\prime}(1)}{I_{-2}^\prime(\sqrt{\omega})} + \frac{F_1^{\prime\prime}(1)}{I_{-1}^\prime(\sqrt{\omega})} \right] (x_2^2 - x_1^2) \,, \end{equation} yielding the two-term expansion for $H_{\omega_1}$ \begin{multline} \label{Hw1sigma} H_{\omega_1} \sim \left\lbrace \frac{1}{\pi}\left[ -\frac{\omega}{4}\log\frac{\sqrt{\omega}}{2} + \frac{1-\gamma}{4}\omega \right] - \frac{\omega}{4\pi}\frac{K_0^\prime(\sqrt{\omega})}{I_0^\prime(\sqrt{\omega})}\right\rbrace \mathcal{I}_2 + \\ \sigma\frac{\sqrt{\omega}}{4\pi}\left[\frac{F_0^{\prime\prime}(1)}{I_{-2}^\prime(\sqrt{\omega})} + \frac{F_1^{\prime\prime}(1)}{I_{-1}^\prime(\sqrt{\omega})} \right]\left(\begin{array}{cc} -1 & 0 \\ 0 & 1\end{array}\right) \,, \end{multline} where the leading order term in \eqref{Hw1sigma} corresponds to that for the unperturbed unit disk computed in \eqref{GHelmunithess}. We now substitute \eqref{H11sigma} for $H_{11}$, \eqref{Fwsigma} for $\mathcal{F}_{\omega_1}$, and \eqref{Hw1sigma} for $H_{\omega_1}$ into \eqref{oneeigsymm1}. We perturb the eigenvalue $\lambda_0 = i(\lambda_{I_0} + \sigma \lambda_{I_1})$ and the stability threshold $\tau_0 = \hat{\tau}_0 + \sigma\hat{\tau}_1$ so that $\omega = \lambda_0\tau_0 \equiv \omega_0 + \sigma \omega_1$, where $\omega_0 = i\lambda_{I_0}\hat{\tau}_0$ and $\omega_{1} = i(\lambda_{I_1}\hat{\tau}_0 + \lambda_{I_0}\hat{\tau}_1)$. The calculation below will show that $\hat{\tau}_1 < 0$ for the $(1,0)$ oscillation mode while $\hat{\tau}_1 > 0$ for the $(0,1)$ oscillation mode, implying that the preferred direction of oscillation is along the major axis of the perturbed unit disk. From \eqref{oneeigsymm1}, we obtain the leading order eigenvalue problem \eqref{eigunit} for $\omega_0$ and $\lambda_{I_0}$ corresponding to the unperturbed problem, while at $\mathcal{O}(\sigma)$, we obtain the two uncoupled equations for $\omega_1$ and $\lambda_{I_1}$ \begin{subequations} \label{Osigmaeig} \begin{multline} \label{Osigmaeig1} \omega_1\left[\log\frac{e^\gamma\eps}{2} + \frac{1}{2}\log\omega_0 - \frac{K_1^\prime(\sqrt{\omega_0})}{I_1^\prime(\sqrt{\omega_0})} \right] + \omega_0\left[\frac{1}{2}\frac{\omega_1}{\omega_0} - \omega_1 Q(\sqrt{\omega_0}) \right] \\ - \frac{1}{\pi} + \frac{3}{8\pi}\sqrt{\omega_0} \left[ \frac{F_0^{\prime\prime}(1)}{I_{-2}^\prime(\sqrt{\omega_0})} + \frac{F_1^{\prime\prime}(1)}{I_{-1}^\prime(\sqrt{\omega_0})} \right] = \frac{ik_{11}\lambda_{I_1} - k_{21}\omega_1}{S} \,, \end{multline} \begin{multline} \label{Osigmaeig2} \omega_1\left[\log\frac{e^\gamma\eps}{2} + \frac{1}{2}\log\omega_0 - \frac{K_1^\prime(\sqrt{\omega_0})}{I_1^\prime(\sqrt{\omega_0})} \right] + \omega_0\left[\frac{1}{2}\frac{\omega_1}{\omega_0} - \omega_1 Q(\sqrt{\omega_0}) \right] \\ + \frac{1}{\pi} - \frac{3}{8\pi}\sqrt{\omega_0} \left[ \frac{F_0^{\prime\prime}(1)}{I_{-2}^\prime(\sqrt{\omega_0})} + \frac{F_1^{\prime\prime}(1)}{I_{-1}^\prime(\sqrt{\omega_0})} \right] = \frac{ik_{11}\lambda_{I_1} - k_{21}\omega_1 }{S} \,, \end{multline} where the function $Q(z)$ is defined by \begin{equation} \label{Osigmaeig1def} Q(z) = \frac{1}{2z}\frac{(z^2+1)\left[K_1(z)I_0(z) + K_0(z)I_1(z)\right]}{(zI_0(z) - I_1(z))^2} \,.
\end{equation} \end{subequations} The signs in \eqref{Osigmaeig1} correspond to the $(1,0)$ mode of oscillation, while those in \eqref{Osigmaeig2} correspond to the $(0,1)$ mode. Rearranging \eqref{Osigmaeig1} and \eqref{Osigmaeig2}, we obtain \begin{subequations} \label{Osigmaeigall} \begin{multline} \label{Osigmaeig3} \omega_1\left[\log\frac{e^\gamma\eps}{2} + \frac{1}{2}\log\omega_0 - \frac{K_1^\prime(\sqrt{\omega_0})}{I_1^\prime(\sqrt{\omega_0})} + \frac{1}{2} - \omega_0Q(\sqrt{\omega_0}) + \frac{k_{21}}{S}\right] - \frac{ik_{11}\lambda_{I_1}}{S} \\ = \frac{1}{\pi} - \frac{3}{8\pi}\sqrt{\omega_0} \left[ \frac{F_0^{\prime\prime}(1)}{I_{-2}^\prime(\sqrt{\omega_0})} + \frac{F_1^{\prime\prime}(1)}{I_{-1}^\prime(\sqrt{\omega_0})} \right] \,, \end{multline} \begin{multline} \label{Osigmaeig4} \omega_1\left[\log\frac{e^\gamma\eps}{2} + \frac{1}{2}\log\omega_0 - \frac{K_1^\prime(\sqrt{\omega_0})}{I_1^\prime(\sqrt{\omega_0})} + \frac{1}{2} - \omega_0Q(\sqrt{\omega_0}) + \frac{k_{21}}{S}\right] - \frac{ik_{11}\lambda_{I_1}}{S} \\ = -\frac{1}{\pi} + \frac{3}{8\pi}\sqrt{\omega_0} \left[ \frac{F_0^{\prime\prime}(1)}{I_{-2}^\prime(\sqrt{\omega_0})} + \frac{F_1^{\prime\prime}(1)}{I_{-1}^\prime(\sqrt{\omega_0})} \right] \,. \end{multline} \end{subequations} Noting that $\omega_1 = i(\lambda_{I_1}\hat{\tau}_0 + \lambda_{I_0}\hat{\tau}_1)$ is pure imaginary, we equate the real and imaginary parts of \eqref{Osigmaeigall} to obtain \begin{subequations} \label{Osigmaeigall2} \begin{multline} \label{Osigmaeig5} -(\lambda_{I_1}\hat{\tau}_0 + \lambda_{I_0}\hat{\tau}_1)\textrm{Im}\left\lbrace\frac{1}{2}\log\omega_0 - \frac{K_1^\prime(\sqrt{\omega_0})}{I_1^\prime(\sqrt{\omega_0})} - \omega_0Q(\sqrt{\omega_0}) \right\rbrace \\ = \pm \frac{1}{\pi} \mp \frac{3}{8\pi} \textrm{Re}\left\lbrace \sqrt{\omega_0} \left[ \frac{F_0^{\prime\prime}(1)}{I_{-2}^\prime(\sqrt{\omega_0})} + \frac{F_1^{\prime\prime}(1)}{I_{-1}^\prime(\sqrt{\omega_0})} \right] \right\rbrace \,, \end{multline} \begin{multline} \label{Osigmaeig6} (\lambda_{I_1}\hat{\tau}_0 + \lambda_{I_0}\hat{\tau}_1)\textrm{Re}\left\lbrace \frac{k_{11}}{S\hat{\tau}_0} + \frac{1}{2} - \omega_0Q(\sqrt{\omega_0}) \right\rbrace - \frac{k_{11}\lambda_{I_1}}{S} \\ = \mp \frac{3}{8\pi} \textrm{Im}\left\lbrace\sqrt{\omega_0} \left[ \frac{F_0^{\prime\prime}(1)}{I_{-2}^\prime(\sqrt{\omega_0})} + \frac{F_1^{\prime\prime}(1)}{I_{-1}^\prime(\sqrt{\omega_0})} \right] \right\rbrace \,, \end{multline} \end{subequations} where, in \eqref{Osigmaeig6}, we have used (4.52b) of \cite{xie2017moving} to obtain that the leading order threshold $\hat{\tau}_0$ satisfies the relationship \begin{equation} \label{tau0hat} \textrm{Re}\left\lbrace \log\frac{e^\gamma \eps}{2} + \frac{1}{2}\log\omega_0 -\frac{K_1^\prime(\sqrt{\omega_0})}{I_1^\prime(\sqrt{\omega_0}) } + \frac{k_{21}}{S} \right\rbrace = \frac{k_{11}}{S\hat{\tau}_0} \,. \end{equation} Observe that $\hat{\tau}_0 \sim 1/|\log\eps| \ll 1$. In \eqref{Osigmaeigall2}, the top (bottom) sign corresponds to the $(1,0)$ ($(0,1)$) oscillation mode.
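For reference, once the leading order pair $(\omega_0, \lambda_{I_0})$ is known, the relation \eqref{tau0hat} determines $\hat{\tau}_0$ directly. The following is a minimal Python sketch; the values of $S$, $k_{11}$, $k_{21}$, and $\eps$ are illustrative placeholders, and $\omega_0$ is the parameter-independent value quoted below:
\begin{verbatim}
import numpy as np
from scipy.special import ivp, kvp

S, k11, k21, eps = 4.0, -1.0, 0.5, 0.01   # illustrative placeholders
w0 = 3.02603687j                          # leading order omega_0 (see below)
sw = np.sqrt(w0)
bracket = (np.log(np.exp(np.euler_gamma)*eps/2.0) + 0.5*np.log(w0)
           - kvp(1, sw)/ivp(1, sw) + k21/S)
tau0hat = k11/(S*bracket.real)            # from (tau0hat)
print(tau0hat)
\end{verbatim}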
Solving for the quantity $\lambda_{I_1}\hat{\tau}_0 + \lambda_{I_0}\hat{\tau}_1$ in \eqref{Osigmaeig5}, we obtain \begin{equation} \label{omega1} (\lambda_{I_1}\hat{\tau}_0 + \lambda_{I_0}\hat{\tau}_1) \equiv \tilde{\omega}_1 = \frac{\mp\frac{1}{\pi} \pm \frac{3}{8\pi} \textrm{Re}\left\lbrace \sqrt{\omega_0} \left[ \frac{F_0^{\prime\prime}(1)}{I_{-2}^\prime(\sqrt{\omega_0})} + \frac{F_1^{\prime\prime}(1)}{I_{-1}^\prime(\sqrt{\omega_0})} \right] \right\rbrace}{\textrm{Im}\left\lbrace\frac{1}{2}\log\omega_0 - \frac{K_1^\prime(\sqrt{\omega_0})}{I_1^\prime(\sqrt{\omega_0})} - \omega_0Q(\sqrt{\omega_0}) \right\rbrace} \,. \end{equation} We note that there are no parameters on the right-hand side of \eqref{omega1}, as the numerical value for $\omega_0$ was given in (4.52c) of \cite{xie2017moving} as $\omega_0 = 3.02603687i$ and shown to be independent of parameters of the original PDE system \eqref{schnak}. From \eqref{Osigmaeig6}, we solve for $\lambda_{I_1}$ to obtain \begin{equation} \label{lambda1} \lambda_{I_1} = \frac{\tilde{\omega}_1}{\hat{\tau}_0} + \frac{S}{k_{11}}\left[\tilde{\omega}_1\textrm{Re}\left\lbrace \frac{1}{2} - \omega_0Q(\sqrt{\omega_0}) \right\rbrace \pm \frac{3}{8\pi} \textrm{Im}\left\lbrace\sqrt{\omega_0} \left[ \frac{F_0^{\prime\prime}(1)}{I_{-2}^\prime(\sqrt{\omega_0})} + \frac{F_1^{\prime\prime}(1)}{I_{-1}^\prime(\sqrt{\omega_0})} \right] \right\rbrace \right] \,. \end{equation} Notice that as $\eps \to 0$, since $\hat{\tau}_0 \sim 1/|\log\eps|$, the first term of \eqref{lambda1} dominates and $\lambda_{I_1} \sim \tilde{\omega}_1|\log\eps|$. That is, $\lambda_{I_1}$ has the same sign as $\tilde{\omega}_1$ as $\eps \to 0$. Finally, with $\hat{\tau}_1 = (\tilde{\omega}_1 - \lambda_{I_1}\hat{\tau}_0)/\lambda_{I_0}$, we use \eqref{lambda1} for $\lambda_{I_1}$ to obtain \begin{equation} \label{tau1} \hat{\tau}_1 = -\frac{S\hat{\tau}_0}{k_{11}\lambda_{I_0}}\left[ \tilde{\omega}_1 \textrm{Re}\left\lbrace \frac{1}{2} - \omega_0Q(\sqrt{\omega_0}) \right\rbrace \pm \frac{3}{8\pi} \textrm{Im}\left\lbrace\sqrt{\omega_0} \left[ \frac{F_0^{\prime\prime}(1)}{I_{-2}^\prime(\sqrt{\omega_0})} + \frac{F_1^{\prime\prime}(1)}{I_{-1}^\prime(\sqrt{\omega_0})} \right] \right\rbrace \right] \,. \end{equation} Noting that the spot strength $S$, the leading order threshold $\hat{\tau}_0$, and the leading order frequency $\lambda_{I_0}$ are all positive while $k_{11} < 0$ from Fig. \ref{kappafig}, we use $\omega_0 = 3.02603687i$ in \eqref{omega1} and \eqref{tau1} to find that $\hat{\tau}_1 \approx -(1.083)\hat{\tau}_0/(|k_{11}|\lambda_{I_0}) < 0$ for the upper signs (the $(1,0)$ mode), while $\hat{\tau}_1 \approx (1.083)\hat{\tau}_0/(|k_{11}|\lambda_{I_0}) > 0$ for the lower signs (the $(0,1)$ mode). We thus conclude that, for the perturbed disk with radius $r = 1 + \sigma\cos 2\theta$ with $0 < \sigma \ll 1$, the preferred mode of oscillation is along the major axis, with corresponding threshold $\hat{\tau} \sim \hat{\tau}_0 + \sigma\hat{\tau}_1$ less than that for the unit disk. The threshold corresponding to oscillation along the minor axis is larger than that for the unit disk. Furthermore, since $\tilde{\omega}_1 < 0$ for the $(1,0)$ mode, the frequency of oscillation is less than that for the unit disk. Likewise, $\tilde{\omega}_1 > 0$ for the $(0,1)$ mode so that the oscillation frequency along that direction increases with the perturbation of the unit disk. A perturbation of the form $f(\theta) = a_2\cos 2\theta + b_2\sin 2\theta$ can be written as $f(\theta) = C_2\cos (2\theta -\phi)$, where $C_2 = \sqrt{a_2^2 + b_2^2}$ and $\cos\phi = a_2/C_2$, $\sin\phi = b_2/C_2$.
This simply constitutes a counterclockwise rotation of the perturbation by $\phi/2$. In this case, the direction of oscillation is along the line that makes an angle $\phi/2$ with the $x_1$ axis. We now briefly comment on the effect of perturbations of the form $f(\theta) = \cos m\theta$ for $m \neq 2$. The $m = 1$ perturbation constitutes a translation of the unit disk by $\sigma$ in the $x_1$ direction along with an $\mathcal{O}(\sigma^2)$ deformation away from circular geometry. That is, the parametric equation for this perturbed disk is $x_1 = \sigma + (1+ \sigma^2 g(\theta))\cos\theta$ and $x_2 = (1+ \sigma^2 g(\theta))\sin\theta$ with $g(\theta) = (1/4)(1-\cos 2\theta) + \mathcal{O}(\sigma^2)$. The geometry deformation to leading order is thus effectively a mode-$2$ perturbation with negative coefficient. From the above analysis, the $m=1$ perturbation thus selects the $(0,1)$ oscillation mode as the dominant mode. We emphasize, however, that this is an $\mathcal{O}(\sigma^2)$ effect in contrast to the $\mathcal{O}(\sigma)$ effect of the $m = 2$ mode. A similar argument asserts that the effect of $f(\theta) = \sin\theta$ is also $\mathcal{O}(\sigma^2)$. When $m \geq 3$, the leading order correction $G_1(\rho,\theta)$ to the Neumann Green's function in \eqref{G1sol} would be replaced by an $\mathcal{O}(\rho^m)$ function, which has a zero Hessian at the origin. It therefore does not contribute to the eigenvalue problem \eqref{oneeigsymm2}. For the Helmholtz Green's function, we have that $I_m(z) \sim z^{|m|}$ for $|z| \ll 1$ so that $I_{n}(z)I_{n-m}(z) \sim z^{\alpha}$ where $\alpha = |n| + |n-m| \geq |m|$, while $I_{n}(z)I_{n-m}^\prime(z) \sim z^{\alpha}$ where $\alpha = |n| + \big| |n-m| - 1 \big| \geq |m| - 1$. From \eqref{limbes}, we observe that modes $m \geq 3$ do not contribute towards $\mathcal{F}_{\omega_1}$ in \eqref{Fwsigma} and $H_{\omega_1}$ in \eqref{Hw1sigma}. Since the leading order corrections to the Neumann and Helmholtz Green's functions are linear in the perturbation $f(\theta)$, we can conclude that for a $2\pi$-periodic function $f(\theta)$, the leading order contribution to the eigenvalue problem comes from the coefficients $a_2 \equiv \pi^{-1}\int_0^{2\pi} \!f(\theta)\cos 2\theta \, d\theta$ and $b_2 \equiv \pi^{-1}\int_0^{2\pi} \!f(\theta)\sin 2\theta \, d\theta$ in the Fourier series of $f$. In particular, if $a_2 = b_2= 0$, the bifurcation threshold (to leading order in $\sigma$) will remain unchanged from that of the unit disk. When $a_2$ and $b_2$ are not both zero, the bifurcation threshold will decrease, and the preferred direction of oscillation will be along a line that makes an angle $\phi/2$ with the $x_1$-axis, where $\cos\phi = a_2/\sqrt{a_2^2 + b_2^2}$, $\sin\phi = b_2/\sqrt{a_2^2+b_2^2}$. Furthermore, the corresponding bifurcation frequency will also decrease. While we have not performed this analysis for a near-square rectangle with edge lengths $L$ and $L+\sigma$ for $\eps \ll \sigma \ll 1$, our analysis for the perturbed disk predicts that the preferred oscillation direction in a rectangle would be parallel to the longer edge. We show a numerical example of this scenario, along with several others illustrating our single- and multi-spot theory, in the next section. \section{Numerical validation} \label{numerics} In this section, we numerically validate our theoretical result \eqref{finaleigblock} by solving the full time-dependent PDE system \eqref{schnak} using the finite element solver FlexPDE 7 \cite{flex}.
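Before describing the PDE computations, we note that the mode-$2$ Fourier coefficients of a given boundary profile, which by the analysis of \S \ref{perturbeddisk} determine the leading order shift in the threshold and the preferred oscillation direction, are easily extracted numerically. A minimal Python sketch with an illustrative profile:
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

def f(theta):   # illustrative boundary perturbation
    return 0.3*np.cos(2*theta) + 0.4*np.sin(2*theta) + 0.25*np.cos(5*theta)

a2 = quad(lambda t: f(t)*np.cos(2*t), 0, 2*np.pi)[0]/np.pi
b2 = quad(lambda t: f(t)*np.sin(2*t), 0, 2*np.pi)[0]/np.pi
phi = np.arctan2(b2, a2)
print(a2, b2, phi/2)   # oscillation along angle phi/2 to the x1-axis
\end{verbatim}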
For single- and multi-spot equilibria of \eqref{schnak} on various domain geometries, we verify both the Hopf bifurcation threshold $\tau = \eps^{-2}\hat{\tau}^*$ for the onset of oscillatory instabilities and the direction(s) of oscillations, where $\hat{\tau}^*$ is the minimum of all $\hat{\tau}$ satisfying the complex eigenvalue problem \eqref{finaleigblock}. Before we describe our results, we first outline our procedures. Initial $N$-spot equilibrium states for which we test stability are obtained by initializing an $N$-bump pattern in \eqref{schnak} with $\tau$ set well below any Hopf thresholds. We then evolve \eqref{schnak} until the time $t$ is sufficiently large that changes in the solution are no longer observed; observe that steady state solutions of \eqref{schnak} are unaffected by the value of $\tau$. Using this equilibrium solution as an initial condition, we trial various values of $\tau$ to test stability; we say that the numerical (or ``exact'') value of the Hopf bifurcation threshold is $\hat{\tau}_f$ if no oscillations are observed when $\red{\tau_0} < \hat{\tau}_f-0.005$ and oscillations are observed when $\red{\tau_0} > \hat{\tau}_f + 0.005$. To compute $\hat{\tau}$ from the matrix-eigenvalue problem \eqref{det0}, we require quantities associated with the Neumann and Helmholtz Green's functions of \eqref{GNall} and \eqref{Gk}, respectively, for the domains $\Omega$ that we consider. For the unit disk and rectangle, analytic formulas for both are given in \cite{chen2011stability}. For the half disk, we simply employ the unit disk formula with the method of images to obtain the reflective boundary condition on the straight segment of the half-disk. For the more complex geometries, we employ the finite element solver from MATLAB's PDE Toolbox. In the implementation, we solve a regular equation for $G - G^{\textrm{free}}$ with nonhomogeneous Neumann conditions, where $G$ is the desired Green's function in $\Omega$ and $G^{\textrm{free}}$ is the corresponding free space Green's function. Finally, the equilibrium locations $\mathbf{x}_j$ and corresponding spot strengths $S_j$ are obtained by simultaneously solving the $3N+1$ system of nonlinear equations \eqref{Sj}, \eqref{eqbm1}, and \eqref{matchj}. \red{We remark that there are as many as $2N$ distinct pairs of solutions $(\hat{\omega}, \hat{\lambda}_I)$ to the nonlinear system \eqref{det0}. In the $N$-spot numerical experiments below, we were not always able to find all thresholds when $N$ was sufficiently large. We make a note of this below where relevant. In all cases, however, the smallest threshold we found corresponded to the most unstable mode as observed numerically in the PDE simulations.} \subsection{Hopf bifurcation of a single spot} In this subsection, we investigate the Hopf bifurcation of small eigenvalues of a single spot solution to \eqref{schnak} in four different domains $\Omega$. The experiments are: \begin{enumerate} \item $\Omega$ is a perturbed unit disk with radius \begin{equation} \label{pertf} \begin{gathered} r = 1 + \sigma f(\theta) \,; \qquad \theta \in \lbrack 0, 2\pi) \,, \quad \sigma = 20\eps \,; \\ f(\theta) = \frac{1}{4}\left \lbrack 0.1\cos\theta + a_2\cos 2\theta + \cos 3\theta + \cos 4\theta \right\rbrack \,. \end{gathered} \end{equation} In one numerical simulation, we take $a_2 = 0.1$, while we take $a_2 = -0.1$ in the other simulation. We confirm our prediction of \S \ref{perturbeddisk} for the impact of the $a_2$ coefficient on the dominant mode of oscillation.
\item $\Omega$ is a half-disk with radius $1$ (Fig.~\ref{halfdisk1}). The predicted Hopf threshold is $\hat{\tau}^* = 0.0712$ while the numerical result was found to be $\hat{\tau}_f = 0.072$. \item $\Omega$ is a rectangle of width $2$ and length $1$ (see Fig.~\ref{rectangle1}). The predicted Hopf threshold is $\hat{\tau}^* = 0.0615$, while the numerical threshold was found to be $\hat{\tau}_f = 0.062$. \item $\Omega$ is an asymmetric, non-simply connected domain consisting of the same rectangle as that in Fig.~\ref{rectangle1} with two differently sized holes in the shape of disks (see Fig.~\ref{rectangle2}). All boundary conditions are reflective. The predicted Hopf threshold is $\hat{\tau}^* = 0.065$ while the numerical threshold was found to be $\hat{\tau}_f = 0.0615$. \end{enumerate} In each of the figures below showing snapshots of the solution for the localized activator component at various points in time, the red arrows indicate the direction of motion of the spot(s) at the given instant. We note that since the strength of the spot is a function of its location, spot-splitting may occur during the course of oscillations. In the multi-spot solutions, spot annihilation events may occur when spot distances decrease due to oscillation. We exclude illustrations of these phenomena and focus only on the oscillatory dynamics. \textbf{Experiment 1.} \, In this experiment, we verify the calculations of \S \ref{perturbeddisk} on how the perturbation of the disk impacts the dominant mode of oscillation. For the perturbed disk of \eqref{pertf} with $a_2 = 0.1 > 0$, Fig. \ref{a2positive} shows the oscillations of the spot at onset as well as at long times. In particular, we observe that the $(1,0)$ oscillation mode (i.e., the horizontal mode) is the dominant mode, consistent with the result of \S \ref{perturbeddisk}. \begin{figure}[!htb] \centering \includegraphics[width=0.98\textwidth]{imagespng/a2positive.eps} \caption{Experiment 1 -- Hopf bifurcation of a single spot in a perturbed disk of the form \eqref{pertf} with $a_2 = 0.1 > 0$. In the top half, we indicate with arrows the direction of spot motion at the particular time. In the bottom half, we plot the $x_1$ and $x_2$ coordinates of the spot location as functions of time. For increasing time, the horizontal oscillation mode emerges as dominant. The other parameters are $S = 4$, $\eps = 0.01$, and $\tau_0 = 0.084$. } \label{a2positive} \end{figure} In Fig. \ref{a2negative}, we set $a_2 = -0.1 < 0$, leading to vertical oscillations of the spot. That is, we observe that when $a_2 < 0$, the $(0,1)$ oscillation mode is dominant, consistent with the result of \S \ref{perturbeddisk}. We emphasize that the only difference between Figs. \ref{a2positive} and \ref{a2negative} is the sign of the $a_2$ coefficient in \eqref{pertf}. Furthermore, we note that $|a_2|$ is relatively small in comparison to the coefficients of the $\cos 3\theta$ and $\cos 4\theta$ terms, and yet it is the coefficient that dictates which mode of oscillation is preferred. This is due to the fact that the effect of the $\cos 2\theta$ perturbation enters at leading order in $\sigma$, while those of higher modes enter at $\mathcal{O}(\sigma^2)$. \begin{figure}[!htb] \centering \includegraphics[width=0.98\textwidth]{imagespng/a2negative.eps} \caption{Experiment 1 -- Hopf bifurcation of a single spot in a perturbed disk of the form \eqref{pertf} with $a_2 = -0.1 < 0$. In the top half, we indicate with arrows the direction of spot motion at the particular time.
In the bottom half, we plot the $x_1$ and $x_2$ coordinates of the spot location as functions of time. For increasing time, the vertical oscillation mode emerges as dominant. The other parameters are $S = 4$, $\eps = 0.01$, and $\tau_0 = 0.084$. } \label{a2negative} \end{figure} \textbf{Experiment 2.} \, In the left portion of Fig.~\ref{halfdisk1}, we show the unstable oscillations of one spot in the half disk when $\tau$ exceeds the Hopf threshold. On the right, we plot the $x_1$ and $x_2$ coordinates of the spot-center as a function of time; observe that the initial oscillations at onset are only in the $x_1$-direction. Indeed, the predicted Hopf thresholds are $\hat{\tau} = 0.0712$ for the $(1,0)$ oscillatory mode and $\hat{\tau} = 0.1072$ for the $(0,1)$ mode. The lower threshold for the $(1,0)$ mode indicates that it is the dominant mode of oscillation, which is what is observed numerically. The saturation of the oscillation amplitudes indicates that the Hopf bifurcation is supercritical, with the initial horizontal oscillations leading to a stable orbit with nonzero $x_1$ and $x_2$ components. \begin{figure}[!htb] \centering \includegraphics[width=0.98\textwidth]{imagespng/HalfDisk_all.eps} \caption{Experiment 2 -- numerical simulations performed for $\tau_0=0.0725,~\varepsilon=0.01$ and $S= 4$. On the left, we show oscillatory motion of the spot center as $\tau_0$ exceeds the Hopf bifurcation threshold of $\hat{\tau}^* = 0.072$. Red arrows indicate the direction of motion. On the right, we plot coordinates of the center of the spot, where the blue (red) curve is the $x_1$-coordinate ($x_2$-coordinate). The saturation of the oscillation amplitudes indicates that the Hopf bifurcation is supercritical, with the initial horizontal oscillations leading to a stable orbit with nonzero $x_1$ and $x_2$ components.} \label{halfdisk1} \end{figure} In Fig.~\ref{precision}, for the half-disk, we demonstrate the convergence with respect to $\varepsilon$ of the predicted values of the Hopf threshold $\hat{\tau}$ (Fig. \ref{tauvseps}) and corresponding bifurcation frequency $\lambda_I$ (Fig. \ref{lambdavseps}) to their exact values obtained from numerical computations. The blue error bars indicate the range in which the exact threshold falls. We note that all quantities plotted in Fig.~\ref{precision} have been scaled by $\eps^{-2}$; as such, the figures show that the error in the unscaled Hopf thresholds and frequencies scales as $\mathcal{O}(\eps^2|\log\eps|)$ when $\eps$ is sufficiently small. \begin{figure}[!ht] \centering \begin{subfigure}[b]{0.48\textwidth} \centering \includegraphics[width=\textwidth]{imagespng/TauVsEps.eps} \caption{$\hat{\tau}^*$ vs. $-\log{\varepsilon}$} \label{tauvseps} \end{subfigure} \begin{subfigure}[b]{0.48\textwidth} \centering \includegraphics[width=\textwidth]{imagespng/LambdaVsEps.eps} \caption{$\hat{\lambda}_I^*$ vs. $-\log{\varepsilon}$} \label{lambdavseps} \end{subfigure} \caption{Experiment 2 -- comparison of Hopf thresholds and corresponding frequency for $\varepsilon=0.04,~0.02,~0.01,~0.05,~0.025$. The blue error bars indicate the range in which the exact threshold (obtained from PDE simulations) falls.
The agreements in the Hopf stability threshold (a) as well as the corresponding frequency (b) between the predicted value and PDE simulations improve with decreasing $\varepsilon$.} \label{precision} \end{figure} \textbf{Experiment 3.} \, In the left portion of Fig.~\ref{rectangle1}, we show Hopf oscillations of a spot in a rectangle of height $1$ and width $\ell = 2$. In our numerical simulations, we adjust the value of the feed-rate $A$ in \eqref{schnak} in inverse proportion to $\ell$ so as to keep the spot strength $S = (2\pi)^{-1} A|\Omega|$ constant. The dominant mode of oscillations is in the direction of the longer dimension, similar to what was shown for the perturbed disk of \S \ref{perturbeddisk}. On the right, we plot the $x_1$ and $x_2$ coordinates of the spot center. As was the case for the half-disk, the initial growth of amplitude oscillations saturates, suggesting that the Hopf bifurcation is supercritical. Due to the symmetry of the rectangle, the long-time oscillations occur only in the $x_1$ component. \begin{figure}[!htb] \centering \includegraphics[width=0.98\textwidth]{imagespng/Ra2b1.eps} \caption{Experiment 3 -- numerical simulations performed for $\tau_0=0.063,~\varepsilon=0.01$ and $S= 4$. On the left, we show oscillatory motion of the spot center as $\tau_0$ exceeds the Hopf bifurcation threshold. Red arrows indicate the direction of motion. On the right, we plot the coordinates of the center of the spot, where the blue (red) curve is the $x_1$-coordinate ($x_2$-coordinate). The saturation of the oscillation amplitudes indicates that the Hopf bifurcation is supercritical, with the initial horizontal oscillations leading to a stable orbit with only a nonzero $x_1$ component.} \label{rectangle1} \end{figure} In Fig.~\ref{TauVsLen}, we plot the predicted (red, green) and numerical (blue bars) Hopf bifurcation thresholds versus the length $\ell$ of the rectangle of unit height. The blue error bars indicate the range in which the exact threshold falls as determined from PDE simulations. The two predicted thresholds correspond to the $(1,0)$ (horizontal) and $(0,1)$ (vertical) oscillation modes. When $\ell = 1$, the two thresholds are equal due to symmetry. As $\ell$ increases, the asymptotic results predict that the $(1,0)$ mode is the first to destabilize as $\tau_0$ is increased, in agreement with the oscillations parallel to the longer edge in the rectangle of Fig. \ref{rectangle1}. \begin{figure}[!htb] \centering \includegraphics[width=0.6\textwidth, height=0.35\textwidth]{imagespng/TauAndLength.eps} \caption{Experiment 3 -- thresholds for one spot in a rectangle of unit height and width $\ell$. When $\ell = 1$, the thresholds corresponding to the $(1,0)$ (horizontal) and $(0,1)$ (vertical) oscillation modes are equal due to symmetry. When $\ell$ increases, the dominant mode of oscillation is the one parallel to the longer edge of the rectangle.} \label{TauVsLen} \end{figure} We observe that both the Hopf threshold and the corresponding frequency decrease with increasing $\ell$. Indeed, as $\ell \to \infty$, we expect a zero eigenvalue corresponding to translational invariance in the $x_1$-direction. For this infinite strip, the only Hopf bifurcation is in the $x_2$-direction. \textbf{Experiment 4.} \, In Fig. \ref{rectangle2}, we break the symmetry of the rectangle of Experiment 3 by removing two circular holes of different sizes. The feed-rate $A$ is set so that the strength of the spot is $S = 4$.
The break in symmetry causes the spot center to shift away from the $(1, 0.5)$ location, and leads to long-time oscillations that are non-zero in both $x_1$- and $x_2$-components. The initial oscillations at onset, however, are still nearly horizontal, while the bifurcation threshold $\hat{\tau}^*$ is also very close to that of Experiment 3. \begin{figure}[!htb] \centering \includegraphics[width=0.98\textwidth]{imagespng/ContourCollection.eps} \caption{Experiment 4 -- numerical simulations performed for $\tau_0=0.063,~\varepsilon=0.01$ and $S= 4$. On the left, we show the oscillatory motion of the spot center as $\tau_0$ exceeds the Hopf bifurcation threshold. The red arrows indicate the direction of motion at the particular instant in time. On the right, we plot the coordinates of the spot center, where the blue curve is the $x_1$-coordinate and the red curve is the $x_2$-coordinate. The holes in the domain break the symmetry of the rectangle and cause nonzero long-time oscillations in both coordinates.} \label{rectangle2} \end{figure} \subsection{Hopf bifurcation of multiple spots} In this subsection, we investigate Hopf bifurcations of small eigenvalues of $N$-spot solutions to \eqref{schnak} along with the ensuing dynamics. While for a given $N$, multiple equilibrium configurations of spot locations and strengths are possible, we focus on the following configurations: \begin{enumerate}[resume] \item A ring of $N$ equally-spaced spots concentric with the unit disk (see Fig.~\ref{UnitdiskM}), and a line of $N$ equally-spaced spots along the width of a rectangle of height $1$ and width $5$ (see Fig. \ref{RectangleM}). Comparisons of stability thresholds against $N$ are plotted in Fig. \ref{crossing}. We observe excellent agreement between asymptotic and numerical values. \item \label{RC-5H} Two spots in an asymmetric domain consisting of the same rectangle as that in Experiment 3 with five holes in the shape of disks. All boundary conditions are reflective. The holes create barriers between the two spots. The predicted Hopf threshold is $\hat{\tau}^* = 0.0891$, while the threshold found from PDE simulations is $\hat{\tau}_f = 0.0887$. \end{enumerate} Spots in symmetric $N$-spot equilibrium configurations all have equal strength $S$ given by $S = \frac{A|\Omega|}{2\pi N}$. In the asymmetric domains of Experiment \ref{RC-5H}, spot strengths are generally unequal and must be determined by a numerical solution of the nonlinear system \eqref{Sj}, \eqref{eqbm1}, and \eqref{matchjj}. In the figures containing snapshots of solutions, the red arrows indicate the initial direction of oscillation at onset of instability. Arrows of different sizes indicate the relative amplitudes of oscillation between the different spots. \textbf{Experiment 5.} \, In this experiment, we investigate dominant oscillation modes of symmetric $N$-spot equilibria. In particular, we show that our theory correctly predicts the switching of dominant modes as $N$ is increased. In Fig. \ref{crossingdisk}, we plot two bifurcation thresholds for an $N$-spot equilibrium arranged in a ring concentric with the unit disk (see Fig. \ref{UnitdiskM}). The blue curve corresponds to the radial mode of oscillations characterized by in-phase oscillations in the radial direction. For $N \leq 5$, the red diamonds denote the thresholds for the (near)-tangential mode in which spots oscillate (approximately) along the tangent of the equilibrium ring.
As this threshold is lower than that of the radial mode, we expect that this mode emerges first as $\tau_0$ is increased. This is indeed observed in the first row of Fig. \ref{UnitdiskM}. The symmetry of even-numbered configurations allows this mode to be exactly tangential; for odd-numbered configurations, the spots undergo an elliptical orbit of high eccentricity. This elliptic orbit at onset is consistent with the fact that the eigenvectors $\mathbf{a}^*$ of the eigenvalue problem \eqref{finaleigblock} for the $N = 3,5$ modes are complex with imaginary components small in comparison to the real components. Theoretical radii of these ring configurations were obtained from Eq. (16) of \cite{kolokolnikov2022ring}, and were used to initialize the simulations. \begin{figure}[!htb] \centering \begin{subfigure}[b]{0.48\textwidth} \centering \includegraphics[width=\textwidth]{imagespng/Crossing.eps} \caption{thresholds for $N$-spot ring in a disk} \label{crossingdisk} \end{subfigure} \begin{subfigure}[b]{0.48\textwidth} \centering \includegraphics[width=\textwidth]{imagespng/Crossing_rect.eps} \caption{thresholds for $N$-spots in a rectangle} \label{crossingrect} \end{subfigure} \caption{Experiment 5 -- Hopf bifurcation thresholds for $N$-spots in a (a) unit disk and (b) rectangle. In (a), ``(near)-tangential'' refers to the mode in which spots oscillate in a direction tangent ($N$ even) or nearly tangent ($N$ odd) to the equilibrium ring, while ``next lowest found'' corresponds to the next lowest threshold (above that of the in-phase radial mode) that we were able to find in solving \eqref{finaleigblock}. The prediction that the in-phase radial mode is dominant for $N > 5$ is corroborated by the numerical simulations in Fig. \ref{UnitdiskM}. Similarly, in (b), the threshold plotted for the ``next lowest found'' is the lowest threshold we were able to find above that of the in-phase vertical mode. The emergence of the in-phase vertical mode as the dominant mode for $N \geq 6$ is observed in the numerical simulations of Fig. \ref{RectangleM}. In (a) and (b), the $\times$'s indicate the value of $\hat{\tau}_f$, the Hopf bifurcation value found from PDE simulations. } \label{crossing} \end{figure} For $N > 5$, the purple squares of Fig. \ref{crossingdisk} denote the next lowest threshold (above that of the in-phase radial mode) that we were able to find in the nonlinear system \eqref{det0}. For such $N$, the radial mode has the lower threshold, in which case we expect that this mode emerges first as $\tau_0$ is increased. This is seen in the second row of Fig. \ref{UnitdiskM}. The black $\times$'s in Fig. \ref{crossingdisk} denote the value of $\tau_0$ at which the first Hopf bifurcation was encountered in our PDE simulations of \eqref{schnak}. The agreement, including the switch of dominance from the non-radial to radial mode of oscillation at $N = 6$, is excellent. \begin{figure}[!htb] \centering \includegraphics[width=0.95\textwidth]{imagespng/UnitDisk2To8Spots.eps} \caption{Experiment 5 -- spatial configurations of $N$-spots in a unit disk and their initial oscillation directions at onset for $\varepsilon=0.01,~S=4$. For $N=2,4$, the oscillations are along a direction tangential to the ring on which the spots are located. The $N = 3, 5$ configurations lack the symmetry to undergo perfectly tangential oscillations. The eigenvectors of \eqref{finaleigblock} are complex, indicating that the spots follow an elliptical orbit at onset.
For $N=6,7,8$, the emergence of radial oscillations as the dominant mode for $N > 5$ is as predicted by Fig. \ref{crossingdisk}. } \label{UnitdiskM} \end{figure} We observe a similar switch in mode-dominance for $N$ spots equally spaced along the center line parallel to the longer edge of the rectangle. In Fig. \ref{crossingrect}, the blue curve denotes the thresholds for the in-phase vertical oscillation mode in which all spots oscillate vertically in-phase (last row of Fig. \ref{RectangleM}). For $N \leq 5$, this threshold is preceded by a horizontal mode in which the spots oscillate horizontally and out-of-phase with their neighbors (red diamonds). For $N > 5$, we plot the next lowest threshold (above that of the in-phase vertical mode) that we were able to find in \eqref{finaleigblock} (purple squares). The black $\times$'s denote the value of $\tau_0$ at which the first Hopf bifurcation was encountered in our PDE simulations of \eqref{schnak}. We highlight the coincidence of the two thresholds at $N = 5$; indeed, when $N = 5$ in a rectangle of unit height and length $5$, the symmetry dictates that the two thresholds equal the $\ell = 1$ threshold of Fig. \ref{TauVsLen}. As such, the oscillations observed in the fourth image of Fig. \ref{RectangleM} are a linear combination of the $(1,0)$ and $(0,1)$ modes of a single spot in the unit square. \begin{figure}[!htb] \centering \includegraphics[width=0.95\textwidth]{imagespng/Rec2to7spots.eps} \caption{Experiment 5 -- spatial configurations and their initial oscillation directions at onset. There is a transition of the oscillation direction at $N=5$. For $N<5$, the oscillations are along the horizontal direction, while for $N>5$ the oscillations follow the vertical direction. All profiles have one eigenfunction at the Hopf bifurcation except the $5$-spot case, where two eigenvectors are found, indicating that the initial oscillations have two directions. \red{The parameter values are $S=4$ and $\varepsilon=0.01$.}} \label{RectangleM} \end{figure} This experiment illustrates what was concluded in \S \ref{perturbeddisk} for a single spot in a perturbed disk: the dominant mode of oscillation appears to be along the direction(s) in which there is more separation between spots or between a spot and the boundary. We also observe in these two scenarios that, when possible, the even mode was the dominant mode of oscillation. That is, each dominant mode can be replicated by a single spot in a correctly chosen domain with pure Neumann boundary conditions; i.e., a wedge of angle $2\pi/N$ for the $N$-spot ring in the unit disk, and a rectangle of unit height and length $\ell = 5/N$. \textbf{Experiment 6.} \, In this experiment, we show the full generality of our stability result \eqref{finaleigblock}, in which the Hessian terms of the Helmholtz and Neumann Green's functions were computed using a finite element method. Fig. \ref{rectangleRc5holes} shows two spots in a non-simply connected domain. While the threshold does not deviate significantly from that of one spot in a unit square, the mode of oscillation, as illustrated in the figure, is rather different. The oscillation of the left spot is influenced by the orientation of the two nearest holes and has a vertical component as a result. The right spot is isolated in a smaller region and thus undergoes an oscillation of significantly smaller amplitude in comparison to the left spot (as indicated by the size of the arrows).
A weak coupling still appears to be present, however, as the directions of oscillation retain remnants of the even mode of oscillation observed in the absence of the barriers. \begin{figure}[!htb] \centering \begin{subfigure}[b]{0.68\textwidth} \centering \includegraphics[width=\textwidth]{imagespng/R-5-2spikes-Dynamics-c.eps} \label{rc5snapshots} \end{subfigure} \begin{subfigure}[b]{0.3\textwidth} \centering \includegraphics[width=\textwidth]{imagespng/LeftSpotTrajectory-5C.eps} \label{rc5lefttrajectory} \end{subfigure} \caption{Experiment 6 -- numerical simulations performed for $\tau_0=0.09,~\varepsilon=0.01$. The strengths for the left spot and right spot are approximately $4$ and $3$, respectively. \red{On the left, we show snapshots of the two-spot profile at different times, along with the directions of motion indicated by the red arrows. The disparity in the sizes of the arrows conveys the significantly smaller amplitude of oscillation of the right spot in comparison to that of the left. On the right, we plot the coordinates of the spot centers, where $(x_1,y_1)$ refers to the coordinates of the center of the left spot, while $(x_2,y_2)$ refers to those of the center of the right spot.} } \label{rectangleRc5holes} \end{figure} \section{Discussion} \label{conc} Through a formal asymptotic analysis, we have derived a $2N\times 2N$ complex matrix eigenvalue problem yielding the Hopf bifurcation threshold along with the frequency and mode of oscillation for the slow oscillatory translation instabilities of $N$-spot equilibrium solutions to the Schnakenberg reaction-diffusion system with insulating boundary conditions. This result is valid for general, flat, and bounded two-dimensional domains, and is a generalization of that derived for a single spot in a unit disk in \cite{xie2017moving}. This general result requires a more intricate analysis that accounts for domain asymmetries as well as spot-spot interactions. For $N=1$, we showed that the matrix eigenvalue problem for the bifurcation threshold and frequency reduces to the complex scalar problem derived in \cite{xie2017moving} for the unit disk. We then extended this analysis to that for a perturbed unit disk with radius in polar coordinates $r = 1 + \sigma f(\theta)$, $\theta \in [0,2\pi)$, $\eps \ll \sigma \ll 1$, where $f(\theta)$ is a $2\pi$-periodic function. We found that only the coefficients $a_2 \equiv \pi^{-1}\int_0^{2\pi} \!f(\theta)\cos 2\theta \, d\theta$ and $b_2 \equiv \pi^{-1}\int_0^{2\pi} \!f(\theta)\sin 2\theta \, d\theta$ change the eigenvalue problem at leading order in $\sigma$. If $a_2 = b_2 = 0$, the bifurcation threshold will remain unchanged at leading order. When they are not both zero, the bifurcation threshold will decrease, and the preferred direction of oscillation will be along the line that makes an angle $\phi/2$ with the $x_1$-axis, where $\cos\phi = a_2/\sqrt{a_2^2+b_2^2}$ and $\sin\phi = b_2/\sqrt{a_2^2 + b_2^2}$. All of our asymptotic results were numerically confirmed by finite element solutions of the full Schnakenberg PDE \eqref{schnak} using FlexPDE. Our analysis is valid for the Schnakenberg model with homogeneous feed-rate in a general flat $2$-dimensional domain of finite size. It would be interesting to investigate how various heterogeneous effects impact the stability threshold as well as the spot dynamics at onset.
For example, \cite{wong2021spot} employs a hybrid asymptotic-numerical method to reveal novel dynamics and bifurcations of spot solutions in the presence of a strongly localized feed-rate or small holes in the domain through which chemicals can leak. Heterogeneity can also come in the form of surface curvature. In this case, the integration of microlocal techniques into our asymptotic framework would be necessary to accurately compute Hessian terms of the relevant Green's functions in the matrix eigenvalue problem. These techniques were first employed to compute linear terms of Green's function expansions for the purposes of predicting spot dynamics on curved surfaces in \cite{tzou2019spot} and \cite{tzou2020analysis}. Lastly, the small eigenvalues of spot clusters have not yet been analyzed. These clusters may form in the presence of a spatially dependent advection term in the PDE, or, as was analyzed in \cite{kolokolnikov2020hexagonal}, a spatially varying potential in the Gierer-Meinhardt model. While we performed our analysis on the Schnakenberg model, a similar analysis would be possible for other activator-inhibitor reaction-diffusion models such as the Gray-Scott and Brusselator models. Furthermore, the hybrid method employed in this paper can be extended to compute small eigenvalues of $3$-dimensional spot patterns. For 3-D domains, the slow dynamics of quasi-equilibrium spot patterns along with the stability and bifurcation structure of equilibrium configurations were analyzed for the Schnakenberg model in \cite{tzou2017stability} and more recently for the Gierer-Meinhardt model in \cite{gomez2021asymptotic}. It would also be interesting to study the weakly nonlinear behavior of the spot dynamics beyond the linear stability regime. Such a theory has been developed in \cite{veerman2015breathing} for oscillatory amplitude instabilities in one dimension. Numerical results of \cite{xie2017moving} showed the impact of the domain geometry on the long-time periodic orbits of a single spot when the Hopf bifurcation parameter $\tau$ was slightly above threshold. In particular, they observed primarily elliptical orbits for a single spot inside the unit disk, with a more exotic periodic orbit when the domain was a square. It would be an interesting analysis to characterize the periodic orbits that are admitted by a given domain and to investigate their stability. Another possible weakly nonlinear study could be performed near a codimension-2 point where the Hopf threshold for the small eigenvalues is equal to that for the large eigenvalues, which lead to amplitude oscillations. Lastly, it has been well-documented that equilibrium configurations of $N$-spot patterns in the Schnakenberg model correspond to (locally) optimal target configurations in the narrow escape optimization problem; see \cite{cheviakov2010asymptotic, ChevWard2010, tzou2017stability}. Furthermore, \cite{xie2017moving} and \cite{tzou2015mean}, along with \cite{tzou2014first}, establish connections between oscillatory translational instabilities of single-spot patterns and a certain optimization problem involving a deterministically moving target on a one-dimensional interval and the two-dimensional unit disk. It would be interesting to investigate whether this correspondence persists for multi-spot patterns and in more general two-dimensional domains. \bibliographystyle{siam}
\section{Introduction} In a Hamiltonian system, small oscillations around a periodic orbit are often described using normal form theory~\cite{AKN,SM}, which provides an important tool for the study of local dynamics (see e.g.~\cite{AKN,Bryuno,Kuznetsov}). In the case of two degrees of freedom the Poincar\'e section is used to reduce the problem to studying a family of area-preserving maps in a neighbourhood of a fixed point. The Poincar\'e map depends on the energy level and possibly on other parameters involved in the problem. A sequence of coordinate changes is used to transform the map to a normal form. Our approach to the normal form of a map is similar to that of \cite{Tak74}. In the absence of resonances the normal form is a rotation of the plane, and the angle of the rotation depends on the amplitude. In a generic one-parameter family of area-preserving maps, the normal form provides a description for a chain of islands which is born from the origin when the multiplier of the fixed point crosses a resonant value~\cite{AKN,SM,Meyer1970,SV2009,GG2009}. In \cite{GG2014} unique normal forms for two-parametric families were constructed and used to analyse bifurcations of $n$-periodic orbits. In this paper we study three-parametric families of APMs with a fixed point of elliptic type. In such families two types of degeneracy are possible: (1) degeneracy of the twist terms; (2) degeneracy of the leading resonance term. A normal form for case (1) was constructed in \cite{GG2014} (for an arbitrary number of parameters). Normal forms for case (2) are constructed in this paper, first for an individual map (Section~\ref{Se:hn0is0}) and then for families (Section~\ref{Se:Familieshn0}). In Sections~\ref{Se:biflead} and \ref{Se:Familiesh33} the normal forms are used to investigate bifurcations. \subsection{Individual maps} Let $F_0:\mathbb{R}^{2}\to \mathbb{R}^{2}$ be an area-preserving map (APM) which also preserves orientation. Let the origin be a fixed point: $ F_0(0)=0. $ Since $F_0$ is area-preserving, $\det DF_0(0)=1$. Therefore the two eigenvalues of the Jacobian matrix $DF_0(0)$ are $\lambda_0 $ and $\lambda_0 ^{-1}$. We will consider an elliptic fixed point, i.e. the case of non-real $\lambda_0$. As the map is real, $\lambda^{-1}_0=\lambda_0^*$. Consequently the multipliers of an elliptic fixed point belong to the unit circle: $|\lambda_0|=1$, i.e. $\lambda_0 = e^{i\alpha_0}$. There is a linear area-preserving change of variables such that the Jacobian of $F_0$ takes the form of a rotation: \begin{equation}\label{Eq:rotation} DF_0(0)=R_{\alpha_0 },\qquad\mbox{where }R_{\alpha_0}=\left( \begin{array}{cc} \cos \alpha_0 & -\sin \alpha_0 \\ \sin \alpha_0 & \cos \alpha_0% \end{array}% \right). \end{equation} It is well known that an APM with an elliptic fixed point can be represented in Birkhoff normal form \cite{AKN}, i.e. there is an area-preserving change of coordinates which transforms $F_0$ into the resonant normal form $N_0$ that commutes with the rotation: $N_0\circ R_{\alpha_0}=R_{\alpha_0}\circ N_0$. The change of coordinates and the map $N_0$ are formal series. The linear part of the normal form $N_0$ is $R_{\alpha_0}$. Following the method suggested in \cite{Tak74} by Takens (see e.g. \cite{BGSTT1993,GG2009}), we consider a formal series $H_0$ such that \begin{equation} N_0=R_{\alpha_0}\circ \Phi^1_{H_0}, \end{equation} where $\Phi_{H_0}^t$ is the flow generated by the Hamiltonian $H_0$. The Hamiltonian has the Takens normal form, i.e.
it is invariant with respect to the rotation: $H_0\circ R_{\alpha_0}=H_0$. Our goal is to transform the formal series $H_0$ to the simplest possible form. We use changes of variables which commute with the rotation $ R_{\alpha_0}$. Then in the new variables the map is still in Birkhoff normal form and the corresponding Hamiltonian remains in the Takens normal form. It is convenient to use complex variables defined by \begin{equation}\label{Eq:z} z=\frac{x+iy}{\sqrt2} \qquad\mbox{and}\qquad\bar z=\frac{x-i y}{\sqrt2}. \end{equation} A fixed point is called {\em resonant} if there exists $n\in \mathbb{N}$ such that $\lambda_0^{n}=1$. The least positive $n$ is called the {\em order of the resonance} \cite{AKN}. The rotation $R_{\alpha_0}$ takes the form $(z, \bar z) \mapsto (e^{i\alpha_0}z,e^{-i\alpha_0} \bar z )$. As the map $N_0$ commutes with $R_{\alpha_0}$, it contains only resonant terms: $$ N_0(z,\bar z)=\lambda_0 z +\sum_{\substack {k+l \ge 2 , \\k-l=1 \pmod n}} f_{kl} z^k \bar z^l .$$ The corresponding Hamiltonian has the Takens normal form \cite{GG2009}: \begin{equation}\label{Eq:hinitial} H_0(z,\bar z)=\sum_{\substack {k+l \ge 3 , \\k=l \pmod n}} h_{kl} z^k \bar z^l , \qquad h_{kl}=h_{lk}^*.\end{equation} It was established in \cite{GG2009}, \cite{GG2014} that if $h_{n0} \ne 0$ then the Hamiltonian $H_0$ can be transformed to the normal form \[ \widetilde H_0(z,\bar z)=\sum_{k \ge 2} a_k z^k \bar z^k +(z^n+\bar z^n)\sum_{k \ge 0} b_k z^k \bar z^k .\] In this paper we consider the case $h_{n0} = 0$ but $h_{22}\ne 0$ and $h_{n+1,1}^2-4h_{22}h_{2n,0} \ne 0$. In order to investigate the Hamiltonian it is convenient to use the symplectic polar coordinates $(I,\varphi)$ given by \begin{equation} \label{Eq:zpolar} \left\{ \begin{array}{rcl} x&=&\sqrt{2I}\cos\varphi\,,\\ y&=&\sqrt{2I}\sin\varphi\, \end{array} \right. \quad {\rm or} \quad \left\{ \begin{array}{rcl} z&=&\sqrt{I}e^{i\varphi}\,,\\ \bar z&=&\sqrt{I}e^{-i\varphi}\,. \end{array} \right. \end{equation} The Hamiltonian in the Takens normal form (\ref{Eq:hinitial}) then takes the form: \begin{equation}\label{Eq:inter} H_0(I,\varphi)=I^2\sum_{k\ge 0} a_kI^k+\sum_{j\ge 1,k\ge 0} a_{jk}I^{jn/2+k}\cos (j n\varphi +\beta_{jk}), \end{equation} where $a_k$, $a_{jk}$ and $\beta_{jk}$ are real coefficients. Proposition~\ref{Pro:hn0is0} of Section~\ref{Se:hn0is0} implies the following theorem. \begin{theorem}\label{Te:N0} Let $N_0$ be the BNF of an APM $F_0$ with a resonance of order $n$ at the origin. Suppose that the corresponding Hamiltonian in the TNF (\ref{Eq:inter}) satisfies $a_{10}=0$, $a_0\ne 0$ and $a_{11}^2e^{i2\beta_{11}}-8a_0a_{20}e^{i\beta_{20}}\ne 0$. Then there is a formal canonical change of coordinates such that in the new variables $\widetilde N_0 =R_{\alpha_0}\circ \Phi^1_{\widetilde H_0}$ and \begin{equation}\label{Eq:Hn0is0} \widetilde H_0(I,\varphi )=I^{2}\sum_{k\ge 0} a_k {I}^k +I^{n/2} \sum_{k\ge 1} b_k \cos (n\varphi + \psi_k ) {I}^{2k} +I^{n} \cos 2n\varphi \sum_{k\ge 0}c_k I^{2k}. \end{equation} \end{theorem} The coefficients in the form (\ref{Eq:Hn0is0}) are not unique: the coefficients $b_k$ are replaced by $-b_k$ after a rotation by $\pi /n$. There is also an alternative normal form (only $a_0\ne 0$ is required), which contains fewer terms of low orders: \begin{theorem}\label{Te:N01} Let $N_0$ be the BNF of an APM $F_0$ with a resonance of order $n$ at the origin. Suppose that the corresponding Hamiltonian in the TNF (\ref{Eq:inter}) satisfies $a_0\ne 0$.
Then there is a formal canonical change of coordinates such that in the new variables $\widetilde N_0 =R_{\alpha_0}\circ \Phi^1_{\widetilde H_0}$ and
\[ {\widetilde H_0}(I,\varphi )=I^{2}\sum_{k\ge 0} a_k {I}^k +\sum_{k \ge 1} c_k I^{nk/2} \cos (kn\varphi +\psi_k). \]
\end{theorem}

The corresponding Proposition~\ref{Pro:hn0is0case1} is proved in Section~\ref{Se:hn0is0}.

\subsection{Families}

Let us consider a three-parametric family $F_\mu$ of APMs with a fixed point of elliptic type at the origin with $\lambda_\mu =e^{i\alpha_\mu}$, $ \mu =(\mu_1,\mu_2,\mu_3) \in \mathbb{R}^3$. We assume that for $\mu = (0,0,0)$ the map $F_0$ has a resonance of order $n$ (i.e. $\lambda_0=e^{i\alpha_0}$, $\lambda_0^n=1 $). After a linear change of coordinates the map $F_\mu$ takes the form $ F_\mu= R_{\alpha_\mu} \circ \Phi $, where $\Phi$ is a tangent-to-identity APM. In complex variables (\ref{Eq:z}) $F_\mu =(f_\mu ,\bar f_\mu)$,
\[ f_\mu (z,\bar z)=\lambda_\mu z +\sum_{k+l\ge 2} f_{kl}(\mu )z^k\bar z^l .\]
It is natural to use the value
\begin{equation}\label{Eq:epsilon}
\varepsilon = \alpha_0 -\alpha_\mu
\end{equation}
as one of the parameters. Let $\mu_3$ be expressed in terms of $(\varepsilon , \mu_1, \mu_2 )$. Then $f_\mu(z,\bar z)$ can be presented as a series in the three variables $(z,\bar z, \varepsilon )$ with coefficients depending on $(\mu_1 ,\mu_2 )$:
\[ f_\mu (z,\bar z)=\lambda_0 z+\sum_{k+l+m \ge 2} f_{klm}(\mu_1,\mu_2)z^k\bar z^l \varepsilon^m. \]
After an appropriate change of coordinates the map $f_\mu$ can be written in Birkhoff normal form, which contains only resonant terms:
\begin{equation}\label{Eq:BNFfam}
f_\mu (z,\bar z)=\lambda_0 z+\sum _{\substack{ k+l+m \ge 2\cr k-l=1\pmod n}} f_{klm}(\mu_1,\mu_2)z^k\bar z^l \varepsilon^m.
\end{equation}
The interpolation theorem for families (see, for example, \cite{GG2009}) gives $F_\mu=R_{\alpha_{0}} \circ \Phi^1_{H_\mu} $, where the Hamiltonian $H_\mu$ in complex variables has the form
\begin{equation}\label{Eq:hpinitial}
H_\mu (z, \bar z)=\sum_{\substack{ k+l+m \ge 3\cr k=l\pmod n}}h_{klm}(\mu_1, \mu_2)z^k\bar z^l\varepsilon^m ,
\end{equation}
where $h_{111}(\mu_1, \mu_2) =1$ as $\varepsilon $ is determined by (\ref{Eq:epsilon}), or, in the symplectic polar coordinates (\ref{Eq:zpolar}),
\begin{equation}\label{Eq:hpinitialIPhi}
\begin{array}{rcl}
H_\mu (I,\varphi )&=&\varepsilon I + I^2\sum_{\substack{ k \ge 0\cr m \ge 0}}a_{km}(\mu_1, \mu_2)I^k\varepsilon^m \\
&+&\sum_{\substack{ j \ge 1\cr k,m \ge 0}} a_{jkm}(\mu_1, \mu_2)I^{k+j\frac{n}{2}}\varepsilon^m \cos (jn\varphi +\psi_{jkm}(\mu_1, \mu_2)).
\end{array}
\end{equation}
The form (\ref{Eq:hpinitialIPhi}) is not unique. Rotations and time-one shifts $\Phi^1_\chi$ with a resonant Hamiltonian $\chi $ preserve its structure. Our goal is to derive the simplest form of $H_{\mu}$, {\it i.e.} to eliminate as many terms of $H_{\mu}$ as possible.

If $a_{100}(\mu_1 ,\mu_2 )= 2|h_{n00}(\mu_1 ,\mu_2 )|\ne 0$ for all values of $(\mu_1 ,\mu_2 )$ then the Hamiltonian can be transformed to the form \cite{Gelfarxiv}:
\[\widetilde H_\mu(z, \bar z)=\varepsilon z \bar z + \sum_{ k \ge 2, m \ge 0}a_{km}(\mu_1,\mu_2)z^k\bar z^k\varepsilon^m + (z^n+\bar z^n)\sum_{k,m \ge 0}b_{km}(\mu_1,\mu_2) z^k\bar z^k\varepsilon^m\]
or, in coordinates $(I,\varphi )$:
\begin{equation}\label{Eq:Hinpolar}
\widetilde H_\mu (I,\varphi ) =\varepsilon I + \sum_{k \ge 2, m\ge 0}a_{km}(\mu_1, \mu_2)I^k\varepsilon^m +I^{n/2}\cos n\varphi \sum_{k,m \ge 0}b_{km}(\mu_1, \mu_2)I^k \varepsilon^m.
\end{equation}

In this paper the main attention is paid to the case when the Hamiltonian (\ref{Eq:hpinitial}) satisfies the following conditions:
\begin{equation}\label{Eq:condit}
h_{n00}(0,0)=0, \quad h_{220}(0,0)\ne 0,\quad
h_{n+1,1,0}^2(0,0)-4h_{220}(0,0)h_{2n,0,0}(0,0)\ne 0.
\end{equation}
Note that these conditions correspond to a family which is a three-parametric unfolding of the map $F_0$ from Theorem~\ref{Te:N0}. We show that there is a canonical change of variables after which the Hamiltonian has the form (\ref{Eq:hpinitialIPhi}) with the last sum containing only two harmonics, {\it i.e.} terms with $j=1$ and $j=2$. Moreover, $a_{1,2l+1,m}=0$, $a_{2,2l+1,m}=0$ and $\psi_{2,2l,m}=0$ for all $l$ and $m$.

In order to achieve $\psi_{200}(\mu_1,\mu_2)=0$, as will be shown in Section~\ref{Se:Familieshn0}, it is necessary to perform a rotation by the angle $\varphi =-\frac{1}{2n} \arg \left( h_{2n,0,0}-\frac{h_{n+1,1,0}^2}{4h_{220}} \right) $. After the rotation the small coefficient of $z^n$ in (\ref{Eq:hpinitial}) is
\begin{equation}\label{Eq:nu}
h_{n00} (\mu_1, \mu_2)\exp \left( -\frac{i}{2} \arg \left( h_{2n,0,0}(\mu_1, \mu_2)-\frac{h_{n+1,1,0}^2(\mu_1, \mu_2)}{4h_{220}(\mu_1, \mu_2)} \right) \right)=\nu(\mu_1, \mu_2) ,
\end{equation}
\[ \nu (\mu_1, \mu_2)= \nu_1(\mu_1, \mu_2)+i\nu_2(\mu_1, \mu_2) =\gamma (\mu_1, \mu_2)e^{i\beta(\mu_1, \mu_2)}.\]
Let $\mu_1$ and $\mu_2$ be expressed in terms of $\nu_1$ and $\nu_2$: $\mu_1=\mu_1 (\nu_1,\nu_2)$, $\mu_2=\mu_2 (\nu_1,\nu_2)$. Then we can consider $(\nu_1,\nu_2)$ as new parameters instead of $(\mu_1,\mu_2 )$.

Below we use the following notation: $\mu=(\mu_1, \mu_2, \mu_3)$, $\Upsilon =(\varepsilon, \nu_1,\nu_2 ) $, $\mathbf{m}=(m_1,m_2,m_3)$, $|\mathbf{m}|=m_1+m_2+m_3$, $\Upsilon^{\mathbf{m}}= \varepsilon^{m_1} \nu_1^{m_2}\nu_2^{m_3}$. The following theorem gives a simplification of the Hamiltonian to a normal form.

\begin{theorem}\label{Thr:Famhn0is0}
Let $F_{\mu}$ be a smooth ($C^\infty$ or analytic) family of area-preserving maps with a fixed point of elliptic type at the origin such that
(1) $F_{\mathbf{0}}$ has a resonance of order $n$ at the origin: $\lambda_0=e^{i\alpha_0}$, $\lambda_0^n=1$;
(2) the coefficients of the Hamiltonian $H_\mu$ in the TNF (\ref{Eq:hpinitial}) satisfy conditions (\ref{Eq:condit});
(3) the parameters $\mu =(\mu_1,\mu_2,\mu_3)$ can be expressed in terms of $(\varepsilon , \nu_1, \nu_2)$ defined by (\ref{Eq:epsilon}) and (\ref{Eq:nu}).
Then there is a formal Hamiltonian $\widetilde H_{\Upsilon}$ and a formal canonical change of variables which conjugates $F_{\mu}$ with $R_{\alpha_0}\circ\Phi^1_{\widetilde H_{\Upsilon}}$. Moreover, $\widetilde H_{\Upsilon}$ in the coordinates $(I,\varphi )$ has the following form:
\begin{eqnarray*}
\widetilde H_{\Upsilon}(I, \varphi )=\varepsilon I+\gamma I^{n/2} \cos (n\varphi +\beta )
+I^{2}\sum_{k,|\mathbf{m}|\ge0} a_{k\mathbf{m}} I^k \Upsilon^{\mathbf{m}} \\
+I^{n/2}\sum_{\substack {k,|\mathbf{m}|\ge0, \\k+m_1\ge 1}} b_{k\mathbf{m}}{I}^{2k} \Upsilon^{\mathbf{m}} \cos (n\varphi +\psi_{k\mathbf{m}} ) \\
+ I^n \sum_{k,|\mathbf{m}|\ge0} c_{k\mathbf{m}}I^{2k}\Upsilon^{\mathbf{m}}\cos (2n\varphi) .
\end{eqnarray*}
\end{theorem}

The theorem follows from Proposition~\ref{Pro:Famhn0is0} of Section~\ref{Se:Familieshn0}.
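To orient the reader we record the leading part of this normal form (the following truncation is our own illustration, not part of the theorem). Keeping only the terms written explicitly, together with the lowest-order term of the first and the last sums, $a=a_{0\mathbf{0}}$ and $c=c_{0\mathbf{0}}$, gives
\[
\widetilde H_{\Upsilon}(I, \varphi )\approx \varepsilon I+ a I^{2}+\gamma I^{n/2} \cos (n\varphi +\beta ) + c I^{n}\cos 2n\varphi ,
\]
which, after a scaling normalising $a$ and $c$ to unity, is exactly the model Hamiltonian (\ref{Eq:modelp_B}) studied in Section~\ref{Se:biflead}.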
From Proposition~\ref{Pro:Famhnois0} follows an alternative variant, useful for the study of bifurcations:
\begin{theorem}\label{Thr:Famhnois0}
If $F_{\mu}$ is a smooth ($C^\infty$ or analytic) family of area-preserving maps such that $F_{\mathbf{0}}$ has a resonant elliptic fixed point at the origin and in (\ref{Eq:hpinitialIPhi}) $a_{00}(0,0)\ne 0$, then there is a formal Hamiltonian $\widetilde H_{\Upsilon}$ and a formal canonical change of variables which conjugates $F_\mu$ with $R_{\alpha_0}\circ\Phi^1_{\widetilde H_{\Upsilon}}$. Moreover, $\widetilde H_{\Upsilon}$ has the following form:
\begin{equation}\label{Eq:Hfamalter}
\begin{array}{r}
\widetilde H_{\Upsilon}(I, \varphi )=\varepsilon I+\gamma I^{n/2} \cos (n\varphi +\beta )
+I^{2}\sum_{k,|\mathbf{m}|\ge0} a_{k\mathbf{m}} I^k \Upsilon^{\mathbf{m}} \\
\qquad \qquad + I^{n/2} \sum_{\substack {k+m_1\ge 1, \\m_2,m_3\ge 0}} c_{k\mathbf{m}}I^{kn/2}\Upsilon^{\mathbf{m}}\cos ((k+2)n\varphi+\psi_{k\mathbf{m}}) .
\end{array}
\end{equation}
\end{theorem}

We then use the normal forms (\ref{Eq:Hinpolar}) and (\ref{Eq:Hfamalter}) to study bifurcations. We keep lower-order terms and drop those which do not change the picture qualitatively. A detailed discussion of the typical level sets and their bifurcations for the case of degeneracy in the main resonant term is presented in Section~\ref{Se:biflead}. In Section~\ref{Se:Familiesh33} there is a brief description of some possible bifurcations for three-parametric families when in (\ref{Eq:Hinpolar}) $a_{20}(0,0)=a_{30}(0,0)=0$, $b_{00}(\mu_1 ,\mu_2 )\ne 0$. This case differs from the two-parametric families considered in \cite{GG2014} only in a tiny domain in the space of parameters.

\section{Simplification of a formal Hamiltonian with $h_{n0}=0$ for an individual map\label{Se:hn0is0}}

In this section we construct two degenerate resonant normal forms, (\ref{Eq:NFhn0is0}) and (\ref{Eq:hnois0}). The first one provides a particularly simple form of the Hamiltonian in the symplectic polar coordinates: it contains only the two lowest harmonics of the angle variable, namely $n\varphi$ and $2n\varphi$. The alternative normal form (\ref{Eq:hnois0}) contains fewer terms of low orders.

\begin{proposition}\label{Pro:hn0is0}
If
\begin{equation}\label{Eq:hindiv}
H(z,\bar z)=\sum_{\substack {k+l \ge 4 ,\ k,l\ge 0, \\k=l \pmod n}} h_{kl}z^k \bar z^l
\end{equation}
is a formal series such that $h_{kl}=h_{lk}^*$, $h_{n0}=0$, $h_{22} \ne 0$ and $h_{n+1,1}^2-4h_{22}h_{2n,0}\ne 0$ then there exists a formal canonical change of variables which transforms the Hamiltonian $H$ into
\begin{equation}\label{Eq:NFhn0is0}
\widetilde H(z,\bar z)=\sum_{k\ge 0} a_k {(z\bar{z})}^{k+2}
+\sum_{k\ge 1}(b_k z^{n}+b_k^*\bar z^n) {(z\bar{z})}^{2k}
+\sum_{k\ge 0} c_k {(z\bar{z})}^{2k} (z^{2n}+\bar z^{2n}),
\end{equation}%
where $a_k, c_k \in \mathbb{R}$, $b_k \in \mathbb{C}$. Moreover, $a_0=h_{22}$ and $c_0=\left|h_{2n,0}-\frac{h_{n+1,1}^2}{4h_{22}}\right|$.
\end{proposition}

\begin{proof}
The proof is based on studying the terms of the formal power series in the order of their $\delta$-degree. For a resonant monomial $z^k\bar z^l$ (with $k=l \pmod n$) we define its $\delta$-degree by
\begin{equation}\label{Eq:deltahn0is0}
\delta(z^k\bar z^l)=
\left|\frac{k-l}n\right|+\min\{\,k,l\,\}
=
\frac{1}{2}(k+l)-\frac{n-2}{2n}|k-l|\,.
\end{equation}
Grouping in (\ref{Eq:hindiv}) terms of the same $\delta$-degree we get
\[H(z,\bar z)=\sum_{m\ge 2} h_m(z,\bar z), \]
where $h_m(z,\bar z)$ is a homogeneous resonant polynomial of $\delta$-degree $m$.
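For orientation we illustrate the definition (this example is ours and uses only (\ref{Eq:deltahn0is0})):
\[
\delta(z^2\bar z^2)=0+2=2,\qquad
\delta(z^{n+1}\bar z)=\frac{n}{n}+1=2,\qquad
\delta(z^{2n})=\frac{2n}{n}+0=2,
\]
while $\delta(z^n)=1$ and $\delta(z^k\bar z^k)=k$. Since $h_{n0}=0$, the series (\ref{Eq:hindiv}) contains no terms of $\delta$-degree 1, so the expansion indeed starts with $h_2$, whose monomials are exactly the three types computed above.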
The term of the lowest $\delta$-degree is
\[ h_2(z,\bar z)=h_{22}z^2\bar z^2 +h_{n+1,1}z^{n+1}\bar z+h_{1,n+1}z\bar z^{n+1} +h_{2n,0}z^{2n}+h_{0,2n}\bar z^{2n}.\]
Let $\chi$ be a resonant polynomial. After the substitution $(z, \bar z ) \to \Phi^1_{\chi}(z, \bar z )$ the Hamiltonian takes the form
\begin{equation}\label{Eq:htransL}
\widetilde H=H+L_{\chi}H+\sum_{k\ge 2}\frac{1}{k!}L^k_{\chi}H\,,
\end{equation}
where
\[ L_{\chi} H=-i\{ H, \chi \} . \]
The $\delta$-degrees of the monomials in the Poisson bracket
\[ \{z^{k_1}\bar z^{l_1},z^{k_2}\bar z^{l_2}\} =(k_1l_2-k_2l_1)z^{k_1+k_2-1}\bar z^{l_1+l_2-1} \]
satisfy the following relation:
\begin{equation}\label{Eq:Pbdelta}
\delta(z^{k_1+k_2-1}\bar z^{l_1+l_2-1}) \ge \delta (z^{k_1}\bar z^{l_1}) +\delta(z^{k_2}\bar z^{l_2})-1.
\end{equation}
Indeed,
\[
\begin{array}{l}
\delta(z^{k_1+k_2-1}\bar z^{l_1+l_2-1})= \frac{1}{2}(k_1+k_2+l_1+l_2-2)- \frac{n-2}{2n}|k_1-l_1+k_2-l_2|\\
\ge \frac{1}{2}(k_1+l_1)-\frac{n-2}{2n}|k_1-l_1| +\frac{1}{2}(k_2+l_2)-\frac{n-2}{2n}|k_2-l_2|-1.
\end{array}
\]
Let $\chi=\alpha z^n+\alpha^*\bar z^n$ with $\alpha= - \dfrac{h_{n+1,1}}{2inh_{22}}$. It is not difficult to see that after the substitution $(z, \bar z ) \mapsto \Phi^1_{\chi}(z, \bar z )$ the coefficients of the Hamiltonian $\widetilde H=\sum_{k+l\ge 4}\tilde h_{kl}z^k\bar z^l$ of $\delta$-degree 2 are
\[\left\{
\begin{array}{l}
\tilde h_{22}=h_{22}, \\
\tilde h_{n+1,1}=h_{n+1,1}+2in\alpha h_{22}=0,\\
\tilde h_{2n,0}= h_{2n,0}+in\alpha h_{n+1,1}-n^2\alpha^2h_{22} =h_{2n,0}-\frac{h_{n+1,1}^2}{4h_{22}}.
\end{array}
\right. \]
After the rotation $z \mapsto e^{-i\frac{\arg \tilde h_{2n,0}}{2n}}z$ the coefficient $\tilde h_{2n,0}\mapsto c_0= |\tilde h_{2n,0}|$. Then in the new variables
\begin{equation}\label{Eq:h2nf}
h_2(z,\bar z)=a_0z^2\bar z^2 +c_0(z^{2n}+\bar z^{2n}),
\end{equation}
where $a_0=h_{22}$, $c_0=\left|h_{2n,0}-\frac{h_{n+1,1}^2}{4h_{22}}\right|$. So the terms of $\delta$-degree 2 have the declared form.

We proceed by induction. Assume that $h_m(z,\bar z)$ has the declared form for $m<p$, namely
\begin{eqnarray}\label{Eq:h_m}
h_{2j+2}(z,\bar z)=a_{2j}{(z\bar z)}^{2j+2}+c_j{(z\bar z)}^{2j}(z^{2n}+\bar z^{2n}),\quad {\rm for\ }0\le j \le \left\lfloor \frac{p-3}{2}\right\rfloor ,\\
h_{2j+1}(z,\bar z)=a_{2j-1}{(z\bar z)}^{2j+1}+{(z\bar z)}^{2j}(b_jz^{n}+b_j^*\bar z^{n}),\quad {\rm for\ }1\le j \le \left\lfloor \frac{p-2}{2}\right\rfloor .
\end{eqnarray}
To obtain such a form for the terms of $\delta$-degree $p$ we apply several changes of variables successively. Any homogeneous polynomial $\chi $ of $\delta$-degree $p-1\ge 2$ generates the change of variables $(z, \bar z ) \mapsto \Phi^1_{\chi}(z, \bar z )$. Formulae (\ref{Eq:htransL}) and (\ref{Eq:Pbdelta}) imply
\[\tilde h_m=h_m\quad {\rm for\ } m\le p-1 \]
and for $m=p$
\[ \tilde h_p=h_p+ L(\chi ).\]
Here $L(\chi )$ is the homological operator:
\begin{equation}\label{Eq:homop}
L(\chi )=\left[ L_\chi h_2 \right]_p ,
\end{equation}
where $h_2$ is determined by (\ref{Eq:h2nf}) and $[\cdot ]_p$ denotes the terms of $\delta$-degree $p$.

Let $p$ be the $\delta$-degree of a resonant monomial $z^k\bar z^l$ and $j=\frac{k-l}{n}$. Let
\[ Q_{p0}(z,\bar z)=z^p\bar z^p\]
and for $1\le j\le p$
\[ Q_{pj}(z,\bar z)=z^{p+j(n-1)}\bar z^{p-j}, \quad Q_{p,-j}(z,\bar z)=z^{p-j}\bar z^{p+j(n-1)}.\]
The homological operator (\ref{Eq:homop}) acts on a monomial $Q_{p-1,j}(z,\bar z)$ of $\delta$-degree $p-1$ with $j>0$ by
\[L(Q_{p-1,j})=2ia_0njQ_{pj} -2inc_0(p-1-j)Q_{p,j+2} \]
and for $j=0$:
\[ L(Q_{p-1,0})=-2inc_0(p-1) (Q_{p,2}-Q_{p,-2}).
\]
We denote the coefficients of the resonant polynomials $h_p(z,\bar z)$ and $\tilde h_p (z,\bar z)$ by $g_j$ and $\tilde g_j$, respectively:
\[ h_p=g_0Q_{p0}+\sum_{1\le j\le p} (g_jQ_{pj}+g_j^*Q_{p,-j}), \quad \tilde h_p=\tilde g_0Q_{p0}+\sum_{1\le j\le p} (\tilde g_jQ_{pj}+\tilde g_j^*Q_{p,-j}).\]
The change of variables generated by a resonant polynomial $\chi_k$ of $\delta$-degree $p-1$,
\begin{equation}\label{Eq:chij}
\begin{array}{lr}
\chi_k=\alpha_kQ_{p-1,k}+\alpha_k^*Q_{p-1,-k} ,\,& 1 \le k \le p-1, \\
\chi_0=\alpha_0Q_{p-1,0}\,
\end{array}
\end{equation}
transforms the coefficients $g_j$ in the following way:
\[ \left\{
\begin{array}{l}
\tilde g_j = g_j, \qquad j\notin \{ k,k+2 \}, \\
\tilde g_k =g_k +2ia_0nk\alpha_k ,\\
\tilde g_{k+2}=g_{k+2}-2inc_0(p-1-k)\alpha_k .
\end{array}
\right. \]
Below we describe the order of these transformations which provides the declared form of the terms of $\delta$-degree $p$.

Let $k_0=1$ for even $p$ and $k_0=2$ for odd $p$. The change of variables generated by $\chi_k$ (\ref{Eq:chij}) with $\alpha_k=-\frac{g_k}{2ia_0kn}$ eliminates $g_k$, changes $g_{k+2}$ and does not affect any other terms of $\delta$-degree $p$:
\begin{equation*}
\left\{
\begin{array}{l}
\tilde g_j=g_j \, ,\quad j\not\in \{ k,k+2 \} ,\\
\tilde g_k=0 , \\
\tilde g_{k+2}=g_{k+2} + \frac{c_0(p-1-k)}{ka_0}g_k\, .
\end{array}
\right.
\end{equation*}
Starting with $k=k_0$, then $k=k_0+2$, {\it etc.}, up to $k=p-1$, one gets $\tilde g_k=0$ for all $k\ge k_0$ for which $k-k_0$ is even.

To eliminate the terms with odd $k-k_0$ we use the changes of variables generated by $\chi_k$ (\ref{Eq:chij}) with
\begin{equation*}\begin{array}{lr}
\alpha_k= \frac{g_{k+2}}{2inc_0(p-k+1)} ,\,& 1\le k \le p-2, \\
\alpha_0=\frac{\Im g_2}{2nc_0(p-1)}\, .
\end{array}
\end{equation*}
Each of these substitutions eliminates $g_{k+2}$ (for $k=0$ only the imaginary part of $g_2$ is eliminated), changes $g_k$ and does not affect any other terms of $\delta$-degree $p$:
\begin{equation*}
\left\{\begin{array}{l}
\tilde g_j=g_j \, ,\quad j\not\in \{ k,k+2 \},\\
\tilde g_{k+2}=0 ,\, \quad ({\mbox{for} \ } k=0 \quad \Im \tilde g_2=0),\\
\tilde g_k=g_{k} + \frac{ka_0g_{k+2}}{c_0(p-k+1)}\, .\end{array}
\right.
\end{equation*}
Starting from $k=p-2$, then $k=p-4$, {\it etc.}, down to $k=k_0-1$, we get the corresponding $\tilde g_{k+2}=0$ and $\tilde g_2 \in\mathbb{R}$.
\end{proof}

{\bf Remark about uniqueness.} The kernel of the homological operator is trivial for odd $p$ and one-dimensional for even $p$: only $[h_2^k]_{2k} \in \ker L $, which implies the uniqueness of the coefficients $a_k$, $c_k$, $|b_k|$ and $\arg b_k \pmod \pi $ in formula (\ref{Eq:NFhn0is0}). However, the rotation by $\pi /n$ changes $\arg b_k \mapsto \arg b_k +\pi$.

An alternative normal form is derived in the next proposition.
\begin{proposition} \label{Pro:hn0is0case1}
If
\begin{equation}\label{Eq:h_original1}
H(z,\bar z)=a_0z^2\bar z^2+ \sum_{\substack {k+l \ge 5 ,\ k,l\ge 0, \\k=l \pmod n}} h_{kl}z^k \bar z^l
\end{equation}
is a formal series such that $h_{kl}=h_{lk}^*$ and $a_0\ne 0$, then there exists a formal tangent-to-identity canonical change of variables which transforms the Hamiltonian $H$ into
\begin{equation}\label{Eq:hnois0}
\widetilde H(z,\bar z)=(z\bar{z})^{2}\sum_{k\ge 0} a_k {(z\bar{z})}^k
+\sum_{k\ge 1} (c_k z^{nk}+c_k^*\bar z^{nk}),
\end{equation}%
where $a_k \in \mathbb{R}$, $c_k\in\mathbb C$.
\end{proposition}

\begin{proof}
Let $\chi (z, \bar z )=\alpha z^k \bar z^l +\alpha^*z^l \bar z^k$, $k > l$.
After the substitution $(z, \bar z ) \to \Phi^1_{\chi}(z, \bar z )$ the Hamiltonian takes the form (\ref{Eq:htransL}), where
\[ L_{\chi} H=-i\{ H, \chi \} =2ia_0\alpha (k-l)z^{k+1}\bar z^{l+1} -2ia_0\alpha^* (k-l)z^{l+1}\bar z^{k+1} +O_{k+l+3}, \]
where $O_{k+l+3}$ denotes terms of degree ${k+l+3}$ and higher. Then $\tilde h_{st}=h_{st}$ for $ 4\le s+t\le k+l+1$ and for $s+t=k+l+2$ if $(s,t)\ne (k+1,l+1)$, while
\[ \tilde h_{k+1,l+1}=h_{k+1,l+1}+2ia_0\alpha (k-l) \,. \]
Let $\alpha = - \dfrac{h_{k+1,l+1}}{2ia_0(k-l)}$. Then $\tilde h_{k+1,l+1}=0$. Repeating these substitutions one gets by induction $\tilde h_{kl}=0$ with the exception of $\tilde h_{kk}$ and $\tilde h_{nk,0}=\tilde h^*_{0,nk}$.
\end{proof}

\section{Simplification of a formal Hamiltonian for families with small $h_{11}$ and $h_{n0}$\label{Se:Familieshn0}}

Now we consider the three-parametric unfolding of the Hamiltonian (\ref{Eq:hindiv}):
\begin{equation}\label{Eq:hfam}
H(z,\bar z; \varepsilon ,\mu_1,\mu_2 )= \varepsilon z\bar z+ \sum_{\substack{k+l\ge 3, m \ge 0 \cr k=l\pmod n}} h_{klm}(\mu_1,\mu_2 )z^k \bar z^l \varepsilon^m,
\end{equation}
where $h_{klm}(\mu_1,\mu_2 ) =h_{lkm}^*(\mu_1,\mu_2 )$, $h_{n00}(0,0)=0$, $ h_{220}(\mu_1,\mu_2 )\ne 0$,
\[ h_{n+1,1,0}^2(\mu_1,\mu_2 )- 4h_{220}(\mu_1,\mu_2 )h_{2n,0,0}(\mu_1,\mu_2 )\ne 0. \]
The purpose of this section is to prove that after some canonical formal transformations the Hamiltonian (\ref{Eq:hfam}) takes the form (\ref{Eq:hfinalC}). An alternative form (\ref{Eq:hfamalt}) is also derived.

As in Section~\ref{Se:hn0is0}, the $\delta$-degree is defined by (\ref{Eq:deltahn0is0}). Then
\begin{equation}\label{Eq:hfamgroop}
H(z,\bar z; \varepsilon ,\mu_1,\mu_2 )= \sum_{p\ge 1, m\ge 0} h_{(p)m}(z,\bar z;\mu_1,\mu_2 ) \varepsilon^m,
\end{equation}
where $h_{(p)m}$ is a homogeneous resonant polynomial in $(z,\bar z)$ of $\delta$-degree $p$ with coefficients depending on $(\mu_1,\mu_2)$. The terms of $\delta$-degree 1 are
\begin{equation}\label{Eq:h1fam}
\begin{array}{l}
h_{(1)1}(z,\bar z;\mu_1,\mu_2 )=z\bar z+h_{n01}(\mu_1,\mu_2 )z^n +h_{n01}^*(\mu_1,\mu_2 )\bar z^n, \\
h_{(1)m}(z,\bar z;\mu_1,\mu_2 )=h_{n0m}(\mu_1,\mu_2 )z^n +h_{n0m}^*(\mu_1,\mu_2 )\bar z^n, \quad m\ne 1 .
\end{array}
\end{equation}
All the transformations described below preserve the form of $h_{(1)m}$ (i.e. $h_{111}=1$ and $h_{11m}=0$ for $m\ne 1$), although the coefficients $h_{n0m}$ are changed.

As for an individual map, the normalisation of the terms of $\delta$-degree $p=2$ differs from that for $p\ge 3$, so it is convenient to consider the case $p=2$ separately.

\begin{lemma}
Let $H(z,\bar z; \varepsilon ,\mu_1,\mu_2 )$ be a formal series (\ref{Eq:hfamgroop}) with $h_{(1)m}(z,\bar z;\mu_1,\mu_2 )$ as in (\ref{Eq:h1fam}) and
\[ h_{(2)m}(z,\bar z;\mu_1,\mu_2 )= h_{22m}(\mu_1,\mu_2 )z^2\bar z^2 + h_{n+1,1,m}(\mu_1,\mu_2 )z^{n+1}\bar z \]
\[+ h_{n+1,1,m}^*(\mu_1,\mu_2 )z\bar z^{n+1} + h_{2n,0,m}(\mu_1,\mu_2 )z^{2n} +h_{2n,0,m}^*(\mu_1,\mu_2 )\bar z^{2n}.
\]
If $h_{220}(\mu_1,\mu_2 )\ne 0$ and $h_{n+1,1,0}^2(\mu_1,\mu_2 )- 4h_{220}(\mu_1,\mu_2 )h_{2n,0,0}(\mu_1,\mu_2 )\ne 0$ then there exists a formal canonical change of variables which transforms the Hamiltonian $H$ into $\widetilde H= \sum_{p\ge 1, m \ge 0} \tilde h_{(p)m}\varepsilon^m$ with
\begin{equation}\label{Eq:nuhn0}
\tilde h_{n00}(\mu_1,\mu_2 ) = h_{n00} (\mu_1, \mu_2)\exp \left( -\frac{i}{2} \arg \left( h_{2n,0,0}(\mu_1, \mu_2)-\frac{h_{n+1,1,0}^2(\mu_1, \mu_2)}{4h_{220}(\mu_1, \mu_2)} \right) \right),
\end{equation}
$\tilde h_{11m}(\mu_1,\mu_2 )=h_{11m}(\mu_1,\mu_2 )=0$ for $m\ne 1$, $\tilde h_{111}(\mu_1,\mu_2 )=h_{111}(\mu_1,\mu_2 )=1$,
\begin{equation}\label{Eq:h2}
\tilde h_{(2)m}(z,\bar z;\mu_1,\mu_2 ) =\tilde h_{22m}(\mu_1,\mu_2 )z^2\bar z^2 +\tilde h_{2n,0,m}(\mu_1,\mu_2 )(z^{2n}+\bar z^{2n}),
\end{equation}
where $\tilde h_{2n,0,m}(\mu_1,\mu_2 ) \in \mathbb{R}$ for all $m \ge 0$, $\tilde h_{2n,0,0}(\mu_1,\mu_2)= \left| h_{2n,0,0}(\mu_1,\mu_2)- \frac{h_{n+1,1,0}^2(\mu_1,\mu_2)}{4h_{220}(\mu_1,\mu_2)} \right|$.
\end{lemma}

\begin{proof}
The lemma is proved by induction. For $m=0$ we use the change of variables $(z, \bar z ) \mapsto \Phi_\chi^{1}(z, \bar z )$ with $\chi =\beta z^n +\beta^* \bar z^n$, $ \beta = -\frac{h_{n+1,1,0}}{2inh_{220}}$. From (\ref{Eq:htransL}) we obtain the coefficients of the terms of $\delta$-degree 1:
\[\begin{array}{l}
\tilde h_{n0m}=h_{n0m} \quad {\rm for\ }m=0 {\rm \ and\ }m\ge 2 ,\\
\tilde h_{n01}=h_{n01}+in\beta =h_{n01}-\frac{h_{n+1,1,0}}{2h_{220}}
\end{array}\]
and of $\delta$-degree 2, $m=0$:
\[\begin{array}{l}
\tilde h_{220}=h_{220} \quad {\rm for\ }n\ge 4, \\
\tilde h_{220}=h_{220}-in^2\beta^*h_{300} \quad {\rm for\ }n=3 ,\\
\tilde h_{n+1,1,0}=h_{n+1,1,0}+2in\beta h_{220} =0,\\
\tilde h_{2n,0,0}=h_{2n,0,0}+in\beta h_{n+1,1,0} -n^2\beta^2h_{220}=h_{2n,0,0}-\frac{h_{n+1,1,0}^2}{4h_{220}} \ne 0.
\end{array}\]
Note that $\tilde h_{220}$ is changed only in the case of a resonance of order $n=3$, but $\tilde h_{220}(0,0) =h_{220}(0,0)$ as $h_{300}(0,0)=0$. After the rotation $z \mapsto z \exp \left( -i \frac{\arg \tilde h_{2n,0,0}}{2n} \right) $ the coefficient $\tilde h_{2n,0,0} \mapsto |\tilde h_{2n,0,0} |$. Thus $\tilde h_{(2)0}$ has the form (\ref{Eq:h2}).

Assume that $h_{(2)m}$ has the form (\ref{Eq:h2}) for all $m\le M-1$. In order to normalise the term $h_{(2)M} $ we use successively two substitutions of the type $(z,\bar z)\mapsto\Phi_\chi^{\varepsilon^M}(z,\bar z )$. Then
\[ \widetilde H=H+\varepsilon^M L_\chi H +\sum_{k\ge 2} \frac{\varepsilon^{kM}}{k!}L^k_\chi H .\]
First we take $\chi =\beta z^n +\beta^* \bar z^n$ and then $\chi =\alpha z\bar z$. Choosing an appropriate value of $\beta$ we eliminate $h_{n+1,1,M}$, and we get $\Im h_{2n,0,M}=0$ by choosing $\alpha $. After the first substitution with $ \beta = -\frac{h_{n+1,1,M}}{2inh_{220}}$ the coefficients of the terms of $\delta$-degree 1 are
\[\begin{array}{l}
\tilde h_{n0m}=h_{n0m} \quad {\rm for\ }m\ne M+1, \\
\tilde h_{n,0,M+1}=h_{n,0,M+1}+in\beta =h_{n,0,M+1}-\frac{h_{n+1,1,M}}{2h_{220}} ,
\end{array}\]
while for the terms of $\delta$-degree 2 the coefficients with $m \le M-1$ are not changed, and for $m=M$
\[\begin{array}{l}
\tilde h_{22M}=h_{22M} \quad {\rm for\ }n\ge 4 ,\\
\tilde h_{22M}=h_{22M}-in^2\beta^*h_{300} \quad {\rm for\ }n=3 ,\\
\tilde h_{n+1,1,M}=h_{n+1,1,M}+2in\beta h_{220}=0,\\
\tilde h_{2n,0,M}=h_{2n,0,M}+in\beta h_{n+1,1,0}=h_{2n,0,M} -\frac{h_{n+1,1,M}h_{n+1,1,0}}{2h_{220}}.
\end{array}\]
The change $(z, \bar z ) \mapsto \Phi_\chi^{\varepsilon^M}(z, \bar z )$ with $\chi=\alpha z\bar z$ leads to $\tilde h_{klm} =h_{klm}$ for $m \le M-1$ and
\[ \tilde h_{klM} =h_{klM} -i\alpha (k-l)h_{kl0} \quad {\rm for\ }k>l.\]
So,
\[ \tilde h_{n,0,M}=h_{n,0,M}-in\alpha h_{n00} \]
and $\tilde h_{22M}=h_{22M}$, $\tilde h_{n+1,1,M}=h_{n+1,1,M}=0$,
\[ \tilde h_{2n,0,M}=h_{2n,0,M}-2in\alpha h_{2n,0,0}. \]
Choosing $\alpha =\frac{\Im h_{2n,0,M}}{2nh_{2n,0,0}}$ we get $\tilde h_{2n,0,M} \in \mathbb{R}$.
\end{proof}

Now we introduce new parameters $(\nu_1 ,\nu_2 )$ instead of $(\mu_1,\mu_2 )$. Let
\[ \tilde h_{n00}(\mu_1,\mu_2 )=\nu=\nu_1+i\nu_2,\]
where $\tilde h_{n00}(\mu_1,\mu_2 )$ is determined by the formula (\ref{Eq:nuhn0}). Let $\mu_1$ and $\mu_2$ be expressed in terms of $(\nu_1,\nu_2 )$ as power series. Then the Hamiltonian $H$ takes the form of a series in the five variables $(z,\bar z ; \varepsilon , \nu_1, \nu_2 )$. Let $\Upsilon =(\varepsilon , \nu_1,\nu_2 ) $, $\mathbf{m}=(m_1,m_2,m_3)$, $|\mathbf{m}|=m_1+m_2+m_3$, $\Upsilon^{\mathbf{m}}= \varepsilon^{m_1} \nu_1^{m_2}\nu_2^{m_3}$. After collecting the terms of the same $\delta$-degree the Hamiltonian $H$ takes the form
\[ H=\varepsilon z\bar z +\nu z^n +\nu^* \bar z^n +\sum_{\substack{m_1\ge 1 \cr m_2,m_3\ge 0}} (b_{0\mathbf{m}}z^n +b^*_{0\mathbf{m}}\bar z^n) \Upsilon^\mathbf{m} \]
\begin{equation}\label{Eq:hdeltanu}
+ az^2\bar z^2 +c(z^{2n }+\bar z^{2n})+ \sum_{|\mathbf{m}| \ge 1} (a_{0\mathbf{m}}z^2\bar z^2 +c_{0\mathbf{m}}(z^{2n}+\bar z^{2n}))\Upsilon^\mathbf{m}
\end{equation}
\[ +\sum_{p\ge 3, |\mathbf{m}| \ge 0} h_{(p)\mathbf{m}} (z,\bar z) \Upsilon^\mathbf{m} .\]
Here the terms of $\delta$-degree 1 are in the form (\ref{Eq:h1fam}) and the terms of $\delta$-degree 2 are already in the form (\ref{Eq:h2}), $a=h_{220}(0,0)$, $c=\left| h_{2n,0,0}(0,0)- \frac{h_{n+1,1,0}^2(0,0)}{4h_{220}(0,0)} \right|$. The next proposition completes the transformation of the Hamiltonian to the normal form.

\begin{proposition}\label{Pro:Famhn0is0}
Let (\ref{Eq:hdeltanu}) be a formal series where $h_{(p)\mathbf{m}}$ are real-valued resonant polynomials in $(z, \bar z)$ of $\delta$-degree $p$ and $ac\ne 0$. There exists a formal canonical change of variables which transforms the Hamiltonian $H$ into
\begin{equation}\label{Eq:hfinalC}
\widetilde H(z,\bar z; \Upsilon )=\varepsilon z \bar z +\nu z^n+ \nu^*\bar z^n+a (z\bar{z})^{2}+c(z^{2n}+\bar z^{2n}) +(z\bar{z})^{2}\sum_{k+|\mathbf{m}|\ge 1} a_{k\mathbf{m}} {(z\bar{z})}^k \Upsilon^\mathbf{m}
\end{equation}
\begin{equation*}
+\sum_{\substack{k+m_1\ge 1 \cr m_2,m_3\ge 0}} {(z\bar{z})}^{2k} (b_{k\mathbf{m}}z^{n}+b_{k\mathbf{m}}^*\bar z^n) \Upsilon^\mathbf{m} + (z^{2n}+\bar z^{2n}) \sum_{k+|\mathbf{m}|\ge 1} c_{k\mathbf{m}} {(z\bar{z})}^{2k} \Upsilon^\mathbf{m} ,
\end{equation*}
where $a_{k\mathbf{m}}, c_{k\mathbf{m}} \in \mathbb{R}$, $b_{k\mathbf{m}} \in \mathbb{C}$.
\end{proposition}

\begin{proof}
Let the $\delta$-degree of a monomial $z^k\bar z^l$ be defined by (\ref{Eq:deltahn0is0}) as before. Now we introduce a new $\delta_{\Upsilon}$-degree for monomials in the five variables:
\begin{equation}\label{Eq:deltaepsilonorder}
\delta_{\Upsilon} (z^k \bar z^l \Upsilon^\mathbf{m})=\delta(z^k \bar z^l)+2|\mathbf{m}| .
\end{equation}
Then
\[ H(z, \bar z; \Upsilon )= \sum_{s \ge 2} h_s(z, \bar z; \Upsilon ), \]
where $h_s$ is a homogeneous polynomial of $\delta_{\Upsilon}$-degree $s$.
The main term, of $\delta_{\Upsilon}$-degree $s=2$, is already in the normal form:
\[ h_2(z, \bar z; \Upsilon ) = a (z\bar{z})^{2}+c(z^{2n}+\bar z^{2n}) .\]
We proceed by induction. Assume that $S \ge 3$ and $h_s(z, \bar z; \Upsilon )$ has the declared form for $s \le S-1$. The term of $\delta_{\Upsilon}$-degree $S$ has the form
\begin{equation} \label{Eq:hS}
h_S (z, \bar z; \Upsilon )= \sum_{0 \le |\mathbf{M}| \le \lfloor \frac{S-1}{2}\rfloor} h_{(S-2|\mathbf{M}|)\mathbf{M}} (z,\bar z) \Upsilon^\mathbf{M},
\end{equation}
where $h_{(S-2|\mathbf{M}|)\mathbf{M}} (z,\bar z)$ is a polynomial of $\delta$-degree $S-2|\mathbf{M}|$. The terms of $\delta$-degree 1 and 2 are already in the normal form, so we consider $S-2|\mathbf{M}|\ge 3$, i.e. $0\le |\mathbf{M}|\le \lfloor \frac{S-3}{2}\rfloor$.

Let $\chi$ be a homogeneous polynomial in $(z,\bar z)$ of $\delta$-degree $P-1=S-2|\mathbf{M}|-1$. After the substitution $(z, \bar z ) \mapsto \Phi_\chi^{\Upsilon^{\mathbf{M}}}(z, \bar z )$ the Hamiltonian takes the form
\[ \widetilde H = H+\Upsilon^{\mathbf{M}} L_\chi H + \sum_{j \ge 2} \frac{\Upsilon^{j\mathbf{M}}}{j!} L_\chi^j H \, . \]
Then
\[ \tilde h_s (z, \bar z; \Upsilon ) = h_s (z, \bar z; \Upsilon ) \quad \mbox{for \ }s \le S-1 \]
and
\[ \tilde h_S (z, \bar z; \Upsilon ) = h_S (z, \bar z; \Upsilon ) + \Upsilon^{\mathbf{M}}L(\chi) , \]
where $L$ is the homological operator (\ref{Eq:homop}) from Section \ref{Se:hn0is0}. So
\[\tilde h_{(P)\mathbf{m}}=h_{(P)\mathbf{m}} \quad {\rm for\ } \mathbf{m}\ne \mathbf{M}\]
and
\[\tilde h_{(P)\mathbf{M}}=h_{(P)\mathbf{M}}+L(\chi ).\]
It was shown in Section \ref{Se:hn0is0} that for odd $P$ there exists $\chi$ such that
\[ \tilde h_{(P)\mathbf{M}} =a_{P\mathbf{M}}z^P\bar z^P+ {(z\bar z )}^{P-1}( b_{P\mathbf{M}} z^{n}+b_{P\mathbf{M}}^* \bar z^{n}) \]
and for even $P$
\[\tilde h_{(P)\mathbf{M}}=a_{P\mathbf{M}}z^P\bar z^P+ c_{P\mathbf{M}} {(z\bar z )}^{P-2} (z^{2n}+\bar z^{2n}) ,\]
where $a_{P\mathbf{M}},c_{P\mathbf{M}} \in \mathbb{R}$, $b_{P\mathbf{M}} \in \mathbb{C}$. So, taking sequentially all sets of $(m_1,m_2,m_3)$ such that $0\le m_1+m_2+m_3\le \lfloor \frac{S-3}{2}\rfloor$, we transform all terms of $\delta_{\Upsilon}$-degree $S$ to the declared form.
\end{proof}

The following proposition establishes a normal form of the Hamiltonian useful for the study of bifurcations.
\begin{proposition}\label{Pro:Famhnois0}
Let (\ref{Eq:hdeltanu}) be a formal series where $a\ne 0$ and $h_{(p)\mathbf{m}}$ are real-valued resonant polynomials of $\delta$-degree $p$. There exists a formal canonical change of variables which transforms the Hamiltonian $H$ into
\begin{equation}\label{Eq:hfamalt}
\begin{array}{r}
\tilde H(z,\bar z; \Upsilon )=\varepsilon z \bar z +a (z\bar{z})^{2}+(z\bar{z})^{2}\sum_{k+|\mathbf{m}|\ge 1} a_{k\mathbf{m}} {(z\bar{z})}^k \Upsilon^\mathbf{m} \\
+\nu z^n+ \nu^*\bar z^n + \sum_{k\ge 1,k+m_1\ge 2, |\mathbf{m}|\ge 0} (c_{k\mathbf{m}}z^{kn}+c_{k\mathbf{m}}^*\bar z^{kn}) \Upsilon^\mathbf{m}
\end{array}
\end{equation}
where $a_{k\mathbf{m}}\in \mathbb{R}$, $c_{k\mathbf{m}}\in \mathbb{C} $, $c_{2\mathbf{0}}=c$.
\end{proposition}

\begin{proof}
We use the $\delta_{\Upsilon}$-degree defined by
\[ \delta_{\Upsilon}(z^k\bar z^l\Upsilon^{\mathbf{m}})=k+l+2|{\mathbf{m}}|\]
and normalise order by order as in Proposition~\ref{Pro:Famhn0is0}, but now we choose the complement of the image of the homological operator as in the proof of Proposition~\ref{Pro:hn0is0case1}.
\end{proof}

\section{Bifurcations for the case of degeneracy in the leading resonant term}\label{Se:biflead}

We discuss typical level sets of the Hamiltonian (\ref{Eq:hfamalt}) for $0<|\varepsilon |+|\nu |\le \epsilon_0$ with sufficiently small $\epsilon_0$. In order to investigate bifurcations we keep lower-order terms and drop those which do not change the picture qualitatively:
\[ H(z, \bar z ; \varepsilon , \tilde \nu )=\varepsilon z\bar z+a (z\bar{z})^{2} +\tilde \nu z^n+\tilde \nu^* \bar z^n +c(z^{2n}+\bar z^{2n}),\]
where $\tilde \nu =\nu+\varepsilon c_{1100}+\dots +\varepsilon^kc_{1k00}$, $k=\left\lfloor \frac{n}{2}\right\rfloor$. Applying the symplectic polar coordinates (\ref{Eq:zpolar}) and setting $\tilde \nu =\frac{1}{2} \gamma e^{i\beta}$ we get the model Hamiltonian in the form
\begin{equation}\label{Eq:modelp_B}
H(I, \varphi; \varepsilon, \gamma , \beta ) =\varepsilon I+ I^2+\gamma I^{n/2}\cos (n\varphi+\beta)+I^n\cos 2n\varphi\,.
\end{equation}
Here the coefficients $a$ and $c$ are normalized to unity with the help of a scaling applied to the variable $I$, the parameters $\varepsilon$ and $\gamma$, and the Hamiltonian function $H$. The case of $n=3$ is different from the case of $n\ge 4$.

\subsection{Typical level sets of the model Hamiltonian for $n\ge 4$}

Since
\begin{equation}\label{Eq:prIH}
\partial_I H = \varepsilon +2I+O(\gamma I^{n/2-1}) +O(I^{n-1}),
\end{equation}
for $\varepsilon >0$ and $n \ge 4$ the Hamiltonian $H$ has no critical points near the origin and the level sets are closed curves. Typical level sets for $\varepsilon <0$ depend on the values of the parameters $\varepsilon $ and $\gamma$. We consider separately the cases $\gamma \gg |\varepsilon |^{n/2}$ and $\gamma = O(|\varepsilon |^{n/2})$.

If ${|\varepsilon |}^{n/2}=o(\gamma )$ then the last term in (\ref{Eq:modelp_B}) can be omitted. Then the critical points of the Hamiltonian are located near $I=-\frac{\varepsilon}{2}$, $\cos (n\varphi+\beta)=\pm 1$. There is a chain of $n$ islands at a distance of order $|\varepsilon |^{1/2}$ from the origin. The size of the islands is of order $\gamma^{1/2} |\varepsilon |^{n/4}$.

For $\gamma = O\left({|\varepsilon | }^{n/2}\right)$ the last term in (\ref{Eq:modelp_B}) is also essential. In order to study bifurcations we introduce the scaling
\begin{equation}\label{Eq:scaling4p_B}
\gamma=4{\left( -\frac{\varepsilon}{2}\right) }^{n/2} b,\quad I=-\frac{\varepsilon}{2}+{\left( -\frac{\varepsilon}{2}\right) }^{n/2}J,\quad \psi =n\varphi ,\quad H={\left( -\frac{\varepsilon}{2}\right) }^n\bar H\,.
\end{equation}
After this scaling the Hamiltonian takes the form (skipping the constant term) $\bar H(J,\psi)=\bar H_0(J,\psi)+ O\left( {\left( -\frac{\varepsilon}{2}\right) }^{n/2-1} \right) $, where
\begin{equation}\label{Eq:barh0_4p_B}
\bar H_0(J,\psi)=J^2+4b\cos (\psi+\beta)+\cos 2\psi\,.
\end{equation}
The equilibrium points of $\bar H_0(J,\psi)$ are located at the points where $J=0$ and
\begin{equation}\label{Eq:hyperb}
b\sin (\psi+\beta)+\sin \psi \cos \psi =0.
\end{equation}
To determine how many solutions this equation has, we set $\sin \psi =x$ and $\cos \psi =y$. Then equation (\ref{Eq:hyperb}) is equivalent to the system
\[ \left\lbrace {
\begin{array}{c}
xb\cos \beta +yb \sin \beta +xy =0 \\
x^2 +y^2=1
\end{array} } \right.
\]
The first equation describes a hyperbola (or two straight lines if $\cos \beta=0$, $\sin \beta =0$, or $b=0$). One branch of the hyperbola (or one of the straight lines) passes through the origin. So at least two solutions exist for arbitrary $b$ and $\beta$.
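Let us spell out the elementary computation behind the last claim (this check is ours): since $\sin (\psi +\beta )=\sin \psi \cos \beta +\cos \psi \sin \beta$,
\[
b\sin (\psi +\beta )+\sin \psi \cos \psi = xb\cos \beta +yb\sin \beta +xy ,
\]
so (\ref{Eq:hyperb}) is indeed equivalent to the system above. Moreover, the branch passing through the origin is unbounded and therefore crosses the unit circle in at least two points, which gives the two guaranteed solutions.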
\begin{figure}[t]
\begin{center}
\includegraphics[width=5cm]{figure1.png}
\end{center}
\caption{ Bifurcation diagram on the complex plane of $\nu=\gamma e^{i\beta }$ for $\varepsilon <0$. The domains $D_2$ and $D_2'$ are separated by the vertical line segment. \label{Fig:nbbifdiag}}
\end{figure}

There are two more solutions if the second branch of the hyperbola (or the second straight line) crosses the unit circle. The tangency condition for the unit circle and the second branch of the hyperbola is
\[ {|b\cos \beta|}^{2/3}+{|b \sin \beta|}^{2/3}=1. \]
On the complex plane of $\nu =\gamma e^{i\beta} =\nu_1+i\nu_2$ the corresponding curve is the astroid
\begin{equation}\label{Eq:astr}
{|\nu_1|}^{2/3}+{|\nu_2|}^{2/3}=2^{4/3}{\left( -\frac{\varepsilon}{2}\right) }^{n/3} \,
\end{equation}
shown in Fig.~\ref{Fig:nbbifdiag}. Typical critical level sets of the Hamiltonian corresponding to different values of the parameter $\nu$ (for $\varepsilon <0$) are shown in Fig.~\ref{Fig:h0ll}. The fragments shown in the figure are repeated $n$ times. On the boundary between $D_2$ and $D_2'$ (the vertical line segment) there is a chain of $2n$ islands (Fig.~\ref{Fig:h0ll}(c)). Outside the domain bounded by the astroid there is a chain of $n$ islands.

\begin{figure}[t]
\begin{center}
\includegraphics[width=2.9cm]{figure2a.png}\quad
\includegraphics[width=2.9cm]{figure2b.png}\quad
\includegraphics[width=2.9cm]{figure2c.png}\quad
\includegraphics[width=2.9cm]{figure2d.png}\\
(a)\kern2.9cm(b)\kern2.9cm(c)\kern2.9cm(d)
\end{center}
\caption{Critical level sets of the model Hamiltonian for $\varepsilon <0$ depending on the value of $\nu$ indicated in the bifurcation diagram of Fig.~\ref{Fig:nbbifdiag}: (a) $\nu \in D_2$, (b) $\nu \in D_2$, $\beta =0$, (c) $\nu$ on the boundary between $D_2$ and $D_2'$ ($\beta =\pm \pi /2$), (d) $\nu$ on the boundary between $D_2$ and $D_1$. \label{Fig:h0ll}}
\end{figure}

\subsection{Typical level sets of the model Hamiltonian for $n=3$}

For $\varepsilon >0$ the case of $n=3$ essentially differs from the case of $n\ge 4$, as the third term in (\ref{Eq:prIH}) cannot be omitted. The first three terms in (\ref{Eq:prIH}) are of the same order if $I$ and $\varepsilon $ are of the order of $\gamma^2$. So we use the following scaling in (\ref{Eq:modelp_B}):
\begin{equation}\label{Eq:scaling3b1}
\varepsilon =a \gamma^2, \quad I=\gamma^2 J, \quad H=\gamma^4 \bar H \,
\end{equation}
and get $\bar H = \bar H_0 +O(\gamma^2)$, where
\begin{equation}\label{Eq:n3sc1}
\bar H_0 =a J+J^{3/2}\cos (3\varphi +\beta ) +J^2.
\end{equation}

\begin{figure}[t]
\begin{center}
\includegraphics[width=5cm]{figure3.png}
\end{center}
\caption{ Bifurcation diagram on the plane $(\gamma ,\varepsilon )$ for $n=3$. \label{Fig:n3bifdiag}}
\end{figure}

\begin{figure}[t]
\begin{center}
\includegraphics[width=3cm]{figure4a.pdf} \kern0.2cm
\includegraphics[width=3cm]{figure4b.pdf} \kern0.2cm
\includegraphics[width=3cm]{figure4c.pdf} \kern0.2cm
\includegraphics[width=3cm]{figure4d.pdf} \\
(a)\kern2.8cm%
(b)\kern2.8cm%
(c)\kern2.8cm%
(d)%
\end{center}
\caption{Critical level sets of the model Hamiltonian depending on the value of $(\gamma , \varepsilon )$: (a)~$(\gamma , \varepsilon )\in D_1$, (b)~the boundary between $D_1$ and $D_1'$ ($\varepsilon =0$), (c)~$(\gamma , \varepsilon )\in D_1'$, (d)~the boundary between $D_1'$ and $D_0$ ($\varepsilon =\frac{9}{32} \gamma^2$). \label{Fig:n3b2} }
\end{figure}

The critical points of $\bar H_0$ are located at the points where $\cos(3\varphi +\beta)=\sigma_\varphi=\pm 1$ and
$$ a +\frac{3\sigma_\varphi}2J^{1/2}+2J=0\,.
$$
The last equation can be solved explicitly: setting $u=J^{1/2}\ge 0$ reduces it to the quadratic equation $2u^2+\frac32\sigma_\varphi u+a=0$. There are no real solutions for $a>a_0$, where $a_0=\frac{9}{32}$. If $a\in (0,a_0)$ there are two solutions for $\sigma_\varphi =-1$ and none for $\sigma_\varphi =1$. If $a<0$ there is one solution for $\sigma_\varphi =-1$ and one for $\sigma_\varphi =1$. The line
$$ \varepsilon =\frac{9}{32}\gamma^2 $$
on the plane $(\gamma, \varepsilon )$ bounds the domain in which the level sets of the Hamiltonian (\ref{Eq:n3sc1}) are closed curves (the domain $D_0$ in Fig.~\ref{Fig:n3bifdiag}). The critical level sets of the Hamiltonian $\bar H_0$ are illustrated in Figure~\ref{Fig:n3b2}.

For $\varepsilon <0$ the case of $n=3$ does not differ from the case of $n\ge 4$ (see the analysis in the previous subsection). In particular, the boundary between $D_1$ and $D_2$ is given by (\ref{Eq:astr}). In Fig.~\ref{Fig:n3bifdiag} we assume $\nu_1=\nu =\gamma $, $\nu_2=0$, which corresponds to $\beta =0$ ($\nu =\gamma e^{i\beta }$). Then the boundary between $D_1$ and $D_2$ is given by
\[ \varepsilon = -2^{-1/3} \gamma^{2/3}. \]
For $(\gamma , \varepsilon ) \in D_2$ the level sets of the model Hamiltonian are shown in Fig.~\ref{Fig:h0ll}.

\section{Families with twist degeneracy\label{Se:Familiesh33}}

The theorem about normal forms for families with twist degeneracy can be found in \cite{GG2014}. For three-parametric families it takes the following form.

\begin{proposition} \label{Pro:h33issmall}
Let $\Upsilon =(\varepsilon_1,\varepsilon_2,\varepsilon_3 )$, $\mathbf{m} = (m_1,m_2,m_3 )$, $\Upsilon^\mathbf{m} = \varepsilon_1^{m_1} \varepsilon_2^{m_2}\varepsilon_3^{m_3}$. If
\begin{equation}\label{Eq:h_originalh33small}
H(z,\bar z; \Upsilon)=\varepsilon_1 z\bar z+\varepsilon_2 {(z\bar z)}^2+\varepsilon_3 {(z\bar z)}^3+\sum_{\substack {k+l > 2 ,\ k,l,|\mathbf{m}|\ge 0, \\k=l \pmod n}} h_{kl\mathbf{m}}z^k \bar z^l\Upsilon^\mathbf{m}
\end{equation}
is a formal series such that $h_{kl\mathbf{m}}=h_{lk\mathbf{m}}^*$, $h_{22\mathbf{m}}=h_{33\mathbf{m}}=0$ and $h_{44\mathbf{0}}h_{n0\mathbf{0}}\ne 0$, then there exists a formal tangent-to-identity canonical change of variables which transforms the Hamiltonian $H$ into
\begin{eqnarray*}\label{Eq:hn0is0}
\widetilde H(z,\bar z;\Upsilon)=\varepsilon_1 z\bar z+\varepsilon_2 {(z\bar z)}^2+\varepsilon_3 {(z\bar z)}^3+ (z\bar{z})^{4}\sum_{k, |\mathbf{m}|\ge 0} a_{k\mathbf{m}} {(z\bar{z})}^k \Upsilon^\mathbf{m} \\
+(z^n+\bar z^n)\sum_{k, |\mathbf{m}|\ge 0} b_{k\mathbf{m}} {(z\bar{z})}^k \Upsilon^\mathbf{m},
\end{eqnarray*}%
where $a_{k\mathbf{m}},b_{k\mathbf{m}} \in \mathbb{R}$, $b_{0\mathbf{0}}=|h_{n0\mathbf{0}}|$. Moreover,
\begin{itemize}
\item if $3 \le n \le 8$, then $a_{k\mathbf{m}}=0$ for $k=n-5 \pmod n$ and $b_{k\mathbf{m}}=0$ for $k=n-1 \pmod n$;
\item if $n\ge 8$, then $b_{k\mathbf{m}}=0$ for $k=3 \pmod 4$.
\end{itemize}
The coefficients $a_{k\mathbf{m}}$ and $b_{k\mathbf{m}}$ are defined uniquely.
\end{proposition}

In the symplectic polar coordinates (\ref{Eq:zpolar}) the model Hamiltonian is
\begin{equation} \label{Eq:modelHtwist}
H(I,\varphi ;\Upsilon)=\varepsilon_1I +\varepsilon_2 I^2+\varepsilon_3I^3+I^4+b_0I^{n/2} \cos n\varphi \, .
\end{equation}
Its typical level sets depend on which of the two last terms is dominant. If the term $I^4$ is smaller than the last term ($n \le 7$, or $n=8$ and $|b_0|>1$) then for $\Upsilon =0$ the origin is not stable. If $n \ge 9$, or $n=8$ and $|b_0|<1$, then the origin is stable for $\Upsilon =0$. Bifurcations in the stable and unstable cases are briefly considered below.

\subsection{Stable case }

Let $n\ge 9$.
For $0\le I\ll 1$ the Hamiltonian (\ref{Eq:modelHtwist}) can be considered as a small perturbation of $H_0=\varepsilon_1I +\varepsilon_2 I^2+\varepsilon_3I^3+I^4$. The level sets of $H_0$ are circles for all values of the parameters. The Hamiltonian (\ref{Eq:modelHtwist}) does not possess this symmetry. The implicit function theorem implies that the last term can affect the shape of the level sets of $H$ only near the zeroes of $\partial_I H_0$. The equation
\begin{equation}\label{Eq:bdroots}
\partial_IH_0=\varepsilon_1 +2\varepsilon_2 I+3\varepsilon_3I^2+4I^3 =0
\end{equation}
has from $0$ to $3$ solutions, depending on $(\varepsilon_1,\varepsilon_2,\varepsilon_3 )$.

\begin{figure}[t]
\begin{center}
\includegraphics[width=3.8cm]{figure5a.png}\qquad
\includegraphics[width=3.8cm]{figure5b.png}\\
(a)\kern3.8cm(b)
\end{center}
\caption{Bifurcation diagram for the stable case on the unit sphere in the space of parameters $(\varepsilon_1, \varepsilon_2,\varepsilon_3)$. \label{Fig:bdtwist}}
\end{figure}

Let $D_k$ be the domain in the parameter space such that equation (\ref{Eq:bdroots}) has $k$ roots, {\it i.e.} $H$ has $2kn$ stationary points. These domains on the unit sphere in the space $(\varepsilon_1, \varepsilon_2,\varepsilon_3)$ are shown in Figure~\ref{Fig:bdtwist}. In $D_0$ all level sets of $H$ are closed curves around the origin. In $D_1$ there is one chain of islands. In $D_2$ typical level sets are similar to those in two-parametric families \cite{GG2014}.

\begin{figure}[t]
\begin{center}
\includegraphics[width=2.9cm]{figure6a.png}\quad
\includegraphics[width=2.9cm]{figure6b.png}\quad
\includegraphics[width=2.9cm]{figure6c.png}\quad
\includegraphics[width=2.9cm]{figure6d.png}\\
(a)\kern2.9cm(b)\kern2.9cm(c)\kern2.9cm(d)
\end{center}
\caption{The typical critical level sets for the Hamiltonian (\ref{Eq:modelHtwist}) in the stable case when the parameters $(\varepsilon_1,\varepsilon_2,\varepsilon_3) \in D_3$. \label{Fig:Stable}}
\end{figure}

In a neighbourhood of the point $\varepsilon_1=0$, $\varepsilon_2=0$, $\varepsilon_3=-1$ there is a tiny domain $D_3$. When the parameters are in $D_3$, equation (\ref{Eq:bdroots}) has three roots. They correspond to three sets of hyperbolic and three sets of elliptic stationary points of the model Hamiltonian (\ref{Eq:modelHtwist}). Some of the possible corresponding critical level sets are shown in Figure~\ref{Fig:Stable}. The fragment shown in the figure is repeated $n$ times around the origin.

\begin{figure}[t]
\begin{center}
\includegraphics[width=3.8cm]{figure7a.png}\quad
\includegraphics[width=3.8cm]{figure7b.png}\\
(a)\kern3.8cm(b)
\end{center}
\caption{Bifurcation diagram for the unstable case on the unit sphere in the space of parameters $(\varepsilon_1, \varepsilon_2,\varepsilon_3)$. \label{Fig:bdtwistunst}}
\end{figure}

\subsection{Unstable case}

\begin{figure}[t]
\begin{center}
\includegraphics[width=3.8cm]{figure8a.png}\quad
\includegraphics[width=3.8cm]{figure8b.png} \quad
\includegraphics[width=3.8cm]{figure8c.png}\\
(a)\kern3.8cm(b)\kern3.8cm(c)
\end{center}
\caption{Some critical level sets for $n=8$, $b_0=2$. \label{Fig:bdunstln}}
\end{figure}

For $n \le 7$, or $n=8$ and $|b_0|>1$, the bifurcation diagram on the unit sphere in the space of parameters $(\varepsilon_1,\varepsilon_2,\varepsilon_3)$ is shown in Figure~\ref{Fig:bdtwistunst}. In $D_1$ and $D_1'$ typical level sets are the same as in one-parametric families. In $D_2$ and $D_2'$ typical level sets and their bifurcations are essentially the same as in two-parametric families.
In a neighbourhood of the points $\varepsilon_1=\varepsilon_2=0$, $\varepsilon_3 = \pm 1$ there are tiny domains with additional sets of stationary points. Some possible critical level sets for this case are shown in Figure~\ref{Fig:bdunstln}.
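We conclude with an elementary illustration of the three-root configuration behind the domain $D_3$ of the stable case (this numerical example is ours). For $(\varepsilon_1,\varepsilon_2,\varepsilon_3)=(-0.036,\,0.31,\,-1)$ the left-hand side of (\ref{Eq:bdroots}) factorises:
\[
\varepsilon_1 +2\varepsilon_2 I+3\varepsilon_3I^2+4I^3 = 4(I-0.1)(I-0.2)(I-0.45),
\]
so there are three positive roots. Replacing the roots by $0.1\lambda$, $0.2\lambda$, $0.45\lambda$ gives $(\varepsilon_1,\varepsilon_2,\varepsilon_3)=(-0.036\lambda^3,\,0.31\lambda^2,\,-\lambda)$, which after normalisation to the unit sphere tends to $(0,0,-1)$ as $\lambda \to 0$, in agreement with the location of the tiny domain $D_3$ described above.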
\section{Introduction} \label{sec:introduction} \vspace{-.80ex} \jk{% \newenvironment{myquote}[1]% {\list{}{\leftmargin=#1\rightmargin=#1}\item[]}% {\endlist}% \begin{myquote}{0.2in} \vspace{-5pt} \centering \fontsize{9}{10}\selectfont{``\textsl{All animals are equal, but some animals are more equal than others.''} \vspace{-.80ex} \vspace{-.80ex} } \\ \begin{flushright} {\fontsize{7}{8}\selectfont{---George \citeauthor{orwell2003animal}, Animal Farm, 1945}} \end{flushright} \vspace{-13pt} \vspace{-.80ex} \end{myquote}} The success of deep neural networks to date depends strongly on the availability of labeled data which is costly and not always easy to obtain. Usually it is much easier to obtain small quantities of high-quality labeled data and large quantities of unlabeled data. The problem of how to best integrate these two different sources of information during training is an active pursuit in the field of semi-supervised learning~\citep{chap:semi06}. However, for a large class of tasks it is also easy to define one or more so-called ``weak annotators'', additional (albeit noisy) sources of \emph{weak supervision} based on heuristics or ``weaker'', biased classifiers trained on e.g.\ non-expert crowd-sourced data or data from different domains that are related. While easy and cheap to generate, it is not immediately clear if and how these additional weakly-labeled data can be used to train a stronger classifier for the task we care about. More generally, in almost all practical applications machine learning systems have to deal with data samples of variable quality. For example, in a large dataset of images only a small fraction of samples may be labeled by experts and the rest may be crowd-sourced using e.g.\ Amazon Mechanical Turk~\citep{Veit:2017}. In addition, in some applications, labels are intentionally perturbed due to privacy issues~\citep{wainwright2012privacy,Papernot:2016}. Assuming we can obtain a large set of weakly-labeled data in addition to a much smaller training set of ``strong'' labels, the simplest approach is to expand the training set by including the weakly-supervised samples (all samples are equal). Alternatively, one may pretrain on the weak data and then fine-tune on observations from the true function or distribution (which we call strong data). Indeed, it has recently been shown that a small amount of expert-labeled data can be augmented in such a way by a large set of raw data, with labels coming from a heuristic function, to train a more accurate neural ranking model~\citep{Dehghani:2017:SIGIR}. The downside is that such approaches are oblivious to the amount or source of noise in the labels. In this paper, we argue that treating weakly-labeled samples uniformly (i.e.\ each weak sample contributes equally to the final classifier) ignores potentially valuable information of the label quality. Instead, we propose Fidelity-Weighted Learning\xspace (FWL\xspace), a Bayesian semi-supervised approach that leverages a small amount of data with true labels to generate a larger training set with \emph{confidence-weighted weakly-labeled samples}, which can then be used to modulate the fine-tuning process based on the fidelity (or quality) of each weak sample. 
By directly modeling the inaccuracies introduced by the weak annotator\xspace in this way, we can control the extent to which we make use of this additional source of weak supervision: more for confidently-labeled weak samples close to the true observed data, and less for uncertain samples further away from the observed data. We propose a setting consisting of two main modules. One is called the student\xspace and is in charge of learning a suitable data representation and performing the main prediction task; the other is the teacher\xspace, which modulates the learning process by modeling the inaccuracies in the labels. We explain our approach in much more detail in Section~\ref{sec:proposed-method}, but at a high level it works as follows (see Figure~\ref{fig:model}): We pretrain the student network on weak data to learn an initial task-dependent data representation which we pass to the teacher along with the strong data. The teacher then learns to predict the strong data, but crucially, \emph{based on the student's learned representation}. This then allows the teacher to generate new labeled training data from unlabeled data, and in the process correct the student's mistakes, leading to a better final data representation and better final predictor. \input{Images/model} We introduce the proposed FWL\xspace approach in more detail in Section~\ref{sec:proposed-method}. We then present our experimental setup in Section~\ref{sec:experiments} where we evaluate FWL\xspace on a toy task and two real-world tasks, namely document ranking and sentence sentiment classification. In all cases, FWL\xspace outperforms competitive baselines and yields state-of-the-art results, indicating that FWL\xspace makes better use of the limited true labeled data and is thereby able to learn a better and more meaningful task-specific representation of the data. Section~\ref{sec:discussion} provides an analysis of the bias-variance trade-off and the learning rate, suggesting also to view FWL\xspace from the perspective of Vapnik's learning with privileged information (LUPI) framework~\citep{vapnik2015learning}. Section~\ref{sec:relatedwork} situates FWL\xspace relative to related work, and we end the paper by drawing the main conclusions in Section~\ref{sec:conclusion}. \section{Fidelity-Weighted Learning\xspace (FWL\xspace)} \label{sec:proposed-method} \vspace{-.80ex} In this section, we describe our proposed FWL\xspace approach for semi-supervised learning when we have access to weak supervision (e.g.\ heuristics or weak annotators). We assume we are given a large set of unlabeled data samples, a heuristic labeling function called the \emph{weak annotator\xspace}, and a small set of high-quality samples labeled by experts, called the \emph{strong dataset}, consisting of tuples of training samples $x_i$ and their true labels $y_i$, i.e. $\mathcal{D}_s=\{(x_i,y_i)\}$. We consider the latter to be observations from the true target function that we are trying to learn. We use the weak annotator\xspace to generate labels for the unlabeled samples. The generated labels are noisy due to the limited accuracy of the weak annotator\xspace. This gives us the \emph{weak dataset} consisting of tuples of training samples $x_i$ and their weak labels $\tilde{y}_i$, i.e. $\mathcal{D}_w=\{(x_i, \tilde{y}_i)\}$. Note that we can generate a large amount of weak training data $\mathcal{D}_w$ at almost no cost using the weak annotator\xspace. In contrast, we have only a limited amount of observations from the true function, i.e. 
$|\mathcal{D}_s| \ll |\mathcal{D}_w|$. Our proposed setup comprises a neural network called the \textbf{student\xspace} and a Bayesian function approximator called the \textbf{teacher\xspace}. The training process consists of three phases, which we summarize in Algorithm~\ref{alg:main} and Figure~\ref{fig:model}. \input{alg_main} \textbf{Step 1} \emph{Pre-train the student\xspace on $\mathcal{D}_w$ using weak labels generated by the weak annotator\xspace}. The main goal of this step is to learn a \emph{task-dependent} representation of the data as well as pretraining the student\xspace. The student\xspace function is a neural network consisting of two parts. The first part $\psi(.)$ learns the data representation and the second part $\phi(.)$ performs the prediction task (e.g. classification). Therefore the overall function is $\hat{y}=\phi(\psi(x_i))$. The student\xspace is trained on all samples of the weak dataset $\mathcal{D}_w=\{(x_i, \tilde{y}_i)\}$. For brevity, in the following, we will refer to both the data sample $x_i$ and its representation $\psi(x_i)$ by $x_i$ when it is obvious from the context. From the self-supervised feature learning point of view, we can say that representation learning in this step is solving a surrogate task of approximating the expert knowledge, for which a noisy supervision signal is provided by the weak annotator\xspace. \textbf{Step 2} \emph{Train the teacher\xspace on the strong data $(\psi(x_j),y_j) \in \mathcal{D}_s$ represented in terms of the student representation $\psi(.)$ and then use the teacher\xspace to generate a soft dataset $\mathcal{D}_{sw}$ consisting of $\langle \textrm{sample}, \textrm{predicted label}, \textrm{ confidence} \rangle$ for \textbf{all} data samples.} We use a Gaussian process as the teacher\xspace to capture the label uncertainty in terms of the student\xspace representation, estimated w.r.t.\ the strong data. We explain the finer details of the $\mathcal{GP}$ in Appendix~\ref{app:GP}, and just present the overall description here. A prior mean and co-variance function are chosen for the $\mathcal{GP}$. The embedding function $\psi(\cdot)$ learned in Step 1 is then used to map the data samples to dense vectors as input to the $\mathcal{GP}$. We use the representation learned by the student\xspace in the previous step to compensate for the lack of data in $\mathcal{D}_s$, so that the teacher\xspace can benefit from the knowledge learned from the large quantity of weakly annotated data. This way, we also let the teacher\xspace see the data through the lens of the student\xspace. The $\mathcal{GP}$ is trained on the samples from $\mathcal{D}_s$ to learn the posterior mean $\bm{m}_{\rm post}$ (used to generate soft labels) and posterior co-variance $K_{\rm post}(.,.)$ (which represents label uncertainty). We then create the \emph{soft dataset} $\mathcal{D}_{sw}=\{(x_t,\bar{y}_t)\}$ using the posterior $\mathcal{GP}$, input samples $x_t$ from $\mathcal{D}_w \cup \mathcal{D}_s$, and predicted labels $\bar{y}_t$ with their associated uncertainties as computed by $T(x_t)$ and $\Sigma(x_t)$: \begin{eqnarray*} \tfunc(x_t) &=& g(\bm{m}_{\rm post}(x_t))\\ \Sigma(x_t) &=& h(K_{\rm post}(x_t,x_t)) \vspace{-.80ex} \end{eqnarray*} The generated labels are called \emph{soft labels}. Therefore, we refer to $\mathcal{D}_{sw}$ as a soft dataset. $g(.)$ transforms the output of the $\mathcal{GP}$ to a suitable output space. For example, in classification tasks, $g(.)$ would be the softmax function to produce probabilities that sum up to one. 
For multidimensional-output tasks where a vector of variances is provided by the $\mathcal{GP}$, the vector $K_{\rm post}(x_t,x_t)$ is passed through an aggregating function $h(.)$ to generate a scalar value for the uncertainty of each sample. Note that we train the $\mathcal{GP}$ only on the strong dataset $\mathcal{D}_s$ but then use it to generate soft labels $\bar{y}_t = \tfunc(x_t)$ and uncertainty $\Sigma(x_t)$ for samples belonging to $\mathcal{D}_{sw}=\mathcal{D}_w\cup \mathcal{D}_s$. In practice, we furthermore divide the space of data into several regions and assign each region a separate $\mathcal{GP}$ trained on samples from that region. This leads to a better exploration of the data space and makes use of the inherent structure of the data. The resulting algorithm, called clustered $\mathcal{GP}$, gave better results than a single $\mathcal{GP}$. See Appendix~\ref{sec:appendix_CGP} for a detailed description and empirical observations which make the use of multiple $\mathcal{GP}$s reasonable. \textbf{Step 3} \emph{Fine-tune the weights of the student\xspace network on the soft dataset, while modulating the magnitude of each parameter update by the corresponding teacher\xspace-confidence in its label.} The student\xspace network of Step 1 is fine-tuned using samples from the soft dataset $\mathcal{D}_{sw}=\{(x_t, \bar{y}_t)\}$ where $\bar{y}_t=\tfunc(x_t)$. The corresponding uncertainty $\Sigma(x_t)$ of each sample is mapped to a confidence value according to Equation~\ref{eqn:eta2} below, and this is then used to determine the step size for each iteration of the stochastic gradient descent (SGD). So, intuitively, for data points where we have true labels, the uncertainty of the teacher\xspace is almost zero, which means we have high confidence and a large step-size for updating the parameters. However, for data points where the teacher\xspace is not confident, we down-weight the training steps of the student\xspace. This means that at these points, we keep the student\xspace function as it was trained on the weak data in Step 1. More specifically, we update the parameters of the student\xspace by training on $\mathcal{D}_{sw}$ using SGD: \begin{eqnarray*} \pmb{w}^* &=& \argmin_{\pmb{w} \in \mathcal{W}} \> \frac{1}{N}\sum_{(x_t,\bar{y}_t) \in \mathcal{D}_{sw}}l(\pmb{w}, x_t, \bar{y}_t) + \mathcal{R}(\pmb{w}), \\ \pmb{w}_{t+1} &=& \pmb{w}_t - \eta_t(\nabla l(\pmb{w},x_t,\bar{y}_t) + \nabla \mathcal{R}(\pmb{w})) \end{eqnarray*} where $l(\cdot)$ is the per-example loss, $\eta_t$ is the total learning rate, $N$ is the size of the soft dataset $\mathcal{D}_{sw}$, $\pmb{w}$ denotes the parameters of the student\xspace network, and $\mathcal{R(.)}$ is the regularization term. We define the total learning rate as $\eta_t = \eta_1(t)\eta_2(x_t)$, where $\eta_1(t)$ is the usual learning rate of our chosen optimization algorithm that anneals over training iterations, and $\eta_2(x_t)$ is a function of the label uncertainty $\Sigma(x_t)$ that is computed by the teacher\xspace for each data point. Multiplying these two terms gives us the total learning rate. In other words, $\eta_2$ represents the \emph{fidelity} (quality) of the current sample, and is used to multiplicatively modulate $\eta_1$. Note that the first term does not necessarily depend on each data point, whereas the second term does. 
We propose \begin{equation} \label{eqn:eta2} \eta_2(x_t) = \exp[-\beta \Sigma(x_t)], \end{equation} to exponentially decrease the learning rate for data point $x_t$ if its corresponding soft label $\bar{y}_t$ is unreliable (far from a true sample). In Equation~\ref{eqn:eta2}, $\beta$ is a positive scalar hyper-parameter. Intuitively, a small $\beta$ results in a student\xspace which listens more carefully to the teacher\xspace and copies its knowledge, while a large $\beta$ makes the student\xspace pay less attention to the teacher\xspace, staying with its initial weak knowledge. More concretely, as $\beta \to 0$, the student\xspace places more trust in the labels $\bar{y}_t$ estimated by the teacher\xspace and copies its knowledge. On the other hand, as $\beta \to \infty$, the student\xspace puts less weight on the extrapolation ability of the $\mathcal{GP}$, and its parameters are not affected by the correcting information from the teacher\xspace. \section{Experiments} \label{sec:experiments} \vspace{-.80ex} In this section, we apply FWL\xspace first to a toy problem and then to two different real tasks: \emph{document ranking} and \emph{sentiment classification}. The neural networks are implemented in TensorFlow~\citep{tensorflow2015-whitepaper,tang2016:tflearn}. GPflow~\citep{GPflow2017} is employed for developing the $\mathcal{GP}$ modules. For both tasks, we evaluate the performance of our method compared to the following baselines: \begin{enumerate}[leftmargin=*] \vspace{-1.5ex} \setlength{\topsep}{0.1pt} \setlength{\partopsep}{0.1pt} \setlength{\itemsep}{0.1pt} \setlength{\parskip}{0.1pt} \setlength{\parsep}{0.1pt} \item \textbf{WA}. The weak annotator\xspace, i.e. the unsupervised method used for annotating the unlabeled data. \item \textbf{NN$_{\text{W}}$}. The student\xspace trained only on weak data. \item \textbf{NN$_{\text{S}}$}. The student\xspace trained only on strong data. \item \textbf{NN$_{\text{S}^+\text{/W}}$}. The student\xspace trained on samples that are alternately drawn from $\mathcal{D}_w$ without replacement and from $\mathcal{D}_s$ with replacement. Since $|\mathcal{D}_s| \ll |\mathcal{D}_w|$, this oversamples the strong data. \item \textbf{NN$_{\text{W} \to \text{S}}$}. The student\xspace trained on the weak dataset $\mathcal{D}_w$ and fine-tuned on the strong dataset $\mathcal{D}_s$. \item \textbf{NN$_{\text{W}^\omega \to \text{S}}$}. The student\xspace trained on the weak data, where the step size of each weak sample is weighted by a fixed value $0 \leq \omega \leq 1$, and then fine-tuned on strong data. As an approximation of the optimal value for $\omega$, we use the mean of the $\eta_2$ values of our model (see below). \item \textbf{FWL\xspace$_{unsuprep}$}. The representation in the first step is trained in an unsupervised way\footnote{\tiny{In the document ranking task, as the representation of documents and queries, we use weighted averaging over pretrained embeddings of their words based on their inverse document frequency~\citep{Dehghani:2017:SIGIR}. In the sentiment analysis task, we use skip-thought vectors~\citep{kiros2015skip}.}} and the student\xspace is trained on examples labeled by the teacher\xspace using the confidence scores. \item \textbf{FWL\xspace$\backslash\Sigma$}. The student\xspace trained on the weakly labeled data and fine-tuned on examples labeled by the teacher\xspace without taking the confidence into account. This baseline is similar to the approach of \citet{Veit:2017}. \item \textbf{FWL\xspace}.
Our FWL\xspace model, i.e.\ the student\xspace trained on the weakly labeled data and fine-tuned on examples labeled by the teacher\xspace using the confidence scores. \vspace{-1.5ex} \end{enumerate} In the following, we introduce each task and the results produced for it; more details about the exact student\xspace network and teacher\xspace $\mathcal{GP}$ for each task are given in the appendix. \subsection{Toy Problem} \label{sec:toy_exmpale} \vspace{-.80ex} \input{Images/toy_example} We first apply FWL\xspace to a one-dimensional toy problem to illustrate the various steps. Let $f_t(x)=\sin(x)$ be the true function (red dotted line in Figure~\ref{fig:toy_plot1}) from which a small set of observations $\mathcal{D}_s=\{x_j,y_j\}$ is provided (red points in Figure~\ref{fig:toy_plot2}). These observations might be noisy, in the same way that labels obtained from a human labeler could be noisy. A weak annotator\xspace function $f_{w}(x)=2\,\mathrm{sinc}(x)$ (magenta line in Figure~\ref{fig:toy_plot1}) is provided, as an approximation to $f_t(.)$. The task is to obtain a good estimate of $f_t(.)$ given the set $\mathcal{D}_s$ of strong observations and the weak annotator\xspace function $f_{w}(.)$. We can easily obtain a large set of observations $\mathcal{D}_w=\{x_i,\tilde{y}_i\}$ from $f_{w}(.)$ at almost no cost (magenta points in Figure~\ref{fig:toy_plot1}). We consider two experiments: \begin{enumerate}[leftmargin=*] \vspace{-1.5ex} \setlength{\topsep}{0.1pt} \setlength{\partopsep}{0.1pt} \setlength{\itemsep}{0.1pt} \setlength{\parskip}{0.1pt} \setlength{\parsep}{0.1pt} \item A neural network trained on weak data and then fine-tuned on strong data from the true function, which is the most common semi-supervised approach (Figure~\ref{fig:toy_plot3}). \item A teacher-student framework trained with the proposed FWL\xspace approach. \vspace{-1.5ex} \end{enumerate} As can be seen in Figure~\ref{fig:toy_plot4}, FWL\xspace, by taking label confidence into account, gives a better approximation of the true hidden function. We repeated the above experiment 10 times. The average RMSE of the student\xspace with respect to the true function on a set of test points over those 10 experiments was as follows: \begin{enumerate}[leftmargin=*] \vspace{-1.5ex} \setlength{\topsep}{0.1pt} \setlength{\partopsep}{0.1pt} \setlength{\itemsep}{0.1pt} \setlength{\parskip}{0.1pt} \setlength{\parsep}{0.1pt} \item The student\xspace is trained on weak data (blue line in Figure~\ref{fig:toy_plot1}): $0.8406$, \item The student\xspace is trained on weak data and then fine-tuned on true observations (blue line in Figure~\ref{fig:toy_plot3}): $0.5451$, \item The student\xspace is trained on weak data, then fine-tuned with soft labels and confidence information provided by the teacher (blue line in Figure~\ref{fig:toy_plot4}): $0.4143$ (best). \vspace{-1.5ex} \end{enumerate} More details of the neural network and the $\mathcal{GP}$, along with the specification of the data used in the above experiment, are presented in Appendices~\ref{app:GP} and~\ref{app:setup:toy}. \subsection{Document Ranking} \vspace{-.80ex} This task is the core information retrieval problem and is challenging, as the ranking model needs to learn a representation for long documents and capture the notion of relevance between queries and documents. Furthermore, the size of publicly available datasets with query-document relevance judgments is unfortunately quite small ($\sim 250$ queries). We employ a state-of-the-art pairwise neural ranker architecture as the student\xspace~\citep{Dehghani:2017:SIGIR}.
In this model, ranking is cast as a regression task. Given each training sample $x$ as a triple of a query $q$ and two documents $d^+$ and $d^-$, the goal is to learn a function $\mathcal{F} : \{<q, d^+, d^->\} \rightarrow \mathbb{R}$, which maps each data sample $x$ to a scalar output value $y$ indicating the probability of $d^+$ being ranked higher than $d^-$ with respect to $q$. \begin{wrapfigure}{t}{0.3\textwidth} \vspace{-10pt} \centering \includegraphics[width=0.26\textwidth]{Images/ranker} \vspace{-10pt} \caption{\fontsize{8}{7}\selectfont{The student\xspace for the document ranking task.}} \label{fig:ranker} \vspace{-10pt} \end{wrapfigure} \mypar{The student\xspace} follows the architecture proposed in~\citep{Dehghani:2017:SIGIR}. The first layer of the network, i.e. the representation learning layer $\psi: \{<q, d^+, d^->\} \rightarrow \mathbb{R}^{m}$, maps each input sample to an $m$-dimensional real-valued vector. In general, besides learning embeddings for words, the function $\psi$ learns to compose word embeddings based on their global importance in order to generate query/document embeddings. The representation layer is followed by a simple fully-connected feed-forward network with a sigmoidal output unit to predict the probability of ranking $d^+$ higher than $d^-$. The general schema of the student\xspace is illustrated in Figure~\ref{fig:ranker}. More details are provided in Appendix~\ref{app:ranking}. \mypar{The teacher\xspace} is implemented by the clustered $\mathcal{GP}$ algorithm. See Appendix~\ref{app:GP} for more details. \mypar{The weak annotator\xspace} is BM25~\citep{Robertson:2009}, a well-known unsupervised method for scoring query-document pairs based on statistics of the matched terms. More details are provided in Appendix~\ref{app:wa:ranking}. A description of the data with weak labels and the data with true labels, as well as the setup of the document-ranking experiments, is presented in Appendix~\ref{app:setup:ranking}. \input{table_rank_res} \mypar{Results and Discussions} We conducted $k$-fold cross-validation on $\mathcal{D}_s$ (the strong data) and report two standard evaluation metrics for ranking: mean average precision (MAP) of the top-ranked $1,000$ documents and normalized discounted cumulative gain calculated for the top $20$ retrieved documents (nDCG@20). Table~\ref{tbl_main} shows the performance on both datasets. As can be seen, FWL\xspace provides a significant boost in performance on all datasets. In the ranking task, the student\xspace is designed in particular to be trained on weak annotations~\citep{Dehghani:2017:SIGIR}, hence training the network only on weak supervision, i.e. NN$_\text{W}$, performs better than NN$_\text{S}$. This can be due to the fact that ranking is a complex task requiring many training samples, while relatively few data with true labels are available. Alternating between strong and weak data during training, i.e. NN$_{\text{S}^+\text{/W}}$, seems to bring a small (but statistically significant) improvement. However, we can gain better results with the typical fine-tuning strategy, NN$_{\text{W} \to \text{S}}$. Comparing the performance of FWL$_{unsuprep}$ to FWL\xspace indicates, first of all, that learning the representation of the input data in a task-dependent way leads to better results than learning it in a task-independent, unsupervised or self-supervised way. Moreover, the dramatic drop in performance compared to FWL\xspace emphasizes the importance of pretraining the student\xspace on weakly labeled data.
We can gain an improvement by fine-tuning NN$_\text{W}$ using labels generated by the teacher\xspace without considering their confidence scores, i.e. FWL\xspace$\backslash\Sigma$. This means we simply augment the fine-tuning process by generating, with the teacher\xspace, a fine-tuning set that is better than $\mathcal{D}_s$ in terms of quantity and better than $\mathcal{D}_w$ in terms of quality. This baseline is equivalent to setting $\beta = 0$ in Equation~\ref{eqn:eta2}. However, we see a big jump in performance when we use FWL\xspace to include the estimated label quality from the teacher\xspace, leading to the best overall results. \subsection{Sentiment Classification} \vspace{-.80ex} In sentiment classification, the goal is to predict the sentiment (e.g., positive, negative, or neutral) of a sentence. Each training sample $x$ consists of a sentence $s$ and its sentiment label $\tilde{y}$. \mypar{The student\xspace} for the sentiment classification task is a convolutional model which has been shown to perform best on the dataset we used~\citep{Deriu:2017, Severyn:2015:SIGIR, Severyn:2015:SemEval,Deriu2016:SemEval}. The first layer of the network learns the function $\psi(.)$, which maps an input sentence $s$ to a dense vector as its representation. The inputs are first passed through an embedding layer mapping the sentence to a matrix $S \in \mathbb{R}^{m \times |s|}$, followed by a series of 1d convolutional layers with max-pooling. The representation layer is followed by feed-forward layers and a softmax output layer which returns the probability distribution over all three classes. Figure~\ref{fig:sentiment} presents the general schema of the architecture of the student\xspace. See Appendix~\ref{app:sentiment} for more details. \mypar{The teacher\xspace} for this task is modeled by a $\mathcal{GP}$. See Appendix~\ref{app:GP} for more details. \mypar{The weak annotator\xspace}\label{sentiment-WA} is a simple unsupervised lexicon-based method~\citep{Hamdan:2013,Kiritchenko:2014}, which estimates a distribution over sentiments for each sentence, based on the sentiment labels of its terms. More details are provided in Appendix~\ref{app:wa:sentiment}. The specification of the data with weak labels and the data with true labels, along with the detailed experimental setup, is given in Appendix~\ref{app:setup:sentiment}. \mypar{Results and Discussion} \input{table_sentiment_res_fig} We report Macro-F1, the official SemEval metric, in Table~\ref{tbl_main_sent}. We see that the proposed FWL\xspace is the best-performing approach. For this task, since the amount of data with true labels is larger than in the ranking task, the performance of NN$_{\text{S}}$ is acceptable. Alternately sampling from weak and strong data gives better results. Pretraining on weak labels and then fine-tuning the network on true labels further improves the performance. Weighting the gradient updates from weak labels during pretraining and fine-tuning the network with true labels, i.e. NN$_{\text{W}^\omega \to \text{S}}$, seems to work quite well in this task. For this task, as for the ranking task, learning the representation in an unsupervised, task-independent fashion, i.e. FWL$_{unsuprep}$, does not lead to good results compared to FWL\xspace. Similar to the ranking task, fine-tuning NN$_{\text{W}}$ on labels generated by the $\mathcal{GP}$ instead of on data with true labels, regardless of the confidence scores, works better than standard fine-tuning.
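As an aside, the lexicon-based weak annotator\xspace described above can be sketched in a few lines; the following is our hypothetical illustration (the actual lexicon and scoring details are in Appendix~\ref{app:wa:sentiment}).
\begin{verbatim}
# Hypothetical sketch of a lexicon-based weak annotator: average the
# polarity scores (in [-1, 1]) of known terms and turn the mean into
# a distribution over the three classes.
def weak_sentiment(sentence, lexicon):
    scores = [lexicon[w] for w in sentence.split() if w in lexicon]
    s = sum(scores) / max(len(scores), 1)   # mean term polarity
    return {"negative": max(-s, 0.0),
            "neutral": 1.0 - abs(s),
            "positive": max(s, 0.0)}
\end{verbatim}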
Besides the baselines, we also report the best performing systems which are also convolution-based models (\citealt{Rouvier:2016} on SemEval-14; \citealt{Deriu2016:SemEval} on SemEval-15). Using FWL\xspace and taking the confidence into consideration outperforms the best systems and leads to the highest reported results on both datasets. \section{Analysis} \label{sec:discussion} \vspace{-1.5ex} In this section, we provide further analysis of FWL\xspace by investigating the bias-variance trade-off and the learning rate. \subsection{Handling the Bias-Variance Trade-off} \label{sec:bias-variance} As mentioned in Section~\ref{sec:proposed-method}, $\beta$ is a hyperparameter that controls the contribution of weak and strong data to the training procedure. In order to investigate its influence, we fixed everything in the model and ran the fine-tuning step with different values of $\beta \in \{0.0, 0.1, 1.0, 2.0, 5.0\}$ in all the experiments. \begin{wrapfigure}{t}{0.5\textwidth} \vspace{-5pt} \centering \includegraphics[width=0.5\textwidth]{Images/beta} \vspace{-15pt} \caption{\fontsize{8}{7}\selectfont{Effect of different values for $\beta$}.} \label{fig:beta} \vspace{-10pt} \end{wrapfigure} Figure~\ref{fig:beta} illustrates the performance on the ranking (on Robust04 dataset) and sentiment classification tasks (on SemEval14 dataset). For both sentiment classification and ranking, $\beta=1$ gives the best results (higher scores are better). We also experimented on the toy problem with different values of $\beta$ in three cases: 1) having 10 observations from the true function (same setup as Section~\ref{sec:toy_exmpale}), marked as ``Toy Data'' in the plot, 2) having only 5 observations from the true function, marked as ``Toy Data *'' in the plot, and 3) having $f(x) = x + 1$ as the weak function, which is an extremely bad approximator of the true function, marked as ``Toy Data **'' in the plot. For the ``Toy Data'' experiment, $\beta=1$ turned out to be optimal (here, lower scores are better). However, for ``Toy Data *'', where we have an extremely small number of observations from the true function, setting $\beta$ to a higher value acts as a regularizer by relying more on weak signals, and eventually leads to better generalization. On the other hand, for ``Toy Data **'', where the quality of the weak annotator\xspace is extremely low, lower values of $\beta$ put more focus on the true observations. Therefore, $\beta$ lets us control the bias-variance trade-off in these extreme cases. \subsection{A Good Teacher is Better Than Many Observations} \label{sec:learning_pace} We now look at the rate of learning for the student\xspace as the amount of training data is varied. We performed two types of experiments for all tasks: In the first experiment, we use all the available strong data but consider different percentages of the entire weak dataset. In the second experiment, we fix the amount of weak data and provide the model with varying amounts of strong data. We use standard fine-tuning with similar setups as for the baseline models. Details on the experiments for the toy problem are provided in Appendix~\ref{app:setup:toy}. Figure~\ref{fig:learning_rate} presents the results of these experiments. In general, for all tasks and both setups, the student\xspace learns faster when there is a teacher\xspace. One caveat is in the case where we have a very small amount of weak data. 
In this case, the student\xspace cannot learn a suitable representation in the first step, and hence the performance of FWL\xspace is rather low, as expected. It is highly unlikely that this situation occurs in practice, as obtaining weakly labeled data is much easier than obtaining strongly labeled data. The empirical observation from Figure~\ref{fig:learning_rate} that our model learns more with less data can also be seen as evidence in support of another perspective on FWL\xspace, called \emph{learning using privileged information}~\citep{vapnik2015learning}. We elaborate more on this connection in Appendix~\ref{app:LUPI}. \input{Images/learning_rate} \subsection{Sensitivity of FWL\xspace to the Quality of the Weak Annotator} Our proposed setup in FWL\xspace requires defining a so-called ``weak annotator\xspace'' to provide a source of weak supervision for unlabeled data. In Section~\ref{sec:bias-variance} we discussed the role of the parameter $\beta$ in controlling the bias-variance trade-off by trying two weak annotators for the toy problem. In this section, we study how the quality of the weak annotator may affect the performance of FWL\xspace, for the task of document ranking as a real-world problem. To do so, besides BM25~\citep{Robertson:2009}, we use three other weak annotators: \begin{wrapfigure}{t}{0.4\textwidth} \vspace{-5pt} \centering \includegraphics[width=0.4\textwidth]{Images/sensitivity.png} \vspace{-10pt} \caption{\fontsize{8}{7}\selectfont{Performance of FWL\xspace versus performance of the corresponding weak annotator\xspace in the document ranking task, on the Robust04 dataset}.} \label{fig:sensitivity} \vspace{-5pt} \end{wrapfigure} the vector space model~\citep{salton1973specification} with the binary term occurrence (BTO) weighting scheme, the vector space model with the TF-IDF weighting scheme, both of which are weaker than BM25, and BM25+RM3~\citep{Abdul-jaleel:2004}, which uses RM3 as a pseudo-relevance feedback method on top of BM25, leading to better labels. Figure~\ref{fig:sensitivity} illustrates the performance of these four weak annotators\xspace in terms of their mean average precision (MAP) on the test data, versus the performance of FWL\xspace given the corresponding weak annotator\xspace. As expected, the performance of FWL\xspace depends on the quality of the employed weak annotator\xspace. The percentage of improvement of FWL\xspace over its corresponding weak annotator\xspace on the test data is also presented in Figure~\ref{fig:sensitivity}. As can be seen, the better the weak annotator\xspace performs, the smaller the improvement of FWL\xspace over it. \subsection{From Modifying the Learning Rate to Weighted Sampling} \vspace{-.80ex} \begin{wrapfigure}{t}{0.4\textwidth} \vspace{-5pt} \centering \includegraphics[width=0.4\textwidth]{Images/sampling.png} \vspace{-10pt} \caption{\fontsize{8}{7}\selectfont{Performance of FWL\xspace and FWL$_s$ trained on different amounts of data for the task of document ranking (Robust04 dataset) and sentiment classification (SemEval14 dataset)}.} \label{fig:sampling} \vspace{-5pt} \end{wrapfigure} FWL\xspace provides a confidence score based on the certainty associated with each generated label $\bar{y}_t$, given sample $x_t \in \mathcal{D}_{sw}$.
We can interpret the confidence score as how likely it is that including $(x_t,\bar{y}_t)$ in the training set for the student\xspace model improves the performance. Rather than using this score as a multiplicative factor in the learning rate, we can use it to bias the sampling procedure for mini-batches so that the frequency of training samples is proportional to the confidence scores of their labels. We design an experiment to try FWL\xspace with this setup (FWL$_s$), in which we keep the architectures of the student\xspace and the teacher\xspace and the procedure of the first two steps of FWL\xspace fixed, but change Step 3 as follows: Given the soft dataset $\mathcal{D}_{sw}$, consisting of samples $x_t$, their labels $\bar{y}_t$, and the associated confidence scores generated by the teacher\xspace, we normalize the confidence scores over all training samples and set the normalized score of each sample as its probability of being sampled. Afterward, we train the student\xspace model with mini-batches sampled from this set according to the probabilities associated with each sample, but without considering the original confidence scores in the parameter updates. This means that the more confident the teacher\xspace is about the generated label of a sample, the higher the chance that this sample is seen by the student\xspace model. Figure~\ref{fig:sampling} illustrates the performance of both FWL\xspace and FWL$_s$ trained on different amounts of data sampled from $\mathcal{D}_{sw}$, in the document ranking and sentiment classification tasks. As can be seen, compared to FWL\xspace, the performance of FWL$_s$ increases rapidly in the beginning but slows down afterward. We have looked into the sampling procedure and noticed that the confidence scores provided by the teacher\xspace form a rather skewed distribution, and that there is a strong bias in FWL$_s$ toward sampling from data points that are either in or close to the points in $\mathcal{D}_{s}$, as the $\mathcal{GP}$ has less uncertainty around these points and the confidence scores are high. We observed that the performance of FWL$_s$ gets closer to the performance of FWL\xspace only after many epochs, at a point where FWL\xspace had already converged. The skewness of the confidence distribution gives FWL$_s$ a tendency toward exploitation rather than exploration, whereas FWL\xspace has more chance to explore the input space, while still controlling the effect of the updates on the parameters based on the merit of each sample. \section{Related Work} \label{sec:relatedwork} \vspace{-.80ex} In this section, we position our FWL\xspace approach relative to related work. Learning from imperfect labels has been thoroughly studied in the literature~\citep{Frenay:2014}. The imperfect (weak) signal can come from non-expert crowd workers, or it can be the output of models that are weaker (for instance, with low accuracy or coverage) or biased, or of models trained on data from different related domains. Among these forms, in the distant supervision setup, a heuristic labeling rule~\citep{Deriu2016:SemEval,Severyn:2015:SemEval} or function~\citep{Dehghani:2017:SIGIR}, which may rely on a knowledge base~\citep{Mintz2009:distant, min2013distant, Han:2016}, is employed to devise noisy labels. Learning from weak data sometimes aims at encoding various forms of domain expertise or cheaper supervision from lay annotators.
For instance, in structured learning, the label space is complex and obtaining a training set with strong labels is extremely expensive; hence this class of problems has led to a wide range of work on learning from weak labels~\citep{roth2017incidental}. Indirect supervision is a form of learning from weak labels that is employed in particular in structured learning, in which a companion binary task is defined for which obtaining training data is easier~\citep{Chang2010structured, Raghunathan:2016}. In response-based supervision, the model receives feedback from interacting with an environment in a task, and converts this feedback into a supervision signal to update its parameters~\citep{roth2017incidental,clarke2010driving,riezler2014response}. Constraint-based supervision is another form of weak supervision in which constraints that are represented as weak label distributions are taken as signals for updating the model parameters. Examples are physics-based constraints on the output~\citep{stewart2017label} or constraints on the execution of logical forms~\citep{clarke2010driving}. In the proposed FWL\xspace model, we can employ these approaches as the weak annotator to provide imperfect labels for the unlabeled data; however, a small amount of data with strong labels is also needed, which puts our model in the class of semi-supervised models. In the semi-supervised setup, several ideas have been developed to utilize weakly labeled or even unlabeled data. For instance, the ideas of self-training (incremental training)~\citep{Rosenberg:2005}, pseudo-labeling~\citep{Lee:2013,Hinton:2015}, and co-training~\citep{Blum:1998} were introduced for augmenting the training set with unlabeled data carrying predicted labels. Some research has used the idea of self-supervised (or unsupervised) feature learning~\citep{noroozi2016unsupervised,dosovitskiy2016discriminative,donahue2016adversarial} to exploit different labelings that are freely available besides or within the data, and to use them as intrinsic signals to learn general-purpose features. These features, which are learned using a proxy task, are then used in a supervised task like object classification/detection or description matching. As a common approach in semi-supervised learning, the unlabeled set can be used for learning the distribution of the data. In particular for neural networks, greedy layer-wise pre-training of weights using unlabeled data is followed by supervised fine-tuning~\citep{Hinton:2006,Deriu:2017,Severyn:2015:SemEval,Severyn:2015:SIGIR,Go:2009}. Other methods learn an unsupervised encoding at multiple levels of the architecture jointly with a supervised signal~\citep{Ororbia:2015,Weston:2012}. Alternatively, some noise cleansing methods have been proposed to remove or correct mislabeled samples~\citep{Brodley:1999}. There are some studies showing that weak or noisy labels can be leveraged by modifying the loss function~\citep{reed2014training, Patrini:2016, patrini2016loss, Vahdat:2017} or changing the update rule to avoid the imperfections of the noisy data~\citep{malach2017decoupling, Dehghani:2017:nips_metalearn, Dehghani:2017avoiding}. One direction of research focuses on modeling the pattern of the noise or weakness in the labels, for instance, methods that use a generative model to correct weak labels such that a discriminative model can be trained more effectively~\citep{Ratner:2016,Rekatsinas:2017,Varma:2017}.
Furthermore, there are methods that aim at capturing the pattern of the noise by inserting an extra layer~\citep{goldberger2016training}, or a separate module that tries to infer better labels from noisy ones and uses them to supervise the training of the network~\citep{Sukhbaatar:2014,Veit:2017, Dehghani:2017:nips_metalearn}. Our proposed FWL\xspace can be categorized in this class, as the teacher\xspace tries to infer better labels and provides certainty information, which is incorporated into the update rule for the student\xspace model. \section{Conclusion} \label{sec:conclusion} \vspace{-.80ex} Training neural networks using large amounts of weakly annotated data is an attractive approach in scenarios where an adequate amount of data with true labels is not available, a situation which often arises in practice. In this paper, we introduced fidelity-weighted learning\xspace (FWL\xspace), a new student-teacher framework for semi-supervised learning in the presence of weakly labeled data. We applied FWL\xspace to document ranking and sentiment classification, and empirically verified that FWL\xspace speeds up the training process and improves over state-of-the-art semi-supervised alternatives.
\section{Experimental setup} For this experiment a mirror composed of 36 segments is used. For more details on the setup see~\cite{Veberic:2015yua,FunkICRC2017}. The experiment is set up in a light-tight, windowless room with concrete walls of at least 2\,m thickness. The inner area (see Fig.~\ref{f:setup}), encompassing the camera and the mirror, is additionally light-insulated with a thick black curtain and a 120\,$\upmu$m layer of opaque polyethylene foil. As the light detector, a 29\,mm diameter photomultiplier (PMT) ET 9107BQ with very low dark-current properties was chosen. The PMT has a blue-green sensitive bialkali photocathode with the quantum efficiency extended into the ultraviolet range, a peak quantum efficiency of around 28\%, and excellent single-electron and pulse-height resolution, suitable for photon counting. The PMT camera is placed on a motorized linear stage that can drive it (perpendicularly to the mirror axis) in and out of the center of the spherical mirror. The PMT front is additionally equipped with a motorized shutter that can block the entrance of photons. Signals from the PMT were digitized with a PicoScope 6404D digital oscilloscope. In Fig.~\ref{f:traces} two examples of triggered traces are given. A single-photon (SP) pulse can be observed on the left, and a trace containing several pulses within the 1.6\,$\upmu$s trigger window is shown on the right. Traces with multiple pulses were discarded since they can be produced only by cosmic-ray showers. Based on measurements of the SP charge spectrum with an LED flasher, Fig.~\ref{f:charge}-left, a cut on the allowed range of observed charges was also applied, as seen in Fig.~\ref{f:charge}-right. The efficiency of the latter cut on SP traces is estimated to be 75\%. \section{Preliminary limit on mixing parameter} \begin{figure}[t] \centering \includegraphics[height=0.35\textwidth]{Engel_Ralph_fig1a}\hfill \includegraphics[height=0.35\textwidth]{Engel_Ralph_fig1b.jpg} \caption{Schematic (left) and photo (right) of the experimental setup.} \label{f:setup} \end{figure} \begin{figure}[t] \centering \includegraphics[height=0.372\textwidth]{Engel_Ralph_fig2a}\hfill \includegraphics[height=0.372\textwidth]{Engel_Ralph_fig2b} \caption{Typical examples of captured traces with a single pulse (left) and many pulses within a short time span (right).} \label{f:traces} \end{figure} \begin{figure}[t] \centering \includegraphics[height=0.39\textwidth]{Engel_Ralph_fig3a}\hfill \includegraphics[height=0.39\textwidth]{Engel_Ralph_fig3b} \caption{\emph{Left:} Charge distribution for a flasher run with a very low power setting, therefore composed mostly of single photo-electrons. \emph{Right:} Charge distribution observed in one of the measurement runs. Note that in both cases $q=\lg(Q/\text{e})$, where $Q$ is the charge of a pulse.} \label{f:charge} \end{figure} The selected photon counts for a 30-day run performed in February and March 2017 can be seen in Fig.~\ref{f:measurement}-left. The data was taken in cycles of four 60\,s measurements performed in all four possible combinations of the two positions of the PMT camera (\emph{in} and 8\,cm \emph{out} of the center) and the two positions of the shutter (\emph{open} and \emph{closed}), as schematically shown in Fig.~\ref{f:measurement}-right. The average rate of the whole run is $R=0.535$\,Hz, and the relative differences $\Delta R$ in the four different configurations are shown in Fig.~\ref{f:measurement}-right.
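The statistical part of the rate comparison performed below is elementary; as an illustration (not the actual analysis code, and with all variable names being ours), the rate difference between two shutter configurations and its uncertainty, assuming Poisson counting statistics, can be computed as follows.
\begin{verbatim}
# Rates from raw counts with Poisson (sqrt(N)) uncertainties; the
# uncertainties of the two configurations add in quadrature.
import numpy as np

def rate(counts, t_live):
    return counts / t_live, np.sqrt(counts) / t_live

def rate_difference(n_open, t_open, n_closed, t_closed):
    r_o, dr_o = rate(n_open, t_open)
    r_c, dr_c = rate(n_closed, t_closed)
    return r_o - r_c, np.hypot(dr_o, dr_c)
\end{verbatim}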
The difference of the count rates between the \emph{open} and \emph{closed} shutter for the PMT being \emph{in} the radius point is a proxy for the dark-matter signal. With the shutter \emph{open}, $\Delta R=0.0032\pm0.0014$\,Hz more counts are registered than with it \emph{closed}. Ignoring for a moment any possible systematic effects, we obtain the limit shown in Fig.~\ref{f:limit}, denoted \emph{FUNK sensitivity}. To determine possible systematic uncertainties that might be related to temperature changes or the limited accuracy of the measurement time, we also compare the rates for the \emph{closed} shutter with the PMT \emph{in} and \emph{out} of the radius point. The two count rates agree within the statistical uncertainty ($\Delta R=0.0007\pm0.0014$\,Hz). However, when comparing the count rates for the \emph{open} shutter with the PMT \emph{in} and \emph{out} of the radius point, we found a significantly larger count rate for the PMT \emph{open} and \emph{out} of the radius point, possibly related to the different imaging properties of the setup in the two positions. Additional measurements are in progress to better understand this systematic behavior. For now we treat this difference ($\Delta R\approx0.025$\,Hz) as an upper limit of the overall systematic uncertainty of the measurement and use it to derive a preliminary upper limit~\cite{Dobrich:2014kda} on the magnitude of the mixing parameter $\chi$ in the sensitivity range of the PMT; see the line denoted \emph{FUNK preliminary (sys)} in Fig.~\ref{f:limit}. \textbf{Summary.} No significant signal was found. The detailed analysis of the data is still ongoing; here we report only preliminary results with a maximally conservative estimate of possible systematic uncertainties. \textbf{Future plans.} We are planning further searches for possible hidden-photon dark matter with measurements extended into the MHz, GHz, and THz range. \textbf{Acknowledgments.} We gratefully acknowledge partial support from the Helmholtz Alliance for Astroparticle Physics (HAP), funded by the Initiative and Networking Fund of the Helmholtz Association. \begin{figure}[t] \centering \includegraphics[height=0.295\textwidth]{Engel_Ralph_fig4a}\hfill \includegraphics[height=0.295\textwidth]{Engel_Ralph_fig4b} \caption{\emph{Left:} Observed pulse rate in one of the measurement runs. \emph{Right:} A measurement is composed of many event cycles, where in each cycle four different 60-second measurements are performed. The schematic shows these four different combinations, obtained with the shutter open/closed and with the camera in/out.} \label{f:measurement} \end{figure} \begin{figure}[t] \centering \includegraphics[height=0.3\textwidth]{Engel_Ralph_fig5a}\hfill \includegraphics[height=0.3\textwidth]{Engel_Ralph_fig5b} \caption{Preliminary limits on the mixing parameter $\chi$ derived from the systematics-dominated (blue) and statistics-dominated (red) assumptions. The right panel is an enlargement around the dark-matter mass window corresponding to the optical emission of visible photons. The previous result obtained with a CCD camera~\cite{Veberic:2015yua} is also shown (light-blue).} \label{f:limit} \end{figure}
\section{Introduction} The class of $\omega$-regular specifications allows one to concisely capture long-term tasks for the systems to be controlled. Consequently, such specifications have not only been used as a specification formalism for the control of deterministic systems, but have also found applications in the control of probabilistic systems. In the probabilistic case, the objective is typically to ensure that the specification holds almost surely or with the highest possible probability. There are, however, many systems that do not admit any control strategy that satisfies an $\omega$-regular objective with a non-zero probability. In such a case, all controllers are equally bad: they violate the specification almost surely (or surely). If, for example, we have a robot control scenario in which there is always a small probability that the robot moves towards a wall (due to external influences), then a specification that forbids colliding with the wall cannot be fulfilled with a non-zero probability, as a collision with a wall almost surely eventually happens. Yet, researchers have proposed many approaches for controlling robots in such environments in practice. In a nutshell, these approaches are \emph{optimistic}: why should we be intimidated by events that are unavoidable but occur with small probability even in long time spans, when we can still satisfy the specification for some time? {Such approaches are typically also \emph{risk-averse}: among the actions that are available to the robot, those that avoid the violation of the specification as long as possible are preferred.} A reasonable strategy for the robot could, for example, try to stay clear of the walls and immediately take action when it happens to get closer to a wall at runtime. In this way, the robot could work towards satisfying its goals even though, in the long run, it will eventually collide with a wall almost surely. On a theoretical level, however, the infinite-horizon nature of $\omega$-regular specifications prevents the immediate application of \emph{optimism}. If, with probability $1$, the specification is violated no matter what policy is used for controlling the system, then all control policies are equally bad and no best policy can be generated. While this fact advocates for an approach to system control that is not based on $\omega$-regular objectives, their infinitary nature allows one to abstract from many details of the specification. As an example, we can state in the $\omega$-regular setting that the robot should visit each of two regions in a workspace infinitely often, which is a concise representation of the task of patrolling between these regions. The specification does not impose maximal times between visits to the regions, {which allows optimizing the risk-averseness of the policy}. Deviating from this concept would mean imposing time bounds between the visits to the regions. But then we would get a tradeoff between optimizing for satisfying the specification as long as possible and the lengths of the patrolling periods. { So it is desirable to keep the simplicity and conciseness of $\omega$-regular specifications, which allows optimizing the probability of satisfying the specification for at least some time. In this paper, we show how to compute \emph{optimistic}, yet \emph{risk-averse} policies for satisfying $\omega$-regular objectives in Markov decision processes (MDPs).
We define an optimization criterion that captures the task of computing policies that satisfy an $\omega$-regular control objective as long as possible, and we give an algorithm to compute such policies. The basic idea is that we require the policy to have a labeling that describes which states are considered to be \emph{goal states} by the policy, i.e., for which visiting them infinitely often ensures that the specification is satisfied. An \emph{optimally risk-averse policy} maximizes the probability of reaching the next goal state from the respective previous goal state. We argue that this criterion matches the intuitive idea that the controller should satisfy the specification as long as possible, even if violation is almost surely unavoidable in the long run. We validate the usability of our risk-averse policy definition and the scalability of our policy computation algorithm on two case studies for robot control in probabilistic environments. } \section{Related Work} MDPs are widely used in many areas such as engineering, economics, and biology, and have been successfully used to model and control autonomous robots with uncertainty in their sensing and actuation (see, e.g., \cite{ding2014optimal,lahijanian2012temporal,temizer2010collision,alterovitz2007stochastic}). In these domains, the behavior of the system cannot be predicted with certainty, but it can be modeled probabilistically through simulations or empirical trials. {Our results in this paper can be used in \emph{practical} settings in which the system cannot be controlled to satisfy a specification in the long run, but some amount of risk-taking is acceptable. } MDPs are also referred to as $1\frac{1}{2}$-player games and belong to the broader class of \emph{stochastic games}. The algorithmic study of stochastic games with respect to $\omega$-regular objectives has recently attracted significant attention \cite{condon1992complexity,de1998concurrent,de2000concurrent,de2001quantitative,chatterjee2005complexity,chatterjee2006complexity}. See \cite{chatterjee2012survey} for a detailed survey. The central question about a game is whether a player has a strategy for winning the game. There are several definitions of \emph{winning} in a stochastic game \cite{chatterjee2012survey}. For example, one may ask if a player has a strategy that ensures a winning outcome of the game, no matter how the other player chooses her actions (\emph{sure winning}), or one may ask if a player has a strategy that achieves a winning outcome with probability $1$ (\emph{almost-sure winning}). In contrast to these \emph{qualitative} winning criteria, the \emph{quantitative} solution \cite{chatterjee2004quantitative,de2001quantitative} amounts to computing the value of the game, i.e., the maximal probability of winning that a player can guarantee against any strategy chosen by the opponent. {The choice of MDPs in this paper is motivated by their manageable complexity compared to more general classes of stochastic games, and by their applicability to many control problems. Ding et al.~\cite{DingEtAlMDPControl} gave an approach to compute MDP policies that maximize the probability of satisfying an $\omega$-regular specification. They applied their algorithm to robot indoor navigation. Svorenova et al.~\cite{DBLP:conf/cdc/SvorenovaCB13} considered the problem of minimizing the expected cost incurred between reaching \emph{goal states} in MDPs for $\omega$-regular specifications. Our work uses a similar notion of goal states.
} None of the mentioned works consider the synthesis of risk-averse policies in case there is no strategy that wins with a probability greater than $0$. \section{Preliminaries} \paragraph{MDPs} A \newterm{Markov decision process} is defined as a tuple $\mathcal{M} = (S,A,\Sigma,P,L,s_0)$, where $S$ is a finite set of \newterm{states}, $A$ is a finite set of \newterm{actions}, $\Sigma$ is the \newterm{label alphabet}, $P : S \times A \rightarrow \mathcal{P}(S) \cup \{\bot\}$ is the \newterm{transition function}, where $\mathcal{P}(S)$ denotes the probability distributions over $S$, $L : S \rightarrow \Sigma$ is the labeling function of $\mathcal{M}$, and $s_0 \in S$ is the initial state of $\mathcal{M}$. We say that some finite sequence $\pi = \pi_0 \ldots \pi_n \in S^*$ is a \newterm{finite trace} (or \newterm{run}) of $\mathcal{M}$ if there exists a sequence of actions $\rho = \rho_0 \ldots \rho_{n-1} \in A^*$ such that for all $i \in \{0, \ldots, n-1\}$, we have $P(\pi_i,\rho_i) \neq \bot$ and $P(\pi_i,\rho_i)(\pi_{i+1}) > 0$. We say that the combined probability of $(\pi,\rho)$ is $\prod_{i=0}^{n-1} P(\pi_i,\rho_i)(\pi_{i+1})$. The definition of finite traces carries over to infinite traces. A \newterm{Markov chain} is a Markov decision process (MDP) in which $A = \{\cdot\}$. A Markov chain induces the usual probability measure over sets of infinite traces. A \newterm{policy} for an MDP is a function $f : S^* \rightarrow \mathcal{P}(A)$ such that for all $s_0 \ldots s_n \in S^*$, we have $f(s_0 \ldots s_n)(a) = 0$ for all actions $a$ with $P(s_n,a) = \bot$. A policy induces an infinite-state Markov chain $\mathcal{C}' = (S',\{\cdot\},\Sigma,P',L',s_0)$ with $S' = S^*$, $L'(t_0 \ldots t_n) = L(t_n)$ for all $t_0 \ldots t_n \in S'$, and for all $t_0 \ldots t_n, u_0 \ldots u_m \in S'$, we have $P'(t_0 \ldots t_n,\cdot)(u_0 \ldots u_m) = \sum_{a \in A} P(t_n,a)(u_m) \cdot f(t_0 \ldots t_n)(a)$ if $u_0 \ldots u_{m-1} = t_0 \ldots t_n$, and $P'(t_0 \ldots t_n, \cdot)(u_0 \ldots u_m) = 0$ otherwise. Policies for MDPs can be \newterm{positional} or \newterm{finite-state}. For a positional policy, for all $\pi = \pi_0 \ldots \pi_n \in S^*$ and $\pi' = \pi'_0 \ldots \pi'_m \in S^*$, we have that $f(\pi)=f(\pi')$ if $\pi_n = \pi'_m$. For a finite-state policy, there exists a finite-state automaton $\mathcal{F} = (Q,S,\delta,q_0)$, with $Q$ being a finite set of states, $q_0 \in Q$, and $\delta : Q \times S \rightarrow Q$, together with a function $f' : Q \rightarrow \mathcal{P}(A)$, such that for all $\pi = \pi_0 \ldots \pi_n \in S^*$, we have $f(\pi) = f'(q)$ for $q = \delta(\ldots \delta(\delta(q_0,\pi_0),\pi_1), \ldots, \pi_n)$. {In the literature, MDPs often also have a \emph{reward function}. As in some other work on $\omega$-regular MDP control \cite{DingEtAlMDPControl}, we do not need it in this paper and have thus omitted the reward function in the MDP definition.} An MDP can be represented graphically by drawing the states as nodes in a graph, marking the initial state, and letting the transitions be represented by groups of \newterm{edges}, which are in turn labeled by their transition probabilities. The groups of edges are labeled by their actions. Disallowed actions, i.e., those for which we have $P(s,a)=\bot$, are not shown. \paragraph{Parity automata and $\omega$-specifications} Given an alphabet $\Sigma$, an $\omega$-regular specification is a subset of $\Sigma^\omega$ that is representable as the language of a \newterm{deterministic parity word automaton}.
These automata are defined as tuples $\mathcal{A} = (Q,\Sigma,\delta,q_0,C)$, where $Q$ is a finite set of states, $\Sigma$ is an alphabet, $\delta : Q \times \Sigma \rightarrow Q$ is the transition function of $\mathcal{A}$, $q_0 \in Q$ is the initial state of the automaton, and $C : Q \rightarrow \mathbb{N}$ is the \newterm{coloring function}. Given a word $w = w_0 w_1 \ldots \in \Sigma^\omega$, $\mathcal{A}$ induces a \newterm{trace} $\pi = \pi_0 \pi_1 \ldots \in Q^\omega$ such that $\pi_0 = q_0$ and for all $i \in \mathbb{N}$, we have $\pi_{i+1} = \delta(\pi_i,w_i)$. Let $\mathsf{inf}$ be a function that maps an infinite sequence onto the set of elements that occur infinitely often in it. A trace $\pi$ of $\mathcal{A}$ is called \newterm{accepting} if $\max(\mathsf{inf}( C(\pi_0) C(\pi_1) C(\pi_2) \ldots))$ is even. An automaton is said to accept a word $w$ if there exists an accepting trace for it. The set of all words accepted by the automaton is called its \newterm{language}. \paragraph{Reachability MDPs} A reachability MDP $\mathcal{M} = (S,A,\Sigma,P,L,s_0,g)$ consists of the usual MDP elements plus a function $g : S \rightarrow \{0,1\}$, which assigns to every state $s \in S$ the value $1$ if it is a \newterm{goal state} and $0$ otherwise. A policy $f$ for $\mathcal{M}$ induces for every state $s$ a \newterm{value} $v(s) \in [0,1]$ that states the probability measure of the traces starting in $s$ and visiting a state $s' \in S$ with $g(s')=1$ when executing the policy, i.e., in the Markov chain induced by $\mathcal{M}$ and $f$ starting from state $s$. A policy that maximizes the values from all starting states is called \newterm{optimal}, and it is known that in reachability MDPs, positional optimal policies exist \cite{condon1992complexity}. The values of the states induced by an optimal policy are also called the \newterm{state values} of the reachability MDP. These can be computed either by \newterm{policy iteration} or \newterm{value iteration} algorithms \cite{Sigaud:2010:MDP:1841781}. In the latter case, a sequence of vectors $\vec x_1, \vec x_2, \ldots \in [0,1]^{|S|}$ is computed such that for every $i \in \mathbb{N}$, $\vec x_{i+1}$ is closer to the vector of state values than $\vec x_i$. Value iteration is normally programmed to abort computation if at some point $\|\vec x_{i+1} - \vec x_i\| \leq \epsilon$ for some value $\epsilon$ and some norm $\|\cdot\|$. When starting with $\vec x_0$ equal to $g$, the approximations are all under-approximations of the actual state values (modulo rounding errors). \section{Problem Definition} \begin{definition}[Parity MDP] The product of an MDP $\mathcal{M} = (S,A,\Sigma,P,L,s_0)$ and a parity word automaton $\mathcal{A} = (Q,\Sigma,\delta,q_0,C)$ is an MDP $\mathcal{M}' = (S', A, \Sigma, P', C', s'_0)$ with a coloring function instead of a labeling function, where: \begin{align*} S' & = S \times Q, \\ s'_0 & = (s_0,q_0), \\ C'(s,q) & = C(q) \text{ for all } (s,q) \in S', \text{and} \\ P'((s,q),a)((s',q')) & = \begin{cases} P(s,a)(s') & \text{if } q' = \delta(q,L(s')) \\ 0 & \text{else} \end{cases} \\ & \quad \text{for all } (s,q), (s',q') \in S', a \in A. \end{align*} An infinite trace $\pi_0 \pi_1 \ldots \in S'^\omega$ in $\mathcal{M}'$ is said to be \newterm{accepting} if the highest number occurring infinitely often in the sequence $C'(\pi_0) C'(\pi_1) \ldots$ is even. \end{definition} { A parity MDP captures a control problem in a probabilistic environment.
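As a side remark, the value-iteration scheme just recalled for reachability MDPs can be sketched in a few lines; the following is a generic textbook illustration (ours, not taken from any of the cited tools), where the transition function is encoded as a dictionary and the bottom symbol for disallowed actions corresponds to a missing entry.
\begin{verbatim}
# Textbook value iteration for a reachability MDP (S, A, P, g).
# P maps (s, a) to a dict of successor probabilities; pairs with
# P(s, a) = bottom are simply absent. Starting from v0 = g, each
# iterate under-approximates the state values; we stop once the
# sup-norm of the update falls below eps.
def value_iteration(S, A, P, g, eps=1e-8):
    v = {s: float(g[s]) for s in S}
    while True:
        v_new = {}
        for s in S:
            if g[s] == 1:
                v_new[s] = 1.0   # goal states keep value 1
            else:
                v_new[s] = max(
                    (sum(p * v[t] for t, p in P[(s, a)].items())
                     for a in A if (s, a) in P), default=0.0)
        if max(abs(v_new[s] - v[s]) for s in S) <= eps:
            return v_new
        v = v_new
\end{verbatim}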
We say that some trace $\pi = \pi_0 \pi_1 \ldots$ (or run) of the MDP is \newterm{accepting} if the trace fulfills the parity acceptance condition defined in the $C$ component in the MDP. Let us consider an example. } \begin{example} \label{example:motivation} As a first example, we consider a simple robot with unicycle dynamics in a two-dimensional gridded world. The workspace, which we depict in Figure~\ref {fig:simpleRobot}, has 70$\times$40 cells and the robot always has one out of eight possible current directions. The speed of the robot is constant, and it needs to avoid hitting the workspace boundaries or the static obstacles. In order to model the scenario as an MDP, we use a semantics with a fixed time step. We shift the current cell into the current direction of travel by 2 cells, extend the resulting rectangle by $0.1$ into every direction to account for imprecise motion, and then assign transition probabilities that are proportional to the overlap of the rectangle with the world cells. There is an additional special error state in the MDP that represents crashes. In every step of the execution, the policy can decide to increase or decrease the current direction by $1$ step (out of $8$). This turning operation may fail with a probability of $0.2$ - in the case of failure, the direction of the robot is not changed. The MDP has $70 \cdot 40 \cdot 8 + 1 = 22401$ states, $67201$ state/action pairs, and $681591$ \newterm{edges}, i.e., pairs $(s,a,s')$ in the MDP $\mathcal{M} = (S,A,\Sigma,P,L,s_0)$ with $P(s,a)(s')>0$. \begin{figure} \centering\resizebox{0.7\columnwidth}{!}{% \begin{tikzpicture}[yscale=-1] \definecolor{mycolor0}{RGB}{255,255,255} \definecolor{mycolor1}{RGB}{0,0,0} \definecolor{mycolor2}{RGB}{0,255,0} \definecolor{mycolor3}{RGB}{255,0,1} \definecolor{mycolor4}{RGB}{0,0,255} \definecolor{mycolor5}{RGB}{255,0,255} \definecolor{mycolor6}{RGB}{0,255,255} \definecolor{mycolor7}{RGB}{220,220,220} \definecolor{mycolor8}{RGB}{205,206,179} \path[fill=mycolor0] (0,0) rectangle (27,1); \path[fill=mycolor7] (27,0) rectangle (33,1); \path[fill=mycolor0] (33,0) rectangle (47,1); \path[fill=mycolor1] (47,0) rectangle (50,1); \path[fill=mycolor0] (50,0) rectangle (70,1); \path[fill=mycolor0] (0,1) rectangle (27,2); \path[fill=mycolor7] (27,1) rectangle (33,2); \path[fill=mycolor0] (33,1) rectangle (47,2); \path[fill=mycolor1] (47,1) rectangle (50,2); \path[fill=mycolor0] (50,1) rectangle (70,2); \path[fill=mycolor0] (0,2) rectangle (27,3); \path[fill=mycolor7] (27,2) rectangle (33,3); \path[fill=mycolor0] (33,2) rectangle (47,3); \path[fill=mycolor1] (47,2) rectangle (50,3); \path[fill=mycolor0] (50,2) rectangle (70,3); \path[fill=mycolor0] (0,3) rectangle (27,4); \path[fill=mycolor7] (27,3) rectangle (33,4); \path[fill=mycolor0] (33,3) rectangle (47,4); \path[fill=mycolor1] (47,3) rectangle (50,4); \path[fill=mycolor0] (50,3) rectangle (70,4); \path[fill=mycolor0] (0,4) rectangle (27,5); \path[fill=mycolor7] (27,4) rectangle (33,5); \path[fill=mycolor0] (33,4) rectangle (47,5); \path[fill=mycolor1] (47,4) rectangle (50,5); \path[fill=mycolor0] (50,4) rectangle (70,5); \path[fill=mycolor0] (0,5) rectangle (8,6); \path[fill=mycolor1] (8,5) rectangle (10,6); \path[fill=mycolor0] (10,5) rectangle (27,6); \path[fill=mycolor7] (27,5) rectangle (33,6); \path[fill=mycolor0] (33,5) rectangle (47,6); \path[fill=mycolor1] (47,5) rectangle (50,6); \path[fill=mycolor0] (50,5) rectangle (70,6); \path[fill=mycolor0] (0,6) rectangle (8,7); \path[fill=mycolor1] (8,6) rectangle (10,7); 
% (TikZ drawing code omitted: a 70x40 grid workspace with obstacle cells,
% the four marked regions, and the robot's initial position.)
\end{tikzpicture}
}
\caption{Workspace for the single-robot example.}
\label{fig:simpleRobot}
\end{figure}

The specification for the robot is represented as a 15-state parity automaton. It encodes four conditions:
\begin{itemize}
\item the left-most marked part of the workspace should be visited infinitely often,
\item the right-most marked part of the workspace should be visited infinitely often,
\item either the top marked part of the workspace or the bottom one (or both) must be visited only finitely often, and
\item infinitely often, the regions in the middle shall be visited strictly in the middle-left-right order.
\end{itemize}
The product MDP of the MDP and the parity automaton has 366015 states, out of which 2196 are unreachable (and can be removed).
\end{example}

A classical problem over MDPs with $\omega$-regular optimization criteria is to find a policy that maximizes the probability that a trace is accepting. In the product MDP from Example~\ref{example:motivation}, however, no policy raises this probability above $0$. This follows from the fact that no matter what the policy does, with a probability of at least $0.2$, the robot continues to travel in its current direction. Since the workspace is bounded, a robot that keeps traveling in the same direction collides with the workspace boundaries after at most 35 steps, and thus a very conservative lower bound on the probability of a crash within 35 steps is $(0.2)^{35}$ at \emph{every} step of the MDP's execution. In the long run, a collision is thus unavoidable with probability $1$.

Despite the fact that the parity MDP does not admit a good policy in the traditional sense, we may want to compute a policy that works towards the satisfaction of the specification as long as possible while avoiding unnecessary \emph{risks}. We formalize this objective in the following definition:

\begin{definition}
\label{def:mainProblem}
Let $\mathcal{M} = (S,A,\Sigma,P,C,s_0)$ be a parity MDP. We say that a control policy $f : S^* \rightarrow A$ has a \newterm{risk-averseness probability} $p \in [0,1]$ if there exist labelings $l : S^* \rightarrow \mathbb{N}$ and $l': S^* \rightarrow \mathbb{B}$ and a Markov chain $\mathcal{C}'$ induced by $\mathcal{M}$ and $f$ with the following properties:
\begin{itemize}
\item There exists some number $k \in \mathbb{N}$ such that for all $t_0 t_1 t_2 \ldots \in S^\omega$, there are at most $k$ many indices $i \in \mathbb{N}$ for which we have $l(t_0 \ldots t_i) > l(t_0 \ldots t_i t_{i+1})$.
\item For all $t_0 t_1 \ldots t_n \in S^*$, we have that $l(t_0 \ldots t_n)$ is even, and $l'(t_0 \ldots t_n)=\mathbf{true}$ implies that $C(t_n) \geq l(t_0 \ldots t_n)$ and that $C(t_n)$ is even.
\item For all $t_0 t_1 \ldots t_n \in S^*$, if $C(t_n)$ is odd, then $l(t_0 \ldots t_n) > C(t_n)$.
\item For all $t = t_0 t_1 \ldots t_n \in S^*$ with either (a) $l'(t) = \mathbf{true}$ or (b) $t = s_0$, the probability measure in $\mathcal{C}'$ to reach some state $t\,t'_0 \ldots t'_m \in S^*$ with $l'(t\,t'_0 \ldots t'_m)=\mathbf{true}$ from state $t$ is at least $p$.
\end{itemize}
\end{definition}

The labelings $l$ and $l'$ in Definition~\ref{def:mainProblem} augment a policy with the information about which \newterm{goal color} the policy is currently trying to reach and about when a goal has been reached. A goal must always be even-colored, but along different traces, different goals are allowed. From every goal state, the next goal state must be reached with probability at least $p$. Together with the first two requirements in Definition~\ref{def:mainProblem}, this implements the parity acceptance condition, as they together state that the goal color can only decrease finitely often along a trace. The parity acceptance condition does not need to be fulfilled with strictly positive probability in the long run, however, as in between two visits to goal states, the policy may fail with probability $(1-p)$. Thus, we only require the parity acceptance condition to hold on those paths on which goal states are reached infinitely often (which may have probability measure $0$). The strategy can choose goal states in a way that maximizes the probability of reaching the respective next goal state.
Thus, the higher the value of $p$ is, the more averse to the risk of missing the next goal the control policy needs to be.

The reader may wonder why the labeling function $l'$ is actually necessary in Definition~\ref{def:mainProblem}, as one could simply implicitly set $l'(t_0 \ldots t_n) = \mathbf{true}$ whenever $C(t_n) \geq l(t_0 \ldots t_n)$ and $C(t_n)$ is even. However, this change would require the policy to be able to reach the next goal from state $t_n$ with probability $p$ in the induced Markov chain, which is not always possible in a $p$-risk-averse strategy. Figure~\ref{fig:nonWinningExample} shows an example in which increasing the color of state $q_1$ to $2$ (which is even) would reduce the maximally implementable risk-averseness level from $0.68$ to $0.64$. As changing an odd color to an even one only makes the parity acceptance condition easier to satisfy, this is a very unintuitive property. To avoid it, we thus chose to make the labeling function $l'$ explicit.

\begin{figure}
\centering
% (TikZ drawing code omitted: a five-state MDP with an initial state of color
% $0$, intermediate states $q_1\!:\!1$ and $1$, and two absorbing states of
% colors $1$ and $2$; all transitions carry probabilities $0.2$ or $0.8$.)
\caption{An MDP in which for risk-averseness level $p=0.68$, state $q_1$ is not winning, but the state is reachable on the (unique) $p$-risk-averse policy. All states are labeled by their colors.}
\label{fig:nonWinningExample}
\end{figure}

Using Definition~\ref{def:mainProblem}, we can now state the main problem considered in this paper:

\begin{definition}[Optimal risk-averse policy synthesis]
Given a parity MDP, the optimal risk-averse policy synthesis problem is to find the highest value $p$ such that a policy for the MDP with risk-averseness level $p$ exists, and to find such a policy.
\label{def:optimalRiskAversePolicyComputation}
\end{definition}

\section{Computing Risk-Averse Policies}

In this section, we describe an algorithm to compute risk-averse policies in parity MDPs. The algorithm produces finite-memory strategies that are not necessarily positional. This may appear to be a flaw of the algorithm, as memoryless policies suffice for maximizing the probability for a trace to satisfy a parity objective in MDPs \cite{chatterjee2004quantitative}. However, optimal risk-averse strategies \emph{do} require memory in general, which we show by means of an example.

\begin{figure}
\centering
% (TikZ drawing code omitted: a parity MDP with a central color-3 state, four
% surrounding gadgets for actions $a$ to $d$ built from color-0 and color-1
% states, and a color-2 target state; the color-1 states are absorbing.)
\caption{An example parity MDP that admits a $0.54$-risk-averse finite-memory policy, but no such positional policy. All states are labeled by their colors.}
\label{fig:exampleMDP}
\end{figure}

\begin{example}
Figure~\ref{fig:exampleMDP} shows a parity MDP. It has four colors, and all states with color $1$ are sink states, i.e., states from which no possible goal state can be reached. The center state has the highest color, which is odd, so it may only be visited finitely often. No policy can avoid either ending up in a sink state or visiting the middle state at least every second step, unless action $d$ is eventually chosen by the policy. If the policy chooses action $a$ in the initial state, and then immediately chooses $d$, it reaches the state with color $2$ with a probability of $0.6 \cdot 0.6 = 0.36$. The resulting policy is thus $0.36$-risk-averse. However, there exists a better policy: when the state with color $3$ is visited for the first time, action $a$ should be taken, then action $b$, then $c$, and finally action $d$. By declaring all color-$0$ states to be goal states, the resulting policy then has a risk-averseness level of $\min(0.6 \cdot 0.9, 0.7 \cdot 0.8, 0.8 \cdot 0.7, 0.9 \cdot 0.6) = 0.54$. Thus, the best next action in the state with color $3$ depends on the history of the trace. While this example only shows that memory is needed in optimally risk-averse policies, the fact that finite memory suffices follows from the correctness of our algorithm described below.
\end{example}

\subsection{$p$-risk-averse policy computation}
\label{subsec:pRiskAversePolicyComputation}

Let us assume that $p$ is fixed and that we want to compute a $p$-risk-averse MDP control policy. The algorithm that we describe in this section computes the set of states from which a $p$-risk-averse policy exists. We call such states \newterm{winning}.
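Before describing the computation, we fix a concrete machine representation of parity MDPs that the sketches in this section refer to. It is purely illustrative: the class and field names (\texttt{ParityMDP}, \texttt{trans}) are hypothetical and not taken from our actual implementation.

\begin{verbatim}
# A parity MDP (S, A, Sigma, P, C, s0); states and actions are integers.
# Illustrative sketch only -- not the data structures of the actual tool.
class ParityMDP:
    def __init__(self, num_states, actions, color, s0):
        self.num_states = num_states  # |S|
        self.actions = actions        # available actions A
        self.color = color            # C: list mapping state -> color
        self.s0 = s0                  # initial state
        self.trans = {}               # P: (state, action) -> [(succ, prob)]

    def add_transition(self, s, a, successors):
        # successors: list of (successor state, probability), summing to 1
        assert abs(sum(p for _, p in successors) - 1.0) < 1e-9
        self.trans[(s, a)] = successors
\end{verbatim}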
The policies computed sometimes make use of non-winning states, which may be counter-intuitive at first. Figure~\ref{fig:nonWinningExample} shows an example MDP where this is the case: from state $q_1$, the probability of reaching a next goal state is only $0.2$, but the optimal $0.68$-risk-averse policy from the initial state requires that even after reaching $q_1$, state $q_2$ is labeled as being a goal state if it is subsequently reached.

Whenever a goal state is reached, the only information about the history of the trace that may need to be retained is (1) how often the goal color may still be decreased before the limit of $k$ is reached, and (2) what the current goal color is. This follows from the fact that the computation of probabilities is \emph{reset} at goal states. Our algorithm makes use of this fact by planning policies from goal state to goal state. It iterates over all possible value combinations for the current goal color and the number of remaining goal color reductions.

\begin{definition}
We say that a state $q$ is $(k,c)$-winning (for some fixed risk-averseness level $p$) if there exists a $p$-risk-averse policy $f$ from $q$ as initial state with labels $l$ and $l'$ such that $l(\epsilon)=c$ and along all traces of the policy, goal colors are never decreased more than $k$ times. We call such policies $p$-$(k,c)$-risk-averse.
\label{def:ckWinning}
\end{definition}

\begin{corollary}
A parity MDP admits a $p$-risk-averse policy if the initial state is $(k,c)$-winning for some values of $c \in \mathbb{N}$ and $k \in \mathbb{N}$.
\end{corollary}
\begin{proof}
Follows directly from Definitions~\ref{def:mainProblem} and \ref{def:ckWinning}.
\end{proof}

This corollary allows us to frame the search for a $p$-risk-averse policy as an iterative process, which we base on the following lemma. In the following, let $c_\mathit{maxEven}$ be the least \emph{even} upper bound on the colors occurring in the parity MDP.

\begin{lemma}
A state $q$ is winning for some values of $(k,c)$ with $c \leq c_\mathit{maxEven}$ and even $c$ if and only if there exists a policy such that with probability $p$ eventually either:
\begin{itemize}
\item some even-colored state $q'$ is visited that is winning for $(k-1,0)$, or
\item some even-colored state $q'$ with $C(q') = c'$ for $c' \geq c$ is visited that is winning for $(k,c')$ while no odd color $\geq c'$ is visited along the way to $q'$.
\end{itemize}
\label{lem:ckLemma}
\end{lemma}
\begin{proof}
We prove the claim by induction over $(k,c)$ with even $c$. The order of induction that we use is lexicographic in $(k,-1 \cdot c)$.

\textbf{Induction basis:} For the case $(k,c) = (0,c_\mathit{maxEven})$, the only way for a state to be $(k,c)$-winning is for a policy from that state to exist such that with probability at least $p$, a state is eventually visited that has color $c$ and is $(k,c)$-winning again. This is exactly the only condition from the claim that is applicable in this case.

\textbf{Induction step:} ($\Rightarrow$) Let $f'$ be a policy from $q$ such that on every trace of the policy, the goal colors decrease at most $k$ times, and let $l(\epsilon)=c$ for the labelings $(l,l')$ assigned to the policy. The probability to reach the next goal must be at least $p$ in order for the state to be $(k,c)$-winning. A goal can either have a color of $\geq c$ or a color less than $c$. In the latter case, the goal state must be $(k',c')$-winning for some $c'$ and some $k'<k$.
As all such states are also $(k-1,0)$-winning (by definition), this case is covered by the cases in the claim. If a goal with color $c'$ is reached, then either no state with color $\geq c$ is visited along the way and $c'=c$, or alternatively $c'>c$ and no state with color $\geq c'$ is visited along the way. Both cases are covered by the case list in the claim.

($\Leftarrow$) Now let $q$ be a state from which a policy to visit some goal state $q'$ with probability $p$ exists. State $q'$ can be a $(k,c)$-winning state, but does not need to be one. If on the way to $q'$, a state with an odd color $c' > c$ is visited, this requires the label function $l$ of the policy to be greater than $c'$ on the way from $q$ to $q'$. So for the trace to count towards the probability mass of $p$, state $q'$ needs to be either $(k,c'+x)$-winning (for even $x\geq 2$) or alternatively $(k-1,c')$-winning. Since the set of $(k,c'+x)$-winning states is contained in the $(k,c'+2)$-winning states and the $(k-1,c')$-winning states are a subset of the $(k-1,0)$-winning states (by definition), we can assume, without loss of generality, that a $(k,c'+2)$-winning or $(k-1,0)$-winning state is visited.

We construct the $p$-risk-averse policy $f$ with associated labels $(l,l')$ that prove that $q$ is $(k,c)$-winning as follows: we use the policy with the properties from the claim, and switch to the policies that exist by the inductive hypothesis for the states that are $(k-1,0)$-winning or $(k,c'+2)$-winning when the second condition from the claim is used. When another $(k,c)$-winning state is visited, we instead continue with a policy constructed from $q'$ in the same way as for $q$. The fact that this composition of the policies yields a correct $(k,c)$-winning policy follows by induction: at every policy prefix $t$ with $l'(t)=\mathbf{true}$ or $t = \epsilon$ such that no transition to a $(k-1,0)$-winning or $(k,c'+2)$-winning state has yet occurred, we know that the policy reaches some next goal state with probability at least $p$. For the other goal states, the correctness follows from the inductive hypothesis and the fact that after transitions to $(k,c'+2)$-winning or $(k-1,0)$-winning goal states, the existing $p$-risk-averse policies can be used from there. Until such a goal state is reached (if one is reached at all), the construction ensures that the goal states otherwise reached are $(k,c)$-winning, and no odd color higher than $c$ is reached in between two visits to $(k,c)$-winning goal states that are not $(k,c'+2)$-winning or $(k-1,0)$-winning. As this property holds (by induction over the length of the policy prefix) for all visits to goal states, the claim follows.
\end{proof}

The characterization of $(k,c)$-winning states in Lemma~\ref{lem:ckLemma} allows us to compute the $(k,c)$-winning states using traditional MDP policy computation algorithms.

\begin{lemma}
Let $\mathcal{M} = (S,A,\Sigma,P,C,s_0)$ be a parity MDP, $S_{k,c} \subseteq S$ be the $(k,c)$-winning states, $S_{k,c+2}, \allowbreak \ldots, \allowbreak S_{k,c_\mathit{maxEven}}$ be the $(k,c+2)$-winning to $(k,c_\mathit{maxEven})$-winning states, and $S_{k-1,0}$ be the states that are $(k-1,0)$-winning (for some value of $p$). We can compute a reachability MDP $\mathcal{M}'$ with $|S| \times |\{c, c+2, \ldots, c_\mathit{maxEven}\}|$ many states in which the value of any state $(q,c)$ is $\geq p$ if and only if $q$ is a $(k,c)$-winning state.
\label{lem:reachabilityMDPLemma}
\end{lemma}
\begin{proof}
We can construct $\mathcal{M}' = (S',A,\Sigma,P',g,s_0)$ as follows:
\begin{align*}
S' & = S \times \{c, c+2, \ldots, c_\mathit{maxEven}\} \\
P'((s,\tilde c),a)((s',\tilde c')) & = \begin{cases} P(s,a)(s') & \text{if } C(s') \text{ is odd and } \tilde c' = \max(\tilde c, C(s')) \\ P(s,a)(s') & \text{if } C(s') \text{ is even and } \tilde c' = \tilde c \\ 0 & \text{else} \end{cases} \\
& \quad \quad \text{for all } (s,\tilde c), (s',\tilde c') \in S', a \in A
\end{align*}
\begin{align*}
g((s,\tilde c)) & = \begin{cases} 1 & \text{if } \tilde c = c, s \in S_{k,c}, C(s)\geq \tilde c, C(s)\text{ is even} \\ 1 & \text{if } s \in S_{k,\tilde c} \text{ or } s \in S_{k-1,0},\text{ and } C(s)\text{ is even}\\ 0 & \text{else} \end{cases} \\
& \quad \quad \text{for all } (s,\tilde c) \in S'
\end{align*}
The MDP has the stated properties by the facts that (1) it keeps track of the highest odd color visited along a trace so far, and (2) it induces a payoff of $1$ exactly for the states that are possible goal states.
\end{proof}

Optimal policy computation for a reachability MDP can be performed by standard policy iteration or value iteration algorithms.

Until now, the definition of the reachability MDP in Lemma~\ref{lem:reachabilityMDPLemma} is somewhat recursive: in order to determine which states are $(k,c)$-winning, we have to already know the $(k,c)$-winning states. The characterization from Lemma~\ref{lem:ckLemma}, however, allows us to compute it with the approach from Lemma~\ref{lem:reachabilityMDPLemma}. What we are actually searching for is the \emph{largest} set of states $S_{k,c}$ that the construction from Lemma~\ref{lem:reachabilityMDPLemma} maps to itself; any state set that is smaller misses some states that are $(k,c)$-winning by the characterization from Lemma~\ref{lem:ckLemma}, and by the same lemma, any set that is larger contains some state that is not $(k,c)$-winning. So computing the \emph{greatest fixpoint} over the state sets $S_{k,c}$ allows us to find the $(k,c)$-winning states, provided that the $(k,c+2)$-winning to $(k,c_\mathit{maxEven})$-winning and $(k-1,0)$-winning states are known. By iterating over the possible values of $k$ and $c$, we can thus compute the sets $S_{k,c}$ in a bottom-up fashion, as shown in Algorithm~\ref{algo:complexAlgorithm}.

\begin{algorithm}
\begin{algorithmic}[1]
\Function{ComputeRAPolicy}{$\mathcal{M}, p$}
\State $S_{k-1} \gets \emptyset$
\While{fixed point of $S_k$ has not been reached}
\For{$c \in \{c_\mathit{maxEven},c_\mathit{maxEven}-2, \ldots, 0\}$}
\State $S_{k}[c] \gets S$
\While{fixed point of $S_k[c]$ has not been reached}
\State $\mathcal{M}' \gets \Call{ConstructionFromLemma2}{c, \allowbreak S_{k}[c], \allowbreak \ldots, \allowbreak S_{k}[c_\mathit{maxEven}],\allowbreak S_{k-1}}$
\State $V \gets \Call{ComputeStateValues}{\mathcal{M}'}$
\State $S_k[c] \gets \{s \in S \mid V((s,c))\geq p\}$ \label{algoline:computerSkc}
\EndWhile
\EndFor
\State $S_{k-1} \gets S_{k}[0]$
\EndWhile
\State \Return $s_0 \in S_{k-1}$
\EndFunction
\end{algorithmic}
\caption{Algorithm to compute whether a parity MDP $\mathcal{M}$ admits a $p$-risk-averse policy.}
\label{algo:complexAlgorithm}
\end{algorithm}

The algorithm calls the external function \textsc{ComputeStateValues} to solve the reachability MDPs obtained by the construction in Lemma~\ref{lem:reachabilityMDPLemma}, which can be a value or policy iteration algorithm.
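For reference, a minimal value-iteration routine for such a reachability MDP could look as follows. It operates on the illustrative transition map introduced above, takes the payoff function $g$ as a boolean goal vector, and deliberately omits the convergence tuning and parallelization of a production implementation; it is a sketch of what \textsc{ComputeStateValues} may do, not our exact implementation.

\begin{verbatim}
# Value iteration for a reachability MDP:
#   V(s) = max_a sum_{s'} P(s,a)(s') * V(s'),  with V(s) = 1 on goal states.
def value_iteration(num_states, actions, trans, goal, eps=1e-6):
    V = [1.0 if goal[s] else 0.0 for s in range(num_states)]
    while True:
        total_change = 0.0
        for s in range(num_states):
            if goal[s]:
                continue  # goal states keep their payoff of 1
            best = 0.0
            for a in actions:
                val = sum(p * V[t] for t, p in trans.get((s, a), []))
                best = max(best, val)
            total_change += abs(best - V[s])
            V[s] = best
        if total_change < eps:  # terminate when the summed updates are small
            return V
\end{verbatim}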
Extending Algorithm~\ref{algo:complexAlgorithm} to also compute a policy is simple: without loss of generality, optimal reachability MDP policies are positional, and we can stitch these policies together in the order in which they are found by the algorithm. Since the algorithm performs only a finite number of iterations over $k$ and $c$, the resulting policy is finite-state.

\begin{remark}
\label{remark:algorithmSimplication}
To speed up Algorithm~\ref{algo:complexAlgorithm}, we can simplify the reachability MDP construction of Lemma~\ref{lem:reachabilityMDPLemma}: instead of keeping track of the maximum odd color seen along a trace so far (in excess of $c$), we can alternatively keep track of whether an odd color greater than $c$ has been seen so far, and only consider switching to a $(k-1,0)$-winning goal state in that case. While this modification can increase the number of loop iterations until all states that admit a $p$-risk-averse policy have been found, the reachability MDPs are typically smaller (as they then have a size of at most $2 \cdot |S|$), which speeds up the value or policy iteration process for solving them.
\end{remark}

\subsection{Maximally risk-averse policy computation}
\label{subsec:binarySearch}

In the previous subsection, we gave an algorithm to obtain $p$-risk-averse policies for a given $p$ whenever they exist. In order to compute optimally risk-averse policies, we can apply a \emph{bisection search}, which is the continuous-domain version of binary search, to find the highest value $p$ such that a $p$-risk-averse policy exists. Since $p$ is a continuous value, however, this process has no natural termination point. For all practical purposes, it makes sense to define a cut-off value for the search such that if the difference between known upper and lower bounds on the risk-averseness level of the optimal policy is below the cut-off, the search process terminates with the best policy found until then.

Defining a cut-off point is also motivated by practical considerations: most MDP solving algorithms run with a bounded precision, which leads to rounding errors. This makes it difficult to solve the problem given in Definition~\ref{def:optimalRiskAversePolicyComputation} in the strict sense. However, under the assumption that the probabilities computed by the function \textsc{ComputeStateValues} are exact, Algorithm~\ref{algo:complexAlgorithm} can be modified in order to allow finding a maximally risk-averse policy. For this, line~\ref{algoline:computerSkc} of the algorithm needs to be replaced by $S_k[c] \gets \{s \in S \mid V((s,c)) > p\}$. The algorithm then checks if a $p'$-risk-averse policy for some $p'>p$ exists. Furthermore, after every call to \textsc{ComputeStateValues}, we let the algorithm also compute $\mathit{lb} := \min \{V((s,c)) \mid s \in S, V((s,c))> p\}$. The least of these $\mathit{lb}$ values represents a lower bound on the $p$-risk-averseness of the policy actually computed. Let this value be named $\mathit{lb}_\mathit{min}$.

We can now perform an iterative search process for the optimally risk-averse policy as follows: starting with $p=0$, we search for a $p'$-risk-averse policy for $p'>p$ using the modified version of Algorithm~\ref{algo:complexAlgorithm}. If we find one, we update $p$ to $\mathit{lb}_\mathit{min}$ and continue with the search. Otherwise, the previously found policy is an optimally risk-averse policy. A sketch of this outer search loop is given below.
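The following minimal sketch illustrates the outer search loop just described. The function \texttt{exists\_policy} is a hypothetical stand-in for the modified Algorithm~\ref{algo:complexAlgorithm}: it is assumed to report whether a $p'$-risk-averse policy for some $p' > p$ exists and, if so, to return that policy together with the bound $\mathit{lb}_\mathit{min}$.

\begin{verbatim}
# Iterative search for a maximally risk-averse policy (illustrative sketch).
# exists_policy(mdp, p) stands in for the modified Algorithm 1; it returns
# (found, policy, lb_min), where lb_min is the lower bound described above.
def maximally_risk_averse(mdp):
    p, best_policy = 0.0, None
    while True:
        found, policy, lb_min = exists_policy(mdp, p)
        if not found:
            # The previously found policy is optimally risk-averse.
            return best_policy, p
        best_policy, p = policy, lb_min
\end{verbatim}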
To see why this process solves the problem, note that whenever $p$ is increased, at least one state is removed from $S_k[c]$ in some iteration of the outermost while loop. While the state may be added to $S_k[c]$ \emph{later} in the process, increasing the value of $p$ can only push states to be found later in the search process of Algorithm~\ref{algo:complexAlgorithm}. When delaying the addition of states to $S_k[c]$, at some point, there will be one execution of the outer while loop of Algorithm~\ref{algo:complexAlgorithm} in which no additional states are found. Since the algorithm will terminate without finding a policy in this case, by the correctness of the algorithm, we can terminate the search at that point, and the policy found last is optimally risk-averse.

\section{Experiments}

We implemented the $p$-risk-averse policy computation approach in a prototype tool written in \texttt{C++} that is called \texttt{ramps}. The tool uses the simplification from Remark~\ref{remark:algorithmSimplication} and employs value iteration to compute policies for the reachability MDPs analyzed in Algorithm~\ref{algo:complexAlgorithm}. Bisection search with a cut-off value of $0.01$ (i.e., 1 percent) is used to compute close-to-optimal risk-averse policies. We configured the value iteration processes to terminate when the sum of updates to the state values falls below $0.05$. Value iteration is performed in a parallelized way using the \texttt{openmp} library. All computation times reported in the following were taken on an Intel i5-4200U computer with a 1.60\,GHz clock rate and 4\,GB of RAM, utilizing 2 physical processor cores, each with two virtual hyper-threaded cores, which are used for value iteration. The \texttt{ramps} tool is available under the GPLv3 open source license from \texttt{https://github.com/progirep/ramps}.

\subsection{Single-robot control}

In the first experiment, we consider the setting from Example~\ref{example:motivation}. The \texttt{ramps} tool needs 30 minutes and 11 seconds (95m57s of single-processor time) to compute a $0.890689$-risk-averse policy with 388329 states. A simulation of it, available as a video on \texttt{https://progirep.github.\allowbreak{}io/\allowbreak{}ramps}, shows that the robot performs the task encoded into the parity automaton until it crashes. Visiting the regions in the middle in the correct order seems to be relatively easy for the policy. In order to reach the regions on the left and on the right in a risk-averse way, the robot often circles many times before it has the right approach angle and position to travel through one of the gaps next to the static obstacles.

\subsection{Multi-robot control}

As a second example, we considered a multi-robot control scenario, which we depict in Figure~\ref{fig:exampleworkspace2}. This time, we have two robots without complex dynamics: in each step, they can either move left, right, up, or down by one cell, or choose not to move at all. If a robot chooses to move, there is an 8 percent chance that it moves in a different direction than chosen (i.e., $8/3$ percent per remaining direction). As in the first example, crashing into an obstacle or into the workspace boundaries leads to a transition to an error state in the MDP. A robot crashing into the other robot also leads to the error state. The robots can also carry an item. For this, they have to jointly perform a pickup operation while standing left and right, respectively, of the pickup region $r_1$.
While they maintain a horizontal distance of $2$, they can continue carrying the item. The item is lost if the distance deviates. At region $r_2$, they can also drop the item. They cannot crash into each other while carrying an item (as it acts like a spacer). The MDP has 12294 states, 307304 state/action pairs, and 2798040 edges. The numbers of state/action pairs and edges are higher than in the first scenario, as each of the two robots has five choices of actions in each step. The specification is represented as a 5-state parity automaton that encodes that (1) infinitely many items shall be delivered from $r_1$ to $r_2$, (2) infinitely often, robots one and two shall visit the top left and top right regions, respectively, and (3) the pickup and dropping regions should never be visited by any robot.

Computing a $0.599408$-risk-averse policy takes $146.4$ seconds, and the simulation (available as a video on \texttt{https://progirep.github.\allowbreak{}io/\allowbreak{}ramps}) shows that again, the policy lets the robots perform their task until at least one of them collides. In case the item is lost during delivery, the two robots just try again immediately. The generated policy has $61509$ states.

\begin{figure}
\centering
% (TikZ drawing code omitted: an 11x11 grid workspace with obstacle cells,
% the pickup region $r_1$, the dropping region $r_2$, and the two robots'
% initial positions.)
\caption{Workspace for the multi-robot example.}
\label{fig:exampleworkspace2}
\end{figure}

\section{Conclusion}

In this paper, we showed how to compute \newterm{risk-averse} policies. A system governed by such a policy works towards the satisfaction of some given $\omega$-regular specification even in probabilistic environments in which almost-sure non-satisfaction of the specification cannot be avoided in the long run. Instead of just resigning because the probability mass of the runs of a Markov decision process that satisfy the specification can only be $0$, a $p$-risk-averse policy always reaches the respective next \emph{goal state} with a probability of at least $p$ (from the previous goal state). The definition of the problem ensures that the goal states are chosen in a way that faithfully captures the satisfaction of the specification. We assumed that the specification is given as a deterministic parity automaton, but structured logics such as linear temporal logic (LTL) could also be used, as translations from LTL to parity automata are known \cite{DBLP:journals/lmcs/Piterman07}.\looseness=-1

We intend to extend the approach to the synthesis of strategies in stochastic two-player games in future work. Also, we will explore how to incorporate additional optimization criteria such as mean-average cost into policy generation, and whether reinforcement learning techniques can be used to successively approximate optimal policies during policy execution.

\section*{Acknowledgements}
\noindent R.~Ehlers was supported by the Institutional Strategy of the University of Bremen, funded by the German Excellence Initiative. S. Moarref and U. Topcu were partially supported by awards AFRL $FA8650$-$15$-$C$-$2546$, ONR $N000141310778$, ARO $W911NF$-$15$-$1$-$0592$, NSF $1550212$, and DARPA $W911NF$-$16$-$1$-$0001$.

\bibliographystyle{IEEEtran}
\section{Introduction} \label{sec:intro} Colorization of grayscale images is a simple task for the human imagination. A human need only recall that sky is blue and grass is green; for many objects, the mind is free to hallucinate several plausible colors. The high-level comprehension required for this process is precisely why the development of fully automatic colorization algorithms remains a challenge. Colorization is thus intriguing beyond its immediate practical utility in graphics applications. Automatic colorization serves as a proxy measure for visual understanding. Our work makes this connection explicit; we unify a colorization pipeline with the type of deep neural architectures driving advances in image classification and object detection. Both our technical approach and focus on fully automatic results depart from past work. Given colorization's importance across multiple applications (\emph{e.g.}~historical photographs and videos~\cite{tsaftaris2014novel}, artist assistance~\cite{sykora2004unsupervised,qu2006manga}), much research strives to make it cheaper and less time-consuming~\cite{ welsh2002transferring, levin2004colorization, irony2005colorization, charpiat2008automatic, morimoto2009automatic, chia2011semantic, gupta2012image, deshpande2015learning, cheng2015deep}. However, most methods still require some level of user input~\cite{ levin2004colorization, sapiro2005inpainting, irony2005colorization, charpiat2008automatic, chia2011semantic, gupta2012image}. Our work joins the relatively few recent efforts on fully automatic colorization~\cite{morimoto2009automatic,deshpande2015learning,cheng2015deep}. Some~\cite{deshpande2015learning, cheng2015deep} show promising results on typical scenes (\emph{e.g.}~landscapes), but their success is limited on complex images with foreground objects. \begin{figure}[!t] \RawFloats \hspace{-0.35cm} \includestandalone[trim=0.1cm 0.1cm 0.1cm 0.1cm]{figures/overview} \vspace{-0.2cm} \caption{ \small \textbf{System overview.} We process a grayscale image through a deep convolutional architecture~(VGG)~\cite{vgg16} and take spatially localized multilayer slices (hypercolumns)~\cite{ MYP:ACCV:2014, mostajabi2015feedforward ,hariharan2015hypercolumns}, as per-pixel descriptors. We train our system end-to-end for the task of predicting hue and chroma distributions for each pixel $p$ given its hypercolumn descriptor. These predicted distributions determine color assignment at test time. } \label{fig:schematic} \end{figure} At a technical level, existing automatic colorization methods often employ a strategy of finding suitable reference images and transferring their color onto a target grayscale image~\cite{morimoto2009automatic, deshpande2015learning}. This works well if sufficiently similar reference images can be found, but is difficult for unique grayscale input images. Such a strategy also requires processing a large repository of reference images at test time. In contrast, our approach is free of database search and fast at test time. Section~\ref{sec:related} provides a complete view of prior methods, highlighting differences. Our approach to automatic colorization converts two intuitive observations into design principles. First, semantic information matters. In order to colorize arbitrary images, a system must interpret the semantic composition of the scene (what is in the image: faces, cars, plants, \ldots) as well as localize objects (where things are). 
Deep convolutional neural networks (CNNs) can serve as tools to incorporate semantic parsing and localization into a colorization system. Our second observation is that while some scene elements can be assigned a single color with high confidence, others (\emph{e.g.}~clothes or cars) may draw from many suitable colors. Thus, we design our system to predict a color histogram, instead of a single color, at every image location. Figure~\ref{fig:schematic} sketches the CNN architecture we use to connect semantics with color distributions by exploiting features across multiple abstraction levels. Section~\ref{sec:method} provides details. Section~\ref{sec:experiments} experimentally validates our algorithm against competing methods~\cite{welsh2002transferring,deshpande2015learning} in two settings: fully (grayscale input only) and partially (grayscale input with reference global color histogram) automatic colorization. Across every metric and dataset~\cite{xiao2010sun,patterson2014sun,imagenet}, our method achieves the best performance. Our system's fully automatic output is superior to that of prior methods relying on additional information such as reference images or ground-truth color histograms. To ease the comparison burden for future research, we propose a new colorization benchmark on ImageNet~\cite{imagenet}. We also experiment with colorization itself as an objective for learning visual representations from scratch, thereby replacing use of ImageNet pretraining in a traditional semantic labeling task. Section~\ref{sec:final} summarizes our contributions: (1) a novel technical approach to colorization, bringing semantic knowledge to bear using CNNs, and modeling color distributions; (2) state-of-the-art performance across fully and partially automatic colorization tasks; (3) a new ImageNet colorization benchmark; (4) proof of concept on colorization for self-supervised representation learning. \section{Related work} \label{sec:related} Previous colorization methods broadly fall into three categories: scribble-based~\cite{ levin2004colorization, huang2005adaptive, qu2006manga, yatziv2006fast, luan2007natural}, transfer~\cite{ welsh2002transferring, irony2005colorization, tai2005local, charpiat2008automatic, morimoto2009automatic, chia2011semantic, gupta2012image}, and automatic direct prediction~\cite{ deshpande2015learning,cheng2015deep}. \emph{Scribble-based} methods, introduced by Levin~\emph{et al.}~\cite{levin2004colorization}, require manually specifying desired colors of certain regions. These scribble colors are propagated under the assumption that adjacent pixels with similar luminance should have similar color, with the optimization relying on Normalized Cuts~\cite{ shi2000normalized}. Users can interactively refine results via additional scribbles. Further advances extend similarity to texture~\cite{qu2006manga,luan2007natural}, and exploit edges to reduce color bleeding~\cite{huang2005adaptive}. \emph{Transfer-based} methods rely on availability of related \emph{reference} image(s), from which color is transferred to the target grayscale image. Mapping between source and target is established automatically, using correspondences between local descriptors~\cite{welsh2002transferring, charpiat2008automatic,morimoto2009automatic}, or in combination with manual intervention~\cite{irony2005colorization,chia2011semantic}. Excepting~\cite{morimoto2009automatic}, reference image selection is at least partially manual. 
In contrast to these method families, our goal is \emph{fully automatic} colorization. We are aware of two recent efforts in this direction. Deshpande~\emph{et al.}~\cite{deshpande2015learning} colorize an entire image by solving a linear system. This can be seen as an extension of patch-matching techniques~\cite{welsh2002transferring}, adding interaction terms for spatial consistency. Regression trees address the high-dimensionality of the system. Inference requires an iterative algorithm. Most of the experiments are focused on a dataset (SUN-6) limited to images of a few scene classes, and best results are obtained when the scene class is known at test time. They also examine another partially automatic task, in which a desired global color histogram is provided. The work of Cheng~\emph{et al.}~\cite{cheng2015deep} is perhaps most related to ours. It combines three levels of features with increasing receptive field: the raw image patch, DAISY features~\cite{tola2008fast}, and semantic features~\cite{long2015fully}. These features are concatenated and fed into a three-layer fully connected neural network trained with an $L_2$ loss. Only this last component is optimized; the feature representations are fixed. Unlike~\cite{deshpande2015learning,cheng2015deep}, our system does not rely on hand-crafted features, is trained end-to-end, and treats color prediction as a histogram estimation task rather than as regression. Experiments in Section~\ref{sec:experiments} justify these principles by demonstrating performance superior to the best reported by~\cite{ deshpande2015learning,cheng2015deep} across all regimes. Two concurrent efforts also present feed-forward networks trained end-to-end for colorization. Iizuka \& Simo-Serra~\emph{et al.}~\cite{IizukaSIGGRAPH2016} propose a network that concatenates two separate paths, specializing in global and local features, respectively. This concatenation can be seen as a two-tiered hypercolumn; in comparison, our 16-layer hypercolumn creates a continuum between low- and high-level features. Their network is trained jointly for classification (cross-entropy) and colorization ($L_2$ loss in Lab). We initialize, but do not anchor, our system to a classification-based network, allowing for fine-tuning of colorization on unlabeled datasets. Zhang~\emph{et al.}~\cite{zhang2016colorful} similarly propose predicting color histograms to handle multi-modality. Some key differences include their usage of up-convolutional layers, deep supervision, and dense training. In comparison, we use a fully convolutional approach, with deep supervision implicit in the hypercolumn design, and, as Section~\ref{sec:method} describes, memory-efficient training via spatially sparse samples. \section{Method} \label{sec:method} We frame the colorization problem as learning a function $f : \mathcal{X} \to \mathcal{Y}$. Given a grayscale image patch $\mathbf{x} \in \mathcal{X} = [0, 1]^{S \times S}$, $f$ predicts the color $\mathbf{y} \in \mathcal{Y}$ of its center pixel. The patch size $S \times S$ is the receptive field of the colorizer. The output space $\mathcal{Y}$ depends on the choice of color parameterization. We implement $f$ according to the neural network architecture diagrammed in Figure~\ref{fig:schematic}. 
Motivating this strategy is the success of similar architectures for semantic segmentation~\cite{ farabet2013learning, long2015fully, chen2014semantic, hariharan2015hypercolumns, mostajabi2015feedforward} and edge detection~\cite{ MYP:ACCV:2014, GL:ACCV:2014, BST:CVPR:2015, SWWBZ:CVPR:2015, xie2015holistically}. Together with colorization, these tasks can all be viewed as image-to-image prediction problems, in which a value is predicted for each input pixel. Leading methods commonly adapt deep convolutional neural networks pretrained for image classification~\cite{imagenet,vgg16}. Such classification networks can be converted to \emph{fully convolutional} networks that produce output of the same spatial size as the input, \emph{e.g.}~using the shift-and-stitch method~\cite{long2015fully} or the more efficient {\em\`a trous} algorithm~\cite{chen2014semantic}. Subsequent training with a task-specific loss fine-tunes the converted network. Skip-layer connections, which directly link low- and mid-level features to prediction layers, are an architectural addition beneficial for many image-to-image problems. Some methods implement skip connections directly through concatenation layers~\cite{long2015fully,chen2014semantic}, while others equivalently extract per-pixel descriptors by reading localized slices of multiple layers~\cite{MYP:ACCV:2014,mostajabi2015feedforward, hariharan2015hypercolumns}. We use this latter strategy and adopt the recently coined \emph{hypercolumn} terminology~\cite{hariharan2015hypercolumns} for such slices. Though we build upon these ideas, our technical approach innovates on two fronts. First, we integrate domain knowledge for colorization, experimenting with output spaces and loss functions. We design the network output to serve as an intermediate representation, appropriate for direct or biased sampling. We introduce an energy minimization procedure for optionally biasing sampling towards a reference image. Second, we develop a novel and efficient computational strategy for network training that is widely applicable to hypercolumn architectures. \subsection{Color spaces} \label{sec:method_color} We generate training data by converting color images to grayscale according to $L = \frac{R+G+B}{3}$. This is only one of many desaturation options and chosen primarily to facilitate comparison with Deshpande~\emph{et al.}~\cite{ deshpande2015learning}. For the representation of color predictions, using RGB is overdetermined, as lightness $L$ is already known. We instead consider output color spaces with $L$ (or a closely related quantity) conveniently appearing as a separate pass-through channel: \begin{itemize} \item{ \textbf{Hue/chroma}. Hue-based spaces, such as HSL, can be thought of as a color cylinder, with angular coordinate $H$ (hue), radial distance $S$ (saturation), and height $L$ (lightness). The values of $S$ and $H$ are unstable at the bottom (black) and top (white) of the cylinder. HSV describes a similar color cylinder which is only unstable at the bottom. However, $L$ is no longer one of the channels. We wish to avoid both instabilities and still retain $L$ as a channel. The solution is a color bicone, where chroma ($C$) takes the place of saturation. Conversion to HSV is given by $V\,=\,L\,+\,\frac{C}{2},\; S\,=\,\frac{C}{V}$. } \item{ \textbf{Lab} and $\bfgreek{alpha}\bfgreek{beta}$. Lab (or L*a*b) is designed to be perceptually linear. The color vector $(a,b)$ defines a Euclidean space where the distance to the origin determines chroma. 
Deshpande~\emph{et al.}~\cite{deshpande2015learning} use a color space somewhat similar to Lab, denoted ``ab''. To differentiate, we call their color space $\alpha\beta$. }
\end{itemize}

\subsection{Loss}
\label{sec:method_loss}

For any output color representation, we require a loss function for measuring prediction errors. A first consideration, also used in~\cite{cheng2015deep}, is $L_2$ regression in Lab:
\begin{equation}
\label{eq:loss-l2}
L_\mathrm{reg}(\mathbf{x}, \mathbf{y}) = \| f(\mathbf{x}) - \mathbf{y} \|^2
\end{equation}
where $\mathcal{Y} = \mathbb{R}^2$ describes the $(a, b)$ vector space. However, regression targets do not handle multimodal color distributions well. To address this, we instead predict distributions over a set of color bins, a technique also used in~\cite{charpiat2008automatic}:
\begin{equation}
\label{eq:loss-hist}
L_\mathrm{hist}(\mathbf{x}, \mathbf{y}) = D_\mathrm{KL}(\mathbf{y}\|f(\mathbf{x}))
\end{equation}
where $\mathcal{Y} = [0, 1]^K$ describes a histogram over $K$ bins, and $D_\mathrm{KL}$ is the KL-divergence. The ground-truth histogram $\mathbf{y}$ is set as the empirical distribution in a rectangular region of size $R$ around the center pixel. Somewhat surprisingly, our experiments show no benefit from predicting smoothed histograms, so we simply set $R = 1$. This makes $\mathbf{y}$ a one-hot vector and Equation~\eqref{eq:loss-hist} the log loss. For histogram predictions, the last layer of the network $f$ is always a softmax.

There are several choices of how to bin the color space. We bin the Lab axes by evenly spaced Gaussian quantiles ($\mu = 0, \sigma = 25$). They can be encoded separately for $a$ and $b$ (as marginal distributions), in which case our loss becomes the sum of two separate terms defined by Equation~\eqref{eq:loss-hist}. They can also be encoded as a joint distribution over $a$ and $b$, in which case we let the quantiles form a 2D grid of bins. In our experiments, we set $K = 32$ for marginal distributions and $K = 16 \times 16$ for joint. We determined these numbers, along with $\sigma$, to offer a good compromise of output fidelity and output complexity.

For hue/chroma, we only consider marginal distributions and bin the axes uniformly in $[0,1]$. Since hue becomes unstable as chroma approaches zero, we add a sample weight to the hue based on the chroma:
\begin{equation}
\label{eq:loss-hc}
L_\mathrm{hue/chroma}(\mathbf{x}, \mathbf{y}) = D_\mathrm{KL}(\mathbf{y}_\mathrm{C}\|f_\mathrm{C}(\mathbf{x})) + \lambda_H y_\mathrm{C} D_\mathrm{KL}(\mathbf{y}_\mathrm{H}\|f_\mathrm{H}(\mathbf{x}))
\end{equation}
where $\mathcal{Y} = [0,1]^{2 \times K}$ and $y_\mathrm{C} \in [0,1]$ is the sample pixel's chroma. We set $\lambda_H = 5$, roughly the inverse expectation of $y_\mathrm{C}$, thus equally weighting hue and chroma.

\subsection{Inference}
\label{sec:method_inference}

Given a network $f$ trained according to a loss function in the previous section, we evaluate it at every pixel $n$ in a test image: $\hat{\mathbf{y}}_n = f(\mathbf{x}_n)$. For the $L_2$ loss, all that remains is to combine each $\hat{\mathbf{y}}_n$ with the respective lightness and convert to RGB. With histogram predictions, we consider options for inferring a final color:
\begin{itemize}
\item{ \textbf{Sample} Draw a sample from the histogram. If done per pixel, this may create high-frequency color changes in areas of high-entropy histograms. }
\item{ \textbf{Mode} Take the $\arg\max_k \hat{y}_{n,k}$ as the color. This can create jarring transitions between colors, and is prone to vote splitting for proximal centroids. }
\item{ \textbf{Median} Compute the cumulative sum of $\hat{\mathbf{y}}_n$ and use linear interpolation to find the value at the middle bin. Undefined for circular histograms, such as hue. }
\item{ \textbf{Expectation} Sum over the color bin centroids weighted by the histogram. }
\end{itemize}

For Lab output, we achieve the best qualitative and quantitative results using expectations. For hue/chroma, the best results are achieved by taking the median of the chroma. Many objects can appear both with and without chroma, which means $C = 0$ is a particularly common bin. This mode draws the expectation closer to zero, producing less saturated images. As for hue, since it is circular, we first compute the complex expectation:
\begin{equation}
z = \mathbb{E}_{H \sim f_\mathrm{H}(\mathbf{x})}[H] \triangleq \frac 1K \sum_k [f_\mathrm{H}(\mathbf{x})]_k \mathrm{e}^{i\theta_k}, \quad \theta_k = 2\pi \frac{k + 0.5}{K}
\end{equation}
We then set hue to the argument of $z$ remapped to lie in $[0,1)$.

\parpic[r][t]{
\includegraphics{staged-figures/fig-examples-fading}
}
In cases where the estimate of the chroma is high and $z$ is close to zero, the instability of the hue can create artifacts. A simple, yet effective, fix is chromatic fading: downweight the chroma if the absolute value of $z$ is too small. We thus re-define the predicted chroma by multiplying it by a factor of $\min(\eta^{-1}|z|, 1)$. In our experiments, we set $\eta = 0.03$ (obtained via cross-validation).

\subsection{Histogram transfer from ground-truth}
\label{sec:method_transfer}

So far, we have only considered the fully automatic color inference task. Deshpande~\emph{et al.}~\cite{deshpande2015learning} test a separate task where the ground-truth histogram in the two non-lightness color channels of the original color image is made available.\footnote{
Note that if the histogram of the $L$ channel were available, it would be possible to match lightness to lightness exactly and thus greatly narrow down color placement.
}
In order to compare, we propose two histogram transfer methods. We refer to the predicted image as the \emph{source} and the ground-truth image as the \emph{target}.

~\\\noindent
\textbf{Lightness-normalized quantile matching}. Divide the RGB representation of both source and target by their respective lightness. Compute marginal histograms over the resulting three color channels. Alter each source histogram to fit the corresponding target histogram by quantile matching, and multiply by lightness. Though it does not exploit our richer color distribution predictions, quantile matching beats the cluster correspondence method of~\cite{deshpande2015learning} (see Table~\ref{tab:sun6}).

~\\\noindent
\textbf{Energy minimization}. \label{sec:energy} We phrase histogram matching as minimizing energy:
\begin{equation}\label{eq:energy-min}
E = \frac 1N \sum_n D_\mathrm{KL}(\hat{\mathbf{y}}^*_n \| \hat{\mathbf{y}}_n) + \lambda D_{\chi^2}(\langle \hat{\mathbf{y}}^* \rangle, \mathbf{t})
\end{equation}
where $N$ is the number of pixels, $\hat{\mathbf{y}}, \hat{\mathbf{y}}^* \in [0, 1]^{N \times K}$ are the predicted and posterior distributions, respectively. The target histogram is denoted by $\mathbf{t} \in [0, 1]^K$. The first term contains unary potentials that anchor the posteriors to the predictions. The second term is a symmetric $\chi^2$ distance to promote proximity between source and target histograms.
The weight $\lambda$ defines the relative importance of histogram matching. We estimate the source histogram as $\langle \hat{\mathbf{y}}^* \rangle = \frac 1N \sum_n \hat{\mathbf{y}}^*_n$. We parameterize the posterior for all pixels $n$ as: $\hat{\mathbf{y}}^*_n = \mathrm{softmax}(\log \hat{\mathbf{y}}_n + \mathbf{b})$, where the vector $\mathbf{b} \in \mathbb{R}^K$ can be seen as a global bias for each bin. It is also possible to solve for the posteriors directly; this does not perform better quantitatively and is more prone to introducing artifacts. We solve for $\mathbf{b}$ using gradient descent on $E$ and use the resulting posteriors in place of the predictions. In the case of marginal histograms, the optimization is run twice, once for each color channel.

\subsection{Neural network architecture and training}

Our base network is a fully convolutional version of VGG-16~\cite{vgg16} with two changes: (1) the classification layer (\texttt{fc8}) is discarded, and (2) the first filter layer (\texttt{conv1\_1}) operates on a single intensity channel instead of mean-subtracted RGB. We extract a hypercolumn descriptor for a pixel by concatenating the features at its spatial location in all layers, from \texttt{data} to \texttt{conv7} (\texttt{fc7}), resulting in a $12{,}417$-channel descriptor. We feed this hypercolumn into a fully connected layer with $1024$ channels (\texttt{h\_fc1} in Figure~\ref{fig:schematic}), to which we connect output predictors.

Processing each pixel separately in such a manner is quite costly. We instead run an entire image through a single forward pass of VGG-16 and approximate hypercolumns using bilinear interpolation. Even with such sharing, densely extracting hypercolumns requires significant memory ($1.7$~GB for $256\times256$ input).
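As a concrete illustration of hypercolumn extraction by bilinear interpolation, the following numpy sketch (ours, for exposition only; it is a simplification of our actual implementation, and the toy feature maps, strides, and function names are illustrative assumptions) builds a descriptor at an arbitrary image location from a single forward pass. The toy example also samples descriptors at $128$ random locations, anticipating the sparse training scheme described below:

\begin{verbatim}
import numpy as np

def bilinear(fmap, y, x):
    # Bilinearly interpolate a (C, H, W) feature map at continuous (y, x).
    C, H, W = fmap.shape
    y = min(max(y, 0.0), H - 1.0)
    x = min(max(x, 0.0), W - 1.0)
    y0, x0 = int(y), int(x)
    y1, x1 = min(y0 + 1, H - 1), min(x0 + 1, W - 1)
    wy, wx = y - y0, x - x0
    return ((1 - wy) * (1 - wx) * fmap[:, y0, x0]
            + (1 - wy) * wx * fmap[:, y0, x1]
            + wy * (1 - wx) * fmap[:, y1, x0]
            + wy * wx * fmap[:, y1, x1])

def hypercolumn(fmaps, strides, y, x):
    # Concatenate interpolated features from all layers at image location
    # (y, x).  fmaps: per-layer (C_l, H_l, W_l) arrays; strides: each
    # layer's downsampling factor with respect to the input image.
    return np.concatenate([bilinear(f, y / s, x / s)
                           for f, s in zip(fmaps, strides)])

# Toy example: three "layers" at strides 1, 2, 4 for a 32 x 32 input image.
rng = np.random.default_rng(0)
fmaps = [rng.random((8, 32, 32)),
         rng.random((16, 16, 16)),
         rng.random((32, 8, 8))]
locs = rng.integers(0, 32, size=(128, 2))  # 128 sparse sample locations
descs = np.stack([hypercolumn(fmaps, [1, 2, 4], y, x) for y, x in locs])
print(descs.shape)                         # (128, 56) = 8 + 16 + 32 channels
\end{verbatim}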
\begin{figure}[!th] \begin{center} \begin{minipage}[b]{0.156\linewidth} \vspace{0pt} \begin{center} \includegraphics[width=\textwidth]{examples/ctest10k/grayscale/00039118}\\ \includegraphics[width=\textwidth]{examples/ctest10k/grayscale/00036694}\\ \includegraphics[width=\textwidth]{examples/ctest10k/grayscale/00039857}\\ \includegraphics[width=\textwidth]{examples/ctest10k/grayscale/00037934}\\ \includegraphics[width=\textwidth]{examples/ctest10k/grayscale/00044131}\\ \includegraphics[width=\textwidth]{examples/ctest10k/grayscale/00037096}\\ \scriptsize{\textbf{\textsf{Input}}} \end{center} \end{minipage} \begin{minipage}[b]{0.156\linewidth} \vspace{0pt} \begin{center} \includegraphics[width=\textwidth]{examples/ctest10k/good/00039118}\\ \includegraphics[width=\textwidth]{examples/ctest10k/good/00036694}\\ \includegraphics[width=\textwidth]{examples/ctest10k/good/00039857}\\ \includegraphics[width=\textwidth]{examples/ctest10k/good/00037934}\\ \includegraphics[width=\textwidth]{examples/ctest10k/good/00044131}\\ \includegraphics[width=\textwidth]{examples/ctest10k/good/00037096}\\ \scriptsize{\textbf{\textsf{Our Method}}} \end{center} \end{minipage} \begin{minipage}[b]{0.156\linewidth} \vspace{0pt} \begin{center} \includegraphics[width=\textwidth]{examples/ctest10k/good-gt/00039118}\\ \includegraphics[width=\textwidth]{examples/ctest10k/good-gt/00036694}\\ \includegraphics[width=\textwidth]{examples/ctest10k/good-gt/00039857}\\ \includegraphics[width=\textwidth]{examples/ctest10k/good-gt/00037934}\\ \includegraphics[width=\textwidth]{examples/ctest10k/good-gt/00044131}\\ \includegraphics[width=\textwidth]{examples/ctest10k/good-gt/00037096}\\ \scriptsize{\textbf{\textsf{Ground-truth}}} \end{center} \end{minipage} \hfill \begin{minipage}[b]{0.156\linewidth} \vspace{0pt} \begin{center} \includegraphics[width=\textwidth]{examples/ctest10k/grayscale/00039488}\\ \vspace{0.0075\linewidth} \includegraphics[width=\textwidth]{examples/ctest10k/grayscale/00041491}\\ \vspace{0.0075\linewidth} \includegraphics[width=\textwidth]{examples/ctest10k/grayscale/00043772}\\ \vspace{0.0075\linewidth} \includegraphics[width=\textwidth]{examples/ctest10k/grayscale/00038876}\\ \vspace{0.0075\linewidth} \includegraphics[width=\textwidth]{examples/ctest10k/grayscale/00038325}\\ \vspace{0.0075\linewidth} \includegraphics[width=\textwidth]{examples/ctest10k/grayscale/00039038}\\ \scriptsize{\textbf{\textsf{Input}}} \end{center} \end{minipage} \begin{minipage}[b]{0.156\linewidth} \vspace{0pt} \begin{center} \includegraphics[width=\textwidth]{examples/ctest10k/good/00039488}\\ \vspace{0.0075\linewidth} \includegraphics[width=\textwidth]{examples/ctest10k/good/00041491}\\ \vspace{0.0075\linewidth} \includegraphics[width=\textwidth]{examples/ctest10k/good/00043772}\\ \vspace{0.0075\linewidth} \includegraphics[width=\textwidth]{examples/ctest10k/good/00038876}\\ \vspace{0.0075\linewidth} \includegraphics[width=\textwidth]{examples/ctest10k/good/00038325}\\ \vspace{0.0075\linewidth} \includegraphics[width=\textwidth]{examples/ctest10k/good/00039038}\\ \scriptsize{\textbf{\textsf{Our Method}}} \end{center} \end{minipage} \begin{minipage}[b]{0.156\linewidth} \vspace{0pt} \begin{center} \includegraphics[width=\textwidth]{examples/ctest10k/good-gt/00039488}\\ \vspace{0.0075\linewidth} \includegraphics[width=\textwidth]{examples/ctest10k/good-gt/00041491}\\ \vspace{0.0075\linewidth} \includegraphics[width=\textwidth]{examples/ctest10k/good-gt/00043772}\\ \vspace{0.0075\linewidth} 
\includegraphics[width=\textwidth]{examples/ctest10k/good-gt/00038876}\\
\vspace{0.0075\linewidth}
\includegraphics[width=\textwidth]{examples/ctest10k/good-gt/00038325}\\
\vspace{0.0075\linewidth}
\includegraphics[width=\textwidth]{examples/ctest10k/good-gt/00039038}\\
\scriptsize{\textbf{\textsf{Ground-truth}}}
\end{center}
\end{minipage}
\end{center}
\caption{\small
\textbf{Fully automatic colorization results on ImageNet/ctest10k.}
Our system reproduces known object color properties (\emph{e.g.}~faces, sky, grass, fruit, wood), and coherently picks colors for objects without such properties (\emph{e.g.}~clothing).
}
\label{fig:ctest10k-examples}
\end{figure}

To fit image batches in memory during training, we instead extract hypercolumns at only a sparse set of locations, implementing a custom Caffe~\cite{jia2014caffe} layer to directly compute them.\footnote{
\url{https://github.com/gustavla/autocolorize}
}
Extracting batches of only $128$ hypercolumn descriptors per input image, sampled at random locations, provides sufficient training signal. In the backward pass of stochastic gradient descent, an interpolated hypercolumn propagates its gradients to the four closest spatial cells in each layer. Locks ensure atomicity of gradient updates, without incurring any performance penalty. This drops training memory for hypercolumns to only $13$~MB per image.

We initialize with a version of VGG-16 pretrained on ImageNet, adapting it to grayscale by averaging over color channels in the first layer and rescaling appropriately. Prior to training for colorization, we further fine-tune the network for one epoch on the ImageNet classification task with grayscale input. As the original VGG-16 was trained without batch normalization~\cite{ioffe2015batch}, the scale of responses in internal layers can vary dramatically, presenting a problem for learning atop their hypercolumn concatenation. Liu~\emph{et al.}~\cite{liu2015parsenet} compensate for such variability by applying layer-wise $L_2$ normalization. We use the alternative of balancing hypercolumns so that each layer has roughly unit second moment ($\mathbb{E}[X^2] \approx 1$); Appendix (Section~\ref{sec:rebalance}) provides additional details.

\section{Experiments}
\label{sec:experiments}

Starting from pretrained VGG-16-Gray, described in the previous section, we attach \texttt{h\_fc1} and output prediction layers with Xavier initialization~\cite{glorot2010understanding}, and fine-tune the entire system for colorization. We consider multiple prediction layer variants: Lab output with $L_2$ loss, and both Lab and hue/chroma marginal or joint histogram output with losses according to Equations~\eqref{eq:loss-hist} and~\eqref{eq:loss-hc}. We train each system variant end-to-end for one epoch on the $1.2$ million images of the ImageNet training set, each resized to at most $256$ pixels in the smaller dimension. A single epoch takes approximately 17 hours on a GTX Titan X GPU. At test time, colorizing a single $512 \times 512$ pixel image takes $0.5$ seconds.

\begin{table}[t!]
\RawFloats
\begin{minipage}[t]{.52\textwidth}
\centering
{\small
\begin{tabular*}{\textwidth}{l @{\extracolsep{\fill}} rr}
\toprule
Model$\backslash$Metric & RMSE & PSNR \\
\midrule
No colorization & 0.343 & 22.98 \\
Lab, $L_2$ & 0.318 & 24.25 \\
Lab, $K = 32$ & 0.321 & 24.33 \\
Lab, $K = 16 \times 16$ & 0.328 & 24.30 \\
Hue/chroma, $K = 32$ & 0.342 & 23.77 \\
\quad + chromatic fading & {\bf 0.299} & {\bf 24.45} \\
\bottomrule
\end{tabular*}}
\caption{ \small
\textbf{ImageNet/cval1k.} Validation performance of system variants. Hue/chroma is best, but only with chromatic fading.
}
\label{tab:cval1k}
\end{minipage}
\hfill
\begin{minipage}[t]{.45\textwidth}
\centering
{\small
\begin{tabular*}{\textwidth}{l @{\extracolsep{\fill}} rr}
\toprule
Model$\backslash$Metric & RMSE & PSNR \\
\midrule
\texttt{data..fc7} & {\bf 0.299} & {\bf 24.45} \\
\texttt{data..conv5\_3} & 0.306 & 24.13 \\
\texttt{conv4\_1..fc7} & 0.302 & {\bf 24.45} \\
\texttt{conv5\_1..fc7} & 0.307 & 24.38 \\
\texttt{fc6..fc7} & 0.323 & 24.22 \\
\texttt{fc7} & 0.324 & 24.19 \\
\bottomrule
\end{tabular*}}
\caption{ \small
\textbf{ImageNet/cval1k.} Ablation study of hypercolumn components.
}
\label{tab:ablation}
\end{minipage}
\end{table}

We set up two disjoint subsets of the ImageNet validation data for our own use: $1000$ validation images (\textbf{cval1k}) and $10000$ test images (\textbf{ctest10k}). Each set has a balanced representation for ImageNet categories, and excludes any images encoded as grayscale, but may include images that are naturally grayscale (\emph{e.g.}~closeup of nuts and bolts), where an algorithm should know not to add color. Category labels are discarded; only images are available at test time. We propose \textbf{ctest10k} as a standard benchmark with the following metrics:
\begin{itemize}
\item{ \textbf{RMSE}: root mean square error in $\alpha\beta$ averaged over all pixels~\cite{deshpande2015learning}. }
\item{ \textbf{PSNR}: peak signal-to-noise ratio in RGB calculated per image~\cite{cheng2015deep}. We use the arithmetic mean of PSNR over images, instead of the geometric mean as in Cheng~\emph{et al.}~\cite{cheng2015deep}; the geometric mean is overly sensitive to outliers. }
\end{itemize}

By virtue of comparing to ground-truth color images, quantitative colorization metrics can penalize reasonable, but incorrect, color guesses for many objects (\emph{e.g.}~red car instead of blue car) more than jarring artifacts. This makes qualitative results for colorization as important as quantitative; we report both.

Figures~\ref{fig:teaser},~\ref{fig:ctest10k-examples}, and~\ref{fig:ctest10k-more-examples} show example test results of our best system variant, selected according to performance on the validation set and trained for a total of $10$ epochs. This variant predicts hue and chroma and uses chromatic fading during image generation. Table~\ref{tab:cval1k} provides validation benchmarks for all system variants, including the trivial baseline of no colorization. On ImageNet test (\textbf{ctest10k}), our selected model obtains $0.293$ (RMSE, $\alpha\beta$, avg/px) and $24.94$~dB (PSNR, RGB, avg/im), compared to $0.333$ and $23.27$~dB for the baseline. Table~\ref{tab:ablation} examines the importance of different neural network layers to colorization; it reports validation performance of ablated systems that include only the specified subsets of layers in the hypercolumn used to predict hue and chroma.
Some lower layers may be discarded without much performance loss, yet higher layers alone (\texttt{fc6..fc7}) are insufficient for good colorization. Our ImageNet colorization benchmark is new to a field lacking an established evaluation protocol. We therefore focus on comparisons with two recent papers~\cite{deshpande2015learning,cheng2015deep}, using their self-defined evaluation criteria. To do so, we run our ImageNet-trained hue and chroma model on two additional datasets: \begin{itemize} \item{ \textbf{\bf SUN-A}~\cite{patterson2014sun} is a subset of the SUN dataset~\cite{xiao2010sun} containing $47$ object categories. Cheng~\emph{et al.}~\cite{cheng2015deep} train a colorization system on $2688$ images and report results on $1344$ test images. We were unable to obtain the list of test images, and therefore report results averaged over five random subsets of $1344$ SUN-A images. We do not use any SUN-A images for training. } \item{ \textbf{SUN-6}, another SUN subset, used by Deshpande~\emph{et al.}~\cite{deshpande2015learning}, includes images from $6$ scene categories (beach, castle, outdoor, kitchen, living room, bedroom). We compare our results on $240$ test images to those reported in~\cite{deshpande2015learning} for their method as well as for Welsh~\emph{et al.}~\cite{welsh2002transferring} with automatically matched reference images as in~\cite{morimoto2009automatic}. Following~\cite{deshpande2015learning}, we consider another evaluation regime in which ground-truth target color histograms are available. } \end{itemize} \begin{table}[!t] \RawFloats \begin{minipage}[t]{.40\textwidth} {\small \begin{tabular*}{\textwidth}[t]{l @{\extracolsep{\fill}} r} \toprule Method & RMSE \\ \midrule Grayscale (no colorization) & 0.285 \\ Welsh~\emph{et al.}~\cite{welsh2002transferring} & 0.353 \\ Deshpande~\emph{et al.}~\cite{deshpande2015learning} & 0.262 \\ \quad + GT Scene & 0.254 \\ Our Method & \textbf{0.211} \\ \bottomrule \end{tabular*}} \caption{ \small \textbf{SUN-6.} Comparison with competing methods. } \label{tab:sun6} \end{minipage} \hfill \begin{minipage}[t]{.56\textwidth} {\small \begin{tabular*}{\textwidth}[t]{l @{\extracolsep{\fill}} r} \toprule Method & RMSE \\ \midrule Deshpande~\emph{et al.}~(C)~\cite{deshpande2015learning} & 0.236 \\ Deshpande~\emph{et al.}~(Q) & 0.211 \\ Our Method (Q) & 0.178 \\ Our Method (E) & \textbf{0.165} \\ \bottomrule \end{tabular*}} \caption{ \small \textbf{SUN-6 (GT Hist).} Comparison using ground-truth histograms. Results for Deshpande~\emph{et al.}~\cite{deshpande2015learning} use GT Scene. } \label{tab:sun6-gth} \end{minipage} \end{table} \begin{figure}[!t] \RawFloats \begin{minipage}[c]{.45\linewidth} \centering \includegraphics[width=.78\linewidth]{figures/sun6-histogram-pixels}\\ \caption{ \small \textbf{SUN-6.} Cumulative histogram of per pixel error (higher=more pixels with lower error). Results for Deshpande~\emph{et al.}~\cite{deshpande2015learning} use GT Scene. } \label{fig:sun6-cum-hist} \end{minipage}\hspace{1em}% \begin{minipage}[c]{.5\linewidth} \centering \includegraphics[width=.82\linewidth]{figures/SUN-A-histogram.pdf} \caption{ \small \textbf{SUN-A.} Histogram of per-image PSNR for~\cite{cheng2015deep} and our method. The highest geometric mean PSNR reported for experiments in~\cite{cheng2015deep} is 24.2, vs. {\bf 32.7$\pm$2.0} for us. } \label{fig:sun-deep-histogram} \end{minipage} \end{figure} Figure~\ref{fig:sun6-examples} shows a comparison of results on SUN-6. 
Forgoing usage of ground-truth global histograms, our fully automatic system produces output qualitatively superior to methods relying on such side information. Tables~\ref{tab:sun6}~and~\ref{tab:sun6-gth} report quantitative performance corroborating this view. The partially automatic systems in Table~\ref{tab:sun6-gth} adapt output to fit global histograms using either: (C) cluster correspondences~\cite{deshpande2015learning}, (Q) quantile matching, or (E) our energy minimization described in Section~\ref{sec:method_transfer}. Our quantile matching results are superior to those of~\cite{deshpande2015learning} and our new energy minimization procedure offers further improvement. Figures~\ref{fig:sun6-cum-hist} and~\ref{fig:sun-deep-histogram} compare error distributions on SUN-6 and SUN-A. As in Table~\ref{tab:sun6}, our fully automatic method dominates all competing approaches, even those which use auxiliary information. It is only outperformed by the version of itself augmented with ground-truth global histograms. On SUN-A, Figure~\ref{fig:sun-deep-histogram} shows clear separation between our method and~\cite{cheng2015deep} on per-image PSNR. The Appendix (Figures~\ref{fig:charpiat-reference} and~\ref{fig:charpiat-portraits}) provides anecdotal comparisons to one additional method, that of Charpiat~\emph{et al.}~\cite{charpiat2010machine}, which can be considered an automatic system if reference images are available. Unfortunately, source code of~\cite{charpiat2010machine} is not available and reported time cost is prohibitive for large-scale evaluation ($30$ minutes per image). We were thus unable to benchmark~\cite{charpiat2010machine} on large datasets. With regard to concurrent work, Zhang~\emph{et al.}~\cite{zhang2016colorful} include a comparison of our results to their own. The two systems are competitive in terms of quantitative measures of colorization accuracy. Their system, set to produce more vibrant colors, has an advantage in terms of human-measured preferences. In contrast, an off-the-shelf VGG-16 network for image classification, consuming our system's color output, more often produces correct labels, suggesting a realism advantage. We refer interested readers to~\cite{zhang2016colorful} for the full details of this comparison. Though we achieve significant improvements over prior state-of-the-art, our results are not perfect. Figure~\ref{fig:failure-modes} shows examples of significant failures. Minor imperfections are also present in some of the results in Figures~\ref{fig:ctest10k-examples} and~\ref{fig:ctest10k-more-examples}. We believe a common failure mode correlates with gaps in semantic interpretation: incorrectly identified or unfamiliar objects and incorrect segmentation. In addition, there are ``mistakes'' due to natural uncertainty of color -- \emph{e.g.}~the graduation robe at the bottom right of Figure~\ref{fig:ctest10k-examples} is red, but could as well be purple. Since our method produces histograms, we can provide interactive means of biasing colorizations according to user preferences. Rather than output a single color per pixel, we can sample color for image regions and evaluate color uncertainty. Specifically, solving our energy minimization formulation (Equation~\eqref{eq:energy-min}) with global biases~$\mathbf{b}$ that are not optimized based on a reference image, but simply ``rotated'' through color space, induces changed color preferences throughout the image. The uncertainty in the predicted histogram modulates this effect. 
Figure~\ref{fig:warhol} shows multiple sampled colorizations, together with a visualization of uncertainty. Here, uncertainty is the entropy of the predicted hue distribution, multiplied by the chroma. Our distributional output and energy minimization framework open the path for future investigation of human-in-the-loop colorization tools.

\subsection{Representation learning}
\label{sec:representation-learning}

\begin{figure}[!t]
\begin{center}
\begin{minipage}[b]{0.1880\linewidth}
\vspace{0pt}
\begin{center}
\includegraphics[width=\textwidth]{examples/ctest10k/bad/00048022}\\
\includegraphics[width=\textwidth]{examples/ctest10k/bad/00040947}
\end{center}
\end{minipage}
\hfill
\begin{minipage}[b]{0.2045\linewidth}
\vspace{0pt}
\begin{center}
\includegraphics[width=\textwidth]{examples/ctest10k/bad/00039050}\\
\includegraphics[width=\textwidth]{examples/ctest10k/bad/00047635}
\end{center}
\end{minipage}
\hfill
\begin{minipage}[b]{0.1802\linewidth}
\vspace{0pt}
\begin{center}
\includegraphics[width=\textwidth]{examples/ctest10k/bad/00037037}\\
\includegraphics[width=\textwidth]{examples/ctest10k/bad/00042951}
\end{center}
\end{minipage}
\hfill
\begin{minipage}[b]{0.1895\linewidth}
\vspace{0pt}
\begin{center}
\includegraphics[width=\textwidth]{examples/ctest10k/bad/00032520}\\
\includegraphics[width=\textwidth]{examples/ctest10k/bad/00034137}
\end{center}
\end{minipage}
\hfill
\begin{minipage}[b]{0.1860\linewidth}
\vspace{0pt}
\begin{center}
\includegraphics[width=\textwidth]{examples/ctest10k/bad/00042169}\\
\includegraphics[width=\textwidth]{examples/ctest10k/bad/00039352}
\end{center}
\end{minipage}
\vspace{-0.25cm}
\end{center}
\caption{\small
\textbf{Failure modes.}
\emph{Top row, left-to-right:} texture confusion, too homogeneous, color bleeding, unnatural color shifts ($\times 2$).
\emph{Bottom row:} inconsistent background, inconsistent chromaticity, not enough color, object not recognized (upside down face partly gray), context confusion (sky).
}
\label{fig:failure-modes}
\end{figure}

\begin{figure}[!t]
\RawFloats
\centering
\begin{tabular}{ccccc}
\includegraphics[width=.190\textwidth]{examples/rotation/00049925gray}&
\includegraphics[width=.190\textwidth]{examples/rotation/00049925_1}&
\includegraphics[width=.190\textwidth]{examples/rotation/00049925_2}&
\includegraphics[width=.190\textwidth]{examples/rotation/00049925_4}&
\includegraphics[width=.190\textwidth]{examples/rotation/00049925_uncentainty}
\end{tabular}
\caption{ \small
\textbf{Sampling colorizations.} \emph{Left:} Image \& 3 samples; \emph{Right:} Uncertainty map.
}
\label{fig:warhol}
\end{figure}

High-level visual understanding is essential for the colorization of grayscale images, motivating our use of an ImageNet pretrained network as a starting point. But with enough training data, perhaps we can turn this around and use colorization as a means of learning networks for capturing high-level visual representations. Table~\ref{tab:from-scratch} shows that a colorization network, trained from scratch using only unlabeled color images, is surprisingly competitive. It converges more slowly, but requires no more than twice the number of epochs. Our preliminary work shows that the networks learned via training colorization from scratch generalize well to other visual tasks. This is significant because such training requires no human annotation effort.
It follows a recent trend of learning representations through self-supervision (\emph{e.g.}{} context prediction~\cite{contextpred}, solving jigsaw puzzles~\cite{jigsaw}, inpainting~\cite{contextencoders}, adversarial feature learning~\cite{bigan,dumoulin2016adversarially}).

We examine self-supervised colorization as a replacement for supervised ImageNet pretraining on the Pascal VOC 2012 semantic segmentation task, with results on grayscale validation set images. We train colorization from scratch on ImageNet (Table~\ref{tab:from-scratch}) and fine-tune for Pascal semantic segmentation. We make the one adjustment of employing cross-validated early stopping to avoid overfitting. Table~\ref{tab:voc2012-segmentation} shows this strategy to be promising as a drop-in replacement for supervised ImageNet pretraining. Self-supervised colorization more than halfway bridges the gap between random initialization and supervised pretraining.

As VGG-16 is a more performant architecture, comparison with prior work is not straightforward. Yet, Table~\ref{tab:voc2012-segmentation} still indicates that colorization is a front-runner among the self-supervision methods, leading to an 18-point improvement in mIU over the baseline. To our knowledge, 50.2\% is the highest reported result that does not supplement training with additional annotated data~\cite{ion2014probabilistic}.

\begin{table}[!t]
\RawFloats
\begin{minipage}[t]{.36\textwidth}
\centering
{\small
\begin{tabular*}{\textwidth}[t]{l @{\extracolsep{\fill}} rr}
\toprule
Initialization & RMSE & PSNR \\
\midrule
Classifier & 0.299 & 24.45 \\
Random & 0.311 & 24.25 \\
\bottomrule
\end{tabular*}}
\caption{ \small
\textbf{ImageNet/cval1k.} Comparison of initialization methods prior to colorization training. Hue/chroma with chromatic fading is used in both cases (see Tab.~\ref{tab:cval1k}).
}
\label{tab:from-scratch}
\end{minipage}
\hfill
\begin{minipage}[t]{.60\textwidth}
\centering
{\small
\begin{tabular*}{\textwidth}[t]{l @{\extracolsep{\fill}} l @{\extracolsep{\fill}} c @{\extracolsep{\fill}} c @{\extracolsep{\fill}} c @{\extracolsep{\fill}} r}
\toprule
Initialization & Architecture & $X$ & $Y$ & $C$ & mIU (\%) \\
\midrule
Classifier & VGG-16 & \ding{51} & \ding{51} & & 64.0 \\
Colorizer & VGG-16 & \ding{51} & & & 50.2 \\
Random & VGG-16 & & & & 32.5 \\
\midrule
Classifier~\cite{bigan,contextencoders} & AlexNet & \ding{51} & \ding{51} & \ding{51} & 48.0 \\
BiGAN~\cite{bigan} & AlexNet & \ding{51} & & \ding{51} & 34.9 \\
Inpainter~\cite{contextencoders} & AlexNet & \ding{51} & & \ding{51} & 29.7 \\
Random~\cite{contextencoders} & AlexNet & & & \ding{51} & 19.8 \\
\bottomrule
\end{tabular*}}
\caption{ \small
\textbf{VOC 2012 segmentation validation set.} Pretraining uses ImageNet images ($X$), labels ($Y$). VOC 2012 images are in color ($C$).
}
\label{tab:voc2012-segmentation}
\end{minipage}
\end{table}

\section{Conclusion}
\label{sec:final}

We present a system that demonstrates state-of-the-art ability to automatically colorize grayscale images. Two novel contributions enable this progress: a deep neural architecture that is trained end-to-end to incorporate semantically meaningful features of varying complexity into colorization, and a color histogram prediction framework that handles uncertainty and ambiguities inherent in colorization while preventing jarring artifacts.
Our fully automatic colorizer produces strong results, improving upon previously leading methods by large margins on all datasets tested; we also propose a new large-scale benchmark for automatic image colorization, and establish a strong baseline with our method to facilitate future comparisons. Our colorization results are visually appealing even on complex scenes, and allow for effective post-processing with creative control via color histogram transfer and intelligent, uncertainty-driven color sampling. We further reveal colorization as a promising avenue for self-supervised visual learning.

\section{Supplementary details}
\label{sec:supp-details}

\subsection{Re-balancing}
\label{sec:rebalance}

To adjust the scale of the activations of layer $l$ by factor $m$, without changing any other layer's activation, the weights $\mathbf{W}$ and the bias $\mathbf{b}$ are updated according to:
\begin{equation}
\mathbf{W}_l \leftarrow m \mathbf{W}_l
\quad\quad\quad\quad
\mathbf{b}_l \leftarrow m \mathbf{b}_l
\quad\quad\quad\quad
\mathbf{W}_{l+1} \leftarrow \frac 1m \mathbf{W}_{l+1}
\end{equation}
The activation of $\mathbf{x}_{l+1}$ becomes:
\begin{equation}
\mathbf{x}_{l+1} = \frac 1m \mathbf{W}_{l+1} \mathrm{ReLU}(m \mathbf{W}_l \mathbf{x}_l + m \mathbf{b}_l) + \mathbf{b}_{l+1}
\end{equation}
The $m$ inside the ReLU will not affect whether or not a value is rectified, so the two cases remain the same: (1) negative: the activation will be the corresponding feature in $\mathbf{b}_{l+1}$ regardless of $m$, and (2) positive: the ReLU becomes the identity function and $m$ and $\frac 1m$ cancel, recovering the original activation. We set $m = \frac{1}{\sqrt{\hat{\mathbb{E}}[X^2]}}$, estimated for each layer separately.

\subsection{Color space $\alpha\beta$}

The color channels $\alpha\beta$ (``ab'' in~\cite{deshpande2015learning}) are calculated as
\begin{equation}
\alpha = \frac{B - \frac 12 (R + G)}{L + \epsilon}
\quad\quad
\beta = \frac{R - G}{L + \epsilon}
\end{equation}
where $\epsilon = 0.0001$, $R, G, B \in [0, 1]$ and $L = \frac{R+G+B}3$.\footnote{We know that this is how Deshpande~\emph{et al.}~\cite{deshpande2015learning} calculate it based on their code release.}

\subsection{Error metrics}

For $M$ images, each image $m$ with $N_m$ pixels, we calculate the error metrics as:
\begin{align}
\mathrm{RMSE} &= \frac 1{\sum_{m=1}^M N_m} \sum_{m=1}^M \sum_{n=1}^{N_m}
\sqrt{\left\| \left[\mathbf{y}^{(m)}_{\alpha\beta}\right]_n - \left[\hat{\mathbf{y}}^{(m)}_{\alpha\beta}\right]_n \right\|^2} \\
\mathrm{PSNR} &= \frac 1M \sum_{m=1}^M
-10 \cdot \log_{10} \left(\frac{\|\mathbf{y}_\mathrm{RGB}^{(m)} - \hat{\mathbf{y}}_\mathrm{RGB}^{(m)}\|^2}{3N_m}\right)
\end{align}
where $\mathbf{y}_{\alpha\beta}^{(m)} \in [-3, 3]^{N_m \times 2}$ and $\mathbf{y}_\mathrm{RGB}^{(m)} \in [0, 1]^{N_m \times 3}$ for all $m$.

\begin{figure}[t!]
\centering
\includestandalone{figures/parrot}
\caption{\textbf{Histogram predictions.} Example of predicted hue/chroma histograms.}
\label{fig:histogram-supp-parrot}
\end{figure}

\subsection{Lightness correction}

Ideally, the lightness $L$ is an unaltered pass-through channel. However, due to subtle differences in how $L$ is defined, it is possible that the lightness of the predicted image, $\hat L$, does not agree with the input, $L$. To compensate for this, we add $L - \hat L$ to all color channels in the predicted RGB image as a final corrective step.
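For concreteness, the following numpy sketch (ours; function names are illustrative, and the clipping in the last step is an added safeguard rather than part of the definitions above) implements the $\alpha\beta$ conversion, the per-image RMSE, and the lightness correction:

\begin{verbatim}
import numpy as np

EPS = 1e-4  # the epsilon in the alpha-beta definition above

def to_alphabeta(rgb):
    # rgb: (H, W, 3) array with R, G, B in [0, 1].
    R, G, B = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    L = (R + G + B) / 3.0
    alpha = (B - 0.5 * (R + G)) / (L + EPS)
    beta = (R - G) / (L + EPS)
    return L, alpha, beta

def rmse_alphabeta(gt_rgb, pred_rgb):
    # Per-pixel Euclidean error in alpha-beta, averaged over all pixels.
    _, a0, b0 = to_alphabeta(gt_rgb)
    _, a1, b1 = to_alphabeta(pred_rgb)
    return float(np.mean(np.sqrt((a0 - a1) ** 2 + (b0 - b1) ** 2)))

def correct_lightness(pred_rgb, L):
    # Add (L - L_hat) to every color channel; clipping to [0, 1] is an
    # added safeguard, not part of the definition above.
    L_hat = pred_rgb.mean(axis=-1, keepdims=True)
    return np.clip(pred_rgb + (L[..., None] - L_hat), 0, 1)
\end{verbatim}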
\section{Supplementary results}
\label{sec:supp-results}

\subsection{Validation}

A more detailed list of validation results for hue/chroma inference methods appears in Table~\ref{tab:supp-cval1k-hue-chroma}.

\begin{table}[t!]
\RawFloats
\floatbox[{\capbeside\thisfloatsetup{
capbesideposition={right,top},capbesidewidth=5cm}}]{figure}[\FBwidth]
{
\captionof{table}{ \small
\textbf{ImageNet/cval1k.} Comparison of various histogram inference methods for hue/chroma. Mode/mode does fairly well but has severe visual artifacts. (CF = Chromatic fading)
}
\label{tab:supp-cval1k-hue-chroma}
}
{
{\small
\begin{tabular*}{0.52\textwidth}{llc @{\extracolsep{\fill}} rr}
\toprule
Hue & Chroma & CF & RMSE & PSNR \\
\midrule
Sample & Sample & & 0.426 & 21.41 \\
Mode & Mode & & 0.304 & 23.90 \\
Expectation & Expectation & & 0.374 & 23.13 \\
Expectation & Expectation & \ding{51} & 0.307 & 24.35 \\
Expectation & Median & & 0.342 & 23.77 \\
Expectation & Median & \ding{51} & {\bf 0.299} & {\bf 24.45} \\
\bottomrule
\end{tabular*}}
}
\end{table}

\subsection{Examples}

We provide additional samples for global biasing (Figure~\ref{fig:supp-warhol}) and SUN-6 (Figure~\ref{fig:supp-sun6-examples}). Comparisons with Charpiat~\emph{et al.}~\cite{charpiat2010machine} appear in Figures~\ref{fig:charpiat-reference} and~\ref{fig:charpiat-portraits}. Figure~\ref{fig:supp-legacy} shows examples of how our algorithm can bring old photographs to life. More examples on ImageNet (ctest10k) appear in Figures~\ref{fig:ctest10k-supp-examples1} to~\ref{fig:ctest10k-supp-examples4} and Figure~\ref{fig:ctest10k-supp-failures1} (failure cases). Examples of histogram predictions appear in Figures~\ref{fig:histogram-supp-field} and~\ref{fig:histogram-supp-parrot}.

\begin{figure}[!ht]
\RawFloats
\centering
\begin{tabular}{ccccc}
\includegraphics[width=.190\textwidth]{examples/rotation/00041877gray}&
\includegraphics[width=.190\textwidth]{examples/rotation/00041877_0}&
\includegraphics[width=.190\textwidth]{examples/rotation/00041877_2}&
\includegraphics[width=.190\textwidth]{examples/rotation/00041877_3}&
\includegraphics[width=.190\textwidth]{examples/rotation/00041877_uncentainty}\\
\includegraphics[width=.190\textwidth]{examples/rotation/00042799gray}&
\includegraphics[width=.190\textwidth]{examples/rotation/00042799_1}&
\includegraphics[width=.190\textwidth]{examples/rotation/00042799_2}&
\includegraphics[width=.190\textwidth]{examples/rotation/00042799_4}&
\includegraphics[width=.190\textwidth]{examples/rotation/00042799_uncentainty}\\
\includegraphics[width=.190\textwidth]{examples/rotation/00042765gray}&
\includegraphics[width=.190\textwidth]{examples/rotation/00042765_1}&
\includegraphics[width=.190\textwidth]{examples/rotation/00042765_2}&
\includegraphics[width=.190\textwidth]{examples/rotation/00042765_4}&
\includegraphics[width=.190\textwidth]{examples/rotation/00042765_uncentainty}\\
\includegraphics[width=.190\textwidth]{examples/rotation/00041981gray}&
\includegraphics[width=.190\textwidth]{examples/rotation/00041981_1}&
\includegraphics[width=.190\textwidth]{examples/rotation/00041981_2}&
\includegraphics[width=.190\textwidth]{examples/rotation/00041981_4}&
\includegraphics[width=.190\textwidth]{examples/rotation/00041981_uncentainty}\\
\includegraphics[width=.190\textwidth]{examples/rotation/00046905gray}&
\includegraphics[width=.190\textwidth]{examples/rotation/00046905_1}&
\includegraphics[width=.190\textwidth]{examples/rotation/00046905_2}&
\includegraphics[width=.190\textwidth]{examples/rotation/00046905_4}&
\includegraphics[width=.190\textwidth]{examples/rotation/00046905_uncentainty}\\ \end{tabular} \caption{ \small \textbf{Sampling multiple colorizations.} From left: graylevel input; three colorizations sampled from our model; color uncertainty map according to our model. } \label{fig:supp-warhol} \end{figure} \begin{figure}[!th] \RawFloats \setlength\fboxsep{0pt} \begin{center} \begin{minipage}[t]{0.1625\linewidth} \begin{center} \begin{minipage}[t]{0.975\linewidth} \begin{center} \includegraphics[width=\textwidth]{examples/sun6/bedroom-39-welsh}\\ \includegraphics[width=\textwidth]{examples/sun6/livingroom-16-welsh}\\ \includegraphics[width=\textwidth]{examples/sun6/beach-26-welsh}\\ \includegraphics[width=\textwidth]{examples/sun6/beach-03-welsh}\\ \includegraphics[width=\textwidth]{examples/sun6/kitchen-10-welsh}\\ \includegraphics[width=\textwidth]{examples/sun6/outdoor-09-welsh}\\ \scriptsize{\textbf{\textsf{Grayscale only}}} \end{center} \end{minipage}\\ \rule{0.95\linewidth}{0.5pt}\\ \scriptsize{\textbf{\textsf{Welsh~\emph{et al.}~\cite{welsh2002transferring}}}} \end{center} \end{minipage} \hfill \begin{minipage}[t]{0.325\linewidth} \begin{center} \begin{minipage}[t]{0.4875\linewidth} \begin{center} \includegraphics[width=\textwidth]{examples/sun6/bedroom-39-learch-mean}\\ \includegraphics[width=\textwidth]{examples/sun6/livingroom-16-learch-mean}\\ \includegraphics[width=\textwidth]{examples/sun6/beach-26-learch-mean}\\ \includegraphics[width=\textwidth]{examples/sun6/beach-03-learch-mean}\\ \includegraphics[width=\textwidth]{examples/sun6/kitchen-10-learch-mean}\\ \includegraphics[width=\textwidth]{examples/sun6/outdoor-09-learch-mean}\\ \scriptsize{\textbf{\textsf{\textcolor{white}{y}GT Scene\textcolor{white}{y}}}} \end{center} \end{minipage} \begin{minipage}[t]{0.4875\linewidth} \begin{center} \includegraphics[width=\textwidth]{examples/sun6/bedroom-39-learch-gt}\\ \includegraphics[width=\textwidth]{examples/sun6/livingroom-16-learch-gt}\\ \includegraphics[width=\textwidth]{examples/sun6/beach-26-learch-gt}\\ \includegraphics[width=\textwidth]{examples/sun6/beach-03-learch-gt}\\ \includegraphics[width=\textwidth]{examples/sun6/kitchen-10-learch-gt}\\ \includegraphics[width=\textwidth]{examples/sun6/outdoor-09-learch-gt}\\ \scriptsize{\textbf{\textsf{GT Scene \& Hist}}} \end{center} \end{minipage}\\ \rule{0.975\linewidth}{0.5pt}\\ \scriptsize{\textbf{\textsf{Deshpande~\emph{et al.}~\cite{deshpande2015learning}}}} \end{center} \end{minipage} \hfill \begin{minipage}[t]{0.325\linewidth} \begin{center} \begin{minipage}[t]{0.4875\linewidth} \begin{center} \includegraphics[width=\textwidth]{examples/sun6/bedroom-39-ours}\\ \includegraphics[width=\textwidth]{examples/sun6/livingroom-16-ours}\\ \includegraphics[width=\textwidth]{examples/sun6/beach-26-ours}\\ \includegraphics[width=\textwidth]{examples/sun6/beach-03-ours}\\ \includegraphics[width=\textwidth]{examples/sun6/kitchen-10-ours}\\ \includegraphics[width=\textwidth]{examples/sun6/outdoor-09-ours}\\ \scriptsize{\textbf{\textsf{Grayscale only}}} \end{center} \end{minipage} \begin{minipage}[t]{0.4875\linewidth} \begin{center} \includegraphics[width=\textwidth]{examples/sun6/bedroom-39-ours-gthist-e1}\\ \includegraphics[width=\textwidth]{examples/sun6/livingroom-16-ours-gthist-e1}\\ \includegraphics[width=\textwidth]{examples/sun6/beach-26-ours-gthist-e1}\\ \includegraphics[width=\textwidth]{examples/sun6/beach-03-ours-gthist-e1}\\ \includegraphics[width=\textwidth]{examples/sun6/kitchen-10-ours-gthist-e1}\\ 
\includegraphics[width=\textwidth]{examples/sun6/outdoor-09-ours-gthist-e1}\\ \scriptsize{\textbf{\textsf{GT Histogram}}} \end{center} \end{minipage}\\ \rule{0.975\linewidth}{0.5pt}\\ \scriptsize{\textbf{\textsf{Our Method}}} \end{center} \end{minipage} \hfill \begin{minipage}[t]{0.1625\linewidth} \begin{center} \begin{minipage}[t]{0.975\linewidth} \begin{center} \includegraphics[width=\textwidth]{examples/sun6/bedroom-39-gt}\\ \includegraphics[width=\textwidth]{examples/sun6/livingroom-16-gt}\\ \includegraphics[width=\textwidth]{examples/sun6/beach-26-gt}\\ \includegraphics[width=\textwidth]{examples/sun6/beach-03-gt}\\ \includegraphics[width=\textwidth]{examples/sun6/kitchen-10-gt}\\ \includegraphics[width=\textwidth]{examples/sun6/outdoor-09-gt}\\ \scriptsize{\textbf{\textsf{Ground-truth}}} \end{center} \end{minipage} \end{center} \end{minipage} \end{center} \caption{\small \textbf{SUN-6.} Additional qualitative comparisons. } \label{fig:supp-sun6-examples} \end{figure} \begin{figure}[b!] \begin{center} \begin{minipage}[b]{0.270\linewidth} \begin{center} \includegraphics[height=2.3cm]{examples/charpiat-ref/zebra_ref} \includegraphics[height=1.8cm]{examples/charpiat-ref/landscape_ref} \scriptsize{\textbf{\textsf{\phantom{k[}{}Reference Image\phantom{k[}\\ \phantom{k[}}}} \end{center} \end{minipage} \begin{minipage}[b]{0.235\linewidth} \begin{center} \includegraphics[height=2.3cm]{examples/charpiat-ref/zebra_grayscale_cropped} \includegraphics[height=1.8cm]{examples/charpiat-ref/landscape_grayscale_cropped} \scriptsize{\textbf{\textsf{\phantom{k[}{}Input\phantom{k[}\\ \phantom{k[}}}} \end{center} \end{minipage} \begin{minipage}[b]{0.235\linewidth} \begin{center} \includegraphics[height=2.3cm]{examples/charpiat-ref/zebra_charpiat_cropped} \includegraphics[height=1.8cm]{examples/charpiat-ref/landscape_charpiat_cropped} \scriptsize{\textbf{\textsf{\phantom{k[}{}Charpiat~\emph{et al.}~\cite{charpiat2010machine}\phantom{k[}{}\\ \phantom{k[}}}} \end{center} \end{minipage} \begin{minipage}[b]{0.235\linewidth} \begin{center} \includegraphics[height=2.3cm]{examples/charpiat-ref/zebra_ours_energy_cropped} \includegraphics[height=1.8cm]{examples/charpiat-ref/landscape_ours_energy_cropped} \scriptsize{\textbf{\textsf{\phantom{k[}{}Our Method\phantom{k[}{}\\ (Energy Minimization)}}} \end{center} \end{minipage} \end{center} \caption{ \textbf{Transfer.} Comparison with Charpiat~\emph{et al.}~\cite{charpiat2010machine} with reference image. Their method works fairly well when the reference image closely matches (compare with Figure~\ref{fig:charpiat-portraits}). However, they still present sharp unnatural color edges. We apply our histogram transfer method (Energy Minimization) using the reference image. } \label{fig:charpiat-reference} \end{figure} \begin{figure}[h!] 
\begin{center} \begin{minipage}[b]{0.180\linewidth} \begin{center} \includegraphics[width=\textwidth]{examples/charpiat/grayscale/test_1} \includegraphics[width=\textwidth]{examples/charpiat/grayscale/test_2} \includegraphics[width=\textwidth]{examples/charpiat/grayscale/test_3} \includegraphics[width=\textwidth]{examples/charpiat/grayscale/test_4} \includegraphics[width=\textwidth]{examples/charpiat/grayscale/test_5} \scriptsize{\textbf{\textsf{\phantom{y[}Input}\phantom{k[}}} \end{center} \end{minipage} \begin{minipage}[b]{0.180\linewidth} \begin{center} \includegraphics[width=\textwidth]{examples/charpiat/charpiat/test_1} \includegraphics[width=\textwidth]{examples/charpiat/charpiat/test_2} \includegraphics[width=\textwidth]{examples/charpiat/charpiat/test_3} \includegraphics[width=\textwidth]{examples/charpiat/charpiat/test_4} \includegraphics[width=\textwidth]{examples/charpiat/charpiat/test_5} \scriptsize{\textbf{\textsf{Charpiat~\emph{et al.}~\cite{charpiat2010machine}}}} \end{center} \end{minipage} \begin{minipage}[b]{0.180\linewidth} \begin{center} \includegraphics[width=\textwidth]{examples/charpiat/ours/test_1} \includegraphics[width=\textwidth]{examples/charpiat/ours/test_2} \includegraphics[width=\textwidth]{examples/charpiat/ours/test_3} \includegraphics[width=\textwidth]{examples/charpiat/ours/test_4} \includegraphics[width=\textwidth]{examples/charpiat/ours/test_5} \scriptsize{\textbf{\textsf{\phantom{y[}Our Method\phantom{y[}}}} \end{center} \end{minipage} \begin{minipage}[b]{0.180\linewidth} \begin{center} \includegraphics[width=\textwidth]{examples/charpiat/gt/test_1} \includegraphics[width=\textwidth]{examples/charpiat/gt/test_2} \includegraphics[width=\textwidth]{examples/charpiat/gt/test_3} \includegraphics[width=\textwidth]{examples/charpiat/gt/test_4} \includegraphics[width=\textwidth]{examples/charpiat/gt/test_5} \scriptsize{\textbf{\textsf{\phantom{y[}Ground-truth\phantom{y[}}}} \end{center} \end{minipage} \end{center} \caption{ \textbf{Portraits.} Comparison with Charpiat~\emph{et al.}~\cite{charpiat2010machine}, a transfer-based method using 53 reference portrait paintings. Note that their method works significantly worse when the reference images are not hand-picked for each grayscale input (compare with Figure~\ref{fig:charpiat-reference}). Our model was not trained specifically for this task and we used no reference images. } \label{fig:charpiat-portraits} \end{figure} \begin{figure}[h!] 
\begin{center} \begin{minipage}[b]{0.239\linewidth} \vspace{0pt} \begin{center} \includegraphics[width=\textwidth]{examples/legacy/original/11733r} \vspace{0.022cm} \includegraphics[width=\textwidth]{examples/legacy/grayscale/00438r} \vspace{0.022cm} \includegraphics[width=\textwidth]{examples/legacy/grayscale/4a25022r} \vspace{0.022cm} \includegraphics[width=\textwidth]{examples/legacy/grayscale/8a01297r} \vspace{0.022cm} \includegraphics[width=\textwidth]{examples/legacy/grayscale/8a02611r} \vspace{0.022cm} \includegraphics[width=\textwidth]{examples/legacy/grayscale/8a00092r} \vspace{0.02\linewidth} \scriptsize{\textbf{\textsf{\phantom{y[}Input\phantom{y[}}}} \end{center} \end{minipage} \begin{minipage}[b]{0.239\linewidth} \vspace{0pt} \begin{center} \includegraphics[width=\textwidth]{examples/legacy/good/11733r} \vspace{0.022cm} \includegraphics[width=\textwidth]{examples/legacy/good/00438r} \vspace{0.022cm} \includegraphics[width=\textwidth]{examples/legacy/good/4a25022r} \vspace{0.022cm} \includegraphics[width=\textwidth]{examples/legacy/good/8a01297r} \vspace{0.022cm} \includegraphics[width=\textwidth]{examples/legacy/good/8a02611r} \vspace{0.022cm} \includegraphics[width=\textwidth]{examples/legacy/good/8a00092r} \vspace{0.02\linewidth} \scriptsize{\textbf{\textsf{\phantom{y[}Our Method\phantom{y[}}}} \end{center} \end{minipage} ~ \begin{minipage}[b]{0.239\linewidth} \vspace{0pt} \begin{center} \includegraphics[width=\textwidth]{examples/legacy/original/08229r} \includegraphics[width=\textwidth]{examples/legacy/grayscale/00303r} \includegraphics[width=\textwidth]{examples/legacy/grayscale/4a26353r} \includegraphics[width=\textwidth]{examples/legacy/grayscale/8a02243r} \includegraphics[width=\textwidth]{examples/legacy/grayscale/8a00320r} \includegraphics[width=\textwidth]{examples/legacy/grayscale/8a01274r} \vspace{0.02\linewidth} \scriptsize{\textbf{\textsf{\phantom{y[}Input\phantom{y[}}}} \end{center} \end{minipage} \begin{minipage}[b]{0.239\linewidth} \vspace{0pt} \begin{center} \includegraphics[width=\textwidth]{examples/legacy/good/08229r} \includegraphics[width=\textwidth]{examples/legacy/good/00303r} \includegraphics[width=\textwidth]{examples/legacy/good/4a26353r} \includegraphics[width=\textwidth]{examples/legacy/good/8a02243r} \includegraphics[width=\textwidth]{examples/legacy/good/8a00320r} \includegraphics[width=\textwidth]{examples/legacy/good/8a01274r} \vspace{0.02\linewidth} \scriptsize{\textbf{\textsf{\phantom{y[}Our Method\phantom{y[}}}} \end{center} \end{minipage} \end{center} \caption{ \textbf{B\&W photographs.} Old photographs that were automatically colorized. 
(Source: Library of Congress, \texttt{www.loc.gov}) } \label{fig:supp-legacy} \end{figure} \begin{figure}[!th] \begin{center} \begin{minipage}[b]{0.158\linewidth} \vspace{0pt} \begin{center} \includegraphics[width=\textwidth]{\gs{41970}}\\ \includegraphics[width=\textwidth]{\gs{41918}}\\ \includegraphics[width=\textwidth]{\gs{41497}}\\ \includegraphics[width=\textwidth]{\gs{41928}}\\ \includegraphics[width=\textwidth]{\gs{41937}}\\ \includegraphics[width=\textwidth]{\gs{41939}}\\ \includegraphics[width=\textwidth]{\gs{41958}}\\ \includegraphics[width=\textwidth]{\gs{42718}}\\ \includegraphics[width=\textwidth]{\gs{42729}}\\ \includegraphics[width=\textwidth]{\gs{42085}}\\ \vspace{0.02\linewidth} \scriptsize{\textbf{\textsf{Input}}} \end{center} \end{minipage} \begin{minipage}[b]{0.158\linewidth} \vspace{0pt} \begin{center} \includegraphics[width=\textwidth]{\mm{41970}}\\ \includegraphics[width=\textwidth]{\mm{41918}}\\ \includegraphics[width=\textwidth]{\mm{41497}}\\ \includegraphics[width=\textwidth]{\mm{41928}}\\ \includegraphics[width=\textwidth]{\mm{41937}}\\ \includegraphics[width=\textwidth]{\mm{41939}}\\ \includegraphics[width=\textwidth]{\mm{41958}}\\ \includegraphics[width=\textwidth]{\mm{42718}}\\ \includegraphics[width=\textwidth]{\mm{42729}}\\ \includegraphics[width=\textwidth]{\mm{42085}}\\ \vspace{0.02\linewidth} \scriptsize{\textbf{\textsf{Our Method}}} \end{center} \end{minipage} \begin{minipage}[b]{0.158\linewidth} \vspace{0pt} \begin{center} \includegraphics[width=\textwidth]{\gt{41970}}\\ \includegraphics[width=\textwidth]{\gt{41918}}\\ \includegraphics[width=\textwidth]{\gt{41497}}\\ \includegraphics[width=\textwidth]{\gt{41928}}\\ \includegraphics[width=\textwidth]{\gt{41937}}\\ \includegraphics[width=\textwidth]{\gt{41939}}\\ \includegraphics[width=\textwidth]{\gt{41958}}\\ \includegraphics[width=\textwidth]{\gt{42718}}\\ \includegraphics[width=\textwidth]{\gt{42729}}\\ \includegraphics[width=\textwidth]{\gt{42085}}\\ \vspace{0.02\linewidth} \scriptsize{\textbf{\textsf{Ground-truth}}} \end{center} \end{minipage} \hfill \begin{minipage}[b]{0.158\linewidth} \vspace{0pt} \begin{center} \includegraphics[width=\textwidth]{\gs{42700}}\\ \vspace{0.033cm} \includegraphics[width=\textwidth]{\gs{40375}}\\ \vspace{0.033cm} \includegraphics[width=\textwidth]{\gs{41979}}\\ \vspace{0.033cm} \includegraphics[width=\textwidth]{\gs{41995}}\\ \vspace{0.033cm} \includegraphics[width=\textwidth]{\gs{42010}}\\ \vspace{0.033cm} \includegraphics[width=\textwidth]{\gs{42022}}\\ \vspace{0.033cm} \includegraphics[width=\textwidth]{\gs{42054}}\\ \vspace{0.033cm} \includegraphics[width=\textwidth]{\gs{42093}}\\ \vspace{0.033cm} \includegraphics[width=\textwidth]{\gs{42133}}\\ \vspace{0.02\linewidth} \scriptsize{\textbf{\textsf{Input}}} \end{center} \end{minipage} \begin{minipage}[b]{0.158\linewidth} \vspace{0pt} \begin{center} \includegraphics[width=\textwidth]{\mm{42700}}\\ \vspace{0.033cm} \includegraphics[width=\textwidth]{\mm{40375}}\\ \vspace{0.033cm} \includegraphics[width=\textwidth]{\mm{41979}}\\ \vspace{0.033cm} \includegraphics[width=\textwidth]{\mm{41995}}\\ \vspace{0.033cm} \includegraphics[width=\textwidth]{\mm{42010}}\\ \vspace{0.033cm} \includegraphics[width=\textwidth]{\mm{42022}}\\ \vspace{0.033cm} \includegraphics[width=\textwidth]{\mm{42054}}\\ \vspace{0.033cm} \includegraphics[width=\textwidth]{\mm{42093}}\\ \vspace{0.033cm} \includegraphics[width=\textwidth]{\mm{42133}}\\ \vspace{0.02\linewidth} \scriptsize{\textbf{\textsf{Our Method}}} \end{center} \end{minipage} 
\begin{minipage}[b]{0.158\linewidth} \vspace{0pt} \begin{center} \includegraphics[width=\textwidth]{\gt{42700}}\\ \vspace{0.033cm} \includegraphics[width=\textwidth]{\gt{40375}}\\ \vspace{0.033cm} \includegraphics[width=\textwidth]{\gt{41979}}\\ \vspace{0.033cm} \includegraphics[width=\textwidth]{\gt{41995}}\\ \vspace{0.033cm} \includegraphics[width=\textwidth]{\gt{42010}}\\ \vspace{0.033cm} \includegraphics[width=\textwidth]{\gt{42022}}\\ \vspace{0.033cm} \includegraphics[width=\textwidth]{\gt{42054}}\\ \vspace{0.033cm} \includegraphics[width=\textwidth]{\gt{42093}}\\ \vspace{0.033cm} \includegraphics[width=\textwidth]{\gt{42133}}\\ \vspace{0.02\linewidth} \scriptsize{\textbf{\textsf{Ground-truth}}} \end{center} \end{minipage} \end{center} \caption{\small \textbf{Fully automatic colorization results on ImageNet/ctest10k.} } \label{fig:ctest10k-supp-examples1} \end{figure} \begin{figure}[!th] \begin{center} \begin{minipage}[b]{0.158\linewidth} \vspace{0pt} \begin{center} \includegraphics[width=\textwidth]{\gs{42488}}\\ \includegraphics[width=\textwidth]{\gs{42444}}\\ \includegraphics[width=\textwidth]{\gs{42151}}\\ \includegraphics[width=\textwidth]{\gs{42697}}\\ \includegraphics[width=\textwidth]{\gs{42698}}\\ \includegraphics[width=\textwidth]{\gs{42699}}\\ \includegraphics[width=\textwidth]{\gs{43987}}\\ \includegraphics[width=\textwidth]{\gs{44100}}\\ \vspace{0.02\linewidth} \scriptsize{\textbf{\textsf{Input}}} \end{center} \end{minipage} \begin{minipage}[b]{0.158\linewidth} \vspace{0pt} \begin{center} \includegraphics[width=\textwidth]{\mm{42488}}\\ \includegraphics[width=\textwidth]{\mm{42444}}\\ \includegraphics[width=\textwidth]{\mm{42151}}\\ \includegraphics[width=\textwidth]{\mm{42697}}\\ \includegraphics[width=\textwidth]{\mm{42698}}\\ \includegraphics[width=\textwidth]{\mm{42699}}\\ \includegraphics[width=\textwidth]{\mm{43987}}\\ \includegraphics[width=\textwidth]{\mm{44100}}\\ \vspace{0.02\linewidth} \scriptsize{\textbf{\textsf{Our Method}}} \end{center} \end{minipage} \begin{minipage}[b]{0.158\linewidth} \vspace{0pt} \begin{center} \includegraphics[width=\textwidth]{\gt{42488}}\\ \includegraphics[width=\textwidth]{\gt{42444}}\\ \includegraphics[width=\textwidth]{\gt{42151}}\\ \includegraphics[width=\textwidth]{\gt{42697}}\\ \includegraphics[width=\textwidth]{\gt{42698}}\\ \includegraphics[width=\textwidth]{\gt{42699}}\\ \includegraphics[width=\textwidth]{\gt{43987}}\\ \includegraphics[width=\textwidth]{\gt{44100}}\\ \vspace{0.02\linewidth} \scriptsize{\textbf{\textsf{Ground-truth}}} \end{center} \end{minipage} \hfill \begin{minipage}[b]{0.158\linewidth} \vspace{0pt} \begin{center} \includegraphics[width=\textwidth]{\gs{42451}}\\ \includegraphics[width=\textwidth]{\gs{42481}}\\ \includegraphics[width=\textwidth]{\gs{42554}}\\ \includegraphics[width=\textwidth]{\gs{42153}}\\ \includegraphics[width=\textwidth]{\gs{42165}}\\ \includegraphics[width=\textwidth]{\gs{42194}}\\ \includegraphics[width=\textwidth]{\gs{42207}}\\ \includegraphics[width=\textwidth]{\gs{42300}}\\ \includegraphics[width=\textwidth]{\gs{42358}}\\ \includegraphics[width=\textwidth]{\gs{43959}}\\ \includegraphics[width=\textwidth]{\gs{43985}}\\ \vspace{0.02\linewidth} \scriptsize{\textbf{\textsf{Input}}} \end{center} \end{minipage} \begin{minipage}[b]{0.158\linewidth} \vspace{0pt} \begin{center} \includegraphics[width=\textwidth]{\mm{42451}}\\ \includegraphics[width=\textwidth]{\mm{42481}}\\ \includegraphics[width=\textwidth]{\mm{42554}}\\ \includegraphics[width=\textwidth]{\mm{42153}}\\ 
\includegraphics[width=\textwidth]{\mm{42165}}\\ \includegraphics[width=\textwidth]{\mm{42194}}\\ \includegraphics[width=\textwidth]{\mm{42207}}\\ \includegraphics[width=\textwidth]{\mm{42300}}\\ \includegraphics[width=\textwidth]{\mm{42358}}\\ \includegraphics[width=\textwidth]{\mm{43959}}\\ \includegraphics[width=\textwidth]{\mm{43985}}\\ \vspace{0.02\linewidth} \scriptsize{\textbf{\textsf{Our Method}}} \end{center} \end{minipage} \begin{minipage}[b]{0.158\linewidth} \vspace{0pt} \begin{center} \includegraphics[width=\textwidth]{\gt{42451}}\\ \includegraphics[width=\textwidth]{\gt{42481}}\\ \includegraphics[width=\textwidth]{\gt{42554}}\\ \includegraphics[width=\textwidth]{\gt{42153}}\\ \includegraphics[width=\textwidth]{\gt{42165}}\\ \includegraphics[width=\textwidth]{\gt{42194}}\\ \includegraphics[width=\textwidth]{\gt{42207}}\\ \includegraphics[width=\textwidth]{\gt{42300}}\\ \includegraphics[width=\textwidth]{\gt{42358}}\\ \includegraphics[width=\textwidth]{\gt{43959}}\\ \includegraphics[width=\textwidth]{\gt{43985}}\\ \vspace{0.02\linewidth} \scriptsize{\textbf{\textsf{Ground-truth}}} \end{center} \end{minipage} \end{center} \caption{\small \textbf{Fully automatic colorization results on ImageNet/ctest10k.} } \label{fig:ctest10k-supp-examples2} \end{figure} \begin{figure}[!th] \begin{center} \begin{minipage}[b]{0.19\linewidth} \vspace{0pt} \begin{center} \includegraphics[width=\textwidth]{\mm{41064}}\\ \vspace{0.027cm} \includegraphics[width=\textwidth]{\mm{37073}}\\ \vspace{0.027cm} \includegraphics[width=\textwidth]{\mm{37123}}\\ \vspace{0.027cm} \includegraphics[width=\textwidth]{\mm{37215}}\\ \vspace{0.027cm} \includegraphics[width=\textwidth]{\mm{37538}}\\ \vspace{0.027cm} \includegraphics[width=\textwidth]{\mm{37379}}\\ \vspace{0.027cm} \includegraphics[width=\textwidth]{\mm{37247}}\\ \vspace{0.027cm} \includegraphics[width=\textwidth]{\mm{42817}}\\ \vspace{0.027cm} \includegraphics[width=\textwidth]{\mm{37129}}\\ \end{center} \end{minipage} \hfill \begin{minipage}[b]{0.19\linewidth} \vspace{0.10\linewidth} \begin{center} \includegraphics[width=\textwidth]{\mm{38651}}\\ \vspace{0.002cm} \includegraphics[width=\textwidth]{\mm{38734}}\\ \vspace{0.002cm} \includegraphics[width=\textwidth]{\mm{37404}}\\ \vspace{0.002cm} \includegraphics[width=\textwidth]{\mm{37468}}\\ \vspace{0.002cm} \includegraphics[width=\textwidth]{\mm{37619}}\\ \vspace{0.002cm} \includegraphics[width=\textwidth]{\mm{42942}}\\ \vspace{0.002cm} \includegraphics[width=\textwidth]{\mm{38739}}\\ \vspace{0.002cm} \includegraphics[width=\textwidth]{\mm{37222}}\\ \end{center} \end{minipage} \hfill \begin{minipage}[b]{0.19\linewidth} \vspace{0pt} \begin{center} \includegraphics[width=\textwidth]{\mm{37813}}\\ \vspace{0.005cm} \includegraphics[width=\textwidth]{\mm{43649}}\\ \vspace{0.005cm} \includegraphics[width=\textwidth]{\mm{43863}}\\ \vspace{0.005cm} \includegraphics[width=\textwidth]{\mm{37552}}\\ \vspace{0.005cm} \includegraphics[width=\textwidth]{\mm{43529}}\\ \vspace{0.005cm} \includegraphics[width=\textwidth]{\mm{37658}}\\ \vspace{0.005cm} \includegraphics[width=\textwidth]{\mm{37736}}\\ \vspace{0.005cm} \includegraphics[width=\textwidth]{\mm{37749}}\\ \vspace{0.005cm} \includegraphics[width=\textwidth]{\mm{37522}}\\ \end{center} \end{minipage} \hfill \begin{minipage}[b]{0.19\linewidth} \vspace{0pt} \begin{center} \includegraphics[width=\textwidth]{\mm{43638}}\\ \includegraphics[width=\textwidth]{\mm{37915}}\\ \includegraphics[width=\textwidth]{\mm{37980}}\\ \includegraphics[width=\textwidth]{\mm{38061}}\\ 
\includegraphics[width=\textwidth]{\mm{38209}}\\ \includegraphics[width=\textwidth]{\mm{38273}}\\ \includegraphics[width=\textwidth]{\mm{39170}}\\ \includegraphics[width=\textwidth]{\mm{43178}}\\ \end{center} \end{minipage} \hfill \begin{minipage}[b]{0.19\linewidth} \vspace{0pt} \begin{center} \includegraphics[width=\textwidth]{\mm{43519}}\\ \vspace{0.03cm} \includegraphics[width=\textwidth]{\mm{38756}}\\ \vspace{0.03cm} \includegraphics[width=\textwidth]{\mm{38932}}\\ \vspace{0.03cm} \includegraphics[width=\textwidth]{\mm{38973}}\\ \vspace{0.03cm} \includegraphics[width=\textwidth]{\mm{39089}}\\ \vspace{0.03cm} \includegraphics[width=\textwidth]{\mm{38424}}\\ \vspace{0.03cm} \includegraphics[width=\textwidth]{\mm{39242}} \vspace{0.03cm} \includegraphics[width=\textwidth]{\mm{43612}}\\ \end{center} \end{minipage} \end{center} \caption{\small \textbf{Fully automatic colorization results on ImageNet/ctest10k.} } \label{fig:ctest10k-supp-examples3} \end{figure} \begin{figure}[!th] \begin{center} \begin{minipage}[b]{0.19\linewidth} \vspace{0pt} \begin{center} \includegraphics[width=\textwidth]{\mm{39284}}\\ \vspace{0.02cm} \includegraphics[width=\textwidth]{\mm{37780}}\\ \vspace{0.02cm} \includegraphics[width=\textwidth]{\mm{39341}}\\ \vspace{0.02cm} \includegraphics[width=\textwidth]{\mm{39371}}\\ \vspace{0.02cm} \includegraphics[width=\textwidth]{\mm{39950}}\\ \vspace{0.02cm} \includegraphics[width=\textwidth]{\mm{39993}}\\ \vspace{0.02cm} \includegraphics[width=\textwidth]{\mm{40001}}\\ \vspace{0.02cm} \includegraphics[width=\textwidth]{\mm{40021}}\\ \vspace{0.02cm} \includegraphics[width=\textwidth]{\mm{40062}}\\ \end{center} \end{minipage} \hfill \begin{minipage}[b]{0.19\linewidth} \vspace{0.10\linewidth} \begin{center} \includegraphics[width=\textwidth]{\mm{41852}}\\ \vspace{0.006cm} \includegraphics[width=\textwidth]{\mm{40185}}\\ \vspace{0.006cm} \includegraphics[width=\textwidth]{\mm{40587}}\\ \vspace{0.006cm} \includegraphics[width=\textwidth]{\mm{40542}}\\ \vspace{0.006cm} \includegraphics[width=\textwidth]{\mm{40300}}\\ \vspace{0.006cm} \includegraphics[width=\textwidth]{\mm{40313}}\\ \vspace{0.006cm} \includegraphics[width=\textwidth]{\mm{40363}}\\ \vspace{0.006cm} \includegraphics[width=\textwidth]{\mm{41638}}\\ \end{center} \end{minipage} \hfill \begin{minipage}[b]{0.19\linewidth} \vspace{0pt} \begin{center} \includegraphics[width=\textwidth]{\mm{40446}}\\ \vspace{0.002cm} \includegraphics[width=\textwidth]{\mm{40191}}\\ \vspace{0.002cm} \includegraphics[width=\textwidth]{\mm{40557}}\\ \vspace{0.002cm} \includegraphics[width=\textwidth]{\mm{40212}}\\ \vspace{0.002cm} \includegraphics[width=\textwidth]{\mm{40622}}\\ \vspace{0.002cm} \includegraphics[width=\textwidth]{\mm{40821}}\\ \vspace{0.002cm} \includegraphics[width=\textwidth]{\mm{40953}}\\ \vspace{0.002cm} \includegraphics[width=\textwidth]{\mm{41863}}\\ \end{center} \end{minipage} \hfill \begin{minipage}[b]{0.19\linewidth} \vspace{0pt} \begin{center} \includegraphics[width=\textwidth]{\mm{41217}}\\ \vspace{0.013cm} \includegraphics[width=\textwidth]{\mm{41272}}\\ \vspace{0.013cm} \includegraphics[width=\textwidth]{\mm{41425}}\\ \vspace{0.013cm} \includegraphics[width=\textwidth]{\mm{41497}}\\ \vspace{0.013cm} \includegraphics[width=\textwidth]{\mm{41544}}\\ \vspace{0.013cm} \includegraphics[width=\textwidth]{\mm{41588}}\\ \vspace{0.013cm} \includegraphics[width=\textwidth]{\mm{41599}}\\ \vspace{0.013cm} \includegraphics[width=\textwidth]{\mm{41611}}\\ \vspace{0.013cm} \includegraphics[width=\textwidth]{\mm{40375}}\\ 
\end{center} \end{minipage} \hfill \begin{minipage}[b]{0.19\linewidth} \vspace{0pt} \begin{center} \includegraphics[width=\textwidth]{\mm{40080}}\\ \vspace{0.015cm} \includegraphics[width=\textwidth]{\mm{41672}}\\ \vspace{0.015cm} \includegraphics[width=\textwidth]{\mm{41854}}\\ \vspace{0.015cm} \includegraphics[width=\textwidth]{\mm{40087}}\\ \vspace{0.015cm} \includegraphics[width=\textwidth]{\mm{40161}}\\ \vspace{0.015cm} \includegraphics[width=\textwidth]{\mm{41010}}\\ \vspace{0.015cm} \includegraphics[width=\textwidth]{\mm{40408}}\\ \vspace{0.015cm} \includegraphics[width=\textwidth]{\mm{40410}}\\ \end{center} \end{minipage} \end{center} \caption{\small \textbf{Fully automatic colorization results on ImageNet/ctest10k.} } \label{fig:ctest10k-supp-examples4} \end{figure} \begin{figure}[!th] \begin{center} \begin{minipage}[b]{0.193\linewidth} \vspace{0pt} \begin{center} \includegraphics[width=\textwidth]{\mm{38007}}\\ \vspace{0.033cm} \includegraphics[width=\textwidth]{\mm{38915}}\\ \vspace{0.033cm} \includegraphics[width=\textwidth]{\mm{39897}}\\ \vspace{0.033cm} \includegraphics[width=\textwidth]{\mm{40016}}\\ \vspace{0.033cm} \includegraphics[width=\textwidth]{\mm{40075}}\\ \vspace{0.033cm} \includegraphics[width=\textwidth]{\mm{40296}}\\ \vspace{0.033cm} \includegraphics[width=\textwidth]{\mm{41662}}\\ \vspace{0.033cm} \includegraphics[width=\textwidth]{\mm{41413}}\\ \vspace{0.02\linewidth} \scriptsize{\textbf{\textsf{Too Desaturated}}} \end{center} \end{minipage} \hfill \begin{minipage}[b]{0.193\linewidth} \vspace{0pt} \begin{center} \includegraphics[width=\textwidth]{\mm{43187}}\\ \vspace{0.00cm} \includegraphics[width=\textwidth]{\mm{39902}}\\ \vspace{0.00cm} \includegraphics[width=\textwidth]{\mm{38138}}\\ \vspace{0.00cm} \includegraphics[width=\textwidth]{\mm{41631}}\\ \vspace{0.00cm} \includegraphics[width=\textwidth]{\mm{42214}}\\ \vspace{0.00cm} \includegraphics[width=\textwidth]{\mm{37131}}\\ \vspace{0.00cm} \includegraphics[width=\textwidth]{\mm{42949}}\\ \vspace{0.00cm} \includegraphics[width=\textwidth]{\mm{42844}}\\ \vspace{0.00cm} \includegraphics[width=\textwidth]{\mm{43723}}\\ \vspace{0.02\linewidth} \scriptsize{\textbf{\textsf{Inconsistent Chroma}}} \end{center} \end{minipage} \hfill \begin{minipage}[b]{0.193\linewidth} \vspace{0.10\linewidth} \begin{center} \includegraphics[width=\textwidth]{\mm{39880}}\\ \vspace{0.024cm} \includegraphics[width=\textwidth]{\mm{38588}}\\ \vspace{0.024cm} \includegraphics[width=\textwidth]{\mm{39572}}\\ \vspace{0.024cm} \includegraphics[width=\textwidth]{\mm{39824}}\\ \vspace{0.024cm} \includegraphics[width=\textwidth]{\mm{40597}}\\ \vspace{0.024cm} \includegraphics[width=\textwidth]{\mm{41006}}\\ \vspace{0.024cm} \includegraphics[width=\textwidth]{\mm{40629}}\\ \vspace{0.024cm} \includegraphics[width=\textwidth]{\mm{41943}}\\ \vspace{0.02\linewidth} \scriptsize{\textbf{\textsf{Inconsistent Hue}}} \end{center} \end{minipage} \hfill \begin{minipage}[b]{0.193\linewidth} \vspace{0pt} \begin{center} \includegraphics[width=\textwidth]{\mm{39090}}\\ \vspace{0.014cm} \includegraphics[width=\textwidth]{\mm{40353}}\\ \vspace{0.014cm} \includegraphics[width=\textwidth]{\mm{39332}}\\ \vspace{0.014cm} \includegraphics[width=\textwidth]{\mm{37403}}\\ \vspace{0.014cm} \includegraphics[width=\textwidth]{\mm{40124}}\\ \vspace{0.014cm} \includegraphics[width=\textwidth]{\mm{38790}}\\ \vspace{0.014cm} \includegraphics[width=\textwidth]{\mm{43015}}\\ \vspace{0.014cm} \includegraphics[width=\textwidth]{\mm{43407}}\\ \vspace{0.02\linewidth} 
\scriptsize{\textbf{\textsf{Edge Pollution}}} \end{center} \end{minipage} \hfill \begin{minipage}[b]{0.193\linewidth} \vspace{0pt} \begin{center} \includegraphics[width=\textwidth]{\mm{38463}}\\ \vspace{0.055cm} \includegraphics[width=\textwidth]{\mm{44141}}\\ \vspace{0.055cm} \includegraphics[width=\textwidth]{\mm{41892}}\\ \vspace{0.055cm} \includegraphics[width=\textwidth]{\mm{43889}}\\ \vspace{0.055cm} \includegraphics[width=\textwidth]{\mm{41901}}\\ \vspace{0.055cm} \includegraphics[width=\textwidth]{\mm{37605}}\\ \vspace{0.055cm} \includegraphics[width=\textwidth]{\mm{42051}}\\ \vspace{0.02\linewidth} \scriptsize{\textbf{\textsf{Color Bleeding}}} \end{center} \end{minipage} \end{center} \caption{\small \textbf{Failure cases.} Examples of the five most common failure cases: the whole image lacks saturation ({\em Too Desaturated}); inconsistent chroma in objects or regions, causing parts to be gray ({\em Inconsistent Chroma}); inconsistent hue, causing unnatural color shifts that are particularly typical between red and blue ({\em Inconsistent Hue}); inconsistent hue and chroma around the edge, commonly occurring for closeups where background context is unclear ({\em Edge Pollution}); color boundary is not clearly separated, causing color bleeding ({\em Color Bleeding}). } \label{fig:ctest10k-supp-failures1} \end{figure} \section{Document changelog} \label{sec:changelog} Overview of document revisions: \begin{itemize} \item[\textbf{v1}] Initial release. \\ \item[\textbf{v2}] ECCV 2016 camera-ready version. Includes discussion about concurrent work and new experiments using colorization to learn visual representations (Section~\ref{sec:representation-learning}). \\ \item[\textbf{v3}] Added overlooked reference. \end{itemize}
\section{Introduction}
\begin{figure}[ht] \centering \includegraphics[height=5.5cm]{figure/1.png} \caption{Illustration of discontinuous dependency among vehicles at a crossroad intersection near traffic lights. We highlight the local trajectories of four vehicles, $v_1 \ldots v_4$, using black directed curves. The orange boxes represent the influence areas of the corresponding traffic lights, which are fixed regions that restrict the motion behavior of vehicles passing through them. We show the positions of three vehicles, $v_1, v_2, v_3$, at times $t_1$ and $t_5$, along with the corresponding green boxes, which show the dynamic interaction areas determined by the moving vehicles. The purple directed edges within each green box represent the interactions among vehicles. In this case, $v_3$ interacts with $v_1$ and $v_2$ at time $t_1$. However, $v_3$ is not affected by $v_1$ at time $t_5$, even though it is located in the same region. This indicates the discontinuity in the interaction between $v_1$ and $v_3$ during this time period. The vehicle $v_4$, although located in the influence area, is not constrained by the red light because it is turning right.} \label{fig1} \end{figure}
The interaction relationships and behavioral intentions of vehicles or agents are frequently used for autonomous driving~\cite{2015Intentionaware,2018PORCA,2016MotionPlanning}. A key problem is to predict the future trajectory of each vehicle or road agent, which is used to perform safe navigation or traffic forecasting~\cite{2018Socialgan,2021AG-GAN,2019TraPHic,2021Tra2Tra}. Existing trajectory prediction methods are mainly designed to extract spatial-temporal information from spatial interactions and behavior modeling. In terms of spatial interactions, most previous works determine the interactions among objects according to predefined interaction areas, such as the entire scene~\cite{2018Socialgan,2021AG-GAN,2020SoPhie,2018SocialAttention,2021Tra2Tra}, localized regions~\cite{2016Social,2018ConvolutionalSocial,2019TraPHic}, or the area corresponding to visual attention~\cite{2021ForecastingPeople}. However, these methods do not fully consider the varying interactions and dependencies between neighbors that arise from different behaviors, such as changing lanes or turning, which can lead to new pairwise interactions. In terms of behavior dependency, these prediction algorithms obtain the information relevant to the current state from previous states using LSTM-based methods~\cite{2019STGAT,2021Tra2Tra} or graph-based approaches~\cite{2020SocialSTGCNN,2021SGCN}.

In this paper, we address the problem of trajectory prediction in areas close to traffic lights or intersections. Due to the constraints of traffic signs and traffic lights, whose red, green, and yellow states are labeled by discrete indexes, vehicles usually do not exhibit first-order continuity in their movement behaviors of stopping, going straight, turning right, and turning left. Instead, their trajectories are governed by discontinuous effects from the environment or other agents. For example, in the green boxes of Figure~\ref{fig1}, the interactions among vehicles $v_{1}$, $v_{2}$, and $v_{3}$ change from time $t_{1}$ to $t_{5}$. Even though these vehicles remain within the same distance-determined interaction regions, shown as green boxes, the spatial and behavioral interactions between them change considerably, and we need to model such changes.
For vehicle $v_{4}$, the most important influence on its current state is the change in its behavior due to the right turn, rather than its movement state at the adjacent timestamp. We refer to these phenomena as \textit{discontinuous dependency (D2)}, which makes accurate spatial-temporal feature extraction extremely challenging. Current trajectory prediction methods do not fully account for the fact that the trajectories of traffic agents are usually not first-order continuous due to frequent starting and stopping motions.

\noindent{\bf Main Results:} In order to model the discontinuous dependency between traffic agents, we present a new trajectory prediction approach (D2-TPred). In our formulation, we construct a spatial dynamic interaction graph (SDG) for the traffic agents in each frame. Each traffic agent is regarded as a graph node, and we compute appropriate edges to model its interactions with its changing neighboring agents, determined by visual scope, distance, and lane index, as well as the discontinuous dependencies in terms of their relative positions. Moreover, a behavior dependency graph (BDG) is computed for each agent to model the discontinuities with respect to its behaviors at previous time instances, rather than only the adjacent timestamp. Specifically, to prevent key behavioral features such as acceleration, deceleration, or turning direction from being filtered out by forget gates, and to avoid the error accumulation of sequential prediction in RNNs, the passing of dependency information between adjacent frames is replaced by a GAT (graph attention network)~\cite{2017GraphAttention}, and the behavior dependency is modeled along the edges of the BDG. The SDG and BDG are used as part of a graph-based network for trajectory prediction.

We also present a new dataset for vehicle trajectory prediction, VTP-TL. Our dataset consists of traffic patterns at urban intersections with different traffic rules, such as crossroads, T-junctions, and roundabouts, and contains the 2D coordinates of vehicle trajectories, with more than 1000 annotated vehicles at each traffic intersection.

The novel components of our work include:
\begin{itemize}
\item [1.] We propose a novel trajectory prediction approach, D2-TPred, that accounts for various discontinuities in the vehicle trajectories and pairwise interactions near traffic lights and intersections.
\item [2.] We present two types of data structures to improve the performance of graph-based networks in modeling dynamic interactions and vehicle behaviors. The SDG is used to model spatial interactions by reconstructing appropriate sub-graphs for dynamic agents with constantly changing neighbors in each frame. The BDG is used to model the dynamically changing dependency of the current state on previous behaviors. The usage of the SDG and BDG improves the prediction accuracy by 22.45\% and 29.39\% in ADE and FDE, respectively.
\item [3.] We present a new dataset, VTP-TL, that corresponds to traffic video data near traffic lights and intersections. It includes $150$ minutes of video clips at $30$fps corresponding to challenging urban scenarios, captured using drones at $70-120$ meters above the traffic intersections.
\end{itemize}

\section{Related Work}
A brief overview of prior work on graph neural networks, interaction models, and motion pattern dependency is given below.

\textbf{Graph Neural Networks:} Graph neural networks (GNNs)~\cite{2018Graph} can model social or other interactions between agents.
Prior trajectory prediction methods based on GNNs can be divided into two categories. The first is based on undirected graphs, which utilize the graph structure to explicitly construct interactions and assign the same weight to each pair of nodes, e.g., STUGCN~\cite{2021STUGCN} and Social-STGCNN~\cite{2020SocialSTGCNN}. The second is based on graph attention networks (GAT)~\cite{2017GraphAttention}, which introduce an attention mechanism into the undirected graph to calculate asymmetric influence weights for interacting agents. The GAT-based approaches, such as Social-BiGAT~\cite{2019SocialBiGAT}, STGAT~\cite{2019STGAT}, EvolveGraph~\cite{2020EvolveGraph}, and SGCN~\cite{2021SGCN}, can flexibly model asymmetric interactions to compute spatial-temporal features and improve the prediction accuracy. Meanwhile, EvolveGraph~\cite{2020EvolveGraph} and SGCN~\cite{2021SGCN} introduce graph structure inference to generate dynamic and sparse interactions. Different from these methods, we directly construct a directed graph according to the interacting objects determined by visual scope, distance, and traffic rules, and use GATs to represent the asymmetric interactions among agents.

\textbf{Social Interaction Models:} Social interactions and related information are used by traffic agents to make reasonable decisions and avoid potential collisions. Social force-based methods~\cite{1995Social,2009Abnormal,2018AutoRVO} use different types of forces to model acceleration and deceleration. Social pooling-based approaches~\cite{2016Social,2018ConvolutionalSocial,2018Socialgan,2021AG-GAN} try to integrate the motion information of neighbors within a radius. GNN-based techniques~\cite{2018SocialAttention,2019SocialBiGAT,2020EvolveGraph,2020SocialSTGCNN,2020Trajectron++,2019STGAT,2021AComprehensive} use graph structures to directly model the interactions among near and far agents. These methods assume that the underlying agents interact with all other agents in predefined or nearby regions. They do not account for the fact that some neighbors need to be pruned, especially those moving along opposite lanes.

\textbf{Motion Models:} Motion models are used to infer motion information as part of trajectory prediction. Early studies focused on forecasting future trajectories based on a linear model, a constant velocity model, or a constant acceleration model~\cite{2016Trajectoryof}. However, these simple models cannot handle complex traffic patterns. Furthermore, LSTM-based approaches~\cite{2016Social,2018Socialgan,2019STGAT,2021Pedestrian} and graph structure-based approaches~\cite{2019TGCN,2021STUGCN,2020SocialSTGCNN,2021SGCN} have been proposed to model the motion trajectories. Other techniques take into account driver behavior patterns~\cite{CMetric2020,B-GAP2020}. Giuliari et al.~\cite{2020TransformerNetworks} perform precise trajectory prediction using transformer networks. In our work, the states of an agent in the temporal sequence are regarded as nodes of a directed graph, which captures the direct influence between discontinuous timestamps rather than only adjacent ones.

\section{D2-TPred}
In this section, we present our novel learning-based trajectory prediction algorithm, which accounts for the influence of traffic lights on motion behaviors; its architecture is shown in Figure~\ref{fig2}.

\begin{figure}[!t] \centering \includegraphics[width=0.75\linewidth]{figure/2.png} \caption{Architecture of our proposed D2-TPred model.
The spatial dynamic interaction graph represents the dynamic interactions through reconstructed sub-graphs. The behavior dependency graph learns the movement features by estimating the effect of agent behaviors. The discriminator is used to refine the predicted trajectories, and the traffic light module is used to predict trajectories at urban intersections.} \label{fig2} \end{figure}

\subsection{Problem Formulation}
Given the spatial coordinates and traffic light states of $N$ agents in each scenario, we aim to predict the most likely future trajectories of these agents. At any time $t$, the state of the $i$th agent can be denoted as $Sq_{i}^{t} = (Fid,Aid,x_{i}^{t},y_{i}^{t},Lid,pa_{i}^{t},f_{i}^{t},mb_{i}^{t},lid_{i}^{t},ls_{i}^{t},lt_{i}^{t})$, where $p_{i}^{t}=(x_{i}^{t},y_{i}^{t})$ represents the position coordinate and the other symbols represent the corresponding traffic light information, described in more detail in Section 3.3. Based on the inputs of all agents in the interval $[1: {t}_{obs}]$, our method predicts their positions at the subsequent moments ${t}_{pred}\in[{t}_{obs}+1: T]$. In contrast to the ground truth trajectory ${Lq}_{i}^{{{t}_{pred}}}$, $\hat{Lq}_{i}^{{{t}_{pred}}}$ denotes the predicted trajectory.

\subsection{Spatio-Temporal Dependency}
\begin{figure}[ht] \begin{center} \includegraphics[width=0.75\linewidth]{figure/3.png} \end{center} \caption{The spatial dynamic interaction graph (SDG). The left part of the figure shows the scene from time $t_{1}$ to $t_{5}$, and the right part represents the reconstructed interaction sub-graphs at different time instances.} \label{fig3} \end{figure}

\textbf{Spatial Dynamic Interaction Graph.} Unlike prior methods~\cite{2019STGAT,2020SocialSTGCNN}, we reconstruct the sub-graphs to model all the interactions in each frame. We illustrate our approach to modeling discontinuous dependency by highlighting one scenario with $7$ vehicles and their trajectories in Figure~\ref{fig3}. Similar to~\cite{2021ForecastingPeople}, the visual area of a subject is treated as a frustum, where different visual ranges are set for roads and intersections by considering the characteristics of the human visual system. At time $t_{1}$, $v_{2}$, $v_{3}$, and $v_{7}$ are located in the visual neighborhood of $v_{1}$. However, the motion behavior of $v_{1}$ is not affected by $v_{7}$, which moves in the opposite lane. Hence, we construct sub-graph $G_{1}$ corresponding to the interactions among vehicles $v_{1}$, $v_{2}$, and $v_{3}$, and sub-graph $G_{3}$ for vehicles $v_{5}$ and $v_{6}$. Moreover, for vehicles $v_{4}$ and $v_{7}$, which have no nearby neighbors, we compute sub-graphs $G_{2}$ and $G_{4}$, respectively. Based on these sub-graphs, the intermediate states of the vehicles are updated. Since the interactions between the vehicles change dynamically, vehicle $v_{1}$ is not affected by vehicle $v_{3}$ at time $t_{2}$: even though they are within the same distance-determined interaction region, the influence of vehicle $v_{3}$ on vehicle $v_{1}$ is not the same between adjacent frames. In this manner, we reconstruct the corresponding sub-graphs $G_{5}$, $G_{13}$ to represent these varying interactions between the vehicles. Considering the asymmetry of interactions among agents, we apply a self-attention mechanism to these constructed directed graphs to model the spatial interactions.
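For concreteness, the following Python sketch (a minimal illustration with our own assumed array shapes, thresholds, and grouping logic; the filtering criteria are formalized in the next paragraph) shows how such per-frame sub-graphs can be reconstructed:
\begin{verbatim}
import numpy as np

def build_subgraphs(pos, heading, lane, theta_max, d_max):
    """Reconstruct the interaction sub-graphs for one frame.
    pos: (N, 2) positions; heading: (N,) heading angles;
    lane: (N,) lane indexes; theta_max, d_max: visual scope
    and distance thresholds (assumed hyperparameters)."""
    N = pos.shape[0]
    R = np.zeros((N, N), dtype=int)
    for i in range(N):
        for j in range(N):
            if i == j:
                continue
            rel = pos[j] - pos[i]
            # angle of agent j as seen from agent i, wrapped to [-pi, pi]
            ang = np.arctan2(rel[1], rel[0]) - heading[i]
            ang = np.arctan2(np.sin(ang), np.cos(ang))
            in_view = abs(ang) <= theta_max          # visual scope filter
            close = np.linalg.norm(rel) <= d_max     # distance filter
            same_lane = lane[i] == lane[j]           # lane index filter
            R[i, j] = int(in_view and close and same_lane)
    # group agents into sub-graphs (weakly connected components of R)
    subgraphs, seen = [], set()
    for i in range(N):
        if i in seen:
            continue
        comp, stack = set(), [i]
        while stack:
            u = stack.pop()
            if u in comp:
                continue
            comp.add(u)
            stack.extend(v for v in range(N) if R[u, v] or R[v, u])
        seen |= comp
        subgraphs.append(sorted(comp))
    return R, subgraphs
\end{verbatim}
Note that agents without any retained neighbor, such as $v_{4}$ and $v_{7}$ above, naturally form singleton sub-graphs.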
For agent $i$ at time $t$, we first determine its interacting objects $j$ according to the visual scope $\theta$, the distance $d$, and the lane index $Lid$, and compute the corresponding matrices $V$, $D$, and $L$, respectively:
\begin{align}
\left\{
\begin{aligned}
& V[i,j]=1 , \ \ \ if\ \theta(\vec{ij}) \in \theta, \\
& D[i,j]=1, \ \ \ if\ \left\|{p}_{j}^{t}-p_{i}^{t}\right\|_2\leq d, \\
& L[i,j]=1, \ \ \ if\ Lid_{i}^{t}=Lid_{j}^{t},\\
& R = V\times D\times L
\end{aligned}
\right.
\end{align}
where $R$, filled with 0s and 1s, represents the adjacency matrix among agents; we further construct the sub-graphs based on it. We then calculate the spatial state ${hs}_{i}^{t}$ by integrating the hidden states $h_{j}^{t}$ of the interacting objects:
\begin{align}
e_{i}^{t}=\Phi(p_{i}^{t}, W_{p}),\ \ h_{i}^{t}&=LSTM(h_{i}^{t-1}, e_{i}^{t}, W_{l}),
\end{align}
\begin{align}
hs_{i}^{t}=&\sum_{j\in \mathcal{N}_{i}^{t}}(a_{ij}^{t}R_{ij}^{t})h_{j}^{t},
\label{eq:3}
\end{align}
where $\Phi(\cdot)$ is an embedding function, and $e_{i}^{t}$ is the state vector of agent $i$ at time $t$. Similar to~\cite{2014NeuralAttention}, $a_{ij}^{t}$ represents the attention coefficient of agent $j$ to agent $i$ at timestamp $t$; $W_{p}$ and $W_{l}$ are the embedding matrix and the LSTM cell weights.

\begin{figure}[!t] \begin{center} \includegraphics[width=0.75\linewidth]{figure/4.png} \end{center} \caption{The behavior dependency graph (BDG). The lower part refers to the encoding process of trajectories and traffic light signals. The upper part describes the behavior dependency, where segments with the same color refer to a temporal dependency graph.} \label{fig4} \end{figure}

\textbf{Behavior Dependency Graph.} To avoid key behavioral features being filtered out by the forget gates of an RNN during information passing, we use GATs to model the discontinuous dependency of the current state on previous behaviors, rather than on adjacent timestamps only. Specifically, for a given vehicle, its states updated by the SDG are regarded as nodes. We model the discontinuous dependency in the temporal sequence as edges and construct a directed graph, where the behavior information is transferred along the directed edges. The detailed architecture of the BDG for a given agent is shown in Figure~\ref{fig4}. Specifically, for agent $i$, we use directed segments with the same color to constitute an unfolded BDG, and different colors represent the behavior dependency graphs at different time instances. The BDG uses the state $hs_{i}^{t}$ generated by the SDG. Its current state is updated and embedded into the behavior dependency graph at the next time instance, where the dependency weights among nodes are calculated by a self-attention mechanism. As shown in the dashed box of Figure~\ref{fig4}, the motion state of agent $i$ at the current moment $t+2$ is governed by the previous behaviors at times $t+1$, $t$, $t-1$, $t-2$, $t-3$, and $t-4$, whereas the state at the next instance $t+3$ is governed by those at $t+2$, $t+1$, $t$, $t-1$, $t-2$, and $t-3$.
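As a concrete illustration of this sliding dependency window, consider the following PyTorch-style sketch (tensor shapes and parameter handling are our assumptions, and since the update formalized below specifies only the attention weights precisely, we assume the combination is a weighted sum of the windowed states plus the current SDG state):
\begin{verbatim}
import torch
import torch.nn.functional as F

def bdg_update(hs_t, hb_hist, W, beta, k):
    """One BDG step for a single agent.
    hs_t: (D,) current SDG state; hb_hist: list of (D,) past
    behavior states; W: (Dp, D) shared projection; beta: (2*Dp,)
    attention vector; k: dependency window length."""
    window = hb_hist[-k:] if hb_hist else [hs_t]
    # dependency score of each past state w.r.t. the current state
    scores = torch.stack([
        F.leaky_relu(beta @ torch.cat([W @ hs_t, W @ hb_p]))
        for hb_p in window
    ])
    a = torch.softmax(scores, dim=0)  # attention over the window
    # assumed combination: attention-weighted past states + current state
    hb_t = sum(w * hb_p for w, hb_p in zip(a, window)) + hs_t
    return hb_t
\end{verbatim}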
The updated hidden state $hb_{i}^{t}$ for agent $i$ at time $t$ is then calculated as follows:
\begin{align}
\begin{split}
a_{i}^{tt^{'}} &= softmax\left(\frac{exp(LeakyReLU^{*}(\beta^{T}[W{hs}_{i}^{t} \| W{hb}_{i}^{t^{'}}]))}{\sum_{t^{'}}^{t}exp(LeakyReLU^{*}(\beta^{T}[W{hs}_{i}^{t} \| W{hb}_{i}^{t^{'}}]))}\right), \\
hb_{i}^{t} &= \sum_{t^{'}\in(t-k)\;and\;t^{'}\geq0}^{t}a_{i}^{tt^{'}}(hb_{i}^{t^{'}},\;hs_{i}^{t}),
\label{eq:4}
\end{split}
\end{align}
where $k$ represents an experimentally estimated time window, chosen as the value whose quantitative results yield the lowest prediction error by capturing the most effective behavior features. $\beta^{T}$ is the weight vector of a single-layer feedforward neural network, and $t^{'}$ denotes a specific time instance in the previous frames from $t-k$ to $t$.

\subsection{Trajectory Prediction near Traffic Lights}
In this section, we present two prediction schemes for vehicle trajectory prediction. The first scheme considers the discontinuous constraints on vehicles' behaviors caused by the alternation of traffic light states, where the traffic lights are regarded as indicator signals with fixed positions and alternating states. The second scheme is designed for scenarios without traffic lights and is described in detail in the supplementary material \url{https://github.com/VTP-TL/D2-TPred}.

Given the observed sequence $Sq_{i}^{t} = (Fid,Aid,x,y,Lid,pa,f,mb,lid,ls,lt)$, we divide it into the vehicle trajectory $q=(Fid, Aid, x, y, Lid, pa, f, mb)$ and the corresponding traffic light state sequence $LS=(Fid, lid, ls, lt)$. $Fid$, $Aid$, and $Lid$ are the indexes of the frame, the vehicle, and the lane where the vehicle is located, respectively. $lid_{i}^{t}$ is the traffic light index. $pa_{i}^{t}$ describes whether vehicle $v_{i}$ is within the influence area of the corresponding traffic light. $f_{i}^{t}$ indicates whether vehicle $v_{i}$ is closest to the parking line within the influence area. $mb_{i}^{t}$ represents the movement behavior of the agent, such as turning left, turning right, or going straight. $ls_{i}^{t}$ and $lt_{i}^{t}$ respectively describe the state and the duration of the traffic light.

We take into account that the vehicle trajectory is continuous, whereas the traffic light state sequence is periodic and discontinuous. Therefore, two different encoders, an LSTM and an MLP, are utilized to handle them and compute the corresponding hidden states $h_{i}^{t}$ and $lh_{i}^{t}=MLP(LS_{i}^{t}, W_{M})$, respectively. In the SDG, we use a GAT to integrate the influencing features from nearby interacting agents and then compute the updated state ${hs}_{i}^{t}$ of agent $i$. In terms of behavior dependency, we first concatenate the state ${hs}_{i}^{t}$ (Eq.~\ref{eq:3}) and the traffic light state $lh_{i}^{t}$ into the input $\tilde{HL}_{i}^{t}$, and then use these results to construct the BDG. Based on the BDG, we can model the discontinuous constraints of traffic lights on the movement behaviors of vehicles, as shown in Figure~\ref{fig4}. In this stage, the hidden state ${HLb}_{i}^{t}$ is computed as a weighted sum of ${HLb}_{i}^{1:(t-1)}$, where the dependency weights are calculated by a self-attention mechanism. The resulting equations are:
\begin{align}
\tilde{HL}_{i}^{t}=integrate({hs}_{i}^{t},\;lh_{i}^{t},\;W_{l}),\ \ HLb_{i}^{t}=GAT(\tilde{HL}_{i}^{t},\;HLb_{i}^{1:t},\;W_{\tilde{\theta}}),
\end{align}
where $integrate(\cdot)$ is a concatenation operation, and $W_{l}$ and $W_{\tilde{\theta}}$ are the embedding weights.
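A minimal sketch of this fusion step is given below (our own simplification: the layer sizes are assumptions, and we stand in for the GAT over $HLb_{i}^{1:t}$ with a single-head dot-product attention):
\begin{verbatim}
import torch
import torch.nn as nn

class TrafficLightFusion(nn.Module):
    """Sketch: encode the discrete light sequence with an MLP,
    concatenate it with the SDG state, then attend over the
    accumulated states HLb_i^{1:t}."""
    def __init__(self, d_state, d_light, d_hidden):
        super().__init__()
        self.light_mlp = nn.Sequential(   # MLP encoder for (Fid, lid, ls, lt)
            nn.Linear(d_light, d_hidden), nn.ReLU(),
            nn.Linear(d_hidden, d_hidden))
        self.fuse = nn.Linear(d_state + d_hidden, d_hidden)  # integrate(.)
        self.attn = nn.MultiheadAttention(d_hidden, num_heads=1)

    def forward(self, hs_t, ls_t, HLb_hist):
        lh_t = self.light_mlp(ls_t)                        # lh_i^t
        HL_t = self.fuse(torch.cat([hs_t, lh_t], dim=-1))  # ~HL_i^t
        q = HL_t.view(1, 1, -1)                            # query: current state
        kv = torch.stack(HLb_hist + [HL_t]).unsqueeze(1)   # keys/values: HLb_i^{1:t}
        HLb_t, _ = self.attn(q, kv, kv)                    # stands in for the GAT
        return HLb_t.view(-1)
\end{verbatim}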
To augment the behavior features and avoid the feature loss caused by forget gates during sequence processing, the intermediate state is generated by integrating the state $HLb_{i}^{t}$ and the original state $h_{i}^{t}$. The predicted position is given by:
\begin{equation}
\hat{Lq}_{i}^{t_{pred}}=\sigma(D_{LSTM}([HLb_{i}^{t}, h_{i}^{t}], W_{d})),
\end{equation}
where $D_{LSTM}$ and $W_{d}$ are the LSTM-based decoder and the corresponding weights, respectively, and $\sigma(\cdot)$ represents a linear layer.

Our method is also a GAN-based model that integrates a discriminator $D_{cls}$ into the prediction approach. It utilizes an LSTM and an MLP to respectively encode the complete trajectory $[Sq_{i}^{T_{obs}}, \hat{Lq}_{i}^{T_{pred}}]$ and the traffic light sequence $LS$, and then concatenates them as the input of the \textit{Discriminator}, which outputs a real/fake probability $probL_{i}$ through a linear network:
\begin{align}
probL_{i}=D_{cls}(LSTM([Sq_{i}^{t_{obs}},\hat{Lq}_{i}^{t_{pred}}], W_{l}), MLP(LS_{i}, W_{M})).
\end{align}
For each vehicle, we calculate the displacement error using the variety loss of~\cite{2018Socialgan}. The model predicts $K$ trajectories and chooses the one with the lowest distance error to the ground-truth trajectory as the model output:
\begin{align}
Loss_{variety}=\min_{K}\left\|{Lq}_{i}-\hat{Lq}_{i}^{K}\right\|_2.
\end{align}
By considering only the best trajectory, this loss encourages the network to cover the space of outputs that conform to the past trajectory.

\subsection{VTP-TL Dataset}
\begin{table}[!t] \scriptsize \begin{center} \caption{ VTP-TL vs.\ other state-of-the-art traffic datasets. $Size$ represents the number of annotated frames. \textbf{$E_{V}$} and \textbf{$B_{V}$} represent egocentric view and bird's-eye view, respectively.} \label{table4} \setlength{\tabcolsep}{0.9mm}{ \begin{tabular}{l c c c c c c} \hline \cline{1-7} Datasets & Location & View & Night & Road type & Size & Traffic lights \\ \cline{1-7} CityScapes~\cite{Cityscapes2016} & Europe & $E_{V}$ & $\times$ & urban & 25K & $\times$ \\ Argoverse~\cite{2019Argoverse} & USA & $E_{V}$ & $\surd$ & urban & 22K & $\times$ \\ INTERACTION~\cite{2019INTERACTION} & International & $B_{V}$ & $\times$ & urban & - & $\times$ \\ ApolloScape~\cite{2019TrafficPredict} & China & $E_{V}$ & $\surd$ & urban + rural & 144K & $\times$ \\ TRAF~\cite{2019TraPHic} & India & $E_{V}$ & $\surd$ & urban + rural & 72K & $\times$ \\ D2-city~\cite{2019D2City} & China & $E_{V}$ & $\times$ & urban & 700K & $\times$ \\ inD~\cite{2019inDdataset} & Germany & $B_{V}$ & $\times$ & urban & - & $\times$ \\ Lyft Level5~\cite{2020Olyft5} & USA & $E_{V}$ & $\times$ & urban & 46K & $\times$ \\ nuScenes~\cite{2020nuScenes} & USA/Singapore & $E_{V}$ & $\surd$ & urban & 40K & $\times$ \\ Waymo~\cite{2021Large} & USA & $E_{V}$ & $\surd$ & urban & 200K & $\surd$ \\ Waterloo~\cite{2021uwaterloo} & Canada & $B_{V}$ & $\times$ & urban & - & $\surd$ \\ IDD~\cite{IDD2019} & India & $E_{V}$ & $\times$ & urban + rural & 10K & $\times$ \\ METEOR~\cite{Meteor2021} & India & $E_{V}$ & $\surd$ & urban + rural & 2027K & $\times$ \\ \hline \hline \textbf{VTP-TL} & China & $B_{V}$ & \textbf{$\surd$} & \textbf{urban} & \textbf{270K} & \textbf{$\surd$} \\ \hline \end{tabular}} \end{center} \end{table}

Although plenty of datasets have been constructed to evaluate the performance of trajectory prediction (Table~\ref{table4}), they rarely contain the important attributes of traffic lights, with the exceptions of Waymo and Waterloo.
More details and their differences are described in our supplementary materials.

For our new traffic dataset, VTP-TL, we use drones hovering, as statically as possible, at $70$ to $120$ meters above traffic intersections in an urban setting to record, from a bird's-eye view, the trajectories of vehicles passing through the area during daytime non-rush hours, rush hours, and the evening. The dataset contains more than $150$ minutes of video clips, over $270k$ annotated frames, and more than 4 million bounding boxes for traffic vehicles at three typical urban scenarios: crossroads, T-junctions, and roundabout intersections. Specifically, based on the recorded videos, which include the traffic lights whenever they are visible, we infer the invisible states since the pattern of the traffic lights is fixed. We then manually annotate the related attributes of the traffic light signals as discrete indexes, such as the index, state, duration, and position coordinates, as well as high-definition map elements such as lanes, crosswalks, and stop lines. Compared with Waymo, our traffic lights are annotated as independent objects and are modeled as agents with fixed positions and changeable states in our prediction framework. The center of the bounding box is regarded as the position coordinate of the vehicle. Finally, we obtain a new dataset for vehicle trajectory prediction containing over $1288$ vehicles driving straight, $801$ vehicles turning left, and $2584$ vehicles turning right. Our dataset is divided into training, validation, and testing sets at a ratio of 4:1:1, and down-sampled to 3 frames per second for the experiments.

\begin{figure}[!t] \begin{center} \includegraphics[width=0.85\linewidth]{figure/7.png} \end{center} \caption{We highlight the traffic light states and vehicle behaviors in various videos in VTP-TL. Detailed descriptions are given in the appendix.} \label{fig7} \end{figure}

We also perform a statistical analysis of the VTP-TL dataset; the corresponding results are shown in Figure~\ref{fig7}. The number of vehicles with different motion behaviors at each urban intersection is shown in Figure~\ref{fig7}(a). Considering the different traffic rules at the various urban intersections, we also count the range of the number of passing vehicles per frame, as shown in Figure~\ref{fig7}(b). Meanwhile, to ensure the effective passing of vehicles, different cycle times for the traffic lights are set at different intersections (shown in Figure~\ref{fig7}(c)). In Figure~\ref{fig7}(d), we count the number of vehicles in $10$-minute windows during daytime rush and non-rush hours. These statistics show that a large number of vehicle behaviors are constrained by traffic lights. The user identifiers and exact dates of publication have been masked to protect privacy, and the dataset is only available for research purposes. More details are given in the supplementary materials.

\section{Experimental Evaluation}
In our experiments, the dimensions of the embedding layer and the hidden state are set to 16 and 32, respectively. We also set the fixed input dimension to 64 and use an attention layer of dimension 64. During training, the Adam optimizer is applied with a learning rate of 0.01 and a batch size of 64.

\textbf{Evaluation Datasets.} We evaluate the proposed model on four traffic datasets: Apolloscape~\cite{2019TrafficPredict}, SDD~\cite{2016LearningSDD}, INTERACTION~\cite{2019INTERACTION}, and Waymo~\cite{2021Large}. In addition, we also report experiments on our new dataset VTP-TL.
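For reproducibility, the training setup stated above can be summarized as a small configuration object (a sketch; the field names are ours, and the observation/prediction horizons of 8 and 12 frames follow the evaluation protocol reported below):
\begin{verbatim}
from dataclasses import dataclass

@dataclass
class TrainConfig:
    embedding_dim: int = 16   # embedding layer dimension
    hidden_dim: int = 32      # hidden state dimension
    input_dim: int = 64       # fixed input dimension
    attn_dim: int = 64        # attention layer dimension
    optimizer: str = "adam"
    lr: float = 0.01
    batch_size: int = 64
    obs_len: int = 8          # observed time steps
    pred_len: int = 12        # predicted time steps
\end{verbatim}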
\textbf{Evaluation Metrics.} We use the same evaluation metrics as~\cite{2019STGAT,2020Trajectron++,2020Collaborative}. The \textit{average displacement error} (ADE) is the average squared error between the predicted trajectory and the ground truth trajectory over all agents and all frames. The \textit{final displacement error} (FDE) is the mean distance between the predicted path and the ground truth trajectory over all agents at the final frame.

\textbf{Comparable methods.} \textit{Social-LSTM}~\cite{2016Social} models spatial interaction by a pooling mechanism. \textit{CS-LSTM}~\cite{2018ConvolutionalSocial}, \textit{TraPHic}~\cite{2019TraPHic}, and \textit{GRAPH-LSTM}~\cite{2020ForecastingTrajectory} combine CNNs with LSTMs to perform trajectory prediction. \textit{SGAN}~\cite{2018Socialgan}, SGCN~\cite{2021SGCN}, and \textit{Goal-Gan}~\cite{2021GoalGAN} use GANs to model spatial interactions and physical attention. \textit{Social Attention}~\cite{2018SocialAttention}, AI-TP~\cite{2022AITP}, \textit{Trajectron++}~\cite{2020Trajectron++}, \textit{GRIP++}~\cite{2020GRIP++}, \textit{NLNI}~\cite{2021Unlimited}, \textit{EvolveGraph}~\cite{2020EvolveGraph}, and \textit{STGAT}~\cite{2019STGAT} integrate graph structures and attention mechanisms to extract spatial-temporal interaction features. \textit{TPNet}~\cite{2020TPNet} and \textit{DESIRE}~\cite{2017DESIRE} integrate scene context into the prediction framework. \textit{NMMP}~\cite{2020Collaborative} models directed interactions with a neural motion message passing strategy. SimAug~\cite{liang2020simaug} is trained only on 3D simulation data to predict future trajectories. LB-EBM~\cite{E19} is a probabilistic model with a cost function defined in the latent space that accounts for movement history and social context to produce diverse human trajectories. \textit{HEAT}~\cite{2022HEAT}, \textit{TNT}~\cite{2020TNT}, and \textit{MultiPath}~\cite{2019MultiPath} are evaluated on INTERACTION as reported in~\cite{2020TNT}.

\begin{table}[!t] \scriptsize \begin{center} \caption{Quantitative results of prediction performance on traffic datasets. ADE/FDE are reported for each dataset. The bold fonts correspond to the best results, i.e., the lowest error among 20 predicted trajectories for each agent, except for INTER (INTERACTION), where the lowest error among 6 predicted trajectories is reported. For Waymo, we implement the baseline methods according to their open source code, while the other experimental values of the compared methods are taken from their published papers.
\textbf{-} denotes that a method has not been validated on the corresponding dataset.} \label{table1} \setlength{\tabcolsep}{0.2mm}{ \begin{tabular}{l|c||l|c|c||l|c} \hline Method & \textbf{Apolloscape} & Method & \textbf{SDD} & \textbf{INTER} & Method & \textbf{Waymo} \\ \hline TPNet~\cite{2020TPNet} & 2.23/4.70 & EvolveGraph~\cite{2020EvolveGraph} & 13.9/22.9 &- & SGAN~\cite{2018Socialgan} & 6.01/11.40 \\ \hline CS-LSTM~\cite{2018ConvolutionalSocial} & 2.14/11.70 & Goal-Gan~\cite{2021GoalGAN} & 12.2/22.1 &- & Social-LSTM~\cite{2016Social} & 4.05/7.59 \\ \hline G-LSTMS\cite{2020ForecastingTrajectory} & 1.12/2.05 & SimAug~\cite{liang2020simaug} & 10.27/19.71 & - & STGAT~\cite{2019STGAT} & 1.68/3.70 \\ \hline SGAN~\cite{2018Socialgan} & 3.98/6.75 & LB-EBM~\cite{E19} & 8.87/\textbf{15.61} & - & SGCN~\cite{2021SGCN} & 1.02/2.26 \\ \hline TraPHic~\cite{2019TraPHic} & 1.28/11.67 & DESIRE~\cite{2017DESIRE} & 19.3/34.1 & 0.32/0.88 & & \\ \hline NLNI~\cite{2021Unlimited} & 1.09/1.55 & HEAT~\cite{2022HEAT} & - & \textbf{0.19}/0.66& & \\ \hline AI-TP~\cite{2022AITP} & 1.16/2.13 & TNT~\cite{2020TNT} & - & 0.21/0.67 & & \\ \hline GRIP++~\cite{2020GRIP++} & 1.25/2.34 & MultiPath~\cite{2019MultiPath} & - & 0.30/0.99 & & \\ \hline \hline D2-TPred & \textbf{1.02}/\textbf{1.69} & D2-TPred & \textbf{8.24}/15.89 & 0.29/\textbf{0.62} & D2-TPred & \textbf{0.85}/\textbf{1.89} \\ \hline \end{tabular}} \end{center} \end{table}

\subsection{Quantitative Evaluation}
We have performed a detailed quantitative evaluation. On the traffic datasets Apolloscape, SDD, INTERACTION, Waymo, and VTP-TL, the quantitative prediction results for D2-TPred and other trajectory prediction methods are shown in Table~\ref{table1} and Table~\ref{table3}.

\textbf{Traffic datasets without traffic lights:} Benefiting from the SDG and BDG for extracting spatio-temporal features, our method achieves competitive performance on the datasets shown in Table~\ref{table1}. Specifically, our method significantly outperforms the compared methods on Apolloscape. On the SDD dataset, which covers a large number of different scenarios, we obtain the lowest ADE and the third-lowest FDE, as well as the lowest FDE on INTER (INTERACTION). Moreover, we also achieve the best performance on the Waymo Open Motion dataset by observing 8 frames to predict the next 12 frames. These results demonstrate that our model can effectively capture the dynamically changing interaction features and behavior dependencies in complex traffic scenarios. More experimental results on other datasets, such as ETH-UCY, Argoverse, nuScenes, and inD, are described in our supplementary materials.

\begin{table}[!t] \scriptsize \begin{center} \caption{Quantitative results on the VTP-TL dataset. We compare with the baseline methods and compute the ADE and FDE metrics by using 8 time steps to predict 12 future frames. +TL indicates that traffic light states are embedded into the trajectory prediction system.
The bold fonts correspond to the best results, i.e., the lowest error among 20 predicted trajectories for each agent.} \label{table3} \setlength{\tabcolsep}{1.0mm}{ \begin{tabular}{l|c|c|c|c|c|c|c} \hline \multirow{2}{*}{Metrics} & \multicolumn{7}{|c}{Comparable models (in pixels)} \\ \cline{2-8} & Social Lstm & Social Attention & SGAN & STGAT & Trajectron++ & NMMP & D2-TPred\\ \hline ADE & 54.328 & 43.648 & 37.63 & 28.279 & 39.01 & 35.15 & \textbf{20.685} \\ \hline FDE &112.635& 97.614& 75.35 & 61.762& 118.37 & 70.35 & \textbf{47.296}\\ \hline \hline \multirow{2}{*}{Metrics}& \multicolumn{7}{|c}{Comparable models+TL (in pixels)} \\ \cline{2-8}& Social Lstm & Social Attention & SGAN & STGAT & Trajectron++ & NMMP & D2-TPred\\ \hline ADE & 45.04 & 34.460 & 31.56 & 21.245 & 35.456 & 32.33 & \textbf{16.900} \\ \hline FDE & 78.52 & 75.825 & 65.67 & 43.620 & 114.365 & 66.35 & \textbf{34.553}\\ \hline \end{tabular}} \end{center} \end{table}

\textbf{VTP-TL dataset with traffic lights:} In this section, we describe D2-TPred+TL, which introduces traffic light states into the D2-TPred approach. In Table~\ref{table3}, we evaluate our model against the comparable methods, and compare each baseline method with its traffic-light-augmented counterpart. The experimental results show that our method outperforms all other methods on the VTP-TL dataset in terms of ADE and FDE. Notably, compared with STGAT, the baseline with the lowest prediction error, the ADE and FDE of D2-TPred+TL are reduced by 20.45$\%$ and 20.78$\%$, respectively. This illustrates that we can effectively model the constraints of traffic lights on motion behaviors.

\subsection{Ablation Studies}
We present ablation studies on VTP-TL with traffic lights. This not only demonstrates the significance of each component but also highlights the benefits of modeling the discontinuous effect of traffic lights on vehicle movement behavior.

\textbf{Evaluation of the SDG and BDG:} To show the effectiveness of the SDG and BDG, we compare $S_{G}+B_{B}+TL_{M}+D$ and $S_{S}+B_{L}+TL_{M}+D$ with $S_{S}+B_{B}+TL_{M}+D$ in Table~\ref{table5}. $S_{S}+B_{B}+TL_{M}+D$ reduces the ADE by 13.93$\%$ and 15.85$\%$, and the FDE by 17.34$\%$ and 22.46$\%$, respectively. This directly illustrates that the SDG and BDG can effectively capture the discontinuous dependency in spatial-temporal space and further improve the accuracy of the predicted trajectories.

\textbf{Evaluation of the discriminator:} We introduce a discriminator to refine the predicted trajectories. Comparing $S_{S}+B_{B}+TL_{M}$ with $S_{S}+B_{B}+TL_{M}+D$ in Table~\ref{table5}, the performance of the latter is improved by 9.26$\%$ and 12.74$\%$ in ADE and FDE, respectively. This shows that the discriminator contributes to improving the accuracy of the predicted trajectories.

\textbf{Evaluation of different encoders:} Due to the distinctive characteristics of traffic light states, we test both an MLP and an LSTM to encode them. Comparing $S_{S}+B_{B}+TL_{L}+D$ with $S_{S}+B_{B}+TL_{M}+D$ in Table~\ref{table5}, utilizing an MLP to capture the features of traffic light states further improves the performance by 5.56$\%$ and 8.17$\%$ in ADE and FDE, respectively. This illustrates that a discontinuous sequence may not be suitable for an LSTM encoder, which assumes strong context correlation.

\textbf{Evaluation of the Function of Traffic Lights:} For traffic lights, we compare the methods+TL with the corresponding baseline methods. The former directly use the VTP-TL dataset, and the latter use a dataset that consists of only the $Fid$, $Aid$, $x$, and $y$ attributes split from the VTP-TL dataset.
As shown in Table~\ref{table3}, incorporating traffic lights further improves the performance by 8.02$\%$ to 24.87$\%$ in ADE and by 3.38$\%$ to 30.29$\%$ in FDE. This clearly validates the necessity of traffic lights for trajectory prediction at urban intersections.

\begin{table}[!t] \scriptsize \begin{center} \caption{The ablation results on the VTP-TL dataset. \textbf{S} denotes spatial interaction achieved by GAT ($S_{G}$) or SDG ($S_{S}$). \textbf{B} denotes behavior dependency achieved by LSTM ($B_{L}$) or BDG ($B_{B}$). \textbf{TL} denotes the traffic light encoder, LSTM ($TL_{L}$) or MLP ($TL_{M}$). \textbf{D} denotes the discriminator. The bold fonts correspond to the best results.} \label{table5} \setlength{\tabcolsep}{1.5mm}{ \begin{tabular}{l|c|c|c|c|c|c|c|c} \hline \multirow{2}{*}{Setting} & \multicolumn{2}{|c}{\textbf{S}} & \multicolumn{2}{|c}{\textbf{B}} & \multicolumn{2}{|c}{\textbf{TL}} & \multicolumn{1}{|c}{\textbf{D}} & \multicolumn{1}{|c}{Metrics} \\ \cline{2-9} & GAT & SDG & LSTM & BDG & LSTM & MLP & D& ADE / FDE \\ \hline $S_{G}+B_{L}+TL_{M}+D$ & $\surd$ & & $\surd$ & & & $\surd$ & $\surd$ & 21.792 / 48.936 \\ \hline $S_{G}+B_{B}+TL_{M}+D$ & $\surd$ & & & $\surd$ & & $\surd$ & $\surd$ & 19.635 / 41.804 \\ \hline $S_{S}+B_{L}+TL_{M}+D$ & & $\surd$ & $\surd$ & & & $\surd$ & $\surd$ & 20.082 / 44.560 \\ \hline $S_{S}+B_{B}+TL_{L}+D$ & & $\surd$ & & $\surd$ & $\surd$ & & $\surd$ & 17.896 / 37.629 \\ \hline $S_{S}+B_{B}+TL_{M}$ & & $\surd$ & & $\surd$ & & $\surd$ & & 18.626 / 39.598 \\ \hline $S_{S}+B_{B}+TL_{M}+D$ & & $\surd$ & & $\surd$ & & $\surd$ & $\surd$ & \textbf{16.900} / \textbf{34.553} \\ \hline \end{tabular}} \end{center} \end{table}

\subsection{Qualitative Evaluation}
In Figure~\ref{fig5}, the images in the first two columns show qualitative results on Argoverse and Apolloscape. It can be seen that our method, even without traffic lights, predicts acceptable future paths at urban intersections. The third column shows qualitative results on the VTP-TL dataset. In the first row, the current traffic light state is red on the vertical road. We only show the trajectories of five vehicles: vehicle $v_{1}$ drives straight, $v_{2}$ turns right under the red light, $v_{3}$ goes straight under the green light, and $v_{4}$ and $v_{5}$ are outside the influence area of the traffic lights. For $v_{1}$, the predicted trajectory of our method is closest to the ground truth. Although the trajectories of $v_{2}$, $v_{3}$, $v_{4}$, and $v_{5}$ are not affected by the traffic light signals, our method also predicts acceptable trajectories for them. The next two images show the predicted trajectories at T-junction and roundabout intersections, where the vehicles in the former change from parked to driving as the traffic light turns from red to green. This illustrates that our model can flexibly respond to dynamic changes in the surrounding agents and traffic light states. Due to page limits, more results and failure cases are listed in the supplementary materials.

\section{Conclusions}
We present \textit{D2-TPred}, a new trajectory prediction approach that takes traffic lights into account. The approach not only models the dynamic interactions by reconstructing sub-graphs for all agents with constantly changing interaction objects (SDG), but also captures the discontinuous behavior dependency by modeling the direct effects of behaviors at prior instances on the current state (BDG).
Moreover, a new dataset, VTP-TL, for vehicle trajectory prediction with traffic lights is also released. Based on it, we describe two trajectory forecasting schemes and obtain competitive performance against other state-of-the-art methods.

\textbf{Acknowledgments.} This work was supported in part by the National Natural Science Foundation of China under Grants No.\ 61772474 and 62036010, and the Zhengzhou Major Science and Technology Project under Grant No.\ 2021KJZX0060-6. We thank all the reviewers for their valuable suggestions.

\begin{figure}[!t] \begin{center} \includegraphics[width=0.74\linewidth]{figure/5.png} \end{center} \caption{The visualization results at urban intersections on traffic datasets and the VTP-TL dataset. Note that the compared methods are not the same across the different datasets.} \label{fig5} \end{figure}

\clearpage
\bibliographystyle{splncs04}
\section{Introduction}
Many problems of interest in science and engineering are modelled by partial differential equations, which help us understand and control complex systems across a wide variety of real-world applications \cite{evans2010partial}. Unfortunately, it is often difficult or impossible to obtain their analytical solutions, and therefore various numerical techniques including, but not limited to, the finite difference, finite volume, and finite element methods \cite{leveque2007finite,leveque2002finite,brenner2008mathematical} have been developed to obtain approximate solutions. Based on a discretization of the solution space obtained by dividing the computational domain into a polygonal mesh, these numerical methods are highly accurate and efficient for low-dimensional problems on regular domains. However, there are still many challenging issues to be addressed: \textit{e.g.}, mesh generation remains complex when the boundary is geometrically complicated or dynamically changing, the computation of high-dimensional problems is often infeasible due to the so-called curse of dimensionality, and many others. While the traditional methods are continuously being improved, there is also a need for new methods and tools to tackle the difficulties mentioned above \cite{karniadakis2021physics}.

With the significant enhancement of hardware and software developments, deep learning methods \cite{lecun2015deep} have recently emerged as an attractive alternative for solving different types of equations in both forward and inverse problems \cite{sirignano2018dgm,han2017deep,brunton2019machine}, and have achieved remarkable success due to the universal approximation property \cite{scarselli1998universal} of neural networks used as an ansatz for the solution function or the operator mapping \cite{karniadakis2021physics}. For instance, one of the most representative works is the physics-informed neural network (PINN) \cite{raissi2019physics,lagaris2000neural,lagaris1998artificial}, where the differential operators are calculated exactly via automatic differentiation \cite{paszke2017automatic} and the residual of the governing equations is incorporated into the training loss function. Another pioneering work is the deep Ritz method \cite{yu2018deep}, which resorts to the variational principle and performs better for problems with low-regularity solutions \cite{chen2020comparison}. It is also possible to design a training task that corresponds to the weak formulation of the underlying equations \cite{zang2020weak}, but the training process is often hard to converge due to the imbalance between the generative and adversarial networks. Furthermore, to make the trained model satisfy the boundary conditions as precisely as possible, various techniques including, but not limited to, the deep Nitsche method \cite{liao2019deep}, the augmented Lagrangian relaxation \cite{huang2021augmented}, and the auxiliary network with distance function \cite{mcfall2009artificial,berg2018unified} have been developed. Compared with traditional numerical solvers \cite{leveque2007finite,brenner2008mathematical}, these deep learning methods enjoy the advantages of a flexible and meshless implementation, a strong ability to tackle non-linearity and to break the curse of dimensionality, and others \cite{karniadakis2021physics}.
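To make the residual-based training of a PINN concrete, we include a minimal Python sketch for the Poisson problem $-\Delta u = f$ with Dirichlet data $g$ (the network, the sampling of collocation points, and the penalty weighting are our own assumptions and are not tied to any particular reference above):
\begin{verbatim}
import torch

def pinn_loss(net, x_int, x_bdy, f, g):
    """Sketch of a PINN loss for -Laplace(u) = f in the domain
    with u = g on the boundary; derivatives are computed exactly
    via automatic differentiation."""
    x = x_int.requires_grad_(True)
    u = net(x)                                      # (N, 1) network ansatz
    grad_u = torch.autograd.grad(u.sum(), x, create_graph=True)[0]
    lap = 0.0
    for d in range(x.shape[1]):                     # trace of the Hessian
        lap = lap + torch.autograd.grad(
            grad_u[:, d].sum(), x, create_graph=True)[0][:, d:d+1]
    pde_res = (-lap - f(x)) ** 2                    # equation residual
    bc_res = (net(x_bdy) - g(x_bdy)) ** 2           # soft boundary penalty
    return pde_res.mean() + bc_res.mean()
\end{verbatim}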
These deep learning solvers, however, may still exhibit poor performance in handling problems with multi-scale phenomena \cite{wight2020solving,jagtap2020extended}, and the large training cost is also a major drawback that limits their application in the field of large-scale scientific computing. To further enhance the representation and parallelization capacity of the network solution, it is natural to integrate state-of-the-art deep learning techniques with conventional domain decomposition strategies \cite{heinlein2021combining}, aiming at paving the way for truly enabling large-scale computation using neural networks. One way is to incorporate distributed training techniques \cite{ben2019demystifying}, \textit{e.g.}, data and module parallelization, into the PINN approach \cite{jagtap2020extended,jagtap2020conservative,hu2021extended}, where the training task is split into multiple subproblems according to a non-overlapping partition of the domain, and simple continuity conditions are enforced across the subregion interfaces. Although this combination is quite general and parallelizable, no specific knowledge of the underlying problem is utilized during the training process, which substantially differs from the conventional methodology of splitting a partial differential equation \cite{toselli2004domain,quarteroni1999domain}. On the other hand, as the classical domain decomposition methods \cite{toselli2004domain} can be formulated at the continuous or the weak level, various works have been devoted to employing learning approaches for solving the decomposed subproblems, thereby benefiting from the mesh-free nature of deep learning solvers. Under such circumstances, machine learning analogues of the overlapping domain decomposition methods have emerged recently and have successfully handled many problems \cite{liao2019deep,li2019d3m,mercier2021coarse,sheng2022pfnn}; however, the more general non-overlapping counterpart has not been thoroughly studied yet. A major difficulty is that the network solutions of local problems are prone to overfitting at and near the interface \cite{dockhorn2019discussion,bajaj2021robust}, since the interface conditions are enforced as soft penalty functions during training and the size of the training data on the interface is smaller than that in the interior of the subdomains, which eventually propagates errors to neighboring subdomains and hampers the convergence of the outer iteration. In other words, the issue of boundary overfitting is a key threat to the integration of deep learning techniques and domain decomposition methods, especially for those based on a direct flux exchange across the subdomain interfaces. What is worse, it will always occur to a greater or lesser extent in practical implementations, and it has not been fully addressed or studied in the existing literature.

\begin{figure}[t!]
\begin{adjustbox}{max totalsize={0.99\textwidth}{0.99\textheight},center} \begin{tikzpicture}[shorten >=1pt,auto,node distance=2.5cm,thick,main node/.style={rectangle,draw,font=\footnotesize},decoration={brace,mirror,amplitude=7},dimension node/.style={draw,dashed,fill=lightgray,font=\footnotesize},decoration={brace,mirror,amplitude=7}] \node[main node] (1) {\parbox{3.4cm}{\centering Domain Decomposition \\ Learning Methods}}; \node[draw=none,fill=none] (2) [right = 0.2cm of 1] {}; \node[dimension node, fill=white] (3) [above = 3.6cm of 2] {\parbox{2.6cm}{\centering interface exchange \\ \textcolor{blue}{solution} }}; \node[dimension node, fill=white] (4) [below = 4.2cm of 2] {\parbox{2.6cm}{\centering interface exchange \\ \textcolor{blue}{solution} and \textcolor{red}{flux}}}; \node[draw=none,fill=none] (5) [right = 1.5cm of 3] {}; \node[dimension node] (6) [above = 1.cm of 5] {\parbox{2.3cm}{\centering overlapping }}; \node[dimension node] (7) [below = 1.cm of 5] {\parbox{2.3cm}{\centering non-overlapping}}; \node[main node] (8) [right = 0.5cm of 6] {\parbox{2.9cm}{\centering Alternating/Jacobi- \\ Schwarz Algorithm}}; \node[main node] (9) [right = 0.5cm of 7] {\parbox{2.9cm}{\centering Robin-Robin \\ Learning Algorithm}}; \node[draw=none,fill=none] (10) [right = 1.5cm of 8] {}; \node[main node] (11) [above = 0.12cm of 10] {\parbox{1.8cm}{\centering Dirichlet \\ subproblems}}; \node[main node] (12) [below = 0.12cm of 10] {\parbox{1.8cm}{\centering Dirichlet \\ subproblems}}; \node[main node,fill = white!70!blue] (13) [right = 0.5cm of 11] {\parbox{2.7cm}{\centering Solution-Oriented \\ Learning Methods}}; \node[main node,fill = white!70!blue] (14) [right = 0.5cm of 12] {\parbox{2.7cm}{\centering Solution-Oriented \\ Learning Methods}}; \node[draw=none,fill=none] (15) [right = 1.5cm of 9] {}; \node[main node] (16) [above = 0.12cm of 15] {\parbox{1.8cm}{\centering Robin \\ subproblems}}; \node[main node] (17) [below = 0.12cm of 15] {\parbox{1.8cm}{\centering Robin \\ subproblems}}; \node[main node,fill = white!70!blue] (18) [right = 0.5cm of 16] {\parbox{2.7cm}{\centering Solution-Oriented \\ Learning Methods}}; \node[main node,fill = white!70!blue] (19) [right = 0.5cm of 17] {\parbox{2.7cm}{\centering Solution-Oriented \\ Learning Methods}}; \path[every node/.style={font=\sffamily\small,sloped},->,>=stealth',color=blue] (13) edge [transform canvas={xshift=-1em}] (14) (14) edge [transform canvas={xshift=1em}] (13) (18) edge [transform canvas={xshift=-1em}] (19) (19) edge [transform canvas={xshift=1em}] (18); \path[every node/.style={font=\sffamily\small,sloped},->,>=stealth'] (11) edge (13) (12) edge (14) (16.east) edge (18.west) (17.east) edge (19.west) (6) edge (8) (7) edge (9); \draw[->,>=stealth'] (3.north) |- (6.west); \draw[->,>=stealth'] (3.south) |- (7.west); \draw[->,>=stealth'] (1.north) |- (3.west); \draw ([xshift=3.em]8.north) |- ([xshift=.1em]11.west); \draw ([xshift=3.em]8.south) |- ([xshift=.1em]12.west); \draw ([xshift=3.em]9.north) |- ([xshift=.1em]16.west); \draw ([xshift=3.em]9.south) |- ([xshift=.1em]17.west); \node[dimension node] (20) [right = 0.42cm of 4] {\parbox{2.3cm}{\centering non-overlapping}}; \node[draw=none,fill=none] (21) [right = 1.93 cm of 20] {}; \node[main node] (22) [above = 3.7cm of 21] {\parbox{2.9cm}{\centering Dirichlet-Neumann \\ Learning Algorithm}}; \node[main node] (23) [above = 0.81cm of 21] {\parbox{2.9cm}{\centering Neumann-Neumann \\ Learning Algorithm}}; \node[main node] (24) [below = 0.81cm of 21] {\parbox{2.9cm}{\centering 
Dirichlet-Dirichlet \\ Learning Algorithm}}; \node[main node] (25) [below = 3.7cm of 21] {\parbox{2.9cm}{\centering Robin-Robin \\ Learning Algorithm}}; \node[draw=none,fill=none] (26) [right = 1.5cm of 22] {}; \node[main node] (27) [above = 0.12cm of 26] {\parbox{1.8cm}{\centering Dirichlet \\ subproblems}}; \node[main node] (28) [below = 0.12cm of 26] {\parbox{1.8cm}{\centering Neumann \\ subproblems}}; \node[main node,fill = white!70!blue] (29) [right = 0.5cm of 27] {\parbox{2.7cm}{\centering Solution-Oriented \\ Learning Methods}}; \node[main node,fill = white!70!red] (30) [right = 0.5cm of 28] {\parbox{2.7cm}{\centering Compensated \\ Deep Ritz Method}}; \path[every node/.style={font=\sffamily\small,sloped},->,>=stealth'] (29) edge [transform canvas={xshift=-1em},color=blue] (30) (30) edge [transform canvas={xshift=1em},color=red,dashed] (29); \path[every node/.style={font=\sffamily\small,sloped},->,>=stealth'] (27.east) edge (29.west) (28.east) edge (30.west); \draw ([xshift=3.em]22.north) |- ([xshift=.1em]27.west); \draw ([xshift=3.em]22.south) |- ([xshift=.1em]28.west); \node[draw=none,fill=none] (31) [right = 1.5cm of 23] {}; \node[main node] (32) [above = 0.12cm of 31] {\parbox{1.8cm}{\centering Dirichlet \\ subproblems}}; \node[main node] (33) [below = 0.12cm of 31] {\parbox{1.8cm}{\centering Neumann \\ subproblems}}; \node[main node,fill = white!70!blue] (34) [right = 0.5cm of 32] {\parbox{2.7cm}{\centering Solution-Oriented \\ Learning Methods}}; \node[main node,fill = white!70!red] (35) [right = 0.5cm of 33] {\parbox{2.7cm}{\centering Compensated \\ Deep Ritz Method}}; \path[every node/.style={font=\sffamily\small,sloped},->,>=stealth'] (34) edge [transform canvas={xshift=-1em},color=blue] (35) (35) edge [transform canvas={xshift=1em},color=red,dashed] (34); \path[every node/.style={font=\sffamily\small,sloped},->,>=stealth'] (32.east) edge (34.west) (33.east) edge (35.west); \draw ([xshift=3.em]23.north) |- ([xshift=.1em]32.west); \draw ([xshift=3.em]23.south) |- ([xshift=.1em]33.west); \node[draw=none,fill=none] (36) [right = 1.5cm of 24] {}; \node[main node] (37) [above = 0.12cm of 36] {\parbox{1.8cm}{\centering Neumann \\ subproblems}}; \node[main node] (38) [below = 0.12cm of 36] {\parbox{1.8cm}{\centering Dirichlet \\ subproblems}}; \node[main node,fill = white!70!red] (39) [right = 0.5cm of 37] {\parbox{2.7cm}{\centering Compensated \\ Deep Ritz Method}}; \node[main node,fill = white!70!blue] (40) [right = 0.5cm of 38] {\parbox{2.7cm}{\centering Solution-Oriented \\ Learning Methods}}; \path[every node/.style={font=\sffamily\small,sloped},->,>=stealth'] (40) edge [transform canvas={xshift=1em},color=blue] (39) (39) edge [transform canvas={xshift=-1em},color=red,dashed] (40); \path[every node/.style={font=\sffamily\small,sloped},->,>=stealth'] (37.east) edge (39.west) (38.east) edge (40.west); \draw ([xshift=3.em]24.north) |- ([xshift=.1em]37.west); \draw ([xshift=3.em]24.south) |- ([xshift=.1em]38.west); \node[draw=none,fill=none] (41) [right = 1.5cm of 25] {}; \node[main node] (42) [above = 0.12cm of 41] {\parbox{1.8cm}{\centering Robin \\ subproblems}}; \node[main node] (43) [below = 0.12cm of 41] {\parbox{1.8cm}{\centering Robin \\ subproblems}}; \node[main node,fill = white!70!blue] (44) [right = 0.5cm of 42] {\parbox{2.7cm}{\centering Solution-Oriented \\ Learning Methods}}; \node[main node,fill = white!70!red] (45) [right = 0.5cm of 43] {\parbox{2.7cm}{\centering Compensated \\ Deep Ritz Method}}; \path[every 
node/.style={font=\sffamily\small,sloped},->,>=stealth'] (44) edge [transform canvas={xshift=-1em},color=blue] (45) (45) edge [transform canvas={xshift=1em},color=red,dashed] (44); \path[every node/.style={font=\sffamily\small,sloped},->,>=stealth'] (42.east) edge (44.west) (43.east) edge (45.west) (4.east) edge (20.west); \draw ([xshift=3.em]25.north) |- ([xshift=.1em]42.west); \draw ([xshift=3.em]25.south) |- ([xshift=.1em]43.west); \draw[->,>=stealth'] ([xshift=2.em]20.north) |- (22.west); \draw[->,>=stealth'] ([xshift=2.em]20.north) |- (23.west); \draw[->,>=stealth'] ([xshift=2.em]20.south) |- (24.west); \draw[->,>=stealth'] ([xshift=2.em]20.south) |- (25.west); \draw[->,>=stealth'] (1.south) |- (4.west); \end{tikzpicture} \end{adjustbox} \hfill \begin{adjustbox}{max totalsize={0.83\textwidth}{0.5\textheight},center} \begin{tikzpicture}[shorten >=1pt,auto,node distance=2.5cm,thick,main node/.style={rectangle,draw,font=\footnotesize},decoration={brace,mirror,amplitude=7},dimension node/.style={draw,dashed,fill=lightgray,font=\footnotesize},decoration={brace,mirror,amplitude=7}] \node[dimension node, fill=white] (1) {\parbox{0.5cm}{ \textcolor{white}{em} }}; \node[draw=none] (2) [right = 0.2cm of 1] {\centering data exchange between subregions }; \node[dimension node] (3) [right = 0.4cm of 2] {\parbox{.5cm}{ \textcolor{lightgray}{em} }}; \node[draw=none] (4) [right = 0.2cm of 3] {\centering domain decomposition strategy }; \node[main node, fill = white!70!blue] (5) [below = 0.2cm of 1] {\parbox{.5cm}{ \textcolor{white!70!blue}{em} }}; \node[draw=none] (6) [right = 0.01cm of 5] {\centering $/$ }; \node[main node, fill = white!70!red] (7) [right = 0.01cm of 6] {\parbox{.5cm}{ \textcolor{white!70!red}{em} }}; \node[draw=none] (8) [right = 0.2cm of 7] {\centering learning methods with accurate treatment of interface solution/flux}; \node[draw=none] (9) [below = 0.2cm of 5] {\parbox{0.5cm}{ \textcolor{white}{em} }}; \draw[->,>=stealth',color=blue] (9.west) -- (9.east); \node[draw=none] (11) [right = 0.2cm of 9] {\centering transmission of the local solution explicitly across the subdomain interfaces }; \node[draw=none] (12) [below = 0.2cm of 9] {\parbox{0.5cm}{ \textcolor{white}{em} }}; \draw[->,>=stealth',color=red,dashed] (12.west) -- (12.east); \node[draw=none] (13) [right = 0.2cm of 12] {\centering transmission of the local flux data implicitly between adjacent subdomains }; \node[draw=none] (14) [left = 0.2cm of 1] {\centering Notation: }; \node[draw=none] (15) [above = 0.2cm of 14] {}; \end{tikzpicture} \end{adjustbox} \vspace{-0.6cm} \caption{Our proposed framework of domain decomposition learning methods for solving the elliptic boundary value problems.} \label{fig-big-picture} \vspace{-0.4cm} \end{figure} In this work, we consider the benchmark Poisson equation that serves as a necessary prerequisite to validate the effectiveness of the domain decomposition learning approaches \cite{li2019d3m,li2020deep,mercier2021coarse}, namely, \begin{equation} \begin{array}{cl} - \Delta u(x) = f(x)\ \ & \text{in}\ \Omega,\\ u(x)=0\ \ & \text{on}\ \partial \Omega, \end{array} \label{Poisson-StrongForm} \end{equation} where $d\in\mathbb{N}_+$ is the dimension, $\Omega\subset\mathbb{R}^d$ a bounded domain, and $f(x)\in L^2(\Omega)$ a given function. Note that for the case of inhomogeneous boundary conditions, it is equivalent to solving \eqref{Poisson-StrongForm} after employing an auxiliary network with distance function for boundary fitting \cite{mcfall2009artificial,berg2018unified}.
In contrast to the standard classification of domain decomposition methods, which is based on the partition strategy of the computational domain, our proposed framework (see \autoref{fig-big-picture}) is organized according to the information exchanged across the subdomain interfaces. More specifically, with the set of interface points being small compared to the entire training dataset, the trained models are often found to satisfy the governing equations but for overfitted interface conditions \cite{dockhorn2019discussion}. As a direct result, though the solution values of the local problems are often of satisfactory accuracy at both the interior and boundary points, the flux prediction through the trained models is typically of unacceptably low accuracy along the interface, which hampers the convergence of direct flux exchange schemes. In what follows, we refer to these deep learning solvers as solution-oriented learning methods \cite{raissi2019physics,yu2018deep}, since only the solution values (rather than the fluxes) can be trusted near the interface, and the issue of boundary overfitting will always occur to a greater or lesser extent in practice. To deal with the overfitted interface conditions, we propose a novel learning approach, \textit{i.e.}, the compensated deep Ritz method, that allows accurate flux transmission between neighbouring subdomains without explicitly computing the solution's derivatives on the interface. Thanks to the proposed method, we are able to construct effective learning approaches for realizing the classical Dirichlet-Neumann, Neumann-Neumann, Dirichlet-Dirichlet, and Robin-Robin algorithms in the non-overlapping regime \cite{toselli2004domain} (see \autoref{fig-big-picture}), and numerical experiments on a series of elliptic boundary value problems validate our statements. It is noteworthy that although the Robin-Robin algorithm only requires the exchange of the solution value across the interfaces \cite{chen2014optimal}, two additional parameters in the Robin boundary conditions need to be determined and may lead to interface overfitting if not chosen appropriately. Fortunately, our compensated deep Ritz method can also help to alleviate this problem. The remainder of this paper is organized as follows. In section 2, we present a brief review of the classical domain decomposition methods and the solution-oriented learning methods for solving elliptic boundary value problems. Then, according to our framework (see \autoref{fig-big-picture}), the most straightforward machine learning analogue of the Robin-Robin algorithm is introduced in section 3, followed by the detailed illustration of our compensated deep Ritz method in section 4. Numerical experiments on a series of problems, including regular and irregular interfaces, two and four subdomains, and low and high dimensions, are reported in section 5, as well as the more challenging elliptic interface problem with high-contrast coefficients. Section 6 concludes this paper and presents some directions for future work. \section{Preliminaries} This section is devoted to briefly reviewing the classical domain decomposition methods \cite{toselli2004domain,quarteroni1999domain} and the widely-used deep learning methods for solving the second-order elliptic boundary value problems.
\subsection{Domain Decomposition Methods} The idea of domain decomposition for solving the Poisson equation has a long history dating back to the 19th century \cite{Schwarz1870alternative}, and there is a vast literature on domain decomposition methods (we refer to \cite{toselli2004domain,quarteroni1999domain,mathew2008domain} and the references cited therein). For ease of illustration, the computational domain $\Omega\subset\mathbb{R}^d$ is first assumed to be partitioned into two subdomains $\{\Omega_i\}_{i=1}^2$ (see \autoref{fig-domain-decomposition} for example), while the case of multiple subdomains can be handled in a similar way. Depending on the partition strategy being employed, the domain decomposition methods are typically categorized into the overlapping and non-overlapping groups, which poses no problem if the finite difference or finite element methods are utilized as the numerical solvers for local problems \cite{toselli2004domain,quarteroni1999domain}. \begin{figure}[t] \begin{adjustbox}{max totalsize={0.99\textwidth}{0.99\textheight},center} \begin{tikzpicture} \draw[color=gray, fill=lightgray, thick] (1.7,0) rectangle (2.3,4); \draw[thick] (0,0) rectangle (4,4); \node[draw=none] at (1,2) {$\Omega_1$}; \node[draw=none] at (3,2) {$\Omega_2$}; \node [draw=none] at (2,2) {\rotatebox{90}{$\Omega_1\cap\Omega_2$}}; \node[draw=none] at (1.3,3.5) {$\Gamma_2$}; \node[draw=none] at (2.9,3.5) {$\Gamma_1$}; \draw[color=gray, thick] (7,0) -- (7,4); \draw[thick] (5,0) rectangle (9,4); \node[draw=none] at (6,2) {$\Omega_1$}; \node[draw=none] at (8,2) {$\Omega_2$}; \node[draw=none] at (7.3,3.5) {$\Gamma$}; \draw[decorate,decoration={zigzag,segment length=8.mm, amplitude=2.mm, pre=lineto,pre length=0pt,post=lineto,post length=0pt},color=gray, thick] (12,0) -- (12,4); \draw[thick] (10,0) rectangle (14,4); \node[draw=none] at (11,2) {$\Omega_1$}; \node[draw=none] at (13,2) {$\Omega_2$}; \node[draw=none] at (12.5,3.5) {$\Gamma$}; \draw[color=gray, thick] (17,0) -- (17,4); \draw[color=gray, thick] (15,2) -- (19,2); \draw[thick] (15,0) rectangle (19,4); \draw[fill=lightgray] (15,0) rectangle (17,2); \draw[fill=lightgray] (17,2) rectangle (19,4); \node[draw=none] at (16,1) {$\Omega_B$}; \node[draw=none] at (16,3) {$\Omega_R$}; \node[draw=none] at (18,1) {$\Omega_R$}; \node[draw=none] at (18,3) {$\Omega_B$}; \draw[black,fill=black] (17,2) circle (.3ex); \end{tikzpicture} \end{adjustbox} \vspace{-0.6cm} \caption{Decomposition of a bounded domain $\Omega\subset\mathbb{R}^2$ in two dimensions. \textbf{Left:} Overlapping partition with two subdomains. \textbf{Middle two:} Non-overlapping partitions with two subdomains, separated by a regular (straight) or an irregular interface. \textbf{Right:} Red-Black partition of the non-overlapping multidomains into two sets, where the cross point between two subregions of the same class is marked by a solid dot.
} \label{fig-domain-decomposition} \vspace{-0.4cm} \end{figure} \begin{algorithm}[htp] \caption{ Domain Decomposition Methods Based on Solution Exchange} \begin{algorithmic} \STATE{Start with the initial guess $h_1^{[0]}$ and $h_2^{[0]}$ of each subsolution along the interface;} \FOR{$k \gets 0$ to $K$ (maximum number of outer iterations)} \WHILE{stopping criteria are not satisfied} \STATE{\% \textit{Local Problem-Solving} } \STATE{ \vspace{-0.72cm} \begingroup \renewcommand*{\arraystretch}{1.3} \begin{equation*} \left\{ \begin{array}{cl} - \Delta u_1^{[k]} = f\ \ & \text{in}\ \Omega_1\\ u_1^{[k]} = 0\ \ & \text{on}\ \partial \Omega_1 \setminus \Gamma_1\\ \mathcal{B}_1 u_1^{[k]} = h_1^{[k]}\ \ & \text{on}\ \Gamma_1 \end{array} \right. \text{\ where\ \ } \mathcal{B}_1 u_1^{[k]} = \left\{ \begin{array}{cl} u_1^{[k]}\ & \text{(SAM)} \\ \nabla u_1^{[k]}\cdot\bm{n}_1 + \kappa_1 u_1^{[k]} \ & \text{(RRA)} \\ \end{array}\right. \end{equation*} \endgroup \vspace{-0.38cm} } \STATE{\% \textit{Solution Exchange Between Subdomains}} \STATE{ \vspace{-0.72cm} \begingroup \renewcommand*{\arraystretch}{1.3} \begin{equation*} h_2^{[k]} = \left\{ \begin{array}{cll} u_1^{[k]}\ \ & \text{on}\ \Gamma_2\ & \text{(SAM)} \\ \displaystyle - h_1^{[k]} + (\kappa_1 + \kappa_2) u_1^{[k]}\ \ & \text{on}\ \Gamma_2\ & \text{(RRA)} \\ \end{array}\right. \end{equation*} \endgroup \vspace{-0.38cm} } \STATE{\% \textit{Local Problem-Solving}} \STATE{ \vspace{-0.72cm} \begingroup \renewcommand*{\arraystretch}{1.3} \begin{equation*} \left\{ \begin{array}{cl} - \Delta u_2^{[k]} = f\ \ & \text{in}\ \Omega_2\\ u_2^{[k]} = 0\ \ & \text{on}\ \partial \Omega_2 \setminus \Gamma_2\\ \mathcal{B}_2 u_2^{[k]} = h_2^{[k]}\ \ & \text{on}\ \Gamma_2 \end{array} \right. \text{\ where\ \ } \mathcal{B}_2 u_2^{[k]} = \left\{ \begin{array}{cl} u_2^{[k]}\ & \text{(SAM)} \\ \nabla u_2^{[k]}\cdot\bm{n}_2 + \kappa_2 u_2^{[k]} \ & \text{(RRA)} \\ \end{array}\right. \end{equation*} \endgroup \vspace{-0.38cm} } \STATE{\% \textit{Solution Exchange Between Subdomains}} \STATE{ \vspace{-0.72cm} \begingroup \renewcommand*{\arraystretch}{1.3} \begin{equation*} h_1^{[k+1]} = \left\{ \begin{array}{cll} u_2^{[k]}\ \ & \text{on}\ \Gamma_1\ & \text{(SAM)} \\ \displaystyle \rho h_1^{[k]} + (1-\rho) \big(- h_2^{[k]} + (\kappa_1 + \kappa_2) u_2^{[k]}\big)\ \ & \text{on}\ \Gamma_1\ & \text{(RRA)} \end{array}\right. \end{equation*} \endgroup \vspace{-0.38cm} } \ENDWHILE \ENDFOR \\\hrulefill \STATE{\textbf{Remark:} RRA is defined in the non-overlapping regime, \textit{i.e.}, $\Gamma_1=\Gamma_2$ during iteration. } \end{algorithmic} \label{DDM-Solution-Exchange} \end{algorithm} \begin{algorithm}[htp] \caption{Domain Decomposition Methods Based on Solution and Flux Exchange } \begin{algorithmic} \STATE{Start with the initial guess $h_1^{[0]}$ and $h_2^{[0]}$ of each subsolution along the interface;} \FOR{$k \gets 0$ to $K$ (maximum number of outer iterations)} \WHILE{stopping criteria are not satisfied} \STATE{\% \textit{Local Problem-Solving} } \STATE{ \vspace{-0.72cm} \begingroup \renewcommand*{\arraystretch}{1.3} \begin{equation*} \left\{ \begin{array}{cl} - \Delta u_i^{[k]} = f\ \ & \text{in}\ \Omega_i\\ u_i^{[k]} = 0\ \ & \text{on}\ \partial \Omega_i \setminus \Gamma\\ \mathcal{B}_i u_i^{[k]} = h_i^{[k]}\ \ & \text{on}\ \Gamma \end{array}\right.
\text{\ where\ \ } \mathcal{B}_i u_i^{[k]} = \left\{ \begin{array}{cll} u_i^{[k]} & \text{for}\ i=1,\ & \text{(DNA)} \\ \displaystyle u_i^{[k]} & \text{for}\ i=1,2,\ & \text{(NNA)} \\ \displaystyle \nabla u_i^{[k]} \cdot \bm{n}_i & \text{for}\ i=1,2,\ & \text{(DDA)} \end{array}\right. \end{equation*} \endgroup \vspace{-0.38cm} } \STATE{\% \textit{Solution or Flux Exchange Between Subdomains}} \STATE{ \vspace{-0.72cm} \begingroup \renewcommand*{\arraystretch}{1.3} \begin{equation*} h_i^{[k]} = \left\{ \begin{array}{clll} -\nabla u_{i-1}^{[k]} \cdot \bm{n}_{i-1} & \text{on}\ \Gamma & \text{for}\ i=2, \ & \text{(DNA)} \\ \displaystyle \nabla u_{1}^{[k]} \cdot \bm{n}_{1} + \nabla u_{2}^{[k]} \cdot \bm{n}_{2} & \text{on}\ \Gamma & \text{for}\ i=1,2, \ & \text{(NNA)} \\ \displaystyle u_{1}^{[k]} - u_{2}^{[k]} & \text{on}\ \Gamma & \text{for}\ i=1,2,\ & \text{(DDA)} \end{array}\right. \end{equation*} \endgroup \vspace{-0.38cm} } \STATE{\% \textit{Local Problem-Solving}} \STATE{ \vspace{-0.72cm} \begingroup \renewcommand*{\arraystretch}{1.3} \begin{equation*} \left\{ \begin{array}{cl} - \Delta u_i^{[k]} = f\ \ & \text{in}\ \Omega_i\\ u_i^{[k]} = 0\ \ & \text{on}\ \partial \Omega_i \setminus \Gamma\\ \mathcal{B}_i u_i^{[k]} = h_i^{[k]}\ \ & \text{on}\ \Gamma \end{array} \right. \text{\ where\ \ } \mathcal{B}_i u_i^{[k]} = \left\{ \begin{array}{cll} \nabla u_i^{[k]} \cdot \bm{n}_i & \text{for}\ i=2,\ & \text{(DNA)} \\ \displaystyle \nabla u_i^{[k]} \cdot \bm{n}_i & \text{for}\ i=1,2,\ & \text{(NNA)} \\ \displaystyle u_i^{[k]} & \text{for}\ i=1,2,\ & \text{(DDA)} \\ \end{array}\right. \end{equation*} \endgroup \vspace{-0.38cm} } \STATE{\% \textit{Solution or Flux Exchange Between Subdomains}} \STATE{ \vspace{-0.72cm} \begingroup \renewcommand*{\arraystretch}{1.3} \begin{equation*} h_i^{[k+1]} = \left\{ \begin{array}{clll} \rho u_{i+1}^{[k]} + (1-\rho) u_{i}^{[k]} & \text{on}\ \Gamma & \text{for}\ i=1, \ & \text{(DNA)} \\ h_i^{[k]} - \rho ( u_{1}^{[k]} + u_{2}^{[k]} ) & \text{on}\ \Gamma & \text{for}\ i=1,2, \ & \text{(NNA)} \\ h_i^{[k]} - \rho ( \nabla u_{1}^{[k]} \cdot \bm{n}_1 + \nabla u_{2}^{[k]} \cdot \bm{n}_2 ) & \text{on}\ \Gamma & \text{for}\ i=1,2,\ & \text{(DDA)} \\ \end{array}\right. \end{equation*} \endgroup \vspace{-0.38cm} } \ENDWHILE \ENDFOR \end{algorithmic} \label{DDM-Flux-Exchange} \end{algorithm} However, when the neural network is adopted as the solution ansatz, the trained model is often found to satisfy the governing equations but for overfitted boundary conditions \cite{dockhorn2019discussion,bajaj2021robust}, which greatly differs from the traditional methods \cite{toselli2004domain,quarteroni1999domain}. To this end, the classification of domain decomposition methods adopted in this paper is based on the information exchange between neighbouring subregions (see also \autoref{fig-big-picture}). More specifically, we summarize in what follows some representative decomposition-based approaches in the literature \cite{toselli2004domain}. Here, we refer to the Schwarz alternating method and the Robin-Robin algorithm as SAM and RRA in Algorithm \ref{DDM-Solution-Exchange}, while the Dirichlet-Neumann, Neumann-Neumann, and Dirichlet-Dirichlet algorithms are abbreviated as DNA, NNA, and DDA in Algorithm \ref{DDM-Flux-Exchange}, respectively. In addition, the relaxation parameter $\rho$ shall lie in $(0,\rho_{\textnormal{max}})$ in order to achieve convergence \cite{funaro1988iterative}.
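To make the flux-exchange iterations above concrete, we include a small self-contained sketch that applies the Dirichlet-Neumann variant (DNA) of Algorithm \ref{DDM-Flux-Exchange} to the one-dimensional model problem $-u''=\pi^2\sin(\pi x)$ on $(0,1)$ with exact solution $u(x)=\sin(\pi x)$ and interface $\Gamma=\{0.5\}$. The second-order finite difference solvers, the grid size, and the relaxation parameter $\rho=0.5$ are illustrative choices for this sketch only, not settings used elsewhere in this paper.
\begin{verbatim}
import numpy as np

def solve_dirichlet(f, a, b, ua, ub, n):
    # -u'' = f on (a,b) with u(a)=ua, u(b)=ub; n interior grid points
    h = (b - a) / (n + 1)
    x = np.linspace(a, b, n + 2)
    A = (2*np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / h**2
    rhs = f(x[1:-1])
    rhs[0] += ua / h**2
    rhs[-1] += ub / h**2
    return x, np.concatenate(([ua], np.linalg.solve(A, rhs), [ub]))

def solve_neumann_dirichlet(f, a, b, g, ub, n):
    # -u'' = f on (a,b) with u'(a)=g, u(b)=ub (ghost-point Neumann closure)
    h = (b - a) / (n + 1)
    x = np.linspace(a, b, n + 2)
    m = n + 1                              # unknowns: u(x_0), ..., u(x_n)
    A = (2*np.eye(m) - np.eye(m, k=1) - np.eye(m, k=-1)) / h**2
    A[0, 1] = -2 / h**2                    # reflected ghost point at x_0
    rhs = f(x[:-1])
    rhs[0] -= 2 * g / h
    rhs[-1] += ub / h**2
    return x, np.concatenate((np.linalg.solve(A, rhs), [ub]))

f = lambda x: np.pi**2 * np.sin(np.pi * x)   # exact solution u(x) = sin(pi*x)
n, rho, h_val = 99, 0.5, 0.0                 # initial interface guess h^[0] = 0
for k in range(20):
    x1, u1 = solve_dirichlet(f, 0.0, 0.5, 0.0, h_val, n)
    dx = x1[1] - x1[0]
    flux = (3*u1[-1] - 4*u1[-2] + u1[-3]) / (2*dx)   # one-sided u_1'(0.5)
    x2, u2 = solve_neumann_dirichlet(f, 0.5, 1.0, flux, 0.0, n)
    h_val = rho * u2[0] + (1 - rho) * h_val          # relaxed Dirichlet update
print(abs(h_val - np.sin(np.pi * 0.5)))   # small discretization-level error
\end{verbatim}
For this symmetric two-subdomain splitting, the continuous error propagation factor of the interface value is $1-2\rho$, so the illustrative choice $\rho=0.5$ yields nearly immediate convergence of the outer iteration.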
Notably, although the overlapping methods with small overlap are cheap and easy to implement, this comes at the price of slower convergence. Besides, the non-overlapping methods are more efficient in handling elliptic problems with large jumps in the coefficients. \subsection{Deep Learning Solvers} As can be concluded from the previous discussion, the decomposed problem on each subregion takes on the following form \begin{equation} \begin{array}{cl} - \Delta u_i(x) = f(x)\ \ & \text{in}\ \Omega_i,\\ u_i(x)=0\ \ & \text{on}\ \partial \Omega_i\setminus\Gamma,\\ \mathcal{B}_i u_i(x)=h_i(x)\ \ & \text{on}\ \Gamma, \end{array} \label{Subproblem-StrongForm} \end{equation} where $\mathcal{B}_i$ is a boundary operator on the interface that may represent the Dirichlet, Neumann, or Robin boundary condition, namely, \begin{equation*} \begin{array}{rl} \textnormal{Dirichlet condition:}\ &\ \mathcal{B}_i u(x) = u(x), \\ \textnormal{Neumann condition:}\ &\ \mathcal{B}_i u(x) = \nabla u(x) \cdot \bm{n}_i, \\ \textnormal{Robin condition:}\ &\ \mathcal{B}_i u(x) = \nabla u(x)\cdot \bm{n}_i + \kappa_i u(x), \end{array} \end{equation*} and the associated boundary conditions $h_i(x)$ are iteratively determined so as to ensure the transmission conditions across subdomain interfaces \cite{toselli2004domain}. When deep learning techniques are employed for solving the boundary value problem \eqref{Subproblem-StrongForm}, the hypothesis space of the local solution is first built by neural networks. Here, we adopt the standard fully-connected neural network of depth $L\in\mathbb{N}_+$ \cite{hornik1989multilayer}, in which the $\ell$-th hidden layer receives an input $x^{\ell-1}\in\mathbb{R}^{n_{\ell-1}}$ from its previous layer and transforms it to \begin{equation*} T^{\ell}(x^{\ell-1}) = \bm{W}^{\ell} x^{\ell-1} + \bm{b}^{\ell}, \end{equation*} where $\bm{W}^{\ell} \in \mathbb{R}^{ n_{\ell} \times n_{\ell - 1} }$ and $\bm{b}^{\ell} \in \mathbb{R}^{n_{\ell}}$ are the weights and biases to be learned. By choosing an appropriate activation function $\sigma(\cdot)$ for each hidden layer, the network solution can then be defined as \begin{equation*} \hat{u}_i(x;\theta) = \big(T^L \circ \sigma \circ T^{L-1} \cdots \circ \sigma \circ T^1\big)(x), \end{equation*} where $\circ$ represents the composition operator and $\theta=\{ \bm{W}^{\ell}, \bm{b}^{\ell} \}_{\ell = 1}^L$ denotes the collection of the trainable parameters. One can also employ other network architectures, \textit{e.g.}, the residual network and its variants \cite{he2016deep,han2017deep}, for the parametrization of local solutions, which is omitted here and left for future investigation.
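As a concrete illustration of the parametrization above, a minimal PyTorch sketch of the fully-connected network $\hat{u}_i(x;\theta)$ is given below; the width and depth are placeholder values rather than the settings used in our experiments, and the hyperbolic tangent activation is chosen in view of the differentiability requirement discussed in section 4.
\begin{verbatim}
import torch.nn as nn

class FCNet(nn.Module):
    """Fully-connected network: x -> (T^L o sigma o ... o sigma o T^1)(x)."""
    def __init__(self, dim_in=2, width=50, depth=4, dim_out=1):
        super().__init__()
        layers = [nn.Linear(dim_in, width), nn.Tanh()]   # T^1 and sigma
        for _ in range(depth - 2):                       # hidden layers
            layers += [nn.Linear(width, width), nn.Tanh()]
        layers.append(nn.Linear(width, dim_out))         # output layer T^L
        self.net = nn.Sequential(*layers)

    def forward(self, x):
        # trainable parameters theta = all weights W^l and biases b^l
        return self.net(x)
\end{verbatim}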
To update the trainable parameters through the celebrated backpropagation algorithm \cite{hecht1992theory}, various training loss functions (prior to numerical integration) have been proposed, \textit{e.g.}, the PINN approach \cite{raissi2019physics} that is based on the strong form of \eqref{Subproblem-StrongForm}, namely, \begin{equation*} \hat{u}_i(x;\theta) = \operatorname*{arg\,min}_{\theta} \int_{\Omega_i} |-\Delta \hat{u}_i-f|^2 dx + \beta \left( \int_{\partial\Omega_i\backslash \Gamma}|\hat{u}_i|^2ds+\int_{\Gamma}|\mathcal{B}_i(\hat{u}_i)-h_i|^2ds\right), \end{equation*} or the deep Ritz method \cite{yu2018deep} that is based on the variational form of \eqref{Subproblem-StrongForm}, namely, \begin{equation*} \hat{u}_i(x;\theta) = \operatorname*{arg\,min}_{\theta} \int_{\Omega_i} \Big( \frac12 |\nabla \hat{u}_i|^2 - f\hat{u}_i \Big) dx + \beta \int_{\partial\Omega_i\backslash \Gamma}|\hat{u}_i|^2ds + L_\Gamma(\hat{u}_i(x;\theta)) \end{equation*} where the last term depends on the boundary condition being imposed at the interface: \begingroup \renewcommand*{\arraystretch}{2.} \begin{equation*} L_\Gamma(\hat{u}_i(x;\theta)) = \left\{ \begin{array}{ll} \displaystyle \beta \int_{\Gamma}|\hat{u}_i - h_i|^2ds\ \ &\ \text{(Dirichlet condition),} \\ \displaystyle - \int_{\Gamma} h_i \hat{u}_i\,ds\ \ &\ \text{(Neumann condition),} \\ \displaystyle \int_\Gamma \Big( \frac{\kappa_i}{2} |\hat{u}_i|^2 - h_i \hat{u}_i \Big) ds\ \ &\ \text{(Robin condition),} \end{array}\right. \end{equation*} \endgroup and $\beta>0$ is a user-defined penalty coefficient. Apart from these two widely-used methods, the weak adversarial network \cite{zang2020weak} is based on the weak form of \eqref{Subproblem-StrongForm}, while another series of neural network methods is designed to use separate networks to fit the interior and boundary equations, respectively \cite{mcfall2009artificial,berg2018unified}. We refer the readers to \cite{karniadakis2021physics,sheng2022pfnn,heinlein2021combining} for a more detailed review of the existing deep learning solvers. Notably, with the interface conditions being included as soft constraints in the training loss function and the set of interface points being small compared to that of the interior domains, the trained network is highly prone to overfitting at the interface \cite{dockhorn2019discussion,bajaj2021robust}, which is a key threat to the integration of the direct flux exchange schemes and the deep learning techniques but is rarely studied or addressed in the literature. \section{Robin-Robin Algorithm via Solution-Oriented Learning Methods} Note that in addition to the deep learning analogue of overlapping Schwarz methods \cite{li2019d3m,mercier2021coarse,li2020deep,sheng2022pfnn,taghibakhshi2022learning}, the non-overlapping Robin-Robin algorithm \cite{toselli2004domain,quarteroni1999domain} is also based on a direct exchange of the solution value between neighbouring subdomains (see \autoref{fig-big-picture} or Algorithm \ref{DDM-Solution-Exchange}). Moreover, as the decomposition leads to simpler and smoother functions to be learned on each subregion, it enables us to employ the standard solution-oriented learning methods \cite{raissi2019physics,yu2018deep,zang2020weak,sirignano2018dgm} for the numerical solution of local problems. As such, the PINN approach \cite{raissi2019physics} is generally preferred since it is known to empirically work better for problems with smooth solutions \cite{chen2020comparison}.
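For reference, the following sketch shows how the strong-form (PINN) training loss of a single local subproblem could be assembled with nested automatic differentiation in PyTorch, assuming Dirichlet interface data for simplicity; the penalty weight is illustrative, and the constant volume factors appearing in the discrete losses later in the paper are dropped here.
\begin{verbatim}
import torch

def laplacian(u_net, x):
    # Delta u(x) via nested autograd; x has shape (N, d)
    x = x.clone().requires_grad_(True)
    u = u_net(x).squeeze(-1)
    grad = torch.autograd.grad(u.sum(), x, create_graph=True)[0]
    lap = sum(torch.autograd.grad(grad[:, j].sum(), x,
                                  create_graph=True)[0][:, j]
              for j in range(x.shape[1]))
    return u, lap

def pinn_loss(u_net, f, x_int, x_bdy, x_gam, h_gam, beta=100.0):
    # strong-form residual plus soft Dirichlet penalties
    _, lap = laplacian(u_net, x_int)
    loss_pde = ((-lap - f(x_int)) ** 2).mean()
    loss_bdy = (u_net(x_bdy).squeeze(-1) ** 2).mean()
    loss_gam = ((u_net(x_gam).squeeze(-1) - h_gam) ** 2).mean()
    return loss_pde + beta * (loss_bdy + loss_gam)
\end{verbatim}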
However, a major drawback is the determination of two additional parameters in the Robin boundary conditions, which may cause difficulties for network training or require more outer iterations to converge. For ease of illustration, we consider the case of two non-overlapping subregions (see \autoref{fig-domain-decomposition}) in what follows, where the interface conditions are invariably of the Robin type \cite{toselli2004domain,quarteroni1999domain,chen2014optimal}. Note that the detailed iterative process in terms of differential operators is presented in Algorithm \ref{DDM-Solution-Exchange}, which is not restated here for simplicity. It can be observed that the update of interface conditions only requires the Dirichlet data of local solutions, which seems simpler and more straightforward than those based on a direct flux exchange \cite{toselli2004domain,quarteroni1999domain} when combined with deep learning models. \begin{algorithm}[t!] \caption{Deep Learning Analogue of Robin-Robin Algorithm for Two Subdomains} \begin{algorithmic} \STATE{\% \textit{Initialization} } \STATE{-- divide domain $\Omega\subset\mathbb{R}^d$ into two non-overlapping subregions $\Omega_1$ and $\Omega_2$;} \STATE{-- specify network structures $\hat{u}_1(x;\theta_1)$ and $\hat{u}_2(x;\theta_2)$ for each subproblem;} \STATE{-- generate Monte Carlo training samples $X_\Gamma$, $X_{\Omega_i}$, and $X_{D_i}$ for $i=1$, 2; } \STATE{\% \textit{Outer Iteration Loop} } \STATE{Start with the initial guess $h^{[0]}$ along the interface $\Gamma$;} \FOR{$k \gets 0$ to $K$ (maximum number of outer iterations)} \WHILE{stopping criteria are not satisfied} \STATE{\% \textit{Robin Subproblem-Solving via Solution-Oriented Learning Method} } \FOR{$\ell \gets 0$ to $L$ (maximum number of training epochs)} \FOR{$m \gets 0$ to $M$ (maximum number of training mini-batches)} \STATE{-- draw mini-batch data uniformly at random from $X_\Gamma$, $X_{\Omega_1}$, and $X_{D_1}$;} \STATE{-- network training by employing solution-oriented learning method \eqref{RRLM-Subprob-Discrete}, \textit{i.e.}, } \STATE{ \STATE{ \vspace{-1.2cm} \begingroup \renewcommand*{\arraystretch}{1.1} \begin{equation*} \theta_1^{[k]} = \operatorname*{arg\,min}_{\theta_1} L_{\Omega_1}( \hat{u}_1 ) + \beta \Big( L_{D_1}( \hat{u}_1 ) + L_R( \hat{u}_1, h_1^{[k]} ) \Big) \end{equation*} \endgroup \vspace{-0.65cm} } } \ENDFOR \ENDFOR \STATE{\% \textit{Update of boundary values at interface $\Gamma$} } \FOR{$n \gets 1$ to $N_\Gamma$} \STATE{ \vspace{-0.8cm} \begin{equation*} h_2^{[k]}(x_n^\Gamma) = - h_1^{[k]}(x_n^\Gamma) + (\kappa_1 + \kappa_2) \hat{u}_1(x_n^\Gamma;\theta_1^{[k]}) \end{equation*} \vspace{-0.75cm} } \ENDFOR \STATE{\% \textit{Robin Subproblem-Solving via Solution-Oriented Learning Method} } \FOR{$\ell \gets 0$ to $L$ (maximum number of training epochs)} \FOR{$m \gets 0$ to $M$ (maximum number of training mini-batches)} \STATE{-- draw mini-batch data uniformly at random from $X_\Gamma$, $X_{\Omega_2}$, and $X_{D_2}$;} \STATE{-- network training by employing solution-oriented learning method \eqref{RRLM-Subprob-Discrete}, \textit{i.e.}, } \STATE{ \STATE{ \vspace{-1.2cm} \begingroup \renewcommand*{\arraystretch}{1.1} \begin{equation*} \theta_2^{[k]} = \operatorname*{arg\,min}_{\theta_2} L_{\Omega_2}( \hat{u}_2 ) + \beta \Big( L_{D_2}( \hat{u}_2 ) + L_R( \hat{u}_2, h_2^{[k]} ) \Big) \end{equation*} \endgroup \vspace{-0.65cm} } } \ENDFOR \ENDFOR \STATE{\% \textit{Update of boundary values at interface $\Gamma$} } \FOR{$n \gets 1$ to
$N_\Gamma$} \STATE{ \vspace{-0.8cm} \begin{equation*} h_1^{[k+1]}(x_n^\Gamma) = \rho h_1^{[k]}(x_n^\Gamma) + (1-\rho)( - h_2^{[k]}(x_n^\Gamma) + (\kappa_1 + \kappa_2) \hat{u}_2(x_n^\Gamma;\theta_2^{[k]}) ) \end{equation*} \vspace{-0.75cm} } \ENDFOR \ENDWHILE \ENDFOR \end{algorithmic} \label{Algorithm-RR-Learning-2Subdomains} \end{algorithm} To realize the Robin-Robin algorithm using the PINN approach, the local problem is first rewritten as an equivalent optimization problem by minimizing the residual of the governing equation, namely, for $i=1$, 2, \begin{equation} u_i^{[k]} = \operatorname*{arg\,min}_{u_i\in V_i} \int_{\Omega_i} | -\Delta u_i - f |^2\,dx + \beta\left( \int_{\Gamma} \Big| \kappa_i u_i + \frac{\partial u_i}{\partial \bm{n}_i} - h_i^{[k]} \Big|^2\,ds + \int_{\partial\Omega\cap\partial\Omega_i} |u_i|^2\,ds \right), \label{RRLM-PINN-Subprob-Functional} \end{equation} where the boundary and interface conditions are included as soft penalty terms in the training loss function. Then, by parametrizing the trial functions as neural networks \begin{equation*} u_1^{[k]}(x) \approx \hat{u}_1^{[k]}(x) := \hat{u}_1(x;\theta_1^{[k]}) \ \ \ \text{and}\ \ \ u_2^{[k]}(x) \approx \hat{u}_2^{[k]}(x) := \hat{u}_2(x;\theta_2^{[k]}), \end{equation*} and generating the training sample points inside each subregion and at its boundary, \textit{i.e.}, \begin{equation*} X_{\Omega_i} = \big\{ x_n^{\Omega_i} \big\}_{n=1}^{N_{\Omega_i}},\ \ \ X_{D_i} = \big\{ x_n^{D_i} \big\}_{n=1}^{N_{D_i}},\ \ \ \text{and}\ \ \ X_{\Gamma} = \big\{ x_n^{\Gamma} \big\}_{n=1}^{N_{\Gamma}}, \end{equation*} the powerful stochastic optimization tools \cite{bottou2018optimization} can be applied for fulfilling the learning tasks associated with \eqref{RRLM-PINN-Subprob-Functional}, that is, for $i=1$, 2 and the $k$-th outer iteration, \begin{equation} \theta_i^{[k]} = \operatorname*{arg\,min}_{\theta_i} L_{\Omega_i}( \hat{u}_i(x;\theta_i) ) + \beta \Big( L_{D_i}( \hat{u}_i(x;\theta_i) ) + L_{R}( \hat{u}_i(x;\theta_i), h_i^{[k]}(x) ) \Big), \label{RRLM-Subprob-Discrete} \end{equation} where the loss functions (not relabelled) are defined as \begingroup \renewcommand*{\arraystretch}{2.} \begin{equation*} \begin{array}{c} \displaystyle L_{\Omega_i}( \hat{u}_i(x;\theta_i) ) = \frac{|\Omega_i|}{N_{\Omega_i}} \sum_{n=1}^{N_{\Omega_i}} \big| - \Delta \hat{u}_i(x_n^{\Omega_i};\theta_i) - f(x_n^{\Omega_i}) \big|^2, \\ \displaystyle L_{D_i}( \hat{u}_i(x;\theta_i) ) = \frac{|\partial\Omega_i\!\setminus\!\Gamma|}{N_{D_i}} \sum_{n=1}^{N_{D_i}} | \hat{u}_i(x_n^{D_i};\theta_i) |^2, \\ \displaystyle L_{R}( \hat{u}_i(x;\theta_i), h_i^{[k]}(x)) = \frac{|\Gamma|}{N_\Gamma} \sum_{n=1}^{N_\Gamma} \big| \kappa_i \hat{u}_i(x_n^{\Gamma};\theta_i) + \frac{\partial \hat{u}_i}{\partial \bm{n}_i}(x_n^{\Gamma};\theta_i) - h_i^{[k]}(x_n^{\Gamma}) \big|^2. \end{array} \end{equation*} \endgroup Here, and in what follows, $|\Omega_i|$, $|\partial\Omega_i\setminus\Gamma|$, and $|\Gamma|$ denote the Jacobians of the transformations that map the random variables generated from a standard uniform distribution to the input sample points $X_{\Omega_i}$, $X_{D_i}$, and $X_\Gamma$, respectively, where $i=1$, 2. To sum up, by employing the standard PINN approach as the numerical solver of local problems, the deep learning analogue of the classical Robin-Robin algorithm is presented in Algorithm \ref{Algorithm-RR-Learning-2Subdomains}, where $\kappa_1$, $\kappa_2>0$ are two additional user-defined parameters.
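As an implementation sketch of the Robin interface term $L_R$ and the data exchange in Algorithm \ref{Algorithm-RR-Learning-2Subdomains}, consider the following PyTorch fragment; the tensors \texttt{x\_gam} and \texttt{normals} and the networks \texttt{u1\_net}, \texttt{u2\_net} are assumed given, all names are illustrative, and the actual retraining step between the two updates is omitted.
\begin{verbatim}
import torch

def robin_loss(u_net, x_gam, normals, kappa, h_gam):
    # discrete L_R: mean of |kappa*u + du/dn - h|^2 over interface samples
    x = x_gam.clone().requires_grad_(True)
    u = u_net(x).squeeze(-1)
    grad = torch.autograd.grad(u.sum(), x, create_graph=True)[0]
    du_dn = (grad * normals).sum(dim=1)      # normal derivative on Gamma
    return ((kappa * u + du_dn - h_gam) ** 2).mean()

def exchange_robin_data(u1_net, u2_net, x_gam, h1, kappa1, kappa2, rho):
    # one sweep of the interface exchange (retraining of u2_net omitted)
    with torch.no_grad():                    # Robin data for the 2nd subproblem
        h2 = -h1 + (kappa1 + kappa2) * u1_net(x_gam).squeeze(-1)
    # ... here u2_net would be retrained with interface data h2 ...
    with torch.no_grad():                    # relaxed update for the next sweep
        h1 = rho * h1 + (1 - rho) * (-h2 + (kappa1 + kappa2)
                                     * u2_net(x_gam).squeeze(-1))
    return h1, h2
\end{verbatim}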
We can assume, without loss of generality, that $\kappa_1=1$ and leave the other parameter to be tuned. In fact, as the size of the training data on the interface is typically much smaller than that of the interior domains, a too large (or too small) value of $\kappa_2>0$ may cause weight imbalance in the training loss function and hence make the training process susceptible to overfitting on the interface, while a moderate value can guarantee convergence but at the cost of more outer iterations. This greatly differs from the conventional finite element setting \cite{chen2014optimal} since the neural network is adopted as the solution ansatz, and is further demonstrated through numerical experiments in \autoref{Section-Experiments}. Fortunately, the issue of weight imbalance can be tackled by using our compensated deep Ritz method, which is theoretically and numerically studied in the following sections. \section{Compensated Deep Ritz Method}\label{Section-Compensated-DeepRitz} In this section, we begin by focusing on the non-overlapping domain decomposition methods that are based on the direct flux exchange between neighbouring subregions (see \autoref{fig-big-picture}). When solving the decomposed problems using neural networks, it is known that although the overall training loss tends to decrease as the iteration proceeds, the trained model often converges to a local minimizer that adequately satisfies the constrained equations but for overfitted boundary conditions \cite{dockhorn2019discussion,bajaj2021robust}. In other words, the flux approximation using trained networks is typically of unacceptably low accuracy at and near the interface, preventing us from deploying the solution-oriented learning methods \cite{raissi2019physics,yu2018deep} for solving the decomposed subproblems. To overcome such an inevitable drawback in practical situations, the compensated deep Ritz method is proposed to enable the accurate transmission of numerical flux in the presence of overfitted interface conditions, which is of key importance for developing effective Dirichlet-Neumann, Neumann-Neumann, and Dirichlet-Dirichlet learning algorithms. Moreover, it can also help to alleviate the issue of interface overfitting when realizing the Robin-Robin algorithm through deep learning techniques. Consequently, we will be able to fully leverage the benefits of deep learning solvers to deal with domains of complicated geometry, high-dimensional problems, and many others \cite{karniadakis2021physics}. \subsection{Dirichlet-Neumann Learning Algorithm} We begin by considering the Dirichlet-Neumann algorithm \cite{toselli2004domain,quarteroni1999domain}, where the detailed iterative process in terms of differential operators is presented in Algorithm \ref{DDM-Flux-Exchange}. To avoid the explicit computation and transmission of the interface flux, the variational formulation of the multidomain problem is taken into consideration. More precisely, by integrating by parts in $\Omega\subset\mathbb{R}^d$, the weak formulation of the Poisson equation \eqref{Poisson-StrongForm} reads: find $u\in H_0^1(\Omega)$ such that \begin{equation} a(u,v) = (f,v)\ \ \ \text{for any}\ v\in H_0^1(\Omega), \label{Poisson-WeakForm} \end{equation} where the bilinear forms are defined as \begin{equation*} a(u,v) = \int_\Omega \nabla u\cdot \nabla v\,dx \ \ \ \text{and}\ \ \ (f,v) = \int_\Omega fv\,dx.
\end{equation*} For ease of illustration, we consider a two-subdomain decomposition of \eqref{Poisson-WeakForm} in what follows, and similar results can be achieved for multidomain problems using a red-black partition \cite{toselli2004domain}. Under such a circumstance, let $\Omega_1$, $\Omega_2$ denote a non-overlapping decomposition of domain $\Omega\subset\mathbb{R}^d$, with interface $\Gamma=\partial \Omega_1\cap \partial \Omega_2$ separating the subregions (see \autoref{fig-domain-decomposition} for example). By defining the bilinear terms \begin{equation*} a_i(u_i, v_i) = \int_{\Omega_i} \nabla u_i \cdot \nabla v_i\,dx \ \ \ \text{and}\ \ \ (f, v_i)_i = \int_{\Omega_i} f v_i\, dx \end{equation*} and setting \begin{equation*} V_i = \big\{ v_i\in H^1(\Omega_i)\, \big|\, v_i|_{\partial\Omega\cap\partial\Omega_i} = 0 \big\} \ \ \ \text{and}\ \ \ V_i^0=H_0^1(\Omega_i) \end{equation*} for $i=1$, 2, Green's formula then implies that \eqref{Poisson-WeakForm} can be equivalently reformulated as: find $u_1\in V_1$ and $u_2\in V_2$ such that \begingroup \renewcommand*{\arraystretch}{1.1} \begin{equation*} \begin{array}{cl} a_1(u_1, v_1) = (f, v_1)_1\ \ & \text{for any}\ v_1\in V_1^0,\\ u_1 = u_2\ \ & \text{on}\ \Gamma, \\ a_2(u_2, v_2) = (f, v_2)_2 + (f, R_1\gamma_0 v_2)_1 - a_1(u_1, R_1\gamma_0 v_2)\ \ & \text{for any}\ v_2\in V_2, \end{array} \end{equation*} \endgroup where $\gamma_0 v= v|_\Gamma$ indicates the restriction of $v\in H^1(\Omega_i)$ on interface $\Gamma$, and $R_i: H_{00}^{\frac12}(\Gamma)\to V_i$ denotes any differentiable extension operator \cite{quarteroni1999domain,toselli2004domain}. Based on the minimum total potential energy principle \cite{evans2010partial}, we immediately obtain its equivalent variational form, that is, \begingroup \renewcommand*{\arraystretch}{1.3} \begin{equation*} \begin{array}{c} \displaystyle \operatorname*{arg\,min}_{u_1\in V_1,\ u_1|_\Gamma = u_2} \frac12 a_1(u_1, u_1) - (f, u_1)_1, \\ \displaystyle \operatorname*{arg\,min}_{u_2\in V_2} \frac12 a_2(u_2, u_2) - (f, u_2)_2 - (f, R_1\gamma_0 u_2)_1 + a_1(u_1, R_1\gamma_0 u_2), \end{array} \end{equation*} \endgroup and therefore the variational formulation of the iterative Dirichlet-Neumann algorithm \cite{toselli2004domain,quarteroni1999domain}: given the initial guess $h^{[0]}\in H_{00}^{\frac12}(\Gamma)$, then solve for $k\geq 0$, \begin{itemize} \vspace{0.4em} \setlength\itemsep{0.4em} \item[1)] $\displaystyle u_1^{[k]} = \operatorname*{arg\,min}_{u_1\in V_1,\ u_1|_\Gamma = h^{[k]}} \frac12 a_1(u_1, u_1) - (f, u_1)_1,$ \item[2)] $\displaystyle u_2^{[k]} = \operatorname*{arg\,min}_{u_2\in V_2} \frac12 a_2(u_2, u_2) - (f, u_2)_2 - (f, R_1\gamma_0 u_2)_1 + a_1(u_1^{[k]}, R_1\gamma_0 u_2),$ \item[3)] $\displaystyle h^{[k+1]} = \rho u_2^{[k]} + (1-\rho) h^{[k]}\ \ \ \text{on}\ \Gamma,$ \vspace{0.4em} \end{itemize} with $\rho\in(0,\rho_{\textnormal{max}})$ being the acceleration parameter \cite{funaro1988iterative}. Notably, the continuity of flux across the interface is guaranteed without explicitly calculating and exchanging local fluxes. The variational formulation also makes it possible to integrate with machine learning methods \cite{yu2018deep}.
More precisely, the unknown solutions are parametrized by neural networks \begin{equation} u_1^{[k]}(x) \approx \hat{u}_1^{[k]}(x) := \hat{u}_1(x;\theta_1^{[k]}) \ \ \ \text{and}\ \ \ u_2^{[k]}(x) \approx \hat{u}_2^{[k]}(x) := \hat{u}_2(x;\theta_2^{[k]}) \label{Compensated-DeepRitz-Network-Parametrization} \end{equation} where $\hat{u}_i(x;\theta_i^{[k]})$ indicates the approximate solution with trained network parameters $\theta_i^{[k]}$ for $i=1$, 2, and the network structure employed here could be a fully-connected neural network \cite{goodfellow2016deep} or a residual neural network with several building blocks \cite{he2016deep}. We note that in contrast to the finite element methods \cite{toselli2004domain}, where the extension is mesh-dependent and locally defined, the mesh-free neural network parametrization \eqref{Compensated-DeepRitz-Network-Parametrization} can be regarded as a global function and thus provides a natural extension operator, \textit{i.e.}, the neural network extension operator, \begin{equation*} R_1\gamma_0 \hat{u}_2(x;\theta_2) = \hat{u}_2(x;\theta_2) \end{equation*} which extends the restriction of $\hat{u}_2(x;\theta_2)$ on interface $\Gamma$ to the subregion $\Omega_1$ with zero boundary value on $\partial\Omega_1\cap\partial\Omega$. Here, the requirement of homogeneous boundary condition is dealt with in a soft manner by introducing the penalty function \begin{equation*} \int_{ \partial\Omega_1\cap\partial\Omega } |\hat{u}_2|^2\,ds \end{equation*} into the learning task \eqref{Compensated-DeepRitz-Neumann-Subprob-Functional} of the mixed Neumann-Dirichlet problem. Notably, as the extension operator is required to be differentiable, the hyperbolic tangent or sigmoid activation function should be used rather than the ReLU function. Accordingly, by introducing the penalty term for enforcing essential boundary conditions, the Dirichlet problem on $\Omega_1$ can be formulated as\footnote{For notational simplicity, $\hat{u}_1(x;\theta_1)$ and $\hat{u}_2(x;\theta_2)$ are abbreviated as $\hat{u}_1$ and $\hat{u}_2$ if not specified otherwise.} \begin{equation} \theta_1^{[k]} = \operatorname*{arg\,min}_{\theta_1} \int_{\Omega_1} \Big( \frac12 | \nabla \hat{u}_1 |^2 - f \hat{u}_1\Big) dx + \beta \left( \int_{\partial\Omega_1\cap \partial\Omega} |\hat{u}_1|^2\,ds + \int_{\Gamma} |\hat{u}_1 - h^{[k]}|^2\,ds \right), \label{Compensated-DeepRitz-Dirichlet-Subprob-Functional} \end{equation} where $\beta>0$ is a user-defined penalty coefficient. In fact, as the decomposition of the equation leads to simpler functions to be learned on each subregion and hence the second-order derivatives can be explicitly involved during the training process, the residual form \begin{equation} \theta_1^{[k]} = \operatorname*{arg\,min}_{\theta_1} \int_{\Omega_1} |- \Delta \hat{u}_1 - f |^2 dx + \beta \left( \int_{\partial\Omega_1\cap \partial\Omega} |\hat{u}_1|^2\,ds + \int_{\Gamma} |\hat{u}_1 - h^{[k]}|^2\,ds \right) \label{Compensated-PINN-Dirichlet-Subprob-Functional} \end{equation} is preferred to the variational form \eqref{Compensated-DeepRitz-Dirichlet-Subprob-Functional}, since the former is empirically found to be capable of offering more accurate estimates of the solution's gradient inside the computational domain \cite{chen2020comparison}.
On the other hand, the mixed Neumann-Dirichlet problem on $\Omega_2$ can be written as \begin{equation} \theta_2^{[k]} = \operatorname*{arg\,min}_{\theta_2} \int_{\Omega_2} \Big( \frac12 | \nabla \hat{u}_2 |^2 - f \hat{u}_2 \Big) dx + \int_{\Omega_1} \Big( \nabla \hat{u}_1^{[k]} \cdot \nabla \hat{u}_2 - f \hat{u}_2 \Big) dx + \beta \int_{\partial\Omega} |\hat{u}_2|^2\,ds, \label{Compensated-DeepRitz-Neumann-Subprob-Functional} \end{equation} which relies on the gradient value $\nabla \hat{u}_1^{[k]}$ and therefore benefits from the residual form \eqref{Compensated-PINN-Dirichlet-Subprob-Functional}. Now we are ready to discretize the functional integrals (\ref{Compensated-DeepRitz-Dirichlet-Subprob-Functional}, \ref{Compensated-PINN-Dirichlet-Subprob-Functional}) and \eqref{Compensated-DeepRitz-Neumann-Subprob-Functional}, where the Monte Carlo method is adopted so as to overcome the curse of dimensionality \cite{metropolis1949monte}. Specifically, the training sample points are generated inside each subdomain and at its boundary, \textit{i.e.}, \begin{equation*} X_{\Omega_i} = \big\{ x_n^{\Omega_i} \big\}_{n=1}^{N_{\Omega_i}},\ \ \ X_{D_i} = \big\{ x_n^{D_i} \big\}_{n=1}^{N_{D_i}},\ \ \ \text{and}\ \ \ X_{\Gamma} = \big\{ x_n^{\Gamma} \big\}_{n=1}^{N_{\Gamma}}, \end{equation*} where $D_i = \partial\Omega_i\cap\partial\Omega$, and the total numbers of points in the training datasets $X_{\Omega_i}$, $X_{D_i}$, and $X_\Gamma$ are denoted by $N_{\Omega_i}$, $N_{D_i}$, and $N_\Gamma$, respectively. Consequently, by defining the following loss functions \begingroup \renewcommand*{\arraystretch}{2.} \begin{equation*} L_{\Omega_i}( \hat{u}_i(x;\theta_i) ) = \left\{ \begin{array}{ll} \displaystyle \frac{|\Omega_i|}{N_{\Omega_i}} \sum_{n=1}^{N_{\Omega_i}} \big| - \Delta \hat{u}_i(x_n^{\Omega_i};\theta_i) - f(x_n^{\Omega_i}) \big|^2 & \text{(residual form),} \\ \displaystyle \frac{|\Omega_i|}{N_{\Omega_i}} \sum_{n=1}^{N_{\Omega_i}} \left( \frac12 | \nabla \hat{u}_i(x_n^{\Omega_i};\theta_i) |^2 - f(x_n^{\Omega_i}) \hat{u}_i(x_n^{\Omega_i};\theta_i)\right) & \text{(variational form),} \end{array}\right. \end{equation*} \begin{equation*} \begin{array}{c} \displaystyle L_{D_i}( \hat{u}_j(x;\theta_j) ) \!=\! \frac{|\partial\Omega_i\!\setminus\!\Gamma|}{N_{D_i}} \sum_{n=1}^{N_{D_i}} | \hat{u}_j(x_n^{D_i};\theta_j) |^2, \ \ \ L_{\Gamma}( \hat{u}_1(x;\theta_1) ) \!=\! \frac{|\Gamma|}{N_\Gamma} \sum_{n=1}^{N_\Gamma} | \hat{u}_1(x_n^\Gamma;\theta_1) \!-\!
h^{[k]}( x_n^\Gamma ) |^2, \\ \displaystyle L_N( \hat{u}_2(x;\theta_2), \hat{u}_1(x;\theta_1^{[k]}) ) = \frac{|\Omega_1|}{N_{\Omega_1}} \sum_{n=1}^{N_{\Omega_1}} \left( \nabla \hat{u}_1(x_n^{\Omega_1};\theta_1^{[k]}) \cdot \nabla \hat{u}_2(x_n^{\Omega_1};\theta_2) - f(x_n^{\Omega_1}) \hat{u}_2( x_n^{\Omega_1}; \theta_2 ) \right), \end{array} \end{equation*} \endgroup the learning task associated with (\ref{Compensated-DeepRitz-Dirichlet-Subprob-Functional}, \ref{Compensated-PINN-Dirichlet-Subprob-Functional}) is defined as \begin{equation} \theta_1^{[k]} = \operatorname*{arg\,min}_{\theta_1} L_{\Omega_1}( \hat{u}_1(x;\theta_1) ) + \beta \Big( L_{D_1}( \hat{u}_1(x;\theta_1) ) + L_{\Gamma}( \hat{u}_1(x;\theta_1) ) \Big), \label{Compensated-DeepRitz-Dirichlet-Subprob-Discrete} \end{equation} while that of the functional integral \eqref{Compensated-DeepRitz-Neumann-Subprob-Functional} is given by \begin{equation} \theta_2^{[k]} = \operatorname*{arg\,min}_{\theta_2} L_{\Omega_2}( \hat{u}_2(x;\theta_2) ) + L_N( \hat{u}_2(x;\theta_2),\hat{u}_1(x;\theta_1^{[k]}) ) + \beta \Big( L_{D_1}( \hat{u}_2(x;\theta_2) ) + L_{D_2}( \hat{u}_2(x;\theta_2) ) \Big). \label{Compensated-DeepRitz-Neumann-Subprob-Discrete} \end{equation} Clearly, even though the network solution of the Dirichlet subproblem on $\Omega_1$ is prone to overfitting on the interface \cite{dockhorn2019discussion,bajaj2021robust}, it can be observed from \eqref{Compensated-DeepRitz-Neumann-Subprob-Discrete} that the mixed Neumann-Dirichlet problem on $\Omega_2$ can be solved without explicitly computing and enforcing the flux transmission condition across subdomain interfaces. Moreover, as the second-order derivatives are explicitly involved during the training process of Dirichlet subproblems \eqref{Compensated-PINN-Dirichlet-Subprob-Functional}, the network approximation to the solution's gradient is rather accurate inside each subregion, which is highly desirable for solving the mixed Neumann-Dirichlet problem \eqref{Compensated-DeepRitz-Neumann-Subprob-Functional}. \begin{algorithm}[t!] 
\caption{Dirichlet-Neumann Learning Algorithm for Two Subdomains} \begin{algorithmic} \STATE{\% \textit{Initialization} } \STATE{-- divide domain $\Omega\subset\mathbb{R}^d$ into two non-overlapping subregions $\Omega_1$ and $\Omega_2$;} \STATE{-- specify network structures $\hat{u}_1(x;\theta_1)$ and $\hat{u}_2(x;\theta_2)$ for each subproblem;} \STATE{-- generate Monte Carlo training samples $X_\Gamma$, $X_{\Omega_i}$, and $X_{D_i}$ for $i=1$, 2; } \STATE{\% \textit{Outer Iteration Loop} } \STATE{Start with the initial guess $h^{[0]}$ along the interface $\Gamma$;} \FOR{$k \gets 0$ to $K$ (maximum number of outer iterations)} \WHILE{stopping criteria are not satisfied} \STATE{\% \textit{Dirichlet Subproblem-Solving via Solution-Oriented Learning Method} } \FOR{$\ell \gets 0$ to $L$ (maximum number of training epochs)} \FOR{$m \gets 0$ to $M$ (maximum number of training mini-batches)} \STATE{-- draw mini-batch data uniformly at random from $X_\Gamma$, $X_{\Omega_1}$, and $X_{D_1}$;} \STATE{-- network training by employing the physics-informed neural network \eqref{Compensated-DeepRitz-Dirichlet-Subprob-Discrete}, \textit{i.e.}, } \STATE{ \STATE{ \vspace{-1.2cm} \begingroup \renewcommand*{\arraystretch}{1.1} \begin{equation*} \theta_1^{[k]} = \operatorname*{arg\,min}_{\theta_1} L_{\Omega_1}( \hat{u}_1 ) + \beta \Big( L_{D_1}( \hat{u}_1 ) + L_{\Gamma}( \hat{u}_1 ) \Big) \end{equation*} \endgroup \vspace{-0.65cm} } } \ENDFOR \ENDFOR \STATE{\% \textit{Neumann Subproblem-Solving via Compensated Deep Ritz Method} } \FOR{$\ell \gets 0$ to $L$ (maximum number of training epochs)} \FOR{$m \gets 0$ to $M$ (maximum number of training mini-batches)} \STATE{-- draw mini-batch data uniformly at random from $X_{\Omega_1}$, $X_{\Omega_2}$, $X_{D_1}$, and $X_{D_2}$;} \STATE{-- network training by employing the compensated deep Ritz method \eqref{Compensated-DeepRitz-Neumann-Subprob-Discrete}, \textit{i.e.}, } \STATE{ \STATE{ \vspace{-1.2cm} \begingroup \renewcommand*{\arraystretch}{1.1} \begin{equation*} \theta_2^{[k]} = \operatorname*{arg\,min}_{\theta_2} L_{\Omega_2}( \hat{u}_2 ) + L_N( \hat{u}_2, \hat{u}_1^{[k]} ) + \beta \Big( L_{D_1}( \hat{u}_2 ) + L_{D_2}( \hat{u}_2 ) \Big) \end{equation*} \endgroup \vspace{-0.65cm} } } \ENDFOR \ENDFOR \STATE{\% \textit{Update of boundary values at interface $\Gamma$ for Dirichlet subproblem} } \FOR{$n \gets 1$ to $N_\Gamma$} \STATE{ \vspace{-0.8cm} \begin{equation*} h^{[k+1]}(x_n^\Gamma) = \rho \hat{u}_2(x_n^\Gamma;\theta_2^{[k]}) + (1-\rho) h^{[k]}(x_n^\Gamma) \end{equation*} \vspace{-0.75cm} } \ENDFOR \ENDWHILE \ENDFOR \end{algorithmic} \label{Algorithm-DN-Learning-2Subdomains} \end{algorithm} To sum up, by using our compensated deep Ritz method, the proposed Dirichlet-Neumann learning algorithm is presented in Algorithm \ref{Algorithm-DN-Learning-2Subdomains}, where the mini-batch data computed during training are not relabelled for notational simplicity, and the stopping criteria can be constructed by measuring the difference between two consecutive iterations \cite{li2019d3m}. We also note that, though the Dirichlet-Neumann learning algorithm \ref{Algorithm-DN-Learning-2Subdomains} has sequential steps inherited from the original Dirichlet-Neumann approach \cite{toselli2004domain,quarteroni1999domain}, various techniques have been developed to solve subproblems in parallel (see \cite{mathew2008domain} and references cited therein), which is left for future investigation.
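To complement Algorithm \ref{Algorithm-DN-Learning-2Subdomains}, we sketch the compensated loss \eqref{Compensated-DeepRitz-Neumann-Subprob-Discrete} in PyTorch below; the Jacobian weights $|\Omega_i|$ are dropped (as for equal-volume subdomains with equal sample sizes), and all tensor names are illustrative. Note that the flux of the frozen Dirichlet network enters only through a volume term over $\Omega_1$, so no derivative is ever evaluated on the interface.
\begin{verbatim}
import torch

def compensated_neumann_loss(u2_net, u1_net, f,
                             x_om1, x_om2, x_d1, x_d2, beta=100.0):
    # deep Ritz (variational) term on Omega_2
    x2 = x_om2.clone().requires_grad_(True)
    u2 = u2_net(x2).squeeze(-1)
    g2 = torch.autograd.grad(u2.sum(), x2, create_graph=True)[0]
    loss_om2 = (0.5 * (g2 ** 2).sum(dim=1) - f(x_om2) * u2).mean()

    # compensated term L_N on Omega_1: u1_net is frozen, while u2_net
    # acts as its own extension R_1 gamma_0 u2 into Omega_1
    x1 = x_om1.clone().requires_grad_(True)
    g1 = torch.autograd.grad(u1_net(x1).sum(), x1)[0].detach()
    u2_ext = u2_net(x1).squeeze(-1)
    g2_ext = torch.autograd.grad(u2_ext.sum(), x1, create_graph=True)[0]
    loss_n = ((g1 * g2_ext).sum(dim=1) - f(x_om1) * u2_ext).mean()

    # soft penalty enforcing u2 = 0 on the outer boundary
    loss_bdy = (u2_net(x_d1).squeeze(-1) ** 2).mean() \
             + (u2_net(x_d2).squeeze(-1) ** 2).mean()
    return loss_om2 + loss_n + beta * loss_bdy
\end{verbatim}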
\begin{remark}\label{Remark-High-Contrast} Our compensated deep Ritz method (or the Dirichlet-Neumann learning algorithm \ref{Algorithm-DN-Learning-2Subdomains}) can also be easily extended to solve the more challenging elliptic interface problem with high-contrast coefficients \cite{li2006immersed,he2022mesh} \begingroup \renewcommand*{\arraystretch}{1.1} \begin{equation*} \begin{array}{cl} -\nabla \cdot \left( c(x) \nabla u(x) \right) = f(x)\ \ & \text{in}\ \Omega,\\ u(x) = 0\ \ & \text{on}\ \partial \Omega, \end{array} \end{equation*} \endgroup where $\Gamma=\partial \Omega_1\cap\partial\Omega_2$ is an immersed interface (see \autoref{fig-domain-decomposition}), the coefficient function $c(x)$ is piecewise constant with respect to the decomposition of the domain \begingroup \renewcommand*{\arraystretch}{1.1} \begin{equation*} c(x) = \left\{ \begin{array}{cl} c_1>0\ \ & \text{in}\ \Omega_1,\\ c_2>0\ \ & \text{in}\ \Omega_2, \end{array}\right. \end{equation*} \endgroup and the natural jump conditions \cite{li2006immersed} are defined as \begin{equation*} [u] = 0\ \ \ \text{and}\ \ \ \left[ c\frac{\partial u}{\partial \bm{n}} \right] = q\ \ \ \text{on}\ \Gamma. \end{equation*} Applying Green's formula in each subdomain and adding them together, we obtain the weak formulation for the high-contrast problem, \textit{i.e.}, find $u_1\in V_1$ and $u_2\in V_2$ such that \begingroup \renewcommand*{\arraystretch}{1.1} \begin{equation*} \begin{array}{cl} b_1(u_1, v_1) = (f, v_1)_1\ \ & \text{for any}\ v_1\in V_1^0,\\ u_1 = u_2\ \ & \text{on}\ \Gamma, \\ b_2(u_2, v_2) = (f, v_2)_2 + (f, R_1\gamma_0 v_2)_1 - b_1(u_1, R_1\gamma_0 v_2) - (q,v_2)_\Gamma,\ \ & \text{for any}\ v_2\in V_2, \end{array} \end{equation*} \endgroup where the bilinear forms are defined as \begin{equation*} b_i(u_i, v_i) = \int_{\Omega_i} c_i \nabla u_i \cdot \nabla v_i\,dx,\ \ \ (f, v_i)_i = \int_{\Omega_i} f v_i\, dx,\ \ \ \text{and}\ \ \ (q,v_2)_\Gamma = \int_\Gamma q v_2\,ds, \end{equation*} for $i=1$, 2. By resorting to the variational form \cite{evans2010partial,brenner2008mathematical} and parametrizing the trial functions as neural networks, \textit{i.e.}, $u_i(x)\approx \hat{u}_i(x;\theta_i)$ for $i=1$, 2, the learning task associated with the Dirichlet problem\footnote{Here, the residual form is used instead since the solution on each subregion is rather smooth, resulting in more accurate approximation of the solution's gradient inside the subdomain.} on $\Omega_1$ gives \begin{equation} \theta_1^{[k]} = \operatorname*{arg\,min}_{\theta_1} \int_{\Omega_1} | -\nabla\cdot(c_1 \nabla \hat{u}_1) - f |^2dx + \beta \left( \int_{\partial\Omega_1\cap \partial\Omega} |\hat{u}_1|^2\,ds + \int_{\Gamma} |\hat{u}_1 - \hat{u}_2|^2\,ds \right), \label{High-Contrast-Dirichlet-Subprob} \end{equation} and that of the mixed Neumann-Dirichlet problem on $\Omega_2$ takes on the form \begin{equation} \theta_2^{[k]} = \operatorname*{arg\,min}_{\theta_2} \int_{\Omega_2} \Big( \frac{c_2}{2} | \nabla \hat{u}_2 |^2 - f \hat{u}_2 \Big) dx + \int_{\Omega_1} \Big( c_1 \nabla \hat{u}_1^{[k]} \cdot \nabla \hat{u}_2 - f \hat{u}_2 \Big) dx + \int_\Gamma q\hat{u}_2\,ds + \beta \int_{\partial\Omega} |\hat{u}_2|^2\,ds. \label{High-Contrast-Neumann-Subprob} \end{equation} Therefore, an iterative learning approach for solving the elliptic interface problem with high-contrast coefficients can be immediately constructed from \eqref{High-Contrast-Dirichlet-Subprob} and \eqref{High-Contrast-Neumann-Subprob}.
\end{remark} \subsection{Neumann-Neumann Learning Algorithm} Similar in spirit, the compensated deep Ritz method can be applied to construct the Neumann-Neumann learning algorithm (see \autoref{fig-big-picture}). Using the same notations as before, the iterative Neumann-Neumann scheme (see Algorithm \ref{DDM-Flux-Exchange}) can be written in an equivalent variational form: given the initial guess $h^{[0]}\in H_{00}^{\frac12}(\Gamma)$, then solve for $k\geq 0$ and $i=1$, 2, \begin{itemize}[leftmargin=2em] \vspace{0.4em} \setlength\itemsep{0.4em} \item[1)] $\displaystyle u_i^{[k]} = \operatorname*{arg\,min}_{u_i\in V_i,\ u_i|_\Gamma = h^{[k]}} \frac12 a_i(u_i, u_i) - (f, u_i)_i,$ \item[2)] $\displaystyle \psi_i^{[k]} = \operatorname*{arg\,min}_{\psi_i \in V_i} \frac12 a_i(\psi_i, \psi_i) \!+\! (f, \psi_i)_i \!+\! (f, R_{3-i}\gamma_0 \psi_i)_{3-i} \!-\! a_i(u_i^{[k]}, \psi_i) \!-\! a_{3-i}(u_{3-i}^{[k]}, R_{3-i}\gamma_0 \psi_i),$ \item[3)] $\displaystyle h^{[k+1]} = h^{[k]} - \rho (\psi_1^{[k]} + \psi_2^{[k]}) \ \ \ \text{on}\ \Gamma,$ \vspace{0.4em} \end{itemize} with $\rho\in(0,\rho_{\text{max}})$ being the acceleration parameter. Next, by parametrizing the trial functions as neural networks, that is, \begin{equation*} u_i^{[k]}(x) \approx \hat{u}_i^{[k]}(x) := \hat{u}_i(x;\theta_i^{[k]}) \ \ \ \text{and}\ \ \ \psi_i^{[k]}(x) \approx \hat{\psi}_i^{[k]}(x) := \hat{\psi}_i(x;\eta_i^{[k]}) \end{equation*} for $i=1$, 2, and by employing the neural network extension operators \begin{equation*} R_1\gamma_0 \hat{\psi}_2(x,\eta_2) = \hat{\psi}_2(x,\eta_2)\ \ \ \text{and}\ \ \ R_2\gamma_0 \hat{\psi}_1(x,\eta_1) = \hat{\psi}_1(x,\eta_1), \end{equation*} the learning tasks associated with the Neumann-Neumann algorithm can be formulated as \begingroup \renewcommand*{\arraystretch}{2.5} \begin{equation*} \begin{array}{c} \displaystyle \theta_i^{[k]} = \operatorname*{arg\,min}_{\theta_i} \int_{\Omega_i} | - \Delta \hat{u}_i - f |^2 dx + \beta \left( \int_{\partial\Omega_i\cap \partial\Omega} |\hat{u}_i|^2\,ds + \int_{\Gamma} |\hat{u}_i - h^{[k]}|^2\,ds \right), \\ \displaystyle \eta_i^{[k]} \! = \! \operatorname*{arg\,min}_{\eta_i} \! \int_{\Omega_i} \!\! \Big( \frac12 | \nabla \hat{\psi}_i |^2 \!+\! f \hat{\psi}_i \!-\! \nabla \hat{u}_i^{[k]} \!\cdot\! \nabla \hat{\psi}_i \Big) dx \! + \! \int_{\Omega_{3-i}} \!\! \Big( f\hat{\psi}_i \!-\! \nabla \hat{u}_{3-i}^{[k]} \!\cdot\! \nabla \hat{\psi}_i \Big) dx \! + \! \beta \! \int_{\partial\Omega} \! |\hat{\psi}_i|^2ds, \end{array} \end{equation*} \endgroup where $\beta>0$ is a user-defined penalty coefficient, $i=1$, 2, and the training tasks associated with Dirichlet subproblems are defined in a residual form as before. Therefore, the iterative learning approach can be constructed by using numerical integration formulas, as sketched below.
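To make the second training task concrete, its Monte Carlo discretization can be organized as follows. This PyTorch sketch is illustrative only; the networks, the source-term callable \texttt{f} (returning a column tensor), and the sample tensors are assumptions, and subdomains of equal measure are assumed so that volume factors can be dropped:
\begin{verbatim}
import torch

def grad_of(net, x):
    # spatial gradient of the scalar network output at the points x
    x = x.clone().requires_grad_(True)
    return torch.autograd.grad(net(x).sum(), x, create_graph=True)[0]

def psi_loss(psi_net, u_own, u_nbr, f, x_own, x_nbr, x_bdry, beta=400.0):
    g_own = grad_of(psi_net, x_own)
    g_nbr = grad_of(psi_net, x_nbr)
    du_own = grad_of(u_own, x_own).detach()  # flux of the frozen iterate u_i^{[k]}
    du_nbr = grad_of(u_nbr, x_nbr).detach()  # flux of the frozen iterate u_{3-i}^{[k]}
    e_own = (0.5 * g_own.pow(2).sum(1, keepdim=True)
             + f(x_own) * psi_net(x_own)
             - (du_own * g_own).sum(1, keepdim=True)).mean()
    e_nbr = (f(x_nbr) * psi_net(x_nbr)
             - (du_nbr * g_nbr).sum(1, keepdim=True)).mean()
    return e_own + e_nbr + beta * psi_net(x_bdry).pow(2).mean()
\end{verbatim}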
\subsection{Dirichlet-Dirichlet Learning Algorithm} To build the Dirichlet-Dirichlet learning algorithm in \autoref{fig-big-picture}, we first rewrite the iterative process (see Algorithm \ref{DDM-Flux-Exchange}) as: given the initial guess $h^{[0]}\in H_{00}^{\frac12}(\Gamma)$ and $\psi_1^{[0]}=\psi_2^{[0]}=0$, then solve for $k\geq 0$, \begin{itemize} \vspace{0.6em} \setlength\itemsep{0.6em} \item[1)] solve on $\Omega_i$ for $u_i^{[k]}$: $ \begingroup \renewcommand*{\arraystretch}{1.3} \left\{ \begin{array}{cl} - \Delta u_i^{[k]} = f \ \ & \text{in}\ \Omega_i,\\ u_i^{[k]} = 0\ \ & \text{on}\ \partial\Omega\cap\partial\Omega_i, \\ \nabla u_i^{[k]} \cdot \bm{n}_i = h^{[k]} - \rho ( \nabla \psi_1^{[k]} \cdot \bm{n}_1 + \nabla \psi_2^{[k]} \cdot \bm{n}_2 ) \ \ & \text{on}\ \Gamma, \end{array}\right. \endgroup $ \item[2)] solve on $\Omega_i$ for $\psi_i^{[k+1]}$: $ \begingroup \renewcommand*{\arraystretch}{1.3} \left\{ \begin{array}{cl} - \Delta \psi_i^{[k+1]} = 0 \ \ & \text{in}\ \Omega_i,\\ \psi_i^{[k+1]} = 0\ \ & \text{on}\ \partial\Omega\cap\partial\Omega_i, \\ \psi_i^{[k+1]} = u_1^{[k]} - u_2^{[k]} \ \ & \text{on}\ \Gamma, \end{array}\right. \endgroup $ \vspace{0.6em} \end{itemize} where $i=1$, 2. Using the same notations as before, Green's theorem indicates that the variational formulation of the mixed Neumann-Dirichlet problem reads: for $i=1$, 2, \begingroup \renewcommand*{\arraystretch}{1.3} \begin{equation*} \begin{array}{c} \displaystyle u_i^{[k]} = \operatorname*{arg\,min}_{u_i\in V_i} \frac12 a_i(u_i, u_i) - (f, u_i)_i - (h^{[k]},u_i)_\Gamma + \rho \big( a_i(\psi_i^{[k]}, u_i) + a_{3-i}( \psi_{3-i}^{[k]}, R_{3-i} \gamma_0 u_i ) \big) \end{array} \end{equation*} \endgroup where $(\cdot,\cdot)_\Gamma$ denotes the $L_2$ inner product on $\Gamma$, $\rho\in(0,\rho_{\text{max}})$, and the Dirichlet problem is reformulated as \begingroup \renewcommand*{\arraystretch}{1.3} \begin{equation*} \begin{array}{c} \displaystyle \psi_i^{[k]} = \operatorname*{arg\,min}_{\psi_i\in V_i,\ \psi_i|_\Gamma = u_1^{[k]} - u_2^{[k]}} \frac12 a_i(\psi_i, \psi_i) \end{array} \end{equation*} \endgroup for $i=1$, 2.
Consequently, by parametrizing the trial functions as neural networks, \textit{i.e.}, \begin{equation*} u_i^{[k]}(x) \approx \hat{u}_i^{[k]}(x) := \hat{u}_i(x;\theta_i^{[k]}) \ \ \ \text{and}\ \ \ \psi_i^{[k]}(x) \approx \hat{\psi}_i^{[k]}(x) := \hat{\psi}_i(x;\eta_i^{[k]}) \end{equation*} for $i=1$, 2, and by employing the neural network extension operators \begin{equation*} R_1\gamma_0 \hat{u}_2(x,\theta_2) = \hat{u}_2(x,\theta_2)\ \ \ \text{and}\ \ \ R_2\gamma_0 \hat{u}_1(x,\theta_1) = \hat{u}_1(x,\theta_1), \end{equation*} the learning tasks associated with the Dirichlet-Dirichlet algorithm can be formulated as \begingroup \renewcommand*{\arraystretch}{2.5} \begin{equation*} \begin{array}{c} \displaystyle \theta_i^{[k]} = \operatorname*{arg\,min}_{\theta_i} \int_{\Omega_i} \Big( \frac12 |\nabla \hat{u}_i |^2 - f \hat{u}_i + \rho \nabla \hat{\psi}_i^{[k]}\cdot \nabla \hat{u}_i\Big) dx + \rho \int_{\Omega_{3-i}} \nabla \hat{\psi}^{[k]}_{3-i} \cdot \nabla \hat{u}_i\,dx \\ \displaystyle - \int_\Gamma h^{[k]}\hat{u}_i\,ds + \beta \int_{\partial\Omega_i\cap \partial\Omega} |\hat{u}_i|^2\,ds,\qquad\qquad\qquad\ \\ \displaystyle \eta_i^{[k]} = \operatorname*{arg\,min}_{\eta_i} \int_{\Omega_i} |\Delta \hat{\psi}_i |^2 \,dx + \beta \left( \int_{\partial\Omega_i\cap \partial\Omega} |\hat{\psi}_i|^2\,ds + \int_{\Gamma} |\hat{\psi}_i - \hat{u}_1^{[k]} + \hat{u}_2^{[k]}|^2\,ds \right), \\ \end{array} \end{equation*} \endgroup where $\beta>0$ is a user-defined penalty coefficient, $i=1$, 2, and the training tasks associated with Dirichlet subproblems are defined in a residual form as before. \subsection{Robin-Robin Learning Algorithm} As mentioned before, the Robin-Robin algorithm only requires solution exchange between neighbouring subdomains; however, it may still suffer from the problem of overfitted interface conditions. More specifically, let $\kappa_1=1$ in what follows; then a relatively large value of $\kappa_2>0$ is typically required in order to reduce the total number of outer iterations \cite{chen2014optimal}. When the Robin boundary condition is incorporated as a soft penalty term during training, it may cause a weights imbalance between the solution and its gradient on the interface, thereby making the trained model prone to overfitting at and near the interfaces. To alleviate the issue of weights imbalance, the compensated deep Ritz method is a promising alternative for realizing the deep learning analogue of the classical Robin-Robin algorithm. Note that, in terms of differential operators, the second subproblem with $\kappa_2\gg \kappa_1=1$ in the Robin-Robin algorithm \cite{quarteroni1999domain} can be rewritten as \begin{equation*} \begingroup \renewcommand*{\arraystretch}{1.3} \left\{ \begin{array}{ll} - \Delta u_2^{[k]} = f \ \ & \text{in}\ \Omega_2,\\ u_2^{[k]} = 0\ \ & \text{on}\ \partial\Omega\cap\partial\Omega_2, \\ \kappa_2 u_2^{[k]} + \nabla u_2^{[k]} \cdot \bm{n}_2 = \kappa_2 u_1^{[k]} - \nabla u_1^{[k]} \cdot \bm{n}_1 \ \ & \text{on}\ \Gamma. \end{array}\right. \endgroup \end{equation*} Using the same notations as before, it is equivalent to find the weak solution $u_2^{[k]}\in V_2$ such that \begin{equation} a_2(u_2^{[k]}, v_2) = (f, v_2)_2 + ( \kappa_2 (u_1^{[k]} - u_2^{[k]}) - \nabla u_1^{[k]} \cdot \bm{n}_1, v_2 )_\Gamma\ \ \ \textnormal{for any}\ v_2\in V_2.
\label{RRLM-2nd-RobinProb-Weak} \end{equation} Next, by using Green's formula, we arrive at another form of \eqref{RRLM-2nd-RobinProb-Weak}, that is, \begin{equation*} a_2(u_2^{[k]}, v_2) = (f, v_2)_2 + \kappa_2 ( u_1^{[k]} - u_2^{[k]}, v_2 )_\Gamma - a_1( u_1^{[k]}, R_1\gamma_0 v_2 ) + (f, R_1\gamma_0 v_2)_1\ \ \ \textnormal{for any}\ v_2\in V_2, \end{equation*} and therefore, owing to the symmetry of the bilinear forms, the variational formulation of \eqref{RRLM-2nd-RobinProb-Weak} reads \begingroup \renewcommand*{\arraystretch}{1.3} \begin{equation*} \begin{array}{c} \displaystyle u_2^{[k]} = \operatorname*{arg\,min}_{u_2\in V_2} \frac12 a_2(u_2, u_2) - (f, u_2)_2 - \kappa_2( u_1^{[k]} - \frac12 u_2, u_2)_\Gamma + a_1( u_1^{[k]}, R_1\gamma_0 u_2) - (f, R_1\gamma_0u_2)_1, \end{array} \end{equation*} \endgroup which completely differs from the original PINN approach \eqref{RRLM-PINN-Subprob-Functional}. As such, by parametrizing the trial functions as neural networks, \textit{i.e.}, \begin{equation*} u_1^{[k]}(x) \approx \hat{u}_1^{[k]}(x) := \hat{u}_1(x,\theta_1^{[k]})\ \ \ \text{and}\ \ \ u_2^{[k]}(x) \approx \hat{u}_2^{[k]}(x) := \hat{u}_2(x,\theta_2^{[k]}), \end{equation*} and by employing the neural network extension operator \begin{equation*} R_1\gamma_0 \hat{u}_2(x,\theta_2) = \hat{u}_2(x,\theta_2), \end{equation*} the learning task associated with the second Robin problem takes on the form \begingroup \renewcommand*{\arraystretch}{2.5} \begin{equation*} \begin{array}{c} \displaystyle \theta_2^{[k]} = \operatorname*{arg\,min}_{\theta_2} \int_{\Omega_2} \Big( \frac12 |\nabla \hat{u}_2 |^2 - f \hat{u}_2 \Big) dx - \kappa_2 \int_\Gamma \Big( \hat{u}_1^{[k]}\hat{u}_2 - \frac12 |\hat{u}_2|^2 \Big) ds \\ \displaystyle \qquad\qquad +\ \int_{\Omega_1} \Big( \nabla \hat{u}_1^{[k]} \cdot \nabla \hat{u}_2 - f\hat{u}_2 \Big) dx + \beta \int_{\partial\Omega} |\hat{u}_2|^2\,ds, \end{array} \end{equation*} \endgroup which obviously removes the issue of weights imbalance between the solution value and its normal derivative on the interface. \section{Numerical Experiments}\label{Section-Experiments} To validate the effectiveness of our proposed domain decomposition learning algorithms, we conduct experiments using the Dirichlet-Neumann and Robin-Robin learning algorithms on a wide range of elliptic boundary value problems in this section, while the Neumann-Neumann and Dirichlet-Dirichlet learning algorithms are omitted due to page limitations. In what follows, our proposed Dirichlet-Neumann learning method, \textit{i.e.}, algorithm \ref{Algorithm-DN-Learning-2Subdomains}, is abbreviated as DNLM for simplicity, with the parenthesized label indicating the numerical solver adopted for the Dirichlet subproblems. In contrast to our proposed methods, the existing learning approach \cite{li2020deep} for realizing the classical Dirichlet-Neumann algorithm is based on a direct substitution of the numerical solvers with the PINN approach, which is referred to as DN-PINNs and is used for comparison. On the other hand, although the update of interface conditions in the Robin-Robin algorithm does not directly depend on the flux exchange, it may still suffer from the weights imbalance and therefore from interface overfitting. To further investigate the overfitting effects, the Robin-Robin algorithm is realized using the standard PINN approach and our compensated deep Ritz method after the empirical study of DNLM; these variants are referred to as RR-PINNs, RRLM (PINN), and RRLM (deep Ritz) in a similar fashion.
As is common for practical implementation \cite{li2020deep,karniadakis2021physics}, the network architecture deployed for each subregion is a fully connected network with $3$ hidden layers of $50$ neurons each \cite{goodfellow2016deep}, while the hyperbolic tangent activation function is employed due to the smoothness of local solutions and the differentiability of extension operators. During training, for $i=1$, 2, we randomly sample $N_{\Omega_i}=20000$ points from the interior domain $\Omega_i$, $N_{\Gamma}=5000$ points from the interface $\Gamma$, and $N_D=5000$ points from the boundary portion $\partial\Omega_i\setminus\Gamma$ of each subdomain. The trained models are then evaluated on the test dataset, \textit{i.e.}, $N_\Omega = 10000$ points that are uniformly distributed over the entire computational domain, and compared with the true solution to assess their performance. The penalty coefficient is set to $\beta=400$ and the number of mini-batches is chosen as $5$ for all training datasets. When executing the learning task on each subdomain, the initial learning rate of the Adam optimizer is set to $0.1$, which is divided by 10 at epochs 600 and 800. The training process is terminated after 1000 epochs for each decomposed subproblem, and the model with the minimum training loss is chosen to execute the subsequent operations. All the experiments are implemented using PyTorch and trained on an NVIDIA GeForce RTX 2060 GPU. \subsection{Dirichlet-Neumann Learning Algorithm} As a representative benchmark, we consider the learning approaches for realizing the classical Dirichlet-Neumann algorithm. More precisely, a comparison study between DN-PINNs, DNLM (PINN), and DNLM (deep Ritz) is presented in this subsection, where experiments on a wide variety of elliptic problems are conducted to demonstrate the effectiveness and flexibility of our proposed methods. \subsubsection{Poisson's Equation with Simple Interface} To begin with, we consider a benchmark Poisson problem in the two-dimensional case, that is, \begin{equation} \begin{array}{cl} -\Delta u(x,y) = 4 \pi^2 \sin(2 \pi x) (2 \cos(2 \pi y) - 1) \ & \text{in}\ \Omega=(0,1)^2,\\ u(x,y) = 0\ \ & \text{on}\ \partial \Omega, \end{array} \label{Experiments-DNLM-ex1} \end{equation} where the exact solution is given by $u(x,y) = \sin(2\pi x)(\cos(2\pi y)-1)$, and the interface $\Gamma=\partial\Omega_1\cap\partial\Omega_2$ is a straight line segment from $(0.5,0)$ to $(0.5,1)$ as depicted in \autoref{Experiments-DNLM-ex1-exact-solution}. It is noteworthy that, although the exact solution on each subregion is rather smooth, its normal derivative on the interface is non-trivial, which may bring difficulties for network training \cite{bajaj2021robust}; this differs from the widely-used non-overlapping example that has trivial gradients on the interface \cite{li2020deep}.
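For reproducibility, the optimization settings described at the beginning of this section admit a compact PyTorch summary. In the sketch below, \texttt{compute\_loss} is a hypothetical stand-in for the mini-batch subproblem objectives, while the remaining settings mirror the description above:
\begin{verbatim}
import torch

# one subdomain network: 3 hidden layers of 50 tanh units
net = torch.nn.Sequential(
    torch.nn.Linear(2, 50), torch.nn.Tanh(),
    torch.nn.Linear(50, 50), torch.nn.Tanh(),
    torch.nn.Linear(50, 50), torch.nn.Tanh(),
    torch.nn.Linear(50, 1))

optimizer = torch.optim.Adam(net.parameters(), lr=0.1)
# divide the learning rate by 10 at epochs 600 and 800
scheduler = torch.optim.lr_scheduler.MultiStepLR(
    optimizer, milestones=[600, 800], gamma=0.1)

def compute_loss(net, batch_idx):
    # dummy placeholder: in practice this evaluates the PINN residual or the
    # compensated deep Ritz energy on the current mini-batch
    x = torch.rand(4000, 2)        # 20000 interior samples / 5 mini-batches
    return net(x).pow(2).mean()

for epoch in range(1000):          # terminate after 1000 epochs
    for batch_idx in range(5):     # 5 mini-batches per epoch
        optimizer.zero_grad()
        loss = compute_loss(net, batch_idx)
        loss.backward()
        optimizer.step()
    scheduler.step()
\end{verbatim}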
\begin{figure}[htp] \centering \includegraphics[width=0.212\textwidth]{figure-DNLM//fig-DN-ex1-domain.png} \hspace*{0.1cm} \includegraphics[width=0.24\textwidth]{figure-DNLM//fig-DN-ex1-exact-u.pdf} \includegraphics[width=0.251\textwidth]{figure-DNLM//fig-DN-ex1-exact-u-dx.pdf} \caption{From left to right: decomposition of domain into two subregions, exact solution $u(x,y)$, and its partial derivatives $\partial_x u(x,y)$ for the numerical example \eqref{Experiments-DNLM-ex1}.} \label{Experiments-DNLM-ex1-exact-solution} \end{figure} Based on the conventional Dirichlet-Neumann algorithm \cite{toselli2004domain,quarteroni1999domain} that is defined in terms of differential operators, we first conduct experiments using the most commonly used PINNs \cite{raissi2019physics,karniadakis2021physics} as the numerical solver of all decomposed subproblems. We show in \autoref{Experiments-DNLM-ex1-DN-PINNs-solution} the iterative numerical solutions over the entire computational domain in a typical simulation, and in \autoref{Experiments-DNLM-ex1-DN-PINNs-error} the corresponding pointwise absolute errors. Here, the initial guess for the Dirichlet interface condition is set to be \begin{equation*} h^{[0]}(x,y) = \sin(2\pi x) (\cos(2\pi y) - 1) - 50xy(x-1)(y-1)\ \ \ \text{on}\ \Gamma, \end{equation*} which remains unchanged for the methods tested below. Unfortunately, DN-PINNs fails to converge to the correct solution of \eqref{Experiments-DNLM-ex1}, as shown in \autoref{Experiments-DNLM-ex1-DN-PINNs}, since the trained networks are prone to overfitting on the interface \cite{dockhorn2019discussion}. In other words, with the interface conditions being included as extra soft constraints in the loss function and the size of training data on the interface being smaller than that of the interior domains, the trained model using the standard PINN approach \cite{raissi2019physics} suffers from the issue of boundary overfitting \cite{dockhorn2019discussion,bajaj2021robust}, and therefore fails to be accurate enough for the prediction of local fluxes even if the training loss is very small. As a result, it hampers the convergence of the outer iteration; such overfitting is perhaps inevitable in practice for problems with complex interface conditions. In fact, a straightforward replacement of the numerical solvers by other learning strategies, \textit{e.g.}, the deep Ritz method \cite{yu2018deep}, suffers from the same issue. \begin{figure}[htp] \centering \begin{subfigure}[htp]{\textwidth} \centering \includegraphics[width=0.24\textwidth]{figure-DNLM//fig-DN-ex1-DN-PINNs-u-NN-ite-1.png} \includegraphics[width=0.24\textwidth]{figure-DNLM//fig-DN-ex1-DN-PINNs-u-NN-ite-3.png} \includegraphics[width=0.24\textwidth]{figure-DNLM//fig-DN-ex1-DN-PINNs-u-NN-ite-6.png} \includegraphics[width=0.24\textwidth]{figure-DNLM//fig-DN-ex1-DN-PINNs-u-NN-ite-9.png} \caption{The numerical solutions $\hat{u}^{[k]}(x,y)$ along the outer iterations. } \label{Experiments-DNLM-ex1-DN-PINNs-solution} \end{subfigure} \begin{subfigure}[htp]{\textwidth} \centering \includegraphics[width=0.24\textwidth]{figure-DNLM//fig-DN-ex1-DN-PINNs-pterr-ite-1.png} \includegraphics[width=0.24\textwidth]{figure-DNLM//fig-DN-ex1-DN-PINNs-pterr-ite-3.png} \includegraphics[width=0.24\textwidth]{figure-DNLM//fig-DN-ex1-DN-PINNs-pterr-ite-6.png} \includegraphics[width=0.24\textwidth]{figure-DNLM//fig-DN-ex1-DN-PINNs-pterr-ite-9.png} \caption{The pointwise absolute errors $|\hat{u}^{[k]}(x,y) - u(x,y)|$ along the outer iterations.
} \label{Experiments-DNLM-ex1-DN-PINNs-error} \end{subfigure} \vspace{-0.2cm} \caption{Numerical results of example \eqref{Experiments-DNLM-ex1} using the DN-PINNs on the test dataset.} \label{Experiments-DNLM-ex1-DN-PINNs} \end{figure} In contrast, although the predicted flux obtained from the network solution of the Dirichlet subproblem is usually of unacceptably low accuracy, our proposed method does not need to explicitly enforce the flux continuity condition along the interface, thereby keeping the outer iteration effective in the presence of an overfitted interface (see \autoref{Experiments-DNLM-ex1-Overfit-Dirichlet-Subproblem}). To validate our statements, we show in \autoref{Experiments-DNLM-ex1-DNLM-PINN} and \autoref{Experiments-DNLM-ex1-DNLM-DeepRitz} the computational results using our Dirichlet-Neumann learning algorithm \ref{Algorithm-DN-Learning-2Subdomains}, where the PINN \cite{raissi2019physics} and the deep Ritz method \cite{yu2018deep} are employed for solving the Dirichlet subproblem, respectively. \begin{figure}[htp] \centering \begin{subfigure}[htp]{\textwidth} \centering \includegraphics[width=0.24\textwidth]{figure-DNLM//fig-DN-ex1-DNLM-PINN-u-NN-ite-1.png} \includegraphics[width=0.24\textwidth]{figure-DNLM//fig-DN-ex1-DNLM-PINN-u-NN-ite-3.png} \includegraphics[width=0.24\textwidth]{figure-DNLM//fig-DN-ex1-DNLM-PINN-u-NN-ite-6.png} \includegraphics[width=0.24\textwidth]{figure-DNLM//fig-DN-ex1-DNLM-PINN-u-NN-ite-9.png} \caption{The numerical solutions $\hat{u}^{[k]}(x,y)$ along the outer iterations. } \label{Experiments-DNLM-ex1-DNLM-PINN-solution} \end{subfigure} \begin{subfigure}[htp]{\textwidth} \centering \includegraphics[width=0.24\textwidth]{figure-DNLM//fig-DN-ex1-DNLM-PINN-pterr-ite-1.png} \includegraphics[width=0.24\textwidth]{figure-DNLM//fig-DN-ex1-DNLM-PINN-pterr-ite-3.png} \includegraphics[width=0.24\textwidth]{figure-DNLM//fig-DN-ex1-DNLM-PINN-pterr-ite-6.png} \includegraphics[width=0.24\textwidth]{figure-DNLM//fig-DN-ex1-DNLM-PINN-pterr-ite-9.png} \caption{The pointwise absolute errors $|\hat{u}^{[k]}(x,y) - u(x,y)|$ along the outer iterations. } \label{Experiments-DNLM-ex1-DNLM-PINN-error} \end{subfigure} \vspace{-0.2cm} \caption{Numerical results of example \eqref{Experiments-DNLM-ex1} using our DNLM (PINN) on the test dataset.} \label{Experiments-DNLM-ex1-DNLM-PINN} \end{figure} \begin{figure}[htp] \centering \begin{subfigure}[htp]{\textwidth} \centering \includegraphics[width=0.24\textwidth]{figure-DNLM//fig-DN-ex1-DNLM-DeepRitz-u-NN-ite-1.png} \includegraphics[width=0.24\textwidth]{figure-DNLM//fig-DN-ex1-DNLM-DeepRitz-u-NN-ite-3.png} \includegraphics[width=0.24\textwidth]{figure-DNLM//fig-DN-ex1-DNLM-DeepRitz-u-NN-ite-6.png} \includegraphics[width=0.24\textwidth]{figure-DNLM//fig-DN-ex1-DNLM-DeepRitz-u-NN-ite-9.png} \caption{The numerical solutions $\hat{u}^{[k]}(x,y)$ along the outer iterations. } \label{Experiments-DNLM-ex1-DNLM-DeepRitz-solution} \end{subfigure} \begin{subfigure}[htp]{\textwidth} \centering \includegraphics[width=0.24\textwidth]{figure-DNLM//fig-DN-ex1-DNLM-DeepRitz-pterr-ite-1.png} \includegraphics[width=0.24\textwidth]{figure-DNLM//fig-DN-ex1-DNLM-DeepRitz-pterr-ite-3.png} \includegraphics[width=0.24\textwidth]{figure-DNLM//fig-DN-ex1-DNLM-DeepRitz-pterr-ite-6.png} \includegraphics[width=0.24\textwidth]{figure-DNLM//fig-DN-ex1-DNLM-DeepRitz-pterr-ite-9.png} \caption{The pointwise absolute errors $|\hat{u}^{[k]}(x,y) - u(x,y)|$ along the outer iterations.
} \label{Experiments-DNLM-ex1-DNLM-DeepRitz-error} \end{subfigure} \vspace{-0.2cm} \caption{Numerical results of example \eqref{Experiments-DNLM-ex1} using our DNLM (deep Ritz) on the test dataset.} \label{Experiments-DNLM-ex1-DNLM-DeepRitz} \end{figure} \begin{figure}[htp] \centering \begin{subfigure}[htp]{\textwidth} \centering \includegraphics[width=0.18\textwidth]{figure-DNLM//fig-DN-ex1-DNLM-PINN-D-u-dx-ite-9.png} \includegraphics[width=0.18\textwidth]{figure-DNLM//fig-DN-ex1-DNLM-PINN-D-pterr-dx-ite-9.png} \caption{Network solution $\partial_x\hat{u}_1^{[9]}(x,y)$ and its error $|\partial_x \hat{u}_1^{[9]}(x,y) - \partial_x u_1(x,y)|$ using DNLM (PINN). } \end{subfigure} \begin{subfigure}[htp]{\textwidth} \centering \includegraphics[width=0.18\textwidth]{figure-DNLM//fig-DN-ex1-DNLM-DeepRitz-D-u-dx-ite-9.png} \includegraphics[width=0.18\textwidth]{figure-DNLM//fig-DN-ex1-DNLM-DeepRitz-D-pterr-dx-ite-9.png} \caption{Network solution $\partial_x\hat{u}_1^{[9]}(x,y)$ and its error $|\partial_x \hat{u}_1^{[9]}(x,y) - \partial_x u_1(x,y)|$ using DNLM (deep Ritz). } \end{subfigure} \vspace{-0.2cm} \caption{Overfitting phenomenon in solving the Dirichlet subproblem of \eqref{Experiments-DNLM-ex1} on testdata.} \label{Experiments-DNLM-ex1-Overfit-Dirichlet-Subproblem} \end{figure} As can be observed from \autoref{Experiments-DNLM-ex1-DNLM-PINN} and \autoref{Experiments-DNLM-ex1-DNLM-DeepRitz}, the predicted solution using our learning methods is in agreement with the true solution over the entire computational domain, while the corresponding first-order derivatives shown in \autoref{Experiments-DNLM-ex1-Overfit-Dirichlet-Subproblem} indicate that the network solution of the Dirichlet subproblem still overfits at the interface. More quantitatively, we run the simulations 5 times to calculate the relative $L_2$ errors, and the results (mean value $\pm$ standard deviation) are reported in \autoref{Experiments-DNLM-ex1-Err-Table} and \autoref{Experiments-DNLM-ex1-Err-Figure}. By employing our proposed compensated deep Ritz method for solving the mixed Neumann-Dirichlet subproblem, our learning algorithms clearly converge to the exact solution, while DN-PINNs typically diverges due to the lack of an accurate flux estimate along the interface. Moreover, as the solution of \eqref{Experiments-DNLM-ex1} is rather smooth on each subregion, it can be found in \autoref{Experiments-DNLM-ex1-Err-Table} and \autoref{Experiments-DNLM-ex1-Err-Figure} that the DNLM (PINN) performs better than the DNLM (deep Ritz). This is because the second-order derivatives are explicitly involved during training, leading to better estimates of the solution's gradient inside the computational domain (see \autoref{Experiments-DNLM-ex1-Overfit-Dirichlet-Subproblem}). \begin{table}[htp] \caption{Relative $L_2$ errors of the predicted solution along the outer iteration $k$ for example \eqref{Experiments-DNLM-ex1}, with mean value ($\pm$ standard deviation) being reported over 5 runs.} \centering \renewcommand{\arraystretch}{1.1} \begin{tabular}{ | c || c | c | c | c | c | c | } \hline \multicolumn{2}{|c|}{ \diagbox[width=17em]{Relative Errors}{Outer Iterations} } & 1 & 3 & 5 & 7 & 9 \\ \hline \hline \multirow{5}{*}{$ \displaystyle \!\!
\frac{ \lVert \hat{u}^{[k]} - u \rVert_{L_2(\Omega)} } { \lVert u \rVert_{L_2(\Omega)} }\!\!$} & DN-PINNs & \makecell{24.16 \\ \!($\pm$ 48.04)\!} & \makecell{6.70 \\ \!($\pm$ 12.10)\!} & \makecell{2.28 \\ \!($\pm$ 2.70)\!} & \makecell{0.91 \\ \!($\pm$ 0.52)\!} & \makecell{1.15 \\ \!($\pm$ 0.49)\!} \\ \cline{2-7} & DNLM (PINN) & \makecell{9.16 \\ \!($\pm$ 2.34)\!} & \makecell{2.81 \\ \!($\pm$ 0.56)\!} & \makecell{1.17 \\ \!($\pm$ 0.71)\!} & \makecell{0.08 \\ \!($\pm$ 0.07)\!} & \makecell{0.10 \\ \!($\pm$ 0.04)\!} \\ \cline{2-7} & \!\!\! DNLM (Deep Ritz)\! & \makecell{9.50 \\ \!($\pm$ 2.46)\!} & \makecell{1.83 \\ \!($\pm$ 0.86)\!} & \makecell{0.41 \\ \!($\pm$ 0.35)\!} & \makecell{ 0.62 \\ \!($\pm$ 0.55)\!} & \makecell{0.40 \\ \!($\pm$ 0.49)\!} \\ \hline \end{tabular} \label{Experiments-DNLM-ex1-Err-Table} \end{table} \begin{figure}[htp] \begin{center} \includegraphics[width=0.7\textwidth]{figure-DNLM//fig-DN-ex1-lossL2-curves.png} \caption{Relative $L_2$ errors on testdata along outer iterations for example \eqref{Experiments-DNLM-ex1}.} \label{Experiments-DNLM-ex1-Err-Figure} \end{center} \end{figure} \subsubsection{Poisson's Equation with Zigzag Interface} To demonstrate the advantage of our mesh-free approach over the traditional mesh-based numerical methods \cite{toselli2004domain}, we consider the previous example but with a more complex interface geometry, namely, \begin{equation} \begin{array}{cl} -\Delta u(x,y) = 4\pi^2 \sin(2 \pi y) (2 \cos(2 \pi x) - 1) \ & \text{in}\ \Omega=(0,1)^2,\\ u(x,y) = 0\ \ & \text{on}\ \partial \Omega, \end{array} \label{Experiments-DNLM-ex2} \end{equation} where the exact solution is defined as $u(x,y) = \sin(2\pi y)(\cos(2\pi x)-1)$ and the interface is a curved zigzag line as depicted in \autoref{Experiments-DNLM-ex2-exact-solution}. More precisely, the zigzag function reads \begin{equation*} x = c ( a(20y-\text{floor}(20y)) + b ) + 0.5 \end{equation*} where the coefficients are $ a = 0.05 (-1 + 2\times \text{mod}(\text{floor}(20y), 2))$, $b=-0.05\times \text{mod}(\text{floor}(20y),2)$, and $c=-2\times\text{mod}(\text{floor}(10y),2)+1$, thereby enabling the sample generation process inside each subdomain and along its boundaries (see the sampling sketch below). Our proposed learning algorithm \ref{Algorithm-DN-Learning-2Subdomains} can easily handle such irregular boundary shapes (see \autoref{Experiments-DNLM-ex2-DNLM-PINN} and \autoref{Experiments-DNLM-ex2-DNLM-DeepRitz}), while the traditional finite difference or finite element method \cite{brenner2008mathematical} calls for a computationally expensive mesh generation procedure. \begin{figure}[htp] \centering \includegraphics[width=0.212\textwidth]{figure-DNLM//fig-DN-ex2-domain.png} \hspace{0.1cm} \includegraphics[width=0.24\textwidth]{figure-DNLM//fig-DN-ex1-exact-u.pdf} \includegraphics[width=0.251\textwidth]{figure-DNLM//fig-DN-ex1-exact-u-dx.pdf} \includegraphics[width=0.239\textwidth]{figure-DNLM//fig-DN-ex1-exact-u-dy.pdf} \caption{From left to right: decomposition of domain into two subregions, exact solution $u(x,y)$, and its partial derivatives $\partial_x u(x,y)$, $\partial_y u(x,y)$ for the numerical example \eqref{Experiments-DNLM-ex2}.} \label{Experiments-DNLM-ex2-exact-solution} \end{figure} We first conduct experiments using the DN-PINNs scheme \cite{li2019d3m} for solving \eqref{Experiments-DNLM-ex2}, and the numerical results in a typical simulation are depicted in \autoref{Experiments-DNLM-ex2-DN-PINNs}.
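For completeness, interface samples on the zigzag curve can be generated directly from the parametrization given above; the following PyTorch sketch is illustrative (sample sizes as in our setup):
\begin{verbatim}
import torch

def zigzag_x(y):
    # x-coordinate of the zigzag interface at height y (vectorized)
    t = 20.0 * y - torch.floor(20.0 * y)     # sawtooth in y with period 1/20
    a = 0.05 * (-1.0 + 2.0 * (torch.floor(20.0 * y) % 2))
    b = -0.05 * (torch.floor(20.0 * y) % 2)
    c = -2.0 * (torch.floor(10.0 * y) % 2) + 1.0
    return c * (a * t + b) + 0.5

y = torch.rand(5000)                         # N_Gamma samples along the interface
gamma_samples = torch.stack([zigzag_x(y), y], dim=1)
\end{verbatim}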
Here, the initial guess is \begin{equation*} h^{[0]}(x,y) = \sin(2\pi x)(\cos(2\pi y)-1) - 1000\sin(2\pi x)^2\sin(2\pi y)\ \ \ \text{on}\ \Gamma, \end{equation*} which remains unchanged for the other methods tested below. As before, the DN-PINNs scheme fails to converge to the exact solution shown in \autoref{Experiments-DNLM-ex2-exact-solution}, since the network solution of the Dirichlet subproblem learns to satisfy the constrained equations subject to an overfitted interface condition. \begin{figure}[htp] \centering \begin{subfigure}[htp]{\textwidth} \includegraphics[width=0.24\textwidth]{figure-DNLM//fig-DN-ex2-DN-PINNs-u-NN-ite-1.png} \includegraphics[width=0.24\textwidth]{figure-DNLM//fig-DN-ex2-DN-PINNs-u-NN-ite-3.png} \includegraphics[width=0.24\textwidth]{figure-DNLM//fig-DN-ex2-DN-PINNs-u-NN-ite-6.png} \includegraphics[width=0.24\textwidth]{figure-DNLM//fig-DN-ex2-DN-PINNs-u-NN-ite-9.png} \caption{The numerical solutions $\hat{u}^{[k]}(x,y)$ along the outer iterations. } \label{Experiments-DNLM-ex2-DN-PINNs-solution} \end{subfigure} \begin{subfigure}[htp]{\textwidth} \includegraphics[width=0.24\textwidth]{figure-DNLM//fig-DN-ex2-DN-PINNs-pterr-ite-1.png} \includegraphics[width=0.24\textwidth]{figure-DNLM//fig-DN-ex2-DN-PINNs-pterr-ite-3.png} \includegraphics[width=0.24\textwidth]{figure-DNLM//fig-DN-ex2-DN-PINNs-pterr-ite-6.png} \includegraphics[width=0.24\textwidth]{figure-DNLM//fig-DN-ex2-DN-PINNs-pterr-ite-9.png} \caption{The pointwise absolute errors $|\hat{u}^{[k]}(x,y) - u(x,y)|$ along the outer iterations. } \label{Experiments-DNLM-ex2-DN-PINNs-error} \end{subfigure} \vspace{-0.2cm} \caption{Numerical results of example \eqref{Experiments-DNLM-ex2} using the DN-PINNs on the test dataset.} \label{Experiments-DNLM-ex2-DN-PINNs} \end{figure} On the other hand, by replacing the numerical solver of the Neumann subproblem with our compensated deep Ritz method, the numerical results depicted in \autoref{Experiments-DNLM-ex2-DNLM-PINN}\footnote{The first panel in \autoref{Experiments-DNLM-ex2-DNLM-PINN-solution} shows that the current solution is still far from the true one; this is not restated in what follows.} indicate that our DNLM (PINN) approach can obtain a satisfactory approximation to the exact solution of \eqref{Experiments-DNLM-ex2}, which circumvents the meshing procedure that remains challenging for problems with complex interfaces. Moreover, our learning approaches remain effective in the presence of interface overfitting (see \autoref{Experiments-DNLM-ex2-Overfit-Dirichlet-Subproblem}), and are therefore highly desirable in practice since overfitting always occurs to a greater or lesser extent.
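The first-order derivatives visualized in the overfitting figures are extracted from the trained networks by automatic differentiation; a hedged sketch of this post-processing step (names illustrative) reads:
\begin{verbatim}
import torch

def partial_x(net, xy):
    # partial derivative of the network solution with respect to x
    xy = xy.clone().requires_grad_(True)
    return torch.autograd.grad(net(xy).sum(), xy)[0][:, 0:1]

# usage (illustrative): dudx = partial_x(u1_net, test_points)
\end{verbatim}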
\begin{figure}[htp] \centering \begin{subfigure}[htp]{\textwidth} \includegraphics[width=0.24\textwidth]{figure-DNLM//fig-DN-ex2-DNLM-PINN-u-NN-ite-1.png} \includegraphics[width=0.24\textwidth]{figure-DNLM//fig-DN-ex2-DNLM-PINN-u-NN-ite-2.png} \includegraphics[width=0.24\textwidth]{figure-DNLM//fig-DN-ex2-DNLM-PINN-u-NN-ite-3.png} \includegraphics[width=0.24\textwidth]{figure-DNLM//fig-DN-ex2-DNLM-PINN-u-NN-ite-4.png} \caption{The numerical solutions $\hat{u}^{[k]}(x,y)$ along the outer iterations.\footnotemark} \label{Experiments-DNLM-ex2-DNLM-PINN-solution} \end{subfigure} \begin{subfigure}[htp]{\textwidth} \includegraphics[width=0.24\textwidth]{figure-DNLM//fig-DN-ex2-DNLM-PINN-pterr-ite-1.png} \includegraphics[width=0.24\textwidth]{figure-DNLM//fig-DN-ex2-DNLM-PINN-pterr-ite-2.png} \includegraphics[width=0.24\textwidth]{figure-DNLM//fig-DN-ex2-DNLM-PINN-pterr-ite-3.png} \includegraphics[width=0.24\textwidth]{figure-DNLM//fig-DN-ex2-DNLM-PINN-pterr-ite-4.png} \caption{The pointwise absolute errors $|\hat{u}^{[k]}(x,y) - u(x,y)|$ along the outer iterations. } \label{Experiments-DNLM-ex2-DNLM-PINN-error} \end{subfigure} \vspace{-0.2cm} \caption{Numerical results of example \eqref{Experiments-DNLM-ex2} using our DNLM (PINN) on the test dataset.} \label{Experiments-DNLM-ex2-DNLM-PINN} \end{figure} Note that when the deep Ritz method \cite{yu2018deep} is adopted for solving the Dirichlet subproblem, the accuracy of approximate gradients inside the computational domain is no longer comparable to that of the PINN approach \cite{chen2020comparison}. The situation can become even worse for irregular domains, and therefore the DNLM (deep Ritz) may fail to be accurate enough, as shown in \autoref{Experiments-DNLM-ex2-DNLM-DeepRitz}. To further validate our statements, we show in \autoref{Experiments-DNLM-ex2-Err-Table} and \autoref{Experiments-DNLM-ex2-Err-Figure} the quantitative results over 5 runs, where DNLM (PINN) outperforms DN-PINNs and DNLM (deep Ritz) in terms of accuracy. \begin{figure}[htp] \centering \begin{subfigure}[htp]{\textwidth} \includegraphics[width=0.24\textwidth]{figure-DNLM//fig-DN-ex2-DNLM-DeepRitz-u-NN-ite-1.png} \includegraphics[width=0.24\textwidth]{figure-DNLM//fig-DN-ex2-DNLM-DeepRitz-u-NN-ite-3.png} \includegraphics[width=0.24\textwidth]{figure-DNLM//fig-DN-ex2-DNLM-DeepRitz-u-NN-ite-6.png} \includegraphics[width=0.24\textwidth]{figure-DNLM//fig-DN-ex2-DNLM-DeepRitz-u-NN-ite-9.png} \caption{The numerical solutions $\hat{u}^{[k]}(x,y)$ along the outer iterations. } \label{Experiments-DNLM-ex2-DNLM-DeepRitz-solution} \end{subfigure} \begin{subfigure}[htp]{\textwidth} \includegraphics[width=0.24\textwidth]{figure-DNLM//fig-DN-ex2-DNLM-DeepRitz-pterr-ite-1.png} \includegraphics[width=0.24\textwidth]{figure-DNLM//fig-DN-ex2-DNLM-DeepRitz-pterr-ite-3.png} \includegraphics[width=0.24\textwidth]{figure-DNLM//fig-DN-ex2-DNLM-DeepRitz-pterr-ite-6.png} \includegraphics[width=0.24\textwidth]{figure-DNLM//fig-DN-ex2-DNLM-DeepRitz-pterr-ite-9.png} \caption{The pointwise absolute errors $|\hat{u}^{[k]}(x,y) - u(x,y)|$ along the outer iterations.
} \label{Experiments-DNLM-ex2-DNLM-DeepRitz-error} \end{subfigure} \vspace{-0.2cm} \caption{Numerical results of example \eqref{Experiments-DNLM-ex2} using our DNLM (deep Ritz) on the test dataset.} \label{Experiments-DNLM-ex2-DNLM-DeepRitz} \end{figure} \begin{figure}[htp] \centering \begin{subfigure}[htp]{\textwidth} \centering \includegraphics[width=0.18\textwidth]{figure-DNLM//fig-DN-ex2-DNLM-PINN-D-u-dx-ite-4.png} \includegraphics[width=0.18\textwidth]{figure-DNLM//fig-DN-ex2-DNLM-PINN-D-pterr-dx-ite-4.png} \includegraphics[width=0.18\textwidth]{figure-DNLM//fig-DN-ex2-DNLM-PINN-D-u-dy-ite-4.png} \includegraphics[width=0.18\textwidth]{figure-DNLM//fig-DN-ex2-DNLM-PINN-D-pterr-dy-ite-4.png} \caption{Network solutions $\partial_x\hat{u}_1^{[4]}$, $\partial_y\hat{u}_1^{[4]}$ and errors $|\partial_x \hat{u}_1^{[4]} - \partial_x u_1|$, $|\partial_y \hat{u}_1^{[4]} - \partial_y u_1|$ using DNLM (PINN). } \end{subfigure} \begin{subfigure}[htp]{\textwidth} \centering \includegraphics[width=0.18\textwidth]{figure-DNLM//fig-DN-ex2-DNLM-DeepRitz-D-u-dx-ite-9.png} \includegraphics[width=0.18\textwidth]{figure-DNLM//fig-DN-ex2-DNLM-DeepRitz-D-pterr-dx-ite-9.png} \includegraphics[width=0.18\textwidth]{figure-DNLM//fig-DN-ex2-DNLM-DeepRitz-D-u-dy-ite-9.png} \includegraphics[width=0.18\textwidth]{figure-DNLM//fig-DN-ex2-DNLM-DeepRitz-D-pterr-dy-ite-9.png} \caption{Network solutions $\partial_x\hat{u}_1^{[9]}$, $\partial_y\hat{u}_1^{[9]}$ and errors $|\partial_x \hat{u}_1^{[9]} \!-\! \partial_x u_1|$, $|\partial_y \hat{u}_1^{[9]} - \partial_y u_1|$ using DNLM (deep Ritz). } \end{subfigure} \vspace{-0.2cm} \caption{Overfitting phenomenon in solving the Dirichlet subproblem of \eqref{Experiments-DNLM-ex2} on testdata.} \label{Experiments-DNLM-ex2-Overfit-Dirichlet-Subproblem} \end{figure} \begin{table}[htp] \caption{Relative $L_2$ errors of the predicted solution along the outer iteration $k$ for example \eqref{Experiments-DNLM-ex2}, with mean value ($\pm$ standard deviation) being reported over 5 runs.} \centering \renewcommand{\arraystretch}{1.1} \begin{tabular}{ | c || c | c | c | c | c | c | } \hline \multicolumn{2}{|c|}{ \diagbox[width=17em]{Relative Errors}{Outer Iterations} } & 1 & 2 & 3 & 4 & 5 \\ \hline \hline \multirow{5}{*}{$ \displaystyle \!\! \frac{ \lVert \hat{u}^{[k]} - u \rVert_{L_2(\Omega)} } { \lVert u \rVert_{L_2(\Omega)} }\!\!$} & DN-PINNs & \makecell{3.12 \\ \!($\pm$ 1.18)\!} & \makecell{1.92 \\ \!($\pm$ 0.95)\!} & \makecell{1.69 \\ \!($\pm$ 0.58)\!} & \makecell{1.77 \\ \!($\pm$ 0.32)\!} & \makecell{1.70 \\ \!($\pm$ 0.19)\!} \\ \cline{2-7} & DNLM (PINN) & \makecell{1.49 \\ \!($\pm$ 0.43)\!} & \makecell{0.95 \\ \!($\pm$ 0.39)\!} & \makecell{0.30 \\ \!($\pm$ 0.28)\!} & \makecell{0.27 \\ \!($\pm$ 0.26)\!} & \makecell{0.38 \\ \!($\pm$ 0.36)\!} \\ \cline{2-7} & \!\!\! DNLM (Deep Ritz)\! 
& \makecell{1.64 \\ \!($\pm$ 0.45)\!} & \makecell{1.37 \\ \!($\pm$ 0.14)\!} & \makecell{1.39 \\ \!($\pm$ 0.08)\!} & \makecell{ 1.40 \\ \!($\pm$ 0.02)\!} & \makecell{1.40 \\ \!($\pm$ 0.01)\!} \\ \hline \end{tabular} \label{Experiments-DNLM-ex2-Err-Table} \end{table} \begin{figure}[htp] \begin{center} \includegraphics[width=0.7\textwidth]{figure-DNLM//fig-DN-ex2-lossL2-curves.png} \caption{Relative $L_2$ errors on testdata along outer iterations for example \eqref{Experiments-DNLM-ex2}.} \label{Experiments-DNLM-ex2-Err-Figure} \end{center} \end{figure} \subsubsection{Poisson's Equation with Four Subdomains} Next, we consider the Poisson problem divided into four subproblems in two dimensions, that is, \begin{equation} \begin{array}{cl} -\Delta u(x,y) = f(x,y)\ & \text{in}\ \Omega=(0,1)^2, \\ u(x,y) = 0\ \ & \text{on}\ \partial \Omega, \end{array} \label{Experiments-DNLM-ex3} \end{equation} where the true solution is $u(x,y) = \sin(2\pi x)(\cos(2\pi y)-1) + 100xy(x-1)^2(y-1)^2 $, and $f(x,y)=$ $4 \pi^2 \sin(2 \pi x) (2 \cos(2 \pi y) - 1) - 200x(x-1)^2(3y-2) - 200y(y-1)^2(3x-2)$. Here the domain is decomposed using the red-black partition \cite{toselli2004domain}, and the subdomains are categorized into two sets \cite{toselli2004domain} as depicted in \autoref{Experiments-DNLM-ex3-exact-solution}. Then, the learning algorithms of interest can be deployed, and the initial guess of the Dirichlet data along the interface is chosen as \begin{equation*} h^{[0]}(x,y)=u(x,y)-50x(x-1)y \ \ \ \text{on}\ \Gamma. \end{equation*} \begin{figure}[htp] \centering \includegraphics[width=0.222\textwidth]{figure-DNLM//fig-DN-ex3-domain.png} \hspace{0.15cm} \includegraphics[width=0.237\textwidth]{figure-DNLM//fig-DN-ex3-exact-u.pdf} \includegraphics[width=0.25\textwidth]{figure-DNLM//fig-DN-ex3-exact-u-dx.pdf} \includegraphics[width=0.242\textwidth]{figure-DNLM//fig-DN-ex3-exact-u-dy.pdf} \caption{From left to right: decomposition of domain into four subregions, exact solution $u(x,y)$ and its partial derivatives $\partial_x u(x,y)$, $\partial_y u(x,y)$ for numerical example \eqref{Experiments-DNLM-ex3}.} \label{Experiments-DNLM-ex3-exact-solution} \end{figure} For the more general multidomain problem \eqref{Experiments-DNLM-ex3} with non-trivial flux functions, the numerical results using DN-PINNs are depicted in \autoref{Experiments-DNLM-ex3-DN-PINNs}, which is not guaranteed to converge to the true solution due to the issue of interface overfitting. \begin{figure}[htp] \centering \begin{subfigure}[htp]{\textwidth} \includegraphics[width=0.24\textwidth]{figure-DNLM//fig-DN-ex3-DN-PINNs-u-NN-ite-1.png} \includegraphics[width=0.24\textwidth]{figure-DNLM//fig-DN-ex3-DN-PINNs-u-NN-ite-3.png} \includegraphics[width=0.24\textwidth]{figure-DNLM//fig-DN-ex3-DN-PINNs-u-NN-ite-6.png} \includegraphics[width=0.24\textwidth]{figure-DNLM//fig-DN-ex3-DN-PINNs-u-NN-ite-9.png} \caption{The numerical solutions $\hat{u}^{[k]}(x,y)$ along the outer iterations. } \label{Experiments-DNLM-ex3-DN-PINNs-solution} \end{subfigure} \begin{subfigure}[htp]{\textwidth} \includegraphics[width=0.24\textwidth]{figure-DNLM//fig-DN-ex3-DN-PINNs-pterr-ite-1.png} \includegraphics[width=0.24\textwidth]{figure-DNLM//fig-DN-ex3-DN-PINNs-pterr-ite-3.png} \includegraphics[width=0.24\textwidth]{figure-DNLM//fig-DN-ex3-DN-PINNs-pterr-ite-6.png} \includegraphics[width=0.24\textwidth]{figure-DNLM//fig-DN-ex3-DN-PINNs-pterr-ite-9.png} \caption{The pointwise absolute errors $|\hat{u}^{[k]}(x,y) - u(x,y)|$ along the outer iterations.
} \label{Experiments-DNLM-ex3-DN-PINNs-error} \end{subfigure} \vspace{-0.2cm} \caption{Numerical results of example \eqref{Experiments-DNLM-ex3} using the DN-PINNs on the test dataset.} \label{Experiments-DNLM-ex3-DN-PINNs} \end{figure} Note that the overfitting phenomenon on subdomain interfaces persists when using our methods (see \autoref{Experiments-DNLM-ex3-Overfit-Dirichlet-Subproblem}). However, thanks to the compensated deep Ritz method, it can be observed from \autoref{Experiments-DNLM-ex3-DNLM-PINN} and \autoref{Experiments-DNLM-ex3-DNLM-DeepRitz} that the outer iterations using DNLM (PINN) and DNLM (deep Ritz) have converged, which validates the effectiveness of our proposed methods in dealing with the inevitable overfitting problem in practice. Moreover, we execute the simulation for 5 runs and report the statistical results in \autoref{Experiments-DNLM-ex3-Err-Table} and \autoref{Experiments-DNLM-ex3-Err-Figure} to further demonstrate that DNLM (PINN) outperforms the other methods in terms of accuracy. \begin{figure}[htp] \centering \begin{subfigure}[htp]{\textwidth} \includegraphics[width=0.24\textwidth]{figure-DNLM//fig-DN-ex3-DNLM-PINN-u-NN-ite-1.png} \includegraphics[width=0.24\textwidth]{figure-DNLM//fig-DN-ex3-DNLM-PINN-u-NN-ite-2.png} \includegraphics[width=0.24\textwidth]{figure-DNLM//fig-DN-ex3-DNLM-PINN-u-NN-ite-4.png} \includegraphics[width=0.24\textwidth]{figure-DNLM//fig-DN-ex3-DNLM-PINN-u-NN-ite-6.png} \caption{The numerical solutions $\hat{u}^{[k]}(x,y)$ along the outer iterations. } \label{Experiments-DNLM-ex3-DNLM-PINN-solution} \end{subfigure} \begin{subfigure}[htp]{\textwidth} \includegraphics[width=0.24\textwidth]{figure-DNLM//fig-DN-ex3-DNLM-PINN-pterr-ite-1.png} \includegraphics[width=0.24\textwidth]{figure-DNLM//fig-DN-ex3-DNLM-PINN-pterr-ite-2.png} \includegraphics[width=0.24\textwidth]{figure-DNLM//fig-DN-ex3-DNLM-PINN-pterr-ite-4.png} \includegraphics[width=0.24\textwidth]{figure-DNLM//fig-DN-ex3-DNLM-PINN-pterr-ite-6.png} \caption{The pointwise absolute errors $|\hat{u}^{[k]}(x,y) - u(x,y)|$ along the outer iterations. } \label{Experiments-DNLM-ex3-DNLM-PINN-error} \end{subfigure} \vspace{-0.2cm} \caption{Numerical results of example \eqref{Experiments-DNLM-ex3} using our DNLM (PINN) on the test dataset.} \label{Experiments-DNLM-ex3-DNLM-PINN} \end{figure} \begin{figure}[htp] \centering \begin{subfigure}[htp]{\textwidth} \includegraphics[width=0.24\textwidth]{figure-DNLM//fig-DN-ex3-DNLM-DeepRitz-u-NN-ite-1.png} \includegraphics[width=0.24\textwidth]{figure-DNLM//fig-DN-ex3-DNLM-DeepRitz-u-NN-ite-2.png} \includegraphics[width=0.24\textwidth]{figure-DNLM//fig-DN-ex3-DNLM-DeepRitz-u-NN-ite-4.png} \includegraphics[width=0.24\textwidth]{figure-DNLM//fig-DN-ex3-DNLM-DeepRitz-u-NN-ite-7.png} \caption{The numerical solutions $\hat{u}^{[k]}(x,y)$ along the outer iterations. } \label{Experiments-DNLM-ex3-DNLM-DeepRitz-solution} \end{subfigure} \begin{subfigure}[htp]{\textwidth} \includegraphics[width=0.24\textwidth]{figure-DNLM//fig-DN-ex3-DNLM-DeepRitz-pterr-ite-1.png} \includegraphics[width=0.24\textwidth]{figure-DNLM//fig-DN-ex3-DNLM-DeepRitz-pterr-ite-2.png} \includegraphics[width=0.24\textwidth]{figure-DNLM//fig-DN-ex3-DNLM-DeepRitz-pterr-ite-4.png} \includegraphics[width=0.24\textwidth]{figure-DNLM//fig-DN-ex3-DNLM-DeepRitz-pterr-ite-7.png} \caption{The pointwise absolute errors $|\hat{u}^{[k]}(x,y) - u(x,y)|$ along the outer iterations.
} \label{Experiments-DNLM-ex3-DNLM-DeepRitz-error} \end{subfigure} \vspace{-0.2cm} \caption{Numerical results of example \eqref{Experiments-DNLM-ex3} using our DNLM (deep Ritz) on the test dataset.} \label{Experiments-DNLM-ex3-DNLM-DeepRitz} \end{figure} \begin{figure}[htp] \centering \begin{subfigure}[htp]{\textwidth} \centering \includegraphics[width=0.22\textwidth]{figure-DNLM//fig-DN-ex3-DNLM-PINN-D-u-dx-ite-6.png} \includegraphics[width=0.22\textwidth]{figure-DNLM//fig-DN-ex3-DNLM-PINN-D-pterr-dx-ite-6.png} \includegraphics[width=0.22\textwidth]{figure-DNLM//fig-DN-ex3-DNLM-PINN-D-u-dy-ite-6.png} \includegraphics[width=0.22\textwidth]{figure-DNLM//fig-DN-ex3-DNLM-PINN-D-pterr-dy-ite-6.png} \caption{Network solutions $\partial_x\hat{u}_R^{[6]}$, $\partial_y\hat{u}_R^{[6]}$ and errors $|\partial_x \hat{u}_R^{[6]} - \partial_x u_R|$, $|\partial_y \hat{u}_R^{[6]} - \partial_y u_R|$ using DNLM (PINN). } \end{subfigure} \begin{subfigure}[htp]{\textwidth} \centering \includegraphics[width=0.22\textwidth]{figure-DNLM//fig-DN-ex3-DNLM-DeepRitz-D-u-dx-ite-6.png} \includegraphics[width=0.22\textwidth]{figure-DNLM//fig-DN-ex3-DNLM-DeepRitz-D-pterr-dx-ite-6.png} \includegraphics[width=0.22\textwidth]{figure-DNLM//fig-DN-ex3-DNLM-DeepRitz-D-u-dy-ite-6.png} \includegraphics[width=0.22\textwidth]{figure-DNLM//fig-DN-ex3-DNLM-DeepRitz-D-pterr-dy-ite-6.png} \caption{Network solutions $\partial_x\hat{u}_R^{[6]}$, $\partial_y\hat{u}_R^{[6]}$ and errors $|\partial_x \hat{u}_R^{[6]} - \partial_x u_R|$, $|\partial_y \hat{u}_R^{[6]} - \partial_y u_R|$ using DNLM (deep Ritz). } \end{subfigure} \vspace{-0.2cm} \caption{Overfitting phenomenon in solving the Dirichlet subproblem of \eqref{Experiments-DNLM-ex3} on testdata.} \label{Experiments-DNLM-ex3-Overfit-Dirichlet-Subproblem} \end{figure} \begin{table}[htp] \caption{Relative $L_2$ errors of the predicted solution along the outer iteration $k$ for example \eqref{Experiments-DNLM-ex3}, with mean value ($\pm$ standard deviation) being reported over 5 runs.} \centering \renewcommand{\arraystretch}{1.1} \begin{tabular}{ | c || c | c | c | c | c | c | } \hline \multicolumn{2}{|c|}{ \diagbox[width=17em]{Relative Errors}{Outer Iterations} } & 1 & 2 & 3 & 4 & 6 \\ \hline \hline \multirow{5}{*}{$ \displaystyle \!\! \frac{ \lVert \hat{u}^{[k]} - u \rVert_{L_2(\Omega)} } { \lVert u \rVert_{L_2(\Omega)} }\!\!$} & DN-PINNs & \makecell{2.40 \\ \!($\pm$ 0.13)\!} & \makecell{1.49 \\ \!($\pm$ 0.22)\!} & \makecell{0.82 \\ \!($\pm$ 0.13)\!} & \makecell{0.91 \\ \!($\pm$ 0.52)\!} & \makecell{0.58 \\ \!($\pm$ 0.18)\!} \\ \cline{2-7} & DNLM (PINN) & \makecell{1.68 \\ \!($\pm$ 0.16)\!} & \makecell{1.30 \\ \!($\pm$ 0.24)\!} & \makecell{0.88 \\ \!($\pm$ 0.26)\!} & \makecell{0.35 \\ \!($\pm$ 0.11)\!} & \makecell{0.29 \\ \!($\pm$ 0.28)\!} \\ \cline{2-7} & \!\!\! DNLM (Deep Ritz)\!
& \makecell{1.97 \\ \!($\pm$ 0.47)\!} & \makecell{1.38 \\ \!($\pm$ 0.29)\!} & \makecell{0.90 \\ \!($\pm$ 0.29)\!} & \makecell{ 0.95 \\ \!($\pm$ 0.63)\!} & \makecell{0.42 \\ \!($\pm$ 0.26)\!} \\ \hline \end{tabular} \label{Experiments-DNLM-ex3-Err-Table} \end{table} \begin{figure}[htp] \begin{center} \includegraphics[width=0.7\textwidth]{figure-DNLM//fig-DN-ex3-lossL2-curves.png} \caption{Relative $L_2$ errors on testdata along outer iterations for example \eqref{Experiments-DNLM-ex3}.} \label{Experiments-DNLM-ex3-Err-Figure} \end{center} \end{figure} \subsubsection{Poisson's Equation in High Dimension} As is well known, another key and desirable advantage of using deep learning solvers is that they can tackle difficulties induced by the curse of dimensionality. To this end, we consider a Poisson problem in five dimensions, \textit{i.e.}, \begin{equation} \begin{array}{cl} \displaystyle -\Delta u(x_1,\cdots,x_5) = 4\pi^2\sum\limits_{i=1}^5 \sin (2\pi x_i)\ & \text{in}\ \Omega = (0,1)^5, \\ u(x_1,\cdots,x_5) = 0\ \ & \text{on}\ \partial \Omega, \end{array} \label{Experiments-DNLM-ex4} \end{equation} where the exact solution is given by $u(x_1,\cdots,x_5) = \sum\limits_{i=1}^5 \sin (2\pi x_i)$, and the domain is decomposed into two subregions \begin{equation*} \Omega_1= \big\{(x_1,\cdots,x_5)\in\Omega \,\big|\, x_1<0.5 \big\}\ \ \text{and}\ \ \Omega_2= \big\{(x_1,\cdots,x_5)\in\Omega \,\big|\, x_1>0.5 \big\}. \end{equation*} Here, the initial guess of the Dirichlet data at the interface is chosen as \begin{equation*} h^{[0]}(\mathbf{x})=-5000\sum\limits_{j=1}^5\prod\limits_{i=1}^5 x_i(x_i-1), \end{equation*} and the fully-connected neural network employed here has 4 hidden layers of 50 neurons each. The computational results using the DN-PINNs, DNLM (PINN), and DNLM (Deep Ritz) approaches are shown in \autoref{Experiments-DNLM-ex4-Err-Table} and \autoref{Experiments-DNLM-ex4-Err-Figure}, which imply that our proposed learning algorithms achieve performance comparable to the existing learning methods. \begin{table}[htp] \caption{Relative $L_2$ errors of the predicted solution along the outer iteration $k$ for example \eqref{Experiments-DNLM-ex4}, with mean value ($\pm$ standard deviation) being reported over 5 runs.} \centering \renewcommand{\arraystretch}{1.1} \begin{tabular}{ | c || c | c | c | c | c |} \hline \multicolumn{2}{|c|}{ \diagbox[width=17em]{Relative Errors}{Outer Iterations} } & 1 & 3 & 5 & 7 \\ \hline \hline \multirow{5}{*}{$ \displaystyle \!\! \frac{ \lVert \hat{u}^{[k]} - u \rVert_{L_2(\Omega)} } { \lVert u \rVert_{L_2(\Omega)} }\!\!$} & DN-PINNs & \makecell{0.77 \\ \!($\pm$ 0.38)\!} & \makecell{0.08 \\ \!($\pm$ 0.12)\!} & \makecell{0.01 \\ \!($\pm$ 0.06)\!} & \makecell{0.02 \\ \!($\pm$ 0.03)\!} \\ \cline{2-6} & DNLM (PINN) & \makecell{0.48 \\ \!($\pm$ 0.03)\!} & \makecell{0.04 \\ \!($\pm$ 0.01)\!} & \makecell{0.05 \\ \!($\pm$ 0.02)\!} & \makecell{0.05 \\ \!($\pm$ 0.02)\!} \\ \cline{2-6} & \!\!\! DNLM (Deep Ritz)\!
& \makecell{0.54 \\ \!($\pm$ 0.80)\!} & \makecell{0.11 \\ \!($\pm$ 0.11)\!} & \makecell{0.05 \\ \!($\pm$ 0.02)\!} & \makecell{0.04\\ \!($\pm$ 0.02)\!} \\ \hline \end{tabular} \label{Experiments-DNLM-ex4-Err-Table} \end{table} \begin{figure}[htp] \begin{center} \includegraphics[width=0.7\textwidth]{figure-DNLM//fig-DN-ex4-lossL2-curves.png} \caption{Relative $L_2$ errors on testdata along outer iterations for example \eqref{Experiments-DNLM-ex4}.} \label{Experiments-DNLM-ex4-Err-Figure} \end{center} \end{figure} \subsubsection{High-Contrast Elliptic Equation} Note that, as mentioned in remark \ref{Remark-High-Contrast}, our proposed Dirichlet-Neumann learning algorithm \ref{Algorithm-DN-Learning-2Subdomains} can also be easily extended to solve the more complicated interface problems with high-contrast coefficients. As such, we consider an elliptic interface problem in two dimensions, that is, \begin{equation} \begin{array}{cl} -\nabla \cdot \left( c(x,y) \nabla u(x,y) \right) = 32 \pi^2 \sin(4\pi x)\sin(4\pi y)\ \ & \text{in}\ \Omega=(0,1)^2,\\ u(x,y) = 0\ \ & \text{on}\ \partial \Omega, \end{array} \label{Experiments-DNLM-ex5} \end{equation} where the domain is decomposed into four subregions using the red-black partition (see \autoref{Experiments-DNLM-ex5-exact-solution}), the exact solution is given by $u(x,y) = \sin(4\pi x) \sin(4\pi y) / c(x,y)$, and the coefficient $c(x,y)$ is piecewise constant with respect to the partition of the domain \begin{equation*} c(x,y) = \left\{ \begin{array}{cl} 1 \ & \text{in}\ \Omega_R,\\ 100\ \ & \text{in}\ \Omega_B. \end{array}\right. \end{equation*} \begin{figure}[htp] \centering \includegraphics[width=0.212\textwidth]{figure-DNLM//fig-DN-ex5-domain.png} \hspace{0.15cm} \includegraphics[width=0.245\textwidth]{figure-DNLM//fig-DN-ex5-exact-u.pdf} \includegraphics[width=0.24\textwidth]{figure-DNLM//fig-DN-ex5-exact-u-dx.pdf} \includegraphics[width=0.24\textwidth]{figure-DNLM//fig-DN-ex5-exact-u-dy.pdf} \caption{From left to right: decomposition of domain into four subregions, exact solution $u(x,y)$ and its partial derivatives $\partial_x u(x,y)$, $\partial_y u(x,y)$ for numerical example \eqref{Experiments-DNLM-ex5}.} \label{Experiments-DNLM-ex5-exact-solution} \end{figure} Here, we choose $h^{[0]}=100\cos(100\pi x)\cos(100\pi y)+100xy(x-1)^3(y-1)^3$ as the initial guess on the interface, and the computational results using DN-PINNs, DNLM (PINN), and DNLM (deep Ritz) are depicted in \autoref{Experiments-DNLM-ex5-DN-PINNs}, \autoref{Experiments-DNLM-ex5-DNLM-PINN}, and \autoref{Experiments-DNLM-ex5-DNLM-DeepRitz}, respectively. Clearly, our proposed learning methods facilitate the convergence of the outer iteration with higher accuracy than the straightforward DN-PINNs approach, which shows their potential for solving more complicated interface problems in the presence of interface overfitting (see \autoref{Experiments-DNLM-ex5-Overfit-Dirichlet-Subproblem}). \begin{figure}[htp] \centering \begin{subfigure}[htp]{\textwidth} \includegraphics[width=0.24\textwidth]{figure-DNLM//fig-DN-ex5-DN-PINNs-u-NN-ite-1.png} \includegraphics[width=0.24\textwidth]{figure-DNLM//fig-DN-ex5-DN-PINNs-u-NN-ite-3.png} \includegraphics[width=0.24\textwidth]{figure-DNLM//fig-DN-ex5-DN-PINNs-u-NN-ite-6.png} \includegraphics[width=0.24\textwidth]{figure-DNLM//fig-DN-ex5-DN-PINNs-u-NN-ite-9.png} \caption{The numerical solutions $\hat{u}^{[k]}(x,y)$ along the outer iterations.
} \label{Experiments-DNLM-ex5-DN-PINNs-solution} \end{subfigure} \begin{subfigure}[htp]{\textwidth} \includegraphics[width=0.24\textwidth]{figure-DNLM//fig-DN-ex5-DN-PINNs-pterr-ite-1.png} \includegraphics[width=0.24\textwidth]{figure-DNLM//fig-DN-ex5-DN-PINNs-pterr-ite-3.png} \includegraphics[width=0.24\textwidth]{figure-DNLM//fig-DN-ex5-DN-PINNs-pterr-ite-6.png} \includegraphics[width=0.24\textwidth]{figure-DNLM//fig-DN-ex5-DN-PINNs-pterr-ite-9.png} \caption{The pointwise absolute errors $|\hat{u}^{[k]}(x,y) - u(x,y)|$ along the outer iterations. } \label{Experiments-DNLM-ex5-DN-PINNs-error} \end{subfigure} \vspace{-0.2cm} \caption{Numerical results of example \eqref{Experiments-DNLM-ex5} using the DN-PINNs on the test dataset.} \label{Experiments-DNLM-ex5-DN-PINNs} \end{figure} \begin{figure}[htp] \centering \begin{subfigure}[htp]{\textwidth} \includegraphics[width=0.24\textwidth]{figure-DNLM//fig-DN-ex5-DNLM-PINN-u-NN-ite-1.png} \includegraphics[width=0.24\textwidth]{figure-DNLM//fig-DN-ex5-DNLM-PINN-u-NN-ite-3.png} \includegraphics[width=0.24\textwidth]{figure-DNLM//fig-DN-ex5-DNLM-PINN-u-NN-ite-6.png} \includegraphics[width=0.24\textwidth]{figure-DNLM//fig-DN-ex5-DNLM-PINN-u-NN-ite-9.png} \caption{The numerical solutions $\hat{u}^{[k]}(x,y)$ along the outer iterations. } \label{Experiments-DNLM-ex5-DNLM-PINN-solution} \end{subfigure} \begin{subfigure}[htp]{\textwidth} \includegraphics[width=0.24\textwidth]{figure-DNLM//fig-DN-ex5-DNLM-PINN-pterr-ite-1.png} \includegraphics[width=0.24\textwidth]{figure-DNLM//fig-DN-ex5-DNLM-PINN-pterr-ite-3.png} \includegraphics[width=0.24\textwidth]{figure-DNLM//fig-DN-ex5-DNLM-PINN-pterr-ite-6.png} \includegraphics[width=0.24\textwidth]{figure-DNLM//fig-DN-ex5-DNLM-PINN-pterr-ite-9.png} \caption{The pointwise absolute errors $|\hat{u}^{[k]}(x,y) - u(x,y)|$ along the outer iterations. } \label{Experiments-DNLM-ex5-DNLM-PINN-error} \end{subfigure} \vspace{-0.2cm} \caption{Numerical results of example \eqref{Experiments-DNLM-ex5} using our DNLM (PINN) on the test dataset.} \label{Experiments-DNLM-ex5-DNLM-PINN} \end{figure} \begin{figure}[htp] \centering \begin{subfigure}[htp]{\textwidth} \includegraphics[width=0.24\textwidth]{figure-DNLM//fig-DN-ex5-DNLM-DeepRitz-u-NN-ite-1.png} \includegraphics[width=0.24\textwidth]{figure-DNLM//fig-DN-ex5-DNLM-DeepRitz-u-NN-ite-3.png} \includegraphics[width=0.24\textwidth]{figure-DNLM//fig-DN-ex5-DNLM-DeepRitz-u-NN-ite-6.png} \includegraphics[width=0.24\textwidth]{figure-DNLM//fig-DN-ex5-DNLM-DeepRitz-u-NN-ite-9.png} \caption{The numerical solutions $\hat{u}^{[k]}(x,y)$ along the outer iterations. } \label{Experiments-DNLM-ex5-DNLM-DeepRitz-solution} \end{subfigure} \begin{subfigure}[htp]{\textwidth} \includegraphics[width=0.24\textwidth]{figure-DNLM//fig-DN-ex5-DNLM-DeepRitz-pterr-ite-1.png} \includegraphics[width=0.24\textwidth]{figure-DNLM//fig-DN-ex5-DNLM-DeepRitz-pterr-ite-3.png} \includegraphics[width=0.24\textwidth]{figure-DNLM//fig-DN-ex5-DNLM-DeepRitz-pterr-ite-6.png} \includegraphics[width=0.24\textwidth]{figure-DNLM//fig-DN-ex5-DNLM-DeepRitz-pterr-ite-9.png} \caption{The pointwise absolute errors $|\hat{u}^{[k]}(x,y) - u(x,y)|$ along the outer iterations. 
} \label{Experiments-DNLM-ex5-DNLM-DeepRitz-error} \end{subfigure} \vspace{-0.2cm} \caption{Numerical results of example \eqref{Experiments-DNLM-ex5} using our DNLM (deep Ritz) on the test dataset.} \label{Experiments-DNLM-ex5-DNLM-DeepRitz} \end{figure} \begin{figure}[htp] \centering \begin{subfigure}[htp]{\textwidth} \centering \includegraphics[width=0.24\textwidth]{figure-DNLM//fig-DN-ex5-DNLM-PINN-D-u-dx-ite-7.png} \includegraphics[width=0.24\textwidth]{figure-DNLM//fig-DN-ex5-DNLM-PINN-D-pterr-dx-ite-7.png} \includegraphics[width=0.24\textwidth]{figure-DNLM//fig-DN-ex5-DNLM-PINN-D-u-dy-ite-7.png} \includegraphics[width=0.24\textwidth]{figure-DNLM//fig-DN-ex5-DNLM-PINN-D-pterr-dy-ite-7.png} \caption{Network solutions $\partial_x\hat{u}^{[7]}_R$, $\partial_y\hat{u}^{[7]}_R$ and errors $|\partial_x \hat{u}^{[7]}_R - \partial_x u_R|$, $|\partial_y \hat{u}^{[7]}_R - \partial_y u_R|$ using DNLM (PINN). } \end{subfigure} \begin{subfigure}[htp]{\textwidth} \centering \includegraphics[width=0.24\textwidth]{figure-DNLM//fig-DN-ex5-DNLM-DeepRitz-D-u-dx-ite-7.png} \includegraphics[width=0.24\textwidth]{figure-DNLM//fig-DN-ex5-DNLM-DeepRitz-D-pterr-dx-ite-7.png} \includegraphics[width=0.24\textwidth]{figure-DNLM//fig-DN-ex5-DNLM-DeepRitz-D-u-dy-ite-7.png} \includegraphics[width=0.24\textwidth]{figure-DNLM//fig-DN-ex5-DNLM-DeepRitz-D-pterr-dy-ite-7.png} \caption{Network solutions $\partial_x\hat{u}^{[7]}_R$, $\partial_y\hat{u}^{[7]}_R$ and errors $|\partial_x \hat{u}^{[7]}_R - \partial_x u_R|$, $|\partial_y \hat{u}^{[7]}_R - \partial_y u_R|$ using DNLM (deep Ritz). } \end{subfigure} \vspace{-0.2cm} \caption{Overfitting phenomenon in solving the Dirichlet subproblem of \eqref{Experiments-DNLM-ex5} on testdata.} \label{Experiments-DNLM-ex5-Overfit-Dirichlet-Subproblem} \end{figure} Moreover, the statistical results over 5 runs are reported in \autoref{Experiments-DNLM-ex5-Err-Table} and \autoref{Experiments-DNLM-ex5-Err-Figure}, which further validate the effectiveness of our proposed learning methods for solving the elliptic interface problems. \begin{table}[htp] \caption{ Relative $L_2$ errors of the predicted solution along the outer iteration $k$ for example \eqref{Experiments-DNLM-ex5}, with mean value ($\pm$ standard deviation) being reported over 5 runs.} \centering \renewcommand{\arraystretch}{1.1} \begin{tabular}{ | c || c | c | c | c | c | c | } \hline \multicolumn{2}{|c|}{ \diagbox[width=17em]{Relative Errors}{Outer Iterations} } & 1 & 3 & 5 & 6 & 7 \\ \hline \hline \multirow{5}{*}{$ \displaystyle \!\! \frac{ \lVert \hat{u}^{[k]} - u \rVert_{L_2(\Omega)} } { \lVert u \rVert_{L_2(\Omega)} }\!\!$} & DN-PINNs & \makecell{1.90 \\ \!($\pm$ 1.03)\!} & \makecell{1.17 \\ \!($\pm$ 0.34)\!} & \makecell{13.75 \\ \!($\pm$ 22.70)\!} & \makecell{20.19 \\ \!($\pm$ 33.04)\!} & \makecell{2.47 \\ \!($\pm$ 2.45)\!} \\ \cline{2-7} & DNLM (PINN) & \makecell{62.46 \\ \!($\pm$ 56.64)\!} & \makecell{0.92 \\ \!($\pm$ 0.31)\!} & \makecell{0.65 \\ \!($\pm$ 0.49)\!} & \makecell{0.83 \\ \!($\pm$ 0.40)\!} & \makecell{0.51 \\ \!($\pm$ 0.54)\!} \\ \cline{2-7} & \!\!\! DNLM (Deep Ritz)\! 
& \makecell{81.56 \\ \!($\pm$ 48.50)\!} & \makecell{0.98 \\ \!($\pm$ 0.14)\!} & \makecell{2.94 \\ \!($\pm$ 4.41)\!} & \makecell{4.72 \\ \!($\pm$ 8.80)\!} & \makecell{0.82 \\ \!($\pm$ 0.42)\!} \\ \hline \end{tabular} \label{Experiments-DNLM-ex5-Err-Table} \end{table}
\begin{figure}[htp] \begin{center} \includegraphics[width=0.7\textwidth]{figure-DNLM//fig-DN-ex5-lossL2-curves.png} \caption{ Relative $L_2$ errors on the test dataset along the outer iterations for example \eqref{Experiments-DNLM-ex5}.} \label{Experiments-DNLM-ex5-Err-Figure} \end{center} \end{figure}
\subsection{Robin-Robin Learning Method}
To demonstrate the effectiveness and efficiency of our compensated deep Ritz method for realizing the classical Robin-Robin algorithm, we consider the following Poisson equation in two dimensions
\begin{equation} \begin{array}{cl} -\Delta u(x,y) = 4 \pi^2 \sin(2 \pi x) (2 \cos(2 \pi y) - 1) \ & \text{in}\ \Omega=(0,1)^2,\\ u(x,y) = 0\ \ & \text{on}\ \partial \Omega, \end{array} \label{Experiments-RRLM-ex1} \end{equation}
where the exact solution is $u(x,y) = \sin(2\pi x)(\cos(2\pi y)-1)$, and the interface $\Gamma=\partial\Omega_1\cap\partial\Omega_2$ is a straight line segment from $(0.5,0)$ to $(0.5,1)$ as depicted in \autoref{Experiments-DNLM-ex1-exact-solution}. By choosing $(\kappa_1,\kappa_2)=(1,0.01)$, the computational results of a typical simulation using RR-PINNs, \textit{i.e.}, Algorithm \ref{Algorithm-RR-Learning-2Subdomains}, are depicted in \autoref{Experiments-RRLM-ex1-RR-PINNs}: the method converges to the true solution but requires more outer iterations than the DNLM (PINN) or DNLM (deep Ritz) approach (see \autoref{Experiments-DNLM-ex1-DNLM-PINN} or \autoref{Experiments-DNLM-ex1-DNLM-DeepRitz}).
\begin{figure}[htp] \centering \begin{subfigure}[htp]{\textwidth} \includegraphics[width=0.24\textwidth]{figure-RRLM//fig-RR-ex1-RR-PINNs-u-NN-ite-1.png} \includegraphics[width=0.24\textwidth]{figure-RRLM//fig-RR-ex1-RR-PINNs-u-NN-ite-5.png} \includegraphics[width=0.24\textwidth]{figure-RRLM//fig-RR-ex1-RR-PINNs-u-NN-ite-10.png} \includegraphics[width=0.24\textwidth]{figure-RRLM//fig-RR-ex1-RR-PINNs-u-NN-ite-15.png} \caption{The numerical solutions $\hat{u}^{[k]}(x,y)$ along the outer iterations. } \label{Experiments-RRLM-ex1-RR-PINNs-solution} \end{subfigure} \begin{subfigure}[htp]{\textwidth} \includegraphics[width=0.24\textwidth]{figure-RRLM//fig-RR-ex1-RR-PINNs-pterr-ite-1.png} \includegraphics[width=0.24\textwidth]{figure-RRLM//fig-RR-ex1-RR-PINNs-pterr-ite-5.png} \includegraphics[width=0.24\textwidth]{figure-RRLM//fig-RR-ex1-RR-PINNs-pterr-ite-10.png} \includegraphics[width=0.24\textwidth]{figure-RRLM//fig-RR-ex1-RR-PINNs-pterr-ite-15.png} \caption{The pointwise absolute errors $|\hat{u}^{[k]}(x,y) - u(x,y)|$ along the outer iterations. } \label{Experiments-RRLM-ex1-RR-PINNs-error} \end{subfigure} \vspace{-0.2cm}
\caption{Numerical results of example \eqref{Experiments-RRLM-ex1} using the RR-PINNs on the test dataset.} \label{Experiments-RRLM-ex1-RR-PINNs} \end{figure}
To further enhance the speed of outer convergence, these two parameters are set to $(\kappa_1,\kappa_2)=(1,1000)$ in what follows. Unfortunately, due to the weights imbalance, the RR-PINNs approach fails under this setting (see \autoref{Experiments-RRLM-ex2-RR-PINNs}).
On the other hand, the integration with our compensated deep Ritz method (see \autoref{Experiments-RRLM-ex2-RRLM-PINN} and \autoref{Experiments-RRLM-ex2-RRLM-DeepRItz}) remains convergent in the presence of interface overfitting, and requires only the substitution of the second subproblem with our proposed learning approach.
\begin{figure}[htp] \centering \begin{subfigure}[htp]{\textwidth} \includegraphics[width=0.24\textwidth]{figure-RRLM//fig-RR-ex2-RR-PINNs-u-NN-ite-1.png} \includegraphics[width=0.24\textwidth]{figure-RRLM//fig-RR-ex2-RR-PINNs-u-NN-ite-2.png} \includegraphics[width=0.24\textwidth]{figure-RRLM//fig-RR-ex2-RR-PINNs-u-NN-ite-3.png} \includegraphics[width=0.24\textwidth]{figure-RRLM//fig-RR-ex2-RR-PINNs-u-NN-ite-4.png} \caption{The numerical solutions $\hat{u}^{[k]}(x,y)$ along the outer iterations. } \label{Experiments-RRLM-ex2-RR-PINNs-solution} \end{subfigure} \begin{subfigure}[htp]{\textwidth} \includegraphics[width=0.24\textwidth]{figure-RRLM//fig-RR-ex2-RR-PINNs-pterr-ite-1.png} \includegraphics[width=0.24\textwidth]{figure-RRLM//fig-RR-ex2-RR-PINNs-pterr-ite-2.png} \includegraphics[width=0.24\textwidth]{figure-RRLM//fig-RR-ex2-RR-PINNs-pterr-ite-3.png} \includegraphics[width=0.24\textwidth]{figure-RRLM//fig-RR-ex2-RR-PINNs-pterr-ite-4.png} \caption{The pointwise absolute errors $|\hat{u}^{[k]}(x,y) - u(x,y)|$ along the outer iterations. } \label{Experiments-RRLM-ex2-RR-PINNs-error} \end{subfigure} \vspace{-0.2cm}
\caption{Numerical results of example \eqref{Experiments-RRLM-ex1} using the RR-PINNs on the test dataset.} \label{Experiments-RRLM-ex2-RR-PINNs} \end{figure}
\begin{figure}[htp] \centering \begin{subfigure}[htp]{\textwidth} \includegraphics[width=0.24\textwidth]{figure-RRLM//fig-RR-ex2-RRLM-PINN-u-NN-ite-1.png} \includegraphics[width=0.24\textwidth]{figure-RRLM//fig-RR-ex2-RRLM-PINN-u-NN-ite-2.png} \includegraphics[width=0.24\textwidth]{figure-RRLM//fig-RR-ex2-RRLM-PINN-u-NN-ite-3.png} \includegraphics[width=0.24\textwidth]{figure-RRLM//fig-RR-ex2-RRLM-PINN-u-NN-ite-4.png} \caption{The numerical solutions $\hat{u}^{[k]}(x,y)$ along the outer iterations. } \label{Experiments-RRLM-ex2-RRLM-PINN-solution} \end{subfigure} \begin{subfigure}[htp]{\textwidth} \includegraphics[width=0.24\textwidth]{figure-RRLM//fig-RR-ex2-RRLM-PINN-pterr-ite-1.png} \includegraphics[width=0.24\textwidth]{figure-RRLM//fig-RR-ex2-RRLM-PINN-pterr-ite-2.png} \includegraphics[width=0.24\textwidth]{figure-RRLM//fig-RR-ex2-RRLM-PINN-pterr-ite-3.png} \includegraphics[width=0.24\textwidth]{figure-RRLM//fig-RR-ex2-RRLM-PINN-pterr-ite-4.png} \caption{The pointwise absolute errors $|\hat{u}^{[k]}(x,y) - u(x,y)|$ along the outer iterations. } \label{Experiments-RRLM-ex2-RRLM-PINN-error} \end{subfigure} \vspace{-0.2cm}
\caption{Numerical results of example \eqref{Experiments-RRLM-ex1} using our RRLM (PINN) on the test dataset.} \label{Experiments-RRLM-ex2-RRLM-PINN} \end{figure}
\begin{figure}[htp] \centering \begin{subfigure}[htp]{\textwidth} \includegraphics[width=0.24\textwidth]{figure-RRLM//fig-RR-ex2-RRLM-DeepRitz-u-NN-ite-1.png} \includegraphics[width=0.24\textwidth]{figure-RRLM//fig-RR-ex2-RRLM-DeepRitz-u-NN-ite-2.png} \includegraphics[width=0.24\textwidth]{figure-RRLM//fig-RR-ex2-RRLM-DeepRitz-u-NN-ite-3.png} \includegraphics[width=0.24\textwidth]{figure-RRLM//fig-RR-ex2-RRLM-DeepRitz-u-NN-ite-4.png} \caption{The numerical solutions $\hat{u}^{[k]}(x,y)$ along the outer iterations.
} \label{Experiments-RRLM-ex2-RRLM-DeepRitz-solution} \end{subfigure} \begin{subfigure}[htp]{\textwidth} \includegraphics[width=0.24\textwidth]{figure-RRLM//fig-RR-ex2-RRLM-DeepRitz-pterr-ite-1.png} \includegraphics[width=0.24\textwidth]{figure-RRLM//fig-RR-ex2-RRLM-DeepRitz-pterr-ite-2.png} \includegraphics[width=0.24\textwidth]{figure-RRLM//fig-RR-ex2-RRLM-DeepRitz-pterr-ite-3.png} \includegraphics[width=0.24\textwidth]{figure-RRLM//fig-RR-ex2-RRLM-DeepRitz-pterr-ite-4.png} \caption{The pointwise absolute errors $|\hat{u}^{[k]}(x,y) - u(x,y)|$ along the outer iterations. } \label{Experiments-RRLM-ex2-RRLM-DeepRitz-error} \end{subfigure} \vspace{-0.2cm}
\caption{Numerical results of example \eqref{Experiments-RRLM-ex1} using our RRLM (deep Ritz) on the test dataset.} \label{Experiments-RRLM-ex2-RRLM-DeepRItz} \end{figure}
\section{Conclusion}
In this paper, a general framework is proposed for realizing the classical domain decomposition methods through deep learning approaches, based on the exchange of information between neighbouring subregions rather than on a particular domain partition strategy. For the methods that are based on a direct flux exchange, a key difficulty in deploying deep learning solvers is the issue of interface overfitting, which always occurs to a greater or lesser extent in practice. To deal with the overfitted interface conditions, we develop a novel learning approach, \textit{i.e.}, the compensated deep Ritz method, that enables flux transmission across subdomain interfaces with guaranteed accuracy. As a result, it allows us to construct effective learning approaches for realizing the classical Dirichlet-Neumann, Neumann-Neumann, and Dirichlet-Dirichlet algorithms, and therefore to fully leverage the advantages of deep learning solvers in dealing with complicated geometric domains and high-dimensional problems. On the other hand, the Robin-Robin algorithm, which does not require the flux exchange but may suffer from the issue of weights imbalance, can also benefit from our compensated deep Ritz method. Finally, we conduct numerical experiments on a series of elliptic boundary value problems to demonstrate the effectiveness of our proposed learning algorithms. Possible future explorations include coarse space acceleration \cite{mercier2021coarse}, adaptive sampling techniques \cite{he2022mesh}, and improved network architectures that could further accelerate convergence at a reduced cost.
\clearpage \bibliographystyle{siamplain}
\section{\label{sec:introduction}Introduction} Reservoir computing \cite{jaeger2001echo,maass2002real,jaeger2007echo,lukovsevivcius2009reservoir} is a machine-learning approach that has demonstrated success at a variety of tasks, including time series prediction \cite{jaeger2004harnessing,parlitz2005,wyffels2010,pathak2017using} and inferring unmeasured variables of a dynamical system from measured variables \cite{lu2017reservoir,zimmermann2018}. In this approach, a ``reservoir'' is a high-dimensional, non-autonomous (driven) dynamical system, chosen independently of the task. A particular task provides an input time series, and the reservoir state as a function of time is regarded as a ``raw'' output time series, which is post-processed to fit the task. The post-processing function is determined, typically by linear regression, from a limited-time ``training'' data set consisting of the desired output time series for a given input time series. Reservoir computing can be performed entirely in software, typically with an artificial neural network model, or with a physical reservoir; examples of the latter include a bucket of water \cite{fernando2003pattern}, an electronic circuit with a time delay \cite{appeltant2011information}, a field-programmable gate array (FPGA) \cite{haynes2015reservoir}, an optical network of semiconductor lasers \cite{brunner2016all}, and an optic-electronic phase-delay system \cite{larger2017high}. Other machine-learning techniques, including deep learning \cite{lecun2015deep,goodfellow2016deep}, attempt to optimize internal system parameters to fit the training data; doing so requires a mathematical model of the machine-learning system. By contrast, reservoir computing does not require a model for the reservoir, nor the ability to alter the reservoir dynamics, because it seeks only to optimize the parameters of the post-processing function. The ability to use a physical reservoir as a ``black box'' allows for various potential advantages over other machine-learning techniques, including greatly enhanced speed. In this article, we consider the task of predicting future measurements from a deterministic dynamical system, whose equations of motion are unknown, from limited time series data. We describe a general framework that includes the reservoir computing prediction method proposed by Jaeger and Haas \cite{jaeger2004harnessing}. With appropriate modifications, the same framework applies to other machine-learning methods for time series prediction (including an LSTM approach \cite{vlachas2018}), as we discuss further in Sec.~\ref{sec:discussion}. We assume the vector $\mathbf{u}(t)$ of measurements to be a function $\mathbf{h}$ of the finite-dimensional system state $\mathbf{s}(t)$, \begin{equation}\label{eqmeas} \mathbf{u}(t) = \mathbf{h}(\mathbf{s}(t)). \end{equation} For simplicity, we assume that there is no measurement noise, though our discussion below could be modified for the case that Eq.~(\ref{eqmeas}) is an approximation. We do not assume that $\mathbf{h}$ is invertible, nor that $\mathbf{h}$ or $\mathbf{s}$ is known in practice. Training data consists of a finite time series $\{\mathbf{u}(t)\}$ of measurements. We predict future values of $\mathbf{u}(t)$ by a sequence of three steps, which we call listening, training, and predicting. 
Listening consists of using the training time series as input to the reservoir, which we model as a discrete time deterministic process: \begin{equation}\label{eqnon} \mathbf{r}(t+\tau) = \mathbf{f}[\mathbf{r}(t),\mathbf{u}(t)]. \end{equation} Here $\mathbf{r}(t)$ is the reservoir state, $\tau$ is a time increment, and we assume $\mathbf{f}$ to be a differentiable function. We emphasize that in practice, a formula for $\mathbf{f}$ need not be known; only its outputs are used for training and prediction. For convenience, we assume that the full reservoir state $\mathbf{r}(t)$ can be measured or computed, though our arguments can be modified easily for the more general case that the reservoir output is a function of its internal state. We call Eq.~(\ref{eqnon}) the ``listening reservoir''. Training consists of determining a post-processing function $\hat{\ps}$ that, when applied to the reservoir output $\mathbf{r}(t+\tau)$, estimates the next input $\mathbf{u}(t+\tau)$. (We view $\hat{\ps}$ as an approximation to an ``ideal'' post-processing function $\bm{\uppsi}$, to be introduced in Sec.~\ref{sec:train}.) Thus, the goal of training is to find $\hat{\ps}$ such that $\hat{\ps}(\mathbf{r}(t+\tau)) \approx \mathbf{u}(t+\tau)$, or equivalently, \begin{equation}\label{eqfit} \hat{\ps}(\mathbf{r}(t)) \approx \mathbf{u}(t), \end{equation} for $t$ large enough that the listening reservoir (\ref{eqnon}) has evolved beyond transient dynamics. We compute $\hat{\ps}$ by a fitting procedure, such as linear regression, on the training time series $\{\mathbf{u}(t)\}$ and the corresponding time series $\{\mathbf{r}(t)\}$ determined from the listening reservoir (\ref{eqnon}). Predicting then proceeds by modifying the reservoir to run autonomously with a feedback loop, replacing its input [$\mathbf{u}(t)$ in Eq.~(\ref{eqnon})] with its post-processed output from the previous time increment: \begin{equation}\label{eqaut} \hat{\rr}(t+\tau) = \mathbf{f}[\hat{\rr}(t),\hat{\ps}(\hat{\rr}(t))]. \end{equation} We call Eq.~(\ref{eqaut}) the ``predicting reservoir''. When initialized (from the listening reservoir state) with $\hat{\rr}(t_0) = \mathbf{r}(t_0)$, iterating the predicting reservoir yields a time series $\{\hat{\ps}(\hat{\rr}(t_0+\tau)), \hat{\ps}(\hat{\rr}(t_0+2\tau)), \ldots\}$ of predictions for future measurements $\{\mathbf{u}(t_0+\tau), \mathbf{u}(t_0+2\tau), \ldots\}$. (Our notation reflects the fact that for $t > t_0$, the predicting reservoir state $\hat{\rr}(t)$ estimates the state $\mathbf{r}(t)$ that would result from evolving the listening reservoir (\ref{eqnon}) with the future measurements.) The reservoir prediction method we have described has been shown to produce successful short-term forecasts for a variety of dynamical systems \cite{jaeger2004harnessing,wyffels2010,pathak2017using}. If the system has a chaotic attractor, then, as for any imperfect model, the prediction error $\|\hat{\ps}(\hat{\rr}(t)) - \mathbf{u}(t)\|$ cannot remain small for $t \gg t_0$. However, in some cases, the long-term time series $\{\hat{\ps}(\hat{\rr}(t))\}$ continues to behave like the measurements from a typical trajectory on the attractor, and in this sense the predicting reservoir (\ref{eqaut}) approximately reproduces the ergodic properties of the dynamical system that generated the measurements \cite{pathak2017using}. We refer to this ability, often called attractor reconstruction, as replication of the ``climate''. 
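To make the three steps concrete, the following minimal Python sketch implements the listening reservoir (\ref{eqnon}), a training step, and the predicting reservoir (\ref{eqaut}) for a generic reservoir map $\mathbf{f}$. It is illustrative only: the function names are ours, and the use of ridge regression for the fit is one hypothetical choice among many.
\begin{verbatim}
import numpy as np

def listen(f, r0, u_seq):
    # Listening reservoir: r(t + tau) = f(r(t), u(t)).
    R = [r0]
    for u in u_seq:
        R.append(f(R[-1], u))
    return np.array(R)   # R[k] is the reservoir state at time k*tau

def train(R, u_seq, discard, beta=1e-6):
    # Fit psi_hat by ridge regression so that psi_hat(r(t)) ~ u(t),
    # discarding the initial transient before generalized synchronization.
    X = R[discard:-1]    # states r(t), each built from inputs up to t - tau
    Y = np.asarray(u_seq)[discard:]
    W = np.linalg.solve(X.T @ X + beta * np.eye(X.shape[1]), X.T @ Y)
    return lambda r: r @ W

def predict(f, psi_hat, r0, n_steps):
    # Predicting reservoir: feed the post-processed output back as input.
    r, out = r0, []
    for _ in range(n_steps):
        r = f(r, psi_hat(r))
        out.append(psi_hat(r))
    return np.array(out)
\end{verbatim}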
In this article, we develop and illustrate a theory of how reservoir prediction is able to ``learn'' the dynamics of a system well enough to produce both accurate short-term forecasts and accurate long-term climate. We make use of the notion of \emph{generalized synchronization} \cite{afraimovich1986stochastic,pecora1990synchronization,rulkov1995generalized,kocarev1996}, which in our context means that the reservoir state $\mathbf{r}(t)$ becomes asymptotically a continuous function $\bm{\upphi}$ of $\mathbf{s}(t)$, in the limit that the listening reservoir (\ref{eqnon}) is run infinitely long. In Sec.~\ref{sec:Hunt}, we argue that the following four conditions are sufficient for both short-term prediction and attractor/climate replication. \begin{enumerate} \item The listening reservoir (\ref{eqnon}) achieves generalized synchronization with the process $\{\mathbf{s}(t)\}$, so that $\mathbf{r}(t) \approx \bm{\upphi}(\mathbf{s}(t))$ for a continuous function $\bm{\upphi}$, within the time interval covered by the training time series. \item The synchronization function $\bm{\upphi}$ is one-to-one, or at least carries enough information about its input to recover $\mathbf{u}(t) = \mathbf{h}(\mathbf{s}(t))$ from $\bm{\upphi}(\mathbf{s}(t))$. \item Training is successful in finding a function $\hat{\ps}$ such that Eq.~(\ref{eqfit}) holds, or equivalently in view of generalized synchronization, that $\hat{\ps}(\bm{\upphi}(\mathbf{s}(t))) \approx \mathbf{h}(\mathbf{s}(t))$. \item The attractor approached by the listening reservoir is also stable for the predicting reservoir (\ref{eqaut}). \end{enumerate} Conditions 1--3 enable short-term prediction. Condition~4 ensures that the climate established by generalized synchronization of the listening reservoir is preserved when its input is replaced by a feedback term to form the predicting reservoir. One of the main points of Sec.~\ref{sec:Hunt} is to precisely formulate the stability condition described in Condition~4. We remark that generalized synchronization of the listening reservoir \cite{parlitz2005,zimmermann2018} is related to the ``echo state property'' \cite{jaeger2001echo,yildiz2012}, which states that an infinite history of inputs $\{\mathbf{u}(t-\tau), \mathbf{u}(t-2\tau), \ldots\}$ uniquely determines $\mathbf{r}(t)$, subject to the condition that the trajectory $\{\mathbf{r}(t)\}$ is bounded. Indeed, if $\{\mathbf{s}(t)\}$ is a trajectory of an invertible dynamical system, then the past inputs are functions of $\mathbf{s}(t)$, so the echo state property implies that if the listening reservoir (\ref{eqnon}) has run for an infinite period of time in a bounded domain, then $\mathbf{r}(t)$ is a function of $\mathbf{s}(t)$ [though it does not imply that this function is continuous]. We believe that for the reservoir prediction method we described, it is desirable (though not strictly necessary) to have the echo state property and generalized synchronization. In Sec.~\ref{sec:synch}, we show why both properties hold if the listening reservoir is uniformly contracting as a function of $\mathbf{r}$, and that we can quantify the amount of transient time it takes for the reservoir to achieve the approximation $\mathbf{r}(t) \approx \bm{\upphi}(\mathbf{s}(t))$ of Condition~1. Conditions~2 and 3 are significantly more difficult to ensure \emph{a priori}. In Sec.~\ref{sec:train}, we argue why it is plausible that these conditions can be achieved. 
In Secs.~\ref{sec:attractor} and \ref{sec:lyapunov}, we describe the consequences of Conditions~1-3 for short-term prediction, and formulate more precisely the stability criterion of Condition~4 that determines whether the correct attractor and climate are approximately reproduced by the long-term dynamics of the predicting reservoir (\ref{eqaut}). In Sec.~\ref{sec:compute}, we describe how a model for the reservoir dynamics can be used to compute Lyapunov exponents that reflect climate stability. In Sec.~\ref{sec:RCN}, we give examples of short-term state and long-term climate predictions using the Lorenz equations as our input system. In addition to a case where the climate is approximated well, we show a case where the predicted climate is inaccurate, though the short-term forecast is still reasonably accurate. We compute the Lyapunov exponents of the predicting reservoir (\ref{eqaut}), and show that the transition from accurate climate to inaccurate climate corresponds to a Lyapunov exponent crossing zero. When this Lyapunov exponent is positive but close to zero, the reservoir prediction remains close to the correct climate for a transient period, and we relate the average duration of this transient to the value of the Lyapunov exponent. \section{\label{sec:Hunt}Theory} We consider the application of the reservoir prediction method described in the introduction to a time series $\{\mathbf{u}(t)\}$ that is a function $\mathbf{h}$ of a trajectory $\{\mathbf{s}(t)\}$ of the dynamical system \begin{equation}\label{eqinp} \mathbf{s}(t+\tau) = \mathbf{g}(\mathbf{s}(t)), \end{equation} where $\mathbf{g}$ is differentiable and invertible, and we assume that $\mathbf{s}(t)$ evolves on a bounded attractor $A$. In preparation for training and prior to prediction, the reservoir state $\mathbf{r}(t)$ evolves according to the listening reservoir (\ref{eqnon}). The system described by Eqs.~(\ref{eqinp}) and (\ref{eqnon}), coupled by Eq.~(\ref{eqmeas}), is often called a drive-response, skew-product, or one-way coupled system. The coupled system dynamics are illustrated by Fig.~\ref{fig:new_figure_1_of_sec_2}. We next consider the evolution of the coupled system as $t \to \infty$. \begin{figure}[htbp] \centering \includegraphics[width=0.4\textwidth]{new_figure_1_of_sec_2.pdf} \caption{Drive-response system dynamics, with the drive state $\mathbf{s}(t)$ coupled to the listening reservoir state $\mathbf{r}(t)$ through the measurement vector $\mathbf{u}(t)$.} \label{fig:new_figure_1_of_sec_2} \end{figure} \subsection{Listening and Generalized Synchronization}\label{sec:synch} The goal of training can be regarded as finding a post-processing function $\hat{\ps}$ such that $\hat{\ps}(\mathbf{r}(t))$ is in approximate identical synchronization \cite{pecora1990synchronization} with $\mathbf{u}(t) = \mathbf{h}(\mathbf{s}(t))$, when $\mathbf{r}(t)$ is evolved with the listening reservoir (\ref{eqnon}). The desired relationship $\mathbf{u}(t) \approx \hat{\ps}(\mathbf{r}(t))$ can also be thought of as approximate generalized synchronization between $\mathbf{u}(t)$ [or the underlying state $\mathbf{s}(t)$] and $\mathbf{r}(t)$. The existence of such a relationship would be implied by stochastic synchronization \cite{afraimovich1986stochastic}, which in our context means a one-to-one correspondence between $\mathbf{r}(t)$ and $\mathbf{s}(t)$ in the limit $t\to\infty$. 
However, in drive-response systems, the definition of \emph{generalized synchronization} \cite{rulkov1995generalized,kocarev1996} requires only that the response state be asymptotically a function of the drive state: in our case, that there is a continuous function $\bm{\upphi}$ such that $\mathbf{r}(t) - \bm{\upphi}(\mathbf{s}(t)) \to 0$ as $t \to\infty$. The existence of such a $\bm{\upphi}$ is typically easier to establish than its invertibility. Next, we describe conditions on the reservoir system $\mathbf{f}$ that guarantee generalized synchronization. Though weaker conditions are possible, we assume uniform contraction for $\mathbf{f}$, as is often the case in practice. By \emph{uniform contraction}, we mean that there is some $\rho < 1$ such that for all $\mathbf{r}_1$, $\mathbf{r}_2$, and $\mathbf{u}$ we have that $|\mathbf{f}(\mathbf{r}_1,\mathbf{u})-\mathbf{f}(\mathbf{r}_2,\mathbf{u})| < \rho |\mathbf{r}_1-\mathbf{r}_2|$. It then follows that two trajectories $\{\mathbf{r}_1(t),\mathbf{u}(t)\}$ and $\{\mathbf{r}_2(t),\mathbf{u}(t)\}$ of (\ref{eqnon}) with the same input time series approach each other exponentially: $|\mathbf{r}_1(t) - \mathbf{r}_2(t)| \leq |\mathbf{r}_1(0) - \mathbf{r}_2(0)| \rho^{t/\tau}$. Thus, for a given input time series $\{\mathbf{u}(t)\}$, the reservoir state $\mathbf{r}(t)$ is asymptotically independent of its initial state; this is essentially what Jaeger \cite{jaeger2001echo} called the ``echo state property''. Furthermore, because $\mathbf{g}$ is invertible and $A$ is bounded, and due to results of Hirsch, Pugh, and Shub \cite{hirsch1970,hirsch1977} (a direct proof is given by Stark \cite{stark1997}), uniform contraction implies generalized synchronization, as defined above. (In general, the synchronization function $\bm{\upphi}$ cannot be determined analytically from $\mathbf{f}$, $\mathbf{g}$, and $\mathbf{h}$.) A weaker form of generalized synchronization can also be guaranteed \cite{stark1997} from the non-uniform contraction implied by negative conditional Lyapunov exponents. We remark that if the listening reservoir (\ref{eqnon}) is uniformly contracting, then $\mathbf{r}(t) - \bm{\upphi}(\mathbf{s}(t))$ converges to zero exponentially. If the designer of the reservoir can guarantee a specific contraction rate $\rho$, this determines the convergence rate, so that the amount of transient time needed to make the approximation $\mathbf{r}(t) \approx \bm{\upphi}(\mathbf{s}(t))$ accurate can be known in practice. Generalized synchronization implies that the set of $(\mathbf{s},\mathbf{r})$ such that $\mathbf{s}$ is on its attractor $A$ and $\mathbf{r} = \bm{\upphi}(\mathbf{s})$ is an attractor for the drive-response system given by Eqs.~(\ref{eqinp}), (\ref{eqmeas}), and (\ref{eqnon}). Below we will use the fact that this set is invariant: $\mathbf{r}(t) = \bm{\upphi}(\mathbf{s}(t))$ implies $\mathbf{r}(t+\tau) = \bm{\upphi}(\mathbf{s}(t+\tau))$. \subsection{Training} \label{sec:train} Recall that training seeks a function $\hat{\ps}$ that predicts the current measurement vector $\mathbf{u}(t)$ from the current listening reservoir state $\mathbf{r}(t)$ [which is computed from past measurements], and that when generalized synchronization is achieved, accuracy of this prediction is equivalent to $\hat{\ps}(\bm{\upphi}(\mathbf{s}(t))) \approx \mathbf{h}(\mathbf{s}(t))$. 
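In practice, one also wants to know how much listening transient to discard before fitting $\hat{\ps}$; the contraction rate discussed above can be estimated numerically by driving two copies of the listening reservoir (\ref{eqnon}) with the same input from different initial states and fitting the exponential decay of their separation. A hypothetical Python sketch (the names and the least-squares fit are our own choices, not part of any standard reservoir implementation):
\begin{verbatim}
import numpy as np

def contraction_rate(f, u_seq, r1, r2, tau):
    # Evolve two copies of the listening reservoir with identical input;
    # under uniform contraction the gap ||r1 - r2|| decays like rho^(t/tau).
    gaps = []
    for u in u_seq:
        r1, r2 = f(r1, u), f(r2, u)
        gaps.append(np.linalg.norm(r1 - r2))
    gaps = np.array(gaps)
    t = tau * np.arange(1, len(gaps) + 1)
    mask = gaps > 1e-12   # stop fitting once machine precision is reached
    # slope of log(gap) versus time; a negative value indicates contraction
    return np.polyfit(t[mask], np.log(gaps[mask]), 1)[0]
\end{verbatim}
A returned rate close to $\ln(\rho)/\tau$ corroborates the uniform contraction rate, and hence the echo state property and generalized synchronization.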
For the rest of Section~\ref{sec:Hunt}, we assume that there is a function $\bm{\uppsi}$ defined on $\bm{\upphi}(A)$ such that $\bm{\uppsi}(\bm{\upphi}(\mathbf{s})) = \mathbf{h}(\mathbf{s})$ for all $\mathbf{s}$ in $A$. This assumption means that in the asymptotic limit of generalized synchronization, the listening reservoir state $\mathbf{r}(t) = \bm{\upphi}(\mathbf{s}(t))$ uniquely determines $\mathbf{u}(t) = \mathbf{h}(\mathbf{s}(t))$. The goal of training can then be described as finding a function $\hat{\ps}$ defined on the state space of the reservoir that approximates $\bm{\uppsi}$ on the set $\bm{\upphi}(A)$. We summarize our notation in Table~\ref{tab:notation}.
\begin{table}[htbp] \begin{tabular}{lcl} \hline \hline \multicolumn{3}{c}{Dynamical System to be Predicted}\\ $\mathbf{s}(t)$ && System state \\ $\mathbf{g} : \mathbf{s}(t) \to \mathbf{s}(t+\tau)$ && System evolution \\ $A$ && Attractor for $\mathbf{s}(t)$ \\ \hline \multicolumn{3}{c}{Measurements}\\ $\mathbf{u}(t)$ && Measurement vector \\ $\mathbf{h} : \mathbf{s}(t) \to \mathbf{u}(t)$ && Measurement function \\ \hline \multicolumn{3}{c}{Reservoir}\\ $\mathbf{r}(t)$ && Listening reservoir state \\ $\mathbf{f} : [\mathbf{r}(t),\mathbf{u}(t)] \to \mathbf{r}(t+\tau)$ && Listening reservoir evolution \\ $\hat{\rr}(t)$ && Predicting reservoir state \\ $\hat{\uu}(t) = \hat{\ps}(\hat{\rr}(t))$ && Predicted measurements \\ $\mathbf{f} : [\hat{\rr}(t),\hat{\uu}(t)] \to \hat{\rr}(t+\tau)$ && Predicting reservoir evolution \\ \hline \multicolumn{3}{c}{Generalized Synchronization}\\ $\bm{\upphi} : \mathbf{s} \to \mathbf{r}$ for $\mathbf{s}$ in $A$ && Synchronization function \\ $\bm{\uppsi} : \mathbf{r} \to \mathbf{u}$ for $\mathbf{r}$ in $\bm{\upphi}(A)$ && Ideal post-processing function \\ $\hat{\ps} : \hat{\rr}(t) \to \hat{\uu}(t)$ && Actual post-processing function \\ \hline \hline \end{tabular} \caption{Summary of Notation}\label{tab:notation} \end{table}
Though the existence of $\bm{\uppsi}$ is not strictly necessary for the reservoir to make useful predictions, if no such $\bm{\uppsi}$ exists, then it seems unlikely that training can successfully achieve the desired approximation $\hat{\ps}(\bm{\upphi}(\mathbf{s}(t))) \approx \mathbf{h}(\mathbf{s}(t))$, and thus unlikely that $\mathbf{u}(t)$ can be approximated as a function of the reservoir state during either listening or predicting. The existence of $\bm{\uppsi}$ is guaranteed if $\bm{\upphi}$ is one-to-one on $A$; then $\bm{\uppsi} = \mathbf{h} \circ \bm{\upphi}^{-1}$. Furthermore, if $\mathbf{h}$ is one-to-one on $A$ (in other words, the measurements at a given time determine the system state), then $\bm{\upphi}$ must be one-to-one on $A$ in order for $\bm{\uppsi}$ to exist. Thus, we propose that a goal of reservoir design should be to yield a one-to-one synchronization function $\bm{\upphi}$ for a variety of input systems. In practice, having a sufficiently high-dimensional reservoir may suffice; embedding results \cite{whitney,embedology} imply that if the dimension of the reservoir state $\mathbf{r}$ is more than twice the dimension of $A$, functions from $A$ to the reservoir state space are typically one-to-one. We note that in practice, the dimension of $\mathbf{r}$ must be much larger than twice the dimension of $A$ in order to provide a suitable basis for approximating $\bm{\uppsi}$, in the sense described below.
Careful consideration of conditions under which training is successful in determining an accurate approximation $\hat{\ps}$ to $\bm{\uppsi}$ is beyond the scope of our theory. However, we argue that success is plausible if the training time series is sufficiently long that the trajectory $\{\mathbf{s}(t)\}$ well samples its attractor $A$, if the dimension of the reservoir state $\mathbf{r}(t)$ is sufficiently high, and if the dynamics of the coordinates of $\mathbf{r}(t)$ are sufficiently heterogeneous. If, for example, training uses linear regression of $\{\mathbf{u}(t)\} = \{\mathbf{h}(\mathbf{s}(t))\}$ versus $\{\mathbf{r}(t)\}$, then since $\mathbf{r}(t) \approx \bm{\upphi}(\mathbf{s}(t))$, the coordinates of the vector-valued function $\bm{\upphi}(\mathbf{s})$ can be thought of as ``basis functions'' \cite{parlitz2005}; training seeks a linear combination $\hat{\ps}$ of these basis functions that approximates $\mathbf{h}(\mathbf{s})$ on $A$. A suitable basis for training (using a linear or nonlinear combination) is plausible if the listening reservoir yields a sufficiently large variety of responses to its input.
\subsection{Prediction and Attractor Reconstruction}\label{sec:attractor}
After training determines the post-processing function $\hat{\ps}$, prediction proceeds by initializing $\hat{\rr}(t_0) = \mathbf{r}(t_0)$ and evolving $\hat{\rr}(t)$ for $t \geq t_0$ according to the predicting reservoir (\ref{eqaut}). The reservoir state $\mathbf{r}(t_0)$ is determined by evolving the listening reservoir (\ref{eqnon}) for an interval of time preceding $t_0$; this could be the time interval used for training, or it could be a later time interval that uses inputs $\{\mathbf{u}(t)\}$ measured after training (we call this feature ``training reusability'' \cite{pathak2018}). We assume that the listening time preceding $t_0$ is sufficiently long to achieve generalized synchronization, so that $\hat{\rr}(t_0) = \mathbf{r}(t_0) \approx \bm{\upphi}(\mathbf{s}(t_0))$ is near $\bm{\upphi}(A)$. For $t \geq t_0$, the predicted value of $\mathbf{u}(t)$ is
\begin{equation}\label{eqpred} \hat{\uu}(t) = \hat{\ps}(\hat{\rr}(t)). \end{equation}
Figure~\ref{fig:new_figure_2_of_sec_2} depicts the dynamics of the predicting reservoir (\ref{eqaut}).
\begin{figure}[htbp] \centering \includegraphics[width=0.4\textwidth]{new_figure_2_of_sec_2.pdf} \caption{Predicting reservoir dynamics, with the listening reservoir input $\mathbf{u}(t)$ replaced by the estimate $\hat{\uu}(t)$ determined from the predicting reservoir state $\hat{\rr}(t)$.} \label{fig:new_figure_2_of_sec_2} \end{figure}
Consider now the idealized scenario that our approximations are instead exact relations $\hat{\ps} = \bm{\uppsi}$ on $\bm{\upphi}(A)$, and $\hat{\rr}(t_0) = \mathbf{r}(t_0) = \bm{\upphi}(\mathbf{s}(t_0))$. Suppose hypothetically that the measurements $\{\mathbf{u}(t)\}$ for $t \geq t_0$ (these are the values we want to predict in practice) are available, so that we can evolve both the listening reservoir (\ref{eqnon}) depicted in Fig.~\ref{fig:new_figure_1_of_sec_2}, and the predicting reservoir (\ref{eqaut}) depicted in Fig.~\ref{fig:new_figure_2_of_sec_2}, and compare their outputs. Then we claim that the two reservoirs agree exactly: $\hat{\rr}(t) = \mathbf{r}(t)$ and $\hat{\uu}(t) = \mathbf{u}(t)$ for all $t \geq t_0$. First notice that $\hat{\uu}(t_0) = \hat{\ps}(\hat{\rr}(t_0)) = \bm{\uppsi}(\bm{\upphi}(\mathbf{s}(t_0))) = \mathbf{h}(\mathbf{s}(t_0)) = \mathbf{u}(t_0)$.
Then $\hat{\rr}(t_0+\tau) = \mathbf{f}[\hat{\rr}(t_0),\hat{\uu}(t_0)] = \mathbf{f}[\mathbf{r}(t_0),\mathbf{u}(t_0)] = \mathbf{r}(t_0+\tau)$, and $\mathbf{r}(t_0+\tau) = \bm{\upphi}(\mathbf{s}(t_0+\tau))$ due to generalized synchronization. Similarly, $\hat{\uu}(t_0+\tau)$ then equals $\mathbf{u}(t_0+\tau)$, so $\hat{\rr}(t_0+2\tau) = \mathbf{r}(t_0+2\tau) = \bm{\upphi}(\mathbf{s}(t_0+2\tau))$, etc. This agreement between the trajectories also shows that $\bm{\upphi}(A)$ is an invariant set for the idealized predicting reservoir \begin{equation}\label{eqid} \mathbf{r}(t+\tau) = \mathbf{f}[\mathbf{r}(t),\bm{\uppsi}(\mathbf{r}(t))], \end{equation} and that its dynamics, observed through $\bm{\uppsi}$, are equivalent to the dynamics of $A$ observed through $\mathbf{h}$. Thus, if the time series $\{\mathbf{u}(t)\}$ of measurements has enough information to reconstruct the attractor $A$, then we can regard $\bm{\upphi}(A)$ and the idealized predicting reservoir (\ref{eqid}) as an exact reconstruction of $A$ and its dynamics. When the approximation $\hat{\ps} \approx \bm{\uppsi}$ is not exact on $\bm{\upphi}(A)$, the actual predicting reservoir (\ref{eqaut}) is still initialized near $\bm{\upphi}(A)$, but $\bm{\upphi}(A)$ is only approximately invariant. The better the approximation, the more accurate the predictions $\hat{\uu}(t) \approx \mathbf{u}(t)$ will be, at least in the short term. However, if the system (\ref{eqinp}) that generates the measurements $\{\mathbf{u}(t)\}$ is chaotic, the prediction error $\|\hat{\uu}(t) - \mathbf{u}(t)\|$ will typically grow exponentially as $t$ increases. Nonetheless, it remains possible that $\hat{\uu}(t)$ will maintain a climate similar to $\mathbf{u}(t)$ in the long term. This will happen if (and practically speaking, only if) the predicting reservoir trajectory $\{\hat{\rr}(t)\}$ remains close to $\bm{\upphi}(A)$ for all time, and its attractor has a similar climate to that of the idealized predicting reservoir on $\bm{\upphi}(A)$. In this sense, climate replication (attractor reconstruction) relies on both state-space stability and structural stability of the predicting reservoir near the idealized reconstructed attractor $\bm{\upphi}(A)$. Structural stability is difficult to ensure rigorously, but in practice small perturbations of the dynamics near an attractor tend to yield small perturbations to the climate. Thus, we argue that climate replication is likely if $\bm{\upphi}(A)$, which according to our assumptions is invariant for the idealized predicting reservoir, is also attracting, in the sense described below. \subsection{Stability and Lyapunov Exponents}\label{sec:lyapunov} Recall that generalized synchronization implies that the set $\bm{\upphi}(A)$ is attracting for the listening reservoir (\ref{eqnon}), when driven by $\mathbf{u}(t) = \mathbf{h}(\mathbf{s}(t))$ where $\mathbf{s}(t)$ evolves on $A$. Whether $\bm{\upphi}(A)$ is attracting for the predicting reservoir is complicated by the fact that it is invariant only in the idealized case $\hat{\ps} = \bm{\uppsi}$, and that $\bm{\uppsi}$ is defined only on $\bm{\upphi}(A)$, so that the idealized predicting reservoir (\ref{eqid}) is also defined only on $\bm{\upphi}(A)$. For its stability to be well-defined, the domain of $\bm{\uppsi}$ must be extended to a neighborhood of $\bm{\upphi}(A)$, and whether $\bm{\upphi}(A)$ is attracting depends on how the extension is chosen. 
Thus, the suitability of the empirically determined function $\hat{\ps}$ for climate prediction depends not only on how well it approximates $\bm{\uppsi}$ on $\bm{\upphi}(A)$, but also on how it behaves near $\bm{\upphi}(A)$. For a particular $\hat{\ps}$, we consider hypothetically a particular extension of $\bm{\uppsi}$ such that $\hat{\ps} \approx \bm{\uppsi}$ near $\bm{\upphi}(A)$. This extension gives the idealized predicting reservoir a full set of Lyapunov exponents on $\bm{\upphi}(A)$, some of which correspond to infinitesimal perturbations tangent to $\bm{\upphi}(A)$ and some of which correspond to infinitesimal perturbations transverse to $\bm{\upphi}(A)$. Then $\bm{\upphi}(A)$ is attracting if the transverse Lyapunov exponents are all negative, and is unstable if there is a positive transverse Lyapunov exponent. If the generalized synchronization function $\bm{\upphi}$ is one-to-one and differentiable, then the tangential Lyapunov exponents of the system (\ref{eqinp}) on $A$ are reproduced as the tangential Lyapunov exponents of the idealized predicting reservoir on $\bm{\upphi}(A)$. Generalized synchronization does not always yield a differentiable $\bm{\upphi}$ \cite{hunt1997,stark1997}, but even when differentiability cannot be guaranteed, it is possible in practice to reproduce much of the Lyapunov spectrum of $A$, including negative Lyapunov exponents in some cases, with a predicting reservoir \cite{pathak2017using}. We remark that unlike the conditional Lyapunov exponents for a drive-response system (such as the listening reservoir), which correspond to perturbations of the response system state, for the predicting reservoir it is not clear in advance which perturbations correspond to transverse Lyapunov exponents. However, in a numerical experiment where the equations for the driving system (\ref{eqinp}) and the reservoir are known, the existence or absence of a positive transverse Lyapunov exponent can be inferred by computing all of the positive Lyapunov exponents of the predicting reservoir and eliminating those that are Lyapunov exponents of $A$.
\subsection{Computation of Lyapunov Exponents}\label{sec:compute}
We now describe how to estimate the Lyapunov exponents of the idealized predicting reservoir (\ref{eqid}) on $\bm{\upphi}(A)$, for a particular extension of $\bm{\uppsi}$ to a neighborhood of $\bm{\upphi}(A)$, from its empirical approximation $\hat{\ps}$. To do so, we assume that we have a formula for $\mathbf{f}$, so that we can compute its Jacobian matrix. (We emphasize that we estimate the Lyapunov exponents in order to corroborate the theory we have presented; their computation, and a formula for $\mathbf{f}$, are not needed for the reservoir prediction method we have described.) If climate replication is successful, we can simply generate a long trajectory of the predicting reservoir (\ref{eqaut}), and use it to compute the Lyapunov exponents of the trajectory \cite{pathak2017using}. However, this trajectory cannot be expected to remain close to $\bm{\upphi}(A)$ if the set is unstable. Nonetheless, if we have a sufficiently long time series $\{\mathbf{u}(t)\}$ of measurements, we can estimate the Lyapunov exponents of $\bm{\upphi}(A)$, whether or not it is stable, as follows. First, we use the time series $\{\mathbf{u}(t)\}$ to generate a trajectory $\{\mathbf{r}(t)\}$ of the listening reservoir (\ref{eqnon}); as we have argued, $\mathbf{r}(t)$ will approach $\bm{\upphi}(A)$ under the conditions for generalized synchronization.
Then along this trajectory, which is an approximate trajectory for the predicting reservoir, we compute Lyapunov exponents using the Jacobian matrix of the predicting reservoir (\ref{eqaut}). \section{\label{sec:RCN}Numerical Experiments} In this section, we give examples of short-term state and long-term climate predictions for the Lorenz system \cite{lorenz}, with standard parameter values that yield chaotic trajectories: \begin{align} \begin{split} dx/dt &= 10(y-x),\\ dy/dt &= x(28-z) -y,\\ dz/dt &= xy - 8z/3. \end{split}\label{eqn:lorenz} \end{align} We consider the case where the measurement function $\mathbf{h}$ is the identity, so that $\mathbf{u}(t) = \mathbf{s}(t) = [x(t),y(t),z(t)]^T$. For the reservoir, we use an artificial neural network similar to the one used by Jaeger and Haas\cite{jaeger2004harnessing}; our listening reservoir [a continuous-time version of Eq.~(\ref{eqnon})] evolves according to \begin{equation} \frac{d}{dt}\mathbf{r}(t) = \gamma[-\mathbf{r}(t)+\tanh(\mathbf{M}\mathbf{r}(t)+\sigma\WW_{\text{in}}\mathbf{u}(t))], \label{eqn:listening_reservoir} \end{equation} where $\mathbf{r}$ is an $N$-dimensional vector, $\gamma$ is a scalar, $\mathbf{M}$ is an adjacency matrix representing internal network connections. The matrix $\sigma\WW_{\text{in}}$ consists of ``input weights''; in our numerical results, we will fix $\WW_{\text{in}}$ and vary the scalar input strength $\sigma$. The vector function $\tanh$ is computed by applying the scalar hyperbolic tangent to each coordinate of its input vector. We compute trajectories of both the Lorenz and reservoir systems using the $4$th order Runge-Kutta method with time step $\tau = 0.001$. We will show cases where climate replication (attractor reconstruction) succeeds and where it fails, and compare the results with Lyapunov exponents we compute for the predicting reservoir. \begin{figure}[htbp] \centering \includegraphics[scale=.5]{open_reservoir.pdf} \caption{Listening reservoir based on an artificial neural network with $N$ neurons. The input vector $\mathbf{u}(t) \in \mathbb{R}^3$ is mapped to the reservoir state space $\mathbb{R}^N$ by the input weight matrix $\sigma\WW_{\text{in}}$, and the resulting reservoir state is mapped to $\mathbb{R}^3$ by the post-processing function $\hat{\ps} = \WW_{\text{out}}\mathbf{q}$.} \label{fig:reservoir1} \end{figure} We consider post-processing functions of the form $\hat{\ps}(\mathbf{r}) = \WW_{\text{out}}\mathbf{q}(\mathbf{r})$, where $\mathbf{q}(\mathbf{r})$ is the $2N$-dimensional vector consisting of the $N$ coordinates of $\mathbf{r}$ followed by their squares, and the ``output weight'' matrix $\WW_{\text{out}}$ is determined by a linear regression procedure described below. The listening reservoir (\ref{eqn:listening_reservoir}) and the post-processing function are illustrated as an input-output system in Fig.~\ref{fig:reservoir1}. The goal of training is that the post-processed output $\WW_{\text{out}}\mathbf{q}(\mathbf{r}(t+\tau))$ based on input up to time $t$ estimates the subsequent input $\mathbf{u}(t+\tau)$. Once $\WW_{\text{out}}$ is determined, the external input can be replaced in a feedback loop by the post-processed output to form the predicting reservoir, as depicted in Fig.~\ref{fig:reservoir2}. 
The predicting reservoir evolves according to
\begin{equation} \frac{d}{dt}\hat{\rr}(t) = \gamma[-\hat{\rr}(t)+\tanh(\mathbf{M}\hat{\rr}(t)+\sigma\WW_{\text{in}}\WW_{\text{out}}\mathbf{q}(\hat{\rr}(t)))], \label{eqn:predicting_reservoir} \end{equation}
and the predicted value of $\mathbf{u}(t)$ is $\hat{\uu}(t) = \hat{\ps}(\hat{\rr}(t)) = \WW_{\text{out}}\mathbf{q}(\hat{\rr}(t))$.
\begin{figure}[htbp] \centering \includegraphics[scale=.5]{close_reservoir.pdf} \caption{The predicting reservoir replaces the external input of the listening reservoir with the post-processed reservoir output. The time increment $\tau$ in our discussion represents the amount of time for information to travel once around the feedback loop.} \label{fig:reservoir2} \end{figure}
Details of our reservoir implementation are as follows. The reservoir dimension is $N = 2000$, and we use $\gamma = 10$. The $N$-by-$N$ adjacency matrix $\mathbf{M}$ is chosen randomly with sparse Erd\H{o}s--R\'enyi connectivity and spectral radius $0.9$; specifically, each element is chosen independently to be nonzero with probability $0.02$, nonzero elements are chosen uniformly between $-1$ and $1$, and the resulting matrix is rescaled so that the magnitude of its largest eigenvalue is $0.9$. The $N$-by-$3$ matrix $\WW_{\text{in}}$ is chosen randomly so that each row has one non-zero element, chosen uniformly between $-1$ and $1$. We evolve the Lorenz system and the listening reservoir (\ref{eqn:listening_reservoir}) from time $t = -100$ to $t = 60$, and we discard $100$ time units of transient evolution, so that training is based on $\mathbf{u}(t)$ and $\mathbf{r}(t)$ for $0 \leq t \leq 60$. For training, we constrain the $3$-by-$2N$ matrix $\WW_{\text{out}}$ to have only $3N$ nonzero elements, namely the first $N$ elements of its first two rows, and the first $N/2$ and last $N/2$ elements of its third row. (Thus, we fit the $x$ and $y$ coordinates of the Lorenz state with linear functions of $\mathbf{r}$, and the $z$ coordinate with a linear combination of the first $N/2$ coordinates of $\mathbf{r}$ and the squares of the second $N/2$ coordinates; for the Lorenz system, this is advantageous over using a purely linear function of $\mathbf{r}$ \cite{pathak2017using}.) Subject to this constraint, we select $\WW_{\text{out}}$ so as to minimize the error function
\begin{equation} \sum_{k=1}^{3000}\Vert\WW_{\text{out}}\mathbf{q}(\mathbf{r}(0.02k)) - \mathbf{u}(0.02k)\Vert^2 + \beta \Vert\WW_{\text{out}}\Vert^2; \label{eqn:error} \end{equation}
here we have coarsely sampled the training data every $0.02$ time units in order to reduce the amount of computation required by the regression. The second term in the error function modifies ordinary linear least-squares regression in order to discourage overfitting; this modification is often called ridge regression or Tikhonov regularization. Below, we will show results with regularization parameter $\beta = 10^{-6}$ and with $\beta = 0$ (no regularization). We begin prediction by initializing $\hat{\rr}(T) = \mathbf{r}(T)$ and evolving the predicting reservoir (\ref{eqn:predicting_reservoir}), where $T = 60$ is the end of the listening and training periods.
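For readers who wish to reproduce this setup, a minimal Python sketch of the listening reservoir (\ref{eqn:listening_reservoir}) and predicting reservoir (\ref{eqn:predicting_reservoir}) follows. The names are our own, and, as a simplification of the coupled integration used here, the input $\mathbf{u}$ is held constant over each Runge-Kutta step during listening.
\begin{verbatim}
import numpy as np

gamma, sigma, tau = 10.0, 0.014, 0.001

def q(r):
    # readout basis: the N coordinates of r followed by their squares
    return np.concatenate([r, r ** 2])

def listening_rhs(r, u, M, Win):
    # right-hand side of the listening reservoir ODE
    return gamma * (-r + np.tanh(M @ r + sigma * (Win @ u)))

def predicting_rhs(r, M, Win, Wout):
    # predicting reservoir: the input is replaced by the readout W_out q(r)
    return listening_rhs(r, Wout @ q(r), M, Win)

def rk4_step(rhs, r, *args):
    # one 4th-order Runge-Kutta step of size tau
    k1 = rhs(r, *args)
    k2 = rhs(r + 0.5 * tau * k1, *args)
    k3 = rhs(r + 0.5 * tau * k2, *args)
    k4 = rhs(r + tau * k3, *args)
    return r + (tau / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
\end{verbatim}
During listening one calls \texttt{rk4\_step(listening\_rhs, r, u, M, Win)} at each step; for prediction, \texttt{rk4\_step(predicting\_rhs, r, M, Win, Wout)} with the trained output weights.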
\begin{figure}[htbp] \centering \subfigure{\includegraphics[scale=.25]{idx3w_in_amp0_012_outputZ-crop.pdf}} \subfigure{\includegraphics[scale=.25]{idx3w_in_amp0_014_outputZ-crop.pdf}} \caption{Predicted (red) and actual (blue) $z(t)$ for a chaotic Lorenz system trajectory, using the same randomly-generated reservoir with different input strengths $\sigma=0.012$ [panel (a)] and $\sigma=0.014$ [panel (b)]. Both predictions remain well correlated with the actual trajectory for roughly $10$ time units. After decorrelation, the first prediction approaches a periodic orbit, whereas the second prediction appears to continue with a climate similar to that of the actual trajectory.} \label{fig:good_bad_prediction} \end{figure} In Fig.~\ref{fig:good_bad_prediction}, we show the actual $z(t)$ from a trajectory of the Lorenz system, and predictions $\hat{z}(t)$ from two reservoirs that are identical except for their input strength parameter values [$\sigma=0.012$ for Fig.~\ref{fig:good_bad_prediction}(a) and $\sigma=0.014$ for Fig.~\ref{fig:good_bad_prediction}(b)]. Each reservoir is trained with the same Lorenz trajectory and with regularization parameter $\beta=10^{-6}$. Both reservoirs predict the short-term future similarly well, but for larger values of the prediction time $t-T$, only the second prediction continues with a Lorenz-like climate. We compare the two climate predictions over a longer time period in Fig.~\ref{fig:return_map}, which shows Poincar\'e return maps of successive $z(t)$ maxima. In Fig.~\ref{fig:return_map}(a), the red dots (showing the reservoir prediction) initially are near the blue dots (representing the Lorenz attractor), but eventually the red dots approach a period two orbit, indicated by the arrows. The large distance of the upper left arrow from the blue dots indicates that this period two orbit for the reservoir is not on the Lorenz attractor. In contrast, the red dots in Fig.~\ref{fig:return_map}(b) remain near the blue dots at all times, indicating that the reservoir replicates the climate in the long term. \begin{figure}[htbp] \centering \subfigure{\includegraphics[scale=.22]{idx3w_in_amp0_012_map-crop.pdf}} \subfigure{\includegraphics[scale=.22]{idx3w_in_amp0_014_map-crop.pdf}} \caption{Poincar\'e return map of successive local maxima of $z(t)$ for the actual (blue) and predicted (red) trajectories for $t-T$ from $0$ to $300$, using the same Lorenz trajectory and reservoir as Fig.~\ref{fig:good_bad_prediction}, again with $\sigma=0.012$ [panel (a)] and $\sigma=0.014$ [panel (b)]. Here $z^n_{\text{max}}$ represents the $n$th local maximum of $z(t)$. The first prediction approaches a period two orbit (indicated by the arrows) that is not on the Lorenz attractor whereas the second prediction remains close to the Lorenz attractor.} \label{fig:return_map} \end{figure} Based on the arguments in Sec.~\ref{sec:synch}, we hypothesize that for both $\sigma=0.012$ and $\sigma=0.014$, the listening reservoir (\ref{eqn:listening_reservoir}) evolves toward a set $\bm{\upphi}_\sigma(A)$, where $A$ is the Lorenz attractor and $\bm{\upphi}_\sigma$ is a generalized synchronization function. Our choice of spectral radius $0.9$ for the adjacency matrix $\mathbf{M}$ is consistent with common practice in reservoir computing \cite{caluwaerts2014}, though it does not guarantee uniform contraction for the listening reservoir \cite{yildiz2012}. 
However, it does guarantee that the eigenvalues of the Jacobian matrix of the right side of (\ref{eqn:listening_reservoir}), evaluated at $\mathbf{r} = \mathbf{u} = 0$, have real parts at most $\gamma(-1+0.9) = 10(-0.1) = -1$. This suggests an asymptotic contraction rate of $-1$ or faster for the listening reservoir, and that after discarding $100$ transient time units, $\mathbf{r}(t)$ is extremely close to $\bm{\upphi}_\sigma(A)$ for $t \geq 0$. Based on the arguments in Sec.~\ref{sec:attractor}, we hypothesize that the set $\bm{\upphi}_\sigma(A)$ is approximately invariant for the predicting reservoir (\ref{eqn:predicting_reservoir}). Based on the results in Figs.~\ref{fig:good_bad_prediction} and \ref{fig:return_map}, we hypothesize further that for $\sigma=0.014$, there is an attracting invariant set for the predicting reservoir near $\bm{\upphi}_\sigma(A)$, but that between $\sigma=0.014$ and $\sigma=0.012$, there is a bifurcation that causes this invariant set either to become unstable or to be destroyed entirely. To corroborate this hypothesis, we compute the Lyapunov exponents of the predicting reservoir for an approximate trajectory on $\bm{\upphi}_\sigma(A)$, as described in Sec.~\ref{sec:compute}.
\begin{figure}[htbp] \centering \includegraphics[scale=.3]{Forpaper_crossing_with_escape_Ridge_new-crop.pdf} \caption{The three largest Lyapunov exponents of the predicting reservoir (\ref{eqn:predicting_reservoir}) on the invariant set $\bm{\upphi}_\sigma(A)$ for the listening reservoir (\ref{eqn:listening_reservoir}), as a function of the input strength $\sigma$, for the same reservoir as Figs.~\ref{fig:good_bad_prediction} and \ref{fig:return_map}. Two exponents that are approximately constant as a function of $\sigma$, and which approximate the two largest Lyapunov exponents of the Lorenz attractor, are colored red and blue; the more variable exponent, which we call the transverse Lyapunov exponent and which determines climate stability, is colored green. For values of $\sigma$ for which we detect divergence from the Lorenz climate, we graph with a black dot the observed divergence rate $\lambda^*$, computed as described in the text.} \label{fig:Lyp_ridge_regression} \end{figure}
Fig.~\ref{fig:Lyp_ridge_regression} shows the three largest Lyapunov exponents of the predicting reservoir (\ref{eqn:predicting_reservoir}) as the input strength $\sigma$ varies from $0.004$ to $0.02$. We do not change the matrices $\mathbf{M}$ and $\WW_{\text{in}}$, but for each value of $\sigma$, we perform a separate training (with $\beta = 10^{-6}$ as before), resulting in a different output weight matrix $\WW_{\text{out}}$. The exponents colored red and blue approximate the positive and zero Lyapunov exponents of the Lorenz attractor $A$ (the approximation is closest for $\sigma \geq 0.01$). Reproduction of the positive exponent of $A$ in the reservoir dynamics on $\bm{\upphi}_\sigma(A)$ is a necessary consequence of successful attractor reconstruction, and does not indicate instability of $\bm{\upphi}_\sigma(A)$ to transverse perturbations. The exponent colored green estimates the largest of the transverse Lyapunov exponents described in Sec.~\ref{sec:lyapunov}. This exponent passes through zero, indicating a bifurcation, at $\sigma\approx 0.013$. Next, we compare the change in stability indicated by the computed transverse Lyapunov exponent to a more direct computation indicating success or failure of climate replication.
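As an implementation note, the exponents plotted in Fig.~\ref{fig:Lyp_ridge_regression} can be obtained with a standard QR-based (Benettin-type) algorithm applied along the reinjected listening trajectory of Sec.~\ref{sec:compute}. A hypothetical Python sketch, in which \texttt{jac} must supply the Jacobian of the time-$\tau$ map of the predicting reservoir (\ref{eqn:predicting_reservoir}):
\begin{verbatim}
import numpy as np

def lyapunov_spectrum(jac, traj, tau, k=3):
    # Benettin/QR estimate of the k largest Lyapunov exponents along traj,
    # a listening-reservoir trajectory that stays near phi_sigma(A) whether
    # or not that set is stable for the predicting reservoir.
    Q, _ = np.linalg.qr(np.random.randn(traj.shape[1], k))
    sums = np.zeros(k)
    for r in traj:
        Q, R = np.linalg.qr(jac(r) @ Q)  # jac(r): Jacobian of the time-tau map
        sums += np.log(np.abs(np.diag(R)))
    return sums / (tau * len(traj))
\end{verbatim}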
To detect when the prediction $\hat{\uu}(t) = \hat{\ps}(\hat{\rr}(t))$ of the Lorenz state diverges from the true Lorenz attractor, we let $\Delta(t)$ be the Euclidean distance between the vector field $d\hat{\uu}/dt$ implied by the predicting reservoir and the vector field (right-hand side) of the Lorenz system (\ref{eqn:lorenz}), evaluated at $[x,y,z]^T = \hat{\uu}(t)$. [We calculate the reservoir-implied vector field by the chain rule $d\hat{\uu}/dt = D\hat{\ps}(\hat{\rr}(t)) d\hat{\rr}/dt$, where $D\hat{\ps}$ is the Jacobian matrix of $\hat{\ps} = \WW_{\text{out}}\mathbf{q}$, and $d\hat{\rr}/dt$ is given by Eq.~(\ref{eqn:predicting_reservoir}).] For each value of $\sigma$ depicted in Fig.~\ref{fig:Lyp_ridge_regression}, we calculate the vector field discrepancy $\Delta(t)$ for the prediction time period $t \geq T$. If $\Delta(t)$ does not exceed a threshold value $20$ for a duration of $800$ time units, we consider the climate to be approximately reproduced. (Our threshold value $20$ is small compared to the typical magnitude of the Lorenz vector field.) Otherwise, we say that the prediction has ``escaped'' from the Lorenz attractor. In Fig.~\ref{fig:Lyp_ridge_regression}, we show a black dot at each value of $\sigma$ for which we detect escape; these values are the same as those for which the computed transverse Lyapunov exponent is positive. The height of each black dot represents an observed divergence rate $\lambda^*$, computed as follows. When we detect escape for a particular value of $\sigma$, we reinitialize the predicting reservoir (\ref{eqn:predicting_reservoir}) using $\hat{\rr}(t_0) = \mathbf{r}(t_0)$ for $1000$ different values of $t_0 \geq T$, where the values of $\mathbf{r}(t_0)$ are determined by continuing to run the listening reservoir (\ref{eqn:listening_reservoir}) for $t \geq T$. For each $t_0$, we evolve the predicting reservoir until the first time $t_1$ for which $\Delta(t_1) \geq 20$, or until $t_1-t_0 = 800$, whichever comes first. If divergence from the attractor is governed by Lyapunov exponent $\lambda$, we should have $\Delta(t_1) \approx \Delta(t_0) \exp(\lambda (t_1-t_0))$ in a certain average sense. We compute the observed exponential divergence rate $\lambda^*=\langle \ln[\Delta(t_1)/\Delta(t_0)] \rangle / \langle t_1-t_0\rangle$, where the angle brackets represent an average over the $1000$ values of $t_0$. The computed values of $\lambda^*$ are shown as black dots in Fig.~\ref{fig:Lyp_ridge_regression}. The approximate agreement of $\lambda^*$ with the green curve (especially for $0.01 \leq \sigma \leq 0.013$) demonstrates that the computed transverse Lyapunov exponent reflects divergence of predictions from the Lorenz attractor. \begin{figure}[htbp] \includegraphics[scale=.35]{Forpaper_crossing_with_escape_NoRidge_new-crop.pdf} \caption{The three largest Lyapunov exponents of the predicting reservoir (\ref{eqn:predicting_reservoir}), and the estimated divergence rate $\lambda^*$, as a function of $\sigma$, using the same color scheme as Fig.~\ref{fig:Lyp_ridge_regression}. 
Here we use a different randomly-generated reservoir than in Fig.~\ref{fig:Lyp_ridge_regression}, and no regularization ($\beta = 0$) in the training.} \label{fig:LYP} \end{figure}
To illustrate the correspondence between the computed transverse Lyapunov exponent and the observed divergence rates in a case where their dependence on $\sigma$ is more complicated, we show in Fig.~\ref{fig:LYP} the analogue of Fig.~\ref{fig:Lyp_ridge_regression} in a case where no regularization ($\beta = 0$) is used in the training. Again, we see precise correspondence between detected failure of climate replication (presence of a black dot) and positive values of the transverse Lyapunov exponent (green curve), and good agreement with the observed divergence rates for these values of $\sigma$. In this case, there are two bifurcations, one near $\sigma = 0.12$ and one near $\sigma = 0.16$.
\begin{figure}[htbp] \centering \subfigure{\includegraphics[scale=.28]{Forpaper_avg_LYP_Ridge-crop.pdf}}\\ \subfigure{\includegraphics[scale=.28]{Forpaper_avg_LYP_NoRidge-crop.pdf}} \caption{The means and standard deviations of the three largest Lyapunov exponents for the same $10$ randomly-generated reservoirs trained with regularization parameter $\beta = 10^{-6}$ [panel (a)] and with $\beta=0$ [panel (b)]. Again, the red and blue curves approximate the two largest exponents of the Lorenz attractor, and the green curve is the computed transverse Lyapunov exponent.} \label{fig:LYP_ridge_and_no_ridge} \end{figure}
We remark that when we use regularization ($\beta = 10^{-6}$) in the training, we do not observe as complicated a dependence of the computed transverse Lyapunov exponent on the input strength $\sigma$ as in Fig.~\ref{fig:LYP}. Instead, the computed transverse Lyapunov exponent is typically negative and slowly varying across a wide range of $\sigma$ values, for which climate replication is successful. In Fig.~\ref{fig:LYP_ridge_and_no_ridge}, we use the transverse Lyapunov exponent computation, averaged over $10$ different randomly-generated reservoirs, to give a quantitative illustration of the advantage of regularization. When regularization is used, the negative means and small standard deviations of the computed transverse Lyapunov exponent indicate robust climate stability over the entire range $0.05 \leq \sigma \leq 0.5$. (By contrast, Figs.~\ref{fig:good_bad_prediction}--\ref{fig:Lyp_ridge_regression} depicted values of $\sigma \leq 0.02$.) With no regularization, the means are larger and more variable, indicating less stability and greater sensitivity to the value of $\sigma$, and the standard deviations are significantly larger, indicating lack of robustness from one random reservoir realization to another.
\section{Conclusions and Discussion} \label{sec:discussion}
We presented in Sec.~\ref{sec:Hunt} a partial explanation for how reservoir computing prediction is able to reconstruct the attractor (replicate the climate) for a chaotic process from limited time series data. We argued that the reservoir dynamics (\ref{eqnon}) can be designed so that during the listening period on which training is based, the reservoir state $\mathbf{r}(t)$ is approximately a continuous function $\bm{\upphi}$ of the state $\mathbf{s}(t)$ of the chaotic process. This property, called generalized synchronization, is closely related to the echo state property for reservoir computing.
We showed that both properties hold if the listening reservoir (\ref{eqnon}) is uniformly contracting as a function of the reservoir state; other criteria for these properties have also been identified \cite{stark1997,yildiz2012,caluwaerts2014}. Ideally, the synchronization function $\bm{\upphi}$ should be one-to-one in order to recover the process dynamics from the reservoir dynamics. Investigation of conditions that can guarantee $\bm{\upphi}$ to be one-to-one could help guide reservoir design. However, even in the absence of a guarantee, we noted that embedding results suggest that $\bm{\upphi}$ is likely to be one-to-one if the reservoir state space is sufficiently high-dimensional compared with dimensionality of the chaotic process. Practically speaking, a necessary condition for climate replication is that training be successful in approximately recovering the measured state $\mathbf{u}(t) = \mathbf{h}(\mathbf{s}(t))$ from the reservoir state $\mathbf{r}(t)$; this depends on the amount of training data available and the method of regression used, among other things. We did not address theoretical aspects of training, but we argued that success is plausible if the reservoir is sufficiently high-dimensional and heterogeneous to yield a large variety of basis functions for the regression. We showed that in the limit that the approximations we described are exact, the predicting reservoir (\ref{eqaut}) exactly predicts future values of $\mathbf{u}(t)$. Thus, accurate approximations yield commensurately accurate short-term forecasts. Long-term climate replication depends on stability of the predicting reservoir dynamics with respect to perturbations produced by the approximations. We discussed how to estimate Lyapunov exponents for the predicting reservoir in numerical experiments, whether or not the desired climate is stable. We emphasize that our computation of Lyapunov exponents was intended to illustrate our theory, and that the method we described requires measurements $\{\mathbf{u}(t)\}$ over a long time period to maintain the desired climate. If one's goal is to estimate the Lyapunov exponents of the process that produced $\{\mathbf{u}(t)\}$ from a limited amount of data, one should seek parameters of the predicting reservoir that replicate the climate, and simply compute the Lyapunov exponents of the resulting trajectory\cite{pathak2017using}. In Sec.~\ref{sec:RCN}, we gave examples of climate replication successes and failures, and showed how they correspond to the Lyapunov exponents we computed. We emphasize that the results and the ranges of $\sigma$ we displayed were selected to illustrate and analyze failures that can occur with inadequate input strength (Figs.~\ref{fig:good_bad_prediction}--\ref{fig:Lyp_ridge_regression}) or without regularization (Fig.~\ref{fig:LYP}) in the training. With regularization, we are able to obtain robust climate replication [indicated by Fig.~\ref{fig:LYP_ridge_and_no_ridge}(a)] over a wide range of input strengths. We remark that for simplicity, our theory considered discrete-time reservoir dynamics. Discrete time is the appropriate way to model software reservoirs, but physical reservoirs typically are better modeled by continuous time. With appropriate modifications, our theory applies to the continuous-time case. The prediction time increment $\tau$ used in the training should be the amount of time information takes to traverse the feedback loop depicted in Fig.~\ref{fig:reservoir2}. 
However, with a physical reservoir, careful calibration of the sampled training data may be necessary to meet the goal of predicting $\mathbf{u}(t+\tau)$ based on the listening reservoir's response to input up to time $t$, in part because $\tau$ is a property of the predicting reservoir and not of the listening reservoir. Finally, we argue that in addition to reservoir computing, the theory we presented in Section~\ref{sec:Hunt} applies to some other machine learning methods for time series prediction. The essential features a prediction method needs for our theory to apply are: (1) that the method maintains an internal state, or ``memory'', that depends on the sequence of inputs it receives during training; (2) that it is trained to predict a short time increment ahead, after receiving the input time series for a relatively long time interval; and (3) that it is used to predict farther into the future by iterating its incremental forecasts through a feedback loop. These features are present, for example, in prediction using the FORCE method for training reservoirs \cite{sussillo2009generating} and in recent work using long short-term memory (LSTM) networks for prediction \cite{vlachas2018}. For methods that (unlike reservoir computing) train parameters that affect the internal state in the absence of feedback, our theory applies if we take the function $\mathbf{f}$ in Eq.~(\ref{eqnon}) to represent the update rule for the internal state $\mathbf{r}$ after training has selected parameter values. Though our description of how training arrives at the pair of functions ($\mathbf{f}$,$\hat{\ps}$) was specific to reservoir computing, our discussion of how these functions can be used with Eqs.~(\ref{eqaut}) and (\ref{eqpred}) for prediction and attractor reconstruction is independent of which machine-learning method is used to determine the functions. We gratefully acknowledge the support of grants from ARO (W911NF-12-1-0101) and DARPA. We thank Michelle Girvan, Jaideep Pathak, Sarthak Chandra, Daniel Gauthier, and Daniel Canaday for their input. \bibliographystyle{aipnum4-1}
\section{Introduction} \label{sec:intro} Consider a multiple access channel (MAC) with two senders and one receiver, in which the receiver wishes to reliably estimate a linear function of the transmitted sources from the senders (see Figure~\ref{fig:comp}). One trivial approach to this \emph{computation} problem involves two steps: first recover the individual sources and then compute the function from the recovered sources. When the problem is isolated to the first communication step of this plug-in approach, using the conventional random independently and identically distributed (i.i.d.) code ensembles achieves the optimal rates of communicating independent sources~\cite{Ahlswede1971,Liao1972}. For the problem as a whole, however, the use of random i.i.d.\@ code ensembles is strictly suboptimal even for a trivial MAC. As shown by K\"orner and Marton~\cite{Korner--Marton1979} for the problem of encoding a modulo-two sum of distributed dependent binary sources, using the \emph{same} random ensemble of linear codes at multiple encoders can achieve strictly better rates than using independently generated ensembles of codes. Building on this observation, Nazer and Gastpar~\cite{Nazer--Gastpar2007a} developed a channel coding scheme that uses the same random ensemble of lattice codes at multiple encoders and showed that this \emph{structured} coding scheme outperforms conventional random coding schemes for computing a linear combination of the sources over a linear MAC, even for independent sources. This influential work led to the development of the \emph{compute--forward} strategy for relay networks~\cite{Wilson--Narayanan--Pfister--Sprintson2010,Nam--Chung--Lee2010,Nazer--Gastpar2011}. Over the past decade, the compute--forward strategy based on lattice codes and its extensions have shown to provide higher achievable rates for several communication problems over relay networks~\cite{Wilson--Narayanan--Pfister--Sprintson2010,Nam--Chung--Lee2010,Nazer--Gastpar2011,Niesen--Whiting2012,Song--Devroye2013,Hong--Caire2013,Ren--Goseling--Weber--Gastpar2014}. \begin{figure}[t] \center \includegraphics[scale=0.85]{figures/compmac2} \caption{Linear computation over two-sender multiple access channel} \label{fig:comp} \end{figure} More recently, \emph{nested coset codes}~\cite{Miyake2010, Padakandla--Pradhan2013c} were proposed as more flexible alternatives for achieving the desired linear structure at multiple encoders. In particular, Padakandla and Pradhan~\cite{Padakandla--Pradhan2013c} developed a fascinating coding scheme for the computation problem over an \emph{arbitrary} MAC. In this coding scheme, a coset code with a rate higher than the target (message) rate is first generated randomly. Next, in the \emph{shaping} step, a codeword of a desired property (such as type or joint type) is selected from a subset of codewords (a coset of a subcode). Although reminiscent of the multicoding scheme of Gelfand and Pinsker~\cite{Gelfand--Pinsker1980a} for channels with state, and Marton's coding scheme~\cite{Marton1979} for broadcast channels, this construction is more fundamental in some sense, since the scheme is directly applicable even for classical point-to-point communication channels. A similar shaping technique was also developed for lattice codes in~\cite{Tal--Erez2008}. For multiple encoders, the desired common structure is obtained by using coset codes with the same generator matrix. 
Recent efforts exploited the benefit of such constructions for a broader class of channel models, such as interference channels~\cite{Padakandla--Pradhan2012,Padakandla--Pradhan2016}, multiple access channels~\cite{Sen--Kim2017,Sen--Kim2018s}, and multiple access channels with state~\cite{Padakandla--Pradhan2013}. To develop a unified framework for the compute--forward strategy, Lim, Feng, Pastore, Nazer, and Gastpar~\cite{Lim--Gastpar2016,Lim--Gastpar2017} generalized nested coset codes with the same generator matrix to asymmetric rate pairs. We referred to this generalized version, together with the shaping step, as \emph{homologous} codes~\cite{Sen--Kim2017, Sen--Kim2018s, Sen--Lim--Kim2018c}. This terminology is motivated by its biological definition, i.e., structures modified from the same ancestry (underlying linear code) to adapt to different purposes (desired shape). Lim et al.~\cite{Lim--Gastpar2016,Lim--Gastpar2017} further analyzed \emph{simultaneous decoding} of random ensembles of homologous codes and showed that it can achieve rates higher than existing approaches to computation problems. For instance, when adapted to the Gaussian MAC, the resulting achievable rates improve upon those of lattice codes~\cite{Nazer--Gastpar2011}. With mathematical rate expressions in single-letter mutual information terms and with physical rate performances better than those of lattice codes, homologous codes have the potential to bring a deeper understanding of the fundamental limits of the computation problem. Several open questions remain, however. What is the optimal tradeoff between achievable rates for reliable computation? Which scheme achieves this computation capacity region? The answers require a joint optimization of encoder and decoder designs, which seems to be intractable as in many other network information theory problems. In this paper, we instead concentrate on the performance of the optimal maximum likelihood decoder when the encoder is restricted to a given random ensemble of homologous codes. We characterize the optimal rate region when the desired linear combination and the channel structure are ``matched'' (see Definition~\ref{def:natural} in Section~\ref{sec:main_result}), which is the case in which the benefit of computation can be realized to the fullest extent as indicated by~\cite{Karamchandani--Niesen--Diggavi2013}. This result, inter alia, implies that the suboptimal joint typicality decoding rule proposed in~\cite{Lim--Gastpar2016,Lim--Gastpar2017} achieves this optimal rate region. Thus, the performance of random ensembles of homologous codes cannot be improved by the maximum likelihood decoder. The main contribution lies in the outer bound on the optimal rate region (Theorem~\ref{thm:outer_ncc}), which characterizes the necessary condition that a rate pair must satisfy if the average probability of decoding error vanishes asymptotically. The proof of this bound relies on two key observations. First, the distribution of a given random ensemble of homologous codes converges asymptotically to the product of the desired input distribution. Second, given the channel output, a relatively short list of messages can be constructed that includes the actually transmitted message with high probability. The second observation, which is adapted from the analysis in~\cite{Bandemer--El-Gamal--Kim2012a} for the optimal rate region of interference networks with random i.i.d.
code ensembles, seems to be a recurring path to establishing the optimal performance of random code ensembles. As hinted earlier, the construction of a random ensemble of homologous codes has many similarities to Marton's coding scheme~\cite{Marton1979}, one of the fundamental coding schemes in network information theory. As a result, adapting the proof techniques that we developed for homologous codes, we can establish an outer bound on the optimal rate region for broadcast channels with Marton's coding scheme (Proposition~\ref{prop:outer_marton}). The resulting outer bound coincides with the inner bound that is achieved by \emph{simultaneous nonunique decoding}, thus characterizing the optimal rate region of a two-receiver general broadcast channel achieved by a given random code ensemble. The rest of the paper is organized as follows. Section~\ref{sec:prob_def} formally defines the computation problem. Section~\ref{sec:main_result} presents the main result of the paper---the optimal rate region achievable by a random ensemble of homologous codes. The inner and the outer bounds on this region are presented in Sections~\ref{sec:achiev} and \ref{sec:converse}, respectively. Section~\ref{sec:marton} discusses the optimal rate region for a broadcast channel achievable by Marton's coding scheme. We adapt the notation in~\cite{Cover--Thomas2006, El-Gamal--Kim2011}. The set of integers $\{ 1,2,\ldots, n \}$ is denoted by $[n]$. For a length-$n$ sequence (row vector) $x^n=(x_1,x_2,\ldots,x_n) \in \mathcal{X}^n$, we define its type as $\pi(x | x^n) = {|\{ i \colon x_i = x \}|}/{n}$ for $x \in \mathcal{X}$. Upper case letters $X,Y,\ldots$ denote random variables. For $\epsilon \in (0,1)$, we define the $\epsilon$-typical set of $n$-sequences (or the typical set in short) as ${\mathcal{T}_{\epsilon}^{(n)}}(X) = \{ x^n \colon | p(x) - \pi(x| x^n) | \le \epsilon p(x), \, x \in \mathcal{X} \}$. The indicator function $\mathbbm{1}_{\mathcal{S}}: \mathcal{X} \to \{ 0, 1 \}$ for $\mathcal{S} \sbq \mathcal{X}$ is defined as $\mathbbm{1}_{\mathcal{S}}(x) = 1$ if $x \in \mathcal{S}$ and $0$ otherwise. A length-$n$ row vector of all zeros is denoted by $\mathbf{0}_n$, where the subscript is omitted when it is clear from the context. We denote by $\mathbb{F}_q$ a finite field of size $q$, by $\mathbb{F}_q^*$ the set of nonzero elements in $\mathbb{F}_q$, and by $\mathbb{F}_q^{d}$ the $d$-dimensional vector space over $\mathbb{F}_q$. The limit of a collection of sets $\{\mathcal{A}(\epsilon)\}$ indexed by $\epsilon>0$ is defined as \begin{equation} \label{eq:limit_reg} \lim_{\epsilon \to 0} \mathcal{A}(\epsilon) := \bigcup_{\epsilon > 0} \; \bigcap_{0 < \gamma < \epsilon} \mathcal{A}(\gamma) \overset{(a)}{=} \bigcap_{\epsilon > 0} \; \bigcup_{0 < \gamma < \epsilon} \mathcal{A}(\gamma), \end{equation} which exists if $(a)$ holds. The closure $\mathrm{cl}(\mathcal{A})$ of a set $\mathcal{A} \subseteq \mathbb{R}^d$ denotes the smallest closed superset of $\mathcal{A}$. We use $\epsilon_n \ge 0$ to denote a generic sequence, indexed by $n$, that tends to zero as $n \to \infty$, and use $\delta_i(\epsilon) \ge 0$, $i \in \mathbb{Z}^+$, to denote a continuous function of $\epsilon$ that tends to zero as $\epsilon \to 0$. Throughout the paper, information measures are in logarithm base~$q$.
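To make the notation concrete, the following minimal Python sketch (our own illustration; the function names, the small alphabet, and the example sequence are not part of the formal development) computes the empirical type $\pi(x|x^n)$ and tests membership in the typical set ${\mathcal{T}_{\epsilon}^{(n)}}(X)$ exactly as defined above.
\begin{verbatim}
from collections import Counter

def empirical_type(xn, alphabet):
    # pi(x | x^n) = |{i : x_i = x}| / n for each symbol x
    n = len(xn)
    counts = Counter(xn)
    return {x: counts.get(x, 0) / n for x in alphabet}

def is_typical(xn, p, eps):
    # x^n in T_eps^(n)(X) iff |p(x) - pi(x|x^n)| <= eps * p(x) for all x
    pi = empirical_type(xn, p.keys())
    return all(abs(px - pi[x]) <= eps * px for x, px in p.items())

p = {0: 0.5, 1: 0.25, 2: 0.25}      # a pmf on F_3
xn = [0, 0, 1, 2, 0, 1, 0, 2]       # n = 8, type (1/2, 1/4, 1/4)
print(is_typical(xn, p, eps=0.1))   # True: the type matches p exactly
\end{verbatim}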
\section{Formal Statement of the Problem} \label{sec:prob_def} Consider the two-sender finite-field input memoryless multiple access channel (MAC) \[ (\mathcal{X}_1\times\mathcal{X}_2, p(y|x_1, x_2), \mathcal{Y}) \] in Figure~\ref{fig:comp}, which consists of two sender alphabets $\mathcal{X}_1 = \mathcal{X}_2 = \mathbb{F}_q$, a receiver alphabet $\mathcal{Y}$, and a collection of conditional probability distributions $p_{Y|X_1,X_2}(y|x_1,x_2)$. Each sender $j=1,2$ encodes a message $M_j \in \mathbb{F}_q^{nR_j}$ into a codeword $X_j^n = x_j^n(M_j) \in \mathbb{F}_q^n$ and transmits $X_j^n$ over the channel. Here and henceforth, we assume without loss of generality that $nR_1$ and $nR_2$ are integers. The goal of communication is to convey a linear combination of the codewords. Hence, the receiver finds an estimate ${\hat{W}}^n_{{\bf a}} = {\hat{w}}_{{\bf a}}^n(Y^n) \in \mathbb{F}_q^n$ of \[ W_{{\bf a}}^n := a_1 X_1^n \oplus a_2 X_2^n \] for a desired (nonzero) vector ${\bf a} = [a_1 \; a_2]$ over $\mathbb{F}_q$. Formally, an $(n,nR_1,nR_2)$ \emph{computation code} for the multiple access channel consists of two encoders, one mapping each message $m_j$ to a codeword $x_j^n(m_j)$, $j=1,2$, and a decoder mapping each received sequence $y^n$ to an estimate ${\hat{w}}^n_{{\bf a}}(y^n)$. The collection of codewords $\text{\footnotesize $\mathcal{C}$} _n := \{(x_1^n(m_1), x_2^n(m_2)): (m_1,m_2) \in \mathbb{F}_q^{nR_1} \times \mathbb{F}_q^{nR_2}\}$ is referred to as the \emph{codebook} associated with the $(n,nR_1,nR_2)$ code. \begin{remark} For simplicity of presentation, we consider the case $\mathcal{X}_1 = \mathcal{X}_2 = \mathbb{F}_q$, but our arguments can be extended to arbitrary $\mathcal{X}_1$ and $\mathcal{X}_2$ through the channel transformation technique by Gallager~\cite[Sec. 6.2]{Gallager1968}. More specifically, given a pair of symbol-by-symbol mappings $\varphi_j: \mathbb{F}_q \to \mathcal{X}_j$, $j=1,2$, consider the \emph{virtual channel} with finite field inputs, $p(y | v_1,v_2) = p_{Y | X_1,X_2}(y | \varphi_1(v_1),\varphi_2(v_2))$, for which a computation code is to be defined. The goal of the communication is to convey $W_{{\bf a}}^n := a_1 V_1^n \oplus a_2 V_2^n$, where $V_j^n = v_j^n(M_j) \in \mathbb{F}_q^n$ is the virtual codeword mapped to message $M_j$ at sender $j=1,2$. Our results can be readily applied to this computation problem defined on the virtual channel. \end{remark} The performance of a given computation code with codebook $\text{\footnotesize $\mathcal{C}$} _n$ is measured by the average probability of error \[ P_e^{(n)} (\text{\footnotesize $\mathcal{C}$} _n) = \P( {\hat{W}}_{{\bf a}}^n \neq W_{{\bf a}}^n | \text{\footnotesize $\mathcal{C}$} _n), \] when $M_1$ and $M_2$ are independent and uniformly distributed. A rate pair $(R_1,R_2)$ is said to be \emph{achievable} if there exists a sequence of $(n,nR_1,nR_2)$ computation codes such that \[ \lim_{n \to \infty} P_e^{(n)} (\text{\footnotesize $\mathcal{C}$} _n) = 0 \] and \begin{equation} \label{eq:cond_ent} \lim_{n \to \infty} H(M_j | x_j^n(M_j), \text{\footnotesize $\mathcal{C}$} _n) = 0, \quad j \in \{1,2\} \textrm{ with } a_j \neq 0. \end{equation} Note that without the condition in (\ref{eq:cond_ent}), the problem is trivial and an arbitrarily large rate pair is achievable; for example, if each sender assigned the same codeword to all of its messages, the receiver could compute $W_{{\bf a}}^n$ regardless of the rates. We now define the random ensemble of computation codes referred to as homologous codes. Let $p = p(x_1) p(x_2)$ be a given input pmf on $\mathbb{F}_q \times \mathbb{F}_q$, and let $\epsilon > 0$.
Suppose that the codewords $x_1^n(m_1)$, $m_1 \in \mathbb{F}_q^{nR_1}$, and $x_2^n(m_2)$, $m_2 \in \mathbb{F}_q^{nR_2}$ that constitute the codebook are generated according to the following steps: \begin{enumerate} \item Let ${\hat{R}}_j = D(p_{X_j} \| \mathrm{Unif}(\mathbb{F}_q))+\epsilon$, $j=1,2,$ where $D(\cdot\|\cdot)$ is the Kullback--Leibler divergence. \item Randomly generate a $\kappa \times n$ generator matrix $G$, and two dither vectors $D_1^n$ and $D_2^n$ such that the elements of $G, D_1^n$, and $D_2^n$ are i.i.d. $\mathrm{Unif}(\mathbb{F}_q)$ random variables, where $\kappa = \max \{ nR_1 + n {\hat{R}}_1, nR_2 + n {\hat{R}}_2 \}$. \item Given the realizations $\mathsf{G}, d_1^n$, and $d_2^n$ of the generator matrix and dithers, let \[ u_j^n(m_j,l_j) = [m_j \;\: l_j \;\: \mathbf{0}] \: \mathsf{G} + d_j^n, \quad m_j \in \mathbb{F}_q^{nR_j}, \; l_j \in \mathbb{F}_q^{n{\hat{R}}_j}, \; j=1,2. \] At sender $j=1,2$, assign a codeword $x_j^n(m_j) = u_j^n(m_j, L_j(m_j))$ to each message $m_j \in \mathbb{F}_q^{nR_j}$, where $L_j(m_j)$ is a random variable that is drawn uniformly at random among all $l_j$ vectors satisfying $u_j^n(m_j,l_j) \in {\mathcal{T}_{\epsilon}^{(n)}}(X_j)$ if there exists one, or among $\mathbb{F}_q^{n{\hat{R}}_j}$ otherwise. \end{enumerate} With a slight abuse of terminology, we refer to the random tuple $\mathcal{C}_n := (G, D_1^n, D_2^n, (L_1(m_1): m_1 \in \mathbb{F}_q^{nR_1}), (L_2(m_2): m_2 \in \mathbb{F}_q^{nR_2}))$ as the \emph{random homologous codebook}. Each realization of the random homologous codebook $\mathcal{C}_n$ results in one instance $\{(x_1^n(m_1), x_2^n(m_2)): (m_1,m_2) \in \mathbb{F}_q^{nR_1} \times \mathbb{F}_q^{nR_2} \}$ of such generated codebooks, which constitutes an $(n,nR_1,nR_2)$ computation code along with the optimal decoder. The random code ensemble generated in this manner is referred to as an $(n, nR_1,nR_2; p, \epsilon)$ \emph{random homologous code ensemble}, where $p$ is the given input pmf and $\epsilon>0$ is the parameter used in steps $1$ and $3$ in codebook generation. A rate pair $(R_1,R_2)$ is said to be \emph{achievable by the $(p,\epsilon)$-distributed random homologous code ensemble} if there exists a sequence of $(n, nR_1,nR_2; p, \epsilon)$ random homologous code ensembles such that \[ \lim_{n \to \infty} \E_{\mathcal{C}_n} [ P_e^{(n)} (\mathcal{C}_n)] = 0 \] and \begin{equation} \label{eq:con_ent_avg} \lim_{n \to \infty} H(M_j | X_j^n(M_j), \mathcal{C}_n) = 0, \quad j \in \{1,2\} \textrm{ with } a_j \neq 0. \end{equation} Here the expectation is with respect to the random homologous codebook $\mathcal{C}_n$, i.e., $(G, D_1^n, D_2^n, (L_1(m_1): m_1 \in \mathbb{F}_q^{nR_1}), (L_2(m_2): m_2 \in \mathbb{F}_q^{nR_2}))$. Given $(p,\epsilon)$, let $\mathscr{R}^*(p,\epsilon)$ be the set of all rate pairs achievable by the $(p,\epsilon)$-distributed random homologous code ensemble. Given the input pmf $p$, the optimal rate region $\mathscr{R}^*(p)$, when it exists, is defined as \[ \mathscr{R}^*(p) := \mathrm{cl} \left[\lim_{\epsilon \to 0} \mathscr{R}^*(p,\epsilon) \right]. \] \section{Main Result} \label{sec:main_result} In this section, we present a single-letter characterization of the optimal rate region when the target linear combination is in the following class.
\begin{definition} \label{def:natural} A linear combination $W_{{\bf a}} = a_1 X_1 \oplus a_2 X_2$ for some ${\bf a} = [a_1 \; a_2] \in \mathbb{F}_q^2 \setminus{ \{ \mathbf{0}\}}$ is said to be \emph{natural} if \begin{align} \label{eq:comp_favor} H(W_{{\bf a}}|Y) = \min_{{\bf b} \neq \mathbf{0}} H(W_{{\bf b}} |Y), \end{align} where ${\bf b} = [b_1 \; b_2]$ and $W_{{\bf b}} = b_1 X_1 \oplus b_2 X_2$ are over $\mathbb{F}_q$. \end{definition} In words, a natural combination $W_{{\bf a}}$ is the easiest to recover at the receiver and thus, in some sense, is the best linear combination that is matched to the channel structure. We are now ready to present the optimal rate region for computing natural linear combinations. \begin{theorem} \label{thm:optimal_comp} Given an input pmf $p = p(x_1)p(x_2)$, the optimal rate region $\mathscr{R}^*(p)$ for computing a natural combination $W_{{\bf a}}$ is the set of rate pairs $(R_1,R_2)$ such that \begin{subequations} \label{eq:Rr**} \begin{align} R_j &\le I(X_j; Y|X_{j^c}),\\ R_j &\le I(X_1, X_2; Y)-\min\{R_{j^c}, I(X_{j^c}; W_{{\bf a}}, Y)\} \end{align} \end{subequations}% for every $j \in \{1,2\}$ with $a_j \neq 0$, where $j^c = \{1,2 \} \setminus \{j \}$. \end{theorem} The rate region in (\ref{eq:Rr**}) in Theorem \ref{thm:optimal_comp}, which we will denote as $\mathscr{R}^{**}(p)$, can be equivalently characterized in terms of well-known rate regions for compute--forward and message communication. Let $\Rr_\mathrm{CF}(p)$ be the set of rate pairs $(R_1,R_2)$ such that \begin{equation} \label{eq:Rcf} R_j \le H(X_j) - H(W_{{\bf a}} | Y), \quad \forall j \in \{1,2\} \text{ with } a_j \neq 0. \end{equation} Let $\Rr_\mathrm{MAC}(p)$ be the set of rate pairs $(R_1,R_2)$ such that \begin{align*} R_1 &\le I(X_1;Y|X_2), \\ R_2 &\le I(X_2;Y|X_1), \\ R_1 + R_2 &\le I(X_1,X_2;Y). \end{align*} \begin{proposition} \label{prop:opt_reg} For any input pmf $p=p(x_1)p(x_2)$ and any linear combination $W_{{\bf a}}$, \[ \mathscr{R}^{**}(p) = \Rr_\mathrm{CF}(p) \cup \Rr_\mathrm{MAC}(p). \] \end{proposition} The proof of Proposition~\ref{prop:opt_reg} is relegated to Appendix~\ref{app:prop1}. We prove Theorem \ref{thm:optimal_comp} in three steps: 1) we first present a general (not necessarily for natural combinations) inner bound on the optimal rate region in Section~\ref{sec:achiev}, where we follow the results in~\cite{Lim--Gastpar2016, Lim--Gastpar2017} that studied the rate region achievable by random homologous code ensembles using a suboptimal joint typicality decoding rule, 2) we then show by Lemma \ref{lem:equiv_reg} in Section~\ref{sec:achiev} that this inner bound is equivalent to $\mathscr{R}^{**}(p)$ in Proposition \ref{prop:opt_reg} if $W_{{\bf a}}$ is a natural combination, and 3) we present a general (not necessarily for natural combinations) outer bound on the optimal rate region in Section~\ref{sec:converse} by showing that if a rate pair $(R_1,R_2)$ is achievable by the $(p,\epsilon)$-distributed random homologous code ensemble for arbitrarily small $\epsilon$, then $(R_1,R_2)$ must lie in $\mathscr{R}^{**}(p)$ in Theorem \ref{thm:optimal_comp}. \section{An Inner Bound} \label{sec:achiev} The computation performance of random homologous code ensembles was studied using a suboptimal \emph{joint typicality} decoder in~\cite{Lim--Gastpar2016, Lim--Gastpar2017}. 
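To make this construction concrete, the following Python sketch (an illustration of steps 1--3 of Section~\ref{sec:prob_def}, not an implementation used anywhere in the paper; the tiny parameters and all identifiers are our own choices, and $q$ is taken prime so that arithmetic modulo $q$ realizes $\mathbb{F}_q$) generates homologous codewords $[m_j \; l_j \; \mathbf{0}] \, \mathsf{G} + d_j^n$ from a common generator matrix and applies the typicality-based shaping of step 3.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
q, n = 3, 12
nR  = [2, 3]   # message lengths n*R_1, n*R_2 (in F_q symbols)
nRh = [2, 2]   # shaping lengths n*Rhat_1, n*Rhat_2
kappa = max(nR[0] + nRh[0], nR[1] + nRh[1])

G = rng.integers(q, size=(kappa, n))             # common generator matrix
d = [rng.integers(q, size=n) for _ in range(2)]  # independent dithers

def u(j, m, l):
    # u_j^n(m_j, l_j) = [m_j  l_j  0] G + d_j over F_q
    v = np.zeros(kappa, dtype=int)
    v[:nR[j]] = m
    v[nR[j]:nR[j] + nRh[j]] = l
    return (v @ G + d[j]) % q

def encode(j, m, pmf, eps=0.25):
    # shaping (step 3): among all l_j, pick one whose codeword is
    # eps-typical w.r.t. pmf, if any; otherwise pick l_j uniformly
    ls = [np.array(t) for t in np.ndindex(*([q] * nRh[j]))]
    good = [l for l in ls
            if all(abs(pmf[x] - np.mean(u(j, m, l) == x)) <= eps * pmf[x]
                   for x in range(q))]
    pool = good if good else ls
    return u(j, m, pool[rng.integers(len(pool))])

x1 = encode(0, np.array([1, 2]), pmf={0: 0.5, 1: 0.25, 2: 0.25})
print(x1)  # one codeword of sender 1
\end{verbatim}
Since both senders share $\mathsf{G}$, any linear combination $a_1 x_1^n \oplus a_2 x_2^n$ of codewords is itself a dithered codeword of the same linear code, which is the structural property exploited by the decoding rule described next.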
For completeness, we first describe the joint typicality decoding rule and then characterize the rate region achievable by the $(p,\epsilon)$-distributed random homologous code ensemble \emph{under this joint typicality decoding rule}. We then concentrate on an arbitrarily small $\epsilon$ to provide an inner bound on the optimal rate region $\mathscr{R}^*(p)$. We will omit the steps that were already established in~\cite{Lim--Gastpar2016, Lim--Gastpar2017} and instead provide detailed references. Upon receiving $y^n$, the $\epsilon'$-joint typicality decoder, $\epsilon'>0$, looks for a unique vector $s \in \mathbb{F}_q^{\kappa}$ such that \[ s = a_1[m_1 \; l_1\; \mathbf{0}] \oplus a_2 [m_2 \; l_2\; \mathbf{0}], \] for some $(m_1,l_1,m_2,l_2) \in \mathbb{F}_q^{nR_1} \times \mathbb{F}_q^{n{\hat{R}}_1} \times \mathbb{F}_q^{nR_2} \times \mathbb{F}_q^{n{\hat{R}}_2}$ that satisfies \[ (u_1^n(m_1,l_1), u_2^n(m_2,l_2),y^n) \in {\mathcal{T}_{\epsilon'}^{(n)}}(X_1,X_2,Y). \] If the decoder finds such $s$, then it declares ${\hat{w}}_{{\bf a}}^n = s \mathsf{G} \oplus a_1 d_1^n \oplus a_2 d_2^n$ as an estimate; otherwise, it declares an error. To describe the performance of the joint typicality decoder, we define $\Rr_\mathrm{CF}(p,\delta)$ for a given input pmf $p$ and $\delta \ge 0$ as the set of rate pairs $(R_1,R_2)$ such that \[ R_j \le H(X_j) - H(W_{{\bf a}} | Y) - \delta, \quad \forall j \in \{1,2\} \text{ with } a_j \neq 0. \] Similarly, we define $\mathscr{R}_1(p,\delta)$ as the set of rate pairs $(R_1,R_2)$ such that \begin{subequations} \label{eq:Rr1} \begin{align} R_1 &\le I(X_1;Y|X_2) - \delta, \\ R_2 &\le I(X_2;Y|X_1) - \delta, \\ R_1 + R_2 &\le I(X_1,X_2;Y) - \delta, \\ R_1 &\le I(X_1,X_2;Y) - H(X_2) + \min_{b_1,b_2 \in \mathbb{F}_q^*} H(W_{{\bf b}}|Y) - \delta, \end{align} \end{subequations}% and $\mathscr{R}_2(p,\delta)$ as the set of rate pairs $(R_1,R_2)$ such that \begin{subequations} \label{eq:Rr2} \begin{align} R_1 &\le I(X_1;Y|X_2) - \delta, \\ R_2 &\le I(X_2;Y|X_1) - \delta, \\ R_1 + R_2 &\le I(X_1,X_2;Y) - \delta, \\ R_2 &\le I(X_1,X_2;Y) - H(X_1) + \min_{b_1,b_2 \in \mathbb{F}_q^*} H(W_{{\bf b}}|Y) - \delta, \end{align} \end{subequations}% where ${\bf b} = [b_1 \; b_2]$ and $W_{{\bf b}} = b_1 X_1 \oplus b_2 X_2$ are over $\mathbb{F}_q$. Note that $\Rr_\mathrm{CF}(p,\delta=0)$ is the region $\Rr_\mathrm{CF}(p)$ defined in (\ref{eq:Rcf}) in Section~\ref{sec:main_result}. Similarly, let $\mathscr{R}_{j}(p)$ denote the region $\mathscr{R}_{j}(p,\delta=0)$ for $j=1,2$ in (\ref{eq:Rr1}) and (\ref{eq:Rr2}). We are now ready to state the rate region achievable by the random homologous code ensembles that combines the inner bounds in~\cite[Theorem 1]{Lim--Gastpar2016} and~\cite[Corollary 1]{Lim--Gastpar2017}. \begin{theorem} \label{thm:achiev} Let $p=p(x_1)p(x_2)$ be an input pmf and $\delta>0$. Then, there exists $\epsilon' < \delta$ such that for every $\epsilon < \epsilon'$ sufficiently small, a rate pair \begin{equation} \label{eq:comp_inner} (R_1,R_2) \in \Rr_\mathrm{CF}(p,\delta) \cup \mathscr{R}_1(p,\delta) \cup \mathscr{R}_2(p,\delta) \end{equation} is achievable by the $(p,\epsilon)$-distributed random homologous code ensemble along with the $\epsilon'$-joint typicality decoder for computing an \emph{arbitrary} linear combination $W_{{\bf a}}$. In particular, \begin{equation} \label{eq:comp_inner_limit} [\Rr_\mathrm{CF}(p) \cup \mathscr{R}_1(p) \cup \mathscr{R}_2(p)] \sbq \mathscr{R}^{*}(p).
\end{equation} \end{theorem} \begin{IEEEproof} By~\cite[Theorem 1]{Lim--Gastpar2016}, for sufficiently small $\epsilon < \epsilon' < \delta$, the average probability of error for the $(p,\epsilon)$-distributed random homologous code ensemble paired with the $\epsilon'$-joint typicality decoder tends to zero as $n \to \infty$ if \begin{equation} \label{eq:achiev_cond1} (R_1,R_2) \in \Rr_\mathrm{CF}(p,\delta). \end{equation} Similarly, by~\cite[Corollary 1]{Lim--Gastpar2017}, the average probability of error tends to zero as $n \to \infty$ if \begin{equation} \label{eq:achiev_cond2} (R_1,R_2) \in \mathscr{R}_{1}(p,\delta) \cup \mathscr{R}_{2}(p,\delta). \end{equation} Combining (\ref{eq:achiev_cond1}) and (\ref{eq:achiev_cond2}) establishes (\ref{eq:comp_inner}). We still need to show that the condition in (\ref{eq:con_ent_avg}) holds. Suppose that $a_j \neq 0$. Let $G_j$ denote the submatrix that consists of the first $(nR_j + n{\hat{R}}_j )$ rows of $G$ and $S_j$ be the indicator variable such that $S_j = 1$ if $G_j$ is full rank. Then, \begin{align*} H(M_j|X_j^n(M_j),\mathcal{C}_n) &= H(M_j | X_j^n(M_j), G, D_1^n, D_2^n, (L_1(m_1): m_1 \in \mathbb{F}_q^{nR_1}),(L_2(m_2): m_2 \in \mathbb{F}_q^{nR_2})) \\ & \le H(M_j | X_j^n(M_j), G, D_j^n, (L_j(m_j): m_j \in \mathbb{F}_q^{nR_j})) \\ &= H(M_j | X_j^n(M_j), G, S_j, D_j^n, (L_j(m_j): m_j \in \mathbb{F}_q^{nR_j})) \\ &= H(M_j | X_j^n(M_j), G, S_j = 1, D_j^n, (L_j(m_j): m_j \in \mathbb{F}_q^{nR_j})) \P(S_j = 1) \\ & \quad \quad \quad \quad + H(M_j | X_j^n(M_j), G, S_j = 0, D_j^n, (L_j(m_j): m_j \in \mathbb{F}_q^{nR_j})) \P(S_j = 0) \\ &= H(M_j | X_j^n(M_j), G, S_j = 0, D_j^n, (L_j(m_j): m_j \in \mathbb{F}_q^{nR_j})) \P(S_j = 0) \\ &\le nR_j \P(S_j = 0). \end{align*} Now, by Lemma~\ref{lem:full_rank_G} in Appendix~\ref{app:full_rank} (with $R \leftarrow R_j + {\hat{R}}_j$), the term $n \P(S_j = 0)$ tends to zero as $n \to \infty$ if $R_j < H(X_j) - \epsilon$. Since this condition is satisfied whenever (\ref{eq:comp_inner}) holds, the condition in (\ref{eq:con_ent_avg}) follows as well, which completes the proof of (\ref{eq:comp_inner}). The proof of (\ref{eq:comp_inner_limit}) follows by taking the closure of the union of (\ref{eq:comp_inner}) over all $\delta>0$, which completes the proof of Theorem~\ref{thm:achiev}. \end{IEEEproof} The inner bound (\ref{eq:comp_inner_limit}) in Theorem~\ref{thm:achiev} is valid for computing an arbitrary linear combination, which may not be equal to the rate region $\mathscr{R}^{**}(p)$ in Theorem \ref{thm:optimal_comp} in general. For computing a \emph{natural} linear combination, however, the following lemma shows that the equivalent rate region in Proposition \ref{prop:opt_reg} is achievable. \begin{lemma} \label{lem:equiv_reg} If the desired linear combination $W_{{\bf a}}=a_1 X_1 \oplus a_2 X_2$ for $(a_1,a_2) \neq (0,0)$ is natural, then \[ [\Rr_\mathrm{CF}(p) \cup \mathscr{R}_{1}(p) \cup \mathscr{R}_2(p)] = [\Rr_\mathrm{CF}(p) \cup \Rr_\mathrm{MAC}(p)]. \] \end{lemma} The proof of Lemma~\ref{lem:equiv_reg} is relegated to Appendix~\ref{app:lemma1}. \section{An Outer Bound} \label{sec:converse} We first present an outer bound on the rate region $\mathscr{R}^*(p,\epsilon)$ for a fixed input pmf $p$ and $\epsilon>0$. We then discuss the limit of this outer bound as $\epsilon \to 0$ to establish an outer bound on the rate region $\mathscr{R}^*(p)$.
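As a brief numerical aside (our own illustration, not part of the converse argument), the following Python sketch estimates by Monte Carlo the probability that a random $nR \times n$ matrix over $\mathbb{F}_q$ is full rank, in line with Lemma~\ref{lem:full_rank_G} invoked in the achievability proof above; $q$ is taken prime so that arithmetic modulo $q$ realizes $\mathbb{F}_q$, and the parameters are chosen only for illustration.
\begin{verbatim}
import numpy as np

def rank_mod_q(A, q):
    # Gaussian elimination over F_q (q prime)
    A = A.copy() % q
    rows, cols = A.shape
    r = 0
    for c in range(cols):
        piv = next((i for i in range(r, rows) if A[i, c]), None)
        if piv is None:
            continue
        A[[r, piv]] = A[[piv, r]]
        A[r] = (A[r] * pow(int(A[r, c]), q - 2, q)) % q  # Fermat inverse
        for i in range(rows):
            if i != r and A[i, c]:
                A[i] = (A[i] - A[i, c] * A[r]) % q
        r += 1
        if r == rows:
            break
    return r

rng = np.random.default_rng(1)
q, n, nR, trials = 3, 24, 12, 2000
full = sum(rank_mod_q(rng.integers(q, size=(nR, n)), q) == nR
           for _ in range(trials))
print(full / trials)  # close to 1, consistent with the lemma
\end{verbatim}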
Given an input pmf $p$ and $\delta >0$, we define the rate region $\mathscr{R}^{**}(p,\delta)$ as the set of rate pairs $(R_1,R_2)$ such that \begin{subequations} \begin{align} \label{eq:type1} R_j &\le I(X_j; Y|X_{j^c}) + \delta,\\ \label{eq:type2} R_j &\le I(X_1, X_2; Y)-\min\{R_{j^c}, I(X_{j^c}; W_{{\bf a}}, Y)\} + \delta, \end{align} \end{subequations}% for every $j \in \{1,2\}$ with $a_j \neq 0$, where $j^c = \{1,2 \} \setminus \{j \}$. Note that $\mathscr{R}^{**}(p, \delta=0)$ is equal to $\mathscr{R}^{**}(p)$ as defined in (\ref{eq:Rr**}). We are now ready to state the outer bound on the optimal rate region for computing an \emph{arbitrary} linear combination, which is also an outer bound on $\mathscr{R}^*(p)$ in Theorem \ref{thm:optimal_comp} for computing a \emph{natural} combination. \begin{theorem} \label{thm:outer_ncc} Let $p=p(x_1)p(x_2)$ be an input pmf and $\epsilon>0$. If a rate pair $(R_1, R_2)$ is achievable by the $(p,\epsilon)$-distributed random homologous code ensemble for computing an arbitrary linear combination $W_{{\bf a}}$, then there exists a continuous $\delta'(\epsilon)$ that tends to zero monotonically as $\epsilon \to 0$ such that \begin{equation} \label{eq:outer_comp} (R_1,R_2) \in \mathscr{R}^{**}(p,\delta'(\epsilon)). \end{equation} In particular, \begin{equation} \label{eq:outer_comp_limit} \mathscr{R}^*(p) \sbq \mathscr{R}^{**}(p). \end{equation} \end{theorem} \begin{IEEEproof} We first start with an averaged version of Fano's inequality for a random homologous code ensemble $\mathcal{C}_n$ (recall the notation in Section~\ref{sec:prob_def}), the proof of which is relegated to Appendix~\ref{app:fano_homolog}. \begin{lemma} \label{lem:Fano_homolog} If \[ \lim_{n \to \infty} \E_{\mathcal{C}_n} [ P_e^{(n)}(\mathcal{C}_n)] = 0 \] and \[ \lim_{n \to \infty} H(M_j | X_j^n(M_j), \mathcal{C}_n) = 0 \] for $j \in \{1,2\}$ with $a_j \neq 0$, then for each $j \in \{1, 2\}$ with $a_j \neq 0$ \[ H(M_j|Y^n,M_{j^c},\mathcal{C}_n) \le n \epsilon_n \] for some $\epsilon_n \to 0$ as $n \to \infty$. \end{lemma} We next define the indicator random variable \begin{equation} \label{eq:indicator_en} E_n = \mathbbm{1}_{\{ (X_1^n(M_1),X_2^n(M_2)) \in {\mathcal{T}_{\epsilon'}^{(n)}}(X_1,X_2) \}} \end{equation}% for $\epsilon'>0$. Since ${\hat{R}}_i = D(p_{X_i} \| \mathrm{Unif}(\mathbb{F}_q)) + \epsilon $, $i=1,2$, by the Markov lemma~\cite[Lemma 12]{Lim--Gastpar2016} for homologous codes, $\P(E_n = 0)$ tends to zero as $n \to \infty$ if $\epsilon'$ is sufficiently large compared to $\epsilon$. Let $\epsilon' = \delta_1(\epsilon)$, which still tends to zero as $\epsilon \to 0$. Suppose that $a_j \neq 0$. 
Then, for $n$ sufficiently large, \begin{align} \notag nR_j &= H(M_j|M_{j^c}, \mathcal{C}_n) \\ \notag & \overset{(a)}{\le} I(M_j;Y^n| M_{j^c}, \mathcal{C}_n) + n \epsilon_n \\ \notag & \le I(M_j, E_n ;Y^n| M_{j^c}, \mathcal{C}_n) + n \epsilon_n \\ \notag & \overset{(b)}{\le} 1 + I(M_j ;Y^n| M_{j^c}, \mathcal{C}_n, E_n) + n \epsilon_n \\ \notag & \le 1 + I(M_j ;Y^n| M_{j^c}, \mathcal{C}_n, E_n = 0) \P(E_n = 0) + I(M_j ;Y^n| M_{j^c}, \mathcal{C}_n, E_n = 1) \P(E_n = 1) + n \epsilon_n \\ \notag & \le 1 + nR_j \P(E_n = 0) + I(M_j ;Y^n| M_{j^c}, \mathcal{C}_n, E_n = 1) + n \epsilon_n \\ \notag & = 1 + nR_j \P(E_n = 0) + \sum_{i=1}^{n} I(M_j;Y_i | Y^{i-1}, M_{j^c}, \mathcal{C}_n, X_{j^c i}, E_n = 1) + n \epsilon_n \\ \notag & \le 1 + nR_j \P(E_n = 0) + \sum_{i=1}^{n} I(M_j, X_{ji}, Y^{i-1}, M_{j^c}, \mathcal{C}_n ;Y_i | X_{j^c i}, E_n = 1) + n \epsilon_n \\ &\overset{(c)}{=} 1 + nR_j \P(E_n = 0) + \sum_{i=1}^{n} I( X_{ji} ;Y_i | X_{j^c i}, E_n = 1) + n \epsilon_n, \label{eq:outer0} \end{align}% where $(a)$ follows by Lemma~\ref{lem:Fano_homolog}, $(b)$ follows since $E_n$ is a binary random variable, and $(c)$ follows since $(M_1,M_2, Y^{i-1}, \mathcal{C}_n, E_n)\to(X_{1i}, X_{2i}) \to Y_i$ form a Markov chain for every $i \in [n]$. To further upper bound (\ref{eq:outer0}), we make a connection between the distribution of the random homologous codebook and the input pmf $p$ as follows. \begin{lemma} \label{lem:output_ncc_input} Let $(X,Y) \sim p_{X,Y}(x,y)$ on $\mathbb{F}_q \times \mathcal{Y}$ and $\epsilon > 0$. Let $X^n(m)$ be the random codeword assigned to message $m \in \mathbb{F}_q^{nR}$ by an $(n,nR;p_X,\epsilon)$ random homologous code ensemble. Further let $Y^n$ be a random sequence distributed according to $\prod_{i=1}^n p_{Y|X}(y_i|x_i)$. Then, for every $(x,y) \in \mathbb{F}_q \times \mathcal{Y}$ \[ (1-\epsilon) p_{X,Y}(x,y) \le \P(X_i = x, Y_i = y | X^n \in {\mathcal{T}_{\epsilon}^{(n)}}(X)) \le (1+\epsilon) p_{X,Y}(x,y), \] where $i=1,2,\ldots,n$. \end{lemma} The proof of Lemma~\ref{lem:output_ncc_input} is relegated to Appendix~\ref{app:lemma_pmf}. Returning to the proof of Theorem~\ref{thm:outer_ncc}, we are now ready to establish (\ref{eq:type1}). Combining (\ref{eq:outer0}) with Lemma~\ref{lem:output_ncc_input} (with $p(x) \leftarrow p(x_1)p(x_2)$), we have \begin{align} \notag nR_j &\le 1 + nR_j \P(E_n = 0) + n ( I( X_j ;Y | X_{j^c}) + \delta_2(\epsilon)) + n \epsilon_n \\ \label{eq:outer1} & \overset{(d)}{\le} n ( I( X_j ;Y | X_{j^c}) + \delta_2(\epsilon)) + 2 n \epsilon_n, \end{align}% where $(d)$ follows since $\P(E_n = 0)$ tends to zero as $n \to \infty$. For the proof of (\ref{eq:type2}), we start with \begin{align} \notag nR_j &= H(M_j|M_{j^c}, \mathcal{C}_n) \\ \notag & \overset{(a)}{\le} I(M_j;Y^n| M_{j^c}, \mathcal{C}_n) + n \epsilon_n \\ \label{eq:outer2_0} & = I(M_1, M_2; Y^n|\mathcal{C}_n) - I(M_{j^c}; Y^n |\mathcal{C}_n) + n \epsilon_n, \end{align}% where $(a)$ follows by Lemma~\ref{lem:Fano_homolog}.
Following arguments similar to (\ref{eq:outer1}), the first term in (\ref{eq:outer2_0}) can be bounded as \begin{align} \notag I(M_1, M_2; Y^n|\mathcal{C}_n) &\le 1 + n(R_1+R_2) \P(E_n = 0) + \sum_{i=1}^n I(M_1, M_2; Y_i | \mathcal{C}_n, Y^{i-1}, E_n=1) \\ \notag & \le n \epsilon_n + \sum_{i=1}^n I(M_1, M_2, \mathcal{C}_n, Y^{i-1} ; Y_i | E_n = 1) \\ \notag &= n \epsilon_n + \sum_{i=1}^n I(M_1, M_2, \mathcal{C}_n, Y^{i-1}, X_{1i}, X_{2i} ; Y_i | E_n = 1) \\ \notag &= n \epsilon_n + \sum_{i=1}^n I(X_{1i}, X_{2i} ; Y_i | E_n = 1) \\ \label{eq:outer2_1} &\le n \epsilon_n + n (I(X_1, X_2 ; Y) + \delta_3(\epsilon)). \end{align}% To bound the second term in (\ref{eq:outer2_0}), we need the following lemma, which is proved in Appendix~\ref{app:list_homolog}. \begin{lemma} \label{lem:outer2_list} For every $\epsilon'' > \epsilon'$ and for $n$ sufficiently large, \[ I(M_{j^c}; Y^n |\mathcal{C}_n) \ge n [ \min \{ R_{j^c}, I(X_{j^c};W_{{\bf a}}, Y) \} - \delta_4(\epsilon'') ]- n \epsilon_n. \] \end{lemma} Combining (\ref{eq:outer2_0}), (\ref{eq:outer2_1}), and Lemma \ref{lem:outer2_list} with $\epsilon'' = 2\delta_1(\epsilon)$, we have \begin{align} \label{eq:outer2} nR_j \le n (I(X_1,X_2;Y) + \delta_3(\epsilon)) - n [\min \{ R_{j^c}, I(X_{j^c};W_{{\bf a}},Y) \} - \delta_5(\epsilon)] + 2 n \epsilon_n \end{align}% for $n$ sufficiently large. Letting $n \to \infty$ in (\ref{eq:outer1}) and (\ref{eq:outer2}) establishes \begin{align*} R_j & \le I(X_j; Y | X_{j^c}) + \delta_2(\epsilon), \\ R_j & \le I(X_1,X_2;Y) - \min \{ R_{j^c}, I(X_{j^c};W_{{\bf a}},Y) \} + \delta_6(\epsilon). \end{align*} The proof of (\ref{eq:outer_comp}) follows by taking a continuous monotonic function $\delta'(\epsilon) \ge \max \{\delta_2(\epsilon), \delta_6(\epsilon) \}$ that tends to zero as $\epsilon \to 0$. Letting $\epsilon \to 0$ in (\ref{eq:outer_comp}) establishes (\ref{eq:outer_comp_limit}), which completes the proof of Theorem~\ref{thm:outer_ncc}. \end{IEEEproof} \section{Optimal Achievable Rates for Broadcast Channels with Marton Coding} \label{sec:marton} In this section, we apply the techniques developed in the previous sections to establish the optimal rate region for broadcast channels by Marton coding. Consider the two-receiver discrete memoryless broadcast channel (DM-BC) $(\mathcal{X}, p(y_1,y_2|x), \mathcal{Y}_1 \times \mathcal{Y}_2)$ in Fig.~\ref{fig:bc}, where the sender communicates independent messages $M_1$ and $M_2$ to respective receivers (see~\cite{Cover1972, Cover1998, Marton1979} for the formal definition of the communication problem over the broadcast channel). \begin{figure}[h] \center \includegraphics[scale=0.85]{figures/broadcast_chan} \caption{Two-receiver broadcast channel} \label{fig:bc} \end{figure} Let $p = p(u_1,u_2)$ be a given pmf on some finite set $\mspace{1.5mu}\mathcal{U}_1 \times \mspace{1.5mu}\mathcal{U}_2$, and $x(u_1,u_2)$ be a function from $\mspace{1.5mu}\mathcal{U}_1 \times \mspace{1.5mu}\mathcal{U}_2$ to $\mathcal{X}$, and let $\epsilon > 0$ and $\alpha \in [0 \; 1]$. The random ensemble of \emph{Marton} codes \cite{Marton1979} is generated according to the following steps: \begin{enumerate} \item Let ${\hat{R}}_1 = \alpha(I(U_1;U_2) + 10 \epsilon H(U_1,U_2))$ and ${\hat{R}}_2 = \overline{\a}(I(U_1;U_2) + 10 \epsilon H(U_1,U_2))$, where $\overline{\a} := (1-\alpha)$. \item For each $m_1 \in [2^{nR_1}]$, generate \emph{auxiliary} codewords $u_1^n(m_1,l_1), l_1 \in [2^{n{\hat{R}}_1}]$, each drawn i.i.d. from $p(u_1)$. 
Similarly, for each $m_2 \in [2^{nR_2}]$, generate \emph{auxiliary} codewords $u_2^n(m_2,l_2), l_2 \in [2^{n{\hat{R}}_2}]$, each drawn i.i.d. from $p(u_2)$. \item At the sender, for each message pair $(m_1,m_2) \in [2^{nR_1}] \times [2^{nR_2}]$, find an index pair $(l_1,l_2) \in [2^{n{\hat{R}}_1}] \times [2^{n{\hat{R}}_2}]$ such that \[ (u_1^n(m_1,l_1), u_2^n(m_2,l_2)) \in {\mathcal{T}_{\epsilon}^{(n)}}(U_1,U_2), \] and assign the codeword $x^n(m_1,m_2)$ as $x_i(m_1,m_2) = x(u_{1i}(m_1, l_1), u_{2i}(m_2, l_2)), i \in [n]$. If there is more than one such pair $(l_1,l_2)$, choose one of them uniformly at random; otherwise, choose one uniformly at random from $[2^{n{\hat{R}}_1}] \times [2^{n{\hat{R}}_2}]$. \end{enumerate} We refer to the random tuple $\mathcal{C}_n := ((U_1^n(m_1,l_1): m_1 \in [2^{nR_1}], l_1 \in [2^{n{\hat{R}}_1}]), (U_2^n(m_2,l_2): m_2 \in [2^{nR_2}], l_2 \in [2^{n{\hat{R}}_2}]), ((L_1,L_2,x)(m_1,m_2): m_1 \in [2^{nR_1}], m_2 \in [2^{nR_2}]))$ as the \emph{Marton random codebook}. Each realization of the Marton random codebook $\mathcal{C}_n$ results in one instance $\{x^n(m_1,m_2): (m_1,m_2) \in [2^{nR_1}] \times [2^{nR_2}] \}$ of such generated codebooks, which constitutes an $(n,nR_1,nR_2)$ code for the DM-BC along with the optimal decoder. The random code ensemble generated in this manner is referred to as an $(n, nR_1,nR_2; p, \alpha, \epsilon)$ \emph{Marton random code ensemble}, where $p=p(u_1,u_2)$ is the given pmf, $\alpha \in [0 \; 1]$ is the parameter used in step $(1)$, and $\epsilon>0$ is the parameter used in steps $(1)$ and $(3)$. A rate pair $(R_1,R_2)$ is said to be \emph{achievable by the $(p, \alpha, \epsilon)$-distributed Marton random code ensemble} if there exists a sequence of $(n, nR_1,nR_2; p, \alpha, \epsilon)$ Marton random code ensembles such that \[ \lim_{n \to \infty} \E_{\mathcal{C}_n} [ P_e^{(n)} (\mathcal{C}_n)] = 0, \] where the expectation is with respect to the Marton random codebook $\mathcal{C}_n$. Given $(p, \alpha, \epsilon)$, let $\Rr_\mathrm{BC}^*(p, \alpha, \epsilon)$ be the set of all rate pairs achievable by the $(p, \alpha, \epsilon)$-distributed Marton random code ensemble. Given pmf $p=p(u_1,u_2)$ and function $x(u_1,u_2)$, the optimal rate region $\Rr_\mathrm{BC}^*(p)$, when it exists, is defined as \[ \Rr_\mathrm{BC}^*(p) := \mathrm{cl} \left[ \bigcup_{\alpha \in [0 \; 1]} \lim_{\epsilon \to 0} \Rr_\mathrm{BC}^*(p, \alpha, \epsilon) \right]. \] We are now ready to state the main result of this section. \begin{theorem} \label{thm:opt_marton} Given a pmf $p(u_1,u_2)$ and a function $x(u_1,u_2)$, the optimal rate region $\Rr_\mathrm{BC}^*(p)$ for the broadcast channel $p(y_1,y_2|x)$ is the closure of the set of rate pairs $(R_1,R_2)$ satisfying \begin{subequations} \label{eq:marton} \begin{align} \label{eq:thm_marton1} R_1 &\le I(U_1;Y_1,U_2) - \alpha I(U_1;U_2), \\ \label{eq:thm_marton2} R_1 &\le I(U_1,U_2;Y_1) - \min\{R_2,I(U_2;Y_1,U_1)-\overline{\a} I(U_1;U_2),I(U_1,U_2;Y_1) \}, \\ R_2 &\le I(U_2;Y_2,U_1) - \overline{\a} I(U_1;U_2), \\ R_2 &\le I(U_1,U_2;Y_2) - \min\{R_1,I(U_1;Y_2,U_2)-\alpha I(U_1;U_2),I(U_1,U_2;Y_2) \}, \end{align} \end{subequations}% for some $\alpha \in [0 \; 1]$.
\end{theorem} We prove Theorem \ref{thm:opt_marton} by showing that given a pmf $p(u_1,u_2)$, a function $x(u_1,u_2)$, and $\alpha \in [0 \; 1]$, the rate region $\Rr_\mathrm{BC}^*(p,\alpha):=\mathrm{cl}\left[\lim_{\epsilon \to 0} \Rr_\mathrm{BC}^*(p, \alpha, \epsilon) \right]$ is equal to the rate region characterized by (\ref{eq:marton}), which we will denote as $\Rr_\mathrm{BC}^{**}(p,\alpha)$. We take a two-step approach similar to Sections \ref{sec:achiev} and \ref{sec:converse}, and establish the achievability and the converse on the rate region $\Rr_\mathrm{BC}^*(p,\alpha)$, respectively. The achievability proof is relegated to Appendix~\ref{app:marton_achiev}. For the converse, given a fixed pmf $p=p(u_1,u_2)$, $\alpha \in [0 \; 1]$, and $\delta>0$, we define the rate region $\Rr_\mathrm{BC}^{**}(p,\alpha,\delta)$ as the set of rate pairs $(R_1,R_2)$ such that \begin{subequations} \begin{align} \label{eq:marton_type1} R_1 &\le I(U_1; Y_1,U_2) - \alpha I(U_1;U_2) + \delta,\\ R_1 &\le I(U_1,U_2;Y_1) - \min\{R_2,I(U_2;Y_1,U_1)-\overline{\a} I(U_1;U_2),I(U_1,U_2;Y_1) \} + \delta, \label{eq:marton_type2} \\ \label{eq:marton_type1_r2} R_2 &\le I(U_2; Y_2,U_1) - \overline{\a} I(U_1;U_2) + \delta,\\ \label{eq:marton_type2_r2} R_2 &\le I(U_1,U_2;Y_2) - \min\{R_1,I(U_1;Y_2,U_2)-\alpha I(U_1;U_2),I(U_1,U_2;Y_2) \} + \delta. \end{align} \end{subequations}% Note that the region $\Rr_\mathrm{BC}^{**}(p,\alpha,\delta=0)$ is equal to $\Rr_\mathrm{BC}^{**}(p,\alpha)$ as defined in (\ref{eq:marton}). \begin{proposition} \label{prop:outer_marton} Let $p=p(u_1,u_2)$ be a pmf, $x(u_1,u_2)$ be a function, $\alpha \in [0 \; 1]$, and $\epsilon>0$. If a rate pair $(R_1, R_2)$ is achievable by the $(p, \alpha, \epsilon)$-distributed Marton random code ensemble, then there exists a continuous $\delta'(\epsilon)$ that tends to zero monotonically as $\epsilon \to 0$ such that \begin{equation} \label{eq:outer_marton} (R_1,R_2) \in \Rr_\mathrm{BC}^{**}(p,\alpha, \delta'(\epsilon)). \end{equation} In particular, \begin{equation} \label{eq:outer_marton_limit} \Rr_\mathrm{BC}^{*}(p,\alpha) \sbq \Rr_\mathrm{BC}^{**}(p,\alpha). \end{equation} \end{proposition} \begin{IEEEproof} We first start with an averaged version of Fano's inequality for a Marton random code ensemble $\mathcal{C}_n$. Consider a fixed codebook $\mathcal{C}_n = \text{\footnotesize $\mathcal{C}$} _n$. By Fano's inequality, \[ H(M_j | Y_j^n, \mathcal{C}_n= \text{\footnotesize $\mathcal{C}$} _n) \le 1 + nR_j P_e^{(n)}(\text{\footnotesize $\mathcal{C}$} _n), \quad j=1,2. \] Taking the expectation over the Marton random codebook $\mathcal{C}_n$, it follows that \begin{equation} \label{eq:marton_fano} H(M_j | Y_j^n, \mathcal{C}_n) \le 1 + nR_j \E_{\mathcal{C}_n}[P_e^{(n)}(\mathcal{C}_n)] \le n\epsilon_n, \quad j=1,2 \end{equation}% for some $\epsilon_n \to 0$ as $n \to \infty$ since $\E_{\mathcal{C}_n}[P_e^{(n)}(\mathcal{C}_n)] \to 0$. We next define the indicator random variable \begin{equation} \label{eq:indicator_en_marton} {\tilde{E}}_n = \mathbbm{1}_{\{ (U_1^n(M_1,L_1),U_2^n(M_2,L_2)) \in {\mathcal{T}_{\epsilon}^{(n)}}(U_1,U_2) \}}. \end{equation}% Since ${\hat{R}}_1+ {\hat{R}}_2 = I(U_1;U_2) + 10 \epsilon H(U_1,U_2)$, $\P({\tilde{E}}_n = 0)$ tends to zero as $n \to \infty$ by the mutual covering lemma in~\cite[p.~208]{El-Gamal--Kim2011}. We are now ready to establish (\ref{eq:marton_type1}).
For $n$ sufficiently large, we have \begin{align} \notag nR_1 &= H(M_1|M_2, \mathcal{C}_n) \\ \notag & \overset{(a)}{\le} I(M_1;Y_1^n| M_2, \mathcal{C}_n) + n \epsilon_n \\ \notag & \le I(M_1, {\tilde{E}}_n ;Y_1^n| M_2, \mathcal{C}_n) + n \epsilon_n \\ \notag & \overset{(b)}{\le} 1 + I(M_1 ;Y_1^n| M_2, \mathcal{C}_n, {\tilde{E}}_n) + n \epsilon_n \\ \notag & \le 1 + I(M_1 ;Y_1^n| M_2, \mathcal{C}_n, {\tilde{E}}_n = 0) \P({\tilde{E}}_n = 0) + I(M_1 ;Y_1^n| M_2, \mathcal{C}_n, {\tilde{E}}_n = 1) \P({\tilde{E}}_n = 1) + n \epsilon_n \\ \notag & \le 1 + nR_1 \P({\tilde{E}}_n = 0) + I(M_1 ;Y_1^n| M_2, \mathcal{C}_n, {\tilde{E}}_n = 1) + n \epsilon_n \\ \notag & \le 1 + nR_1 \P({\tilde{E}}_n = 0) + I(M_1,L_2 ;Y_1^n| M_2, \mathcal{C}_n, {\tilde{E}}_n = 1) + n \epsilon_n \\ \notag & \le 1 + nR_1 \P({\tilde{E}}_n = 0) + n{\hat{R}}_2 + I(M_1;Y_1^n| M_2,L_2, \mathcal{C}_n, {\tilde{E}}_n = 1) + n \epsilon_n \\ \notag & = 1 + nR_1 \P({\tilde{E}}_n = 0) + n{\hat{R}}_2 + \sum_{i=1}^{n} I(M_1;Y_{1i} | Y_1^{i-1}, M_2,L_2, \mathcal{C}_n, U_{2i}, {\tilde{E}}_n = 1) + n \epsilon_n \\ \notag & \le 1 + nR_1 \P({\tilde{E}}_n = 0) + n{\hat{R}}_2 + \sum_{i=1}^{n} I(M_1, U_{1i}, Y_1^{i-1}, M_2,L_2, \mathcal{C}_n ;Y_{1i} | U_{2i}, {\tilde{E}}_n = 1) + n \epsilon_n \\ \notag &\overset{(c)}{=} 1 + nR_1 \P({\tilde{E}}_n = 0) + n{\hat{R}}_2 +\sum_{i=1}^{n} I( U_{1i} ;Y_{1i} | U_{2i}, {\tilde{E}}_n = 1) + n \epsilon_n \\ \notag & \overset{(d)}{\le} 1 + nR_1 \P({\tilde{E}}_n = 0) + n{\hat{R}}_2 + n ( I( U_1 ;Y_1 | U_2) + \delta_2(\epsilon)) + n \epsilon_n, \\ \notag &\le 1 + nR_1 \P({\tilde{E}}_n = 0) + n \overline{\a} (I(U_1;U_2)+ \delta_1(\epsilon) ) + n ( I( U_1 ;Y_1 | U_2) + \delta_2(\epsilon)) + n \epsilon_n, \\ \label{eq:outer1_marton} & \overset{(e)}{\le} n ( I( U_1 ;Y_1 , U_2) - \alpha I(U_1;U_2) + \delta_3(\epsilon)) + 2 n \epsilon_n, \end{align}% where $(a)$ follows by (the averaged version of) Fano's inequality in (\ref{eq:marton_fano}), $(b)$ follows since ${\tilde{E}}_n$ is a binary random variable, $(c)$ follows since $(M_1,M_2, Y_1^{i-1}, \mathcal{C}_n, {\tilde{E}}_n) \to (U_{1i}, U_{2i})\to Y_{1i}$ form a Markov chain for every $i \in [n]$, $(d)$ follows by the memoryless property of the channel and by Lemma~\ref{lem:uniform_tyep_iid} in Appendix~\ref{app:lemma_pmf} since the distribution of $(U_1^n(M_1,L_1), U_2^n(M_2,L_2))$ is permutation invariant by construction, and $(e)$ follows since $\P({\tilde{E}}_n = 0)$ tends to zero as $n \to \infty$. For the proof of (\ref{eq:marton_type2}), we start with \begin{align} \notag nR_1 &= H(M_1|M_{2}, \mathcal{C}_n) \\ \notag & \overset{(a)}{\le} I(M_1;Y_1^n| M_2, \mathcal{C}_n) + n \epsilon_n \\ \label{eq:outer2_0_marton} & = I(M_1, M_2; Y_1^n|\mathcal{C}_n) - I(M_2; Y_1^n |\mathcal{C}_n) + n \epsilon_n, \end{align}% where $(a)$ follows by (the averaged version of) Fano's inequality in (\ref{eq:marton_fano}). 
Following arguments similar to (\ref{eq:outer1_marton}), the first term in (\ref{eq:outer2_0_marton}) can be bounded as \begin{align} \notag I(M_1, M_2; Y_1^n|\mathcal{C}_n) &\le 1 + n(R_1+R_2) \P({\tilde{E}}_n = 0) + \sum_{i=1}^n I(M_1, M_2; Y_{1i} | \mathcal{C}_n, Y_1^{i-1}, {\tilde{E}}_n=1) \\ \notag & \le n \epsilon_n + \sum_{i=1}^n I(M_1, M_2, \mathcal{C}_n, Y_1^{i-1} ; Y_{1i} | {\tilde{E}}_n = 1) \\ \notag &= n \epsilon_n + \sum_{i=1}^n I(M_1, M_2, \mathcal{C}_n, Y_1^{i-1}, U_{1i}, U_{2i} ; Y_{1i} | {\tilde{E}}_n = 1) \\ \notag &= n \epsilon_n + \sum_{i=1}^n I(U_{1i}, U_{2i} ; Y_{1i} | {\tilde{E}}_n = 1) \\ \label{eq:outer2_1_marton} &\le n \epsilon_n + n (I(U_1, U_2 ; Y_1) + \delta_5(\epsilon)). \end{align} For the second term in (\ref{eq:outer2_0_marton}), we need the following lemma, which is proved in Appendix~\ref{app:list_marton}. This lemma is a version of Lemma \ref{lem:outer2_list} for Marton random code ensembles. \begin{lemma} \label{lem:outer2_list_marton} For every $\epsilon' > \epsilon$ and for $n$ sufficiently large, \[ I(M_2; Y_1^n |\mathcal{C}_n) \ge n [ \min \{ R_2, I(U_2;Y_1,U_1) - \overline{\a} I(U_1;U_2), I(U_1,U_2;Y_1) \} - \delta_6(\epsilon') ]- n \epsilon_n. \] \end{lemma} Combining (\ref{eq:outer2_0_marton}), (\ref{eq:outer2_1_marton}), and Lemma \ref{lem:outer2_list_marton} with $\epsilon' = 2 \epsilon$, we have \begin{align} nR_1 \le n [I(U_1,U_2;Y_1) - \min \{ R_2, I(U_2;Y_1,U_1)-\overline{\a} I(U_1;U_2), I(U_1,U_2;Y_1) \} + \delta_7(\epsilon)] + 2 n \epsilon_n \label{eq:outer2_marton} \end{align} for $n$ sufficiently large. For (\ref{eq:marton_type1_r2}) and (\ref{eq:marton_type2_r2}), we can similarly establish for receiver $2$ \begin{equation} \label{eq:outer1_marton2} nR_2 \le n ( I( U_2 ;Y_2 , U_1) - \overline{\a} I(U_1;U_2) + \delta_4(\epsilon)) + 2 n \epsilon_n \end{equation} and \begin{align} nR_2 \le n [I(U_1,U_2;Y_2) - \min \{ R_1, I(U_1;Y_2,U_2)-\alpha I(U_1;U_2), I(U_1,U_2;Y_2) \} + \delta_8(\epsilon)] + 2 n \epsilon_n \label{eq:outer2_marton2} \end{align} for $n$ sufficiently large. The proof of (\ref{eq:outer_marton}) follows by letting $n \to \infty$ in (\ref{eq:outer1_marton}), (\ref{eq:outer2_marton}), (\ref{eq:outer1_marton2}), and (\ref{eq:outer2_marton2}) and taking a continuous monotonic function $\delta'(\epsilon) \ge \max \{\delta_3(\epsilon), \delta_4(\epsilon), \delta_7(\epsilon), \delta_8(\epsilon) \}$ that tends to zero as $\epsilon \to 0$. Letting $\epsilon \to 0$ in (\ref{eq:outer_marton}) establishes (\ref{eq:outer_marton_limit}), which completes the proof of Proposition~\ref{prop:outer_marton}. \end{IEEEproof} \begin{remark} The Marton coding scheme we have analyzed involves two codewords. Marton's original coding scheme~\cite{Marton1979} uses rate splitting and superposition coding, and involves an additional codeword that carries messages for both receivers (see also~\cite[Proposition 8.1]{El-Gamal--Kim2011}). Our technique can be similarly adapted to this general version of Marton coding. \end{remark} \section{Discussion} \label{sec:discuss} For the linear computation problem, the outer bound on the optimal rate region presented in Section~\ref{sec:converse} is valid for \emph{any} computation, not only for natural computation. The inner bound presented in Theorem~\ref{thm:achiev}, however, matches this outer bound only for \emph{natural} computation. It is an interesting but difficult problem to characterize the optimal rate region for an arbitrary linear computation problem.
At this point, it is unclear whether it is the inner or the outer bound that is loose. The extension of the results in this paper to more than two senders is also a challenging question. A more fundamental question is to establish a general outer bound on the \emph{capacity region} of the linear computation problem. When $(X_1,X_2)\to W_{{\bf a}} \to Y$ form a Markov chain and $a_1,a_2 \neq 0$, we can establish the following outer bound by using Fano's inequality. If a rate pair $(R_1, R_2)$ is achievable, then \begin{subequations} \label{eq:general_outer_comp} \begin{align} R_1 &\le I(X_1; Y|X_2, Q),\\ R_2 &\le I(X_2; Y|X_1, Q),\\ R_1 &\le I(W_{{\bf a}}; Y|Q)-I(X_2; W_{{\bf a}}|T, Q),\\ R_2 &\le I(W_{{\bf a}}; Y|Q)-I(X_1; W_{{\bf a}}|T, Q),\\ R_1+R_2 &\le I(W_{{\bf a}}; Y|Q) + I(X_1, X_2; W_{{\bf a}}|T, Q) - I(X_1; W_{{\bf a}}|T, Q)-I(X_2; W_{{\bf a}}|T, Q) \end{align} \end{subequations}% for some $p(q)p(x_1|q)p(x_2|q)p(t|x_1,x_2, q)$ such that $W_{{\bf a}}\to(X_1, X_2)\to T$. Suppose that we set $Q = \emptyset$ and fix a pmf $p=p(x_1)p(x_2)$ in (\ref{eq:general_outer_comp}). If the auxiliary random variable $T = \emptyset$, (\ref{eq:general_outer_comp}) reduces to the rate region $\Rr_\mathrm{CF}(p)$ in Section~\ref{sec:main_result}. If $T = (X_1,X_2)$, (\ref{eq:general_outer_comp}) reduces to the rate region $\Rr_\mathrm{MAC}(p)$ in Section~\ref{sec:main_result}. Thus, we can conclude that this general outer bound recovers as extreme special cases the components of the outer bound in Theorem~\ref{thm:outer_ncc} that was established for a random ensemble of homologous codes. Whether and when both outer bounds coincide after taking time sharing and the union over all $p$ is left as another open problem. \appendices \section{Proof of Proposition~\ref{prop:opt_reg}} \label{app:prop1} Fix pmf $p = p(x_1)p(x_2)$. We first show that $[\Rr_\mathrm{CF}(p) \cup \Rr_\mathrm{MAC}(p) ] \sbq \mathscr{R}^{**}(p)$. Suppose that the rate pair $(R_1, R_2) \in \Rr_\mathrm{CF}(p)$. Then, for every $j \in \{1,2\}$ such that $a_j \neq 0$, the rate pair $(R_1,R_2)$ satisfies \begin{align*} R_j &\le H(X_j) - H(W_{{\bf a}} |Y) \\ &\le H(X_j) - H(W_{{\bf a}} |Y, X_{j^c}) \\ &= I(X_j;Y|X_{j^c}), \end{align*} and \begin{align*} R_j &\le H(X_j) - H(W_{{\bf a}} |Y) \\ &= I(X_1,X_2;Y) - I(X_{j^c};W_{{\bf a}},Y) \\ &\le I(X_1,X_2;Y) - \min\{R_{j^c},I(X_{j^c};W_{{\bf a}},Y)\}, \end{align*} which implies that $(R_1,R_2) \in \mathscr{R}^{**}(p)$. It follows that $\Rr_\mathrm{CF}(p) \sbq \mathscr{R}^{**}(p)$. Similarly, suppose that the rate pair $(R_1,R_2) \in \Rr_\mathrm{MAC}(p)$. Then, for every $j \in \{1,2\}$ such that $a_j \neq 0$, the rate pair $(R_1,R_2)$ satisfies \[ R_j \le I(X_j;Y|X_{j^c}), \] and \begin{align*} R_j &\le I(X_1,X_2;Y) - R_{j^c} \\ & \le I(X_1,X_2;Y) - \min \{R_{j^c}, I(X_{j^c};W_{{\bf a}}, Y)\}, \end{align*} which implies that $(R_1,R_2) \in \mathscr{R}^{**}(p)$. Therefore, $\Rr_\mathrm{MAC}(p) \sbq \mathscr{R}^{**}(p)$. Next, we show that $\mathscr{R}^{**}(p) \sbq [\Rr_\mathrm{CF}(p) \cup \Rr_\mathrm{MAC}(p)]$. Suppose that the rate pair $(R_1,R_2) \in \mathscr{R}^{**}(p)$ is such that $R_{j^c} > I(X_{j^c};W_{{\bf a}},Y)$ for each $j \in \{1,2\}$ with $a_j \neq 0$. Then, $(R_1,R_2)$ satisfies \begin{align*} R_j &\le I(X_1,X_2;Y) - I(X_{j^c};W_{{\bf a}},Y) \\ &= H(X_j) - H(W_{{\bf a}}|Y), \end{align*} for each $j \in \{1,2\}$ with $a_j \neq 0$. Hence, $(R_1,R_2) \in \Rr_\mathrm{CF}(p)$.
It is easy to see that the rate pair $(R_1,R_2) \in \mathscr{R}^*(p)$ that satisfies $R_{j^c} \le I(X_{j^c};W_{{\bf a}},Y)$ for one or more $j \in \{1,2\}$ with $a_j \neq 0$, is included in $\Rr_\mathrm{MAC}(p)$. Thus, $\mathscr{R}^*(p) \sbq [\mathscr{R}_{CF}(p) \cup \Rr_\mathrm{MAC}(p)]$, which completes the proof. \section{} \label{app:full_rank} \begin{lemma} \label{lem:full_rank_G} Let $G$ be an $ nR \times n$ random matrix over $\mathbb{F}_q$ with $R < 1$ where each element is drawn i.i.d. $\mathrm{Unif}(\mathbb{F}_q)$. Then, \[ \lim_{n \to \infty} n \P(G \textrm{ is not full rank}) = 0. \] \end{lemma} \begin{IEEEproof} Probability of choosing $ nR $ linearly independent rows can be written as \begin{align*} \P(G \textrm{ is full rank}) &= \frac{\prod_{j=1}^{ nR } (q^n-q^{j-1})}{(q^n)^{ nR }} \\ & = \prod_{j=1}^{ nR } (1-q^{j-1-n}) \\ &\ge (1-q^{-n(1-R)})^{nR}. \end{align*} Using this relation, we have \begin{align*} n \P(G \textrm{ is not full rank}) &= n (1 - \P(G \textrm{ is full rank}) ) \\ & \le n (1 - (1-q^{-n(1-R)})^{nR}) \\ & \overset{(a)}{\le} n^2 R q^{-n(1-R)}, \end{align*} where $(a)$ follows by Bernoulli's inequality. Since $R < 1$, $\lim_{n \to \infty} n^2 q^{-n(1-R)} = 0$, which completes the proof. \end{IEEEproof} \section{Proof of Lemma~\ref{lem:equiv_reg}} \label{app:lemma1} Fix pmf $p = p(x_1)p(x_2)$. We will show that if the condition in (\ref{eq:comp_favor}) holds, then $\mathscr{R}_{CF}(p) \cup \mathscr{R}_1(p) \cup \mathscr{R}_2(p) = \mathscr{R}_{CF}(p) \cup \Rr_\mathrm{MAC}(p)$. By definition of the rate regions $ \mathscr{R}_1(p), \mathscr{R}_2(p)$ and $\Rr_\mathrm{MAC}(p)$, it is easy to see that $\mathscr{R}_{CF}(p) \cup \mathscr{R}_1(p) \cup \mathscr{R}_2(p) \sbq \mathscr{R}_{CF}(p) \cup \Rr_\mathrm{MAC}(p)$ holds in general. Then, it suffices to show that if the condition in (\ref{eq:comp_favor}) holds, then $\Rr_\mathrm{MAC}(p) \sbq [\mathscr{R}_{CF}(p) \cup \mathscr{R}_1(p) \cup \mathscr{R}_2(p)]$. Suppose that the condition in (\ref{eq:comp_favor}) is satisfied. Let the rate pair $(R_1,R_2) \in \Rr_\mathrm{MAC}(p)$ be such that $R_{j^c} > I(X_{j^c};W_{{\bf a}},Y)$ for every $j \in \{1,2\}$ with $a_j \neq 0$. Then, $(R_1,R_2)$ satisfies \begin{align*} R_j &\le I(X_1,X_2;Y) - I(X_{j^c};W_{{\bf a}},Y) \\ &= H(X_j) - H(W_{{\bf a}}|Y), \end{align*} for each $j \in \{1,2\}$ with $a_j \neq 0$, implying that $(R_1,R_2) \in \mathscr{R}_{CF}(p)$. Now, let the rate pair $(R_1,R_2) \in \Rr_\mathrm{MAC}(p)$ be such that $R_{j^c} \le I(X_{j^c};W_{{\bf a}},Y)$ for some $j \in \{1,2\}$ with $a_j \neq 0$. By condition (\ref{eq:comp_favor}), we have \begin{align*} I(X_{j^c};W_{{\bf a}},Y) &= I(X_1,X_2;Y) - H(X_j) + H(W_{{\bf a}}|Y) \\ &= I(X_1,X_2;Y) - H(X_j) + \min_{{\bf b} \neq 0} H(W_{{\bf b}}|Y) \\ &\le I(X_1,X_2;Y) - H(X_j) + \min_{{\bf b} \in \hat{\mathbb{F}}_q^{1 \times 2}} H(W_{{\bf b}}|Y). \end{align*} Then, the rate pair $(R_1,R_2) \in \mathscr{R}_1(p) \cup \mathscr{R}_2(p)$, which completes the proof. \section{Proof of Lemma~\ref{lem:Fano_homolog}} \label{app:fano_homolog} Suppose that $a_j \neq 0$. Then, \begin{equation} \label{eq:fano_proof1} H(M_j|Y^n,M_{j^c},\mathcal{C}_n) = I(M_j; W^n_{{\bf a}} | Y^n, M_{j^c}, \mathcal{C}_n) + H(M_j|W^n_{{\bf a}},Y^n,M_{j^c},\mathcal{C}_n). \end{equation} To bound the first term in (\ref{eq:fano_proof1}), we need a version of Fano's inequality for computation. 
\begin{lemma} \label{lem:Fano_compute} If the average probability of error $\E_{\mathcal{C}_n} [ P_e^{(n)}(\mathcal{C}_n)]$ tends to zero as $n \to \infty$, then \[ H(W_{{\bf a}}^n|Y^n, \mathcal{C}_n) \le n \epsilon_n \] for some $\epsilon_n \to 0$ as $n \to \infty$. \end{lemma} \begin{IEEEproof} For fixed codebook $\mathcal{C}_n = \text{\footnotesize $\mathcal{C}$} _n$, by Fano's inequality \[ H(W_{{\bf a}}^n|Y^n, \mathcal{C}_n = \text{\footnotesize $\mathcal{C}$} _n) \le 1 + n P_e^{(n)}(\text{\footnotesize $\mathcal{C}$} _n). \] Taking the expectation over the random homologous codebook $\mathcal{C}_n$, we have \[ H(W_{{\bf a}}^n|Y^n, \mathcal{C}_n ) \le 1 + n E_{\mathcal{C}_n} [ P_e^{(n)}(\mathcal{C}_n)] \overset{(a)}{\le} n \epsilon_n, \] where $(a)$ follows since $\E_{\mathcal{C}_n} [ P_e^{(n)}(\mathcal{C}_n)]$ tends to zero as $n \to \infty$. \end{IEEEproof} Combining (\ref{eq:fano_proof1}) with Lemma~\ref{lem:Fano_compute}, we have \begin{align*} H(M_j|Y^n,M_{j^c},\mathcal{C}_n) &\le n \epsilon_n + H(M_j|W^n_{{\bf a}},Y^n,M_{j^c},\mathcal{C}_n) \\ & \overset{(a)}{=} n \epsilon_n + H(M_j|W^n_{{\bf a}}, X_{j^c}^n(M_{j^c}), Y^n,M_{j^c},\mathcal{C}_n) \\ & \overset{(b)}{=} n \epsilon_n + H(M_j|W^n_{{\bf a}}, X_{j}^n(M_j), X_{j^c}^n(M_{j^c}), Y^n,M_{j^c},\mathcal{C}_n) \\ & \le n \epsilon_n + H(M_j| X_{j}^n(M_j), \mathcal{C}_n) \\ & \overset{(d)}{\le} 2 n\epsilon_n \end{align*} where $(a)$ follows since $X^n_{j^c}(M_{j^c})$ is a function of $(M_{j^c}, \mathcal{C}_n)$, $(b)$ follows since $a_j \neq 0$ and $X^n_j(M_j)$ is a function of $(X^n_{j^c}(M_{j^c}), W^n_{{\bf a}})$, and $(d)$ follows since $H(M_j | X_j^n(M_j), \mathcal{C}_n)$ tends to zero as $n \to \infty$. \section{Proof of Lemma~\ref{lem:output_ncc_input}} \label{app:lemma_pmf} Let $i \in [n]$, and $(x,y) \in \mathbb{F}_q \times \mathcal{Y}$. Then, \begin{align} \notag \P(X_i = x, Y_i = y | X^n \in {\mathcal{T}_{\epsilon}^{(n)}}(X)) &= \P(X_i = x| X^n \in {\mathcal{T}_{\epsilon}^{(n)}}(X)) \P(Y_i = y | X_i = x, X^n \in {\mathcal{T}_{\epsilon}^{(n)}}(X)) \\ &= \P(X_i = x| X^n \in {\mathcal{T}_{\epsilon}^{(n)}}(X)) p_{Y|X}(y | x). \label{eq:dist_proof} \end{align} We make a connection between the conditional distribution of $X_i$ given $\{X^n \in {\mathcal{T}_{\epsilon}^{(n)}}(X)\}$ and the input pmf $p(x)$. Therefore, we start with exploring the conditional distribution of $X_i$ given $\{X^n \in {\mathcal{T}_{\epsilon}^{(n)}}(X)\}$. \begin{lemma} \label{lem:ncc_uniform_type} Let $p_X$ be a pmf on $\mathbb{F}_q$, and $\epsilon > 0$. Define ${\mathcal{T}_{\epsilon}^{(n)}}(X,\Theta)$ as the set of elements in ${\mathcal{T}_{\epsilon}^{(n)}}(X)$ with type $\Theta$. Suppose $X^n(m) = U^n(m,L(m))$ denote the random codeword assigned to message $m$ by $(n,nR;p_X,\epsilon)$ random homologous code ensemble. Then, \[ U^n(m,L) | \{ U^n(m,L) \in {\mathcal{T}_{\epsilon}^{(n)}}(X,\Theta) \} \sim \mathrm{Unif}({\mathcal{T}_{\epsilon}^{(n)}}(X,\Theta)), \] for every $m \in \mathbb{F}_q^{nR}$. \end{lemma} \begin{IEEEproof} Without loss of generality, we drop index $m$. It suffices to show that the distribution of $U^n(L)$ is permutation invariant. Let $u^n, v^n$ have the same type (typical or not) and let $u^n = \sigma(v^n)$ for some permutation $\sigma$. 
Then, we have \begin{align*} \P(U^n(L)=u^n ) &= \sum_{l} \sum_{\mathsf{G}} \P( L=l, G = \mathsf{G}, D^n = u^n \ominus l \mathsf{G}) \\ &\overset{(a)}{=} \sum_{l} \sum_{\mathsf{G}} \P( L=l, G = \sigma(\mathsf{G}), D^n = v^n \ominus l \sigma(\mathsf{G})) \\ &= \P(U^n(L)=v^n ), \end{align*} where $\sigma(\mathsf{G})$ is the matrix constructed by applying permutation $\sigma$ to the columns of $\mathsf{G}$, and $(a)$ follows since a permutation applied to a coset code preserves the type of each codeword. \end{IEEEproof} Building on top of Lemma~\ref{lem:ncc_uniform_type}, we next establish that the conditional distribution of $X_i$ given $\{X^n \in {\mathcal{T}_{\epsilon}^{(n)}}(X)\}$ is \emph{close} to the input pmf $p(x)$. \begin{lemma} \label{lem:uniform_tyep_iid} Let $\epsilon > 0$. Define ${\mathcal{T}_{\epsilon}^{(n)}}(X,\Theta)$ in a similar way to Lemma~\ref{lem:ncc_uniform_type}. Suppose that the distribution of $X^n$ is uniform within each type in the typical set, namely, for each type $\Theta$ \begin{equation} \label{eq:cond_lem_iid} X^n | \{ X^n \in {\mathcal{T}_{\epsilon}^{(n)}}(X,\Theta) \} \sim \mathrm{Unif}({\mathcal{T}_{\epsilon}^{(n)}}(X,\Theta)). \end{equation} Then, conditional on the typical set, $X_i$'s have identical distribution that satisfies \[ (1-\epsilon) p(x) \le P(X_i = x | X^n \in {\mathcal{T}_{\epsilon}^{(n)}}(X)) \le (1+\epsilon) p(x), \quad \forall x \in \mathcal{X}. \] \end{lemma} \begin{IEEEproof} Let $x \in \mathcal{X}$. For a type $\Theta$, let $\Theta_x$ denote the empirical mode of $x$ within type $\Theta$. Then, for every type $\Theta$ within the set ${\mathcal{T}_{\epsilon}^{(n)}}(X)$, we have \begin{align*} \P(X_i = x | X^n \in {\mathcal{T}_{\epsilon}^{(n)}}(X,\Theta)) &= \sum_{x^n \in {\mathcal{T}_{\epsilon}^{(n)}}(X,\Theta) \atop \textrm{s.t. } x_i = x} \P(X^n = x^n | X^n \in {\mathcal{T}_{\epsilon}^{(n)}}(X,\Theta)) \\ &\overset{(a)}{=} \sum_{x^n \in {\mathcal{T}_{\epsilon}^{(n)}}(X,\Theta) \atop x_i = x} \frac{1}{| {\mathcal{T}_{\epsilon}^{(n)}}(X,\Theta) |} \\ &\overset{(b)}{=} \Theta_x | {\mathcal{T}_{\epsilon}^{(n)}}(X,\Theta) | \frac{1}{| {\mathcal{T}_{\epsilon}^{(n)}}(X,\Theta) |} \\ &= \Theta_x, \end{align*} where $(a)$ follows since $X^n$ is conditionally uniform over ${\mathcal{T}_{\epsilon}^{(n)}}(X,\Theta)$, and $(b)$ follows since ${\mathcal{T}_{\epsilon}^{(n)}}(X,\Theta)$ is closed under permutation. Combining this observation with the fact that $\Theta$ is the type of a typical sequence, we get \[ (1-\epsilon) p(x) \le \P(X_i = x | X^n \in {\mathcal{T}_{\epsilon}^{(n)}}(X,\Theta)) \le (1+\epsilon) p(x), \quad \forall x \in \mathcal{X}. \] Since ${\mathcal{T}_{\epsilon}^{(n)}}(X)$ is the disjoint union of ${\mathcal{T}_{\epsilon}^{(n)}}(X,\Theta)$ over all types, multiplying each side with $\P(X^n \in {\mathcal{T}_{\epsilon}^{(n)}}(X,\Theta))$ and then summing over $\Theta$ gives \[ (1-\epsilon) p(x) \P(X^n \in {\mathcal{T}_{\epsilon}^{(n)}}(X)) \le \P(X_i = x , X^n \in {\mathcal{T}_{\epsilon}^{(n)}}(X)) \le (1+\epsilon) p(x) \P(X^n \in {\mathcal{T}_{\epsilon}^{(n)}}(X)), \] for all $x \in \mathcal{X}$. The claim follows from dividing each side by $\P(X^n \in {\mathcal{T}_{\epsilon}^{(n)}}(X))$. \end{IEEEproof} Back to the proof of Lemma~\ref{lem:output_ncc_input}, we have by Lemma \ref{lem:ncc_uniform_type} that the distribution of $X^n$ satisfies the condition in (\ref{eq:cond_lem_iid}) in Lemma~\ref{lem:uniform_tyep_iid}. Therefore, combining (\ref{eq:dist_proof}) with Lemma~\ref{lem:uniform_tyep_iid} completes the proof. 
\section{Proof of Lemma~\ref{lem:outer2_list}} \label{app:list_homolog} Let $\epsilon'' > \epsilon'$. Suppose that $a_j \neq 0$, and $j^c = \{1,2\} \setminus \{j\}$. First, by Lemma~\ref{lem:Fano_compute}, we have \[ I(M_{j^c}; Y^n |\mathcal{C}_n) \ge I(M_{j^c}; W_{{\bf a}}^n, Y^n |\mathcal{C}_n) - n \epsilon_n. \] Therefore, it suffices to prove that for $n$ sufficiently large, \[ I(M_{j^c}; W_{{\bf a}}^n, Y^n |\mathcal{C}_n) \ge n [\min \{ R_{j^c}, I(X_{j^c};W_{{\bf a}}, Y) \} - \delta_4(\epsilon'') - \epsilon_n]. \] Similar to~\cite{Bandemer--El-Gamal--Kim2012a}, we will show that given $W_{{\bf a}}^n, Y^n$, and $\mathcal{C}_n$, a relatively short list $\mathcal{L} \sbq \mathbb{F}_q^{nR_{j^c}}$ can be constructed that contains $M_{j^c}$ with high probability. Define a random set \begin{align*} \mathcal{L} = \{ m \in \mathbb{F}_q^{nR_{j^c}}: (X_{j^c}^n(m), W_{{\bf a}}^n, Y^n) \in {\mathcal{T}_{\epsilon''}^{(n)}}(X_{j^c},W_{{\bf a}}, Y) \}. \end{align*} Define two events $\mathcal{M}_1 = \{M_1 = M_2 = \mathbf{0} \}$ and $\mathcal{M}_2 = \{L_1(M_1) = L_2(M_2) = \mathbf{0}\}$. The indicator random variable $E_n$ is as defined in (\ref{eq:indicator_en}). By the symmetry of the codebook generation, for each $m \in \mathbb{F}_q^{nR_{j^c}}, m \neq M_{j^c}$, we have \begin{align*} \P( m \in \mathcal{L},&\, E_n = 1) \\ &= \P( m \in \mathcal{L}, E_n = 1 | \mathcal{M}_1, \mathcal{M}_2) \\ &= \P( (X_{j^c}^n(m), W_{{\bf a}}^n, Y^n) \in {\mathcal{T}_{\epsilon''}^{(n)}}, (X_1^n(\mathbf{0}),X_2(\mathbf{0})) \in {\mathcal{T}_{\epsilon'}^{(n)}} | \mathcal{M}_1, \mathcal{M}_2) \\ & \le \P( (U_{j^c}^n(m,l), W_{{\bf a}}^n, Y^n) \in {\mathcal{T}_{\epsilon''}^{(n)}} \textrm{ for some } l \in \mathbb{F}_q^{n{\hat{R}}_{j^c}}, (X_1^n(\mathbf{0}),X_2(\mathbf{0})) \in {\mathcal{T}_{\epsilon''}^{(n)}} | \mathcal{M}_1, \mathcal{M}_2) \\ & \le \sum_{l} \sum_{(x_1^n,x_2^n) \in \atop {\mathcal{T}_{\epsilon''}^{(n)}}(X_1,X_2)} \sum_{(u^n,w^n,y^n) \in \atop {\mathcal{T}_{\epsilon''}^{(n)}}(X_{j^c},W_{{\bf a}},Y)} \P \left( \begin{array}{c | c} U_{j^c}^n(m,l) = u^n, a_1 D_1^n \oplus a_2 D_2^n = w^n, & \mathcal{M}_1, \\ D_1^n = x_1^n, D_2^n = x_2^n, Y^n = y^n & \mathcal{M}_2 \end{array} \right) \\ &= \sum_{l} \sum_{(x_1^n,x_2^n) \in \atop {\mathcal{T}_{\epsilon''}^{(n)}}(X_1,X_2)} \sum_{(u^n,w^n,y^n) \in \atop {\mathcal{T}_{\epsilon''}^{(n)}}(X_{j^c},W_{{\bf a}},Y)} \P \left( \begin{array}{c | c} U_{j^c}^n(m,l) = u^n, \\ a_1 D_1^n \oplus a_2 D_2^n = w^n, & \mathcal{M}_1, \mathcal{M}_2 \\ D_1^n = x_1^n, D_2^n = x_2^n \end{array} \right) p(y^n|x_1^n,x_2^n) \\ & \overset{(a)}{\le} q^{n({\hat{R}}_1+ {\hat{R}}_2)} \sum_{l} \sum_{(x_1^n,x_2^n) \in \atop {\mathcal{T}_{\epsilon''}^{(n)}}(X_1,X_2)} \sum_{(u^n,w^n,y^n) \in \atop {\mathcal{T}_{\epsilon''}^{(n)}}(X_{j^c},W_{{\bf a}},Y)} \P \left( \begin{array}{c | c} U_{j^c}^n(m,l) = u^n,\\ a_1 D_1^n \oplus a_2 D_2^n = w^n, \\ D_1^n = x_1^n, D_2^n = x_2^n \end{array} \, \mathcal{M}_1 \right) p(y^n|x_1^n,x_2^n) \\ &= q^{n({\hat{R}}_1+ {\hat{R}}_2)} \sum_{l} \sum_{(x_1^n,x_2^n) \in \atop {\mathcal{T}_{\epsilon''}^{(n)}}(X_1,X_2)} \sum_{(w^n,y^n) \in \atop {\mathcal{T}_{\epsilon''}^{(n)}}(W_{{\bf a}},Y)} \\ & \quad\quad\quad\quad\quad \sum_{u^n \in \atop {\mathcal{T}_{\epsilon''}^{(n)}}(X_{j^c}|w^n,y^n)} \P \left( \begin{array}{c} [m \; l] G \oplus D_{j^c}^n = u^n, \\ D_1^n = x_1^n, D_2^n = x_2^n \end{array} \right) p(y^n|x_1^n,x_2^n) \, \mathbbm{1}_{\{w^n = a_1 x_1^n \oplus a_2 x_2^n\}} \\ &= q^{n({\hat{R}}_1+ {\hat{R}}_2)} \sum_{l} \sum_{(x_1^n,x_2^n) \in \atop 
{\mathcal{T}_{\epsilon''}^{(n)}}(X_1,X_2)} \sum_{(w^n,y^n) \in \atop {\mathcal{T}_{\epsilon''}^{(n)}}(W_{{\bf a}},Y)} \sum_{u^n \in \atop {\mathcal{T}_{\epsilon''}^{(n)}}(X_{j^c}|w^n,y^n)} q^{-3n} \, p(y^n|x_1^n,x_2^n) \, \mathbbm{1}_{\{w^n = a_1 x_1^n \oplus a_2 x_2^n\}} \\ & \le q^{n({\hat{R}}_1+{\hat{R}}_2+{\hat{R}}_{j^c})} \: q^{-3n} \: q^{n(H(X_{j^c}|W_{{\bf a}},Y)+H(X_1)+H(X_2) +\delta_4(\epsilon''))} \\ & \overset{(b)}{\le} q^{-n (I(X_{j^c};W_{{\bf a}},Y) - \delta_4(\epsilon'') - 3 \epsilon )}, \\ & \le q^{-n (I(X_{j^c};W_{{\bf a}},Y) - \delta_4(\epsilon'') )}, \end{align*}% where $(a)$ follows by~\cite[Lemma 11]{Lim--Gastpar2016}, and $(b)$ follows by the construction of the random homologous codebook $\mathcal{C}_n$ with ${\hat{R}}_i = D(p_{X_i} \| \mathrm{Unif}(\mathbb{F}_q)) + \epsilon$. Since $\P(E_n = 1)$ tends to one as $n \to \infty$, for $n$ sufficiently large we have $\P(E_n=1) \ge q^{-\epsilon}$. Therefore, for $n$ sufficiently large, the conditional probability is bounded as follows \begin{align*} \P( m \in \mathcal{L} | E_n = 1) &= \frac{\P( m \in \mathcal{L} , E_n = 1)}{\P(E_n=1)} \\ & \le \P( m \in \mathcal{L} , E_n = 1) q^{\epsilon}. \end{align*} The expected cardinality of $\mathcal{L}$ given $\{E_n = 1 \}$ is then bounded as \begin{align} \notag \E(|\mathcal{L}| | E_n = 1) & \le 1 + \sum_{m \neq M_{j^c}} \P( m \in \mathcal{L} | E_n = 1) \\ & \le 1+ q^{n(R_{j^c} - I(X_{j^c};W_{{\bf a}},Y) + \delta_4(\epsilon'') + \frac{\epsilon}{n}) } \\ &= 1+ q^{n(R_{j^c} - I(X_{j^c};W_{{\bf a}},Y) + \delta_4(\epsilon'') + \epsilon_n) }, \label{eq:ent_list} \end{align} for $n$ sufficiently large. Define another indicator random variable $F_n = \mathbbm{1}_{\{ M_{j^c} \in \mathcal{L} \}}$. Since $\epsilon'' > \epsilon'$ and $\P(E_n=1)$ tends to one as $n \to \infty$, by the conditional typicality lemma in~\cite[p.~27]{El-Gamal--Kim2011}, $\P(F_n=1)$ tends to one as $n \to \infty$. Then, for $n$ sufficiently large, we have \begin{align} \notag H(M_{j^c} | \mathcal{C}_n, & W_{{\bf a}}^n, Y^n) \\\notag &= H(M_{j^c} | \mathcal{C}_n, W_{{\bf a}}^n, Y^n, E_n, F_n) + I ( M_{j^c}; E_n, F_n | \mathcal{C}_n, W_{{\bf a}}^n, Y^n) \\ \notag & \le H(M_{j^c} | \mathcal{C}_n, W_{{\bf a}}^n, Y^n, E_n, F_n) + 2 \\ \notag & \le 2 + \P(F_n=0) H(M_{j^c} | \mathcal{C}_n, W_{{\bf a}}^n, Y^n, F_n=0, E_n) + H(M_{j^c} | \mathcal{C}_n, W_{{\bf a}}^n, Y^n, F_n=1, E_n) \\ & \le 2 + nR_{j^c} \P(F_n=0) + H(M_{j^c} | \mathcal{C}_n, W_{{\bf a}}^n, Y^n, F_n=1, E_n). 
\label{eq:list_comp} \end{align} For the last term in (\ref{eq:list_comp}), we use the fact that if $M_{j^c} \in \mathcal{L}$, then the conditional entropy cannot exceed $\log(|\mathcal{L}|)$: {\allowdisplaybreaks \begin{align*} H(M_{j^c} | \mathcal{C}_n, W_{{\bf a}}^n, & Y^n, F_n=1, E_n) \\ & \overset{(a)}{=} H(M_{j^c} | \mathcal{C}_n, W_{{\bf a}}^n, Y^n, F_n=1, E_n, \mathcal{L}, |\mathcal{L}|) \\ & \le H(M_{j^c} | F_n=1, E_n, \mathcal{L}, |\mathcal{L}|) \\ &= \sum_{l=0}^{q^{nR_{j^c}}} \P(|\mathcal{L}|=l, E_n = 1) H(M_{j^c} | E_n=1, F_n = 1, \mathcal{L}, |\mathcal{L}| = l) \\ & \quad +\sum_{l=0}^{q^{nR_{j^c}}} \P(|\mathcal{L}|=l, E_n = 0) H(M_{j^c} | E_n=0, F_n = 1, \mathcal{L}, |\mathcal{L}| = l) \\ & \le \sum_{l=0}^{q^{nR_{j^c}}} \P(|\mathcal{L}|=l, E_n = 1) H(M_{j^c} | E_n=1, F_n = 1, \mathcal{L}, |\mathcal{L}| = l) + \P(E_n = 0) nR_{j^c} \\ & \le \sum_{l=0}^{q^{nR_{j^c}}} \P(|\mathcal{L}|=l, E_n = 1) \log(l) + nR_{j^c} \P(E_n = 0) \\ & \le \sum_{l=0}^{q^{nR_{j^c}}} \P(|\mathcal{L}|=l | E_n = 1) \log(l) + nR_{j^c} \P(E_n = 0) \\ &= \E [\log(|\mathcal{L}|) | E_n = 1] + nR_{j^c} \P(E_n = 0) \\ & \overset{(b)}{\le} \log(\E [|\mathcal{L}| | E_n = 1]) + nR_{j^c} \P(E_n = 0) \\ & \overset{(c)}{\le} 1 + \max \{0, n(R_{j^c} - I(X_{j^c};W_{{\bf a}},Y) + \delta_4(\epsilon'') + \epsilon_n) \} + nR_{j^c} \P(E_n = 0) \\ & \le 1 + \max \{0, n(R_{j^c} - I(X_{j^c};W_{{\bf a}},Y)) \} + n\delta_4(\epsilon'') + n\epsilon_n + nR_{j^c} \P(E_n = 0) \end{align*} }% where $(a)$ follows since the set $\mathcal{L}$ and its cardinality $| \mathcal{L}|$ are functions of $(\mathcal{C}_n, W_{{\bf a}}^n,Y^n)$, $(b)$ follows by Jensen's inequality, and $(c)$ follows by (\ref{eq:ent_list}) and the soft-max interpretation of the log-sum-exp function~\cite[p.~72]{Boyd--Vandenberghe2004}. Substituting back gives \begin{align*} I(M_{j^c}; W_{{\bf a}}^n, Y^n | \mathcal{C}_n) &= H(M_{j^c} | \mathcal{C}_n) - H(M_{j^c} | \mathcal{C}_n, W_{{\bf a}}^n, Y^n) \\ &= nR_{j^c} - H(M_{j^c} | \mathcal{C}_n, W_{{\bf a}}^n, Y^n) \\ & \ge nR_{j^c} - 2 - nR_{j^c} \P(F_n=0) - H(M_{j^c} | \mathcal{C}_n, W_{{\bf a}}^n, Y^n, F_n=1, E_n) \\ & \ge nR_{j^c} - 3 - nR_{j^c} (\P(E_n=0) + \P(F_n=0))\\ &\quad\quad\quad - \max \{0, n(R_{j^c} - I(X_{j^c};W_{{\bf a}},Y)) \} - n\delta_4(\epsilon'') - n\epsilon_n \\ &= n [ \min \{ R_{j^c}, I(X_{j^c};W_{{\bf a}},Y) \} - \delta_4(\epsilon'') - \epsilon_n ] - 3 - nR_{j^c} (\P(E=0) + \P(F = 0)) \\ &\overset{(a)}{=} n [ \min \{ R_{j^c}, I(X_{j^c};W_{{\bf a}},Y)\} - \delta_4(\epsilon'') - 2 \epsilon_n], \end{align*} where $(a)$ follows for large $n$ since both probabilities $\P(E_n=0)$ and $\P(F_n=0)$ tend to zero as $n \to \infty$. \section{Proof of Achievability for Theorem~\ref{thm:opt_marton}} \label{app:marton_achiev} Let $\alpha \in [0 \; 1]$ and $\epsilon>0$. Consider an $(n,nR_1,nR_2;p,\alpha,\epsilon)$ Marton random code ensemble. We use the nonunique simultaneous joint typicality decoding rule in~\cite{Wang--Sasoglu--Bandemer--Kim2013} to establish the achievability. Let $\epsilon' > \epsilon$. Upon receiving $y_j^n$ at receiver $j=1,2$, the $\epsilon'$-joint typicality decoder $j$ looks for a unique $m_j \in [2^{nR_j}]$ such that \[ (u_1^n(m_1,l_1), u_2^n(m_2,l_2),y_j^n) \in {\mathcal{T}_{\epsilon'}^{(n)}}(U_1,U_2,Y_j), \] for some $l_1 \in [2^{n{\hat{R}}_1}]$, $l_2 \in [2^{n{\hat{R}}_2}]$ and $m_{j^c} \in [2^{n R_{j^c}}]$, where $j^c$ denotes $\{1,2\} \setminus j$. If the decoder $j=1,2$ finds such $m_j$, then it declares $m_j$ as an estimate; otherwise, it declares an error. 
We analyze the probability of error. It suffices to consider decoder 1, which declares an error if one or more of the following events occur \begin{align*} \mathcal{E}_0 &= \{ (U_1^n(M_1,l_1),U_2^n(M_2,l_2)) \notin {\mathcal{T}_{\epsilon}^{(n)}}(U_1,U_2) \textrm{ for every } (l_1,l_2) \in [2^{n{\hat{R}}_1}] \times [2^{n{\hat{R}}_2}]\}, \\ \mathcal{E}_1 &= \{ (U_1^n(M_1,L_1),U_2^n(M_2,L_2),Y_1^n) \notin {\mathcal{T}_{\epsilon'}^{(n)}}(U_1,U_2,Y_1) \}, \\ \mathcal{E}_2 &= \{ (U_1^n(m_1,l_1),U_2^n(m_2,l_2),Y_1^n) \in {\mathcal{T}_{\epsilon'}^{(n)}}(U_1,U_2,Y_1) \textrm{ for some } m_1 \neq M_1, \\ &\qquad\qquad\qquad\qquad\qquad \textrm{ for some } (m_2,l_1,l_2) \in [2^{nR_2}] \times [2^{n{\hat{R}}_1}] \times [2^{n{\hat{R}}_2}]\}. \end{align*} By the union of events bound, $\P_e^{(n)}(\mathcal{C}_n) \le \P(\mathcal{E}_0) + \P(\mathcal{E}_1 \cap \mathcal{E}_0^c) + \P(\mathcal{E}_2 \cap \mathcal{E}_0^c)$. Since ${\hat{R}}_1 + {\hat{R}}_2 = I(U_1;U_2) + 10 \epsilon H(U_1,U_2)$, by the mutual covering lemma in~\cite[p.~208]{El-Gamal--Kim2011}, the probability $\P(\mathcal{E}_0)$ tends to zero as $n \to \infty$. By the conditional typicality lemma in~\cite[p.~27]{El-Gamal--Kim2011}, the probability $\P(\mathcal{E}_1 \cap \mathcal{E}_0^c)$ tends to zero as $n \to \infty$. The last term can be bounded by two ways. First, by the symmetric codebook generation, \begin{align*} \P(\mathcal{E}_{2} \cap \mathcal{E}_0^c) &\le \P(\mathcal{E}_2) \\ &= \P(\mathcal{E}_2 | M_1=M_2=1) \\ &\le \P( (U_1^n(m_1,l_1),Y_1^n) \in {\mathcal{T}_{\epsilon'}^{(n)}}(U_1,Y_1) \textrm{ for some } m_1 \neq 1, \textrm{ for some } l_1 \in [2^{n{\hat{R}}_1}] | M_1 = 1), \end{align*} which tends to zero as $n \to \infty$ if $R_1 + {\hat{R}}_1 \le I(U_1;Y_1) - \delta(\epsilon')$ by the packing lemma in \cite{El-Gamal--Kim2011}. Letting ${\hat{R}}_1 = \alpha (I(U_1;U_2) + 10\epsilon H(U_1,U_2))$, we have \begin{equation} R_1 \le \max\{0, I(U_1;Y_1) - \alpha I(U_1;U_2) - 2\delta(\epsilon')\}. \label{eq:snd_1} \end{equation} Secondly, we can decompose the event $\mathcal{E}_2 = \mathcal{E}_{21} \cup \mathcal{E}_{22}$ such that \begin{align*} \mathcal{E}_{21} &= \{ (U_1^n(m_1,l_1),U_2^n(M_2,l_2),Y_1^n) \in {\mathcal{T}_{\epsilon'}^{(n)}}(U_1,U_2,Y_1) \textrm{ for some } m_1 \neq M_1, \\ &\qquad\qquad\qquad\qquad\qquad \textrm{ for some } (l_1,l_2) \in [2^{n{\hat{R}}_1}] \times [2^{n{\hat{R}}_2}]\}, \\ \mathcal{E}_{22} &= \{ (U_1^n(m_1,l_1),U_2^n(m_2,l_2),Y_1^n) \in {\mathcal{T}_{\epsilon'}^{(n)}}(U_1,U_2,Y_1) \textrm{ for some } m_1 \neq M_1,\textrm{ for some } m_2 \neq M_2, \\ &\qquad\qquad\qquad\qquad\qquad \textrm{ for some } (l_1,l_2) \in [2^{n{\hat{R}}_1}] \times [2^{n{\hat{R}}_2}]\}. 
\end{align*} We start with bounding $\P(\mathcal{E}_{22})$ as follows: {\allowdisplaybreaks \begin{align*} \P(\mathcal{E}_{22}) &= \P(\mathcal{E}_{22}|M_1=M_2=1) \\ &= \P((U_1^n(m_1,l_1),U_2^n(m_2,l_2),Y_1^n) \in {\mathcal{T}_{\epsilon'}^{(n)}}(U_1,U_2,Y_1) \textrm{ for some } m_1 \neq 1,\textrm{ for some } m_2 \neq 1, \\ &\qquad\qquad\qquad\qquad\qquad \textrm{ for some } (l_1,l_2) \in [2^{n{\hat{R}}_1}] \times [2^{n{\hat{R}}_2}] |M_1=M_2=1) \\ &\overset{(a)}{\le} \sum_{m_1 \neq 1} \sum_{l_1} \sum_{m_2 \neq 1} \sum_{l_2} \sum_{(u_1^n,u_2^n,y_1^n) \atop \in {\mathcal{T}_{\epsilon'}^{(n)}}(U_1,U_2,Y_1)} p(y_1^n|M_1=M_2=1) 2^{-n(H(U_1)+H(U_2)-\delta(\epsilon'))} \\ &\le \sum_{m_1 \neq 1} \sum_{l_1} \sum_{m_2 \neq 1} \sum_{l_2} 2^{-n(H(U_1)+H(U_2)-H(U_1,U_2|Y_1)-2\delta(\epsilon'))} \\ &\le 2^{n(R_1+R_2+{\hat{R}}_1+{\hat{R}}_2)} 2^{-n(H(U_1)+H(U_2)-H(U_1,U_2|Y_1)-2\delta(\epsilon'))}, \end{align*} }% where $(a)$ follows since given $\{M_1=M_2=1\}$, the pair $(U_1^n(m_1,l_1),U_2^n(m_2,l_2))$ for $m_1 \neq 1, m_2 \neq 1$ is i.i.d. with respect to the product pmf $p(u_1)p(u_2)$ and is independent from $Y_1^n$. Substituting ${\hat{R}}_1 + {\hat{R}}_2 = I(U_1;U_2) + 10 \epsilon H(U_1,U_2)$, it follows that $\P(\mathcal{E}_{22})$ tends to zero as $n \to \infty$ if $R_1 + R_2 \le I(U_1,U_2;Y_1)- 3\delta(\epsilon')$. We next bound the probability $\P(\mathcal{E}_{21} \cap \mathcal{E}_0^c)$. Define the events $\mathcal{M}_1 :=\{M_1=M_2=1\}$ and $\mathcal{M}_2 :=\{L_1=L_2=1\}$. By the symmetric codebook generation, \[ \P(\mathcal{E}_{21} \cap \mathcal{E}_0^c) = \P(\mathcal{E}_{21} \cap \mathcal{E}_0^c |\mathcal{M}_1,\mathcal{M}_2 ), \] which can be bounded as \begin{align}\notag &\P(\mathcal{E}_{21} \cap \mathcal{E}_0^c| \mathcal{M}_1,\mathcal{M}_2) \\ \notag &\le \sum_{m_1 \neq 1} \sum_{l_1,l_2} \P( (U_1^n(m_1,l_1), U_2^n(1,l_2),Y_1^n) \in {\mathcal{T}_{\epsilon'}^{(n)}}, (U_1^n(1,1),U_2^n(1,1)) \in {\mathcal{T}_{\epsilon}^{(n)}} | \mathcal{M}_1,\mathcal{M}_2) \\ \notag &\le \sum_{m_1 \neq 1} \sum_{l_1} \P( (U_1^n(m_1,l_1), U_2^n(1,1),Y_1^n) \in {\mathcal{T}_{\epsilon'}^{(n)}}, (U_1^n(1,1),U_2^n(1,1)) \in {\mathcal{T}_{\epsilon}^{(n)}} | \mathcal{M}_1,\mathcal{M}_2) + \\ &\qquad \sum_{m_1 \neq 1} \sum_{l_1} \sum_{l_2 \neq 1} \P( (U_1^n(m_1,l_1), U_2^n(1,l_2),Y_1^n) \in {\mathcal{T}_{\epsilon'}^{(n)}}, (U_1^n(1,1),U_2^n(1,1)) \in {\mathcal{T}_{\epsilon}^{(n)}} | \mathcal{M}_1,\mathcal{M}_2). 
\label{eq:sum12} \end{align} The first summation term in (\ref{eq:sum12}) can be bounded as {\allowdisplaybreaks \begin{align*} &\sum_{m_1 \neq 1} \sum_{l_1} \P( (U_1^n(m_1,l_1), U_2^n(1,1),Y_1^n) \in {\mathcal{T}_{\epsilon'}^{(n)}}, (U_1^n(1,1),U_2^n(1,1)) \in {\mathcal{T}_{\epsilon}^{(n)}} |\mathcal{M}_1,\mathcal{M}_2) \\ &\le \sum_{m_1 \neq 1} \sum_{l_1} \sum_{(u_1^n,u_2^n) \atop \in {\mathcal{T}_{\epsilon}^{(n)}}} \sum_{({\tilde{u}}_1^n,y_1^n) \atop \in {\mathcal{T}_{\epsilon'}^{(n)}}(U_1,Y_1|u_2^n) } \P( U_1^n(m_1,l_1) = {\tilde{u}}_1^n, U_1^n(1,1)=u_1^n, U_2^n(1,1) = u_2^n,Y_1^n = y_1^n |\mathcal{M}_1,\mathcal{M}_2) \\ &\overset{(a)}{=} \sum_{m_1 \neq 1} \sum_{l_1} \sum_{(u_1^n,u_2^n) \atop \in {\mathcal{T}_{\epsilon}^{(n)}}} \sum_{({\tilde{u}}_1^n,y_1^n) \atop \in {\mathcal{T}_{\epsilon'}^{(n)}}(U_1,Y_1|u_2^n) } \P( U_1^n(m_1,l_1) = {\tilde{u}}_1^n, U_1^n(1,1)=u_1^n, U_2^n(1,1) = u_2^n|\mathcal{M}_1,\mathcal{M}_2) p(y_1^n|u_1^n,u_2^n) \\ &\overset{(b)}{\le} 2^{n({\hat{R}}_1+{\hat{R}}_2)} \sum_{m_1 \neq 1} \sum_{l_1} \sum_{(u_1^n,u_2^n) \atop \in {\mathcal{T}_{\epsilon'}^{(n)}}} \sum_{({\tilde{u}}_1^n,y_1^n) \atop \in {\mathcal{T}_{\epsilon'}^{(n)}}(U_1,Y_1|u_2^n) } \P( U_1^n(m_1,l_1) = {\tilde{u}}_1^n, U_1^n(1,1)=u_1^n, U_2^n(1,1) = u_2^n|\mathcal{M}_1) p(y_1^n|u_1^n,u_2^n) \\ &\overset{(c)}{\le} 2^{n({\hat{R}}_1+{\hat{R}}_2)} \sum_{m_1 \neq 1} \sum_{l_1} \sum_{(u_1^n,u_2^n) \atop \in {\mathcal{T}_{\epsilon'}^{(n)}}} \sum_{({\tilde{u}}_1^n,y_1^n) \atop \in {\mathcal{T}_{\epsilon'}^{(n)}}(U_1,Y_1|u_2^n) } p(y_1^n|u_1^n,u_2^n) 2^{-n(2H(U_1)+H(U_2) -\delta(\epsilon'))} \\ &\le 2^{n({\hat{R}}_1+{\hat{R}}_2)} \sum_{m_1 \neq 1} \sum_{l_1} \sum_{(u_1^n,u_2^n) \atop \in {\mathcal{T}_{\epsilon'}^{(n)}}} 2^{n(H(U_1|Y_1,U_2) + \delta(\epsilon')) } 2^{-n(2H(U_1)+H(U_2) -\delta(\epsilon'))} \\ &\le 2^{n({\hat{R}}_1+{\hat{R}}_2)} \sum_{m_1 \neq 1} \sum_{l_1} 2^{n(H(U_1,U_2)+\delta(\epsilon'))} 2^{n(H(U_1|Y_1,U_2) + \delta(\epsilon')) } 2^{-n(2H(U_1)+H(U_2) -\delta(\epsilon'))} \\ &\le 2^{n(R_1+2{\hat{R}}_1+{\hat{R}}_2+H(U_1,U_2)+H(U_1|Y_1,U_2) -2H(U_1)-H(U_2) + 3\delta(\epsilon'))} \\ &= 2^{n(R_1+2{\hat{R}}_1+{\hat{R}}_2-I(U_1;U_2)-I(U_1;Y_1,U_2) + 3\delta(\epsilon'))}, \end{align*} }% where $(a)$ follows since given $(\mathcal{M}_1,\mathcal{M}_2)$ the tuple $U_1^n(m_1,l_1) \to (U_1^n(1,1),U_2^n(1,1)) \to Y_1^n$ form a Markov chain, $(b)$ follows by~\cite[Lemma 11]{Lim--Gastpar2016}, and $(c)$ follows since the tuple $(U_1^n(m_1,l_1),U_1^n(1,1),U_2^n(1,1))$ is independent of the event $\mathcal{M}_1$ and is i.i.d. with respect to the product pmf $p(u_1)p(u_1)p(u_2)$. Similarly, the second summation term in (\ref{eq:sum12}) can be bounded as \begin{align*} &\sum_{m_1 \neq 1} \sum_{l_1} \sum_{l_2 \neq 1} \P( (U_1^n(m_1,l_1), U_2^n(1,l_2),Y_1^n) \in {\mathcal{T}_{\epsilon'}^{(n)}}, (U_1^n(1,1),U_2^n(1,1)) \in {\mathcal{T}_{\epsilon}^{(n)}} |\mathcal{M}_1,\mathcal{M}_2) \\ &\le 2^{n(R_1+2{\hat{R}}_1+2{\hat{R}}_2-2I(U_1;U_2)-I(U_1,U_2;Y_1) + 3\delta(\epsilon'))}. \end{align*} Therefore, $\P(\mathcal{E}_{21} \cap \mathcal{E}_0^c)$ tends to zero as $n \to \infty$ if $R_1+2{\hat{R}}_1+{\hat{R}}_2 \le I(U_1;U_2)+I(U_1;Y_1,U_2) - 3\delta(\epsilon')$ and $R_1+2{\hat{R}}_1+2{\hat{R}}_2 \le 2I(U_1;U_2)+I(U_1,U_2;Y_1)-3\delta(\epsilon')$. Letting ${\hat{R}}_1 = \alpha (I(U_1;U_2) + 10\epsilon H(U_1,U_2))$ and ${\hat{R}}_2 = \overline{\a} (I(U_1;U_2) + 10\epsilon H(U_1,U_2))$ results in $R_1 \le I(U_1;Y_1,U_2) - \alpha I(U_1;U_2) - 4\delta(\epsilon')$ and $R_1 \le I(U_1,U_2;Y_1)-4\delta(\epsilon')$. 
Combining with (\ref{eq:snd_1}), the probability of error at Decoder 1 tends to zero as $n \to \infty$ if \begin{equation} \label{eq:marton11} R_1 \le \max\{0, I(U_1;Y_1) - \alpha I(U_1;U_2) - 4\delta(\epsilon')\}, \end{equation} or \begin{subequations} \label{eq:marton12} \begin{align} R_1 &\le I(U_1;Y_1,U_2) - \alpha I(U_1;U_2) - 4\delta(\epsilon'),\\ R_1 + R_2 &\le I(U_1,U_2;Y_1) - 4\delta(\epsilon'). \end{align} \end{subequations} Repeating similar steps, the probability of error at Decoder 2 tends to zero as $n \to \infty$ if \begin{equation} \label{eq:marton21} R_2 \le \max\{0, I(U_2;Y_2) - \overline{\a} I(U_1;U_2) - 4\delta(\epsilon')\}, \end{equation} or \begin{subequations} \label{eq:marton22} \begin{align} R_2 &\le I(U_2;Y_2,U_1) - \overline{\a} I(U_1;U_2) - 4\delta(\epsilon'),\\ R_1 + R_2 &\le I(U_1,U_2;Y_2) - 4\delta(\epsilon'). \end{align} \end{subequations} If we denote the set of rate pairs satisfying (\ref{eq:marton11}) or (\ref{eq:marton12}) as $\Rr_{\mathrm{BC},1}(p, \alpha, \delta(\epsilon'))$, and denote the set of rate pairs satisfying (\ref{eq:marton21}) or (\ref{eq:marton22}) as $\Rr_{\mathrm{BC},2}(p, \alpha, \delta(\epsilon'))$, then the rate region $\Rr_{\mathrm{BC},1}(p,\alpha,\delta(\epsilon')) \cap \Rr_{\mathrm{BC},2}(p,\alpha,\delta(\epsilon'))$ is achievable by the $\epsilon'$-typicality decoders. Define the rate regions $\Rr_{\mathrm{BC},j}(p,\alpha) := \Rr_{\mathrm{BC},j}(p,\alpha,\delta(\epsilon')=0)$, $j=1,2$. Let $\epsilon'=2\epsilon$. Taking $\epsilon \to 0$ and then taking the closure implies \[ \Rr_{\mathrm{BC},1}(p,\alpha) \cap \Rr_{\mathrm{BC},2}(p,\alpha) \; \sbq \; \Rr_\mathrm{BC}^*(p,\alpha). \] The achievability proof follows from the next lemma that provides an equivalent characterization for the rate region in Theorem \ref{thm:opt_marton}. \begin{lemma} \label{lem:equiv_reg_marton} For any input pmf $p=p(u_1,u_2)$, function $x(u_1,u_2)$, and $\alpha \in [0 \; 1]$, \[ \Rr_\mathrm{BC}^{**}(p,\alpha) = \Rr_{\mathrm{BC},1}(p,\alpha) \cap \Rr_{\mathrm{BC},2}(p,\alpha). \] \end{lemma} \begin{IEEEproof} Fix pmf $p = p(u_1,u_2)$, function $x(u_1,u_2)$ and $\alpha \in [0 \; 1]$. It suffices to show that the rate region $\Rr_{\mathrm{BC},1}(p,\alpha)$ is equivalent to the set of rate pairs $(R_1,R_2)$ that satisfy (\ref{eq:thm_marton1})-(\ref{eq:thm_marton2}). We first show that any rate pair in $\Rr_{\mathrm{BC},1}(p,\alpha)$ satisfies (\ref{eq:thm_marton1})-(\ref{eq:thm_marton2}). Suppose that the rate pair $(R_1, R_2) \in \Rr_{\mathrm{BC},1}(p,\alpha)$, which implies that \begin{align*} R_1 \le I(U_1;Y_1,U_2) - \alpha I(U_1;U_2), \end{align*} and \begin{align*} R_1 &\le \max\{0, I(U_1;Y_1) - \alpha I(U_1;U_2), I(U_1,U_2;Y_1) - R_2\} \\ &= I(U_1,U_2;Y_1) - \min \{I(U_1,U_2;Y_1), I(U_2;Y_1,U_1) - \overline{\a} I(U_1;U_2), R_2\}. \end{align*} Therefore, $(R_1,R_2)$ satisfies (\ref{eq:thm_marton1})-(\ref{eq:thm_marton2}). For the other direction, suppose that the rate pair $(R_1, R_2)$ satisfies (\ref{eq:thm_marton1})-(\ref{eq:thm_marton2}). Assume also that $R_2 < \min\{I(U_2;Y_1,U_1)-\overline{\a} I(U_1;U_2), I(U_1,U_2;Y_1)\}$. It then follows that \begin{align*} R_1 &\le I(U_1;Y_1,U_2) - \alpha I(U_1;U_2),\\ R_1 &\le I(U_1,U_2;Y_1) - R_2. \end{align*} So, $(R_1,R_2) \in \tilde{\Rr}(p,\alpha)$. 
If instead $R_2 \ge \min\{I(U_2;Y_1,U_1)-\overline{\a} I(U_1;U_2), I(U_1,U_2;Y_1)\}$, then \begin{align*} R_1 &\le I(U_1;Y_1,U_2) - \alpha I(U_1;U_2),\\ R_1 &\le I(U_1,U_2;Y_1) - \min\{I(U_2;Y_1,U_1)-\overline{\a} I(U_1;U_2), I(U_1,U_2;Y_1)\} = \max\{0, I(U_1;Y_1) - \alpha I(U_1;U_2) \}. \end{align*} Therefore, $(R_1,R_2) \in \Rr_{\mathrm{BC},1}(p,\alpha)$, which completes the proof of the lemma. \end{IEEEproof} \section{Proof of Lemma~\ref{lem:outer2_list_marton}} \label{app:list_marton} Let $\epsilon' > \epsilon$. First, by (the averaged version of) Fano's lemma in (\ref{eq:marton_fano}), we have \[ I(M_2; Y_1^n |\mathcal{C}_n) \ge I(M_2; M_1, Y_1^n | \mathcal{C}_n) - n \epsilon_n. \] Therefore, it suffices to prove that for $n$ sufficiently large, \[ I(M_2; M_1, Y_1^n |\mathcal{C}_n) \ge n [\min \{ R_2, I(U_2;Y_1,U_1) - \overline{\a} I(U_1;U_2), I(U_1,U_2;Y_1) \} - \delta(\epsilon') - 2\epsilon_n], \] for some $\delta(\epsilon')$ that tends to zero as $\epsilon \to 0$. Similar to \cite{Bandemer--El-Gamal--Kim2012a}, we will show that given $M_1, Y_1^n$ and $\mathcal{C}_n$, a relatively short list $\mathcal{L} \sbq [2^{nR_2}]$ can be constructed that contains $M_2$ with high probability. Define a random set \begin{align*} \mathcal{L} &= \{ m_2 \in [2^{nR_2}]: (U_1^n(M_1,l_1),U_2^n(m_2,l_2), Y_1^n) \in {\mathcal{T}_{\epsilon'}^{(n)}}(U_1,U_2, Y_1) \\ &\qquad\qquad\qquad\qquad \textrm{ for some } (l_1,l_2) \in [2^{n{\hat{R}}_1}] \times [2^{n{\hat{R}}_2}]\}. \end{align*} Define the events $\mathcal{M}_1 = \{M_1 = M_2 = 1\}$ and $\mathcal{M}_2=\{L_1=L_2=1 \}$. The indicator random variable ${\tilde{E}}_n$ is as defined in (\ref{eq:indicator_en_marton}). By the symmetry of the codebook generation, for each $m_2 \neq M_2 \in [2^{nR_2}]$ we start with \begin{align} \notag & \P( m_2 \in \mathcal{L}, {\tilde{E}}_n = 1) \\ \notag &= \P( m_2 \in \mathcal{L}, {\tilde{E}}_n=1 | \mathcal{M}_1,\mathcal{M}_2) \\ \notag & \overset{(a)}{=} \P( (U_1^n(1,l_1), U_2^n(m_2,l_2), Y_1^n) \in {\mathcal{T}_{\epsilon'}^{(n)}} \textrm{ for some } (l_1,l_2) \in [2^{n{\hat{R}}_1}] \times [2^{n{\hat{R}}_2}], \\ \notag &\qquad\qquad\qquad\qquad\qquad\qquad\qquad (U_1^n(1,1), U_2^n(1,1)) \in {\mathcal{T}_{\epsilon}^{(n)}} | \mathcal{M}_1,\mathcal{M}_2)\\ \notag & \overset{(b)}{\le} \sum_{l_2} \sum_{(u_1^n,u_2^n) \in \atop {\mathcal{T}_{\epsilon}^{(n)}}(U_1,U_2)} \sum_{({\tilde{u}}_2^n,y_1^n) \in \atop {\mathcal{T}_{\epsilon'}^{(n)}}(U_2,Y_1|u_1^n)} \P ( U_{1}^n(1,1) = u_1^n, U_{2}^n(1,1) = u_2^n, U_{2}^n(m_2,l_2) = {\tilde{u}}_2^n, Y_1^n = y_1^n | \mathcal{M}_1,\mathcal{M}_2) \\ &\quad + \sum_{l_1\neq 1} \sum_{l_2} \sum_{(u_1^n,u_2^n) \in \atop {\mathcal{T}_{\epsilon}^{(n)}}(U_1,U_2)} \sum_{({\tilde{u}}_1^n,{\tilde{u}}_2^n,y_1^n) \in \atop {\mathcal{T}_{\epsilon'}^{(n)}}(U_1,U_2,Y_1)} \P \left(\begin{array}{c|c} U_{1}^n(1,1) = u_1^n, U_{2}^n(1,1) = u_2^n, & \mathcal{M}_1, \\ U_{1}^n(m_1,l_1) = {\tilde{u}}_1^n, U_{2}^n(m_2,l_2) = {\tilde{u}}_2^n, Y_1^n = y_1^n & \mathcal{M}_2 \end{array}\right) \label{eq:marton_two_sum} \end{align} where $(b)$ follows by the union of events bound and by decomposing the event in $(a)$ onto two sets: $\{l_1=1\}$ and $\{l_1 \neq 1\}$. 
Two summation terms on the right hand side of (\ref{eq:marton_two_sum}) can be bounded using techniques similar to those in the achievability proof (see Appendix~\ref{app:marton_achiev}) for Theorem \ref{thm:opt_marton} to get \[ \P( m_2 \in \mathcal{L}, {\tilde{E}}_n = 1) \le 2^{-n(I(U_2;Y_1,U_1)-\overline{\a} I(U_1;U_2) - 4\delta(\epsilon'))} + 2^{-n(I(U_1,U_2;Y_1)-4\delta(\epsilon'))}. \] Since $\P({\tilde{E}}_n = 1)$ tends to one as $n \to \infty$, for $n$ sufficiently large, $ \P( m_2 \in \mathcal{L} | {\tilde{E}}_n = 1) \le \P( m_2 \in \mathcal{L} , {\tilde{E}}_n = 1) q^{\epsilon}$. The expected cardinality of $\mathcal{L}$ given $\{{\tilde{E}}_n = 1 \}$ is then bounded as \begin{align} \notag \E(|\mathcal{L}| | {\tilde{E}}_n = 1) & \le 1 + \sum_{m_2 \neq M_2} \P( m_2 \in \mathcal{L} | {\tilde{E}}_n = 1) \\ \notag & \le 1+ 2^{n(R_2 - I(U_2;Y_1,U_1)+\overline{\a} I(U_1;U_2) + 4\delta(\epsilon')+\frac{\epsilon}{n})} + 2^{n(R_2-I(U_1,U_2;Y_1)+4\delta(\epsilon') + \frac{\epsilon}{n})} \\ &= 1+ 2^{n(R_2 - I(U_2;Y_1,U_1)+\overline{\a} I(U_1;U_2) + 4\delta(\epsilon')+\epsilon_n)} + 2^{n(R_2-I(U_1,U_2;Y_1)+4\delta(\epsilon') + \epsilon_n)} \label{eq:ent_list_marton} \end{align} for $n$ sufficiently large. Define another indicator random variable ${\tilde{F}}_n = \mathbbm{1}_{\{ M_2 \in \mathcal{L} \}}$. Since $\epsilon' > \epsilon$ and $\P({\tilde{E}}_n=1)$ tends to one as $n \to \infty$, by the conditional typicality lemma in~\cite[p.~27]{El-Gamal--Kim2011}, $\P({\tilde{F}}_n=1)$ tends to one as $n \to \infty$. Then, for $n$ sufficiently large, we have \begin{align*} H(M_2 | \mathcal{C}_n, M_1, Y_1^n) &= H(M_2 | \mathcal{C}_n, M_1, Y_1^n, {\tilde{E}}_n,{\tilde{F}}_n) + I ( M_2; {\tilde{E}}_n,{\tilde{F}}_n | \mathcal{C}_n, M_1, Y_1^n) \\ & \le H(M_2 | \mathcal{C}_n, M_1, Y_1^n, {\tilde{E}}_n,{\tilde{F}}_n) + 2 \\ & \le 2 + \P({\tilde{F}}_n=0) H(M_2 | \mathcal{C}_n, M_1, Y_1^n, {\tilde{E}}_n,{\tilde{F}}_n=0) + H(M_2 | \mathcal{C}_n, M_1, Y_1^n, {\tilde{E}}_n,{\tilde{F}}_n=1) \\ & \le 2 + nR_2 \P({\tilde{F}}_n=0) + H(M_2 | \mathcal{C}_n, M_1^n, Y_1^n, {\tilde{E}}_n,{\tilde{F}}_n=1). 
\end{align*} For the last term, we use the fact that if $M_2 \in \mathcal{L}$, then the conditional entropy cannot exceed $\log(|\mathcal{L}|)$: {\allowdisplaybreaks \begin{align*} &H(M_2 | \mathcal{C}_n, M_1, Y_1^n, {\tilde{E}}_n,{\tilde{F}}_n=1) \\ & \overset{(a)}{=} H(M_2 | \mathcal{C}_n, M_1, Y_1^n, {\tilde{E}}_n,{\tilde{F}}_n=1, \mathcal{L}, |\mathcal{L}|) \\ & \le H(M_2 | {\tilde{E}}_n,{\tilde{F}}_n=1, \mathcal{L}, |\mathcal{L}|) \\ &= \sum_{l=0}^{2^{nR_2}} \P(|\mathcal{L}|=l,{\tilde{E}}_n=1) H(M_2 | {\tilde{E}}_n=1, {\tilde{F}}_n = 1, \mathcal{L}, |\mathcal{L}| = l) \\ &\quad\quad\quad + \sum_{l=0}^{2^{nR_2}} \P(|\mathcal{L}|=l,{\tilde{E}}_n=0) H(M_2 | {\tilde{E}}_n=0, {\tilde{F}}_n = 1, \mathcal{L}, |\mathcal{L}| = l) \\ &\le \sum_{l=0}^{2^{nR_2}} \P(|\mathcal{L}|=l,{\tilde{E}}_n=1) H(M_2 | {\tilde{E}}_n=1, {\tilde{F}}_n = 1, \mathcal{L}, |\mathcal{L}| = l) + {nR_2} \P({\tilde{E}}_n=0) \\ &\le \sum_{l=0}^{2^{nR_2}} \P(|\mathcal{L}|=l,{\tilde{E}}_n=1) \log(l) + {nR_2} \P({\tilde{E}}_n=0) \\ &\le \sum_{l=0}^{2^{nR_2}} \P(|\mathcal{L}|=l|{\tilde{E}}_n=1) \log(l) + {nR_2} \P({\tilde{E}}_n=0) \\ &= \E [\log(|\mathcal{L}|) | {\tilde{E}}_n=1] + {nR_2} \P({\tilde{E}}_n=0) \\ & \overset{(b)}{\le} \log(\E [|\mathcal{L}| | {\tilde{E}}_n=1]) + {nR_2} \P({\tilde{E}}_n=0) \\ & \overset{(c)}{\le} \max \{0, n(R_2 - I(U_2;Y_1,U_1)+\overline{\a} I(U_1;U_2)+ 4\delta(\epsilon')+\epsilon_n), n(R_2 - I(U_1,U_2;Y_1) + 4\delta(\epsilon')+\epsilon_n) \} \\ &\qquad\qquad\qquad + {nR_2} \P({\tilde{E}}_n=0) \\ & \le n \cdot \max \{0, R_2 - I(U_2;Y_1,U_1)+\overline{\a} I(U_1;U_2), R_2 - I(U_1,U_2;Y_1) \} + n 4\delta(\epsilon') + n\epsilon_n + {nR_2} \P({\tilde{E}}_n=0), \end{align*} }% where $(a)$ follows since the set $\mathcal{L}$ and its cardinality $| \mathcal{L}|$ are functions of $(\mathcal{C}_n, M_1, Y_1^n)$, $(b)$ follows by Jensen's inequality, and $(c)$ follows by (\ref{eq:ent_list_marton}) and the soft-max interpretation of the log-sum-exp function~\cite[p.~72]{Boyd--Vandenberghe2004}. Substituting back gives \begin{align*} I(M_2; M_1, Y_1^n | \mathcal{C}_n) &= H(M_2 | \mathcal{C}_n) - H(M_2 | \mathcal{C}_n, M_1, Y_1^n) \\ &= nR_2 - H(M_2 | \mathcal{C}_n, M_1, Y_1^n) \\ & \ge nR_2 - 2 - nR_2 \P({\tilde{F}}_n=0) - H(M_2 | \mathcal{C}_n, M_1^n, Y_1^n, {\tilde{E}}_n,{\tilde{F}}_n=1) \\ & \ge nR_2 - 2 - nR_2 \P({\tilde{F}}_n=0) - n 4\delta(\epsilon') - n\epsilon_n - {nR_2} \P({\tilde{E}}_n=0) \\ &\quad\quad -n \cdot \max \{0, R_2 - I(U_2;Y_1,U_1)+\overline{\a} I(U_1;U_2), R_2 - I(U_1,U_2;Y_1) \} \\ &\overset{(a)}{=} n [ \min \{ R_2, I(U_2;Y_1,U_1)-\overline{\a} I(U_1;U_2), I(U_1,U_2;Y_1)\} - 4\delta(\epsilon') - 2\epsilon_n ], \end{align*} where $(a)$ follows since both of the probabilities $\P({\tilde{E}}_n=0)$ and $\P({\tilde{F}}_n=0)$ tend to zero as $n \to \infty$.
2024-02-18T23:40:42.827Z
2018-10-30T01:26:05.000Z
algebraic_stack_train_0000
3,164
16,819
proofpile-arXiv_065-15559
\section{Introduction} Facial expression is a natural signal to convey emotions and intentions of human beings. Therefore, facial expression recognition (FER) is essential for machines to understand human behaviors and interact with humans. Though great efforts have been made in last decades and promising progress has been achieved, FER still suffers from data uncertainty problem, i.e., data is frequently mislabeled because of the subjectiveness of annotators and the ambiguities of facial images. Existing works on FER rarely focus on this problem, except IPA2LT~\cite{r03} and Self-Cure Network (SCN)~\cite{r01}, which suppress data uncertainties by discovering the latent truth from inconsistent pseudo labels or relabeling mislabeled data by weighting and ranking samples in a mini-batch. Though learning with noisy labels has been studied extensively in the community of computer vision, existing works mainly focus on correcting mislabeled data by estimating label quality and noise distribution, or guiding network training with knowledge learned from clean data. However, as pointed in~\cite{r02}, when people encounter a vague facial image with fuzzy expression, they often associate it with other images sharing similar expressions, instead of staring at its parts for research, which is called human adaptively associative learning process. In other words, humans tend to make associative comparisons when discerning subtle expressions, while current works neglects the associative relations of facial images. \begin{figure}[tp] \centering \includegraphics[width=8cm]{motivation5.pdf} \caption{The excursive semantic and featuer covariate shifting problem caused by uncertain data. After being normalized by our AGFN, feature distribution is with much clearer boundaries although mislabeled samples are still there.} \label{fig_1} \end{figure} Moreover, with the growth in scale of training samples gathered from internet, data uncertainties have been introducing great challenges to FER by leading to excursive semantic and feature covariate shifting, as shown in Figure~\ref{fig_1}, where distributions of individual classes are with serious overlaps because of mislabeled data. Therefore, in this work, we propose an efficient normalization method called Adaptive Graph-based Feature Normalization (AGFN) to tackle the data uncertainty problem by normalizing feature distributions with the association of expressions. As shown in Figure~\ref{fig_1}, with the assistance of proposed AGFN, individual classes can be split with much clearer boundaries though mislabeled data still exists. Specifically, given feature maps extracted from facial images, AGFN firstly projects them into emotional feature vectors. Then, under the assumption that the probability of sample connections satisfies Poisson distribution and corresponding parameters are closely related to samples' similarities, a Poisson graph generator is designed to adaptively construct topological graphs for samples in each mini-batches. Afterwards, Graph Convolutional Network (GCN) is exploited to convey the semantic information of associated samples because expressions present in facial images can be reflected by other images sharing similar features in a proper feature space. In addition, since the calculation of adjacent matrices used for graph generation involves a sampling process, parameters of our network can not be optimized by the widely used gradient decent method. 
Therefore, we design a coordinate decent strategy to jointly optimize parameters of neural networks and the sampling process. According to our experiments, when equipping a naive FER model with proposed AGFN, its recognition performance can be improved by large margins from 85.37\% to 91.84\% and from 85.89\% to 91.11\% on the benchmark datasets FER2013plus and RAF-DB, implying the importance of tackling the uncertainty problem in FER as well as the effectiveness of proposed AGFN. Moreover, we also conduct experiments on synthetic datasets where a large portion of samples are mislabeled, finding that our AGFN-equipped network surpasses existing state-of-the-art works significantly by 3.38\% and 4.52\% on data with serious uncertainties. Our contributions are as follows: (1) We propose to utilize the associative relations of expressions to tackle the excursive semantic and feature covariate shifting problem caused by data uncertainties, and propose an effective normalization method named AGFN to elevate the performance of FER models; (2) We design a Poisson graph generator to adaptively construct topological graphs for samples in each mini-batches with a sampling process, and utilize GCN to normalize feature distributions. Moreover, we design a coordinate decent strategy to jointly optimize parameters involved in neural networks and sampling process; (3) We conduct extensive experiments to show the superiority of proposed AGFN, and demonstrate that when a large portion (e.g. 20\%) of samples are mislabeled, our AGFN-equipped network shows much better robustness and effectiveness. \section{Related works} \subsection{Facial Expression Recognition} Feature extraction and expression classification are the two basic modules of typical FER pipeline, and in past years, most of related works have focused on designing more effective and robust feature extractors. For example, works~\cite{r05}~\cite{r06}~\cite{r07}~\cite{r08} explored various deep neural networks to extract more powerful features, including VGG network, Inception network, Residual network and Capsule network etc. Dinesh et al.~\cite{r09} pointed out that the widely used convolutional layers and average pooling layers only captured first-order statistics, so they proposed to extract the second order statistic features with covariance pooling. In practice, pose variations, occlusions and uneven illuminations always resulted in low quality facial images, on which FER models usually failed to extract discriminative features. Therefore, Wang et al.~\cite{r10} designed a regional attention network to improve the performance of FER. Inspired by the psychological theory that expressions could be decomposed into multiple facial action units, Liu et al.~\cite{r11} constructed a deep network called AU-inspired Deep Networks (AUDN) to combine the informative local appearance variation and high level representation. Generally, objective functions of FER networks considered each samples independently, while Zhao et al.~\cite{r12} addressed this issue from another perspective. They designed a peak-piloted deep network (PPDN) to supervise the intermediate feature responses for samples with non-peak expression. Recently, Transformer~\cite{r26} showed its power in the field of Neural Language Processing (NLP) and Computer Vision (CV). Therefore, Ma et al.~\cite{r25} and Huang et al.~\cite{r22} also introduced Visual Transformer to FER and achieved promising performance. 
\subsection{Uncertainties in Facial Expression Recognition} Uncertainties result in mislabeled data, which seriously affects the performance of FER models. Though learning with noisy labels has attracted extensive attentions in the community of CV, it is rarely studied in the task of FER. On the other hand, existing works on general CV tasks mainly focus on addressing this issue by pre-training networks on weak data and then fine-tuning them with true labels~\cite{r14}, guiding the training of networks with knowledge learned from clean data~\cite{r13}, or relabeling mislabeled data together with learning powerful representations and classifiers~\cite{r16}. For example, to alleviate the harm from noisy data, Dehghani et al.~\cite{r13} updated the gradients of a target network under the guidance of a second confidence network that trained with a small set of clean data. Li et al.~\cite{r15} designed a unified distillation framework, where the distillation process was guided with a knowledge graph, to `hedge the risk' of learning from noisy labels. Apparently, above works tried to estimate label quality or noisy distribution with a small set of clean data, while works without utilizing clean data usually introduced additional constrains or distributions on noisy data~\cite{r17}~\cite{r18}. For example, Mnih et al.~\cite{r17} proposed more robust loss functions to deal with omission noise and registration noise on aerial image datasets. To estimate correct labels, Goldberger et al.~\cite{r18} viewed the correct label as latent random variable and modeled the noise processes by a communication channel with unknown parameters, which were optimized with EM algorithm. For the task of FER, Zeng et al.~\cite{r03} was the first to improve FER performance by addressing the data uncertainty issue. They assigned more than one labels to each samples and discovered the latent truth from the inconsistent pseudo labels with an end-to-end LTNet. Subsequently, Wang et al.~\cite{r01} suppressed data uncertainties by weighting and ranking samples in a mini-batch with a self-attention mechanism, followed by modifying samples' labels in the lowest-ranked group. In summary, previous works have explored how to discover the truth of mislabeled data and prevent networks from the harm of noisy data, neglecting the associative relations of expressions. In contrast, our AGFN protects FER models from data uncertainties by tackling the excursive semantic and feature covariate shifting problem with considering the association of expressions. \begin{figure*}[tp] \begin{center} \includegraphics[width=15cm]{pipeline7.pdf} \end{center} \caption{Architecture of proposed AGFN-equipped network. The AGFN module is composed of similarity calculator, Poisson Graph generator and GCN-based feature normalizer, and can be conveniently inserted into FER models between their feature extractors and expression classifiers. } \label{fig:short} \end{figure*} \section{Method} Facial images containing similar subtle expressions are most likely to share the same labels and according to human associative learning mechanism~\cite{r02}, humans tend to correlate objects with similar abstract features. Therefore, exchanging semantic information among samples with high similarity can help to normalize features of individual samples, leading to improvement of FER performance. Toward this end, we design a feature normalization method called Adaptive Graph-based Feature Normalization (AGFN). 
Given a baseline model that composed of a feature extractor and an expression classifier, which are basic components used to extract feature maps from facial images and distinguish corresponding expressions, our AGFN can be conveniently inserted into it as shown in Figure~\ref{fig:short}. Specifically, AGFN exploits a novel graph generator to dynamically and adaptively construct topological graphs for samples in each mini-batches according to their similarities. In this generator, adjacent matrices of topological graphs are determined by a sampling process. Then, GCN is used to transfer semantic information among associated samples. Since gradient calculation rules of parameters from neural networks and above sampling process are different, traditional gradient decent method is not applicable anymore. Therefore, we propose a coordinate decent strategy to optimize our network in an end-to-end way. \subsection{Poisson Graph Generator} Traditional graph-based methods usually connect samples with high similarities with the widely used threshold-based staircase function. However, samples with very similar features may not belong to the same classes and high similarities only imply high probabilities of sharing the same class labels. To address this issue, we propose to model the relations between feature similarities and sample connection probabilities with Poisson distribution. Poisson distribution is used to describe times that an event happens in a unit time, as shown in Eq.~\ref{e0}, where $k$ and $\lambda$ denote times and average times that an event happens per unit time. In human associative learning, two samples are usually compared for multiple times to confirm whether they belong to the same classes, and different regions of interest are looked every time. Therefore, we assume that probabilities of sample connections satisfy Poisson distribution and corresponding parameters are closely related to samples' similarities. Subsequently, a novel Poisson graph generator is proposed to calculate the adjacent matrices of topological graphs with a sampling process. \begin{equation} Po(X=k;\lambda)=\frac{\lambda ^k}{k!}e^{-\lambda},k=0,1,...K \label{e0} \end{equation} Given input images $Im=\{im_1, im_2,..., im_N\}$ in a mini-batch, the FCN-based feature extractor outputs corresponding feature maps $M=\{m_1, m_2,..., m_N\}$, which is further projected into emotional feature vectors $X=\{x_1, x_2,..., x_N\}$ by a Multi-Layer Perceptron (MLP). Then, similarity $cossim(x_i, x_j)$ between sample $im_i$ and $im_j$ can be calculated with the cosine similarity coefficient formulated in Eq.~\ref{e1}: \begin{equation} cossim(x_i, x_j)=\frac{x_i*x_j}{||x_i||||x_j||} \label{e1} \end{equation} Intuitively, we associate an object to different ones for multiple times to capture more detailed information. Therefore, for better robustness, the construction of topological graphs should follow a stochastic mechanism rather than the widely used threshold-based staircase function so that different contrastive objects can be seen in different iterations. Hence, we model the connection probability $p(e_{i,j}=1)$ of sample $im_i$ and sample $im_j$ with Poisson distribution, as shown in Eq.~\ref{e2}, and the Poisson parameter ${\lambda_{i,j}}$ is computed with the similarity $cossim(x_i, x_j)$ via a linear function described in Eq.~\ref{e3}. Here, parameters ${\alpha}$ and ${\beta}$ are introduced to scale the probability distribution, and will be learned during the training procedure. 
\begin{equation} \begin{aligned} P(e_{i,j}=1) & =1-Po(0;\lambda_{i,j}) \\ & = 1-\frac{e^{-\lambda_{i,j}}\lambda_{i,j}^0}{0!}=1-e^{-\lambda_{i,j}} \end{aligned} \label{e2} \end{equation} \begin{equation} \lambda_{i,j}=\alpha cossim(x_i, x_j) + \beta \label{e3} \end{equation} Afterwards, we sample the adjacent matrix ${A}$ according to $A\sim P$ (see Eq.~\ref{e4}) for samples in current mini-batch. Expectations of the sampling process will be optimized as introduced in Section~\ref{pipeline}. \begin{equation} \begin{aligned} A= \{a_{i,j}|a_{i,j}\sim P(e_{i,j}=1), i,j\in\{1,2,...,N\}\} \end{aligned} \label{e4} \end{equation} \subsection{Feature Normalization with GCN} To alleviate the excursive semantic and feature covariate shifting problem, we imitate human associative learning procedure by conveying semantic information among associated samples with GCN~\cite{r34}, which is built upon samples' topological graphs generated by our Poisson graph generator. The employed GCN is in the second order Chebyshev expansion, as formulated in Eq.~\ref{e5}, where ${A}$ is the adjacent matrix obtained by above sampling strategy, ${W}$ is trainable parameters, $I$ is an identity matrix, $\widetilde{D}$ is a diagonal matrix, ${X}$ denotes samples' emotional feature vectors and $\hat{X}$ represents the expected normalized features. Here, $\widetilde{D}^{\frac{-1}{2}}$ is used to weight information from associated samples. For $i$-th sample, more associated samples result in greater value of $\widetilde{D}_{ii}$, which means less information from associated samples will be passed to current sample. \begin{equation} \begin{aligned} & \hat{X}=g(X, W, A)= (\widetilde{D}^{\frac{-1}{2}}(A+I)\widetilde{D}^{\frac{-1}{2}})XW \\ & {\widetilde{D}_{ii}=1+\sum_j{A_{i,j}}} \end{aligned} \label{e5} \end{equation} \subsection{Optimization with Coordinate Decent}\label{pipeline} Suppose our loss function is defined as Eq.~\ref{e6}, where $f(.)$ denotes the expression classifier, then our final goal is twofold: 1) optimizing parameters $W$ involved in neural networks, including feature extractor, GCN and expression classifier; and 2) learning parameters $\alpha$ and $\beta$ used to find the best adjacent matrix ${A \in \mathcal{H}_N}$ in Eq.~\ref{e2} and~\ref{e3}. Here, ${\mathcal{H}_N}$ is the convex hull of the set of all possible adjacency matrices under the Poisson distribution. Furthermore, the objective of our network is to minimize the expectation formulated in Eq.~\ref{e7}. \begin{equation} \ell(f(\hat{X}), y)=||f(\hat{X})-y||_2^2 \label{e6} \end{equation} \begin{equation} J=\mathop{min}\limits_{W, \alpha,\beta} E_{A\sim P}[\ell(f(\hat{X}), y)] \label{e7} \end{equation} Since the gradient calculation rule of $W$ is different from that of $\alpha$ and $\beta$, traditional gradient decent strategy is not applicable in our optimization procedure. Therefore, under the assumption that parameters to be optimized are independent from each other, we design a coordinate descent strategy to optimize our network in an end-to-end way. Concretely, we update $W$ with the tractable approximate learning dynamics shown in Eq.~\ref{e8}, and obtain the approximate gradient $\nabla_WE$ with Eq.~\ref{e9}, where $P(A)$ is the probability of sampling $A$ from distribution $P$ with Eq.~\ref{e4}, ${S}$ represents the pre-defined sampling times and $A_s$ denotes the result of the $s$-th sampling. 
\subsection{Optimization with Coordinate Descent}\label{pipeline}
Suppose our loss function is defined as in Eq.~\ref{e6}, where $f(\cdot)$ denotes the expression classifier. Our final goal is then twofold: 1) optimizing the parameters $W$ involved in the neural networks, including the feature extractor, the GCN, and the expression classifier; and 2) learning the parameters $\alpha$ and $\beta$ used to find the best adjacency matrix ${A \in \mathcal{H}_N}$ in Eq.~\ref{e2} and~\ref{e3}. Here, ${\mathcal{H}_N}$ is the convex hull of the set of all possible adjacency matrices under the Poisson distribution. The objective of our network is to minimize the expectation formulated in Eq.~\ref{e7}.
\begin{equation}
\ell(f(\hat{X}), y)=||f(\hat{X})-y||_2^2
\label{e6}
\end{equation}
\begin{equation}
J=\mathop{min}\limits_{W, \alpha,\beta} E_{A\sim P}[\ell(f(\hat{X}), y)]
\label{e7}
\end{equation}
Since the gradient calculation rule for $W$ is different from that for $\alpha$ and $\beta$, the traditional gradient descent strategy is not applicable in our optimization procedure. Therefore, under the assumption that the parameters to be optimized are independent of each other, we design a coordinate descent strategy to optimize our network in an end-to-end way. Concretely, we update $W$ with the tractable approximate learning dynamics shown in Eq.~\ref{e8}, and obtain the approximate gradient $\nabla_WE$ with Eq.~\ref{e9}, where $P(A)$ is the probability of sampling $A$ from the distribution $P$ in Eq.~\ref{e4}, ${S}$ denotes the pre-defined number of sampling draws, and $A_s$ denotes the result of the $s$-th draw.
\begin{equation}
\hat{W} = W - \gamma_1\nabla_WE
\label{e8}
\end{equation}
\begin{equation}
\begin{split}
\nabla_WE &= \nabla_WE_{A\sim P}[\ell(f(\hat{X}), y)]\\
&=\sum P(A)\nabla_W \ell(f(g(X, W, A)), y) \\
&\approx \frac{1}{S}\sum_{s=1}^{S}\nabla_W \ell(f(g(X, W, A_s)), y)
\end{split}
\label{e9}
\end{equation}
On the other hand, we update $\alpha$ and $\beta$ with Eq.~\ref{e10}, where $\nabla_{\alpha}\lambda_{i,j} = cossim(x_i,x_j) $, $ \nabla_{\beta}\lambda_{i,j} = 1$, and $\nabla_{\lambda} E$ is obtained with an estimator (see Eq.~\ref{e11_0}$\sim$\ref{e12}).
\begin{equation}
\begin{aligned}
& \hat{\alpha} = \alpha - \gamma_2 \frac{1}{N^2}\sum_{i=1}^{N}\sum_{j=1}^{N}(\nabla_{\alpha}\lambda_{i,j}\nabla_{\lambda_{i,j}} E) \\
& \hat{\beta} = \beta - \gamma_2 \frac{1}{N^2}\sum_{i=1}^{N}\sum_{j=1}^{N}(\nabla_{\beta}\lambda_{i,j}\nabla_{\lambda_{i,j}} E) \\
\end{aligned}
\label{e10}
\end{equation}
According to~\cite{r27}, continuous distributions have a simulation property: samples can be drawn from them in both direct and indirect ways. For the general case $x\sim p_{(x;\theta)}$, we can draw a sample $\hat{x}$ indirectly by first sampling $\epsilon$ from a simple base distribution ${p_{(\epsilon)}}$, which is independent of the parameters $\theta$, and then transforming $\epsilon$ to $\hat{x}$ through a sampling path ${sp(\epsilon;\theta)}$. Therefore, the expectation $E_{p(x;\theta)}[f(x)]$ can be converted to $E_{p(\epsilon)}[f(sp(\epsilon;\theta))]$ with Eq.~\ref{e11_0}. Furthermore, based on the \textit{Law of the Unconscious Statistician} (LOTUS)~\cite{r27}, a path-wise estimator for the gradient ${\nabla_{\theta} E_{p(x;\theta)}[f(x)]}$ can be calculated with Eq.~\ref{e11}.
\begin{equation}
E_{p(x;\theta)}[f(x)] = E_{p(\epsilon)}[f(sp(\epsilon;\theta))],
\label{e11_0}
\end{equation}
\begin{equation}
\begin{split}
\nabla_{\theta} E_{p(x;\theta)}[f(x)] &= \nabla_{\theta} \int p(\epsilon)f(sp(\epsilon;\theta))\mathrm{d}\epsilon\\
&=\int p(\epsilon)\nabla_{x} f(x)|_{x=sp(\epsilon;\theta)}\nabla_{\theta}sp(\epsilon;\theta)\mathrm{d}\epsilon\\
&=E_{p(x;\theta)}[\nabla_{x} f(x)\nabla_{\theta} x]
\end{split}
\label{e11}
\end{equation}
However, in our case, the distribution $A\sim P$ is discontinuous because the elements of $A$ are binarized, so we approximately estimate $\nabla_\lambda E$ with an inexact but smooth reparameterization of $A\sim P$ (see Eq.~\ref{e12}). Specifically, we employ the identity mapping $A=sp(\epsilon;\lambda)=1 - Po(0;\lambda)$ of straight-through estimators (STE)~\cite{r29} and, accordingly, get ${|\nabla_\lambda A| = |\nabla_\lambda Po(0;\lambda)| \approx I}$.
\begin{equation}
\begin{aligned}
\nabla_\lambda E &= \nabla_\lambda E_{A\sim P}[\ell(f(\hat{X}), y)] \\
&= E_{A\sim P}[\nabla_\lambda A \nabla_A \ell(f(g(X, W, A)), y)]\\
&\approx \frac{1}{S}\sum_{s=1}^{S}\nabla_A \ell(f(g(X, W, A_s)), y)
\end{aligned}
\label{e12}
\end{equation}
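A rough sketch of one coordinate-descent step is given below, building on the functions from the previous sketch. It is illustrative, not the released implementation: \texttt{model.loss} is a hypothetical closure returning $\ell(f(g(X,W,A)),y)$, and the $\alpha,\beta$ update uses a single draw (the $S=1$ case of Eq.~\ref{e12}) with a straight-through surrogate whose backward pass treats $\nabla_\lambda A$ as the identity.
\begin{verbatim}
# Illustrative sketch of one coordinate-descent step (Eqs. e8-e12).
import torch

def step(x, y, model, alpha, beta, S=4, lr_w=0.01, lr_ab=0.001):
    # --- update W (Eqs. e8-e9): average the loss over S adjacency draws ---
    loss_w = 0.0
    for _ in range(S):
        a = poisson_graph(x, alpha.detach(), beta.detach())
        loss_w = loss_w + model.loss(x, a, y) / S
    loss_w.backward()                     # gradients w.r.t. network weights W
    with torch.no_grad():
        for w in model.parameters():
            if w.grad is not None:
                w -= lr_w * w.grad
                w.grad = None
    # --- update alpha, beta (Eqs. e10-e12): straight-through surrogate ---
    x_norm = torch.nn.functional.normalize(x.detach(), dim=1)
    cossim = x_norm @ x_norm.t()
    lam = (alpha * cossim + beta).clamp(min=0.0)
    a_hard = torch.bernoulli(1.0 - torch.exp(-lam))
    a_ste = lam + (a_hard - lam).detach()  # forward: a_hard; backward: dA/dlam = I
    g_alpha, g_beta = torch.autograd.grad(model.loss(x.detach(), a_ste, y),
                                          (alpha, beta))
    with torch.no_grad():
        alpha -= lr_ab * g_alpha
        beta -= lr_ab * g_beta
\end{verbatim}
Detaching $\alpha$ and $\beta$ during the $W$ update, and the features during the $\alpha,\beta$ update, mirrors the coordinate-wise treatment of Eq.~\ref{e8}--\ref{e10}.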
\section{Experiments}
In this section, we give the details of our implementation and conduct extensive experiments to demonstrate the effectiveness and robustness of our AGFN on datasets with uncertainties.

\subsection{Datasets and Implementation}
RAF-DB~\cite{r19} contains 12,271 training images and 3,068 test images collected from thousands of individuals. In our experiments, only images belonging to the 7 basic expressions (i.e., neutral, happiness, surprise, sadness, anger, disgust, and fear) are used. FERPlus~\cite{r20} is a large-scale dataset collected by the Google search engine. It consists of about 28,000 training images and 3,000 test images. Compared with RAF-DB, FERPlus includes an extra expression, i.e., contempt, resulting in 8 expression classes. In addition, since each sample in FERPlus is labeled by 10 annotators, we select the label with the highest score as its ground truth.

In our implementation, we embed the proposed AGFN into a naive FER baseline model, which employs ResNet-18 as its feature extractor and a fully-connected layer as its expression classifier. The projected emotional feature vectors have a dimension of 512, and the batch size is set to 256. Moreover, we set the learning rates $\gamma_1$ and $\gamma_2$ (used in Eq.~\ref{e8} and ~\ref{e10}) to 0.01 and 0.001, respectively, and to speed up the training procedure, we pre-train the baseline model for about 10 epochs before integrating the proposed AGFN module.

\subsection{Comparison with Existing FER Models}
In Table~\ref{Tab_1}, we compare the proposed AGFN-equipped network with existing FER works to demonstrate the effectiveness of our AGFN and the importance of dealing with data uncertainties in FER. Our network outperforms all of the listed methods on both the FERPlus and RAF-DB datasets; compared with our baseline model, recognition accuracies are elevated significantly from 85.37\% to 91.84\% and from 85.89\% to 91.11\% on the FERPlus and RAF-DB datasets, respectively, by integrating the proposed AGFN module. Recently, Transformers~\cite{r26} have largely fueled the performance of computer vision tasks, including FER. For example, FER-VT~\cite{r22} exploits grid-wise attention and a visual Transformer to learn long-range inductive biases between different facial regions, and TransFER~\cite{r33} learns rich relation-aware local representations with a Transformer. Although FER-VT and TransFER surpass the other existing methods significantly in Table~\ref{Tab_1}, our AGFN-equipped network achieves better performance than both, demonstrating the effectiveness of our network.

Moreover, we also compare against methods employing GCNs, e.g., GA-FER~\cite{r31} and FDRL~\cite{r32}, since GCNs have been widely used in FER. Here, GA-FER~\cite{r31} builds graphs upon the landmarks of single face images, while FDRL~\cite{r32} constructs graphs with latent features obtained by decomposing the basic features of face images. In addition, GA-FER~\cite{r31} exploits a GCN equipped with a complicated attention mechanism, whereas FDRL~\cite{r32} and our network use the naive GCN. Our network achieves better performance than both GA-FER~\cite{r31} and FDRL~\cite{r32}. We also compare our network with a model named Baseline-GCN, where a GCN is integrated into the baseline model and samples with similarities greater than 0.5 are directly connected to build the topological graphs. Our network obtains much higher accuracies than Baseline-GCN, indicating the superiority of our Poisson graph generator over the widely used threshold-based staircase function.

On the other hand, SCN~\cite{r01} also improves FER performance by tackling the data uncertainty problem. It relabels mislabeled data by weighting and ranking the samples in a mini-batch with a self-attention mechanism. In contrast, our AGFN protects FER models from data uncertainties by alleviating the excursive semantic and feature covariate shifting problem with the associative relations of expressions.
Our network surpasses SCN~\cite{r01} by 2.49\% and 2.91\% on FERPlus and RAF-DB, respectively, indicating that AGFN has clear advantages over SCN~\cite{r01} when dealing with the data uncertainty problem.

\begin{table}[h]
\caption{\textbf{Comparison with existing works on the RAF-DB and FERPlus datasets. Recognition accuracy is the metric.}}
\label{Tab_1}
\centering
\begin{tabular}{lll}
\toprule
Method & FERPlus & RAF-DB \\
\midrule
SeNet~\cite{r21} & 88.8 & -\\
RAN~\cite{r10} & 88.55 & 86.90 \\
DLP-CNN~\cite{r19} & -& 84.22\\
SCN~\cite{r01} & 89.35& 88.14\\
Baseline & 85.37 & 85.89\\
\hline
FER-VT~\cite{r22} & 90.04 & 88.26 \\
TransFER~\cite{r33} & 90.83 & 90.91\\
\hline
GA-FER~\cite{r31} & - & 87.52 \\
FDRL~\cite{r32} & - & 89.47 \\
Baseline-GCN & 88.8 & 88.95 \\
Our network & \textbf{91.84}& \textbf{91.11}\\
\bottomrule
\end{tabular}
\end{table}

\subsection{Performance of AGFN on Datasets with Serious Uncertainties}
With the growth in scale of training samples gathered from the internet, the problem of data uncertainty is becoming more and more severe. To further explore the effectiveness of AGFN on datasets with serious uncertainties, we conduct extra experiments on our synthetic datasets. Specifically, we randomly select 10\% or 20\% of the samples from the FERPlus and RAF-DB datasets and assign wrong labels to them. The comparison results are shown in Table~\ref{Tab_2}, where the proposed network is compared with SCN~\cite{r01} and two other state-of-the-art noise-tolerant methods, i.e., CurriculumNet~\cite{r23} and MetaCleaner~\cite{r24}. CurriculumNet~\cite{r23} handles massive amounts of noisy labels and data imbalance on large-scale web images by leveraging curriculum learning, which measures and ranks the complexity of data in an unsupervised manner, while MetaCleaner~\cite{r24} learns to hallucinate a clean representation for an object category according to a small noisy subset from the same category.

From Table~\ref{Tab_2}, the recognition accuracy of our network is clearly higher than that of the other three methods. In particular, compared with SCN~\cite{r01}, our network achieves accuracy improvements of 2.75\% and 4.84\% on the FERPlus and RAF-DB datasets with 10\% mislabeled samples, and the figures are 3.38\% and 4.52\% on the datasets with 20\% mislabeled samples. Therefore, our AGFN-equipped network exhibits better effectiveness and robustness than the other works when the data become noisier.

\begin{table}[h]
\caption{\textbf{Comparison results on synthetic datasets with noise ratios of 10\% and 20\%.}}
\label{Tab_2}
\centering
\begin{tabular}{c|cc|cc}
\hline
\multirow {2}{*}{Method} & \multicolumn{2}{c|}{FERPlus} & \multicolumn{2}{c}{RAF-DB} \\
\cline{2-5}
 & 10\% & 20\% & 10\% & 20\% \\
\hline
CurriculumNet & - & - & 68.50 & 61.23 \\
MetaCleaner & - & - & 68.45 & 61.35 \\
SCN & 84.28 & 83.17 & 82.18 & 80.10 \\
Our network & \textbf{87.03} & \textbf{86.55} & \textbf{87.02} & \textbf{84.62} \\
\hline
\end{tabular}
\end{table}

\subsection{Performance on Datasets with Occlusion and Pose Variation}
In practice, occlusion and pose variation frequently occur and result in data uncertainties. Therefore, following RAN~\cite{r10}, we also conduct additional experiments on Occlusion-FERPlus, Pose-FERPlus, Occlusion-RAF-DB, and Pose-RAF-DB, which were generated from FERPlus and RAF-DB by Wang et al.~\cite{r10}, to evaluate the performance of our network. The experimental results are listed in Table~\ref{Tab_4} and Table~\ref{Tab_5}.
Here, RAN~\cite{r10} adaptively captures the importance of facial regions for occlusion- and pose-variant FER, and CVT~\cite{r25} translates facial images into sequences of visual words and performs expression recognition from a global perspective with convolutional visual Transformers. As shown in Table~\ref{Tab_4} and Table~\ref{Tab_5}, our network outperforms all of the listed methods on the datasets with occlusion and achieves comparable performance with the latest Transformer-based works, i.e., CVT~\cite{r25} and FER-VT~\cite{r22}, on the datasets with pose variation.

\begin{table}[h]
\caption{\textbf{Performance on datasets with occlusion.}}
\label{Tab_4}
\centering
\begin{tabular}{ccc}
\toprule
Method& Occlusion-FERPlus& Occlusion-RAF-DB \\
\midrule
RAN~\cite{r10} & 83.63& 82.72\\
CVT~\cite{r25} & 84.79& 83.95\\
FER-VT~\cite{r22} & 85.24& 84.32\\
Our network & \textbf{85.95}& \textbf{86.53}\\
\bottomrule
\end{tabular}
\end{table}

\begin{table}[h]
\caption{\textbf{Performance on datasets with pose variation.}}
\label{Tab_5}
\centering
\begin{tabular}{ccc}
\toprule
Method& Pose-FERPlus & Pose-RAF-DB\\
\midrule
RAN~\cite{r10} & 82.23& 85.20\\
CVT~\cite{r25} & 88.29& \textbf{88.35}\\
FER-VT~\cite{r22} & \textbf{88.56}& 86.08\\
Our network & 84.87& \textbf{88.35}\\
\bottomrule
\end{tabular}
\end{table}

\subsection{Parameter Sensitivity Analysis}
In this part, we analyze the parameter sensitivity of our AGFN-equipped network in terms of batch size and backbone depth. First, since our Poisson graph generator constructs topological graphs for the samples within mini-batches, the setting of the batch size is important. Therefore, as shown in Figure~\ref{fig:batchsize-backbone}, we evaluate the effect of different batch size settings on the performance of the proposed network. As we can see, when the batch size is set to a value greater than 16, the performance barely changes, and for batch sizes smaller than 16, the performance drop is also within a reasonable range, so the proposed network exhibits strong robustness.

\begin{figure}[h]
\begin{center}
\includegraphics[width=9cm]{batchsize-backbone.pdf}
\end{center}
\caption{Performance of our network with different batch sizes (left) and comparison of networks with AGFN and deeper backbones (right) on the RAF-DB dataset.}
\label{fig:batchsize-backbone}
\end{figure}

\begin{table}[h]
\caption{\textbf{Comparison of networks with AGFN and different backbones on the RAF-DB dataset.}}
\label{Tab_6}
\centering
\begin{tabular}{cc|cc}
\toprule
Method & Acc. & Method & Acc.\\
\midrule
ResNet18 & 85.89 & ResNet18+AGFN & 91.11\\
ResNet34 & 85.95 & ResNet34+AGFN & 90.38\\
ResNet152 & 86.02 & ResNet152+AGFN & 87.22\\
\bottomrule
\end{tabular}
\end{table}

\begin{figure*}[tp]
\begin{center}
\includegraphics[width=8cm]{mnist.pdf}
\end{center}
\caption{Comparison results on MNIST.}
\label{fig:mnist}
\end{figure*}

\begin{figure*}[h]
\begin{center}
\includegraphics[width=12cm]{vis.pdf}
\end{center}
\caption{Visualization results of samples with ambiguity.}
\label{fig:vis}
\end{figure*}

On the other hand, our AGFN module introduces additional parameters to the baseline network. It is common sense that increasing the scale of a deep network is an effective way to enhance its capability.
Therefore, to prove that the performance improvement of our network comes from the strategy of utilizing the associative relations of expressions rather than from increased model complexity, we conduct extra experiments in Figure~\ref{fig:batchsize-backbone} and Table~\ref{Tab_6}, where the performances of networks with the proposed AGFN and different backbones are compared. No matter which backbone is used, the performance is always clearly improved by integrating the proposed AGFN; in particular, when ResNet18 is used, the accuracy is elevated significantly from 85.89\% to 91.11\%. Moreover, from the comparison of the networks with ResNet18, ResNet34, ResNet152, and ResNet18+AGFN, we can conclude that integrating AGFN is more effective than simply increasing the network's depth. Besides, from the first column of Table~\ref{Tab_6}, the accuracy is barely improved when increasing the network's depth from 18 to 152 layers, implying that for data with serious uncertainties, expanding the scale of the network does not work well. Additionally, we attribute the performance degradation of ResNet152+AGFN relative to ResNet18+AGFN to over-fitting, which is also a reason why the performance is barely improved when changing ResNet18 to ResNet152 in the baseline model.

\subsection{Generalizability of the Proposed AGFN}
To evaluate the generalizability of the proposed AGFN in dealing with data uncertainties, we conduct experiments on the well-known MNIST dataset, as shown in Figure~\ref{fig:mnist}. Here, we randomly select 10,000 samples from the training set of MNIST and assign wrong labels to 10\% of them. These samples form our training set, and the original test set of MNIST is kept as our test set. From the distributions of samples in Figure~\ref{fig:mnist}, we can see that our AGFN-equipped network is able to effectively decrease the intra-class variance and, meanwhile, increase the inter-class variance, resulting in a better recognition accuracy of 81\%, which is 15\% higher than that of our baseline model.

On the other hand, Center Loss~\cite{r27} is widely used in various computer vision tasks. It also utilizes the relations of samples to enhance the discriminative power of the extracted features. However, different from our AGFN, Center Loss is a supervision signal that builds connections among samples according to their labels, while our AGFN constructs topological graphs in an unsupervised way by utilizing the samples' similarities. Moreover, Center Loss simultaneously learns a center for the deep features of each class and penalizes the distances between the deep features and their corresponding class centers, whereas our AGFN conveys semantic information among samples with a GCN to normalize the distribution of features. Therefore, mislabeled data would affect the learning of the class centers in Center Loss, resulting in performance degradation, while our AGFN is able to protect FER models from mislabeled data. Comparison results of their effects on the feature distributions are presented in Figure~\ref{fig:mnist}.

\subsection{Visualization Results}
To intuitively understand the effectiveness of our AGFN-equipped network, we present visualization results for samples with ambiguity in Figure~\ref{fig:vis}. For the samples in the first three columns, the baseline model usually assigns similar scores to its top-2 predictions and fails to generate correct predictions. In contrast, our network not only generates correct predictions but also predicts the top-2 scores with a relatively larger gap.
Moreover, for the fourth sample in the first row, the baseline model fails to distinguish the `Fear' expression from `Surprise', `Sad', and `Anger'. It assigns the ground-truth label `Fear' a score of 0.19, which is the same as that of `Sad' and `Anger' and lower than that of `Surprise'. In contrast, our network predicts the expression as `Fear' with a score of 0.25, which is clearly higher than that of `Sad' and `Anger'.

\section{Conclusion}
This paper proposes to utilize the associative relations of expressions to tackle the excursive semantic and feature covariate shifting problem caused by data uncertainties in FER. It presents an effective feature normalization method named AGFN, which exploits a Poisson graph generator to dynamically and adaptively construct topological graphs for the samples in each mini-batch and employs a GCN to convey semantic information among samples. Additionally, to jointly optimize the parameters involved in the neural networks and the sampling process, a coordinate descent strategy is designed. Extensive experiments demonstrate the effectiveness of the proposed AGFN and the importance of addressing the data uncertainty problem. The proposed network not only outperforms existing works on the benchmark datasets FERPlus and RAF-DB but also surpasses state-of-the-art works by 3.38\% and 4.52\% when the percentage of mislabeled data increases significantly (i.e., to 20\%).
\section{Introduction}
Various recent surveys in leading economic journals suggest that weak instruments remain an important concern for empirical practice. For instance, I.\cite{Andrews-Stock-Sun(2019)} survey 230 instrumental variable (IV) regressions from 17 papers published in the American Economic Review (AER). They find that many of the first-stage F-statistics (and non-homoskedastic generalizations) are in a range that raises such concerns, and virtually all of these papers report at least one first-stage F with a value smaller than 10. Similarly, in \citeauthor{lee2021}'s (\citeyear{lee2021}) survey of 123 AER articles involving IV regressions, 105 out of 847 specifications have first-stage Fs smaller than 10. Moreover, many IV applications involve a large number of instruments. For example, in their seminal paper, \cite{Angrist-Krueger(1991)} study the effect of schooling on wage by interacting three base instruments (dummies for the quarter of birth) with state and year of birth, resulting in 180 instruments. \cite{Hansen-Hausman-Newey(2008)} show that using the 180 instruments gives tighter confidence intervals than using the base instruments, even after adjusting for the effect of many instruments. In addition, as pointed out by \cite{MS22}, in empirical papers that employ the ``judge design'' (e.g., see \cite{maestas2013}, \cite{sampat2019}, and \cite{dobbie2018}), the number of instruments (the number of judges) is typically proportional to the sample size, and the famous Fama-MacBeth two-pass regression in empirical asset pricing (e.g., see \cite{fama1973}, \cite{shanken1992}, and \cite{anatolyev2022}) is equivalent to IV estimation with the number of instruments proportional to the number of assets. Furthermore, as pointed out by \cite{Goldsmith(2020)}, the shift-share or Bartik instrument (e.g., see \cite{bartik1991} and \cite{blanchard1992}), which has been widely applied in many fields such as labor, public, development, macroeconomics, international trade, and finance, can be considered a particular way of combining many instruments. For example, in the canonical setting of estimating the labor supply elasticity, the corresponding number of instruments is equal to the number of industries, which is also typically proportional to the sample size.

In this paper, we propose a jackknife conditional linear combination (CLC) test, which is robust to weak identification, many instruments, and heteroskedasticity. The proposed test also achieves efficiency under strong identification against local alternatives. The starting point of our analysis is the observation that, under strong identification, an orthogonalized jackknife Lagrangian multiplier (LM) test is the uniformly most powerful test against local alternatives in the limit experiment among the class of tests that are invariant to sign changes and constructed based on the jackknife LM and Anderson-Rubin (AR) tests only. However, the orthogonalized LM test may not have good power under weak identification or against fixed alternatives. We therefore consider a linear combination of the jackknife AR and orthogonalized LM tests. Specifically, we follow I.\cite{Andrews(2016)} and determine the linear combination weight by minimizing the maximum power loss, which is further calibrated based on the limit experiment and a sufficient statistic for the identification strength under many instruments.
We show that such a jackknife CLC test is adaptive to the identification strength in the sense that (1) it achieves exact asymptotic size under both weak and strong identification, (2) it is asymptotically and conditionally admissible under weak identification, and (3) it converges to the uniformly most powerful test mentioned above under strong identification against local alternatives.

\vspace{0.2in}
\textbf{Relation to the literature.} The contributions in the present paper relate to two strands of literature. First, our paper is related to the literature on many instruments; e.g., see \cite{Kunitomo1980}, \cite{morimune1983}, \cite{Bekker(1994)}, \cite{donald2001}, \cite{chamberlain2004}, \cite{Chao-Swanson(2005)}, \cite{stock2005}, \cite{han2006}, D.\cite{Andrews-Stock(2007)}, \cite{Hansen-Hausman-Newey(2008)}, \cite{Newey-Windmeijer(2009)}, \cite{anderson2010}, \cite{kuersteiner2010}, \cite{anatolyev2011}, \cite{belloni2011}, \cite{okui2011}, \cite{belloni2012}, \cite{carrasco2012}, \cite{Chao(2012)}, \cite{Haus2012}, \cite{hansen2014}, \cite{carrasco2015}, \cite{Wang_Kaffo_2016}, \cite{kolesar2018}, \cite{Matsushita2020}, \cite{solvsten2020}, \cite{crudu2021}, and \cite{MS22}, among others. For implementing inference in the context of many instruments and heteroskedasticity, \cite{Chao(2012)} and \cite{Haus2012} provide standard errors for Wald-type inference that are based on the jackknife instrumental variable estimator (JIVE) and a jackknifed version of the limited information maximum likelihood estimator and the \cite{Fuller(1977)} estimator. These estimators are more robust to many instruments than the commonly used two-stage least squares (TSLS) estimator, as they are able to correct the bias due to the high dimension of the IVs. In simulations derived from \citeauthor{Angrist-Krueger(1991)}'s (\citeyear{Angrist-Krueger(1991)}) data, which are representative of empirical labor studies with a many-instrument concern, \citet[Section IV]{Angrist-Frandsen2022} show that such bias-corrected estimators outperform the TSLS based on the instruments selected by the least absolute shrinkage and selection operator (LASSO) introduced in \cite{belloni2012} or the random-forest-fitted first stage introduced in \cite{athey2019}. However, the Wald inference methods are not valid under weak identification, a situation where the ratio of the concentration parameter, a measure of the overall instrument strength, over the square root of the number of instruments stays bounded as the sample size diverges to infinity. In this case, even the aforementioned bias-corrected estimators are inconsistent, and there does not exist a consistent test for the structural parameter of interest (see the discussion in Section 3 of \cite{MS22}).

For weak-identification-robust inference under many instruments, D.\cite{Andrews-Stock(2007)} considered the AR test, the score test introduced in \cite{Kleibergen(2002)}, and the conditional likelihood ratio test introduced in \cite{Moreira(2003)}. Their IV model is homoskedastic and requires stringent conditions on the number of instruments relative to the sample size ($K^3/n \rightarrow 0$, where $K$ and $n$ denote the number of instruments and the sample size, respectively). \cite{anatolyev2011} proposed a modified AR test, which allows the number of instruments to be proportional to the sample size but also requires homoskedastic errors.
Recently, \cite{crudu2021} and \cite{MS22} proposed jackknifed versions of the AR test in a model with many instruments and heteroskedasticity. Both tests are robust to weak identification, while \citeauthor{MS22}'s (\citeyear{MS22}) jackknife AR test has better power properties because it uses a cross-fit variance estimator. However, the jackknife AR tests may be inefficient under strong identification. \cite{MS22} also proposed a new pre-test for weak identification under many instruments and applied it to form a two-stage testing procedure together with a Wald test based on the JIVE introduced in \cite{Angrist(1999)}. The JIVE-Wald test is more efficient than the jackknife AR test under strong identification. An empirical researcher can therefore employ the jackknife AR test if the pre-test suggests weak identification, or the JIVE-Wald test if the pre-test suggests strong identification. Furthermore, \cite{Matsushita2020} proposed a jackknife LM test, which is also robust to weak identification, many instruments, and heteroskedastic errors. Under strong identification, our jackknife CLC test is proved to be more efficient than the jackknife AR test, the jackknife LM test, and the two-step test.

Second, our paper is related to the literature on weak identification under the framework of a fixed number of instruments or moment conditions, in which various robust inference methods are available for non-homoskedastic errors, among them \cite{Stock-Wright(2000)}, \cite{Kleibergen(2005)}, D.\cite{Andrews-Cheng(2012)}, I.\cite{Andrews(2016)}, I.\cite{Andrews-Mikusheva(2016)}, I.\cite{Andrews(2018)}, \cite{Moreira-Moreira(2019)}, D.\cite{Andrews-Guggenberger(2019)}, and \cite{lee2021}. In particular, our jackknife CLC test extends I.\cite{Andrews(2016)} to the framework with many weak instruments. I.\cite{Andrews(2016)} considers the convex combination of the generalized AR statistic (S statistic) introduced by \cite{Stock-Wright(2000)} and the score statistic (K statistic) introduced by \cite{Kleibergen(2005)}. We find that under many weak instruments, the orthogonalized jackknife LM statistic plays a role similar to that of the K statistic. However, the trade-off between the jackknife AR and orthogonalized LM statistics turns out to be rather different from that between the S and K statistics. As pointed out by I.\cite{Andrews(2016)}, in the case with a fixed number of weak instruments (or moment conditions), the K statistic picks out a particular (random) direction corresponding to the span of a conditioning statistic that measures the identification strength, and restricts attention to deviations from the null along this specific direction. In contrast to the K statistic, the S statistic treats all deviations from the null equally. Therefore, the trade-off between the K and S statistics mainly stems from the difference in attention to deviation directions. We find that with many weak instruments, the jackknife AR and orthogonalized LM tests do not exhibit such a difference in deviation directions. Instead, their trade-off is mostly between local and non-local alternatives. In particular, although the orthogonalized LM test is efficient under strong identification and local alternatives, it may have no power against certain fixed alternatives. Such a power issue may occur even under strong identification, which is very different from the K statistic in the context of a small number of instruments. Still, we are able to construct tests with good power properties by using the idea of the CLC test.
\vspace{0.2in}
\textbf{Notation.} We denote $\chi^2_k(\lambda)$ as the chi-square distribution with noncentrality parameter $\lambda$ and degrees of freedom $k$, and $[n] = \{1,2,\cdots, n\}.$ We further simplify $\chi^2_k(0)$ as $\chi^2_k$. We denote $z_{\alpha}$ as the $(1-\alpha)$ quantile of a standard normal random variable and $\mathbb{C}_{\alpha}(a)$ as the $(1-\alpha)$ quantile of the random variable $a\chi_1^2 + (1-a)\chi_1^2$, where the two $\chi^2_1$'s are independent. Furthermore, we let $\mathbb{C}_{\alpha} = \mathbb{C}_{\alpha}(0) = \mathbb{C}_{\alpha}(1)$ and $\mathbb{C}_{\alpha,\max} = \sup_{a \in [0,1]} \mathbb{C}_{\alpha}(a)$. $\mathbb{E}^*$ and $\mathbb{P}^*$ denote the expectation and probability taken conditionally on the data, respectively. For example, in $\mathbb{E}^*1\{\chi_1^2(\hat{\lambda}) \geq \mathbb{C}_{\alpha}\}$, in which $\hat{\lambda}$ is some estimator of the noncentrality parameter based on the data, the expectation is taken over the chi-square random variable by treating $\hat{\lambda}$ as deterministic.

\section{Setup and Limit Problems}
We consider the linear IV regression with a scalar outcome $Y_i$, a scalar endogenous variable $X_i$, and a $K \times 1$ vector of instruments $Z_i$ such that
\begin{align*}
Y_i = X_i \beta + e_i, \quad X_i = \Pi_i + V_i, \quad \forall i \in [n],
\end{align*}
where $\Pi_i = \mathbb{E}(X_i|Z_i)$. We focus on the model with a single endogenous variable, which is prevalent in empirical research. We let $K$ diverge with the sample size $n$, allowing the case that $K$ is of the same order of magnitude as $n$. For the rest of the paper, we follow the many-instrument literature and treat $\{Z_i\}_{i\in [n]}$ as fixed, so that $\Pi_i$ can also be written as $\mathbb{E}X_i$, which is non-random; $\mathbb{E}V_i = 0$ by construction, and $\mathbb{E}e_i = 0$ by IV exogeneity. We allow $(e_i, V_i)$ to be independent but not identically distributed and heteroskedastic across $i$. Also, following the literature, we assume without loss of generality that there are no controls included in our model, as they can be partialled out.

We are interested in testing $\beta = \beta_0$. Let $e_i(\beta_0) = Y_i - X_i \beta_0 = e_i + X_i \Delta$, where $\Delta = \beta- \beta_0$. We collect the transpose of $Z_i$ in each row of $Z$, an $n \times K$ matrix of instruments, and denote $P = Z (Z^\top Z)^{-1}Z^\top$. In addition, let $Q_{ab} = \frac{\sum_{i \in [n]}\sum_{j \neq i}a_i P_{ij}b_j}{\sqrt{K}}$ and $\mathcal{C} =Q_{\Pi\Pi}$. Then, \cite{MS22} point out that the (rescaled) $\mathcal{C}$ is the concentration parameter that measures the strength of identification in the heteroskedastic IV model with many instruments. Specifically, the parameter $\beta$ is weakly identified if $\mathcal{C}$ is bounded and strongly identified if $|\mathcal{C}|\rightarrow \infty$. We consider drifting-sequence asymptotics so that all quantities are indexed by the sample size $n$; we omit such dependence for notational simplicity. Throughout the paper, we maintain the following assumption, which is just \citet[Assumption 1]{MS22}.

\begin{ass}
The observations $(Y_i,X_i,Z_i)_{i \in [n]}$ are i.i.d. Suppose $P$ is an $n \times n$ projection matrix of rank $K$, $K\rightarrow \infty$ as $n \rightarrow \infty$, and there exists a constant $\delta$ such that $P_{ii}\leq \delta <1$.
\label{ass:K}
\end{ass}
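For concreteness, the jackknife quadratic forms $Q_{ab}$ are straightforward to compute from the data. The following NumPy sketch (illustrative only, with hypothetical function names) evaluates $Q_{ab}$ and the feasible statistics $Q_{e(\beta_0)e(\beta_0)}$, $Q_{Xe(\beta_0)}$, and $Q_{XX}$ used below; forming $P$ explicitly is wasteful for large $n$, so the sketch favors clarity over efficiency.
\begin{verbatim}
# Illustrative sketch: jackknife quadratic forms Q_ab.
import numpy as np

def Q(a, b, P):
    # Q_ab = sum_{i != j} a_i P_ij b_j / sqrt(K), with K = rank(P) = tr(P)
    K = np.trace(P)
    P0 = P - np.diag(np.diag(P))           # delete own-observation terms
    return a @ P0 @ b / np.sqrt(K)

def q_statistics(Y, X, Z, beta0):
    P = Z @ np.linalg.solve(Z.T @ Z, Z.T)  # projection matrix Z (Z'Z)^{-1} Z'
    e0 = Y - X * beta0                     # null-restricted residuals e_i(beta_0)
    return Q(e0, e0, P), Q(X, e0, P), Q(X, X, P)
\end{verbatim}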
Throughout the paper, we consider three scenarios: (1) weak identification and fixed alternatives, in which both $\mathcal{C}$ and $\Delta$ are fixed and bounded; (2) strong identification and local alternatives, in which $\mathcal{C} = \widetilde{\mathcal{C}}/d_n $, $\Delta = \widetilde{\Delta}d_n $, $\widetilde{\mathcal{C}}$ and $\widetilde{\Delta}$ are bounded constants independent of $n$, and $d_n \rightarrow 0$ is a deterministic sequence; and (3) strong identification and fixed alternatives, in which $\mathcal{C} = \widetilde{\mathcal{C}}/d_n $ and $\Delta$ is fixed and bounded.

All the weak-identification-robust tests proposed in the literature (namely, the jackknife AR tests in \cite{crudu2021} and \cite{MS22}, the two-step test in \cite{MS22}, and the jackknife LM test in \cite{Matsushita2020}) depend on a subset of the following three quantities: $(Q_{e(\beta_0)e(\beta_0)},Q_{Xe(\beta_0)},Q_{XX})$. Following the results in \cite{Chao(2012)} and \cite{MS22}, we can show that under Assumption \ref{ass:K} and either weak or strong identification, the following weak convergence holds:
\begin{align}
\begin{pmatrix}
Q_{ee} \\
Q_{Xe} \\
Q_{XX} - \mathcal{C}
\end{pmatrix}
\rightsquigarrow \mathcal{N}\left(\begin{pmatrix}
0 \\
0 \\
0
\end{pmatrix},\begin{pmatrix}
\Phi_1 & \Phi_{12} & \Phi_{13} \\
\Phi_{12} & \Psi & \tau \\
\Phi_{13} & \tau & \Upsilon
\end{pmatrix}\right),
\label{eq:limittrue}
\end{align}
where $\sigma_i^2 = \mathbb{E}e_i^2$, $\eta_i^2 = \mathbb{E}V_i^2$, $\gamma_i = \mathbb{E}e_iV_i$, $\omega_i = \sum_{j \neq i}P_{ij}\Pi_j$,
\begin{align*}
\Phi_1 & = \lim_{n \rightarrow \infty} \frac{2}{K}\sum_{i \in [n]}\sum_{j \neq i}P_{ij}^2 \sigma_i^2\sigma_j^2,\\
\Phi_{12}& = \lim_{n \rightarrow \infty}\frac{1}{K}\sum_{i \in [n]}\sum_{j \neq i}P_{ij}^2 (\gamma_j \sigma_i^2 + \gamma_i\sigma_j^2), \\
\Phi_{13} & = \lim_{n \rightarrow \infty}\frac{2}{K}\sum_{i \in [n]}\sum_{j \neq i}P_{ij}^2 \gamma_i\gamma_j, \\
\Psi & = \lim_{n \rightarrow \infty}\left[\frac{1}{K}\sum_{i \in [n]}\sum_{j \neq i}P_{ij}^2 (\eta_i^2 \sigma_j^2 + \gamma_i \gamma_j) + \frac{1}{K}\sum_{i \in [n]} \omega_i^2 \sigma_i^2\right], \\
\tau & = \lim_{n \rightarrow \infty} \left[ \frac{2}{K}\sum_{i \in [n]}\sum_{j \neq i}P_{ij}^2 \eta_i^2 \gamma_j + \frac{2}{K}\sum_{i \in [n]} \omega_i^2 \gamma_i \right], \quad \text{and}\\
\Upsilon & = \lim_{n \rightarrow \infty} \left[ \frac{2}{K}\sum_{i \in [n]}\sum_{j \neq i}P_{ij}^2\eta_i^2 \eta_j^2 + \frac{4}{K}\sum_{i \in [n]} \omega_i^2 \eta_i^2 \right].
\end{align*}
This implies that, under both strong and weak identification,
\begin{align}
\begin{pmatrix}
Q_{e(\beta_0)e(\beta_0)} - \Delta^2 \mathcal{C} \\
Q_{Xe(\beta_0)} - \Delta \mathcal{C}\\
Q_{XX} - \mathcal{C}
\end{pmatrix} \stackrel{d}{=} \mathcal{N}\left(\begin{pmatrix}
0 \\
0 \\
0
\end{pmatrix},\begin{pmatrix}
\Phi_1(\beta_0) & \Phi_{12}(\beta_0) & \Phi_{13}(\beta_0) \\
\Phi_{12}(\beta_0) & \Psi(\beta_0) & \tau(\beta_0) \\
\Phi_{13}(\beta_0) & \tau(\beta_0) & \Upsilon
\end{pmatrix}\right) + o_p(1),
\label{eq:limit}
\end{align}
where
\begin{align}
\Phi_1(\beta_0) & = \Delta^4 \Upsilon + 4\Delta^3 \tau + \Delta^2 (4 \Psi + 2 \Phi_{13}) + 4\Delta \Phi_{12} + \Phi_1, \notag \\
\Phi_{12}(\beta_0) & = \Delta^3 \Upsilon + 3\Delta^2 \tau + \Delta(2 \Psi + \Phi_{13}) + \Phi_{12}, \notag \\
\Phi_{13}(\beta_0) & = \Delta^2 \Upsilon + 2\Delta \tau + \Phi_{13}, \notag \\
\Psi(\beta_0) & = \Delta^2 \Upsilon + 2\Delta \tau + \Psi, \notag \\
\tau(\beta_0) & = \Delta \Upsilon + \tau.
\label{eq:phipsi}
\end{align}
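Although these variance components involve the unobserved primitives $(\sigma_i^2, \eta_i^2, \gamma_i, \Pi_i)$ and are therefore not feasible estimators (feasible versions are discussed below around Assumption \ref{ass:variance_est}), they are easy to evaluate in simulations where the primitives are known. The following sketch (hypothetical names) transcribes the finite-$n$ sums above; the $\beta_0$-versions in \eqref{eq:phipsi} then follow from the displayed polynomials in $\Delta$.
\begin{verbatim}
# Illustrative sketch: the (finite-n) variance components of (eq:limittrue),
# evaluated from the primitives sigma2_i, gamma_i, eta2_i, P, Pi.
import numpy as np

def variance_components(sigma2, gamma, eta2, P, Pi):
    K = np.trace(P)
    P2 = P**2 - np.diag(np.diag(P)**2)     # off-diagonal P_ij^2 only
    omega = P @ Pi - np.diag(P) * Pi       # omega_i = sum_{j != i} P_ij Pi_j
    Phi1  = 2.0 / K * sigma2 @ P2 @ sigma2
    Phi12 = 1.0 / K * (gamma @ P2 @ sigma2 + sigma2 @ P2 @ gamma)
    Phi13 = 2.0 / K * gamma @ P2 @ gamma
    Psi   = 1.0 / K * (eta2 @ P2 @ sigma2 + gamma @ P2 @ gamma
                       + np.sum(omega**2 * sigma2))
    tau   = 2.0 / K * (eta2 @ P2 @ gamma + np.sum(omega**2 * gamma))
    Ups   = 2.0 / K * (eta2 @ P2 @ eta2 + 2.0 * np.sum(omega**2 * eta2))
    return Phi1, Phi12, Phi13, Psi, tau, Ups
\end{verbatim}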
Note that under strong identification, we have $Q_{XX}d_n \stackrel{p}{\longrightarrow} \widetilde{\mathcal{C}}$, which has a degenerate distribution. Also, under the local alternative, we have $\Delta = o(1)$, so that
\begin{align*}
(\Phi_1(\beta_0),\Phi_{12}(\beta_0), \Phi_{13}(\beta_0),\Psi(\beta_0),\tau(\beta_0)) \rightarrow (\Phi_1,\Phi_{12}, \Phi_{13},\Psi,\tau).
\end{align*}
To describe a feasible version of the tests, we assume we have consistent estimates of all the variance components.
\begin{ass}
Let $\rho(\beta_0) = \frac{\Phi_{12}(\beta_0)}{\sqrt{\Phi_1(\beta_0)\Psi(\beta_0)}}$, $\widehat{\gamma}(\beta_0) = (\widehat{\Phi}_1(\beta_0), \widehat{\Phi}_{12}(\beta_0), \widehat{\Phi}_{13}(\beta_0), \widehat{\Psi}(\beta_0), \widehat{\tau}(\beta_0), \widehat{\Upsilon}, \widehat{\rho}(\beta_0))$ be an estimator, and $\mathcal{B} \subset \Re$ be a compact parameter space. Then, we have $\inf_{\beta_0 \in \mathcal{B}}\Phi_1(\beta_0)>0$, $\inf_{\beta_0 \in \mathcal{B}}\Psi(\beta_0)>0$, $\Upsilon >0$, and for $\beta_0 \in \mathcal{B}$,
\begin{align*}
||\widehat{\gamma}(\beta_0) - \gamma(\beta_0)||_2 = o_p(1),
\end{align*}
where $\gamma(\beta_0) \equiv (\Phi_1(\beta_0), \Phi_{12}(\beta_0),\Phi_{13}(\beta_0),\Psi(\beta_0), \tau(\beta_0),\Upsilon,\rho(\beta_0))$.
\label{ass:variance_est}
\end{ass}
Several remarks on Assumption \ref{ass:variance_est} are in order. First, \cite{Chao(2012)} proposed a consistent estimator of $\Psi$ under strong identification and many instruments. It is possible to compute $\widehat{\gamma}(\beta_0)$ based on \citeauthor{Chao(2012)}'s (\citeyear{Chao(2012)}) argument with their JIVE-based residuals $\hat{e}_i$ from the structural equation replaced by $e_i(\beta_0)$. The consistency of such an estimator $\widehat{\Phi}_1(\beta_0)$ for $\Phi_1(\beta_0)$ has been established by \cite{crudu2021} under weak identification and $\beta_0 = \beta$. Similar arguments can be used to show the consistency of the rest of the elements in $\widehat{\gamma}(\beta_0)$ under both weak and strong identification. In addition, the consistency can be established under both local and fixed alternatives. We provide more details in Section \ref{sec:var1} in the Online Supplement. Second, motivated by \cite{KSS2020}, \cite{MS22} proposed cross-fit estimators $\widehat{\Phi}_1(\beta_0)$, $\widehat{\Psi}(\beta_0)$, and $\widehat{\Upsilon}$,\footnote{They show the consistency of $\widehat{\Psi}$ when the residual $\hat{e}_i$ from the structural equation is computed based on the JIVE estimator. We can construct $\widehat{\Psi}(\beta_0)$ by replacing $\hat{e}_i$ with $e_i(\beta_0)$. Then, the same argument as theirs, with $-Q_{Xe}/Q_{XX}$ replaced by $\Delta$, establishes that $\widehat{\Psi}(\beta_0) \stackrel{p}{\longrightarrow} \Psi(\beta_0)$.} which are consistent under both weak and strong identification and lead to better finite-sample power. Following their lead, one can write down the cross-fit estimators for the rest of the elements in $\gamma(\beta_0)$ and show that they are consistent. We provide more details in Section \ref{sec:var2} in the Online Supplement. Note that both \citeauthor{crudu2021}'s (\citeyear{crudu2021}) and \citeauthor{MS22}'s (\citeyear{MS22}) estimators are consistent under heteroskedasticity and allow for $K$ to be of the same order as $n$. Third, our jackknife CLC test proposed below controls size under both weak and strong identification as long as $\widehat{\gamma}(\beta_0)$ is consistent under the null.
Fourth, the power analysis in Lemmas \ref{lem:strongID} and \ref{lem:weakID} below, and subsequently in Theorems \ref{thm:strongid} and \ref{thm:weakid}, only requires the consistency of $\widehat{\gamma}(\beta_0)$ under strong identification with local alternatives and under weak identification with fixed alternatives, respectively. Fifth, Lemma \ref{lem:strongID2} holds as long as the limit of $\widehat{\gamma}(\beta_0)$ exists; the limit does not necessarily need to be the same as that defined in \eqref{eq:phipsi}. This implies that the power analysis for strong identification under fixed alternatives in Theorems \ref{thm:admissible}(iii) and \ref{thm:strong_fixed} only requires that $\widehat{\gamma}(\beta_0)$ be convergent. Nevertheless, by an abuse of notation, in this case, we still denote the limit of $\widehat{\gamma}(\beta_0)$ as $\gamma(\beta_0)$.

Under this framework, \cite{MS22} consider the jackknife Anderson-Rubin test
\begin{align}
1\{AR(\beta_0) \geq z_{\alpha}\}, \quad AR(\beta_0) = \frac{Q_{e(\beta_0)e(\beta_0)} }{\widehat{\Phi}_1^{1/2}(\beta_0)},
\label{eq:AR}
\end{align}
and \cite{Matsushita2020} consider the jackknife Lagrangian multiplier test
\begin{align}
1\{LM^2(\beta_0) \geq \mathbb{C}_{\alpha}\}, \quad LM(\beta_0) = \frac{Q_{Xe(\beta_0)} }{\widehat{\Psi}^{1/2}(\beta_0)}.
\label{eq:LM}
\end{align}
Both tests are robust to weak identification, many instruments, and heteroskedasticity. Lemma \ref{lem:strongID} below characterizes the joint limit distribution of $(AR(\beta_0),LM(\beta_0))^\top$ under strong identification.

\begin{lem}
Suppose \eqref{eq:limit} and Assumption \ref{ass:variance_est} hold and we are under strong identification with local alternatives, i.e., there exists a deterministic sequence $d_n \rightarrow 0$ such that $\mathcal{C} = \widetilde{\mathcal{C}}/d_n $ and $\Delta = \widetilde{\Delta}d_n $, where $\widetilde{\mathcal{C}}$ and $\widetilde{\Delta}$ are bounded constants independent of $n$. Then, we have
\begin{align*}
\begin{pmatrix}
AR(\beta_0) \\
LM(\beta_0)
\end{pmatrix} \rightsquigarrow \begin{pmatrix}
\mathcal{N}_1 \\
\mathcal{N}_2
\end{pmatrix} \stackrel{d}{=} \mathcal{N}\left(\begin{pmatrix}
0 \\
\frac{\widetilde{\Delta} \widetilde{\mathcal{C}}}{\Psi^{1/2}}
\end{pmatrix},\begin{pmatrix}
1 & \rho \\
\rho & 1
\end{pmatrix}\right),
\end{align*}
where $\rho = \Phi_{12}/\sqrt{\Phi_1\Psi}$.
\label{lem:strongID}
\end{lem}

Two remarks are in order. First, under strong identification, we consider local alternatives so that $ \beta-\beta_0 \rightarrow 0$. This is why $(\Psi(\beta_0),\Phi_1(\beta_0),\Phi_{12}(\beta_0))$ converge to $(\Psi,\Phi_1,\Phi_{12})$, which are just the counterparts of $(\Psi(\beta_0),\Phi_1(\beta_0),\Phi_{12}(\beta_0))$ when $\beta_0$ is replaced by $\beta$. Second, although $AR(\beta_0)$ has zero mean, it is correlated with $LM(\beta_0)$ in the current context of many instruments. It is therefore possible to use $AR(\beta_0)$ to reduce the variance of $LM(\beta_0)$ and obtain a test that is more powerful than the LM test.

\begin{lem}
Consider the limit experiment in which researchers observe
\begin{align*}
\begin{pmatrix}
\mathcal{N}_1 \\
\mathcal{N}_2
\end{pmatrix} \stackrel{d}{=} \mathcal{N}\left(\begin{pmatrix}
0 \\
\theta
\end{pmatrix},\begin{pmatrix}
1 & \rho \\
\rho & 1
\end{pmatrix}\right),
\end{align*}
know the value of $\rho$ and that $\mathbb{E}\mathcal{N}_1 = 0$, and want to test $\theta=0$ versus the two-sided alternative. In this case, the uniformly most powerful level-$\alpha$ test that is invariant to sign changes is $1\{\mathcal{N}_2^{*2} \geq \mathbb{C}_{\alpha}\}$, where
\begin{align*}
\mathcal{N}_2^* = (1-\rho^2)^{-1/2}(\mathcal{N}_2 - \rho \mathcal{N}_1)
\end{align*}
is the normalized residual from the projection of $\mathcal{N}_2$ on $\mathcal{N}_1$.
\label{lem:ump}
\end{lem}
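In feasible form, the residualization in Lemma \ref{lem:ump} is a one-line computation. The sketch below (hypothetical names, building on \texttt{q\_statistics} above) assembles $AR(\beta_0)$, $LM(\beta_0)$, and the orthogonalized statistic made precise right below; the variance and correlation estimates are taken as inputs, since their construction is discussed around Assumption \ref{ass:variance_est}.
\begin{verbatim}
# Illustrative sketch: jackknife AR, LM, and the orthogonalized LM*.
import numpy as np

def ar_lm_lmstar(Y, X, Z, beta0, Phi1_hat, Psi_hat, rho_hat):
    # Phi1_hat, Psi_hat, rho_hat: consistent variance/correlation
    # estimates (construction omitted here).
    Qee, QXe, _ = q_statistics(Y, X, Z, beta0)
    ar = Qee / np.sqrt(Phi1_hat)                 # cf. (eq:AR)
    lm = QXe / np.sqrt(Psi_hat)                  # cf. (eq:LM)
    lm_star = (lm - rho_hat * ar) / np.sqrt(1.0 - rho_hat**2)
    return ar, lm, lm_star
\end{verbatim}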
Let $LM^*(\beta_0) = (1-\widehat{\rho}(\beta_0)^2)^{-1/2} (LM(\beta_0)- \widehat{\rho}(\beta_0) AR(\beta_0))$. Then, Lemma \ref{lem:strongID} implies that, under strong identification and local alternatives,
\begin{align}
\begin{pmatrix}
AR(\beta_0) \\
LM^*(\beta_0)
\end{pmatrix} \rightsquigarrow \begin{pmatrix}
\mathcal{N}_1 \\
\mathcal{N}_2^*
\end{pmatrix} \stackrel{d}{=} \mathcal{N}\left(\begin{pmatrix}
0 \\
\frac{\widetilde{\Delta} \widetilde{\mathcal{C}}}{[(1-\rho^2)\Psi]^{1/2}}
\end{pmatrix},\begin{pmatrix}
1 & 0 \\
0 & 1
\end{pmatrix}\right).
\label{eq:lmstar_str}
\end{align}
Lemma \ref{lem:ump} with $\theta = \widetilde{\Delta} \widetilde{\mathcal{C}}\Psi^{-1/2}$ implies that, in this case, the test $1\{LM^{*2}(\beta_0) \geq \mathbb{C}_{\alpha}\}$ is asymptotically strictly more powerful than the jackknife AR and LM tests based on $AR(\beta_0)$ and $LM(\beta_0)$ against local alternatives, as long as $\rho \neq 0$.

Next, we compare the performance of $AR(\beta_0)$ and $LM^*(\beta_0)$ under strong identification and fixed alternatives.
\begin{lem}
Suppose Assumption \ref{ass:variance_est} holds, $( Q_{e(\beta_0)e(\beta_0)} - \Delta^2 \mathcal{C}, Q_{Xe(\beta_0)} - \Delta \mathcal{C},Q_{XX} - \mathcal{C})^\top = O_p(1)$, and we are under strong identification so that $d_n \mathcal{C} \rightarrow \widetilde{\mathcal{C}}$ for some $d_n \rightarrow 0$. Then, we have for any fixed $\Delta \neq 0$,
\begin{align*}
d_n^2 \begin{pmatrix}
AR^2(\beta_0) \\
LM^{*2}(\beta_0)
\end{pmatrix} \stackrel{p}{\longrightarrow} \begin{pmatrix}
\Phi_1^{-1}(\beta_0) \Delta^4 \widetilde{\mathcal{C}}^2 \\
(1-\rho^2(\beta_0))^{-1}(\Psi^{-1/2}(\beta_0) - \rho(\beta_0) \Phi_1^{-1/2}(\beta_0)\Delta)^2 \Delta^2 \widetilde{\mathcal{C}}^2
\end{pmatrix}.
\end{align*}
\label{lem:strongID2}
\end{lem}
Given $d_n \rightarrow 0$ and $\Phi_1^{-1}(\beta_0) \Delta^4 \widetilde{\mathcal{C}}^2>0$, $AR(\beta_0)$ has power $1$ against fixed alternatives asymptotically. On the other hand, $LM^*(\beta_0)$ may have no power if $\Delta = \Delta_*(\beta_0) \equiv \Phi_1^{1/2}(\beta_0)\Psi^{-1/2}(\beta_0)\rho^{-1}(\beta_0)$.

Next, we compare the performance of $AR(\beta_0)$ and $LM^*(\beta_0)$ under weak identification and fixed alternatives.
\begin{lem}
Suppose \eqref{eq:limit} and Assumption \ref{ass:variance_est} hold and we are under weak identification so that $\mathcal{C}$ is fixed. Then, we have for any fixed $\Delta \neq 0$,
\begin{align}
\begin{pmatrix}
AR(\beta_0) \\
LM^*(\beta_0)
\end{pmatrix} \rightsquigarrow \begin{pmatrix}
\mathcal{N}_1 \\
\mathcal{N}_2^*
\end{pmatrix} \stackrel{d}{=} \mathcal{N}\left(\begin{pmatrix}
m_1(\Delta) \\
m_2(\Delta)
\end{pmatrix},\begin{pmatrix}
1 & 0 \\
0 & 1
\end{pmatrix}\right),
\label{eq:lmstar_wk}
\end{align}
where $\rho(\beta_0) = \frac{\Phi_{12}(\beta_0)}{\sqrt{\Psi(\beta_0)\Phi_1(\beta_0) }}$ and
\begin{align*}
\begin{pmatrix}
m_1(\Delta) \\
m_2(\Delta)
\end{pmatrix} & = \begin{pmatrix}
\Phi_1^{-1/2}(\beta_0)\Delta^2\mathcal{C}\\
(1-\rho^2(\beta_0))^{-1/2}\Psi^{-1/2}(\beta_0)\Delta \mathcal{C}-\rho(\beta_0) (1-\rho^2(\beta_0))^{-1/2}\Phi_1^{-1/2}(\beta_0)\Delta^2\mathcal{C}
\end{pmatrix}.
\end{align*}
In particular, as $\Delta \rightarrow \infty$, we have
\begin{align*}
m_1(\Delta) & \rightarrow \frac{\mathcal{C}}{\Upsilon^{1/2}} \quad \text{and} \quad m_2(\Delta) \rightarrow \frac{\mathcal{C}}{\Upsilon^{1/2}} \frac{\rho_{23}}{(1-\rho_{23}^2)^{1/2}},
\end{align*}
where $\rho_{23} = \frac{\tau}{(\Psi \Upsilon)^{1/2}}$ is the correlation between $Q_{Xe}$ and $Q_{XX}$.\footnote{We suppress the dependence of $m_1(\Delta)$ and $m_2(\Delta)$ on $\gamma(\beta_0)$ and $\mathcal{C}$ for notational simplicity. }
\label{lem:weakID}
\end{lem}

By comparing the means of the normal limit distribution in (\ref{eq:lmstar_wk}), we notice that under weak identification and fixed alternatives, neither $LM^*(\beta_0)$ dominates $AR(\beta_0)$ nor vice versa. We also notice from Lemma \ref{lem:weakID} that for testing distant alternatives, the power of $LM^*(\beta_0)$ differs from that of $AR(\beta_0)$ by a factor of $\rho_{23}/\sqrt{1-\rho^2_{23}}$, so that it will be lower when $|\rho_{23}| \leq 1/\sqrt{2}$. Under homoskedasticity, in the sense that the $(\sigma_i,\gamma_i,\eta_i)$ defined after \eqref{eq:limittrue} are constant across $i$, and under weak identification, $\rho_{23} = \rho = \Phi_{12}/\sqrt{\Psi \Phi_1}$. Therefore, although the test $1\{LM^{*2}(\beta_0) \geq \mathbb{C}_{\alpha}\}$ has a power advantage under strong identification against local alternatives, it may lack power under weak identification if the degree of endogeneity is low. Furthermore, $LM^*(\beta_0)$ may have no power if $\Delta = \Delta_*(\beta_0)$. We notice that such a power issue of $LM^*(\beta_0)$ is similar to that of the tests based on the K statistic introduced by \cite{Kleibergen(2002), Kleibergen(2005)} under the framework of a fixed number of instruments. Under such a framework, the K statistic is efficient under strong identification but may have a non-monotonic power function under weak identification (e.g., see the discussions in Section 3.1 of I.\cite{Andrews(2016)}). However, different from the K statistic, $LM^*(\beta_0)$ can have power equal to zero even under strong identification and fixed alternatives (as can be seen from Lemma \ref{lem:strongID2}).

To achieve the advantages of $AR(\beta_0)$ and $LM^*(\beta_0)$ in all three scenarios above, we need to combine them in a way that is adaptive to the identification strength. Following I.\cite{Andrews(2016)}, we consider a linear combination of $AR^2(\beta_0)$ and $LM^{*2}(\beta_0)$. In the limit experiment, recall that $(\mathcal{N}_1,\mathcal{N}_2^*)$ are the limits of $(AR(\beta_0),LM^{*}(\beta_0))$ under either strong or weak identification; see \eqref{eq:lmstar_str} and \eqref{eq:lmstar_wk} for their expressions. Then, in the limit experiment, the linear combination test can be written as
\begin{align}
\phi_{a,\infty} = 1\{a \mathcal{N}_1^2 + (1-a)\mathcal{N}_2^{*2} \geq \mathbb{C}_{\alpha}(a) \},
\label{eq:phi1}
\end{align}
where $a \in [0,1]$, $\mathcal{N}_1^2 \sim \chi_1^2(\theta_1)$, and $\mathcal{N}_2^{*2} \sim \chi_1^2(\theta_2)$; the noncentrality parameters $\theta_1$ and $\theta_2$ are determined by Lemmas \ref{lem:strongID} and \ref{lem:weakID} for strong and weak identification, respectively.

\begin{thm}
\begin{enumerate}[label=(\roman*)]
\item Under weak identification, $\mathcal{N}_1^2 \sim \chi_1^2(\theta_1)$, $\mathcal{N}_2^{*2} \sim \chi_1^2(\theta_2)$, and they are independent, where $\theta_1 = m_1^2(\Delta)$ and $\theta_2 = m_2^2(\Delta)$ with $m_1(\Delta)$ and $m_2(\Delta)$ as in \eqref{eq:lmstar_wk}.
We consider the test of $H_0: \theta_1 = \theta_2 = 0$ against $H_1: \theta_1 \neq 0$ or $\theta_2 \neq 0$. Then, for any $a \in [0,1]$, $\phi_{a,\infty}$ defined in \eqref{eq:phi1} is admissible in the class of tests which depend on $(\mathcal{N}_1^2,\mathcal{N}_2^{*2})$.
\item Under strong identification and local alternatives, $\mathcal{N}_1^2 \sim \chi_1^2(0)$ and $\mathcal{N}_2^{*2} \sim \chi_1^2(\theta^2)$, where $\theta =\frac{\widetilde{\Delta}\widetilde{\mathcal{C}}}{[(1-\rho^2)\Psi]^{1/2}} $ as in \eqref{eq:lmstar_str}. We consider the test of $H_0: \theta = 0$ against $H_1: \theta \neq 0$. Then, $\phi_{a,\infty}$ defined in \eqref{eq:phi1} is the uniformly most powerful test in the class of tests which depend on $(\mathcal{N}_1,\mathcal{N}_2^{*})$ and are invariant to sign changes if and only if $a = 0$.
\item Suppose Assumption \ref{ass:variance_est} holds, $( Q_{e(\beta_0)e(\beta_0)} - \Delta^2 \mathcal{C}, Q_{Xe(\beta_0)} - \Delta \mathcal{C},Q_{XX} - \mathcal{C})^\top = O_p(1)$, and we are under strong identification with a fixed alternative. If $1 \geq a_n \geq \frac{\tilde{q}\Phi_1(\beta_0)}{\mathcal{C}^2\Delta_*^4(\beta_0)}$ for some constant $\tilde{q} > \mathbb{C}_{\alpha,\max}$, then
\begin{align*}
1\{a_n AR^2(\beta_0) + (1-a_n)LM^{*2}(\beta_0) \geq \mathbb{C}_{\alpha}(a_n)\} \stackrel{p}{\longrightarrow} 1.
\end{align*}
\end{enumerate}
\label{thm:admissible}
\end{thm}

Three remarks are in order. First, in the case with a fixed number of weak instruments (or moment conditions), I.\cite{Andrews(2016)} considered the linear combination of the K and S statistics. The trade-off between the K and S statistics stems from the difference in attention to deviation directions. In contrast, we notice from Theorem \ref{thm:admissible} that in the current context with many weak instruments, $AR(\beta_0)$ and $LM^*(\beta_0)$ do not exhibit such a difference in deviation directions. Instead, the trade-off between $AR(\beta_0)$ and $LM^*(\beta_0)$ is between local and non-local alternatives.

Second, unlike the one-sided jackknife AR test proposed by \cite{MS22}, we construct the jackknife CLC test based on $AR^2(\beta_0)$. There are several reasons. First, under weak identification, when the concentration parameter $\mathcal{C}$, and thus $m_1(\Delta)$ defined in Lemma \ref{lem:weakID}, is nonnegative, the one-sided test has good power. However, even in this case, the power curve simulations in Section \ref{sec:sim1} show that our jackknife CLC test is more powerful than the one-sided AR test in most scenarios. Second, our jackknife CLC test will have good power even when $\mathcal{C}$ is negative.\footnote{We note that $\mathcal{C} = \frac{\sum_{i \in [n]} \sum_{ j \neq i} \Pi_i P_{ij}\Pi_j}{\sqrt{K}} = \frac{ \sum_{i \in [n]}(1-P_{ii})\Pi_i^2 - \Pi^\top M \Pi}{\sqrt{K}}$, where $M = I-P$. If $\Pi^\top M \Pi$ and $\sum_{i \in [n]}P_{ii}\Pi_i^2$ are sufficiently large, $\mathcal{C}$ can be negative. \cite{MS22} further assume $\Pi^\top M \Pi \leq \frac{C \Pi^\top \Pi}{K}$ for some constant $C>0$, which implies $\mathcal{C}>0$.} Third, under strong identification and local alternatives, we will show below that our jackknife CLC test converges to the uniformly most powerful test $1\{\mathcal{N}_2^{*2} \geq \mathbb{C}_{\alpha}\}$, while both the one- and two-sided tests based on $AR(\beta_0)$ have no power, as shown in Lemma \ref{lem:strongID}. Fourth, under strong identification and fixed alternatives, our jackknife CLC test has power 1, as shown in Lemma \ref{lem:strongID2} and Theorem \ref{thm:strong_fixed}.
Fifth, combining $LM^{*2}(\beta_0)$ with $AR^2(\beta_0)$, rather than with $AR(\beta_0)$, can mitigate the impact of the power loss of $LM^*(\beta_0)$ at $\Delta_*(\beta_0)$, as shown in the numerical investigation in Section \ref{sec:sim}.

Third, it is possible to combine rotated versions of $AR(\beta_0)$ and $LM^*(\beta_0)$. Specifically, let
$$\mathcal{O}(\zeta) = \begin{pmatrix}
\cos(\zeta) & -\sin(\zeta) \\
\sin(\zeta) & \cos(\zeta)
\end{pmatrix}$$
be a rotation matrix with angle $\zeta$ and $\begin{pmatrix}
AR^\dagger(\beta_0,\zeta) \\
LM^\dagger(\beta_0,\zeta)
\end{pmatrix} = \mathcal{O}(\zeta) \begin{pmatrix}
AR(\beta_0) \\
LM^*(\beta_0)
\end{pmatrix}$. Then, in the limit experiment, the linear combination test can be written as
\begin{align*}
\phi_{a,\zeta,\infty} = 1\{a \mathcal{N}_1^{\dagger 2} + (1-a)\mathcal{N}_2^{\dagger 2} \geq \mathbb{C}_{\alpha}(a)\},
\end{align*}
where $(\mathcal{N}^{\dagger}_1,\mathcal{N}^{\dagger}_2)$ are the limits of $(AR^\dagger(\beta_0,\zeta) ,LM^\dagger(\beta_0,\zeta) )$ under either weak or strong identification. The benefit of introducing the rotation is that $\phi_{a,\zeta,\infty} $ now includes the original LM test as a special case, obtained by letting $a =0$ and $\zeta = \arctan\left(\frac{\rho(\beta_0)}{(1-\rho^2(\beta_0))^{1/2}} \right)$. In the following, we use a minimax procedure to select the weight $a$ in $\phi_{a,\infty}$. We could do the same to select both $a$ and the angle $\zeta$ in $\phi_{a,\zeta,\infty}$. However, we find in simulations that the power of the jackknife CLC test without the rotation is similar to that with the rotation and, in most cases, dominates that of the LM test under weak identification. Furthermore, under strong identification, Lemma \ref{lem:ump} shows that $LM^*(\beta_0)$ yields the most powerful test against local alternatives, which corresponds to the jackknife CLC test $\phi_{a,\infty}$ with $a=0$; in this case, adding the rotation will not improve power. Therefore, for simplicity, we focus on the jackknife CLC test without the rotation.

\section{A Conditional Linear Combination Test}
In this section, we follow I.\cite{Andrews(2016)} and determine the weight $a$ in the jackknife CLC test via a minimax procedure. Under weak identification, the limit power of the jackknife CLC test with weight $a$ is
$$\mathbb{E}\phi_{a,\infty} = \mathbb{E}1\{a \chi_1^2(m_1^2(\Delta)) + (1-a)\chi_1^2(m_2^2(\Delta)) \geq \mathbb{C}_{\alpha}(a) \},$$
where $m_1(\Delta)$ and $m_2(\Delta)$ are defined in Lemma \ref{lem:weakID}. In this case, we can be explicit and write $\phi_{a,\infty} = \phi_{a,\infty}(\Delta)$. However, the limit power of the jackknife CLC test will typically remain unknown, as the true parameter $\beta$ (and hence $\Delta$) is unknown. To overcome this issue, we calibrate the power $\mathbb{E}\phi_{a,\infty}(\delta)$, where $\delta$ ranges over all possible values that $\Delta$ can potentially take; we define $\phi_{a,\infty}(\delta)$ as well as the range of potential values of $\Delta$ below.

Let $\widehat{D} = Q_{XX} - (Q_{e(\beta_0)e(\beta_0)},Q_{Xe(\beta_0)}) \begin{pmatrix}
\widehat{\Phi}_1(\beta_0) & \widehat{\Phi}_{12}(\beta_0) \\
\widehat{\Phi}_{12}(\beta_0) & \widehat{\Psi}(\beta_0)
\end{pmatrix}^{-1} \begin{pmatrix}
\widehat{\Phi}_{13}(\beta_0)\\
\widehat{\tau}(\beta_0)
\end{pmatrix}$ be the residual from the projection of $Q_{XX}$ on $(Q_{e(\beta_0)e(\beta_0)},Q_{Xe(\beta_0)})$.
By \eqref{eq:limit}, under weak identification, we have
\begin{align*}
\widehat{D} = D + o_p(1), \quad D \stackrel{d}{=} \mathcal{N}(\mu_D,\sigma_D^2),
\end{align*}
where
\begin{align*}
\mu_D & = \mathcal{C}\left[1 - (\Delta^2,\Delta)\left(\begin{pmatrix}
\Phi_1(\beta_0) & \Phi_{12}(\beta_0) \\
\Phi_{12}(\beta_0) & \Psi(\beta_0)
\end{pmatrix}^{-1} \begin{pmatrix}
\Phi_{13}(\beta_0) \\
\tau(\beta_0)
\end{pmatrix}\right) \right] \quad \text{and}\\
\sigma_D^2 & = \Upsilon - \left((\Phi_{13}(\beta_0),\tau(\beta_0))\begin{pmatrix}
\Phi_1(\beta_0) & \Phi_{12}(\beta_0) \\
\Phi_{12}(\beta_0) & \Psi(\beta_0)
\end{pmatrix}^{-1} \begin{pmatrix}
\Phi_{13}(\beta_0) \\
\tau(\beta_0)
\end{pmatrix} \right).
\end{align*}
We note that $\widehat{D}$ is a sufficient statistic for $\mu_D$, which contains information about the concentration parameter $\mathcal{C}$ and is asymptotically independent of $AR(\beta_0)$, $LM(\beta_0)$, and hence $LM^*(\beta_0)$. Under weak identification, we observe that $m_1(\Delta)$ and $m_2(\Delta)$ in Lemma \ref{lem:weakID} can be written as
\begin{align}
\begin{pmatrix}
m_1(\Delta) \\
m_2(\Delta)
\end{pmatrix} = \begin{pmatrix}
C_{1}(\Delta) \\
C_{2}(\Delta)
\end{pmatrix} \mu_D,
\label{eq:C12}
\end{align}
where
\begin{align}
\begin{pmatrix}
C_{1}(\Delta) \\
C_{2}(\Delta)
\end{pmatrix} & \equiv \begin{pmatrix}
\Phi_1^{-1/2}(\beta_0)\Delta^2 \\
(1-\rho^2(\beta_0))^{-1/2}(\Psi^{-1/2}(\beta_0)\Delta -\rho(\beta_0) \Phi_1^{-1/2}(\beta_0)\Delta^2)
\end{pmatrix} \notag \\
& \times \left[1 - (\Delta^2,\Delta)\left(\begin{pmatrix}
\Phi_1(\beta_0) & \Phi_{12}(\beta_0) \\
\Phi_{12}(\beta_0) & \Psi(\beta_0)
\end{pmatrix}^{-1} \begin{pmatrix}
\Phi_{13}(\beta_0) \\
\tau(\beta_0)
\end{pmatrix}\right) \right]^{-1}.
\label{eq:CDelta}
\end{align}
By \eqref{eq:C12}, we see that $ \phi_{a,\infty}(\Delta)$ defined in \eqref{eq:phi1} can be written as
\begin{align*}
1\{a \chi_1^2(C_1^2(\Delta) \mu_D^2) + (1-a)\chi_1^2(C_2^2(\Delta) \mu_D^2) \geq \mathbb{C}_{\alpha}(a)\}.
\end{align*}
This motivates the definition
\begin{align}
\phi_{a,\infty}(\delta) = 1\{a \chi_1^2(C_1^2(\delta) \mu_D^2) + (1-a)\chi_1^2(C_2^2(\delta) \mu_D^2) \geq \mathbb{C}_{\alpha}(a)\}.
\label{eq:phi}
\end{align}
To emphasize the dependence of $\phi_{a,\infty}(\delta)$ on $\mu_D$ and $\gamma(\beta_0)$, we further write $\phi_{a,\infty}(\delta)$ as $\phi_{a,\infty}(\delta,\mu_D,\gamma(\beta_0))$. The range of values that $\Delta$ can take is defined as $\mathcal{D}(\beta_0) = \{\delta: \delta+\beta_0 \in \mathcal{B}\}$, where $\mathcal{B}$ is the parameter space. For example, in their empirical application of the returns to education, \cite{Staiger-Stock(1997)} posited that the value of $\beta$ (i.e., the return to education) is between 0 and 0.18, i.e., $\mathcal{B} = [0,0.18]$. For each $\delta \in \mathcal{D}(\beta_0)$, the maximum power of the linear combination test is defined as $\mathcal{P}_{\delta,\mu_D} = \sup_{a \in \mathbb{A}(\mu_D,\gamma(\beta_0))} \mathbb{E}\phi_{a,\infty}(\delta,\mu_D,\gamma(\beta_0))$, which means
\begin{align*}
\mathcal{P}_{\delta,\mu_D} - \mathbb{E}\phi_{a,\infty}(\delta,\mu_D,\gamma(\beta_0))
\end{align*}
is the power loss when the weight is set to $a$.
Here we denote the domain of $a$ as $\mathbb{A}(\mu_D,\gamma(\beta_0))$ and define it as $\mathbb{A}(\mu_D,\gamma(\beta_0)) = [\underline{a}(\mu_D,\gamma(\beta_0)),\overline{a}]$ for \begin{align*} \underline{a}(\mu_D,\gamma(\beta_0)) = \min\left(0.01, \frac{1.1 \mathbb{C}_{\alpha,\max} \Phi_1(\beta_0) c_{\mathcal{B}}(\beta_0) }{ \Delta_*^4(\beta_0) \mu_D^2} \right), \end{align*} where $\overline{a}<1$ is a constant, $\Delta_*(\beta_0) = \Phi_1^{1/2}(\beta_0)\Psi^{-1/2}(\beta_0)\rho^{-1}(\beta_0)$, and \begin{align*} c_{\mathcal{B}}(\beta_0) = \sup_{\delta \in \mathcal{D}(\beta_0)} \left[1 - (\delta^2,\delta)\left(\begin{pmatrix} \Phi_1(\beta_0) & \Phi_{12}(\beta_0) \\ \Phi_{12}(\beta_0) & \Psi(\beta_0) \end{pmatrix}^{-1} \begin{pmatrix} \Phi_{13}(\beta_0) \\ \tau(\beta_0) \end{pmatrix}\right) \right]^2. \end{align*} Following the lead of I.\cite{Andrews(2016)}, the maximum power loss over $\delta \in \mathcal{D}(\beta_0)$ can be viewed as a maximum regret. Then, we choose $a$ to minimize the maximum regret, i.e., \begin{align} a(\mu_D,\gamma(\beta_0)) \in \argmin_{a \in \mathbb{A}(\mu_D,\gamma(\beta_0))} \sup_{\delta \in \mathcal{D}(\beta_0)}(\mathcal{P}_{\delta,\mu_D} - \mathbb{E}\phi_{a,\infty}(\delta,\mu_D,\gamma(\beta_0))). \label{eq:atrue} \end{align} Three remarks on the domain of $a$, i.e., $\mathbb{A}(\mu_D,\gamma(\beta_0))$, are in order. First, under weak identification, $\mu_D$ is fixed, and $\frac{1.1 \mathbb{C}_{\alpha,\max} \Phi_1(\beta_0) c_{\mathcal{B}}(\beta_0) }{ \Delta_*^4(\beta_0) \mu_D^2}$ may be larger than $0.01$. In this case, we have $\mathbb{A}(\mu_D,\gamma(\beta_0)) = [0.01,\overline{a}]$. In our simulations, the minimax $a$ never hits the lower bound, so that setting the lower bound to $0.01$ or $0$ does not make any numerical difference. Second, under strong identification and local alternatives, $\frac{1.1 \mathbb{C}_{\alpha,\max} \Phi_1(\beta_0) c_{\mathcal{B}}(\beta_0) }{ \Delta_*^4(\beta_0) \mu_D^2}$ converges to zero, so that \begin{align*} \mathbb{A}(\mu_D,\gamma(\beta_0)) = \left[\frac{1.1 \mathbb{C}_{\alpha,\max} \Phi_1(\beta_0) c_{\mathcal{B}}(\beta_0) }{ \Delta_*^4(\beta_0) \mu_D^2},\overline{a}\right]. \end{align*} We show in Theorem \ref{thm:strongid} below that in this case, the minimax $a$ converges to zero under strong identification and local alternatives, so that the jackknife CLC test converges to $1\{\mathcal{N}_2^{*2} \geq \mathbb{C}_{\alpha}\}$ defined in Lemma \ref{lem:ump}, which is the uniformly most powerful invariant test. Furthermore, this weight satisfies the requirement in Theorem \ref{thm:admissible}(iii) with $\tilde{q} = 1.1\mathbb{C}_{\alpha,\max}$, so that under strong identification, our jackknife CLC test has asymptotic power 1 against fixed alternatives, as shown in Theorem \ref{thm:strong_fixed}. Third, we require $\overline{a}<1$ for technical reasons. Again, in our simulations, we never observe the minimax $a$ hitting the upper bound, so that setting the upper bound to $\overline{a}$ or $1$ does not make any numerical difference. In practice, we do not observe $\mu_D$ and $\gamma(\beta_0)$. Therefore, we follow I.\citet[Section 6]{Andrews(2016)} and consider the plug-in method. More specifically, we can replace $\gamma(\beta_0)$ by its consistent estimator $\widehat{\gamma}(\beta_0)$ introduced in Assumption \ref{ass:variance_est}.
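With these ingredients, the minimax problem \eqref{eq:atrue} can be approximated on finite grids for $a$ and $\delta$. A sketch of this step, reusing \texttt{clc\_limit\_power} from the previous display and assuming user-supplied functions \texttt{C1} and \texttt{C2} that implement \eqref{eq:CDelta}:
\begin{verbatim}
import numpy as np

def minimax_weight(mu_D, a_grid, delta_grid, C1, C2, alpha=0.05):
    # a_grid discretizes the domain [a_lower(mu_D, gamma), a_bar];
    # delta_grid discretizes D(beta_0) = {delta : delta + beta_0 in B}.
    power = np.array([[clc_limit_power(a, C1(d) * mu_D, C2(d) * mu_D, alpha)
                       for d in delta_grid] for a in a_grid])
    # Regret of weight a at alternative delta: best attainable power
    # over the weights minus the power achieved by a.
    regret = power.max(axis=0, keepdims=True) - power
    # Minimax: the weight whose worst-case regret over delta is smallest.
    return a_grid[np.argmin(regret.max(axis=1))]
\end{verbatim}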
To obtain a proxy of $\mu_D$, we define \begin{align*} \widehat{\sigma}_D = \left(\widehat{\Upsilon} - (\widehat{\Phi}_{13}(\beta_0),\widehat{\tau}(\beta_0))\begin{pmatrix} \widehat{\Phi}_1(\beta_0) & \widehat{\Phi}_{12}(\beta_0) \\ \widehat{\Phi}_{12}(\beta_0) & \widehat{\Psi}(\beta_0) \end{pmatrix}^{-1} \begin{pmatrix} \widehat{\Phi}_{13}(\beta_0) \\ \widehat{\tau}(\beta_0) \end{pmatrix}\right)^{1/2}, \end{align*} which is a function of $\widehat{\gamma}(\beta_0)$ and a consistent estimator of $\sigma_D$ by Assumption \ref{ass:variance_est}. Then, under weak identification, we have $\widehat{D}^2/\widehat{\sigma}_D^2 = D^2 /\sigma_D^2 + o_p(1) \stackrel{d}{=} \chi^2_1(\mu_D^2/\sigma_D^2) + o_p(1)$ and $D^2 /\sigma_D^2$ is a sufficient statistic for $\mu_D^2$. Let $\widehat{r} = \widehat{D}^2/\widehat{\sigma}_D^2$. We consider two estimators for $\mu_D$ as functions of $\widehat{D}$ and $\widehat{\sigma}_D$, namely $f_{pp}(\widehat{D},\widehat{\gamma}(\beta_0)) = \widehat{\sigma}_D \sqrt{\widehat{r}_{pp}}$ and $f_{krs}(\widehat{D},\widehat{\gamma}(\beta_0)) = \widehat{\sigma}_D \sqrt{\widehat{r}_{krs}}$, where $\widehat{r}_{pp} = \max(\widehat{r}-1,0)$ and \begin{align*} \widehat{r}_{krs} = \widehat{r}-1 + \exp\left(-\frac{\widehat{r}}{2}\right)\left(\sum_{j=0}^\infty\left( -\frac{\widehat{r}}{2}\right)^j \frac{1}{j!(1+2j)} \right)^{-1}. \end{align*} Specifically, \cite{krs93} show $\widehat{r}_{krs}$ is positive as long as $\widehat{r}>0$ and $\widehat{r} \geq \widehat{r}_{krs} \geq \widehat{r}-1$. It is also possible to consider the MLE based on a single observation $\widehat{D}^2/\widehat{\sigma}_D^2$. However, such an estimator is harder to use because it does not have a closed-form expression. In practice, we estimate $\mathbb{E}\phi_{a,\infty}(\delta,\mu_D,\gamma(\beta_0))$ by $ \mathbb{E}^*\phi_{a,s}(\delta,\widehat{D},\widehat{\gamma}(\beta_0))$ for $s \in \{pp,krs\}$, where \begin{align} \phi_{a,s}(\delta,\widehat{D},\widehat{\gamma}(\beta_0)) = 1\{a \chi^2_1(\widehat{C}_{1}^2(\delta)f^2_s(\widehat{D},\widehat{\gamma}(\beta_0)) ) + (1-a) \chi^2_1(\widehat{C}_{2}^2(\delta)f^2_s(\widehat{D},\widehat{\gamma}(\beta_0))) \geq \mathbb{C}_{\alpha}(a)\}, \label{eq:phias} \end{align} and $(\widehat{C}_{1}(\delta),\widehat{C}_{2}(\delta))$ are similarly defined as $(C_1(\delta),C_2(\delta))$ in \eqref{eq:CDelta} with $\gamma(\beta_0)$ replaced by $\widehat{\gamma}(\beta_0)$, i.e., \begin{align*} \begin{pmatrix} \widehat{C}_{1}(\delta) \\ \widehat{C}_{2}(\delta) \end{pmatrix} & \equiv \begin{pmatrix} \widehat{\Phi}_1^{-1/2}(\beta_0)\delta^2 \\ (1-\widehat{\rho}^2(\beta_0))^{-1/2}(\widehat{\Psi}^{-1/2}(\beta_0)\delta -\widehat{\rho}(\beta_0) \widehat{\Phi}_1^{-1/2}(\beta_0)\delta^2) \end{pmatrix} \notag \\ & \times \left[1 - (\delta^2,\delta)\left(\begin{pmatrix} \widehat{\Phi}_1(\beta_0) & \widehat{\Phi}_{12}(\beta_0) \\ \widehat{\Phi}_{12}(\beta_0) & \widehat{\Psi}(\beta_0) \end{pmatrix}^{-1} \begin{pmatrix} \widehat{\Phi}_{13}(\beta_0) \\ \widehat{\tau}(\beta_0) \end{pmatrix}\right) \right]^{-1}. \end{align*} Let $\mathcal{P}_{\delta,s}(\widehat{D},\widehat{\gamma}(\beta_0)) = \sup_{a \in \mathbb{A}(f_{s}(\widehat{D},\widehat{\gamma}(\beta_0)),\widehat{\gamma}(\beta_0)) } \mathbb{E}^*\phi_{a,s}(\delta,\widehat{D},\widehat{\gamma}(\beta_0))$. 
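The infinite series in $\widehat{r}_{krs}$ can be evaluated term by term because the ratio of consecutive terms has a simple closed form, which avoids computing factorials directly. A sketch of both estimators (the truncation rule is our choice, and the alternating series may need more care for very large $\widehat{r}$):
\begin{verbatim}
import math

def r_pp(r_hat):
    return max(r_hat - 1.0, 0.0)

def r_krs(r_hat, tol=1e-12, max_terms=1000):
    # S = sum_{j>=0} (-r/2)^j / (j! * (1 + 2j)), built from the term
    # ratio t_{j+1}/t_j = (-r/2)/(j+1) * (1+2j)/(3+2j), with t_0 = 1.
    x = -r_hat / 2.0
    term, s = 1.0, 1.0
    for j in range(max_terms):
        term *= x / (j + 1.0) * (1.0 + 2.0 * j) / (3.0 + 2.0 * j)
        s += term
        if abs(term) < tol:
            break
    return r_hat - 1.0 + math.exp(-r_hat / 2.0) / s

# The mu_D proxies are then f_s = sigma_D_hat * sqrt(r_s), s in {pp, krs}.
\end{verbatim}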
Then, for $s \in \{pp,krs\}$, we can estimate $a(\mu_D,\gamma(\beta_0))$ in \eqref{eq:atrue} by $\mathcal{A}_s(\widehat{D},\widehat{\gamma}(\beta_0))$ defined as \begin{align} \mathcal{A}_s(\widehat{D},\widehat{\gamma}(\beta_0)) \in \argmin_{a \in \mathbb{A}(f_{s}(\widehat{D},\widehat{\gamma}(\beta_0)),\widehat{\gamma}(\beta_0))} \sup_{\delta \in \mathcal{D}(\beta_0)}(\mathcal{P}_{\delta,s}(\widehat{D},\widehat{\gamma}(\beta_0)) - \mathbb{E}^*\phi_{a,s}(\delta,\widehat{D},\widehat{\gamma}(\beta_0))), \label{eq:afeasible} \end{align} where $\phi_{a,s}(\delta,\widehat{D},\widehat{\gamma}(\beta_0))$ is defined in \eqref{eq:phias}, $\mathbb{A}(f_{s}(\widehat{D},\widehat{\gamma}(\beta_0)),\widehat{\gamma}(\beta_0)) = [\underline{a}(f_{s}(\widehat{D},\widehat{\gamma}(\beta_0)),\widehat{\gamma}(\beta_0)),\overline{a}]$, \begin{align*} \underline{a}(f_{s}(\widehat{D},\widehat{\gamma}(\beta_0)),\widehat{\gamma}(\beta_0)) = \min\left(0.01, \frac{1.1 \mathbb{C}_{\alpha,\max} \widehat{\Phi}_1(\beta_0) \widehat{c}_{\mathcal{B}}(\beta_0)}{ \widehat{\Delta}_*^4(\beta_0) f_{s}^2(\widehat{D},\widehat{\gamma}(\beta_0))} \right), \end{align*} \begin{align*} \widehat{c}_{\mathcal{B}}(\beta_0) = \sup_{\delta \in \mathcal{D}(\beta_0)} \left[1 - (\delta^2,\delta)\left(\begin{pmatrix} \widehat{\Phi}_1(\beta_0) & \widehat{\Phi}_{12}(\beta_0) \\ \widehat{\Phi}_{12}(\beta_0) & \widehat{\Psi}(\beta_0) \end{pmatrix}^{-1} \begin{pmatrix} \widehat{\Phi}_{13}(\beta_0) \\ \widehat{\tau}(\beta_0) \end{pmatrix}\right) \right]^2, \end{align*} and $\widehat{\Delta}_*(\beta_0) = \widehat{\Phi}_1^{1/2}(\beta_0)\widehat{\Psi}^{-1/2}(\beta_0) \widehat{\rho}^{-1}(\beta_0)$. Then, the feasible CLC test is, for $s \in \{pp,krs\}$, \begin{align} \widehat{\phi}_{\mathcal{A}_s(\widehat{D},\widehat{\gamma}(\beta_0))} = 1\{\mathcal{A}_s(\widehat{D},\widehat{\gamma}(\beta_0)) AR^{2}(\beta_0) + (1-\mathcal{A}_s(\widehat{D},\widehat{\gamma}(\beta_0))) LM^{*2}(\beta_0) \geq \mathbb{C}_{\alpha}(\mathcal{A}_s(\widehat{D},\widehat{\gamma}(\beta_0)))\}. \label{eq:phihat} \end{align} \section{Asymptotic Properties} We first consider the asymptotic properties of the jackknife CLC test under weak identification and fixed alternative, in which $\mathcal{C}$ and $\Delta$ are treated as fixed so that we have \begin{align*} \widehat{D} \rightsquigarrow D \stackrel{d}{=} \mathcal{N}(\mu_D,\sigma_D^2). \end{align*} We see from \eqref{eq:atrue} and \eqref{eq:afeasible} that $\mathcal{A}_s(d,r) = a(f_s(d,r),r)$ for $(d,r) \in \Re \times \Gamma$, where $\Gamma$ is the parameter space for $\gamma(\beta_0)$ and $s \in \{pp,krs\}$. We make the following assumption on $\mathcal{A}_s(\cdot)$. \begin{ass} Suppose we are under weak identification with a fixed $\beta_0$. Let $\mathcal{S}_s$ be the set of discontinuities of $\mathcal{A}_s(\cdot,\gamma(\beta_0)): \Re \mapsto [0,1]$. Then, we assume $\mathcal{A}_s(d,r)$ is continuous in $r$ at $r= \gamma(\beta_0)$ for any $d \in \Re \setminus \mathcal{S}_s$, and the Lebesgue measure of $\mathcal{S}_s$ is zero for $s \in \{pp,krs\}$. \label{ass:a} \end{ass} Assumption \ref{ass:a} is a technical condition which allows us to apply the continuous mapping theorem. It is mild because $\mathcal{A}_s(\cdot)$ is allowed to be discontinuous in its first argument. In practice, we can approximate $\mathcal{A}_s(\cdot)$ by a step function defined over a grid of $d$ so that there is a finite number of discontinuities.
The continuity of $\mathcal{A}_s(\cdot)$ in its second argument is due to the smoothness of the bivariate normal PDF with respect to the covariance matrix. Therefore, in this case, Assumption \ref{ass:a} holds automatically. \begin{thm} Suppose we are under weak identification and fixed alternative, and \eqref{eq:limit}, Assumptions \ref{ass:variance_est} and \ref{ass:a} hold. Then, for $s\in \{pp,krs\}$, \begin{align*} \mathcal{A}_s(\widehat{D},\widehat{\gamma}(\beta_0)) & \rightsquigarrow \mathcal{A}_s(D,\gamma(\beta_0))= a(f_s(D,\gamma(\beta_0)),\gamma(\beta_0)) \end{align*} and\footnote{We assume $\frac{C}{0} = +\infty$ if $C>0$ and $\min(C,+\infty) = C.$} \begin{align*} \mathbb{E}\widehat{\phi}_{\mathcal{A}_s(\widehat{D},\widehat{\gamma}(\beta_0))} \rightarrow \mathbb{E}\phi_{a(f_s(D,\gamma(\beta_0)),\gamma(\beta_0)),\infty}(\Delta,\mu_D,\gamma(\beta_0)), \end{align*} where $\phi_{a,\infty}$ is defined in \eqref{eq:phi} and $a(f_s(D,\gamma(\beta_0)),\gamma(\beta_0))$ is interpreted as $a(\mu_D,\gamma(\beta_0))$ defined in \eqref{eq:atrue} with $\mu_D$ replaced by $f_s(D,\gamma(\beta_0))$. In addition, let $BL_1$ be the class of functions $f(\cdot)$ of $D$ that are bounded and Lipschitz with Lipschitz constant 1. Then, if the null hypothesis holds such that $\Delta = 0$, we have \begin{align*} \mathbb{E}(\widehat{\phi}_{\mathcal{A}_s(\widehat{D},\widehat{\gamma}(\beta_0))} - \alpha)f(\widehat{D}) \rightarrow 0, \quad \forall f \in BL_1. \end{align*} \label{thm:weakid} \end{thm} Several remarks on Theorem \ref{thm:weakid} are in order. First, Theorem \ref{thm:weakid} shows that the asymptotic power of the CLC test with the weight $a$ selected by the minimax procedure is the same as that in the limit experiment when the weight $\mathcal{A}_s(D,\gamma(\beta_0))$ is a function of $D$. Given that $D$ is independent of both noncentral chi-square random variables in $\phi_{a,\infty}$ in \eqref{eq:phi}, Theorem \ref{thm:admissible}(i) implies that the jackknife CLC test is asymptotically admissible conditional on $\widehat{D}$. Second, we see that the power of our jackknife CLC test is $\mathbb{E}\phi_{\mathcal{A}_s(D,\gamma(\beta_0)),\infty}(\Delta,\mu_D,\gamma(\beta_0))$, which does not exactly match the minimax power $\mathbb{E}\phi_{a(\mu_D,\gamma(\beta_0)),\infty}(\Delta,\mu_D,\gamma(\beta_0))$ in the limit problem. This is because under weak identification, it is impossible to consistently estimate $\mu_D$, or equivalently, the concentration parameter. A similar result holds under weak identification with a fixed number of moment conditions in I.\cite{Andrews(2016)}. The best we can do is to approximate $\mu_D$ by reasonable estimators based on $D$ such as $f_{pp}(D,\gamma(\beta_0))$ and $f_{krs}(D,\gamma(\beta_0))$. Last, Theorem \ref{thm:weakid} implies that our jackknife CLC test controls size asymptotically conditional on $\widehat{D}$, and thus, unconditionally. Next, we consider the performance of $\widehat{\phi}_{\mathcal{A}_s(\widehat{D},\widehat{\gamma}(\beta_0))}$ defined in \eqref{eq:phihat} under strong identification and local alternatives. \begin{thm} Suppose \eqref{eq:limit} and Assumption \ref{ass:variance_est} hold. Further suppose that we are under strong identification and local alternatives as described in Lemma \ref{lem:strongID}.
Then, for $s \in \{pp,krs\}$, we have \begin{align*} \mathcal{A}_s(\widehat{D},\widehat{\gamma}(\beta_0)) \stackrel{p}{\longrightarrow} 0 \quad \text{and} \quad \widehat{\phi}_{\mathcal{A}_s(\widehat{D},\widehat{\gamma}(\beta_0))} \rightsquigarrow 1\{\mathcal{N}_2^{*2} \geq \mathbb{C}_{\alpha}\}, \end{align*} where $\mathcal{N}_2^{*} \stackrel{d}{=} \mathcal{N}\left(\frac{\widetilde{\Delta} \widetilde{\mathcal{C}}}{[(1-\rho^2)\Psi]^{1/2}},1\right)$. \label{thm:strongid} \end{thm} Three remarks are in order. First, Theorem \ref{thm:strongid} shows that, under strong identification and local alternatives, our jackknife CLC test converges to the uniformly most powerful level $\alpha$ test characterized in Lemma \ref{lem:ump}. Therefore, it is more powerful than the jackknife AR and LM tests. Second, under strong identification and local alternatives, the JIVE-based Wald test proposed by \cite{Chao(2012)} is asymptotically equivalent to the jackknife LM test, which implies that the jackknife AR and JIVE-Wald-based two-step test in \cite{MS22} is also dominated by the jackknife CLC test. Third, Theorem \ref{thm:strongid} shows that our jackknife CLC test is adaptive. Although here we assume that $\beta_0$ is under the local alternative, this is unknown to the econometrician. Therefore, our jackknife CLC test still uses all the values $\delta$ can take (i.e., $\delta \in \mathcal{D}(\beta_0)$), which includes both local and fixed alternatives, to gauge the power loss. Yet, Theorem \ref{thm:strongid} shows that the minimax procedure can produce the most powerful test as if it were known that $\beta_0$ is under the local alternative. Last, we show that, under strong identification, the jackknife CLC test $\widehat{\phi}_{\mathcal{A}_s(\widehat{D},\widehat{\gamma}(\beta_0))}$ defined in \eqref{eq:phihat} has asymptotic power 1 against fixed alternatives. \begin{thm} Suppose Assumption \ref{ass:variance_est} holds, and $( Q_{e(\beta_0)e(\beta_0)} - \Delta^2 \mathcal{C}, Q_{Xe(\beta_0)} - \Delta \mathcal{C},Q_{XX} - \mathcal{C})^\top = O_p(1)$. Further suppose that we are under strong identification with fixed alternatives so that $\Delta = \beta - \beta_0$ is nonzero and fixed. Then, we have \begin{align*} \widehat{\phi}_{\mathcal{A}_s(\widehat{D},\widehat{\gamma}(\beta_0))} \stackrel{p}{\longrightarrow} 1. \end{align*} \label{thm:strong_fixed} \end{thm} \section{Simulation} \label{sec:sim} \subsection{Power Curve Simulation for the Limit Problem} \label{sec:sim1} In this section, we simulate the power behavior of the tests in the limit problem (\ref{eq:limit}). We compare the following tests with a nominal level of $5\%$: our jackknife CLC test in which $\mu_D$ is estimated by the methods $pp$ and $krs$, the jackknife AR test defined in (\ref{eq:AR}), the jackknife LM test defined in (\ref{eq:LM}), and the test based on the orthogonalized jackknife LM statistic $LM^{*2}(\beta_0)$ defined in this paper. The results below are based on 2000 simulation repetitions. The values of the covariance matrix in (\ref{eq:limittrue}) are set as follows: $\Phi_1 = \Psi = \Upsilon =1$, and $\Phi_{12} = \Phi_{13} = \tau = \rho$, where $\rho \in \{0.2, 0.4, 0.7, 0.9\}$. We have also simulated under alternative settings of the covariance matrix, and the patterns of power behavior obtained are very similar. The choice of the range of alternatives follows that in I.\citet[Section 7.2]{Andrews(2016)}. Figures \ref{limit_fig1}--\ref{limit_fig4} plot the power curves for $\rho=0.2, 0.4, 0.7,$ and $0.9$, respectively.
In each figure, we report the results under both weak and strong identification ($\mathcal{C}=2.5$ and $5$, respectively). We observe that overall, the two jackknife CLC tests have the best power properties in terms of maximum regret. Especially when the identification is strong ($\mathcal{C}=5$) and the degree of endogeneity is not very low ($\rho =0.4, 0.7$, or $0.9$), the jackknife CLC tests outperform their AR and LM counterparts by a large margin. In addition, we notice that when $\mathcal{C}=2.5$, there are parameter values where $LM^*(\beta_0)$ can suffer from substantial declines in power relative to the other tests, which is in line with our theoretical predictions. By contrast, our jackknife CLC tests are able to guard against such substantial power loss because of the adaptive nature of the minimax procedure. \begin{figure}[h] \centering \includegraphics[width=0.9\textwidth,height = 6cm]{power_curve_rho_02.pdf} \caption{Power Curve for $\rho = 0.2$} \label{limit_fig1} \end{figure} \begin{figure}[h] \centering \includegraphics[width=0.9\textwidth,height = 6cm]{power_curve_rho_04.pdf} \caption{Power Curve for $\rho=0.4$} \label{limit_fig2} \end{figure} \begin{figure}[h] \centering \includegraphics[width=0.9\textwidth,height = 6cm]{power_curve_rho_07.pdf} \caption{Power Curve for $\rho=0.7$} \label{limit_fig3} \end{figure} \begin{figure}[h] \centering \includegraphics[width=0.9\textwidth,height = 6cm]{power_curve_rho_09.pdf} \caption{Power Curve for $\rho=0.9$} \label{limit_fig4} \end{figure} \subsection{Simulation Based on Calibrated Data} \label{sec:sim2} We follow \cite{Angrist-Frandsen2022} and \cite{MS22} and calibrate a data generating process (DGP) based on the 1980 census dataset from \cite{Angrist-Krueger(1991)}. Let the instrument be $$Z_i = \big( (1 \{Q_i = q, C_i = c\})_{q \in \{2,3,4\}, c \in \{31,\cdots,39\} }, (1 \{Q_i = q, P_i = p \})_{q \in \{2,3,4\}, p \in \{\text{51 states}\} } \big),$$ where $Q_i, C_i, P_i$ are individual $i$'s Quarter of Birth (QOB), Year of Birth (YOB), and Place of Birth (POB), respectively, so that there are 180 instruments. Note that the dummy with $q = 1$ and $c = 30$ is omitted in the instrument $Z_i$. The outcome variable $Y_i$ is income, the endogenous variable $X_i$ (highest grade completed) is instrumented by $Z_i$, and the control variables $W_i$ are the full set of POB-YOB interactions, i.e., $$W_i = \big( 1 \{C_i = c, P_i = p\}_{c \in \{30,...,39\}, p \in \{1,2,3,4\} } \big)$$ is a $40 \times 1$ vector. As in \cite{Angrist-Frandsen2022}, using the full 1980 sample (consisting of 329509 individuals), we first obtain the average $X_i$ for each QOB-YOB-POB cell; we call this $\bar{s}(q,c,p)$. Next we estimate by LIML the following linear IV regression $$Y_i = X_i \beta_X + W_i^\top \beta_W +e_i,$$ $$X_i = Z_i^\top \Gamma_Z + W_i^\top \Gamma_W + V_i, $$ with $\mathbb{E}(e_i|Z_i)=\mathbb{E}(V_i|Z_i,W_i)=0$, denoting the LIML estimate of $\beta_{X,W} \equiv (\beta_X^\top, \beta_W^\top)^\top$ by $\widehat\beta_{LIML}^\top = (\widehat\beta_{LIML, X}^\top, \widehat\beta_{LIML,W}^\top)$. We let $\widehat{y}(C_i,P_i) = W_i^\top \widehat{\beta}_{LIML,W}$ and $$\omega(Q_i,C_i,P_i) = Y_i - X_i \widehat{\beta}_{LIML,X} - W_i^\top \widehat{\beta}_{LIML,W}.$$ We consider the following two DGPs.
\begin{enumerate} \item DGP 1: $$\widetilde{y}_i = \bar{y} + \beta \widetilde{s}_i + \omega(Q_i,C_i,P_i)(\nu_i +\kappa_2 \xi_i)$$ $$\widetilde{s}_i \sim Poisson(\mu_i),$$ where $\beta$ is the parameter of interest, $\nu_i$ and $\xi_i$ are independent standard normal, $\bar{y} = \frac{1}{N}\sum_i \widehat{y}(C_i,P_i)$, $\mu_i \equiv \max\{1,\gamma_0+\gamma_Z^\top Z_i + \kappa_1 \nu_i \}$, and $\gamma_0 +\gamma^\top_Z Z_i$ is the projection of $\bar{s}_i(q,c,p)$ onto a constant and $Z_i$. We set $\kappa_1 = 1.7$ and $\kappa_2 = 0.1$ as in \cite{MS22}. \item DGP 2: Same as DGP 1 except that $\kappa_1 = 2.7$ and $$\widetilde{s}_i \sim \lfloor Poisson(2\mu_i)/2 \rfloor.$$ \end{enumerate} The parameter space is set as $\mathcal{B} = [-0.1,0.3]$, given that the parameter of interest $\beta$ is the return to education. Following \cite{MS22}, we test the null hypothesis that $\beta = \beta_0$ for $\beta_0 =0.1$ while varying the true value $\beta \in \mathcal{B}$. We consider varying sample sizes $N$ based on 1.5\%, 1\%, and 0.5\% of the full sample size. Upon obtaining the $N$ samples, we exclude instruments with $\sum_{i=1}^N Z_{ij} <5$ to ensure that Assumption \ref{ass:K} is satisfied. This results in large, medium, and small samples with 4943, 3296, and 1648 observations and 150, 142, and 119 IVs, respectively. Our DGP 1 is exactly the same as that in \cite{MS22}, which has $\rho =0.41$. Our DGP 2 has $\rho =0.7$. The concentration parameters (defined as $\mathcal{C}/\Upsilon^{1/2}$) for the large, medium, and small samples are 4.89, 3.66, and 2.18, respectively, for DGP 1 and 5.33, 4.02, and 2.42, respectively, for DGP 2. The results below are based on 1000 simulation repetitions. We provide more details about the implementation in Section \ref{sec:imp_sim} in the Online Supplement. We compare the following tests with a nominal level of $5\%$ via simulations. \begin{enumerate} \item pp: our jackknife CLC test when $\mu_D$ is estimated by method pp. \item krs: our jackknife CLC test when $\mu_D$ is estimated by method krs. \item AR: the one-sided jackknife AR test with the cross-fit variance estimator as proposed by \cite{MS22}. \item LM\_CF: \citeauthor{Matsushita-Otsu2021}'s (\citeyear{Matsushita-Otsu2021}) jackknife LM test, but with the cross-fit variance estimator. \item 2-step: \citeauthor{MS22}'s (\citeyear{MS22}) two-step test in which the overall size is set at $5\%$. \item LM$^*$: the LM$^*$ test defined in this paper. \item LM\_MO: \citeauthor{Matsushita-Otsu2021}'s (\citeyear{Matsushita-Otsu2021}) original jackknife LM test. \end{enumerate} Figures \ref{fig1} and \ref{fig2} plot the power curves of the aforementioned tests. We can make six observations. First, all methods control size well because they are all weak-identification-robust. Second, in DGP 1 with the small sample size, the jackknife LM performs best, but the jackknife CLC tests have similar power, especially for alternatives that are close to the null. The maximum power gap between the jackknife LM and krs, which occurs at the alternative on the far left of the parameter space, is about 8\%. Third, for the rest of the scenarios, our jackknife CLC tests have the highest power. For DGP 1 with medium and large sample sizes, the power gaps between our jackknife CLC tests and the jackknife LM are about 5\% and 7\%. They are 22\%, 20\%, and 20\% for DGP 2 with small, medium, and large sample sizes, respectively. Fourth, method krs is slightly better than pp, which is consistent with the power curve simulation in Section \ref{sec:sim1}.
Fifth, Figure \ref{fig3} shows the average values of $a$ for our jackknife CLC tests. Although the powers of our jackknife CLC and LM$^*$ tests are very close, Figure \ref{fig3} implies that the minimax procedure does not put all the weight on LM$^*$. Furthermore, because the AR tests are more powerful on the left side of the parameter space than on the right side, the minimax weights are higher (more weight on the AR tests) on the left than on the right. The weight is lowest for alternatives that are close to the null, which is consistent with our theory that $LM^*$ is most powerful for local alternatives. The weights for DGP 2 are lower in general than those for DGP 1 because it has slightly stronger identification. Last, although the power of $LM^{*2}(\beta_0)$ drops at the left end of the parameter space, the power of the jackknife CLC tests remains stable. From Figure \ref{fig3}, we see that in this region, more weight is put on $AR^2(\beta_0)$. \begin{figure}[h] \centering \includegraphics[width=1\textwidth,height = 6cm]{DGP41.pdf} \caption{Power Curve for DGP 1} \label{fig1} \end{figure} \begin{figure}[h] \centering \includegraphics[width=1\textwidth,height = 6cm]{DGP43.pdf} \caption{Power Curve for DGP 2} \label{fig2} \end{figure} \begin{figure}[h] \centering \includegraphics[width=1\textwidth,height = 6cm]{DGP4a.pdf} \caption{Average Values of $a$} \label{fig3} \end{figure} \section{Conclusion} In this paper, we consider a jackknife CLC test which is adaptive to the identification strength in IV regressions with many weak instruments. We show that the proposed test is (i) robust to weak identification, many instruments, and heteroskedasticity, (ii) admissible under weak identification, and (iii) uniformly most powerful among sign-invariant tests under strong identification against local alternatives. Simulation experiments confirm the good power properties of the jackknife CLC test. \newpage
\section{ Introduction } \label{section1} The non-renormalization theorem~\cite{adler-bardeen,bardeen,zee,low-schr,padua,bbbc,ps} of the four-dimensional gauge an\-om\-aly~\cite{abbj} is of fundamental importance for the construction of consistent high energy physics models. This theorem states that the anomaly coefficient vanishes at all orders of perturbation theory if it vanishes in the one-loop approximation. The original proof by Bardeen~\cite{bardeen} of the theorem for the non-Abelian gauge anomaly is based on an analysis of Feynman graphs: one shows that if the one-loop triangle anomaly cancels, then there exists a gauge invariant regularization valid to all orders. Later on it was recognized by Zee~\cite{zee} in the Abelian case, and by Costa et al.~\cite{padua} in the non-Abelian case, that it is possible to give a proof based on the combined use of the gauge (or BRS) Ward identities and of the Callan-Symanzik equation. At the same time Lowenstein and Schroer~\cite{low-schr}, and later on Bandelloni et al.~\cite{bbbc}, achieved, with the quantum action principle~\cite{lam} as the main tool, an algebraic, regularization independent, version of the previous proof. The main advantage of a regularization independent proof is that it can be naturally extended to more sophisticated theories, {\em e.g.\ } supersymmetric gauge theories and topological theories, for which no regularization preserving all the symmetries is available. The regularization independent proofs given up to the present time~\cite{low-schr,bbbc,ps}, as well as the proof given in~\cite{padua}, based on dimensional regularization, although very general, have their domain of validity restricted by the ``technical'' assumption that the one-loop beta function for the gauge coupling~\cite{beta1} should not identically vanish. Even if this assumption covers a very large class of models including the standard model, there is a wide set of interesting theories for which the one-loop gauge beta function does indeed vanish. This set includes in particular gauge models with $N=1$ supersymmetry which may have some relevance in the construction of grand unified theories~\cite{ibanez}. Moreover the supersymmetric gauge models with a vanishing one-loop gauge beta function~\cite{parkes,hamidi,rajpoot} are the starting point towards the construction of ultraviolet finite theories~\cite{lps}. It is therefore necessary to have a proof which also applies to the case of a vanishing one-loop gauge beta function. This is the aim of the present paper. The demonstration follows the differential geometry setup of the descent equations which are known to characterize the anomaly~\cite{zumino,d-violette,brandt}. It is the continuation of a previous work of the authors~\cite{ps}, where a completely algebraic proof of the non-renormalization theorem was given in the case of a non-vanishing one-loop gauge beta function. The main ingredient, as shown in~\cite{ps}, is the vanishing of the anomalous dimensions of the differential form operators which are solutions of the descent equations. In the proof one has to use the ghost equation shown in~\cite{bps}, which controls the coupling of the Faddeev-Popov ghost $c$. However this equation holds only in the Landau gauge, and we will therefore have to present our arguments in this particular gauge.
The extension of the non-renormalization theorem to a general linear covariant gauge can be easily performed by following the techniques of extended BRS invariance~\cite{ps-extbrs}, as it was done in~\cite{lps-u1}. \noindent Let us finish this introduction with some remarks. The proof we are going to present here concerns the non-supersym\-met\-ric theories for simplicity, the generalization to the supersymmetric case being apparently straightforward. There is indeed a supersymmetric version of the descent equations which allows for an algebraic set up analog to the non-supersymmetric one and which leads to a unique characterization of the anomaly~\cite{ps-anom,porrati}. Our proof covers the cases of theories for which the gauge beta function does not vanish to higher than one-loop order. It does not hold, as it stands, in cases where the gauge beta function vanishes at higher orders as well. This proof in particular would not apply to the topological theories, which have vanishing beta functions to all orders~\cite{topol}, but up to the present time there is no known example of such a theory having a gauge anomaly, given as a non-trivial solution of the Wess-Zumino consistency conditions~\cite{stora,topol}. It is however relevant for the construction of finite supersymmetric gauge theories~\cite{lps}. Indeed such a construction starts with a model whose gauge beta function vanishes only in the one-loop approximation and depends on a certain number of {\em independent} couplings (a gauge and a few Yukawa couplings). It is at this stage that the non-renormalization theorem is needed. The all order vanishing of the whole set of beta functions is then ensured by requiring the Yukawa couplings to be functions of the gauge coupling constant, according to the ``reduction of coupling constants'' theory of Zimmermann~\cite{reduction}. \section{ Properties of Yang-Mills theories in the Landau gauge} \label{section2} The purpose of this section is to give a brief summary of the algebraic properties which characterize a four-dimensional gauge theory quantized in the Landau gauge~\cite{bps,ps}. Let us consider a massless gauge theory whose complete classical action $\S$, using the same notations as ref.~\cite{ps}, reads: \begin{equation} \S = \S_{\rm inv}\ + \ \S_{\rm gf} \ + \ \S_{\rm ext} \ , \eqn{action} where $\S_{\rm inv}$, $\S_{\rm gf}$ and $\S_{\rm ext}$ are respectively the gauge invariant action, the Landau gauge fixing term and the external field dependent part. They are given by: \begin{equation} \S_{\rm inv} = \int d^4 \! x \, \left( - {1\over 4g^2} F^{a\mu\nu}F^a_{\mu\nu} + {\cal L}_{\rm matter}(\phi,D_\mu \phi,\lambda_i ) \right) \ , \eqn{actionclass} \begin{equation} \S_{\rm gf} = \int d^4 \! x \, {\ }\left( b^a\partial^\mu A^a_\mu + {\bar c}^a \partial^\mu {(D_\mu c)}^a \right) \ , \eqn{gaugefix} \begin{equation} \S_{\rm ext} = \int d^4 \! x \, \left( - \O^{a\mu} {(D_{\mu}c)}^a + \frac 1 2 \sigma^a f^{abc}c^bc^c -i Y c^a T^a \phi \right) \ , \eqn{extaction} where $f^{abc}$ are the structure constants of a simple compact gauge group $G$, $T^a$ are the generators of the matter representation and $\{\lambda_i\}$ denote the self coupling constants of the matter fields $\phi$ whose invariant Lagrangian ${\cal L}_{\rm matter}$ is restricted by the usual power-counting condition.
The invariance of $\S$ under the nilpotent $BRS$ transformations~\cite{brs} (the external fields $\O$, $\sigma$, $Y$ being kept invariant as usual): \begin{equation}\begin{array}{lcl} sA^a_\mu \= -{(D_\mu c)}^a \ ,\\ sc^a \= {1\over 2} f^{abc}c^b c^c \ ,\\ s{\bar c}^a\= b^a\ , \\ sb^a\=0 \ ,\\ s\phi\= -i c^a T^a \phi \ , \end{array}\eqn{brs} is expressed by the classical Slavnov identity: \begin{equation} {\cal S}(\S) = \int d^4 \! x \, \displaystyle{\Biggl(} \fud{\S}{\O^{a\mu}}\fud{\S}{A_\mu^a} + \fud{\S}{\sigma^a}\fud{\S}{c^a} + \fud{\S}{Y}\fud{\S}{\phi} + b^a\fud{\S}{{\bar c}^a} \displaystyle{\Biggr)} = 0 \ . \eqn{slavnov} This identity is assumed to be broken at the quantum level by the gauge anomaly~\cite{abbj,brsyug}, i.e.: \begin{equation} {\cal S}(\Gamma) = \hbar^{n} r \AA {\ }+{\ }O(\hbar^{n+1} ) \ , \qquad n \ge 2 \ , \eqn{anslavnov} with \begin{equation} \AA{\ }={\ }\varepsilon^{\mu\nu\rho\sigma}\int d^4 \! x \, \partial_{\mu}c^a \displaystyle{\biggl(} d^{abc}\partial_{\nu}A^b_\rho A^c_\sigma - {{\cal D}^{abcd}\over12}A^b_\nu A^c_\rho A^d_\sigma \displaystyle{\biggr)} \ , \eqn{anomaly} \begin{equation} {\cal D}^{abcd} = d^{abn}f^{ncd} + d^{acn}f^{ndb} + d^{adn}f^{nbc} \ , \eqn{DDtensor} where $d^{abc}$ is the totally symmetric invariant tensor and $\Gamma$ is the vertex functional \begin{equation} \Gamma = \S {\ }+{\ }O(\hbar) \ . \eqn{gammafunct} One has to note that eq.\equ{anslavnov} implies that the gauge anomaly is absent at the one loop level, i.e. we consider the case in which the coefficient $r$ of the one loop triangle diagram is equal to zero, due to an appropriate choice of the matter field representation~\cite{abbj}. In such a situation the Adler-Bardeen theorem~\cite{abbj,padua,bbbc,ps} states that the coefficient $r$ in \equ{anslavnov} identically vanishes. The vertex functional $\Gamma$, besides the anomalous Slavnov identity \equ{anslavnov}, is known to obey: \par i) ${\ }{\ }{\ }$the Landau gauge-fixing condition and the antighost equation~\cite{pr} \begin{equation} \fud{\Gamma}{b^a} = \partial A^a ,\qquad \fud{\Gamma}{{\bar c}^a} + \partial \fud{\Gamma}{\O^a} = 0\ , \eqn{gaugecond} \par ii) ${\ }$the rigid gauge invariance~\cite{pr} \begin{equation} {\cal R}^a_{\rm rig}\S = \sum_{{\rm all\ fields\ }\varphi} \int d^4 \! x \, \d^a_{\rm rig}{\varphi}\fud{\S}{{\varphi}}=0\ , \eqn{rigid} \par iii) the ghost equation~\cite{bps} \begin{equation} \int d^4 \! x \, \displaystyle{\Biggl(} \fud{\Gamma}{c^a} + f^{abc}{\bar c}^b \fud{\Gamma}{b^c} \displaystyle{\Biggr)} \ = \Delta^a \ , \eqn{ghosteq} \begin{equation} \Delta^a = \int d^4 \! x \, \left( f^{abc}\O^{b\mu}A^c_\mu - f^{abc}\sigma^b c^c + iYT^a\phi \right)\ .
\eqn{ghostbreak} These conditions, together with \equ{anslavnov}, allow us to write a Callan-Symanzik equation which is Slavnov invariant up to the order $\hbar^{n}$~\cite{ps}, i.e.: \begin{equation}\begin{array}{rl} {\cal C}\Gamma{\ }=& {\ }\displaystyle{\biggl(} \mu \pad{\ }{\mu} + \hbar \beta_g {\partial {\ }\over \partial g} + \hbar \sum_i \beta_i {\partial {\ }\over \partial \lambda_i} + \hbar \gamma_A {\cal N}_A +\hbar \gamma_{\phi} {\cal N}_{\phi} \displaystyle{\biggr)} \Gamma{\ } \\[2mm] =&{\ }\hbar^{n+1} \Delta^{n+1}_c + O(\hbar^{n+2}) \ , \end{array}\eqn{calla-sym} where $\mu$ denotes the renormalization point, $\Delta^{n+1}_c$ is an integrated local polynomial, $(\beta_g,{\ }\beta_i)$ are respectively the beta functions for the gauge and the self matter couplings and $({\cal N}_A,{\ }{\cal N}_{\phi})$ are the Slavnov invariant counting operators: \begin{equation} {\cal N}_A = \int d^4 \! x \, \displaystyle{\biggl(} A^{a\mu}\fud{}{A^{a\mu}} - b^a\fud{}{b^a} -{\bar c}^a\fud{}{{\bar c}^a} - \Omega^{a\mu}\fud{}{\Omega^{a\mu}} \displaystyle{\biggr)} \ , \eqn{counting1} \par \begin{equation} {\cal N}_{\phi} = \int d^4 \! x \, \displaystyle{\biggl(} \phi\fud{}{\phi} - Y\fud{}{Y} \displaystyle{\biggr)} \ . \eqn{counting2} The vanishing up to the order $\hbar^{n}$ of the ghost anomalous dimension, i.e. the absence in \equ{calla-sym} of the Slavnov invariant counting term \begin{equation} {\cal N}_c \Gamma = \int d^4 \! x \, \displaystyle{\biggl(} c^a \fud{\Gamma}{c^a} - \sigma^a \fud{\Gamma}{\sigma^a} \displaystyle{\biggr)} \ , \eqn{counting3} is due to the ghost equation \equ{ghosteq}. Moreover, as shown in~\cite{ps}, the use of the Landau gauge allows us to define a renormalized anomaly insertion \begin{equation} [\AA \cdot \Gamma] = \AA{\ }+{\ }O({\hbar \AA}) \ , \eqn{aninsert} which possesses the following properties: \begin{equation} {\cal C}[\AA \cdot \Gamma] = \hbar{\cal S}_\Gamma [\hat\Delta\cdot\Gamma] + O(\hbar^{n+1})\ , \eqn{cs-anom} \begin{equation} {\cal S}_\Gamma [\AA \cdot \Gamma] = O(\hbar^n) \ , \eqn{aninvariance} where ${\cal S}_\Gamma$ is the linearized Slavnov operator \begin{equation}\begin{array}{rl} {\cal S}_\Gamma =&\int d^4 \! x \, \displaystyle{\Biggl(} \fud{\Gamma}{\O^{a\mu}}\fud{}{A^a_\mu} + \fud{\Gamma}{A^a_\mu}\fud{}{\O^{a\mu}} + \fud{\Gamma}{\sigma^a} \fud{}{c^a} + \fud{\Gamma}{c^a} \fud{}{\sigma^a} \\[2mm] &{\ }{\ }{\ }{\ }+ \fud{\Gamma}{Y}\fud{}{\phi} + \fud{\Gamma}{\phi} \fud{}{Y} + b^a\fud{}{{\bar c}^a} \displaystyle{\Biggr)} \end{array}\eqn{slavnovlin} and \begin{equation} {\cal S}_\Gamma {\cal S}_\Gamma = O({\hbar^n}) \ . \eqn{nilpotency} The non-vanishing right-hand-side of the last equation is due to the presence of the gauge anomaly in the Slavnov identity \equ{anslavnov}. Equations \equ{cs-anom}, \equ{aninvariance} tell us that the insertion $[\AA \cdot \Gamma]$ obeys a Callan-Symanzik equation without anomalous dimension (up to a ${\cal S}_\Gamma$-variation) up to the order $\hbar^n$, and that it is Slavnov invariant up to the order $\hbar^n$. As we will see in the next sections, properties \equ{cs-anom}, \equ{aninvariance} will provide a complete algebraic proof of the Adler-Bardeen theorem also in the case of a vanishing one loop gauge beta function.
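Before turning to the order $\hbar^{n+1}$, let us also recall the elementary check of the nilpotency of the BRS transformations \equ{brs} on the ghost field, which underlies the cohomological arguments used below. Since the ghosts anticommute, one has \begin{equation} s^2 c^a = f^{abc}\, (s c^b)\, c^c = \frac{1}{2}\, f^{abc} f^{bde}\, c^d c^e c^c = 0 \ , \end{equation} the last equality following from the Jacobi identity for the structure constants, the product $c^d c^e c^c$ being totally antisymmetric in its color indices.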
\section{ Order $\hbar^{n+1}$ } \label{section3} Following~\cite{bbbc}, we can extend the anomalous Slavnov identity \equ{anslavnov} to the order $\hbar^{n+1}$ as \begin{equation} {\cal S}(\Gamma){\ }={\ }r{\hbar}^n [\AA \cdot \Gamma]{\ }+{\ } {\hbar}^{n+1}{\cal B}{\ }{\ }+O({\hbar}^{n+2}) \ , \eqn{extsl} where $[\AA \cdot \Gamma]$ is the anomaly insertion defined in equations \equ{cs-anom}, \equ{aninvariance} and ${\cal B}$ is an integrated local functional of ultraviolet dimension four and ghost number one. Applying the Callan-Symanzik operator to both sides of equation \equ{extsl} and making use of eq.\equ{cs-anom} and of the algebraic property \begin{equation} {\cal C}{\cal S}(\Gamma){\ }={\ }{\cal S}_{\Gamma}{\cal C}\Gamma \ , \eqn{relalg} we get, to the lowest order (i.e. order $n+1$) in $\hbar$, the equation \begin{equation} \displaystyle{\biggl(} \beta_g^{(1)} \pad{r}{g}{\ } +{\ }\sum_i \beta_i^{(1)}\pad{r}{\lambda_i} \displaystyle{\biggr)} \AA{\ }+{\ } \mu \pad{{\cal B}}{\mu}{\ }={\ }{\cal S}_{\Sigma}( \Delta^{n+1}_c - r \hat\Delta ) \ , \eqn{nonren1} where ${\cal S}_\S$ is the linearized nilpotent operator corresponding to the classical Slavnov identity \equ{slavnov} and $(\beta^{(1)}_g,{\ }\beta^{(1)}_i)$ are the one loop beta functions~\cite{beta1}. Taking into account that ${\cal B}$ is homogeneous of degree zero in the mass parameter~\cite{bbbc}, i.e.: \begin{equation} \mu\pad{{\cal B}}{\mu} = 0 \ , \eqn{BBind} and that the gauge anomaly $\AA$ cannot be written as a local ${\cal S}_\S$-variation, it follows that \equ{nonren1} is equivalent to the two conditions: \begin{equation} \beta_g^{(1)} \pad{r}{g}{\ } +{\ }\sum_i \beta_i^{(1)}\pad{r}{\lambda_i} {\ }={\ }0 \ , \eqn{cond1} and \begin{equation} {\cal S}_\S ( \Delta^{n+1}_c - r \hat\Delta) {\ }={\ }0 \ . \eqn{cond2} In the case in which the one loop gauge beta function $\beta^{(1)}_g$ does not identically vanish, \equ{cond1} implies the Adler-Bardeen theorem~\cite{padua,bbbc,ps}. However, for the time being, we keep \equ{cond1} just as an algebraic equation in view of the fact that we will allow the coefficient $\beta^{(1)}_g$ to vanish. In this case eq.\equ{cond1} implies only that $r$ does not depend on the self matter couplings $\lambda_i$. Let us turn now to the analysis of the second condition \equ{cond2}. This equation shows that the difference $(\Delta^{n+1}_c - r \hat\Delta)$, being a Slavnov invariant quantity, can be expanded in terms of the elements of the invariant basis~\cite{pr}: \begin{equation} \displaystyle{\biggl(} {\partial \S \over \partial g}, \qquad {\partial \S \over \partial \lambda_i}, \qquad {\cal N}_A\S, \qquad {\cal N}_{\phi}\S, \qquad {\cal N}_c\S \displaystyle{\biggr)} \ . \eqn{set} This amounts to rewriting the Callan-Symanzik equation \equ{calla-sym} as \begin{equation} {\cal C}\Gamma{\ }+{\ } \hbar^{n+1}\gamma_c {\cal N}_c \Gamma{\ }={\ } r \hbar^{n+1} \hat\Delta{\ }+{\ }O(\hbar^{n+2}) \ , \eqn{callansym2} where the ghost-anomalous dimension has reappeared, in agreement with the fact that its absence is ensured only up to the order $\hbar^{n}$~\cite{ps}. Finally, repeating the same argument as in~\cite{bbbc}, the Callan-Symanzik equation \equ{callansym2} extends to the order $\hbar^{n+2}$ as \begin{equation} {\cal C}\Gamma{\ }+{\ } \hbar^{n+1}\gamma_c {\cal N}_c \Gamma{\ }={\ } r \hbar^{n+1}[ \hat\Delta \cdot \Gamma]{\ }+{\ }\hbar^{n+2} \Delta^{n+2}_c {\ }+ O(\hbar^{n+3}) \ , \eqn{callansym3} where $\Delta^{n+2}_c$ is a local integrated functional.
The interesting feature of this equation is that the general local polynomial $\Delta^{n+1}_c$ of eq.\equ{calla-sym} has been replaced by the term $[\hat \Delta\cdot\Gamma]$, which is the same quantity as the one appearing in the Callan-Symanzik equation for the anomaly insertion \equ{cs-anom}. This step will turn out to be very useful in the discussion of the model at the order $\hbar^{n+2}$. \section{ Order $\hbar^{n+2}$ } \label{section4} This section is devoted to the analysis of the anomalous Slavnov identity at the order $\hbar^{n+2}$, i.e. to the algebraic characterization of the local polynomial ${\cal B}$ in eq.\equ{extsl}. To do this we will use property \equ{aninvariance} which shows that the anomaly insertion $[\AA \cdot \Gamma]$ is Slavnov invariant up to the order $\hbar^n$. Applying the linearized operator ${\cal S}_\Gamma$ \equ{slavnovlin} to both sides of equation \equ{extsl} and making use of \equ{aninvariance} and of the exact relation \begin{equation} {\cal S}_\Gamma {\cal S}(\Gamma) = 0 \ , \eqn{exact} we find, to the lowest order in $\hbar$ (remember that $n \ge 2$), the equation \begin{equation} {\cal S}_\S {\cal B} = 0 \ . \eqn{bbinv} This condition implies that the local polynomial ${\cal B}$ is Slavnov invariant with ghost number one, and can then be written as \begin{equation} {\cal B}{\ }={\ }\hat r \AA{\ }+{\ }{\cal S}_\S {\cal Q} \ , \eqn{bbexpr} where $\AA$ is the gauge anomaly \equ{anomaly}, $\hat r$ is an arbitrary coefficient and ${\cal Q}$ a local integrated polynomial of dimension four and ghost number zero. Moreover, since ${\cal B}$ appears in the Slavnov equation \equ{extsl} at the order $\hbar^{n+1}$, it follows that the cohomologically trivial term ${\cal S}_\S {\cal Q}$ can be reabsorbed into the effective action $\Gamma$ as a local counterterm without affecting properties \equ{cs-anom}, \equ{aninvariance} and the Callan-Symanzik equation \equ{callansym3}. The Slavnov identity \equ{extsl} becomes then: \begin{equation} {\cal S}(\Gamma){\ }={\ }r{\hbar}^n [\AA \cdot \Gamma]{\ }+{\ } \hat r {\hbar}^{n+1}\AA{\ }+O({\hbar}^{n+2}) \ , \eqn{extsl1} and extends to the order $\hbar^{n+2}$ as \begin{equation} {\cal S}(\Gamma){\ }={\ }r{\hbar}^n [\AA \cdot \Gamma] {\ }+{\ } \hat r {\hbar}^{n+1}[\AA \cdot \Gamma]{\ }+{\hbar}^{n+2}\hat \BB{\ }+{\ } O(\hbar^{n+3}) \ , \eqn{extsl2} where $\hat \BB$ is an integrated local polynomial of ultraviolet dimension four and ghost number one. It is important to note that we cannot iterate the previous arguments to characterize $\hat \BB$, i.e. property \equ{aninvariance} allows us to characterize only the order $\hbar^{n+1}$. Commuting now the Callan-Symanzik equation \equ{callansym3} with the Slavnov identity \equ{extsl2} and using eqs. \equ{cs-anom}, \equ{cond1} and the algebraic relations: \begin{equation} {\cal N}_c {\cal S}(\Gamma){\ }={\ }{\cal S}_\Gamma {\cal N}_c\Gamma \ , \eqn{commut1} \begin{equation} {\cal N}_c [\AA \cdot \Gamma]{\ }={\ }\AA{\ }+{\ }O(\hbar) \ , \eqn{anomcounting} we get, to the lowest order (i.e. $n+2$) in $\hbar$, the equation \begin{equation}\begin{array}{l} \displaystyle{\biggl(} \beta_g^{(2)} \pad{r}{g}{\ }+{\ }\sum_i \beta_i^{(2)}\pad{r}{\lambda_i} {\ }+{\ } \beta_g^{(1)} \pad{\hat r}{g}{\ }+{\ }\sum_i \beta_i^{(1)}\pad{\hat r}{\lambda_i} \displaystyle{\biggr)} \AA{\ } +{\ } \mu \pad{\hat \BB}{\mu}{\ }\\[2mm] ={\ }{\cal S}_{\S}( \Delta^{n+2}_c - \hat r \hat\Delta ) \ , \end{array}\eqn{nonren2} where $(\beta^{(2)}_g,{\ }\beta^{(2)}_i)$ are the two loop beta functions.
As in the previous section, taking into account that $\hat \BB$ is homogeneous of degree zero in the mass parameter and that the anomaly $\AA$ cannot be written as a local ${\cal S}_\S$-variation, equation \equ{nonren2} splits into the two equations: \begin{equation} {\cal S}_{\S}( \Delta^{n+2}_c - \hat r \hat\Delta ){\ }={\ }0 \ , \eqn{dhat} \begin{equation} \displaystyle{\biggl(} \beta_g^{(2)} \pad{r}{g}{\ }+{\ }\sum_i \beta_i^{(2)}\pad{r}{\lambda_i} {\ }+{\ } \beta_g^{(1)} \pad{\hat r}{g}{\ }+{\ }\sum_i \beta_i^{(1)}\pad{\hat r}{\lambda_i} \displaystyle{\biggr)}{\ }={\ }0 \ . \eqn{twoloop} Eq.\equ{twoloop}, as will be discussed in the next section, allows us to control the dependence of the anomaly coefficient $r$ on the coupling constants $(g,{\ }\lambda_i)$ in the case in which $\beta^{(1)}_g = 0$. As one can easily understand, this is due to the presence in eq.\equ{twoloop} of the second order beta functions. \section{ The Adler-Bardeen theorem } \label{section5} As shown in the previous sections, the anomaly coefficient $r$ in \equ{anslavnov} is constrained by the two conditions \equ{cond1}, \equ{twoloop}, here rewritten for convenience: \begin{equation} \beta_g^{(1)} \pad{r}{g}{\ } +{\ }\sum_i \beta_i^{(1)}\pad{r}{\lambda_i} {\ }={\ }0 \ , \eqn{one} \begin{equation} \displaystyle{\biggl(} \beta_g^{(2)} \pad{r}{g}{\ }+{\ }\sum_i \beta_i^{(2)}\pad{r}{\lambda_i} {\ }+{\ } \beta_g^{(1)} \pad{\hat r}{g}{\ }+{\ }\sum_i \beta_i^{(1)}\pad{\hat r}{\lambda_i} \displaystyle{\biggr)}{\ }={\ }0 \ . \eqn{two} To discuss the consequences of these equations for the coefficients $(r,{\ }\hat r)$ let us first consider the case in which the one loop gauge beta function $\beta^{(1)}_g$ is nonvanishing. In this case, as shown in~\cite{padua,bbbc,ps}, eq.\equ{one} implies the Adler-Bardeen theorem, i.e. that $r=0$. Equation \equ{two} reduces to: \begin{equation} \beta_g^{(1)} \pad{\hat r}{g}{\ }+{\ }\sum_i \beta_i^{(1)}\pad{\hat r}{\lambda_i} {\ }={\ }0 \ , \eqn{hrequ} from which it follows that $\hat r$ also vanishes, thus extending by induction the validity of the Slavnov identity \equ{extsl2} to all orders of perturbation theory. Let us now consider the case in which \begin{equation} \beta_g^{(1)}{\ }={\ }0{\ }, \qquad \beta_g^{(2)}{\ }\ne {\ }0 \ . \eqn{new} Equations \equ{one}, \equ{two} become: \begin{equation} \sum_i \beta_i^{(1)}\pad{r}{\lambda_i} {\ }={\ }0 \ , \eqn{new1} and \begin{equation} \beta_g^{(2)} \pad{r}{g}{\ }+{\ }\sum_i \beta_i^{(2)}\pad{r}{\lambda_i} {\ }+{\ } \sum_i \beta_i^{(1)}\pad{\hat r}{\lambda_i} {\ }={\ }0 \ . \eqn{new2} Eq.\equ{new1} implies that $r$ is independent of the self matter couplings $\lambda_i$. It then follows that eq.\equ{new2} reads: \begin{equation} \beta_g^{(2)} \pad{r}{g}{\ }+ \sum_i \beta_i^{(1)}\pad{\hat r}{\lambda_i} {\ }={\ }0 \ , \eqn{new3} which is easily seen to imply that $r=0$, owing to the fact that $r$ depends only on the gauge coupling $g$ and that the two loop gauge beta function $\beta^{(2)}_g$~\cite{beta2} is not identically zero for vanishing self matter couplings. This concludes the proof of the Adler-Bardeen theorem in the case of a vanishing one loop gauge beta function.
\section{Introduction} The speculation that QCD would have multiple vacua is based on a semiclassical consideration in terms of the Euclidean functional integral. The instanton solutions are interpreted as the quantum tunneling between topologically distinct n-vacua and the true vacuum is expected to be an appropriate linear combination of them \cite{Belavin.et.al:1975,'t Hooft:1976,% Jackiw.et.al:1976,Callan.et.al:1976,Jackiw:1977,Callan.et.al:1978,% Jackiw:1980} \begin{equation} \ket \theta \sim \sum_n e^{i n \theta} \ket n. \end{equation} As a result we obtain a set of vacua parametrized by a real parameter \(\theta\). The world of hadrons is regarded as being constructed on one of them. The temporal gauge (\(\G a0 = 0\)) has been used exclusively when this semiclassical speculation is embodied in terms of the canonical quantization. This gauge fixing condition does not suffer from Gribov's ambiguity and is the most convenient one for properly incorporating large field fluctuations like instantons \cite{Gribov+Singer:1978,Friedman.et.al:1983}. In this gauge, however, the Lorentz invariance or, more generally, the Poincare invariance of the theory is not manifest. For the multiple vacua to be acceptable theoretically, there must be a representation of the Poincare group in each \(\theta\)-sector, the Hilbert space constructed on each $\theta$-vacuum. In this paper, we will verify the Poincare invariance of the temporal gauge quantization and discuss the multiple vacua speculation from the canonical quantization viewpoint. The standard way to take into account the effect of the \(\theta\)-vacuum is to add the Pontryagin density to the lagrangian \cite{Callan.et.al:1976}. We will canonically quantize the lagrangian with the \(\theta\)-term and show the Poincare invariance within the physical space, the subspace of the Hilbert space specified by the requirement of gauge invariance.\footnote{% A similar investigation in the axial gauge has been done in Ref.~\cite{Bars.et.al:1978}.} The important observation that will be shown in this paper is that different values of \(\theta\) in the lagrangian do not automatically lead us to different \(\theta\)-sectors. Instead, they result in different representations of the Poincare group in the physical space. These representations are connected by certain unitary transformations. Thus, all different values of \(\theta\) make the same physical predictions. In order to confirm the semiclassical speculation on the multiple vacua and examine the \(\theta\)-dependence of QCD, it is important to explicitly write ``large'' gauge transformations in terms of field operators. The Poincare invariance verified in this paper also gives a firm theoretical basis to a recently formulated Lorentz invariant sum rule for the $\theta$-dependence, i.e.\ the $\theta$-dependence of the vacuum energy expressed as a sum over matrix elements of a field operator \cite{Kikuchi.et.al:1992}. We will ignore the quark contribution and consider pure gluonic QCD. This paper is organized as follows. Section 2 contains a review of the quantization method and a description of various notations. Strictly speaking, the gauge we will use is not the usual temporal gauge. We will not set the temporal components \(\G a0\) of the gluon fields to be zero but fix them as arbitrary given c-number functions. We use the term ``temporal gauge'' in this more general sense.
The advantage of keeping \(\G a0\) nonzero is that the equations are written in a more Poincare covariant fashion than with \(\G a0 = 0\). In section 3, we explicitly verify that the energy-momentum-vector and the angular-momentum-tensor obey the Poincare algebra when restricted to the physical space. At this point we assume the existence of at least one vacuum state that is invariant under spatial translation and rotation. Section 4 is devoted to discussion. The effect of \(\G a0\) and \(\theta\) on physical observables will be discussed. \section{Yang-Mills equations} The familiar problem in quantizing gauge theory in the canonical hamiltonian formalism is the absence of momenta conjugate to the temporal components \(\G a0 \). We thus cannot determine the time evolution of \(\G a0\) by the Heisenberg equations. The way we adopt in order to circumvent this problem is simple; we take \( \G a0 \) as given c-number functions of the space-time coordinates $x$ and check the independence of physical observables from \(\G a0 \) at the end of the quantization procedure. Since we fix \(\G a0\), we no longer have the freedom of time dependent gauge transformations. The canonical equations of motion for the spatial components $\G ai$ and their conjugate momenta $\CM ai$ now form a closed set in the sense that the first order differential equations completely determine $\G ai(x)$ and $\CM ai(x)$ for all \(x^0\) from any given initial configuration at $x^0 = 0$. The QCD lagrangian with the topological \(\theta\)-term reads% \footnote{Notations: The Minkowski indices are denoted by Greek letters. Their lowering and raising is done by the metric \(g^{\mu\nu}=g_{\mu\nu} = \mbox{diag}( 1, -1, -1, -1 )\). \( \epsilon_{\mu\nu\lambda\sigma} \) is the four dimensional completely antisymmetric tensor; \( \epsilon_{0123} = - \epsilon^{0123} = 1\). Roman letters $i, j, k ...$ are for spatial indices and run from 1 to 3. Three dimensional \(\epsilon_{ijk}\) is defined by \(\epsilon_{123} = 1\). Roman letters \( a, b, c ...\) denote the indices of the SU(3) adjoint representation; they run from 1 to 8. \(f^{abc} \) are the SU(3) structure constants and completely antisymmetric with respect to \(a\),\(b\), and \(c\). Repeated indices of every type are summed over.} \begin{equation} {\cal L} = - {1\over 4} \G a{\mu\nu} G^{a\mu\nu} + \theta {g^2 \over 32\pi^2} \G a{\mu\nu} \tilde G^{a\mu\nu}, \label{eq:LQCD} \end{equation} where \begin{equation} \G a{\mu\nu} = \pad \mu \G a{\nu} - \pad \nu \G a{\mu} + g f^{abc} \G b\mu \G c\nu, \qquad \tilde G^{a\mu\nu} = {1\over 2} \epsilon^{\mu\nu\lambda\sigma} \G a{\lambda\sigma}, \end{equation} and \(g\) is the coupling constant. Following the usual procedure to get the canonical form of the theory, we calculate the conjugate momenta \begin{equation} \CM ai \equiv { \partial {\cal L} \over \partial \Gdot ai } = \Gdot ai - \D i\G a0 - \bar\theta \B ai, \end{equation} where \( \B ai \) stand for the chromo-magnetic fields, \begin{equation} \B ai = {1\over 2} \epsilon_{ijk} \G a{jk},\label{eq:B} \end{equation} \(\D \mu\) the covariant derivative for the adjoint representation, for example, \begin{equation} \D \mu A^a \equiv \pad \mu A^a + g f^{abc} \G b\mu A^c, \end{equation} and \(\bar\theta \equiv ( g^2 / 8\pi^2 ) \theta \).
The hamiltonian is then given by \begin{equation} \tilde {H} \equiv \int d\vec x \left( \CM ai\Gdot ai - {\cal L} \right) = \int d\vec x \left[ {1\over 2} \left( \CM ai + \bar\theta \B ai \right)^2 + {1\over 2} \B ai{}^2 + \CM ai ( \D i \G a0 ) \right], \label{eq:tilH} \end{equation} where the tilde is used to distinguish \( \tilde H \) from its gauge invariant counterpart \(H\); see equations below. The equations of motion for the canonical fields are the Heisenberg equations \begin{eqnarray} \Gdot ai ( x) & = & i \cmtOf{ \tilde H }{ \G ai( x) } \label{eq:Heq1}\\ \dot \CM ai ( x) & = & i \cmtOf{ \tilde H }{ \CM ai ( x ) } \label{eq:Heq2} \end{eqnarray} in terms of the equal time commutation relations. The hamiltonian has an explicit \(x^0\)-dependence through $\G a0$. $\tilde H$ at the same $x^0$ is used for determining the time evolution of $\G ai(x)$ and $\CM ai(x)$ in (\ref{eq:Heq1}) and (\ref{eq:Heq2}). [Throughout this paper, our commutator is taken only at equal time. Thus two operators in commutators are understood to have the same time argument if they have $x^0$-dependence.] We give initial operator configurations for the canonical fields at \( x^0 =0\) such that they obey the commutators \begin{eqnarray} \cmtOf {\CM ai \ofvec x} {\G bj \ofvec y} & = & -i \delta_{ij}\delta^{ab}\delta( \vec x - \vec y\, ) \label{eq:CC1} \\ \cmtOf {\CM ai \ofvec x} {\CM bj \ofvec y} & = & \cmtOf{\G ai \ofvec x} {\G bj \ofvec y} = 0. \label{eq:CC2} \end{eqnarray} The equations (\ref{eq:Heq1}) and (\ref{eq:Heq2}) determine the canonical fields for all $x$ and the solutions \( \G ai(x) \) and \(\CM ai (x)\) obey (\ref{eq:CC1}) and (\ref{eq:CC2}) for any \(x^0\). It is convenient to define the operator fields \(\E ai\), chromo-electric field, by \begin{equation} \E ai(x) \equiv \CM ai(x) + \bar\theta \B ai(x) \label{eq:E} \end{equation} and to write commutators and Heisenberg equations in terms of $\G ai$, $\E ai$, and $\B ai$. Using (\ref{eq:CC1})--(\ref{eq:E}) we obtain \begin{eqnarray} \cmtOf{\G ai \ofvec x} {\G bj \ofvec y} & = & \cmtOf {\B ai \ofvec x}{ \G bj \ofvec y} = \cmtOf{ \B ai \ofvec x}{\B bj \ofvec y} = 0, \label{eq:ETC1}\\ \cmtOf{ \E ai \ofvec x}{ \G bj \ofvec y } & = & -i \delta_{ij}\delta^{ab}\delta( \vec x - \vec y\, ), \\ \cmtOf{ \E ai \ofvec x}{ \B bj \ofvec y } & = & i \epsilon_{ijk}\left\{ \delta^{ab}\pad k\delta( \vec x - \vec y\, ) - g f^{abc} \G c k \ofvec y \delta( \vec x - \vec y\, ) \right\} \\ \cmtOf{\E ai \ofvec x}{\E bj \ofvec y} & = & 0, \label{eq:ETC2} \end{eqnarray} while the hamiltonian becomes \begin{equation} \tilde H = \int d\vec x \left\{ {1\over 2} \E ai {}^2 + {1\over 2} \B ai {}^2 - ( \D i \E ai )\G a0 \right\}. \label{eq:tH} \end{equation} Here we have used the equations \begin{equation} \D i \CM ai = \D i \E ai, \label{eq:ID}\end{equation} which are the consequences of Bianchi identities \begin{equation} \epsilon^{\mu\nu\lambda\sigma} \D\nu \G a {\lambda\sigma} = 0 \label{eq:Bianchi} \end{equation} for \( \mu = 0 \). We have also neglected a surface integral at spatial infinity in Eq.~(\ref{eq:tH}). In this paper we neglect similar surface integrals under the assumption that they cannot change the dynamics. 
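Explicitly, the \( \mu = 0 \) component of (\ref{eq:Bianchi}) contains no time derivative and gives \begin{equation} 0 = \epsilon^{0\nu\lambda\sigma} \D\nu \G a{\lambda\sigma} = - \epsilon_{ijk} \D i \G a{jk} = - 2 \D i \B ai , \end{equation} so that \( \D i \B ai = 0 \) and, by the definition (\ref{eq:E}), \( \D i \CM ai = \D i ( \E ai - \bar\theta \B ai ) = \D i \E ai \).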
The Heisenberg equations now read \begin{eqnarray} \Gdot ai & = & i \cmtOf {\tilde H} { \G ai } = \E ai + \D i \G a0 \label{eq:EforG}\\ \Edot ai & = & i\cmtOf{\tilde H}{\E ai } = -\epsilon_{ijk}\D j\B ak - gf^{abc} \G b0 \E ci, \label{eq:EforE} \end{eqnarray} and \begin{equation} \Bdot ai = i \cmtOf {\tilde H} {\B ai} = \epsilon_{ijk} \D j \E ak - g f^{abc} \G b0 \B ci. \label{eq:EforB} \end{equation} Eqs.~(\ref{eq:EforG}) clearly mean that the operator fields \( \E ai \) defined by (\ref{eq:E}) can be identified with the electric fields \( \G a{0i} \). Eqs.~(\ref{eq:EforE}) are the operator version of the Yang-Mills equations \begin{equation} \D \mu G^{a\mu\nu} = 0 \label{eq:YM} \end{equation} for \(\nu = i\). Eqs.~(\ref{eq:EforB}), which are consistent with (\ref{eq:B}) and (\ref{eq:EforG}), are the spatial components of the Bianchi identities (\ref{eq:Bianchi}). The time component of the Yang-Mills equations, \begin{equation} \GL a \equiv \D i\E ai = 0, \label{eq:GL} \end{equation} cannot be satisfied as an operator equation. Instead, it should be interpreted as a constraint on the physical states $\ket{\mbox{ }} $ \cite{Jackiw:1980,Weyl}. They are defined by \begin{equation} \GL a \ofvec x \ket{\mbox{ }} = 0 \label{eq:phys} \end{equation} for all $a$ and \( \vec x \) with \(\GL a\) at \(x^0 = 0\). Although the time derivative of $\GL a$ is not zero, \begin{equation} \dot \GL a = i \cmtOf{\tilde H} {\GL a} = - gf^{abc} \G b0 \GL c, \end{equation} its matrix elements in the physical space are zero. This means that (\ref{eq:phys}) holds for arbitrary \(x^0\). For the verification of Poincar\'e invariance, we need one more definition. We refer to the hermitian operators that map the physical states into the physical ones as physical. A physical operator \({\cal O}_{\mbox{\scriptsize ph}}\) must satisfy \begin{equation} \cmtOf{ \GL a ( x)}{{\cal O}_{\mbox{\scriptsize ph}}} \ket{\mbox{ }} = 0 \end{equation} for arbitrary $a$, \( x\) and physical \(\ket{\mbox{ }}\). This definition means that the matrix representation of \({\cal O}_{\mbox{\scriptsize ph}}\) is block diagonal with respect to the physical space and the unphysical space (the orthogonal complement of the physical space). Let us refer to the sub-matrix of \({\cal O}_{\mbox{\scriptsize ph}}\) in the physical space as the physical component. The operators \(\GL a\) generate local gauge transformations: they satisfy \begin{eqnarray} \cmtOf{\GL b\ofvec x}{\G ai\ofvec y} &=& -i \left\{gf^{abc} \G ci \ofvec y \delta( \vec x - \vec y\, ) + \delta^{ab}\pad i \delta( \vec x - \vec y\, ) \right\},\\ \cmtOf{\GL b\ofvec x}{\E ai\ofvec y} & = & -i gf^{abc} \E ci \ofvec y \delta( \vec x - \vec y\, ), \label{eq:GonE}\\ \cmtOf{\GL b\ofvec x}{\B ai\ofvec y} & = & -i gf^{abc} \B ci \ofvec y \delta( \vec x - \vec y\, ), \end{eqnarray} and the local algebra of the gauge group \begin{equation} \cmtOf{ \GL a \ofvec x}{\GL b \ofvec y} = igf^{abc}\GL c\ofvec y \delta( \vec x - \vec y\, ). \end{equation} Thus the gauge invariant operators, the operators that commute with \(\GL a\), are physical. \section{The Poincar\'e algebra} Now we will explicitly show the Poincar\'e invariance of our quantization scheme, i.e.\ we will show the existence of a Poincar\'e group representation in the physical space.
We start with the classical expression of the energy momentum tensor \begin{equation} \EMT \mu\nu = - G^{a \mu\lambda} G^{a\nu}{}_\lambda + {1\over 4} g^{\mu\nu} G^a_{\lambda\sigma} G^{a\lambda\sigma} \end{equation} and examine its properties in terms of the field operators. The corresponding operator expressions for \(\EMT \mu\nu \) are \begin{eqnarray} \EMT 00 (x) & = & {1\over 2} \left\{ \E ai (x) ^2 + \B ai (x) ^2 \right\} \\ \EMT 0i (x) & = & - \epsilon_{ijk} {1\over 2} \left\{ \E aj (x) \B ak (x) + \B ak (x) \E aj(x) \right\} \\ \EMT ij (x) & = & {1\over 2} \delta_{ij} \left\{ \E ai (x) ^2 + \B ai (x) ^2 \right\} - \E ai(x) \E aj(x) -\B ai(x) \B aj(x), \end{eqnarray} where we have symmetrized the expression for \(\EMT 0i \) with respect to the order of $\E aj$ and $\B ak$ because they do not commute as operator fields; we will use the same prescription for handling the order of operators whenever this becomes a problem. $\EMT 0i$ satisfy $ \EMT 0i = \EMT i0 $ and are hermitian. We will prove that the unitary transformations \begin{equation} \Lambda(\lambda, \omega) \equiv \exp\left\{ i \lambda_\mu P^\mu + i\omega_{\mu\nu} M^{ \mu\nu } \right\}, \end{equation} defined by the energy-momentum vector \(P^\mu\) \begin{equation} P^\mu \equiv \int d\vec x \, \EMT 0\mu (x) \label{eq:P} \end{equation} and the angular-momentum tensor $M^{\mu\nu}$ \begin{equation} M^{\mu\nu} \equiv \int d\vec x \, \left( x ^\mu \EMT 0\nu (x) - x^ \nu \EMT 0\mu (x) \right) \label{eq:M} \end{equation} with real parameters $\lambda_\mu $ and $\omega_{\mu\nu} $, constitute a representation of the Poincar\'e group in the physical space. The operators $\EMT \mu\nu$ are gauge invariant and, thus, physical. They satisfy the divergence equations \begin{eqnarray} \dot \EMT 00 + \pad j \EMT 0j & = & 0 \\ \dot \EMT 0i + \pad j \EMT ji & = & - \E ai \GL a, \end{eqnarray} which are simple consequences of (\ref{eq:EforE}) and (\ref{eq:EforB}). Note that the operators $ \E ai \GL a $ are hermitian and physical: $\E ai $ and $\GL a$ commute for the same color index $a$ no matter what their spatial coordinates are (see Eq.~(\ref{eq:GonE})). Moreover, their physical components are zero: \begin{equation} \E ai \GL a \ket{\mbox{ }} = 0 \quad \mbox{for any physical } \ket{\mbox{ }}. \end{equation} Thus, although some of \(P^\mu\) and $M^{\mu\nu}$ (which have \(\EMT 0i\) in their expressions) are $x^0$-dependent, their physical components are $x^0$-independent. The unitary transformations $ \Lambda(\lambda, \omega)$ then have a unique operation in the physical space, no matter what time slice of Minkowski space we use for evaluating the integrals (\ref{eq:P}) and (\ref{eq:M}). The evaluation of the commutators of \(\EMT \mu\nu \) with the operator fields is straightforward.
By Eqs.~(\ref{eq:ETC1})--(\ref{eq:ETC2}), we obtain \begin{eqnarray} \cmtOf{\EMT 00 \ofvec x}{\G a i \ofvec y} & = &-i \E ai\ofvec y \delta( \vec x - \vec y\, ) \label{eq:COfEMT1}\\ \cmtOf{\EMT 0k \ofvec x}{\G a i \ofvec y} & = & i \epsilon_{ijk}\B a j\ofvec y\delta( \vec x - \vec y\, ) \label{eq:example}\\ \cmtOf{\EMT 00\ofvec x}{\E ai\ofvec y} & = & i \epsilon_{ijk} \left\{ \left(\D j\B ak\ofvec y\right)\delta( \vec x - \vec y\, ) - \B ak\ofvec y \pad j \delta( \vec x - \vec y\, ) \right\} \nonumber\\ \\ \cmtOf{\EMT 0k\ofvec x}{\E ai\ofvec y} & = & i ( \delta_{km} \delta_{il} - \delta_{ki}\delta_{lm} ) \nonumber\\ && \times \left\{\left(\D m\E al\ofvec y\right) \delta( \vec x - \vec y\, ) - \E al\ofvec y\pad m\delta( \vec x - \vec y\, )\right\} \nonumber\\ \\ \cmtOf{\EMT 00\ofvec x}{\B ai\ofvec y} &= & - i\epsilon_{ijk} \left\{ \left(\D j \E ak\ofvec y\right) \delta( \vec x - \vec y\, ) - \E ak\ofvec y \pad j\delta( \vec x - \vec y\, ) \right\} \nonumber\\ \\ \cmtOf{\EMT 0k\ofvec x}{\B ai\ofvec y} & =& i ( \delta_{kj} \delta_{il} - \delta_{ki}\delta_{lj} ) \nonumber\\ &&\times \left\{\left( \D j\B al\ofvec y \right) \delta( \vec x - \vec y\, ) - \B al\ofvec y \pad j\delta( \vec x - \vec y\, ) \right\}. \label{eq:COfEMT2} \end{eqnarray} Using these results, we can calculate the commutators of \(P^\mu\). But these are {\em not} the commutators the generators of translations should obey; for example, from (\ref{eq:example}) we get \begin{equation} \cmtOf {P^k} {\G ai \ofvec x} = - i \partial^k \G ai - i \D i \G ak, \end{equation} which has a residual second term on the right-hand side. A careful inspection of Eqs.~(\ref{eq:COfEMT1})--(\ref{eq:COfEMT2}), however, tells us that a simple modification of \(\EMT 0\mu \) gives the right form for the generators of translations. We define \begin{equation} \EMTtilde 0\mu = \EMT 0\mu - \Gtilde \mu \end{equation} with hermitian $\Gtilde \mu$, \begin{equation} \Gtilde \mu (x) \equiv {1\over 2} \left\{ \GL a (x) G^{a\mu}(x) + G^{a\mu}(x) \GL a (x) \right\}.
\label{eq:Gtil} \end{equation} The commutators of \(\EMTtilde 0\mu \) are then \begin{eqnarray} \cmtOf{\EMTtilde 00 \ofvec x}{\G a i \ofvec y} & = &-i \left\{ \E ai\ofvec y + \D i \G a0 \ofvec y \right\} \delta( \vec x - \vec y\, ) + i \G a0 \ofvec y \pad i \delta( \vec x - \vec y\, ) ,\nonumber\\ \label{eq:COftEMT1}\\ \cmtOf{\EMTtilde 0k \ofvec x}{\G a i \ofvec y} & = & i \left( \pad k \G ai \ofvec y \right) \delta( \vec x - \vec y\, ) -i \G ak\ofvec y \pad i \delta( \vec x - \vec y\, ), \\ \cmtOf{\EMTtilde 00\ofvec x}{\E ai\ofvec y} & = & i \epsilon_{ijk} \left\{ \left(\D j\B ak\ofvec y\right)\delta( \vec x - \vec y\, ) - \B ak\ofvec y \pad j \delta( \vec x - \vec y\, ) \right\} \nonumber\\ && + igf^{abc} \G b0 \ofvec y \E ci \ofvec y \delta( \vec x - \vec y\, ), \\ \cmtOf{\EMTtilde 0k\ofvec x}{\E ai\ofvec y} & = & i \left( \pad k \E ai \ofvec y \right) \delta( \vec x - \vec y\, ) \nonumber\\ && - i ( \delta_{km} \delta_{il} - \delta_{ki}\delta_{lm} ) \E al\ofvec y \pad m \delta( \vec x - \vec y\, ), \\ \cmtOf{\EMTtilde 00\ofvec x}{\B ai\ofvec y} &= & - i\epsilon_{ijk} \left\{ \left(\D j \E ak\ofvec y\right) \delta( \vec x - \vec y\, ) - \E ak\ofvec y \pad j\delta( \vec x - \vec y\, ) \right\} \nonumber\\ && +ig f^{abc} \G b0 \ofvec y \B ci \ofvec y \delta( \vec x - \vec y\, ), \\ \cmtOf{\EMTtilde 0k\ofvec x}{\B ai\ofvec y} & =& i\left( \pad k \B ai \ofvec y \right) \delta( \vec x - \vec y\, ) \nonumber \\ && - i ( \delta_{kj} \delta_{il} - \delta_{ki}\delta_{lj} ) \B al\ofvec y \pad j\delta( \vec x - \vec y\, ). \label{eq:COftEMT2} \end{eqnarray} Thus the operators $\Ptilde \mu$, defined in a similar way to (\ref{eq:P}) with $\EMTtilde 0\mu$, satisfy \begin{eqnarray} \cmtOf {\Ptilde \mu} {\G ai ( x)} & = & - i \partial^\mu \G ai ( x), \label{eq:trG}\\ \cmtOf {\Ptilde \mu} {\E ai ( x)} & = & - i \partial^\mu \E ai ( x), \\ \cmtOf {\Ptilde \mu} {\B ai ( x)} & = & - i \partial^\mu \B ai ( x). \label{eq:trB} \end{eqnarray} Since \( \Ptilde \mu \) are not \(x^0\)-independent, these relations do not necessarily mean that \(\Ptilde \mu\) are the generators of finite translations. But at least they generate infinitesimal translations for all operator fields. Note that \(\Ptilde 0 = \tilde H \), so that Eqs.~(\ref{eq:trG})--(\ref{eq:trB}) for \(\mu = 0\) are the Heisenberg equations (\ref{eq:EforG})--(\ref{eq:EforB}). It is interesting to notice that \(\Mtilde \mu\nu\), defined by (\ref{eq:M}) with $\EMT 0\mu $ replaced by $\EMTtilde 0\mu$, also satisfy the commutation relations of the generators of infinitesimal Lorentz transformations. Indeed, we obtain \begin{eqnarray} \cmtOf{\Mtilde \mu\nu}{\G ai (x)} = - i \Delta^{[\mu\nu]} \G ai (x), \\ \cmtOf{\Mtilde \mu\nu}{\E ai (x)} = - i \Delta^{[\mu\nu]} \E ai (x), \label{eq:rE}\\ \cmtOf{\Mtilde \mu\nu}{\B ai (x)} = - i \Delta^{[\mu\nu]} \B ai (x), \label{eq:rB} \end{eqnarray} where we have introduced a compact notation \(\Delta^{[\alpha\beta]}\) for infinitesimal Lorentz transformations.
They stand for \begin{equation} \Delta^{[\alpha\beta]} \G a\mu = \chi^{[\alpha\beta]}_{\mu\nu} G^{a\nu} + \chi^{[\alpha\beta]}_{\lambda\sigma}x^\lambda \partial^\sigma \G a\mu, \end{equation} for the vector potential and \begin{equation} \Delta^{[\alpha\beta]} \G a{ \mu\nu} = \chi^{[\alpha\beta]}_{\mu\lambda} G^{a\lambda}{}_\nu + \chi^{[\alpha\beta]}_{\nu\lambda} G^a{}_\mu{}^\lambda{} + \chi^{[\alpha\beta]}_{\lambda\sigma}x^\lambda \partial^\sigma \G a {\mu\nu} \end{equation} for the field strength; \(\chi^{[\alpha\beta]}_{\mu\nu} \) is the anti-symmetric parameter of the \([\alpha\beta]\)-Lorentz transformation, \begin{equation} \chi^{[\alpha\beta]}_{\mu\nu} = \delta^\alpha_\mu \delta^\beta_\nu - \delta^\alpha_\nu \delta^\beta_\mu. \end{equation} Note further that although $\Gtilde \mu$ are not gauge invariant, they are physical: \begin{equation} \cmtOf{ \GL a ( x)}{\Gtilde k ( y)} = i \GL a (y) \pad k\delta( \vec x - \vec y\, ). \end{equation} Thus \(\Ptilde \mu \) and \(\Mtilde \mu\nu \) are also physical. Now we will prove that the physical components of \(\Ptilde \mu \) and \(\Mtilde \mu\nu\) are the same as those of $P^\mu$ and $M^{\mu\nu}$, respectively. We assume that there is at least one vacuum state $\ket 0$ in the physical space and that it is invariant under spatial translations and rotations generated by \(\Ptilde i\) and \(\Mtilde jk\) evaluated at \(x^0 = 0\). Note that the finite transformations \begin{equation} \tilde \Lambda (\lambda, \omega) = \exp \left\{ i \lambda_i \Ptilde i + i\omega_{jk} \Mtilde jk \right\} \end{equation} do not change the time coordinate when operating on the field operators. Thus the infinitesimal relations that \(\Ptilde i\) and \(\Mtilde jk\) satisfy, (\ref{eq:trG})--(\ref{eq:rB}), are sufficient to prove that the \(\tilde \Lambda \) represent the three dimensional translation-rotation group. Under the above assumption we will prove \begin{equation} \Gtilde \mu ( x) \ket{\mbox{ }}= 0 \label{eq:Gmu} \end{equation} for arbitrary physical \(\ket{\mbox{ }}\) and $x$. For \(\mu = 0\), (\ref{eq:Gmu}) is obvious: since $\G a0$ is a c-number function, the definitions (\ref{eq:phys}) for physical states and (\ref{eq:Gtil}) for $\Gtilde 0$ imply it. For $\mu = i$, the time derivative of $\Gtilde i$, \begin{equation} \dot {\Gtilde i} = - \left( \E ai + \pad i \G a0 \right) \GL a, \end{equation} has zero physical components: \begin{equation} \dot {\Gtilde i} \ket{\mbox{ }} = 0. \end{equation} Thus we only need to prove (\ref{eq:Gmu}) at $x^0 = 0$. Let us write $\Gtilde i$ at $x^0 = 0$ as \begin{equation} \Gtilde i \ofvec x = c^i - \G ai \ofvec x \GL a \ofvec x, \end{equation} where \begin{equation} c^i \equiv - { 1\over 2} \cmtOf{\GL a \ofvec x }{\G ai\ofvec x } = \left. {8\over 2} i \,\pad i \delta\ofvec x \right|_{\vec x = 0}. \end{equation} Although \( c^i\) has an ill-defined expression and needs an appropriate regularization, it is at most a c-number. Thus we can use the vacuum \(\ket 0\) to determine \( c^i\): \begin{equation} \Gtilde i \ofvec x \ket 0 = c^i \ket 0. \end{equation} Applying \(\tilde \Lambda\) to both sides of this equation and using the assumption that \(\ket 0\) is invariant under \(\tilde\Lambda \), we get \( c^i = R^i{}_j\,c^j \) for an arbitrary SO(3) matrix $R^i{}_j$; that is, $c^i = 0$. Thus the physical components of \(\Gtilde \mu (x) \) are zero, and \(P^\mu\) and \(\Ptilde \mu\), or \(M^{\mu\nu}\) and \(\Mtilde \mu\nu\), have the same physical components.
Using (\ref{eq:trG})--(\ref{eq:rB}) and \( H \equiv P^0 \), we obtain \begin{eqnarray} \cmtOf{\tilde H }{ H } & = & \cmtOf{\Ptilde j} H = \cmtOf {\Ptilde j} {P^k} = 0, \label{eq:Po1}\\ \cmtOf{\Mtilde jk} H & = & 0, \nonumber \\ \cmtOf{\Mtilde jk}{P^i} & = & i \left( \delta_{ij} P^k - \delta_{ik} P^j\right), \nonumber \\ \cmtOf{\Mtilde 0k} H & = & -i P^k, \nonumber\\ \cmtOf{\Mtilde 0k} {P^i} & = & -i \delta_{ik} H -i\int d\vec x x^k \E ai \GL a, \label{eq:Po2}\\ \cmtOf{\Mtilde jk}{M^{lm}} & =& i\left( \delta_{jl}M^{km} - \delta_{jm}M^{kl} + \delta_{km}M^{jl} - \delta_{kl}M^{jm}\right), \nonumber\\ \cmtOf{\Mtilde 0k} {M^{ij}} & = & i \left( \delta_{jk} M^{0i} - \delta_{ik} M^{0j} \right) -i \int d\vec x x^k \left( x^i \E aj \GL a- x^j \E ai \GL a \right), \nonumber \\ \cmtOf{\Mtilde 0k} {M^{0i}} & = & i M^{ik} -i \int d\vec x x^0 x^k \E ai \GL a. \label{eq:Po3} \end{eqnarray} Since the physical operators have no matrix elements between physical states and unphysical states, the relations (\ref{eq:Po1})--(\ref{eq:Po3}) hold for their physical components as well. Recall that $\E ai \GL a$ has zero physical components. Thus the physical components of \( M^{\mu\nu} (\Mtilde \mu\nu) \) and \(P^\mu (\Ptilde \mu )\) are $x^0$-independent%
\footnote{ Although the physical component of \(M^{0k} \) does not commute with that of \(\tilde H \), the explicit \(x^0\)-dependence in its definition cancels the \(x^0\)-dependence from the commutator in the Heisenberg equations.} and satisfy the Poincar\'e algebra. The operations of the unitary transformations $\Lambda(\lambda, \omega)$ in the physical space are unambiguously determined, and they represent the Poincar\'e group. \section{Discussion} Using the fact that the physical components of $P^\mu$ commute with each other, we can obtain basis vectors in the physical space as their eigenvectors. Let us refer to this basis as the physical basis. The physical basis and the operator solutions for the operator fields $\CM ai (x)$ (or $\E ai (x)$) and $\G ai(x)$ are all we need to know to make physical predictions. In particular, the matrix elements of physical operators with respect to the physical basis are related to gauge invariant physical predictions. We first discuss the effect of \(\G a0\) on physical observables. Since \(\G a0\) is an arbitrary function of the space-time coordinate \( x\), it would have introduced a violation of Poincar\'e invariance had it coupled to a physical degree of freedom. Conversely, our manifest construction of the Poincar\'e generators implies that physical predictions are independent of the specific configuration of \(\G a0\) that we fix when we start the quantization procedure. This statement is ensured by the following two facts. 1) The physical basis is \(\G a0\)-independent: its vectors are the eigenvectors of the \(x^0\)-independent physical components of \(P^\mu\), and they are determined at \(x^0 = 0\) by the initial configurations of \(\CM ai\) and \(\G ai\) and the parameter \(\theta\). 2) The time evolution of the physical components of physical operators \({\cal O}_{\mbox{\scriptsize ph}}\) is \(\G a0\)-independent: \(\G a0\) can couple only through the Heisenberg equation \begin{equation} \dot {\cal O}_{\mbox{\scriptsize ph}} = i \cmtOf{ \tilde H}{ {\cal O}_{\mbox{\scriptsize ph}}} \end{equation} where it always appears in conjunction with \(\GL a\), whose physical components are zero. Thus the matrix elements of \({\cal O}_{\mbox{\scriptsize ph}}\) with respect to the physical basis are \(\G a0\)-independent.
Now let us turn to the \(\theta\)-dependence. We notice that all the \(\theta\)-dependence is concentrated in the definition of \( \E ai\), (\ref{eq:E}); although we started with the lagrangian with the \(\theta\)-term, there is no explicit \(\theta\)-dependence in the commutators (\ref{eq:ETC1})--(\ref{eq:ETC2}), the hamiltonian (\ref{eq:tH}), or the Heisenberg equations (\ref{eq:EforG})--(\ref{eq:EforB}). Thus we concentrate on the effect of changing \(\theta\) in (\ref{eq:E}). Specifically, we take the following picture. We regard the canonical fields \(\CM ai \ofvec x\) (at \(x^0 = 0\)) as having \(\theta\)-independent operations in the Hilbert space, as the operators that act on the state vectors. The operators \(\E ai\ofvec x \), therefore, have \(\theta\)-dependent operations. (See figure 1, where we represent \(\E ai\) at different values of \(\theta\) by arrows, as they map one state vector to another.) The physical space is the same for all values of \(\theta\): because of Eq.~(\ref{eq:ID}), the constraints on the physical space for different \(\theta\) are identically \(\D i\CM ai\ket{\mbox{ }} = 0 \). Now our problem is how physical predictions depend on the \(\theta\)-dependent initial operator configurations \(\E ai\ofvec x\), while the physical space is \(\theta\)-independent and the equations of motion have no explicit \(\theta\)-dependence. The key to answering this question is the transformation \begin{equation} T ({\varphi} ) \equiv e^{i{\varphi } q } \end{equation} defined by the so-called topological charge \begin{equation} q \equiv \left. \int d\vec x\, K^0(x)\right|_{x^0 = 0} \end{equation} and a real parameter \(\varphi\). Here \( K^0\) is the time component of the current \begin{equation} K^\mu = {g^2 \over 32\pi^2 } \epsilon^{\mu\nu\lambda\sigma} \left( \G a\nu \G a{\lambda\sigma} - { g\over 3} f^{abc} \G a\nu\G b\lambda \G c\sigma \right) \end{equation} whose divergence is the Pontryagin density $\pad \mu K^\mu = (g^2 /32\pi^2) \G a{\mu\nu}\tilde G ^{a\mu\nu} $. [This divergence equation holds even for the operator fields by simply setting $\G a{0i} = \E ai $ and using (\ref{eq:EforG}).] \(T(\varphi)\) transforms the operator fields (at \(x^0 = 0\)) as \begin{eqnarray} T( {\varphi} ) \E ai\ofvec x T( {\varphi} )^{-1} & = & \E ai \ofvec x + {g^2\over 8\pi^2} {\varphi} \B ai\ofvec x, \\ T( {\varphi} ) \G ai\ofvec x T( {\varphi} )^{-1} & = & \G ai\ofvec x, \end{eqnarray} i.e., it connects the initial operator configurations at \(\theta\) with those at \(\theta + \varphi\). Let \({\cal O}_\theta (x) \) denote collectively the operator solutions, \(\E ai(x) \) and \(\G ai(x) \), of the Heisenberg equations with the initial operator configurations at \(\theta\). Then define \begin{equation} {\cal O}_{\theta + \varphi }(x) = T (\varphi) {\cal O}_\theta (x) T (\varphi)^{-1}. \label{eq:rop} \end{equation} \({\cal O}_{\theta +\varphi }(x) \) also satisfies the equations of motion (\ref{eq:EforG})--(\ref{eq:EforE}) and has the appropriate initial configurations for \(\theta + \varphi\). [Note that the time evolution of \( {\cal O}_{\theta + \varphi }\) is governed by the hamiltonian written in terms of \({\cal O}_{\theta+\varphi}\) themselves.] Thus the operator fields at different values of \(\theta\) are related by (\ref{eq:rop}). Next, we consider the effect of changing \(\theta\) on the physical basis.
The operators \( P^\mu\) and \(M^{\mu\nu}\) are written in terms of \(\E ai\) (and \(\B ai\)) and, thus, they also have \(\theta\)-dependent operations. In particular, different values of \(\theta\) lead to different representations of the Poincar\'e group and different sets of physical basis vectors. Obviously, \(P^\mu\) or \(M^{\mu\nu}\) at \(\theta\) and \(\theta +\varphi\) are related by \begin{eqnarray} P^\mu_{\theta +\varphi} = T(\varphi) P^\mu_\theta T(\varphi)^{-1},\\ M^{\mu\nu}_{\theta +\varphi} = T(\varphi) M^{\mu\nu}_\theta T(\varphi)^{-1}. \end{eqnarray} Since \( q \) is physical, \begin{equation} \cmtOf{ \GL a \ofvec x } {q} = i {g^2 \over 32\pi^2 } \int d\vec y\, \epsilon_{ijk} (\pad j \G ak\ofvec y ) \pad i \delta( \vec x - \vec y\, ) = 0, \end{equation} the transformation \(T(\varphi)\) is unitary within the physical space. Thus the physical basis at \(\theta +\varphi \), \(\ket{\mbox{ }}_{\theta +\varphi}\), is related to that at \(\theta\), \(\ket{\mbox{ }}_\theta\), by \begin{equation} \ket{\mbox{ }}_{\theta +\varphi} = T(\varphi) \ket{\mbox{ }}_\theta. \label{eq:rket} \end{equation} Eqs.~(\ref{eq:rop}) and (\ref{eq:rket}) assert that two theories with different values of \(\theta\) yield the same physical predictions and have the same physical content. A physical quantity with a nontrivial $\theta$-dependence can occur only if we can further restrict the physical space to a smaller subspace where the transformation $ T(\varphi) $ loses its unitarity. By the requirement of Poincar\'e invariance, this restricted space must be large enough to retain the Poincar\'e algebra generated by $P^\mu$ and $M^{\mu\nu}$. Usually assumed in the literature \cite{Jackiw.et.al:1976,Callan.et.al:1976,Jackiw:1977,Callan.et.al:1978,Jackiw:1980} is the existence of the ``large'' gauge transformation $\Omega$ that transforms the operator fields as \begin{eqnarray} t^a \, \Omega \G ai\ofvec x \Omega^{-1} & = & h\ofvec x ^{-1} t^a h\ofvec x \,\G ai\ofvec x + {i\over g} h\ofvec x ^{-1} \pad i h\ofvec x, \label{eq:OmegaG}\\ t^a\, \Omega \E ai\ofvec x \Omega^{-1} & = & h\ofvec x^{-1} t^a h\ofvec x \, \E ai \ofvec x,\label{eq:OmegaE} \end{eqnarray} where the $t^a$ are the 3\(\times\)3 hermitian traceless generators of SU(3) and $h\ofvec x $ denotes a representative local gauge transformation with unit winding number as a map S$^3\rightarrow$SU(3). Once we have the explicit $\Omega$, we can further restrict the physical space by the requirement \begin{equation} \Omega \ket{\mbox{ }} = \ket{\mbox{ } }. \label{eq:res}\end{equation} Since \(\Omega\) commutes with \(P^\mu\) and \(M^{\mu\nu}\), the Poincar\'e group is representable within the physical space further restricted by (\ref{eq:res}). But \(\Omega\) does not commute with \(q\) and, thus, \(T(\varphi)\) is no longer a unitary transformation in this restricted physical space. $\Omega$ cannot be obtained simply by accumulating infinitesimal gauge transformations generated by \(\GL a\). Rather, the reason why QCD may have a nontrivial \(\theta \)-dependence is that the large gauge transformation is disconnected from those generated by \(\GL a\) \cite{Jackiw:1980,Wu.et.al:1985}. As far as we know, a satisfactory operator expression for \(\Omega\) has not been given yet. It is important to construct \(\Omega\) explicitly in order to verify the multi-vacua speculation and to examine further the \(\theta\)-dependence of QCD. \medskip \begin{center} \bf Acknowledgement \end{center} The author thanks J. Wudka for helpful discussions.
This work is in part supported by the US Department of Energy under Contract No. DE-AT03-87ER40327.
\section{Introduction and summary} The discovery of the top quark has been anticipated for many years at accelerators of increasing energy. Present hopes are based on analyses of high precision data and the standard theory, see \cite{Rolandi}. The top is the first heavy quark whose mass can be measured to better than 1\% precision at a future $e^+e^-$ collider. Therefore, measurements of its width will not only test the standard model at the Born level, but also the QCD radiative corrections, which are of order 10\% \cite{JK1}~. This is in contrast to $b$ and $c$ quarks, where uncertainties in the masses and non-perturbative effects preclude this possibility. Recently, the complete one loop electroweak corrections to the total rate have also been calculated \cite{DS,Gad}~, and turned out to be rather small (1-2\%)~. Nevertheless, it has been claimed \cite{DS,Gad} that a precise measurement of the top width may serve as a consistency check for the electroweak sector of the standard model. In fact a number of calculations have been performed studying electroweak effects on the top width in theories extending the standard model \cite{GH}~. In particular it has been found that the additional corrections from the extended Higgs sector of the minimal supersymmetric standard model are significantly smaller than 1\%. In this article we give the standard model predictions for the top quark width. Our results are different from those in \cite{DS,Gad} because we include the effect of the $W$ boson width, considered in \cite{JK1} and neglected in later works. This effect is comparable in size to the electroweak corrections. A number of intrinsic uncertainties remain. The present uncertainty in $\alpha_s$ and the ignorance concerning the QCD correction of order ${\cal O}({\alpha_s}^2)$ limit the accuracy of the prediction to about 1-2\%. One also has to take into account the errors, both experimental and theoretical, in the determination of the top mass. At present the best place for a precise determination of $\Gamma_t$ is believed to be the threshold region for $t\bar t$ production in $e^+e^-$ annihilation. The most optimistic current estimate of the relative precision is 5\% \cite{Fujii}~. Therefore, it is mandatory to give a theoretical prediction which, like the one presented in this article, is accurate to the order of 1\%. \section{QCD corrected decay rate} We assume throughout three families of quarks. Thus the effects of CKM mixing are negligible.
The QCD corrected width of the top quark is given by the following formula \cite{JK1}: \begin{eqnarray} \Gamma^{(1)} = {{{\rm G}_F}^2 {m_t}^5\over 192\pi^3} \left( 9 + 6{\alpha_s\over\pi}\right) \int^{(1-\epsilon)^2}_0 {{\rm d}y\over (1-y/\bar y)^2+\gamma^2} \left[ {\rm F}_0(y,\epsilon) - {2\alpha_s\over 3\pi} {\rm F}_1(y,\epsilon) \right] \label{eq:1} \end{eqnarray} where $$\bar y= \left( M_W/m_t\right)^2\ ,\qquad\epsilon= m_b/m_t\ , \qquad\gamma=\Gamma_{W}/M_W$$ and \begin{equation} \Gamma_{W}= {{\rm G}_F {M_W}^3\over 6\sqrt{2}\pi} \left( 9 + 6 {\alpha_s\over\pi}\right) \label{eq:2} \end{equation} The functions ${\rm F}_0(y,\epsilon)$ and ${\rm F}_1(y,\epsilon)$ read \footnote{We slightly simplify an original formula from [1] using relations between dilogarithms.}
\def\Alambd{\lambda(1,y,\epsilon^2)}
\begin{equation} {\rm F}_0(y,\epsilon) = {1\over 2}\sqrt{\Alambd}\,{\cal C}_0(y,\epsilon) \label{eq:3} \end{equation} where \begin{equation} \lambda(u,v,w) = u^2+v^2+w^2- 2(uv+vw+wu) \label{eq:4} \end{equation} \begin{equation} {\cal C}_0(y,\epsilon) = 4[(1-\epsilon^2)^2+y(1+\epsilon^2)-2y^2] \quad, \label{eq:5} \end{equation} and \begin{eqnarray} {\rm F}_1(y,\epsilon)= \frac{1}{2}{\cal C}_0(y,\epsilon)(1+\epsilon^2-y) \left[ 2\pi^2/3 +4{\rm Li_2}\,(u_w) -4{\rm Li_2}\,(u_q) \right. \nonumber\\ \left. -4{\rm Li_2}\,(u_qu_w) -4\ln u_q\ln(1-u_q)-2\ln u_w\ln u_q+\ln{y}\ln u_q +2\ln\epsilon\ln u_w \right] \nonumber\\ -2{\rm F}_0(y,\epsilon) \left[ \ln{y}+3\ln\epsilon-2\ln\lambda(1,y,\epsilon^2) \right] \nonumber\\ +4(1-\epsilon^2)\left[ (1-\epsilon^2)^2 +y(1+\epsilon^2)-4y^2 \right] \ln u_w \nonumber\\ +\left[ 3-\epsilon^2+11\epsilon^4-\epsilon^6+ y(6-12\epsilon^2+2\epsilon^4) - y^2(21+5\epsilon^2)+12y^3 \right]\ln u_q \nonumber\\ + 6\sqrt{\Alambd}(1-\epsilon^2)(1+\epsilon^2-y)\ln\epsilon \nonumber\\ +\sqrt{\Alambd}\left[ -5+22\epsilon^2 -5\epsilon^4- 9y(1+\epsilon^2)+6y^2\right] \nonumber\\ \label{eq:6} \end{eqnarray} where \begin{equation} u_q= {1+ \epsilon^2 -y -\sqrt{\Alambd}\over 1+ \epsilon^2 -y +\sqrt{\Alambd}} \label{eq:7} \end{equation} \begin{equation} u_w= {1- \epsilon^2 +y -\sqrt{\Alambd}\over 1- \epsilon^2 +y +\sqrt{\Alambd}} \label{eq:8} \end{equation} Above threshold for real W production the rate (1) can be approximated by: \begin{equation} \Gamma^{(1)}_{nw} = {{{\rm G}_F} {m_t}^3\over 16\sqrt{2}\pi} \left[ {\rm F}_0(\bar y,\epsilon) - {2\alpha_s\over 3\pi} {\rm F}_1(\bar y,\epsilon) \right]\quad, \label{eq:9} \end{equation} a result valid in the narrow width approximation. Neglecting $\epsilon$ one arrives at the following relatively compact expressions: \begin{equation} {\rm F}_0(y,0) = 2(1-y)^2 (1+2y) \label{eq:10} \end{equation} and\footnote{This form clearly exhibits limiting behavior $$ f(y) = {2\pi^2\over3} -{5\over2} -3y(1+y\ln y)+\dots $$ for small y, and $$ f(y) = 3\ln(1-y) +{4\pi^2\over 3}-{9\over 2}+\dots $$ for $y\to 1^-$~. Although stated in the text, these limits are not manifest in the original formula given in \cite{JK1}.}: \begin{eqnarray} f(y) = {\rm F}_1(y,0)/{\rm F}_0(y,0) = {2\pi^2\over3}- {5\over 2}+2\ln y\,\ln(1-y)+4{\rm Li_2}\, y -2y + \nonumber\\ {1\over1+2y} \left[ (5+4y)\ln(1-y) +{2y\ln y\over1-y} -{4y^3(1-y+\ln y)\over(1-y)^2 } \right] \nonumber\\ \label{eq:11} \end{eqnarray} The formula (1) has been derived in \cite{JK1} and tested in \cite{JK3,JK4}~. When applied to charm decays, i.e.
in the four fermion limit, it reproduces the numerical results for the total rate \cite{CM}~. The formulae (3-6) including the $b$ quark mass corrections have been tested by a numerical calculation in \cite{JK3}. Although performed by the same authors this calculation should be considered an independent one since it was based on a completely different technique and matrix elements equivalent to those derived in the classic papers on muon decays \cite{muon} in a form adopted in \cite{AP} for charm decays. Furthermore we have observed that these formulae after an appropriate analytical continuation are equivalent to formulae in \cite{CGN} describing vacuum polarization effects from heavy quarks in the W boson propagator. \\ Independent calculations including non-zero $b$ quark mass have been performed in [2] and \cite{Gad}. The authors found a numerical agreement of their results with the formulae (3-6). The massless limit, eqs. (10-11), derived in [1] was rederived and confirmed by a number of groups \cite{Czarnecki}-\cite{LY}. We proceed now to the discussion of the numerical predictions for the decay rate and the quality of different approximations. As our input we use:\\ $M_W = 80.10$ GeV \cite{Rolandi}, $m_b = 4.7$ GeV, $\alpha_s(M_{Z}) = .118 \pm .007$ \cite{Altarelli} and $M_Z = 91.187$ GeV \cite{Rolandi}.\\ Then $\alpha_s(m_{t})${} is derived from the formula \begin{eqnarray} &\alpha_s(Q) = {4\pi\over b_0 \ln Q^2/\Lambda^2} \left[ 1 - {b_1\over {b_0}^2} {\ln\ln Q^2/\Lambda^2\over \ln Q^2/\Lambda^2} \right] \\ \label{eq:12} & b_0 = 11 - {2\over 3}N_f , \qquad b_1 = 102 - {38\over 3}N_f \nonumber \end{eqnarray} for $N_f$=5 quark flavours. Uncertainties in the input value of $\alpha_s(M_{Z})${} as well as the second order corrections ${\cal O}({\alpha_s}^2)$~, which have not been calculated yet, lead to an error which we estimate to be of order 1\%. In Table 1 we give our results for the widths obtained from different approximations as well as from the formula (1). Since most other authors present their results in comparison with the zeroth-order result \gamud{(0)}{nw} obtained in the narrow width approximation, we define \begin{equation} \delta^{(i)} = \Gamma^{(i)}/\Gamma^{(0)}_{nw} - 1 \label{eq:13} \end{equation} where $i = 0,1$ corresponds to the Born and the QCD corrected rate respectively, and the widths in the numerators include the effects of the W propagator, cf. eq. (1). Analogously we define \delud{(1)}{nw} which is given by the ratio of the QCD corrected and the Born widths, both evaluated in the narrow width approximation, and \delud{(1)}{nw}$(0)$ for massless $b$ quark. 
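The running coupling (12) and the Born width in the narrow width approximation, eqs.~(3-5) and (9), are easily evaluated numerically. The following sketch (in which the value of the Fermi constant, ${\rm G}_F = 1.16637\times 10^{-5}$~GeV$^{-2}$, is our assumption, as it is not quoted in the text) reproduces the $\alpha_s(m_{t})${} and \gamud{(0)}{nw} columns of Table 1 to within one unit in the last quoted digit:
\begin{verbatim}
# Numerical check of eq. (12) and eqs. (3-5), (9).
# G_F = 1.16637e-5 GeV^-2 is an assumed input (not quoted above).
import math

MW, MZ, MB = 80.10, 91.187, 4.7          # GeV
ALPHAS_MZ, GF = 0.118, 1.16637e-5        # G_F in GeV^-2

def alpha_s(Q, lam5, nf=5):
    """Two-loop running coupling, eq. (12)."""
    b0 = 11.0 - 2.0 * nf / 3.0
    b1 = 102.0 - 38.0 * nf / 3.0
    L = math.log(Q**2 / lam5**2)
    return 4.0 * math.pi / (b0 * L) * (1.0 - b1 / b0**2 * math.log(L) / L)

# Fix Lambda_5 by bisection so that alpha_s(M_Z) = 0.118.
lo, hi = 0.05, 0.5
for _ in range(60):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if alpha_s(MZ, mid) < ALPHAS_MZ else (lo, mid)
lam5 = 0.5 * (lo + hi)

def gamma0_nw(mt):
    """Born width in the narrow width approximation, eqs. (3-5), (9)."""
    y, e2 = (MW / mt)**2, (MB / mt)**2
    lam = 1.0 + y**2 + e2**2 - 2.0 * (y + e2 + y * e2)  # lambda(1,y,eps^2)
    C0 = 4.0 * ((1.0 - e2)**2 + y * (1.0 + e2) - 2.0 * y**2)
    F0 = 0.5 * math.sqrt(lam) * C0
    return GF * mt**3 / (16.0 * math.sqrt(2.0) * math.pi) * F0

for mt in (100.0, 140.0, 180.0):
    print(mt, round(alpha_s(mt, lam5), 3), round(gamma0_nw(mt), 4))
# m_t = 180 GeV gives alpha_s ~ 0.107 and Gamma ~ 1.714 GeV, as in Table 1.
\end{verbatim}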
\begin{table}[h] \begin{tabular}{|r|c|c|c|c|c|c|c|c|c|} \hline $m_t\ $ & $\alpha_s(m_{t})$ & \gamud{(0)}{nw} &\delud{(0)}{}& \delud{(1)}{nw}$(0)$ & \delud{(1)}{nw} & \delud{(1)}{}& \gamud{(1)}{} & \delud{}{ew}& \gamud{}{t} \\ {\scriptsize(GeV)} & & {\scriptsize(GeV)} & {\scriptsize(\%)} & {\scriptsize(\%)} & {\scriptsize(\%)} & {\scriptsize(\%)} & {\scriptsize(GeV)} & {\scriptsize(\%)} & {\scriptsize(GeV)} \\ \hline 90.0& .118& .0234& 11.69 & 7.88 &-3.81& 6.56 &.0249& 0.81& .0251\\ 100.0& .116& .0931& 0.16 &-4.56 &-6.91& -6.89 &.0867& 1.04& .0876\\ 110.0& .115& .1955& -1.44 &-6.81 &-7.83& -9.22 &.1775& 1.20& .1796\\ 120.0& .113& .3265& -1.78 &-7.61 &-8.20& -9.89 &.2942& 1.33& .2982\\ 130.0& .112& .4849& -1.82 &-7.97 &-8.37&-10.08 &.4360& 1.43& .4423\\ 140.0& .111& .6708& -1.77 &-8.15 &-8.44&-10.10 &.6031& 1.51& .6122\\ 150.0& .110& .8852& -1.69 &-8.25 &-8.47&-10.05 &.7962& 1.57& .8087\\ 160.0& .109& 1.130& -1.60 &-8.31 &-8.49& -9.99 &1.017& 1.62& 1.033\\ 170.0& .108& 1.405& -1.52 &-8.34 &-8.49& -9.91 &1.266& 1.67& 1.287\\ 180.0& .107& 1.714& -1.45 &-8.35 &-8.48& -9.84 &1.546& 1.70& 1.572\\ 190.0& .106& 2.059& -1.39 &-8.36 &-8.47& -9.77 &1.857& 1.73& 1.890\\ 200.0& .106& 2.440& -1.33 &-8.36 &-8.46& -9.70 &2.203& 1.76& 2.242\\ \hline \end{tabular} \caption{Top width as a function of top mass and the comparison of the different approximations.} \end{table} \section{Electroweak corrections} The complete one loop electroweak corrections to standard model top decay have been calculated in [2] and \cite{Gad}. If the lowest order width is parametrized by ${\rm G}_F$ and $M_W$, cf. eqs. (1) and (9), the electroweak corrections are less than 2\% for realistic top masses. In particular there are no sizable effects arising from Yukawa couplings \cite{IMT}~\footnote{We thank Andre Hoang for checking that this important result is in agreement with [2] when the latter calculation is restricted to the leading ${\cal O}\left({m_t}^2/{M_W}^2\right)$ contribution \cite{Hoang}.}~. For $100\ GeV \le\ m_t\ \le\ 200\ GeV$ and Higgs mass $M_H\ \ge\ 100\ GeV$ the potentially large ${\cal O}\left({m_t}^2/{M_W}^2\right)$ contributions from the diagrams with Yukawa couplings are smaller than 0.2\%, and hence much smaller than other terms, subleading in $m_t$. The dependence of the correction on $M_H$ is weak; see [2] for details. In the following we assume $M_H = 100$~GeV. Strictly speaking $m_t$, $M_Z$, $M_W$, and $M_H$ cannot be treated as independent parameters. The standard model and the existing data imply a relation between them. For our choice of the masses one can neglect this effect, provided $m_t$ is not too close to the present experimental lower limit. The corresponding change of the Born width is -2.6\%, -0.8\%, and less than 0.3\% for $m_t =$~90,~100, and~$\ge$~110~GeV, respectively. Therefore we ignore the above mentioned relation and treat all the masses as independent parameters. If the measured $M_W$ and $M_H$ turned out to be very different from the values assumed in this paper, it would be straightforward to evaluate the corresponding change of the Born width. The width of the top quark including the electroweak correction can be evaluated from the formula \begin{equation} \Gamma_t = \Gamma^{(1)} \left[ 1 + \delta_{ew} \right]\quad, \end{equation} and a simple parametrization \begin{equation} \delta_{ew} (\%) \approx 2 - 1.5\bar y \end{equation} has been obtained by us from Table 1 in [2]. The results for \gamud{}{t} calculated using (14) and (15) are given in our Table 1.
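As a numerical illustration of (15), for $m_t = 180$~GeV one has $\bar y = (M_W/m_t)^2 \simeq 0.198$, so that $\delta_{ew} \approx 2 - 1.5\times 0.198 \approx 1.7\%$, in agreement with the corresponding entry in Table 1.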
It should be noted that the size of the electroweak corrections is comparable to the uncertainties from the as yet uncalculated ${\cal O}({\alpha_s}^2)$ corrections and the present uncertainty in the value of $\alpha_s$. The electroweak corrections are furthermore sensitive to the details of the Higgs sector, as exemplified by the recent calculations in the context of the two Higgs doublet model \cite{GH}~. \vskip 1cm {\bf\large\noindent Acknowledgements} \\ \vskip0.1cm M.J. thanks Lalit Sehgal for a conversation which stimulated writing this report. He would also like to acknowledge a research fellowship from the Alexander-von-Humboldt Foundation which enabled his stay in the Institut f\"ur Theoretische Teilchenphysik - Univ. of Karlsruhe, where a part of this work was done, and to thank the members of the physics faculty there for the warm hospitality and stimulating atmosphere.
\section{Introduction} Nitrogen is the fifth most abundant element in the Universe and exists in the form of two stable isotopes, $^{14}$N and $^{15}$N. The \Nratio ratio has been measured in Solar-system objects such as comets, meteorites and chondrites (Mumma \& Charnley 2011; F\"uri \& Marty 2015), in molecular clouds with and without the influence of star formation processes (Adande \& Ziurys 2012; Hily-Blant et al. 2013; Bizzocchi et al. 2013; Fontani et al.~2015; Guzm\'an et al. 2017; Zeng et al.~2017; Colzi et al. 2018a, 2018b; Kahane et al. 2018; De Simone et al. 2018; Redaelli et al. 2018), and in galaxies (Henkel et al. 1998, 2018; Chin et al. 1999). There is a large spread in the measured \Nratio ratios, ranging from $\sim$100 for meteorites, comets and protoplanetary disks to $\sim$1000 in pre-stellar and star-forming cores. The Solar nebula value, measured in the Solar wind and in Jupiter's atmosphere, is intermediate, around 440 (Fouchet et al. 2004; Marty et al. 2010). In the few extragalactic sources where the \Nratio ratio has been measured, its values range from $\sim$100 to 450 (see Table~\ref{obsgalaxies} below). The \Nratio ratio is considered a good indicator of stellar nucleosynthesis, since the two isotopes are not synthesized in the same way. Both isotopes are thought to be actively produced in the CNO cycles of massive stars and in the so-called Hot Bottom Burning of asymptotic giant branch (AGB) stars (e.g.~Schmitt \& Ness 2002, Izzard et al. 2004). However, some differences in their nucleosynthesis are needed to explain their observed behaviour, such as the strong primary component of $^{14}$N at low metallicity (e.g. Matteucci~1986), or the relative role played by massive stars and novae in the (over-)production of $^{15}$N with respect to $^{14}$N (e.g. Clayton 2003, Romano \& Matteucci 2003, Prantzos 2011, Romano et al. 2017). The relative importance of these processes, and the existence of additional processes not yet considered, are still unclear. In particular, the contribution of isotopic fractionation, i.e. the role of chemical reactions occurring in the gas phase of the interstellar medium (ISM; see e.g. Roueff et al. 2015, Wirstr{\"o}m \& Charnley 2018), which are unrelated to stellar nucleosynthesis, has not been explored in detail under the different physical conditions expected in extragalactic environments. In this work we perform, for the first time, a chemical modelling study of the nitrogen fractionation that may be occurring in the gaseous component of external galaxies. In Section 2, we present the chemical model and network used for the $^{14}$N and $^{15}$N isotopic species; in Section 3 we present our results for the modelling of the nitrogen fractionation in gas at different H$_2$ densities and extinctions, affected by energetic phenomena (such as stellar heating, UV radiation and cosmic rays). In Section 4, we report our conclusions. \section{Chemical modelling of Nitrogen fractionation} \label{model} The chemical modelling was carried out using the open source time dependent gas-grain chemical code UCLCHEM\footnote{https://uclchem.github.io/}. The code is explained in detail in Holdship et al. (2017). Here we briefly summarize its main characteristics. UCLCHEM computes the evolution, as a function of time, of the chemical abundances in the gas and on the ices, starting from a diffuse and atomic gas.
We ran UCLCHEM in two phases, in a very similar manner to Viti (2017), where theoretical abundances for extragalactic studies were derived. In Phase I, the gas is allowed to reach a high density by means of a free-fall collapse. The temperature during this phase is kept constant at 10 K, and the cosmic ray ionization rate and radiation field are at their standard Galactic values of $\zeta_o$ = 5$\times$10$^{-17}$ s$^{-1}$ and 1 Draine, or 1.6$\times$10$^{-3}$ erg/s/cm$^2$ (Draine~1978, Draine \& Bertoldi~1996). During Phase I, atoms and molecules are allowed to freeze onto the dust grains and react with each other, forming icy mantles. In Phase II, we compute the chemical evolution of the gas after some energetic event has occurred (simulating the presence of an AGN and/or a starburst). The initial (solar) elemental abundances considered in our models were taken from Asplund et al. (2009). Our elemental isotopic nitrogen ratio is 440. UCLCHEM includes non-thermal desorption processes during the cold phase. While UCLCHEM also includes thermal desorption processes, as described in Viti et al. (2004), for this work we simply assume instantaneous evaporation for the second phase. In both phases, the basic gas phase chemical network is based on the UMIST13 database (McElroy et al. 2013) with updates from the KIDA database (Wakelam et al. 2015). The surface reactions included in this model are assumed to be mainly hydrogenation reactions, allowing chemical saturation when possible. The network contains 2908 reactions and 239 chemical species. The number of reactions is reduced with respect to other networks reproducing the chemistry of molecular clouds/cores (e.g. Loison et al. 2019), but similar to other networks used to reproduce the chemistry of nearby galaxies (Viti et al. 2014). For the $^{15}$N network, we duplicated the $^{14}$N network, replacing all $^{14}$N by $^{15}$N. We also added the $^{15}$N exchange reactions used by Roueff et al. (2015, see their Tables 1 and 2), with the only exception of those reactions involving ortho-H$_2$ and para-H$_2$, for which we used only the reaction rate of the ortho-H$_2$ species. This is partially justified because, when we calculated the rates for the para and ortho species at 10 and 100~K, we systematically found that the ortho rate was orders of magnitude higher than the para one. Nevertheless we note that, as some studies show (Hily-Blant et al. 2018; Furuya et al. 2015), in some environments para-H$_2$ may be dominant. We have therefore performed a further test where we use the rate for para-H$_2$ instead of the one for ortho-H$_2$, essentially assuming in this way that all the molecular hydrogen is in the para form. The only two reactions affected by this exchange are: \\ $^{14}$N$^+$ + H$_2$ $\rightarrow$ NH$^+$ + H \\ and $^{15}$N$^+$ + H$_2$ $\rightarrow$ $^{15}$NH$^+$ + H, \\ which essentially only affect ammonia and the nitrogen hydrides. For the ion-neutral reactions for which Roueff et al. (2015) do not give any reaction rate coefficient, we adopted the standard Langevin value of $10^{-9}$ cm$^{3}$s$^{-1}$ for the forward reaction, as done also by Hily-Blant et al.~(2013). We have not included the reactions considered as improbable in Table~1 of Roueff et al. (2015). Finally, we have also checked and updated (where needed) our network according to the reactions given in Table 3 of Loison et al. (2019).
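To make the duplication step concrete, the sketch below shows the kind of transformation applied to the network. The reaction format and species names are illustrative (they are not the actual UCLCHEM/UMIST13 input format), and only singly substituted isotopologues are generated, in line with the discussion above:
\begin{verbatim}
# Illustrative duplication of a 14N network into its 15N copy.
def fractionate(network):
    """Append one 15N copy of every N-bearing reaction.

    Assumes each species carries at most one nitrogen atom, so a
    single substitution suffices; multiply substituted species
    (e.g. doubly substituted N2) are deliberately not generated.
    """
    def swap(species):                  # 14N -> 15N in one species
        return species.replace("N", "(15N)", 1)
    copy = []
    for reactants, products, rate in network:
        if any("N" in s for s in reactants + products):
            copy.append(([swap(s) for s in reactants],
                         [swap(s) for s in products], rate))
    return network + copy

# An ion-neutral reaction with the standard Langevin rate, cm^3 s^-1:
network = [(["N+", "H2"], ["NH+", "H"], 1.0e-9)]
for reaction in fractionate(network):
    print(reaction)
\end{verbatim}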
To test the network, we first ran a model with the same initial conditions as in Wirstr\"om \& Charnley (2018). They assumed a static core with a constant H$_2$ volume density of $10^6$ cm$^{-3}$, a constant gas temperature of 10~K, a cosmic ray ionization rate $\zeta = 3\times 10^{-17}$ s$^{-1}$, and a visual extinction $A_{\rm v}=10$. In our test model, we have used these same input parameters, as well as the same initial elemental abundances of C, N and O (taken from Savage \& Sembach~1996). Normally, in our model carbon is initially totally in atomic form (C or C$^+$), while Wirstr\"om \& Charnley~(2018) assume it is totally locked in CO. Therefore, we have adapted our model to also reproduce this initial condition. We find values of the HCN abundance w.r.t. H$_2$, and of the \Nratio in HCN, very similar to those computed by Wirstr\"om \& Charnley~(2018): in our model, the HCN abundance rises up to $\sim 10^{-9}$ at $\sim 10^{4}$ yrs, and then drops by several orders of magnitude. We also find that the \Nratio for HCN/HC$^{15}$N is about 400, as found by Wirstr\"om \& Charnley (2018), and thus conclude that our updated model can reproduce the most recent dark cloud models including $^{15}$N fractionation. We note that we have not considered the doubly substituted N$_2$, which was included by Wirstr\"om \& Charnley (2018), because we have assumed that this species is negligible for the chemistry of extragalactic environments at large scales. Our assumption is likely correct because we are able to reproduce the results of Wirstr\"om \& Charnley (2018), which indicates that the reactions involving the doubly substituted N$_2$ are indeed negligible. Following the approach of Wirstr\"om \& Charnley (2018), we have replicated all reactions involving $^{14}$N species, including those in which more than one product includes nitrogen. This could lead, at high densities and long times, to an artificial increase of the values of the isotopic ratios. Therefore, we recommend considering these values at long evolutionary times with caution. Our initial grid includes 288 models, spanning the following parameter space: final gas densities from 10$^4$ to 10$^6$ cm$^{-3}$, visual extinctions from 1 to 100 mags, temperatures of 50 and 100~K, radiation fields from 1 to 1000 Draine, and cosmic ray ionization rates from 1 to 10$^4$ times the standard galactic cosmic ray ionization field, all selected to cover the ranges likely to be appropriate for external galaxies. The temperature, radiation field and cosmic ray ionization rate vary only in Phase II. Our parameter space is motivated primarily by considering the parameters that most affect the line intensities of gas tracers in starburst and AGN-dominated galaxies, as predicted by radiative transfer models. We also note that our parameter ranges are consistent with previous studies of the chemistry in external galaxies (e.g. Bayet et al. 2009, 2010). Note that the cosmic ray ionization flux is also used to `simulate' an enhancement in X-ray flux. As previously noted (Viti et al. 2014), this approximation has its limitations in that the X-ray flux will heat the gas more efficiently than cosmic rays. However, the chemistry arising from these two fluxes should be similar. In addition we have run a second grid of models, varying the parameter space as above, at a reduced metallicity of half solar, to mimic environments more similar to the Large Magellanic Cloud.
While we do not aim at modelling any galaxy in particular, this parameter space ought to cover the range of possible differences between the extragalactic environments where nitrogen fractionation has been measured and the Milky Way. \section{Results} \label{results} In this Section we describe our model predictions for the \Nratio\ under variations of crucial physical and chemical parameters of the host galaxy, and discuss how they compare with observations. We have analysed the following chemical species: HCN, HNC (the only two species for which measurements have been obtained, see Table 1), CN, and N$_2$H$^+$. In the next section, we will start by discussing the most abundant species, i.e. HCN and HNC, and their chemically related species CN. \subsection{Dependence of fractionation on variations in the physical parameters} \label{dependence} A summary of the qualitative trends of the \Nratio with time, as a function of the combination of the physical parameters, is given in Table~\ref{variations}. Although we ran models for two representative average temperatures of 50 and 100~K, we find that varying the temperature does not lead to significant changes in the model predictions of the \Nratio, and hence we shall not discuss the sensitivity to temperature variations further. Depending on the combinations of the various parameters, the largest variation with time that we find in the \Nratio for HCN, HNC or CN is of an order of magnitude, within a range from $\sim 10$ to $\sim 1000$. In Figs.~\ref{fig:variations_1} and \ref{fig:variations_2} we plot the predictions for the \Nratio\ against time showing the largest \Nratio\ increase or decrease, respectively, while in Fig.~\ref{fig:frac1} and Fig.~\ref{fig:frac2} we show the fractional abundances (with respect to the total number of hydrogen nuclei) of the main isotopologues for the same models. Fig.~\ref{fig:variations_1} shows that the largest increase is obtained either when $\zeta=1000$ (in units of $\zeta_o$) or when both $\zeta$ and $\chi$ are about 1000 times their standard values, and $A_{\rm V}\geq10$ mags. In all cases, the average density is low ($10^4$ cm$^{-3}$). This means that, at large Giant Molecular Cloud scales (i.e.\ for $n_H$ $\sim$ 10$^4$ cm$^{-3}$), in galaxies with sources of energetic particles such as AGNs or ULIRGs, the fractionation should be suppressed with time. On the other hand, the highest drop in the \Nratio\ (Fig.~\ref{fig:variations_2}) is found for two cases: if $\chi$ is low (1 Draine) but the gas density is high ($10^6$ cm$^{-3}$, top panel in Fig.~\ref{fig:variations_2}), or when $\chi$ and $A_v$ are high (1000 Draine and $\geq$ 10 mags respectively) and the density is low ($10^4$ cm$^{-3}$, bottom panel in Fig.~\ref{fig:variations_2}). A smaller but significant decrease is obtained also when $\zeta$ is high ($1000\,\zeta_o$) at high density (middle panel of Fig.~\ref{fig:variations_2}). We note that in the top and middle panels of Fig. 2, the ratios do not seem to reach a steady state but show a gradual decrease. We have quantified this decrease and found that it is in fact less than 5$\%$, and likely due to the precision of our calculations. The decrease of the ratios at long times appears large only because the logarithmic Y-scale tends to magnify the changes that occur at low ratios. \begin{figure} {\includegraphics[width=9cm]{169.pdf}} {\includegraphics[width=9cm]{179.pdf}} \caption{Plots showing the cases with significant \Nratio increase, i.e. a fractionation decrease.
In the title bar $\zeta$ is in units of $\zeta_o$, $\chi$ in units of Draine, the temperature in units of Kelvin, the gas density in units of cm$^{-3}$, and $A_V$ is in magnitudes. } \label{fig:variations_1} \end{figure} \begin{figure} {\includegraphics[width=9cm]{242.pdf}} {\includegraphics[width=9cm]{266.pdf}} {\includegraphics[width=9cm]{155.pdf}} \caption{Plots showing the cases with significant \Nratio decrease, i.e. a fractionation increase. Units as in Figure 1.} \label{fig:variations_2} \end{figure} \begin{figure} {\includegraphics[width=9cm]{full169.png}} {\includegraphics[width=9cm]{full179.png}} \caption{Plots showing the fractional abundances, with respect to the total number of hydrogen nuclei, of the main isotopologues of the models of Fig.~\ref{fig:variations_1}.} \label{fig:frac1} \end{figure} \begin{figure} {\includegraphics[width=9cm]{full242.png}} {\includegraphics[width=9cm]{full266.png}} {\includegraphics[width=9cm]{full155.png}} \caption{Plots showing the fractional abundances, with respect to the total number of hydrogen nuclei, of the main isotopologues of the models of Fig.~\ref{fig:variations_2}.} \label{fig:frac2} \end{figure} The above discussion describes our analysis of the solar metallicity models. As mentioned in Section 2, we also ran models at half the solar metallicity in order to reproduce the possible trends in a galaxy like the Large Magellanic Cloud, or other low metallicity galaxies. In general, we do not find any significant difference in the trends. For some of the models we find slightly different absolute values of the \Nratio, but the range remains the same. \subsection{Differences in fractionation among N-bearing molecules} \label{diff_molecules} One of the clearest results from our modelling is that HCN and HNC show little difference in their \Nratio, within a factor of 2. The fractionation of CN, on the other hand, shows more variability, especially with time for many models. In particular, for cosmic ray ionization rates $\geq$ 1000 times the standard one, and densities $\leq$ 10$^5$ cm$^{-3}$, the CN fractionation at late times is always higher than that of HCN and HNC by more than a factor of 2. \subsection{Comparison with observations} \label{observations} In Table~\ref{obsgalaxies}, we list the observational values of the \Nratio\ for all external galaxies reported in the literature. For reference, in Table~\ref{obsmilkyway} we also list the average values (with the dispersions) of the \Nratio\ obtained in massive star-forming clumps and diffuse clouds in the Milky Way. The Milky Way can be considered as a template for spiral galaxies, hence these clumps represent a proxy for the densest portions of spirals. We do not include in the table low-mass star-forming cores. Unfortunately, the only two species whose $^{15}$N isotopologues have been detected in external galaxies are HCN and (in fewer sources) HNC. Hence, we focus the comparison with our models on HCN. Our criterion for choosing the models that best reproduce the observations is that the ratio has to be matched by 10$^5$ years and be maintained up to a million years. For the galaxies where we only have a lower limit for this ratio, we have imposed an arbitrary upper limit of 1000. We note that in general many models match the observed value of the fractionation, indicating that the observed ratio is achievable under a large range of physical and chemical parameters.
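For concreteness, this selection criterion amounts to the following simple test (a minimal sketch; the time grid and ratio values are illustrative and would in practice come from the model outputs):
\begin{verbatim}
# Sketch of the model selection criterion: a model is accepted if
# its predicted 14N/15N ratio lies within the observed range at
# 1e5 yr and stays within it up to 1e6 yr.
def matches(times, ratios, lo, hi):
    """times in yr; ratios are the predicted 14N/15N at those times."""
    window = [r for t, r in zip(times, ratios) if 1e5 <= t <= 1e6]
    return bool(window) and all(lo <= r <= hi for r in window)

# e.g. NGC4945 (200-500) against an illustrative model output:
print(matches([1e4, 1e5, 5e5, 1e6], [800, 450, 300, 250], 200, 500))
\end{verbatim}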
More specifically, for both NGC4945 and Arp220, the range of observed values is achieved by models of gas at low visual extinction for gas densities up to 10$^5$ cm$^{-3}$ and cosmic ray ionization rates up to 100 times the standard value. However, for NGC4945 there are also some models at high densities (10$^6$ cm$^{-3}$), at all cosmic ray ionization rates, that can match the observed ratio at low and high visual extinctions. For Arp220, densities of 10$^6$ cm$^{-3}$ can only match the observations for the highest cosmic ray ionization rates and highest radiation fields at low visual extinctions. This may in fact be consistent with the high star formation rates found in the nuclear region of this galaxy. We note that only for these high densities does the radiation field have an impact on the fractionation ratio. IC694 has ranges of fractionation similar to NGC4945 but with a lower upper limit, and this reduces the number of best matches among the models: here only densities $\geq$ 10$^5$ cm$^{-3}$ fit the observed fractionation and, for 10$^5$ cm$^{-3}$, only at visual extinctions of 1 mag for cosmic ray ionization rates and radiation fields of up to 100 and 10 times the standard values, respectively. For higher densities, higher values of radiation, cosmic rays and, in some cases, visual extinction also match the ratio. For the galaxies where we only have lower limits, even imposing an upper limit of 1000 leads to too many matching models to discuss them here. For the LMC, on the other hand, we are able to constrain the physical parameters much better, as there are only very few models that match the observations: a model with a gas density of 10$^5$ cm$^{-3}$ with a standard galactic cosmic ray ionization rate and an Av of $>$ 10 mags, or models with a density of 10$^6$ cm$^{-3}$, Av $\geq$ 10 mags and $\zeta$ $>$ 100 times the standard value. In fact, the measured extinction in the LMC is significantly lower than the average found in the Milky Way (Dobashi et al. 2008), so the first case may be favoured. The radiation field is not constrained. Together with the results from Sections 3.1 and 3.2, we can conclude that the main cause of enrichment in $^{15}$N is a high density, but it can be aided by high fluxes of cosmic rays and, to a lesser extent, by an intense radiation field. \begin{table*} \caption{\Nratio measured in external galaxies} \begin{tabular}{|ccccc|} \hline \hline Galaxy & type & $^{14}$N/$^{15}$N & Molecule & Reference \\ \hline \hline NGC4945 & starburst & 200-500 & HCN & Henkel et al. (2018) \\ LMC & 0.5 metal & 111($\pm$ 17) & HCN & Chin et al. (1999) \\ Arp220 & ULIRG & 440 (+140,-82) & HCN, HNC & Wang et al. (2016) \\ NGC1068 & AGN+starburst & $>$ 419 & HCN & Wang et al. (2014) \\ IC694 & starburst & 200-400(?) & HCN & Jiang et al. (2011) \\ LMC & 0.5 metal & 91($\pm$ 21) & HCN & Wang et al. (2009) \\ M82 & starburst & $>$ 100 & HCN & Henkel et al. (1998) \\ Galactic Center & standard with high $\zeta$ & $\geq$164 & HNC & Adande \& Ziurys (2012) \\ \hline \end{tabular} \label{obsgalaxies} \end{table*} \subsection{Fractionation predictions for N$_2$H$^+$ and NH$_3$ in external galaxies} Not many nitrogen bearing species have been observed to be abundant in external galaxies. Besides HCN, HNC and CN, already discussed in the previous sections, the most common nitrogen bearing species detected in nearby galaxies are: HNCO, HC$_3$N, CH$_3$CN, and N$_2$H$^+$.
While our network does include all the $^{14}$N isotopologues of these species, a fractionation chemistry for the first three of these species is not available, and hence we concentrate on the predicted fractionation of N$_2$H$^+$, an important tracer of cold and dense gas. Aladro et al. (2015) detected N$_2$H$^+$ in four galaxies: M83, NGC253, M82, and M51, and found column densities of 6.5$\times$10$^{12}$, 4$\times$10$^{13}$, 1$\times$10$^{13}$ and 4$\times$10$^{12}$ cm$^{-2}$, respectively. Grouping M83 with M51 and M82 with NGC253 (due to their similar values of observed N$_2$H$^+$), we find that for the first two galaxies this translates into a N$_2$H$^+$ fractional abundance ranging between 2.5$\times$10$^{-10}$ and 4$\times$10$^{-10}$ if the visual extinction is 10 mag, and between 2.5$\times$10$^{-11}$ and 4$\times$10$^{-11}$ if the visual extinction is 100 mag. For the other two galaxies, we get an abundance of $\sim$6.2$\times$10$^{-10}$ to 2.5$\times$10$^{-9}$ for 10 mag and $\sim$6$\times$10$^{-11}$ to 2.5$\times$10$^{-10}$ for 100 mag. In order to predict the expected fractionation of N$_2$H$^+$ in these galaxies we restrict our grid of models to those that match these abundances. {\it M83 and M51}: we find that, if the visual extinction traced by N$_2$H$^+$ is close to 10 mags, then two models can reproduce the range of abundances, but both only for {\it a short} period of time, in some cases as brief as 1000 years: a model with a cosmic ray ionization rate higher than the galactic standard one by a factor of 1000, a gas temperature of 50 K and a gas density of 10$^4$ cm$^{-3}$, and another model with a cosmic ray ionization rate higher than the galactic standard one by a factor of 10000, a gas temperature of 100 K and a gas density of 10$^5$ cm$^{-3}$. Clearly, N$_2$H$^+$ is tracing dense gas, but it is interesting to note that only high levels of cosmic ray ionization can maintain its abundance if the temperature of the gas is $>$ 10 K. If the gas has a visual extinction of 100 mags, then the only models that manage to maintain a high abundance of N$_2$H$^+$ have a cosmic ray ionization rate of 100 times that of the galactic one, a temperature of 50 or 100 K and a gas density of 10$^4$ cm$^{-3}$. In this case, however, N$_2$H$^+$ is not destroyed before 10,000 years. We note that both M51 and M83 are spiral galaxies, with M83 being a young starburst and M51 having recently interacted with a nearby galaxy, triggering star formation. Hence both are likely to have an enhanced cosmic ray ionization rate. {\it M82 and NGC253}: at 10 mags the only model that reproduces the observed abundance of N$_2$H$^+$ is one with a high cosmic ray ionization rate (1000 $\zeta_o$), a temperature of 100 K and a gas density of 10$^4$ cm$^{-3}$, while if the gas is at a visual extinction of 100 mags then the same model but with a factor of 10 lower cosmic ray ionization rate can reproduce the observed abundance of N$_2$H$^+$. We recall that the abundances derived from the observations are different at different extinctions, which is why for this comparison models at different visual extinctions do give different matches. We note that these two galaxies are the prototypical chemically rich starburst galaxies and, again, as for the other two galaxies, a higher than standard cosmic ray ionization rate is expected. We also note that while the abundance of N$_2$H$^+$ is not sensitive to changes in the radiation field, for all our best-fit models the latter cannot exceed $\sim$ 100-1000 Draine.
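As an illustration of the conversion behind the abundance ranges quoted above, the following minimal sketch (in Python) divides the observed column densities by the total hydrogen column implied by each visual extinction. The gas-to-extinction constant $N_{\rm H} \approx 1.6\times10^{21}\,A_V$ cm$^{-2}$ is our assumption, chosen because it reproduces the quoted ranges; it is not stated explicitly in the text. \begin{verbatim}
# Hedged sketch: observed N2H+ column densities (cm^-2), Aladro et al. (2015)
N_col = {"M83": 6.5e12, "NGC253": 4e13, "M82": 1e13, "M51": 4e12}

for Av in (10, 100):                 # visual extinctions considered in the text
    N_H = 1.6e21 * Av                # assumed Galactic gas-to-extinction relation
    for gal, N in N_col.items():
        print(f"Av = {Av:3d} mag  {gal:7s} x(N2H+) = {N / N_H:.2e}")
\end{verbatim} For $A_V=10$ mag this yields 2.5-4.1$\times$10$^{-10}$ for M51 and M83 and 6.3$\times$10$^{-10}$-2.5$\times$10$^{-9}$ for M82 and NGC253, matching the ranges above.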
These results indicate that either the observed N$_2$H$^+$ is in fact tracing colder gas than we modelled, or that it is indeed tracing gas close to a source of high cosmic ray flux that maintains its abundance for longer. In order to exclude the former hypothesis we ran a test model whereby we maintained in Phase 2 all the parameters as in Phase 1 (including the temperature of the gas at 10 K) and ran the model for 10$^7$ years. We find that, regardless of the gas density, we cannot obtain an N$_2$H$^+$ abundance much higher than 10$^{-11}$ (which is below the observational value for most observations) for times less than 1 million years (see Figure 5). Hence we conclude that the high abundance of N$_2$H$^+$ is indeed a consequence of the cosmic ray ionization rate, but that it is transient, implying that the gas traced by N$_2$H$^+$ and observed by Aladro et al. (2015) is young or, most likely, replenished periodically. Our predictions also imply that high N$_2$H$^+$ abundances are preferentially seen towards young galaxies, and thus N$_2$H$^+$ could potentially be an evolutionary indicator, although this conclusion has to be taken with caution given the large number of parameters that could produce the predicted high N$_2$H$^+$ abundance. \begin{figure} \hspace{-1.6cm} {\includegraphics[width=7cm, angle=-90]{n2hp-cold-eps-converted-to.pdf}} \caption{Fractional abundance of N$_2$H$^+$ as a function of time for Phase 2 of two models with different gas densities, at a constant temperature of 10 K (see text).} \label{fig:n2hp} \end{figure} What do these best matching models predict in terms of fractionation? Surprisingly, the ratio of N$_2$H$^+$ to either of its fractionated counterparts is {\it always} at least 10$^4$, implying an extremely low fractionation. Assuming that our chemical network for the fractionation of N$_2$H$^+$ is complete, it is therefore unlikely we would be able to detect $^{15}$NNH$^{+}$ or N$^{15}$NH$^+$ in a reasonable amount of integration time even with the current, most powerful facilities. Finally, it is worth briefly discussing our predictions for NH$_3$ fractionation. Ammonia was detected in NGC~253 (Ott et al. 2005), yielding temperatures in the range 17-85 K. We plot in Fig.~\ref{fig:nh3} the ammonia isotopic ratio expected in the best-fit model for NGC~253. As we can see from the figure, for gas older than 10,000 years the predicted fractionation is far too low to be detectable. As mentioned in Section 2, omitting the reactions with para-H$_2$ may have consequences for the abundance of hydrogen nitrides. We therefore compared the NH$_3$ gas fractional abundance between our model and the one performed assuming all the molecular hydrogen in the para form, as described in Section 2, but found that while they differ by a factor of two at the end of the cold phase, they are essentially the same in Phase 2. This is because once the gas is heated to $\sim$ 100 K, the ammonia formed on the ices via hydrogenation is released back to the gas phase and any difference in its abundance between the two models disappears. \begin{figure} {\includegraphics[width=9cm, angle=0]{173.pdf}} \caption{NH$_3$ fractionation for one of the best-fit models for NGC~253, as derived from the N$_2$H$^+$ observations.
} \label{fig:nh3} \end{figure} \label{predictions} \begin{table*} \centering \caption{Qualitative trends of fractionation as a function of different parameters.} \label{tab:comparison} \begin{tabular}{lcccc} \hline Model & \multicolumn{2}{c}{$A_{\rm V}=1$ mag} & \multicolumn{2}{c}{$A_{\rm V}\geq 10$ mag} \\ \hline standard & \multicolumn{2}{c}{Constant, apart from a transient enrichment} & \multicolumn{2}{c}{\Nratio decrease of 1 order of magnitude after $10^6$ yrs} \\ & \multicolumn{2}{c}{more pronounced for HCN} & \multicolumn{2}{c}{especially at high density} \\ high $\zeta$ & \multicolumn{2}{c}{Decrease of fractionation at low densities with time,} & \multicolumn{2}{c}{Decrease/increase of fractionation} \\ & \multicolumn{2}{c}{flat at high density} & \multicolumn{2}{c}{at low/high density (respectively) with time} \\ high $\chi$ & \multicolumn{2}{c}{Constant with time at both densities} & \multicolumn{2}{c}{Fractionation increase with time for both densities} \\ high $\zeta + \chi$ & \multicolumn{2}{c}{Constant with time at both densities} & \multicolumn{2}{c}{Fractionation decrease/increase} \\ & & & \multicolumn{2}{c}{at low/high density, respectively} \\ \hline \end{tabular} \end{table*} \begin{table*} \caption{\Nratio measured in the Milky Way in dense and diffuse clouds from different molecules.} \begin{tabular}{ccccc} \hline \hline Reference & \multicolumn{4}{c}{$^{14}$N/$^{15}$N} \\ & HCN & HNC & CN & N$_2$H$^+$ \\ \hline Adande \& Ziurys (2012) & & $\sim 130 - 400$ & $\sim 120 - 380$ & \\ Fontani et al. (2015) & & & 190 -- 450 & 180 -- 1300 \\ Ritchey et al. (2015) & & & 274$\pm 18$ & \\ Colzi et al. (2018b) & 115 -- 1305 & 185 -- 780 & & \\ Zeng et al. (2018) & 70 -- 763 & 161 -- 541 & & \\ \hline \end{tabular} \label{obsmilkyway} \end{table*} \section{Conclusions} We have used a time dependent gas-grain chemical model to determine the nitrogen fractionation of dense gas under a range of physical parameters representing galaxies with intense FUV or cosmic ray sources. We determine the sensitivity of the fractionation to the local physical conditions, as well as the fractionation differences among the observable nitrogen-bearing species; we qualitatively test our models by comparing our findings with the few observations of HCN available, and we then make some predictions related to the fractionation of an important nitrogen-bearing species, N$_2$H$^+$. We summarize our findings below: \begin{itemize} \item In general, we find that in most models the \Nratio for HCN, HNC or CN never varies by more than an order of magnitude with time, and remains in a range from $\sim 100$ to $\sim 1000$. \item An increase in fractionation can occur at low radiation fields and high densities and vice versa, as well as when both the cosmic ray ionization rate and the gas density are high. \item A decrease in fractionation is obtained at low densities, high visual extinction and high fluxes of either radiation fields or cosmic rays. \item HCN and HNC show little difference in their \Nratio, agreeing within a factor of 2. On the other hand, the \Nratio for CN can be different from that of the other two species at late times for densities $\leq$ 10$^5$ cm$^{-3}$ and cosmic ray ionization rates $\geq$ 1000 times the standard one.
\item Our models succeed in reproducing the observed \Nratio in external galaxies, but due to the large ranges observed we are unable to fully constrain the physical parameters of each galaxy, with the exception of the LMC, whose nitrogen fractionation implies a gas density of 10$^5$ cm$^{-3}$ with a standard galactic cosmic ray ionization rate and an Av $>$ 10 mags, or a density of 10$^6$ cm$^{-3}$, Av $>$ 10 mags and $\zeta$ $>$ 100 times the standard value. \item Finally, we predict that even with the most sensitive instruments to date it is unlikely that we would be able to detect $^{15}$NNH$^{+}$ or N$^{15}$NH$^+$ in external galaxies, as their fractionation is more than one order of magnitude lower than that for HCN, HNC or CN. \end{itemize} \section*{Acknowledgements} SV and JH acknowledge STFC grant ST/M001334/1. I.J.-S. acknowledges partial support by the MINECO and FEDER funding under grants ESP2015-65597-C4-1 and ESP2017-86582-C4-1-R. We are grateful to E. Wirstr\"om and J.-C. Loison for providing us useful clarifications about their models, to C. Henkel for his critical reading of the manuscript, and to the anonymous referee for constructive comments that improved the manuscript.
\section{Factory Area Overhead}\label{sec:area} \begin{table*}[t] \small \centering \begin{tabular}{cl||cl} \hline\hline Parameter & Descriptions & Parameter & Descriptions \\\hline $K$ & Factory total capacity & $n$ & Number of input states in distillation protocol \\ $X$ & Number of distributed factories & $k$ & Number of output states in distillation protocol \\ $\ell$ & Block-code levels of a factory & $N_{\text{r}}$ & Number of protocols at round $r$ under the block-code construction\\ $r$ & Distillation round, $1 \le r \le \ell$ & $K_{\text{output}}$ & Number of effective output magic-states due to yield rate\\ $d$ & Surface code distance & $T_{\text{distill}}$ & Time to execute one full iteration of distillation \\ $P_{\text{s}}, P_{\text{success}}$ & Target success probability & $T_\text{t}$ & Time to deliver a magic-state to a target qubit\\ $P_{L}$ & Logical fidelity & $n_{\text{distill}}$ & Distillation iterations to support one timestep of a program\\ $\epsilon_{\text{inject}}$ & Physical error rate of raw magic-state & $A_{\text{factory}}$ & Total area of factories (in physical qubits)\\ $\epsilon_{\text{in/target/r}}$ & Physical error rate at input, at output, or at round $r$ & & \\ \hline\hline \end{tabular} \caption{List of system parameters involved in the analysis and the optimization procedure.} \end{table*} To describe a magic-state distillation factory, we first make a distinction between a factory \textit{cycle} and a distillation \textit{round} or \textit{level}. A distillation round refers to one iteration of the distillation protocol, a subroutine that is repeated $\ell$ times for a particular factory. A cycle refers to the total time required for the factory to operate completely, taking $n$ input states and creating $k^{\ell}$ output states. All $\ell$ distillation rounds are performed during a cycle. {\bf A magic-state distillation factory architecture can be characterized by three parameters:} the total number of magic states that can be produced per distillation cycle $K$, the number of factories on the lattice $X$, and the total number of distillation rounds that are performed per cycle $\ell$. For simplicity, we assume uniform designs where all $K$ output states are divided equally among $X$ factories, all of which operate with $\ell$ rounds of distillation. We now analyze the relationships presented in Section \ref{sec:bg} to derive full factory scaling behaviors with respect to these architectural design variables. These behaviors interact non-trivially, and lead to space-time resource consumption functions that show optimal design points. \begin{figure*}[h!]
\centering \begin{subfigure}[b]{0.4\textwidth} \includegraphics[trim=0 0 0 0, width=\linewidth]{Figures/eout_x_l.pdf} \caption{Error rate attainable by number of factories} \label{fig:erroroutput} \end{subfigure} \begin{subfigure}[b]{0.4\textwidth} \includegraphics[width=\linewidth]{Figures/eth_x_l.pdf} \caption{Error rate tolerable by number of factories} \label{fig:xsweeperrorthresh} \end{subfigure} \begin{subfigure}[b]{0.4\textwidth} \includegraphics[ width=\linewidth]{Figures/yield_k_l.pdf} \caption{Yield rate of $\ell$-level factory with capacity $K$} \label{fig:yield} \end{subfigure} \begin{subfigure}[b]{0.4\textwidth} \includegraphics[width=\linewidth]{Figures/area_k_r.pdf} \caption{Area scaling within each round of a 5-level factory} \label{fig:area_law} \end{subfigure} \caption{(a) Higher fidelity output states are achievable with an increasing number of factories at a fixed output capacity. (b) Increasing the number of factories in an architecture allows for higher tolerance of input physical error rates. (c) Increasing factory output capacity puts pressure on the factory yield rate, and increasing the number of levels pushes the yield dropoff point to higher capacities. (d) The maximum area to support a multi-level factory is required at the lowest level of the factory; all higher levels require less area support.} \label{fig:factorieserrorsyields} \label{fig:factorycharacteristics} \end{figure*} \subsection{Role of Fidelity and Yield in Area Overhead} First we examine the fidelity of the produced magic-states that is attainable with a given factory configuration, along with the expected number of states that will in fact be made available. Once again, we use the terminology ``round'' and ``level'' to both refer to a single iteration of the Bravyi-Haah distillation protocol within a factory. Applying the block code error scaling relationship described by Equation \ref{eq:bherror} recursively, as the total number of rounds $\ell$ of a magic-state factory increases, the attainable output error rates scale double-exponentially with $\ell$. In fact, for a given round $r$ (between $1$ and $\ell$) of a factory, the explicit form of the output error rate can be written by directly applying $r$ copies of Equation \ref{eq:bherror}: \begin{align} \epsilon_r &= \big(1+3(K/X)^{\sfrac{1}{\ell}}\big)^{2^r-1}\epsilon_{\text{inject}}^{2^r}\label{eq:errorround} \end{align} where $(K/X)$ denotes the capacity of each factory on a lattice. The yield rate of a particular factory can be expressed as a product of the yield rate functions describing each individual round, as in Equation \ref{eq:bhyield}. The effective output capacity can be written as the product of the success probabilities of all $\ell$ rounds of a factory as: \begin{align} K_{\text{output}} &= K \cdot \prod_{r=1}^{\ell}\bigg[1-\Big(3(K/X)^{\sfrac{1}{\ell}}+8\Big)\epsilon_{r-1}\bigg]\label{eq:koutput} \end{align} Here $K_{\text{output}}$ refers to the realized number of produced states after adjusting for yield effects, while $K$ refers to the desired or specified number of output states. Equation \ref{eq:koutput} actually imposes a \textit{yield threshold} on the system. For a given $K$, $X$, and $\ell$, a system will have a maximum error rate which, if exceeded, will cause the factory to malfunction and stop producing states reliably. This threshold can be seen by examining the product term, and noting that yield must be positive in order to produce any states.
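Equations \ref{eq:errorround} and \ref{eq:koutput} are straightforward to evaluate numerically. The following minimal sketch (in Python; the function names and the example parameter values are ours, purely for illustration) computes the per-round error rate and the effective output capacity of a uniform design, and feeds directly into the threshold derivation that follows: \begin{verbatim}
def eps_round(r, K, X, l, eps_inject):
    """Error rate after round r (eq. errorround); r = 0 gives the raw rate."""
    k = (K / X) ** (1.0 / l)              # per-factory output parameter
    return (1 + 3 * k) ** (2 ** r - 1) * eps_inject ** (2 ** r)

def expected_output(K, X, l, eps_inject):
    """Effective output capacity K_output after yield losses (eq. koutput)."""
    k = (K / X) ** (1.0 / l)
    p = 1.0
    for r in range(1, l + 1):             # multiply the per-round success terms
        p *= max(0.0, 1 - (3 * k + 8) * eps_round(r - 1, K, X, l, eps_inject))
    return K * p

# Hypothetical example: K = 16 states, X = 4 factories, 2 levels, 0.1% raw error
print(expected_output(16, 4, 2, 1e-3))    # ~15.8 of the 16 requested states
\end{verbatim}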
The terms in the sequence of Equation \ref{eq:koutput} are decreasing in magnitude, so the threshold is determined by the leading term, which requires $1-(3(K/X)^{\sfrac{1}{\ell}} + 8)\epsilon_{\text{inject}} > 0$; thus the injected error rate must stay below: \begin{align} \epsilon_{\text{thresh}} = \frac{1}{3(K/X)^{\sfrac{1}{\ell}}+8}\label{eq:yieldthresh} \end{align} Figure \ref{fig:yield} shows the yield rate scaling behavior of single factories consisting of $\ell=1,2,3$ levels with fixed $X=1$. In order to reliably produce some fixed amount of states, the yield effects determine the required number of rounds of distillation that must be performed. Conversely, any given number of distillation rounds has a maximum output capacity $K$ beyond which the expected number of produced states becomes vanishingly small. Increasing the number of distillation rounds will increase the maximum supportable factory capacity. \subsection{Full Area Costs} We now use these relationships to derive the true area scaling of these factories. For all $\ell$-level factories, the area of the first round exceeds the area required for all other rounds. Using this as an upper bound, we can write the area required for a specific round explicitly in terms of physical qubits as: \begin{align} A_r&=X\cdot k^{r-1}(3k+8)^{\ell-r}(6k+14)\cdot d_r^2 \label{eq:arealaw}\\ &\leq X(3k+8)^{\ell-1}(6k+14)\cdot d_1^2 \end{align} where $k \equiv (K/X)^{\sfrac{1}{\ell}}$. The inequality in the last line arises from the fact that the first round always uses the largest area by block-code construction, i.e. $A_r \leq A_1$ for all $1 \le r \le \ell$. Here we have used several relationships, namely that the total number of protocols and modules scales as in Equation \ref{eq:num_modules}, a single protocol requires $6k + 14$ logical qubits \cite{campbell}, and the area of a single logical surface code qubit scales as $d^2$ \cite{latticesurgery}. In an aggressively optimized factory design, one could conceivably save space within the distillation procedure by utilizing the area difference between successive rounds of distillation for other computation. In this work we assume that this cannot be done: the first-round area of any given factory defines the area required by that factory over the length of its entire operation, and {\it locks out} the region for distillation only. As a result, Figure \ref{fig:area_k_x} describes the scaling of factory area both with increasing output capacity and with an increasing total number of factories. \begin{figure*}[t] \centering \begin{subfigure}[b]{0.3\textwidth} \includegraphics[width=\textwidth]{Figures/area_k_x.pdf} \caption{Area Scaling} \label{fig:area_k_x} \end{subfigure} ~ \begin{subfigure}[b]{0.3\textwidth} \includegraphics[ width=\textwidth]{Figures/latency_k_x.pdf} \caption{Latency Scaling} \label{fig:time_k_x} \end{subfigure} \begin{subfigure}[b]{0.3\textwidth} \includegraphics[ width=\textwidth]{Figures/yield_k_x.pdf} \caption{Yield Rate Scaling} \label{fig:yield_k_x} \end{subfigure} ~ \caption{(a) Area required to implement a 2-level factory for varying numbers of factories $X$. As the distribution intensity increases, the total area increases significantly faster as factory output is scaled up. Notice that some regions are not feasible, since designs require $K/X \ge 1$. (b) Latency as it scales with factory output capacity.
For factories of a fixed capacity, increasing the number of factories on the lattice reduces latency overall and speeds up application execution time, thanks to reductions in contention and congestion. The flat tails at high $K$ values are due to the fact that the capacity has exceeded the amount that an application ever demanded. (c) Yield as it scales with factory output capacity and number of factories. For a fixed capacity $K_0$, increasing the number of factories can significantly increase the success probability and yield rate of the factory.} \label{fig:factoriessacetime} \end{figure*} \section{Background}\label{sec:bg} \subsection{Quantum Computation} \label{subsec:qc} The idea of quantum computation is to use quantum mechanics to manipulate information stored in two-level physical systems called quantum bits (qubits). In contrast to a bit in a classical machine, each qubit can occupy two logical states, denoted as $\ket{0}$ and $\ket{1}$, as well as a linear combination (superposition) of them, which can be written as $\ket{\psi} = \alpha \ket{0} + \beta \ket{1} $, where $\alpha, \beta$ are complex coefficients satisfying $|\alpha|^2 + |\beta|^2 = 1$. It is sometimes useful to visualize the state of a single qubit as a vector on the Bloch sphere \cite{bloch1946nuclear,MikenIke}, as we can rewrite the state $\ket{\psi}$ in its spherical coordinates as $\ket{\psi} = \cos{(\theta/2)}\ket{0} + \exp{(i\phi)}\sin{(\theta/2)}\ket{1}$. Any operation (called a quantum logic gate) performed on a single qubit can thus be regarded as a rotation by an angle $\varphi$ along some axis $\hat{n}$, denoted as $R_{\hat{n}}(\varphi)$. In this paper we will focus on some quantum gates that are commonly used in algorithms, such as the Pauli-X gate ($X \equiv R_x(\pi)$), Pauli-Z gate ($Z \equiv R_z(\pi)$), Hadamard gate ($H \equiv R_x(\pi)R_y(\pi/2)$), S gate ($S \equiv R_z(\pi/2)$), and T gate ($T \equiv R_z(\pi/4)$). For multi-qubit operations, we will consider the most common two-qubit gate, called controlled-NOT (CNOT). It has been shown \cite{Barenco} that the above-mentioned operations form a \emph{universal} gate set, which implies that any quantum operation can be decomposed into a sequence of the above gates. As quantum logic gates require extremely precise control over the states of the qubits during execution, a slight perturbation of the quantum state or a minor imprecision in the quantum operation could potentially result in performance loss and, in many cases, failure to obtain the correct outcomes. In order to maintain the advantage that quantum computation offers while balancing the fragility of quantum states, quantum error correction codes (QECC) are utilized to procedurally encode and protect quantum states undergoing a computation. One of the most prominent quantum error correcting codes today is the surface code \cite{dennis2002topological,FowlerSurface}. \subsection{Surface Code}\label{subsec:surface} In a typical surface code implementation, physical qubits form a set of two-dimensional rectangular arrays (of logical qubits), each of which performs a series of operations only with its nearest neighbors. A logical qubit, under this construction, is composed of a tile of physical qubits, and these tiles interact with each other differently according to different logical operations. These interactions on the grid create the potential for communication-imposed latency, as routing and logical qubit motion on the lattice must be accomplished.
An important parameter of the surface code is the \textit{code distance} $d$. A larger code distance means a larger tile for each logical qubit. The precise number of physical qubits required in each tile also depends on the underlying surface code implementation. Most common implementations assume a logical qubit of distance $d$ requires $\sim d^2$ physical qubits \cite{FowlerSurface, latticesurgery}. Code distance also determines how well we can protect a logical qubit. The logical error rate $P_L$ of a logical qubit decays exponentially in $d$. More precisely: \begin{align} P_L \sim d(100\epsilon_{in})^{\frac{d+1}{2}}\label{eq:etarg} \end{align} where $\epsilon_{in}$ is the underlying physical error rate of a system \cite{Fowler2013}. In particular, this work will focus on two relatively expensive operations on the surface code, namely the logical CNOT gate and the logical T gate. Our overhead analysis will hold regardless of the underlying technology, e.g. superconducting or ion-trap implementations. Earlier work \cite{javadi2017optimized} has also performed such analysis with technology-independent frameworks. Firstly, a logical CNOT between two qubits can be expensive, because the two logical qubits can be located far apart on the lattice, and long-distance interaction is achieved by the \emph{topological defect braiding} methodology. Secondly, a logical T gate can also be costly, because it requires an ancillary state to be procedurally prepared in advance through a process called \emph{magic-state distillation}. \subsubsection{CNOT Braiding} A \textit{braid} is a path in the surface code lattice, or an area where the error correction mechanisms have been temporarily disabled and which no other operations are allowed to use. In other words, braids are not allowed to cross. A logical qubit can be entangled with another if the braid pathway encloses both qubits, where enclosing means extending a pathway from the source qubit to the target qubit and then contracting back via a (possibly different) pathway. It is important to note that these paths can extend to arbitrary length in constant time, simply by disabling all area covered by the path in the same cycle. Furthermore, each path must remain open for a constant number of surface code cycles to establish fault tolerance. More precisely, one CNOT braid takes $T_{cnot} = 2d+2$ cycles to be performed fault tolerantly \cite{FowlerSurface,javadi2017optimized}. \subsubsection{T Magic-States} \label{subsec:magic} Now T (and S) gates, as described earlier, are necessary for universal quantum computation, and yet are very costly to implement on the surface code. For simplicity of analysis, we assume all S gates will be decomposed into two T gates, because of their rotation angle relationship. This is potentially an overestimate of the actual gate requirements, as it is also possible to perform an S gate via separate distillation of a different type of magic state. We are also aware of another surface code implementation that allows S gates to be executed without distillation \cite{litinski2018lattice}. These techniques have different architectural implications, which are outside the scope of the analysis of this work. To execute these gates, an ancillary logical qubit must first be prepared into a special state, known as the {\em magic state}~\cite{magic_states}.
Once prepared, this magic-state is interacted with the target qubit as in \cite{FowlerSurface}, via a probabilistic circuit involving the magic state and either 1 or 3 CNOT braids, each case occurring with probability $1/2$. The extra 2 CNOTs are required to perform a corrective S gate in the case that the probabilistic circuit fails; we assume this corrective S gate consists of 2 CNOT braids. This circuit is called the state injection circuit. We can therefore write the expected latency of a T gate as \begin{align} \mathbb{E}[T_t] &= T_{cnot} + \frac{1}{2}(2\cdot T_{cnot}) = 4d+4\label{eq:tgatetime} \end{align} where we use $T_t$ to denote the latency of a T gate and $T_{cnot}$ the latency of a CNOT gate. Since the task of preparing these states is a repetitive process, it has been proposed that an efficient design would dedicate specialized regions of the architecture to their preparation~\cite{steane1997space,Jones}. These {\em magic-state factories} are responsible for creating a steady supply of low-error magic states. The error in each produced state is minimized through a process called {\em distillation}~\cite{Bravyi_magic}, which we will introduce in detail in Section \ref{subsec:BH}. \subsection{T-Gates in Quantum Algorithms} \label{subsec:algs} Among the different classes of quantum algorithms, quantum simulation and quantum chemistry applications have drawn significant attention in recent years due to the promises they show in transforming our understanding of new and complex materials, while still potentially remaining tractable on near-term intermediate-size machines \cite{montanaro2016quantum,babbush2017low,kivlichan2018quantum,whitfield2011simulation,jones2012faster}. The benchmark algorithms studied in this work include the \emph{Ground State Estimation} (GSE) \cite{whitfield2011simulation} of the Fe$_2$S$_2$ molecule and the \emph{Ising Model} (IM) \cite{barends2016digitized} algorithms. They are representative applications for the purpose of this study as they present very different demand characteristics for T gate magic states. A more detailed description of the T gate distributions in these two algorithms can be found in Section \ref{subsec:program}. Here we list in Table \ref{tab:benchmarks} the two benchmarks alongside some of their T gate statistics, namely the number of qubits ($n_{\text{qubits}}$), total T count ($T_{\text{count}}$), total schedule length ($L$), average T gates per time step ($T_{\text{avg}}$), standard deviation of T gates per time step ($T_{\text{std}}$), and maximum T gates per time step ($T_{\text{peak}}$). \begin{table}[h!] \centering \small \begin{tabular}{ccccccc} \hline\hline Application & $n_{\text{qubits}}$ & $T_{\text{count}}$ & $L$ & $T_{\text{avg}}$ & $T_{\text{std}}$ & $T_{\text{peak}}$ \\\hline IM & 500 & 9068348 & 20589 & 440 & 107 & 778 \\ GSE & 5 & 775522 & 546708 & 1.419 & 1.464 & 12\\\hline \end{tabular} \caption{T gate statistics for the Ising Model (IM) and Ground State Estimation (GSE) benchmarks. For our analysis, we consider a 500-qubit spin chain in our IM simulation, and we simulate a small molecule in GSE comprised of 5 spin orbital states. The reason $T_{\text{peak}}$ for IM can exceed the number of qubits is that in this calculation every S gate in the application has been decomposed into 2 T gates.} \label{tab:benchmarks} \end{table} The Ising Model and Ground State Estimation applications, and others in the same application class, have a predictable structure.
Contemporary methods to simulate quantum mechanical systems employ Trotter decomposition \cite{trotter1959product} to digitize the simulation, which involves large numbers of structurally identical Jordan-Wigner transformation circuits \cite{batista2001generalized}, each of which involves a series of CNOT gates (called the ``CNOT staircase'') followed by a controlled rotation operation. This arbitrary-angle rotation will often be decomposed into sequences of H, S, and T operations in a procedure called gate synthesis \cite{ross2014optimal}. As an example, finding the molecular ground state energy of the molecule Fe$_2$S$_2$ requires approximately $10^4$ Trotter steps for ``sufficient'' accuracy, each comprised of $7.4 \times 10^6$ rotations \cite{wecker2014gate}. Each of these controlled rotations can be decomposed to sufficient accuracy using approximately 50 T gates per rotation \cite{kliuchnikov2012fast}. All of this amounts to a total number of T gates of order $10^{12}$, which is also the number of prepared magic-states needed. In these types of applications, magic-state distillation will be responsible for between $50\%$ and $99\%$ of the resource costs when executing an error-corrected computation \cite{ding2018magic}. Because of this, the number of T gates present in an algorithm is often used as a metric for assessing the quality of a solution~\cite{Selinger:2013aa,amy2014polynomial}. \subsection{Bravyi-Haah Distillation Protocol} \label{subsec:BH} In order to execute T gates fault tolerantly, an interaction is required between a target logical qubit and an ancillary magic state qubit. The fidelity of the operation is then tied to the fidelity of the magic state qubit itself, which requires that magic states can be reliably produced at high fidelity. This is achieved through procedures known as distillation protocols. Distillation protocols are circuits that accept as input a number of potentially faulty raw magic states ($n$) and output a smaller number of higher fidelity magic states ($k$). The input-output ratio $n \rightarrow k$ is generally used to assess the efficiency of a protocol. Because many distillation protocols are extremely resource-intensive, a key design issue of quantum architectures is to optimize them. In this work we restrict our focus to a popular low-overhead distillation protocol known as the Bravyi-Haah distillation protocol, which has received much attention in the field recently \cite{jones2013multilevel, Fowler2013, campbell}. Here we describe in detail the process for preparing and distilling the magic-states. Bravyi-Haah state distillation circuits \cite{Bravyi_magic} take as input $3k+8$ low-fidelity states and output $k$ higher fidelity magic-states, and thus are denoted as the $3k+8\rightarrow k$ protocol. Notably, if the raw input (injected) states are characterized by error rate $\epsilon_{\text{inject}}$ (which could be different from the physical input error rate $\epsilon_{\text{in}}$ of Equation \ref{eq:etarg}, depending on hardware implementation), the output state fidelity is improved with this procedure to: \begin{equation}\label{eq:bherror} \epsilon_{\text{output}} = (1+3k)\epsilon_{\text{inject}}^2, \end{equation} or in other words, a second-order suppression of error.
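As a small numerical illustration of this suppression (in Python; the values of $k$ and $\epsilon_{\text{inject}}$ are hypothetical example inputs, not parameters taken from our benchmarks): \begin{verbatim}
# One round of the Bravyi-Haah (3k+8 -> k) protocol with hypothetical inputs
k = 2
eps_inject = 1e-3                       # raw injected magic-state error rate
n_inputs = 3 * k + 8                    # 14 low-fidelity input states consumed
eps_output = (1 + 3 * k) * eps_inject ** 2
print(n_inputs, eps_output)             # 14, 7e-06: quadratic error suppression
\end{verbatim} The quadratic dependence on $\epsilon_{\text{inject}}$ is what drives the tolerance threshold discussed next.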
This imposes a tolerance threshold on the underlying input error rate that can be precisely written as: \begin{align} \epsilon_{\text{thresh}} &\approx \frac{1}{3k+1}\label{eq:thresh} \end{align} because when $\epsilon_{\text{inject}} \ge \epsilon_{\text{thresh}}$, the output error rate is no better than where we started before distillation. Moreover, this process is imperfect. For any given implementation of this circuitry, the true yield could be lower than expected. The success probability of the protocol that attempts to output $k$ high fidelity states is, to leading order, given by: \begin{equation} P_{\text{success}} \approx 1-(8+3k)\epsilon_{\text{inject}}\label{eq:bhyield}. \end{equation} In performing a rigorous full system overhead analysis, these effects become extremely significant. \subsection{Block Codes} \label{subsec:block} In certain types of applications, the second-order error suppression achieved by a single round of Bravyi-Haah distillation is not enough. To overcome this, multiple rounds (also referred to as \emph{levels} in our work) of the distillation protocol can be concatenated to obtain higher and higher output state fidelity. To ensure successful execution of a program, systems must be able to perform all of the gates in the computation with an expected number of logical gate errors less than one. So the success probability desired for a specific application ($P_s$) relates to the required logical error rate per gate $P_L$ as follows: \begin{align} P_L \leq \frac{P_s}{N_{\text{gates}}} \end{align} where $N_{\text{gates}}$ is the number of logical gates in the computation. $P_L$ therefore sets a bound on the fidelity of generated magic states. Many circuits contain of order $10^{10}$ logical gates or more \cite{wecker2014gate}, while physical error rates may scale as poorly as $10^{-3}$ \cite{Fowler2013}. In these cases, clearly squaring the input error rate will not achieve the required logical error rate to execute the program. Instead, we can \textit{recursively} apply the Bravyi-Haah circuit $\ell$ times, with permutations of the intermediate output states in between distillation rounds. Throughout this work, we use the terminology ``round'' and ``level'' to both refer to a single iteration of the Bravyi-Haah distillation protocol within a factory. Constructing high fidelity states in this fashion is known as Block Code State Distillation \cite{jones2013multilevel}. As shown in Figure \ref{fig:block_pic}, realizing Bravyi-Haah block code protocols would require $6k+14$ total logical qubits \cite{campbell}. \begin{figure}[t!] \centering \includegraphics[width=\linewidth]{Figures/block_pic.pdf} \caption{The recursive structure of the block code protocol. Each block represents a module for the Bravyi-Haah $(3k+8)\rightarrow k$ protocol; lines indicate the magic-state qubits being distilled, while dots indicate the extra $3k+6$ ancillary qubits used, for a total of $6k+14$. This figure shows an example of a 2-level block code with $k=2$, so this protocol takes in total $(3k+8)^2=14^2$ states and outputs $k^2=4$ states with higher fidelity.
The qubits (dots) in round 2 are drawn at a larger size, indicating the larger code distance $d$ required to encode the logical qubits, as they have a lower error rate than in the previous round \cite{campbell}.} \label{fig:block_pic} \end{figure} \subsubsection{Magic-State Factory Error and Yield Scaling} To perform a rigorous full system overhead analysis, it is necessary to quantify the behavior of multi-level block code factories in terms of output state fidelity and production rate. By construction, the error rate of the produced magic-states will be squared after each round, so the final output state error rate after $\ell$ rounds of distillation will be $\sim \epsilon_{\text{inject}}^{2^{\ell}}$. Since the output states from the previous round will be fed into the next round, the success probability of a distillation module at round $r$ depends on the output error rate of the previous round $\epsilon_{r-1}$, i.e. $P_{\text{success}}^{(r)} = 1-(3k+8)\epsilon_{r-1}$. The success probability for the entire $\ell$-level factory will be explicitly derived later in Section \ref{sec:area}. \subsubsection{Magic-State Factory Area Scaling} Within any particular round $r$ of an $\ell$-round magic-state factory (where $1 \le r \le \ell$), the required number of \emph{physical} qubits defines the space occupied by the factory during that round. However, we will often use the \emph{logical} qubit as the unit of area, since translating to physical qubits simply picks up a $d_r^2$ multiplicative factor, as shown in Section \ref{subsec:surface}. In general, any particular round requires several \emph{modules}, each comprised of several distillation protocol circuits. A generic $n\rightarrow k$ protocol, under an $\ell$-level block code construction, will need a total number of protocols as follows: \begin{align}\label{eq:num_modules} N_{\text{distill}} = \sum_{r=1}^\ell N_r = \sum_{r=1}^\ell k^{r-1} n^{\ell - r} \end{align} \subsubsection{Magic-State Factory Time Overhead} Each round of distillation can be shown to require $11 d_r$ surface code cycles \cite{campbell}. Letting $d_r$ be the code distance for round $r$ (which depends upon the input and output error rates), we arrive at the total time to execute a full distillation cycle as: \begin{align} T_{\text{distill}} = 11 \sum_{r=1}^\ell d_r \label{eq:distilltime} \end{align} A full assessment of the area and time costs under our proposed architecture designs will be presented in more detail in Section \ref{sec:area} and Section \ref{sec:latency}. Specifically, we discuss how factory capacity, the distillation rounds of each factory, and the input physical error rate all affect the output state yield rate and the resulting space and time overhead. \section{Conclusion}\label{sec:conclusion} We present methods for designing magic-state distillation factory architectures that are optimized to execute applications that exhibit a specific parallelism distribution. By considering applications with different levels of parallelism, we design architectures to take advantage of these characteristics and execute the application with minimal space and execution time overhead. By carefully analyzing the interaction between various magic-state factory characteristics, we find that choosing the most resource-optimized magic-state distribution architecture is a complex procedure. We derive and present these trade-offs, and compare the architectures that have been commonly described in the literature.
These comparisons show a surprising picture: namely, that even a modest factory capable of producing just a single resource state per distillation cycle can outperform the more commonly described surplus factory in particular input error rate regimes. We also propose a method of distributing the total number of magic states to be produced into several smaller factories uniformly distributed on a machine. In doing this, we see that these types of architectures are capable of achieving higher output fidelities of their produced states, with added resilience against fluctuations of the underlying error rate, when compared to unified architectures composed of a single factory. While these designs are tailored to specific applications, we conjecture that distributed systems would in fact be more flexible in their abilities to execute applications with different amounts of parallelism. Intrinsic to their design is the ability to optionally compile smaller applications to various subunits of the machine. Because of this, these designs can be used to support a much wider range of application types than those comprised of a single factory. These systems also show that the trade-off between space and time is asymmetric. In quantum chemistry and simulation applications, we notice that the resource-optimized designs can require upwards of 2 orders of magnitude more physical qubits to be implemented, while they end up saving over 3 orders of magnitude in time. We find that magic-state access time, i.e. the latency induced specifically by stalling as magic states are produced, is a dominating effect in the execution of these applications. In order to mitigate these effects in a resource-aware fashion, designing a distributed system of several factories allows for efficient partitioning of the magic-state demand across the machine, at the cost of physical area. These conclusions can have physical impacts on near-term designs as well. Specifically, the construction of a factory architecture can imply the location of physical control signals on an underlying device. What we are showing, then, is the effect of several theoretical long-term designs, and the conclusion that distributed sets of factories outperform other designs should help motivate device fabrication teams as they decide which physical locations should be occupied by rotation-generating control signals. As a general principle, long term architectural design and analysis can help guide the study and development of near term devices, which ultimately will help hasten the onset of the fault-tolerant era \cite{preskill2018quantum}. \section{Future Work}\label{sec:future} There are a number of immediate extensions to this study: \begin{itemize} \item \emph{Comparing distributed factory topologies.} Choosing an optimal layout for a distributed factory design is potentially very difficult, and requires an ability to estimate the overheads associated with different layouts. Using architectural simulation tools and adapted network simulation mechanisms, we foresee the evaluation of two new architectures: peripheral and asymmetric-mesh placement. Peripheral placement refers to factories surrounding a central computational region, while asymmetric-mesh placement refers to embedding the factories throughout the machine itself.
\item \emph{Embedding data qubits within magic-state factories.} While the designs presented here assume that magic-state factory regions are to be considered black boxes that are not to be occupied by data qubits, because of their massive size requirements we imagine a system that embeds the relatively smaller number of data qubits within the factories themselves. A study of the effect of various embedding techniques on factory cycle latency could determine the efficiency of such a design. \item \emph{Advanced factory pipeline hierarchy.} We envision a concatenation of clusters of magic-state factories, targeting outputs that are continuous in time, and hence a reduction in the contention caused by distillation latency. In particular, each sub-region in the mesh contains multiple small, identical factories that are turned on asynchronously, so at each time step there will always be a factory that completes a distillation cycle, thus serving magic states continuously. \item \emph{Generalization to other distillation protocols.} Although the Bravyi-Haah protocol studied in this paper is among the best known protocols, little analysis has been done on other techniques discovered recently \cite{haah2017magic}. \item \emph{Optimizing the internal mapping and scheduling of magic-state factories.} This work has modeled factories as black-boxed regions that continuously produce resources. A realistic implementation of those factories that optimizes for internal congestion would significantly reduce factory overhead, in conjunction with the designs proposed in this work that optimize for external congestion. This was studied in \cite{ding2018magic}. \item \emph{Flexibility of distributed magic-state architectures.} While these designs are tailored to applications of a certain parallelism distribution, a study could analyze designs that balance domain-specific optimization against general application compatibility. \end{itemize} \section{Introduction}\label{sec:Introduction} Quantum computers promise to provide the computational power required to solve classically intractable problems and to have significant impacts in materials science, quantum chemistry, cryptography, communication, and many other fields. Recently, much focus has been placed on constructing and optimizing Noisy Intermediate-Scale Quantum (NISQ) computers \cite{preskill2018quantum}; however, over the long term, quantum error correction will be required to ensure that large quantum programs can execute with high success probability. Currently, the leading error correction protocol is known as the surface code \cite{dennis2002topological,FowlerSurface}, which benefits from low overheads in terms of both fabrication complexity and the amount of classical processing required to perform decoding. A common execution model of machines protected by surface code error correction requires a process called {\it magic-state distillation}. In order to perform universal computation on a surface code error corrected machine, special resources called \textit{magic states} must be prepared and interacted with qubits on the device. This process is very space and time intensive, and while much work has been performed optimizing the resource preparation circuits and protocols to make the distillation process run more efficiently internally \cite{Bravyi_magic,haah2017magic,jones2013multilevel,Fowler2013, ding2018magic}, relatively little focus has been placed upon the design of an architecture that {\it generates} and {\it distributes} these resources to a full system.
This study develops a realistic estimate of the resource overheads of, and examines the trade-offs present in, the architecture of a system that prepares and distributes magic states. In particular, instead of using a single large factory to produce all of the magic states required for an application, the key idea of our work is to distribute this demand across several smaller factories that together produce the desired quantity. We specifically characterize these types of distributed factory systems by three parameters: the total number of magic states that can be produced per cycle, the number of smaller factories on the machine, and the number of distillation rounds that are executed by each factory. The primary trade-off we observe is between the number of qubits (area/space) and the amount of time (latency) spent in the system: we can design architectures that use minimal area but impose large latency overheads due to a lower magic-state output rate, or we can occupy larger amounts of area dedicated to resource production, aiming to maximally alleviate application latency. The two metrics, space and time, are equally important, as it is easy to build small devices that execute many gates or large devices that execute few. This concept is closely related to the idea of ``Quantum Volume'' \cite{bishop2017quantum}, when machine noise and topologies are taken into consideration. To capture the equal importance of both of these metrics, we use a space-time product cost model in which the two metrics simply multiply together. This model has been used elsewhere in similar analysis \cite{Fowler2013,ding2018magic,paler2017fault,javadi2017optimized}. Figure~\ref{fig:tradeoffcartoon} illustrates the opposing trends for space and time when we increase the magic-state production rate. Our goal is to find the ``sweet spot'' on the combined space-time curve, where the overall resource overhead is at its lowest. \begin{figure}[h!] \centering \includegraphics[width=\linewidth]{Figures/tradeoffcartoon.pdf} \caption{Space and time tradeoffs exist for distributions of resource generation factories within quantum computers. These trends are shown assuming the same total factory output capacity. By explicit overhead analysis, we can discover optimal space-time volume design points.} \label{fig:tradeoffcartoon} \end{figure} In summary, this paper makes the following contributions: \begin{enumerate} \item We present precise resource estimates for implementing different algorithms with magic-state distillation on a surface code error corrected machine. We derive the estimates from modeling and simulating the generation and distribution of magic states to their target qubits in the computation. \item We quantify the space and time trade-offs of a number of architectural configurations for magic-state production, based on design parameters including the total number of factories, the total number of output states these factories can produce, and the desired fidelity of the output magic states. \item We study different architectural designs of magic-state distillation factories, and present an algorithm that finds the configuration that minimizes the space-time volume overhead. \item We highlight the nontrivial interactions of factory failure rates and achievable output state fidelity, and how they affect our design decisions. We analyze the sensitivity of these optimized system configurations to fluctuations in underlying input parameters.
\item We discover that dividing a single factory into multiple smaller distributed factories can not only reduce the overall space-time volume overhead but also build more resilience into the system against factory failures and output infidelity. \end{enumerate} The rest of the paper is structured as follows. In Section \ref{sec:bg}, a basic background of quantum computation, error correction, magic-state distillation and the Bravyi-Haah distillation protocol, as well as the block-code state-distillation construction, is given. Section \ref{sec:related} describes previous work in this area. Sections \ref{sec:area} and \ref{sec:latency} discuss important space and time characteristics of the distillation procedures that we consider, and derive and highlight scaling behaviors that impact full system overhead analysis. Section \ref{sec:tradeoffs} describes in detail how these characteristics interact, and shows how these interactions create a design space with locally optimal design points. Section \ref{sec:method} details the system configurations we model, describes a novel procedure for discovering the optimal design points, and discusses the simulation techniques used to validate our model derivations. Section \ref{sec:results} shows our results and explains the impacts of optimizing these designs. Sections \ref{sec:conclusion} and \ref{sec:future} conclude and discuss ideas to be pursued as future work. \section{Factory Latency Overhead}\label{sec:latency} This section presents a systematic study of the time overhead of realizing magic-state distillation protocols. First, we will examine the characteristics of the T gate demand in our benchmark programs by introducing the concept of the T distribution. Next, we will study the latency overhead caused by delivering magic states to wherever T gates are demanded, by looking at the contention and congestion factors. Finally, we will arrive at an analytical model for the overall distillation latency, integrating the information from the program distribution. \subsection{Program Distributions}\label{subsec:program} While the majority of prior works on this subject abstract algorithm behavior into a single number, the total T gate count, we argue that the distribution of T gates throughout an algorithm has a significant impact on the performance of the magic-state factory. For example, a program with a bursty T distribution, where a large number of T gates are scheduled in a few time steps, puts significant pressure on the factory's capability to produce a large number of high fidelity magic states quickly. In order to quantify this behavior, we choose two quantum chemistry algorithms that represent the two extremes of T gate parallelism. On one hand, the {\it Ground State Estimation} algorithm is an application with very low T gate parallelism. An algorithm attempting to find the ground state energy of a molecule of size $m$, this application can be characterized by a series of rotations on single qubits \cite{whitfield2011simulation}. The {\it Ising Model}, on the other hand, is a highly parallel application demanding T gates at a much higher rate. This application simulates the interaction of an Ising spin chain, and therefore requires many parallelized operations on single qubits, along with nearest neighbor qubit interactions \cite{barends2016digitized}.
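To make the notion of a T distribution concrete, a compiled schedule can be reduced to a histogram $D[t]$ counting how many timesteps request exactly $t$ parallel T gates; this is the representation used by the latency model in Section \ref{subsec:exec}. A minimal sketch (in Python; the per-timestep counts are invented purely for illustration): \begin{verbatim}
from collections import Counter

# Hypothetical per-timestep parallel T-gate counts from a compiled schedule
t_counts = [3, 0, 5, 3, 7, 0, 3, 12, 5]

D = Counter(t_counts)          # D[t] = number of timesteps requesting t T gates
T_peak = max(t_counts)         # highest parallel T-gate demand in any timestep
T_avg = sum(t_counts) / len(t_counts)
print(dict(D), T_peak, T_avg)
\end{verbatim}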
To capture application characteristics, we use the ScaffCC compiler toolchain, which supports application synthesis from high-level quantum algorithms to physical qubit layouts and circuits \cite{scaffcc}. The majority of time steps in the Ising Model algorithm have a large number of parallel T gates, with a mean T load of 440, whereas Ground State Estimation has no more than 12 T gates at each time step. As opposed to just using the single T gate count to characterize algorithms, we will from now on use the T load distribution. \begin{figure*}[h!] \centering \begin{subfigure}[b]{0.48\textwidth} \includegraphics[width=\linewidth]{Figures/latency_k_l_ising.pdf} \caption{Ising Model} \label{fig:time_is} \end{subfigure} ~ \begin{subfigure}[b]{0.48\textwidth} \includegraphics[width=\linewidth]{Figures/latency_k_l_gse.pdf} \caption{Ground State Estimation} \label{fig:time_gse} \end{subfigure} \caption{(a)-(b) Total number of surface code cycles required by the Ising Model and Ground State Estimation applications. Both figures are plotted for three different factory block-code levels, i.e. $X=1$ and $\ell=1, 2, \text{ and } 3$.} \label{fig:fact_time} \end{figure*} \subsection{T-Gate Contention and Congestion}\label{subsubsec:contentioncongestion} In order to fully assess the space-time volume overhead of the system, we require a low-level description of how the produced magic-states are being consumed by the program. As discussed in Section \ref{sec:bg}, a T gate requires braiding between the magic-state qubit in the factory and the target qubit that the T gate operates on. Now suppose our factory is able to produce $K$ high-fidelity magic states per distillation cycle, and at some time step the program requests $t$ T gates. If we demand more than the factory can offer at once (i.e. $t > K$), then naturally only $K$ of those requests can be served, while the others have to wait for at least another distillation cycle. So we will say that the network has {\it contention} when the demand exceeds the supply capacity. By contrast, we define network {\it congestion} to capture the latency introduced by the fact that some braids may fail to route from the target to the factory on the 2D surface code layout, due to high braiding traffic. To estimate the overhead of network congestion, we perform an average case analysis without committing to a particular routing algorithm. Ideally, in the contention-free limit where the number of requests $t$ is less than $K$, all requests could be scheduled and executed in parallel. However, oftentimes the requests will congest due to limitations of routing algorithms. We define a congestion \textit{factor} $C_g$ that represents the total latency required to execute all of the T gate requests at any given time. We model congestion as a factor that scales proportionally to the number of requests $t$ made at any given time, within a particular region serviced by a factory. This assumes a general topology in which a factory is placed in the center of a region, and all of the surrounding data qubits are served by this factory alone. Naturally, the center of the region is quite dense with T gate request routes. In general, for a reasonable routing algorithm, the number of routing options increases as the available area increases. However, because all of the routes have their destination in the center of the region, increasing the area of the region has no such effect.
In fact, the \textit{distance} of a T request source from the factory increases the likelihood of congestion, by a simple probabilistic argument. There may be other T requests blocking available routes, and the number of these possible blocking requests increases as the distance between a request and the factory increases. The combination of these effects interacts with the complexity of a routing algorithm, and results in a scaling relationship proportional to both the T request density $t$ and the maximum distance of any T request within any of these regions: \begin{align} C_g \sim c\sqrt{t} \end{align} for some constant $c$, depending upon the routing algorithm. We validated this congestion model in simulation using the simulation tools and compiler toolchain of \cite{javadi2017optimized}, and found that they do indeed agree. Section \ref{sec:method} discusses this in greater detail. \subsection{Resolving T-gate Requests}\label{subsec:exec} For any given program, characterized as a distribution $D$ of the T load, we denote by $D[t]$ the number of timesteps in the program at which $t$ parallel T gates are to be executed. The number of iterations that the factory needs to resolve the $t$ requests can then be computed based on the following latency analysis. In particular, in order to maximize the utilization of the factory, we execute as many outstanding T gate requests as possible in parallel. When the number of requests $t$ exceeds the factory yield $K$, we need to stall the excess requests. We denote by $s = \lfloor t/K \rfloor$ the number of fully-utilized iterations. So we serve at full capacity $s$ times, and each time a congestion factor is incurred, as discussed in Section \ref{subsubsec:contentioncongestion}. It follows that the first $sK$ requests are completed in $s \sqrt{K}$ distillation cycles. Finally, the remaining $(t-sK)$ outstanding requests are executed in $\sqrt{t-sK}$ cycles. Notice that the time it takes to execute a T gate is typically shorter than the factory distillation cycle time. So under the buffer assumption made earlier, we can stage the execution of requests within a distillation cycle such that no data dependencies are violated, as long as there are magic states available in the factory. The time required to produce some constant number $k$ of states is $T_{\text{distill}}$, while the time required to deliver $k$ states in parallel is $T_t\sqrt{k}$ due to network congestion. So the number of distillation cycles needed to supply a single cycle of $k$ T gate requests is given by the ratio $T_{\text{t}}\sqrt{k}/T_{\text{distill}}$. Substituting $k = K/X$ and $k = (t - sK)/X$ as described earlier, we can calculate the number of distillation iterations we need to serve $t$ T gates in a particular timestep as: \begin{align} n_{\text{distill}} = \frac{T_{\text{t}}}{T_{\text{distill}}} \cdot \Bigg(s \cdot \sqrt{\frac{K}{X}} + \sqrt{\frac{t-sK}{X}}\Bigg) \end{align} where $K$ is again the yield of each iteration from Equation \ref{eq:koutput}. Putting it together, we obtain our final time overhead of an application: \begin{align} T_{\text{total}} &= T_{\text{distill}} \cdot \Big(\sum_{t = 0}^{T_{\text{peak}}} n_{\text{distill}}\cdot D[t]\Big) \end{align} where $T_{\text{peak}}$ is the maximum number of parallel T gates scheduled at one timestep. Notice that $T_{\text{total}}$ is independent of $T_{\text{distill}}$, as the leading distillation cycle time cancels against the ratio $T_t/T_{\text{distill}}$ inside $n_{\text{distill}}$.
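A direct transcription of this latency model into Python may clarify the bookkeeping. The sketch below is our own illustration (the function names are hypothetical); it assumes the distribution $D$ is a mapping from a parallel T count $t$ to the number of timesteps $D[t]$, and takes $K$, $X$, $T_t$, and $T_{\text{distill}}$ as given:
\begin{verbatim}
import math

def n_distill(t, K, X, T_t, T_distill):
    # Distillation cycles needed to serve t parallel T-gate requests
    # from X factories with combined per-cycle yield K.
    s = t // K  # number of fully-utilized iterations
    return (T_t / T_distill) * (s * math.sqrt(K / X)
                                + math.sqrt((t - s * K) / X))

def total_latency(D, K, X, T_t, T_distill):
    # T_total = T_distill * sum_t n_distill(t) * D[t]; the leading
    # T_distill cancels against the T_t/T_distill ratio above.
    return T_distill * sum(n_distill(t, K, X, T_t, T_distill) * D[t]
                           for t in D if t > 0)
\end{verbatim}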
The scaling of this function is shown in Figure \ref{fig:time_k_x}, and is compared in Figure \ref{fig:fact_time} across different applications. \section{Area and Latency Trade-offs}\label{sec:tradeoffs} In this section, we discuss some of the motivations of our proposed algorithm for optimizing space-time resource overhead, based on the area and latency analysis that we built up in the previous sections. The Bravyi-Haah protocol shows an area \textit{expansion} when a single factory is ``divided'' into many smaller factories; that is, the total area of $x$ factories, each with some capacity $k$, is larger than the area of a factory with capacity $x\cdot k$. Figure \ref{fig:area_k_x} illustrates this trend, arising from the original area law of Equation \ref{eq:arealaw}. {\bf Why do we want a distributed factory architecture?} Although it might at first seem undesirable to divide a single factory into many factories due to the area expansion, there are many advantages to doing so. One such advantage is that smaller factories can produce states with higher fidelity. So, for a fixed output capacity $K$, increasing the number of factories used to produce that total $K$ allows all of those $K$ states to have higher fidelity. The output error rate scales inversely with the number of factories on the lattice for a fixed output capacity $K$, as seen in Equation \ref{eq:errorround}. This provides us with the unique ability to actually manipulate the underlying \textit{physical error rate threshold}. In particular, substitution of $K/X$ for $K$ in all of the previous equations shows that the yield threshold now also has inverse dependence upon the number of factories used. As Figure \ref{fig:xsweeperrorthresh} shows, for a fixed output capacity and block code level $\ell$, increasing the number of factories on the lattice can greatly increase the tolerable physical error rate under which the factory architecture can operate. With this knowledge, we are immediately presented with architectural tradeoffs. Using the representation of programs as distributions of T gate requests, any application can be characterized by a $T_{\text{peak}}$, again defined as the highest number of parallel T gate requests in any timestep of an application. For a ``surplus'' configuration, a system may set the factory output rate $K = T_{\text{peak}}$, so as to never incur any latency during the program execution. However, as the threshold in Equation \ref{eq:thresh} indicates, this sets an upper bound on the tolerable input error rate $\epsilon_{\text{in}}$. A distributed factory architecture thus provides a system parameter enabling designs that tolerate higher error rates and still achieve the same output capacity $K$, at the expense of area, as seen in the area law relationship of Figure \ref{fig:area_k_x}. Conversely, systems constructed with confident knowledge of low underlying physical error rates may reduce the overall area of a surplus factory configuration by reducing the number of individual factories, up to a certain point. These are the tradeoffs in the design space that this work explores; in fact, for representative benchmarks we find lower-capacity configurations that save orders of magnitude in overall space-time overhead. \section*{Acknowledgements} This work was funded in part by NSF Expeditions in Computing grant 1730449, Los Alamos National Laboratory and the U.S.
Department of Defense under subcontract 431682, by NSF PHY grant 1660686, and by a research gift from Intel Corporation. \bibliographystyle{ieeetr} \section{Evaluation Methodology}\label{sec:method} \subsection{System Configuration} Here we lay out all of the assumptions made about the underlying systems that we are studying. \begin{table}[b] \centering \small \begin{tabular}{c|l} \hline\hline Configuration & Description\\\hline \multirow{ 3}{*}{Surplus} & One central factory that can produce enough\\ & states to always meet the demand at each time-\\ & step of the program as in \cite{isailovic2008running,van2013blueprint,campbell}. \\\hline \multirow{ 2}{*}{Singlet} & One central factory that uses minimal \\ & area and produces only one state per cycle. \\\hline \multirow{2}{*}{Optimized-Unified}& One central factory that outputs an optimized \\ & number of output states per distillation cycle\\ \hline \multirow{2}{*}{Optimized-Distributed}& An optimized set of factories that together \\ & output an optimized number of output states \\ \hline \hline \end{tabular} \caption{List of architecture configurations explored in this work.} \label{tab:configurations} \end{table} First, we assume that the factories will be operated continuously. This means that every $T_{\text{distill}}$, the factories produce another $K_{\text{output}}$ states. This abstracts away the time needed to deliver these states to their destinations, which would have to be performed in a real system before the next distillation iteration begins. In such real systems, we imagine an architecture that supports a limited, fixed-size buffer region so that the subsequent distillation cycle will not overwrite the previously completed states. However, this is a small constant offset in time that applies to all studied designs symmetrically, so it is omitted. Because the factories are always online and producing magic states, the overall time overhead is then equal to the number of distillation cycles required to execute all the scheduled T gate requests, multiplied by the time taken to perform a distillation iteration, $T_{\text{distill}}$, from Equation \ref{eq:distilltime}. Next, we assume three different levels of uniformity in these designs: all distributed factories are laid out uniformly on the surface code lattice as in Figure \ref{fig:distfactory} (i.e. they are an equal distance apart), all factories in a distributed architecture are identical (i.e. they all operate with the same parameters such as $K$ and $\ell$), and within each factory each block code round is identical (i.e. the rounds are composed of identical $n \rightarrow k$ protocols). Note that Campbell et al. in \cite{campbell} allow varying $k$ within a single factory, across different rounds. In performing our evaluations we consider four different system configurations: \emph{surplus} architectures that minimize application latency by setting the magic-state output capacity to the peak T gate request count in an application, \emph{singlet} architectures that minimize required space for the factory by producing only a single state per distillation cycle, \emph{optimized-unified} architectures that use one central factory with an optimized choice of output capacity $K$ and number of distillation rounds $\ell$, and \emph{optimized-distributed} architectures that choose an optimum output capacity $K$ distributed into an optimum number $X$ of factories, each utilizing $\ell$ distillation rounds. These architectures are summarized in Table \ref{tab:configurations}.
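As a small illustration of how these configurations pin down the factory capacity (our own sketch; \texttt{configure} is a hypothetical helper), the surplus and singlet baselines follow directly from the T distribution, while the optimized designs defer to the search procedure of Section \ref{subsec:algo}:
\begin{verbatim}
def configure(D):
    # Baseline capacity choices from the table of configurations:
    # surplus pins K to the peak parallel T demand, singlet
    # produces a single state per distillation cycle.
    T_peak = max(t for t in D if D[t] > 0)
    return {
        "surplus": {"K": T_peak, "X": 1},
        "singlet": {"K": 1, "X": 1},
        # "optimized-unified" / "optimized-distributed": choose
        # (K, X) via the optimization procedure of Algorithm 1.
    }
\end{verbatim}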
\begin{figure}[t!] \centering \includegraphics[width=\linewidth]{Figures/fullsuperimposedplot.png} \caption{Space-time volume minimization under error threshold constraints imposed by the target error rate for each block code level. An application will set a target error rate (black) that the factory must be able to achieve in output state fidelity. On the lower plot, levels 2 and 3 are the only levels available that can satisfy this. In the upper plot, we find that the lowest volume in the feasible area is located on the level 2 factory feasibility line. Recall that the volume shapes are explained earlier in Section \ref{sec:latency}. Here the tails after $K \approx 800$ show an increase in volume, as the added capacity grows the factory areas while maintaining constant latency.} \label{fig:errorfeasible} \end{figure} \subsection{Optimization Algorithm}\label{subsec:algo} As keen readers may have already observed from Figure \ref{fig:fact_time} and Figure \ref{fig:area_law}, for fixed output capacity $K$, it costs us both in time latency and in factory footprint to implement a high-$\ell$ block-code factory. The only reason we design for high $\ell$ is to achieve the desired target error rate. This relation is best captured in the bottom half of Figure \ref{fig:errorfeasible}, where the $L=1$ factory is not feasible for $K\ge 1$ since its output error rate is higher than the target error rate, while the $L=2$ factory is feasible for $K \in [1,50]$, and the $L=3$ factory is feasible for the entire plotted range. We combine all of the details of the explicit overhead estimation derived above in order to find optimal design points in the system configuration space. To do this, we must ensure that designs are capable of producing the target logical error rate for an application. Additionally, there exists a set of constraints $C$ that $K,X \in \mathbb{Z^+}$ have to satisfy: (i) $1 \le X \le K$; (ii) $K/X \le (1-8\epsilon_{\text{inject}})/(3\epsilon_{\text{inject}})$, due to the Bravyi-Haah protocol error thresholds. With the feasible space mapped out, standard nonlinear optimization techniques are employed to explore the space and select the space-time optimal design point.
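A sketch of the feasibility test used to map out this space is given below (ours, under the stated Bravyi-Haah error scaling; \texttt{feasible} is a hypothetical name). A candidate point $(K, X)$ at level $\ell$ is admitted only if it satisfies the constraints $C$ and meets the target output error rate:
\begin{verbatim}
def feasible(K, X, eps_inject, eps_target, ell):
    if not (1 <= X <= K):  # constraint (i)
        return False
    if K / X > (1 - 8 * eps_inject) / (3 * eps_inject):  # constraint (ii)
        return False
    k = (K / X) ** (1.0 / ell)  # per-round block-code parameter
    eps_out = (1 + 3 * k) ** (2 ** ell - 1) * eps_inject ** (2 ** ell)
    return eps_out <= eps_target  # target logical error rate met
\end{verbatim}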
\begin{algorithm}[h] \caption{Space-time Optimization Procedure} \label{alg:opt} \algorithmicrequire{\;$P_s$, $N_{\text{gates}}$, $\epsilon_{\text{inject}}$, distribution $D$ and constraints $C$} \algorithmicensure{\; $K$, $X$} \begin{algorithmic}[1] \Procedure{Optimize}{} \State $K \gets 1$, $X \gets 1$, $\ell_{\text{max}} = 5$ \State $\epsilon_{\text{target}} \gets P_s/N_{\text{gates}}$ \For{$\ell \in [1,\ell_{\text{max}}]$} \State $k_\ell \gets (K/X)^{1/\ell}$ \State $n_\ell \gets 3k_\ell + 8$ \For{$r \in \{1, \cdots, \ell\}$} \If{$r==\ell$} $\epsilon_r \gets \epsilon_{\text{target}}$ \Else \;$\epsilon_r \gets (1+3k_\ell)^{2^r-1}\epsilon_{\text{inject}}^{2^r}$ \EndIf \State $d_r \gets \text{Solve}\{d_r \cdot (100 \epsilon_{\text{in}})^{(d_r+1)/2} = \epsilon_r, \; d_r\}$ \EndFor \State $R\equiv K/X \gets \text{Solve}\{\epsilon_{\ell} = \epsilon_{\text{target}},\;R\}$ \If {$R \geq 1$} \State $K_{\text{output}}\gets K \cdot \prod_{r=1}^{\ell}\big[(1-n_\ell\cdot\epsilon_{\text{inject}})\epsilon_r\big]$ \State $s \gets \lfloor t/K_{\text{output}}\rfloor$ \State $T_{\text{t}} \gets 4d_\ell + 4$ \State $T_{\text{distill}} \gets 11\sum_{r=1}^\ell d_r$ \State $n_{\text{distill}} \gets \frac{T_{\text{t}}}{T_{\text{distill}}} \cdot \Big(s \cdot \sqrt{\frac{K}{X}} + \sqrt{\frac{t-sK}{X}}\Big)$ \State $T_{\text{total}} \gets T_{\text{distill}}\sum_{t=0}^{T_{\text{peak}}}n_{\text{distill}}\cdot D[t]$ \State $A_{\text{factories}} \gets X\cdot n_\ell^{\ell-1}\cdot(6k_\ell + 14)\cdot d_1^2$ \State $(K,X) \gets \;\argmin_{(K,X):C}\; A_{\text{factories}} \cdot T_{\text{total}}$ \Else \State $\ell \gets \ell + 1$ \EndIf \EndFor \State \Return $K,X$ \EndProcedure \end{algorithmic} \end{algorithm} With these constraints in mind, we explore the space by first selecting the lowest $\ell$ possible. As the area law and full volume scaling trends of the previous sections indicate, if there are any feasible design points with $\ell = \ell_0$, then any feasible design points for systems with $\ell_i > \ell_0$ will be strictly greater in overall volume. This is somewhat intuitive, as concatenation of block code protocols is very costly. With the lowest $\ell$ selected, we check to see if there exist any feasible design points for this $\ell$ by checking for solutions to the equation: \begin{align} (1+3k^{\sfrac{1}{\ell}})^{2^{\ell}-1}\epsilon_{\text{inject}}^{2^{\ell}} \leq \frac{P_s}{N_{\text{gates}}} \end{align} If the $K$ that solves this equation is greater than or equal to 1, then there does exist feasible design space along this $\ell$, and the algorithm continues. Otherwise, $\ell$ is incremented. Next, nonlinear optimization techniques are used to search within the mapped feasible space for optimal design points in both $K$ and $X$. \subsection{Simulation and Validation}\label{sec:simulation} \begin{figure}[t] \centering \includegraphics[trim={0cm 0cm 0cm 0cm}, width=\columnwidth]{Figures/simvalidate2.pdf} \caption{Model validation by simulation. The simulation data (blue line) lies between the upper bound model prediction that overestimates congestion (orange line), and the congestion-free lower bound (green line). } \label{fig:simwithlower} \end{figure} This section explores the validity of our models through empirical evaluation of the space-time resources. To do this, we improve the surface code simulation tool from~\cite{javadi2017optimized} to accurately assess the latency and qubit cost of fully error-corrected applications with various magic-state distillation factory configurations.
Specifically, we added support for arbitrary factory layouts, which manifest as black-boxed regions dedicated to factories that cannot be routed through during computation, combined with sets of locations of produced magic states. The result is a cycle-precise simulator that accurately performs production and consumption of magic states, including all necessary routing. One implementation detail that is supported is the ability to dynamically reallocate specific magic-state assignments during runtime. Statically, each T gate operation is prespecified with a particular magic-state resource, located along the outer edge of a factory. During runtime, this can introduce unnecessary contention, as two nearby logical qubits can potentially request the same magic state. This is avoided by implementing online magic-state resource shuffling, so that if the particular state that was requested is unavailable, the system selects the next nearest state that is available. If no such states exist, the T gate is stalled until the next distillation cycle is completed. Figure~\ref{fig:simwithlower} shows simulation results superimposed on top of the analytically derived ones. We can see that the model shows the same trend as the simulation behavior (blue line), and thus we will be able to show relative tradeoffs between capacity and latency. For simplicity the validation is performed on a single unified factory located at the center of the surface code lattice. The results extend well to multiple factories, because in the distributed case each factory will be responsible for magic-state requests in a sub-region of the lattice. We can validate this by simulating optimal operating points in the space-time trade-off spectrum and comparing them to our expectation from the model. Using simulation data, we re-plot our idealized tradeoff of Figure~\ref{fig:tradeoffcartoon} for the Ising Model application and show the results in Figure~\ref{fig:xmen}. We see that as factory capacities increase, the application's execution time improves at the expense of its qubit count. In this figure, the space-time volume is sketched in green, and has two near-optimal points: one with relatively few qubits but high latency, and vice versa. The worst performance occurs in the middle of this spectrum, when the transition from level 1 to level 2 distillation needs to occur (causing a sudden jump in qubits, but not much latency improvement). \begin{figure}[t] \centering \includegraphics[trim={0cm 0cm 0cm 0cm}, width=\columnwidth]{Figures/xmen.pdf} \caption{Space-time tradeoff observed empirically in simulation for varying factory capacities. A space-time volume (green line) can be chosen at $K\approx40$, which is an optimal, minimized value on this curve. It corresponds here to a low-qubit, high-latency configuration. Notice that another configuration (at $K\approx300$) could be chosen, corresponding to a high-qubit, low-latency configuration.
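The online reallocation policy described above admits a compact sketch (ours; the names and the distance callback are hypothetical, not the simulator's actual interface):
\begin{verbatim}
def resolve_request(assigned, available, distance):
    # Serve the statically assigned magic state if it is still
    # available; otherwise fall back to the nearest available state;
    # return None to signal a stall until the next distillation
    # cycle completes.
    if assigned in available:
        return assigned
    if not available:
        return None
    return min(available, key=distance)
\end{verbatim}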
In this case, the former of these choices is more resource optimized, as the space-time cost is lower.} \label{fig:xmen} \end{figure} \section{Related Work}\label{sec:related} \begin{figure}[t] \centering \begin{subfigure}[b]{0.23\textwidth} \includegraphics[width=\linewidth]{Figures/unifactories.png} \caption{Single unified factory with \emph{large} capacity} \label{fig:unifactory} \end{subfigure} ~ \begin{subfigure}[b]{0.23\textwidth} \includegraphics[width=\linewidth]{Figures/distfactories.png} \caption{A number of distributed factories, each with \emph{smaller} capacity} \label{fig:distfactory} \end{subfigure} \caption{The concept of a unified versus distributed factory architecture, embedding factories (green blocks) within the computational surface code region (blue circles).} \label{fig:factoryconcept} \end{figure} A number of prior works have focused on designing efficient magic-state distillation protocols \cite{Bravyi_magic,anwar2012qutrit,meier2012magic,magic_states}. There are also works that aim to concatenate different protocols together to reduce the overall cost or improve output rate and fidelity \cite{jones2013multilevel,campbell2012magic}. The problem of scheduling and mapping the distillation circuit is tackled in \cite{ding2018magic} by taking advantage of the internal structure of the distillation protocol, and by minimizing CNOT-braid routing congestion. The aim of that work is also to more efficiently implement the distillation process, which is different from ours, as we instead aim to optimize a full system architecture built around these protocols and construct factory arrangements that efficiently deliver output magic states to their intended target qubits. Prior work on this subject has often assumed either that magic states will be prepared offline in advance \cite{Fowler2013,Jones}, or that the production rate is set to keep up with the {\it peak} consumption rate in any given quantum application, with any excess states placed in a buffer \cite{campbell,van2013blueprint}. This paper operates under the different assumption that magic-state factories will be active during the computation, and that states will not be able to be prepared offline or in advance. We do this to characterize the performance of the machine online, and to introduce the complexity of resource state distribution throughout the machine, a problem that has been studied well in classical computing systems but has received less focus in this domain. Other works closely related to architectural design have optimized ancilla production factories that operate in different error correcting codes \cite{paetznick2011fault,isailovic2008running}, or analyzed the overhead of CNOT operations, which dominate other classes of applications like quantum cryptography and search optimization \cite{javadi2017optimized}. Our work focuses instead on quantum chemistry and simulation applications that are likely to represent a large proportion of quantum workloads in both the near and far term. \section{Results}\label{sec:results} In this section we present the resource requirements of various magic-state factory architectures, and show that by considering the scaling behaviors that we have highlighted and searching the design space with our optimization algorithm, we can discover system configurations that save orders of magnitude in quantum volume. We first compare the overheads of the surplus and singlet architectures, which serve as baselines against which we compare our optimized architectures.
We then compare the surplus architecture with the optimized-distributed design found with our optimization algorithm. We look at two representative benchmarks for the quantum chemistry and quantum simulation fields, the Ising Model \cite{barends2016digitized} and Ground State Estimation \cite{whitfield2011simulation} algorithms, as well as how the performance of these architectures changes as the benchmarks scale up in size. Next, we detail the space and time trade-off that is made in our resource-optimized design choices, and show that the latency induced by a design is the more dominant factor in these applications. We then present a full design space comparison, showing the performance of the surplus design against the singlet design, as well as the optimized-unified factory design, all compared to the performance of the optimized-distributed design. Lastly, we analyze the sensitivity of these designs to fluctuations in the underlying physical error rates, and show that building out a distributed factory design adds robustness that makes the architecture perform well for a wider range of input parameters. \subsection{Comparing Surplus and Singlet Architectures} \begin{figure}[t] \centering \includegraphics[width=\linewidth]{Figures/timespaceoptimal.pdf} \caption{(Color online) Comparing surplus and singlet designs. There are regions where each outperforms the other, showing great sensitivity to the underlying physical error rate and the corresponding required $\ell$. Recall that the step-like shape is due to the level transitions explained in Section \ref{subsec:algo}.} \label{fig:timespaceoptimal} \end{figure} We begin with Figure \ref{fig:timespaceoptimal} by comparing two architectures that aim solely to minimize application latency or required space. This comparison represents the range between the two ends of the design space spectrum for single factory architectures, and each shows a particular error rate range over which it performs more optimally. Initially, at the highest input error rate, the space-optimal singlet design requires more resources than the time-optimal surplus design, as the application suffers from excessive latency due to magic-state factory access time. Note the inflection points at $10^{-3.5}$ and $10^{-4.5}$ input error rates. At these points, the singlet factory is able to reduce the number of rounds of distillation it must perform, as input error rates are sufficiently low. Over this region, the reduction in area compensates for the expansion in computation time, and the design outperforms the much larger surplus factory configuration. At $10^{-4.5}$, the surplus factory is able to operate with fewer distillation rounds as well, enabling this configuration to outperform the singlet design. This behavior is surprising, as it indicates that, with respect to a high-parallelism application, there are input error rate regions where intuitively conservative, space-minimizing designs are able to outperform what seem like aggressively optimized designs. We see this because we are comparing space and time simultaneously, which allows us to see that the trade-off is asymmetric and that these factors interact non-trivially.
\subsection{Optimized Design Performance} \begin{figure*}[t] \centering \begin{subfigure}[b]{0.47\textwidth} \includegraphics[width=\linewidth]{Figures/ising500results.pdf} \caption{Ising Model N=500} \label{fig:isingresults} \end{subfigure} \begin{subfigure}[b]{0.47\textwidth} \centering \includegraphics[width=\linewidth]{Figures/gseresults.pdf} \caption{Ground State Estimation} \label{fig:gseresults} \end{subfigure} \begin{subfigure}[b]{0.47\textwidth} \includegraphics[trim={0 0 0 1cm}, width=\linewidth]{Figures/ising1000reductions.pdf} \caption{Ising Model N=1000} \label{fig:ising1000} \end{subfigure} \begin{subfigure}[b]{0.47\textwidth} \centering \includegraphics[width=\linewidth]{Figures/ising2000reductions.pdf} \caption{Ising Model N=2000} \label{fig:ising2000} \end{subfigure} \caption{(Color online) (a)-(b) Resource reductions of optimized-distributed designs over surplus designs for both Ising Model and Ground State Estimation. While Ising Model is intrinsically more parallel, which leads to high choices of output capacity, both applications still show between a 12x and 16x reduction in overall space-time volume. (c)-(d) Ising Model with varying problem sizes, comparing time-optimal factories against fully space-time optimized configurations. We see that the trend of between 15x and 20x total volume reduction extends to larger molecular simulations.} \label{fig:appresults} \end{figure*} \begin{figure*}[t!] \centering \begin{subfigure}[b]{0.33\textwidth} \includegraphics[width=\linewidth]{Figures/space_reduction.pdf} \caption{Space tradeoff} \label{fig:spacereduction} \end{subfigure} \begin{subfigure}[b]{0.33\textwidth} \centering \includegraphics[width=\linewidth]{Figures/time_reduction.pdf} \caption{Time tradeoff} \label{fig:timereduction} \end{subfigure} \begin{subfigure}[b]{0.33\textwidth} \includegraphics[width=\linewidth]{Figures/kvalsising.pdf} \caption{Output capacities procedurally selected} \label{fig:kvals} \end{subfigure} \caption{(Color online) Space-time volume reduces by moving from an optimized-unified factory to an optimized-distributed factory, as the designs trade space for time. Magic-state access latency is a dominating effect in these applications, as can be seen by the large capacity values chosen by the optimized factory configuration.} \label{fig:optimizedreductionresults} \end{figure*} We now move to comparing the surplus design against the optimized-distributed design discovered by our optimization algorithm, which is allowed to subdivide factories across the machine. Figures \ref{fig:isingresults} and \ref{fig:gseresults} depict the detailed results of our optimization procedure on the Ising Model and Ground State Estimation applications, respectively. Ising Model is intrinsically very parallel, which leads to a higher optimal capacity choice for the optimized-distributed factory. Note however that it is able to choose a distribution level that saves approximately 15x in space-time volume. Ground State Estimation is very serial, yet for sufficiently low error rates the optimized-distributed design is able to incorporate distribution of factories into the lattice to lower the required block code concatenation level $\ell$, resulting in a 12x reduction in volume across these points. The reason that the distributed factory design is able to outperform the surplus design is that the feasibility regions of the two designs differ.
Because the distributed factory utilizes many small factories on the machine, it can achieve a higher output state fidelity than a single factory design, which enables it to operate with a smaller number of distillation rounds. The optimization algorithm respects this characteristic, which is why it searches iteratively from the lowest number of distillation rounds possible, one by one, until it discovers a feasible factory configuration. \subsubsection{Optimized Design Performance Scaling} Figures \ref{fig:ising1000} and \ref{fig:ising2000} detail these trends as larger and larger quantum simulation applications are executed. For extremely large simulations, we find that the volume reductions yielded by optimizing a factory design become even more pronounced, resulting in between a 15x and 18x full resource reduction. These designs also show sensitivity to physical error rates, which can require designs to change block-code distillation level. \subsection{Distributed Factory Characteristics} As Figure \ref{fig:spacereduction} describes, an optimized-distributed set of factories is able to save between 1.2x and 4x in total space-time volume over the optimized-unified factory. Large volume jumps occur primarily between $10^{-3.5}$ and $10^{-3.4}$ physical error rate, and this again corresponds to a requirement by this application to increment to a higher block code level $\ell$, which happens for both the unified and distributed factory schemes. These optimized designs trade space for time, as Figures \ref{fig:spacereduction} and \ref{fig:timereduction} indicate, and the net effect is an overall volume reduction. This indicates that for these highly parallel quantum chemistry applications, the magic-state factory access latency is a much more dominating effect than the number of physical qubits required to run these factories. Figure \ref{fig:kvals} depicts the output capacities chosen by the optimization procedure, and how they differ when the system is unified or distributed. Notably, at both ends of the input error rate spectrum we find that both factory architectures choose the same output capacity: in the high error rate case this is driven by the high-$\ell$ requirement, while in the low error rate limit both factory architectures can afford to be very large and not suffer from any yield penalties. However, through the center of the error rate spectrum the unified factory design must lower the chosen output capacity, as supporting higher capacity would require a very expensive increase in the number of distillation rounds. \subsection{Full Design Space Comparison} Figure \ref{fig:fullvols} depicts the full space-time volume required by different factory architectures across the design space. Shown are the four main configurations: a surplus factory configured with output capacity $K = T_{\text{peak}}$, a singlet factory with $K=1$, an optimized-unified factory, and an optimized-distributed factory. Distinct volume phases are visually evident on the graph, due to the different feasibility regions of the architectures. Sweeping from high error rates to low error rates, large volume jumps occur, as observed before, for a specific configuration when that configuration can operate with fewer rounds of distillation in order to convert the input error rate to the target output error rate. Notice that this jump occurs earliest for the singlet, optimized-unified, and optimized-distributed designs, at $10^{-3.5}$ input error rate.
All of these designs show an inflection point here, where the configurations can achieve the target output error rate with a smaller number of block code distillation levels. This is not true of the surplus factory, which in fact has the largest output capacity of the set. Because the output capacity is so high, the lowest achievable output error rate is much higher than that of the other designs. This forces the block code level to remain high until the input error rate becomes sufficiently low, which occurs at $10^{-4.5}$. \subsection{Sensitivity Analysis} \label{sec:sensitivity} \begin{figure}[t!] \centering \includegraphics[width=\linewidth]{Figures/error_sensitivity.pdf} \caption{Factory architectures and their sensitivities to fluctuations in underlying physical error rates} \label{fig:sens} \end{figure} \begin{figure*}[t!] \centering \includegraphics[width=0.8\linewidth]{Figures/full_volume_reduction.pdf} \caption{(Color online) Full volume comparison across distillation factory architectures.} \label{fig:fullvols} \end{figure*} Now we turn to analyzing how these designs perform if the environment in which they were designed changes. Supposing that a design choice has been made specifying the desired factory capacity $K$, number of factories $X$, and block code distillation level $\ell$, different types of architectures show varying sensitivity to fluctuations in the underlying design points around which the architectures were constructed. For example, Figure \ref{fig:sens} details an instance of this occurrence. The figure shows the surplus, singlet, and optimized-distributed factory designs, in this case setting $K\sim 600$ and $X \sim 200$ for the distributed architecture. All of these factories were designed under the assumption that the physical machine will operate with $10^{-5}$ error rate. We see that while these applications perform similarly over the range from $10^{-5}$ to $10^{-4}$, just after this point the surplus factory encounters a steep volume expansion due to the yield threshold of Equation \ref{eq:yieldthresh}. For this design the threshold on tolerable physical error rates is quite low, significantly lower than that of the other designs. Because of this, it can tolerate a smaller range of fluctuation in the underlying error rate before it ceases to execute algorithms correctly.
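The asymmetry in these tolerances follows directly from the yield constraint $K/X \le (1-8\epsilon_{\text{inject}})/(3\epsilon_{\text{inject}})$ of Section \ref{subsec:algo}. Solving for the error rate gives $\epsilon_{\text{inject}} \le 1/(3K/X + 8)$, which the following sketch (ours; the values are illustrative) evaluates for a unified versus a distributed design of equal total capacity:
\begin{verbatim}
def max_tolerable_error(K, X):
    # Largest injected error rate keeping X factories of combined
    # capacity K below the yield threshold K/X <= (1-8e)/(3e).
    return 1.0 / (3.0 * K / X + 8.0)

print(max_tolerable_error(600, 1))    # ~5.5e-4: tight threshold
print(max_tolerable_error(600, 200))  # ~5.9e-2: far more headroom
\end{verbatim}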
\section{\bf Introduction} \vskip 0.4 true cm The degree of a vertex is one of the essential concepts in graph theory, with various applications in branches of mathematics and some other fields such as networks, coding theory, and biology (see for instance, \cite{Ber}, \cite{K}, \cite{Lub}, \cite{Mas}). The realizability of a non-increasing sequence of nonnegative integers, meaning the existence of a simple graph whose degree sequence is that sequence, is an interesting subject related to the concept of vertex degree, with many wide applications (\cite{Aig}, \cite{Ber}, \cite{DuB}, \cite{Er}, \cite{Hak}, \cite{Hav}, \cite{Rit}, \cite{Tri}, \cite{Zre}). P. Erd\H{o}s and T. Gallai in \cite{Er} presented a theorem which gives a criterion for the realizability of a non-increasing sequence of nonnegative integers (Theorem 2.1). Also, V. Havel in \cite{Hav} and S. L. Hakimi in \cite{Hak} presented an algorithm which determines whether such a sequence is realizable or not. A simple graph whose degree sequence is a given realizable sequence of integers is not necessarily unique up to isomorphism, but all such graphs share some properties. In this paper we introduce a new concept, called the ``degree polynomial'', for vertices of a simple graph. This notion leads to the concept of the degree polynomial sequence. The latter concept is deeper and stronger than the concept of the degree sequence in graph theory. Since many topics and various applications arise from the concept of the degree sequence, it seems that the new concept of degree polynomial sequence can open a wide outlook for future research. After obtaining the degree polynomial sequence for some well-known graphs such as cycles, complete graphs, complete bipartite graphs, etc., we prove a theorem which gives a necessary condition for the realizability of a sequence of polynomials with coefficients in nonnegative integers. We also study the behavior of the degree polynomial under several graph operations. More precisely, we calculate the degree polynomial for vertices of the join, Cartesian product, tensor product, and lexicographic product of two simple graphs, and also for vertices of the complement of a simple graph. Some important examples, counterexamples, and open problems are presented as well. \vskip 0.8 true cm \section{\bf Preliminaries} \vskip 0.4 true cm In the sequel, we use \cite{Bon} for terminologies and notations on graphs. Let $G$ be a simple graph. For vertices $u,v\in V(G)$, if $u$ is adjacent to $v$, we write $u\sim v$. Let $G$ be a simple graph of order $n$. A non-increasing sequence of nonnegative integers $q=(d_{1},\ldots , d_{n})$ is said to be the degree sequence of $G$ whenever the degrees of the vertices of $G$ are exactly the terms of the sequence. A sequence $q=(d_{1},\ldots , d_{n})$ of integers is realizable if there exists a simple graph $G$ such that $q$ is the degree sequence of $G$. Since adding a finite number of isolated vertices to a graph, and deleting a finite number of such vertices from a nonempty graph, makes no change in the degrees of the other vertices, we can consider only the case in which each $d_{i}$, $1\leq i \leq n$, is positive. Let $G$ and $H$ be simple graphs with disjoint vertex sets. The join of $G$ and $H$, denoted by $G\vee H$, is a simple graph with vertex set $V(G)\cup V(H)$, in which for two vertices $u$ and $v$, $u\sim v$ if and only if\\ (1) $u,v\in V(G)$ and $u\sim v$ (in $G$), or\\ (2) $u,v\in V(H)$ and $u\sim v$ (in $H$), or\\ (3) $u\in V(G), \ v\in V(H)$, or\\ (4) $u\in V(H), \ v\in V(G)$.
Let $G$ and $H$ be simple graphs. The Cartesian product of $G$ and $H$, denoted by $G\times H$, is a simple graph with vertex set $V(G)\times V(H)$, in which for two vertices $(u_{1}, v_{1})$ and $(u_{2}, v_{2})$, we have $(u_{1}, v_{1})\sim (u_{2}, v_{2})$ if and only if\\ (1) $u_{1}=u_{2}$ and $v_{1}\sim v_{2}$ (in $H$), or\\ (2) $v_{1}=v_{2}$ and $u_{1}\sim u_{2}$ (in $G$). Also, the tensor product of $G$ and $H$, denoted by $G\otimes H$, is a simple graph with vertex set $V(G)\times V(H)$, in which for two vertices $(u_{1}, v_{1})$ and $(u_{2}, v_{2})$, we have $(u_{1}, v_{1})\sim (u_{2}, v_{2})$ if and only if $u_{1}\sim u_{2}$ (in $G$) and $v_{1}\sim v_{2}$ (in $H$). Finally, the lexicographic product of $G$ and $H$, denoted by $G[H]$, is a simple graph with vertex set $V(G)\times V(H)$, in which for two vertices $(u_{1}, v_{1})$ and $(u_{2}, v_{2})$, we have $(u_{1}, v_{1})\sim (u_{2}, v_{2})$ if and only if\\ (1) $u_{1}\sim u_{2}$ (in $G$), or\\ (2) $u_{1}=u_{2}$ and $v_{1}\sim v_{2}$ (in $H$). For a simple graph $G$, the complement of $G$, denoted by $G^{c}$, is a simple graph with vertex set $V(G)$, in which for two vertices $u$ and $v$, $u\sim v$ if and only if $u$ is not adjacent to $v$ in $G$. Let $K$ be a field and $S=K[x_{1}, \ldots, x_{n}]$ be the polynomial ring in $n$ variables with coefficients in $K$, and let $f\in S$ be any nonzero polynomial. The polynomial $f$ is represented uniquely in the form: \[f=\sum_{a_{u}(\neq 0)\in K}a_{u}u\] in terms of distinct monomials $u$ in $S$, where the set of $u$'s is the support of $f$ \cite{Her}. We will call the $a_{u}u$'s the essential terms of $f$. \vskip 0.8 true cm \section{\bf Degree sequence} \vskip 0.4 true cm Two non-isomorphic simple graphs can have the same degree sequence. Consider, for example, the graphs below.
\begin{center} \definecolor{ududff}{rgb}{0.30196078431372547,0.30196078431372547,1.} \begin{tikzpicture}[line cap=round,line join=round,>=triangle 45,x=2.0cm,y=1.0cm] \clip(-3.85,-2.8) rectangle (3.,1.6); \draw [line width=0.8pt] (-3.4462095730918487,1.1426511627906977)-- (-3.4462095730918487,0.2606046511627907); \draw [line width=0.8pt] (-3.4462095730918487,0.2606046511627907)-- (-3.4462095730918487,-0.5613023255813954); \draw [line width=0.8pt] (-2.6673221216041387,1.1226046511627907)-- (-2.6673221216041396,0.24055813953488367); \draw [line width=0.8pt] (-2.6673221216041396,0.24055813953488367)-- (-2.6673221216041387,-0.5813488372093023); \draw [line width=0.8pt] (-2.6673221216041396,0.24055813953488367)-- (-3.4462095730918487,0.2606046511627907); \draw [line width=0.8pt] (-2.358706338939197,1.1226046511627907)-- (-2.373402328589908,0.2405581395348837); \draw [line width=0.8pt] (-2.373402328589908,0.2405581395348837)-- (-2.373402328589908,-0.5813488372093023); \draw [line width=0.8pt] (-1.5651228978007752,1.1226046511627907)-- (-1.5651228978007752,0.2405581395348837); \draw [line width=0.8pt] (-1.5651228978007752,0.2405581395348837)-- (-1.5651228978007752,-0.5613023255813954); \draw [line width=0.8pt] (-1.5651228978007752,0.2405581395348837)-- (-2.373402328589908,0.2405581395348837); \draw [line width=0.8pt] (-0.0524838292367391,0.7216744186046511)-- (0.5657956015523941,0.7216744186046511); \draw [line width=0.8pt] (0.5657956015523941,0.7216744186046511)-- (0.5751875808538171,-0.38088372093023254); \draw [line width=0.8pt] (0.5751875808538171,-0.38088372093023254)-- (-0.0624838292367391,-0.36083720930232555); \draw [line width=0.8pt] (-0.0524838292367391,0.7216744186046511)-- (-0.0624838292367391,-0.36083720930232555); \draw [line width=0.8pt] (-0.0524838292367391,0.7216744186046511)-- (0.5751875808538171,-0.38088372093023254); \draw [line width=0.8pt] (0.5657956015523941,0.7216744186046511)-- (-0.0624838292367391,-0.36083720930232555); \draw [line width=0.8pt] (1.8443467011642956,1.1626976744186046)-- (1.138939197930143,1.1626976744186046); \draw [line width=0.8pt] (1.829650711513584,0.5011627906976744)-- (1.1242432082794316,0.4811162790697674); \draw [line width=0.8pt] (1.829650711513584,-0.1603720930232558)-- (1.1242432082794316,-0.1603720930232558); \draw [line width=0.8pt] (1.829650711513584,-0.781813953488372)-- (1.10954721862872,-0.7617674418604651); \begin{scriptsize} \draw [fill=ududff] (-3.4462095730918487,1.1426511627906977) circle (1.5pt); \draw [fill=ududff] (-3.4462095730918487,0.2606046511627907) circle (1.5pt); \draw [fill=ududff] (-3.4462095730918487,-0.5613023255813954) circle (1.5pt); \draw [fill=ududff] (-2.6673221216041387,1.1226046511627907) circle (1.5pt); \draw [fill=ududff] (-2.6673221216041396,0.24055813953488367) circle (1.5pt); \draw [fill=ududff] (-2.6673221216041387,-0.5813488372093023) circle (1.5pt); \draw [fill=ududff] (-2.358706338939197,1.1226046511627907) circle (1.5pt); \draw [fill=ududff] (-2.373402328589908,0.2405581395348837) circle (1.5pt); \draw [fill=ududff] (-2.373402328589908,-0.5813488372093023) circle (1.5pt); \draw [fill=ududff] (-1.5651228978007752,1.1226046511627907) circle (1.5pt); \draw [fill=ududff] (-1.5651228978007752,0.2405581395348837) circle (1.5pt); \draw [fill=ududff] (-1.5651228978007752,-0.5613023255813954) circle (1.5pt); \draw [fill=ududff] (-0.0524838292367391,0.7216744186046511) circle (1.5pt); \draw [fill=ududff] (0.5657956015523941,0.7216744186046511) circle (1.5pt); \draw [fill=ududff] 
(0.5751875808538171,-0.38088372093023254) circle (1.5pt); \draw [fill=ududff] (-0.0624838292367391,-0.36083720930232555) circle (1.5pt); \draw [fill=ududff] (1.8443467011642956,1.1626976744186046) circle (1.5pt); \draw [fill=ududff] (1.138939197930143,1.1626976744186046) circle (1.5pt); \draw [fill=ududff] (1.829650711513584,0.5011627906976744) circle (1.5pt); \draw [fill=ududff] (1.1242432082794316,0.4811162790697674) circle (1.5pt); \draw [fill=ududff] (1.829650711513584,-0.1603720930232558) circle (1.5pt); \draw [fill=ududff] (1.1242432082794316,-0.1603720930232558) circle (1.5pt); \draw [fill=ududff] (1.829650711513584,-0.781813953488372) circle (1.5pt); \draw [fill=ududff] (1.10954721862872,-0.7617674418604651) circle (1.5pt); \draw (-2.65954721862872,-1.0617674418604651) node[anchor=north west] {$G_{1}$}; \draw (0.7254721862872,-1.0617674418604651) node[anchor=north west] {$G_{2}$}; \end{scriptsize} \end{tikzpicture} \end{center} A realizable sequence of integers is also called a graphical sequence. If $q=(d_{1},\ldots , d_{n})$ is a graphical sequence realized by the graph $G$, then by elementary properties of simple graphs, we have: (1) The sum of the $d_{i}$'s, $1\leq i\leq n$, is even. Therefore the number of odd $d_{i}$'s is even, and thus the number of even $d_{i}$'s is even if and only if $n$ is even. (2) For every $1\leq i\leq n$, $d_{i}\leq n-1$. (3) If $G$ has no isolated vertices, then $2[\frac{n+1}{2}]\leq \sum^{n}_{i=1}d_{i}\leq n(n-1)$. (4) If no $d_{i}$ is zero, then at least two of them are equal (by the pigeonhole principle). Note that (4) and the upper bound in (3) follow from (2). \begin{thm} (Erd\H{o}s--Gallai Theorem \cite{Er}) A non-increasing sequence $\phi=(a_{1},\ldots , a_{n})$ of nonnegative integers is graphic if and only if \\ a) $\sum^{n}_{i=1}a_{i}$ is even,\\ b) $\sum^{j}_{i=1}a_{i}- j(j-1)\leq\sum^{n}_{k=j+1}\min (j,a_{k}) \ \ \ \ (j=1,\ldots , n-1)$. \end{thm} \vskip 0.8 true cm \section{\bf Degree polynomial} \vskip 0.4 true cm Before all, we introduce some notation for convenience. For a polynomial $f(x)=\sum^{n}_{i=1}a_{i}x^{i}\in\mathbb{R}[x]$ with $a_{n}\neq0$, we denote the sum of the $a_{i}$'s, $1\leq i\leq n$, by $sc(f)$. Also $sec(f)$ and $soc(f)$ denote the sum of the $a_{i}$'s over even $i$ and over odd $i$, respectively. We define $sc(0)=0$, as well. We introduce a total order $<_{pol}$ on the set of all nonzero polynomials with coefficients in nonnegative integers, which compares two distinct polynomials $f= \sum^{n}_{i=0}a_{i}x^{i}$ and $g=\sum^{m}_{i=0}b_{i}x^{i}$ with coefficients in nonnegative integers and with $a_{n},b_{m}\neq0$, as follows: If $sc(f)\neq sc(g)$, then the one of $f$ and $g$ whose sum of coefficients is greater (as an integer) is the greater; If $sc(f)=sc(g)$, then setting $i_{1}=\max\{i\mid \ a_{i}\neq 0 \ \text{or} \ b_{i}\neq 0\}$, if $a_{i_{1}}\neq b_{i_{1}}$, then the one of $f$ and $g$ with the greater coefficient of $x^{i_{1}}$ is the greater; If $sc(f)=sc(g)$ and $a_{i_{1}}=b_{i_{1}}$, then setting $i_{2}=\max\{i\mid \ i<i_{1},\ a_{i}\neq 0 \ \text{or} \ b_{i}\neq 0\}$, if $a_{i_{2}}\neq b_{i_{2}}$, then the one of $f$ and $g$ with the greater coefficient of $x^{i_{2}}$ is the greater; and we continue in this manner (if all coefficients agree, then $f=g$).
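As a sketch of this order (ours, with polynomials represented as exponent-to-coefficient dictionaries; \texttt{pol\_less} is a hypothetical name), the comparisons below reproduce the worked examples that follow:
\begin{verbatim}
def sc(f):
    # Sum of the coefficients of a polynomial {exponent: coeff}.
    return sum(f.values())

def pol_less(f, g):
    # The total order <_pol: compare sums of coefficients first,
    # then coefficients at the largest exponent where f and g
    # differ, working downward.
    if sc(f) != sc(g):
        return sc(f) < sc(g)
    for i in sorted(set(f) | set(g), reverse=True):
        if f.get(i, 0) != g.get(i, 0):
            return f.get(i, 0) < g.get(i, 0)
    return False  # equal polynomials

assert not pol_less({4: 2, 3: 12}, {5: 3, 2: 1})
assert pol_less({4: 2, 2: 12}, {5: 2, 2: 12})
assert pol_less({4: 2, 2: 12}, {5: 1, 2: 13})
assert not pol_less({4: 2, 2: 12}, {4: 2, 2: 11, 1: 1})
\end{verbatim}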
For example, \[ 2x^{4}+12x^{3}>_{pol} 3x^{5}+x^{2},\] \[2x^{4}+12x^{2}<_{pol} 2x^{5}+12x^{2}, \ x^{5}+13x^{2},\] \[2x^{4}+12x^{2}>_{pol} 2x^{4}+11x^{2}+x.\] \vskip 0.2 true cm Let $f=\sum_{a_{i}\neq 0}a_{i}x^{i}$ be a nonzero polynomial in $\mathbb{R}[x]$ with coefficients in nonnegative integers, where the $a_{i}x^{i}$'s are the essential terms of $f$. For $n\in \mathbb{N}$, we denote the polynomial $\sum_{a_{i}\neq 0}a_{i}x^{in}$ by $f^{\curlywedge\times n}$. Also we set $0^{\curlywedge\times n}=0$. If $\deg f\leq n$, we denote the polynomial $\sum_{a_{i}\neq 0} a_{i} x^{n-i}$ by $f^{\curlywedge n-}$. Also we set $0^{\curlywedge n-}=0$. \begin{defn} Let $f=\sum_{a_{i}\neq 0}a_{i}x^{i}$, $g=\sum_{b_{j}\neq 0}b_{j}x^{j}$ be two nonzero polynomials in $\mathbb{R}[x]$ with coefficients in nonnegative integers, where the $a_{i}x^{i}$'s and $b_{j}x^{j}$'s are the essential terms of $f$ and $g$, respectively. The tensor product of $f$ and $g$, denoted by $f\otimes g$, is the polynomial $\sum c_{t}x^{t}$ in which the $t$'s are the distinct products of the $i$'s and $j$'s, and \[c_{t}= \sum_{i\cdot j=t}a_{i}b_{j}.\] Also we set $0\otimes f=f\otimes 0=0$, where $0$ is the zero polynomial. \end{defn} Under the conditions of Definition 3.1, one easily observes that, first, $f\otimes g$ can be obtained by tensor-multiplying the essential terms of $f$ by the essential terms of $g$ one by one, and second, for all $f$ and $g$ in the variable $x$ with coefficients in nonnegative integers, $f\otimes g=g\otimes f$. Now we introduce a new concept, that of the degree polynomial. \begin{defn} Let $G$ be a simple graph. For a vertex $v$ of $G$, the degree polynomial of $v$, denoted by $\rm{dp}(v)$, is the polynomial with coefficients in nonnegative integers in which the coefficient of $x^{i}$ is the number of neighbors of $v$ of degree $i$; in particular, for an isolated vertex $v$, $\rm{dp}(v)=0$. \end{defn} \begin{example} Let $G$ be the simple graph with the following representation. \begin{center} \definecolor{ududff}{rgb}{0.30196078431372547,0.30196078431372547,1.} \begin{tikzpicture}[line cap=round,line join=round,>=triangle 45,x=1.0cm,y=1.0cm] \clip(5.3,1.9999999999998) rectangle (9.56,5.3); \draw [line width=1.2pt] (7.06,4.18)-- (6.1,3.4); \draw [line width=1.2pt] (6.1,3.4)-- (7.38,3.38); \draw [line width=1.2pt] (7.06,4.18)-- (7.38,3.38); \draw [line width=1.2pt] (7.38,3.38)-- (8.9,3.38); \draw (5.65,3.58) node[anchor=north west] {$a$}; \draw (7.0,4.66) node[anchor=north west] {$b$}; \draw (7.24,3.4) node[anchor=north west] {$c$}; \draw (8.89,3.7) node[anchor=north west] {$d$}; \draw (7.28,2.82) node[anchor=north west] {$G$}; \begin{scriptsize} \draw [fill=ududff] (7.06,4.18) circle (1.5pt); \draw [fill=ududff] (6.1,3.4) circle (1.5pt); \draw [fill=ududff] (7.38,3.38) circle (1.5pt); \draw [fill=ududff] (8.9,3.38) circle (1.5pt); \end{scriptsize} \end{tikzpicture} \end{center} We have: \[\rm{dp}(a)=x^{2}+x^{3},\] \[\rm{dp}(b)=x^{2}+x^{3},\] \[\rm{dp}(c)=2x^{2}+x,\] \[\rm{dp}(d)=x^{3}.\] \end{example} \vskip 0.2 true cm Since adding a finite number of isolated vertices to a simple graph, and deleting a finite number of such vertices from a nonempty simple graph, makes no change in the degree polynomials of the other vertices, we will consider only graphs which have no isolated vertices.
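A minimal Python sketch (ours; \texttt{degree\_polynomials} is a hypothetical name) makes the definition of the degree polynomial concrete by encoding each $\rm{dp}(v)$ as a dictionary mapping an exponent $i$ to the coefficient of $x^{i}$; for the graph of the example above it reproduces the polynomials just listed:
\begin{verbatim}
from collections import Counter

def degree_polynomials(adj):
    # dp(v) encoded as {i: c}, meaning the term c*x^i: c counts the
    # neighbors of v having degree i; an isolated vertex yields the
    # empty dictionary, i.e. the zero polynomial.
    deg = {v: len(nbrs) for v, nbrs in adj.items()}
    return {v: dict(Counter(deg[w] for w in nbrs))
            for v, nbrs in adj.items()}

# The graph of the example above: edges ab, ac, bc, cd.
G = {"a": {"b", "c"}, "b": {"a", "c"},
     "c": {"a", "b", "d"}, "d": {"c"}}
print(degree_polynomials(G))
# {'a': {2: 1, 3: 1}, 'b': {2: 1, 3: 1},
#  'c': {2: 2, 1: 1}, 'd': {3: 1}}   (key order may vary)
\end{verbatim}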
\begin{defn} For a simple graph $G$, the degree polynomial of $G$, denoted by $\rm{dp}(G)$, is the polynomial $\sum_{i}t_{i}x^{i}$ in $\mathbb{R}[x]$, in which $t_{i}$ is the number of vertices of $G$ of degree $i$ (in particular, $t_{0}$ is the number of isolated vertices of $G$). If $\Delta$ is the maximum degree of $G$, then $\rm{dp}(G)$ is of degree $\Delta$. \end{defn} It is obvious that if $n$ is the order of $G$, then the sum of all coefficients of $\rm{dp}(G)$ equals $n$. Also, if $m$ is the size of $G$, then the sum of all coefficients of the derivative of $\rm{dp}(G)$ (with respect to $x$) equals $2m$. \begin{defn} For a simple graph $G$ of order $n$ without any isolated vertex, a sequence $q=(f_{1},f_{2},\ldots , f_{n})$ of polynomials is said to be the degree polynomial sequence of $G$, if\\ (a) $f_{1}\geq_{pol} \ldots \geq_{pol}f_{n}$,\\ (b) every degree polynomial of a vertex of $G$ is a term of the sequence. \end{defn} \begin{example} For the graph $G$ in Example 3.2, the degree polynomial sequence is: \[2x^{2}+x, x^{2}+x^{3}, x^{2}+x^{3}, x^{3}.\] \end{example} \begin{prop} Let $G$ be a nonempty simple graph. $G$ is $r$-regular if and only if each term of the degree polynomial sequence of $G$ is of the form $rx^{r}$. \end{prop} \begin{proof} Let $G$ be $r$-regular. For $v\in V(G)$, since each neighbor of $v$ is of degree $r$, $\rm{dp}(v)$ is of the form $kx^{r}$. Since $v$ itself has exactly $r$ neighbors, $k=r$. Conversely, let each term of the degree polynomial sequence of $G$ be $rx^{r}$. Then each $v \in V(G)$ has exactly $r$ neighbors of degree $r$, and has no other neighbors. Therefore each $v \in V(G)$ is of degree $r$, and $G$ is $r$-regular. \end{proof} If $G$ is the nontrivial complete graph $K_{n}$, it is obvious that the degree polynomial sequence of $G$ is: \[(n-1)x^{n-1}, \ldots , (n-1)x^{n-1}\] where the number of terms is $n$. If $G$ is a path with $n$ vertices, $P_{n}$, then for $n=2$ the degree polynomial sequence of $G$ is: \[x, x,\] for $n=3$ it is: \[2x, x^{2}, x^{2},\] for $n=4$ it is: \[x+x^{2}, x+x^{2}, x^{2}, x^{2},\] and finally, for $n\geq 5$ it is: \[2x^{2}, \ldots , 2x^{2}, x+x^{2}, x+x^{2}, x^{2}, x^{2}\] where the number of terms $2x^{2}$ is $n-4$. If $G$ is a cycle $C_{n} \ (n\geq 3)$, then the degree polynomial sequence of $G$ is: \[2x^{2}, \ldots , 2x^{2},\] where the number of terms is $n$. If $G$ is a complete bipartite graph $K_{r,s}$ with $r\geq s$, then the degree polynomial sequence of $G$ is: \[rx^{s}, \ldots , rx^{s}, sx^{r},\ldots , sx^{r},\] where $s$ terms are $rx^{s}$ and $r$ terms are $sx^{r}$. \begin{rem} Suppose $q=(f_{1}, \ldots , f_{n})$ is the degree polynomial sequence of a simple graph $G$. If $f_{i}$ is the degree polynomial of the vertex $v_{i}$, then since $sc(f_{i})$ is the degree of $v_{i}$, the degree sequence of $G$ is: \[sc(f_{1}), \ldots , sc(f_{n}).\] Consequently, if a sequence $q=(f_{1},\ldots , f_{n})$ of nonzero polynomials is realizable, then the sequence \[sc(f_{1}), sc(f_{2}), \ldots , sc(f_{n})\] of integers is realizable. The following example shows that the converse does not hold. \end{rem} \begin{example} Consider the sequence: \[2x, x^{2}, x, x, x\] of nonzero polynomials with coefficients in nonnegative integers.
Although the sequence: \[sc(2x), sc(x^{2}), sc(x), sc(x), sc(x),\] that is, the sequence: \[2,1,1,1,1,\] is realized by the simple graph with the following representation: \begin{center} \definecolor{ududff}{rgb}{0.30196078431372547,0.30196078431372547,1.} \definecolor{cqcqcq}{rgb}{0.7529411764705882,0.7529411764705882,0.7529411764705882} \begin{tikzpicture}[line cap=round,line join=round,>=triangle 45,x=1.0cm,y=1.0cm] \clip(5.8,3.) rectangle (9.,5.); \draw [line width=1.2pt,] (7.98,4.44)-- (6.72,4.42); \draw [line width=1.2pt,] (8.38,3.64)-- (7.36,3.62); \draw [line width=1.2pt,] (7.36,3.62)-- (6.32,3.6); \begin{scriptsize} \draw [fill=ududff] (4.16,4.88) circle (2.5pt); \draw [fill=ududff] (7.98,4.44) circle (1.5pt); \draw [fill=ududff] (6.72,4.42) circle (1.5pt); \draw [fill=ududff] (8.38,3.64) circle (1.5pt); \draw [fill=ududff] (7.36,3.62) circle (1.5pt); \draw [fill=ududff] (6.32,3.6) circle (1.5pt); \end{scriptsize} \end{tikzpicture} \end{center} the sequence \[2x, x^{2}, x, x, x\] is not realizable, by Theorem 3.10 (part c). \end{example} The following example shows that two simple graphs with the same degree sequence can have different degree polynomial sequences. \begin{example} Consider the graphs $G_{1}$ and $G_{2}$ with the following representations. \begin{center} \definecolor{ududff}{rgb}{0.30196078431372547,0.30196078431372547,1.} \begin{tikzpicture}[line cap=round,line join=round,>=triangle 45,x=1.0cm,y=1.0cm] \clip(3.7,2.) rectangle (9.6,6.); \draw [line width=0.8pt] (8.76,4.84)-- (7.6,4.86); \draw [line width=0.8pt] (7.6,4.86)-- (7.14,3.98); \draw [line width=0.8pt] (7.14,3.98)-- (7.6,3.1); \draw [line width=0.8pt] (7.6,3.1)-- (8.74,3.1); \draw [line width=0.8pt] (8.74,3.1)-- (9.18,3.94); \draw [line width=0.8pt] (9.18,3.94)-- (8.76,4.84); \draw [line width=0.8pt] (5.94,3.96)-- (5.48,4.82); \draw [line width=0.8pt] (5.48,4.82)-- (4.36,4.82); \draw [line width=0.8pt] (4.36,4.82)-- (3.94,3.96); \draw [line width=0.8pt] (3.94,3.96)-- (4.34,3.12); \draw [line width=0.8pt] (4.34,3.12)-- (5.5,3.12); \draw [line width=0.8pt] (5.94,3.96)-- (5.5,3.12); \draw [line width=0.8pt] (8.76,4.84)-- (7.14,3.98); \draw [line width=0.8pt] (5.94,3.96)-- (3.94,3.96); \draw (4.66,2.88) node[anchor=north west] {$G_{1}$}; \draw (7.9,2.84) node[anchor=north west] {$G_{2}$}; \begin{scriptsize} \draw [fill=ududff] (8.76,4.84) circle (1.5pt); \draw [fill=ududff] (7.6,4.86) circle (1.5pt); \draw [fill=ududff] (7.14,3.98) circle (1.5pt); \draw [fill=ududff] (7.6,3.1) circle (1.5pt); \draw [fill=ududff] (8.74,3.1) circle (1.5pt); \draw [fill=ududff] (9.18,3.94) circle (1.5pt); \draw [fill=ududff] (5.94,3.96) circle (1.5pt); \draw [fill=ududff] (5.48,4.82) circle (1.5pt); \draw [fill=ududff] (4.36,4.82) circle (1.5pt); \draw [fill=ududff] (3.94,3.96) circle (1.5pt); \draw [fill=ududff] (4.34,3.12) circle (1.5pt); \draw [fill=ududff] (5.5,3.12) circle (1.5pt); \end{scriptsize} \end{tikzpicture} \end{center} For both $G_{1}$ and $G_{2}$, the degree sequence is $3, 3, 2, 2, 2, 2$. But the degree polynomial sequence for $G_{1}$ is: \[2x^{2}+x^{3}, 2x^{2}+x^{3}, x^{2}+x^{3}, x^{2}+x^{3}, x^{2}+x^{3}, x^{2}+x^{3},\] and for $G_{2}$ it is: \[2x^{2}+x^{3}, 2x^{2}+x^{3}, 2x^{3}, x^{2}+x^{3}, x^{2}+x^{3}, 2x^{2}.\] \end{example} Two non-isomorphic graphs can have the same degree polynomial sequence, as the following example shows.
\begin{example} Consider two graphs, $G_{1}$ and $G_{2}$, with the following representations: \begin{center} \definecolor{ududff}{rgb}{0.30196078431372547,0.30196078431372547,1.} \begin{tikzpicture}[line cap=round,line join=round,>=triangle 45,x=1.0cm,y=1.0cm] \clip(2.5,2.) rectangle (9.,5.7); \draw [line width=0.8pt] (8.72,4.56)-- (8.04,5.42); \draw [line width=0.8pt] (8.04,5.42)-- (7.32,4.6); \draw [line width=0.8pt] (7.32,4.6)-- (7.3,3.94); \draw [line width=0.8pt] (7.3,3.94)-- (8.04,3.16); \draw [line width=0.8pt] (8.04,3.16)-- (8.74,3.9); \draw [line width=0.8pt] (8.74,3.9)-- (8.72,4.56); \draw [line width=0.8pt] (6.14,3.88)-- (5.54,4.86); \draw [line width=0.8pt] (3.12,4.84)-- (2.6,3.88); \draw [line width=0.8pt] (2.6,3.88)-- (3.64,3.9); \draw [line width=0.8pt] (3.64,3.9)-- (5.04,3.88); \draw [line width=0.8pt] (6.14,3.88)-- (5.04,3.88); \draw (4.08,2.74) node[anchor=north west] {$G_{1}$}; \draw (7.8,2.76) node[anchor=north west] {$G_{2}$}; \draw [line width=0.8pt] (3.64,3.9)-- (3.12,4.84); \draw [line width=0.8pt] (5.04,3.88)-- (5.54,4.86); \draw [line width=0.8pt] (8.04,5.42)-- (8.04,3.16); \begin{scriptsize} \draw [fill=ududff] (8.72,4.56) circle (1.5pt); \draw [fill=ududff] (8.04,5.42) circle (1.5pt); \draw [fill=ududff] (7.32,4.6) circle (1.5pt); \draw [fill=ududff] (7.3,3.94) circle (1.5pt); \draw [fill=ududff] (8.04,3.16) circle (1.5pt); \draw [fill=ududff] (8.74,3.9) circle (1.5pt); \draw [fill=ududff] (6.14,3.88) circle (1.5pt); \draw [fill=ududff] (5.54,4.86) circle (1.5pt); \draw [fill=ududff] (3.12,4.84) circle (1.5pt); \draw [fill=ududff] (2.6,3.88) circle (1.5pt); \draw [fill=ududff] (3.64,3.9) circle (1.5pt); \draw [fill=ududff] (5.04,3.88) circle (1.5pt); \end{scriptsize} \end{tikzpicture} \end{center} It is obvious that $G_{1}$ and $G_{2}$ are not isomorphic. However, the degree polynomial sequence of both of them is: \[2x^{2}+x^{3}, 2x^{2}+x^{3}, x^{2}+x^{3}, x^{2}+x^{3}, x^{2}+x^{3}, x^{2}+x^{3}.\] \end{example} Now we prove a theorem which gives a necessary condition for the realizability of a sequence of polynomials with coefficients in nonnegative integers. \begin{thm} Let $G$ be a simple graph without any isolated vertices, and let $q=(f_{1}, \ldots , f_{n})$, where $f_{1}\geq_{pol} \ldots \geq_{pol} f_{n}$, be the degree polynomial sequence of $G$. Then:\\ (a) $\sum^{n}_{i=1}sc(f_{i})$ is even, \\ (b) for each nonzero coefficient $k$ of a term $kx^{i}$ in the degree polynomial of a vertex $v$, there are at least $k$ distinct vertices $v_{1}, \ldots , v_{k}$, all distinct from $v$, such that: \[sc(\rm{dp}(v_{1}))= \ldots = sc(\rm{dp}(v_{k}))= i,\] \\ (c) $\sum_{sc(f_{j}) \ is \ odd} sec(f_{j})$ and $\sum_{sc(f_{j}) \ is \ even} sec(f_{j})$ are even. \end{thm} \begin{proof} (a) Let $f_{i}=\rm{dp}(v_{i})$, for $1\leq i\leq n$. We have $\sum^{n}_{i=1}sc(f_{i})=\sum^{n}_{i=1}\deg(v_{i})$. Thus $\sum^{n}_{i=1}sc(f_{i})$ is even. (b) Let $k(\neq0)$ be the coefficient of $x^{i}$ in $\rm{dp}(v)$. Then $v$ has exactly $k$ neighbors $v_{1}, \ldots , v_{k}$ of degree $i$. Now, $v_{1}, \ldots, v_{k}$ are distinct from $v$, and \[sc(\rm{dp}(v_{1}))= \ldots =sc(\rm{dp}(v_{k}))=i.\] (c) Let $\{a_{1}, \ldots , a_{s}\}$ be the set of odd vertices of $G$. Since the number of odd vertices of a graph is even, $\sum^{s}_{j=1}\deg(a_{j})$ is even; that is, $\sum^{s}_{j=1}sc(\rm{dp}(a_{j}))$ is even. Thus $\sum^{s}_{j=1}sec(\rm{dp}(a_{j}))+\sum^{s}_{j=1}soc(\rm{dp}(a_{j}))$ is even. 
For each $a_{j}$, $1\leq j\leq s$, if $a_{j^{\prime}}$ is an odd neighbor of $a_{j}$, then $a_{j}$ is an odd neighbor of $a_{j^{\prime}}$ as well. Therefore the edge between $a_{j}$ and $a_{j^{\prime}}$ is counted twice in the sum $\sum^{s}_{j=1}soc(\rm{dp}(a_{j}))$, once in $soc(\rm{dp}(a_{j}))$ and again in $soc(\rm{dp}(a_{j^{\prime}}))$. Thus $\sum^{s}_{j=1}soc(\rm{dp}(a_{j}))$ is even, and hence $\sum^{s}_{j=1}sec(\rm{dp}(a_{j}))$, that is, $\sum_{sc(f_{j}) \ is \ odd} sec(f_{j})$, is an even integer. The argument for the second part is similar, except that we start with the set of all even vertices, $\{b_{1}, \ldots , b_{t}\}$. \end{proof} \begin{example} The sequence \[s_{1}=(2x, x^{2}, x, x, x)\] of polynomials satisfies (a) and (b), but not (c); the sequence \[s_{2}=(2x, x^{2}, x^{2}, x, x, x)\] satisfies (b) and (c), but not (a); finally, the sequence \[s_{3}=(2x^{2}, x, x, x, x)\] satisfies (a) and (c), but not (b). Therefore, by Theorem 3.10, the sequences $s_{1}$, $s_{2}$, and $s_{3}$ are not realizable. Meanwhile, a non-increasing sequence $q=(f_{1}, f_{2}, \ldots , f_{n})$ of nonzero polynomials with coefficients in nonnegative integers may satisfy (a), (b), and (c) and still fail to be realizable. Consider, for example, the sequence \[2x^{2}, 2x, 2x, x, x.\] Note that if this sequence were realizable, then the vertex $v$ with $\rm{dp}(v)=2x$ would have to be adjacent to exactly two vertices whose degree polynomials are $x$ and $x$ (call them $a$ and $b$). But in that case, the degree polynomials of $a$ and $b$ could not be $x$. \end{example} Now we study the behavior of the degree polynomial under graph operations. \begin{thm} Let $G$ and $H$ be two simple graphs with disjoint vertex sets, and let $u$ be a vertex of $G$. Then \[\rm{dp}_{G\vee H}(u)=x^{n_{2}}\rm{dp}_{G}(u)+x^{n_{1}}\rm{dp}(H),\] where $n_{1}$ and $n_{2}$ are the orders of $G$ and $H$, respectively. \end{thm} \begin{proof} If $u$ is an isolated vertex of $G$, then $\rm{dp}_{G}(u)=0$. In this case, since $u$ is adjacent to all vertices of $H$ in $G\vee H$ (by definition of $G\vee H$) and to no vertex of $G$, if $H$ has $t_{0}$ vertices of degree 0, $t_{1}$ vertices of degree 1, \ldots , $t_{\Delta}$ vertices of degree $\Delta$ ($\Delta$ being the maximum degree of $H$), then the neighbors of $u$ in $G\vee H$ are precisely: \[\begin{array}{c} t_{0}\ \mbox{vertices of degree}\ n_{1}+0,\\ \vdots\\ t_{\Delta}\ \mbox{vertices of degree}\ n_{1}+\Delta, \end{array}\] and therefore the degree polynomial of $u$ in $G\vee H$ is \[t_{0} x^{n_{1}}+ t_{1} x^{n_{1}+1}+ \ldots+ t_{\Delta} x^{n_{1}+\Delta}= x^{n_{1}} \rm{dp}(H),\] so the conclusion holds. Now suppose that $u$ is not an isolated vertex of $G$. Suppose that $\rm{dp}_{G}(u)=\sum_{s=1}^{k}c_{i_{s}}x^{i_{s}}$, where the $c_{i_{s}}$'s are positive integers and the $i_{s}$'s are the distinct degrees of the neighbors of $u$ in $G$. 
This means that the neighbors of $u$ in $G$ are precisely: \[\begin{array}{c} c_{i_{1}}\ \mbox{vertices of degree}\ i_{1},\\ \vdots\\ c_{i_{k}}\ \mbox{vertices of degree}\ i_{k}. \end{array}\] Now, by definition of $G\vee H$, $u$ is adjacent in $G\vee H$ to all of the above vertices and also to every vertex of $H$. Thus the neighbors of $u$ in $G\vee H$ are precisely: \[\begin{array}{c} c_{i_{1}}\ \mbox{vertices of degree}\ n_{2}+i_{1},\\ \vdots\\ c_{i_{k}}\ \mbox{vertices of degree}\ n_{2}+i_{k},\\ t_{0}\ \mbox{vertices of degree}\ n_{1}+0,\\ \vdots\\ t_{\Delta}\ \mbox{vertices of degree}\ n_{1}+\Delta, \end{array}\] where $\rm{dp}(H)=\sum_{i=0}^{\Delta}t_{i}x^{i}$. Therefore \[\rm{dp}_{G\vee H}(u)=c_{i_{1}}x^{n_{2}+i_{1}}+ \ldots+c_{i_{k}} x^{n_{2}+i_{k}}+t_{0}x^{n_{1}}+t_{1}x^{n_{1}+1}+ \ldots+ t_{\Delta}x^{n_{1}+\Delta}= x^{n_{2}}\rm{dp}_{G}(u)+x^{n_{1}}\rm{dp}(H).\] \end{proof} \begin{rem} Since for every two simple graphs $G$ and $H$ we have $G\vee H=H\vee G$, the above theorem, in practice, provides a tool for calculating the degree polynomial of any vertex of $G\vee H$. \end{rem} \begin{thm} If $G$ and $H$ are two simple graphs, and $u$ and $v$ are vertices of $G$ and $H$, respectively, then \[\rm{dp}_{G\times H}((u,v))= x^{\deg u}\rm{dp}(v)+x^{\deg v} \rm{dp}(u).\] \end{thm} \begin{proof} If $u$ and $v$ are isolated in $G$ and $H$, respectively, then by definition of $G\times H$, $(u,v)$ is an isolated vertex of $G\times H$ and therefore $\rm{dp}((u,v))=0$. On the other hand, $\rm{dp}(u)=0$ and $\rm{dp}(v)=0$, so the conclusion holds. If $u$ is isolated in $G$ but $v$ is not isolated in $H$, suppose $\rm{dp}(v)=\sum_{r_{j}\neq 0}r_{j}x^{j}$, where the $r_{j}$'s are positive integers and the $j$'s are the distinct degrees of the neighbors of $v$ in $H$. By definition of $G\times H$, since $u$ has no neighbors in $G$, each neighbor of $(u,v)$ in $G\times H$ is of the form $(u,v^{\prime})$, where $v^{\prime}\sim v$. Moreover, the degree of such a vertex $(u,v^{\prime})$ in $G\times H$ is $\deg u+\deg v^{\prime}$. Since for each $j$, $v$ has $r_{j}$ neighbors of degree $j$, the number of neighbors of $(u,v)$ of degree $\deg u+ j$ is $r_{j}$. Thus \[\rm{dp}_{G\times H}((u,v))=\sum_{r_{j}\neq 0}r_{j}x^{\deg u+ j}= x^{\deg u}\rm{dp}(v).\] But in this case $\rm{dp}(u)=0$, and therefore the conclusion holds. The argument in the case that $v$ is isolated but $u$ is not, is similar. Now suppose that neither $u$ nor $v$ is isolated. Suppose that $\rm{dp}(u)=\sum_{s=1}^{k}c_{i_{s}}x^{i_{s}}$ and $\rm{dp}(v)=\sum_{t=1}^{k^{\prime}}r_{j_{t}}x^{j_{t}}$, where the $c_{i_{s}}$'s and $r_{j_{t}}$'s are positive integers, and the $i_{s}$'s and $j_{t}$'s are the distinct degrees of the neighbors of $u$ and $v$, respectively. 
This means that the neighbors of $u$ in $G$ are precisely: \[\begin{array}{c} c_{i_{1}}\ \mbox{vertices of degree}\ i_{1},\\ \vdots\\ c_{i_{k}}\ \mbox{vertices of degree}\ i_{k}, \end{array}\] and the neighbors of $v$ in $H$ are precisely: \[\begin{array}{c} r_{j_{1}}\ \mbox{vertices of degree}\ j_{1},\\ \vdots\\ r_{j_{k^{\prime}}}\ \mbox{vertices of degree}\ j_{k^{\prime}}. \end{array}\] By definition of $G\times H$, the vertices adjacent to $(u,v)$ in $G\times H$ are of the two kinds below:\\ (i) the vertices of the form $(u,b)$, where $b$ is a neighbor of $v$ in $H$,\\ (ii) the vertices of the form $(a,v)$, where $a$ is a neighbor of $u$ in $G$.\\ Since in all vertices of kind (i), $u$ is fixed, such vertices are precisely: \[\begin{array}{c} r_{j_{1}}\ \mbox{vertices of degree}\ \deg u+j_{1},\\ \vdots\\ r_{j_{k^{\prime}}}\ \mbox{vertices of degree}\ \deg u+j_{k^{\prime}}. \end{array}\] Also, since in the vertices of kind (ii), $v$ is fixed, such vertices are precisely: \[\begin{array}{c} c_{i_{1}}\ \mbox{vertices of degree}\ i_{1}+\deg v,\\ \vdots\\ c_{i_{k}}\ \mbox{vertices of degree}\ i_{k}+\deg v. \end{array}\] Note that the degree of each vertex $(x,y)$ in $G\times H$ is $\deg_{G}x+\deg_{H}y$. Therefore \[\rm{dp}_{G\times H}((u,v))=(r_{j_{1}}x^{\deg u+j_{1}}+ \ldots+ r_{j_{k^{\prime}}} x^{\deg u+ j_{k^{\prime}}})+(c_{i_{1}}x^{i_{1}+\deg v}+ \ldots+ c_{i_{k}} x^{i_{k}+\deg v})= x^{\deg u}\rm{dp}(v)+ x^{\deg v} \rm{dp}(u).\] \end{proof} \begin{thm} If $G$ and $H$ are two simple graphs, and $u$ and $v$ are vertices of $G$ and $H$, respectively, then \[\rm{dp}_{G\otimes H}((u,v))= \rm{dp}(u)\otimes\rm{dp}(v).\] \end{thm} \begin{proof} If at least one of $u$ and $v$ is isolated, then by definition of $G\otimes H$, $(u,v)$ is an isolated vertex of the graph $G\otimes H$, and therefore $\rm{dp}((u,v))=0$. On the other hand, in this case at least one of $\rm{dp}(u)$ and $\rm{dp}(v)$ is zero. Therefore, by the definition of the tensor product of polynomials, $\rm{dp}(u)\otimes \rm{dp}(v)$ is zero as well, and the conclusion holds. Now suppose that neither $u$ nor $v$ is isolated. Suppose that $\rm{dp}(u)=\sum_{i}c_{i}x^{i}$ and $\rm{dp}(v)=\sum_{j}r_{j}x^{j}$, where the $c_{i}$'s and $r_{j}$'s are positive integers, and the $i$'s and $j$'s are the distinct degrees of the neighbors of $u$ and $v$, respectively. 
By definition of $G\otimes H$, first, each neighbor of $(u,v)$ in $G\otimes H$ is of the form $(u^{\prime},v^{\prime})$, where $u^{\prime}$ is a neighbor of $u$ and $v^{\prime}$ is a neighbor of $v$; and secondly, if $u^{\prime}$ is a neighbor of $u$ of degree $i$ and $v^{\prime}$ is a neighbor of $v$ of degree $j$, then $(u^{\prime},v^{\prime})$ is a neighbor of $(u,v)$ of degree $ij$. Since for each $i$, $u$ has exactly $c_{i}$ neighbors of degree $i$, and for each $j$, $v$ has exactly $r_{j}$ neighbors of degree $j$, the product $c_{i}r_{j}$ contributes to the coefficient of $x^{ij}$ in the degree polynomial of $(u,v)$. This implies that \[\rm{dp}_{G\otimes H}((u,v))=\rm{dp}(u)\otimes \rm{dp}(v)\] by definition of $\rm{dp}(u)\otimes \rm{dp}(v)$. \end{proof} \begin{thm} If $G$ and $H$ are two simple graphs and $u$ and $v$ are vertices of $G$ and $H$, respectively, then \[\rm{dp}_{G[H]}((u,v))=(\rm{dp}(u))^{\curlywedge\times n_{2}}\rm{dp}(H)+ x^{(\deg u)n_{2}}\rm{dp}(v),\] in which $n_{2}$ is the order of $H$. \end{thm} \begin{proof} If $u$ and $v$ are both isolated, then by definition of $G[H]$, $(u,v)$ is isolated in $G[H]$, and therefore $\rm{dp}_{G[H]}((u,v))=0$. But in this case both $\rm{dp}(u)$ and $\rm{dp}(v)$ are zero polynomials, and therefore the conclusion holds. If $u$ is isolated in $G$ but $v$ is not isolated in $H$, suppose $\rm{dp}(v)=\sum_{t=1}^{k^{\prime}} r_{j_{t}}x^{j_{t}}$, in which the $r_{j_{t}}$'s are positive integers and the $j_{t}$'s are the distinct degrees of the neighbors of $v$ in $H$; then the neighbors of $v$ in $H$ are precisely: \[\begin{array}{c} r_{j_{1}}\ \mbox{vertices of degree}\ j_{1},\\ \vdots\\ r_{j_{k^{\prime}}}\ \mbox{vertices of degree}\ j_{k^{\prime}}. \end{array}\] Since $u$ has no neighbors in $G$, by definition of $G[H]$ every neighbor of $(u,v)$ is of the form $(u,b)$, of degree $(\deg u)n_{2}+\deg b$, where $b$ is a neighbor of $v$ in $H$. Hence, since $u$ is fixed, the neighbors of $(u,v)$ are precisely: \[\begin{array}{c} r_{j_{1}}\ \mbox{vertices of degree}\ (\deg u)n_{2}+j_{1},\\ \vdots\\ r_{j_{k^{\prime}}}\ \mbox{vertices of degree}\ (\deg u)n_{2}+j_{k^{\prime}}. \end{array}\] Thus \[\rm{dp}_{G[H]}((u,v))=r_{j_{1}}x^{(\deg u)n_{2}+j_{1}}+\ldots+r_{j_{k^{\prime}}}x^{(\deg u)n_{2}+j_{k^{\prime}}}=x^{(\deg u)n_{2}}\rm{dp}(v),\] and since $\rm{dp}(u)=0$, the conclusion holds in this case. If $v$ is isolated in $H$ but $u$ is not isolated in $G$, then since $v$ has no neighbors in $H$, each neighbor of $(u,v)$ is of the form $(a,b)$, where $a$ is a neighbor of $u$ and $b$ is a vertex of $H$; the degree of $(a,b)$ in $G[H]$ is $(\deg a)n_{2}+\deg b$. Suppose that $\rm{dp}(u)=\sum_{s=1}^{k}c_{i_{s}} x^{i_{s}}$, where the $c_{i_{s}}$'s are positive integers and the $i_{s}$'s are the distinct degrees of the neighbors of $u$ in $G$. 
Supposing $\rm{dp}(H)=\sum_{p=0}^{\Delta} l_{p}x^{p}$, where $l_{p}$ is the number of vertices of $H$ of degree $p$ and $\Delta$ is the maximum degree of $H$, for each $p$ the neighbors of $(u,v)$ whose second components are of degree $p$ are precisely: \[\begin{array}{c} l_{p}c_{i_{1}}\ \mbox{vertices of degree}\ i_{1}n_{2}+p,\\ \vdots\\ l_{p}c_{i_{k}}\ \mbox{vertices of degree}\ i_{k}n_{2}+p. \end{array}\] Therefore each $l_{p}c_{i_{s}}$ contributes to the coefficient of $x^{i_{s}n_{2}+p}$ in $\rm{dp}((u,v))$. Thus \[\rm{dp}_{G[H]}((u,v))=\sum_{p=0}^{\Delta}\sum_{s=1}^{k} l_{p}c_{i_{s}}x^{i_{s}n_{2}+p}= \sum_{s=1}^{k}c_{i_{s}}x^{i_{s}n_{2}}\sum_{p=0}^{\Delta}l_{p}x^{p} =\sum_{s=1}^{k}c_{i_{s}}x^{i_{s}n_{2}}\rm{dp}(H)= (\rm{dp}(u))^{\curlywedge\times n_{2}}\rm{dp}(H),\] and since $\rm{dp}(v)=0$, the conclusion holds. Now suppose that neither $u$ nor $v$ is isolated. Suppose that $\rm{dp}(u)=\sum_{s=1}^{k}c_{i_{s}}x^{i_{s}}$ and $\rm{dp}(v)=\sum_{t=1}^{k^{\prime}}r_{j_{t}}x^{j_{t}}$, where the $c_{i_{s}}$'s and $r_{j_{t}}$'s are positive integers and the $i_{s}$'s and $j_{t}$'s are the distinct degrees of the neighbors of $u$ and $v$, respectively. The vertices adjacent to $(u,v)$ in $G[H]$ are of the two kinds below:\\ (i) the vertices of the form $(a,b)$, where $a$ is a neighbor of $u$ in $G$ and $b$ is a vertex of $H$,\\ (ii) the vertices of the form $(u,b)$, where $b$ is a neighbor of $v$ in $H$.\\ Let $\rm{dp}(H)=\sum_{p=0}^{\Delta}l_{p}x^{p}$, where $\Delta$ is the maximum degree of $H$. For each $p$, the neighbors of $(u,v)$ of kind (i) whose second components are of degree $p$ are precisely: \[\begin{array}{c} l_{p}c_{i_{1}}\ \mbox{vertices of degree}\ i_{1}n_{2}+p,\\ \vdots\\ l_{p}c_{i_{k}}\ \mbox{vertices of degree}\ i_{k}n_{2}+p. \end{array}\] Therefore each $l_{p}c_{i_{s}}$ contributes to the coefficient of $x^{i_{s}n_{2}+p}$. On the other hand, since in all neighbors of kind (ii), $u$ is fixed, such vertices are precisely: \[\begin{array}{c} r_{j_{1}}\ \mbox{vertices of degree}\ (\deg u)n_{2}+j_{1},\\ \vdots\\ r_{j_{k^{\prime}}}\ \mbox{vertices of degree}\ (\deg u)n_{2}+j_{k^{\prime}}. \end{array}\] Thus \[\rm{dp}_{G[H]}((u,v))=\sum_{p=0}^{\Delta}\sum_{s=1}^{k} l_{p}c_{i_{s}} x^{i_{s}n_{2}+p}+\sum_{t=1}^{k^{\prime}}r_{j_{t}}x^{(\deg u)n_{2}+j_{t}} =(\rm{dp}(u))^{\curlywedge\times n_{2}}\rm{dp}(H)+x^{(\deg u)n_{2}}\rm{dp}(v).\] \end{proof} As we saw above, given the degree polynomial sequences of two graphs $G$ and $H$, and without access to $G$ and $H$ themselves, the degree polynomial sequences of $G\vee H,\ G\times H,\ G\otimes H$, and $G[H]$ can be calculated. The following theorem shows that the degree polynomial sequence of the complement of a graph can likewise be calculated from the degree polynomial sequence of the graph, without access to the graph itself. 
\begin{thm} Let $G$ be a simple graph and $u$ be a vertex of $G$. Then \[\rm{dp}_{G^{c}}(u)=(\rm{dp}(G)-\rm{dp}_{G}(u)- x^{\deg_{G}u})^{\curlywedge (n-1)-},\] where $n$ is the order of $G$. \end{thm} \begin{proof} For each integer $i\geq 0$, the coefficient of $x^{i}$ in $\rm{dp}(G)$ is the total number of vertices of $G$ of degree $i$, and the coefficient of the same $x^{i}$ in $\rm{dp}_{G}(u)$ is exactly the number of vertices of $G$ of degree $i$ which are adjacent to $u$ in $G$. Therefore the coefficient of each $x^{i}$ in the polynomial \[\rm{dp}(G)-\rm{dp}_{G}(u)\] is the number of vertices of degree $i$ (in $G$) which are non-adjacent to $u$. Since $u$ itself is non-adjacent to $u$, the coefficient of $x^{i}$ in the polynomial \[\rm{dp}(G)-\rm{dp}_{G}(u)-x^{\deg_{G}u}\] is exactly the number of vertices of $G$, other than $u$, which are of degree $i$ and non-adjacent to $u$ (in $G$); by definition of $G^{c}$, this is exactly the number of vertices of $G^{c}$ which are of degree $(n-1)-i$ and adjacent to $u$ (in $G^{c}$). Therefore, for each $i$, the coefficient of $x^{(n-1)-i}$ in the latter polynomial equals exactly the number of vertices of degree $i$ in $G^{c}$ which are adjacent to $u$ (in $G^{c}$). Thus (based on the meaning of the notation $(\rm{dp}(G)-\rm{dp}_{G}(u)- x^{\deg_{G}u})^{\curlywedge (n-1)-}$) the coefficient of $x^{i}$ in \[(\rm{dp}(G)-\rm{dp}_{G}(u)- x^{\deg_{G}u})^{\curlywedge (n-1)-}\] is exactly the number of neighbors of $u$ in $G^{c}$ whose degree is $i$. \end{proof} \vskip 0.8 true cm \section{\bf Some open problems} \vskip 0.4 true cm Many new questions and open problems arise from the above topics. Some of them are: (1) How can one characterize all degree polynomial sequences? (2) Classify the degree polynomial sequences of connected graphs and of trees. (3) Characterize all graphs whose degree polynomial sequences are formed by polynomials with one term. (4) Characterize all degree polynomial sequences which are realized by a unique graph. \vskip 0.4 true cm
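For experimenting with these notions on small graphs, for instance when searching for examples or counterexamples related to the problems above, the following minimal Python sketch may be useful. The encoding and all helper names are ours and not part of the text: a graph is an adjacency-set dictionary, and a polynomial is a Counter mapping each exponent to its coefficient (so Counter({2: 1, 1: 1}) stands for $x^{2}+x$). The last lines check the complement theorem on the path $P_{4}$.
\begin{verbatim}
from collections import Counter

def dp_vertex(adj, v):
    # dp(v): the coefficient of x^i counts the neighbors of v of degree i
    return Counter(len(adj[w]) for w in adj[v])

def dp_graph(adj):
    # dp(G): the coefficient of x^i counts the vertices of G of degree i
    return Counter(len(adj[v]) for v in adj)

def complement(adj):
    vs = set(adj)
    return {v: (vs - {v}) - adj[v] for v in vs}

def dp_complement_formula(adj, u):
    # (dp(G) - dp_G(u) - x^(deg_G u)), with exponents reflected: i -> (n-1)-i
    n = len(adj)
    p = Counter(dp_graph(adj))
    p.subtract(dp_vertex(adj, u))
    p.subtract({len(adj[u]): 1})
    return Counter({(n - 1) - i: c for i, c in p.items() if c})

# check the complement theorem on the path P4: 0 - 1 - 2 - 3
adj = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2}}
for u in adj:
    assert dp_vertex(complement(adj), u) == dp_complement_formula(adj, u)
\end{verbatim}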
\section*{Data Availability Statement} A snapshot of the exact version of the prototyping platform toki \cite{Horst:tokiABuildAndTestPlatformForPrototypingAndEvaluatingOperatingSystemConceptsInRealTimeEnvironments:2019} that was used to conduct the presented measurements is available on Zenodo \cite{DataAndSwSnapshot:2020}. The snapshot also contains the captured, raw STM trace data and scripts to produce the presented figures. The latest version of toki can be obtained from \cite{toki-dev-site:2020}. \section{Introduction} \label{sec:introduction} \Acp{cps} are characterized by the fact that a computer system works together with a physical environment, or rather controls it. A specific characteristic of such control systems is their necessity to provide short and predictable reaction times to events in the physical world in order to guarantee good control quality \cite{Jensen:WrongAssumptionsAndNeglectedAreasInRealTimeSystems:2008}. Both properties are likewise essential for modern systems, such as tele-operated driving \cite{Tang2014}, and classic systems, such as the control of internal combustion engines \cite{Frey2010}. An important aspect of the achievable reaction time is the interrupt handling performance of a system, in both of its dimensions: the interrupt handling latency and the throughput capabilities. In particular, the effect of the utilized software stack has not yet been comprehensively assessed. Such a systematic evaluation would, however, facilitate the development and selection of \ac{cps} software stacks for particularly latency-sensitive or throughput-hungry use-cases. Previous studies in this field mainly conducted measurements with the help of external measurement devices \cite{Koning2013,Rybaniec2012,Macauley1998,NXP:MeasuringInterruptLatency:2018}, which requires an in-depth understanding of the hardware to obtain precise measurements \cite{NXP:MeasuringInterruptLatency:2018}. This expert knowledge, however, is reserved for the \ac{soc} and processor \ac{ip} vendors. Hence, we see the need for a measurement method that allows accurately measuring and properly stressing the interrupt handling process of today's \acp{soc} without expert knowledge. Accordingly, we present a flexible interrupt performance measurement method that can be applied to ARMv8-A \acs{ip}-core based platforms that provide a physical trace-port. As we see an increasing market share of ARM based systems \cite{Manners2020} and their wide adoption in automotive \acp{cps} \cite{Daimler2018,Tesla2019,Renesas2020}, we strongly believe that our method helps in analyzing a multitude of relevant systems. We specify three benchmark functions based on the assessment of ten combinations out of seven distinctive test-cases, whereby each test-case was chosen to stress a dedicated part of the ARM interrupt handling process. The effectiveness of the test-cases and benchmark functions is demonstrated on a Xilinx ZCU102 evaluation board \cite{Xilinx:ZCU102EvaluationBoardUserGuide:2018} with two different software stacks. 
In summary, we contribute \begin{enumerate*} [ label=\emph{(\roman*)}, ref=(\roman*), itemjoin={{, }}, itemjoin*={{, and }}, after={{.}} ] \item a precise method to measure the interrupt performance of complex ARM based \acp{soc} without expert knowledge \item a set of benchmark functions that provokes the best and worst interrupt latency and maximal throughput on a given ARMv8-A hardware and software combination \end{enumerate*} The rest of this paper\xspace is structured as follows: \Cref{sec:int-handling} describes the interrupt handling process on ARMv8-A platforms with a \ac{gic} version 2, \cref{sec:eval-method} presents the measurement setup and procedure of the envisioned evaluation method, \cref{sec:eval-results} discusses the proposed test-cases and benchmarks along with the measurement results, \cref{sec:related-work} gives an overview on related work, and \cref{sec:conclusion} concludes the paper\xspace. \section{Interrupt Handling Procedure on ARM\titlelowercase{v}8-A Platforms} \label{sec:int-handling} \citet{Mueller:TheComplexityOfSimpleComputerArchitectures:1995} define an interrupt as an event that causes a change in the execution flow of a program sequence other than a branch instruction. Its handling process starts with the activation through a stimulus and ends with the completion of the \ac{isr}, which is called in consequence and processes the stimulus. Until the \ac{isr} is executed several steps are undergone in hardware to cope for example with simultaneously arriving \acp{irq} and masking of certain requests. In the following, we explain this process for ARMv8-A platforms, as specified in the \ac{gic} architecture specification version\nobreak\ 2 \cite{ARM:GenericInterruptControllerArchitectureVersion2:2013}. In \cref{sec:eval-results} this information is used to design suitable test cases. The \ac{gic} architecture specification differentiates among four types of interrupts: \begin{itemize*} [ label={}, afterlabel={}, itemjoin={{, }}, itemjoin*={{, and }}, after={{ interrupts.}} ] \item peripheral \item software-generated \item virtual \item maintenance \end{itemize*} In the course of this paper\xspace we focus solely on measuring interrupts triggered by external stimuli, the peripheral interrupts. They can be configured as edge-triggered or level-sensitive. This means that the corresponding interrupt is recognized either once on a rising edge in the input event signal, or continuously as long as the signal has a certain strength. The \ac{gic} supervises the overall interrupt routing and management process up to the point that the \ac{isr} is called. \Cref{fig:selection-and-signaling-process} shows the \ac{gic} architecture and the signaling path for the selected measurement hardware, the \ac{zynqmp}. The \ac{gic} manages all incoming event signals of the system and consists out of a central Distributor and a CPU interface per processor core. The Distributor manages the trigger-type of each interrupt, organizes their prioritization, and forwards requests to the responsible CPU interface. The CPU interfaces perform the priority masking and preemption handling for their associated core. \begin{figure}[tb] { \includegraphics[width=\linewidth]{gic-arch-and-distributor-selection-process} \caption[]{ Repeatedly, per CPU interface executed selection and signaling process of the \acf{gic} for handling triggered \acfp{irq}. 
} \label{fig:selection-and-signaling-process} } \end{figure} The timeline of the various steps in the interrupt handling process is illustrated in \cref{fig:int-process-timeline}. The handling process of a certain \ac{irq} begins with the arrival of an event signal at the Distributor (step 1). In case the signal matches the configured trigger type, the actual handling process is triggered through the recognition of a new \ac{irq} (step 2), which eventually leads to the execution of the \ac{isr} (step 9). After being recognized (step 2), the Distributor may select and forward the \ac{irq} to the responsible CPU interface according to the process depicted in the upper half of \cref{fig:selection-and-signaling-process}. This selection process (step 3) is executed repeatedly and potentially in parallel for each CPU interface. When the next \ac{hppi} is identified, the Distributor forwards the request to the currently inspected CPU interface (step 4). The CPU interface filters incoming requests according to its configuration and the process shown in the lower half of \cref{fig:selection-and-signaling-process}. As a result, the CPU interface may signal a pending \ac{irq} to its associated core (step 5). \begin{figure}[tb] { \includegraphics[width=\linewidth]{int-process-timeline} \caption[]{ Overall steps of the interrupt handling process on ARMv8 platforms with a \acf{gic} version 2 and an ARMv8-A architecture profile core. } \label{fig:int-process-timeline} } \end{figure} Subsequent to the signaling of a new \ac{irq}, on an ARMv8-A architecture \cite{ARM:ArchitectureReferenceManualARMv8ForARMv8AArchitectureProfile:2018}, the core acknowledges the signaled \ac{irq} by reading its id (step 6), which marks the \ac{irq} as \emph{active} within the \ac{gic} (step 7). Meanwhile, the processing core jumps to the vector table entry indicated by the id of the current \ac{irq} (step 8), which leads to the execution of the \ac{isr} (step 9). Individual software stacks might add additional steps before the \ac{isr} finally starts. Besides the regular interrupt requests described above, the \ac{gic} architecture version 2 (\acs{gic}v2) also supports the prioritized and more secure \acp{fiq}; however, these lie outside the focus of this paper\xspace. Furthermore, it shall be noted that the regular interrupt processing path was not affected by the latest updates to the \ac{gic} architecture (\ie version 3 and 4). Hence, we strongly believe that the presented benchmark functions would still be valid for the updated architectures. \section{Proposed Evaluation Method} \label{sec:eval-method} With respect to the interrupt performance of a \altext{hardware}{software} combination, two properties are most relevant: the \ac{irq} \emph{latency} and \emph{throughput}. Considering the process shown in \cref{fig:int-process-timeline}, we define the \ac{irq} \emph{latency} as the time between a signaled event (step 1) and the call to the corresponding \ac{isr} (step 9). As \emph{throughput} we understand the number of completed \acp{isr} per time unit. \subsection{Measurement Setup} \label{ssec:eval-setup} \begin{figure}[t] \centering \includegraphics[width=\linewidth]{measurement-setup} \caption[]{ Chosen measurement setup, with four PL2PS interrupts generated by the \acf{pl} according to the configuration signaled by the \acf{ps} via its \acf{emio}. The generated interrupts, executed instructions, and global timestamps are recorded through the \acf{stm} and \acf{etm}. The captured trace is read via an ARM DSTREAM unit. 
} \label{fig:eval-setup} \end{figure} Our measurements utilize a Xilinx ZCU102 \cite{Xilinx:ZCU102EvaluationBoardUserGuide:2018}, with the toki prototyping platform \cite{Horst:tokiABuildAndTestPlatformForPrototypingAndEvaluatingOperatingSystemConceptsInRealTimeEnvironments:2019,toki-dev-site:2020}, and an ARM DSTREAM \cite{ARM:ARMDS5ARMDSTREAMUserGuide:2015} hardware trace. We have chosen the \ac{zynqmp}, as it features an in-built \ac{pl} and versatile tracing capabilities. \Cref{fig:eval-setup} illustrates the chosen hardware setup. The \ac{zynqmp} is divided into the \ac{pl} and \ac{ps}. The \ac{ps} provides two processor clusters, the \ac{apu} with four ARM Cortex-A53 cores and the \ac{rpu} with two ARM Cortex-R5 cores. Both clusters have their own interrupt handling infrastructure, but we focus on the \acs{apu}'s only. The ARM CoreSight \cite{ARM:CoreSightTechnicalIntroduction:2013} tracing capabilities of the \ac{zynqmp} allow to record events, such as taken branches, external interrupts, and software events on a common timeline defined by system-wide time\-stamps. The \ac{stm} and \acp{etm} record the various events in hardware, without altering the behavior of the executed code, neither in a temporal nor semantic sense. Only software driven events require a register write operation and thus marginally influence the timing of the executed code. CoreSight is part of all newer ARM processor \acp{ip} and can be utilized as soon as the used hardware features a physical JTAG- and trace-port. The latter is typically available on evaluation boards used for prototyping professional applications. In addition to the in-built hardware features, we deploy a custom interrupt generation block into the \ac{pl}. The block allows to simultaneously stimulate \ac{apu} interrupts, following a generation pattern defined by a logical-high and -low phase, and trace them in the \ac{stm}. This could also be realized with an external \ac{fpga} or a signal generator, to support additional platforms. The pin multiplexing configuration of the target platform only has to ensure that triggered input lines are also connected to a \ac{stm} hardware event channel. \subsection{Measurement Procedure} \label{ssec:eval-procedure} Based on the measurement setup, we propose two measurement procedures, one per measurement type (\ie latency and throughput), utilizing two different configurations of the interrupt generation block. Both procedures use \ac{apu} core 0 to take measurements and the other cores as stimulation sources, where needed. We conduct our measurements on two software stacks: \textit{(i)} a bare-metal, and \textit{(ii)} a FreeRTOS based system. Both software stacks are provided by the toki build- and test-platform \cite{Horst:tokiABuildAndTestPlatformForPrototypingAndEvaluatingOperatingSystemConceptsInRealTimeEnvironments:2019,toki-dev-site:2020}, and utilize a driver library provided by Xilinx \cite{Xilinx:EmbeddedSW:2020}. This library already features an interrupt dispatching routine that multiplexes the processor exception associated with regular interrupts on the target. The bare-metal stack is an unoptimized piece of code that features neither a scheduler, nor a timer tick. It could clearly be optimized, however, it is unclear how much a fully optimized assembler version of the stack would impact the presented results. Thanks to its Yocto \cite{YoctoWebsite} based build-system, toki can easily be executed to include Linux based software stacks and with that the presented test setup. 
Completely different software stacks and hardware platforms can be evaluated with the given setup, as long as they provide \textit{(i)} a libc function interface, and \textit{(ii)} drivers for interacting with the caches, \ac{stm} trace, and \ac{gic} on that platform. \subsubsection*{Throughput} In case of a throughput evaluation, we configure the interrupt generation block to continuously signal the PL2PS interrupts for \SI{9.75}{\second} and then wait for another \SI{250}{\milli\second}, in a repeating cycle. We call each repetition of this pattern a stimulation phase. Core 0 is configured to handle the signaled \ac{ppi} on a level-sensitive basis, and the corresponding \ac{isr} does nothing besides emitting a \ac{stm} software event, \ie executing an \SI{8}{\bit} store instruction. Hence, the time spent in each \ac{isr}, and with it the possible reduction of the maximum throughput, is negligible. For each throughput measurement we capture a \SI{120}{\second} trace and evaluate the contained stimulation phases. The throughput ($\mu$) in each stimulation phase ($i \in [0,19]$) is obtained from the traced measurement samples by counting the \ac{isr} generated \acs{stm}-software-events between the rising-edge \acs{stm}-hardware-events of the ($i$)\textsuperscript{th} and ($i$+$1$)\textsuperscript{th} stimulation phase and dividing this count by the length of one logical-high phase. The set of throughput values considered for the evaluation in \cref{sec:eval-results} is then given by $M = \{\, \mu(i) \;|\; i \in [0, 19] \,\}$. \subsubsection*{Latency} The latency evaluation is conducted with an alternating scheme of a \SI{1}{\milli\second} PL2PS interrupt trigger and a \SI{4}{\milli\second} pause. The interrupt generation block is configured accordingly. Again, we refer to each repetition of this scheme as a stimulation phase. In contrast to the throughput measurement, however, core 0 is configured to handle the signaled \ac{ppi} on a rising-edge basis. Thus, every stimulation phase provokes only one interrupt. The corresponding \ac{isr} is the same as for the throughput measurements. The results for the latency measurements are obtained by evaluating \SI{30}{\second} trace captures. The interrupt latency $\Delta t_\mathsf{latency}(i)$ induced by each stimulation phase $i \in [0, 2399]$ is given by $\Delta t_\mathsf{latency}(i) = B - A$, with $A$ representing the point in time where the interrupt was stimulated and $B$ the point where the corresponding \ac{isr} was started. Both points can be obtained from the captured trace. $A$ is given by the timestamp of the \ac{stm} hardware event associated with the rising-edge of the PL2PS interrupt signal. $B$, on the other hand, has to be determined and defined for every analyzed software stack individually. In the course of this paper\xspace we utilize the timestamp of a \ac{stm} event generated within the interrupt handler of our benchmark application that runs on top of the evaluated software stacks. Similar to the throughput values, the set of latency values considered in \cref{sec:eval-results} is given by $X = \{\, \Delta t_\mathsf{latency}(i) \;|\; i \in [0, 2399] \,\}$. \subsection{Precision and Limitations} \label{ssec:precision} In our measurement setup, we configure the \ac{pl}, trace port, and timestamp generation clock to oscillate at \SI{250}{\mega\hertz}. Hence, two consecutive timestamp ticks lie \SI{4}{\nano\second} apart from each other. 
Since each sampled event in the \ac{etm} and \ac{stm} is assigned a timestamp, our measurement precision corresponds exactly to the system timestamp resolution, \ie \SI{4}{\nano\second}. This is an order of magnitude smaller than the interrupt latency measured in a previous study for the same hardware platform \cite{Wild:EvaluationOfAMultiCoreProcessorRegardingResponseTimeAndLoadLimitForInterruptProcessing:2018} and a quarter of the measured minimal interrupt latency of an ARM real-time core \cite{NXP:MeasuringInterruptLatency:2018}. Even though state of the art oscilloscopes provide a sampling rate of up to \SI[per-mode=symbol]{20}{\giga\sample\per\second} \cite{KeysightTechnologies:InfiniiVision6000XSeriesOscilloscopesDataSheet:2020}, which corresponds to a measuring precision of \SI{0.05}{\nano\second}, the actual precision in case of interrupt latency measurements might be considerably lower. The reason for this is that the oscilloscope can only measure external signals of a processor. Thus, in-depth knowledge of the internal structure of the hardware and executed instructions during a measurement is required to utilize the full precision of the oscilloscope. This makes it less suited for the evaluation of different hardware platforms and software stacks. The CoreSight based measurement setup, on the other hand, supports a flexible placement of the measurement points within and outside of the processor and does not require any expert knowledge about the hardware or software. Besides the measurement precision and flexibility, we also need to ensure that the presented measurement setup is sane and triggered interrupts can actually be recognized by the processor. According to the \ac{zynqmp} technical reference manual \cite[p.\,312]{Xilinx:ZynqUltraScaleDeviceTechnicalReferenceManual:2018}, a signal pulse that shall trigger a PL2PS interrupt needs to be at least \SI{40}{\nano\second} wide to guarantee that it is recognized as such. Hence, the presented stimulation scenarios for the two measurement procedures ensure that all triggered interrupts can be recognized. The disadvantage of the presented measurement approach, however, is that it is only applicable for ARM based platforms with a dedicated JTAG- and trace-port. Given ARM's 40\% share of the semiconductor market for \ac{ip} designs \cite{Manners2020} and the wide availability of suitable evaluation boards, we believe this is an acceptable drawback. An additional limitation is that valid measurements can only be obtained for the interrupt with the highest priority among the active ones, but this applies to any kind of measurement setup. \section{Constructing a Benchmark} \label{sec:eval-results} \begin{table*}[tp] \caption{ Properties of the evaluated test-cases and benchmarks used to compare the interrupt latency (L) and throughput (T). 
} \label{tbl:test-cases} \small \extrarowsep=.6pt \begin{tabu} to \textwidth {|X[1.9,l]|X[.8,c]|X[.4,c]|X[.4,c]|X[.8,c]|X[.8,c]|X[.8,c]|>{\hspace*{-.2\tabcolsep}}X[.26,c]|>{\hspace*{-.2\tabcolsep}}X[.26,c]|>{\hspace*{-.2\tabcolsep}}X[.26,c]|} % \tabucline{-} % \multirow{2}{=}{\bfseries Description} & \multirow{2}{=}{\centering\bfseries Targeted Component} & \multicolumn{2}{p{.8\tabucolX+2\tabcolsep}|}{\centering\bfseries Measurements} & \multirow{2}{=}{\centering\bfseries Enabled Interrupts} & \multirow{2}{=}{\centering\bfseries Cache Config} & \multirow{2}{=}{\centering\bfseries Enabled Cores} & \multicolumn{3}{>{\hspace*{2\tabcolsep}}p{.8\tabucolX+4\tabcolsep}|}{\centering\bfseries Benchmarks} \\ & & {\bfseries L} & {\bfseries T} & & & & {\bfseries L\textsubscript{min}} & {\bfseries L\textsubscript{max}} & {\bfseries T\textsubscript{max}} \\ \hline \hline T1: Baseline & --- & X & X & 1 & Disabled & 1 & & & \\ \hline \hline T2: Caches enabled & Cache & X & X & 1 & Enabled & 1 & X & & X \\ T3: Caches invalidated & Cache & X & & 1 & Invalidated & 1 & & & \\ \hline \hline T4: Enabled interrupts & GIC & X & & 2--181 & Disabled & 2 & & X & \\ T5: Order of priorities & GIC & X & & 15 & Disabled & 2 & & & \\ T6: Parallel interrupt handling & GIC & X & X & 1 & Disabled & 2, 3, 4 & & X & X \\ \hline \hline T7: Random memory accesses & Memory & X & & 1 & Disabled & 4 & & & \\ \tabucline{-} \tabuphantomline \end{tabu} \end{table*} \begin{figure*}[tp] \centering { \begin{minipage}[b][124pt][t]{.02\textwidth} \subcaption{} \label{sfig:results:latency-overview} \end{minipage} \begin{minipage}[b][124pt][t]{.59\textwidth} \includegraphics[height=124pt] {latency-overview}% \hfill \end{minipage}% \hfill% \begin{minipage}[b][124pt][t]{.02\textwidth} \subcaption{} \label{sfig:results:latency-b_min} \end{minipage} \begin{minipage}[b][124pt][t]{.16\textwidth} \includegraphics[height=124pt] {latency-b_min-details}% \hfill \end{minipage}% \hfill% \begin{minipage}[b][124pt][t]{.02\textwidth} \subcaption{} \label{sfig:results:latency-b_max} \end{minipage} \begin{minipage}[b][124pt][t]{.16\textwidth} \includegraphics[height=124pt] {latency-b_max-details}% \hfill \end{minipage}% } \caption{ Latency measured with T1--T7 (\subref{sfig:results:latency-overview}) and B-L\textsubscript{min} and B-L\textsubscript{max} (\subref{sfig:results:latency-b_min}-\subref{sfig:results:latency-b_max}). \Csubref{sfig:results:latency-overview} uses a symlog scale with a linear threshold of \SI{2496}{\nano\second}, \csubref{sfig:results:latency-b_min} uses a symlog scale with a linear threshold of \SI{240}{\nano\second}, and \csubref{sfig:results:latency-b_max} uses a linear scale. } \label{fig:results:latency} \end{figure*} \begin{figure}[tp] \centering { \begin{minipage}[b][81.44pt][t]{.02\linewidth} \subcaption{} \label{sfig:results:throughput-overview} \end{minipage} \hfill \begin{minipage}[b][81.44pt][t]{.9\linewidth} \hfill \includegraphics[width=\linewidth-2pt] {throughput-overview}% \hspace*{2pt} \end{minipage}\\[\baselineskip]% \begin{minipage}[b][142.76pt][t]{.02\linewidth} \subcaption{} \label{sfig:results:throughput-details} \end{minipage} \hfill \begin{minipage}[b][142.76pt][t]{.9\linewidth} \hfill \includegraphics[width=\linewidth] {throughput-details}% \end{minipage}% } \caption{ Throughput measured with T1, T2, T6, and B-T\textsubscript{max}. 
\Csubref{sfig:results:throughput-overview} compares the median of all measurements on a linear scale, and \csubref{sfig:results:throughput-details} illustrates the measured throughput ranges on a symlog scale with a linear threshold of \SI{1}{\hertz}, normalized to a \SI{500}{\kilo\hertz} range around the highlighted median. } \label{fig:results:throughput} \end{figure} In order to create a benchmark for comparing the interrupt latency and throughput across platforms and software stacks, we have designed seven test-cases specifically tailored to stress the ARMv8-A interrupt handling process. To judge their suitability for an overall benchmark, we measure their performance with the two software stacks described in \cref{ssec:eval-procedure} on top of the \ac{zynqmp} discussed in \cref{ssec:eval-setup}. By comparing the impact of each test-case with respect to the baseline performance of the two systems, we compose three benchmarks out of the test-cases and show their suitability by applying them to the same system configurations. \subsection{Evaluated Test-Cases} \label{ssec:test-cases} Given the interrupt handling process in \cref{sec:int-handling}, we conclude that the time spent in the process can be influenced by the core, caches, memory, and \ac{gic}. We have designed seven test-cases that aim to reveal the influence of different configuration settings related to the aforementioned components on the temporal behavior of the interrupt handling process. However, we exclude the core from our considerations by only measuring interrupts with the highest priority and not computationally loading the measured core. The measurements for all test-cases follow the scheme presented in \cref{ssec:eval-procedure}, unless indicated otherwise. Depending on the goal of each test-case, they are applied either only to latency measurements or to both latency and throughput measurements. The proposed test-cases and their targeted components are summarized in \cref{tbl:test-cases}, and \cref{fig:results:latency,fig:results:throughput} present the results of our measurements. The presented results are based on 848--6000 measurement samples per latency measurement and 11--12 samples per throughput measurement. The remainder of this section elaborates on the intended influence of the listed test-cases on the interrupt handling process. \subsubsection*{T1: Baseline} T1 is intended to provide a reference point to compare the other test-cases to and rate their impact. Hence, T1 assesses the interrupt latency and throughput of a system in the most isolated way, with only one core and one interrupt enabled and caches disabled. To this end, T1 only enables the \ac{emio} pin driven interrupt and routes it to core 0. As \ac{isr} the default handler, described in \cref{ssec:eval-procedure}, is used. T1 is evaluated for its latency and throughput performance. \subsubsection*{T2: Caches enabled} T2 equals T1, with the exception that all operations are executed with enabled caches. This test is conducted for both latency and throughput measurements. \subsubsection*{T3: Caches invalidated} T3 is also based on T1, but the \ac{isr} additionally invalidates the data and instruction caches. Since this is not feasible in throughput measurements (new interrupts would arrive independently of the cache invalidation process), we conduct only latency measurements with T3. 
\subsubsection*{T4: Enabled interrupts} T4 aims at stressing the \ac{gic} with the highest possible number of enabled interrupts, as the interrupt selection and signaling process suggests that more checks have to be done the more interrupts are \altext{enabled}{pending}. Hence, T4 enables the maximum number of interrupts supported by the \ac{zynqmp}, except those required for conducting the measurements. All interrupts are routed to and handled by core 0. The measured PL-to-PS interrupt is assigned to the highest priority and all other interrupts to the lowest priority. Core 0 installs an empty \ac{isr} that immediately returns after clearing the \ac{irq} in the \ac{gic} for all interrupts, except the measured PL-to-PS interrupt, which uses the same \ac{isr} as T1. As this test aims at stressing the \ac{gic} to reduce its performance, we only evaluate it with respect to the interrupt latency. To be able to identify trends, we evaluated this test-case with 1, 36, 72, 108, 144, and 180 stressing interrupts. However, due to the marginal differences between the results of the different T4 variants and space constraints we only show the results of T4-180, T4 with 180 stressing interrupts, which provoked the highest latency. \subsubsection*{T5: Order of priorities} T5 utilizes the same setup as T4 and is also applied to latency measurements only. However, in contrast to T4, T5 only utilizes as much interrupts as there are priorities, \ie 15. The measured interrupt remains at priority 0 and the priorities of the other 14 are assigned in an ascending order (\ie 14 to 1). This design intends to provoke a maximal number of \ac{hppi} updates. \subsubsection*{T6: Parallel interrupt handling} To test the influence of parallelly handled interrupts on the interrupt handling process, T6 enables up to 4 cores and configures all of them to handle the \ac{emio} pin 0 interrupt. The interrupt is configured as level-sensitive with the highest priority. The \ac{pl} ensures that this interrupt is signaled continuously and simultaneously as soon as the test is enabled. The \acp{isr} on all cores generate a \ac{stm} event, which are evaluated for throughput measurements. In case of latency measurements, however, only those \ac{stm} events produced by core 0 are considered. We evaluated T6 with 2, 3, and 4 enabled cores. The results showed a clear trend that the more enabled cores the higher the observed latency and the lower the achieved throughput. Due to space constraints we thus only show the results for T6-4, with 4 enabled cores, in case of the latency considerations and T6-2 in case of the throughput measurements. \subsubsection*{T7: Random memory accesses} As pointed out earlier, the shared memory and interconnecting buses of multi-core processors represent a major source of unforeseen delays. Accordingly, T7 is designed to delay memory accesses by overloading the interconnecting bus and memory interface. For this purpose all 4 cores are running random, concurrent memory accesses in form of constants that are written to random locations in a \SI{96}{\mega\byte} large array. In parallel core 0 executes the standard latency test. Throughput evaluations are not considered with this test-case, as it targets to delay the interrupt handling process. 
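To make the post-processing of \cref{ssec:eval-procedure} concrete before turning to the benchmarks, the following Python sketch shows one possible way to derive the latency set $X$ and the throughput set $M$ from a captured trace. It assumes the trace has already been decoded into (timestamp, kind) tuples, with timestamps in seconds; this decoding step, the event names, and the function names are our own simplification and are not part of toki or the CoreSight tooling.
\begin{verbatim}
def latency_samples(events):
    # X = {B - A}: time from the rising-edge STM hardware event (A)
    # to the first STM software event emitted by the ISR (B)
    samples, stimulus = [], None
    for ts, kind in events:
        if kind == "rising_edge":
            stimulus = ts
        elif kind == "isr" and stimulus is not None:
            samples.append(ts - stimulus)
            stimulus = None  # one latency sample per stimulation phase
    return samples

def throughput_samples(events, high_phase_s=9.75):
    # M = {mu(i)}: ISR events between consecutive rising edges,
    # divided by the length of one logical-high phase
    edges = [ts for ts, kind in events if kind == "rising_edge"]
    isrs = [ts for ts, kind in events if kind == "isr"]
    return [sum(1 for ts in isrs if a <= ts < b) / high_phase_s
            for a, b in zip(edges, edges[1:])]
\end{verbatim}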
\subsection{Proposed Benchmarks} \label{ssec:benchmarks} Analyzing the measured interrupt performances under the different test-cases, shown in \cref{fig:results:latency,fig:results:throughput}, we conclude, first of all, that different setups and software stacks indeed considerably influence the interrupt handling performance. All three targeted components provoke a considerable effect on the interrupt latency and throughput. Particularly noticeable are the differences between the test-cases with enabled (T2, T3) and disabled caches (T1, T4--T7), for both the observed latency and throughput, as well as the effects of stressing the \ac{gic} on the measured latency (T4--T6). Of special interest is that the FreeRTOS based stack achieved a smaller minimum latency and a narrower variation range of the latency and throughput, compared to the bare-metal stack. Examples are T2 and T6-4 for latency measurements, and T6-2 for throughput measurements. After measuring and reviewing the tests for each critical test-case multiple times without finding any anomalies, we assume that some low-level hardware effects, for instance in the pipeline or shared interconnects, might cause the observed behavior. Further insight into the situation could be gained by \textit{(i)} implementing a fully optimized, assembly-only bare-metal stack, or \textit{(ii)} analyzing the actual hardware effects with a cycle-accurate simulation in ARM's Cycle Model Studio \cite{ARM:CycleModelStudio:2020}. However, both approaches are out of the scope of this paper\xspace. T2 produces by far the shortest interrupt latency of \SI{232}{\nano\second} on average, with only a few outliers. Hence, we propose to utilize T2 as the benchmark for the minimal achievable latency (B-L\textsubscript{min}). To obtain a suitable benchmark for the maximal latency, we analyzed all combinations of the test-cases T4-36, T4-144, T6-3, and T7. Except for the combination of T6 and T7, all tested combinations showed a similar performance with only slight differences. One exception is the interrupt latency performance of the combination of T4-144 and T6 on FreeRTOS, which is considerably more confined than all other observed ranges. The highest latency is achieved with a combination of T4-36 and T6; however, the combination of T4-36, T6, and T7 is close. Accordingly, we propose to use the combination of T4-36 and T6 to benchmark the achievable maximal interrupt latency (B-L\textsubscript{max}). For the maximal throughput benchmark (B-T\textsubscript{max}) we evaluated all variants of the T6 test-case with enabled caches (T2). Interestingly, the enabled caches seem to mitigate the effect of more enabled cores, as all combinations showed a similar throughput. However, the combination of T6-2 and T2 still performed best. Even though the maximal achieved throughput of the combined test-cases lags a little behind that of T2 alone in case of the bare-metal software stack, it outperforms T2 by far in case of the FreeRTOS based stack. Hence, we propose the combination of T6-2 and T2 to benchmark the maximal throughput of a system. \section{Related Work} \label{sec:related-work} In principle, there exist two patented interrupt latency measurement approaches that are used in the literature. First, measurements based on an external measurement device, such as an oscilloscope \cite{Koning2013}. 
And second, measurements based on storing timestamps when an interrupt is asserted and when the interrupt handling routine is completed \cite{Lever1999}, like we do with our measurements. \Citet{Liu2011} measured the interrupt latency of five Linux variations on an Intel PXA270 processor, which features an ARM instruction set. They used a counter based measurement method and focused on the effect of different computational loads. Since their stimulation is limited to a single periodic interrupt, we argue that their approach is not able to stress the interrupt distribution process and that they rather analyzed the responsiveness of the scheduler to aperiodic events than the deviation of the interrupt latency. The wide majority of studies, however, focused on interrupt performance measurements with external measurement devices \cite{Rybaniec2012,Macauley1998,NXP:MeasuringInterruptLatency:2018}, or combined it with the timestamp approach \cite{Regnier2008}. \Citet{Macauley1998} compared different {80\texttimes86} processors with each other and \citet{NXP:MeasuringInterruptLatency:2018} determined an exact latency for their i.MX RT1050 processor. All other aforementioned studies focused on comparing different software stacks with respect to various computational loads. None of the mentioned studies analyzed the throughput, or stressed the interrupt distribution process. \citet{Aichouch2013} claim to have measured the event latency of LITMUS\^{}RT \vs a KVM/Qemu virtualized environment on an Intel based computer. However, it stays unclear how they performed the measurements and where they got the timing information from. Previous studies of the achievable interrupt throughput focused on the analysis of the achievable network packet \altext{transmission}{re\-cep\-tion} or storage \altext{input}{output} operations per second when considering different interrupt coalescing and balancing strategies \cite{Prasad2004,Ahmad2011,Cheng2012}, but do not analyze the interrupt throughput in isolation with respect to different software stacks. \section{Conclusion and Outlook} \label{sec:conclusion} We presented a flexible evaluation method based on the ARM CoreSight technology \cite{ARM:CoreSightTechnicalIntroduction:2013}, which enables the assessment of various software stacks on top of commodity ARMv8-A platforms with respect to their interrupt handling performance. Utilizing the evaluation method, we crafted seven specifically tailored test-cases that were shown to stress the ARM interrupt handling process. Out of these test-cases we deduced three benchmark functions, tailored to provoke the minimal (B-L\textsubscript{min}) and maximal (B-L\textsubscript{max}) interrupt latency, and the maximal throughput (B-T\textsubscript{max}), of a given software stack. We validated the test-cases and benchmark functions by comparing two software stacks (a simple bare-metal and FreeRTOS based environment) and measuring them on top of a Xilinx ZCU102 \cite{Xilinx:ZCU102EvaluationBoardUserGuide:2018}. Our measurements showed that different software stacks do have a considerable impact on the interrupt handling performance of a hardware platform. Hence, we hope to draw some attention on the importance of a good software design for \ac{cps}, with respect to interrupt processing and the need of a more profound analysis on how interrupt handling processes can be made more predictable with respect to the achievable latency and throughput. 
\begin{acks} The presented results build on top of the Bachelor's thesis by \citet{Wild:EvaluationOfAMultiCoreProcessorRegardingResponseTimeAndLoadLimitForInterruptProcessing:2018} and were partially funded by the \grantsponsor{BMWi}{German Federal Ministry of Economics and Technology (BMWi)}{https://www.bmwi.de} under grant n\textdegree\kern .1em\grantnum{BMWi}{01MD16002C} and the \grantsponsor{EU}{European Union (EU)}{https://europa.eu/} under RIA grant n\textdegree\thinspace825050. \end{acks} \bibliographystyle{ACM-Reference-Format}
\section{Introduction} A number of random walk methods are known to have fast mixing rates in convex spaces. We are interested in sampling from a bounded connected space $\Omega'\subset\Real^n$ that might be non-convex. Such sampling methods can be useful in solving general optimization and planning problems. Recently, \citet{Abbasi-Bartlett-Gabillon-Malek} analyzed the \textsc{Hit-n-Run} algorithm under the assumptions that (i) a biLipschitz measure-preserving mapping between a convex space $\Omega$ and the target space $\Omega'$ exists, and (ii) $\Omega'$ has low curvature. In this paper, we construct such mappings. Further, we show that the existence of such mappings is sufficient to ensure fast mixing for another popular random walk known as the \textsc{BallWalk}. Thus the curvature condition is not needed to analyze the \textsc{BallWalk} algorithm. A popular approach to analyzing mixing rates is to show lower bounds for the {\em conductance}, which are usually obtained by establishing an isoperimetric inequality. As an example of an isoperimetric inequality, \citet{Dyer-Frieze-1991} show that for any partition $(\Omega_1,\Omega_2,\Omega_3)$ of a convex unit volume $\Omega$, \begin{equation} \label{eq:iso-ineq} \vol(\Omega_3) \ge \frac{2 d(\Omega_1,\Omega_2)}{D_{\Omega}} \min(\vol(\Omega_1), \vol(\Omega_2)) \;. \end{equation} Here $\vol$ denotes the $n$-dimensional volume,\footnote{We will use $\vol_{n-1}$ to denote the $(n-1)$-dimensional volume.} $D_{\Omega}$ denotes the diameter of $\Omega$ obtained by $D_{\Omega} = \max_{x,y\in \Omega} \abs{x-y}$ where $\abs{x-y}$ is the Euclidean distance between $x,y\in\Real^n$, and $d(\Omega_1,\Omega_2)=\min_{x\in \Omega_1, y\in \Omega_2} \abs{x-y}$. The only isoperimetric inequality for a non-convex space is due to \citet{Chandrasekaran-Dadush-Vempala-2010}, who obtain an inequality for star-shaped bodies.\footnote{We say $\Omega'$ is star-shaped if the kernel of $\Omega'$, defined by $N_S = \{x\in \Omega' : \forall y\in \Omega'\,\, [x,y]\subset \Omega' \}$, is nonempty.} In this paper, we show that an isoperimetric inequality holds for any non-convex space, as long as there exists a smooth measure-preserving mapping from a convex space to that non-convex space. We also show how to construct such mappings from appropriate smooth incompressible flows. Given such a mapping, we show a polynomial mixing rate for \textsc{BallWalk}. The \textsc{BallWalk} algorithm is a simple random walk procedure and is defined as follows. Let $B(x,r)$ denote the $n$-dimensional Euclidean ball of radius $r$ centered around $x$. At time $t$, we pick a point uniformly at random from $B(x_t,r)$. Let this point be $y_t$. If the new point is outside $\Omega'$, the move is rejected and $x_{t+1}=x_t$. Otherwise, $x_{t+1} = y_t$. \citet{Kannan-Lovasz-Simonovits-1997} show a polynomial mixing rate for the \textsc{BallWalk} algorithm when $\Omega'$ is convex. The next theorem shows a more general result under an embedding assumption. Before giving more details, let us define some notation. Let $\Omega$ be a convex space with boundary $\partial \Omega$, and let $\Omega'$ be a bounded connected subset of $\Real^n$. Assume $\Omega'$ is the image of $\Omega$ under a Lipschitz measure-preserving mapping $g:\Real^n \rightarrow \Real^n$: \[ \exists L_{\Omega'}>0, \forall x,y\in \Omega,\quad \abs{g(x)- g(y)} \le L_{\Omega'} \abs{x-y}, \quad \det(D_x g) = 1
\] Here $D_x g$ is an $n\times n$ matrix whose $(i,j)$-th element is $\partial g_i / \partial x_j$, also known as the Jacobian matrix evaluated at point $x$. The mapping $g$ is called measure-preserving if $\det(D_x g)=1$ for all $x\in\Omega$. Let $R_{\Omega'}$ be an upper bound on the isoperimetric ratio of $\Omega'$, i.e. $\vol_{n-1}(\partial \Omega')/\vol(\Omega') \le R_{\Omega'}$. \begin{thm} \label{thm:sampling:main} Consider the \textsc{BallWalk} algorithm. Let $\sigma_0$ be the distribution of the initial point, $\sigma_t$ be the distribution after $t$ steps of the random walk, and $\sigma$ be the uniform distribution on $\Omega'$. Suppose there exists $M>0$ such that for any $A\subset \Omega'$, $\sigma_0(A)\le M \sigma(A)$. For any $\epsilon>0$, after $O((1/\epsilon^2)\log (1/\epsilon))$ steps of the \textsc{BallWalk} with radius $r=O(\epsilon)$, we have $d_{tv}(\sigma_t, \sigma) \le \epsilon$. Here, $d_{tv}$ denotes the total variation distance, and the big-O notation hides constants and polynomial terms in $D_\Omega, L_{\Omega'}, R_{\Omega'}, M, n$. \end{thm} Here we show a proof sketch for Theorem~\ref{thm:sampling:main}. First we show that an isoperimetric inequality can be obtained from the embedding assumption. We will discuss the existence of such embeddings in the next section. Let $(\Omega_1,\Omega_2,\Omega_3)$ be a partition of $\Omega$ and $(\Omega_1',\Omega_2',\Omega_3')$ be the corresponding partition of $\Omega'$ under the mapping $g$. Let $(x_1,x_2) = \mathop{\rm argmin}_{y_1\in \Omega_1, y_2\in \Omega_2} d(y_1,y_2)$. By the embedding assumption, \[ d(\Omega_1,\Omega_2) = d(x_1,x_2) \ge \frac{1}{L_{\Omega'}}d(g(x_1), g(x_2)) \ge \frac{1}{L_{\Omega'}} d(\Omega_1', \Omega_2') \;. \] Thus, \begin{align*} \vol(\Omega_3') = \vol(\Omega_3) &\ge \frac{1}{4 D_\Omega} d(\Omega_1,\Omega_2) \min\{\vol(\Omega_1), \vol(\Omega_2) \} &\dots \text{ By \eqref{eq:iso-ineq},} \\ &\ge \frac{1}{4 D_\Omega L_{\Omega'}} d(\Omega_1', \Omega_2') \min\{ \vol(\Omega_1'), \vol(\Omega_2')\} &\dots\text{ By the embedding assumption.} \end{align*} It can be shown that if an isoperimetric inequality holds for $\Omega'$, the Markov process induced by the \textsc{BallWalk} algorithm has large conductance. This step of the proof is omitted and is based on standard techniques from the literature on random walk analysis~\citep{Vempala-2005, Chandrasekaran-Dadush-Vempala-2010}. Next we show the relationship between conductance and mixing rates. Let $P_x(A)$ be the probability of being in set $A\subset \Omega'$ after one step of the process that starts from $x$. The ergodic flow of the Markov process is defined by $\Phi(A) = \int_A P_x(\Omega'\setminus A)dx$. Define the $s$-conductance (for $0\le s \le 1$) of the Markov process by \[ \Phi_s = \inf_{s < \sigma(A) \le 1/2} \frac{\Phi(A)}{\min\{\vol(A), \vol(\Omega'\setminus A)\}} \;. \] The following standard result relates the $s$-conductance to the mixing rate. \begin{lem}[Corollary~1.5 of \citet{Lovasz-Simonovits-1993}] \label{lem:convergence} Let $0<s\le 1/2$ and $H_s = \sup_{\sigma(A) \le s} \abs{\sigma_0(A)-\sigma(A)}$. Then for every measurable $A\subset \Real^n$ and every $t\ge 0$, \[ \abs{\sigma_t(A)-\sigma(A)} \le H_s + \frac{H_s}{s} \left( 1 - \frac{\Phi_s^2}{2} \right)^t \;. \] \end{lem} The lemma shows that the mixing time of the Markov process is directly related to its $s$-conductance. The rest of the paper is devoted to the construction of Lipschitz measure-preserving mappings. Before turning to that construction, we give a short illustrative sketch of the \textsc{BallWalk} procedure and then define some notation.
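For illustration only, the \textsc{BallWalk} procedure defined above admits the following direct transcription (this sketch is ours and plays no role in the analysis; \texttt{in\_domain} is a hypothetical membership oracle for $\Omega'$):
\begin{verbatim}
import numpy as np

def ball_walk(x0, r, T, in_domain, rng=None):
    """BallWalk: at each step, draw y uniformly in B(x, r) and
    accept the move only if y stays inside the target domain."""
    rng = np.random.default_rng() if rng is None else rng
    x = np.asarray(x0, dtype=float)
    n = x.size
    for _ in range(T):
        u = rng.normal(size=n)
        u /= np.linalg.norm(u)                      # random direction
        y = x + r * rng.uniform() ** (1.0 / n) * u  # uniform in the ball
        if in_domain(y):                            # else x_{t+1} = x_t
            x = y
    return x
\end{verbatim}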
\subsection{Notation} We use $\nabla f$, $\nabla^2 f$ and $\nabla\cdot v$ to denote the gradient of a function $f$, the Laplacian of $f$, and the divergence of a vector field $v$, respectively. For integer $k\ge 0$ and $0<\alpha\le 1$, we use $C^{k,\alpha}(K,K')$ to denote the H\"{o}lder space, i.e. the space containing mappings from $K$ to $K'$ that have continuous derivatives up to order $k$ and such that the $k$th partial derivatives are H\"{o}lder continuous with exponent $\alpha$. Further, we use $\norm{.}_{k,\alpha}$ to denote the $C^{k,\alpha}$ norm. For integer $k\ge 0$ and $p\ge 1$, we use $W^{k,p}(K,K')$ to denote the Sobolev space. \section{Lipschitz Measure-Preserving Mappings} \label{sec:pde} We want to find a mapping $u\in C^{0,1}(\Omega,\Omega')$ such that \begin{equation} \label{eq:measure-preserving} \det(\nabla u(x)) =1\qquad \text{for any } x\in\Omega\;. \end{equation} When $\Omega=B(0,1)$ is the Euclidean unit ball and $\Omega'$ is star-shaped, \citet[Theorem~5.4]{Fonseca-Parry-1992} construct a mapping $u\in W^{1,\infty}(B(0,1),\Omega')$. They first construct $a\in W^{1,\infty}(B(0,1), \Omega')$ such that for any $x\in B(0,1)$, $\det(D_x a)=\lambda(x)$ for some $\lambda(x)>0$. Then using results of \citet{Dacorogna-Moser-1990}, a map $b\in C^{1,\alpha}(B(0,1), B(0,1))$ with $0<\alpha<1$ is constructed such that \begin{align*} \det(D_x b) &=\lambda(x) \qquad \text{in } B(0,1)\,,\\ b(x) & = x \qquad \text{on } \partial B(0,1) \;. \end{align*} Define a mapping from $\Omega'$ to $B(0,1)$ by setting $z(x) = b\, \circ\, a^{-1} (x)$ for $x\in \Omega'$. Because \[ D_x z = (D_{a^{-1}(x)} b) (D_{a^{-1}(x)} a )^{-1}\,, \] we get that \[ \det(D_x z) = \lambda(a^{-1}(x)) \frac{1}{\lambda(a^{-1}(x))} = 1 \;. \] We get the desired embedding by setting $u = z^{-1}$. We show a more general approach to construct such mappings by using divergence-free (incompressible) vector fields. Consider a velocity field \[ v(t,x):[0,1]\times\Real^n\rightarrow\Real^n \,, \] and the ordinary differential equation \[ \begin{cases} \partial_t z(t) = v(t,z(t))\,, &\quad t > 0\\ z(0) = z_0 \,, \end{cases} \] where $\partial_t$ denotes the partial derivative with respect to time. Assume that $v$ is Lipschitz with respect to the spatial variable with Lipschitz constant $L$, uniformly with respect to the time variable. Under these conditions, by the Picard--Lindel\"{o}f theorem~\citep{Hartman-2002} the above ODE has a unique Lipschitz solution. Further, the classical flow of $v$, i.e. the flow $\Phi(t,x):[0,1]\times\Real^n\rightarrow\Real^n$ satisfying \[ \begin{cases} \partial_t \Phi(t,x) = v(t,\Phi(t,x))\,, &\quad t > 0\\ \Phi(0,x) = x \,, \end{cases} \] is biLipschitz: for all $x_1,x_2\in\Real^n$, \begin{equation} \label{eq:PhiLip} e^{-L t} \abs{x_1 - x_2} \le \abs{\Phi(t,x_1) - \Phi(t,x_2)} \le e^{L t} \abs{x_1 - x_2} \;. \end{equation} Let $\Omega_t = \Phi(t,\Omega)$, so that $\Omega_0=\Omega$. We want to choose $v$ (and hence $\Phi$) such that $\Omega_1 = \Omega'$ and \[ u(x) = \Phi(1,x) \] is a solution for \eqref{eq:measure-preserving}. To ensure that the flow is volume-preserving, we require that $v$ is divergence-free: \begin{equation} \label{eq:divergence-free} \begin{cases} \nabla\cdot v(t,.) = 0 &\quad \text{in } \Omega_t\\ v(t,x) = f_t(x) &\quad \text{on } \partial\Omega_t \\ \end{cases} \end{equation} where the boundary values $(f_t)$ are such that $\Omega_1=\Omega'$.
By the Divergence Theorem, we must have \begin{equation} \label{eq:fn} \int_{\partial \Omega_t} f_t^\top \widehat n \, dS = 0\,, \end{equation} where $\widehat n$ is the outward pointing unit normal field of the boundary $\partial \Omega_t$. By the chain rule, \[ \partial_t \nabla \Phi(t,x) = \nabla v(t,\Phi(t,x)) \nabla \Phi(t,x) \;. \] We know that if $\Psi'(t) = A(t) \Psi(t)$, then $(\det \Psi)' = \mathop{\rm trace}(A) \det \Psi$ (Liouville's formula). Thus, \[ \partial_t \det \nabla \Phi(t,x) = (\det \nabla \Phi(t,x)) \nabla\cdot(v(t,\Phi(t,x))) = 0 \;. \] Thus, $\det \nabla \Phi(t,x) = \det \nabla \Phi(0,x) = 1$ for any $t\in [0,1]$, and hence $u(x) = \Phi(1,x)$ is a solution for \eqref{eq:measure-preserving}. By \eqref{eq:PhiLip}, $\Phi$ (and hence $u$) inherits the smoothness of $v$. Thus it only remains to show a Lipschitz solution for \eqref{eq:divergence-free}. Problem~\eqref{eq:divergence-free} does not necessarily have a unique solution. Here we describe a solution based on a reduction to a Dirichlet problem. Assume $v(t,.) = \nabla h_t$ for some potential $h_t:\Real^n\rightarrow\Real$. Let $c_t:\Real^n\rightarrow\Real$ be such that $\nabla c_t = f_t$ on $\partial \Omega_t$. Thus we want to solve the following Dirichlet problem for Laplace's equation: \begin{equation} \label{eq:laplace} \begin{cases} \nabla^2 h_t = 0 &\quad \text{in } \Omega_t\\ h_t = c_t &\quad \text{on } \partial\Omega_t \\ \end{cases} \end{equation} Extend $c_t$ to the whole of $\Omega_t$ and let $m_t = \nabla^2 c_t$. Solve \begin{equation} \label{eq:laplace2} \begin{cases} \nabla^2 w_t = m_t &\quad \text{in } \Omega_t\\ w_t = 0 &\quad \text{on } \partial\Omega_t \\ \end{cases} \end{equation} Then $h_t = c_t - w_t$ is a solution for \eqref{eq:laplace}. This holds because $w_t=0$ on $\partial\Omega_t$, so that $h_t = c_t$ there, and $\nabla^2 h_t = \nabla^2 c_t - \nabla^2 w_t = 0$ in $\Omega_t$. Thus \[ v(t,.) = \nabla h_t = \nabla c_t - \nabla w_t \] is a solution for \eqref{eq:divergence-free}. The Lipschitzness of $v(t,.)$ can be shown by bounding the second derivatives of $w_t$ and $c_t$. Regularity results for the solution of the Dirichlet problem for Laplace's equation exist; for example, we can use results of \citet[Chapter~6]{Gilbarg-Trudinger-2001} to show H\"{o}lder continuity of the second derivatives of the solution of \eqref{eq:laplace2}, given that $\partial\Omega_t$ and $m_t$ are sufficiently smooth.
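As an indicative illustration only (the sketch is ours and makes simplifying assumptions), the auxiliary problem \eqref{eq:laplace2} can be approximated on a uniform square grid in two dimensions with a basic Jacobi iteration:
\begin{verbatim}
import numpy as np

def jacobi_dirichlet(m, h, iters=5000):
    """Approximate w with Laplacian(w) = m in the interior and
    w = 0 on the boundary of a square grid with spacing h."""
    w = np.zeros_like(m)
    for _ in range(iters):
        # 5-point stencil sweep; boundary rows/columns stay at 0
        w[1:-1, 1:-1] = 0.25 * (w[2:, 1:-1] + w[:-2, 1:-1]
                                + w[1:-1, 2:] + w[1:-1, :-2]
                                - h * h * m[1:-1, 1:-1])
    return w
\end{verbatim}
The potential is then approximated by $h_t = c_t - w_t$, and $v(t,\cdot)=\nabla h_t$ can be obtained by finite differences.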
\section{Introduction} \label{Intro} Recently, Nisar et al.\ \cite{Nisar-Saiful} introduced and studied various properties of the $\mathtt{k}$-Struve function $\mathtt{S}_{\nu,c}^{\mathtt{k}}$ defined by \begin{equation}\label{k-Struve} \mathtt{S}_{\nu,c}^{\mathtt{k}}(x):=\sum_{r=0}^{\infty}\frac{(-c)^r} {\Gamma_{\mathtt{k}}(r\mathtt{k}+\nu+\frac{3\mathtt{k}}{2})\Gamma(r+\frac{3}{2})} \left(\frac{x}{2}\right)^{2r+\frac{\nu}{\mathtt{k}}+1}, \end{equation} where $c,\nu \in \mathbb{C}$ and $\Re(\nu)>-\frac{3}{2}\mathtt{k}$. The generalized Wright hypergeometric function ${}_{p}\psi _{q}(z)$ is given by the series \begin{equation} {}_{p}\psi _{q}(z)={}_{p}\psi _{q}\left[ \begin{array}{c} (a_{i},\alpha _{i})_{1,p} \\ (b_{j},\beta _{j})_{1,q} \end{array} \bigg|z\right] =\displaystyle\sum_{k=0}^{\infty }\dfrac{\prod_{i=1}^{p}\Gamma (a_{i}+\alpha _{i}k)}{\prod_{j=1}^{q}\Gamma (b_{j}+\beta _{j}k)}\dfrac{z^{k}}{k!}, \label{Fox-Wright} \end{equation} where $a_{i},b_{j}\in \mathbb{C}$ and $\alpha _{i},\beta _{j}\in \mathbb{R}$ ($i=1,2,\ldots ,p$; $j=1,2,\ldots ,q$). The asymptotic behavior of this function for large values of the argument $z\in {\mathbb{C}}$ was studied in \cite{CFox}, and under the condition \begin{equation} \displaystyle\sum_{j=1}^{q}\beta _{j}-\displaystyle\sum_{i=1}^{p}\alpha _{i}>-1 \label{eqn-5-Struve} \end{equation} in the work of \cite{Wright-2,Wright-3}. Properties of this generalized Wright function were investigated in \cite{Kilbas} (see also \cite{Kilbas-itsf, Kilbas-frac}). In particular, it was proved in \cite{Kilbas} that ${}_{p}\psi _{q}(z)$, $z\in {\mathbb{C}}$, is an entire function under the condition ($\ref{eqn-5-Struve}$). In \cite{Nair-1}, Nair introduced a pathway fractional integral operator, which was developed further by Mathai and Haubold \cite{Mathai-Habold-1}, \cite{Mathai-Habold-2} (see also \cite{Mathai-pathway}); it is defined as follows. Let $f\left( x\right) \in L\left( a,b\right)$, $\eta \in \mathbb{C}$, $\Re\left( \eta \right) >0$, $a>0$, and let the pathway parameter satisfy $\alpha <1$ (cf.\ \cite{Praveen-pathway}); then \begin{equation} \left( P_{0+}^{\left( \eta ,\alpha \right) }f\right) \left( x\right) =x^{\eta }\int\limits_{0}^{\left[ \frac{x}{a\left( 1-\alpha \right)}\right] }\left[1- \frac{a\left( 1-\alpha \right) t}{x}\right] ^{\frac{\eta }{\left( 1-\alpha \right)}}f\left( t\right) dt. \label{eqn-path-1} \end{equation} For a real scalar $\alpha$, the pathway model for scalar random variables is represented by the following probability density function (p.d.f.): \begin{equation} f\left( x\right) =c\left\vert x\right\vert ^{\gamma -1}\left[ 1-a\left( 1-\alpha \right) \left\vert x\right\vert ^{\delta }\right] ^{\frac{\beta }{\left( 1-\alpha \right) }}, \label{eqn-path-2} \end{equation} provided that $-\infty <x<\infty$, $\delta >0$, $\beta \geq 0$, $\left[ 1-a\left( 1-\alpha \right) \left\vert x\right\vert ^{\delta }\right] >0$ and $\gamma >0$, where $c$ is the normalizing constant and $\alpha$ is called the pathway parameter \cite{Nair-1}. Note that for $\alpha <1$ it is a finite range density with $\left[ 1-a\left( 1-\alpha \right) \left\vert x\right\vert ^{\delta }\right] >0$, and ($\ref{eqn-path-2}$) remains in the extended generalized type-1 beta family.
The pathway density in ($\ref{eqn-path-2}$), for $\alpha < 1$, includes the extended type-1 beta density, the triangular density, the uniform density and many other p.d.f.'s \cite{Praveen-pathway}. For instance, $\alpha >1$ gives \begin{equation} f\left( x\right) =c\left\vert x\right\vert ^{\gamma -1}\left[ 1+a\left( \alpha-1 \right) \left\vert x\right\vert ^{\delta }\right] ^{-\frac{\beta }{\left( \alpha-1 \right) }}, \label{eqn-path-3} \end{equation} provided that $-\infty <x<\infty$, $\delta >0$, $\beta \geq 0$ and $\alpha >1$, which is the extended generalized type-2 beta model for real $x$. It includes the type-2 beta density, the F density, the Student-$t$ density, the Cauchy density and many more. For more details about the pathway integral operator, one can refer to \cite{Praveen-pathway, Purohit}. The purpose of this work is to investigate the composition formula of the pathway integral operator due to Nair, expressed in terms of the generalized Wright hypergeometric function, with the $\mathtt{k}$-Struve function inserted. \section{Pathway Fractional Integration of the $\mathtt{k}$-Struve Function} The results given in this section are based on the preliminary assertion given by the composition formula of the pathway fractional integral ($\ref{eqn-path-1}$) with a power function. \begin{lemma} ({Agarwal}~\cite{Praveen-pathway}, Lemma 1) Let $\eta \in \mathbb{C}$, $\Re\left( \eta \right) >0$, $\beta \in \mathbb{C}$ and $\alpha <1$. If $\Re\left( \beta \right) >0$ and $\Re\left( \frac{\eta }{1-\alpha }\right) >-1$, then \begin{equation} \left\{ P_{0+}^{\left( \eta ,\alpha \right) }\left[ t^{\beta -1}\right] \right\} \left( x\right) =\frac{x^{\eta +\beta }}{\left[ a\left( 1-\alpha \right) \right] ^{\beta }}\frac{\Gamma \left( \beta \right) \Gamma \left( 1+\frac{\eta }{1-\alpha }\right) }{\Gamma \left( 1+\frac{\eta }{1-\alpha }+\beta \right) }. \label{lemma1} \end{equation} \end{lemma}
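Lemma \ref{lemma1} is easily checked numerically; the following indicative sketch (ours, for one arbitrary choice of parameters) compares a direct quadrature of \eqref{eqn-path-1} with the closed form \eqref{lemma1}:
\begin{verbatim}
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma

eta, alpha, beta, a, x = 1.5, 0.3, 2.2, 1.0, 2.0

upper = x / (a * (1 - alpha))
f = lambda t: (1 - a*(1-alpha)*t/x)**(eta/(1-alpha)) * t**(beta-1)
lhs = x**eta * quad(f, 0.0, upper)[0]   # pathway integral of t^(beta-1)

e = 1 + eta / (1 - alpha)
rhs = x**(eta+beta) / (a*(1-alpha))**beta * gamma(beta)*gamma(e)/gamma(e+beta)
print(lhs, rhs)  # both values agree up to quadrature error
\end{verbatim}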
The pathway fractional integration of the $\mathtt{k}$-Struve function is given by the following theorem.
\begin{theorem}\label{Th1} Let $\eta, \rho, \nu, c \in \mathbb{C}$ and $\alpha <1$ be such that $\Re\left( \eta \right) >0$, $\Re\left( \rho \right) >0$, $\Re(\nu)>-\frac{3}{2}\mathtt{k}$ and $\Re\left( \frac{\eta }{1-\alpha }\right) >-1$; then the following formula holds true:
\begin{equation}\label{eqn1-th1}
\begin{array}{c}
P_{0+}^{\left( \eta ,\alpha \right) }\left[ t^{\rho-1}\mathtt{S}_{\nu,c}^{\mathtt{k}}(t)\right] \left( x\right) =x^{\eta}\left(\frac{x}{a(1-\alpha)}\right)^{\rho+\frac{\nu}{\mathtt{k}}+1}\frac{\Gamma\left(1+\frac{\eta}{1-\alpha}\right)} {\mathtt{k}^{\frac{\nu}{\mathtt{k}}+\frac{1}{2}}\,2^{\frac{\nu}{\mathtt{k}}+1}}\\[2mm]
\times\; {}_{2}\Psi _{3}\left[ \begin{array}{ccc} \left( \rho +\frac{\nu}{\mathtt{k}}+1,2\right) , & \left( 1,1\right); & \\ \left( \rho +\frac{\nu}{\mathtt{k}}+\frac{\eta }{1-\alpha }+2,2\right) , & \left(\frac{\nu}{\mathtt{k}}+\frac{3}{2},1\right) , & \left( \frac{3}{2},1\right) \end{array} \bigg|\, -\frac{cx^{2}}{4\mathtt{k}\,a^{2}\left( 1-\alpha \right)^{2}}\right].
\end{array}
\end{equation}
\end{theorem}
\begin{proof}
Applying the pathway operator defined in \eqref{eqn-path-1} to \eqref{k-Struve}, and changing the order of integration and summation, we get
\begin{align*}
\left( P_{0+}^{\left( \eta ,\alpha \right)}\left[ t^{\rho-1}\mathtt{S}_{\nu,c}^{\mathtt{k}}(t)\right] \right) \left( x\right)&=P_{0+}^{\left( \eta ,\alpha \right)}\left[t^{\rho-1}\sum_{r=0}^{\infty}\frac{(-c)^{r}\left(\frac{t}{2}\right)^{2r+\frac{\nu}{\mathtt{k}}+1}}{\Gamma_{\mathtt{k}}\left(r\mathtt{k}+\nu+\frac{3}{2}\mathtt{k}\right)\Gamma\left(r+\frac{3}{2}\right)}\right](x)\\
&=\sum_{r=0}^{\infty}\frac{(-c)^{r}\left(\frac{1}{2}\right)^{2r+\frac{\nu}{\mathtt{k}}+1}} {\Gamma_{\mathtt{k}}\left(r\mathtt{k}+\nu+\frac{3}{2}\mathtt{k}\right)\Gamma\left(r+\frac{3}{2}\right)}\, P_{0+}^{\left( \eta ,\alpha \right)}\left(t^{\rho+2r+\frac{\nu}{\mathtt{k}}}\right)(x).
\end{align*}
Using Lemma \ref{lemma1}, we get
\begin{align*}
&=\sum_{r=0}^{\infty}\frac{(-c)^{r}\left(\frac{1}{2}\right)^{2r+\frac{\nu}{\mathtt{k}}+1}} {\Gamma_{\mathtt{k}}\left(r\mathtt{k}+\nu+\frac{3}{2}\mathtt{k}\right)\Gamma\left(r+\frac{3}{2}\right)} \frac{x^{\eta+\rho+2r+\frac{\nu}{\mathtt{k}}+1}}{[a(1-\alpha)]^{\rho+2r+\frac{\nu}{\mathtt{k}}+1}}\,\frac{\Gamma\left(\rho+2r+\frac{\nu}{\mathtt{k}}+1\right)\Gamma\left(1+\frac{\eta}{1-\alpha}\right)}{\Gamma\left(\frac{\eta}{1-\alpha}+\rho+2r+\frac{\nu}{\mathtt{k}}+2\right)}.
\end{align*}
Now using the relation $\Gamma_{\mathtt{k}}\left(\gamma\right)=\mathtt{k}^{\frac{\gamma}{\mathtt{k}}-1}\Gamma\left(\frac{\gamma}{\mathtt{k}}\right)$, we get
\begin{align*}
&=\frac{x^{\eta+\rho+\frac{\nu}{\mathtt{k}}+1}\,\Gamma\left(1+\frac{\eta}{1-\alpha}\right)}{\left[a(1-\alpha)\right]^{\rho+\frac{\nu}{\mathtt{k}}+1}\,\mathtt{k}^{\frac{\nu}{\mathtt{k}}+\frac{1}{2}}\,2^{\frac{\nu}{\mathtt{k}}+1}}\\
&\quad\times \sum_{r=0}^{\infty}\frac{(-c)^{r}x^{2r}\,\Gamma\left(\rho+\frac{\nu}{\mathtt{k}}+1+2r\right)} {\mathtt{k}^{r}\,\Gamma\left(r+\frac{\nu}{\mathtt{k}}+\frac{3}{2}\right)\Gamma\left(r+\frac{3}{2}\right)\,4^{r}\,[a(1-\alpha)]^{2r}\,\Gamma\left(\frac{\eta}{1-\alpha}+\rho+2r+\frac{\nu}{\mathtt{k}}+2\right)}.
\end{align*}
In view of $(\ref{Fox-Wright})$, we arrive at the desired result.
\end{proof}
\begin{corollary}
If we take $\mathtt{k}=1$ in Theorem \ref{Th1}, then we obtain the pathway integral involving the classical Struve function:
\begin{equation}\label{eqn1-cor1}
\begin{array}{c}
P_{0+}^{\left( \eta ,\alpha \right) }\left[ t^{\rho-1}\mathtt{S}_{\nu,c}^{1}(t)\right] \left( x\right) =x^{\eta}\left(\frac{x}{a(1-\alpha)}\right)^{\rho+\nu+1} \frac{\Gamma\left(1+\frac{\eta}{1-\alpha}\right)}{2^{\nu+1}}\\[2mm]
\times\; {}_{2}\Psi _{3}\left[ \begin{array}{ccc} \left( \rho +\nu+1,2\right) , & \left( 1,1\right); & \\ \left( \rho +\nu+\frac{\eta }{1-\alpha }+2,2\right) , & \left(\nu+\frac{3}{2},1\right) , & \left( \frac{3}{2},1\right) \end{array} \bigg|\, -\frac{cx^{2}}{4\,a^{2}\left( 1-\alpha \right)^{2}}\right].
\end{array}
\end{equation}
\end{corollary}
Now we give the relations between trigonometric functions and the $\mathtt{k}$-Struve function. Taking $\nu=\mathtt{k}/2$ in \eqref{k-Struve} and comparing coefficients with the cosine series (cf.\ (3.10) of \cite{Nisar-Saiful}) gives
\begin{equation}\label{cos}
1-\cos\left(\frac{\alpha x}{\sqrt{\mathtt{k}}}\right)= \alpha^{2}\sqrt{\frac{\pi x}{2}}~\mathtt{S}_{\frac{\mathtt{k}}{2}, \alpha^2}^{\mathtt{k}} (x).
\end{equation}
Similarly, the relation
\begin{equation}\label{cosh}
\cosh\left(\frac{\alpha x}{\sqrt{\mathtt{k}}}\right)-1= \alpha^{2}\sqrt{\frac{\pi x}{2}}~\mathtt{S}_{\frac{\mathtt{k}}{2}, -\alpha^2}^{\mathtt{k}} (x)
\end{equation}
can be derived from (3.11) of \cite{Nisar-Saiful}. Also, by taking $\nu=-\frac{\mathtt{k}}{2}$ in $(\ref{k-Struve})$, we obtain
\begin{equation}\label{sin}
\sin\left(\frac{\alpha x}{\sqrt{\mathtt{k}}}\right)=\alpha\sqrt{\frac{\pi x}{2\mathtt{k}}}\,\mathtt{S}_{-\frac{\mathtt{k}}{2},\alpha^{2}}^{\mathtt{k}}\left(x\right),
\end{equation}
\begin{equation}\label{sinh}
\sinh\left(\frac{\alpha x}{\sqrt{\mathtt{k}}}\right)=\alpha\sqrt{\frac{\pi x}{2\mathtt{k}}}\,\mathtt{S}_{-\frac{\mathtt{k}}{2},-\alpha^{2}}^{\mathtt{k}}\left(x\right).
\end{equation}
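These identities can be checked numerically against a truncation of the series \eqref{k-Struve}; the following indicative sketch (ours) verifies \eqref{cos} for one arbitrary parameter choice:
\begin{verbatim}
import numpy as np
from scipy.special import gammaln

def k_struve(x, nu, c, k, terms=60):
    """Truncated series (1); Gamma_k(g) = k**(g/k - 1) * Gamma(g/k)."""
    s = 0.0
    for r in range(terms):
        g = r*k + nu + 1.5*k
        log_gk = (g/k - 1.0)*np.log(k) + gammaln(g/k)
        log_t = (2*r + nu/k + 1)*np.log(x/2) - log_gk - gammaln(r + 1.5)
        s += (-c)**r * np.exp(log_t)
    return s

x, a, k = 1.3, 0.7, 2.0
lhs = 1.0 - np.cos(a*x/np.sqrt(k))
rhs = a**2 * np.sqrt(np.pi*x/2.0) * k_struve(x, k/2.0, a*a, k)
print(lhs, rhs)  # the two values coincide to many digits
\end{verbatim}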
\section{Pathway fractional integration of cosine, hyperbolic cosine, sine and hyperbolic sine functions}
\begin{theorem}\label{Th2} Let $\eta, \rho, \gamma \in \mathbb{C}$ and $\alpha <1$ be such that $\Re\left( \eta \right) >0$, $\Re\left( \rho \right) >0$ and $\Re\left( \frac{\eta }{1-\alpha }\right) >-1$; then the following formula holds true:
\begin{equation}\label{eqn1-th2}
\begin{array}{c}
P_{0+}^{\left( \eta ,\alpha \right) }\left[ t^{\rho-1}\left(1-\cos\left(\frac{\gamma t}{\sqrt{\mathtt{k}}}\right)\right)\right] \left( x\right) =\sqrt{\pi}\,\frac{\gamma^{2}}{\mathtt{k}}\,\frac{ x^{\eta+\rho+2}}{4[a(1-\alpha)]^{\rho+2}}\,\Gamma\left(1+\frac{\eta}{1-\alpha}\right)\\[2mm]
\times\; {}_{2}\Psi _{3}\left[ \begin{array}{ccc} \left( \rho +2,2\right) , & \left( 1,1\right); & \\ \left( \rho +3+\frac{\eta }{1-\alpha },2\right) , & \left(2,1\right) , & \left( \frac{3}{2},1\right) \end{array} \bigg|\, -\frac{\gamma^{2}x^{2}}{4\mathtt{k}\,a^{2}\left( 1-\alpha \right)^{2}}\right].
\end{array}
\end{equation}
\end{theorem}
\begin{proof}
Applying the pathway operator defined in \eqref{eqn-path-1} to \eqref{cos}, and changing the order of integration and summation, we get
\begin{align*}
P_{0+}^{\left( \eta ,\alpha \right) }\left[ t^{\rho-1}\left(1-\cos\left(\frac{\gamma t}{\sqrt{\mathtt{k}}}\right)\right)\right](x) &=\left( P_{0+}^{\left( \eta ,\alpha \right)}\left[ t^{\rho-1}\,\gamma^{2}\sqrt{\frac{\pi t}{2}}\,\mathtt{S}_{\frac{\mathtt{k}}{2},\gamma^{2}}^{\mathtt{k}}(t)\right] \right)\left( x\right)\\
&=\sqrt{\frac{\pi}{2}}\,\gamma^{2}\sum_{r=0}^{\infty}\frac{(-\gamma^{2})^{r}\left(\frac{1}{2}\right)^{2r+\frac{3}{2}}}{\Gamma_{\mathtt{k}}\left(r\mathtt{k}+2\mathtt{k}\right)\Gamma\left(r+\frac{3}{2}\right)}\,P_{0+}^{\left( \eta ,\alpha \right)}\left[t^{\rho+2r+1}\right](x).
\end{align*}
Using Lemma \ref{lemma1}, we get
\begin{align*}
&=\sqrt{\pi}\,\gamma^{2}\sum_{r=0}^{\infty}\frac{(-\gamma^{2})^{r}(\frac{1}{2})^{2r+2}}{\Gamma_{\mathtt{k}}(r\mathtt{k}+2\mathtt{k})\Gamma(r+\frac{3}{2})}\,\frac{x^{\eta+\rho+2+2r}}{[a(1-\alpha)]^{\rho+2+2r}}\,\frac{\Gamma(\rho+2r+2)\Gamma(1+\frac{\eta}{1-\alpha})}{\Gamma(1+\frac{\eta}{1-\alpha}+\rho+2+2r)}.
\end{align*}
Now using the relation $\Gamma_{\mathtt{k}}\left(\gamma\right)=\mathtt{k}^{\frac{\gamma}{\mathtt{k}}-1}\Gamma\left(\frac{\gamma}{\mathtt{k}}\right)$, we get
\begin{align*}
&=\sqrt{\pi}\,\frac{\gamma^{2}}{\mathtt{k}}\,\frac{ x^{\eta+\rho+2}}{4[a(1-\alpha)]^{\rho+2}}\,\Gamma\left(1+\frac{\eta}{1-\alpha}\right)\\
&\quad\times\sum_{r=0}^{\infty}\frac{(-\gamma^{2})^{r}(\frac{1}{2})^{2r}\,\Gamma\left(\rho+2+2r\right)}{\mathtt{k}^{r}\,\Gamma(r+2)\Gamma(r+\frac{3}{2})}\,\frac{x^{2r}}{[a(1-\alpha)]^{2r}\,\Gamma(1+\frac{\eta}{1-\alpha}+\rho+2+2r)}.
\end{align*}
In view of $(\ref{Fox-Wright})$, we arrive at the desired result.
\end{proof}
\begin{corollary}\label{Cor2} If we take $\mathtt{k}=1$ in Theorem \ref{Th2}, then we obtain the corresponding result for the classical Struve function: let $\eta, \rho, \gamma \in \mathbb{C}$ and $\alpha <1$ be such that $\Re\left( \eta \right) >0$, $\Re\left( \rho \right) >0$ and $\Re\left( \frac{\eta }{1-\alpha }\right) >-1$; then
\begin{equation}\label{eqn1-Cor2}
\begin{array}{c}
P_{0+}^{\left( \eta ,\alpha \right)}\left[t^{\rho-1}\left(1-\cos\left(\gamma t\right)\right)\right]\left( x\right) =\sqrt{\pi}\,\gamma^{2}\,\frac{x^{\eta+\rho+2}}{4[a(1-\alpha)]^{\rho+2}}\,\Gamma\left(1+\frac{\eta}{1-\alpha}\right)\\[2mm]
\times\; {}_{2}\Psi _{3}\left[ \begin{array}{ccc} \left( \rho +2,2\right) , & \left( 1,1\right); & \\ \left( \rho +3+\frac{\eta }{1-\alpha },2\right) , & \left(2,1\right) , & \left( \frac{3}{2},1\right) \end{array} \bigg|\, -\frac{\gamma^{2}x^{2}}{4\,a^{2}\left( 1-\alpha \right)^{2}}\right].
\end{array}
\end{equation}
\end{corollary}
\begin{theorem}\label{Th3} Let $\eta, \rho, \gamma \in \mathbb{C}$ and $\alpha <1$ be such that $\Re\left( \eta \right) >0$, $\Re\left( \rho \right) >0$ and $\Re\left( \frac{\eta }{1-\alpha }\right) >-1$; then the following formula holds true:
\begin{equation}\label{eqn1-th3}
\begin{array}{c}
P_{0+}^{\left( \eta ,\alpha \right) }\left[ t^{\rho-1}\left(\cosh\left(\frac{\gamma t}{\sqrt{\mathtt{k}}}\right)-1\right)\right] \left( x\right) =\sqrt{\pi}\,\frac{\gamma^{2}}{\mathtt{k}}\,\frac{ x^{\eta+\rho+2}}{4[a(1-\alpha)]^{\rho+2}}\,\Gamma\left(1+\frac{\eta}{1-\alpha}\right)\\[2mm]
\times\; {}_{2}\Psi _{3}\left[ \begin{array}{ccc} \left( \rho +2,2\right) , & \left( 1,1\right); & \\ \left( \rho +3+\frac{\eta }{1-\alpha },2\right) , & \left(2,1\right) , & \left( \frac{3}{2},1\right) \end{array} \bigg|\, \frac{\gamma^{2}x^{2}}{4\mathtt{k}\,a^{2}\left( 1-\alpha \right)^{2}}\right].
\end{array}
\end{equation}
\end{theorem}
The proof follows the same lines as that of Theorem \ref{Th2}, starting from \eqref{cosh} instead of \eqref{cos}.
\begin{corollary}\label{cor3} If we set $\mathtt{k}=1$ in Theorem \ref{Th3}, then we obtain: let $\eta, \rho, \gamma \in \mathbb{C}$ and $\alpha <1$ be such that $\Re\left( \eta \right) >0$, $\Re\left( \rho \right) >0$ and $\Re\left( \frac{\eta }{1-\alpha }\right) >-1$; then the following formula holds true:
\begin{equation}\label{eqn1-Cor3}
\begin{array}{c}
P_{0+}^{\left( \eta ,\alpha \right) }\left[ t^{\rho-1}\left(\cosh\left(\gamma t\right)-1\right)\right] \left( x\right) =\sqrt{\pi}\,\gamma^{2}\,\frac{ x^{\eta+\rho+2}}{4[a(1-\alpha)]^{\rho+2}}\,\Gamma\left(1+\frac{\eta}{1-\alpha}\right)\\[2mm]
\times\; {}_{2}\Psi _{3}\left[ \begin{array}{ccc} \left( \rho +2,2\right) , & \left( 1,1\right); & \\ \left( \rho +3+\frac{\eta }{1-\alpha },2\right) , & \left(2,1\right) , & \left( \frac{3}{2},1\right) \end{array} \bigg|\, \frac{\gamma^{2}x^{2}}{4\,a^{2}\left( 1-\alpha \right)^{2}}\right].
\end{array}
\end{equation}
\end{corollary}
\begin{theorem}\label{Th4} Let $\eta, \rho, \gamma \in \mathbb{C}$ and $\alpha <1$ be such that $\Re\left( \eta \right) >0$, $\Re\left( \rho \right) >0$ and $\Re\left( \frac{\eta }{1-\alpha }\right) >-1$; then the following formula holds true:
\begin{equation}\label{eqn1-th4}
\begin{array}{c}
P_{0+}^{\left( \eta ,\alpha \right) }\left[ t^{\rho-1}\sin\left(\frac{\gamma t}{\sqrt{\mathtt{k}}}\right)\right] \left( x\right) =\gamma\sqrt{\frac{\pi}{\mathtt{k}}}\,\frac{x^{\rho+\eta+1}}{2\left[a(1-\alpha)\right]^{\rho+1}}\,\Gamma\left(1+\frac{\eta}{1-\alpha}\right)\\[2mm]
\times\; {}_{1}\Psi _{2}\left[ \begin{array}{cc} \left( \rho +1,2\right); & \\ \left( \rho +\frac{\eta }{1-\alpha }+2,2\right) , & \left(\frac{3}{2},1\right) \end{array} \bigg|\, -\frac{\gamma^{2}x^{2}}{4\mathtt{k}\,a^{2}\left( 1-\alpha \right)^{2}}\right].
\end{array}
\end{equation}
\end{theorem}
\begin{proof}
Applying the pathway operator defined in \eqref{eqn-path-1} to \eqref{sin}, and changing the order of integration and summation, we get
\begin{align*}
P_{0+}^{\left( \eta ,\alpha \right) }\left[ t^{\rho-1}\sin\left(\frac{\gamma t}{\sqrt{\mathtt{k}}}\right)\right](x) &=\left( P_{0+}^{\left( \eta ,\alpha \right)}\left[ t^{\rho-1}\,\gamma\sqrt{\frac{\pi t}{2\mathtt{k}}}~\mathtt{S}_{-\frac{\mathtt{k}}{2},\gamma^{2}}^{\mathtt{k}}(t)\right] \right)\left( x\right)\\
&=\gamma\sqrt{\frac{\pi}{2\mathtt{k}}}\sum_{r=0}^{\infty}\frac{(-\gamma^{2})^{r}\left(\frac{1}{2}\right)^{2r+\frac{1}{2}}}{\Gamma_{\mathtt{k}}\left(r\mathtt{k}+\mathtt{k}\right)\Gamma\left(r+\frac{3}{2}\right)}\,P_{0+}^{\left( \eta ,\alpha \right)}\left[t^{(\rho+2r+1)-1}\right](x).
\end{align*}
Using Lemma \ref{lemma1} and the relation $\Gamma_{\mathtt{k}}\left(\gamma\right)=\mathtt{k}^{\frac{\gamma}{\mathtt{k}}-1}\Gamma\left(\frac{\gamma}{\mathtt{k}}\right)$, we get
\begin{align*}
&=\gamma\sqrt{\frac{\pi}{\mathtt{k}}}\,\frac{x^{\rho+\eta+1}}{2[a(1-\alpha)]^{\rho+1}}\,\Gamma\left(1+\frac{\eta}{1-\alpha}\right)\\
&\quad\times \sum_{r=0}^{\infty}\frac{(-\gamma^{2})^{r}\left(\frac{1}{2}\right)^{2r}x^{2r}\,\Gamma(\rho+1+2r)}{\Gamma\left(r+1\right)\Gamma\left(r+\frac{3}{2}\right)\Gamma\left(\rho+\frac{\eta}{1-\alpha}+2+2r\right)\mathtt{k}^{r}\,[a(1-\alpha)]^{2r}}.
\end{align*}
In view of $(\ref{Fox-Wright})$, we arrive at the desired result.
\end{proof}
\begin{corollary}\label{Cor4} If we take $\mathtt{k}=1$ in Theorem \ref{Th4}, then we obtain: let $\eta, \rho, \gamma \in \mathbb{C}$ and $\alpha <1$ be such that $\Re\left( \eta \right) >0$, $\Re\left( \rho \right) >0$ and $\Re\left( \frac{\eta }{1-\alpha }\right) >-1$; then the following formula holds true:
\begin{equation}\label{eqn1-Cor4}
\begin{array}{c}
P_{0+}^{\left( \eta ,\alpha \right) }\left[ t^{\rho-1}\sin\left(\gamma t\right)\right] \left( x\right) =\gamma\sqrt{\pi}\,\frac{x^{\rho+\eta+1}}{2\left[a(1-\alpha)\right]^{\rho+1}}\,\Gamma\left(1+\frac{\eta}{1-\alpha}\right)\\[2mm]
\times\; {}_{1}\Psi _{2}\left[ \begin{array}{cc} \left( \rho +1,2\right); & \\ \left( \rho +\frac{\eta }{1-\alpha }+2,2\right) , & \left(\frac{3}{2},1\right) \end{array} \bigg|\, -\frac{\gamma^{2}x^{2}}{4\,a^{2}\left( 1-\alpha \right)^{2}}\right].
\end{array}
\end{equation}
\end{corollary}
\begin{theorem}\label{Th5} Let $\eta, \rho, \gamma \in \mathbb{C}$ and $\alpha <1$ be such that $\Re\left( \eta \right) >0$, $\Re\left( \rho \right) >0$ and $\Re\left( \frac{\eta }{1-\alpha }\right) >-1$; then the following formula holds true:
\begin{equation}\label{eqn1-th5}
\begin{array}{c}
P_{0+}^{\left( \eta ,\alpha \right) }\left[ t^{\rho-1}\sinh\left(\frac{\gamma t}{\sqrt{\mathtt{k}}}\right)\right] \left( x\right) =\gamma\sqrt{\frac{\pi}{\mathtt{k}}}\,\frac{x^{\rho+\eta+1}}{2\left[a(1-\alpha)\right]^{\rho+1}}\,\Gamma\left(1+\frac{\eta}{1-\alpha}\right)\\[2mm]
\times\; {}_{1}\Psi _{2}\left[ \begin{array}{cc} \left( \rho +1,2\right); & \\ \left( \rho +\frac{\eta }{1-\alpha }+2,2\right) , & \left(\frac{3}{2},1\right) \end{array} \bigg|\, \frac{\gamma^{2}x^{2}}{4\mathtt{k}\,a^{2}\left( 1-\alpha \right)^{2}}\right].
\end{array}
\end{equation}
\end{theorem}
The proof follows the same lines as that of Theorem \ref{Th4}, starting from \eqref{sinh}.
\begin{corollary}\label{Cor5} If we take $\mathtt{k}=1$ in Theorem \ref{Th5}, then we obtain: let $\eta, \rho, \gamma \in \mathbb{C}$ and $\alpha <1$ be such that $\Re\left( \eta \right) >0$, $\Re\left( \rho \right) >0$ and $\Re\left( \frac{\eta }{1-\alpha }\right) >-1$; then the following formula holds true:
\begin{equation}\label{eqn1-Cor5}
\begin{array}{c}
P_{0+}^{\left( \eta ,\alpha \right) }\left[ t^{\rho-1}\sinh\left(\gamma t\right)\right] \left( x\right) =\gamma\sqrt{\pi}\,\frac{x^{\rho+\eta+1}}{2\left[a(1-\alpha)\right]^{\rho+1}}\,\Gamma\left(1+\frac{\eta}{1-\alpha}\right)\\[2mm]
\times\; {}_{1}\Psi _{2}\left[ \begin{array}{cc} \left( \rho +1,2\right); & \\ \left( \rho +\frac{\eta }{1-\alpha }+2,2\right) , & \left(\frac{3}{2},1\right) \end{array} \bigg|\, \frac{\gamma^{2}x^{2}}{4\,a^{2}\left( 1-\alpha \right)^{2}}\right].
\end{array}
\end{equation}
\end{corollary}
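As an indicative numerical check (ours, for one arbitrary parameter choice), Theorem \ref{Th2} can be validated by comparing a direct quadrature of the pathway integral \eqref{eqn-path-1} with a truncation of the ${}_{2}\Psi_{3}$ series:
\begin{verbatim}
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma

eta, alpha, rho, g, k, a, x = 1.2, 0.4, 1.7, 0.8, 2.0, 1.0, 1.5

up = x / (a * (1 - alpha))
f = lambda t: ((1 - a*(1-alpha)*t/x)**(eta/(1-alpha))
               * t**(rho-1) * (1 - np.cos(g*t/np.sqrt(k))))
lhs = x**eta * quad(f, 0.0, up)[0]

e = eta / (1 - alpha)
z = -g**2 * x**2 / (4*k*(a*(1-alpha))**2)
psi = sum(gamma(rho+2+2*r)*gamma(1+r)
          / (gamma(rho+3+e+2*r)*gamma(2+r)*gamma(1.5+r))
          * z**r / gamma(r+1) for r in range(40))
rhs = (np.sqrt(np.pi) * g**2/k * x**(eta+rho+2)
       / (4*(a*(1-alpha))**(rho+2)) * gamma(1+e) * psi)
print(lhs, rhs)  # should agree to quadrature/truncation accuracy
\end{verbatim}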
\section{Introduction} \label{intro} Since Maurice Fr\'{e}chet's pioneering work \cite{frechet1906} in the early 1900s, \textit{time-elastic} matching of time series or symbolic sequences has attracted much attention from the scientific community in numerous fields such as information indexing and retrieval, pattern analysis, extraction and recognition, data mining, etc. This approach has impacted a very wide spectrum of applications addressing socio-economic issues such as the environment, industry, health, energy, defense and so on. Among other time elastic measures, Dynamic Time Warping (DTW) was widely popularized during the 1970s with the advent of speech recognition systems \cite{VelichkoZagoruyko1970}, \cite{SakoeChiba1971}, and numerous variants have since been proposed to match time series with a certain degree of time distortion tolerance. The main issue addressed in this paper is time series or shape averaging in the context of a time elastic distance. Time series averaging or signal averaging is a long-standing issue that is currently becoming increasingly prevalent in the big data context; it is relevant for de-noising \cite{Kaiser1979}, \cite{Hassan2010}, summarizing subsets of time series \cite{Petitjean2011}, defining significant prototypes, identifying outliers \cite{Gupta2014}, performing data mining tasks (mainly exploratory data analysis such as clustering), and speeding up classification \cite{Petitjean2014}, as well as regression and other data analysis processes. In this paper, we specifically tackle the question of averaging subsets of time series, not by considering the DTW measure itself, as has already been largely done, but from the perspective of the so-called regularized DTW kernel (KDTW). From this new perspective, the estimation of a time series average or centroid can be readily addressed through a probabilistic interpretation of kernel alignment matrices, which allows a precise definition of the average of a pair of time series from the expected value of local sample alignments. The tests carried out so far demonstrate the robustness and the efficiency of this approach compared to the state-of-the-art approach. The structure of this paper is as follows: after this introductory section, the second section summarizes the most relevant related studies on time series averaging as well as on DTW kernelization. In the third section, we derive a probabilistic interpretation of kernel alignment matrices evaluated on a pair of time series by establishing a parallel with a forward-backward procedure on a stochastic alignment automaton. In the fourth section, we define the average of a pair of time series based on the alignment expectation of pairs of samples, and we propose an algorithm designed for the averaging of any subset of time series using a pairwise aggregating procedure. In the fifth section, we present three complementary experiments to assess our approach against the state of the art, and we conclude. \section{Related works} \label{sec:RelatedWorks} Time series averaging in the context of (multiple) time elastic distance alignments has been mainly addressed in the scope of the Dynamic Time Warping (DTW) measure \cite{VelichkoZagoruyko1970}, \cite{SakoeChiba1971}.
Although other time elastic distance measures, such as the Edit Distance With Real Penalty (ERP) \cite{Chen04ERP} or the Time Warp Edit Distance (TWED) \cite{Marteau09TWED}, could be considered instead, without loss of generality we remain focused throughout this paper on DTW and its kernelization. \subsection{DTW and time elastic average of a pair of time series} A classical formulation of DTW can be given as follows. If $d$ is a fixed positive integer, we define a time series of length $n$ as a multidimensional sequence $X_1^n=X_1X_2\cdots X_n$ such that, $\forall i \in \{1, \ldots, n\}$, $X_i \in \mathbb{R}^d$.\\ \begin{definition} \label{def:alignmentPath} If $X_1^n$ and $Y_1^m$ are two time series with respective lengths $n$ and $m$, an {\it alignment path} $\pi = (\pi_k)$ of length $p=|\pi|$ between $X_1^n$ and $Y_1^m$ is represented by a sequence \[ \pi : \{1, \ldots, p\} \rightarrow \{1, \ldots, n\} \times \{1, \ldots, m\} \] such that $\pi_1 = (1, 1)$, $\pi_p = (n, m)$, and, using the notation $\pi_k = (i_k, j_k)$, for all $k \in \{1, \ldots, p-1\}$, $\pi_{k+1} = (i_{k+1}, j_{k+1}) \in \{(i_k + 1, j_k),\linebreak[1](i_k, j_k + 1),\linebreak[1](i_k + 1, j_k + 1) \}$.\\ For all $k$, we define $\pi_{k}(1)=i_k$ and $\pi_{k}(2)=j_k$ as the index access functions, at step $k$, of the mapped elements in the pair of aligned time series.\\ \end{definition} In other words, a warping path defines a way to travel along both time series simultaneously from beginning to end; it cannot skip a point, but it can advance one time step along one series without advancing along the other, thereby justifying the term \textit{time-warping}. If $\delta$ is a distance on $\mathbb{R}^d$, the global {\it cost} of a warping path $\pi$ is the sum of distances (or squared distances or local costs) between pairwise elements of the two time series along $\pi$, i.e.: \[ \text{cost}(\pi) = \sum_{(i_k,j_k) \in \pi} \delta(X_{i_k}, Y_{j_k}) \] A common choice of distance on $\mathbb{R}^d$ is the one generated by the $L^2$ norm. \begin{definition} \label{dtw} For a pair of finite time series $X$ and $Y$, any warping path has a finite length, and thus the number of existing warping paths is finite. Hence, there exists at least one path $\pi^*$ whose cost is minimal, so we can define $\text{DTW}(X, Y)$ as the minimal cost taken over all existing warping paths. Hence \begin{align} \label{eq:dtw} \text{DTW}(X_1^n, Y_1^m) &= \underset{\pi}{\min} \text{ cost}(\pi(X_1^n, Y_1^m))\nonumber \\ &=\text{cost}(\pi^*(X_1^n, Y_1^m)). \end{align} \end{definition} \begin{definition} \label{pairwiseTSaverage} Based on the DTW measure, \cite{Gupta1996} defined the time elastic average $a(X,Y)$ of a pair of time series $X_1^n$ and $Y_1^m$ as the time series $A_1^{|\pi^*|}$ whose elements are $A_k=\textsl{mean}(X_{\pi^*_k(1)}, Y_{\pi^*_k(2)})$, $\forall k \in \{1, \cdots, |\pi^*|\}$, where \textsl{mean} corresponds to the definition of the mean in Euclidean space.\\ \end{definition} \begin{figure*}[!ht] \subfloat[Pairwise average (top) and progressive agglomeration (bottom)\label{fig:hac-iter-a}]{% \fbox{\includegraphics[scale=.37]{DBA-progr.png}}\\ } \hfill \subfloat[Iterative agglomeration with refinement\label{fig:hac-iter-b}]{% \fbox{\includegraphics[scale=.37]{iter.png}} } \caption{Pairwise averaging (top left) and progressive hierarchical (most-similar-first) agglomeration (bottom left) vs. iterative agglomeration (right) strategies. Final centroid approximations are shown in bold red.
Temporary estimates are shown using a bold dotted black line.} \label{fig:hac-iter} \end{figure*} \subsection{Time elastic centroid of a set of time series} A single alignment path is required to calculate the time elastic centroid of a pair of time series (Def. \ref{pairwiseTSaverage}). However, multiple path alignments need to be considered to evaluate the centroid of a larger set of time series. Multiple alignments have been widely studied in bioinformatics \cite{Fasman1998}, and it has been shown that determining the optimal alignment of a set of sequences under the sum of all pairs (SP) score scheme is an NP-complete problem \cite{WangJ1994} \cite{Just99}. The time and space complexity of this problem is $O(L^k)$, where $k$ is the number of sequences in the set and $L$ is the length of the sequences, when using dynamic programming to search for an optimal solution \cite{Carrillo1988}. This latter result applies to the estimation of the time elastic centroid of a set of $k$ time series with respect to the DTW measure. Since the search for an optimal solution becomes rapidly intractable with increasing $k$, sub-optimal heuristic solutions have subsequently been proposed, most of them falling into one of the following three categories. \subsubsection{Progressive heuristics} Progressive heuristic methods estimate the time elastic centroid of a set of $k$ time series by combining pairwise centroids (Def. \ref{pairwiseTSaverage}). This kind of approach constructs a binary tree whose leaves correspond to the time series of the data set, and whose nodes correspond to the calculation of a local pairwise centroid, such that, when the tree is complete, the root is associated with the estimated data set centroid. The proposed strategies differ in the way the tree is constructed. One popular approach consists of providing a random order for the leaves, and then constructing the binary tree up to the root using this ordering \cite{Gupta1996}. Another approach involves constructing a dendrogram (a hierarchical ascendant clustering) from the data set and then using this dendrogram to calculate pairwise centroids, starting with the closest pairs of time series and progressively aggregating series that are farther away \cite{Niennattrakul2009}, as illustrated on the left of Figure \ref{fig:hac-iter}. Note that these heuristic methods are entirely based on the calculation of a pairwise centroid, so they do not explicitly require the evaluation of a DTW centroid for more than two time series. Their complexity varies linearly with the number of time series in the data set. \subsubsection{Iterative heuristics} Iterative heuristics are based on an iterated three-step process. For a given temporary centroid candidate, the first step consists of calculating the inertia, i.e. the sum of the DTW distances between the temporary centroid and each time series in the data set. The second step (Figure \ref{fig:hac-iter-a}, top) evaluates the best pairwise alignment with the temporary centroid $c(i)$, of length $L$, for each time series $u_j(i)$ in the data set ($j \in \{1 \cdots n\}$), where $i$ is the timestamp. A new time series of length $L$, $u'_j(i)$, is thus constructed that contains the contributions of all the samples of time series $u_j(i)$, but with time being possibly stretched (duplicated samples) or compressed (average of successive samples) according to the best alignment path, as exemplified in Figure \ref{fig:hac-iter-a}, top left side.
The third step consists of producing a new temporary centroid candidate $c(i)$ from the set $\{u'_j(i)\}$ by successively averaging (in the sense of the Euclidean centroid) the samples at every timestamp $i$ of the $u'_j(i)$ time series. Basically, we have $c(i) = \frac{1}{n}\sum_{j=1}^{n} u'_j(i)$. Then, the new centroid candidate replaces the previous one and the process is iterated until the inertia is no longer reduced or the maximum number of iterations is reached. Generally, the first temporary centroid candidate is taken as the DTW medoid of the considered data set. This process is illustrated in Figure \ref{fig:hac-iter-b}. The three steps of this heuristic method were first proposed in \cite{Abdulla2003}. The iterative aspect of this heuristic approach was initially introduced by \cite{Hautamaki2008} and refined by \cite{Petitjean2011}, who introduced the DTW Barycenter Averaging (DBA) algorithm. Note that, in contrast to the progressive method, this kind of approach needs to evaluate, at each iteration, all the alignments with the current centroid candidate. The complexity of the iterative approach is thus higher than that of the progressive approach, the extra computational cost being linear in the number of iterations. More sophisticated approaches have been proposed to escape some local minima. For instance, \cite{Petitjean2012} evaluated a genetic algorithm that manages a population of centroid candidates, improving with some success on the straightforward iterative heuristics.
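To make this three-step scheme concrete, the following sketch (our illustrative transcription of a DBA-style iteration for scalar time series, not the authors' reference implementation) alternates DTW alignments on the current candidate with a per-timestamp Euclidean mean:
\begin{verbatim}
import numpy as np

def dtw_path(x, y):
    """Dynamic programming for the DTW recursion; returns one
    optimal warping path as a list of index pairs (i, j)."""
    n, m = len(x), len(y)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            D[i, j] = ((x[i-1] - y[j-1])**2
                       + min(D[i-1, j], D[i-1, j-1], D[i, j-1]))
    path, (i, j) = [], (n, m)
    while (i, j) != (1, 1):
        path.append((i - 1, j - 1))
        moves = [(i - 1, j), (i - 1, j - 1), (i, j - 1)]
        i, j = min(moves, key=lambda ij: D[ij])   # best predecessor
    path.append((0, 0))
    return path[::-1]

def dba(dataset, iters=10):
    """Iterative averaging: align every series on the current
    candidate, then average the samples mapped onto each of its
    timestamps (Euclidean mean)."""
    center = dataset[0].copy()   # in practice, start from the medoid
    for _ in range(iters):
        sums = np.zeros(len(center))
        counts = np.zeros(len(center))
        for series in dataset:
            for i, j in dtw_path(center, series):
                sums[i] += series[j]
                counts[i] += 1
        center = sums / counts
    return center
\end{verbatim}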
\\ \color{black} In \cite{SoheilyKhah2016}, a non-convex constrained optimization problem is derived, by integrating a temporal weighting of local sample alignments to highlight the temporal region of interest in a time series data set, thus penalizing the other temporal regions. Although the number of parameters to optimize is linear with the size and the dimensionality of the time series, the two steps gradient-based optimization process they derived is very computationally efficient and shown to outperform the state of the art approaches on some challenging scalar and multivariate data sets. However, as numerous local \textit{optima} exist in practice, the method is not guaranteed to converge towards the best possible centroid, which is anyway the case in all other approaches. Furthermore, their approach, due to combinatorial explosion, cannot be adapted for time elastic kernels like the one addressed in this paper and described in section \ref{sec:DTW}. \subsection{Discussion and motivation} According to the state of the art in time elastic centroid estimation, an exact centroid, if it exists, can be calculated by solving a NP-complete problem whose complexity is exponential with the number of time series to be averaged. Heuristic methods with increasing time complexity have been proposed since the early 2000s. Simple pairwise progressive aggregation is a less complex approach, but which suffers from its dependence on initial conditions. Iterative aggregation is reputed to be more efficient, but entails a higher computational cost. It could be combined with ensemble methods or soft optimization such as genetic algorithms. The non-convex optimization approach has the merit of directly addressing the mathematical formulation of the centroid problem in a time elastic distance context. This approach nevertheless involves a higher complexity and must deal with a relatively large set of parameters to be optimized (the weights and the sample of the centroid). Its scalability could be questioned, specifically for high dimensional multivariate time series. It should also be mentioned that some criticism of these heuristic methods has been made in \cite{Niennattrakul2007}. Among other drawbacks, the fact that DTW is not a metric could explain the occurrence of unwanted behavior such as centroid drift outside the time series cluster to be averaged. We should also bear in mind that keeping a single best alignment can increase the dependence of the solution on the initial conditions. It may also increase the aggregating order of the time series proposed by the chosen method, or potentially enhance the convergence rate. In this study, we do not directly address the issue of time elastic centroid estimation from the DTW perspective, but rather from the point of view of the regularized dynamic time warping kernel (KDTW) \cite{MarteauGibet2014}. Although this perspective allows us to consider centroid estimation as a preimage problem, which is in itself another optimization perspective, we rather show that the KDTW alignment matrices computation can be described as the result of applying a forward-backward algorithm on a stochastic alignment automata. This probabilistic interpretation of the pairwise alignment of time series leads us to propose a robust averaging scheme for any set of time series that interpolate jointly along the time axis and in the sample space. Furthermore, this scheme significantly outperforms the current state of the art method, as shown by our experiments. 
\subsection{Time elastic kernels and their regularization} \label{sec:DTW} The \textbf{Dynamic Time Warping} (DTW) distance between two time series $X_1^p=X_1X_2 \cdots X_p$ and $Y_1^q = Y_1Y_2 \cdots Y_q$ of lengths $p$ and $q$ respectively \cite{VelichkoZagoruyko1970}, \cite{SakoeChiba1971}, as defined in equation (\ref{eq:dtw}), can be recursively evaluated as
\begin{eqnarray} \label{Eq.dtw2} d_{dtw}(X_1^p, Y_1^q)= \hspace{5cm} \nonumber \\ d_{E}^{2}(X_p, Y_q) + \text{Min} \left\{ \begin{array}{ll} d_{dtw}(X_1^{p-1}, Y_1^q) \\ d_{dtw}(X_1^{p-1}, Y_1^{q-1}) \\ d_{dtw}(X_1^p, Y_1^{q-1}) \\ \end{array} \right. \end{eqnarray}
where $d_{E}(X_p,Y_q)$ is the Euclidean distance defined on $\mathbb{R}^d$ between the two positions in sequences $X_1^p$ and $Y_1^q$ taken at times $p$ and $q$, respectively. Besides the fact that the triangle inequality does not hold for the DTW measure, it is not possible to define a positive definite kernel directly from this distance. Hence, the optimization problem inherent to the learning of a kernel machine is no longer convex, and could be a source of limitations due to the emergence of local minima.\\ \textbf{Regularized DTW}: seminal work by \cite{CuturiVert2007}, recently extended by \cite{MarteauGibet2014}, has led to new guidelines ensuring that kernels constructed from elastic measures such as DTW are positive definite. A simple instance of such a regularized kernel, derived from \cite{MarteauGibet2014}, can be expressed as a convolution kernel which makes use of two recursive terms:
\resizebox{.95\linewidth}{!}{ \begin{minipage}{\linewidth} \begin{align} \label{Eq.KDTW} \begin{array}{ll} \textsc{KDTW} (X_1^p, Y_1^q)=K_{dtw}(X_1^p, Y_1^q)+K'_{dtw}(X_1^p, Y_1^q) \\ \\ K_{dtw}(X_1^p, Y_1^q) = \\ \begin{array}{ll} \hspace{2mm} \frac{1}{3}e^{-\nu d_{E}^{2}(X_p, Y_q)} \cdot \Big(h(p-1,q)K_{dtw}(X_1^{p-1}, Y_1^q)\\ \hspace{20mm} + h(p-1,q-1) K_{dtw}(X_1^{p-1}, Y_1^{q-1}) \\ \hspace{20mm} + h(p,q-1)K_{dtw}(X_1^p, Y_1^{q-1})\Big) \\ \end{array}\\ {}\\ K'_{dtw}(X_1^p, Y_1^q) = \\ \begin{array}{ll} \hspace{2mm} \frac{1}{3} \cdot \Big(h(p-1,q) K'_{dtw}(X_1^{p-1}, Y_1^q)e^{-\nu d_{E}^{2}(X_p, Y_p)} \\ \hspace{3mm} + \Delta_{p,q} h(p-1,q-1)K'_{dtw}(X_1^{p-1}, Y_1^{q-1})e^{-\nu d_{E}^{2}(X_p, Y_q)}\\ \hspace{3mm} + h(p,q-1)K'_{dtw}(X_1^p, Y_1^{q-1})e^{-\nu d_{E}^{2}(X_q, Y_q)}\Big) \\ \end{array} \end{array} \end{align} \end{minipage} }
where $\Delta_{p,q}$ is the Kronecker symbol, $\nu \in \mathbb{R}^{+}$ is a \textit{stiffness} parameter which weights the local contributions, i.e. the distances between locally aligned positions, $d_E(.,.)$ is a distance defined on $\mathbb{R}^{d}$, and $h$ is a symmetric binary non negative function, usually in $\{0,1\}$, used to define a symmetric corridor around the main diagonal that limits the "time elasticity" of the kernel. For the remainder of the paper we will not consider any corridor; hence $h(.,.)=1$ everywhere. The initialization is simply $K_{dtw}(X_1^0, Y_1^0) = K'_{dtw} (X_1^0, Y_1^0) = 1$.\\ The main idea behind this regularization is to replace the operators $\min$ and $\max$ (which prevent the construction of a positive definite kernel) by a summation operator. This allows us to consider not only the best possible alignment, but also all the best (or nearly the best) paths, by summing their overall costs. The parameter $\nu$ is used to control what counts as a nearly-the-best alignment, thus penalizing alignments that are too far away from the optimal ones.
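The recursion above is straightforward to implement. The following minimal sketch (our own illustrative Python; the function name \texttt{kdtw} is ours) computes $\textsc{KDTW}$ for scalar series of equal length, with no corridor ($h=1$) and border cells other than $(0,0)$ initialized to zero; the equal-length assumption is used because the $K'$ term indexes both series at the same positions.

\begin{verbatim}
import numpy as np

def kdtw(x, y, nu=1.0):
    # KDTW = K_dtw + K'_dtw, following the recursion above
    # (scalar series, equal lengths, h = 1).
    x, y = np.asarray(x, float), np.asarray(y, float)
    n = len(x)
    assert len(y) == n, "this sketch assumes equal lengths"
    # local kernel values k[i, j] = exp(-nu * d_E^2(x_i, y_j))
    k = np.exp(-nu * np.subtract.outer(x, y) ** 2)
    K = np.zeros((n + 1, n + 1)); K[0, 0] = 1.0    # K_dtw
    Kp = np.zeros((n + 1, n + 1)); Kp[0, 0] = 1.0  # K'_dtw
    for p in range(1, n + 1):
        for q in range(1, n + 1):
            K[p, q] = k[p - 1, q - 1] / 3.0 * (
                K[p - 1, q] + K[p - 1, q - 1] + K[p, q - 1])
            Kp[p, q] = (Kp[p - 1, q] * k[p - 1, p - 1]       # d(X_p, Y_p)
                + (p == q) * Kp[p - 1, q - 1] * k[p - 1, q - 1]
                + Kp[p, q - 1] * k[q - 1, q - 1]) / 3.0      # d(X_q, Y_q)
    return K[n, n] + Kp[n, n]
\end{verbatim}

For unequal lengths or multivariate samples, the indexing of the $K'$ term must be handled with more care; see \cite{MarteauGibet2014}.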
The parameter $\nu$ can easily be optimized through cross-validation. \\ For each alignment path, KDTW evaluates the product of the local alignment costs $e^{-\nu d_{E}^{2}(X_p, Y_q)} \le 1$ occurring along the path. This product can be very small depending on the size of the time series and the selected value for $\nu$. This is the source of a diagonal dominance problem in the Gram matrix. But, above all, it requires balancing the choice of the $\nu$ value according to the lengths of the matched time series. This is the main (and probably the only) limitation of the KDTW kernel: the selectivity or bandwidth of the local alignment kernels needs to be adjusted according to the lengths of the matched time series. \section{Stochastic alignment process} To introduce a probabilistic paradigm for the time elastic averaging of time series, we first consider the pairwise alignment process as the output of a stochastic automaton. The stochastic alignment process that we propose finds its roots in the forward-backward algorithm defined for the learning of Hidden Markov Models (HMM) \cite{Rabiner89}, in the parallel between HMM and DTW that is proposed in \cite{Juang85} and \cite{NakagawaNakanishi1989}, and, in a more distant way, in \cite{Chudova2002}. However, we differ from these founding works (and others) in the following ways: \begin{enumerate} \item we do not construct a parallel with DTW, but with its kernelized variant KDTW; \item \cite{NakagawaNakanishi1989} only consider an optimal alignment path (exploiting the Viterbi algorithm), while we consider the whole set of possible alignments (as in \cite{Juang85}); \item \cite{Juang85} construct an asymmetric classical left-right HMM (one time series plays the role of the observation sequence, while the other plays the role of the state sequence). With a similar idea, \cite{Chudova2002} propose a generative mixture model along a discrete time grid axis with local and global time warp capability. We construct instead an alignment process that sticks to the DTW recursive definition without any other hypothesis on the structure of the automaton, and for which the two aligned time series jointly play the role of the observation sequence, while the set of states corresponds to the set of all possible sample pair alignments. \end{enumerate} \subsection{Pairwise alignment of time series as a Markov model} Let $o_1^n = o_1 o_2 \cdots o_n$ and ${o'}_1^{n'} = o'_1 o'_2 \cdots o'_{n'}$ be two discrete time series (observations) of lengths $n$ and $n'$ respectively. To align these two time series, we define a stochastic alignment automaton as follows. First we consider the set of state variables $\mathcal{S} =\{ S_{1,1},S_{1,2}, \cdots, S_{n,n'}\}$. Each $S_{i,j}$ characterizes the alignment between observed samples $o_i$ and $o'_j$. The posterior probability of any state variable $S_{i,j}$, given the sequences of observations $o_1^n$ and ${o'}_1^{n'}$, is $P(S_{i,j}|o_1^n;{o'}_1^{n'})$. The transition probabilities between states are driven by a tensor $\mathbf{A}=[a_{ij;kl}]$, where $a_{ij;kl}=P(S_{k,l}|S_{i,j})$, $\forall (k,l)$ and $(i,j) \in \{1 \cdots n\} \times \{1 \cdots n'\}$. $\mathbf{A}$ can be defined according to the standard DTW definition, namely \begin{equation} \label{Atensor} a_{ij;kl} = \left\{ \begin{array}{ll} \frac{1}{3} \textsc{ if } \left\{ \begin{array}{ll} (k=i \textsc{ and } l=j+1) \\ \textsc{ or } (k=i+1 \textsc{ and } l=j+1) \\ \textsc{ or } (k=i+1 \textsc{ and } l=j)\\ \end{array} \right. \\ 0 \textsc{ otherwise.}\\ \end{array} \right.
\end{equation} The $1/3$ factor ensures that the transition matrix equivalent to $\mathbf{A}$ is stochastic, basically \begin{equation} \label{eq:stochastic} \forall i,j \textsc{ }\sum_{kl} a_{ij;kl} = 1\\ \end{equation} Notice that any tensor $\mathbf{A}$ satisfying equation (\ref{eq:stochastic}) could be considered at this level instead of the previous DTW surrogate tensor.\\ Furthermore, each state is observable through so-called emission probabilities, which are defined by a set of functions $b_{ij}(x,y)$, where $b_{ij}(x,y)=P(x,y | S_{i,j})$, $\forall (x,y) \in \mathbb{R}^d \times \mathbb{R}^d$ and $(i,j) \in \{1 \cdots n\} \times \{1 \cdots n'\}$. The $b_{ij}$ functions are normalized such that $\iint_{x,y} b_{ij}(x,y) \,dx\,dy=1$. \\ Here we differ from the classical HMM in two respects. The first difference lies in the nature of the observation sequence itself: unlike for an HMM, our observation consists of a pair of subsequences that are not necessarily traversed synchronously, but according to the structure of the transition tensor $\mathbf{A}$. For instance, given the DTW tensor described by equation (\ref{Atensor}), from a current state associated to the alignment $(o_u,o'_v)$, three possible alignments can be reached at the next transition: $(o_{u+1},o'_v)$, $(o_u,o'_{v+1})$ or $(o_{u+1},o'_{v+1})$.\\ The second difference with the classical HMM is that the emission probabilities are independent of the state, such that $\forall i,j$, $b_{i,j}(x,y)=b(x,y)$. We use a local (density) kernel to estimate these probabilities as follows \begin{equation} \label{Btensor} b(x,y) = \kappa(x,y) = \gamma e^{-\nu d_{E}^{2}(x,y)} \end{equation} where $\gamma$ is the normalization coefficient. Consequently, given the two observation sequences ${o}_1^{n}$ and ${o'}_1^{n'}$, we define the emission probability matrix $\mathbf{B}=[b_{kl}]$, with $b_{kl}=b(o_k,o'_l)=\gamma e^{-\nu d_{E}^{2}(o_k,o'_l)}$, for $k \in \{1,\cdots,n\}$ and $l \in \{1,\cdots,n'\}$.\\ Finally, let $\mathbf{u}$ be the initial state probability vector defined by $\mathbf{u}_{ij}=1$ if $i=j=1$ and $0$ otherwise, $\forall (i,j) \in \{1 \cdots n\} \times \{1 \cdots n'\}$. \\ Thereby, the stochastic alignment automaton is fully specified by the triplet $\theta=(\mathbf{A}, \mathbf{B}, \mathbf{u})$, where $\mathbf{A}$ only depends on the lengths $n$ and $n'$ of the observations, and $\mathbf{B}$ depends on the complete pair of observations $o_1^n$ and ${o'}_1^{n'}$. \subsection{Forward-backward alignment algorithm} We derive the forward-backward alignment algorithm for our stochastic alignment automaton from its classical derivation for Hidden Markov Models \cite{Rabiner89}.
For all $S \in \mathcal{S}$, the posterior probability $P(S|o_1^n,{o'}_1^{n'}, \theta)$ is decomposed into forward/backward recursions as follows: \begin{equation} \label{FB1} \begin{array}{ll} P(S|o_1^n,{o'}_1^{n'},\theta) &= \frac{P(o_1^n,{o'}_1^{n'}, S |\theta)}{P(o_1^n,{o'}_1^{n'}|\theta)} \\ &= \frac{P(o_1^t,o_t^n,{o'}_1^{t'},{o'}_{t'}^{n'},S|\theta)}{P(o_1^n,{o'}_1^{n'}|\theta)}\\ &= \frac{P(o_t^n,{o'}_{t'}^{n'}|S,\theta)P(S,o_1^t,{o'}_{1}^{t'}|\theta)}{P(o_1^n,{o'}_1^{n'}|\theta)}\\ \end{array} \end{equation} The last equality results from the application of the Bayes rule and the conditional independence of $o_t^n,{o'}_{t'}^{n'}$ and $o_1^t,{o'}_{1}^{t'}$ given $S$, $\theta$.\\ Let $\alpha_{t,t'}=P(o_1^t,{o'}_{1}^{t'},S_{t,t'}|\theta)$ be the probability of the alignment of the pair of partial observation sequences $(o_1^t, {o'}_{1}^{t'})$ produced by all possible state sequences that end at state $S_{t,t'}$. $\alpha_{t,t'}$ can be recursively evaluated as the forward procedure \begin{equation} \label{Forward} \left\{ \begin{array}{ll} \alpha_{1,1}=u_{11}b_{11}\\ \alpha_{t,t'}=b_{tt'}\sum\limits_{u,v\in \mathcal{F}_{t,t'}}\alpha_{u,v}a_{uv;tt'} \\ \end{array} \right. \end{equation} where $\mathcal{F}_{t,t'}$ is the subset of states from which the state $S_{t,t'}$ can be reached in a single transition. For the DTW tensor $\mathbf{A}$ (Eq. \ref{Atensor}), we have $\mathcal{F}_{t,t'}=\{S_{t-1,t'}, S_{t,t'-1}, S_{t-1,t'-1}\}$.\\ Notice that in this case $\alpha_{n,n'}=K_{dtw}(o_1^n, {o'}_{1}^{n'})$. \\ Similarly, let $\beta_{t,t'}=P(o_t^n,{o'}_{t'}^{n'}|S_{t,t'},\theta)$ be the probability of the alignment of the pair of partial sequences $(o_t^n, {o'}_{t'}^{n'})$ given the starting state $S_{t,t'}$. $\beta_{t,t'}$ can be recursively evaluated as the backward procedure \begin{equation} \label{Backward} \left\{ \begin{array}{ll} \beta_{n,n'}=1\\ \beta_{t,t'}=\sum\limits_{u,v\in \mathcal{B}_{t,t'}}\beta_{u,v}a_{tt';uv}b_{tt'} \\ \end{array} \right. \end{equation} where $\mathcal{B}_{t,t'}$ is the subset of states that can be reached from the state $S_{t,t'}$ in a single transition. For the DTW tensor $\mathbf{A}$ (Eq. \ref{Atensor}), we have $\mathcal{B}_{t,t'}=\{S_{t+1,t'}, S_{t,t'+1}, S_{t+1,t'+1}\}$.\\ Hence from Eq. \ref{FB1}, we get \begin{equation} \label{FB2} P(S_{t,t'}|o_1^n,{o'}_1^{n'},\theta) = \frac{\alpha_{t,t'}\beta_{t,t'}}{P(o_1^n,{o'}_1^{n'}|\theta)} \end{equation} Note that not every tensor $\mathbf{A}$ satisfying equation (\ref{eq:stochastic}) is eligible: for the $\alpha_{t,t'}$ and $\beta_{t,t'}$ recursions to be computable, one has to impose \textit{linearity}. Basically, $\alpha_{t,t'}$ cannot depend on any $\alpha_{u,v}$ that has not been previously evaluated. The constraint we need to impose is that the time stamps are locally increasing, i.e. if $\alpha_{t,t'}$ depends on any $\alpha_{u,v}$, then necessarily $[(u<t $ and $v\leq t')$ or $(u\leq t $ and $ v < t')]$. A symmetric constraint applies to the $\beta_{t,t'}$ recursion.\\ \begin{figure}[h!] \centering \begin{tabular}{ccc} \includegraphics[scale=.6, angle=0]{SinTest.png} & \end{tabular} \caption{Forward-backward matrix (logarithmic values) for the alignment of a positive half-wave with a sine wave. Dark red represents high probability states, while dark blue represents low probability states.} \label{fig:SinTest} \end{figure} As an example, Figure \ref{fig:SinTest} presents the forward-backward ($FB$) matrix ($FB(t,t')=P(S_{t,t'}|o_1^n,{o'}_1^{n'},\theta)$) corresponding to the alignment of a positive half-wave with a sine wave.
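In practice, the $\alpha$ and $\beta$ recursions and the resulting forward-backward matrix take only a few lines of code. The sketch below (our own illustrative Python, for scalar series; for simplicity the emission normalization $\gamma$ is set to $1$) follows the forward and backward recursions above:

\begin{verbatim}
import numpy as np

def forward_backward(o, o2, nu=1.0):
    # alpha/beta recursions and the normalized FB matrix,
    # for scalar series, under the DTW transition tensor (a = 1/3).
    n, m = len(o), len(o2)
    b = np.exp(-nu * np.subtract.outer(np.asarray(o, float),
                                       np.asarray(o2, float)) ** 2)
    a = 1.0 / 3.0
    alpha = np.zeros((n, m))
    alpha[0, 0] = b[0, 0]                 # u_11 * b_11
    for t in range(n):
        for u in range(m):
            if t == 0 and u == 0:
                continue
            s = 0.0
            if t > 0:            s += alpha[t - 1, u]
            if u > 0:            s += alpha[t, u - 1]
            if t > 0 and u > 0:  s += alpha[t - 1, u - 1]
            alpha[t, u] = b[t, u] * a * s
    beta = np.zeros((n, m))
    beta[n - 1, m - 1] = 1.0
    for t in range(n - 1, -1, -1):
        for u in range(m - 1, -1, -1):
            if t == n - 1 and u == m - 1:
                continue
            s = 0.0
            if t < n - 1:                s += beta[t + 1, u]
            if u < m - 1:                s += beta[t, u + 1]
            if t < n - 1 and u < m - 1:  s += beta[t + 1, u + 1]
            beta[t, u] = b[t, u] * a * s
    fb = alpha * beta
    return fb / fb[n - 1, m - 1]   # divide by P(o, o' | theta) = alpha_{n,n'}
\end{verbatim}

Applied to a positive half-wave and a sine wave, \texttt{forward\_backward} produces the kind of matrix displayed in Figure \ref{fig:SinTest} (up to the log scaling).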
In this matrix, the three areas of likely alignment paths are clearly identified in dark red. \\ \subsection{Parallel with KDTW} A direct parallel exists between KDTW and the previous Markov process. It follows from the forward equation (Eq. \ref{Forward}) that \begin{align} \label{KDTW_HMM} K_{dtw}(X_1^k, Y_1^l)=\sum_{i,j} a_{ij;kl} b_{kl}K_{dtw}(X_1^i, Y_1^j) \nonumber\\ =\kappa(X_k,Y_l) \sum_{i,j} a_{ij;kl} K_{dtw}(X_1^i, Y_1^j) \end{align} where $\mathbf{A}=[a_{ij;kl}]$ is defined in equation (\ref{Atensor}), and $\mathbf{B}=[b_{kl}]$, defined in equation (\ref{Btensor}), is such that $b_{kl}=e^{-\nu d_{E}^{2}(X_k,Y_l)}$. Hence, the $K_{dtw}$ recursion coincides exactly with the forward recursion (Eq. \ref{Forward}). Similarly, we can assimilate the backward recursion (Eq. \ref{Backward}) to the $K_{dtw}$ evaluation of the pair of time series obtained by inverting $X$ and $Y$ along the time axis. Hence, the forward-backward matrix elements (Eq. \ref{FB2}) can be directly expressed in terms of $K_{dtw}$ recursions. Furthermore, the corridor function $h()$ that occurs in the $K_{dtw}$ recursion (Eq. \ref{Eq.KDTW}) directly modifies the structure of the transition tensor $\mathbf{A}$ by setting $a_{ij;kl}=0$ whenever $h(i,j)=0$ or $h(k,l)=0$. Neighboring states may also be affected by the normalization required to keep $\mathbf{A}$ stochastic. \subsection{Time elastic centroid estimate of a set of time series} Let us introduce the marginal probability of the subset $S_{t,\bullet}=\{S_{t,1}, S_{t,2}, \cdots, S_{t,n'}\}$ given the observations $o$ and $o'$, namely the probability that sample $o_t$ is aligned with one of the samples of ${o'}_{1}^{n'}$ \begin{equation} P(S_{t,\bullet}|o_1^n,{o'}_{1}^{n'},\theta)=\sum_{t'}P(S_{t,t'}|o_1^n,{o'}_{1}^{n'},\theta) \end{equation} and let us consider, for all $t$ and $t'$, the conditional probability of state $S_{t,t'}$ given the two observation sequences, the parameter $\theta$ and $S_{t,\bullet}$, namely the probability that $o_t$ and $o'_{t'}$ are aligned given the knowledge that $o_t$ is aligned with one of the samples of $o'$: \begin{equation} \label{CondProp1} \begin{array}{ll} P(S_{t,t'}|o_1^n,{o'}_{1}^{n'}, S_{t,\bullet}, \theta) = \\ \hspace{5mm}P(S_{t,t'}|o_1^n,{o'}_{1}^{n'},\theta)/P(S_{t,\bullet}|o_1^n,{o'}_1^{n'},\theta) \end{array} \end{equation} The previous equality is easily established because $P(S_{t,t'}, S_{t,\bullet}|o_1^n,{o'}_{1}^{n'}, \theta) = P(S_{t,t'}|o_1^n,{o'}_{1}^{n'}, \theta)$.\\ Note that for estimating $P(S_{t,t'}|o_1^n,{o'}_{1}^{n'}, S_{t,\bullet}, \theta)$ we only need to evaluate the forward ($\alpha_{t,t'}$) and backward ($\beta_{t,t'}$) recursions, since $P(o_1^n,{o'}_1^{n'}|\theta)$, the denominator term in Eq. \ref{FB2}, cancels out in the ratio.\\ We can then define the expectation of the samples of ${o'}_{1}^{n'}$ that are aligned with sample $o_t$ (given that $o_t$ is aligned), as well as the expectation of the time of occurrence of the samples of ${o'}_{1}^{n'}$ that are aligned with $o_t$, as follows: \begin{equation} \label{Expectation1} \begin{array}{ll} E(o'|o_t)=\frac{1}{n'}\sum\limits_{t'=1}^{n'} {o'}_{t'} P(S_{t,t'}|o_1^n,{o'}_1^{n'},S_{t,\bullet},\theta)\\ E(t'|o_t)=\frac{1}{n'}\sum\limits_{t'=1}^{n'} {t'} P(S_{t,t'}|o_1^n,{o'}_1^{n'},S_{t,\bullet},\theta)\\ \end{array} \end{equation} \begin{figure}[h] \centering \begin{tabular}{ccc} \includegraphics[scale=.45, angle=0]{CBF_Centroids.png} & \end{tabular} \caption{Centroids obtained for the CBF data set.
For the three shapes, the expected start (24) and end (88) time stamps (hence the expected shape duration of 64 frames) are correctly recovered.} \label{fig:CBF_Cent} \end{figure} The expectation equations (Eq. \ref{Expectation1}) are at the basis of our procedure for averaging a set of time series. Let $O=\{{}^k\! o_1^{n_k}\}_{k=1 \cdots N}$ be a set of time series and $r_1^{n}$ a reference time series ($r_1^{n}$ can initially be set to the medoid of set $O$). The centroid estimate of $O$ is defined as the pair $(c, \tau)$, where $c$ is a time series of length $n$ and $\tau$ is the sequence of time stamps associated to the samples of $c$ \begin{equation} \label{eq:TEKA} \begin{array}{ll} c(t)=\frac{1}{N} \sum\limits_{k=1}^N E({}^k\! o|r_t)\\ \hspace{7mm} =\frac{1}{N} \sum\limits_{k=1}^N \frac{1}{n_k} \sum\limits_{{}^k\! t=1}^{n_k} {}^k\! o_{{}^k\! t} \, P(S_{t,{}^k\! t}|r_1^{n},{}^k\! o_1^{n_k}, S_{t,\bullet},\theta)\\ \tau(t)=\frac{1}{N} \sum\limits_{k=1}^N E({}^k\! t|r_t)\\ \hspace{7mm} =\frac{1}{N} \sum\limits_{k=1}^N \frac{1}{n_k} \sum\limits_{{}^k\! t=1}^{n_k} {}^k\! t \, P(S_{t,{}^k\! t}|r_1^{n},{}^k\! o_1^{n_k}, S_{t,\bullet},\theta)\\ \end{array} \end{equation} Obviously, $(c,\tau)$ is a non-uniformly sampled time series for which $\tau(t)$ is the time stamp associated to observation $c(t)$; $\tau(t)$ can be understood as the expected time of occurrence of the expected observation $c(t)$. A uniform re-sampling can straightforwardly be used to get back to a uniformly sampled time series.\\ The proposed iterative agglomerative algorithm (cf. Fig. \ref{fig:hac-iter}-b), called TEKA (Time Elastic Kernel Averaging), which refines the centroid estimate at each iteration until reaching a (local) optimum, is presented in algorithm (\ref{alg:TEKA}).\\ As an example, figure (\ref{fig:CBF_Cent}) presents the time elastic centroid estimates obtained, using algorithm (\ref{alg:TEKA}) with $K=K_{dtw}$, for the Cylinder $c(t)$, Bell $b(t)$ and Funnel $f(t)$ synthetic functions \cite{Saito1994}, defined as follows\\ $c(t) = (6 + \eta)\cdot\chi_{[a,b]}(t) + \epsilon(t)$\\ \hspace*{4mm} $b(t) = (6 + \eta)\cdot\chi_{[a,b]}(t)\cdot (t-a)/(b-a) + \epsilon(t)$\\ \hspace*{4mm} $f(t) = (6 + \eta)\cdot \chi_{[a,b]}(t) \cdot (b-t)/(b-a) + \epsilon(t)$\\ where $\chi_{[a,b]}(t) =0$ if $t < a \vee t > b$ and $1$ if $a \le t \le b$, $\eta$ and $\epsilon(t)$ are drawn from a standard normal distribution $N(0, 1)$, $a$ is an integer drawn from a uniform distribution on $[16, 32]$ and $b-a$ is another integer drawn from a uniform distribution on $[32, 96]$. Hence such shapes have expected start and end time stamps of $24$ and $88$ respectively, and an expected shape duration of $64$ samples. Figure (\ref{fig:CBF_Cent}) clearly shows that, from a subset of 300 time series (100 for each category), the algorithm has correctly recovered the start and end shape events (hence the expected shape duration) for all three shapes. \begin{algorithm}[] \caption{Iterative Time Elastic Kernel Averaging (TEKA) of a set of time series}\label{alg:TEKA} \begin{algorithmic}[1] \State Let $K$ be a time elastic similarity kernel for time series satisfying eq. (\ref{KDTW_HMM}) \State Let $O$ be a set of time series of $d$ dimensional samples \State Let $c$ be an initial centroid estimate (e.g.
the medoid of $O$) of length $n$ \State Let $\tau$ and $\tau_0$ be two sequences of time stamps of length $n$ initialized with zero values \State Let $MeanK_0 = 0$ and $MeanK = 0$ be two double values; \Repeat \State $c_0 = c$, $\tau_0 = \tau$, $MeanK_0 = MeanK$; \State Evaluate $c$ and $\tau$ according to Eq. (\ref{eq:TEKA}) \State //Average similarity between $c$ and $O$ elements \State $MeanK$=$\frac{1}{|O|}\sum_{o \in O} K(c,o)$ \Until{$MeanK<MeanK_0$} \State ($c_0$, $\tau_0$) is the centroid estimation \State Uniformly re-sample $c_0$ using the time stamps $\tau_0$ \end{algorithmic} \end{algorithm}
\begin{table*}[ht!] \begin{center} \begin{tabular}{cccc}
\textbf{DBA} & \raisebox{-.5\height}{\includegraphics[scale=.22]{CBF_iDBA_c.png}}& \raisebox{-.5\height}{\includegraphics[scale=.22]{CBF_iDBA_b.png}}& \raisebox{-.5\height}{\includegraphics[scale=.22]{CBF_iDBA_f.png}}\\
\textbf{CTW} & \raisebox{-.5\height}{\includegraphics[scale=.22]{CBF_CTW_c.png}}& \raisebox{-.5\height}{\includegraphics[scale=.22]{CBF_CTW_b.png}}& \raisebox{-.5\height}{\includegraphics[scale=.22]{CBF_CTW_f.png}}\\
\textbf{TEKA} & \raisebox{-.5\height}{\includegraphics[scale=.22]{CBF_iTEKA_c.png}}& \raisebox{-.5\height}{\includegraphics[scale=.22]{CBF_iTEKA_b.png}}& \raisebox{-.5\height}{\includegraphics[scale=.22]{CBF_iTEKA_f.png}}\\
\end{tabular} \caption{Centroid estimation for the three categories of the CBF dataset and for the three tested algorithms: DBA (top), CTW (center) and TEKA (bottom). The centroid estimations are indicated as a bold black line superimposed on top of the time series (in light red) that are averaged.} \label{fig:centroidsCBF} \end{center} \end{table*}
The figures presented in Table \ref{fig:centroidsCBF} compare the centroid estimates provided by the iterated DBA \cite{Petitjean2012}, CTW \cite{ZhouDLTore2009} and TEKA algorithms. For this experiment, the DBA and TEKA algorithms were iterated at most 10 times. The centroid estimates provided by the TEKA algorithm are much smoother than those provided by DBA or CTW. This denoising property, expected from any averaging algorithm, will be addressed in a dedicated experiment (cf. subsection \ref{exp:denoising}). \subsection{Role of parameter $\nu$} In practice, the selectivity or bandwidth of the local alignment kernels (controlled by the parameter $\nu$) has to be adapted to the lengths of the time series. If the time series are long, then $\nu$ should be reduced to maintain the computability of the forward-backward matrices, and the local selectivity decreases. Hence, more alignment paths are likely and more sample pairs participate in the calculation of the average, such that local details are filtered out by the averaging. Conversely, if the time series are short, $\nu$ can be increased; hence fewer sample pairs participate in the calculation of the average, and details can be preserved. \subsection{Computational complexity} TEKA has intrinsically the same algorithmic complexity as the DBA algorithm, basically $O(L^2)$ for each pairwise averaging, where $L$ is the average length of the time series. Nevertheless, computationally speaking, the TEKA algorithm is slightly more costly, mainly for two reasons: \begin{itemize} \item the FB matrix induces a factor of three in complexity, because of the reverse alignment and the term-by-term multiplication of the forward and backward matrices; \item the exponential terms that enter into the computation of KDTW (Eq.
(\ref{Eq.KDTW})) are costly, basically $O(M(n)\, n^{1/2})$, where $M(n)$ is the cost of a floating point multiplication and $n$ is the number of digits. This induces another factor of 2 or 3, depending on the chosen floating point precision. \end{itemize} The overall algorithmic cost for averaging a set of $N$ time series of average length $L$, with an average number of iterations $I$, is, for both algorithms, $O(I\cdot N \cdot L^2)$.\\ Some optimizations are indeed possible; in particular, replacing the exponential function by another local kernel that is easier to compute is an important source of algorithmic simplification. We do not address this issue further in this paper and leave it as a perspective. \section{Experiments} The first two experiments aim at demonstrating the benefits of using time elastic centroids in a data reduction paradigm: 1-NC/NM (first nearest centroid/medoid) classification for the first one, and isolated gesture recognition for the second one, using 1-NC/NM and SVM classifiers in conjunction with the KDTW kernel. The third experiment explores the noise reduction capability brought by time elastic centroids. \subsection{1-Nearest Centroid/Medoid classification} The purpose of this experiment is to evaluate the effectiveness of the proposed time elastic averaging method (TEKA) against a triple baseline. The first baseline allows us to compare centroid-based with medoid-based approaches. The second and third baselines are provided by the DBA \cite{Petitjean2012} and CTW \cite{ZhouDLTore2009} algorithms (using the implementations provided by their authors), currently considered as state of the art methods for averaging a set of sequences consistently with DTW. We have tested CTW averaging with a 1-NC-DTW (CTW1) and a 1-NC-KDTW (CTW2) classifier to highlight the impact of the selected similarity measure. For this purpose, we empirically evaluate the effectiveness of the methods using a first nearest centroid/medoid (1-NC/NM) classification task on a set of time series derived from widely diverse fields of application. The task consists of representing each category contained in a training data set by its estimated medoid or centroid, and then evaluating the error rate of a 1-NC classifier on an independent testing data set. Hence, the classification rule consists of assigning to the tested time series the category of the closest (or most similar) medoid or centroid, according to the DTW measure for the DTW medoid (DTW-M) and the DBA and CTW (CTW1) centroids, or to the KDTW measure for the KDTW medoid (KDTW-M) and the CTW (CTW2) and TEKA centroids. \\ In \cite{Petitjean2014} a generalized k-NC task is described. The authors demonstrate that by selecting the appropriate number $k$ of centroids (using DBA and k-means), they achieve, without loss, a 70\% speed-up on average, compared to the original k-Nearest Neighbor task. Although, in general, the classification accuracy is improved when several centroids are used to represent the training data sets, our main purpose is to highlight and amplify the discrimination between time series averaging methods: this is why we stick here to the 1-NC task. \begin{table*}[] \caption{Comparative study using the UCR and UCI data sets: classification error rates evaluated on the TEST data sets (in \%), obtained using the first nearest neighbour classification rule for DTW-M and KDTW-M (medoids), and DBA, CTW1, CTW2 and TEKA (centroids).
A single medoid/centroid extracted from the training data set represents each category.} \label{tab:classResults} \centering \resizebox{\textwidth}{!}{\begin{tabular}{|l|c|c|c|c|c|c|c|}
\hline \textbf{DATASET} & \# Cat $|$ L & \textbf{DTW-M} & \textbf{DBA} & \textbf{CTW1} & \textbf{CTW2} & \textbf{KDTW-M} & \textbf{TEKA} \\ \hline\hline
Synthetic\_Control & 6$|$60 & 3.00 & \textbf{2.00} & 19.00 & 3.33 & 3.33 & 2.33 \\
Gun\_Point & 2$|$150 & 44.00 & 32.00 & 54.67 & \textbf{25.33} & 52.00 & 27.33 \\
CBF & 3$|$128 & 7.89 & 5.33 & 34.22 & 3.55 & 8.11 & \textbf{3.33} \\
Face\_(all) & 14$|$131 & 25.21 & 18.05 & 34.38 & 27.93 & 20.53 & \textbf{13.61} \\
OSU\_Leaf & 6$|$427 & 64.05 & 56.20 & 64.05 & 57.02 & 53.31 & \textbf{50.82} \\
Swedish\_Leaf & 15$|$128 & 38.56 & 30.08 & 32.00 & 25.76 & 31.36 & \textbf{22.08} \\
50Words & 50$|$270 & 48.13 & 41.32 & 48.57 & 36.48 & 23.40 & \textbf{19.78} \\
Trace & 4$|$275 & \textbf{5.00} & 7.00 & 6.00 & 18.00 & 23.00 & 16.00 \\
Two\_Patterns & 4$|$128 & 1.83 & 1.18 & 26.75 & 37.75 & 1.17 & \textbf{1.10} \\
Wafer & 2$|$152 & 64.23 & 33.89 & 37.83 & 33.27 & 43.92 & \textbf{8.38} \\
Face\_(four) & 4$|$350 & 12.50 & 13.64 & 19.32 & 15.91 & 17.05 & \textbf{10.23} \\
Lightning-2 & 2$|$637 & 34.43 & 37.70 & 37.70 & \textbf{29.51} & \textbf{29.51} & \textbf{29.51} \\
Lightning-7 & 7$|$319 & 27.40 & 27.40 & 41.10 & 38.35 & 19.18 & \textbf{16.44} \\
ECG200 & 2$|$96 & 32.00 & 28.00 & 27.00 & \textbf{25.00} & 29.00 & 26.00 \\
Adiac & 37$|$176 & 57.54 & 52.69 & 54.73 & 34.78 & 40.67 & \textbf{32.22} \\
Yoga & 2$|$426 & 47.67 & 47.87 & 53.56 & 48.97 & 47.53 & \textbf{44.90} \\
Fish & 7$|$463 & 38.86 & 30.29 & 39.42 & 22.28 & 20.57 & \textbf{14.28} \\
Beef & 5$|$470 & 60.00 & 53.33 & 53.33 & \textbf{50.00} & 53.33 & \textbf{50.00} \\
Coffee & 2$|$286 & 57.14 & 32.14 & 32.14 & \textbf{28.57} & 32.14 & 32.14 \\
OliveOil & 4$|$570 & 26.67 & \textbf{16.67} & 13.33 & 23.33 & 30.00 & \textbf{16.67} \\
CinC\_ECG\_torso & 4$|$1639 & 74.71 & 53.55 & 73.33 & 42.90 & 66.67 & \textbf{33.04} \\
ChlorineConcentration & 3$|$166 & 65.96 & 68.15 & 67.40 & 67.97 & 65.65 & \textbf{64.97} \\
DiatomSizeReduction & 4$|$345 & 22.88 & 5.88 & 5.23 & \textbf{2.61} & 11.11 & 2.94 \\
ECGFiveDays & 2$|$136 & 47.50 & 30.20 & 34.49 & 13.47 & \textbf{11.38} & 16.37 \\
FacesUCR & 14$|$131 & 27.95 & 18.44 & 32.20 & 21.66 & 20.73 & \textbf{12.19} \\
Haptics & 5$|$1092 & 68.18 & 64.61 & 58.77 & 57.47 & 63.64 & \textbf{53.57} \\
InlineSkate & 7$|$1882 & 78.55 & 76.55 & 81.64 & 82.18 & 78.36 & \textbf{75.09} \\
ItalyPowerDemand & 2$|$24 & 31.68 & 20.99 & 15.84 & 9.33 & \textbf{5.05} & 6.61 \\
MALLAT & 8$|$1024 & 6.95 & 6.10 & 5.24 & \textbf{3.33} & 6.87 & 3.66 \\
MedicalImages & 10$|$99 & 67.76 & 58.42 & 58.29 & 59.34 & \textbf{57.24} & 59.60 \\
MoteStrain & 2$|$84 & 15.10 & 13.18 & 19.01 & 15.33 & 12.70 & \textbf{9.35} \\
SonyAIBORobot\_SurfaceII & 2$|$65 & 26.34 & 21.09 & 20.57 & \textbf{17.52} & 26.23 & 19.30 \\
SonyAIBORobot\_Surface & 2$|$70 & 38.10 & 19.47 & 14.48 & \textbf{9.31} & 39.77 & 17.95 \\
Symbols & 6$|$398 & 7.64 & 4.42 & 22.31 & 20.70 & \textbf{3.92} & 4.02 \\
TwoLeadECG & 2$|$82 & 24.14 & \textbf{13.17} & 20.37 & 19.23 & 27.04 & 18.96 \\
WordsSynonyms & 25$|$270 & 70.85 & 64.26 & 78.84 & 63.32 & 64.26 & \textbf{56.11} \\
Cricket\_X & 12$|$300 & 67.69 & \textbf{52.82} & 78.46 & 73.85 & 61.79 & \textbf{52.82} \\
Cricket\_Y & 12$|$300 & 68.97 & 52.82 & 69.74 & 65.64 & \textbf{46.92} & 50.25 \\
Cricket\_Z & 12$|$300 & 73.59 & \textbf{48.97} & 78.21 & 64.36 & 56.67 & 51.79 \\
uWaveGestureLibrary\_X & 8$|$315 &
38.97 & 33.08 & 37.33 & 34.61 & 34.34 & \textbf{32.18} \\
uWaveGestureLibrary\_Y & 8$|$315 & 49.30 & 44.44 & 45.42 & 41.99 & 42.18 & \textbf{39.64} \\
uWaveGestureLibrary\_Z & 8$|$315 & 47.40 & \textbf{39.25} & 47.65 & 39.36 & 41.96 & 39.97 \\
PWM2 & 3$|$128 & 43.00 & 35.00 & 63.66 & 6.33 & 21.00 & \textbf{4.33} \\
uWaveGestureLibrary\_3D & 8$|$315 & 10.11 & \textbf{5.61} & 9.35 & 7.68 & 13.74 & 7.73 \\
CharTrajTT\_3D & 20$|$178 & 11.03 & 9.58 & 13.45 & 15.05 & 6.93 & \textbf{4.99} \\ \hline \hline
\textbf{\# Best Scores} & - & 1 & 7 & 0 & 9 & 6 & \textbf{27} \\ \hline
\textbf{\# Uniquely Best Scores} & - & 1 & 5 & 0 & 7 & 5 & \textbf{23} \\ \hline
\textbf{Average rank} & - & 4.56 & 2.87 & 4.62 & 2.97 & 3.22 & \textbf{1.6} \\ \hline
\end{tabular}} \end{table*}
A collection of 45 heterogeneous data sets is used to assess the proposed algorithms. The collection includes synthetic and real data sets, as well as univariate and multivariate time series. These data sets are distributed as follows: \\ \begin{itemize} \item 42 of these data sets are available at the UCR repository \cite{KeoghUCRdataset}. Basically, we used all the data sets except \textit{StarLightCurves}, \textit{Non-Invasive Fetal ECG Thorax1} and \textit{Non-Invasive Fetal ECG Thorax2}. Although these last three data sets are still tractable, their computational cost is high because of their size and the length of the time series they contain. All these data sets are composed of scalar time series. \item One data set, uWaveGestureLibrary\_3D, was constructed from the uWaveGestureLibrary\_{X|Y|Z} scalar data sets to compose a new set of multivariate (3D) time series. \item One data set, CharTrajTT, is available at the UCI Repository \cite{Lichman:2013} under the name \textit{Character Trajectories Data Set}. This data set contains multivariate (3D) time series and is divided into two equally sized data sets (TRAIN and TEST) for the experiment. \item The last data set, \textit{PWM2}, which stands for Pulse Width Modulation \cite{PWM}, was specifically designed to demonstrate a weakness of the dynamic time warping (DTW) pseudo distance. This data set is composed of synthetic scalar time series.\\ \end{itemize} For each dataset, a training subset (TRAIN) is defined, as well as an independent testing subset (TEST). We use the training sets to extract single medoids or centroid estimates for each of the categories defined in the data sets. Furthermore, for KDTW-M, CTW2 and TEKA, the $\nu$ parameter is optimized using a \textit{leave-one-out} (LOO) procedure carried out on the TRAIN data sets. The $\nu$ value is selected within the discrete set $\{.01, .05, .1, .25, .5, .75, 1, 2, 5, 10, 15, 20, 25, 50, 100\}$. The value that minimizes the LOO classification error rate on the TRAIN data is then used to compute the error rates on the TEST data.\\ The classification results are given in Table \ref{tab:classResults}. It can be seen from this experiment that \begin{enumerate}[i)] \item centroid-based methods outperform medoid-based methods: DBA and CTW (CTW2) yield lower error rates than DTW-M, as does TEKA compared to KDTW-M and DTW-M; \item CTW pairs much better with KDTW than with DTW (CTW2 outperforms CTW1); \item TEKA outperforms DBA (under the same experimental conditions, i.e. a maximum of 10 iterations) as well as CTW.\\ \end{enumerate} The average ranking of all six tested methods, which supports our preliminary conclusion, is given at the bottom of Table \ref{tab:classResults}.\\ \begin{table}[h!]
\centering \caption{Wilcoxon signed-rank test of pairwise accuracy differences for the 1-NC/NM classifiers carried out on the 45 datasets.} \resizebox{.48 \textwidth}{!}{\begin{tabular}{|l|l|l|l|l|l|} \hline Method & KDTW-M & DBA & CTW1 & CTW2 & TEKA\\ \hline \hline DTW-M & \textbf{\texttt{p<.0001}} & \textbf{\texttt{p<.0001}} & \texttt{0.638} & \textbf{\texttt{0.0002}} & \textbf{\texttt{p<.0001}}\\ \hline KDTW-M & - & \texttt{0.395} & \textbf{\texttt{0.0004}} & \texttt{0.5261} & \textbf{\texttt{p<.0001}}\\ \hline DBA & - & - & \textbf{\texttt{p<.0001}} & \texttt{0.8214} & \textbf{\texttt{p<.0001}}\\ \hline CTW1 & - & - & - & \textbf{\texttt{p<.0001}} & \textbf{\texttt{p<.0001}}\\ \hline CTW2 & - & - & - & - & \textbf{\texttt{p<.0001}}\\ \hline \end{tabular}} \label{tab:significance_test} \end{table} In Table \ref{tab:significance_test} we report the P-values for each pair of tested algorithms using a Wilcoxon signed-rank test. The null hypothesis is that, for a tested pair of classifiers, the difference between the classification error rates obtained on the 45 datasets follows a symmetric distribution around zero. At the $.05$ significance level, the P-values that lead to rejecting the null hypothesis are shown in bold font in the table. This analysis confirms our reading of the classification results: centroid-based approaches perform significantly better than medoid-based approaches, and KDTW-M appears to be significantly better than DTW-M. Furthermore, TEKA is evaluated as significantly better than DBA and CTW2 in this experiment. Note also that DBA does not seem to perform significantly better than KDTW-M or CTW2, and that CTW1 performs similarly to DTW-M and poorly compared to the other centroid methods. Hence, this confirms that the CTW method pairs well with the KDTW measure but poorly with the DTW measure. \subsection{Instance set reduction} In this second experiment, we address an application that consists in summarizing subsets of training time series to speed up an isolated gesture recognition process. The dataset that we consider captures hand shape and upper body movement through the 3D positions of skeletal joints recorded with a Microsoft Kinect 2 sensor. 20 subjects (15 males and 5 females) were selected to perform, in front of the sensor (at a three-meter distance), the six selected NATOPS gestures. Each subject repeated each gesture three times. Hence the isolated gesture dataset is composed of 360 gesture utterances that have been manually segmented to a fixed length of 51 frames\footnote{These datasets will be made available to the community at the earliest feasible opportunity}. To evaluate this task, we have performed a subject cross-validation experiment consisting of 100 tests: for each test, 10 subjects were randomly drawn among the 20 for training, and the remaining 10 subjects were retained for testing. 1-NN/NC (our baselines) and SVM classifiers are evaluated, with or without summarizing, using a single centroid (DBA, CTW, TEKA) or medoid (KDTW-M), the subsets composed of the three repetitions performed by each subject. The $\nu$ parameter of the KDTW kernel, as well as the SVM meta parameters (RBF bandwidth $\sigma$ and $C$), are optimized using a leave-one-subject-out procedure on the training dataset. The kernels $\exp(-DTW(.,.)/\sigma)$ and $\exp(-KDTW(.,.)/\sigma)$ are used respectively in the SVM DTW and SVM KDTW classifiers. \\ \begin{table}[h!]
\centering \caption{Assessment measures (ERR: error rate, PRE: precision, REC: recall and $\mathbf{F_1}$ score) for isolated gesture recognition. $\overline{\#Ref}$ is the number of training gestures for the 1-NN/NC classifiers and the mean number of support vectors for the SVM classifiers.} \resizebox{.48 \textwidth}{!}{\begin{tabular}{|l|l|l|l|l|l|}
\hline \textbf{Method} & \begin{tabular}[c]{@{}l@{}}\textbf{ERR}\\ mean $\|$ std\end{tabular} & \textbf{PRE} & \textbf{REC} & $\mathbf{F_1}$ & \textbf{$\overline{\#Ref}$}\\ \hline \hline
1-NN DTW & .134 $\|$ .012 & .869 & .866 & .867 & 180\\ \hline
1-NN KDTW & \textbf{.128} $\|$ .016 & .876 & .872 & .874 & 180\\ \hline \hline
1-NC DTW-DBA & .136 $\|$ .014 & .868 & .864 & .866 & \textbf{60}\\ \hline
1-NC KDTW-CTW & .135 $\|$ .016 & .871 & .865 & .868 & \textbf{60}\\ \hline
1-NC KDTW-TEKA & \textbf{.133} $\|$ .014 & .871 & .867 & .869 & \textbf{60} \\ \hline \hline
SVM DTW & .146 $\|$ .015 & .871 & .854 & .862 & 164.97 \\ \hline
SVM KDTW & \textbf{.051} $\|$ .015 & .952 & .949 & .951 & \textbf{103.10} \\ \hline \hline
SVM KDTW-M & .087 $\|$ .02 & .929 & .926 & .927 & 47.62 \\ \hline
SVM KDTW-DBA & .080 $\|$ .017 & .935 & .931 & .933 & \textbf{46.74}\\ \hline
SVM KDTW-CTW & .085 $\|$ .021 & .933 & .927 & .930 & 50.12\\ \hline
SVM KDTW-TEKA & \textbf{.079} $\|$ .019 & .937 & .933 & .935 & 47.45\\ \hline
\end{tabular}} \label{tab:isol_res} \end{table}
\begin{table}[h!] \centering \caption{Wilcoxon signed-rank test of pairwise accuracy differences for the 1-NN/NC classifiers. The DTW and KDTW methods exploit the entire training sets, while the other methods only use one centroid for each subject and each gesture label.} \resizebox{.48 \textwidth}{!}{\begin{tabular}{|l|l|l|l|l|} \hline Method & 1-NN & 1-NC & 1-NC & 1-NC\\ & KDTW & DBA & CTW & TEKA\\ \hline \hline 1-NN DTW & \textbf{\texttt{p<.0001}} & \texttt{0.140} & \texttt{0.886} & \texttt{0.371} \\ \hline 1-NN KDTW & - & \textbf{\texttt{p<.0001}} & \textbf{\texttt{0.026}} & \texttt{0.087}\\ \hline 1-NC DBA & - & - & \texttt{0.281} & \textbf{\texttt{0.006}} \\ \hline 1-NC CTW & - & - & - & \texttt{0.199} \\ \hline \end{tabular}} \label{tab:significance_1NN} \end{table}
\begin{table}[h!] \centering \caption{Wilcoxon signed-rank test of pairwise accuracy differences for the SVM classifiers. The DTW and KDTW methods exploit the entire training sets, while the other methods only use one centroid for each subject and each gesture label.} \resizebox{.48 \textwidth}{!}{\begin{tabular}{|l|l|l|l|l|l|} \hline Method & SVM & SVM & SVM & SVM & SVM \\ & KDTW & KDTW-M & DBA & CTW & TEKA\\ \hline \hline SVM DTW & \textbf{\texttt{p<.0001}} & \textbf{\texttt{p<.0001}} & \textbf{\texttt{p<.0001}} & \textbf{\texttt{p<.0001}} & \textbf{\texttt{p<.0001}} \\ \hline SVM KDTW & - & \textbf{\texttt{p<.0001}} &\textbf{\texttt{p<.0001}} & \textbf{\texttt{p<.0001}} & \textbf{\texttt{p<.0001}}\\ \hline SVM KDTW-M & - & - & \textbf{\texttt{0.002}} & \texttt{0.57} & \textbf{\texttt{0.0002}}\\ \hline SVM DBA & - & - & - & \texttt{0.107} & \texttt{0.339} \\ \hline SVM CTW & - & - & - & - & \textbf{\texttt{0.013}} \\ \hline \end{tabular}} \label{tab:significance_SVM} \end{table}
Table \ref{tab:isol_res} gives the assessment measures (ERR: average error rate, PRE: macro average precision, REC: macro average recall and $F_1 = 2 \cdot \frac{\mathrm{precision} \cdot \mathrm{recall}}{\mathrm{precision} + \mathrm{recall}}$) for the isolated gesture classification task.
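For clarity, the macro-averaged measures reported in Table \ref{tab:isol_res} can be computed as in the following sketch (our own illustrative code; \texttt{y\_true} and \texttt{y\_pred} are hypothetical arrays of gesture labels):

\begin{verbatim}
import numpy as np

def macro_scores(y_true, y_pred, labels):
    # Macro-averaged precision, recall and F1 over gesture labels.
    pre, rec = [], []
    for c in labels:
        tp = np.sum((y_pred == c) & (y_true == c))
        pre.append(tp / max(np.sum(y_pred == c), 1))
        rec.append(tp / max(np.sum(y_true == c), 1))
    p, r = np.mean(pre), np.mean(rec)
    return p, r, 2 * p * r / (p + r)
\end{verbatim}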
In addition, the number of reference instances used by the 1-NN/NC classifiers, or the number of support vectors exploited by the SVM ($\overline{\#Ref}$ column in the table), is reported to quantify the data reduction induced by the methods on the training sets. The results show that the DTW measure does not fit well with SVM compared to KDTW: the error rate is about $9\%$ higher (and the $F_1$ score correspondingly lower) for the isolated gesture task. Hence, to compare the DBA, CTW and TEKA centroids using an SVM classification, the KDTW kernel has been used. When using the centroids (SVM KDTW-DBA, SVM KDTW-CTW, SVM KDTW-TEKA) or medoids (SVM KDTW-M), the error rate increases and the $F_1$ score decreases only by around $2.5\%$ and $2\%$ respectively, compared to SVM KDTW, which achieves the best scores. Meanwhile, the number of support vectors exploited by the SVM drops by a factor of two, leading to an expected speed-up of $2$. Compared to 1-NN classification without centroids, the SVM KDTW with centroids achieves a much better performance, with an expected speed-up of $4$ ($\sim 50$ support vectors compared to $180$ gesture instances). This demonstrates the capacity of centroid methods to significantly reduce the size of the training sets while maintaining a very similar level of accuracy. In more detail, TEKA is the centroid-based method that achieves the lowest error rates on the two classification tasks, while DBA is the centroid-based method that exploits the fewest support vectors (46.74). Tables \ref{tab:significance_1NN} and \ref{tab:significance_SVM} give the P-values for the Wilcoxon signed-rank tests. With the same null hypothesis as above (the difference between the error rates follows a symmetric distribution around zero), and at a $.05$ significance level, the P-values that lead to rejecting the null hypothesis are presented in bold font in the tables. From Table \ref{tab:significance_1NN} we note that 1-NN KDTW (which exploits the full training set) performs significantly better than 1-NN DTW, 1-NC DTW-DBA and 1-NC KDTW-CTW, but not significantly better than 1-NC KDTW-TEKA. Conversely, 1-NC KDTW-TEKA performs significantly better than 1-NC DTW-DBA, but not significantly better than 1-NC KDTW-CTW. Similarly, from Table \ref{tab:significance_SVM} we observe that SVM KDTW, which exploits the full training set, performs significantly better than all centroid or medoid based methods. Also, SVM KDTW-TEKA performs significantly better than SVM KDTW-CTW, but not significantly better than SVM KDTW-DBA. Finally, SVM KDTW-TEKA and SVM KDTW-DBA outperform the medoid based method (SVM KDTW-M), but SVM KDTW-CTW does not. While the three centroid methods show rather close accuracies in this experiment, TEKA is significantly better than DBA on the 1-NC task and significantly better than CTW on the SVM task. \subsection{Denoising experiment} \label{exp:denoising} To demonstrate the utility of centroid based methods for denoising data, we construct a demonstrative synthetic experiment that provides some insights.
The test is based on the following 2D periodic signal: \begin{align} X_k(t)=\left(A_k+B_k\sum_{i=1}^\infty \delta(t-\frac{2\pi i}{6\omega_k})\right) \cos(\omega_k t+\phi_k)\\ Y_k(t)=\left(A_k+B_k\sum_{i=1}^\infty \delta(t-\frac{2\pi i}{6\omega_k})\right) \sin(\omega_k t+\phi_k)\nonumber \end{align} where $A_k=A_0+a_k$, $B_k=(A_0+5)+b_k$ and $\omega_k = \omega_0 + w_k$; $A_0$ and $\omega_0$ are constant, and $a_k$, $b_k$, $w_k$, $\phi_k$ are small perturbations in amplitude, frequency and phase respectively, randomly drawn from $a_k \in [0, A_0/10]$, $b_k \in [0, A_0/10]$, $w_k \in [-\omega_0/6.67, \omega_0/6.67]$, $\phi_k \in [-\omega_0/10, \omega_0/10]$.
\begin{figure}[] \centering \begin{tabular}{ccc} \includegraphics[scale=.35, angle=0]{starSig.png} \\ \includegraphics[scale=.35, angle=0]{starSig2D.png} \end{tabular} \caption{$(\tilde{X_k}(t),\tilde{Y_k}(t))$ waveforms (top) and corresponding 2D shape (bottom, plain black curve) of the synthetic signal.} \label{fig:cleanShape} \end{figure}
\begin{figure}[] \centering \includegraphics[scale=.35, angle=0]{starSig_fft_.png} \caption{Log power spectrum of the $\tilde{X_k}$ component.} \label{fig:cleanSpectra} \end{figure}
\begin{figure}[] \centering \begin{tabular}{cc} \includegraphics[scale=.35, angle=0]{starSig_noise.png}\\ \includegraphics[scale=.35, angle=0]{starSig_noise2D.png} \end{tabular} \caption{Noisy $(x_k(t),y_k(t))$ waveforms (top) and corresponding 2D shape (bottom) of the synthetic signal.} \label{fig:noisyShape} \end{figure}
\begin{figure*}[h] \centering \begin{tabular}{cccc} Euclidean & DBA & CTW & TEKA \\ \includegraphics[scale=.2, angle=0]{starSig_fft__Ceuclid.png} & \includegraphics[scale=.2, angle=0]{starSig_fft__Cidba.png} & \includegraphics[scale=.2, angle=0]{starSig_fft__Cctw.png} & \includegraphics[scale=.2, angle=0]{starSig_fft__Citeka_a.png} \\ \includegraphics[scale=.2, angle=0]{starSig_euclid.png} & \includegraphics[scale=.2, angle=0]{starSig_idba.png} & \includegraphics[scale=.2, angle=0]{starSig_ctw.png} & \includegraphics[scale=.2, angle=0]{starSig_iteka_a.png}\\ \includegraphics[scale=.2, angle=0]{starSig_1D_Ceuclid.png} & \includegraphics[scale=.2, angle=0]{starSig_1D_Cidba.png} & \includegraphics[scale=.2, angle=0]{starSig_1D_Cctw.png} & \includegraphics[scale=.2, angle=0]{starSig_1D_Citeka_a.png} \end{tabular} \caption{Centroids obtained from a set of eight noisy instances $\{(x_k,y_k)\}_{k=1\cdots 8}$ for the Euclidean, DBA, CTW and TEKA averaging methods. The log power spectra in dB (top), the 2D shape (center) and the $x$, $y$ waveforms (bottom) are shown.} \label{fig:noisyCentroids} \end{figure*}
In practice we have adopted the following setting: $f_0=\omega_0/(2\pi)=20\,Hz$ and $A_0= 1$. We then center and normalize this 2D signal to get $(\tilde{X_k}(t),\tilde{Y_k}(t))$, corresponding to the plots given in Figure \ref{fig:cleanShape}. The log power spectrum of the $\tilde{X_k}$ component, presented in Figure \ref{fig:cleanSpectra}, shows the Dirac spike located at $f_0=20\,Hz$ (corresponding to the sine component), and the convolution of this spike with a Dirac comb in the frequency domain, which results in pairs of Dirac spikes symmetrically located ($\pm 20\,Hz$) around multiples of $6f_0$, namely $120\,Hz$, $240\,Hz$, etc. This shows that this signal is characterized by an infinite spectrum.
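For reproducibility, here is a minimal sketch (our own code; the sampling rate \texttt{fs} and number of samples \texttt{T} are assumptions, as they are not specified above) of the generation of one clean, centered and normalized instance $(\tilde{X_k}, \tilde{Y_k})$:

\begin{verbatim}
import numpy as np

def clean_instance(T=2048, fs=1000.0, f0=20.0, A0=1.0, rng=None):
    # One clean, centered and normalized 2D instance.
    rng = rng or np.random.default_rng()
    w0 = 2 * np.pi * f0
    A = A0 + rng.uniform(0, A0 / 10)                # A_k
    B = (A0 + 5) + rng.uniform(0, A0 / 10)          # B_k
    wk = w0 + rng.uniform(-w0 / 6.67, w0 / 6.67)    # omega_k
    phi = rng.uniform(-w0 / 10, w0 / 10)            # phi_k
    t = np.arange(T) / fs
    amp = np.full(T, A)
    # discrete surrogate of the Dirac comb: one spike every
    # sixth of a period, mapped to a sample index
    step = 2 * np.pi / (6 * wk)
    spikes = np.arange(step, t[-1], step)
    amp[np.searchsorted(t, spikes)] += B
    x = amp * np.cos(wk * t + phi)
    y = amp * np.sin(wk * t + phi)
    return (x - x.mean()) / x.std(), (y - y.mean()) / y.std()
\end{verbatim}

Adding independent unit-variance Gaussian noise to each component then yields the $0\,dB$ SNR instances described next.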
We then consider noise realizations $\epsilon_k(t)$, with zero mean and unit variance, added to each instance of the 2D signal: \begin{align} x_k(t)=\tilde{X_k}(t)+\epsilon_k(t) \nonumber\\ y_k(t)=\tilde{Y_k}(t)+\epsilon_k(t) \nonumber \end{align} leading to a signal to noise ratio of $0\,dB$. An example of such a noisy instance is given in Figure \ref{fig:noisyShape}. Because the random components of the signal are scattered over a wide spectral band, traditional noise reduction techniques, such as those presented in \cite{Hassan2010} for instance, will not recover the signal properly. The task consists in reducing the noise as far as possible, so as to recover the 2D shape of the noise-free signal from a small set of noisy instances $\{(x_k,y_k)\}_{k=1\cdots 8}$ containing two "periods" of the clean signal. Figure \ref{fig:noisyCentroids} presents the centroid shapes obtained using, from left to right, the Euclidean, DBA, CTW and TEKA methods respectively. We can see that the Euclidean centroid partially retrieves the low frequency sine component without properly sorting out the spike components, while DBA retrieves the spikes more accurately, but without managing to suppress the low frequency noise around the sine component. The CTW centroid appears to be in between: it partially reduces the low frequency noise and partially extracts the spikes. TEKA achieves the best retrieval of the sine and spike components, which are better separated in time and in space. The spectral analysis presented in Figure \ref{fig:noisyCentroids} (top) gives further insight: for the DBA and CTW centroids (top center sub-figures), the series of pairs of Dirac spikes (in dotted red) is still hidden in the noise level (black curve), while it is much better separated from the noise for the TEKA centroid, as shown in the top right sub-figure. Moreover, if we take the clean shapes as ground truth, the signal to noise ratio (SNR) gain estimated from the log power spectra (to get rid of the phase) is $0\,dB$ for the noisy shapes, while it is $1.58\,dB$ for the Euclidean centroid, $1.17\,dB$ for the DBA centroid, $1.57\,dB$ for the CTW centroid, and $3.88\,dB$ for the TEKA centroid. Note that in the calculation of the SNR, preserving the spikes has a lower impact than preserving the low frequency sine wave, which explains why the SNR values obtained by the DBA and CTW centroids are lower than that of the Euclidean centroid. In terms of noise reduction, this experiment demonstrates the ability of the TEKA centroid to better recover, from a few noisy utterances, a signal whose components are scattered over a wide band spectrum. Naturally, if the noise level increases, the quality of the denoising will degrade. \subsection{Discussion} We believe that the noise filtering ability of TEKA is mainly due to the averaging technique described in equation (\ref{eq:TEKA}), which aggregates many plausible alignments between samples (instead of a single best one) while also averaging the times of occurrence of the samples, in particular those corresponding to expected pattern locations and durations, such as the CBF shapes or the spike locations in the third experiment. This ability is also likely to explain the better accuracy results obtained by TEKA compared to the state of the art methods CTW and DBA. Furthermore, it seems that the KDTW measure is better suited than DTW to matching centroids.
Here again, handling several good-to-best alignments rather than a single optimal one allows the centroids to be matched in many ways that are averaged by the measure. This has been verified for CTW in the 1-NC classification tasks, and holds for TEKA and DBA as well. The main limitation in exploiting TEKA (and KDTW) is the tuning of the $\nu$ parameter that controls the selectivity of the local kernel. $\nu$ depends on the length of the time series and needs to be adapted to the task itself. Basically, if $\nu$ is too small, TEKA will filter out high frequency events, just as a moving average filter would. Conversely, if $\nu$ is too high, the computation of the products of local probabilities along the alignment paths will suffer from numerical loss of significance (underflow). Despite this tuning requirement, the three experiments that we have carried out in this study demonstrate its applicability and usefulness. \section{Conclusion} In this paper, we have addressed the problem of averaging a set of time series in the context of a time elastic distance measure such as Dynamic Time Warping. The new perspective provided by the kernelization of the elastic distance allows a re-interpretation of the pairwise kernel alignment matrices as the result of a forward-backward procedure applied to the states of an equivalent stochastic alignment automaton. From this re-interpretation, we have proposed a new algorithm, TEKA, based on an iterative agglomerative heuristic that efficiently computes good solutions to the multi-alignment of time series. This algorithm exhibits quite interesting denoising capabilities, which enlarge the scope of its potential applications. We have presented extensive experiments carried out on synthetic and real data sets, containing univariate as well as multivariate time series. Our results show that centroid-based methods significantly outperform medoid-based methods in the context of first nearest neighbor and SVM classification tasks. More strikingly, the TEKA algorithm, which integrates joint averaging in the sample space and along the time axis, is significantly better than the state-of-the-art DBA and CTW algorithms, with a similar algorithmic complexity. It enables robust training set reduction, which we have demonstrated on an isolated gesture recognition task. Finally, we have developed a dedicated synthetic test to demonstrate the denoising capability of our algorithm, a property that is not supported at the same level by the other time elastic centroid methods on this test. \section*{Acknowledgments} The authors thank the French Ministry of Research, the Brittany Region and the European Regional Development Fund that partially funded this research. The authors also thank the promoters of the UCR and UCI data repositories for providing the datasets used in this study. \bibliographystyle{IEEEtran}
\section{Introduction} To evaluate changes to text-to-speech (TTS) synthesizers, human raters are often employed to assess the synthesized speech. Multiple human ratings of an audio sample contribute to a \textit{mean opinion score} (MOS). MOS has been crowdsourced with specific attention to rater quality by \citet{ribeiro2011crowdmos}. While crowdsourcing introduces a degree of parallelism to the rating process, it is still relatively costly and time-consuming to obtain MOS for TTS quality testing. Numerous systems have been developed to algorithmically produce \textit{objective} assessments of audio quality approximating the \textit{subjective} human assessment, including some which assess speech quality. For example, MCD \citep{kubichek1993mel}, PESQ \citep{rix2001perceptual} and POLQA \citep{rec2011p} target this particular space. These are \textit{intrusive} assessors in that they assume the presence of an undistorted reference signal to facilitate comparisons when deriving ratings, something that does not exist in the case of the synthesized speech of TTS. \textit{Non-intrusive} assessments such as ANIQUE \citep{kim2005anique}, LCQA \citep{grancharov2006non} and P.~563 \citep{malfait2006p} have been proposed to evaluate speech quality where this reference signal is not available. Much research in quality assessment is targeted at telephony, with emphasis on detecting distortions and other artifacts introduced by lossy compression and transmission. Throughout this work, a \textit{synthesizer} constitutes a snapshot of the evolving implementation of a unit selection synthesis algorithm and a continually growing corpus of recorded audio, combined with a specific set of synthesis/cost parameters. When we partition or aggregate data by synthesizer, we take all utterances from a given synthesizer and allocate them \textit{en masse} to a single training or evaluation fold, or to a single aggregate metric, e.g. \textit{synthesizer-level mean MOS}. In \citet{peng2002perpetually}, the authors used human ratings to improve the correlation between MOS and unit selection TTS cost. Similarly, the authors of \citep{alias2011efficient, pobar2012optimization} explore means of tuning cost functions to incorporate subjective preferences. Such works consider direct optimization of synthesizer MOS as a function of synthesizer parameters (e.g. cost function weights). In prior unpublished work we trained similar models and found they could exceed 0.9 Spearman rank correlation between true and estimated synthesizer MOS. However, any modifications to parameter semantics or engine internals render this mapping invalid. It is desirable to learn a synthesizer assessment which operates independently of engine internals, directly assessing pools of TTS waveforms. We demonstrate that deep recurrent networks can model \textit{naturalness}-of-speech MOS ratings produced by human raters for TTS synthesizer evaluation, using only the raw audio waveform as input. We explore a variety of deep recurrent architectures to incorporate long-term time dependencies. Our tuned AutoMOS model achieves a Spearman rank correlation of 0.949, ranking 36 different synthesizers. (Sampling a single human rating for each utterance yields a Spearman correlation of 0.986.) When evaluating the calibration of AutoMOS on multiple utterances with similar predicted MOS, we find five-fold median correlations $> 0.9$ and MSE competitive with sampled human ratings, even when we quantize the predicted utterance MOS to the 0.5 increments of the human rater scale.
Such results open the door for scalable, automated tuning and continuous quality monitoring of TTS engines. \section{Model \& Results} Because audio data is of varying length for each example, either directly pooling across the time dimension or the use of recurrent neural networks (RNN) is suggested. To encode the intuition that valuable information exists at relatively large time-scales (consider phone or transition duration or inflection, which may vary according to context), we opt to explore a family of RNN models. In particular, we test a family of models that layer one or more fully-connected layers atop the time-pooled outputs of a stack of recurrent Long Short-Term Memory \citep[LSTM;][]{hochreiter1997long} cells. The timeseries input to the LSTM is either a log-mel spectrogram or a time-pooled convolution as in \citep{hoshen2015speech}, in each case over a 16kHz waveform. We consider the addition of single frame velocity and acceleration components to this timeseries. The final LSTM layer's outputs are max-pooled across time, and fed as inputs to the fully-connected hidden layers which compute final regression values. We explored a means of inducing learning across longer timeframes (a stacked LSTM where deeper layers use a stride of 2 or more timesteps over the outputs of of lower-level LSTM layers), but found performance comparable to that of a simpler stacked or single-layer LSTM. Max-pooling non-final LSTM layers' outputs and adding skip connections to the final hidden layers was not found to improve performance. \begin{figure} \includegraphics[width=\linewidth]{automos.png} \caption{Diagram of the best performing AutoMOS network} \label{fig:diagram} \end{figure} We explore multiple modes of predicting and training over input waveforms $x$: (1) predict sufficient statistics $\mu(x), \sigma(x)$ and train on log-likelihood of individual human ratings $r_i$ under Gaussian $\log p(r_i | \mu(x), \sigma(x))$, (2) predict $MOS(x)$ and train on utterance-level L2 loss $(MOS(x) - MOS_{true})^2$, and (3) predict $logits(x)$ and train on cross-entropy between the true 9-category distribution of human ratings and $\Cat(logits(x))$ per-utterance. We train a separate set of outputs with a learned embedding of the ground-truth synthesizer, providing both regularization and gradients to the training process. Embeddings are initialized randomly; both the embedding and the prediction thereof receive a gradient (toward each other) in each training step. The best performing model is illustrated in Figure~\ref{fig:diagram}. All models were trained with Adagrad on batches of 20 examples asynchronously across 10 workers. We use five-fold cross-validation to evaluate the best found set of hyperparameters, with all utterances for any given synthesizer appearing exclusively in a single fold. \paragraph{Data} We use a corpus of TTS naturalness scores acquired over multiple years across multiple instances of quality testing for \iffinal Google's \else [[our]] \fi TTS engines. All tests are iterations on a single English (US) voice used across multiple products. Raters scored each utterance given a 5-point Likert scale for naturalness, in half-point increments. We partition training data from holdout data such that all utterances for a given synthesizer are in the same partition. The data includes 168,086 ratings across 47,320 utterances generated by 36 synthesizers. 
The utterance quantity per synthesizer varies from 64 to 4800: \begin{sparkline}{14} \setlength{\sparkspikewidth}{1pt} \definecolor{sparkspikecolor}{named}{blue} \definecolor{sparkbottomlinecolor}{gray}{0.9} \sparkspike 0.027027 0.272349 \sparkspike 0.054054 0.347564 \sparkspike 0.081081 0.383818 \sparkspike 0.108108 0.383818 \sparkspike 0.135135 0.383818 \sparkspike 0.162162 0.383818 \sparkspike 0.189189 0.464384 \sparkspike 0.216216 0.464384 \sparkspike 0.243243 0.464384 \sparkspike 0.270270 0.464384 \sparkspike 0.297297 0.464384 \sparkspike 0.324324 0.464384 \sparkspike 0.351351 0.464384 \sparkspike 0.378378 0.464384 \sparkspike 0.405405 0.464384 \sparkspike 0.432432 0.532720 \sparkspike 0.459459 0.581204 \sparkspike 0.486486 0.581204 \sparkspike 0.513514 0.581204 \sparkspike 0.540541 0.581204 \sparkspike 0.567568 0.581204 \sparkspike 0.594595 0.581204 \sparkspike 0.621622 0.618812 \sparkspike 0.648649 0.618812 \sparkspike 0.675676 0.618812 \sparkspike 0.702703 0.675520 \sparkspike 0.729730 0.689380 \sparkspike 0.756757 0.698025 \sparkspike 0.783784 1.000000 \sparkspike 0.810811 1.000000 \sparkspike 0.837838 1.000000 \sparkspike 0.864865 1.000000 \sparkspike 0.891892 1.000000 \sparkspike 0.918919 1.000000 \sparkspike 0.945946 1.000000 \sparkspike 0.972973 1.000000 \sparkbottomline 0.5 \end{sparkline}, in log-scale. \iffalse \begin{table}[t] \caption{Naturalness scoring guidelines} \label{scores-table} \centering \begin{tabular}{ll} \toprule Score & Description \\ \midrule 5.0~~Excellent & Completely natural speech \\ 4.0~~Good & Mostly natural speech \\ 3.0~~Fair & Equally natural and unnatural speech \\ 2.0~~Poor & Mostly unnatural speech \\ 1.0~~Bad & Completely unnatural speech \\ \bottomrule \end{tabular} \end{table} \fi \paragraph{Hyperparameter Tuning} We used Google Cloud's HyperTune to explore a set of hyperparameters shown in Table~\ref{hparams-table}. About 2 in 3 top-performing tuning runs used the cross-entropy categorical training mode. The top 10 configurations we found had eval-set Pearson correlations between utterance-level predicted and true MOS ranging from 0.56-0.61 after 20,000 training steps. When we constrained the search space to those models using convolution+pooling based timeseries (as opposed to log-mel), we found weaker best eval-set correlations around 0.48. This could indicate there is little value in the sample-level details when dealing in synthesized speech, or could signal insufficient training data. While \citep{sainath2015learning} reported gammatone-like learned filter banks, their speaker-independent ASR covers a much wider range of voices, and we did not observe a similar set of emergent filters from random initialization. Initialization with gammatone filters yielded only nominal improvements in performance (r=0.51). Relative to a simple L2 loss to the true MOS, we observed little benefit from training a Gaussian predictor against individual ratings. The estimated variance was typically higher than the true sample variance for a given utterance. Using a simpler L2 loss against the true MOS provided for faster training and convergence, allowing us to try a wider variety of structural changes. Treating predictions as categorical and using a cross-entropy loss slightly outperformed the L2 construction in the top tuning runs (0.61 categorical vs. 0.58 L2). The categorical form gives AutoMOS more weights and hence a greater capacity near the output layer. 
\begin{table}[t] \caption{Model Hyperparameters} \label{hparams-table} \centering \begin{tabular}{lll} \toprule Description & Range explored & Best Performer \\ & & \begin{small}(Pearson r = 0.61)\end{small} \\ \midrule Learning rate; decay / 1000 steps & 0.0001 - 0.1; 0.9 - 1.0 & 0.057; 0.94\\ L1; L2 regularization & 0.0 - 0.001 & 1.4e-5; 2.6e-5\\ Loss strategy & L2 $|$ cross-entropy & cross-entropy \\ Synthesizer regression embedding dim & 0 - 50 & 37 \\ Timeseries type & log-mel $|$ pooled conv1d & log-mel \\ Timeseries width (\# mel bins, conv filters) & 20 - 100 & 86\\ Timeseries 1-step derivatives & (none) $|$ vel. $|$ vel. + acc. & vel. + acc. \\ LSTM layer width; depth & 20 - 100; 1 - 10 & 93; 2 \\ LSTM timestep stride at non-0th layers & 1 - 10 & 10 \\ LSTM layers feeding hidden layer inputs & all $|$ last & all \\ Post-LSTM hidden layer width; depth & 20 - 200; 0 - 2 & 60; 1 \\ \bottomrule \end{tabular} \end{table} \paragraph{Evaluation} As simple baselines for comparison, we consider (1) a bias-only model which always predicts the mean of all observed utterances' MOS and (2) a small nonlinear model which takes only utterance length as input (two 10-unit hidden layers with rectified linear activation), with the intuition that a longer utterance includes more opportunities to make mistakes deemed unnatural. We draw one human rating for each utterance and show this comparison as a "Sample human rating" column. If errors are unbiased, an increased sample size should reduce error. We sort utterances by predicted MOS and evaluate correlations between $ \mathbb{E}_{group}(MOS_{predicted}) $ and $ \mathbb{E}_{group}(MOS_{true}) $ (where $\mathbb{E}$ is the expected value operator) on groupings of 10 or more utterances with adjacent predicted MOS in a similar fashion to a calibration plot. We show such plots in Figure~\ref{fig:calibplots}. \begin{figure} \includegraphics[width=.74\linewidth]{calib.png} \includegraphics[width=.25\linewidth]{sample.png} \caption{\textbf{Left:} Calibration plots (including all eval folds of AutoMOS): Green represents perfect calibration; blue plots $(\mathbb{E}_{w}(MOS_{predicted})$, $\mathbb{E}_{w}(MOS_{true}))$ within 0.05 windows $w$ along the x-axis. \textbf{Right:}~Samples from score-over-time animations. Visit \texttt{\href{http://goo.gl/cnQbSn}{goo.gl/cnQbSn}} to view.} \label{fig:calibplots} \end{figure} How well can we rank synthesizers relative to one another? To perform this evaluation, we use the above five-fold cross validation to predict a MOS for each utterance using the AutoMOS instance from which it was held-out. We then average at the synthesizer-level, giving us a total of 36 $ \mathbb{E}_{synth}(MOS_{predicted}), \mathbb{E}_{synth}(MOS_{true}) $ pairs upon which we evaluate. Results shown in Table~\ref{results-table}. \begin{table}[t] \caption{RMSE and correlation results (reflecting median fold, except as indicated)} \label{results-table} \centering \begin{minipage}{\linewidth} \renewcommand\footnoterule{ \kern -1ex} \begin{tabular}{lccccc} \toprule & \multicolumn{2}{c}{Baselines} & \multicolumn{2}{c}{AutoMOS} & Ground-truth \\ \cmidrule(lr{.5em}){2-3} \cmidrule(lr{.5em}){4-5} \cmidrule(lr{.5em}){6-6} Metric / Model & Bias-only & NNet(utt. 
length) & Raw & Quantized\footnote{\label{note1}utterance scores from 1-5 in increments of 0.5} & Sample human rating\footnoteref{note1} \\ \midrule \multicolumn{5}{l}{Utterance-level ($n_{fold} = {6000, 6424, 6624, 12348, 15924}$)} \\ ~~RMSE & 0.618 & 0.553 & 0.462 & 0.483 & 0.512 \\ ~~Pearson r & $-$ & 0.454 & 0.668 & 0.638 & 0.764 \\ ~~Spearman r & $-$ & 0.399 & 0.667 & 0.636 & 0.757 \\ \multicolumn{5}{l}{10 utterance means ($n_{fold} = {600, 643, 663, 1235, 1593}$)} \\ ~~RMSE & 0.203 & 0.213 & 0.172 & 0.171 & 0.358 \\ ~~Pearson r & $-$ & 0.812 & 0.930 & 0.933 & 0.962 \\ ~~Spearman r & $-$ & 0.657 & 0.925 & 0.925 & 0.956 \\ \multicolumn{5}{l}{Synthesizer-level means ($n = {36}$; uses all folds)} \\ ~~RMSE & 0.252 & 0.132 & 0.073 & 0.075 & 0.034 \\ ~~Pearson r & $-$ & 0.795 & 0.938 & 0.935 & 0.987 \\ ~~Spearman r & $-$ & 0.679 & 0.949 & 0.947 & 0.986 \\ \bottomrule \end{tabular} \end{minipage} \end{table} \section{Discussion} AutoMOS tends to avoid very-high or very-low predictions, likely reflecting the distribution of the training data. It also seems to learn patterns in the data around certain common "types" of utterances which usually achieve high ("OK, setting your alarm") or low MOS (reading dictionary definitions). It's possible that different distributions of texts per synthesizer could yield easily predictable differences in synthesizer-MOS. A future improvement would be predicting MOS for the raw text and evaluating the \textit{advantage} of an utterance relative to this baseline. In \citep{peng2002perpetually}, naturalness is predictable from unit selection costs; here, we want to remove the predictive baseline of the text. We have begun tuning a TTS engine using AutoMOS. Subsequent human evaluations will provide concrete results on the model and evaluation criteria we've selected. Similarly, we will experiment with the system for continuous quality testing of a large-scale TTS deployment. It may be possible to leverage AutoMOS to do stratified sampling of utterances to send to human raters. This would allow raters to focus energy more evenly across the quality spectrum. To probe what's been learned, we have explored artificial truncation, as in Figure~\ref{fig:calibplots} (right). Methods like layerwise relevance propagation \citep{binder2016layer} or activation difference propagation \citep{shrikumar2016not} have shown promise with image models, and could be interesting to apply to a unit selection cost function. \newpage \begin{small} \bibliographystyle{apalike} \section{What are sparklines?} Sparklines are intense, simple, wordlike graphics, so named by Edward Tufte. This is an example of sparkline: \begin{sparkline}{10} \sparkrectangle 0.3 0.8 \sparkdot 0.5 0.62 blue \sparkdot 1 0.2 red \spark 0.1 0.95 0.2 0.8 0.3 0.3 0.4 0.52 0.5 0.62 0.6 0.7 0.7 0.5 0.8 0.4 0.9 0.25 1 0.2 / \end{sparkline} (the stock price of Daimler Chrysler, for example). In lieu of a more detailed introduction, Professor Tufte's site has an early release of a chapter on sparklines, see www.edwardtufte.com. A PHP implementation can be found at http://sparkline.sourceforge.net/. A sparkline can be added using the \env{sparkline}--environment. Also, you can add sparkling rectangles for the median and special sparkling dots in red or blue. If we want to add a sparkline, be careful not to leave an empty line between the text left of the sparkline and the environment itself, since otherwise the sparkline starts a new paragraph. 
Sparklines do not appear within a dvi-file, they require either pdflatex or conversion to postscript with dvips. The \textsf{sparklines} package requires the \pkg{pgf} package. This makes it incompatible with \pkg{pictex}: the combination both require too many dimension parameters. It is possible that the package \pkg{sparklines} can be used with \pkg{pictexwd}; they can at least be loaded together without using too many dimensions, but no other test was conducted. \section{Usage} \paragraph{Sparkline environment} The sparkline at the beginning of the previous section was created with the following: \begin{verbatim} \begin{sparkline}{10} \sparkrectangle 0.3 0.8 \sparkdot 0.5 0.62 blue \sparkdot 1 0.2 red \spark 0.1 0.95 0.2 0.8 0.3 0.3 0.4 0.52 0.5 0.62 0.6 0.7 0.7 0.5 0.8 0.4 0.9 0.25 1 0.2 / \end{sparkline} \end{verbatim} The argument of the environment gives the width of the graphic as a multiple of the dimension unit \texttt{ex} (approximately the height of the lowercase `x' in the current font). Within the environment, $x$-coordinates run from 0 (extreme left of the graphic) to 1 (extreme right). The height of the graphic is given by the \emph{macro} \verb$\sparklineheight$, defined to produce \texttt{1.75} by default. The units are the same as for the width: the \texttt{ex} of the current font. You can redefine that macro (with \verb$\renewcommand$) to force another height. Within the graphic, the $y$-coordinate runs from 0 (lowermost point) to 1 (uppermost). Each pair of numbers after the macro \verb$\spark$ represents a coordinate pair, giving the location of a point in the above described coordinate system. The macro draws a line from each point to the next. Each number must be followed by a space, and the list is terminated by a \texttt{/}. Be careful that there are an even number of coordinates. The thickness of the line that is created is the value of the length \verb$\sparklinethickness$, which the user may change (with \verb$\setlength$). The default value is \texttt{0.2pt}; the above example was created with the value \texttt{0.3pt}. \paragraph{Sparkrectangle} The \verb$\sparkrectangle$ command produces a background rectangle. It must be followed by two numbers, each followed by a space or the end of the line. They are the $y$-coordinates of the bottom and top of the rectangle. This is supposed to show the `normal range' of the $y$-data, so that a point outside that rectangle represents a departure from normal. The color of the rectangle is `\texttt{sparkrectanglecolor}', which the user may redefine (with \verb$\definecolor$). The initial definition is given by \\ \indent\verb$\definecolor{sparkrectanglecolor}{gray}{0.9}$\\ In the above example, it was first changed to a light green. \paragraph{Sparkdots} The colored dots are produced by \verb$\sparkdot$, the diameter of the dot is the value of the length \verb$\sparkdotwidth$, which the user may change (with \verb$\setlength$). The default value is \texttt{1.2pt}; the above example was created a value of \texttt{1.3pt}. The command takes three parameters, each of which must be followed by a space or the end of the line. The first two are the coordinates of the center of the dot, the third is the color. \paragraph{Bar graphs} Bar graphs can be drawn easily: \begin{sparkline}{5} \sparkspike .083 .18 \sparkspike .25 .55 \sparkspike .417 1 \sparkspike .583 .62 \sparkspike .75 .42 \sparkspike .917 .5 \end{sparkline}. 
This was created by the code: \begin{verbatim} \begin{sparkline}{4} \sparkspike .083 .18 \sparkspike .25 .55 \sparkspike .417 1 \sparkspike .583 .62 \sparkspike .75 .42 \sparkspike .917 .5 \end{sparkline}. \end{verbatim} The macro \verb$\sparkspike$ must be followed by a pair of numbers, each followed by a space or the end of the line. The first of the pair is the horizontal location of the bar and the second is the height. The bars are drawn in color `\texttt{sparkspikecolor}' which the user may redefine (with \verb$\definecolor$). The default is \texttt{black}; the above example was drawn with it changed to \texttt{red}. The width of each bar is the value of the length \verb$\sparkspikewidth$, which the user may change (with \verb$\setlength$). The default is \texttt{2pt}. \paragraph{Colors} In case you want to change colors use \begin{verbatim} \definecolor{sparkrectanglecolor}{gray}{0.9} \definecolor{sparkspikecolor}{named}{red} \definecolor{sparklinecolor}{named}{red} \end{verbatim} before the sparkline environment (see a manual about defining colors in \LaTeX{} if you do not understand the definition of \emph{named} etc.). \paragraph{Bottom line} This adds a bottom line (the x-axis) which can be useful to visually separate different bar charts that are next to each other: \begin{sparkline}{5} \definecolor{sparkbottomlinecolor}{gray}{0.9} \sparkspike .15 .55 \sparkspike .317 1 \sparkspike .483 .62 \sparkspike .65 .42 \sparkspike .817 .5 \sparkbottomline 0.9 \end{sparkline}. The code used was \begin{verbatim} \begin{sparkline}{5} \definecolor{sparkbottomlinecolor}{gray}{0.9} \sparkspike .15 .55 \sparkspike .317 1 \sparkspike .483 .62 \sparkspike .65 .42 \sparkspike .817 .5 \sparkbottomline 0.9 \end{sparkline}. \end{verbatim} Changing the color of the bottom line is quite easy using the command \section*{Version history} \begin{description} \item[] Oct 19, 2014 version 1.6: Emiel van Miltenburg ([email protected]) - Adding a bottom line (the x-axis, this is useful to visually separate different bar charts that are next to each other) and changing the color of the bottom line. \item[] Nov 21, 2009 version 1.5: Benno Puetz ([email protected]) made change of colors possible. \item[] Apr 20, 2009 version 1.4: Alexander Kowalski ([email protected]) found an error concerning spark-rectangles \item[] Mar 21, 2007 version 1.3: User adjustable colors and parameters added by Dan Luecking $\[email protected]$\rangle$ \item[] Mar 19, 2007 version 1.2: Sparkbars added thanks to Harlan Harris $\[email protected]$\rangle$ \item[] Apr 21, 2005 version 1.1: bug removed thanks to Mathias Hofmann $\[email protected]$\rangle$ \item[] Dec 12, 2004 version 1.0: first version of sparklines \end{description} \end{document}
2024-02-18T23:40:50.424Z
2016-11-29T02:12:46.000Z
algebraic_stack_train_0000
3,444
3,732
proofpile-arXiv_066-895
\section{Introduction} In a two-dimensional electron gas (2DEG) subjected to a strong magnetic field, electrons are forced into Landau levels with the kinetic energy quenched. The dominating electron-electron interaction induces various correlated ground states. The most celebrated of these states is the fractional quantum Hall (FQH) liquid which occurs in the vicinity of a set of magnetic filling factors of rational fractions~\cite{jainendrak.jain2007}. Interestingly, even though the system is dominated by the electron-electron interaction, its physics can be well described in a hidden Hilbert space by a set of weakly interacting composite fermions (bosons) that are bound states of an electron with an even (odd) number of quantum vortices, as suggested by the theory of composite fermions (CFs)~\cite{jainendrak.jain2007,jain2009}. In the theory, a FQH state is interpreted as an integer quantum Hall state of CFs. The theory achieves great successes. For instance, the ground state wave functions prescribed by the CF theory for the FQH states achieve high overlaps with those determined from exact diagonalizations~\cite{jainendrak.jain2007}, and the predictions based on an intuitive picture of non-interacting CFs are verified in various experiments~\cite{o.heinonen1997}. The theory can even be applied to more exotic situations such as the half-filling case which is interpreted as a Fermi liquid of CFs~\cite{kalmeyer1992,halperin1993}, and the $5/2$-filling case which is interpreted as a $p$-wave pairing state of CFs~\cite{moore1991}. In effect, for every known state of electrons, one could envision a counterpart for CFs. It is natural to envision that CFs may form a Wigner crystal (WC). Electrons form a Wigner crystal at sufficiently low density when the electron-electron interaction dominates over the kinetic energy~\cite{gabrielegiuliani2005}. In the presence of a strong external magnetic field, the kinetic energy is completely quenched and electrons should have a tendency to form a crystalline phase. However, in 2DEG, the tendency is preempted by the more stable FQH states when the filling factor is close to special fractions such as $1/3$ and $2/5$. Nevertheless, a WC could be stabilized when the filling factor deviates from these fractions. Theoretical studies suggest that the WC of composite particles (CPWC), \emph{i.e.}, a WC consisting not of electrons but composite fermions or bosons, could be stabilized~\cite{yi1998}. More specifically, either a type-I CPWC~\cite{yi1998,narevich2001}, in which all composite particles (CPs) are frozen, or a type-II CPWC~\cite{archer2013}, which lives on top of a FQH state and only freezes CFs excessive for the filling fraction of the FQH state, could be energetically favored over the ordinary electron WC~\cite{yi1998,yang2001,lee2002,goerbig2004,chang2005,chang2006}. Experimentally, there had accumulated a large number of evidences indicating the formations of WCs in 2DEG systems, although these experiments, either detecting the microwave resonances of disorder pinning modes~\cite{andrei1988,engel1997,li1997,li2000,chen2003,chen2004,chen2006,zhu2010,williams1991} or measuring transport behaviors~\cite{zhang2015,liu2014,pan2002,li1991,jiang1991,williams1991,jiang1990,goldman1990}, cannot unambiguously distinguish a CPWC from its ordinary electron counterpart. A possible way to distinguish a CPWC from its ordinary electron counterpart is to examine its low energy phonon excitation. 
The phonon excitation of an ordinary WC had been thoroughly investigated~\cite{maki1983,c^ote1990,c^ote1991}. It consists of a low-frequency branch and a magneto-plasmon mode that occurs near the magnetic cyclotron frequency. Moreover, Kohn's theorem asserts that the magnetic cyclotron mode exhausts all the spectral weight in the long wavelength limit~\cite{kalmeyer1992,simon1998}. For a CPWC, in analog to the ordinary WC, one would expect that its phonon excitation also consists of two branches. However, similar to the magneto-roton mode arisen in FQH liquids~\cite{girvin1986}, its high-frequency branch must be an emergent mode originated purely from the electron-electron interaction, irrelevant to the cyclotron resonance because all excitations of CPs are limited within a partially filled Landau level. To be consistent with Kohn's theorem, the mode must have a vanishing oscillator strength in the long wavelength limit. These features of the emergent mode make it unique. An experimental probe of the mode would provide an unambiguous evidence for the CP nature of an observed WC phase. To determine the low-energy phonon excitation of a CPWC, it is necessary to understand the dynamics of CPs. Unfortunately, up to now, the true nature of the CP dynamics is yet to be fully clarified. Existing theories are based on a heuristic approach assuming that CPs follow the ordinary dynamics characterized by an effective mass and an effective magnetic field~\cite{rhim2015,archer2011,ettouhami2006}. The assumption is in accordance with the conventional wisdom that a CP behaves just like an ordinary Newtonian particle, as implied in either the Halperin-Lee-Read theory of composite Fermi-liquids~\cite{halperin1993} or L\'{o}pez-Fradkin's construction of the Chern-Simons field theory for FQH states~\cite{lopez1991}. However, the validity of the assumption is questionable. An indication of that is the violation of Kohn's theorem: the heuristic approach would predict a new cyclotron mode corresponding to the effective mass and magnetic field. It hints that the CP dynamics adopted by the heuristic approach cannot be the correct one. Actually, even for electrons in a solid, the dynamics in general has a symplectic form (Sundaram-Niu dynamics) in which Berry curvature corrections emerge as a result of the coupling between orbital and internal degrees of freedom (e.g., spin-orbit coupling, SOC)~\cite{sundaram1999,xiao2010}. It was also found that the lattice dynamics of a magnetic solid with a strong SOC is subjected to an emergent gauge field~\cite{qin2012} which gives rise to a dissipationless viscosity~\cite{hughes2011}. For the case of the CPs, although the SOC is irrelevant, their orbital motions are nevertheless entangled with internal degrees of freedom in a strongly correlated ground state. It is reasonable to expect that the entanglement would serve as an effective SOC and give rise to Berry curvature corrections to the dynamics of CPs. Recently, Son also questions the validity of the assumption by noting inconsistencies in the conventional Halperin-Lee-Read theory of CF Fermi liquids, and hypothesizes that a CF could be a Dirac particle~\cite{son2015,son2016}. A Dirac particle would follow a dynamics subjected to a Berry curvature in the momentum space with a singular distribution. We believe that a concrete answer to the question should be a derivation of the CP dynamics directly from microscopic wave functions. 
The theory of CFs, as detailed in Ref.~\cite{jainendrak.jain2007}, is not only an intuitive picture for describing FQH states, but also a systematic way for constructing ground state wave functions as well as the Hilbert space of low-lying excitations. These informations are sufficient for an unambiguously determination of the dynamics. Conversely, a proposal on the nature of CFs should have an implication on how the microscopic states would be constructed. Unfortunately, the correspondence between the microscopic states and the dynamics is rarely explicitly demonstrated in literatures. A rare example of the correspondence can be found in the dipole picture of CFs~\cite{read1994}, which is based on the microscopic Rezayi-Read wave function for a CF liquid~\cite{rezayi1994}. However, even in this case, an explicit form of the dynamics had never been properly formulated (see Sec.~\ref{subsec:Dipole-interpretation}). In this paper, we derive the effective dynamics of CPs directly from microscopic wave functions. The derivation is based on the time-dependent variational principle~\cite{p.kramer1981}. We focus on the type-I CPWC, which is relatively simple without unnecessary obscuring complexities. Based on the dynamics, we conclude that a CP, at least in the CPWC phase, is neither an ordinary Newtonian particle nor a Dirac particle, but a particle subjected to Berry curvature corrections, and follows the more general Sundaram-Niu dynamics. We further show that the CP dynamics is consistent with the dipole picture of CFs as well as Kohn's theorem. We carry out numerical simulations to quantitatively determine the dispersions of phonons. We find an emergent magneto-roton mode which signifies the CP nature of a WC. The mode occurs at frequencies much lower than the magnetic cyclotron frequency, and has a vanishing oscillator strength in the long wavelength limit, consistent with Kohn's theorem. The quantitative results will be useful for future experimental probes of the emergent magneto-roton mode, which would unambiguously distinguish a CPWC from its ordinary electron counterpart. The remainder of the paper is organized as follows. In Sec.~II, we derive the CP dynamics from the microscopic wave function of the CPWC phase. In Sec.~III, we analyze CP pictures emerged from the dynamics. In Sec. IV, we carry out the numerical simulations based on the formalism, and present quantitative results for the dispersions of the phonon excitations. Finally, Sec.~V contains concluding remarks. \section{CP dynamics in a CPWC} \subsection{CPWC wave function} The theory of CFs prescribes an ansatz for constructing the wave functions of the ground state and low-lying excited states of a CF system. A CF wave function is derived from a Hartree-Fock wave function $\Psi_{HF}$, which describes the quantum state of a collection of weakly interacting particles in a fictitious (hidden) Hilbert space. 
The CF wave function is obtained by a transformation from $\Psi_{HF}$~\cite{jainendrak.jain2007,jain2009}: \begin{equation} \Psi\left(\left\{ \bm{r}_{i}\right\} \right)=\hat{P}_{LLL}J\Psi_{HF}\left(\left\{ \bm{r}_{i}\right\} \right),\label{eq:PsiMap} \end{equation} where $\hat{P}_{LLL}$ denotes the projection to the lowest Landau level (LLL), and \begin{equation} J=\prod_{i<j}\left(z_{i}-z_{j}\right)^{m},\label{eq:Bijl-Jastrow} \end{equation} is the Bijl-Jastrow factor which binds an integer number of $m$ quantum vortices to each of electrons, $z_{i}=x_{i}+iy_{i}$ with $\bm{r}_{i}\equiv(x_{i},y_{i})$ being the coordinate of an electron~\footnote{We have assumed that the direction of $q\bm{B}$ is along $+\hat{z}$ direction, where $q$ is the charge of a carrier, and $\hat{z}$ is the normal direction of the 2DEG plane. For the opposite case, one should define $z_{i}=x_{i}-iy_{i}$ instead.}. Equation~(\ref{eq:PsiMap}) maps a state in the conventional Landau-Fermi paradigm to a CF state. Using different Landau-Fermi states and following the ansatz, it is possible to construct a whole array of CF wave functions corresponded to various states observed in 2DEG. For instance, a set of filled Landau levels is mapped to a FQH state~\cite{jainendrak.jain2007}, a Fermi liquid to a CF Fermi liquid~\cite{kalmeyer1992,halperin1993}, a $p$-wave superconductor to the Moore-Read state~\cite{moore1991}. For the ground state of a type-I CPWC, $\Psi_{HF}$ is chosen to be~\cite{yi1998}: \begin{equation} \Psi_{HF}\left(\left\{ \bm{r}_{i}\right\} \right)=\hat{\mathcal{A}}\prod_{i}\phi_{\bm{R}_{i}^{0}}(\bm{r}_{i}),\label{eq:PsiHF} \end{equation} where $\phi_{\bm{R}_{i}^{0}}(\bm{r}_{i})\propto\exp[-(\bm{r}_{i}-\bm{R}_{i}^{0})^{2}/4l_{B}^{2}-i(\hat{z}\times\bm{r}_{i})\cdot\bm{R}_{i}^{0}/2l_{B}^{2}]$ is the wave function of a LLL coherent state centering at $\bm{R}_{i}^{0}$~\cite{maki1983}, $\left\{ \bm{R}_{i}^{0},\,i=1\dots N\right\} $ forms a two-dimensional triangular lattice, $\hat{\mathcal{A}}$ denotes the (anti-) symmetrization of the wave function, and $l_{B}\equiv\sqrt{\hbar/eB}$ is the magnetic length for the external magnetic field $B$. We note that $\Psi_{HF}$ is actually a trial wave function for an ordinary electron WC in the LLL~\cite{maki1983}. The mapping of Eq.~(\ref{eq:PsiMap}) transforms it to a trial wave function for the CPWC with a variational parameter $m$. Different from the usual CF wave functions, $m$ for a CPWC wave function can be even (CF) or odd (composite boson). This is because electrons in a CPWC are spatially localized and not sensitive to the exchange symmetry. Extensive numerical simulations based on the trial wave function had been carried out in Ref.~\cite{yi1998}. It shows that a type-I CPWC is indeed energetically favored over the ordinary electron WC. The low-lying excited states can be constructed by modifying $\Psi_{HF}$. An apparent modification is to replace $\left\{ \bm{R}_{i}^{0}\right\} $ with $\left\{ \bm{R}_{i}\equiv\bm{R}_{i}^{0}+\bm{u}_{i}\right\} $ which introduces deviations of the particles from their equilibrium positions. Another physically motivated modification is to introduce a momentum for each particle. This can be achieved by replacing $\phi_{\bm{R}}(\bm{r})$ with $\phi_{\bm{R}}(\bm{r})\exp(i\bm{k}\cdot\bm{r})$, as apparent for a localized wave packet with a momentum $\bm{p}=\hbar\bm{k}$. 
We note that the similar approach is also adopted in constructing Rezayi-Read's wave function for a CF Fermi liquid~\cite{rezayi1994} as well as in Girvin-MacDonald-Platzman theory of magneto-roton in FQH liquids~\cite{girvin1986}. The modifications result in a wave-function parameterized in $\left\{ \bm{R}_{i}\right\} $ and $\left\{ \bm{k}_{i}\right\} $: \begin{equation} \Psi\left(\left\{ \bm{r}_{i}\right\} \right)\propto\mathcal{A}\hat{P}_{LLL}\prod_{i<j}(z_{i}-z_{j})^{m}\prod_{i}\phi_{\bm{R}_{i}}(\bm{r}_{i})e^{i\bm{k}_{i}\cdot\bm{r}_{i}},\label{eq:wavefunction} \end{equation} which specifies a sub-manifold in the Hilbert space. We assume that the ground state and low-lying phononic excited states of a CPWC completely lie in the sub-manifold. Following the standard procedure of applying the projection to the LLL~\cite{jainendrak.jain2007}, we obtain the explicit form of the wave function (\ref{eq:wavefunction}): \begin{equation} \Psi\left(\left\{ \bm{r}_{i}\right\} \right)\propto\mathcal{A}\prod_{i<j}(z_{i}+ik_{i}l_{B}^{2}-z_{j}-ik_{j}l_{B}^{2})^{m}\prod_{i}\phi_{\bm{R}_{i}}(\bm{r}_{i}),\label{eq:wf2} \end{equation} where $k_{i}\equiv k_{xi}+ik_{yi}$, and we have made a substitution $\bm{R}_{i}+\bm{k}_{i}l_{B}^{2}\times\hat{z}\rightarrow\bm{R}_{i}$, and dropped irrelevant normalization and phase factors. We will base our derivation of the CP dynamics on the ansatz wave function Eq.~(\ref{eq:wf2}). The physical meaning of the momentum $\hbar\bm{k}_{i}$ becomes apparent in Eq.~(\ref{eq:wf2}). It shifts $z_{i}$ in the Bijl-Jastrow factor to $z_{i}^{v}\equiv z_{i}+ik_{i}l_{B}^{2}$. One could interpret $z_{i}^{v}$ as the position of quantum vortices binding with $i$-th electron. The momentum is actually the spatial separation of the electron and the quantum vortices in a CP. This is exactly the dipole picture of CFs proposed by Read~\cite{read1994}. We note that the momentum degrees of freedom only present in systems with $m\ne0$. For an ordinary WC with $m=0$, the momentums have no effect to the wave function except introducing a re-parametrization to $\left\{ \bm{R}_{i}\right\} $. Therefore, the momentums are emergent degrees of a CP system. When adopting the ansatz wave function Eq.~(\ref{eq:wf2}), we basically assume that the CPWC state belongs to the same paradigm as that for FQH states. Viewed from the new CF paradigm, the modifications introduced in Eq.~(\ref{eq:wf2}) are well motivated in physics, notwithstanding its highly nontrivial form. The paradigm of CFs, which dictates how the ground state and low-lying excited states are constructed, had been extensively tested in literatures for FQH states and others~\cite{jainendrak.jain2007}. It is reasonable to believe that the CPWC also fits in with the paradigm. This can be tested by comparing the wave-functions generated by the ansatz with those obtained by diagonalizing microscopic Hamiltonians. In this paper, we will not carry out the test. Instead, we will focus on an immediate question, \emph{i.e.}, had one adopted the paradigm \emph{per se}, what would be the dynamics? It can be shown that our ansatz wave-function approach is equivalent to a CF diagonalization~\cite{jainendrak.jain2007} (see Sec.~\ref{subsec:Quantum-Correspondence-of} and \ref{subsec:Projection-of-the}). The equivalence could serve as a justification for our approach. 
Our approach is advantageous in the sense that it provides direct knowledge of the dynamics of CPs, whereas the CF diagonalization technique provides an efficient machinery for systematically improving calculations but little information about the dynamics. \subsection{Derivation of the CP dynamics} To determine the dynamics of CPs in a CPWC, we employ the time-dependent variational principle of quantum mechanics. It minimizes an action $S\equiv\int_{t_{i}}^{t_{f}}Ldt$ with the Lagrangian~\cite{p.kramer1981}: \begin{equation} L=\frac{i\hbar}{2}\frac{\left\langle \Psi\left|\dot{\Psi}\right.\right\rangle -\left\langle \left.\dot{\Psi}\right|\Psi\right\rangle }{\left\langle \Psi\left|\Psi\right.\right\rangle }-V_{ee},\label{eq:Lagrangian} \end{equation} where we assume that the wave function depends on the time through its parameters $\left\{ \bm{R}_{i},\bm{k}_{i}\right\} $, $V_{ee}\equiv\langle\Psi|\hat{V}_{ee}|\Psi\rangle/\left\langle \Psi\left|\Psi\right.\right\rangle $ is the expectation value of the electron-electron interaction $\hat{V}_{ee}$, and the kinetic part of the microscopic Hamiltonian of the system is ignored since it is quenched in the LLL. A minimization of the action will result in a semi-classical equation of motion~\cite{sundaram1999}. Alternatively, one could interpret the action as the one determining the path integral amplitude of a quantum evolution in the sub-manifold of the Hilbert space~\cite{xiao2005}. The two interpretations are corresponding to the classical and quantum version of the same dynamics, respectively. We proceed to determine the explicit form of the Lagrangian. The Lagrangian can be expanded as \begin{equation} L=\sum_{i}(\bm{A}_{\bm{u}_{i}}\cdot\dot{\bm{u}}_{i}+\bm{A}_{\bm{k}_{i}}\cdot\dot{\bm{k}}_{i})-V_{ee},\label{eq:L} \end{equation} where $\bm{A}_{\bm{u}_{i}}$ and $\bm{A}_{\bm{k}_{i}}$ are Berry connections in the parameter space, $\bm{A}_{\bm{u}_{i}}=-\hbar\mathrm{Im}\left\langle \Psi\left|\partial\Psi/\partial\bm{u}_{i}\right.\right\rangle /\left\langle \Psi\left|\Psi\right.\right\rangle $ and $\bm{A}_{\bm{k}_{i}}=-\hbar\mathrm{Im}\left\langle \Psi\left|\partial\Psi/\partial\bm{k}_{i}\right.\right\rangle /\left\langle \Psi\left|\Psi\right.\right\rangle $, respectively. By using Eq.~(\ref{eq:wf2}), it is straightforward to obtain: \begin{align} \bm{A}_{\bm{u}_{i}} & =-\frac{\hbar}{2l_{B}^{2}}\left\langle \hat{\bm{r}}_{i}\right\rangle \times\hat{z},\\ \bm{A}_{\bm{k}_{i}} & =-m\hbar l_{B}^{2}\left\langle \sum_{j\ne i}\frac{\bm{r}_{i}-\bm{r}_{j}+\hat{z}\times(\bm{k}_{i}-\bm{k}_{j})l_{B}^{2}}{\left|\bm{r}_{i}-\bm{r}_{j}+\hat{z}\times\left(\bm{k}_{i}-\bm{k}_{j}\right)l_{B}^{2}\right|^{2}}\right\rangle ,\label{eq:Ak1} \end{align} where $\left\langle \dots\right\rangle \equiv\left\langle \Psi\left|\dots\right|\Psi\right\rangle /\left\langle \Psi\left|\Psi\right.\right\rangle $, and we ignore the anti-symmetrization in the wave function Eq.~(\ref{eq:wavefunction}). The anti-symmetrization can be re-impose when formulating the quantum version of the dynamics. For the case of CPWCs, the effect due to the non-distinguishability of electrons turns out to be negligible~\cite{yi1998}. The Berry connections could be simplified. 
We make use the identity: \begin{multline} \bm{\nabla}_{\bm{r}_{i}}\left|\Psi\right|^{2}=-\frac{\bm{r}_{i}-\bm{R}_{i}}{l_{B}^{2}}\left|\Psi\right|^{2}\\ +2m\sum_{j\ne i}\frac{\bm{r}_{i}-\bm{r}_{j}+\hat{z}\times(\bm{k}_{i}-\bm{k}_{j})l_{B}^{2}}{\left|\bm{r}_{i}-\bm{r}_{j}+\hat{z}\times\left(\bm{k}_{i}-\bm{k}_{j}\right)l_{B}^{2}\right|^{2}}\left|\Psi\right|^{2}.\label{eq:DPsi} \end{multline} Substituting Eq.~(\ref{eq:DPsi}) into (\ref{eq:Ak1}), we obtain, \begin{equation} \bm{A}_{\bm{k}_{i}}=-\frac{\hbar}{2}\left(\left\langle \hat{\bm{\xi}_{i}}\right\rangle -\bm{u}_{i}\right), \end{equation} where $\hat{\bm{\xi}_{i}}\equiv\hat{\bm{r}}_{i}-\bm{R}_{i}^{0}$. The Berry connections can then be expressed as: \begin{eqnarray} \bm{A}_{\bm{u}_{i}} & = & -\frac{\hbar}{2l_{B}^{2}}\bm{x}_{i}\times\hat{z}+\frac{\hbar\bm{k}_{i}}{2},\label{eq:Au}\\ \bm{A}_{\bm{k}_{i}} & = & -\frac{\hbar}{2}\left(\bm{x}_{i}-\bm{u}_{i}+\bm{k}_{i}\times\hat{z}l_{B}^{2}\right),\label{eq:Ak} \end{eqnarray} where $\bm{x}_{i}\equiv\left\langle \hat{\bm{\xi}_{i}}\right\rangle -\bm{k}_{i}\times\hat{z}l_{B}^{2}$, $\hat{z}$ is the unit normal vector of the 2DEG plane . We note that $\bm{x}_{i}$ is the average position (relative to $\bm{R}_{i}^{0}$) of the quantum vortices binding with $i$-th electron, which is displaced from the electron position $\langle\hat{\bm{\xi}}_{i}\rangle$ by a vector $-\bm{k}_{i}\times\hat{z}l_{B}^{2}$, according to the wave function Eq.~(\ref{eq:wf2}). We adopt $\{\bm{x}_{i},\bm{p}_{i}\equiv\hbar\bm{k}_{i}\}$ as the set of dynamic variables, and interpret $\bm{x}_{i}$ and $\bm{p}_{i}$ as the position and momentum of a CP, respectively. To express the Lagrangian in $\{\bm{x}_{i},\bm{p}_{i}\}$, it is necessary to relate the dynamic variables with the original set of parameters. We assume that both $\bm{x}_{i}$ and $\bm{p}_{i}$ are small in a CPWC, and expand the Lagrangian to the second order of the dynamic variables. For the purpose, we expand $\bm{x}_{i}$ to the linear order of the original parameters: \begin{equation} x_{i\alpha}\approx\sum_{j\beta}A_{i\alpha,j\beta}u_{j\beta}+B_{i\alpha,j\beta}k_{j\beta},\label{eq:x} \end{equation} and, \begin{align} A_{i\alpha,j\beta} & \equiv\left.\frac{\partial\left\langle \hat{x}_{i\alpha}\right\rangle }{\partial u_{j\beta}}\right|_{0}=\frac{1}{l_{B}^{2}}\left\langle \hat{\xi}_{i\alpha}\hat{\xi}_{j\beta}\right\rangle _{0},\label{eq:A1}\\ B_{i\alpha,j\beta} & \equiv\left.\frac{\partial\left\langle \hat{x}_{i\alpha}\right\rangle }{\partial k_{j\beta}}\right|_{0}=-l_{B}^{2}\epsilon_{\alpha\beta}\delta_{ij}\nonumber \\ & +\left\langle \hat{\xi}_{i\alpha}\left(2ml_{B}^{2}\sum_{l\ne j,\gamma}\epsilon_{\beta\gamma}\frac{r_{j\gamma}-r_{l\gamma}}{\left|\bm{r}_{j}-\bm{r}_{l}\right|^{2}}\right)\right\rangle _{0}, \end{align} where $\alpha\,(\beta)=x,y$ indexes the component of the coordinate, $\left\langle \dots\right\rangle _{0}$ denotes the expectation value in the ground state $\Psi_{0}\equiv\left.\Psi\right|_{\bm{u}_{i},\bm{k}_{i}\rightarrow0}$, and $\epsilon_{\alpha\beta}$ is the two-dimensional Levi-Civita symbol. 
Making use the identity Eq.~(\ref{eq:DPsi}), we obtain: \begin{equation} B_{i\alpha,j\beta}=-l_{B}^{2}A_{i\alpha,j\gamma}\epsilon_{\gamma\beta}.\label{eq:B} \end{equation} Substituting (\ref{eq:A1}) and (\ref{eq:B}) into (\ref{eq:x}), we obtain: \begin{align} x_{i\alpha} & =\sum_{j\beta}A_{i\alpha,j\beta}\left(\bm{u}_{j}-\bm{k}_{j}\times\hat{z}l^{2}\right)_{\beta},\label{eq:x-u} \end{align} Similarly, $V_{ee}$ is expanded to the second order of the dynamic variables: \begin{multline} V_{ee}\approx\frac{1}{2}\sum_{i\alpha,j\beta}D_{i\alpha,j\beta}^{\bm{xx}}x_{i\alpha}x_{j\beta}+2D_{i\alpha,j\beta}^{\bm{px}}p_{i\alpha}x_{j\beta}\\ +D_{i\alpha,j\beta}^{\bm{pp}}p_{i\alpha}p_{j\beta}.\label{eq:Vee} \end{multline} The coefficients can be related to correlation functions (see Appendix A): \begin{align} D_{i\alpha,j\beta}^{\bm{x}\bm{x}} & =\frac{1}{l_{B}^{4}}\sum_{\gamma\delta}\left\langle \left(\hat{V}_{ee}-\bar{V}_{ee}\right)\hat{\xi}_{l\gamma}\hat{\xi}_{m\delta}\right\rangle _{0}\nonumber \\ & \times\left[A^{-1}\right]_{i\alpha,l\gamma}\left[A^{-1}\right]_{m\delta,j\beta},\label{eq:Dxx}\\ D_{i\alpha,j\beta}^{\bm{p}\bm{x}} & =-\frac{1}{\hbar}\sum_{\gamma\delta}\epsilon_{\alpha\gamma}\left\langle \frac{\partial\hat{V}_{ee}}{\partial r_{i\gamma}}\hat{\xi}_{l\delta}\right\rangle _{0}\left[A^{-1}\right]_{l\delta,j\beta},\label{eq:Dpx}\\ D_{i\alpha,j\beta}^{\bm{p}\bm{p}} & =\frac{l_{B}^{4}}{\hbar^{2}}\sum_{\gamma\delta}\epsilon_{\alpha\gamma}\epsilon_{\beta\delta}\left\langle \frac{\partial^{2}\hat{V}_{ee}}{\partial r_{i\gamma}\partial r_{j\delta}}\right\rangle _{0},\label{eq:Dpp} \end{align} where $\left[A^{-1}\right]$ denotes the inverse of a matrix with elements $[A]_{i\alpha,j\beta}=A_{i\alpha,j\beta}$, and $\bar{V}_{ee}\equiv\langle\hat{V}_{ee}\rangle_{0}$. Substituting Eqs.~(\ref{eq:Au}, \ref{eq:Ak}, \ref{eq:x-u}, \ref{eq:Vee}) into Eq.~(\ref{eq:L}), we can determine the explicit form of the Lagrangian. Because of the translational symmetry, it is convenient to express the Lagrangian in the Fourier transformed dynamic variables $\bm{x}(\bm{q})\equiv1/\sqrt{N}\sum_{i}\bm{x}_{i}\exp\left(-i\bm{q}\cdot\bm{R}_{i}^{0}\right)$ and $\bm{p}(\bm{q})\equiv1/\sqrt{N}\sum_{i}\bm{p}_{i}\exp\left(-i\bm{q}\cdot\bm{R}_{i}^{0}\right)$, where $\bm{q}$ is a wave vector defined in the Brillouin zone for a triangular lattice. 
The Lagrangian can be decomposed into $L=\sum_{\bm{q}}L_{\bm{q}}$ with: \begin{multline} L_{\bm{q}}=\frac{eB_{e}(\bm{q})}{2}\left(\hat{z}\times\bm{x}^{\ast}(\bm{q})\right)\cdot\dot{\bm{x}}(\bm{q})+\frac{1}{2eB}\left(\hat{z}\times\bm{p}^{\ast}(\bm{q})\right)\cdot\dot{\bm{p}}(\bm{q})\\ +\bm{p}^{\ast}(\bm{q})\cdot\dot{\bm{x}}(\bm{q})-\frac{1}{2}\left[\begin{array}{c} \bm{x}(\bm{q})\\ \bm{p}(\bm{q}) \end{array}\right]^{\dagger}\mathcal{D}(\bm{q})\left[\begin{array}{c} \bm{x}(\bm{q})\\ \bm{p}(\bm{q}) \end{array}\right],\label{eq:Lq} \end{multline} where $B_{e}(\bm{q})$ is determined by, \begin{equation} B_{e}(\bm{q})=\frac{B}{2}\mathrm{Tr}\mathcal{A}^{-1}(\bm{q}),\label{eq:Be} \end{equation} with $\mathcal{A}^{-1}(\bm{q})$ being the inverse of a $2\times2$ matrix with elements $\mathcal{A}_{\alpha\beta}(\bm{q})=\sum_{\bm{R}_{i}^{0}}A_{i\alpha,0\beta}\exp(-i\bm{q}\cdot\bm{R}_{i}^{0})$, and \begin{equation} \mathcal{D}(\bm{q})=\left[\begin{array}{cc} \mathcal{D}^{\bm{xx}}(\bm{q}) & \mathcal{D}^{\bm{px}}(\bm{q})\\ \mathcal{D}^{\bm{px}}(\bm{q}) & \mathcal{D}^{\bm{pp}}(\bm{q}) \end{array}\right], \end{equation} with $\mathcal{D}^{\bm{xx}}(\bm{q})$, $\mathcal{D}^{\bm{px}}(\bm{q})$, and $\mathcal{D}^{\bm{pp}}(\bm{q})$ being the Fourier transforms of $D^{\bm{xx}}$, $D^{\bm{px}}$, and $D^{\bm{pp}}$, respectively. The equation of motion of a type-I CPWC is: \begin{equation} \left[\begin{array}{cc} eB_{e}(\bm{q})\hat{z}\times & I\\ -I & \frac{1}{eB}\hat{z}\times \end{array}\right]\left[\begin{array}{c} \dot{\bm{x}}(\bm{q})\\ \dot{\bm{p}}(\bm{q}) \end{array}\right]=-\mathcal{D}(\bm{q})\left[\begin{array}{c} \bm{x}(\bm{q})\\ \bm{p}(\bm{q}) \end{array}\right],\label{eq:EQM} \end{equation} which is the main result of this paper. Interpretations of the dynamics and its implications to the nature of CPs will be discussed in Sec.~\ref{sec:Interpretations-of-the}. \subsection{Quantization of the effective dynamics\label{subsec:Quantum-Correspondence-of}} The dynamics Eq.~(\ref{eq:EQM}) could be quantized. The resulting quantum dynamics describes the quantum evolution of the system in the sub-manifold of the Hilbert space specified by the wave function Eq.~(\ref{eq:wf2}). A general scheme of the quantization had been discussed in Ref.~\cite{xiao2005}. Basically, the non-canonical kinematic matrix in the left hand side of Eq.~(\ref{eq:EQM}) gives rise to non-commutativity between the dynamic variables: \begin{equation} \left[\begin{array}{cc} [\bm{x}^{\dagger}(\bm{q}),\bm{x}(\bm{q})] & [\bm{x}^{\dagger}(\bm{q}),\bm{p}(\bm{q})]\\{} [\bm{p}^{\dagger}(\bm{q}),\bm{x}(\bm{q})] & [\bm{p}^{\dagger}(\bm{q}),\bm{p}(\bm{q})] \end{array}\right]=i\hbar\left[\begin{array}{cc} eB_{e}(\bm{q})\hat{\epsilon} & -I\\ I & \frac{1}{eB}\hat{\epsilon} \end{array}\right]^{-1},\label{eq:commutation} \end{equation} where $\hat{\epsilon}$ is the $2\times2$ anti-symmetric matrix with $[\hat{\epsilon}]_{\alpha\beta}=\epsilon_{\alpha\beta}$. The system is governed by an effective hamiltonian $\hat{H}_{eff}=V_{ee}$ by upgrading the dynamic variables to quantum operators. The system can be transformed to a phonon representation by a procedure described in Ref.~\cite{qin2012}. 
We solve the generalized eigenvalue equation: \begin{equation} i\omega_{\bm{q}}\left[\begin{array}{cc} -eB_{e}(\bm{q})\hat{\epsilon} & I\\ -I & -\frac{1}{eB}\hat{\epsilon} \end{array}\right]\psi_{\bm{q}}=\mathcal{D}(\bm{q})\psi_{\bm{q}}.\label{eq:gep} \end{equation} The equation gives rise to two positive frequency solutions and two negative frequency solutions, with eigenvectors related by complex conjugations~\cite{qin2012}. The eigenvectors are normalized by $\bar{\psi}_{\bm{q}}\psi_{\bm{q}}=\pm1,$ where $\pm$ is for the positive and the negative frequency solution, respectively, and \begin{equation} \bar{\psi}_{\bm{q}}\equiv-i\psi_{\bm{q}}^{\dagger}\left[\begin{array}{cc} eB_{e}(\bm{q})\hat{\epsilon} & -I\\ I & \frac{1}{eB}\hat{\epsilon} \end{array}\right]. \end{equation} The dynamic variables can then be expressed in phonon creation and annihilation operators: \begin{equation} \left[\begin{array}{c} \bm{x}(\bm{q})\\ \bm{p}(\bm{q}) \end{array}\right]=\sum_{i\in+}\psi_{\bm{q}}^{(i)}a_{\bm{q}i}+\psi_{-\bm{q}}^{(i)\ast}a_{-\bm{q}i}^{\dagger},\label{eq:expansion} \end{equation} where the summation is over the two positive frequency solutions, and $a_{\bm{q}i}$ and $a_{\bm{q}i}^{\dagger}$ are bosonic creation and annihilation operators, respectively. One can verify that the dynamic variables, expressed as Eq.~(\ref{eq:expansion}), do recover the commutation relation Eq.~(\ref{eq:commutation}). With the phonon representation, we define a coherent state as the eigenstate of the annihilation operator: \begin{equation} a_{\bm{q}i}\left|\phi\right\rangle =\phi_{\bm{q}i}\left|\phi\right\rangle . \end{equation} In the the real space, the coherent state is interpreted as, \begin{equation} \left\langle \bm{r}\left|\phi\right.\right\rangle =\frac{\Psi(\bm{r};\phi)}{\left\langle \Psi_{0}\left|\Psi\right.\right\rangle },\label{eq:coherent state} \end{equation} where $\Psi(\bm{r};\phi)$ is the wave function Eq.~(\ref{eq:wf2}) with the parameters substituted with values corresponding to: \begin{equation} \left[\begin{array}{c} \bm{x}^{(+)}(\bm{q})\\ \bm{p}^{(+)}(\bm{q}) \end{array}\right]=\sum_{i\in+}\psi_{\bm{q}}^{(i)}\phi_{\bm{q}i},\label{eq:positive component} \end{equation} where the superscript $(+)$ indicates that the dynamic variables contain positive-frequency components only~\cite{glauber1963}, and the denominator is introduced to eliminate the time-dependent factor of the ground-state component in the wave-function~\cite{p.kramer1981}. For a given phonon state, the corresponding physical wave function can be determined by~\cite{glauber1963}: \begin{equation} \left\langle \bm{r}\left|\varphi\right.\right\rangle =\int\frac{\mathrm{d}\phi\mathrm{d}\phi^{\ast}}{2\pi i}e^{-\left|\phi\right|^{2}}\frac{\Psi(\bm{r};\phi)}{\left\langle \Psi_{0}\left|\Psi\right.\right\rangle }\varphi(\phi^{\ast}). 
\end{equation} For the excited state with $n$ phonons of the mode $(\bm{q},i)$, $\varphi(\phi^{\ast})\propto\phi_{\bm{q}i}^{\ast n}$, the corresponding physical wave function is: \begin{equation} \Psi_{n}(\bm{r})\propto\left.\frac{\partial^{n}}{\partial\phi_{\bm{q}i}^{n}}\frac{\Psi(\bm{r};\phi)}{\left\langle \Psi_{0}\left|\Psi\right.\right\rangle }\right|_{\phi\rightarrow0}.\,\label{eq:Psin} \end{equation} From Eq.~(\ref{eq:Psin}), we conclude that a one-phonon state must be a superposition of: \begin{equation} \left.\frac{\partial}{\partial\bar{u}_{i}}\frac{\Psi(\bm{r})}{\left\langle \Psi_{0}\left|\Psi\right.\right\rangle }\right|_{\bar{u},k\rightarrow0},\,\,\,\left.\frac{\partial}{\partial k_{i}}\frac{\Psi(\bm{r})}{\left\langle \Psi_{0}\left|\Psi\right.\right\rangle }\right|_{\bar{u},k\rightarrow0}. \end{equation} They are corresponding to a set of many-body wave functions: \begin{equation} \hat{P}_{LLL}(z_{i}-Z_{i}^{0})\Psi_{0},\,\,\,\hat{P}_{LLL}(\bar{z}_{i}-\bar{Z}_{i}^{0})\Psi_{0}.\label{eq:basis} \end{equation} One can construct a quantum solution of the phonon excitation problem by directly diagonalizing the microscopic hamiltonian in the truncated Hilbert space span by the bases Eq.~(\ref{eq:basis}). The resulting eigenvalue equation, after an appropriate basis transformation, is nothing but the eigenvalue equation Eq.~(\ref{eq:gep}), with a modified dynamic matrix. The modification to the dynamic matrix will derived in the next subsection. \subsection{Projected dynamic matrix\label{subsec:Projection-of-the}} Before proceeding, we note a subtlety concerning the quantum correspondence of the dynamics. In the derivation of the dynamics, we treat $\bm{x}_{i}$ and $\bm{p}_{i}$ as classical variables. However, when constructing the quantum coherent states, we use only the positive-frequency components of the dynamic variables, as shown in Eq.~(\ref{eq:positive component}). The latter is necessary because the wave function $\left\langle \bm{r}\left|\phi\right.\right\rangle $ defined in Eq.~(\ref{eq:coherent state}) should be a superposition of the ground state and excited states: $\left\langle \bm{r}\left|\phi\right.\right\rangle \sim\Psi_{0}+\sum_{i}\exp(-i\Delta E_{i}t)\Psi_{i}$ with $\Delta E_{i}>0$, \emph{i.e.}, it only contains positive frequency components in its time dependence. By assuming that the wave function is a function of the positive frequency components of the dynamic variables, we are able to obtain a wave function consistent with the general requirement~\cite{glauber1963}. The consideration will introduce a modification to the harmonic expansion of $V_{ee}$. This is because the expansion Eq.~(\ref{eq:Vee}), which treats the dynamic variable as classical variables, includes terms that couple two positive (negative)-frequency components of dynamic variables. These terms induce spurious couplings between the positive and negative frequency components, and should be dropped. On the other hand, one can show that kinematic part of the dynamics is not affected by the spurious coupling. To determine the modification, we note that our wave function Eq.~(\ref{eq:wf2}) depends only on the complex variables $\bar{u}_{i}\equiv u_{xi}-iu_{yi}$ and $k_{i}\equiv k_{xi}+ik_{yi}$ . Thus, $\bar{u}_{i}$ and $k_{i}$ can be chosen to be positive-frequency functions of the time, and a proper harmonic expansion of $V_{ee}$ should only include terms coupling $\{\bar{u}_{i},k_{i}\}$ with their complex conjugates. 
To this end, we expand $V_{ee}$ in terms of $\{\bar{\bm{u}}(\bm{q}),\bm{k}(\bm{q})\}$: \begin{equation} V_{ee}\approx\frac{1}{2}\sum_{\bm{q}}\left[\begin{array}{c} \bar{\bm{u}}(\bm{q})\\ \bm{k}(\bm{q}) \end{array}\right]^{\dagger}\tilde{\mathcal{D}}(\bm{q})\left[\begin{array}{c} \bar{\bm{u}}(\bm{q})\\ \bm{k}(\bm{q}) \end{array}\right], \end{equation} where $\tilde{\mathcal{D}}(\bm{q})$ is the dynamic matrix with respect to $\{\bm{u}(\bm{q}),\bm{k}(\bm{q})\}$. To get rid of the spurious coupling, we introduce a projected dynamic matrix: \begin{equation} \tilde{\mathcal{D}}^{P}(\bm{q})=\tilde{P}_{+}\tilde{\mathcal{D}}(\bm{q})\tilde{P}_{+}+\tilde{P}_{-}^{\dagger}\tilde{\mathcal{D}}(\bm{q})\tilde{P}_{-}. \end{equation} with \begin{equation} \tilde{P}_{\pm}=\left[\begin{array}{cc} \frac{1}{2}\left(1\mp\sigma_{2}\right) & 0\\ 0 & \frac{1}{2}\left(1\pm\sigma_{2}\right) \end{array}\right], \end{equation} where $\sigma_{2}$ is the second Pauli matrix. Similarly, we can obtain a projected dynamic matrix with respect to $\{\bm{x}(\bm{q}),\bm{p}(\bm{q})\}$ by a projection: \begin{align} \mathcal{D}^{P}(\bm{q}) & =P_{+}^{\dagger}(\bm{q})\mathcal{D}(\bm{q})P_{+}(\bm{q})+P_{-}^{\dagger}(\bm{q})\mathcal{D}(\bm{q})P_{-}(\bm{q}),\label{eq:DP} \end{align} with $P_{\pm}=U(\bm{q})\tilde{P}_{\pm}U^{-1}(\bm{q})$, where $U(\bm{q})$ is the transformation matrix relating $\{\bar{\bm{u}}(\bm{q}),\bm{k}(\bm{q})\}$ with $\{\bm{x}(\bm{q}),\bm{p}(\bm{q})\}$: $[\bm{x}(\bm{q}),\bm{p}(\bm{q})]^{T}=U(\bm{q})[\bm{u}(\bm{q}),\bm{k}(\bm{q})]^{T}$. We have: \begin{align} P_{\pm}(\bm{q}) & =\left[\begin{array}{cc} \frac{1}{2}\left(1\mp\mathcal{A}(\bm{q})\sigma_{2}\mathcal{A}^{-1}(\bm{q})\right) & \mp i\mathcal{A}(\bm{q})\\ 0 & \frac{1}{2}\left(1\pm\sigma_{2}\right) \end{array}\right]. \end{align} By substituting the dynamic matrix $\mathcal{D}(\bm{q})$ in Eq.~(\ref{eq:EQM}) and (\ref{eq:gep}) with $\mathcal{D}^{P}(\bm{q})$, one can show that the eigenvalue equation becomes identical to that obtained from the CF diagonalization with the bases Eq.~(\ref{eq:basis}). \section{Interpretations of the CP dynamics\label{sec:Interpretations-of-the}} \subsection{Sundaram-Niu dynamics of CPs} The CP dynamics, as shown in Eq.~(\ref{eq:EQM}), is distinctly different from the one adopted in the heuristic approach, in which a CP is assumed to be an ordinary Newtonian particle characterized by an effective mass and a mean-field effective magnetic field~\cite{ettouhami2006,archer2011,rhim2015}. Our CP dynamics fits in with the form of the more general Sundaram-Niu dynamics with Berry curvature corrections. An analysis of these corrections would provide an insight to the nature of CPs, as we will discuss in the following. Firstly, CPs are subjected to an emergent gauge field $\Delta B_{e}(\bm{q})\equiv B-B_{e}(\bm{q})$. The emergent gauge field gives rise to a dissipationless viscosity, which is a transverse inter-particle force proportional to the relative velocity between two particles~\cite{qin2012,hughes2011}: \begin{equation} \bm{F}_{ij}^{(DV)}=e\Delta\mathcal{B}_{e}(\bm{R}_{i}^{0}-\bm{R}_{j}^{0})\hat{z}\times\left(\dot{\bm{x}}_{j}-\dot{\bm{x}}_{i}\right),\label{eq:DV} \end{equation} where $\Delta\mathcal{B}_{e}(\bm{R}^{0})\equiv\int_{BZ}d^{2}q/(2\pi)^{2}\Delta B_{e}(\bm{q})\exp(i\bm{q}\cdot\bm{R}^{0})$. In ordinary phonon systems, the dissipationless viscosity could arise in the presence of a strong SOC and magnetization. 
In CPWC, however, it is induced by the quantum vortices attached in CPs, similar to the Chern-Simons field emerged in FQH liquids~\cite{lopez1991,zhang1992}. For the latter, one usually adopts a mean-field approximation that gives rise to an effective magnetic field experienced by CPs. For the CPWC, the mean-field approximation is equivalent to keeping only the diagonal component of the emergent gauge field \begin{equation} \Delta B\equiv\Delta\mathcal{B}_{e}(\bm{0})=-\sum_{\bm{R}_{i}^{0}\ne0}\Delta\mathcal{B}_{e}(\bm{R}_{i}^{0}),\label{eq:DeltaB} \end{equation} and assuming that CPs experience an effective magnetic field $B_{eff}=B-\Delta B$. However, the mean field approximation may not be appropriate for a CPWC since it breaks the translational symmetry. Secondly, CPs are subjected to a Berry curvature in the momentum space with $\Omega_{z}=1/eB$. This is a new feature of the dynamics, not presented in the conventional theory of CFs~\cite{halperin1993,simon1998}. The Berry curvature gives rise to an anomalous velocity, which is well known for electron dynamics in magnetic solids with a SOC, and is linked to the (quantum) anomalous Hall effect~\cite{jungwirth2002,haldane1988}. Here, the Berry curvature is not induced by the SOC, but inherited from the Landau level hosting the particles. Indeed, a Landau level, when casted to a magnetic Bloch band, does have a uniformly distributed Berry curvature in the momentum space with $\Omega_{z}^{(LL)}=-1/eB$~\cite{zhang2016}. One can show that the difference in the signs of $\Omega_{z}$ and $\Omega_{z}^{(LL)}$ is due to our assignment of the CP position to its constituent quantum vortices (see Sec.~\ref{subsec:Definition-of-the} and \cite{shi2017}). The presence of a Berry curvature in the momentum space clearly indicates that a CP is neither an ordinary Newtonian particle nor a Dirac particle. Interestingly, had the Berry curvature survived in a half-filled CF Fermi liquid, it would give rise to a $\pi$ Berry phase, the same as that predicted by the Dirac theory~\cite{son2015}. Based on these discussions, we conclude that a CP is a particle following the Sundaram-Niu dynamics. \subsection{Dipole interpretation\label{subsec:Dipole-interpretation}} The interpretation is not necessarily unique. It depends on the choice of dynamic variables and physical meanings one assigns to them. Had we interpreted $\hat{z}\times\bm{p}_{i}/eB$ as the displacement from the electron to the quantum vortices in a CP, as indicated by the wave function Eq.~(\ref{eq:wf2}), we would obtain the dipole interpretation of the dynamics~\cite{read1994,simon1998}. In the dipole interpretation, a CP is regarded as a dipole consisting of an electron and a bundle of $m$ quantum vortices~\cite{read1994,simon1998}. The picture and its relation to the usual position-momentum interpretation had been discussed in Ref.~\cite{read1994}. For the interpretation, we adopt another set of dynamic variables: \begin{align} \bm{x}_{i}^{e} & =\left\langle \hat{\bm{r}}_{i}\right\rangle -\bm{R}_{i}^{0}=\bm{x}_{i}-\frac{1}{eB}\hat{z}\times\bm{p}_{i},\label{eq:xie}\\ \bm{x}_{i}^{\phi} & \equiv\bm{x}_{i}, \end{align} which are positions of the electron and the bundle of the quantum vortices, respectively. Note that the position of a composite particle is assigned to the position of the quantum vortices in Eq.~(\ref{eq:EQM}). 
The equation of motion with respect to the new dynamic variables is: \begin{equation} \left[\begin{array}{cc} e\Delta B_{e}(\bm{q})\hat{z}\times & 0\\ 0 & -eB\hat{z}\times \end{array}\right]\left[\begin{array}{c} \dot{\bm{x}}_{\phi}(\bm{q})\\ \dot{\bm{x}}_{e}(\bm{q}) \end{array}\right]=\mathcal{D}^{\prime}(\bm{q})\left[\begin{array}{c} \bm{x}_{\phi}(\bm{q})\\ \bm{x}_{e}(\bm{q}) \end{array}\right],\label{eq:EQM1} \end{equation} where $\mathcal{D}^{\prime}(\bm{q})$ is the corresponding dynamic matrix, which can be related to $\mathcal{D}^{P}(\bm{q})$ by a transformation. It is notable from Eq.~(\ref{eq:EQM1}) that the electron in a CP is only coupled to the external magnetic field, while the quantum vortices are only coupled to the emergent gauge field~\cite{potter2016}. Although not explicitly specified in the original proposal~\cite{read1994}, the simple form of the coupling could have been expected from the microscopic wave function Eq.~(\ref{eq:wf2}), in which the correlations introduced in the Bijl-Jastrow factor are between coordinates of quantum vortices. It also becomes apparent that our dynamics is consistent with Kohn's theorem. In the long wavelength limit $\bm{q}\rightarrow0$, both $\mathcal{D}^{\prime}(\bm{q})$ and $\Delta B_{e}(\bm{q})$ vanish because of the translational symmetry. As a result, the degrees of freedom associating with $\bm{x}_{\phi}$ become degenerate. The system will only have a trivial zero frequency mode, and no emergent mode will be present. The behavior is exactly what would be expected from Kohn's theorem, because the cyclotron mode, which is the only allowed resonance at $\bm{q}=0$ according to the theorem, is an inter-Landau level excitation, and will not appear in our dynamics which has assumed that all excitations are within a Landau level. From these observations, it becomes apparent that the presence of the Berry curvature in the momentum space would be an inevitable feature of the CP dynamics if we wanted to obtain the particular form of the dipole picture or maintain the consistency with Kohn's theorem. Had we assumed a vanishing Berry curvature in Eq.~(\ref{eq:EQM}), Eq.~(\ref{eq:EQM1}) would have a different form of the coupling to gauge fields, and its right hand side would not become degenerate to be consistent with Kohn's theorem. The presence of the Berry curvature is actually the most important difference between the conventional Chern-Simons theory of CPs~\cite{kalmeyer1992,halperin1993}, which is constructed from particles residing in a free parabolic band~\cite{lopez1991,zhang1992}, and a theory directly derived from a microscopic wave function defined in a Landau level. \subsection{Definition of the CP position\label{subsec:Definition-of-the}} In the position-momentum interpretation of the dynamics, there is arbitrariness in defining the position of a CP. In Eq.~(\ref{eq:EQM}), the position of a CP is interpreted as the position of its constituent quantum vortices. It seems to be equally plausible to interpret the CP position as the electron position (or, e.g., the average position of the electron and the quantum vortices). The issue is: will a different choice affect our interpretation of the dynamics? To see that, we derive the equation of motion with respect to $\{\bm{x}_{e}(\bm{q}),\bm{p}(\bm{q})\}$. 
By substituting Eq.~(\ref{eq:xie}) into Eq.~(\ref{eq:EQM}), it is straightforward to obtain: \begin{multline} \left[\begin{array}{cc} eB_{e}(\bm{q})\hat{z}\times & \frac{\Delta B_{e}(\bm{q})}{B}I\\ -\frac{\Delta B_{e}(\bm{q})}{B}I & -\frac{1}{eB}\left(\frac{\Delta B_{e}(\bm{q})}{B}\right)\hat{z}\times \end{array}\right]\left[\begin{array}{c} \dot{\bm{x}}_{e}(\bm{q})\\ \dot{\bm{p}}(\bm{q}) \end{array}\right]\\ =-\mathcal{D}^{\prime\prime}(\bm{q})\left[\begin{array}{c} \bm{x}_{e}(\bm{q})\\ \bm{p}(\bm{q}) \end{array}\right],\label{eq:EQM2} \end{multline} where $\mathcal{D}^{\prime\prime}(\bm{q})$ is the transformed dynamic matrix with respect to the new dynamic variables. We observe that the equation of motion becomes more complicated. It still fits in with the general form of the Sundaram-Niu dynamics, but with a complicated structure of Berry curvatures~\cite{sundaram1999}. Similar complexity also arises when one adopts other definitions of the CP position. It turns out that the initial definition of the CP position provides the simplest form of equation of motion. We conclude that alternative definitions of the CP position will not affect our general interpretation, \emph{i.e.}, CPs follow the Sundaram-Niu dynamics. The particular choice adopted in Eq.~(\ref{eq:EQM}) is the best because it has the simplest structure of Berry curvatures. \section{Numerical simulations} \subsection{Methods} We employ the Metropolis Monte-Carlo method to evaluate the coefficients defined in Eqs.~(\ref{eq:Ak1}, \ref{eq:Dxx}\textendash \ref{eq:Dpp}). The algorithm and setup of our simulations are similar to those adopted in Ref.~\cite{yi1998}, with a couple of improvements detailed as follows. \begin{figure} \includegraphics[width=1\columnwidth]{Eg-PR} \caption{\label{fig:Simulation}Variational ground state energies of CPWCs relative to that of the ordinary WC. Red points indicate the results of Ref.~\cite{yi1998}. The phase boundaries between CPWC phases with different values of $m$ are determined by comparing the energies, and indicated by dashed vertical lines. Inset: configuration of the simulation cell. } \end{figure} Firstly, our calculation employs a much larger simulation cell which involves 397 electrons arranged as $11$ concentric hexagonal rings in a plane, as shown in the inset of Fig.~(\ref{fig:Simulation}). The larger simulation cell is needed to eliminate finite size effects as the coefficients decay slowly in the real space. Secondly, we use a different wave function for the finite simulation cell, and eliminate the need for introducing ``ghost'' particles explicitly. As pointed out in Ref.~\cite{yi1998}, for a finite lattice in equilibrium ($\bm{R}_{i}=\bm{R}_{i}^{0}$, and $\bm{k}_{i}=0$), the average positions of electrons do not coincide with their expected equilibrium positions due to asymmetry induced by the Bijl-Jastrow factor. As a result, it is necessary to introduce a cloud of ``ghost particles'' for each of electrons to counter balance the effect. In Ref.~\cite{yi1998}, finite size ghost-particle clouds were introduced. In our simulation, we extend the size of the ghost particle clouds to infinity. 
The resulting wave function can be determined analytically: \begin{multline} \Psi\left(\left\{ \bm{r}_{i}\right\} \right)\propto\mathcal{A}\frac{\prod_{i<j\leq N}(z_{i}+ik_{i}l_{B}^{2}-z_{j}-ik_{j}l_{B}^{2})^{m}}{\prod_{i\leq N}\prod_{j\ne i,\leq N}\left(z_{i}+ik_{j}l_{B}^{2}-Z_{j}^{0}\right)^{m}}\\ \times\prod_{i=1}^{N}\left[\psi(z_{i}-Z_{i}^{0}+ik_{i}l_{B}^{2})\right]^{m}\phi_{\bm{R}_{i}}(\bm{r}_{i}),\label{eq:wf3} \end{multline} where $Z_{i}^{0}\equiv X_{i}^{0}+iY_{i}^{0}$ ($\bm{R}_{i}^{0}\equiv(X_{i}^{0},Y_{i}^{0})$), $N$ is the total number of electrons in the simulation cell, and~\cite{rhim2015}, \begin{equation} \psi(z)\equiv\frac{\prod_{i\ne0}\left(z-Z_{i}^{0}\right)}{\prod_{i\ne0}\left(Z_{i}^{0}\right)}\propto\frac{1}{z}\theta_{1}\left(\left.\frac{z}{a}\right|\frac{1}{2}+i\frac{\sqrt{3}}{2}\right), \end{equation} where $a$ is the lattice constant of the WC, the product is extended to an infinite triangular lattice with unit vectors $\bm{a}_{1}=(1,0)a$ and $\bm{a}_{2}=(1/2,\sqrt{3}/2)a$, and $\theta_{1}$ is the Jacobi theta function. Equation (\ref{eq:wf3}) is used in our numerical simulations. An important issue of our simulation is to extrapolate the calculation results obtained in a finite simulation cell to the macroscopic limit. To this end, we find that $A_{0}^{\mathrm{finite}}$, coefficient defined in Eq.~(\ref{eq:A1}) calculated with a harmonic approximation of the wave function for the finite simulation cell (See Eq.~(\ref{eq:A0}) in Appendix B), fits the long-range tail of the calculated coefficient very well. Hence, we divide the coefficient into a long range part $A_{0}^{\mathrm{finite}}$ and a short range part that decays rapidly with the distance, and fit the short range part up to the fifth nearest neighbors. The extrapolation is then straightforward by upgrading $A_{0}^{\mathrm{finite}}$ to its infinite lattice counterpart, which can be determined analytically. Similar extrapolation schemes are applied for the determinations of the coefficients Eq.~(\ref{eq:Dxx}\textendash \ref{eq:Dpp}). We can have harmonic approximation for these coefficients as well (see Eq.~(\ref{eq:D0q}) in Appendix B). They are regarded as the long range parts of the coefficients. In this case, the remainders of the coefficients decays as $1/|\bm{R}_{i}^{0}-\bm{R}_{j}^{0}|^{5}$ in the long range. We fit the remainders of $\mathcal{D}^{\bm{x}\bm{x}}$ and $\mathcal{D}^{\bm{p}\bm{x}}$ with short range terms up to the fifth nearest neighbors, whereas for $\mathcal{D}^{\bm{p}\bm{p}}$, the higher precision of the calculated values allows us to fit it with a $1/|\bm{R}_{i}^{0}-\bm{R}_{j}^{0}|^{5}$ term plus the short range terms. We note that using the short-range terms to fit the remainders may yield an incorrect asymptotic behavior in the long wavelength limit. It makes our determination of the dynamic matrix less reliable in the regime. Figure~\ref{fig:Simulation} shows the variational ground state energies determined from our simulations, and a comparison with the results presented in Ref.~\cite{yi1998}. In our simulations, each of Markov chains contains a total $5.6\times10^{12}$ proposal states with an acceptance rate $\sim25\%$. They yield essentially identical results as the old simulations (within the error bars of the old simulations) albeit with much improved precision. \subsection{Results} \begin{figure} \includegraphics[width=1\columnwidth]{Beff-PR} \caption{\label{fig:Beff}Emergent gauge field. 
(a) Distribution of the emergent gauge field in the Brillouin zone for four representative filling factors (see legends in (b)) with different values of $m$; (b) Decay of the dissipationless viscosity coefficient $\Delta\mathcal{B}_{e}(\bm{R}_{i}^{0})$ in the real space. $R$ denotes the distance between two particles, and $a$ is the lattice constant; (c) Filling factor $\nu$ dependence of the emergent gauge field at the $K$-point of the Brillouin zone (circle-solid line) and the mean-field value $\Delta B$ defined in (\ref{eq:DeltaB}) (triangle-dashed line ). } \end{figure} Figure \ref{fig:Beff} shows the emergent gauge field $\Delta B_{e}$. The distribution of the emergent field in the Brillouin zone is shown in Fig.~\ref{fig:Beff}(a). It peaks at the $K$-point and vanishes at the $\Gamma$-point. In the real space, the dissipationless viscosity coefficient decays rapidly with the distance, as shown in Fig.~\ref{fig:Beff}(b). The strength of the emergent gauge field is characterized either by the value of $\Delta B_{e}(\bm{q})$ at the $K$-point or the mean-field value $\Delta B$ defined in Eq.~(\ref{eq:DeltaB}). Both are shown in Fig.~\ref{fig:Beff}(c). The magnitude of the emergent field is ranged from a few percents to tens percents of the external magnetic field, and is an increasing function of the filling factor for a given value of $m$. The magnitude is smaller than that expected for a FQH liquid, which has a mean-field value $\Delta B^{FQH}/B=m\nu$. It indicates the mean-field approximation adopted for the theory of FQH liquids is not applicable for the CPWCs. On the other hand, the magnitude is actually gigantic in comparing with that generated by an intrinsic SOC. For instance, the intrinsic SOC in GaAs could also give rise to a similar emergent gauge field in an ordinary 2D WC. However, its magnitude is of order of $\sim0.01\,\mathrm{T}$ only~\cite{ji2017}. \begin{figure} \includegraphics[width=1\columnwidth]{Omegaq-PR} \caption{\label{fig:Omegaq}Phonon dispersions of type-I CPWCs. (a\textendash d) Phonon dispersions for a few representative filling factors. Both results using the projected dynamic matrix (solid lines) and the unprojected one (dotted lines) are shown. (e) Filling factor $\nu$ dependence of phonon energies of the upper (U) and the lower (L) branch at high symmetry points of the Brillouin zone, including $K$ point, $M$ point, as well as $\Gamma$ point (evaluated at $\bm{q}=0.01\bm{K}$). $e^{2}/\epsilon l$ ($\approx4.3\sqrt{B[\mathrm{T}]}\,\mathrm{meV}$ for GaAs) is the Coulomb energy scale. Error bars for the phonon energies near the $\Gamma$-point are shown. } \end{figure} The phonon dispersions of type-I CPWCs are obtained by solving the generalized eigenvalue equation Eq.~(\ref{eq:gep}). The results are summarized in Fig.~\ref{fig:Omegaq}. Among the two branches of phonons of a CPWC, the lower branch is not much different from that of an ordinary WC, both qualitatively and quantitatively~\cite{maki1983,c^ote1990,c^ote1991}, whereas the upper branch is an emergent mode with an energy scale $\sim0.5\nu^{3/2}e^{2}/\epsilon l_{B}$, which is much smaller than the cyclotron energy. The upper branch has similar origin and energy scale as the magneto-roton mode arisen in FQH liquids~\cite{girvin1986}. We thus interpret the mode as the magneto-roton mode of the CPWC. 
\begin{figure} \includegraphics[width=1\columnwidth]{oscs} \caption{\label{fig:Oscillator-strength} Oscillator strength of the emergent magneto-roton mode for a few representative filling factors, in unit of $\omega_{\bm{q}}/\omega_{c}$, where $\omega_{\bm{q}}$ is the frequency of the mode, and $\omega_{c}$ is the cyclotron frequency. } \end{figure} We also calculate the oscillator strength of the emergent magneto-roton mode. To do that, we determine the response of the system to an external time-dependent electric field $\bm{E}(t)=\bm{E}_{\omega}\exp(-i\omega t)$. Because the external electric field is only coupled to the electron degree of freedom (see Sec.~\ref{subsec:Dipole-interpretation}), it will introduce a scale potential $e\bm{E}(t)\cdot\bm{x}^{e}\equiv e\bm{E}(t)\cdot(\bm{x}-\hat{z}\times\bm{p}/eB)$ into the system. The extra term in the potential could be interpreted as a coupling to the dipole of a CP. As a result, the equation of motion has the form: \begin{multline} \left[\begin{array}{cc} eB_{e}(\bm{q})\hat{z}\times & I\\ -I & \frac{1}{eB}\hat{z}\times \end{array}\right]\left[\begin{array}{c} \dot{\bm{x}}(\bm{q})\\ \dot{\bm{p}}(\bm{q}) \end{array}\right]=-\mathcal{D}(\bm{q})\left[\begin{array}{c} \bm{x}(\bm{q})\\ \bm{p}(\bm{q}) \end{array}\right]\\ +\left[\begin{array}{c} -e\bm{E}(t)\\ \bm{E}(t)\times\hat{z}/B \end{array}\right].\label{eq:EQM-1} \end{multline} By solving the equation, we can determine the displacement of electrons parallel to the electric field, and the oscillator strength $f_{\bm{q}i}$ is defined by the relation~\cite{johndavidjackson1999} $x_{e}^{\parallel}(\bm{q})=-e/m_{b}\sum_{i}f_{\bm{q}i}/(\omega_{\bm{q}i}^{2}-\omega^{2}-i\omega0^{+})E_{\omega}$, where $m_{b}$ is the electron band mass of the 2DEG. The oscillator strength for the emergent magneto-roton mode is shown in Fig.~\ref{fig:Oscillator-strength}. We see that it vanishes at the limit $\bm{q}\rightarrow0$, consistent with Kohn's theorem. The mode we predict here has an energy scale $\sim0.5\nu^{3/2}e^{2}/\epsilon l_{B}$. For typical experimental parameters, the energy is much larger than that probed by existing microwave experiments~\cite{andrei1988,engel1997,li1997,li2000,chen2003,chen2004,chen2006,zhu2010,williams1991}, which are focused on the disorder pining modes. Our prediction thus calls for new microwave experiments to probe the new energy regime. \section{Concluding remarks} In summary, we have derived the effective dynamics of CPs in a CPWC directly from the microscopic wave function. We find, most notably, the presence of a Berry curvature in the momentum space. The picture emerged from the dynamics is different from the conventional CF theory which assumes that CPs behave just like an ordinary Newtonian particle. On the other hand, we show that the dynamics is consistent with the dipole picture of CPs, and the presence of the Berry curvature is actually an inevitable consequence of the picture. The consistency is not a coincidence, since both are based on microscopic wave functions. Although our theory is developed for the CPWC phase, the insight may be carried over to the liquid phase. In particular, the presence of the Berry curvature would provide a cure for deficiencies of the conventional CF theory~\cite{shi2017}. The solution is less radical and would be more natural compared to that prescribed by the Dirac theory~\cite{son2015}. Our study reveals the discrepancy between the conventional interpretation of CPs and that emerged from a microscopic wave function. 
This is not surprising because the conventional picture was developed from a flux-attachment argument for free particles residing in a parabolic band~\cite{lopez1991}, while the microscopic wave functions are constructed for electrons constrained in a Landau level. For the latter, it would be highly desirable to have a CF theory which makes no direct reference to the magnetic field, since in a Landau level all the effects of the magnetic field has been accounted for by the Berry curvature. Indeed, in our dynamics, all the references to the magnetic field $eB$ could be interpreted as $1/\Omega_{z}$. Such a theory would also be a first step toward an understanding of the fractional Chern insulators~\cite{parameswaran2013}. \begin{acknowledgments} This work is supported by National Basic Research Program of China (973 Program) Grant No. 2015CB921101 and National Science Foundation of China Grant No. 11325416. \end{acknowledgments}
2024-02-18T23:40:50.701Z
2017-11-07T02:13:22.000Z
algebraic_stack_train_0000
3,458
10,452
proofpile-arXiv_066-1272
\section{Introduction} Hydrogen can be produced from a variety of renewable resources or in modern 4th-generation nuclear reactors operating at high temperatures where hydrogen production by {}water {}hydrolysis advantageously serves also to their cooling during periods when electricity cannot be produced. Then it is utilized in high-efficiency power generation systems with no emission of pollutants based on thermo-chemistry (burning directly hydrogen) or electro-chemistry (using fuel cells, cf.~Section~\ref{sec-fuel-cells} for little more details). Hydrogen contains more energy per unit mass than any other available fuel. However, being the lightest element of the Periodic Table, it is highly volatile. Thus, in order to be compactly stored, standardly it is compressed in heavy high-pressure tanks or liquefied with recourse to expensive cryogenic systems. The lack of an efficient and economical way to store hydrogen is the major barrier to the massive commercial implementation of hydrogen-based technologies, especially in the automotive sector \cite{Edwards2008}. A promising alternative to cryogenic and high-pressure hydrogen storage option is provided by solid-state storage, a technology which exploits the property of certain metals and alloys to accommodate hydrogen atoms in their interstitial sites \cite{Libowitz1994}. We propose a mathematical model for hydrogen adsorption in metals. Beside diffusion, the model accounts for phase transformation, temperature, strain, and hysteresis, cf.\ e.g.\ \cite{DrClGu??HCHS}. Thus, our model, {based on a conventional rational mechanics (cf.\ Remark~\ref{rem-RM} below)}, extends those proposed and analyzed in \cite{Bonetti2012,Bonetti2007,Chiodaroli2011}, Since the modeling is entirely new, a detailed derivation is presented in the next Section~\ref{sec-model} where also the model is a bit reformulated to facilitate mathematical analysis; {of course, various simplifications had to been adopted, cf.\ Remark~\ref{rem-simplifications} below.} Mathematical results as far as existence of weak solutions are summarized in Section~\ref{sec-results} while their proof by a carefully designed semi-implicit discretisation in time is done in Section~\ref{sec-discrete}. Eventually, in Section~\ref{sec-fuel-cells}, we briefly sketch the augmentation of the model for a multicomponent, charged (i.e.\ ionized) chemically reacting medium instead of mere single-component electro-neutral hydrogen, having in mind e.g.\ application to the mentioned fuel cells or to elastic semiconductors. \section{Model derivation}\label{sec-model} We consider a solid body, which we identify with a {domain} $\Omega$ of the three-dimensional space. We regard $\Omega$ as a platform for several mutually interacting processes and phenomena affecting the kinetics of hydrogen adsorption/desorption \cite{Latroche2004,Libowitz1994}: \begin{itemize} \item \emph{Phase transformation}: at a low concentration, hydrogen atoms form a dilute interstitial solid-solution (\emph{$\alpha$-phase}). Increasing the hydrogen concentration causes parts of the solid solution to precipitate into a \emph{$\beta$-phase} of larger interstitial concentration and lower density. Further addition of hydrogen takes place at constant pressure, until the metal is entirely converted into hydride. \item \emph{Temperature variation}: hydrogenation is \emph{exothermic} and \emph{reversible}: when the metal is exposed to hydrogen, heat is generated; conversely, heating the hydride drives the reaction in the reverse direction. 
\item \emph{Strain and stress}: hydrogenation is accompanied by large expansion of the unit cell volume of the crystal. {Within this ``swelling''}, volume changes between the two phases can vary from 8\% to 30\% and it may cause large stresses. \item \emph{Spatial distribution and transport}: in addition, an important feature is distri\-buted-parameter character of such storage devices. In particular, the motion of H atoms after dissociation of the $\rm H_2$ molecule on the surface is diffusion driven by gradient of chemical potential, and heat transfer and force equilibrium must be properly counted. \end{itemize} In order to describe the above-mentioned processes we introduce the following time-dependent fields on $\Omega$, which we refer to as \emph{primary fields}: \begin{itemize} \item $\bfu$, the displacement field; \item {$\bbc$, the microstructural phase field;} \item $\CHI$, the concentration of moles of hydrogen per unit volume; \item $\vartheta$, the temperature field; \end{itemize} The microstructural field is a collection of scalar variables which contains information concerning phase transformation and damage. We now derive a system of partial differential equations ruling the evolution of the primary fields. We do this in two steps. \emph{Step 1: Balance laws.} We invoke certain well-accepted thermomechanical prin\-ciples, whose statement requires the introduction of some \emph{auxiliary fields}: \begin{center} \ \ \ \ \ \ \ \begin{minipage}[t]{0.36\linewidth} ${\boldsymbol\sigma}$ stress, $\bff$ bulk force, $\bff_{\rm s}$ surface force, $\bbs$ internal microforce, $\bbS$ microstress, $\bbf$ bulk microforce, $\bbf_{\rm s}$ external surface microforce, $e$ internal energy, \end{minipage} \hfill \begin{minipage}[t]{0.52\linewidth} $\bfeps(\bfu)=\frac12\left(\nabla\bfu+\nabla\bfu^\top\right)$ small-strain tensor, $\psi$ free energy, $\mu$ chemical potential, $\bfh$ hydrogen flux, $h$ bulk hydrogen supply, $h_{\rm s}$ surface hydrogen supply, $\bfq$ heat flux, $q$ bulk heat supply. \end{minipage} \end{center} \medskip \noindent Each particular specification of space-time evolution of primary and auxiliary fields constitutes a \emph{dynamical process}. We require that every dynamical process comply with the following balance equations: \begin{subequations}\label{balance} \begin{align} &\varrho\DDT\bfu-\textrm{div}{\bfsigma}=\bff,\label{paperino}\\ &\bbs-\textrm{div}\bbS=\bbf,\label{qui}\\ &\DT\CHI+\textrm{div}\mathbf h=h,\label{quo}\\ &\DT e+\textrm{div}\mathbf q=q+{\bfsigma}{:}\bfeps(\DT\bfu) +\bbs{\cdot}\DT\bbm+\bbS{:}\nabla\DT\bbm+\DT\CHI\mu-\mathbf h{\cdot}\nabla \mu, \label{qua} \end{align} \end{subequations} where the dot denotes the time derivative. The statements contained in \eqref{balance} are, in the order: the \emph{standard-force balance}, the \emph{microforce balance}, the \emph{balance of mass for hydrogen}, and the \emph{balance of internal energy}. The corresponding natural conditions on $\partial\Omega$ are:\\\vspace*{-2em} \begin{subequations}\label{boundary} \begin{align} &{\bfsigma}\bfn=\bff_{\rm s},\\ &\bbS\bfn=\bbf_{\rm s},\\ &\bfh\cdot\bfn=h_{\rm s},\\ &\bfq\cdot\bfn=q_{\rm s}. \end{align} \end{subequations} Although the number of balance equations equals that of primary fields, the system \eqref{balance} and \eqref{boundary} is under-determined. Such indeterminacy reflects the fact that these laws are common to a wide spectrum of thermomechanical systems. 
Thus, they cannot single out the particular mathematical model that best fits the system under investigation. One needs indeed additional conditions which can distinguish one particular material from another. These are called \emph{constitutive prescriptions}.\medskip \emph{Step 2: Second law.} A constitutive prescription is typically expressed as a relation between the instantaneous value of a secondary field at a given point and that of a so-called \emph{constitutive list}, a list of quantities obtained by taking the values of primary and secondary fields, or their space/time derivatives. A basic principle that guides the formulation of constitutive prescriptions is the {}requirement that every conceivable dynamical process be consistent with {}the \emph{entropy inequality}: \begin{equation}\label{pippo} \DT s\ge \textrm{div}\Big(\frac {\mathbf q}{\vartheta}\Big)+\frac q \vartheta, \end{equation} irrespectively of the practical difficulties involved in realizing such a process. Thus, unlike the balance laws \eqref{balance}, the imbalance \eqref{pippo} is not explicitly stated in the mathematical model, but it is implicitly enforced through a suitable choice of constitutive prescriptions. The entropy inequality is best exploited by replacing, in the list of fields to be specified constitutively, the internal energy with the {}\emph{free energy}:{} \begin{equation}\label{psi=e-s.theta} \psi=e-s\vartheta. \end{equation} Rewriting \eqref{qua} in terms of $\psi$ and $s$, and substituting it into \eqref{pippo}, one arrives at: \[ \DT\psi+s\DT\vartheta-\mu\DT\CHI\le {\bfsigma}:\DT\bfeps+\bbs\cdot\DT\bbm+\bbS:\nabla\DT\bbm-\mathbf h\cdot\nabla\mu -\frac 1 \vartheta\mathbf q\cdot\nabla\vartheta, \] where we have used the shorthand notation $\bfeps=\bfeps(\bfu)$. A {}standard argument due to Coleman and Noll \cite{Coleman1963} {}allows us to conclude that the free energy may depend at most on $(\bfeps,\bbm,\nabla\bbm,\CHI,\vartheta)$: \[ \psi=\varphi(\bfeps,\bbm,\nabla\bbm,\CHI,\vartheta). \] Moreover, if one assumes that entropy and chemical potential depend on the same list, one obtains \[ s=-\partial_\vartheta\varphi,\qquad \mu=\partial_\CHI\varphi. \] The dissipation inequality can further be written in a more compact form by introducing the splitting:\\\vspace*{-2em} \begin{align} &{\boldsymbol\sigma}=\partial_{\bfeps}\varphi+{\bfsigma}^{\rm d},\\ &{\bbs}=\partial_{\bbm}\varphi+\bbs^{\rm d},\\ &{\bbS}=\partial_{\nabla\bbm}\varphi+\bbS^{\rm d}. \end{align} With that splitting, one indeed obtains: \begin{equation}\label{zg} 0\le {\bfsigma}^{\rm d}:\DT\bfeps+\bbs^{\rm d}\cdot\DT\bbm+\bbS^{\rm d}:\nabla\DT\bbm-\bfh\cdot\nabla\mu -\frac 1 \vartheta\bfq\cdot\nabla\vartheta. \end{equation} \emph{Step 3: Constitutive equations.} To facilitate mathematical analysis but still capturing desired features, we restrict our attention to the following special constitutive ansatz: \begin{equation}\label{ansatz1} \varphi(\bfeps,\bbm,\nabla\bbm,\CHI,\vartheta)={\varphi_1}(\bbm,\CHI)+{\varphi_2}(\bfeps,\bbm)+ {\varphi_3}(\bbm,\vartheta)+\vartheta\varphi_4(\bbm,\bfeps)+ \frac\lambda2|\nabla\bbm|^2, \end{equation} where $\lambda>0$ is a length-scale parameter. This ansatz ensures, e.g., the heat capacity independent of the variables whose gradient is not directly controlled, i.e.\ $\bfeps$ and $\CHI$, and also the chemical potential independent of $\bfeps$ and $\vartheta$. 
The constitutive equations for entropy and chemical potential are \begin{subequations} \begin{align} &s=-\partial_\vartheta{\varphi_3}(\bbm,\vartheta)-\varphi_4(\bfeps,\bbm),\\ &\mu=\partial_\CHI{\varphi_1}(\bbm,\CHI). \end{align} \end{subequations} On defining {$\omega:=\varphi-\vartheta\partial_\vartheta\varphi$, in view of \eqref{ansatz1} we have} \begin{align}\label{def-of-omega} \omega(\bbm,\vartheta)= \varphi_3(\bbm,\vartheta)-\vartheta \partial_\vartheta\varphi_3(\bbm,\vartheta), \end{align} the constitutive equation for internal energy {$e=\psi+s\vartheta$, cf.\ \eqref{psi=e-s.theta},} is \[ e=\varphi_1(\bbm,\CHI)+\varphi_2(\bfeps,\bbm)+\omega(\bbm,\vartheta) +\frac\lambda2|\nabla\bbm|^2. \] As constitutive prescriptions for the dissipative parts of the auxiliary fields we choose\\[-1.2em] \begin{subequations}\label{interm} \begin{align} &{\boldsymbol\sigma}^{\rm d}=\mathbb D\bfeps(\DT\bfu),\\ &\bbs^{\rm d}\in \alpha\DT\bbm+\partial\zeta(\DT\bbm),\label{dissip-pot-m}\\ &\bbS^{\rm d}=0,\\ &\bfq=-\bbK(\bfeps,\bbm,\CHI,\vartheta)\nabla\vartheta,\label{aaa}\\ &\bfh=-\bbM(\bfeps,\bbm,\CHI,\vartheta)\nabla\mu.\label{bbb} \end{align} \end{subequations} Here $\mathbb D$ is a 2nd-order positive-definite viscosity-moduli tensor, $\alpha>0$ counts for rate effects in evolution of $\bbm$, $\zeta$ is a convex function homogeneous of degree one; note that $\zeta$ is typically nonsmooth at 0, which counts for activation of evolution of $\bbm$. Moreover, $\bbK$ and $\bbM$ are respectively 2nd-order positive-definite heat-conductivity and hydrogen-diffusivity tensors. We also eventually set $h=0$ and $\bbf=0$. We therefore arrive at the following system: \begin{subequations}\label{orig-syst} \begin{align} &\varrho\DDT\bfu- \mathrm{div}\big( \partial_{\bfeps}\varphi_2(\bfeps(\bfu),\bbm) +{\vartheta\partial_{\bfeps}\varphi_4(\bfeps(\bfu),\bbm)} +\mathbb D\bfeps(\DT{\bfu})\big)=\bff,\label{balmech-}\\ &\alpha\DT\bbm-\lambda\,\Delta \bbm +\partial_\bbm{\varphi_1}(\bbm,\CHI)+\partial_\bbm{\varphi_2}(\bfeps(\bfu),\bbm) \nonumber\\ &\qquad\qquad\qquad\qquad\ +\partial_\bbm{\varphi_3}(\bbm,\vartheta) +{\vartheta\partial_{\bbm}\varphi_4(\bfeps(\bfu),\bbm)} +\partial\zeta(\dot\chi)\ni0,\label{phasetransf} \\& \DT\CHI-\mathrm{div}\big(\bbM(\bfeps(\bfu),\bbm,\CHI,\vartheta)\nabla\mu\big)=0, \label{EEd-t2-} \\ &\DT w-\mathrm{div}\Big(\bbK(\bfeps(\bfu),\bbm,\CHI,\vartheta)\nabla \vartheta\Big)= \big(\mathbb D\bfeps(\DT\bfu) +{\vartheta\partial_{\bfeps}\varphi_4(\bfeps(\bfu),\bbm)}\big) {:}\bfeps(\DT\bfu)\nonumber \\&\null\qquad\qquad\qquad\qquad\ +\big(\alpha\DT\bbm+\partial_{\bbm}{\varphi_3}(\bbm,\vartheta) +{\vartheta\partial_{\bbm}\varphi_4(\bfeps(\bfu),\bbm)} \big){\cdot}\DT\bbm\nonumber \\&\null\qquad\qquad\qquad\qquad\ +\zeta(\DT\bbm)+\bbM(\bfeps(\bfu),\mathbbmss m,\CHI,\vartheta) \nabla\mu{\cdot}\nabla\mu+q, \\\label{def-of-mu} &\mu=\partial_\CHI{\varphi_1}(\bbm,\CHI), \\ &w=\omega(\bbm,\vartheta), \end{align} \end{subequations} {where $\omega$ is from \eqref{def-of-omega}.} We make the following {natural} assumption {which, in fact, says positivity of the heat capacity}: \begin{align} &\partial_\vartheta \omega={-\vartheta\partial^2_{\vartheta\vartheta}\varphi}= -\vartheta\partial^2_{\vartheta\vartheta}\varphi_3>0. \label{heatc} \end{align} Then, the inverse to $\omega(\bbm,\cdot)$ does exist and we can express $\vartheta$ as \[ \vartheta={[\omega(\bbm,\cdot)]^{-1}(w)=:}\theta(\bbm,w), \] which allows us to eliminate temperature $\vartheta$ from the system \eqref{orig-syst}. 
{} The symbol $\ni$ appearing in formula \eqref{phasetransf} (and in formulas \eqref{gilbmod-t}, \eqref{TGMd-2}, and \eqref{TGMd-2-comp} below) means that the right--hand side is included in the left--hand side, which is a set, since $\zeta$ and $\varphi_2$ are non--smooth (see also \eqref{def-of-phi2} below). {} Moreover, we will be a bit more specific in \eqref{ansatz1}. A typical contribution to the free energy is \begin{align}\label{basic-ansatz} {\frac12\bbC(\bfeps{-}\bfepstr\bbc{-}\vartheta\bfalpha) {:}(\bfeps{-}\bfepstr\bbc{-}\vartheta\bfalpha) +\frac k2\big|\bbc{-}a(\CHI)\big|^2+\phi_1(\CHI)+\phi_3(\vartheta) +\delta_K(\bbc)} \end{align} where $\bbC$ is a 4th-order {elastic-moduli} tensor, $\bfepstr$ is a 2nd-order tensor which incorporates the effect of dilation due to the microstructural parameter $\bbc$ within metal/hydride phase transformation, $\bfalpha$ is a tensor accounting for thermal dilation, $\phi_1$ and $\phi_3$ are the simplest variant of the contribution to the {chemical potential} and the heat capacity, respectively, and {$\delta_K$ is an indicator function of a convex set $K\subset\R^N$ from which the phase-field $\bbc$ is assumed to range. Assuming $\bbm$ is scalar-valued, $\bfepstr$ the unit matrix, and $k$ large, we get essentially an isotropic {\it swelling} controlled nonlinearly by the hydrogen concentration by $\bbc\sim a(\CHI)$ while allowing still $\varphi_1(\bbm,\cdot)$ to be uniformly convex, as needed later in \eqref{est-of-nabla-concentration}.} Obviously, {in view of \eqref{ansatz1}, the specific choice} \eqref{basic-ansatz} leads to \begin{subequations} \begin{align}\label{def-of-phi1} &{\varphi_1(\bbm,\CHI)=\frac k2\big|\bbc{-}a(\CHI)\big|^2+\phi_1(\CHI),} \\\label{def-of-phi2} &{\varphi_2}(\bfeps,\bbm) =\frac12\bbC (\bfeps{-}\bfepstr\bbc){:}(\bfeps{-}\bfepstr\bbc)+\delta_K(\bbc) ,\\\label{def-of-phi3} & \varphi_3(\bbm,\vartheta)= \frac12\vartheta^2\bbC\bfalpha{:}\bfalpha +\vartheta\bbC\bfalpha{:}\bfepstr\bbc+\phi_3(\vartheta), \\ &\varphi_4(\bfeps,\bbm)= -\bbC\bfalpha{:}\bfeps. \end{align} \end{subequations} Note that, {in \eqref{def-of-phi3}, $\partial_{\vartheta\vartheta\bbm}^3\varphi_3\equiv0$,} which makes the heat capacity independent of $\bbm$, but we can consider more generally the contribution $\phi_3$ dependent also on $\bbm$ to reflect different heat capacity of metal and of hydride, and therefore we do not restrict ourselves to a particular form of $\varphi_3$ in \eqref{ansatz1} but make only certain technical assumptions below, {cf.\ \eqref{q2}.} Similarly, we keep the treatment of $\varphi_1$ in a nonspecified way. 
{In fact, the specific form \eqref{def-of-phi2} is a simplified linearization of the so-called St.Venant-Kirchhoff potential but, when derived from the St.Venant-Kirchhoff form by linearizing the stress response, it results still in some other terms, cf.\ \cite[Sect.\,5.4]{DreDud??MSIG}, which here was neglected rather for notational simplicity.} Thus, on setting\\[-1.3em] \begin{subequations} \begin{align} &\bfsigma_{\rm a}(\bbm,w):=-\theta(\bbm,w)\bbC\bfalpha,\\ &{\bbs_{\rm a}}(\bbm,w):=\partial_\bbm{\varphi_3}(\bbm,\theta(\bbm,w)) , \\ &\mathsf K(\bfeps,\bbm,\CHI,w):=\bbK(\bfeps,\bbm,\CHI,\theta(\bbm,w)),\\ &\mathsf{L}(\bfeps,\bbm,\CHI,w):= \bbK(\bfeps,\bbm,\CHI,\theta(\bbm,w))\otimes\partial_\bbm\theta(\bbm,w),\\ &\mathsf M(\bfeps,\bbm,\CHI,w):=\bbM(\bfeps,\bbm,\CHI,\theta(\bbm,w)), \end{align} \end{subequations} the original system \eqref{orig-syst} is transformed into \begin{subequations}\label{BVP-t} \begin{align}\label{balmech} &\!\!\varrho\DDT\bfu- \mathrm{div}\big( {\bbC(\bfeps(\bfu){-}\bfepstr\bbm)} +{\bfsigma_{\rm a}}(\bbm,w)+\mathbb D\bfeps(\DT\bfu)\big)=\bff, \\&\!\! \alpha\DT\bbm+\partial\zeta(\DT\bbm)-\lambda\,\Delta \bbm\! \label{gilbmod-t} +\partial_\bbm{\varphi_1}(\bbm,\CHI){+} {\bfepstr^\top\bbC(\bfepstr\bbm{-}\bfeps(\bfu))} {+}{\bbs_{\rm a}}(\bbm,w){+}N_K^{}(\bbm) \ni0, \\ &\!\!\DT\CHI-\mathrm{div}\big(\mathsf M(\bfeps(\bfu),\bbm,\CHI,w)\nabla\mu\big)=0, \label{EEd-t2} \\&\!\!\DT w-\mathrm{div}\big(\mathsf K(\bfeps(\bfu),\bbm,\CHI,w) \nabla w{+}\mathsf{L}(\bfeps(\bfu),\bbm,\CHI,w) \nabla\bbm\big)=\big({\bfsigma_{\rm a}}(\bbm,w){+} \mathbbm D\bfeps(\DT\bfu)\big){:}\bfeps(\DT{\bfu}) \nonumber \\&\qquad\qquad +\big(\bbs_{\rm a}(\bbm,w)+\alpha\DT\bbm\big){\cdot}\DT\bbm +\zeta(\DT\bbm) +\mathsf M(\bfeps(\bfu),\mathbbmss m,\CHI,w)\nabla \mu{\cdot}\nabla\mu+q, \label{heatequation} \\&\!\mu=\partial_\CHI{\varphi_1}(\bbm,\CHI),\label{chempot} \end{align} \end{subequations} {where $N_K^{}=\partial\delta_K$ denotes standardly the normal cone to the convex set $K$.} The boundary conditions {\eqref{boundary} now} take the form: \begin{subequations}\label{bcc} \begin{align} &\big( \bbC(\bfeps(\bfu){-}\bfepstr\bbm) +{\bfsigma_{\rm a}}(\bbm,w)+\mathbb D\bfeps(\DT\bfu)\big)\mathbf n =\bff_{\rm s}, \label{BC-t-1--} \\ & \, \DELETE{\mathsf M(\bfeps(\bfu),\bbm,\CHI,w)} \nabla{\bbm} \cdot{\mathbf n}=0, \\ &\label{bc34} \,\mathsf M(\bfeps(\bfu),\bbm,\CHI,w)\nabla \mu \cdot\mathbf n=h_s, \\ &\label{BC-t-2} \big(\mathsf K(\bfeps(\bfu),\bbm,\CHI,w) \nabla w+\mathsf{L}(\bfeps(\bfu),\bbm,\CHI,w) \nabla{\bbm}\big)\cdot\mathbf n=q_{\rm s}.\end{align} Using the convention like $\bfu(\cdot,t)=:\bfu(t)$, we complete the system by the initial con\-ditions:\\[-1.3em] \begin{align} &\label{IC} \bfu(0)=\bfu_0,\qquad\DT\bfu(0)=\bfv_0, \qquad\bbm(0)=\bbm_0,\qquad\CHI(0)=\CHI_0,\qquad w(0)=w_0, \end{align} \end{subequations} where we have set $w_0=\omega(\bbm_0,\vartheta_0).$ We henceforth shall use the abbreviation for {the so-called stored energy, i.e.\ the temperature independent part} of the free energy: \begin{align}\label{def-of-phi12} \varphi_{12}(\bfeps,\bbm,\CHI):= {\varphi_1}(\bbm,\CHI)+{\varphi_2}(\bfeps,\bbm) \end{align} By testing {(\ref{BVP-t}a,b,c,d)} respectively with $\DT\bfu$, $\DT\bbm$, $\mu$, and by a constant $\nu$, integrating by parts {in time, using Green's formula with} the boundary conditions \eqref{bcc}, and taking into account \eqref{chempot} so that\\[-1.3em] \begin{align}\label{fundamental-test} \DT\CHI\mu=\frac{\partial}{\partial t}\varphi_1(\bbm,\CHI) 
-\partial_\bbm\varphi_1(\bbm,\CHI)\DT\bbm, \end{align} we obtain the following identity: \begin{align}\nonumber &\int_\Omega \frac\r2\big|\DT\bfu(t)\big|^2 +\varphi_{12}(\bfeps(t),\bbm(t),\CHI(t))+\frac\lambda2|\nabla\bbm(t)|^2 +\nu w(t) \,\d x \\[-.3em]&\nonumber \quad +(1{-}\nu)\int_0^t\!\!\int_\Omega\! \bbD\bfeps(\DT\bfu){:}\bfeps(\DT\bfu)+\alpha\big|\DT\bbm\big|^2 {+\zeta\big(\DT\bbm\big)} +\mathsf M(\bfeps(\bfu),\bbm,\CHI, w)\nabla\mu{\cdot}\nabla\mu\,\d x\d t \\\nonumber& =(1{-}\nu)\int_0^t\!\!\int_\Omega \bfsigma_{\rm a}(\bbm,w){:}\bfeps(\DT\bfu)+\bbs_{\rm a}(\bbm,w){\cdot}\DT\bbm \,\d x\d t \\\nonumber &\quad +\nu \int_0^t\!\!\int_\Omega q \,\d x\d t+\int_0^t\!\bigg(\int_\Omega \bff{\cdot}\DT\bfu\,\d x +\int_\Gamma{\bff_{\rm s}}{\cdot}\DT\bfu +{h_{\rm s}}{\cdot}\DT\mu+{\nu} {q_{\rm s}}\,\d S \bigg)\d t \\&\quad +\nu\int_\Omega w_0\,\d x+\int_\Omega\frac\r2|\bfv_0|^2 +\varphi_{12}(\bfeps(\bfu_0),\bbm_0,\CHI_0)+\frac\lambda2|\nabla\bbm_0|^2 \,\d x.\label{balance3} \end{align} For $\nu=0$, the identity \eqref{balance3} represents the mechanical energy balance. For $0<\nu<1$, both the internal energy and dissipative terms are seen; henceforth, a discrete version of this estimate will be used in the proof of Lemma \ref{lem-2} for $\nu=1/2$. Eventually, for $\nu=1$, we recover the standard total-energy balance; note that the dissipative terms (and also adiabatic terms) then vanish. \begin{remark}\label{rem-simplifications} Of course, the above model adopted a lot of simplifications of the actual situations in hydride {}storage{}. In particular the concept of small strains may be questionable at some situations, possible damage is essentially neglected, although formally it can be involved in a general form of $\varphi_2$ in \eqref{ansatz1} below but a lot of analytical considerations seem to be difficult to be straightforwardly adapted. Also temperature dependence of the chemical potential of hydrogen is neglected, i.e.\ $\phi_1$ in \eqref{ansatz1} does not depend of $\theta$. Further, the chemical reaction in the multi-component system metal/hydrid/hydrogen is basically neglected and hydride is modeled as a mixture of metal and hydrogen with essentially the possibility to obtain the same thermomechanical response of the phase transformation as the corresponding chemical reaction (and, in addition, we can easily model the activated hysteretic response related with this phase transformation). \end{remark} \begin{remark}\label{rem-RM} {{}The {}thermodynamics of our model {}follows {}essentially a classical approach based on rational mechanics and Clausius-Duhem inequality, cf.\ e.g.\ \cite{Bowe76CP}. There are some variants of this general scenario \cite{DegMaz84NET,LeJoCa08UNET,KjeBed08NETH,Mull85T} which are to some extent equivalent under some simplifications like those in Remark~\ref{rem-simplifications} above. 
} \end{remark} \section{Weak solutions and their existence}\label{sec-results} \def\deltauu{\frac{\bfu\kt-\bfu\kkt}\tau} \def\deltau{\dtt\kt\bfu} \def\deltauuu{\dtt_\tau^{k-1}\bfu} \def\deltam{\dtt\kt\bbm} \def\deltamm{\frac{\bbm\kt-\bbm\kkt}\tau} \def\deltac{\dtt\kt\bbc} \def\deltad{\dtt\kt\bbd} \def\deltaee{\bfeps(\frac{\bfu\kt-\bfu\kkt}\tau)} \def\deltae{\bfeps(\deltau)} \def\deltaeee{\dtt\kt\bfeps} \def\deltaxx{\frac{\CHI^k_\tau-\CHI\kkt}\tau} \def\deltax{\dtt\kt\CHI} \def\deltaww{\frac{w^k_\tau-w\kkt}\tau} \def\deltaw{\dtt\kt w} \def\vphi{\varphi_{12}} Let us summarize the qualification on the data on which we will rely during the analysis of the initial-boundary-value problem \eqref{BVP-t}. We confine ourselves to the (physically relevant) three-dimensional case. We consider a fixed time horizon $T$ and abbreviate $Q:=\Omega{\times}(0,T)$ and $\Sigma:=\Gamma{\times}(0,T)$, with $\Gamma$ a boundary of the domain $\Omega\subset\R^3$ assumed Lipschitz. {In the following, we shall use some classical notation for function spaces, in particular the Lebesgue spaces $L^p$, the Sobolev spaces $W^{k,p}$ and, in particular, $H^k=W^{k,2}$, and vector-valued functions.} We suppose that \begin{subequations}\label{DQ} \begin{align}\nonumber &\bbC,\ \mathbb D,\ \partial^2_{\CHI\CHI}{\varphi_1}(\bbm,\CHI),\ \mathsf M(\bfeps,\bbm,\CHI,w),\ \mathsf K(\bfeps,\bbm,\CHI,w) \\&\hspace*{12em}\text{are (uniformly) positive definite},\label{pos1} \\\label{growth-of-phi1} &\max\big(|\varphi_1(\bbm,\CHI)|, |\partial_\bbm\varphi_1(\bbm,\CHI)|\big)\le C\big(1 +|\CHI|^3\big), \\&|\mathsf M(\bfeps,\bbm,\CHI,w)\partial^2_{\CHI\CHI}\varphi_1(\bbm,\CHI)| \le C(1+|\CHI|^{6-\epsilon}),\label{growth1} \\\label{semiconvexity-phi0} &\varphi_1\in {\rm C}^2(K{\times}\R^+;\R),\ \ \partial^2_{\bbm\bbm}\varphi_1(\bbm,\CHI)\ \ \text{ is bounded from below}, \\\label{bbg} &\partial^2_{\bbm\CHI}\varphi_1(\bbm,\CHI)\ \ \text{is bounded, and }\ { \partial^2_{\bbm\CHI}\varphi_1(\bbm,\CHI)=0\ \text{ for }\CHI=0,} \\\label{Mbound} &\mathsf M(\bfeps,\bbm,\CHI,w)\ \ \text{is bounded}, \\\label{phi0-coerc} &\exists\epsilon>0:\qquad\varphi_1(\bbm,\CHI)\ge {\epsilon|\CHI|^2}, \ \ \ \ \varphi_1(\bbm,\cdot)\text{ convex}, \\ & \zeta:\R^N\to\R\text{ convex 1-homogeneous.} \\&{K\subset\R^N\ \text{ bounded, convex, closed,}} \\\label{q1} &{\bfsigma_{\rm a}\in C(K{\times}\R^+;\R^N),}\qquad\ |\bfsigma_{\rm a}(\bbm,w)|\le C\sqrt{1+\varphi_1(\bbm,\CHI)+w},\\ \label{q2} &{\bbs_{\rm a}\in C(K{\times}\R^+;\R^{3\times3}),}\qquad |\bbs_{\rm a}(\bbm,w)|\le C\sqrt{1+\varphi_1(\bbm,\CHI)+w},\\ \label{q3} &|\mathsf L(\bfeps,\bbm,\CHI,w)|\le C\sqrt{1+w}. \end{align} {}Moreover, we need the qualification of the right-hand sides and the initial data: \begin{align} &\bff\!\in\!L^2(I;L^{6/5}(\Omega;\R^3)),\quad\bff_{\rm s}\!\in\!L^2(I;L^{4/3}(\Gamma;\R^3)), \quad q\!\in\!L^1(\Omega), \quad q_{\rm s}\!\in\!L^1(\Gamma), \\\label{IC-ass-1} & \bfu_0\!\in\!H^1(\Omega;\R^3),\ \ \bfv_0\!\in\!L^2(\Omega;\R^3),\ \ \bbm_0\!\in\!H^1(\Omega;\R^N),\ \ \ {\bbm_0\!\in\!K\text{ a.e.\ on }\Omega,} \\\label{IC-ass-2} &\CHI_0\!\in\!H^1(\Omega),\ \ \ {\CHI_0\ge 0,}\ \ \ w_0\!\in\!L^1(\Omega),\ \ \ {w_0\ge 0}. \end{align} \end{subequations} {We note that \eqref{q3} will be used in the derivation of the estimate on $\nabla w$, see \eqref{est-of-nabla-w} below. 
Also note that (\ref{DQ}a,d,e) is not in conflict with the example \eqref{def-of-phi1} where $\partial^2_{\bbm\CHI}\varphi_1(\bbm,\CHI)=-ka'(\CHI)$ and $\partial^2_{\CHI\CHI}\varphi_1(\bbm,\CHI)=k(a(\CHI){-}\bbm)a''(\CHI) +\phi_1''(\CHI)+ka'(\CHI)^2$, while $\partial^2_{\bbm\bbm}\varphi_1(\bbm,\CHI)=k$ so that \eqref{pos1} needs $k(a(\CHI){-}\bbm)a''(\CHI) +\phi_1''(\CHI)+ka'(\CHI)^2\ge\epsilon>0$, \eqref{bbg} needs $a'$ bounded with $a'(0)=0$ while \eqref{semiconvexity-phi0} is here satisfied automatically. } \begin{definition}[{\sc Weak solutions}]\label{def} We say that the six-tuple $(\bfu,\bbm,\CHI,w,\mu,\xi)$ with \begin{subequations}\begin{align} &\bfu\in H^1(I;H^1(\Omega;\R^3)),\quad \\&\bbm\in L^\infty(I;H^1(\Omega;\R^N)) \cap{H^1(I;L^2(\Omega;\R^N))},\quad \\&\CHI\in L^\infty(I;L^2(\Omega)), \\&w\in L^\infty(I;L^1(\Omega))\cap L^1(I;W^{1,1}(\Omega)),\quad \\&\mu\in L^\infty(I;H^1(\Omega)), \\&\xi\in L^2(Q;\R^N),\qquad \xi\in N_K^{}(\bbm)\ \ \text{ a.e.\ on }\ Q, \end{align}\end{subequations} is a weak solution to the initial-boundary-value problem \eqref{BVP-t}--\eqref{bcc} if \begin{subequations} \begin{align}\nonumber \int_Q\! \big( \bbC(\bfeps(\bfu){-}\bfepstr\bbm) +\bfsigma_{\rm a}(\bbm,w)+\mathbb D\bfeps(\DT\bfu)\big){:}\bfeps(\bfz) -\varrho\DT\bfu{\cdot} \DT\bfz\,\d x\d t\qquad\qquad\qquad\quad \\[-.8em] =\int_Q\!\bff{\cdot}\bfz\,\d x\d t +\int_\Sigma\!\bff_{\rm s}{\cdot}\bfz\,\d S\d t +\int_\Omega\!\bfv_0{\cdot}\bfz(0)\,\d x \end{align} for any $\bfz\in C^1(\overline Q;\R^3)$ such that $\bfz(T)=0$, \begin{align}\nonumber &\int_Q\!\zeta(\bbv)+ \big(\alpha\DT\bbm+ \partial_\bbm{\varphi_1}(\bbm,\CHI) +{\bfepstr^\top\bbC(\bfepstr\bbm{-}\bfeps(\bfu))} +\bbs_{\rm a}(\bbm,\CHI) +{\xi} \big){\cdot}(\bbv{-}\DT\bbm) \\[-.7em]&\label{def-of-m}\qquad +\lambda\nabla\bbm{:} \nabla\bbv \,\d x\d t+\int_\Omega\!\frac\lambda2|\nabla\bbm_0|^2\,\d x \ge\int_Q \zeta(\DT\bbm)\,\d x\d t+\int_\Omega\!\frac\lambda2|\nabla\bbm(T)|^2\,\d x \end{align} for any $\bbv\in C^1(\overline Q;\R^N)$, \begin{align}\label{def-of-chi} \int_Q\!\bbM(\bfeps(\bfu),\bbm,\CHI,\vartheta)\nabla\mu{\cdot} \nabla v-\CHI\DT v\,\d x\d t=\int_\Sigma h_sv\,\d S\d t+\int_\Omega\CHI_0v\,\d x \end{align} for all $v\in C^1(\overline Q)$ with $v(T)=0$, \begin{align}\nonumber &\int_Q \Big(\mathsf K(\bfeps(\bfu),\bbm,\CHI,w)\nabla w+\mathsf L(\bfeps(\bfu),\bbm,\CHI,w)\nabla\bbm\Big)\cdot\nabla w-w\dot v\, \d x\d t \\[-.7em] &\quad=\int_\Omega w_0 v(0)\,\d x+\int_\Sigma q_s v\, \d S \d t+\int_Q \Big(q+\big(\sigma_{\rm a}(\bbm,w)+\mathbb D\bfeps(\dot\bfu)\big){:}\bfeps(\dot\bfu) \nonumber \\[-.6em] &\quad\ \ \ \ \ \ \ \ \ +\big(\bbs_{\rm a}(\bbm,w)+\alpha\dot\bbm\big){\cdot}\dot\bbm+\zeta(\dot\bbm) \mathsf M(\bfeps(\bfu),\bbm,\CHI,w)\nabla\mu{\cdot}\nabla\mu\Big)v\,\d x\d t \end{align} for all $v\in C^1(\overline Q)$ such that $v(T)=0$, and eventually \begin{align} \mu=\partial_\CHI\varphi_1(\CHI,\bbm)\quad\text{ a.e. in }Q. \end{align} \end{subequations} \end{definition} The above definition obviously arises from \eqref{BVP-t}--\eqref{bcc} by using standard concept. The inequality \eqref{def-of-m} arises by using additionally the identity \begin{align}\label{by-part-for-m} \int_Q \Delta\bbm{\cdot}\DT\bbm\,\d x\d t =\frac12\int_\Omega\big|\nabla\bbm(0)\big|^2-\big|\nabla\bbm(T)\big|^2\,\d x, \end{align} which can rigorously be justified if $\Delta\bbm\in L^2(Q;\R^N)$, cf.~\cite[Formula (3.69)]{Podio-Guidugli2010}. 
At this occasion, let us emphasize that $\nabla\DT\bbm$ is not well defined as a function on $Q$ so that we avoid using $\int_Q\nabla\bbm{:}\nabla(\bbv{-}\DT\bbm)\,\d x\d t$. \begin{theorem}[{\sc Existence of weak solutions}]\label{thm} {Let assumptions \eqref{DQ} hold true.} Then \eqref{BVP-t}--\eqref{bcc} has at least one weak solution $(\bfu,\bbm,\CHI,w,\mu)$ according Definition~\ref{def}. Moreover, { \begin{subequations}\label{additional}\begin{align} &\r\DDT\bfu\in L^2(I;H^1(\O;\R^3)^*), \\ &\DT w\!\in\!L^1(I;H^3(\O)^*)\ \text{ and }\ w\!\in\!L^r(I;W^{1,r}(\Omega)) \ \text{ for any }1\le r<5/4, \\ &\DT\CHI\in L^2(I;H^1(\O)^*), \\\label{additional-1} &\Delta\bbm\in L^2(Q;\R^N), \end{align}\end{subequations} and {this} solution is consistent with the energy conservation equation \eqref{balance3} with $\nu=1$, as well as with $\CHI\ge0$ and $w\ge0$ on $Q$. If $\Omega$ is smooth, then even $\bbm\in L^2(I;H^2(\Omega;\R^N))$. } \end{theorem} We will prove this theorem in Section~\ref{sec-discrete} by a semi-implicit time discretisation in a more or less constructive manner, except the fixed-point argument behind the boundary-value sub-problems \eqref{gilbmod-disc}--\eqref{bc34} and \eqref{heatequation}--\eqref{BC-t-2-2} and selection of converging subsequences. {The additional properties \eqref{additional} follow from \eqref{est-of-nabla-w} and \eqref{apriori-II+}. The $H^2$-regularity of $\bbm$ is a standard consequence of \eqref{additional-1}. For more detailed modes of convergence of the mentioned approximate solutions we refer to \eqref{15.-strong} and \eqref{15.-strong+} below.} \section{Analysis of \eqref{BVP-t}--\eqref{bcc} by semidiscretisation in time} \label{sec-discrete} We will prove existence of a weak solution to the initial-boundary-value problem \eqref{BVP-t} by a carefully constructed semi-implicit discretisation in time which, at the same time, will decouple the system to a sequence of convex minimization problems combined with a diffusion equation, and provide thus a rather efficient conceptual numerical strategy. In comparison with the fully implicit time discretisation (i.e.\ the so-called Rothe method), our discretisation will allow for a simpler (and constructive) proof of existence of the discrete solutions, weaker assumptions about convexity mode of the stored energy, but we need to impose a bit stronger growth qualification of the data than required by the nature of the continuous system \eqref{BVP-t}. We use an equidistant partition of the time interval $I=[0,T]$ with a time step $\tau>0$, assuming $T/\tau\in\N$, and denote $\{\bfu\kt\}_{k=0}^{T/\tau}$ an approximation of the desired values $\bfu(k\tau)$, and similarly $\bbm\kt$ is to approximate $\bbm(k\tau)$, etc. Further, let us abbreviate by $\dtt\kt$ the backward difference operator, i.e.\ e.g.\ $\deltau:=\deltauu$, and similarly also $[\dtt\kt]^2\bfu=\dtt\kt[\deltau]=\frac{{\bfu}\kt{-} 2{\bfu}\kkt{+}{\bfu}_\tau^{k-2}}{\tau^2}$, or $\deltam:=\deltamm$, $\deltax:=\deltaxx$, etc. 
Then, using also notation \eqref{def-of-phi12}, we devise the following semi-implicit discretisation: \allowdisplaybreaks[1] \begin{subequations}\label{TGMd} \begin{align} &\varrho[\dtt\kt]^2\bfu -{\rm div}\Big(\bbC(\bfeps(\bfu\kt){-}\bfepstr\bbc\kt ) +{\bfsigma_{\rm a}}({\bbm\kkt},w\kkt) \label{TGMd-1} +\mathbb D\bfeps(\deltau)\Big)=\bff\kt, \\& \alpha\deltac +\partial\zeta\left(\deltac\right) -\lambda\Delta\bbc\kt +\partial_\bbc{\varphi_1}(\bbc\kt,\CHI\kkt) +\bfepstr^\top\bbC(\bfepstr\bbc\kt{-}\bfeps(\bfu\kt)) \nonumber\\& \hspace{6em} +{\bbs_{\rm a}}({\bbm\kkt},w\kkt)+\xi\kt\ni0 \quad\text{ with } \quad\xi\kt\in N_K^{}(\bbc\kt), \label{TGMd-2} \\& \dtt\kt\CHI-\textrm{div}\big(\mathsf M(\bfeps(\bfu\kt), \bbm\kt,\CHI\kt,w\kkt)\nabla\mu\kt\big)=0,\label{gilbmod-disc} \\ & \dtt\kt w-\mathrm{div}\big( \mathsf{K}(\bfeps(\bfu\kt),\bbm\kt,\CHI\kt,w\kt)\nabla w\kt +\mathsf{L}(\bfeps(\bfu\kt),\bbm\kt,\CHI\kt,w\kt)\nabla\bbm\kt \big)\nonumber \\[-.3em]&\hspace{6em}=q\kt+\big({\bfsigma_{\rm a}}({\bbm\kkt},w\kt)+ \frac{\mathbbm D\bfeps(\deltau)\big){:}\bfeps(\deltau)} {{1+\tau|\bfeps(\deltau)|^2}} \nonumber \\[-.3em] &\hspace{6em}+\big(\bbs_{\rm a}({\bbm\kkt},w\kt) +\alpha\deltam\big)\cdot\deltam \nonumber \\[-.3em]\label{heatequation-disc}&\hspace{6em}+\zeta(\deltam) +\frac{\mathsf M(\bfeps(\bfu\kt),\bbm\kt,\CHI\kt,w\kkt) \nabla\mu\kt\cdot\nabla\mu\kt}{1+\tau|\nabla\mu\kt|^2}, \\ &\mu\kt=\partial_{\CHI}{\varphi_1}(\bbm\kt,\CHI\kt), \end{align} \end{subequations} for $k=1,...,T/\tau$, together with the boundary conditions \begin{subequations}\label{BC-t} \begin{align} &\big(\bbC(\bfeps(\bfu\kt){-}\bfepstr\bbc\kt)+\bfsigma_{\rm a}({\bbm\kkt},w\kkt) +\mathbb D\bfeps(\deltau) \big)\mathbf n=\bff_{\rm s,\tau}^k, \label{BC-t-1-} \\\label{BC-TGMd-2} & \frac{\partial\bbc\kt}{\partial\mathbf n}=0, \\ &\label{bc34+} \mathsf M(\bfeps(\bfu\kt),\bbm\kt,{\CHI\kt},w\kkt)\nabla\mu\kt \cdot\mathbf n=h_{\rm s,\tau}^k, \\ &\label{BC-t-2-2} \big( \mathsf{K}({{}\bfeps(\bfu\kt),\bbm\kt},\CHI\kt,{w\kt})\nabla w\kt +\mathsf{L}(\bfeps(\bfu\kt),\bbm\kt,\CHI\kt,{w\kt})\nabla{\bbm\kt} \big){\cdot}\mathbf n=q_{\rm s,\tau}^k, \end{align} \end{subequations} starting from $k=1$ by using \begin{align}\label{IC2} \!\!{\bfu}_\tau^0= {\bfu}_0,\quad\ {\bfu}_\tau^{-1}= {\bfu}_0{-}\tau\mathbf v_0,\quad\ \bbm_\tau^0=\bbm_0, \quad\ \CHI_\tau^0=\CHI_0,\quad\ {w}_\tau^0=\omega(m_0,\vartheta_0). \end{align} An important feature of the scheme \eqref{TGMd} is that it decouples to {three} boundary-value problems, which (after a further spatial discretisation) can advantageously be used in a numerical treatment and which is advantageously used even to show existence of approximate solutions: \begin{lemma}[{\sc Existence of the discrete solution}]\label{lem-1} Let \eqref{DQ} hold and $\tau>0$ be small enough, {cf.\ \eqref{tau-small} below.} Then, for any $k=1,...,T/\tau$, \eqref{TGMd} possesses a solution $\bfu\kt\in H^1(\Omega;\R^3)$, $\bbm\kt\in H^1(\Omega;\R^N)$, $\xi\kt\in L^2(\Omega;\R^N)$, $\CHI\kt,\,\mu\kt,\,w\kt\in H^1(\Omega)$ such that {$\CHI\kt\ge0$} and $w\kt\ge0$. \end{lemma} \noindent{\it Proof}. 
The first boundary-value problem arising by the decoupling is (\ref{TGMd}a,b) with (\ref{BC-t}a,b), and it leads to the minimization of the functional: \begin{align}\nonumber &(\bfu,\bbc)\mapsto\int_\Omega \delta_K(\bbc)+\varphi_{12}(\bfeps(\bfu),\bbc,\CHI\kkt) +\frac\lambda2|\nabla\bbc|^2 +\frac{\tau^2}2\varrho\Big|\frac{\bfu-2\bfu\kkt+\bfu_\tau^{k-2}}{\tau^2}\Big|^2 \\&\qquad\quad\nonumber +\frac\tau2\mathbb D\bfeps\big(\frac{\bfu{-}\bfu\kkt}\tau\big){:} \bfeps\big(\frac{\bfu{-}\bfu\kkt}\tau\big) +\alpha\frac\tau2\Big|\frac{\bbc{-}\bbc\kkt}\tau\Big|^2 +\tau\zeta\Big(\frac{\bbc{-}\bbc\kkt}\tau\Big) \\ &\qquad\quad +{\bfsigma_{\rm a}}(\bbm\kkt,w\kkt){:}\bfeps(\bfu) \label{minimization-u-m} +\bbs_{\rm a}(\bbm\kkt,w\kkt){\cdot}\bbc -\bff\kt{\cdot}\bfu\,\d x-\int_\Gamma\bff_{\rm s,\tau}^k{\cdot}\bfu\,\d S, \end{align} where the signs of the load terms are chosen so that the Euler--Lagrange conditions reproduce (\ref{TGMd}a,b) with (\ref{BC-t}a,b). Due to \eqref{def-of-phi2} and \eqref{semiconvexity-phi0}, $\varphi_{12}(\cdot,\cdot,\CHI\kkt)$ and thus the whole functional \eqref{minimization-u-m} are strictly convex for $\tau>0$ sufficiently small, namely for \begin{align}\label{tau-small} \tau\le\tau_1:= \begin{cases}\displaystyle{\min\Big(T,\frac{\alpha^2} {|\inf\partial_{\bbc\bbc}^2\varphi_1|^2}\Big)} &\text{if }\inf\partial_{\bbc\bbc}^2\varphi_1<0,\\[-.3em] T&\text{otherwise}.\end{cases} \end{align} Therefore, there exists a unique minimizer $(\bfu\kt,\bbc\kt)\!\in\!H^1(\Omega;\R^3)\!\times\!H^1(\Omega;\R^N)$. By standard arguments, cf.\ e.g.\ \cite[Chap.\,2 and 4]{Roubicek2013}, this minimizer, when completed by $\xi\kt\!\in\!N_K(\bbc\kt)$, is a weak solution to (\ref{TGMd}a,b)--(\ref{BC-t}a,b). Moreover, $\xi\kt\!\in\!L^2(\Omega;\R^N)$ can be shown by the same arguments as in Lemma~\ref{lem-1+} below. Then one can solve \eqref{gilbmod-disc}--\eqref{bc34+}, which represents a semi-linear diffusion equation. We observe that we can eliminate $\mu\kt$ because obviously \begin{align}\label{mu-m-chi} \nabla\mu\kt=\partial_{\CHI\bbm}^2{\varphi_1}(\bbm\kt,\CHI\kt)\nabla\bbm\kt +\partial_{\CHI\CHI}^2{\varphi_1}(\bbm\kt,\CHI\kt)\nabla\CHI\kt, \end{align} which leads us to abbreviate \begin{subequations}\label{def-of-Ms} \begin{align} &{\mathsf M}_1(\bfeps,\bbm,\CHI,w):= \partial_{\CHI\CHI}^2{\varphi_1}(\bbm,\CHI)\mathsf M(\bfeps,\bbm,\CHI,w), \\&\mathsf M_2(\bfeps,\bbm,\CHI,w):= \mathsf M(\bfeps,\bbm,\CHI,w)\otimes\partial_{\CHI\bbm}^2{\varphi_1}(\bbm,\CHI). \end{align} \end{subequations} Then \eqref{gilbmod-disc}--\eqref{bc34+} transforms to the semi-linear boundary-value problem: \begin{subequations}\label{BVP-for-chi} \begin{align} & \deltax-\textrm{div} \big({\mathsf M}_1(\bfeps(\bfu\kt),\bbm\kt,\CHI\kt,w\kkt)\nabla\CHI\kt \!+{\mathsf M}_2(\bfeps(\bfu\kt),\bbm\kt,\CHI\kt,w\kkt)\nabla\bbm\kt\big)=0 \intertext{on $\Omega$ together with the boundary condition on $\Gamma$:} & \big( {\mathsf M}_1(\bfeps(\bfu\kt), \bbm\kt,\CHI\kt,w\kkt)\nabla\CHI\kt+{\mathsf M}_{2}(\bfeps(\bfu\kt), \bbm\kt,\CHI\kt,w\kkt)\nabla\bbm\kt\big){\cdot}\mathbf n =h_{\rm s,\tau}^k. \end{align} \end{subequations} Due to the fully implicit discretisation of $\partial_{\CHI}{\varphi_1}(\bbm,\CHI)$, which is needed for the a-priori estimates in Lemma~\ref{lem-2} below, via \eqref{def-of-Ms} we inevitably obtain the dependence of $\mathsf M_{1,2}(\bfeps(\bfu\kt),\bbm\kt,\CHI\kt,w\kkt)$ on $\CHI\kt$, so that the problem \eqref{BVP-for-chi} unfortunately does not possess any potential. Anyhow, thanks to \eqref{pos1}, the 2nd-order tensor $\mathsf M_1$ is uniformly positive definite. Also, by \eqref{bbg} and \eqref{Mbound}, $\mathsf M_2$ is bounded.
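For the reader's convenience, let us spell out \eqref{mu-m-chi} componentwise: it is nothing but the spatial chain rule applied to $\mu\kt=\partial_\CHI\varphi_1(\bbm\kt,\CHI\kt)$, namely
\begin{align*}
\frac{\partial\mu\kt}{\partial x_l}
=\sum_{j=1}^N\partial^2_{\CHI\bbm_j}\varphi_1(\bbm\kt,\CHI\kt)\,
\frac{\partial[\bbm\kt]_j^{}}{\partial x_l}
+\partial^2_{\CHI\CHI}\varphi_1(\bbm\kt,\CHI\kt)\,
\frac{\partial\CHI\kt}{\partial x_l}\,,\qquad l=1,2,3,
\end{align*}
which also explains the tensorial structure of $\mathsf M_1$ and $\mathsf M_2$ in \eqref{def-of-Ms}; the just-observed uniform positive definiteness of $\mathsf M_1$ and boundedness of $\mathsf M_2$ are precisely the structural properties needed below.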
As a consequence, the nonlinear operator $A:H^1(\Omega)\to H^1(\Omega)^*$ defined by \begin{align}\nonumber \big\langle A(\CHI),\bbv\big\rangle:= \int_\Omega\frac1\tau\CHI\bbv&+ \Big(\mathsf M_{1}(\bfeps(\bfu\kt),\bbm\kt,\CHI,w\kkt)\nabla\CHI \\[-.3em]&\ \ + \mathsf M_{2}(\bfeps(\bfu\kt),\bbm\kt,\CHI,w\kkt)\nabla\bbm\kt\Big) {\cdot}\nabla\bbv\,\d x \end{align} is coercive. Thanks to Assumption \eqref{growth1}, $A$ is also weakly continuous, so that existence of a solution $\CHI\kt\in H^1(\Omega)$ can be obtained by the Galerkin method and the Brouwer fixed-point theorem; of course, the solution obtained in this way need not be unique. Testing \eqref{BVP-for-chi} by $[\CHI\kt]^-$ and using \eqref{bbg} implies $\CHI\kt\ge0$ provided $\CHI\kkt\ge0$, which yields non-negativity of the hydrogen concentration recursively for any $k=1,...,T/\tau$ by using \eqref{IC-ass-2}. Let us also note that from \eqref{mu-m-chi} we also obtain $\nabla\mu\kt\in L^2(\Omega;\R^3)$. In particular, we have both $\mathbb D\bfeps(\deltau){:}\bfeps(\deltau)/(1{+}\tau|\bfeps(\deltau)|^2)\in L^\infty(\Omega)$ and $\mathsf M(\bfeps(\bfu\kt),\bbm\kt,\CHI\kt,w\kkt) \nabla\mu\kt{\cdot}\nabla\mu\kt/(1{+}\tau|\nabla\mu\kt|^2)\in L^\infty(\Omega)$, and thus the right-hand side of \eqref{heatequation-disc} is in $L^2(\Omega)$. Therefore, eventually, we are to solve \eqref{heatequation-disc}--\eqref{BC-t-2-2}, which represents a semilinear heat-transfer equation with the right-hand side in $H^1(\Omega)^*$. The only nonlinearity is due to the $w$-dependence of $\mathsf{L}(\bfeps(\bfu\kt),\bbm\kt,\CHI\kt,w)$, $\bfsigma_{\rm a}(\bbm\kkt,w)$, and $\bbs_{\rm a}(\bbm\kkt,w)$; the latter two are needed to guarantee $w\kt\ge0$. Anyhow, since this nonlinearity is of lower order, we can pass through it by compactness and strong convergence. Thus, it suffices for us to check coercivity of the underlying operator. To this aim, we test \eqref{heatequation-disc} by $w\kt$. The terms on the right-hand side containing $w\kt$ are estimated in a standard way by H\"older's and Young's inequalities, using the qualifications (\ref{DQ}l,m). Having coercivity, we see that there exists at least one solution. Moreover, this solution satisfies $w\kt\ge0$, which can be seen by testing \eqref{heatequation-disc} by the negative part of $w\kt$ and using that ${\bfsigma_{\rm a}}(\bbm,w)=0$ and ${\bbs_{\rm a}}(\bbm,w)=0$ for $w\le0$. Note that to obtain such non-negativity it is important to have $w\kt$ in the $\mathsf K$-term on the left-hand side, as well as in the nonlinear terms $\bfsigma_{\rm a}$ and $\bbs_{\rm a}$ on the right-hand side. \QED \medskip Let us define the piecewise affine interpolant $\bfu_\tau$ by \begin{subequations} \begin{align} &\bfu_\tau(t):= \frac{t-(k{-}1)\tau}\tau\bfu\kt +\frac{k\tau-t}\tau \bfu\kkt \quad\text{ for $t\in[(k{-}1)\tau,k\tau]\ $} \intertext{with $\ k=1,...,T/\tau$. Besides, we define also the backward piecewise constant interpolants $\bar\bfu_\tau$ and $\underline\bfu_\tau$ by} &\bar\bfu_\tau(t):=\bfu\kt, \qquad\qquad\qquad\qquad\text{ for $t\in((k{-}1)\tau,k\tau]\ $,\ \ $k=1,...,T/\tau$}, \\ &\underline\bfu_\tau(t):=\bfu\kkt, \qquad\qquad\qquad\quad\ \text{for $t\in[(k{-}1)\tau,k\tau)\ $,\ \ $k=1,...,T/\tau$}. \intertext{Similarly, we define also $\bbm_\tau$, $\bar\bbm_\tau$, $\underline\bbm_\tau$, $\bar w_\tau$, $\underline w_\tau$, $\bar\CHI_\tau$, $\bar\mu_\tau$, $\bar q_\tau$, $\bar\bff_\tau$, $\bar\bff_{\rm s,\tau}$, etc.
We will also need the piecewise affine interpolant of the (piecewise constant) velocity $\frac{\partial\bfu_\tau}{\partial t}$, which we denote by $\big[\frac{\partial\bfu_\tau}{\partial t}\big]^{\rm i}$, i.e.} &\big[\DT\bfu_\tau\big]^{\rm i}(t):= \frac{t{-}(k{-}1)\tau}\tau\,\frac{\bfu\kt{-}\bfu\kkt}\tau +\frac{k\tau{-}t}\tau \,\frac{\bfu\kkt{-}\bfu_\tau^{k-2}}\tau \ \text{ for $t\in((k{-}1)\tau,k\tau]$}. \end{align} \end{subequations} Note that $\DDT\bfu_\tau^{\rm i}:=\frac{\partial}{\partial t} \big[\DT\bfu_\tau\big]^{\rm i}$ is piecewise constant with the values $\frac{\bfu_\tau^k{-}2\bfu_\tau^{k-1}{+}\bfu_\tau^{k-2}}{\tau^2}$ on the particular subintervals $((k{-}1)\tau,k\tau)$. In terms of these interpolants, we can write the approximate system \eqref{TGMd} in a more ``condensed'' form closer to the desired continuous system \eqref{BVP-t}, namely: \begin{subequations}\label{BVP-disc} \begin{align} &\r\DDT\bfu_\tau^{\rm i} -{\rm div}\big( \bbC(\bfeps(\bar\bfu_\tau){-}\bfepstr\bar\bbc_\tau) +{\bfsigma_{\rm a}}(\underline{\bbm}_\tau,\underline{w}_\tau) \label{TGMd-1-comp} +\mathbb D\bfeps(\DT\bfu_\tau)\big)={\bar\bff_\tau}, \\ & \nonumber \alpha\DT\bbc_\tau{+}\partial\zeta(\DT\bbc_\tau) {-}\lambda\Delta\bar\bbc_\tau {+}\partial_\bbc{\varphi_1}(\bar\bbc_\tau,\underline\CHI_\tau) {+}\bfepstr^\top\bbC(\bfepstr\bar\bbc_\tau{-}\bfeps(\bar\bfu_\tau)) \\ \label{TGMd-2-comp} &\hspace{6em} +{\bbs_{\rm a}}({\underline{\bbm}_\tau},\underline{w}_\tau){+}\bar\xi_\tau\ni0 \hspace{3.1em}\text{ with }\ \ \ \ \bar\xi_\tau\in N_K^{}(\bar\bbc_\tau), \\&\label{gilbmod-disc2} \DT\CHI_\tau-\textrm{div}\big({\mathsf M}(\bfeps(\bar\bfu_\tau), \bar\bbm_\tau,\bar\CHI_\tau,\underline{w}_\tau)\nabla\bar\mu_\tau\big)=0\ \ \ \ \text{ with }\ \ \ \ \bar\mu_\tau=\partial_{\CHI}{\varphi_1}(\bar\bbm_\tau,\bar\CHI_\tau), \\ & \DT w_\tau-\mathrm{div}\big(\mathsf{K}(\bfeps(\bar\bfu_\tau), \bar\bbm_\tau,\bar\CHI_\tau,\underline{w}_\tau)\nabla \bar w_\tau +\mathsf{L}(\bfeps(\bar\bfu_\tau),\bar\bbm_\tau,\bar\CHI_\tau,\underline{w}_\tau) \nabla\bar\bbm_\tau\big)\nonumber \\[-.3em]&\hspace{6em}= \frac{\big(\bfsigma_{\rm a}(\underline\bbm_\tau,\bar w_\tau)+ \mathbb D\,\bfeps(\DT\bfu_\tau)\big){:}\bfeps(\DT\bfu_\tau)} {1+\tau|\bfeps(\DT\bfu_\tau)|^2} +\big(\bbs_{\rm a}({\underline{\bbm}_\tau},\bar w_\tau) +\alpha\DT\bbm_\tau\big){\cdot}\DT\bbm_\tau\nonumber \\\label{heatequation2}&\hspace{6em} +\zeta(\DT\bbm_\tau) + \frac{\mathsf M(\bfeps(\bar\bfu_\tau),\bar\bbm_\tau,\bar\CHI_\tau,\underline{w}_\tau) \nabla\bar\mu_\tau{\cdot}\nabla\bar\mu_\tau}{1+\tau|\nabla\bar\mu_\tau|^2} +{\bar q_\tau}, \end{align} \end{subequations} together with the boundary conditions \begin{subequations}\label{BC-disc}\begin{align} &\big(\bbC(\bfeps(\bar\bfu_\tau){-}\bfepstr\bar\bbc_\tau) +{\bfsigma_{\rm a}}( \underline{\bbm}_\tau, \underline{w}_\tau) +\mathbb D\bfeps(\DT\bfu_\tau) \big)\mathbf n={\bar\bff_{\rm s,\tau}}, \label{BC-t-1} \\ & \frac{\partial\bar\bbc_\tau}{\partial\mathbf n}=0,\qquad\ \label{bc342} {\mathsf M}(\bfeps(\bar\bfu_\tau),\bar\bbm_\tau, \bar\CHI_\tau,\underline{w}_\tau)\nabla\bar\mu_\tau \cdot\mathbf n={\bar h_{\rm s,\tau}}, \\ &\label{BC-t-2-3} \big( \mathsf{K}(\bfeps(\bar\bfu_\tau),\bar\bbm_\tau,\bar\CHI_\tau,\underline{w}_\tau) \nabla \bar w_\tau+\mathsf{L}(\bfeps(\bar\bfu_\tau),\bar\bbm_\tau,\bar\CHI_\tau,\underline{w}_\tau) \nabla\bar\bbm_\tau\big){\cdot}\mathbf n={\bar q_{\rm s,\tau}}.
\end{align}\end{subequations} \begin{lemma}[{\sc First estimates}]\label{lem-2} \slshape Let again the assumptions of Lemma~\ref{lem-1} hold. Then, for some $C$ and ${C_r}$ independent of $\tau>0$, \begin{subequations}\label{apriori-I} \begin{align}\label{apriori-Ia} &\big\|\bfu_\tau\big\|_{W^{1,\infty}(I;L^2(\O;\R^3))\,\cap\, H^1(I;H^1(\O;\R^3))}\le C, \\\label{apriori-Ia+} &\big\|\bbm_\tau\big\|_{L^\infty(I;H^1(\O;\R^N))\,\cap\, H^1(I;L^2(\O;\R^N))\,\cap\,L^\infty(Q;\R^N)}\le C, \\\label{apriori-Ia++} &\big\|\bar\CHI_\tau\big\|_{L^\infty(I;H^1(\O))}\le C, \\\label{apriori-Ia+++} &\big\|\bar\mu_\tau\big\|_{L^\infty(I;H^1(\O))}\le C, \\\label{apriori-Ib} &\big\|\bar w_\tau\big\|_{L^\infty(I;L^1(\O))}\le C, \\\label{est-of-nabla-w} &\big\|\nabla\bar w_\tau\big\|_{{L^r(Q;\R^3)}}\le C_r \qquad\text{ for any }\ 1\le r<5/4. \end{align} \end{subequations} \end{lemma} \noindent{\it Proof}. The strategy is to test the particular equations in \eqref{TGMd} respectively by $\deltau$, $\deltam$, $\mu\kt$, and $\frac12$. For (\ref{TGMd}a,b), we note that a standard convexity argument yields: \begin{align} \label{eq:2} &\varrho[\dtt\kt]^2\bfu{\cdot}\dtt\kt\bfu+ \bbC(\bfeps(\bfu\kt){-}\bfepstr\bbc\kt){:}\dtt\kt\bfeps +\big(\bfepstr^\top\bbC(\bfepstr\bbc\kt{-}\bfeps(\bfu\kt)) +\xi\kt\big) {\cdot}\dtt\kt\bbc +\lambda\nabla\bbc\kt{:}\nabla\dtt\kt\bbc \nonumber\\&\, \ge\frac1\tau\Big(\frac\varrho2|\dtt\kt\bfu|^2+\frac12\bbC (\bfeps(\bfu\kt){-}\bfepstr\bbc\kt){:}(\bfeps(\bfu\kt){-}\bfepstr\bbc\kt) +\frac\lambda2|\nabla\bbc\kt|^2 +\delta_K(\bbc\kt)\Big) \nonumber\\&\, -\frac1\tau\Big(\frac\varrho2|\dtt\kkt\bfu|^2\!+\frac12\bbC (\bfeps(\bfu\kkt){-}\bfepstr\bbc\kkt){:}(\bfeps(\bfu\kkt){-}\bfepstr\bbc\kkt) +\frac\lambda2|\nabla\bbc\kkt|^2\!+\delta_K(\bbc\kkt)\Big). \end{align} Owing to our equi-semiconvexity assumption \eqref{semiconvexity-phi0} on $\varphi_1(\cdot,\CHI)$, we also have: \begin{align}\label{eq:4} &\partial_\bbc\varphi_1(\bbc\kt,\CHI\kkt){\cdot}\dtt\kt\bbc =\Big(\partial_\bbc\varphi_1(\bbc\kt,\CHI\kkt) +\alpha\frac{\bbc\kt}{\sqrt\tau}\Big){\cdot}\dtt\kt\bbc -\alpha\frac {\bbc\kt}{\sqrt\tau}{\cdot}\dtt\kt\bbc \nonumber\\ &\qquad\ge \frac1\tau\Big(\varphi_1(\bbc\kt,\CHI\kkt)+\alpha\frac{|\bbc\kt|^2}{2\sqrt\tau}- \varphi_1(\bbc\kkt,\CHI\kkt)-\alpha\frac{|\bbc\kkt|^2}{2\sqrt\tau} \Big) -\alpha\sqrt\tau \frac {\bbc\kt}\tau{\cdot}\dtt\kt\bbc \nonumber\\ &\qquad= \frac{\varphi_1(\bbc\kt,\CHI\kkt)-\varphi_1(\bbc\kkt,\CHI\kkt)}\tau -\alpha\frac{\sqrt \tau}2\left|\dtt\kt\bbc \right|^2 \end{align} provided $0<\tau\le\tau_1$ with $\tau_1$ from \eqref{tau-small}. Now we add the tested equations together.
We then use \eqref{eq:2}--\eqref{eq:4} and the convexity \eqref{phi0-coerc} of $\CHI\mapsto\varphi_1(\bbm,\CHI)$ to deduce the estimate \begin{align}\nonumber &\int_\Omega\frac12w\kt+\frac\r2\big|\dtt\kt\bfu\big|^2 +\varphi_{12}(\bfeps\kt,\bbm\kt,\CHI\kt) +\frac\lambda2|\nabla\bbm\kt|^2 \,\d x \\&\nonumber \ \ +\tau\sum_{j=1}^k\int_\Omega\Big(\mathbb D\dtt_\tau^j\bfeps:\dtt_\tau^j\bfeps + \frac{2{-}\sqrt\tau}2\alpha\big|\dtt_\tau^j\bbm\big|^2\Big) +\frac12{\mathsf M}(\bfeps_\tau^j, \bbm_\tau^j,\CHI_\tau^{j-1},w_\tau^{j-1}) \nabla\mu_\tau^j{\cdot}\nabla\mu_\tau^j \,\d x \\\nonumber&\ \le\int_\Omega\frac12w_0+\frac\r2|\bfv_0|^2 +\varphi_{12}(\bfeps_0,\bbm_0,\CHI_0) +\frac\lambda2|\nabla\bbm_0|^2 \,\d x \\\nonumber& \ \ +\tau\sum_{j=1}^k\int_\Omega \bff^j_\tau{\cdot}\dtt_\tau^j\bfu\,+\frac12q_\tau^j +\Big(\frac12\bfsigma_{\rm a}(\bbm_\tau^{j-1},w_\tau^j) -\bfsigma_{\rm a}(\bbm_\tau^{j-1},w_\tau^{j-1}) \Big) {:}\dtt_\tau^j\bfeps \\ & \ \ +\Big(\frac12\bbs_{\rm a}({\bbm_\tau^{j-1}}, w_\tau^{j})- \bbs_{\rm a}({\bbm_\tau^{j-1}}, w_\tau^{j-1}) \Big){\cdot}\dtt_\tau^j\bbc\,\d x \label{eq:7} +\int_\Gamma\bff^j_{\rm s,\tau}{\cdot}\dtt_\tau^j\bfu+ h^j_{\rm s,\tau}\mu_\tau^j+\frac12 q^j_{\rm s,\tau}\,\d S, \end{align} where we used the abbreviation $\varphi_{12}$ from \eqref{def-of-phi12} with $\varphi_2$ from \eqref{def-of-phi2}. We remark that our semi-implicit scheme has benefited from the cancellation of the terms $\pm\frac1\tau\varphi_1(\bbc\kt,\CHI\kkt)$ under this test by time-differences; this is a more general phenomenon, to be understood as an instance of the fractional-step method, here combined with the semiconvexity, cf.\ \cite[Remarks~8.24-8.25]{Roubicek2013}. Now, by \eqref{q1}, using H\"older's inequality, and recalling that $w_\tau^k\ge 0$, we obtain the estimate:\\[-.9em] \begin{align}\label{gf} &\int_\Omega \Big(\frac12\bfsigma_{\rm a}(\bbm_\tau^{j-1},w_\tau^j) -\bfsigma_{\rm a}(\bbm_\tau^{j-1},w_\tau^{j-1}) \Big){:}\dtt_\tau^j\bfeps\,\d x\nonumber\\[-.5em] &\qquad\le C_\epsilon+C_\epsilon\int_\Omega\varphi_1(\bbm_\tau^{j-1},\CHI_\tau^{j-1})+w_\tau^j+ w_\tau^{j-1}+|\bfeps_\tau^j|^2\, \d x+\epsilon \int_\Omega |\dtt_\tau^j\bfeps|^2\, \d x, \end{align} where $\epsilon$ is an arbitrarily small number and $C_\epsilon$ depends on $\epsilon$. A similar estimate holds also for the terms multiplying $\dtt_\tau^j\bbc$ on the right-hand side of \eqref{eq:7}. We can now absorb the discrete time derivatives into the left-hand side and use a discrete Gronwall inequality. Thanks to \eqref{pos1} and \eqref{phi0-coerc}, we have $\varphi_1(\bbm,\CHI)\ge\epsilon\CHI^2$ and the a-priori bound $\bbm\kt\in K$. This gives the estimates (\ref{apriori-I}a-c,e). The estimate \eqref{apriori-Ia+++} follows from the relation (cf.\ also \eqref{mu-m-chi}): \begin{align}\label{est-of-nabla-concentration} \nabla\bar\CHI_\tau= \big[\partial_{\CHI\CHI} ^2{\varphi_1}(\bar\bbm_\tau,\bar\CHI_\tau)\big]^{-1} \big(\nabla\bar\mu_\tau-\partial_{\CHI\bbm}^2{\varphi_1}(\bar\bbm_\tau,\bar\CHI_\tau) \nabla\bar\bbm_\tau\big). \end{align} Eventually, we observe that the right-hand sides of \eqref{heatequation2} and \eqref{BC-t-2-3} are bounded in $L^1(Q)$ and $L^1(\Sigma)$, respectively. For this, we need, in particular, the assumptions \eqref{q1} and \eqref{q2}. Then one can use the $L^1$-theory for the heat equation to obtain the remaining estimate \eqref{est-of-nabla-w}, see \cite{Boccardo1997}.
To this end, like in \cite{FePeRo09ESPT}, one performs the test of \eqref{heatequation-disc} by $\varpi'(w\kt)$, where $\varpi(w):=((1{+}w)^{1-\e}\!-1)/(\e{-}1)$ with $\e>0$, and sums it over $k=1,...,T/\tau$. Compared with the standard technique, see for instance \cite{Podio-Guidugli2010,Roubicek2011,Roubivcek2010}, the only non-standard estimate is due to the $\mathsf{L}$-term, as in \cite{Roubivcek2012}, which requires the assumption \eqref{q3}, cf.\ also \cite[Sect.12.9]{Roubicek2013}. \QED \begin{lemma}[{\sc Further estimates}]\label{lem-1+} \slshape Under the assumptions of Lemma~\ref{lem-1}, for some constant $C$ independent of $\tau$, it also holds: \begin{subequations}\label{apriori-II+} \begin{align}\label{apriori-IIc} &\big\|\r\DDT\bfu_\tau^{\rm i} \big\|_{L^2(I;H^1(\O;\R^3)^*)}\le C, \\\label{a-priori-IId} &\big\|\DT{w}_\tau\big\|_{L^1(I;H^3(\O)^*)}\le C, \\\label{a-priori-IIe} &\big\|\DT{\CHI}_\tau\big\|_{L^2(I;H^1(\O)^*)}\le C, \\\label{est-of-Delta-m} &\big\|\Delta\bar\bbm_\tau\big\|_{L^2(Q;\R^N)}\le C, \\\label{est-of-xi} &\big\|\bar\xi_\tau\big\|_{L^2(Q;\R^N)}\le C. \end{align} \end{subequations} \end{lemma} \noindent{\it Proof}. The ``dual'' estimates (\ref{apriori-II+}a-c) can be obtained routinely as a consequence of the previously derived ones by using the equations (\ref{BVP-disc}a,c,d) with the corresponding boundary conditions \eqref{BC-disc}. As $\zeta$ is finite, $\partial\zeta$ is bounded, and from \eqref{growth-of-phi1} and \eqref{q1} and the already proved estimates (\ref{apriori-I}a-c,e), we have, for some finite $C$, \begin{align*} \forall\,\bar r_\tau\!\in\!\partial\zeta(\DT\bbc_\tau){+}\alpha\DT\bbc_\tau {+}\partial_\bbm{\varphi_1}(\bar\bbm_\tau,\underline\CHI_\tau) {+}\partial_\bbm{\varphi_2}(\bfeps(\bar\bfu_\tau),\bar\bbm_\tau) {+}{\bbs_{\rm a}}({\underline{\bbm}_\tau},\underline{w}_\tau){:} \ \ \big\|\bar r_\tau\big\|_{L^2(Q;\R^N)}\!\le C. \end{align*} We can now prove (\ref{apriori-II+}d,e), imitating an abstract procedure like in \cite{LuScSt08GSPF}. We write \eqref{TGMd-2-comp} as\\[-1.9em] \begin{align}\label{Delta-m+} \lambda\Delta\bar\bbc_\tau-\bar\xi_\tau=\bar r_\tau\ \ \text{ with }\ \ \bar\xi_\tau\in N_K^{}(\bar\bbc_\tau). \end{align} We test \eqref{Delta-m+} by $\Delta\bar\bbc_\tau$ and use the monotonicity of the set-valued mapping $N_K^{}$, which ensures (when written very formally) that \begin{align}\nonumber &\!\int_\Omega\!\bar\xi_\tau{\cdot}\Delta\bar\bbc_\tau\,\d x= \int_\Omega\!N_K^{}(\bar\bbc_\tau){\cdot}\Delta\bar\bbc_\tau\,\d x= \int_\Gamma\! N_K^{}(\bar\bbc_\tau) \frac{\partial\bar\bbc_\tau}{\partial\mathbf n}\,\d S -\int_\Omega\!\nabla N_K^{}(\bar\bbc_\tau){:}\nabla\bar\bbc_\tau\,\d x \\[-.3em]\label{L2-regularity}&\qquad\ \ =-\int_\Omega \partial N_K^{}(\bar\bbc_\tau)\nabla\bar\bbc_\tau{:}\nabla\bar\bbc_\tau\,\d x =-\int_\Omega \partial^2\delta_K(\bar\bbc_\tau) \nabla\bar\bbc_\tau{:}\nabla\bar\bbc_\tau\,\d x\le0. \end{align} Of course, the positive semidefiniteness of the Jacobian $\partial^2\delta_K$ of the nonsmooth convex function $\delta_K$ is indeed very formal, and the rigorous proof needs a smoothening argument.
In fact, one can use an exterior penalty $\delta_{K,\epsilon}^{}(\bbc):= \epsilon^{-1}\min_{\widetilde\bbc\in K}|\bbc{-}\widetilde\bbc|^2$ (i.e.\ the Yosida approximation of $\delta_K^{}$) and consider the Dirichlet boundary-value problem $\lambda\Delta\bar\bbc_{\tau,\epsilon}-\delta_{K,\epsilon}'(\bar\bbc_{\tau,\epsilon}) =\bar r_\tau$ with the boundary condition $\bar\bbc_{\tau,\epsilon}=\bar\bbc_\tau$ on $\Sigma$, which ensures $\delta_{K,\epsilon}'(\bar\bbc_{\tau,\epsilon})=0$ on $\Sigma$, so that the boundary term arising by the test by $\Delta\bbc_{\tau,\epsilon}$ disappears, as already in \eqref{L2-regularity}; the limit with $\epsilon\to0$ is then easy. Therefore, by this estimate, we obtain $\lambda\|\Delta\bar\bbc_\tau\|_{L^2(Q;\R^N)}\le \|\bar r_\tau\|_{L^2(Q;\R^N)}$, so that \eqref{est-of-Delta-m} follows, and with it also the bound \eqref{est-of-xi} for $\bar\xi_\tau=\lambda\Delta\bar\bbc_\tau-\bar r_\tau$. \QED \medskip \begin{proposition}[{\sc Convergence for $\tau\to0$}]\label{prop-conv} \slshape Let again the assumptions of Lemma~\ref{lem-1} hold. Then there is a subsequence such that \begin{subequations}\label{15.-strong} \begin{align} \label{15.-u-strong} &\bfu_\tau\to\bfu&&\text{strongly in }\ H^1(I;H^1(\O;\R^3)), \\\label{15.-m} &\bbm_\tau\to \bbm&&\text{strongly in }\ H^1(I;L^2(\O;\R^N)), \\\label{15.-chi} &\bar\CHI_\tau\to\CHI&&\text{strongly in }\ L^r(Q)\ \text{ with any }\ 1\le r<6, \\ \label{15.-theta-strong} &\bar{w}_\tau\to {w}\ \ \&\ \ \underline{w}_\tau\to {w}\!\!\!\!\!\!\!\! &&\text{strongly in }\ L^s(Q)\ \text{ with any }\ 1\le s<5/3, \\\label{15.-mu-strong} &\bar\mu_\tau\to\mu&&\text{strongly in }\ L^2(I;H^1(\O)), \\\label{15.-xi-weak} &\bar\xi_\tau\to\xi&&\text{weakly in }\ L^2(Q;\R^N), \end{align}\end{subequations} and any $(\bfu,\bbm,\CHI,w,\mu,\xi)$ obtained in this way is a weak solution to the system \eqref{BVP-t}--\eqref{bcc} in accord with Definition~\ref{def} which also preserves the total energy and satisfies $\CHI\ge0$ and $w\ge0$, as claimed in Theorem~\ref{thm}. Moreover, if $\Omega$ is smooth, then also \begin{subequations}\label{15.-strong+} \begin{align} \label{15.-m+} &\bar\bbm_\tau\to \bbm&&\text{strongly in }\ L^2(I;H^1(\O;\R^N)), \\\label{15.-chi+} &\bar\CHI_\tau\to \CHI&&\text{strongly in }\ L^2(I;H^1(\O)). \end{align}\end{subequations} \end{proposition} \noindent{\it Proof}. For lucidity, we divide the proof into nine steps. \medskip \noindent \emph{Step 1: Selection of a converging subsequence.} By Banach's selection principle, we select a weakly* converging subsequence with respect to the norms from the estimates \eqref{apriori-I} and \eqref{apriori-II+}. By the Aubin-Lions theorem, from \eqref{apriori-Ia++} and \eqref{a-priori-IIe}, one gets \eqref{15.-chi}. Moreover, a (generalized) Aubin-Lions theorem, cf.\ \cite[Corollary~7.9]{Roubicek2013}, based on \eqref{est-of-nabla-w} and \eqref{a-priori-IId} which makes $\DT w_\tau$ bounded as an $H^3(\Omega)^*$-valued measure on $[0,T]$, interpolated further with the estimate \eqref{apriori-Ib}, gives the first strong convergence in \eqref{15.-theta-strong}. The second one follows analogously. Further, $\CHI\ge0$ and $w\ge0$ are inherited from $\CHI_\tau\ge0$ and $w_\tau\ge0$ proved in Lemma~\ref{lem-1}.
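For the reader's convenience, let us recall one classical form of the Aubin--Lions compactness lemma repeatedly used in this step (cf.\ e.g.\ \cite[Chap.\,7]{Roubicek2013}): if $X\Subset B\subset Y$ are Banach spaces with $X$ compactly embedded into $B$ and $B$ continuously embedded into $Y$, then, for $1\le p<\infty$,
\begin{align*}
\big\{u\in L^p(I;X);\ \ \|u\|_{L^p(I;X)}^{}\le C,\ \ \|\DT u\|_{L^1(I;Y)}^{}\le C\big\}
\ \ \text{ is relatively compact in }\ L^p(I;B);
\end{align*}
the generalized variant from \cite[Corollary~7.9]{Roubicek2013} allows the time derivative to be merely a bounded $Y$-valued measure on $[0,T]$, as used above.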
If $\Omega$ is smooth, from \eqref{est-of-Delta-m} one has $\bar\bbm_\tau$ bounded in $L^2(I;H^2(\Omega;\R^N))$, so by Aubin-Lions' theorem one gets \eqref{15.-m+} and then, from \eqref{est-of-nabla-concentration}, one gets also \eqref{15.-chi+}. \medskip \noindent \emph{Step 2: Convergence in the semilinear mechanical part.} Equation \eqref{balmech} is obviously semilinear, and therefore the weak convergence is sufficient to obtain it in the limit from the corresponding equation \eqref{TGMd-1-comp}. In particular, $\varrho\DDT\bfu$ is in duality with $\DT\bfu$: \begin{align}\label{quality-of-u} \DT\bfu\in L^2(I;H^1(\O;\R^3))\quad \text{and}\quad \r\DDT\bfu\in L^2(I;H^1(\O;\R^3)^*). \end{align} By Rellich's theorem we have the continuous and the compact embeddings $H^1(I;L^2(\Omega))\cap L^\infty(I;H^1(\Omega)) \subset H^1(Q)\Subset L^2(Q)$, so that $\bbm_\tau\to\bbm$ strongly in $L^2(Q;\R^N)$ and then also $\underline\bbm_\tau\to\bbm$ strongly in $L^2(Q;\R^N)$ because $\|\underline\bbm_\tau{-}\bbm_\tau\|_{L^2(Q;\R^N)} =3^{-1/2}\tau\|\DT\bbm_\tau\|_{L^2(Q;\R^N)}\to0$, cf.\ \cite[Rem.\,8.10]{Roubicek2013}. Together with \eqref{15.-theta-strong}, we can then pass to the limit in the nonlinearities $\bfsigma_{\rm a}$ and $\bbs_{\rm a}$. Similarly also $\bar\bbm_\tau\to\bbm$ strongly in $L^2(Q;\R^N)$, which we will use later. \medskip \noindent \emph{Step 3: Strong convergence of $\bfeps(\bfu_\tau)$.} As the nonlinearities $\partial_\bbm\varphi_2$, $\mathsf{M}$, $\mathsf{K}$, and $\mathsf{L}$ may depend on $\bfeps$, we first need to prove strong convergence of $\bfeps(\bfu_\tau)$. Using the equation for the discrete approximation and the limit equation proved in Step 2, we can write: \begin{align}\nonumber &\int_Q\!\bbC\bfeps(\bar\bfu_\tau{-}\bfu){:}\bfeps(\bar\bfu_\tau{-}\bfu)\,\d x\d t \\[-.3em]&\qquad\qquad \le \int_Q\!\varrho\DDT\bfu_\tau^{\rm i} {\cdot}(\bfu{-}\bar\bfu_\tau) +\bfsigma_{\rm a}(\underline\bbm_\tau,\underline w_\tau){:} \bfeps(\bfu{-}\bar\bfu_\tau) +\bbC\bfeps(\bfu){:}\bfeps(\bfu{-}\bar\bfu_\tau)\,\d x\d t \label{strong-e(u)} \end{align} where we used the monotonicity of $u\mapsto\mathbb D\bfeps(\DT u)$ for a fixed initial condition; cf.\ \cite{Roubicek2013} for details about the time discretisation. The goal is to show that the right-hand side of \eqref{strong-e(u)} tends to 0. We use Aubin-Lions' theorem to obtain strong convergence of $\DT\bfu_\tau$ in $L^2(Q;\R^3)$, and by the Rellich compactness theorem also strong convergence of $\bar\bfu_\tau(T)$ in $L^2(\Omega;\R^3)$, which allows us to pass to the limit: \begin{align*} &\lim_{\tau\to 0} \int_Q\!\varrho\DDT\bfu_\tau^{\rm i} {\cdot}(\bfu{-}\bar\bfu_\tau)\,\d x\d t=\lim_{\tau\to 0} \bigg(\int_\O\varrho \DT\bfu_\tau(0)\cdot\bar\bfu_\tau(\tau) -\varrho\DT\bfu_\tau(T)\cdot\bar\bfu_\tau(T)\,\d x \\[-.3em]&\hspace{15em} +\int_\tau^T\!\!\!\int_\Omega\varrho\DT\bfu_\tau(\cdot-\tau){\cdot} \DT\bfu_\tau\,\d x\d t +\int_Q\varrho\DDT\bfu_\tau^{\rm i}{\cdot}\bfu\,\d x\d t\bigg) \\&\hspace{7em}=\int_\O\varrho \DT\bfu(0){\cdot}\bfu(0) -\varrho\DT\bfu(T){\cdot}\bfu(T)\,\d x +\int_Q\varrho|\DT\bfu|^2+\varrho\DDT\bfu{\cdot}\bfu\,\d x\d t=0; \end{align*} the former equality follows by discrete by-part summation, cf.\ e.g.\ \cite[Remark~11.38]{Roubicek2013}, while the latter follows simply by reversing the integration by parts; here \eqref{quality-of-u} is used.
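For the reader's convenience, the discrete by-part summation just used is the elementary Abel-type identity: for any sequences $(a_k)_{k=0}^K$ and $(b_k)_{k=0}^K$,
\begin{align*}
\sum_{k=1}^{K}a_k^{}(b_k^{}{-}b_{k-1}^{})=a_K^{}b_K^{}-a_1^{}b_0^{}-\sum_{k=1}^{K-1}\big(a_{k+1}^{}{-}a_k^{}\big)b_k^{},
\end{align*}
which transfers the difference operator from one factor to the other, up to boundary terms, in analogy with the integration by parts.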
By \eqref{apriori-Ia++} and \eqref{a-priori-IIe}, we have $\underline\CHI_\tau\to\CHI$ weakly* in $L^\infty(I;H^1(\Omega))$ and $\DT\CHI_\tau$ bounded in $L^2(I;H^1(\Omega)^*)$, which gives $\underline\CHI_\tau\to\CHI$ strongly in $L^2(Q)$ by a (generalized) Aubin-Lions theorem, cf.\ \cite[Corollary~7.9]{Roubicek2013}. Then the other terms in \eqref{strong-e(u)} clearly converge to 0. Altogether, we proved that the right-hand side of \eqref{strong-e(u)} converges to 0, which eventually shows $\bar\bfeps_\tau\to\bfeps$ strongly in $L^2(Q;\R^{3\times3})$. \medskip \emph{Step 4: Limit passage in the micromechanical inequality.} From \eqref{TGMd-2-comp}, we have \begin{subequations}\begin{align}\nonumber &\forall \bbv\!\in\!L^2(I;H^1(\Omega;\R^N)){:}\,\ \int_Q\! \big(\alpha\DT\bbm_\tau +\bbs_{\rm a}(\underline\bbm_\tau,\underline w_\tau) + \partial_\bbc\varphi_{12}(\bar\bfeps_\tau,\bar\bbc_\tau,\underline\CHI_\tau) +\bar\xi_\tau\big){\cdot}(\bbv{-}\DT\bbc_\tau) \\[-.5em]\label{mm-ineq-1} &\hspace{11em} +\lambda\nabla\bar\bbm_\tau{:}(\nabla\bbv{-}\nabla\DT\bbm_\tau) +\zeta(\bbv)\,\d x\d t\ge\!\int_Q\! \zeta(\DT\bbm_\tau)\,\d x\d t \\[-.2em]&\label{mm-ineq-2} \forall\bbv\!\in\!L^2(Q;\R^N),\ \ \bbv\!\in\!K\text{ a.e.}:\quad \int_Q\!\bar\xi_\tau{\cdot}(\bbv{-}\bar\bbc_\tau)\,\d x\d t\ge0. \end{align}\end{subequations} The limit passage in \eqref{mm-ineq-2} is easy because $\bar\xi_\tau\to\xi$ weakly in $L^2(Q;\R^N)$ and $\bar\bbc_\tau\to\bbc$ strongly in $L^2(Q;\R^N)$ has already been proved in Step 2; thus $\xi\in N_K^{}(\bbc)$ is shown. Now we can make the limit passage in \eqref{mm-ineq-1}. Here, on the left-hand side, we have collected all the terms that need to be handled through continuity or weak upper-semicontinuity arguments, while the right-hand side is to be treated by weak lower semicontinuity. We benefit from the strong convergence of $\underline\bbm_\tau$ (and similarly also of $\bar\bbm_\tau$) shown in Step~2 and of $\underline\CHI_\tau$ and $\bar\bfeps_\tau$ proved in Step 3. The only nontrivial limit passage which, however, leads us directly to \eqref{def-of-m} is: \begin{align}\nonumber \limsup_{\tau\to0}\int_Q\!-\lambda\nabla\bar\bbm_\tau{:}\nabla\DT\bbm_\tau \,\d x\d t&\le\int_\Omega\!\frac\lambda2\big|\nabla\bbm_0\big|^2\,\d x -\liminf_{\tau\to0}\int_\Omega\!\frac\lambda2\big|\nabla\bbm(T)\big|^2\,\d x \\&\le\int_\Omega\frac\lambda2\big|\nabla\bbm_0\big|^2 -\frac\lambda2\big|\nabla\bbm(T)\big|^2\,\d x. \end{align} Eventually, the limit in $\int_Q\bar\xi_\tau{\cdot}\DT\bbc_\tau\,\d x\d t$ is simple because, for any $\bar\xi_\tau\in N_K^{}(\bar\bbc_\tau)$, this integral equals $\int_\Omega\delta_K(\bbc_\tau(T))-\delta_K(\bbc_0)\,\d x=0 =\int_\Omega\delta_K(\bbc(T))-\delta_K(\bbc_0)\,\d x =\int_Q\xi{\cdot}\DT\bbc\,\d x\d t$ since we already know $\xi\in N_K^{}(\bbc)$; note that \eqref{IC-ass-1} has been used here. Thus \eqref{def-of-m} is proved. \medskip \noindent \emph{Step 5: Limit passage in the diffusion equation.} Again, all strong convergences we already have proved in Steps 2 and 3 are to be used. Then the limit passage in the semi-linear equation \eqref{gilbmod-disc} is simple. \medskip \noindent \emph{Step 6: Mechanical/chemical energy preservation.} We balance the kinetic and stored energy integrated over the domain:\\[-1.5em] \[ \mathcal E(t):=\int_\O\frac\varrho2\big|\DT\bfu(t)\big|^2 +\varphi_{12}(\bfeps(\bfu(t)),\bbm(t),\CHI(t)) +\frac\lambda2\big|\nabla\bbm(t)\big|^2\,\d x
\] as we actually did in \eqref{balance3} with $\nu=0$, i.e.\\[-1.5em] \begin{align}\nonumber &\mathcal E(T)-\mathcal E(0)=\int_Q\!\bff{\cdot}\DT\bfu+q\,\d x\d t -\int_Q\!\big(\mathbb D\bfeps(\DT\bfu){+} \bfsigma_{\rm a}(\bbm,w)\big){:}\bfeps(\DT\bfu)+ \big(\alpha\DT\bbm{+}\bbs_{\rm a}(\bbm,w)\big){\cdot}\DT\bbm \\[-.3em]\label{engr-equality}&\qquad\qquad\quad\ \ +\zeta(\DT\bbm) +\mathsf{M}(\bfeps(\bfu),\bbm,\CHI,w)\nabla\mu{\cdot}\nabla\mu\,\d x\d t +\int_\Sigma\!\bff_{\rm s}{\cdot}\DT\bfu{+}q_{\rm s}{+}\mu h_{\rm s}\,\d S\d t. \end{align} This is standardly achieved by testing the mechanical-chemical equations (\ref{BVP-t}a,c) and the inclusion (\ref{BVP-t}b) respectively by $\DT\bfu$, $\mu$, and $\DT\bbm$, and by using the chain rule to integrate with respect to $t$. In order for these tests to be legitimate, we need $\varrho\DDT\bfu$ to be in duality with $\DT\bfu$, which has already been proved, cf.\ \eqref{quality-of-u}. In particular, we make use of\\[-1.5em] \[ \int_0^T\!\big\langle\varrho\DDT\bfu,\DT\bfu\big\rangle\,\d t= \int_\Omega \frac\varrho2\big|\DT\bfu(T)\big|^2 -\frac\varrho2\big|\DT\bfu(0)\big|^2\,\d x. \] Further, we need $\Delta\bbm\in L^2(Q;\R^N)$ to have \eqref{by-part-for-m} at our disposal; for this, the assumptions \eqref{growth-of-phi1} and \eqref{q1} together with the estimates (\ref{apriori-I}a-c,e) are used. Also, $\DT\CHI\in L^2(I;H^1(\Omega)^*)$ is in duality with $\mu\in L^2(I;H^1(\Omega))$, and $\DT\bbm\in L^2(Q;\R^N)$ is in duality with $\partial_\bbm\varphi_1(\bbm,\CHI)\in L^2(Q;\R^N)$; cf.\ \eqref{a-priori-IIe} with \eqref{15.-mu-strong}, and \eqref{apriori-Ia+} with the assumption \eqref{growth-of-phi1} combined with $\bbm\in L^\infty(Q;\R^N)$ and $\CHI\in L^\infty(I;L^6(\Omega))$. Thus we can rigorously execute the formula \eqref{fundamental-test} integrated over $I$, which gives\\[-1.5em] \begin{equation*} \int_0^T\bigg(\langle\DT\CHI,\mu\rangle+ \int_\Omega\partial_\bbm\varphi_1(\bbm,\CHI){\cdot}\DT\bbm\,\d x\bigg)\d t =\int_\Omega\varphi_1(\bbm(T),\CHI(T))-\varphi_1(\bbm_0,\CHI_0)\,\d x. \end{equation*} Also $\xi\in L^2(Q;\R^N)$ is in duality with $\DT\bbm\in L^2(Q;\R^N)$, so that $\int_Q\xi{\cdot}\DT\bbm\,\d x\d t$ makes sense and simply equals 0 because $\xi\in\partial\delta_K(\bbm)$ has been proved in Step~4 and because $\delta_K(\bbm_0)=0$ is assumed, cf.\ \eqref{IC-ass-1}.
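Let us also record the functional-analytic prototype behind such chain rules: for a Gelfand triple $V\subset H\subset V^*$ with $H$ a Hilbert space, any $v\in L^2(I;V)$ with $\DT v\in L^2(I;V^*)$ has (a representative with) $t\mapsto\frac12\|v(t)\|_H^2$ absolutely continuous and
\begin{align*}
\frac{\d}{\d t}\frac12\big\|v(t)\big\|_H^2=\big\langle\DT v(t),v(t)\big\rangle
\qquad\text{ for a.a.\ }t\in I;
\end{align*}
applied to $v=\DT\bfu$ with $V=H^1(\O;\R^3)$ and $H=L^2(\O;\R^3)$, cf.\ \eqref{quality-of-u}, this yields exactly the displayed kinetic-energy identity.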
\medskip \noindent \emph{Step 7: Strong convergence of ${\bfeps}(\DT\bfu_\tau)$, $\DT\bbm_\tau$, and $\nabla\bar\mu_\tau$.} Using the discrete mechanical-chemical energy imbalance (which is like \eqref{eq:7} except that the heat equation \eqref{heatequation2} tested by $\frac12$ is not counted), and eventually the energy equality \eqref{engr-equality}, we can write\goodbreak \begin{align}\nonumber &\int_Q \zeta(\DT\bbm)+ \mathbb D{\bfeps}(\DT\bfu){:}{\bfeps}(\DT\bfu) +\alpha|\DT\bbm|^2 +\mathsf M(\bfeps(\bfu),\bbm,\CHI,w)\nabla\mu{\cdot}\nabla\mu\,\d x\d t \\[-.4em]&\ \nonumber\le \liminf_{\tau\to0} \int_Q\!\zeta(\DT\bbm_\tau)+ \mathbb D{\bfeps}(\DT\bfu_\tau){:}{\bfeps}(\DT\bfu_\tau) +\alpha|\DT\bbm_\tau|^2 + \mathsf M(\bfeps(\bar\bfu_\tau),\bar\bbm_\tau,\bar\CHI_\tau,\underline w_\tau) \nabla\bar\mu_\tau {\cdot}\nabla\bar\mu_\tau\,\d x\d t \\&\ \nonumber\le\limsup_{\tau\to0} \int_Q\!\zeta(\DT\bbm_\tau)+ \mathbb D{\bfeps}(\DT\bfu_\tau){:}{\bfeps}(\DT\bfu_\tau) +\big(1{-}\sqrt\tau/2\big)\alpha|\DT\bbm_\tau|^2 \\[-.5em]&\hspace{15.7em}\nonumber +\mathsf M(\bfeps(\bar\bfu_\tau),\bar\bbm_\tau,\bar\CHI_\tau,\underline w_\tau) \nabla\bar\mu_\tau{\cdot}\nabla\bar\mu_\tau\,\d x\d t \\\nonumber &\ \le\limsup_{\tau\to0}\bigg( \mathcal E(0) -{\!\int_\O\!\frac\varrho2\big|\DT\bfu_\tau(T)\big|^2\! +\varphi_{12}(\bfeps(\bfu_\tau(T)),\bbm_\tau(T),\CHI_\tau(T)) +\frac\lambda2\big|\nabla\bbm_\tau(T)\big|^2\d x} \\[-.4em]\nonumber &\hspace{4.5em} -\int_\Sigma\!\bar \bff_{{\rm s},\tau}{\cdot}\DT\bfu_\tau\,\d S\d t +\int_Q\!\bar\bff_\tau{\cdot}\DT\bfu_\tau -\bfsigma_{\rm a}(\underline\bbm_\tau,\underline w_\tau) {:}\bfeps(\DT\bfu_\tau) -\bbs_{\rm a}(\underline\bbm_\tau,\underline w_\tau){\cdot}\DT\bbm_\tau \,\d x\d t\bigg) \\[-.3em]\nonumber &\ \le \mathcal E(0)-\mathcal E(T)-\int_\Sigma \bff_{\rm s}{\cdot}\DT\bfu\,\d S\d t +\int_Q\bff{\cdot}\DT\bfu -\bfsigma_{\rm a}(\bbm,w){:}\bfeps(\DT\bfu) -\bbs_{\rm a}(\bbm,w){\cdot}\DT\bbm\,\d x\d t \\[-.5em]&\ =\int_Q \zeta(\DT\bbm)+\mathbb D{\bfeps}(\DT\bfu){:}\bfeps(\DT\bfu) +\alpha|\DT\bbm|^2 +\mathsf M(\bfeps(\bfu),\bbm,\CHI,w)\nabla\mu{\cdot}\nabla\mu\,\d x\d t. \label{lim-inf-sup} \end{align} Thus we can write ``lim'' and ``='' everywhere in \eqref{lim-inf-sup} and, together with the already proved weak convergence, we obtain the desired strong convergence of ${\bfeps}(\DT\bfu_\tau)$, $\DT\bbm_\tau$, and $\nabla\bar\mu_\tau$ in $L^2(Q)$-spaces. For technical details about the term $\mathsf M\nabla\mu{\cdot}\nabla\mu$ with the nonconstant coefficient $\mathsf M=\mathsf M(\bfeps(\bfu),\bbm,\CHI,w)$, we refer to \cite[Formula (4.25)]{Roubivcek2010}. \smallskip \noindent \emph{Step 8: Limit passage in the heat equation \eqref{heatequation2}.} Having proved the strong convergences in Steps 3 and 7, we see that the right-hand side of \eqref{heatequation2} converges strongly in $L^1(Q)$; the limit passage towards the weak solution to \eqref{heatequation}--\eqref{BC-t-2} is then easy.
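Let us note, for clarity, the elementary Hilbert-space fact behind the last conclusion of Step 7: if $x_n\rightharpoonup x$ weakly in a Hilbert space and $\limsup_{n\to\infty}\|x_n\|\le\|x\|$, then $x_n\to x$ strongly, since
\begin{align*}
\limsup_{n\to\infty}\|x_n-x\|^2=\limsup_{n\to\infty}\big(\|x_n\|^2-2(x_n,x)+\|x\|^2\big)
\le\|x\|^2-2\|x\|^2+\|x\|^2=0;
\end{align*}
this is how writing ``lim'' and ``='' in \eqref{lim-inf-sup} upgrades the weak convergences to the strong ones, in the norms weighted by $\mathbb D$, $\alpha$, and $\mathsf M$, respectively.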
\smallskip \noindent \emph{Step 9: Total energy preservation, i.e.\ \eqref{balance3} with $\nu=1$.} We have $\DT{w}\in L^1(I;H^3(\Omega)^*)$, cf.\ \eqref{a-priori-IId}. Realizing that the already proved identity \eqref{heatequation} is in duality with the constant function $1$, we can rigorously perform this test and sum the result with the mechanical/chemical energy balance already obtained in Step 6. \QED \section{Generalization of the model for electrolytes and fuel cell modeling}\label{sec-fuel-cells} In a very basic scenario, the above presented model allows for a relatively simple generalization to a multicomponent, charged (i.e.\ ionized), chemically reacting medium undergoing electro-diffusion in an elastic solid. When having in mind \emph{hydrogen fuel cells}, the specific situation involves an elastic polymeric porous layer with negatively charged dopants, undergoing mechanical deformation/stresses e.g.\ due to {\it swelling}, through which water H$_2$O, hydrogen ions H$^+$ (i.e.\ protons), and hydronium ions H$_3$O$^+$ (or, in general, ions of the type H$_{2n+1}$O$_n^+$ with $n\ge0$) move by drift and diffusion; we then speak about a polymeric electrolyte membrane (=PEM). This membrane is surrounded by two thin layers where catalyzed chemical reactions take place, and by further electron-conductive layers called the anode and the cathode. The mentioned reactions are H$_2\to$2H$^++2$e$^-$ (on the layer between the anode and the membrane) and O$_2$+4H$^+$+4e$^-\to$2H$_2$O (on the layer between the cathode and the membrane). There is a vast amount of literature about such fuel cells and their modeling, cf.\ e.g.\ \cite{Fuhr13MNMF,Kuli10AMFC,ProWet09PEMF} for a survey. A similar scenario applies to methanol or ethanol fuel cells, except for a different chemistry at the anode. All these models, however, seem to focus on the electro-chemistry without properly taking (thermo)mechanical interactions into account; they are sometimes rather one-dimensional and mostly not accompanied by any mathematical analysis. Of course, the model outlined below can serve only as an ansatz, to which a lot of concrete data are still to be supplied. The generalization of the above presented model to $m$ diffusive constituents consists in taking the concentration $\boldsymbol\CHI$ and the (now electro-)chemical potential $\boldsymbol\mu$ vector-valued. Moreover, we consider a vector of electric charges $\mathbf{z}$ of the $m$ constituents; some components of $\mathbf{z}$ can be zero. Further, we consider the vector ${\mathbf r}={\mathbf r}(x,\boldsymbol\CHI)$ of chemical-reaction rates, the electrostatic potential $\phi$ of the self-induced electric field, an electric permittivity $\epsilon=\epsilon(x)$, and the concentration $d=d(x)$ of dopants. The $x$-dependence allows for a distinction of the particular layers composing the mentioned fuel cells. The chemistry outlined above is only an example; an alternative similar application might be, e.g., methanol fuel cells, cf.\ \cite{DFGJ03PMDM} for a simplified model.
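Purely for illustration (this specific choice is hypothetical and not claimed to be part of the model above), with $m=3$ constituents ordered as H$_2$O, H$^+$, H$_3$O$^+$, one would take the charge vector $\mathbf{z}=(0,1,1)$, and a mass-action ansatz for the protonation reaction H$^+$+H$_2$O$\,\rightleftharpoons\,$H$_3$O$^+$ would read
\begin{align*}
{\mathbf r}(\boldsymbol\CHI)=(-r,-r,r)\qquad\text{ with }\qquad
r=k_+^{}\CHI_1^{}\CHI_2^{}-k_-^{}\CHI_3^{}
\end{align*}
with some reaction constants $k_\pm>0$; note that any such $\mathbf r$ conserves the total charge since $\mathbf{z}{\cdot}{\mathbf r}(\boldsymbol\CHI)=0$.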
The mass balance \eqref{EEd-t2} together with \eqref{def-of-mu} augments to \begin{subequations}\label{Rosbroeck} \begin{align}\label{mass-conserv} &\DT{\boldsymbol\CHI}- \mathrm{div}\big(\bbM(\bfeps(\bfu),\bbm,\boldsymbol\CHI,\vartheta) \nabla\boldsymbol\mu\big)={\mathbf r}(\boldsymbol\CHI), \\ &\boldsymbol\mu= \partial_\CHI\varphi_1(\bbm,\boldsymbol\CHI)+\mathbf{z}\phi, \end{align} where $\phi$ solves the electrostatics (i.e.\ the rest of the Maxwell system), balancing the electric induction $\epsilon\nabla\phi$ as\\[-1.2em] \begin{align}\label{poisson} &-\mathrm{div}\big(\epsilon\nabla\phi\big) =\mathbf{z}{\cdot}\boldsymbol\CHI+d \end{align} \end{subequations} with some (here unspecified) boundary conditions. In \eqref{mass-conserv}, $\bbM$ is now a 4th-order symmetric tensor and thus $[\mathrm{div}(\bbM\nabla\boldsymbol\mu)]_i^{}:= \sum_{k=1}^3\frac{\partial}{\partial x_k} \sum_{l=1}^3\sum_{j=1}^m\bbM_{ijkl} \frac{\partial\boldsymbol\mu_j}{\partial x_l}$. The essence of this electrostatic augmentation is that the test of \eqref{mass-conserv} by $\boldsymbol\mu$ now relies on the modification of \eqref{fundamental-test} if integrated over $\Omega$, as follows:\\[-1.5em] \begin{align}\nonumber \int_\Omega\DT{\boldsymbol\CHI}{\cdot}\boldsymbol\mu\,\d x&= \int_\Omega\DT{\boldsymbol\CHI}{\cdot}\big(\partial_\CHI\varphi_1(\bbm,\boldsymbol\CHI)+\mathbf{z}\phi\big)\,\d x \\\nonumber&= \frac{\d}{\d t}\int_\Omega\varphi_1(\bbm,\boldsymbol\CHI)\,\d x -\int_\Omega\partial_\bbm\varphi_1(\bbm,\boldsymbol\CHI){\cdot} \DT\bbm-\phi\mathbf{z}{\cdot}\DT{\boldsymbol\CHI}\,\d x \\\nonumber&= \frac{\d}{\d t}\int_\Omega\varphi_1(\bbm,\boldsymbol\CHI)\,\d x -\int_\Omega\partial_\bbm\varphi_1(\bbm,\boldsymbol\CHI){\cdot} \DT\bbm+\phi\,\mathrm{div}\big(\epsilon\nabla\DT\phi\big)\,\d x \\&= \frac{\d}{\d t}\int_\Omega\varphi_1(\bbm,\boldsymbol\CHI) +\frac\epsilon2|\nabla\phi|^2\,\d x -\int_\Omega\partial_\bbm\varphi_1(\bbm,\boldsymbol\CHI){\cdot} \DT\bbm\,\d x \end{align} together with a (for simplicity unspecified) term arising from the boundary conditions for \eqref{poisson}; here we used that differentiating \eqref{poisson} in time gives $\mathbf{z}{\cdot}\DT{\boldsymbol\CHI}=-\mathrm{div}(\epsilon\nabla\DT\phi)$. The energy balance \eqref{balance3} now involves also the energy of the electrostatic field $\frac12\int_\Omega\epsilon|\nabla\phi|^2\,\d x$. The term ${\mathbf r}(\boldsymbol\CHI){\cdot}\boldsymbol\mu ={\mathbf r}(\boldsymbol\CHI){\cdot} \partial_\CHI\varphi_1(\bbm,\boldsymbol\CHI) +{\mathbf r}(\boldsymbol\CHI){\cdot}\mathbf{z}\phi$ arising by the mentioned test of \eqref{mass-conserv} is to be treated by Gronwall's inequality under some growth qualification on the chemical-reaction rates ${\mathbf r}$. The convergence analysis imitates \cite{Roub07IINF} as far as the electrostatic part is concerned. Standard modeling of electro-chemical devices whose size is substantially larger than the so-called Debye length (like fuel cells), however, simplifies the model by considering local electroneutrality, arising as the asymptotics for $\epsilon\to0$ in \eqref{poisson}, cf.\ e.g.\ \cite{Fuhr13MNMF}. A final note is that, considering $m=2$ and forgetting $\bfeps$, $\bbm$, and $\vartheta$, the system \eqref{Rosbroeck} itself represents Roosbroeck's classical \emph{drift-diffusion model for semiconductors} \cite{Roos50TFEH}, the components of $\boldsymbol\CHI$ being then interpreted as the concentrations of electrons and holes; cf.\ e.g.\ \cite[Sect.\,12.4]{Roubicek2013}.
The generalization presented in this section can thus also be interpreted, in the very special case $m=2$, as a model for the thermodynamics of \emph{elastic semiconductors}, especially if the mobility tensor $\bbM$ were allowed to depend also on the intensity $\nabla\phi$ of the electric field.\\ \bibliographystyle{abbrv}
\section{Introduction} Let $G$ be a quasi-split reductive group over a number field $F$ and $\A$ the ring of adeles of $F$. Let $B$ be a Borel subgroup of $G$ defined over $F$, $N$ the unipotent radical of $B$ and fix a non-degenerate character $\psi_N$ of $N(\A)$, trivial on $N(F)$. For a cusp form $\varphi$ of $G(F)\bs G(\A)$ we consider the Whittaker--Fourier coefficient\footnote{The Haar measure is normalized so that $\vol(N(F)\bs N(\A))=1$} \[ \whit(\varphi)=\whit^{\psi_N}(\varphi):=\int_{N(F)\bs N(\A)}\varphi(n)\psi_N(n)^{-1}\ dn. \] If $\pi$ is an irreducible cuspidal representation then $\whit$, if non-zero, gives a realization of $\pi$ in the space of Whittaker functions on $G(\A)$, which by local multiplicity one depends only on $\pi$ as an abstract representation. It therefore provides a useful tool for understanding $\pi$, both computationally and conceptually. It is natural to study the size of $\whit(\varphi)$. For the general linear group, the theory of Rankin--Selberg integrals, developed in higher rank by Jacquet, Piatetski-Shapiro and Shalika, expresses, among other things, the Petersson inner product in terms of a canonical inner product on the local Whittaker model of $\pi$. (See e.g.~\cite{MR1816039}.) The global factor which shows up in this expression is the residue at $s=1$ of $L(s,\pi\otimes\pi^\vee)$ (or alternatively,\footnote{We caution that some authors refer to the adjoint $L$-function as the quotient of $L(s,\pi\otimes\pi^\vee)$ by $\zeta_F(s)$} the adjoint $L$-function of $\pi$) where $\pi^\vee$ is the contragredient of $\pi$. Dually, $\abs{\whit(\varphi)}^2$ is related to $(\res_{s=1}L(s,\pi,\Ad))^{-1}$ (assuming a certain $L^2$-normalization). One way to try to make this more precise, and to generalize it to other groups, is to take the $\psi_N$-th Fourier coefficient of a matrix coefficient of $\pi$ and to relate it to the product of Whittaker functions. The integral \[ \int_{N(\A)}(\pi(n)\varphi,\varphi^\vee)_{G(F)\bs G(\A)^1}\psi_N(n)^{-1}\ dn \] does not converge. In fact, even the local integrals \begin{equation} \label{eq: Fourier MC} I_v(\varphi,\varphi^\vee)=\int_{N(F_v)}(\pi_v(n_v)\varphi_v,\varphi_v^\vee)_v\psi_N(n_v)^{-1}\ dn_v \end{equation} (where $\varphi_v$, $\varphi_v^\vee$ are now taken from the space of $\pi_v$ and $\pi_v^\vee$ respectively and $(\cdot,\cdot)_v$ is the canonical pairing) do not converge unless $\pi_v$ is square-integrable. However, it is possible to regularize $I_v$, and by the Casselman--Shalika formula, almost everywhere we have $I_v(\varphi_v,\varphi_v^\vee)=\Delta_{G,v}(1)L(1,\pi_v,\Ad)^{-1}$ if $\varphi_v$, $\varphi_v^\vee$ are unramified vectors with $(\varphi_v,\varphi_v^\vee)=1$ and $\Delta_{G,v}(1)$ is a certain $L$-factor depending on $G_v$ but not $\pi_v$. By local multiplicity one there exists a constant $\mainconst{\pi}$ depending on $\pi$ such that \begin{equation} \label{eq: gllc} \whit^{\psi_N}(\varphi)\whit^{\psi_N^{-1}}(\varphi^\vee) =(\mainconst{\pi}\vol(G(F)\bs G(\A)^1))^{-1}\frac{\Delta_G^S(s)}{L^S(s,\pi,\Ad)}\big|_{s=1}\prod_{v\in S}I_v(\varphi_v,\varphi_v^\vee) \end{equation} for all $\varphi=\otimes\varphi_v\in\pi$, $\varphi^\vee=\otimes_v\varphi_v^\vee\in\pi^\vee$ and all $S$ sufficiently large. Implicit here is the existence and non-vanishing of $\frac{\Delta_G^S(s)}{L^S(s,\pi,\Ad)}\big|_{s=1}$ (or equivalently, of $\lim_{s\rightarrow 1}(s-1)^lL^S(s,\pi,\Ad)$ where $l$ is the dimension of the split part of the center of $G$).
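For instance, for $G=\GL_n$, where $l=1$, and with the convention of the above footnote understood (so that, for $\GL_n$, $L(s,\pi,\Ad)=L(s,\pi\otimes\pi^\vee)$ rather than its quotient by $\zeta_F(s)$), the Rankin--Selberg theory of Jacquet--Shalika gives, for $\pi$ cuspidal, that
\[
\lim_{s\rightarrow 1}(s-1)L^S(s,\pi\otimes\pi^\vee)
\]
exists and is non-zero, illustrating the existence and non-vanishing implicit in \eqref{eq: gllc}.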
The Rankin--Selberg theory for $\GL_n$ alluded to above shows that $\mainconst{\pi}=1$ for any irreducible cuspidal representations of $\GL_n$. (See \S\ref{sec: GLn}. A similar result was proved independently by Sakellaridis--Venkatesh \cite{1203.0039}.) It is desirable to extend this relation to other quasi-split groups. The first problem is that $\mainconst{\pi}$ depends on the automorphic realization of $\pi$. Therefore, in cases where there is no multiplicity one, it is not clear which $\pi$'s to take. There are (at least) two ways to approach this problem. One way is to use the notion of \emph{$\psi_N$-generic spectrum} as defined by Piatetski-Shapiro \cite{MR546599}. It is the orthogonal complement of the $L^2$ automorphic forms with vanishing Whittaker functions. We denote this space by $L^2_{\cusp,\psi_N}(G(F)\bs G(\A)^1)$. This space is multiplicity free and it is a meaningful problem to study $\mainconst{\pi}$ for the irreducible constituents of the $\psi_N$-generic spectrum. Another, more speculative way is to admit Arthur's conjectures (for the discrete spectrum) in a strong form, namely a canonical decomposition \[ L_{\disc}^2(G(F)\bs G(\A)^1)=\mathop{\widehat\oplus}\limits_{\phi}\overline{\canArthur_\phi} \] according to elliptic Arthur's parameters. This approach was taken by Sakellaridis--Venkatesh in \cite{1203.0039}. A good indication for its validity is a recent result of V. Lafforgue who established, in the function field case (say, in the split case), a canonical decomposition of $C_c^{\cusp}(G(F)\bs G(\A)/K\Xi,\overline{\Q_l})$ according to Langlands's parameters, for suitable compact open subgroups $K\subset G(\A)$ and a central lattice $\Xi$ \cite{1209.5352}. The difficulty with this approach in the number field case is that not only are Arthur's conjectures wide open, it is not even clear how to uniquely characterize the spaces $\canArthur_\phi$ since they cannot be pinned down purely representation-theoretically (at least by standard Hecke operators).\footnote{In \cite{1209.5352} Lafforgue introduces (in the function field case) additional symmetries of geometric nature on the space $G(F)\bs G(\A)/K\Xi$} Nevertheless, it turns out to be profitable to admit the existence of the spaces $\canArthur_\phi$, hypothetical as they may be. For the group $G=\GL_n$ the spaces $\canArthur_\phi$ are always irreducible. Moreover, $\canArthur_\phi$ is cuspidal if and only if it is generic and this happens if and only if $\phi$ is of Ramanujan type (i.e., it has trivial $\SL_2$-type). For other groups, the reducibility of $\canArthur_\phi$ is measured to a large extent by a certain finite group $\cent_\phi$ (and its local counterparts) attached to $\phi$ \cite{MR1021499}. This of course goes back to Labesse--Langlands (\cite{MR540902}, cf.~\cite{MR757954}). For instance, if $G$ is split then the group $\cent_\phi$ is the quotient of the centralizer of the image of $\phi$ in the complex dual $\widehat G$ of $G$ by the center of $\widehat G$. In particular, for $G=\GL_n$ we always have $\cent_\phi=1$. In general we expect that if $\phi$ is of Ramanujan type then $\canArthur_\phi\cap L^2_{\cusp,\psi_N}(G(F)\bs G(\A)^1)$ is irreducible. We denote the representation on this (hypothetical) space by $\pi^{\psi_N}(\phi)$. If $\phi$ is not of Ramanujan type (as happens for instance if $\canArthur_\phi$ is not contained in the cuspidal spectrum) then $\whit^{\psi_N}$ vanishes on $\canArthur_\phi$ -- see \cite{MR2784745}.
One is led to make the following: \begin{conjecture} \label{conj: intro} For any elliptic Arthur's parameter $\phi$ of Ramanujan type we have $\mainconst{\pi^{\psi_N}(\phi)}=\abs{\cent_\phi}$. \end{conjecture} The conjecture is inspired by recent conjectures and results of Ichino--Ikeda \cite{MR2585578} which sharpen the Gross--Prasad conjecture. (See \cite{Gan_symplecticlocal} for a recent extension of these conjectures by Gan--Gross--Prasad.) In fact, this goes back to early results of Waldspurger (\cite{MR783511, MR646366} -- see below). More recently, Sakellaridis--Venkatesh formulated conjectures in the much broader scope of periods over spherical subgroups (at least in the split case) \cite{1203.0039}. Conjecture \ref{conj: intro} can be viewed as a strengthening of the conjectures of \cite{1203.0039} in the case at hand. In \S\ref{sec: conjWhit} we will reduce this conjecture, under some natural compatibility assumptions on Arthur's conjecture, to the case where $G$ is semisimple and simply connected. For quasi-split classical groups one may formulate Conjecture \ref{conj: intro} more concretely thanks to the work of Cogdell--Kim--Piatetski-Shapiro--Shahidi, Ginzburg--Rallis--Soudry and others. To that end let us recall the descent method of Ginzburg--Rallis--Soudry \cite{MR2848523}. Let $G$ be a quasi-split classical group and $\psi_N$ as before. Starting from a set $\{\pi_1,\dots,\pi_k\}$ of (distinct) cuspidal representations of general linear groups $\GL_{n_i}$ of a certain self-duality type depending on $G$ and with $n_1+\dots+n_k=m$ where $m$ is determined by $G$, one constructs a $\psi_N$-generic cuspidal representation $\sigma=\sigma^{\psi_N}(\{\pi_1,\dots,\pi_k\})$ of $G(\A)$. Moreover, $\sigma$ is multiplicity free and any irreducible constituent of $\sigma$ has the isobaric sum $\pi_1\boxplus\dots\boxplus\pi_k$ as its functorial transfer to $\GL_m$ under the natural homomorphism of $L$-groups. In fact, combined with the results of Cogdell--Piatetski-Shapiro--Shahidi \cite{MR2767514} which unify and extend earlier work of Cogdell--Kim--Piatetski-Shapiro--Shahidi \cite{MR1863734, MR2075885} and Kim--Krishnamurthy \cite{MR2127169, MR2149370}, it is known that \emph{all} $\psi_N$-generic cuspidal representations of (quasi-split) classical groups are covered by descent. In particular, one can describe $L(1,\sigma,\Ad)$ in terms of known $L$-functions of $\GL_n$. The representation $\sigma$ is known to be irreducible for $\Orth(2n+1)$ (or equivalently, for $\SO(2n+1)$). It is expected to be irreducible in all cases except if one views $\SO(2n)$ (rather than $\Orth(2n)$) as a classical group (as we're forced to if we want to stick to connected groups): in that case $\sigma$ may decompose as $\tau\oplus\theta(\tau)$ where $\tau$ is irreducible and $\theta$ is an outer involution preserving $N$ and $\psi_N$. Conjecture \ref{conj: intro} translates into the following: \begin{conjecture} \label{conj: globalclassical} Let $\pi$ be an irreducible constituent of $\sigma^{\psi_N}(\{\pi_1,\dots,\pi_k\})$. Then \[ \mainconst{\pi}=\begin{cases}2^{k-2}&\text{if $G=\SO(2n)$ and }\theta(\pi)=\pi,\\2^{k-1}&\text{otherwise.}\end{cases} \] \end{conjecture} We remark that following Arthur's work \cite{Artendclass} and its follow-up by Mok \cite{1206.0882}, one expects to have multiplicity one for all classical groups (again, with the caveat that if $\SO(2n)$ is admitted as a classical group then multiplicity could be two).
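To illustrate how the powers of $2$ in Conjecture \ref{conj: globalclassical} match the group $\cent_\phi$ of Conjecture \ref{conj: intro}, consider for instance $G=\SO(2n+1)$, so that $\widehat G=\Sp_{2n}(\C)$. The parameter attached to $\pi_1\boxplus\dots\boxplus\pi_k$ decomposes into $k$ pairwise inequivalent irreducible summands of symplectic type, so by Schur's lemma its centralizer in $\Sp_{2n}(\C)$ consists of the elements acting by a sign $\epsilon_i=\pm1$ on the $i$-th summand, i.e.
\[
\cent_\phi\simeq\{\pm1\}^k/\{\pm1\},
\]
where the quotient is by the center $\{\pm1\}$ of $\Sp_{2n}(\C)$ embedded diagonally; hence $\abs{\cent_\phi}=2^{k-1}$, in accordance with Conjecture \ref{conj: globalclassical}.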
Thus, in all cases except for $\SO(2n)$ we could have formulated the conjecture for any $\psi_N$-generic representation whose functorial transfer to $\GL_m$ is $\pi_1\boxplus\dots\boxplus\pi_k$. At any rate, Arthur's work is not a prerequisite for the formulation of Conjecture \ref{conj: globalclassical}. In fact, one can easily modify Conjecture \ref{conj: globalclassical} for $\Gspin$ groups using the work of Asgari-Shahidi \cite{MR2219256, 1101.3467} and Hundley-Sayag \cite{MR2505178, 1110.6788}. We can also formulate an analogous conjecture for the metaplectic groups $\Mp_n$ -- the two-fold cover of the symplectic groups $\Sp_n$. (One expects multiplicity one to hold in this case as well.) While these groups are not algebraic, they behave in many respects like algebraic groups. In particular, the descent method applies to them (and gives rise to irreducible representations). For the metaplectic group, Conjecture \ref{conj: globalclassical} and the relation \eqref{eq: gllc} have to be modified as follows: \begin{conjecture} \label{conj: metplectic global} Assume that $\tilde\pi$ is the $\psi_N$-descent of $\{\pi_1,\dots,\pi_k\}$ to $\Mp_n$. Let $\pi$ be the isobaric sum $\pi_1\boxplus\dots\boxplus\pi_k$. Then \[ \whit^{\psi_N}(\varphi)\whit^{\psi_N^{-1}}(\varphi^\vee)= (2^k\vol(\Sp_n(F)\bs\Sp_n(\A)))^{-1}\Delta_{\Sp_n}^S(1)\frac{L^S(\frac12,\pi)}{L^S(1,\pi,\sym^2)}\prod_{v\in S}I_v(\varphi_v,\varphi_v^\vee). \] \end{conjecture} We note that in the case of $\Mp_n$, the image of the $\psi_N$-descent consists of the cuspidal $\psi_N$-generic spectrum whose $\psi$-theta lift to $\SO(2n-1)$ vanishes, where $\psi$ is determined by $\psi_N$. (See \cite[\S11]{MR2848523} for more details.) In the case $n=1$, this excludes the so-called exceptional representations. The case of the metaplectic two-fold cover of $\SL_2$ (i.e., $n=1$) goes back to the classical result of Waldspurger on the Fourier coefficients of half-integral weight modular forms \cite{MR646366} which was later generalized by many authors \cite{MR894322, MR1244668, MR629468, MR1233447, MR783554, MR1404335, MR2059949, MR2669637, 1308.2353}. Waldspurger used the Shimura correspondence as his main tool. While it is conceivable that, in general, the theta correspondence will reduce the conjecture for the metaplectic group to the case of $\SO(2n+1)$, this will not suffice by itself to prove the conjecture. A different approach, which was taken by Jacquet \cite{MR983610} and completed by Baruch--Mao (for $n=1$) \cite{MR2322488}, is via the relative trace formula. An important step in generalizing this approach to higher rank was taken by Mao--Rallis \cite{MR2656089}. We mention in passing an exciting new result by Wei Zhang \cite{WeiZhang} on the Gan--Gross--Prasad conjecture for unitary groups, which uses an analogous relative trace formula conceived by Jacquet--Rallis \cite{MR2767518}. Let us describe the contents of the paper. In \S\ref{sec: FCMC} we define the local integrals \eqref{eq: Fourier MC} in the $p$-adic case as stable integrals, in the sense that the integral over a sufficiently large compact open subgroup $U$ of $N$ is independent of $U$. This is closely related to the situation in \cite{MR581582}. The integrals \eqref{eq: Fourier MC} are compatible with the Jacquet integral and parabolic induction. In particular, we can compute them in the unramified case using the Casselman--Shalika formula. In the archimedean case we give an ad hoc definition for \eqref{eq: Fourier MC}, using the results of \cite[Ch. 15]{MR1170566}.
This is used in \S\ref{sec: conjWhit} to introduce $\mainconst{\pi}$ -- the main object of interest of this paper. We formulate Conjecture \ref{conj: intro} and show that it is compatible with restriction to subgroups containing the derived group as well as with projection by a central induced torus. Next we show that $\mainconst{\pi}=1$ in the case of the general linear group using Rankin--Selberg integrals (\S\ref{sec: GLn}). Consequently, Conjecture \ref{conj: intro} holds for both the general and the special linear group. We consider classical groups and the metaplectic group more closely in \S\ref{sec: classical groups} where we formulate Conjectures \ref{conj: globalclassical} and \ref{conj: metplectic global}. Finally, in \S\ref{sec: examples} we explicate and prove certain low rank cases of Conjecture \ref{conj: globalclassical}. These cases boil down to a relation (for small $n$) between a certain group of self-twists of a representation $\pi$ of $\GL_n(\A)$ and the isobaric decomposition of the functorial transfer under a certain representation of $\GL_n(\C)$. In a sequel to this paper, we will reduce Conjectures \ref{conj: globalclassical} and \ref{conj: metplectic global} to a local conjectural identity which will be proved for the metaplectic group in the $p$-adic case. \subsection{Acknowledgement} It is a pleasure to acknowledge the contribution of several mathematicians to this paper. First, we thank Joseph Bernstein and Herv\'e Jacquet for discussions leading to Propositions \ref{prop: Bernstein} and \ref{prop: Jacquetprop} respectively. Conversations with Yiannis Sakellaridis and Akshay Venkatesh were very beneficial towards the formulation of Conjecture \ref{conj: intro}. We thank David Soudry for explaining to us many fine points about the descent method on numerous occasions. We thank Jean-Pierre Labesse for correspondence which lead to Appendix \ref{sec: appendix}. We thank Michail Borovoi, Philippe Gille and Diana Shelstad for their help with Galois cohomology, Vincent Lafforgue for expounding on the material of \cite{1209.5352}, Kaoru Hiraga for clarifying some aspects of the book \cite{MR2918491} and Freydoon Shahidi for answering a question about \cite{MR2784745}. We would also like to thank Wee Teck Gan, Atsushi Ichino and Omer Offen for helpful discussions and suggestions. We thank the anonymous referee for his careful reading of an earlier version of the paper and for making constructive comments. We thank Joachim Schwermer and the Erwin Schr\"odinger Institute in Vienna for their support and for providing excellent working conditions for collaboration. We cannot close this introduction without recalling our dear mentor Steve Rallis. Both authors were privileged to benefit from his guidance as post-docs at the Ohio State University. His advice and passion for mathematics made a deep impact on us. The subject matter of this paper was close to Steve's heart. We hope that it is suitable to dedicate this work to his memory. \section{Fourier coefficients of matrix coefficients} \label{sec: FCMC} Let $\bf G$ be a quasi-split reductive group over a local field $F$ of characteristic $0$. If $\bf X$ is a smooth variety over $F$ and $S$ is an $F$-algebra, we use $X(S)$ to denote the $S$-points of $\bf X$, or simply $X$ to denote its $F$-points. We write $C^\infty(X)$ for the space of smooth functions on $X$. (In the $p$-adic case, this means the locally constant functions on $X$.)
We also write $C_c^\infty(X)$ for the space of compactly supported smooth functions on $X$. Let $\bf A$ be a maximal $F$-split torus of $\bf G$, ${\bf T}=C_{\bf G}({\bf A})$ (a maximal torus of $\bf G$, since $\bf G$ is quasi-split) and $\bf B=\bf T\ltimes \bf N$ a Borel subgroup containing $\bf A$ (defined over $F$). We denote by $\Phi$ the set of roots of $\bf A$, and by $\Phi_+$ (resp.~$\Delta_0$) the subset of positive indivisible (resp.~simple) roots with respect to $\bf B$. For any $\alpha\in\Phi_+$ let $\bf N_\alpha$ be the subgroup of $\bf N$ whose Lie algebra is the direct sum of the weight spaces corresponding to roots of $\bf T$ (over the algebraic closure of $F$) whose restriction to $\bf A$ is a multiple of $\alpha$. Let $W=\operatorname{Norm}_{G}(T)/T$ be the Weyl group of $G$ and $w_0$ the longest element of $W$. We fix a non-degenerate (continuous) character $\psi_N:N\rightarrow\C^*$, that is $\psi_N\big|_{N_\alpha}\not\equiv1$ for every $\alpha\in\Delta_0$. For any subgroup $N'$ of $N$ we denote the restriction of $\psi_N$ to $N'$ by $\psi_{N'}$. By a representation of $G$, we will always mean a smooth representation $(\pi,V)$ in the $p$-adic case (with the discrete topology on $V$) and a smooth Fr\'echet representation $(\pi,V)$ of moderate growth in the archimedean case. If $(\pi,V)$ is a representation of finite length, we write $(\pi^\vee,V^\vee)$ for the contragredient representation. Let $(\cdot,\cdot)=(\cdot,\cdot)_\pi$ be the canonical pairing on $V\times V^\vee$. For any pair $v\in V$, $v^\vee\in V^\vee$ we define the matrix coefficient $\mc_{v,v^\vee}(g)=(\pi(g)v,v^\vee)_{\pi}$. \label{sec: irrnot} We denote by $\Irr G$ the set of equivalence classes of irreducible representations of $G$. Recall that $\pi\in\Irr G$ is called \emph{square-integrable} if its cental character $\omega_\pi$ is unitary and any matrix coefficient lies in $L^2(Z\bs G,\omega_\pi)$ where $\bf Z$ is the center of $\bf G$. (The notation stands for the space of functions on $G$ which are $(Z,\omega_\pi)$-equivariant and which are square-integrable modulo $Z$.) We say that $\pi\in\Irr G$ is essentially square-integrable if some twist of $\pi$ by a (not necessarily unitary) character of $G$ is square-integrable. We denote by $\Irr_{\sqr}G$ the class of essentially square-integrable irreducible representations. In the $p$-adic case we will also write $\Irr_{\cusp}G$ for the set of supercuspidal representations in $\Irr G$. If $\pi\in\Irr_{\sqr}G$ then any matrix coefficient of $\pi$ belongs to the Harish-Chandra Schwartz space of the derived group of $G$ (\cite[Corollaire III.1.2]{MR1989693} -- $p$-adic case, \cite[Theorem 15.2.4]{MR1170566} -- archimedean case), and in particular, it is integrable over $N$ (\cite[Proposition II.4.5]{MR1989693}, \cite[Thereom 7.2.1]{MR929683}). This is not true for a general $\pi\in\Irr G$. The goal of this section is to make sense of the integral \[ \int_N\mc_{v,v^\vee}(n)\psi_N(n)^{-1}\ dn \] when it does not converge absolutely. Let $\JF_{\psi_N}(\pi)=\JF_{\psi_N}^G(\pi)$ be the twisted Jacquet module of $\pi$, namely, the quotient of $\pi$ by the closure of the span of $\pi(n)v-\psi_N(n)v$, $u\in N$, $v\in V_\pi$. In the $p$-adic case $\pi\mapsto\JF_{\psi_N}(\pi)$ is an exact functor. We say that $\pi$ is $\psi_N$-generic if $\JF_{\psi_N}(\pi)$ is nontrivial, in which case it is one dimensional (if $\pi\in\Irr G$). We denote by $\Irr_{\gen,\psi_N}G$ the set of equivalence classes of irreducible representations which are $\psi_N$-generic. 
\subsection{} We start with the $p$-adic case. Until further notice $F$ will be a $p$-adic field with ring of integers $\OO$. For any group $\bf H$ over $F$ denote by $\csgr(H)$ the set of compact open subgroups of $H$. Suppose that $U$ is a unipotent group over $F$ with a fixed Haar measure $du$. Recall that the group generated by a relatively compact subset of $U$ is relatively compact. In particular, the set $\csgr(U)$ is directed. \label{sec: stint} \begin{definition} \label{def: stable integral} Let $f$ be a smooth function on $U$. We say that $f$ has a \emph{stable integral} over $U$ if there exists $U_1\in\csgr(U)$ such that for any $U_2\in\csgr(U)$ containing $U_1$ we have \begin{equation} \label{eq: comval} \int_{U_2}f(u)\ du=\int_{U_1}f(u)\ du. \end{equation} In this case we write $\stint_U f(u)\ du$ for the common value \eqref{eq: comval} and say that $\stint_U f(u)\ du$ stabilizes at $U_1$. In other words, $\stint_U f(u)\ du$ is the limit of the net $(\int_{U_1}f(u)\ du)_{U_1\in\csgr(U)}$ with respect to the discrete topology of $\C$. \end{definition} \begin{remark} \label{rem: basicstable} \begin{enumerate} \item Clearly, if $f\in C_c^\infty(U)$ then $f$ has a stable integral. \item More generally, let $R$ (resp., $L$) be the right (resp., left) regular representation of $U$ on $C^\infty(U)$. We extend $R$ and $L$ to representations of the algebra of finite Borel measures of $U$ with compact support. Suppose that there exist $U_1,U_2\in\csgr(U)$ such that $R(e_{U_1})L(e_{U_2})f\in C_c^\infty(U)$ where $e_{U_i}$ is the Haar measure on $U_i$ with volume $1$, $i=1,2$. Then $f\in C^\infty(U)$ has a stable integral over $U$ and \[ \stint_U f(u)\ du=\int_U[R(e_{U_1})L(e_{U_2})f](u)\ du. \] In this case, we will say that $f$ is \emph{compactly supported after averaging}. \item It is not true in general that if $f\in L^1(U)\cap C^\infty(U)$ then $f$ has a stable integral. However, if it does, then the stable integral is equal to $\int_Uf(u)\ du$. \item If $f$ has a stable integral over $U$, then any right or left translate of $f$ by an element of $U$ has a stable integral (with the same value). \item \label{part: autstable} Similarly, if $\alpha$ is an automorphism of $U$ and $\stint_Uf(u)\ du$ is defined then $\stint_Uf(\alpha(u))\ du$ is defined and equals to $m_\alpha^{-1}\stint_Uf(u)\ du$ where $m_\alpha$ is the module of $\alpha$. \end{enumerate} \end{remark} \begin{proposition} \label{prop: stablemc} Let $(\pi,V)\in\Irr G$. Then for any $v\in V$, $v^\vee\in V^\vee$ the function $\psi_N^{-1}\cdot\mc_{v,v^\vee}\big|_N$ is compactly supported after averaging and hence has a stable integral over $N$. Moreover, if $K_0\in\csgr(G)$ and $v\in V^{K_0}$, $v^\vee\in (V^\vee)^{K_0}$ then \[ (v,v^\vee)_\pi^{\psi_N}:=\stint_{N(F)}(\pi(n)v,v^\vee)_\pi\psi_N(n)^{-1}\ dn \] stabilizes at $U_1\in\csgr(N)$ depending only on $K_0$. The bilinear form $(v,v^\vee)_\pi^{\psi_N}$ is $(N,\psi_N)$-equivariant in $v$ and $(N,\psi_N^{-1})$-equivariant in $v^\vee$. Thus, $(v,v^\vee)_\pi^{\psi_N}\equiv0$ unless $\pi\in\Irr_{\gen,\psi_N}G$, in which case $(\cdot,\cdot)_\pi^{\psi_N}$ descends to a non-degenerate pairing (denoted the same way) between the one-dimensional spaces $\JF_{\psi_N}(\pi)$ and $\JF_{\psi_N^{-1}}(\pi^\vee)$. \end{proposition} We will prove the proposition below. Note that if $\pi\in\Irr_{\sqr}$ then \[ (v,v^\vee)_\pi^{\psi_N}=\int_{N(F)}(\pi(n)v,v^\vee)\psi_N(n)^{-1}\ dn. 
\] \begin{remark} \label{rem: anydual} Suppose that we are given $\pi, \hat\pi\in\Irr G$ with a non-degenerate pairing $(\cdot,\cdot)$ between them. Then, by identifying $\hat\pi$ with $\pi^\vee$ we can make sense of $(\cdot,\cdot)^{\psi_N}$ in this context. \end{remark} \begin{remark} For a different approach to define $(\cdot,\cdot)^{\psi_N}$ (at least in the tempered case), see \cite{1203.0039}. \end{remark} \subsection{} In order to prove Proposition \ref{prop: stablemc} we will first need an auxiliary result which is based on \cite{MR581582}. Let $P=M\ltimes U$ be a standard parabolic subgroup of $G$ with its standard Levi decomposition. Let $P'=M'\ltimes U'$ be the standard parabolic subgroup of $G$ which is conjugate to the parabolic subgroup opposite to $P$. Denote by $W^M$ the Weyl group of $M$ and by $w_0^M$ the longest element in $W^M$. We identify $W^M\bs W$ with the set of left $W^M$-reduced elements of $W$. Denote by $w_M=w_0^Mw_0$ the longest element of $W^M\bs W$, so that $w_M^{-1}Mw_M=M'$. If $\sigma$ is a representation of $M$ then we write $\Ind_P^G\sigma=\Ind\sigma$ for the (normalized) parabolic induction. Recall the Bruhat decomposition \[ G=\cup_{w\in W^M\bs W}Pw N \] Also recall the Bruhat order on $W^M\bs W$ defined by $w_1\le w_2$ whenever $Pw_1N$ is contained in the closure of $Pw_2N$. We denote by $(\Ind_P^G\sigma)^\circ$ the $P'$-invariant space of sections in $\varphi$ which are supported on the big cell $Pw_MP'=Pw_MN=Pw_MU'$. Note that for any $\varphi\in (\Ind_P^G\sigma)^\circ$ the function $\varphi(w_M\cdot)$ on $U'$ is compactly supported. \begin{lemma} \label{lem: suppbc} Suppose that $\sigma$ is a representation of $M$ and $\pi=\Ind_P^G\sigma$. Then for any $\varphi\in\pi$ there exists $N_1\in\csgr(N)$ such that $\varphi_{N_1,\psi_{N_1}}:=\pi(\psi_{N_1}^{-1}e_{N_1})\varphi\in(\Ind\sigma)^\circ$. Moreover, let $K_0\in\csgr(G)$ and assume that $\varphi\in\pi^{K_0}$. Then we can choose $N_1$ above depending only on $K_0$, and the support of $\varphi_{N_1,\psi_{N_1}}$ on $w_MU'$ is bounded in terms $K_0$ only. \end{lemma} \begin{proof} This is proved exactly as in \cite[Lemma 2.2]{MR581582}. We show by induction on $\ell(w)$ that for any $w\in W^M\bs W$ there exists $N_1\in\csgr(N)$ such that $\varphi_{N_1,\psi_{N_1}}^{}$ vanishes on $\cup_{w'<w}Pw'N$. For $w=w_M$ we will obtain the lemma. The base of the induction ($w=e$) is the empty statement. Note that if $\varphi_{N_1,\psi_{N_1}}^{}$ vanishes on $Pw'N$ then the same holds for any $N_2\in\csgr(N)$ containing $N_1$. Therefore, for the induction step it will be enough to show the following statement for any $w\ne w_M$. \begin{multline*} \text{If $\varphi\rest_{\cup_{w'<w}Pw'N}\equiv0$ then we can choose $N_1\in\csgr(N)$ such that $\varphi_{N_1,\psi_{N_1}}^{}\rest_{PwN}\equiv0$.} \end{multline*} Since $w\ne w_M$, there exists $\alpha\in\Delta_0$ such that $w\alpha$ is a root of $A$ in the Lie algebra of $U$. Therefore $N_\alpha\subset N\cap w^{-1}Uw$ and $\psi_{N\cap w^{-1}Uw}\not\equiv1$. Let $N_2\in\csgr(N)$ be sufficiently large so that $\psi_{N_2\cap w^{-1}Uw}\not\equiv1$. Then \begin{align*} \varphi_{N_2,\psi_{N_2}}^{}(w)&=\int_{N_2}\varphi(wn)\psi_{N_2}(n)^{-1}\ dn\\&= \int_{N_2\cap w^{-1}Uw\bs N_2}\int_{N_2\cap w^{-1}Uw}\varphi(wn'n)\psi_{N_2}(n')^{-1}\psi_{N_2}(n)^{-1}\ dn'\ dn\\&= \int_{N_2\cap w^{-1}Uw\bs N_2}\varphi(wn)\psi_{N_2}(n)^{-1}\int_{N_2\cap w^{-1}Uw}\psi_{N_2}(n')^{-1}\ dn'\ dn=0. 
\end{align*} It follows that $\varphi_{N_1,\psi_{N_1}}^{}(wu)=(R(u)\varphi)_{uN_1u^{-1},\psi_{uN_1u^{-1}}}^{}(w)=0$ for any $N_1\in\csgr(N)$ and $u\in N$ such that \begin{equation} \label{eq: vancond} \psi_{uN_1u^{-1}\cap w^{-1}Uw}\not\equiv1. \end{equation} Clearly, the condition \eqref{eq: vancond} is right $N_1$-invariant in $u$. It is also left $N_w$-invariant where $N_w=N\cap w^{-1}Nw$ since $N_w$ normalizes $w^{-1}Uw$. By assumption, the support of $\varphi(w\cdot)$ on $N$ is compact modulo $N_w$. Choose a compact subset $\Omega\subset N$ such that the above support is contained in $N_w\Omega$. Choose $N_1\in\csgr(N)$ containing $\cup_{u\in\Omega}u^{-1}N_2u$. Thus, \eqref{eq: vancond} holds for $u\in\Omega$. Hence it holds for $u\in N_w\Omega N_1$. Thus $\varphi_{N_1,\psi_{N_1}}^{}$ vanishes on $wN_w\Omega N_1$. On the other hand, $\varphi_{N_1,\psi_{N_1}}^{}$ vanishes on $wN\setminus wN_w\Omega N_1$ by the support condition on $\varphi$. We conclude that $\varphi_{N_1,\psi_{N_1}}^{}$ vanishes on $wN$ and hence on $PwN$, as required. For the second statement we just need to observe that in the argument above, we can choose $\Omega$ to depend only on $K_0$. \end{proof} \begin{proof}[Proof of Proposition \ref{prop: stablemc}] By Jacquet's subrepresentation theorem, it is enough to consider the case where $\pi=\Ind_P^G\sigma$ (not necessarily irreducible) and $\sigma$ is a supercuspidal representation of $M$ (not necessarily unitary). We identify $\pi^\vee$ with $\Ind_P^G\sigma^\vee$ via the pairing \begin{equation} \label{eq: indpairing} (\varphi,\varphi^\vee)_\pi=\int_{P\bs G}(\varphi(g),\varphi^\vee(g))_\sigma\,dg. \end{equation} We will take the `measure' on $P\bs G$ by fixing a Haar measure on $U'$ and defining \begin{equation} \label{eq: measPG} \int_{P\bs G}f(g)\ dg=\int_{U'}f(w_Mu)\ du \end{equation} for any continuous function $f$ on $G$ satisfying $f(pg)=\modulus_P(p)f(g)$ for any $p\in P$, $g\in G$ where $\modulus_P$ is the modulus function of $P$. Let $\varphi\in\pi$, $\varphi^\vee\in\pi^\vee$. By the previous lemma there exists $N_1\in\csgr(N)$ such that $\varphi_{N_1,\psi_{N_1}}\in(\Ind\sigma)^\circ$. Similarly, there exists $N_2\in\csgr(N)$ such that $\varphi^\vee_{N_2,\psi_{N_2}^{-1}}\in(\Ind\sigma^\vee)^\circ$. Note that $R(\psi_{N_1}^{-1}e_{N_1})L(\psi_{N_2}e_{N_2})\mc_{\varphi,\varphi^\vee}=\mc_{\varphi_{N_1,\psi_{N_1}}^{},\varphi^\vee_{N_2,\psi_{N_2}^{-1}}}$. Therefore, upon replacing $\varphi$ and $\varphi^\vee$ by $\varphi_{N_1,\psi_{N_1}}$ and $\varphi^\vee_{N_2,\psi_{N_2}^{-1}}$ respectively we may assume that $\varphi\in(\Ind\sigma)^\circ$ and $\varphi^\vee\in(\Ind\sigma^\vee)^\circ$ and we will show that $\mc_{\varphi,\varphi^\vee}$ is compactly supported on $N$ in this case. By \eqref{eq: indpairing} and \eqref{eq: measPG} we have \begin{equation} \label{eq: MCu} (\pi(u)\varphi,\varphi^\vee)_\pi=\int_{U'}(\varphi(w_Mu_1u),\varphi^\vee(w_Mu_1))_\sigma\ du_1 \end{equation} for any $u\in N$. By the property of $\varphi^\vee$ the integral over $u_1$ can be taken over a compact subset. Write $u_1u=u_2u_3$ where $u_2\in N\cap M'$ and $u_3\in U'$. Then \[ \varphi(w_Mu_1u)=\sigma(w_Mu_2w_M^{-1})\varphi(w_Mu_3). \] Thus, in the integral on the right-hand side of \eqref{eq: MCu}, $u_3$ is confined to a compact set. Moreover, since the matrix coefficients of $\sigma$ are compactly supported modulo the center of $M$, $u_2$ is confined to a compact set as well. Hence, the same is true for $u$ as claimed. 
For the statement about the dependence on $K_0$ it suffices to use the corresponding statement in the previous lemma and the fact that there are only finitely many supercuspidal representations (up to twisting by unramified character) for a given level. We still have to show the non-vanishing of the pairing in the generic case. This will be done in Proposition \ref{prop: nontrivpsi_N} below. \end{proof} \begin{remark} The dependence of $U_1$ on $K_0$ in Proposition \ref{prop: stablemc} is not made explicit in the proof above. \end{remark} \subsection{The Jacquet integral} Next we consider the Jacquet integral. Let $P=M\ltimes U$ be a standard parabolic subgroup of $G$. Set $N_M=N\cap M$ and $N_{M'}=N\cap M'$. Recall that if $\pi\in\Irr_{\gen,\psi_N}G$ is a subquotient of $\Ind_P^G\sigma$ where $\sigma\in\Irr M$ then $\sigma\in\Irr_{\gen,\Mpsi{M}}M$ where $\Mpsi{M}$ is the character on $N_M$ given by $\Mpsi{M}(n)=\psi_N(w_M^{-1}nw_M)$. For any $\varphi\in\Ind\sigma$ choose $N_1\in\csgr(N)$ such that $\varphi_{N_1,\psi_{N_1}}^{}\in(\Ind\sigma)^\circ$. Then the integral \[ \varphi\mapsto\int_{U'}\varphi_{N_1,\psi_{N_1}}^{}(w_Mu)\psi_{U'}(u)^{-1}\ du \] converges and its projection to $\JF_{\Mpsi{M}}^M(\sigma)$ does not depend on the choice of $N_1$. Thus we get a map \[ J_\sigma^{\psi_N}:=\JF_{\psi_N}(\Ind_P^G\sigma)\rightarrow\JF_{\Mpsi{M}}^M(\sigma) \] which is in fact an isomorphism of vector spaces. Dually, from any $\Mpsi{M}$-Whittaker functional on $\sigma$ we construct a $\psi_N$-Whittaker functional on $\Ind\sigma$. Of course, this construction coincides with the usual one given by analytic continuation. By abuse of notation, we often view $J_\sigma^{\psi_N}$ as a map defined on $\Ind_P^G\sigma$ through the canonical projection $\Ind_P^G\sigma\rightarrow\JF_{\psi_N}(\Ind_P^G\sigma)$. \begin{proposition} \label{prop: jacquetdescent} Suppose that $\sigma\in\Irr_{\sqr,\gen,\Mpsi{M}}M$ and let $\pi=\Ind_P^G\sigma$. Identify $\pi^\vee$ with $\Ind_P^G\sigma^\vee$ as before. Then \begin{equation} \label{eq: redtosqrint} (\varphi,\varphi^\vee)_\pi^{\psi_N}= (J_\sigma^{\psi_N}(\varphi),J_{\sigma^\vee}^{\psi_N^{-1}}(\varphi^\vee))_\sigma^{\Mpsi{M}} \end{equation} for any $\varphi\in\pi$, $\varphi^\vee\in\pi^\vee$. \end{proposition} \begin{proof} As before, by Lemma \ref{lem: suppbc} we can assume that $\varphi\in(\Ind\sigma)^\circ$ and $\varphi^\vee\in(\Ind\sigma^\vee)^\circ$. In this case, by \eqref{eq: MCu}, the left-hand side of \eqref{eq: redtosqrint} is equal to \[ \int_{N_{M'}}\int_{U'}\int_{U'}(\varphi(w_Mu_1u_2u_3),\varphi^\vee(w_Mu_1))_\sigma\psi_N(u_2u_3)^{-1}\ du_1\ du_2\ du_3. \] By a change of variable we get \begin{align*} &\int_{N_{M'}}\int_{U'}\int_{U'}(\varphi(w_Mu_1u_3),\varphi^\vee(w_Mu_2))_\sigma\psi_N(u_1u_2^{-1}u_3)^{-1}\ du_1\ du_2\ du_3\\=& \int_{N_{M'}}\int_{U'}\int_{U'}(\varphi(w_Mu_3u_1),\varphi^\vee(w_Mu_2))_\sigma\psi_N(u_1u_2^{-1}u_3)^{-1}\ du_1\ du_2\ du_3\\=& \int_{N_M}\int_{U'}\int_{U'}(\sigma(u_3)\varphi(w_Mu_1),\varphi^\vee(w_Mu_2))_\sigma\psi_N(u_1u_2^{-1}w_M^{-1}u_3w_M)^{-1}\ du_1\ du_2\ du_3. \end{align*} Note that the integrals over $u_1$ and $u_2$ are effectively sums over finite sets which are independent of $u_3$ because of our assumption on $\varphi$ and $\varphi^\vee$, and for any $u_1$ and $u_2$, the integral over $u_3$ is absolutely convergent since $\sigma\in\Irr_{\sqr}M$. Thus the triple integral is absolutely convergent, which justifies the previous steps. 
We obtain \[ \int_{N_M}(\sigma(u_3)J_\sigma^{\psi_N}(\varphi),J_{\sigma^\vee}^{\psi_N^{-1}}(\varphi^\vee))_\sigma\Mpsi{M}(u_3)^{-1}\ du_3 \] which is the right-hand side of \eqref{eq: redtosqrint}, as required. \end{proof} \begin{remark} In fact, using induction in stages and the transitivity of the Jacquet integral, the proposition holds for any $\sigma\in\Irr_{\gen,\Mpsi{M}}M$, not necessarily essentially square integrable. \end{remark} We can now complete the remaining part of Proposition \ref{prop: stablemc}. \begin{proposition} \label{prop: nontrivpsi_N} Suppose that $\pi\in\Irr_{\gen,\psi_N}G$. Then the bilinear form $(\cdot,\cdot)_\pi^{\psi_N}$ is non-trivial. \end{proposition} \begin{proof} If $\pi$ is supercuspidal, this follows from \cite[Lemma 1.1]{MR729755} (which is stated for $\GL_n$, but proved in general). Alternatively, it follows from the proof of \cite[Lemma 3]{MR1816039}. In the general case, realize $\pi$ as a quotient of $\Ind_P^G\sigma$ where $\sigma$ is supercuspidal. By Proposition \ref{eq: redtosqrint} and the supercuspidal case, the bilinear form $(\cdot,\cdot)_{\Ind\sigma}^{\psi_N}$ is non-trivial. On the other hand $(\cdot,\cdot)_{\Ind\sigma}^{\psi_N}$ factors through $\pi\times\Ind\sigma^\vee$ since $\pi$ is the only $\psi_N$-generic subquotient of $\Ind\sigma$. Moreover, the embedding $\JF_{\psi_N^{-1}}(\pi^\vee)\rightarrow\JF_{\psi_N^{-1}}(\Ind\sigma^\vee)$ is an isomorphism since $\dim\JF_{\psi_N^{-1}}(\Ind\sigma^\vee)=1$. Hence, the restriction $(\cdot,\cdot)_{\Ind\sigma}^{\psi_N}$ to $\Ind\sigma\times\pi^\vee$ is non-zero. The proposition follows. \end{proof} We can also derive the following consequence. We thank Joseph Bernstein for this observation. \begin{proposition} \label{prop: Bernstein} For any left and right $G$-smooth function $f$ on $G$ the integral \[ \stint_N f(n)\psi_N(n)^{-1}\ dn \] is well defined. Moreover, for any $K_0\in\csgr(G)$ there exists $N_1\in\csgr(N)$ such that \[ \stint_N f(n)\psi_N(n)^{-1}\ dn=\int_{N_1}f(n)\psi_N(n)^{-1}\ dn \] for any bi-$K_0$-invariant function $f$ on $G$. \end{proposition} \begin{proof} We first prove it for $f\in C_c^\infty(G)$. In this case we use Plancherel inversion to write \[ f(x)=\int_{\Irr_{\temp}G}\tr(\pi(x)\pi(f))\ d\mu_{\operatorname{pl}}(\pi) \] where $d\mu_{\operatorname{pl}}$ is the Plancherel measure on $\Irr_{\temp}G$ the set of tempered representations of $G$. If $f$ is bi-$K_0$-invariant then only those $\pi$ such that $V_\pi^{K_0}\ne0$ contribute, namely only a finite number of compact tori. Integrating over a big compact open subgroup of $N$ and interchanging the integral we see that the integral stabilizes depending only on $K_0$ by the uniformity part of Proposition \ref{prop: stablemc}, since the trace is a finite sum of matrix coefficients corresponding to vectors in $V_\pi^{K_0}$. Of course, this also shows that there exists $N_1\in\csgr(N)$ depending only on $K_0$ such that \[ \int_{N_2}1_{K_0\gamma K_0}(n)\psi_N(n)^{-1}\ dn=0 \] for all $N_2\supset N_1$ and $\gamma$ outside the compact set $K_0N_1K_0$. The proposition follows. \end{proof} We also have the following closely related consequence which was kindly explained to us by Jacquet. 
\begin{proposition} \label{prop: Jacquetprop} Given $\varphi\in\ind_N^G\psi_N$, $\varphi^\vee\in\ind_N^G\psi_N^{-1}$, the matrix coefficient \[ (\pi(g)\varphi,\varphi^\vee)_{N\bs G} \] is compactly supported in $g\in G$.\footnote{Here $\ind$ denotes compact induction} \end{proposition} \begin{proof} For any $f\in C_c^\infty(G)$ we have \begin{multline*} \int_G(\pi(g)\varphi,\varphi^\vee)_{N\bs G}f(g)\ dg=\int_G\int_{N\bs G}\varphi(xg)\varphi^\vee(x)f(g)\ dx\ dg \\=\int_G\int_{N\bs G}\varphi(g)\varphi^\vee(x)f(x^{-1}g)\ dx\ dg= \int_{N\bs G}\int_{N\bs G}\varphi(g_1)\varphi^\vee(g_2)\int_Nf(g_2^{-1}ng_1)\psi_N(n)\ dn. \end{multline*} Suppose that $\varphi$ and $\varphi^\vee$ are right-$K_0$-invariant for some $K_0\in\csgr(G)$ and take $f$ to be bi-$K_0$-invariant. Then by the previous proposition \[ \varphi(g_1)\varphi^\vee(g_2)\int_Nf(g_2^{-1}ng_1)\psi_N(n)\ dn=0 \] if $f$ is supported outside a compact set depending only on $K_0$ and the support of $\varphi$, $\varphi^\vee$. The proposition follows. \end{proof} \begin{remark} \label{rem: archJacquet} It will be interesting to know whether the analogue of Proposition \ref{prop: Jacquetprop} holds in the archimedean case. Namely, suppose that $\varphi:G\rightarrow\C$ (resp., $\varphi^\vee$) is smooth, left $(N,\psi)$ (resp., $(N,\psi^{-1})$) equivariant and rapidly decreasing on $AK$ together with all its derivatives. Is the matrix coefficient \[ (\pi(g)\varphi,\varphi^\vee)_{N\bs G} \] necessarily a Schwartz function on $G$? \end{remark} \subsection{Unramified computation} Next we consider the unramified case. Suppose that $\bf G$ splits over an unramified extension of $F$. Let $K$ be a hyperspecial maximal compact subgroup of $G$ as in \cite[Corollary 4.6]{MR1474159}. Let $\Delta_G(s)$ be the $L$-factor of the dual $M^\vee$ to the motive $M$ introduced by Gross in [ibid.] (cf.~\cite[p.~79]{MR0230728}). It depends only on the $F$-isogeny class of $\bf G$. For instance, $\Delta_T(s)$ is the $L$-factor of the Artin representation of $\Gal(\bar F/F)$ on $X^*(\bf T)\otimes\C$ where $X^*(\bf T)$ is the lattice of algebraic characters of $\bf T$ (defined over the algebraic closure $\bar F$ of $F$). On the other hand, if $\bf G$ is split over $F$ of rank $r$ then \[ \Delta_G(s)=\prod_{i=1}^r\zeta_F(s+d_i-1) \] where $d_1,\dots,d_r$ are exponents of $\bf G$ and $\zeta_F(s)=(1-q_F^{-s})^{-1}$ is Tate's local factor corresponding to the trivial character of $F^*$ (where of course $q_F$ is the cardinality of the residue field of $F$). We denote by $\Irr_{\unr}G$ the set of unramified irreducible representations of $G$. \begin{proposition} \label{prop: unram} Suppose that $\pi\in\Irr_{\unr} G$ and $\psi_N$ is unramified in the sense of \cite[\S3]{MR581582}. Let $v_0$ and $v_0^\vee$ be unramified vectors in $\pi$ and $\pi^\vee$ respectively. Then \begin{equation} \label{eq: unramcomp} (v_0,v_0^\vee)_\pi^{\psi_N}=\vol(N\cap K)\frac{(v_0,v_0^\vee)_\pi\Delta_G(1)}{L(1,\pi,\Ad)}. \end{equation} \end{proposition} \begin{proof} Since the space of unramified vectors is one dimensional, it is enough to check the claim for a specific choice of non-zero vectors $v_0$, $v_0^\vee$. Write $\pi$ as a subrepresentation of $\Ind_B^G\chi$ where $\chi$ is an unramified character of $T$. (In fact, $\pi$ is a direct summand of $\Ind_B^G\chi$ but we will not use this fact.) Let $\varphi_0$ (resp.~$\varphi_0^\vee$) be the unramified vector in $\Ind_B^G\chi$ (resp.~$\Ind_B^G\chi^{-1}$) such that $\varphi_0(e)=\varphi_0^\vee(e)=1$. 
Note that the validity of \eqref{eq: unramcomp} is independent of any choice of Haar measure. We endow $G$ and $T$ with the `canonical' Haar measures described in the discussion preceding \cite[Proposition 4.7]{MR1474159}. We endow $N$ with the Haar measure such that $\vol(N\cap K)=1$. This gives rise to a Haar measure on $B=T\ltimes N$, which is compatible with the relation \eqref{eq: measPG}. It follows from \cite[Proposition 4.7]{MR1474159} (cf.~\cite{MR0213362, MR581580}) that \[ (\varphi_0,\varphi_0^\vee)_{\Ind\chi}=\frac{\vol_G(K)}{\vol_B(K\cap B)}=\frac{\vol_G(K)}{\vol_T(K\cap T)}=\frac{\Delta_T(1)}{\Delta_G(1)}. \] Therefore we need to show that (for the above choice of measures) \begin{equation} \label{eq: fromCS} (\varphi_0,\varphi_0^\vee)_{\Ind\chi}^{\psi_N}=\frac{\Delta_T(1)}{L(1,\pi,\Ad)}. \end{equation} However, by \eqref{eq: redtosqrint} we have \[ (\varphi_0,\varphi_0^\vee)_{\Ind\chi}^{\psi_N}=J_\chi^{\psi_N}(\varphi_0)J_{\chi^{-1}}^{\psi_N^{-1}}(\varphi_0^\vee) \] and \eqref{eq: fromCS} follows from the Casselman--Shalika formula \cite{MR581582}. To explain this, we introduce some more notation. Let $\widehat G$ be the complex dual group of $\bf G$ with Borel subgroup $\widehat B=\widehat T\ltimes\widehat N$. The Galois group $\Gal(\bar F/F)$ acts on $\widehat G$ and preserves $\widehat B$ and $\widehat T$. Let $X_*(\widehat T)$ be the lattice of algebraic co-characters of $\widehat T$. Let $\lambda\in X^*({\bf T})\otimes\C=X_*(\widehat T)\otimes\C$ be any element whose image under the canonical map $X^*({\bf T})\otimes\C\rightarrow\Hom(T,\C^*)$ is $\chi$ and let $\hat t_\chi\in\widehat T$ be the image of $\lambda$ under the canonical map $X_*(\widehat T)\otimes\C\rightarrow\widehat T(\C)$. Let $\hat{\mathfrak{n}}$ be the Lie algebra of $\widehat N$ and for each $\alpha\in\Phi_+$ let $\hat{\mathfrak{n}}_\alpha$ be the direct sum of the weight spaces of $\widehat T$ in $\hat{\mathfrak{n}}$ corresponding to the roots of $\bf T$ (over the algebraic closure of $F$) whose restriction to $\bf A$ is a multiple of $\alpha$. Thus, $\hat{\mathfrak{n}}=\dsum_{\alpha\in\Phi_+} \hat{\mathfrak{n}}_\alpha$. Suppose that $\bf G$ splits over an unramified extension $E/F$ and let $\sigma$ be the Frobenius element of $\Gal(E/F)$. By the Casselman--Shalika formula\footnote{This is written a little differently (but equivalently) in \cite{MR581582}. To compare the two expressions -- cf.~\cite[\S3]{MR581580}} we have \[ J_\chi^{\psi_N}(\varphi_0)=\prod_{\alpha\in\Phi_+} \det(1-q_F^{-1}\sigma\Ad\hat t_\chi\big|_{\hat{\mathfrak{n}}_\alpha}). \] Therefore (cf.~\cite[\S4]{MR581580}), \[ (\varphi_0,\varphi_0^\vee)_{\Ind\chi}^{\psi_N}=\det(1-q_F^{-1}\sigma\Ad\hat t_\chi\big|_{\hat{\mathfrak{n}}}) \det(1-q_F^{-1}\sigma\Ad\hat t_\chi^{-1}|_{\hat{\mathfrak{n}}})= \frac{\Delta_T(1)}{L(1,\pi,\Ad)}. \] The proposition follows. \end{proof} \subsection{Archimedean case} \label{sec: arch} Now consider the archimedean case. Let $\pi\in\Irr_{\gen,\psi_N}G$. We would like to define the pairing $(\cdot,\cdot)_\pi^{\psi_N}$. Suppose first that $\pi$ is tempered. We write $\pi$ as a direct summand of $\Ind_P^G\sigma$ where $P=M\ltimes U$ is a standard parabolic subgroup and $\sigma\in\Irr_{\sqr}M$. We have $\sigma\in\Irr_{\gen,\Mpsi{M}}M$. Identify $\pi^\vee$ with a direct summand of $\Ind_P^G\sigma^\vee$ as before. The Jacquet integral $J^{\psi_N}_\sigma$ still makes sense in this context. 
Namely, there is a unique isomorphism of vector spaces \[ \JF_{\psi_N}(\Ind_P^G\sigma)\rightarrow\JF_{\Mpsi{M}}^M(\sigma) \] such that the resulting map \[ J_\sigma^{\psi_N}:\Ind_P^G\sigma\rightarrow\JF_{\Mpsi{M}}^M(\sigma) \] extends the map \[ \varphi\mapsto\int_{U'}\varphi(w_Mu)\psi_{U'}(u)^{-1}\ du, \ \ \varphi\in(\Ind\sigma)^\circ \] (or rather its composition with the natural projection $\sigma\rightarrow\JF_{\Mpsi{M}}^M(\sigma)$) where $(\Ind\sigma)^\circ$ is defined as in the $p$-adic case \cite[Theorem 15.4.1]{MR1170566}. Thus we will \emph{define} the pairing $(v,v^\vee)_\pi^{\psi_N}$ to be right-hand side of \eqref{eq: redtosqrint}. Once again, this will descend to a pairing between $\JF_{\psi_N}(\pi)$ and $\JF_{\psi_N^{-1}}(\pi^\vee)$. The pairing $(v,v^\vee)_\pi^{\psi_N}$ does not depend on the choice of $\sigma$. Indeed, suppose that $\pi$ is a direct summand of $\Ind_{P'}^G\sigma'$ where $\sigma'\in\Irr_{\sqr}M'$ for standard parabolic subgroup $P'=M'\ltimes U'$ of $G$. Then there exists $w\in W$ such that $w$ is right $W_M$-reduced, $wMw^{-1}=M'$ and $\sigma'=w\sigma$. We identify $\JF_{\Mpsi{M}}^M(\sigma)$ with $\JF_{\Mpsi{M'}}^{M'}(\sigma')$ through $w$. We can use the results of Shahidi \cite{MR1070599} to define normalized intertwining operators \[ R(\sigma,w):\Ind_P\sigma\rightarrow\Ind_{P'}\sigma', \] and similarly for $\sigma^\vee$, such that \[ J_{\sigma'}^{\psi_N}\circ R(\sigma,w)=J_\sigma^{\psi_N},\,\, J_{\sigma'^\vee}^{\psi_N^{-1}}\circ R(\sigma^\vee,w)=J_{\sigma^\vee}^{\psi_N^{-1}} \] and $(R(\sigma,w)\varphi,R(\sigma^\vee,w)\varphi^\vee)=(\varphi,\varphi^\vee)$ for any $\varphi\in\pi$, $\varphi^\vee\in\pi^\vee$. More precisely, $R(\sigma,w)$ is given by normalizing the standard intertwining operator by the local coefficients of \cite{MR1070599}. Alternatively, following \cite{1203.0039}, we could define $(v,v^\vee)_\pi^{\psi_N}$ (in the tempered case) as follows. (See \cite{LMao4}.) Let $N^\circ$ be the derived group of $N$. The integral $\int_{N^\circ}(\pi(n\cdot)v,v^\vee)\ dn$ converges and defines an $L^2$ function on $N^\circ\bs N$. Its Fourier transform is regular on the open set of non-degenerate characters of $N$. Its value at $\psi_N^{-1}$ is by definition $(v,v^\vee)_\pi^{\psi_N}$. In the general case, we use the Langlands classification to write $\pi$ as a subrepresentation of $\Ind_P^G\sigma$ where $P=M\ltimes U$ is a standard parabolic and $\sigma\in\Irr_{\gen,\Mpsi{M}}M$ is essentially tempered. (In fact, $\Ind_P^G\sigma$ is irreducible.) As before, we identify $\pi^\vee$ with a quotient of $\Ind_P^G\sigma^\vee$. Once again, the Jacquet integral $J^{\psi_N}_\sigma$ gives rise to an isomorphism of vector spaces \[ J_\sigma^{\psi_N}:\JF_{\psi_N}(\Ind_P^G\sigma)\rightarrow\JF_{\Mpsi{M}}^M(\sigma) \] \cite[Theorem 15.6.7]{MR1170566}. We thus define the pairing $(v,v^\vee)_\pi^{\psi_N}$ to be the right-hand side of \eqref{eq: redtosqrint}. It descends to a pairing between $\JF_{\psi_N}(\pi)$ and $\JF_{\psi_N^{-1}}(\pi^\vee)$. By abuse of notation, if $f=\mc_{v,v^\vee}$ we will formally write \[ \stint_Nf(n)\psi_N(n)^{-1}\ dn:=(v,v^\vee)_\pi^{\psi_N}. \] \begin{remark} It will be interesting to find a purely function-theoretic way to define \[ \stint_Nf(n)\psi_N(n)^{-1}\ dn \] as we did in the $p$-adic case. 
\end{remark} If $S$ is a finite set of places, $\pi_S\in\Irr_{\gen,\psi_{N(F_S)}}G(F_S)$, $u=\otimes_{v\in S}u_v\in\pi_S$, $u^\vee=\otimes_{v\in S}u_v^\vee\in\pi_S^\vee$ we write \[ (u,u^\vee)_{\pi_S}^{\psi_N}:=\prod_{v\in S}(u_v,u_v^\vee)_{\pi_v}^{\psi_N}. \] We extend it to a bilinear form on $\pi_S\times\pi_S^\vee$ by linearity. Of course, as before the definition depends on a choice of a Haar measure on $N(F_S)$. \subsection{Metaplectic group} \label{sec: metalocal} The results of this section have analogues for the metaplectic group $\Mp_n$, the two-fold cover of the rank $n$ symplectic group $\Sp_n$. (Any representation of $\Mp_n$ will be implicitly assumed to be genuine.) The maximal unipotent subgroup $N$ of $\Sp_n$ embeds uniquely in $\Mp_n$ and we fix a non-degenerate character $\psi_N$ of $N$. Uniqueness of Whittaker model in this context was proved by Szpruch in the $p$-adic case \cite{MR2366363}. In the archimedean case \cite[Ch.~15]{MR1170566} is still applicable. Propositions \ref{prop: stablemc}, \ref{prop: jacquetdescent} and \ref{prop: nontrivpsi_N} and their proofs hold with minimal changes. In particular, we can define $(v,v^\vee)_\pi^{\psi_N}$ at least in the $p$-adic case. In the archimedean case we will define $(v,v^\vee)_\pi^{\psi_N}$ as in \S\ref{sec: arch} and \emph{assume} that this is unambiguous. (This can probably be checked using the results of \cite{Sz}.) Assume that $q$ is odd. Let $\pi$ be an unramified representation of $\Mp_n$. Then we have \begin{equation} \label{eq: metunram} (v_0,v_0^\vee)_\pi^{\psi_N}=\vol(N\cap K)\frac{(v_0,v_0^\vee)L_{\psi}(\frac12,\pi)\Delta_{\Sp_n}(1)}{L(1,\pi,\Ad)}. \end{equation} Here $\psi$ is a character of $F$ depending on $\psi_N$ and the factor $L_{\psi}(\frac12,\pi)$ in the numerator is the Shimura unramified local factor corresponding to $\pi$ and $\psi$. It is equal to $L(\frac12,\tau)$ when $\tau$ is the $\psi$-lift of $\pi$ to $\GL_{2n}$. (Cf. \cite{MR1722953} for precise definitions; recall that changing $\psi_N$ results in twisting $\tau$ by a quadratic character.) The factor $L(1,\pi,\Ad)$ in the denominator is defined to be $L(1,\tau,\sym^2)$. (Alternatively, we could have also defined it directly in terms of the parameters of $\pi$. In particular, it does not depend on the choice of $\psi_N$.) The proof of \eqref{eq: metunram} is the same as \eqref{eq: unramcomp}, except that instead of the Casselman--Shalika formula we use its metaplectic analogue due to Bump--Friedberg--Hoffstein \cite{MR1115113}. \section{Conjecture about Whittaker coefficients} \label{sec: conjWhit} \subsection{} \label{sec: cpidef} Now let us turn to the global case. That is, $\bf G$ will be a quasi-split group over a number field $F$, $\bf A$ a fixed maximal $F$-split torus of $\bf G$, ${\bf T}=C_{\bf G}(\bf A)$ and $\bf B=\bf T\ltimes \bf N$ a Borel subgroup (defined over $F$). Let $\A$ be the ring of adeles of $F$ and let $\psi_N$ be a non-degenerate character of $N(\A)$ which is trivial on $N(F)$. Denote by $\abs{\cdot}_{\A^*}$ the idele norm $\abs{\cdot}_{\A^*}:\A^*\rightarrow\R_{>0}$. As usual, let \[ G(\A)^1=\cap_{\chi}\Ker\abs{\chi}_{\A^*} \] where $\chi$ ranges over the lattice of (one-dimensional) $F$-rational characters of $\bf G$ and we extend $\chi$ to a homomorphism $\chi:G(\A)\rightarrow\A^*$. Thus $G(\A)^1$ is normal in $G(\A)$ and the quotient $G(\A)/G(\A)^1$ is isomorphic to $\R^l$ where $l$ is the rank of the split part of the center of $G$. We have $\vol(G(F)\bs G(\A)^1)<\infty$. 
Denote by $L^2_{\cusp}(G(F)\bs G(\A)^1)$ the cuspidal part of $L^2(G(F)\bs G(\A)^1)$. We write $\Cusp G$ for the set of equivalence classes of irreducible cuspidal representations of $G(\A)$. We take the Tamagawa measure on $N(\A)$ so that $\vol(N(F)\bs N(\A))=1$. Let $\whit^{\psi_N}$ be the $\psi_N$-th Whittaker--Fourier coefficient of a function $\varphi$ on $G(F)\bs G(\A)$, i.e. \[ \whit^{\psi_N}(g,\varphi)=\int_{N(F)\bs N(\A)}\varphi(ng)\psi_N(n)^{-1}\ dn. \] We often write $\whit^{\psi_N}(\varphi)=\whit^{\psi_N}(e,\varphi)$. Let $\pi$ be an irreducible automorphic cuspidal representation of $G(\A)$ realized in $L^2_{\cusp}(G(F)\bs G(\A))$. We assume that $\pi$ is $\psi_N$-generic, i.e. that $\whit^{\psi_N}$ does not vanish identically on the space of $\pi$. We realize the contragredient $\dual\pi$ automorphically as $\{\overline{\varphi}:\varphi\in\pi\}$. In other words, the pairing $(\cdot,\cdot)_\pi$ is given by \begin{equation} \label{eq: inner product} \int_{G(F)\bs G(\A)^1}\varphi(g)\dual\varphi(g)\ dg, \ \ \varphi\in\pi, \dual\varphi\in\dual\pi. \end{equation} Note that $\dual\pi$ is $\psi_N^{-1}$-generic. We will assume that the following property is satisfied. \begin{equation} \label{eq: Ad propr} \text{The partial $L$-function $L^S(s,\pi,\Ad)$ has a pole of order $l$ at $s=1$.} \end{equation} This is expected to hold in general. It is known for $\GL_n$ (and therefore, for $\SL_n$) and for classical groups (see \S\ref{sec: classical groups} below). Let $S$ be a finite set of places containing the archimedean places and such that $\bf G$ and $\psi_N$ are unramified outside $S$. Let $\K=\prod K_v$ be a maximal compact subgroup of $G(\A)$ which is special at all (finite) $v$ and hyperspecial for all $v\notin S$. Also set ${\K}^S={\K}\cap G(\A^S)$. We take the Haar measure on $N(F_S)$ such that $\vol({\K}^S\cap N(\A^S))=1$ with respect to the measure on $N(\A^S)$ which is compatible with the decomposition $N(\A)=N(F_S)\times N(\A^S)$. If $\pi$ is unramified outside $S$ then $\pi_S:=\pi^{{\K}^S}$, the ${\K}^S$-fixed vectors of $\pi$, is an irreducible representation of $G(F_S)$. We can identify $\dual{(\pi_S)}$ with $(\dual\pi)_S$ using the pairing $(\cdot,\cdot)_\pi$, i.e., we take $(\cdot,\cdot)_{\pi_S}=(\cdot,\cdot)_\pi$. Then by local uniqueness of Whittaker model and the non-vanishing of $(\cdot,\cdot)_{\pi_S}^{\psi_N}$ there exists a non-zero constant $\mainconst{\pi}$ such that for any such $S$ and for any $\varphi\in\pi_S=\pi^{{\K}^S}$, $\varphi^\vee\in\pi_S^\vee=(\pi^\vee)^{{\K}^S}$ we have \begin{equation} \label{def: cpi} \whit^{\psi_N}(\varphi)\whit^{\psi_N^{-1}}(\varphi^\vee)= (\mainconst{\pi}\vol(G(F)\bs G(\A)^1))^{-1}\lim_{s\rightarrow 1}\frac{\Delta_G^S(s)}{L^S(s,\pi,\Ad)}(\varphi,\varphi^\vee)_{\pi_S}^{\psi_N} \end{equation} where now $\Delta_G^S(s)$ is the partial $L$-function of the dual $M^\vee$ of the motive $M$ of \cite[\S1]{MR1474159}. For instance, if $\bf G$ is split over $F$ then $\Delta_G^S(s)=\prod_{i=1}^r\zeta_F^S(s+d_i-1)$ where $d_1,\dots,d_r$ are the exponents of $\bf G$ and $\zeta_F^S(s)=\prod_{v\notin S}\zeta_{F_v}(s)$, $\Re s>1$ is the partial Dedekind zeta function. In general, $\Delta_G^S(s)$ has a pole of order $l$ at $s=1$. By the unramified computation \eqref{eq: unramcomp} $\mainconst{\pi}$ does not depend on the choice of $S$. It also does not depend on the choice of Haar measure on $G(\A)$. 
However, it depends on the automorphic realization of $\pi$, not just on $\pi$ as an abstract representation, unless of course $\pi$ has multiplicity one in the cuspidal spectrum. Note that \begin{equation} \label{eq: twistconst} \mainconst{\pi\otimes\omega}=\mainconst{\pi} \end{equation} for any character $\omega$ of $G(F)\bs G(\A)$. \begin{remark} In principle, we could have considered the discrete, rather than cuspidal spectrum. However, if we admit Arthur's conjectures (see below) then $\whit^{\psi_N}$ vanishes on (the smooth part) of the residual spectrum of $G$ (namely, on the orthogonal complement of the cuspidal spectrum in the discrete spectrum of $L^2(G(F)\bs G(\A)^1)$). For the group $G=\GL_m$, the vanishing of $\whit^{\psi_N}$ on the residual spectrum follows (unconditionally) from the description of the latter by M\oe glin--Waldspurger \cite{MR1026752}. \end{remark} A similar relation holds for $\Mp_n$. As usual, $\Mp_n(\A)$ is the two-fold cover of $\Sp_n(\A)$ which splits over $\Sp_n(F)$. Let $\bf N$ be the standard maximal unipotent subgroup of $\Sp_n$ and $\psi_N$ a non-degenerate character of $N(\A)$ (viewed as a subgroup of $\Mp_n(\A)$), trivial on $N(F)$. Let $\psi$ be the corresponding character of $F\bs\A$ as in the local case (see \S\ref{sec: metalocal}). The pairing on $\Sp_n(F)\bs\Mp_n(\A)$ of two genuine functions $\varphi_1$, $\varphi_2$ is defined by $\int_{\Sp_n(F)\bs\Sp_n(\A)}\varphi_1(g)\varphi_2(g)\ dg$. Let $\tilde\pi$ be an irreducible genuine cuspidal automorphic representation of $\Mp_n$. For simplicity assume that the $\psi$-theta lift of $\tilde\pi$ to $\SO(2n-1)$ vanishes. In this case it is a consequence of the descent method of Ginzburg--Rallis--Soudry \cite{MR2848523} that $L^S(1,\tilde\pi,\Ad)$ is defined. (See \S\ref{sec: classical groups} below.) By \eqref{eq: metunram} \begin{equation} \label{eq: metacpi} \whit^{\psi_N}(\varphi)\whit^{\psi_N^{-1}}(\varphi^\vee)= (\mainconst{\tilde\pi}\vol(\Sp_n(F)\bs\Mp_n(\A)))^{-1} L_{\psi}^S(\frac12,\tilde\pi) \frac{\Delta_{\Sp_n}^S(s)}{L^S(1,\tilde\pi,\Ad)}(\varphi,\varphi^\vee)_{\tilde\pi_S}^{\psi_N} \end{equation} where $\mainconst{\tilde\pi}$ is independent of $S$ or the Haar measure on $\Sp_n(\A)$. Here we take the Haar measure on $\Mp_n(\A)$ so that $\vol(\Sp_n(F)\bs\Mp_n(\A))=2\vol(\Sp_n(F)\bs\Sp_n(\A))$. Note that $\Delta_{\Sp_n}^S(s)=\prod_{i=1}^n\zeta_F^S(2i)$. The main question that we shall study in this paper is what is the value of $\mainconst{\pi}$ both in the algebraic and the metaplectic case. \subsection{} At this point we will assume Arthur's conjectures \cite{MR1021499}. Actually, here we only care about the discrete spectrum but we will need a slightly stronger form of the conjectures which is not strictly speaking made explicit in [ibid.]. (See \cite[Conjecture 2A]{MR2331344}, \cite{1203.0039}, \cite{MR2784745} and \cite{MR2320317} for follow-up conjectures. Our formulation takes these supplements into account, but it is phrased somewhat differently.) First, we admit the existence of the Langlands group $\Langlands_F$, a locally compact group whose irreducible $n$-dimensional representations classify cuspidal representations of $\GL_n(\A)$ \cite{MR546619}. Let $W_F$ be the Weil group of $F$. The group $\Langlands_F$ comes equipped with a surjective homomorphism \begin{equation} \label{eq: langweil} \Langlands_F\rightarrow W_F \end{equation} whose kernel is a perfect group, as well as with embeddings of the local Weil groups $W_{F_v}\hookrightarrow\Langlands_F$ for any place $v$. 
Let $^LG=\widehat G\rtimes W_F$ be the $L$-group of $G$, where $\widehat G$ is the complex dual group of $\bf G$ and $W_F$ acting through the action of $\Gamma:=\Gal(E/F)$ where $E/F$ is a finite Galois extension over which $\bf G$ splits. Recall that \begin{equation} \label{eq: zhatg} Z(\widehat G)=\Ker[\widehat G\rightarrow \widehat{G_{\SC}}] \end{equation} where $\bf G_{\SC}$ is the simply connected cover of the derived group of $\bf G$ (with a natural map $\bf G_{\SC}\rightarrow\bf G$). We denote by $Z(\widehat G)_u$ the maximal compact subgroup of $Z(\widehat G)$. By the restriction-inflation sequence the map \eqref{eq: langweil} gives rise to isomorphisms \[ H^1(\Langlands_F,Z(\widehat G))=H^1(W_F,Z(\widehat G)). \] (All cocycles are understood to be continuous; $H^1$ is defined with respect to continuous cocycles.) Define \begin{gather*} \ker^1(\Langlands_F,Z(\widehat G))=\Ker[H^1(\Langlands_F,Z(\widehat G))\rightarrow\prod_v H^1(W_{F_v},Z(\widehat G))],\\ H^1_{\loc}(\Langlands_F,Z(\widehat G))=H^1(\Langlands_F,Z(\widehat G))/\ker^1(\Langlands_F,Z(\widehat G)). \end{gather*} Once again, we have \begin{gather*} \ker^1(\Langlands_F,Z(\widehat G))=\ker^1(W_F,Z(\widehat G)):=\Ker[H^1(W_F,Z(\widehat G))\rightarrow\prod_v H^1(W_{F_v},Z(\widehat G))],\\ H^1_{\loc}(\Langlands_F,Z(\widehat G))=H^1_{\loc}(W_F,Z(\widehat G)):=H^1(W_F,Z(\widehat G))/\ker^1(W_F,Z(\widehat G)). \end{gather*} In particular, $\ker^1(\Langlands_F,Z(\widehat G))$ is finite. We also write $H^1_{\loc}(\Langlands_F,Z(\widehat G)_u)$ for the image of $H^1(\Langlands_F,Z(\widehat G)_u)$ in $H^1_{\loc}(\Langlands_F,Z(\widehat G))$. Once again, this coincides with $H^1_{\loc}(W_F,Z(\widehat G)_u)$, defined analogously. By Lemma \ref{lem: characters}, the group $H^1_{\loc}(W_F,Z(\widehat G))$ (resp., $H^1_{\loc}(W_F,Z(\widehat G)_u)$) is isomorphic to the group of characters (resp., unitary characters) of $G(F)\bs G(\A)$. We consider the set $\params(G)$ of elliptic Arthur's parameters. The elements of $\params(G)$ are equivalence classes of homomorphisms $\phi:\Langlands_F\times\SL_2(\C)\rightarrow\,^LG$ satisfying the following properties: \begin{enumerate} \item The composition of $\phi\rest_{\Langlands_F}$ with the canonical map $^LG\rightarrow W_F$ is the map \eqref{eq: langweil}. \item The projection of the image of $\phi\rest_{\Langlands_F}$ to $\widehat G$ is bounded. \item The restriction of $\phi$ to $\SL_2(\C)$ (the so-called $\SL_2$-type of $\phi$) is an algebraic homomorphism to $\widehat G$. \item \label{part: elliptic} The centralizer $C_{\widehat G}(\phi)$ of the image of $\phi$ in $\widehat G$ is finite modulo $Z(\widehat G)^\Gamma=Z(\widehat G)\cap C_{\widehat G}(\phi)$. \end{enumerate} Two such homomorphisms $\phi_1$, $\phi_2$ are equivalent if there exist $s\in\widehat G$ and a $1$-cocycle $z$ of $\Langlands_F$ in $Z(\widehat G)$ whose class in $H^1(\Langlands_F,Z(\widehat G))$ lies in $\ker^1(\Langlands_F,Z(\widehat G))$ such that $s\phi_1(x,y)s^{-1}=z(x)\phi_2(x,y)$ for all $x\in\Langlands_F$, $y\in\SL_2(\C)$. Given $\phi\in\params(G)$ let $D_\phi$ be the subgroup of $\widehat G$ consisting of elements $s\in\widehat G$ such that\footnote{Here $[x,y]$ is the commutator $xyx^{-1}y^{-1}$} $[s,\Img(\phi)]\subset Z(\widehat G)$. (In particular, $s$ commutes with the image of $\SL_2(\C)$.) 
Then $D_\phi$ contains $Z(\widehat G)$ and $C_{\widehat G}(\phi)$ and the map attaching to $s\in D_\phi$ the $1$-cocycle $x\mapsto[s,\phi(x)]$ of $\Langlands_F$ in $Z(\widehat G)$ induces a homomorphism $D_\phi/Z(\widehat G)\rightarrow H^1(\Langlands_F,Z(\widehat G))$ whose kernel can be identified with $C_{\widehat G}(\phi)/Z(\widehat G)^\Gamma$. We denote by $S_\phi$ (resp., $\cent_\phi$) the inverse image of $\ker^1(\Langlands_F,Z(\widehat G))$ in $D_\phi$ (resp., $D_\phi/Z(\widehat G)$). Note that the finiteness of $C_{\widehat G}(\phi)/Z(\widehat G)^\Gamma$ is equivalent to the finiteness of $\cent_\phi$. If $\Gamma$ is cyclic then in fact $\ker^1(\Langlands_F,Z(\widehat G))=1$ and $\cent_\phi=C_{\widehat G}(\phi)/Z(\widehat G)^\Gamma$. We will mostly deal with $\phi$'s which are of Ramanujan type, i.e., with trivial $\SL_2$-type. We denote by $\tparams(G)\subset\params(G)$ the set of parameters of Ramanujan type. We can think of them as homomorphisms $\phi:\Langlands_F\rightarrow\,^LG$ (Langlands's parameters) up the equivalence relation above (which is simply conjugation if $G$ splits over a cyclic extension) -- cf.~\cite[\S10]{MR757954}. The main assertion is the existence of a canonical orthogonal decomposition\footnote{Here $\overline{\canArthur_\phi}$ denotes the $L^2$-closure} \begin{equation} \label{eq: Arthur decomposition} L_{\disc}^2(G(F)\bs G(\A)^1)=\mathop{\widehat\oplus}\limits_{\phi\in\params(G)}\overline{\canArthur_\phi} \end{equation} into subspaces that are invariant under the adjoint action of $G_{\Gad}(F)$, where the unramified components of the irreducible constituents of $\canArthur_\phi$ are determined by $\phi$ (or more precisely, by the Langlands parameter $w\mapsto\phi(w,\sm{\abs{w}^{\frac12}}00{\abs{w}^{-\frac12}})$ associated to $\phi$). Of course, this condition by itself \emph{does not} determine $\canArthur_\phi$ uniquely. Multiplying a homomorphism $\phi:\Langlands_F\times\SL_2(\C)\rightarrow\,^LG$ by a $1$-cocycle of $\Langlands_F$ in $Z(\widehat G)_u$ gives rise to an action, denoted $\alpha\cdot\phi$, of $H^1_{\loc}(\Langlands_F,Z(\widehat G)_u)$ on $\params(G)$ and $\tparams(G)$. We will give ourselves that if $\omega$ is the unitary character of $G(F)\bs G(\A)$ corresponding to $\alpha\in H^1_{\loc}(\Langlands_F,Z(\widehat G)_u)$ then \begin{equation} \label{eq: twistcompatible} \canArthur_{\alpha\cdot\phi}=\canArthur_\phi\otimes\omega:=\{\varphi\omega:\varphi\in\canArthur_\phi\}. \end{equation} If $\phi$ is not of Ramanujan type then $\whit^{\psi_N}$ vanishes on $\canArthur_\phi$ (for local reasons -- see \cite{MR2784745}). Suppose that $\phi$ is of Ramanujan type. Then every constituent of $\canArthur_\phi$ is tempered almost everywhere (in fact, conjecturally everywhere \cite{MR2784745}), hence cuspidal \cite{MR733320}. We will \emph{assume} that the orthogonal complement $\canArthur_\phi^{\psi_N}$ of the space \[ \{\varphi\in\canArthur_\phi:\whit^{\psi_N}(\cdot,\varphi)\equiv0\} \] in $\canArthur_\phi$ is irreducible (and in particular non-zero) and we will denote by $\pi^{\psi_N}(\phi)$ the irreducible automorphic cuspidal representation of $G(\A)$ on $\canArthur_\phi^{\psi_N}$. \label{sec: pipsi} We remark that in the function field case V. 
Lafforgue has recently obtained (assuming for simplicity that $\ker^1(W_F,Z(\widehat G))=1$) a canonical decomposition analogous to \eqref{eq: Arthur decomposition} for the cuspidal spectrum, parameterized by Langlands's parameters (which in this case amount to suitable homomorphisms $\Gal(\bar F/F)\rightarrow\,^LG$ up to conjugation by $\widehat G$) \cite{1209.5352}. Every parameter which contributes should arise (conjecturally) from an Arthur parameter (which is uniquely determined). However, it is not clear at this stage whether one can attach a canonical irreducible automorphic cuspidal representation which plays the role of $\pi^{\psi_N}(\phi)$. \begin{remark} It will be interesting to give an alternative description, independent of Arthur's conjectures, of the space $\oplus_{\phi\in\tparams(G)}\canArthur_\phi^{\psi_N}$. Of course, for $\GL_m$ this space coincides with the cuspidal spectrum. \end{remark} Assuming the above setup we can now formulate the conjecture. \begin{conjecture} \label{conj: main} For any $\phi\in\tparams(G)$ we have $\mainconst{\pi^{\psi_N}(\phi)}=\abs{\cent_\phi}$. \end{conjecture} \begin{remark} \label{rem: indepsi} Let $\bf T$ be a maximal torus of $\bf G$ normalizing $\bf N$ and let ${\bf T}_{\Gad}={\bf T}/{\bf Z}({\bf G})$. Then for any $\phi\in\tparams(G)$ and $t\in T_{\Gad}(F)$ we have $\pi^{\psi_N\circ\Ad(t)}(\phi)=\{\varphi\circ\Ad(t^{-1}):\varphi\in\pi^{\psi_N}(\phi)\}$. Therefore, Conjecture \ref{conj: main} is independent of the choice of $\psi_N$, since $T_{\Gad}(F)$ acts transitively on the set of non-degenerate characters. Also, if $\alpha\in H^1_{\loc}(\Langlands_F,Z(\widehat G)_u)$ and $\omega$ is the corresponding unitary character of $G(F)\bs G(\A)$ then by \eqref{eq: twistcompatible} \begin{equation} \label{eq: gentwist} \pi^{\psi_N}(\alpha\cdot\phi)=\pi^{\psi_N}(\phi)\otimes\omega. \end{equation} This is of course consistent with Conjecture \ref{conj: main} (using \eqref{eq: twistconst}) since $\cent_{\alpha\cdot\phi}=\cent_\phi$. \end{remark} Regardless of Arthur's conjectures one can consider, following Piatetski-Shapiro, the orthogonal complement $L^2_{\cusp,\psi_N}(G(F)\bs G(\A)^1)$ in $L^2_{\cusp}(G(F)\bs G(\A)^1)$ of the subspace \[ \{\varphi\in L^2_{\cusp}(G(F)\bs G(\A)^1):\whit^{\psi_N}(\cdot,\varphi)\equiv0\text{ almost everywhere}\}. \] By local uniqueness of Whittaker model, the space $L^2_{\cusp,\psi_N}(G(F)\bs G(\A)^1)$ is multiplicity free (cf.~\cite{MR546599}). Hence, we could have tried to formulate our conjectures for the irreducible constituents of $L^2_{\cusp,\psi_N}(G(F)\bs G(\A)^1)$ instead of the hypothetical spaces $\pi^{\psi_N}(\phi)$. (Note that if $\pi$ is such a constituent then $\dual\pi$ is realized in $L^2_{\cusp,\psi_N^{-1}}(G(F)\bs G(\A)^1)$.) It is not clear whether in general one can expect a nice formula. However, in certain cases we can hope to get a handle on the space $L^2_{\cusp,\psi_N}(G(F)\bs G(\A)^1)$ and the constants $\mainconst{\pi}$ for its constituents. (See the discussion in \S\ref{sec: classical groups} below about classical groups.) \subsection{} \label{sec: restder} Consider the case of a connected reductive group $\bf\widetilde G$ defined and quasi-split over $F$ and a connected algebraic subgroup $\bf G$ of $\bf\widetilde G$ defined over $F$ containing the derived group $\bf\widetilde G^{\der}$ of $\bf\widetilde G$. This case was considered by Hiraga--Saito in \cite{MR2918491} following Labesse--Langlands \cite{MR540902}. 
Let $\tilde\pi$ be an irreducible cuspidal automorphic representation of $\widetilde G(\A)$. Let $X(\tilde\pi)$ be the group of characters $\omega$ of $\widetilde G(\A)$ which are trivial on $\widetilde G(F)G(\A)$ such that $\tilde\pi\otimes\omega=\tilde\pi$ (as physical spaces). It is a finite group (\cite[Lemma 4.11]{MR2918491}) which may a priori depend on the automorphic realization of $\tilde\pi$. Let $\bf T$ be the torus $\bf\widetilde G/\bf G$. Note that $\Delta_{\widetilde G}^S(s)=\Delta_T^S(s)\Delta_G^S(s)$. The following result is essentially proved in \cite{MR2918491}. For convenience we include some details. \begin{lemma} \label{lem: restG} Suppose that $\tilde\pi$ is an irreducible cuspidal $\psi_N$-generic representation of $\widetilde G(\A)$ realized on $V_{\tilde\pi}$. Assume that the space of $\tilde\pi\otimes\omega$ is orthogonal to that of $\tilde\pi$ for any $\omega\notin X(\tilde\pi)$. (This condition is of course automatically satisfied if the cuspidal multiplicity of $\tilde\pi$ is one.) Let $V'_{\tilde\pi}=\{\varphi\rest_{G(\A)}:\varphi\in V_{\tilde\pi}\}$. Then \begin{enumerate} \item $V'_{\tilde\pi}$ is the direct sum of distinct irreducible cuspidal representations of $G(\A)$. \item There is a unique $\psi_N$-generic irreducible constituent of $V'_{\tilde\pi}$. \item If $\pi$ is the $\psi_N$-generic irreducible constituent of $V'_{\tilde\pi}$ then $\mainconst{\pi}=\abs{X(\tilde\pi)}\mainconst{\tilde\pi}$. \end{enumerate} \end{lemma} \begin{proof} The first property follows from the fact that for any place $v$, the restriction of $\tilde\pi_v$ to $G(F_v)$ is a direct sum of distinct irreducible representations (since $\pi_v$ is generic -- see [ibid., Ch. 3]). It is clear that $\whit^{\psi_N}$ does not vanish on some irreducible constituent of $V'_{\tilde\pi}$. On the other hand, $\tilde\pi_v$ admits a unique irreducible constituent which is $\psi_{N(F_v)}$-generic. The second property follows. Let $\varphi_i$, $i=1,2$ be in the space of $\pi$ and let $\tilde\varphi_i$ be in the space of $\tilde\pi$ such that $\varphi_i=\tilde\varphi_i\rest_{G(\A)}$. Replacing $\tilde\varphi_i$ by $\sum_{\omega\in X(\tilde\pi)}\tilde\varphi_i\omega$, we may assume that $\tilde\varphi_i$ is supported in $\cap_{\omega\in X(\tilde\pi)}\Ker\omega$. By the Poisson summation formula we have \[ \vol(G(F)\bs G(\A)^1)^{-1}(\varphi_1,\varphi_2)_{G(F)\bs G(\A)^1}=\vol(\widetilde G(F)\bs\widetilde G(\A)^1)^{-1} \sum_\omega(\tilde\varphi_1\omega,\tilde\varphi_2)_{\widetilde G(F)\bs\widetilde G(\A)^1} \] where $\omega$ ranges over the characters of the compact abelian group $\widetilde G(F)G(\A)^1\bs\widetilde G(\A)^1$. By the condition on $\tilde\pi$, only $\omega\in X(\tilde\pi)$ give a (possibly) non-zero contribution. By the conditions on $\tilde\varphi_i$ we therefore get \[ \vol(G(F)\bs G(\A)^1)^{-1}(\varphi_1,\varphi_2)_{G(F)\bs G(\A)^1}=\vol(\widetilde G(F)\bs\widetilde G(\A)^1)^{-1} \abs{X(\tilde\pi)}(\tilde\varphi_1,\tilde\varphi_2)_{\widetilde G(F)\bs\widetilde G(\A)^1}. \] The relation between $\mainconst{\pi}$ and $\mainconst{\tilde\pi}$ follows. \end{proof} Let us derive an analogous result at the level of parameters. The embedding $\bf G\subset\bf\widetilde G$ gives rise to a homomorphism $^L\widetilde G\rightarrow\,^LG$ and hence to a map \begin{equation} \label{eq: prmstG} \params(\widetilde G)\rightarrow\params(G). \end{equation} Note that if $\tilde\phi\mapsto\phi$ under this map then the $\SL_2$ type of $\tilde\phi$ is determined by that of $\phi$. 
Let $\tilde\phi\in\params(\widetilde G)$ and define \[ X(\tilde\phi)=\{\alpha\in\Ker [H^1_{\loc}(\Langlands_F,Z(\widehat{\widetilde G})_u)\rightarrow H^1_{\loc}(\Langlands_F,Z(\widehat G)_u)]: \alpha\cdot\tilde\phi\sim\tilde\phi\}. \] Let $\phi\in\params(G)$ be the image of $\tilde\phi$ under the map \eqref{eq: prmstG}. \begin{lemma} \label{lem: paramside} We have a natural short exact sequence \[ 1\rightarrow\cent_{\tilde\phi}\xrightarrow{\iota}\cent_\phi\xrightarrow{\kappa} X(\tilde\phi)\rightarrow1. \] Hence, $\abs{\cent_\phi}=\abs{\cent_{\tilde\phi}}\abs{X(\tilde\phi)}$. \end{lemma} \begin{proof} We have a short exact sequence \[ 1\rightarrow\widehat T\rightarrow\widehat{\widetilde G}\xrightarrow{p}\widehat G\rightarrow1. \] Since $\bf\widetilde G^{\der}$ is semisimple, $\bf G^{\der}=\bf\widetilde G^{\der}$ and therefore $\bf G_{\SC}=\bf\widetilde G_{\SC}$. By \eqref{eq: zhatg}, $\widehat T\subset Z(\widehat{\widetilde G})$ and we get a short exact sequence \[ 1\rightarrow\widehat T\rightarrow Z(\widehat{\widetilde G})\rightarrow Z(\widehat G)\rightarrow1. \] In other words, \begin{equation} \label{eq: invimgZ} \text{$Z(\widehat{\widetilde G})$ is the inverse image under $p$ of $Z(\widehat G)$.} \end{equation} The projection $p$ induces a map $\cent_{\tilde\phi}\xrightarrow{\iota}\cent_\phi$. We claim that $\iota$ is injective. Indeed, suppose that $\tilde s\in S_{\tilde\phi}$ has trivial image in $\cent_\phi$ under $\iota$. Then $p(s)\in Z(\widehat G)$ and by \eqref{eq: invimgZ} this implies that $s\in Z(G)$. Next, we define the map $\cent_\phi\xrightarrow{\kappa} X(\tilde\phi)$. Let $s\in S_\phi$, i.e. $s\in\widehat G$ is such that $x\mapsto [s,\phi(x)]\in Z(\widehat G)$ defines a locally trivial cocycle of $\Langlands_F$ in $Z(\widehat G)$. Let $\tilde s\in\widehat{\widetilde G}$ be any lift of $s$. Then again by \eqref{eq: invimgZ}, $x\mapsto [\tilde s,\tilde\phi(x)]\in Z(\widehat{\widetilde G})$ so that it defines an element $\kappa'(s)$ in $H^1(\Langlands_F,Z(\widehat{\widetilde G})_u)$ whose image in $H^1(\Langlands_F,Z(\widehat G)_u)$ is locally trivial. Clearly $\kappa'(s)$ does not depend on the choice of $\tilde s$ and it depends only on the image $\bar s$ of $s$ in $\cent_\phi$. We define $\kappa(\bar s)$ to be the image of $\kappa'(s)$ in $H^1_{\loc}(\Langlands_F,Z(\widehat{\widetilde G})_u)$. Since $[\tilde s,\tilde\phi(x)]\tilde\phi(x)=\tilde s\tilde\phi(x)\tilde s^{-1}$ we have $\kappa(\bar s)\in X(\tilde\phi)$. It is also clear that $\kappa'(s)$ is locally trivial in $H^1(\Langlands_F,Z(\widehat{\widetilde G}))$ if and only if $\tilde s\in S_{\tilde\phi}$. Finally we show that $\kappa$ is onto. Suppose that $\beta$ is a $1$-cocycle in $H^1(\Langlands_F,Z(\widehat{\widetilde G})_u)$ such that $\beta\cdot\tilde\phi\sim\tilde\phi$ and the image of $\beta$ in $H^1(\Langlands_F,Z(\widehat G)_u)$ is locally trivial. Then there exists $\tilde s\in\widehat{\widetilde G}$ and a $1$-cocycle $\gamma$ of $\Langlands_F$ in $Z(\widehat{\widetilde G})$ whose image in $H^1(\Langlands_F,Z(\widehat{\widetilde G})_u)$ is locally trivial such that $\tilde s\tilde\phi(x)\tilde s^{-1}=\beta(x)\gamma(x)\tilde\phi(x)$ for all $x\in\Langlands_F$. If $s=p(\tilde s)$ we infer that $s\phi(x)s^{-1}=p(\beta(x)\gamma(x))\phi(x)$, so that $s\in S_\phi$ and $\kappa(\bar s)=\beta$. \end{proof} Let $\tilde\phi\in\params(\widetilde G)$ and $\phi\in\params(G)$ be as before. 
It is natural to assume that $\canArthur_\phi$ is the image of the space $\canArthur_{\tilde\phi}$ under restriction of functions to $G(\A)$. In view of \cite[Ch.~4]{MR2918491} it is also natural to assume that the map $H^1(\Langlands_F,\widehat{\widetilde G})\rightarrow H^1(\Langlands_F,\widehat G)$ is onto, so that the map $\params(\widetilde G)\rightarrow\params(G)$ is onto. For an analogous result for Weil groups see \cite{MR795713}. \begin{corollary} \label{cor: subgroup} Assume the above. Then Conjecture \ref{conj: main} holds for $\bf\widetilde G$ if and only if it holds for $\bf G$. \end{corollary} Indeed, it follows from our assumptions that if $\tilde\pi=\pi^{\psi_N}(\tilde\phi)$ then the $\psi_N$-generic constituent $\pi$ of the restriction of $\tilde\pi$ to $G(\A)$ is $\pi^{\psi_N}(\phi)$. Moreover, it follows from Lemma \ref{lem: characters} and \eqref{eq: gentwist} that $X(\tilde\pi)=X(\tilde\phi)$. By \eqref{eq: Arthur decomposition} and \eqref{eq: twistcompatible} the space of $\tilde\pi\otimes\omega$ is orthogonal to that of $\tilde\pi$ unless $\omega\in X(\tilde\pi)$. Finally $L^S(s,\tilde\pi,\Ad)=\Delta_T^S(s)L^S(s,\pi,\Ad)$. The corollary follows. \subsection{} \label{sec: projG} Consider now the case where $\bf G=\bf\widetilde G/\bf T$ where $\bf T$ is a central torus in $\bf\widetilde G$ which is \emph{induced}, i.e., a product of restrictions of scalars of split tori over finite extensions of $F$. Thus we have a short exact sequence \[ 1\rightarrow\widehat G\rightarrow\widehat{\widetilde G}\rightarrow\widehat T\rightarrow1. \] In particular, $\widehat{\widetilde G}^{\der}\subset\widehat G$ and therefore $\widehat{\widetilde G}=Z(\widehat{\widetilde G})\widehat G$. This also implies that the sequence \[ 1\rightarrow Z(\widehat G)\rightarrow Z(\widehat{\widetilde G})\rightarrow\widehat T\rightarrow1 \] is exact. We also have $\Delta_{\widetilde G}^S(s)=\Delta_G^S(s)\Delta_T^S(s)$. \begin{lemma} \label{lem: injindtorus} The map \begin{equation} \label{eq: kerinj} H^1_{\loc}(\Langlands_F,Z(\widehat G))\rightarrow H^1_{\loc}(\Langlands_F,Z(\widehat{\widetilde G})) \end{equation} is injective. \end{lemma} \begin{proof} By the assumption on $T$, for any subgroup $\Gamma'$ of $\Gamma$ the map $\widehat T\rightarrow\widehat T^{\Gamma'}$ given by $t\mapsto\prod_{\sigma\in\Gamma'}t^\sigma$ is surjective. It follows that for any surjective homomorphism of $\Gamma'$-modules $S\rightarrow\widehat T$, the map $S^{\Gamma'}\rightarrow\widehat T^{\Gamma'}$ is also surjective. In particular, $Z(\widehat{\widetilde G})^{\Gamma'}\rightarrow\widehat T^{\Gamma'}$ is onto and we conclude that the map \[ H^1(W_{F_v},Z(\widehat G))\rightarrow H^1(W_{F_v},Z(\widehat{\widetilde G})) \] is injective for all $v$. This immediately implies the injectivity of \eqref{eq: kerinj}. \end{proof} \begin{corollary} Let $\phi\in\params(G)$ and let $\tilde\phi$ be the corresponding element in $\params(\widetilde G)$. Then $\cent_{\tilde\phi}=\cent_{\phi}$. \end{corollary} \begin{proof} Since $Z(\widehat G)=\widehat G\cap Z(\widehat{\widetilde G})$ we need to show that $S_{\tilde\phi}=Z(\widehat{\widetilde G})S_\phi$. Suppose that $s\in S_{\tilde\phi}$. By changing $s$ by an element of $Z(\widehat{\widetilde G})$ we may assume that $s\in\widehat G$. By assumption the $1$-cocycle $x\mapsto[s,\phi(x)]$ is locally trivial as an element of $H^1(\Langlands_F,Z(\widehat{\widetilde G}))$. On the other hand $x\mapsto[s,\phi(x)]$ takes values in $Z(\widehat{\widetilde G})\cap\widehat G=Z(\widehat G)$. 
It follows from the injectivity of \eqref{eq: kerinj} that $s\in S_\phi$ as required. \end{proof} We will assume of course that if $\phi$ and $\tilde\phi$ are as above then $\canArthur_{\tilde\phi}$ consists of the pullback of functions in $\canArthur_\phi$ via $\widetilde G(\A)\rightarrow G(\A)$. In particular, $\tilde\pi:=\pi^{\psi_N}(\tilde\phi)$ is the pullback via $\widetilde G(\A)\rightarrow G(\A)$ of $\pi:=\pi^{\psi_N}(\phi)$. Since $\widetilde G(\A)\rightarrow G(\A)$ is surjective by our assumption on $T$, the pull-back to $\widetilde G(\A)$ preserves the inner product. Note that $L^S(s,\tilde\pi,\Ad)=\Delta_T^S(s)L^S(s,\pi,\Ad)$. Therefore $\mainconst{\tilde\pi}=\mainconst{\pi}$. \begin{corollary} \label{cor: quotient} Suppose that Conjecture \ref{conj: main} holds for $\bf\widetilde G$. Then it holds for $\bf G$. \end{corollary} By taking a $z$-extension (cf.~\cite{MR540901, MR683003}) we infer from Corollaries \ref{cor: subgroup} and \ref{cor: quotient}: \begin{corollary} Suppose that Conjecture \ref{conj: main} holds for quasi-split semisimple simply connected groups. Then it holds for all quasi-split reductive groups. \end{corollary} Note that the reasoning is analogous to the argument reducing the computation of the Tamagawa number of a reductive group to the semisimple simply connected case, i.e. to Weil's conjecture (cf.~\cite{MR631309}, \cite[\S5]{MR757954}). \section{The $\GL_m$ case} \label{sec: GLn} Let ${\bf G}={\bf G}_m$ be the group $\GL_m$ over a number field $F$. Let $N=N_m$ be the subgroup of upper unitriangular matrices in $G$. Fix a non-degenerate character $\psi_N$ of $N(\A)$, trivial on $N(F)$. \begin{theorem} \label{thm: GLn} We have $\mainconst{\pi}=1$ for any cuspidal irreducible automorphic representation $\pi$ of $\GL_m$. \end{theorem} We will prove the theorem below. (See \cite[\S18]{1203.0039} for a similar result.) The theorem essentially says that Conjecture \ref{conj: main} holds for $\GL_m$ (assuming the existence of Langlands's group) since in this case $\cent_\phi=1$ for any parameter $\phi$. Of course we will prove the theorem without assuming Arthur's conjectures. From the discussion of \S\ref{sec: restder} and \S\ref{sec: projG} we get the following analogous results for the groups $\PGL_m$ and $\SL_m$. \begin{corollary} \label{cor: PGLm} We have $\mainconst{\pi}=1$ for any cuspidal irreducible automorphic representation $\pi$ of $\PGL_m$. \end{corollary} \begin{corollary} \label{cor: SLm} Suppose that $\tilde\pi$ is a cuspidal irreducible automorphic representation of $\GL_m$ realized on $V_{\tilde\pi}$. Let $\pi$ be the unique irreducible constituent of the representation of $\SL_m(\A)$ on \[ V'_{\tilde\pi}=\{\varphi\rest_{\SL_m(\A)}:\varphi\in V_{\tilde\pi}\} \] on which $\whit^{\psi_N}$ is non-zero. Then we have $\mainconst{\pi}=\abs{X(\tilde\pi)}$ where $X(\tilde\pi)$ is the group of Hecke characters $\omega$ such that $\tilde\pi\otimes\omega=\tilde\pi$. \end{corollary} We remark that every irreducible constituent of $V'_{\tilde\pi}$ is generic with respect to \emph{some} non-degenerate character of $N(\A)$. We also remark that in the case $m=2$ we have multiplicity one for $\SL_2$ (\cite{MR1792292}), but this is not true for $m>2$ \cite{MR1303497} (hence, it is necessary to specify the automorphic realization of $\pi$). Let us now prove Theorem \ref{thm: GLn}. 
Since we are free to choose $\psi_N$, we fix a non-trivial continuous character $\psi=\otimes_v\psi_v$ of $\A$ which is trivial on $F$ and take $\psi_N$ to be the character $n\mapsto\psi(n_{1,2}+\dots+n_{m-1,m})$ of $N(\A)$ (trivial on $N(F)$). We are also free to choose the Haar measure on $G_m(\A)$. It will be convenient to take the Tamagawa measure (for any $m$). \label{sec: Tamagawa} First, we take the self-dual Haar measure on $\A$ with respect to $\psi$. This measure does not depend on the choice of $\psi$: it is the Tamagawa measure and satisfies $\vol(F\bs\A)=1$. On $F_v$ we take the self-dual measure with respect to $\psi_v$. Thus, the measure on $\A$ is the product measure of the $F_v$'s. We also take the product measure on any $F_v^k$. Consider the top degree invariant differential form (gauge form) $\wedge_{i,j=1,\dots,m}dg_{i,j}/(\det g)^m$ on $G_m$.\footnote{We can ignore the ambiguity of the sign.} By a standard construction (cf.~\cite{MR0217077}) this gauge form, together with our choice of measure on $F_v$, gives rise to a Haar measure $dg_v$ on $G(F_v)$ for all $v$. If $v$ is $p$-adic and $\psi_v$ is unramified then the measure of $K_v=G_m(\OO_v)$ is $\Delta_{G_m,v}(1)^{-1}=\prod_{j=1}^m\zeta_{F_v}(j)^{-1}$. The measure on $G_m(\A)$ is then defined to be \[ (\res_{s=1}\Delta_{G_m}^S(s))^{-1}\cdot \prod_{v\in S}dg_v\prod_{v\notin S}\Delta_{G_m,v}(1)\,dg_v \] where $S$ is any finite set of places containing the archimedean ones. The definition is independent of $S$. Recall that $G(\A)^1=\{g\in G(\A):\abs{\det g}_{\A^*}=1\}$. Let $A_G$ denote the central subgroup of $G(\A)$ consisting of the scalar matrices $\lambda I_m$ with $\lambda\in\R_{>0}$ where we embed $\R$ in $\A$ (and therefore $\R_{>0}$ in $\A^*$) via $\R\hookrightarrow\A_{\Q}\hookrightarrow\A=\A_{\Q}\otimes_{\Q}F$ (where the second embedding is $x\mapsto x\otimes 1$). We have $G(\A)=G(\A)^1\times A_G$. We endow $A_G$ with the pull-back of the standard Haar measure $\frac{dx}x$ on $\R_{>0}$ (where $dx$ is the standard Lebesgue measure) under the isomorphism $\abs{\det}_{\A^*}:A_G\rightarrow\R_{>0}$. Together with the Haar measure on $G(\A)$, this defines a Haar measure on $G(\A)^1$. It is well known that $\vol(G(F)\bs G(\A)^1)=1$. Let $\mira=\mira_m$ be the mirabolic subgroup of $G$ consisting of matrices whose last row is $\xi_m=(0,\dots,0,1)\in F^m$. We have $\mira\simeq\GL_{m-1}\ltimes F^{m-1}$. We use this relation to endow $\mira(F_v)$ (and more generally $\mira_j(F_v)$, where $\mira_j$ denotes the mirabolic subgroup of $\GL_j$) with local Tamagawa measures. We have the following integration formula \begin{equation} \label{eq: int form mira} \int_{\mira_j\bs\GL_j}\phi(\xi_jg)\abs{\det g}\ dg=\int_{F^j}\phi(\eta)\ d\eta=\hat\phi(0) \end{equation} for any $\phi\in C_c(F^j)$, where $\xi_j=(0,\dots,0,1)\in F^j$. Let $\pi\in\Cusp G$. By the theory of Rankin--Selberg integrals, for any $\varphi$ in the space of $\pi$ and $\varphi^\vee$ in the space of $\pi^\vee$ we have \[ (\varphi,\varphi^\vee)_{G(F)\bs G(\A)^1}=\lim_{s\rightarrow 1}\frac{L^S(s,\pi\otimes\pi^\vee)} {\Delta_G^S(s)}[\whit^{\psi_N}(\cdot,\varphi),\whit^{\psi_N^{-1}}(\cdot,\varphi^\vee)]_S \] where $S$ is a sufficiently large finite set of places and \[ [W,W^\vee]_S=\int_{N_m(F_S)\bs\mira_m(F_S)}W(p)W^\vee(p)\ dp \] (cf.~\cite{MR2309989}). The factor $\Delta_G^S(s)$ shows up because of our choice of measures on $G(\A)$ and $G(F_S)$. We recall that $[\cdot,\cdot]_S$ defines an invariant pairing. We also recall that $L^S(s,\pi\otimes\pi^\vee)=L^S(s,\pi,\Ad)$. Theorem \ref{thm: GLn} therefore reduces to the following local identity. 
\begin{lemma} \label{lem: whitrltnGLn} Let $F$ be a local field of characteristic $0$ and let $\pi$ be an irreducible unitary generic representation of $G=G(F)$, realized on its Whittaker model $\Whit^{\psi_N}(\pi)$. Let $[\cdot,\cdot]$ be the pairing \begin{equation} \label{eq: innerGL_m} [W,W^\vee]=\int_{N\bs\mira}W(p)W^\vee(p)\ dp,\ \ W\in\Whit^{\psi_N}(\pi),\ W^\vee\in\Whit^{\psi_N^{-1}}(\pi^\vee). \end{equation} Then for any $W\in\Whit^{\psi_N}(\pi)$ and $W^\vee\in\Whit^{\psi_N^{-1}}(\pi^\vee)$ we have \begin{equation} \label{eq: locidentwhit} [W,W^\vee]^{\psi_N}=W(e)W^\vee(e). \end{equation} (For the meaning of the left-hand side cf.~Remark \ref{rem: anydual}.) \end{lemma} \begin{proof} The local identity is proved in a way similar to the unfolding of the global Rankin--Selberg integral. The main difference is that while in the unfolding process for the global case, certain terms vanish by cuspidality, in the local case, which is continuous in nature, the analogous terms do not contribute because they are of measure zero. Note that the relation \eqref{eq: locidentwhit} is independent of the choice of Haar measure on $N$. It will be convenient to identify $N$ (as a variety) with $F^{m\choose2}$ (in the obvious way) and take the corresponding Haar measure. Suppose first that $\pi=\Ind\sigma$. Using the relation \eqref{eq: redtosqrint} and the result of \cite[Appendix A]{MR2930996}, the relations \eqref{eq: locidentwhit} for $\pi$ and $\sigma$ become equivalent. Thus we reduce to the case where $\pi\in\Irr_{\sqr}G$, in which case \[ [W,W^\vee]^{\psi_N}=\int_N[\pi(n)W,W^\vee]\psi_N(n)^{-1}\ dn. \] (This is mostly useful in the archimedean case.) Assume therefore that $\pi\in\Irr_{\sqr}G$. Since we already know that \eqref{eq: locidentwhit} holds up to a constant, it suffices to consider the case where the restrictions of $W$ and $W^\vee$ to $\mira$ are compactly supported modulo $N$ (cf.~\cite{MR0404534, MR2733072} -- in fact, since we only consider the square-integrable case, the archimedean case is elementary). For any $j=1,\dots,m$ let $\mira_j$ (resp.~$N_j$) be the mirabolic subgroup of $\GL_j$ (resp.~the group of upper unitriangular matrices of $\GL_j$). We embed $\GL_j$ (and its subgroups) in $\GL_m$ via $g\mapsto\sm g{}{}{I_{m-j}}$. Let \begin{align*} I_j&=\int_{N_j}\big(\int_{N_{j-1}\bs\GL_{j-1}}W(gn)W^\vee(g)\abs{\det g}^{j-m}\ dg\big)\ \psi_{N_j}(n)^{-1}\ dn\\&= \int_{N_j}\big(\int_{N_j\bs\mira_j}W(gn)W^\vee(g)\abs{\det g}^{j-m}\ dg\big)\ \psi_{N_j}(n)^{-1}\ dn. \end{align*} The sought-after identity \eqref{eq: locidentwhit} becomes $I_m=I_1$. We claim that for any $j=1,\dots,m-1$, if $I_{j+1}$ converges as an iterated integral then so does $I_j$ and $I_j=I_{j+1}$. Note that $N_{j+1}=N_j\ltimes C_j$ where $C_j\simeq F^j$ is the subgroup of unipotent matrices in $N_{j+1}$ whose upper left corner of size $j\times j$ is the identity matrix. Since $\modulus_{\mira_j}=\abs{\det\cdot}$ we can rewrite $I_{j+1}$ as \begin{multline*} \int_{N_j}\int_{C_j}\big(\int_{\mira_j\bs\GL_j}\int_{N_j\bs\mira_j}W(pgvu)W^\vee(pg)\abs{\det p}^{-1}\abs{\det pg}^{j+1-m}\ dp\ dg\big)\\ \ \psi_{C_j}(v)^{-1}\ dv\ \psi_{N_j}(u)^{-1}\ du. \end{multline*} In order to show the equality $I_{j+1}=I_j$ we will show that \begin{multline} \label{eq: n2p1} \int_{C_j}\big(\int_{\mira_j\bs\GL_j}\int_{N_j\bs\mira_j}W(pgvu)W^\vee(pg)\abs{\det p}^{-1}\abs{\det pg}^{j+1-m}\ dp\ dg\big)\ \psi_{C_j}(v)^{-1}\ dv\\=\int_{N_j\bs\mira_j}W(pu)W^\vee(p)\abs{\det p}^{j-m}\ dp. 
\end{multline} To that end, note that $\mira_j$ is the stabilizer of $\psi_{C_j}$ in $\GL_j$, so that we can write the left-hand side of \eqref{eq: n2p1} as \begin{multline} \label{eq: mira} \int_{C_j}\big(\int_{\mira_j\bs\GL_j}\int_{N_j\bs\mira_j}W(pgu)W^\vee(pg)\abs{\det p}^{-1}\abs{\det pg}^{j+1-m}\ dp \ \psi_{C_j}(gvg^{-1})\ dg\big)\\ \ \psi_{C_j}(v)^{-1}\ dv. \end{multline} Let $f_u$ be the function on $F^j-\{0\}$ (row vectors) defined by \[ f_u(\xi_jg)=\int_{N_j\bs\mira_j}W(pgu)W^\vee(pg)\abs{\det pg}^{j-m}\ dp, \ \ g\in\GL_j \] where $\xi_j=(0,\dots,0,1)\in F^j$. This is well defined since $\GL_j$ acts transitively on $F^j-\{0\}$ and the stabilizer of $\xi_j$ is $\mira_j$. Moreover, we have $f_u\in C_c^\infty(F^j-\{0\})$ because of the conditions on $W$ and $W^\vee$. Extending by $0$, we view $f_u$ as a function in $C_c^\infty(F^j)$. We view elements of $C_j$ as column vectors of size $j$, and hence the group $C_j$ itself as the dual group of $F^j$. We write $\sprod{\cdot}{\cdot}$ for the corresponding pairing on $F^j\times C_j$. Then \eqref{eq: mira} is \[ \int_{C_j}\big(\int_{\mira_j\bs\GL_j}f_u(\xi_jg)\psi(\sprod{\xi_jg}v)\abs{\det g}\ dg\big) \ \psi_{C_j}(v)^{-1}\ dv, \] which by \eqref{eq: int form mira} and Fourier inversion is equal to \[ \int_{C_j}\big(\int_{F^j}f_u(\eta)\psi(\sprod\eta v)\ d\eta\big)\ \psi_{C_j}(v)^{-1}\ dv= \int_{C_j}\hat f_u(v)\ \psi(\sprod{\xi_j}v)^{-1}\ dv=f_u(\xi_j). \] This proves \eqref{eq: n2p1} and completes the proof of the Lemma. \end{proof} \begin{remark} \label{rem: cmpcsupp} In the $p$-adic case the proof above applies directly to any unitarizable $\pi\in\Irr_{\gen}G$ and there is no need to reduce to the square integrable case. Indeed, if the restrictions of $W$ and $W^\vee$ to $\mira$ are compactly supported modulo $N$ then $[\pi(\cdot)W,W^\vee]$ is compactly supported on $\mira$ (and in particular, on $N$). One way to see this is to choose an arbitrary supercuspidal representation $\tau$ of $\GL_m$ and to realize the restrictions $W\rest_\mira$ (resp. $W^\vee\rest_\mira$) as the restrictions of Whittaker functions $W_1\in\Whit^{\psi_N}(\tau)$ (resp. $W_1^\vee\in\Whit^{\psi_N^{-1}}(\tau^\vee)$). Then $[\pi(\cdot)W,W^\vee]=[\tau(\cdot)W_1,W_1^\vee]$ on $\mira$. As in Remark \ref{rem: archJacquet} it will be interesting to know whether an analogous fact holds in the archimedean case, namely if the restrictions of $W$ and $W^\vee$ to $\mira$ are Schwartz functions modulo $N$, is the matrix coefficient $[\pi(\cdot)W,W^\vee]$ necessarily a Schwartz function on $\mira$? \end{remark} \begin{remark} For a general $\pi\in\Irr_{\gen}\GL_m$ (not necessarily unitarizable) one can still define the pairing $[\cdot,\cdot]$ by \[ [W,W^\vee]=\int_{N\bs\mira}W(p)W^\vee(p)\abs{\det p}^s\ dp\big|_{s=0} \] in the sense of analytic continuation (cf.~\cite[Appendix A]{MR2930996}). Lemma \ref{lem: whitrltnGLn} and its proof remain valid. \end{remark} More generally, for $i=1,\dots,m$ and any $W\in\Whit^{\psi_N}(\pi)$ and $W^\vee\in\Whit^{\psi_N^{-1}}(\pi^\vee)$, let \begin{equation}\label{eq: defAi} A_i(W,W^\vee)=\int_{N_{m-i}\bs\GL_{m-i}}W(g)W^\vee(g)\abs{\det g}^{1-i}\ dg. \end{equation} In particular, $A_1(W,W^\vee)=[W,W^\vee]$. The proof above shows the following. \begin{lemma}\label{L: Afe} Let $U_i$ be the unipotent radical of the parabolic subgroup of $\GL_m$ of type $(m-i,1,\dots,1)$ endowed with the Haar measure induced by the identification (of varieties) $U_i\cong F^{{m\choose 2}-{m-i\choose2}}$. Let $\pi\in\Irr_{\gen}G$. 
Then \[ A_i(W,W^\vee)=\int_{U_{i-1}}[\pi(u)W,W^\vee]\psi_{U_{i-1}}(u)^{-1}\ du \] for any $W\in\Whit^{\psi_N}(\pi)$ and $W^\vee\in\Whit^{\psi_N^{-1}}(\pi^\vee)$ such that $W\rest_{\mira}\in C_c^\infty(N_m\bs\mira,\psi_N)$ and $W^\vee\rest_{\mira}\in C_c^\infty(N_m\bs\mira,\psi_N^{-1})$ and the right-hand side converges. (At least in the $p$-adic case, the last condition is satisfied automatically in view of Remark \ref{rem: cmpcsupp}.) In particular, if $\pi\in\Irr_{\cusp}G$ (and $F$ is $p$-adic) then this holds for all $W\in\Whit^{\psi_N}(\pi)$ and $W^\vee\in\Whit^{\psi_N^{-1}}(\pi^\vee)$. \end{lemma} \begin{proof} For $j=m-i+1,\dots,m$ let \begin{align*} I_j&=\int_{N_j\cap U_{i-1}}\big(\int_{N_{j-1}\bs\GL_{j-1}}W(gn)W^\vee(g)\abs{\det g}^{j-m}\ dg\big)\ \psi_{N_j}(n)^{-1}\ dn\\&= \int_{N_j\cap U_{i-1}}\big(\int_{N_j\bs\mira_j}W(gn)W^\vee(g)\abs{\det g}^{j-m}\ dg\big)\ \psi_{N_j}(n)^{-1}\ dn. \end{align*} As in the proof of Lemma \ref{lem: whitrltnGLn} one shows that $I_j$ converges as an iterated integral and $I_j=I_{j+1}$, $j=m-i+1,\dots,m-1$. The relation $I_{m-i+1}=I_m$ is the required identity. \end{proof} \begin{remark} We refer to \cite{LMao4} for a generalization of Lemma \ref{L: Afe} and to \cite[Lemme 3.7]{Wald2} for a closely related statement. \end{remark} \section{Classical and metaplectic groups} \label{sec: classical groups} In the case of classical groups we can formulate a variant of Conjecture \ref{conj: main} without appealing to Arthur's conjectures. Instead, we use the results of Cogdell--Piatetski-Shapiro--Shahidi (\cite{MR2767514}, building on earlier works of Cogdell--Kim--Piatetski-Shapiro--Shahidi and Kim--Krishnamurthy \cite{MR1863734, MR2075885, MR2127169, MR2149370}) and Ginzburg--Rallis--Soudry (\cite{MR2848523} and previous works cited therein) describing the generic representations in terms of representations of $\GL_n$. This variant is also applicable for the (non-algebraic) metaplectic group. More precisely, let $\bf G$ be either $\SO(n)$ (the special orthogonal group of a split or a quasi-split quadratic space), $\Sp_n$ (the symplectic group of a symplectic space of dimension $2n$), $\Mp_n$ (the metaplectic double cover of $\Sp_n$) or $\U(n)$ (the quasi-split unitary group of a hermitian space of dimension $n$). In the even orthogonal case let $D\in F^*/(F^*)^2$ be the discriminant of the quadratic space (with the sign convention so that $\bf G$ is split if and only if $D\in(F^*)^2$). Let $\chi_D$ be the Hecke character $(D,\cdot)$ where $(\cdot,\cdot)$ denotes the quadratic Hilbert symbol. In the unitary case we write $E$ for the quadratic extension of $F$ over which $\bf G$ splits (i.e., over which the hermitian space is defined). In all other cases we set $E=F$. As usual, fix a maximal unipotent subgroup $\bf N$ of $\bf G$ and a non-degenerate character $\psi_N$ of $N(\A)$ trivial on $N(F)$. Since we are free to choose $\psi_N$, in the even orthogonal case we will take $\psi_N$ of a special form as in \cite{MR2848523} so that $(N,\psi_N)$ is preserved under conjugation by an element of order two in $\Orth(2n)$. We will denote this outer involution by $\theta$. In all other cases except $\SO(2n)$ set $\theta=\id$ (and there is no restriction on $\psi_N$). 
Consider the set $\imgset{G}$ whose elements are sets $\{\pi_1,\dots,\pi_k\}$ (mutually inequivalent representations) where $\pi_i$ is a cuspidal irreducible representation of $\GL_{n_i}(\A_E)$, $i=1,\dots,k$ such that \begin{enumerate} \item $n_1+\dots+n_k=m=\begin{cases}2n&{\bf G}=\SO(2n+1), \Mp_n\text{ or }\SO(2n),\\2n+1&{\bf G}=\Sp_n,\\n&{\bf G}=\U(n).\end{cases}$ \item $L(1,\pi_i,r)=\infty$ for all $i$ where $r=\begin{cases}\wedge^2&{\bf G}=\SO(2n+1)\text{ or }\Mp_n, \\\sym^2&{\bf G}=\Sp_n\text{ or }\SO(2n),\\\Asai^-&{\bf G}=\U(2n),\\\Asai^+&{\bf G}=\U(2n+1).\end{cases}$ \\(We could have considered instead the partial $L$-function since the local $L$-factors are holomorphic at $s=1$.) Here $\Asai^{\pm}$ are the so-called Asai representations (see e.g., \cite{MR2127169, MR2149370} for the precise definition). In the case of $\Mp_n$ we also require that $L(\frac12,\pi_i)\ne0$ for all $i$. \item $\prod_{i=1}^k\omega_{\pi_i}=1$ if ${\bf G}=\Sp_n$; $\prod_{i=1}^k\omega_{\pi_i}=\chi_D$ if ${\bf G}=\SO(2n)$. (In all other cases we automatically have $\omega_{\pi_i}\rest_{\A^*}\equiv1$ for all $i$.) \end{enumerate} In all algebraic cases except $\SO(2n)$ the set $\imgset{G}$ corresponds exactly to $\tparams(G)$ and if $\phi$ is the corresponding parameter then $\cent_\phi\simeq(\Z/2\Z)^{k-1}$ (see \cite{Artendclass} and \cite{1206.0882}). In the case of $\SO(2n)$ the situation is the following. The set $\imgset{G}$ (resp., $\tparams(G)$) corresponds to the equivalence classes, under conjugation by $\Orth(2n,\C)$ (resp., $\SO(2n,\C)$), of homomorphisms $\phi:\Langlands_F\rightarrow\Orth(2n,\C)$ with bounded image whose composition with the determinant corresponds to $\chi_D$ under class field theory. Thus, there is a surjective map $\tparams(G)\rightarrow\imgset{G}$ whose fiber over $\{\pi_1,\dots,\pi_k\}$ is a singleton unless $n_i$ is even for all $i$ in which case the fiber consists of two elements. We have $\cent_\phi\simeq(\Z/2\Z)^l$ where $l=k-1$ if $n_i$ is even for all $i$ and $l=k-2$ otherwise. We denote by $\Cusp_{\psi_N}G$ the set of irreducible constituents of $L^2_{\cusp,\psi_N}(G(F)\bs G(\A))$; in the case of $\Mp_n$ we also require that the $\psi$-theta lift to $\SO(2n-1)$ vanishes, where $\psi$ is related to $\psi_N$ as in \cite[\S11]{MR2848523}. (In the case $n=1$ this excludes the so-called exceptional representations.) Let $\stdG{G}:\,^LG\rightarrow\,^L(\Res_{E/F}\GL_m)$ be the $L$-homomorphism such that \[ \stdG{G}\rest_{\widehat G}=\begin{cases}\Sp_n(\C)\hookrightarrow\GL_{2n}(\C)&{\bf G}=\SO(2n+1),\Mp_n\\ \SO(2n+1,\C)\hookrightarrow\GL_{2n+1}(\C)&{\bf G}=\Sp_n,\\ \SO(2n,\C)\hookrightarrow\GL_{2n}(\C)&{\bf G}=\SO(2n),\\ \GL(n,\C)\xrightarrow{g\mapsto(g,J_n^{-1}\,^tg^{-1}J_n)}\GL(n,\C)\times\GL(n,\C)&{\bf G}=\U(n),\end{cases} \] for a suitable hermitian form $J_n$ (cf. \cite[\S1]{MR2767514} for more details). By \cite{MR2767514} and \cite{MR2848523} for any $\psi_N$-generic (non-exceptional in the metaplectic case) irreducible cuspidal representation $\pi$ of $G(\A)$ the functorial transfer of $\pi$ to $\GL_m(\A_E)$ under $\stdG{G}$ exists and is of the form $\pi_1\boxplus\dots\boxplus\pi_k$ (isobaric sum) where $\{\pi_1,\dots,\pi_k\}\in\imgset{G}$. (In the metaplectic case, the functorial transfer is defined with respect to a character $\psi$ which is compatible with $\psi_N$: it is also compatible with the theta correspondence.) 
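For instance (a sanity check, not needed in the sequel): for ${\bf G}=\SO(3)\simeq\PGL_2$ we have $m=2$ and $r=\wedge^2$. Since $\wedge^2$ of a one-dimensional representation vanishes, no constituent with $n_i=1$ can occur, so necessarily $k=1$; moreover $L(s,\pi_1,\wedge^2)=L(s,\omega_{\pi_1})$ has a pole at $s=1$ if and only if $\omega_{\pi_1}=1$. Thus $\imgset{G}$ is simply the set of cuspidal representations of $\GL_2(\A)$ with trivial central character, and correspondingly $\cent_\phi\simeq(\Z/2\Z)^{k-1}$ is trivial.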
Note that $L^S(s,\pi,\Ad)=L^S(s,\pi_1\boxplus\dots\boxplus\pi_k,\tilde r)$ where $\tilde r=\begin{cases}\sym^2&{\bf G}=\SO(2n+1)\text{ or }\Mp_n, \\\wedge^2&{\bf G}=\Sp_n\text{ or }\SO(2n),\\\Asai^+&{\bf G}=\U(2n),\\\Asai^-&{\bf G}=\U(2n+1).\end{cases}$ \\Hence, the property \eqref{eq: Ad propr} holds for all $\psi_N$-generic $\pi$ (non-exceptional in the metaplectic case), so that $\mainconst{\pi}$ is well defined. In addition, the descent method provides, for any $\{\pi_1,\dots,\pi_k\}\in\imgset{G}$, a subrepresentation $\sigma=\sigma^{\psi_N}(\{\pi_1,\dots,\pi_k\})$ of $L^2_{\cusp,\psi_N}(G(F)\bs G(\A))$ such that \begin{enumerate} \item For any irreducible constituent $\sigma'$ of $\sigma$ we have $\stdG{G}(\sigma')=\pi_1\boxplus\dots\boxplus\pi_k$. \item No $\psi_N$-generic cuspidal representation whose functorial lift is $\pi_1\boxplus\dots\boxplus\pi_k$ is orthogonal to $\sigma$ in $L^2(G(F)\bs G(\A))$. \end{enumerate} (See \cite[Theorem 11.2]{MR2075885}.) In the case of $\SO(2n+1)$ it was proved that $\sigma$ is irreducible (see \cite{MR1846354}, which is based on \cite{MR1983781}). It follows from the results of \cite{MR2330445} that $\sigma$ is irreducible in the case ${\bf G}=\Mp_n$ as well. It is expected that in the remaining cases as well either $\sigma$ is irreducible and $\theta$-invariant, or (in the $\SO(2n)$ case) $\sigma=\tau\oplus\theta(\tau)$ where $\tau$ is irreducible and $\theta(\tau)\not\simeq\tau$. In the algebraic cases this construction is closely related to the (hypothetical) representations $\pi^{\psi_N}(\phi)$ of \S\ref{sec: pipsi}. More precisely, in all cases other than $\SO(2n)$, one expects multiplicity one for $G$ and therefore if $\{\pi_1,\dots,\pi_k\}\in\imgset{G}$, $\phi\in\tparams(G)$ is the corresponding parameter, and $\sigma=\sigma^{\psi_N}(\{\pi_1,\dots,\pi_k\})$ then $\sigma=\pi^{\psi_N}(\phi)$. In the case of $\SO(2n)$ the situation is the following. If not all $n_i$'s are even then there is a unique $\phi\in\tparams(G)$ above $\{\pi_1,\dots,\pi_k\}$ and as before $\sigma=\pi^{\psi_N}(\phi)$ (which is expected to be irreducible and $\theta$-invariant, cf.~\cite{MR1248702}). Suppose now that all $n_i$'s are even. Let $\{\phi_1,\phi_2\}$ be the two parameters in $\tparams(G)$ above $\{\pi_1,\dots,\pi_k\}$. Recall that we expect that either $\sigma$ is irreducible or $\sigma=\tau\oplus\theta(\tau)$ with $\tau$ irreducible and $\theta(\tau)\not\simeq\tau$. In the latter case $\{\pi^{\psi_N}(\phi_1),\pi^{\psi_N}(\phi_2)\}=\{\tau,\theta(\tau)\}$. On the other hand, if $\sigma$ is irreducible then $\sigma$ is not one of the $\pi^{\psi_N}(\phi_i)$'s since they define equivalent representations on which $\whit^{\psi_N}$ is non-zero. (Another way to say it: $\sigma$ is $\theta$-invariant while $\theta(\pi^{\psi_N}(\phi_1))=\pi^{\psi_N}(\phi_2)$.) Instead, $\pi^{\psi_N}(\phi_1)\oplus\pi^{\psi_N}(\phi_2)$ is the isotypic component of $\sigma$ in $L^2_{\cusp}(G(F)\bs G(\A))$ and the space of $\sigma$ is \[ \{\varphi_1+\varphi_2:\varphi_i\in\pi^{\psi_N}(\phi_i), i=1,2, \whit^{\psi_N}(\cdot,\varphi_1)\equiv\whit^{\psi_N}(\cdot,\varphi_2)\}. \] It follows that $\mainconst{\sigma}=\frac14(\mainconst{\pi^{\psi_N}(\phi_1)}+\mainconst{\pi^{\psi_N}(\phi_2)})$. Altogether we can formulate the following variant of Conjecture \ref{conj: main} for classical groups (which also covers the metaplectic group). 
\begin{conjecture} \label{conj: classical} Suppose that $\pi$ is an irreducible constituent of $\sigma^{\psi_N}(\{\pi_1,\dots,\pi_k\})$ and let $s$ be the size of the stabilizer of $\pi$ under $\{1,\theta\}$. (In particular, $s=1$ unless $G=\SO(2n)$.) Then $\mainconst{\pi}=2^{k-1}/s$. \end{conjecture} Conjecture \ref{conj: classical} is the combination of Conjectures \ref{conj: globalclassical} and \ref{conj: metplectic global} stated in the introduction of the paper. In our work in progress we will reduce these conjectures to local conjectures and prove the latter in the metaplectic case for $p$-adic fields. (The determination of the global constant in Conjecture \ref{conj: metplectic global} is based on this work.) \begin{remark} A similar picture holds for the groups $G=\Gspin(n)$ \cite{MR2219256, 1101.3467, MR2505178, 1110.6788}. If we fix the central character $\chi$ then the set $\Pi^G(\GL_m)$ (with $m=n$ if $n$ is even and $m=n-1$ if $n$ is odd) is defined analogously except that the $L$-function condition is changed to $L(1,\sym^2\pi_i\otimes\chi^{-1})=\infty$ or $L(1,\wedge^2\pi_i\otimes\chi^{-1})=\infty$ depending on the parity of $n$. Otherwise, Conjecture \ref{conj: classical} is unchanged. \end{remark} \section{Some examples} \label{sec: examples} We end the paper by proving some low rank cases of Conjecture \ref{conj: classical}. These cases are closely related to the case of the general linear group. Roughly speaking, we may recast Conjecture \ref{conj: classical} in each case as a relation between the size of a certain group of self-twists of a cuspidal representation $\pi$ of $\GL_n$ and the structure of the isobaric decomposition of a certain functorial transfer of $\pi$. While these relations are only valid for small $n$ and are purely fortuitous, they match up nicely with known results in the literature about the cuspidality of certain functorial transfers. (We mention \cite{MR2899809, MR2522032, MR2369494, MR2094113, MR2052020, MR1874921, MR1792292, MR1923967, MR1890650, MR1937203, MR2767509} for a partial list of results in this direction.) In the following we will have the situation of \S\ref{sec: restder}. The group $\bf G$ will be either a classical group $\bf G'$ or a cover thereof by an induced torus (so that by the analysis of \S\ref{sec: projG} we can replace $\bf G'$ by $\bf G$ without any loss of information). For any irreducible cuspidal representation $\tilde\pi$ of $\widetilde G(\A)$ we denote by $X(\tilde\pi)$ the group of characters $\omega$ of $\widetilde G(\A)$, trivial on $G(\A)\widetilde G(F)$, such that $\tilde\pi\otimes\omega=\tilde\pi$. In the cases at hand, because of an exceptional isomorphism, the group $\widetilde G$ will be a product of restrictions of scalars of general linear groups. In particular, \begin{enumerate} \item $\widetilde G$ has multiplicity one. \item $\mainconst{\tilde\pi}=1$ for any $\tilde\pi$. \item \label{num: propconst} $\mainconst{\pi}=\abs{X(\tilde\pi)}$ for the $\psi_N$-generic constituent $\pi$ of $V_{\tilde\pi}'$ (Lemma \ref{lem: restG}). \end{enumerate} We will also have a homomorphism $\stdG{\widetilde G}:\,^L\widetilde G\rightarrow\,^L(\Res_{E/F}\GL_m)$, which factors through an embedding $\stdG{G}$ of $^LG$, where either $E=F$ or $E$ is a quadratic extension of $F$. The restriction of $\stdG{G}$ to $^LG'$ coincides with the embedding $\stdG{G'}$ defined in \S\ref{sec: classical groups}. 
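For orientation, the pairs considered in the subsections below are: $G'=\U(1)$ and $G'=\SO(2)$ with $\widetilde G=G=\Res_{E/F}\GL_1$; $G'=\SO(3)$ (split) with $\widetilde G=G=\GL_2$; $G'=\Sp_1=\SL_2$ with $G=G'$ and $\widetilde G=\GL_2$; $G'=\U(2)$ with $\widetilde G=\GL_2\times\Res_{E/F}\GL_1$; the split $\SO(4)$ with $\widetilde G=\GL_2\times\GL_2$; the quasi-split $\SO(3,1)$ with $\widetilde G=\Res_{E/F}\GL_2$; and the split $\SO(6)$ with $\widetilde G=\GL_4\times\GL_1$.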
If $\stdG{\widetilde G}(\tilde\pi)=\pi_1\boxplus\dots\boxplus\pi_k$ with $\pi_i$ cuspidal representations of $\GL_{n_i}(\A_E)$, $n_1+\dots+n_k=m$ then we write $\cent_{\tilde\pi}=(\Z/2\Z)^l$ where $l=k-1$ except for the case where $G'=\SO(2n)$ and not all $n_i$'s are even, in which case $l=k-2$. Thus, by property \eqref{num: propconst} above, Conjecture \ref{conj: classical} boils down to the equality $\abs{\cent_{\tilde\pi}}=\abs{X(\tilde\pi)}$. Note that if we have Langlands's group $\Langlands_F$ at our disposal and $\tilde\phi:\Langlands_F\rightarrow\,^L\widetilde G$ is the Langlands parameter of $\tilde\pi$, then $\pi=\pi^{\psi_N}(\phi)$ where $\phi\in\tparams(G)$ is the composition of $\tilde\phi$ with the projection $^L\widetilde G\rightarrow\,^LG$. That's why the case of $\SO(2n)$ is consistent with Conjecture \ref{conj: classical}, where we considered representations which are not necessarily of the form $\pi^{\psi_N}(\phi)$ -- see the discussion before Conjecture \ref{conj: classical}. \begin{remark} \label{rem: multone} Introduce two equivalence relations on the set of irreducible cuspidal representations of $\widetilde G(\A)$: $\pi_1\sim\pi_2$ (resp., $\pi_1\sim_w\pi_2$) if there exists a character $\omega$ of $\widetilde G(F)G(\A)\bs \widetilde G(\A)$ (resp., $G(\A)\bs\widetilde G(\A)$) such that $\pi_2\simeq\pi_1\otimes\omega$. Then by \cite[Theorem 4.13 and Lemma 5.3]{MR2918491}, $G$ has multiplicity one if and only if the two equivalence relations coincide. \end{remark} We will use the following notation. If $E$ is a quadratic extension of $F$ let $\omega_{E/F}$ be the corresponding quadratic character of $\A_F^*$ and (unlike in \S\ref{sec: classical groups}) $\theta$ will denote the non-trivial Galois involution. If $\mu$ is a Hecke character of $\A_E^*$ then we will write $\AI_{E/F}(\mu)$ for the corresponding dihedral representation of $\GL_2$. It is cuspidal unless $\theta(\mu)=\mu$, or equivalently, $\mu$ factors through the norm map. If $\chi$ is a Hecke character of $\A_F^*$ then we write $\chi_E$ for the Hecke character of $\A_E^*$ given by composing $\chi$ with the norm map. We also write $\BC_{E/F}(\pi)$ for the base change from $\GL_m(F)$ to $\GL_m(E)$. \subsection{$\U(1)$, $\SO(2)$, $\SO(3)$, $\Sp_1$} The simplest case is $G'=\U(1)$, for which we can take $\widetilde G=G=\Res_{E/F}\GL_1$ and $\stdG{\widetilde G}=\id$. Of course in this case $X(\tilde\pi)$ is always trivial and $\stdG{\widetilde G}(\tilde\pi)$ is cuspidal. Similarly, for $G'=\SO(2)$ we take $\widetilde G=G=\Res_{E/F}\GL_1$ where $E$ is the quadratic \'etale algebra defined by the discriminant and $\stdG{\widetilde G}$ the two-dimensional representation defining automorphic induction. In this case $X(\tilde\pi)$ is trivial; $\AI_{E/F}\tilde\pi$ is cuspidal if and only if $\tilde\pi$ is not $\theta$-invariant, but in any case $\cent_{\tilde\pi}$ is trivial. For $G'=\SO(3)$ (split) we take $\widetilde G=G=\GL_2$ and $\stdG{\widetilde G}=\id$. Then $X(\tilde\pi)$ is always trivial and $\stdG{\widetilde G}(\tilde\pi)$ is always cuspidal. Consider the case $G'=\Sp_1=\SL_2$. We take $G=G'$, $\widetilde G=\GL_2$ and $\stdG{\widetilde G}$ to be the adjoint representation. Let $\tilde\pi\in\Cusp\widetilde G$. Here $X(\tilde\pi)$ is as in Corollary \ref{cor: SLm}. There are three possibilities (cf.~\cite{MR540902}): \begin{enumerate} \item $X(\tilde\pi)=1$, \item $\abs{X(\tilde\pi)}=2$, \item $\abs{X(\tilde\pi)}=4$. \end{enumerate} Let $\Pi$ be the (adjoint) lifting of $\tilde\pi$ to $\GL_3$ \cite{MR533066}. 
In the first case $\tilde\pi$ is not dihedral and $\Pi$ is cuspidal. In the second case, let $E$ be the quadratic extension of $F$ defined by the non-trivial element $\omega$ of $X(\tilde\pi)$ and let $\theta$ be the Galois involution of $E/F$. Then $\tilde\pi=\AI_{E/F}(\mu)$ for some Hecke character $\mu$ of $\A_E^*$, $\theta(\mu)/\mu$ is not quadratic (i.e., not $\theta$-invariant) and $\Pi=\omega\boxplus\AI_{E/F}(\theta(\mu)/\mu)$. Finally, in the third case the group $X(\tilde\pi)=\{1,\omega_1,\omega_2,\omega_3\}$ defines a biquadratic extension $K$ of $F$ and $\Pi=\omega_1\boxplus\omega_2\boxplus\omega_3$. In all cases we have $\abs{X(\tilde\pi)}=\abs{\cent_{\tilde\pi}}$. Recall that $\SL_2$ has multiplicity one \cite{MR1792292}. In other words, if $\tilde\pi_1,\tilde\pi_2\in\Cusp\GL_2$ and there exists a character $\omega$ of $\A^*$ such that $\tilde\pi_2=\tilde\pi_1\otimes\omega$ then we can choose such $\omega$ to be trivial on $F^*$ (i.e., a Hecke character). \subsection{$\U(2)$} Consider now $G'=\U(2)$. The group $\GU(2)$ of unitary similitudes is the quotient of $\widetilde G=\GL_2\times\Res_{E/F}\GL_1$ by $\GL_1$ embedded diagonally. In this identification the similitude factor is $(g,x)\mapsto\det g\Nm_{E/F}(x)^{-1}$. Therefore we take $G$ to be the kernel of the map $(g,x)\mapsto\det g\Nm_{E/F}(x)^{-1}$ and $\stdG{\widetilde G}:\,^L\widetilde G\rightarrow\,^L\Res_{E/F}\GL_2$ so that the functorial transfer of $(\pi,\mu)$ is $\BC_{E/F}(\pi)\otimes\mu$. Note that $X((\pi,\mu))$ is the group of Hecke characters $\chi$ of $\A^*$ such that $\pi=\pi\otimes\chi$ and $\mu=\mu\chi_E^{-1}$. Hence, $X((\pi,\mu))=1$ unless $\pi\otimes\omega_{E/F}=\pi$, i.e., unless $\pi$ is dihedral with respect to $E/F$, in which case $X((\pi,\mu))=\{1,\omega_{E/F}\}$. The condition for the non-cuspidality of $\BC_{E/F}(\pi)$ is also that $\pi$ is dihedral with respect to $E/F$. Hence indeed $\abs{\cent_{(\pi,\mu)}}=\abs{X((\pi,\mu))}$. We also note that the group $G$ (and hence $G'$) has multiplicity one. This is easy to see from Remark \ref{rem: multone}. (This is of course not a new result, but we will use the argument later.) Indeed, suppose that $\pi_1, \pi_2\in\Cusp\GL_2$, $\mu_1$, $\mu_2$ are two Hecke characters of $\A_E^*$ and there exists a character $\chi$ of $\A_F^*$ such that $(\pi_1\otimes\chi,\mu_1\chi_E^{-1})\simeq(\pi_2,\mu_2)$. We need to show that we can take such a $\chi$ which is trivial on $F^*$. Note that our assumption implies that $\mu_1/\mu_2$ is Galois invariant and hence can be written as $\chi'_E$ for some Hecke character $\chi'$ of $\A_F^*$. Therefore, $\chi_v\circ\Nm_{E_v/F_v}=\chi_v'\circ\Nm_{E_v/F_v}$ for all $v$. Thus, for all $v$, either $(\pi_1)_v\otimes\chi_v'=(\pi_2)_v$ or (in the case where $v$ is inert) $(\pi_1)_v\otimes\chi_v'=(\pi_2)_v\otimes\omega_{E_v/F_v}$. It follows that $\BC_{E/F}(\pi_1\otimes\chi')=\BC_{E/F}(\pi_2)$ and therefore either $\pi_2=\pi_1\otimes\chi'$ or $\pi_2=\pi_1\otimes\chi'\omega_{E/F}$. The conclusion follows. \subsection{Split $\SO(4)$} Consider now the case where $G'$ is the split $\SO(2,2)$. Then $G'$ is the quotient of the group \[ G=\{(g_1,g_2)\in\GL_2\times\GL_2:\det g_1=\det g_2\} \] by the center of $\GL_2$ embedded diagonally. We take $\widetilde G=\GL_2\times\GL_2$ and $\stdG{\widetilde G}$ the tensor product to $\GL_4$. We denote the functorial transfer of $(\pi_1,\pi_2)$ by $\pi_1\boxtimes\pi_2$. Its existence was first proved by Ramakrishnan in \cite{MR1792292}. 
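(We recall the representation-theoretic identity underlying the analysis of the dihedral case below: by Mackey theory, for two characters $\mu_1,\mu_2$ of a quadratic extension $E/F$ one has $\Ind_E^F(\mu_1)\otimes\Ind_E^F(\mu_2)\simeq\Ind_E^F(\mu_1\mu_2)\oplus\Ind_E^F(\mu_1\theta(\mu_2))$; its automorphic counterpart $\AI_{E/F}(\mu_1)\boxtimes\AI_{E/F}(\mu_2)=\AI_{E/F}(\mu_1\mu_2)\boxplus\AI_{E/F}(\mu_1\theta(\mu_2))$ is used in the proof of Lemma \ref{lem: splitSO4} below.)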
We have \[ X((\pi_1,\pi_2))=X'(\pi_1)\cap X'(\pi_2) \] where $X'(\pi)$ is the group of Hecke characters $\omega$ such that $\pi\otimes\omega=\pi$. Let us see that $\abs{\cent_{(\pi_1,\pi_2)}}=\abs{X((\pi_1,\pi_2))}$. The transfer $\pi_1\boxtimes\pi_2$ is cuspidal unless one of the following conditions holds (\cite{MR1792292}, cf.~also \cite{MR1936579} and \cite[Appendix]{MR2899809}): \begin{enumerate} \item $\pi_1$ is not dihedral (i.e., $X'(\pi_1)=1$) and $\pi_2=\pi_1\otimes\chi$ for some Hecke character $\chi$, or \item There exists a quadratic extension $E/F$ and Hecke characters $\mu_1$, $\mu_2$ of $\A_E^*$ such that $\pi_i=\AI_{E/F}(\mu_i)$, $i=1,2$. \end{enumerate} In the first case, $X((\pi_1,\pi_2))$ is trivial while $\pi_1\boxtimes\pi_2$ is of type $(3,1)$, namely it has the form $\sym^2(\pi_1)\otimes\chi\boxplus\omega_{\pi_1}\chi$, and $\cent_{(\pi_1,\pi_2)}=1$. The second case is analyzed in the following lemma. \begin{lemma} \label{lem: splitSO4} Suppose that $\pi_i=\AI_{E/F}(\mu_i)$, $i=1,2$. Then there are two possibilities. If \begin{equation} \label{eq: excpt} \theta(\mu_1)/\mu_1=\theta(\mu_2)/\mu_2\text{ and they are quadratic}, \end{equation} and $K$ is the quadratic extension of $E$ defined by $\theta(\mu_1)/\mu_1$ then $K$ is a biquadratic extension of $F$, $\abs{\cent_{(\pi_1,\pi_2)}}=4$ and $X((\pi_1,\pi_2))$ is the group of Hecke characters of $\A_F^*$ which are trivial on the norms from $K$. If \eqref{eq: excpt} is not satisfied then $\abs{\cent_{(\pi_1,\pi_2)}}=2$ and $X((\pi_1,\pi_2))=\{1,\omega_{E/F}\}$. \end{lemma} \begin{proof} We have $\pi_1\boxtimes\pi_2=\AI_{E/F}(\mu_1\mu_2)\boxplus\AI_{E/F}(\mu_1\theta(\mu_2))$. Therefore $\abs{\cent_{(\pi_1,\pi_2)}}=2$ unless neither $\AI_{E/F}(\mu_1\mu_2)$ nor $\AI_{E/F}(\mu_1\theta(\mu_2))$ is cuspidal, in which case $\abs{\cent_{(\pi_1,\pi_2)}}=4$. Note that \eqref{eq: excpt} is equivalent to saying that both $\mu_1\mu_2$ and $\mu_1\theta(\mu_2)$ factor through the norm (i.e., are Galois invariant), which is in turn equivalent to the condition that neither $\AI_{E/F}(\mu_1\mu_2)$ nor $\AI_{E/F}(\mu_1\theta(\mu_2))$ is cuspidal. On the other hand, $X((\pi_1,\pi_2))=\{1,\omega_{E/F}\}$ unless \eqref{eq: excpt} is satisfied, in which case $K$ is biquadratic over $F$. \end{proof} We mention that the group $G$ (and hence $\SO(4)$) has multiplicity one. By Remark \ref{rem: multone} we need to show that if $\pi_1,\pi_1',\pi_2,\pi_2'\in\Cusp\GL_2$ are such that there exists a character $\omega$ of $\A^*$ with $\pi_1'\simeq\pi_1\otimes\omega$ and $\pi_2'\simeq\pi_2\otimes\omega^{-1}$, then we can choose such an $\omega$ which is trivial on $F^*$. We follow the argument of \cite{MR1792292}. The condition implies that $\pi_1\boxtimes\pi_2=\pi_1'\boxtimes\pi_2'$. By \cite{MR1792292} we have $\pi_1=\pi_1'\otimes\chi_1$ and $\pi_2=\pi_2'\otimes\chi_2$ for some Hecke characters $\chi_1$ and $\chi_2$. Thus we may assume without loss of generality that $\pi_1'=\pi_1$ and $\pi_2'=\pi_2\otimes\chi$ for some Hecke character $\chi$ (necessarily with $\chi^2=1$). We can assume that $\chi\not\equiv1$ since otherwise there is nothing to prove. Similarly, we can assume that $\pi_i\otimes\chi\ne\pi_i$, $i=1,2$. Consider $\Pi:=\pi_1\boxtimes\pi_2=\pi_1\boxtimes(\pi_2\otimes\chi)$. Now, $L^S(s,\Pi\otimes\dual\Pi)$ has a pole at $s=1$; it has at least a double pole if moreover $\pi_2$ is a twist of $\pi_1$. 
On the other hand, $L^S(s,\Pi\otimes\dual\Pi)$ factors as \[ L^S(s,\chi)L^S(s,\overline{\Ad}(\pi_1)\otimes\chi)L^S(s,\overline{\Ad}(\pi_2)\otimes\chi) L^S(s,\overline{\Ad}(\pi_1)\otimes\overline{\Ad}(\pi_2)\otimes\chi). \] Here we denote by $\overline{\Ad}$ the Gelbart-Jacquet lifting from $\GL_2$ to $\GL_3$ \cite{MR533066}. By our assumption on $\pi_i$ and $\chi$ only the last factor can have a pole at $s=1$. If $\pi_1$ and $\pi_2$ are not both dihedral this pole, necessarily simple, can occur only if $\pi_2$ is a twist of $\pi_1$; but in that case $L^S(s,\Pi\otimes\dual\Pi)$ has at least a double pole at $s=1$, a contradiction. Therefore $\pi_1$ and $\pi_2$ are both dihedral. This case can be analyzed as in \cite[\S4.1]{MR1792292}. We omit the details. Thus, we get multiplicity one for $G$. \subsection{Non-split $\SO(4)$} Now we turn to the quasi-split group $G'=\SO(3,1)$, pertaining to a quadratic extension $E$ of $F$. It can be realized as the quotient of \[ G=\{g\in\GL_2(E):\det g\in F^*\} \] by the center of $\GL_2(F)$. We take $\widetilde G=\Res_{E/F}\GL_2$ and $\stdG{\widetilde G}$ to be the four-dimensional twisted tensor homomorphism. The functorial lift of $\tilde\pi$ to $\GL_4$ is the Asai transfer $\Asai_{E/F}(\tilde\pi)$ \cite{MR2000968}. $X(\tilde\pi)$ consists of the Hecke characters $\omega$ of $\A_E^*$, trivial on $\A_F^*$, such that $\tilde\pi\otimes\omega=\tilde\pi$. Let us see that $\abs{X(\tilde\pi)}=\abs{\cent_{\tilde\pi}}$. We follow \cite[Appendix]{MR2899809}. The transfer $\Asai(\tilde\pi)$ is cuspidal unless one of the following conditions holds: \begin{enumerate} \item $\tilde\pi$ is not dihedral and $\theta(\tilde\pi)=\dual{\tilde\pi}\otimes\chi$ for some (necessarily $\theta$-invariant) Hecke character $\chi$ of $\A_E^*$. \item There exists a quadratic extension $K/E$ which is biquadratic over $F$ and a Hecke character $\mu$ of $\A_K^*$ such that $\tilde\pi=\AI_{K/E}(\mu)$. \end{enumerate} In the first case, $\Asai(\tilde\pi)$ is of type $(3,1)$: it is of the form $\pi_1\boxplus\chi'$ where $\BC_{E/F}(\pi_1)=\Ad(\tilde\pi)\otimes\chi$ (which is cuspidal) and $\chi'_E=\chi$. On the other hand $X(\tilde\pi)=1$. The second case is analyzed in the following lemma which is the non-split analogue of Lemma \ref{lem: splitSO4}. \begin{lemma} \label{lem: SO31} Let $K$ be a quadratic extension of $E$ which is biquadratic over $F$, $\mu$ a Hecke character of $\A_K^*$ and $\tilde\pi=\AI_{K/E}(\mu)$. Let $\sigma$ be the non-trivial element of $\Gal(K/E)$, $E_1$, $E_2$ the intermediate fields between $F$ and $K$ other than $E$, and $\chi=\sigma(\mu)/\mu$. If \begin{equation} \label{eq: spclchi} \chi\rest_{\A_{E_i}^*}\equiv1, \ \ i=1,2, \end{equation} and $L$ is the quadratic extension of $K$ defined by $\chi$ then $L$ is a $(\Z/2\Z)^3$-extension of $F$, $\abs{\cent_{\tilde\pi}}=4$ and $X(\tilde\pi)$ is equal to the group of Hecke characters of $\A_E^*$ which are trivial on the norms of $L$. If \eqref{eq: spclchi} is not satisfied then $\abs{\cent_{\tilde\pi}}=2$ and $X(\tilde\pi)=\{1,\omega_{K/E}\}$. \end{lemma} \begin{proof} Let $\sigma_i$ be the non-trivial element of $\Gal(K/E_i)$, $i=1,2$, so that $\sigma_2=\sigma\sigma_1$. Then $\Asai(\tilde\pi)=\pi_1\boxplus\pi_2$ where $\BC_{E/F}(\pi_i)=\AI_{K/E}(\mu\sigma_i(\mu))$. Therefore, $\abs{\cent_{\tilde\pi}}=2$ if either $\pi_1$ or $\pi_2$ is cuspidal and $\abs{\cent_{\tilde\pi}}=4$ otherwise. We first show that \eqref{eq: spclchi} is equivalent to the condition that neither $\pi_1$ nor $\pi_2$ is cuspidal. Suppose that neither $\pi_1$ nor $\pi_2$ is cuspidal. 
Then in particular $\AI_{K/E}(\mu\sigma_i(\mu))=\BC_{E/F}(\pi_i)$ is not cuspidal, $i=1,2$. Thus, $\mu\sigma_i(\mu)=\sigma(\mu\sigma_i(\mu))$ or equivalently, $\chi$ is $\sigma_{3-i}$-invariant, $i=1,2$. Therefore, $\chi$ is also $\sigma$-invariant, hence quadratic. We may write $\mu\sigma_i(\mu)=(\mu_i)_K$ for a Hecke character $\mu_i$ of $\A_E^*$, and then $\BC_{E/F}(\pi_i)=\AI_{K/E}(\mu\sigma_i(\mu))=\mu_i\boxplus\mu_i\omega_{K/E}$. Since $\pi_i$ is not cuspidal this means that $\mu_i$ is $\theta$-invariant, and therefore $\mu_i=(\nu_i)_E$ for a Hecke character $\nu_i$ of $\A_F^*$. Thus, $(\mu\rest_{\A_{E_i}^*})_K=\mu\sigma_i(\mu)=\nu_i\circ\Nm_{K/F}=((\nu_i)_{E_i})_K$. It follows that $\mu\rest_{\A_{E_i}^*}$ is equal to either $(\nu_i)_{E_i}$ or $(\nu_i)_{E_i}\omega_{K/E_i}$. Upon multiplying $\nu_i$ by $\omega_{E/F}$ if necessary we may assume that $\mu\rest_{\A_{E_i}^*}=(\nu_i)_{E_i}$. (Note that $(\omega_{E/F})_{E_i}=\omega_{K/E_i}$.) Thus, $\mu\rest_{\A_{E_i}^*}$ is $\Gal(E_i/F)$-invariant, or equivalently $\chi\rest_{\A_{E_i}^*}\equiv1$. Conversely, suppose that \eqref{eq: spclchi} is satisfied. First note that this implies that $\chi\circ\Nm_{K/E_i}\equiv1$ and therefore $\sigma_i(\chi)=\chi^{-1}=\sigma(\chi)$, $i=1,2$. Thus, $\chi$ is $\Gal(K/F)$-invariant and in particular, $\chi$ is quadratic. As before, it follows from \eqref{eq: spclchi} that $\mu\rest_{\A_{E_i}^*}$ is $\Gal(E_i/F)$-invariant, $i=1,2$. Therefore we can write $\mu\rest_{\A_{E_i}^*}=(\nu_i)_{E_i}$ for a Hecke character $\nu_i$ of $\A_F^*$. Let $\mu_i=(\nu_i)_E$. Then $\mu\sigma_i(\mu)=(\mu_i)_K$ and therefore $\BC_{E/F}(\pi_i)=\AI_{K/E}(\mu\sigma_i(\mu))=\mu_i\boxplus\mu_i\omega_{K/E}$. Since $\mu_i$ is $\theta$-invariant, this implies that $\pi_i$ is not cuspidal. Continue to assume that \eqref{eq: spclchi} holds. Let $L$ be the quadratic extension of $K$ defined by $\chi$. Then by \cite[Appendix, Lemma E]{MR2899809}, $L/E_i$ is biquadratic since $\chi\rest_{\A_{E_i}^*}\equiv 1$. Moreover, $L/F$ is Galois. Indeed, let $L'$ be the normal closure of $L/F$ and let $\tau\in\Gal(L'/F)$. If $\tau\in\Gal(L'/E)$ then $\tau(L)=L$ since $L/E$ is Galois. Otherwise $\tau$ induces $\theta$ on $E$, so that $\tau$ induces $\sigma_i$ (for $i=1$ or $2$) on $K$. The extension $\tau(L)/K$ is determined by the character $\sigma_i(\chi)$. Since $\sigma_i(\chi)=\chi$ we necessarily have $\tau(L)=L$. Hence $L/F$ is Galois. Note that any subfield of $L$, other than $K$, contains at most one of the fields $E$, $E_1$, $E_2$. Therefore, $L$ is a quadratic extension of at least seven different subfields containing $F$ (namely, $K$ and two for each of $E$, $E_1$, $E_2$). Hence, we necessarily have $\Gal(L/F)=(\Z/2\Z)^3$. Let $K_1$, $K_2$ be the intermediate fields between $E$ and $L$, other than $K$. Then $K_i/F$ is biquadratic and therefore $\omega_{K_i/E}\rest_{\A_F^*}\equiv1$. It follows that $X(\tilde\pi)=\{1,\omega_{K/E},\omega_{K_1/E},\omega_{K_2/E}\}$. Note that if $\tilde\pi=\AI_{K/E}(\mu)$ then $X(\tilde\pi)$ contains $\omega_{K/E}$. Finally, suppose that $X(\tilde\pi)$ has order bigger than 2. Then $\chi=\sigma(\mu)/\mu$ is a quadratic character of $\A_K^*$ and if $L$ is the corresponding quadratic extension of $K$ then $L/E$ is biquadratic and all the Hecke characters of $\A_E^*$ which are trivial on norms of $L$ are trivial on $\A_F^*$. Let $K_1$, $K_2$ be as above. Then $\omega_{K_i/E}\rest_{\A_F^*}\equiv1$ which implies that $K_i/F$ is biquadratic, $i=1,2$. It follows that $L=K_1K_2$ is Galois over $F$ and $\Gal(L/F)=(\Z/2\Z)^3$. 
Thus, $L/E_i$, $i=1,2$ are biquadratic, and hence the restriction of $\chi=\omega_{L/K}$ to $\A_{E_i}^*$ is trivial, namely \eqref{eq: spclchi} holds. \end{proof} Once again we show that $G$ (and hence $\SO(3,1)$) has multiplicity one. By Remark \ref{rem: multone} we need to show that if $\pi_1, \pi_2\in\Cusp\GL_2(\A_E)$ and there exists a character $\omega$ of $\A_E^*$, trivial on $\A_F^*$, such that $\pi_1=\pi_2\otimes\omega$ then we can choose such a character which is also trivial on $E^*$. The condition implies that $\Asai(\pi_1)=\Asai(\pi_2)$. Using \cite{MR1792292} we may assume that $\pi_2=\pi_1\otimes\chi$ for some Hecke character $\chi$ of $\A_E^*$. For simplicity we write $\pi=\pi_1$. We obtain $\Asai(\pi)=\Asai(\pi)\otimes\chi\rest_{\A_F^*}$. In particular, \[ \pi\boxtimes\theta(\pi)=\BC_{E/F}(\Asai(\pi))=\BC_{E/F}(\Asai(\pi)\otimes\chi\rest_{\A_F^*})=\pi\boxtimes\theta(\pi)\otimes\chi\theta(\chi). \] As in the previous subsection, we analyze $L^S(s,\pi\boxtimes\theta(\pi)\otimes\dual\pi\boxtimes\theta(\dual\pi)\otimes\chi\theta(\chi))$ which has a pole at $s=1$, and at least a double pole if moreover $\theta(\pi)$ is a twist of $\pi$. If $\theta(\chi)\chi\not\equiv1$ then we infer that $\pi$ is dihedral. On the other hand if $\chi\rest_{\A_F^*}=\omega_{E/F}$ then $\pi\boxtimes\theta(\pi)=\BC_{E/F}(\Asai(\pi))$ is neither cuspidal nor of type $(3,1)$ and by \cite[Appendix, Theorem B]{MR2899809} we deduce once again that $\pi$ is dihedral. Thus if $\pi$ is not dihedral we get $\chi\rest_{\A_F^*}=1$, namely $\pi\sim\pi\otimes\chi$. Finally, the case where $\pi$ is dihedral can also be analyzed using the method of \cite[Appendix]{MR2899809}. We omit the details. \subsection{} Finally, we consider the example of the split $\SO(6)$. We identify $\SO(6)$ with the quotient of \[ G=\{(g,\lambda)\in\GL_4\times\GL_1:\lambda^2\det g=1\} \] by the image of $(z,z^{-2}):\GL_1\rightarrow G$. We take $\widetilde G=\GL_4\times\GL_1$ and $\stdG{\widetilde G}$ to be the exterior square times $x\mapsto x^{-1}$ so that $\stdG{\widetilde G}((\pi,\mu))=\wedge^2(\pi)\otimes\mu^{-1}$, an isobaric representation of $\GL_6$ whose existence was proved by Kim \cite{MR1937203}. We have \[ X((\pi,\mu))=\{\chi\text{ quadratic Hecke character}:\pi\otimes\chi=\pi\}. \] Since $\mu$ is irrelevant, we will simply write $X(\pi)$. We also write $\Pi=\wedge^2(\pi)\otimes\mu^{-1}$ for the transfer of $(\pi,\mu)$. We want to verify the equality $\abs{\cent_\pi}=\abs{X(\pi)}$. There are several cases to consider. We start with the simplest. \begin{lemma} The following conditions are equivalent. \begin{enumerate} \item $\cent_\pi$ is non-trivial. \item There exists a quadratic extension $E/F$ such that $\pi$ is the automorphic induction of $\varrho\in\Cusp\GL_2(\A_E)$. \item $\pi\otimes\omega=\pi$ for some non-trivial character $\omega$. \item $X(\pi)\ne1$. \end{enumerate} \end{lemma} \begin{proof} For the equivalence of the last three conditions -- see \cite{MR2767509} (which is based on \cite{MR1007299}). Suppose that $\pi$ is not an automorphic induction. Then by \cite[Proposition 4.2]{MR2767509} we cannot have a $\GL_2$ in the isobaric decomposition of $\Pi$. We also claim that we cannot have two $\GL_1$'s in the isobaric decomposition. Indeed, if $\chi$ is a Hecke character which occurs in the isobaric decomposition of $\Pi$ then necessarily $\pi=\pi^\vee\otimes\chi\mu$. 
Hence, if we had two Hecke characters $\chi_1$ and $\chi_2$ in the isobaric decomposition of $\Pi$ then $\pi\otimes\chi_1\chi_2^{-1}=\pi$, contradicting our assumption that $\pi$ is not an automorphic induction. Thus, the only options for the isobaric decomposition of $\Pi$ are $(6)$, $(5,1)$, or $(3,3)$. In all cases $\cent_\pi$ is trivial. Conversely, suppose that $\pi=\AI_{E/F}\varrho$ where $\varrho\in\Cusp\GL_2(\A_E)$. Then \begin{equation} \label{eq: inducedcase} \wedge^2\pi=\Asai(\varrho)\otimes\omega_{E/F}\boxplus\AI(\omega_\varrho) \end{equation} \cite[\S3]{MR2076595}. It follows that $\cent_\pi$ is non-trivial. \end{proof} It remains to consider the case where $\pi=\AI_{E/F}\varrho$ with $\varrho\in\Cusp\GL_2(\A_E)$. From now on we assume that this is the case. \begin{lemma} Assume that $\varrho$ is not dihedral with respect to a biquadratic extension of $F$ (containing $E$). Then either $\abs{\cent_\pi}=2$ and $X(\pi)=\{1,\omega_{E/F}\}$ or the following equivalent conditions are satisfied. \begin{enumerate} \item $\abs{\cent_\pi}=4$. \item $\theta(\varrho)=\varrho\otimes\chi_E$ for some quadratic character $\chi$ of $\A_F^*$ (necessarily different from $\omega_{E/F}$ since $\pi$ is cuspidal). \item $X(\pi)=\{1,\omega_{E/F},\chi,\chi\omega_{E/F}\}$ for a quadratic character $\chi$ of $\A_F^*$ different from $\omega_{E/F}$. \end{enumerate} Moreover, in these cases $\varrho$ is not dihedral. \end{lemma} \begin{proof} Recall \eqref{eq: inducedcase}. If $\AI(\omega_\varrho)$ is cuspidal, i.e., if $\theta(\omega_\varrho)\ne\omega_\varrho$, then $\abs{\cent_\pi}=2$ since $\Asai(\varrho)$ is either cuspidal or of type $(3,1)$. If $\AI(\omega_\varrho)$ is not cuspidal, i.e., if $\theta(\omega_\varrho)=\omega_\varrho$ then $\abs{\cent_\pi}=2$ unless $\theta(\varrho)=\varrho\otimes\nu$ for some Hecke character $\nu$ of $\A_E^*$ in which case $\abs{\cent_\pi}=4$ and $\varrho$ is not dihedral. In the latter case $\omega_\varrho=\theta(\omega_\varrho)=\omega_\varrho\nu^2$ so that $\nu$ is quadratic. ($\nu$ is non-trivial since $\pi$ is cuspidal.) Moreover, since $\varrho$ is not dihedral we also have $\nu\theta(\nu)=1$ so that $\theta(\nu)=\nu$. Thus, $\nu=\chi_E$ for some Hecke character $\chi$ of $\A_F^*$. Finally, it follows from the relation $\theta(\varrho)=\varrho\otimes\nu$ that $\chi$ is quadratic, since by \cite{MR1611951} we cannot have $\chi^2=\omega_{E/F}$. For the last condition suppose that $\chi$ is a quadratic Hecke character of $\A_F^*$ different from $\omega_{E/F}$. Then the condition $\AI(\varrho)\otimes\chi=\AI(\varrho)$ is equivalent to $\varrho\otimes\chi_E=\varrho$ or $\theta(\varrho)$. However, by the assumption on $\varrho$ we cannot have $\varrho\otimes\chi_E=\varrho$ since $\chi_E$ defines a biquadratic extension of $F$. Thus it follows from the equivalence of the first two conditions, which was proved above, that $\abs{X(\pi)}=2$ if and only if $\abs{\cent_\pi}=2$, and that in the remaining case $\abs{X(\pi)}=4$. The lemma follows. \end{proof} Finally, we consider the case where $\varrho=\AI_{K/E}(\mu)$ and $K/F$ is biquadratic. \begin{lemma} Assume first that $\mu\rest_{\A_E^*}$ is not $\theta$-invariant. Let $\sigma$ be the non-trivial element of $\Gal(K/E)$, $E_1$, $E_2$ the intermediate fields between $F$ and $K$ other than $E$, and $\chi=\sigma(\mu)/\mu$. 
If \begin{equation} \label{eq: spclchi2} \chi\rest_{\A_{E_i}^*}\equiv1, \ \ i=1,2, \end{equation} and $L$ is the quadratic extension of $K$ defined by $\chi$ then \begin{enumerate} \item $L$ is a $(\Z/2\Z)^3$-extension of $F$, \item $\abs{\cent_\pi}=8$, \item $X(\pi)$ is equal to the group of Hecke characters of $\A_F^*$ which are trivial on the norms of $L$. \end{enumerate} If \eqref{eq: spclchi2} is not satisfied then $\abs{\cent_\pi}=4$ and $X(\pi)$ is equal to the group of Hecke characters of $\A_F^*$ which are trivial on the norms of $K$. Now assume that $\mu\rest_{\A_E^*}$ is $\theta$-invariant but \eqref{eq: spclchi2} is not satisfied. Then $\abs{\cent_\pi}=4$ and $X(\pi)$ is equal to the group of Hecke characters of $\A_F^*$ which are trivial on the norms of $K$ unless we can write $\mu\sigma_i(\mu)=\xi_K$ for $i=1$ or $2$ and some Hecke character $\xi$ of $\A_F^*$, in which case $\abs{\cent_\pi}=8$ and $\abs{X(\pi)}=8$. Finally, if both $\mu\rest_{\A_E^*}$ is $\theta$-invariant and \eqref{eq: spclchi2} is satisfied then $\abs{\cent_\pi}=16$ and $\abs{X(\pi)}=16$. \end{lemma} \begin{proof} First note that the $\theta$-invariance of $\mu\rest_{\A_E^*}$ is equivalent to the $\theta$-invariance of $\omega_{\varrho}$ since $\omega_{\varrho}=\mu\rest_{\A_E^*}\omega_{K/E}$. Thus, if $\mu\rest_{\A_E^*}$ is non-$\theta$-invariant then by \eqref{eq: inducedcase}, $\abs{\cent_\pi}$ is either $8$ or $4$ depending on whether or not $\Asai(\varrho)$ is of type $(1,1,1,1)$. By Lemma \ref{lem: SO31} this is equivalent to the condition \eqref{eq: spclchi2}. Recall that $\Asai(\varrho)=\pi_1\boxplus\pi_2$ where $\BC_{E/F}(\pi_i)=\AI_{K/E}(\mu\sigma_i(\mu))$. Thus, if $\mu\rest_{\A_E^*}$ is $\theta$-invariant then $\abs{\cent_\pi}=2^{2+\epsilon_1+\epsilon_2}$ where $\epsilon_i=0$ if $\pi_i$ is cuspidal and $\epsilon_i=1$ otherwise. Let us analyze the condition $\AI_{E/F}(\varrho)=\AI_{E/F}(\varrho)\otimes\omega$. This is equivalent to either $\varrho=\varrho\otimes\omega_E$ or $\theta(\varrho)=\varrho\otimes\omega_E$. Note that a quadratic Hecke character $\eta$ of $\A_E^*$ is of the form $\chi_E$ for a quadratic Hecke character $\chi$ of $\A_F^*$ if and only if $\eta$ is trivial on $\A_F^*$. In particular, the group $\{\omega\text{ quadratic}:\varrho\otimes\omega_E=\varrho\}$ is precisely the preimage under $\omega\mapsto\omega_E$ of $\{\omega\text{ on }\A_E^*/\A_F^*:\varrho\otimes\omega=\varrho\}$. Once again, the existence of a quadratic Hecke character $\omega$ of $\A_F^*$ such that $\theta(\varrho)=\varrho\otimes\omega_E$ is equivalent to the existence of a quadratic Hecke character $\omega$ on $\A_E^*$ trivial on $\A_F^*$ such that $\theta(\varrho)=\varrho\otimes\omega$. In turn, the condition $\theta(\varrho)=\varrho\otimes\omega$ can be written as $\AI_{K/E}(\sigma_1(\mu))=\AI_{K/E}(\mu\omega_K)$ or equivalently as $\sigma_i(\mu)=\mu\omega_K$ for $i=1$ or $2$. Note that $\pi_i$ is not cuspidal if and only if there exists a $\theta$-invariant Hecke character $\nu$ of $\A_E^*$ such that $\mu\sigma_i(\mu)=\nu_K$, i.e. if and only if $\mu\sigma_i(\mu)$ factors through $\Nm_{K/F}$. The lemma will follow from the following claim: there exists a Hecke character $\omega$ of $\A_E^*$ such that $\omega^2=1$, $\omega\rest_{\A_F^*}\equiv1$ and $\sigma_{3-i}(\mu)/\mu=\omega_K$ if and only if $\mu\rest_{\A_E^*}$ is $\theta$-invariant and $\pi_i$ is not cuspidal. Indeed, suppose that such $\omega$ exists. Then \[ \frac{\sigma_{3-i}(\mu)}{\mu}\rest_{\A_E^*}=\omega_K\rest_{\A_E^*}=\omega^2=1 
\] so that $\theta(\mu\rest_{\A_E^*})=\mu\rest_{\A_E^*}$. Moreover, \[ \mu\sigma_i(\mu)=\omega_K^{-1}\sigma_i(\mu)\sigma_{3-i}(\mu)= \omega_K^{-1}(\sigma_i(\mu)\rest_{\A_E^*})_K=\omega_K^{-1}(\theta(\mu\rest_{\A_E^*}))_K= (\omega^{-1}\mu\rest_{\A_E^*})_K \] and both $\omega$ and $\mu\rest_{\A_E^*}$ are $\theta$-invariant. In the converse direction, suppose that $\mu\sigma_i(\mu)=\nu_K$ where $\nu$ is a $\theta$-invariant Hecke character of $\A_E^*$ and that $\mu\rest_{\A_E^*}$ is $\theta$-invariant. Then we write $\nu=\lambda_E$ for some Hecke character $\lambda$ on $\A_F^*$, so that $\mu\sigma_i(\mu)=\lambda_K$. As before we have \[ \sigma_{3-i}(\mu)/\mu=\nu_K^{-1}\sigma_i(\mu)\sigma_{3-i}(\mu)=\omega_K \] where $\omega=\nu^{-1}\mu\rest_{\A_E^*}$. Restricting the relation $\nu_K=\mu\sigma_i(\mu)$ to $\A_E^*$ we get $\nu^2=\mu\rest_{\A_E^*}\theta(\mu\rest_{\A_E^*})=\mu^2\rest_{\A_E^*}$ so that $\omega^2=1$. Also, the relation \[ (\mu\rest_{\A_{E_i}^*})_K=\mu\sigma_i(\mu)=\lambda_K=(\lambda_{E_i})_K \] implies that \[ \mu\rest_{\A_{E_i}^*}=\lambda_{E_i}\text{ or }\lambda_{E_i}\omega_{K/E_i}. \] In both cases, \[ \mu\rest_{\A_F^*}=\lambda_{E_i}\rest_{\A_F^*}=\lambda^2=\lambda_E\rest_{\A_F^*}=\nu\rest_{\A_F^*}. \] We conclude that $\omega\rest_{\A_F^*}\equiv1$ as required. \end{proof} \begin{remark} We do not know whether $\SO(6)$ has multiplicity one. This is closely related to the question of whether the group $\SO(6,\C)=\SL_4(\C)/\{\pm1\}$ is acceptable in the language of \cite{MR1303498}. Unfortunately this case was left open in [loc. cit.]. It would be interesting to settle this. \end{remark}
\section{Introduction} Let $n \geq 1$ be an integer. For $(a,b) \in \mathbb{C}^n \times \mathbb{C}^n$, let $R_{a,b}$ be the rational map defined by the formula $$ z \mapsto \sum_{j=1}^n \frac{a_j}{z-b_j}. $$ We denote by $\mathcal{R}(n)$ the set of $(a,b) \in \mathbb{C}^n \times \mathbb{C}^n$ such that $\sum_{j=1}^n b_j = 0$ and the sublevel set $$ R_{a,b}^{-1}(\mathbb{D})=\{ z \in \widehat{\mathbb{C}} : |R_{a,b}(z)|<1 \} $$ is connected and bounded by $n$ disjoint analytic Jordan curves. A variant of the set $\mathcal{R}(n)$ is considered in \cite{JEONG3} and \cite{JEONG2}, where it is called the \textit{coefficient body for Bell representations}. The set $\mathcal{R}(n)$ is clearly invariant under the action of the symmetric group $\Sigma_n$ and the quotient $\mathcal{R}(n)/\Sigma_n$ is in bijection with the set of rational maps $\{ R_{a,b} : (a,b) \in \mathcal{R}(n)\}$. The goal of this paper is to prove that $\mathcal{R}(n)$ forms a trivial $\mathbb{T}^n$-bundle over a certain moduli space $\mathcal{M}(n)$.\\ We define $\mathcal{M}(n)$ to be the set of isomorphism classes of planar domains containing infinity and bounded by $n$ disjoint analytic Jordan curves, with an ordering of the boun\-dary curves. An \textit{isomorphism} between two such domains $(X,E_1,...,E_n)$ and $(Y,F_1,...,F_n)$ is by definition a biholomorphism $g: \overline X \to \overline Y$ such that $g(\infty)=\infty$, $g(z)/z \to 1$ as $z\to \infty$ and $g(E_j)=F_j$ for each $j$. Using Koebe's circle domain theorem (see e.g. \cite[Section 15.7]{CON}), we get that $\mathcal{M}(n)$ is in bijection with the set of $(c,r) \in \mathbb{C}^n \times (\mathbb{R}_{>0})^n$ such that $\sum_{j=1}^n c_j = 0$ and the $n$ disks $\{ z \in \mathbb{C} : |z-c_j| \leq r_j \}$ are disjoint. We use this bijection to put a topology on $\mathcal{M}(n)$. It is easy to see that $\mathcal{M}(n)$ is homotopy equivalent to the configuration space $$ \mathcal{F}_n\mathbb{C} := \{ (c_1,...,c_n) \in \mathbb{C}^n : c_i \neq c_j\mbox{ for }i\neq j \} $$ of $n$-tuples of distinct points in the plane. Therefore, the fundamental group of $\mathcal{M}(n)$ is the pure braid group on $n$ strands and the fundamental group of the quotient $\mathcal{M}(n)/\Sigma_n$ is the braid group on $n$ strands.\\ The relationship between $\mathcal{R}(n)$ and $\mathcal{M}(n)$ is expressed in the following theorem : \begin{reptheorem}{BiebBellRep} Let $[(X,E_1,...,E_n)] \in \mathcal{M}(n)$ and let $\alpha_j \in E_j$ for each $j$. Then there is a unique $(a,b) \in \mathcal{R}(n)$ and a unique isomorphism $$ g: (X,E_1,...,E_n) \to (R_{a,b}^{-1}(\mathbb{D}),F_1,...,F_n) $$ such that the curve $F_j$ encloses $b_j$ and $R_{a,b}(g (\alpha_j))=1$ for each $j$. \end{reptheorem} This shows that $\mathcal{R}(n)$ is the same as the moduli space of planar domains containing infinity and bounded by $n$ disjoint analytic Jordan curves, with one marked point on each boundary curve.\\ Let $P : \mathcal{R}(n) \to \mathcal{M}(n)$ be the map which sends $(a,b)$ to the isomorphism class of the domain $R_{a,b}^{-1}(\mathbb{D})$ with boundary curves $F_1,...,F_n$ ordered in such a way that $F_j$ encloses the pole $b_j$ of $R_{a,b}$. Among all the preimages $(a,b)$ of a given $\sigma \in \mathcal{M}(n)$ by $P$, there is a unique one such that $R_{a,b}'(\infty)=\sum_{j=1}^n a_j$ is equal to the analytic capacity of $\widehat{\mathbb{C}} \setminus R_{a,b}^{-1}(\mathbb{D})$, so that $R_{a,b}$ is the Ahlfors function on $R_{a,b}^{-1}(\mathbb{D})$. This will be explained in sections 7, 8 and 9. 
The map $A : \mathcal{M}(n) \to \mathcal{R}(n)$ which sends $\sigma$ to this parameter $(a,b)$ is a right inverse for $P$.\\ Let $$ \begin{array}{rcrcl} \pi & : & \mathcal{M}(n)\times\mathbb{T}^n &\to& \mathcal{M}(n)\\ & & (\sigma,\beta) & \mapsto & \sigma \end{array} $$ and let $$ \begin{array}{rcrcl} \iota & : & \mathcal{M}(n) & \to & \mathcal{M}(n)\times \mathbb{T}^n\\ & & \sigma & \mapsto & (\sigma,(1,...,1)). \end{array} $$ Our main result is : \begin{reptheorem}{MainTheorem} There is a homeomorphism $$ H : \mathcal{R}(n) \to \mathcal{M}(n) \times \mathbb{T}^n $$ which commutes with the action of $\Sigma_n$ and is such that the diagrams $$ \begin{tikzcd} \mathcal{R}(n) \arrow{r}{H} \arrow{dr}{P} & \mathcal{M}(n) \times \mathbb{T}^n \arrow{d}{\pi} \\ {} & \mathcal{M}(n) \end{tikzcd} $$ and $$ \begin{tikzcd} \mathcal{R}(n) \arrow{r}{H} & \mathcal{M}(n) \times \mathbb{T}^n \\ {} & \mathcal{M}(n) \arrow{u}{\iota} \arrow{ul}{A} \end{tikzcd} $$ commute. \end{reptheorem} Theorem \ref{MainTheorem} supersedes \cite[Theorem 2.4]{JEONG2}, which shows that $\mathcal{R}(n)$ is homotopy equivalent to $\mathcal{F}_n \mathbb{C} \times \mathbb{T}^n$.\\ A first consequence is that $P$ is continuous and open, and for every $\sigma \in \mathcal{M}(n)$ the inverse image $P^{-1}(\sigma)$ is homeomorphic to $\mathbb{T}^n$. This answers \cite[Problem 4.2]{JEONG3} partially and supports a claim made at the end of \cite{JEONG2}.\\ Another consequence is that $A$ is a topological embedding with closed image. Therefore, the set $\mathcal{A}(n):=A(\mathcal{M}(n))$ of coefficients $(a,b)$ in $\mathcal{R}(n)$ such that $R_{a,b}$ is the Ahlfors function on $R_{a,b}^{-1}(\mathbb{D})$ forms a closed embedded subma\-nifold inside $\mathcal{R}(n)$. This gives a qualitative answer to \cite[Problem 1.5]{JEONG}.\\ Given $\sigma \in \mathcal{M}(n)$, the image $A(\sigma)$ is the parameter $(a,b) \in P^{-1}(\sigma)$ such that the sum $\sum_{j=1}^n a_j$ has largest real part. Intuitively, this maximizing parameter $(a,b)$ should have all summands $a_j$ as nearly positive as possible. Let $\mathcal{R}^+(n)$ denote the subset of $\mathcal{R}(n)$ consisting of parameters $(a,b)$ such that all the $a_j$'s are real and positive, that is, $$ \mathcal{R}^+(n) := \mathcal{R}(n) \cap ((\mathbb{R}_{>0})^n \times \mathbb{C}^n). $$ We show in \cite{FBY} that $$ \mathcal{A}(n)\cap (\mathbb{R}^n \times \mathbb{R}^n) = \mathcal{R}^+(n) \cap (\mathbb{R}^n \times \mathbb{R}^n) $$ for all $n$ and $$ \mathcal{A}(n)=\mathcal{R}^+(n) $$ when $n$ is equal to $1$ or $2$, supporting the above intuition. \\ In the same paper, we give a numerical example showing that $\mathcal{R}^+(3)$ is not contained in $\mathcal{A}(3)$. As an application of Theorem \ref{MainTheorem}, we show by contradiction that the reverse inclusion does not hold either. The method is adapted to all $n \geq 3$. \begin{reptheorem}{nonpositive} For every $n\geq 3$, neither $\mathcal{R}^+(n) \subset \mathcal{A}(n)$ nor $\mathcal{A}(n) \subset \mathcal{R}^+(n)$ holds. \end{reptheorem} The tools used to construct the bijection $H$ in Theorem \ref{MainTheorem} are Theorem \ref{BiebBellRep} and Ahlfors functions. The proof that $H$ is a homeomorphism requires several genera\-lizations of the Carath\'eodory kernel convergence theorem for finitely connected domains, some of which are new.\\ The paper is structured as follows. In section 2, we define the notion of Carath\'eodory kernel convergence for planar domains and state the generalized Carath\'eodory kernel convergence theorem.
Then, in section 3, we review the definition of $\mathcal{M}(n)$ and prove a convergence theorem for Koebe representations onto circle domains. In section 4, we review the definition of $\mathcal{R}(n)$. Section 5 contains the proof of a convergence theorem for Bell representations. In section 6, we prove a compactness result for Grunsky maps. Next, in section 7, we prove a convergence theorem for Ahlfors functions. Sections 8 and 9 contain the proofs of Theorem \ref{MainTheorem} and Theorem \ref{nonpositive} respectively. \section{Carath\'eodory kernel convergence} In this paper, there will be many instances where given a domain $X \subset \widehat{\mathbb{C}}$ and some extra information, we will associate a uniquely defined analytic function $f$ on $X$. In all these cases, the function $f$ will depend continuously on the domain $X$ as well as the extra information. To make this precise, we need to formalize what it means for a sequence of domains to converge. \begin{definition} \label{defcara} Let $\{X_k\}$ be a sequence of domains in $\widehat{\mathbb{C}}$ such that $\infty \in X_k$ for all $k$. We define the \textit{kernel} of $\{X_k\}$ (with respect to $\infty$), denoted by $\operatorname{ker}\{X_k\}$, to be the largest domain $X$ containing $\infty$ such that if $K$ is a compact subset of $X$, then there exists a $k_0$ such that $K \subseteq X_k$ for $k \geq k_0$, provided this set exists. Otherwise, we say that $\operatorname{ker}\{X_k\}$ does not exist. \\ Moreover, we say that $\{X_k\}$ \textit{converges to $X$ (in the sense of Carath\'eodory)} if $X$ is the kernel of every subsequence of $\{X_k\}$. This is denoted by $X_k \rightarrow X$. \end{definition} \begin{definition} Let $g$ be a function meromorphic in a neighborhood of $\infty$ in $\widehat{\mathbb{C}}$. We say that $g$ is \textit{tangent to the identity at infinity} if $g(\infty)=\infty$ and $\lim_{z \to \infty} g(z)/z=1$. \end{definition} Note that $g$ is tangent to the identity at infinity if and only if the function defined by $f(z):=1/g(1/z)$ satisfies $f(0)=0$ and $f'(0)=1$. The following lemma generalizes the theorem of Koebe which says that the family of schlicht functions on the unit disk is normal. \begin{lemma} \label{lemmeconv} Let $\{X_k\}$ be as in Definition \ref{defcara}. For $k \geq 1$, let $g_k$ be univalent on $X_k$ and tangent to the identity at infinity. If $X=\operatorname{ker}\{X_k\}$ exists, then there is a subsequence $(g_{k_\ell})_{\ell=1}^{\infty}$ such that $X = \operatorname{ker}\{X_{k_\ell}\}$ and $(g_{k_\ell})_{\ell=1}^{\infty}$ converges locally uniformly to a function $g$ univalent on $X$. \end{lemma} \begin{proof} See \cite[Lemma 15.4.6]{CON}. \end{proof} The next theorem was first proved by Carath\'eodory for simply connected domains. \begin{theorem}[Generalized Carath\'eodory kernel convergence theorem] \label{thmcara} Let $\{X_k\}$ be as in Definition \ref{defcara}. For $k \geq 1$, let $g_k$ be univalent on $X_k$ and tangent to the identity at infinity. Assume that $X_k \rightarrow X$. Then $(g_k)_{k=1}^{\infty}$ converges locally uniformly on $X$ to a univalent function $g$ if and only if $\{g_k(X_k)\}$ converges to some domain $Y$. When this happens, $Y=g(X)$ and $g_k^{-1} \to g^{-1}$ locally uniformly on $Y$. \end{theorem} \begin{proof} See \cite[Theorem 15.4.7]{CON}. 
\end{proof} \section{Koebe representations} If $X\subset \widehat{\mathbb{C}}$ is a connected open set and its complement $\widehat{\mathbb{C}} \setminus X$ has $n$ connected components none of which is a point, then we say that $X$ is a \textit{non-degenerate $n$-connected (planar) domain}. Any non-degenerate $n$-connected domain can be mapped conformally onto a domain bounded by $n$ disjoint analytic Jordan curves. Such a conformal map can be obtained by applying the Riemann mapping theorem $n$ times, one for each boundary component. The function theory on such domains is often nicer. For example, any biholomorphism between domains bounded by analytic Jordan curves extends analytically to the boundary curves by the Schwarz reflection principle.\\ Accordingly, consider objects of the form $(X,E_1,...,E_n)$, where $X$ is a planar domain contai\-ning infinity and bounded by the disjoint analytic Jordan curves $E_1,...,E_n$. We define an \textit{isomorphism} between two such objects $(X,E_1,...,E_n)$ and $(Y,F_1,...,F_n)$ to be a biholomorphism $g: \overline X \to \overline Y$ such that \begin{enumerate}[(i)] \item $$g(\infty)=\infty;$$ \item $$\lim_{z\to \infty} g(z)/z = 1;$$ \item $$g(E_j)=F_j\mbox{ for each }j \in \{1,...,n\}.$$ \end{enumerate} The set of isomorphism classes of such objects is denoted by $\mathcal{M}(n)$.\\ There are several families of special domains which represent all non-degenerate $n$-connected domains, see for example \cite[Chapter 15]{CON}. Among these, non-degenerate circle domains offer the advantage that each boundary component is an analytic Jordan curve. \begin{definition} A domain $Y \subset \widehat{\mathbb{C}}$ is called a \textit{circle domain} if each component of its complement is a spherical disk of non-negative radius. \end{definition} \begin{theorem}[Koebe] Let $X$ be a non-degenerate $n$-connected domain. Then there exists a biholomorphism $g : X \to Y$ onto a non-degenerate circle domain. If $h : X \to Z$ is another biholomorphism onto a circle domain, then $h=M \circ g$ for some M\"obius transformation $M$. \end{theorem} Given $[(X,E_1,...,E_n)] $ in $\mathcal{M}(n)$, let $g: \overline X \to \overline Y$ be a biholomorphism onto a circle domain. By post-composing with an appropriate M\"obius transformation, we may further assume that $g$ is tangent to the identity at infinity. Then $Y$ is bounded by round circles $g(E_1),..., g(E_n)$ in the plane and $g$ is determined up to translation. Let $c_j$ and $r_j$ be the center and radius of the circle $g(E_j)$ in the euclidean metric on the plane. If we require that $\sum_{j=1}^n c_j = 0$, then $g$ is uniquely determined, and is called the \textit{normalized Koebe representation of $X$}. After this normalization, the map $$ \begin{array}{rcrcl} \kappa & : & \mathcal{M}(n) & \to & \mathbb{C}^{n-1} \times (\mathbb{R}_+)^n \\ & & [(X,E_1,...,E_n)] & \mapsto & ((c_1,...,c_{n-1}),(r_1,...,r_n)) \end{array} $$ becomes well-defined and injective. Its image is the set of $(c,r) \in \mathbb{C}^{n-1} \times (\mathbb{R}_+)^n$ such that the $n$ disks $\{ z \in \mathbb{C} : |z-c_j|\leq r_j \}$ are disjoint, where $c_n:= - \sum_{j=1}^{n-1} c_j$. This is clearly an open set. We define the topology on $\mathcal{M}(n)$ to be the one induced by the map $\kappa$. Thus : \begin{lemma} \label{dimmoduli} The moduli space $\mathcal{M}(n)$ is a manifold of dimension $(3n-2)$ with a single chart. 
\end{lemma} \begin{remark} By successively removing the conditions (iii), (ii) and (i) on isomorphisms, one obtains maps $$ \mathcal{M}(n) \to \mathcal{M}(n)/\Sigma_n \to \mathcal{M}_{0,n,1} \to \mathcal{M}_{0,n,0}, $$ where $\Sigma_n$ is the symmetric group on $n$ symbols and $\mathcal{M}_{0,n,m}$ is the moduli space of Riemann surfaces of genus $0$ with $n$ disks removed and $m$ marked points. The spaces $\mathcal{M}_{0,n,1}$ and $\mathcal{M}_{0,n,0}$ are more natural to study from the point of view of Teichm\"uller theory, but the spaces $\mathcal{M}(n)$ and $\mathcal{M}(n)/\Sigma_n$ provide the appropriate framework for this paper. Note that one can think of $\mathcal{M}(n)/\Sigma_n$ as the moduli space of non-degenerate $n$-connected domains with one marked tangent vector. \end{remark} The following theorem provides a useful criterion to tell when a sequence converges in $\mathcal{M}(n)$ or $\mathcal{M}(n)/\Sigma_n$. \begin{theorem} \label{convergencekoebe} Let $\{X_k\}$ be a sequence of $n$-connected domains containing $\infty$ converging in the sense of Carath\'eodory to an $n$-connected domain $X$. If $g_k : X_k \to Y_k $ are the normalized Koebe representations, then $(g_k)_{k=1}^{\infty}$ converges locally uniformly on $X$ to the normalized Koebe representation $g:X \to Y$. In particular, the sequence of circle domains $\{Y_k\}$ converges to $Y$ in the sense of Carath\'eodory. \end{theorem} \begin{proof} We prove that every subsequence of $( g_k )_{k=1}^\infty$ has a subsequence which converges to $g$, which implies that $( g_k )_{k=1}^\infty$ converges to $g$. \\ By Lemma \ref{lemmeconv}, every subsequence of $( g_k )_{k=1}^\infty$ has a subsequence converging locally uniformly to a univalent function on $X$. \\ Let $h$ be a locally uniform limit of a subsequence $(g_k)_{k \in S}$. Then $h$ is tangent to the identity at infinity. By Theorem \ref{thmcara}, the corresponding subsequence of domains $\{Y_k\}_{k \in S}$ converges to $h(X)$ in the sense of Carath\'eodory. \\ We claim that $h(X)$ is a non-degenerate circle domain bounded by circles whose centers sum to zero, so that $h=g$ by uniqueness of normalized Koebe representations. Indeed, first note that $h(X)$ is $n$-connected and non-degenerate since $h$ is univalent on $X$. Furthermore, the sequences of centers and radii of the circles bounding $\{Y_k\}_{k \in S}$ must be bounded, otherwise the kernel of $\{Y_k\}_{k \in S}$ would not contain a neighborhood of $\infty$. Therefore, passing to a subsequence if necessary, we can assume that the centers of the circles bounding $\{Y_k\}_{k \in S}$ converge to $c_1, c_2, \dots, c_n \in \mathbb{C}$ and that the corresponding radii converge to $r_1, r_2, \dots, r_n \in \mathbb{R}$. Since $Y_k \to h(X)$ as $k \rightarrow \infty$ in $S$, it is easy to see then that $h(X)$ is the domain bounded by the circles centered at $c_1, c_2, \dots, c_n$ of corresponding radii $r_1, r_2, \dots, r_n$. Since $h(X)$ is $n$-connected and non-degenerate, these circles must be disjoint and non-degenerate, so that $h(X)$ is indeed a non-degenerate circle domain. The fact that $\sum_{j=1}^{n} c_j =0$ follows from the fact that this is true for every domain in the sequence $\{Y_k\}$.\\ Finally, the fact that $Y_k \to Y$ follows directly from Theorem \ref{thmcara}. 
\end{proof} \begin{remark} A similar argument shows that a sequence of $n$-connected circle domains $\{Y_k\}$ converges to an $n$-connected circle domain $Y$ if and only if the centers and radii of the circles bounding $\{Y_k\}$ converge to the centers and radii of the circles bounding $Y$. \end{remark} \section{Rational maps} \label{rational} First, we recall a simple lemma whose proof appears in \cite{FBY} : \begin{lemma} \label{nconnected} Let $R$ be a rational map of degree $n$ and $X:=R^{-1}(\mathbb{D})$. Then the following are equivalent : \begin{enumerate}[(1)] \item $\widehat{\mathbb{C}} \setminus X$ has $n$ connected components; \item $X$ is $n$-connected; \item $X$ is $n$-connected and non-degenerate; \item $X$ is connected and bounded by $n$ disjoint analytic Jordan curves; \item $R$ maps each component of $\widehat{\mathbb{C}} \setminus X$ homeomorphically onto $\widehat{\mathbb{C}} \setminus \mathbb{D}$; \item all the critical values of $R$ are in $\mathbb{D}$. \end{enumerate} \end{lemma} A rational map of degree $n$ which satisfies any (and hence all) of the conditions of Lemma \ref{nconnected} is called \textit{$n$-good}. Note that an $n$-good rational map must have only simple poles, since it cannot have $\infty$ as a critical value. An $n$-good map which vanishes at infinity and whose poles add up to zero is said to be \textit{normalized}. By definition, $\mathcal{R}(n)$ is the set of $(a,b)\in \mathbb{C}^n \times \mathbb{C}^n$ such that the rational map $$ R_{a,b}(z) := \sum_{j=1}^n \frac{a_j}{z-b_j} $$ is a normalized $n$-good map. Every normalized $n$-good map can be written as a sum of partial fractions and is thus equal to $R_{a,b}$ for some $(a,b) \in \mathcal{R}(n)$, unique up to permutation. Therefore, the set of normalized $n$-good maps can be identified with $\mathcal{R}(n) / \Sigma_n$.\\ By definition, if $(a,b) \in \mathcal{R}(n)$, then $\sum_{j=1}^n b_j = 0$. It follows that the map $$ \begin{array}{rcrcl} \theta & : & \mathcal{R}(n) & \to & \mathbb{C}^{n} \times \mathbb{C}^{n-1} \\ & & (a,b) & \mapsto & (a,(b_1,...,b_{n-1})) \end{array} $$ is injective. Its image $\theta(\mathcal{R}(n))$ is the set of $(a,\beta) \in \mathbb{C}^n \times \mathbb{C}^{n-1}$ such that $R_{a,b}$ is $n$-good, where $b=(\beta,-\sum_{j=1}^{n-1} \beta_j)$. This image is open since the set of critical values of $R_{a,b}$ depends continuously on $(a,\beta)$ and $\mathbb{D}$ is open. Thus : \begin{lemma}\label{dimrational} The space $\mathcal{R}(n)$ is a manifold of dimension $(4n-2)$ with a single chart. \end{lemma} For the quotient topology on the set of normalized $n$-good maps $\mathcal{R}(n)/\Sigma_n$, a sequence $(R_k)_{k=1}^\infty$ converges to $R$ if there exist $(a^k,b^k)$ and $(a,b)$ in $\mathcal{R}(n)$ such that $R_{a^k,b^k}=R_k$, $R_{a,b}=R$ and $(a^k,b^k) \to (a,b)$. It is easy to see that this implies that $(R_k)_{k=1}^\infty$ converges to $R$ uniformly on $\widehat{\mathbb{C}}$ with respect to the spherical metric. The converse is also true. \begin{lemma} \label{LemmaRational} Let $(R_k)_{k=1}^{\infty}$ be a sequence of rational functions of degree $n$, each vanishing at infinity. Suppose that $(R_k)_{k=1}^{\infty}$ converges locally uniformly on some open set $U$ to a function $Q$ holomorphic on $U$. Then $Q$ is a rational function of degree at most $n$. If $Q$ has degree exactly $n$, then $R_k \to Q$ spherically uniformly on $\widehat{\mathbb{C}}$. Moreover, $Q$ vanishes at infinity and the poles and residues of $R_k$ converge to the poles and residues of $Q$.
\end{lemma} \begin{proof} Let us prove first that $Q$ is a rational function of degree at most $n$. Write $R_k=p_k/q_k$, where $p_k$ and $q_k$ are polynomials of degree at most $n$. Let $D$ be a closed disk contained in $U$ and fix a point $z_0$ in $D$ which is not a zero of any $q_k$. Multiplying $p_k$ and $q_k$ by the same constant $\lambda_k$ if necessary, we can assume that $\|q_k\|_{\infty,D} =1$ and that $q_k(z_0) > 0$ for all $k$. Then the $q_k$'s are in the closed unit ball of the finite-dimensional vector space of polynomials of degree at most $n$, which is compact. Consequently, there exists a subsequence $(q_{k_l})_{l=1}^{\infty}$ such that $q_{k_l} \to q$ uniformly on $D$, where $q$ is a polynomial of degree at most $n$. Moreover, we have that $\|q\|_{\infty,D}=1$ and $q(z_0) \geq 0$. Now, let $p:=Qq$. Since $p_k=R_k q_k$, $R_k \to Q$ and $q_{k_l} \to q$ uniformly on $D$, we get that $p_{k_l} \rightarrow p$ uniformly on $D$. By compactness, $p$ is a polynomial of degree at most $n$, as is every $p_k$. Hence we obtain that $Q$ is equal to the rational function $p/q$, which is of degree at most $n$. \\ Assume now that the degree of $Q$ is exactly $n$. Let us prove first that $q_k \to q$ locally uniformly on $\mathbb{C}$ by showing that every subsequence of $(q_k)_{k=1}^{\infty}$ has a subsequence which converges locally uniformly to $q$. By the same argument as in the first part of the proof, every subsequence of $(q_k)_{k=1}^{\infty}$ has a subsequence which converges uniformly on $D$. Let $\tilde{q}$ be the uniform limit of a subsequence $(q_k)_{k \in S}$. Then $\tilde{q}$ is a polynomial of degree at most $n$, $\|\tilde{q}\|_{\infty,D}=1$ and $\tilde{q}(z_0) \geq 0$. As in the first part of the proof again, if $\tilde{p}:=Q \tilde{q}$, then the subsequence $(p_k)_{k \in S}$ converges to $\tilde{p}$ uniformly on $D$. Hence we have two polynomials of degree at most $n$, $\tilde{p}$ and $\tilde{q}$, such that $Q= \tilde{p}/ \tilde{q} = p/q$. The polynomials $\tilde{p}$ and $\tilde{q}$, as well as $p$ and $q$, cannot have a common factor since $Q$ has degree $n$. It follows that $\tilde{p}$ has the same zeros as $p$ and $\tilde{q}$ has the same zeros as $q$. There is thus a constant $\lambda$ such that $\tilde{p} = \lambda p$ and $\tilde{q} = \lambda q$. Since $\|\tilde{q}\|_{\infty,D}=\|q\|_{\infty,D}=1$, $\lambda$ is unimodular. As $Q$ is assumed to be holomorphic on $D$, we have $q(z_0)>0$ and $\tilde{q}(z_0)>0$, and hence $\lambda=1$. Therefore, the subsequence $(q_k)_{k \in S}$ converges to $q$ uniformly on $D$, and thus locally uniformly on all of $\mathbb{C}$, since all norms on a finite-dimensional vector space are equivalent. \\ Now, since $p_k = R_k q_k$, $p=Q q$ and $R_k \to Q$, $q_k \to q$ on $D$, we get that $p_k \to p$ uniformly on $D$. Again by equivalence of norms, $p_k \to p$ locally uniformly on $\mathbb{C}$. One verifies easily that this implies that $R_k = p_k / q_k$ converges to $Q = p/q$ locally uniformly on $\mathbb{C}$ with respect to the spherical metric. \\ Since $R_k$ vanishes at infinity, $q_k$ has degree $n$ and $p_k$ has degree at most $(n-1)$. Therefore, $p$ has degree at most $(n-1)$. The degree of $q$ is thus $n$, so that $Q=p/q$ vanishes at infinity.\\ Since $q_k \to q$ locally uniformly on $\mathbb{C}$, it follows from Rouch\'e's theorem that the zeros of $q_k$ converge to the zeros of $q$ as $k \to \infty$. Equivalently, the poles of $R_k$ converge to the poles of $Q$. 
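For the reader's convenience, let us make the Rouch\'e step explicit. If $\zeta$ is a zero of $q$ of multiplicity $m$ and $C$ is a small circle centered at $\zeta$ enclosing no other zero of $q$, then the uniform convergence $q_k \to q$ gives $|q_k-q|<|q|$ on $C$ for all large $k$, whence by the argument principle $$ \frac{1}{2\pi i}\oint_C \frac{q_k'(z)}{q_k(z)}\,dz=\frac{1}{2\pi i}\oint_C \frac{q'(z)}{q(z)}\,dz=m, $$ so that $q_k$ has exactly $m$ zeros, counted with multiplicity, inside $C$.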
Integrating $R_k$ on a small circle surrounding any pole of $Q$ and letting $k \to \infty$ shows that the residues of $R_k$ must converge to the residues of $Q$.\\ Let $B$ be a closed disk centered at $\infty$ on which $Q$ has no poles. When $k$ is large enough, $R_k$ has no poles on $B$. Since $R_k$ converges to $Q$ uniformly on $\partial B$, the maximum principle implies that convergence is uniform on $B$. Thus $R_k$ converges to $Q$ locally uniformly on all of $\widehat{\mathbb{C}}$. Since $\widehat{\mathbb{C}}$ is compact, convergence is uniform. \end{proof} For each $(a,b)\in\mathcal{R}(n)$, the domain $ R_{a,b}^{-1}(\mathbb{D})$ contains $\infty$ and is bounded by $n$ disjoint analytic Jordan curves by definition. Since there is exactly one pole of $R_{a,b}$ in each component of $\widehat{\mathbb{C}} \setminus R_{a,b}^{-1}(\mathbb{D})$, the ordering $(b_1,...,b_n)$ induces an ordering $(F_1,...,F_n)$ of the components of $\partial R_{a,b}^{-1}(\mathbb{D})$. This gives a map $$ \begin{array}{rcrcl} P &:& \mathcal{R}(n) &\to& \mathcal{M}(n)\\ & & (a,b) & \mapsto & [(R_{a,b}^{-1}(\mathbb{D}),F_1,...,F_n)]. \end{array} $$ As mentioned earlier, if $(a^k,b^k)$ converges to $(a,b)$, then $R_{a^k,b^k}$ converges spherically uniformly to $R_{a,b}$. An easy consequence is that the domains $R_{a^k,b^k}^{-1}(\mathbb{D})$ converge to $R_{a,b}^{-1}(\mathbb{D})$ in the sense of Carath\'eodory. It then follows from Theorem \ref{convergencekoebe} and the remark after it that $P(a^k,b^k)$ converges to $P(a,b)$ in $\mathcal{M}(n)$ and hence $P$ is continuous. \\ If $P$ is smooth and regular, as is claimed without proof in \cite{JEONG2}, then the inverse image $P^{-1}(\sigma)$ of every point $\sigma \in \mathcal{M}(n)$ is a smooth manifold of dimension $$ n= \dim \mathcal{R}(n) - \dim \mathcal{M}(n).$$ We do not know how to prove that $P$ is smooth, but we prove in section 8 that $P^{-1}(\sigma)$ is homeomorphic to the $n$-dimensional torus $\mathbb{T}^n$ for every $\sigma \in \mathcal{M}(n)$.\\ In particular, $P$ is surjective. This means that for any $[(X,E_1,...,E_n)] \in \mathcal{M}(n)$, we can find $(a,b) \in \mathcal{R}(n)$ such that there is an isomorphism $$ g : (X,E_1,...,E_n) \to (R_{a,b}^{-1}(\mathbb{D}),F_1,...,F_n). $$ If we have this, then the composition $R_{a,b} \circ g : X \to \mathbb{D}$ is a proper holomorphic map of degree $n$ vanishing at infinity. Following \cite{BELL}, we call such a map a \textit{Grunsky map}. In section 5, we will see that every Grunsky map on $X$ arises in this way uniquely. In section 6, we will see that the set of Grunsky maps on $X$ is in bijection with the cartesian product of its boundary curves $E_1 \times \cdots \times E_n$. \section{Bell representations} Let $X$ be a non-degenerate $n$-connected planar domain containing infinity. Recall that a proper holomorphic map $f : X \to \mathbb{D}$ of degree $n$ with $f(\infty)=0$ is called a Grunsky map.\\ The existence part of the following theorem was first proved in \cite{JEONG}. The uniqueness part was proved in \cite{FBY}. \begin{theorem} \label{normalizedBellrep} Let $X$ be a non-degenerate $n$-connected planar domain containing infinity and let $f$ be a Grunsky map on $X$. Then there exists a unique normalized $n$-good map $R\in \mathcal{R}(n)/\Sigma_n$ and a unique biholomorphism $g : X \to R^{-1}(\mathbb{D})$ tangent to the identity at infinity such that $f=R\circ g$. \end{theorem} The pair $(R,g)$ in the above theorem is called the \textit{Bell representation of $X$ associated to $f$}. 
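To fix ideas, we spell out the case $n=1$; this elementary example is ours and is included only as an illustration. Let $X=\{z : |z-c|>r\}\cup\{\infty\}$ with $c\in\mathbb{C}$ and $r>0$. A Grunsky map on $X$ is a biholomorphism $f : X \to \mathbb{D}$ with $f(\infty)=0$, and since the automorphisms of $\mathbb{D}$ fixing $0$ are the rotations, these are precisely the maps $$ f(z)=\lambda\,\frac{r}{z-c}, \qquad |\lambda|=1. $$ The Bell representation of $X$ associated to $f$ is $f=R_{a,b}\circ g$ with $$ g(z)=z-c, \qquad R_{a,b}(z)=\frac{\lambda r}{z}, \qquad (a,b)=(\lambda r,0)\in\mathcal{R}(1), $$ where $b_1=0$ is forced by the normalization $\sum_j b_j=0$, and $R_{a,b}^{-1}(\mathbb{D})=\{z : |z|>r\}\cup\{\infty\}$.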
We now prove that Bell representations depend continuously on the input. \begin{theorem} \label{thmconv} Let $X, X_k$ be non-degenerate $n$-connected planar domains each containing infinity. Let $f$ and $f_k$ be Grunsky maps on $X$ and $X_k$ respectively. Suppose that $X_k \to X$ and $f_k \to f$ locally uniformly on $X$. Write $f=R\circ g$ and $f_k=R_k \circ g_k$, where $R,R_k \in \mathcal{R}(n)/\Sigma_n$ and $g,g_k$ are biholomorphisms tangent to the identity at infinity. Then $g_k \to g$ locally uniformly on $X$ and $R_k \to R$ spherically uniformly on $\widehat{\mathbb{C}}$. \end{theorem} \begin{proof} We prove that every subsequence $S\subset \mathbb{N}$ admits a subsequence $S'\subset S$ along which the claimed convergence holds. Accordingly, let $S\subset \mathbb{N}$ be any subsequence.\\ By Lemma \ref{lemmeconv}, $( g_k )_{k\in S}$ has a subsequence $( g_k )_{k\in S'}$ which converges locally uniformly to a univalent function $h$ on $X$. It follows that $h$ is tangent to the identity at infinity. Moreover, by Theorem \ref{thmcara}, the domains $g_k(X_k)$ converge to $h(X)$ and $g_k^{-1} \to h^{-1}$ locally uniformly on $h(X)$, as $k\to \infty$ in $S'$. Since $f_k \to f$ locally uniformly on $X$, we have that $R_k=f_k \circ g_k^{-1}$ converges to $Q:=f \circ h^{-1}$ locally uniformly on $h(X)$, as $k\to \infty$ in $S'$. \\ By Lemma \ref{LemmaRational}, $Q$ is a rational function of degree at most $n$. Since $f$ has degree $n$ and $h$ is univalent, $Q$ has degree exactly $n$ and $h(X)$ is $n$-connected. Furthermore, since the restriction $Q : h(X) \to \mathbb{D}$ is proper of degree $n$, $h(X)$ must be equal to the full preimage $Q^{-1}(\mathbb{D})$. By Lemma \ref{LemmaRational}, the poles of $R_k$ converge to the poles of $Q$. In particular, the poles of $Q$ sum to zero, as this is the case for every $R_k$. Therefore, $Q$ is a normalized $n$-good map. \\ It follows from the uniqueness part of Theorem \ref{normalizedBellrep} that $Q=R$ and that $h=g$. Therefore, $g_k \to g$ locally uniformly on $X$ and $R_k \to R$ locally uniformly on $g(X)$, as $k\to \infty$ in $S'$. Since the subsequence $S\subset \mathbb{N}$ was arbitrary, we have convergence as $k\to \infty$ in $\mathbb{N}$. By Lemma \ref{LemmaRational}, $R_k$ converges to $R$ spherically uniformly on $\widehat{\mathbb{C}}$. \end{proof} In the next section, we will see how many Grunsky maps there are on any given non-degenerate $n$-connected planar domain containing infinity. \section{Grunsky maps} Let $X$ be a planar domain containing infinity and bounded by $n$ disjoint analytic Jordan curves $E_1,...,E_n$. If $f$ is a Grunsky map on $X$, then it extends analytically to the closure $\overline X$ by the Schwarz reflection principle. Moreover, the extended map $f : \overline X \to \overline \mathbb{D}$ sends each boundary curve $E_j$ homeomorphically onto the unit circle. In particular, there exists a unique $\alpha_j \in E_j$ such that $f(\alpha_j)=1$ for each $j$. It turns out that each $n$-tuple $(\alpha_1,...,\alpha_n)\in E_1\times \cdots \times E_n$ arises in this way uniquely, and hence the set of Grunsky maps on $X$ can be parametrized by $E_1 \times \cdots \times E_n$. \begin{theorem} \label{normalizedBieberbach} Let $[(X,E_1,...,E_n)] \in \mathcal{M}(n)$ and let $\alpha_j \in E_j$ for each $j$. There exists a unique Grunsky map on $X$ whose extension $f: \overline X \to \overline \mathbb{D}$ satisfies $f(\alpha_j)=1$ for each $j$. \end{theorem} \begin{proof} See \cite[Corollary 2.6]{FBY} and \cite[Theorem 2.2]{BELL}. 
\end{proof} We call the function $f$ in the above theorem the \textit{Grunsky map for $(\alpha_1,...,\alpha_n)$}. By combining the above result with Theorem \ref{normalizedBellrep}, we obtain the following. \begin{theorem}\label{BiebBellRep} Let $[(X,E_1,...,E_n)] \in \mathcal{M}(n)$ and let $\alpha_j \in E_j$ for each $j$. Then there is a unique $(a,b) \in \mathcal{R}(n)$ and a unique isomorphism $$ g: (X,E_1,...,E_n) \to (R_{a,b}^{-1}(\mathbb{D}),g(E_1),...,g(E_n)) $$ such that the curve $g(E_j)$ encloses $b_j$ for each $j$ and $R_{a,b} \circ g$ is the Grunsky map for $(\alpha_1,...,\alpha_n)$. \end{theorem} We will need the fact that Grunsky maps depend continuously on the domain $X$ as well as on the $n$-tuple $(\alpha_1,...,\alpha_n)$, at least for circle domains. The first ingredient for this is the following compactness result. \begin{lemma} \label{GrunskyVariableDomain} Let $X$ be an $n$-connected circle domain containing infinity. Suppose that $X_k \to X$, where each $X_k$ is an $n$-connected circle domain containing infinity. Let $f_k$ be a Grunsky map on $X_k$. Then there exists a subsequence $(f_{k_\ell})_{\ell=1}^{\infty}$, a Grunsky map $f$ on $X$, and a neighborhood $U$ of $\overline X$ such that $f_{k_\ell}$ and $f$ extend holomorphically to $U$ for all $\ell$ and $(f_{k_\ell})_{\ell=1}^\infty$ converges uniformly to $f$ on $U$. \end{lemma} \begin{proof} Denote by $E_1,...,E_n$ the circles bounding $X$, let $J_j$ denote inversion in the circle $E_j$, and let $$ Y:=\overline X \cup J_1(X) \cup \cdots \cup J_n(X). $$ Similarly, denote by $E_1^k,...,E_n^k$ the circles bounding $X_k$ labeled in such a way that the center and radius of $E_j^k$ converge to the center and radius of $E_j$ as $k\to \infty$. Then let $J_j^k$ denote inversion in the circle $E_j^k$ and let $$ Y_k:=\overline{X_k} \cup J_1^k(X_k) \cup \cdots \cup J_n^k(X_k). $$ For each $j \in \{1,...,n\}$, we have that $J_j^k$ converges spherically uniformly to $J_j$ on $\widehat{\mathbb{C}}$. It follows that $Y_k \to Y$ in the sense of Carath\'eodory. Moreover, each $f_k$ extends to a meromorphic function on $Y_k$, by Schwarz reflection. More precisely, if $J$ denotes inversion in the unit circle $\mathbb{T}$, then for $z\in J_j^k(X_k)$ we define $$f_k(z):=J(f_k(J_j^k(z))).$$ Let $F_k:= f_k^{-1}(\{-1,1,i\})$. For each $k$, $F_k$ has cardinality $3n$ and is contained in $\partial X_k$. Therefore, by passing to a subsequence if necessary, we can assume that $F_k$ converges to a finite set $F \subset \partial X$. By Montel's fundamental normality test, we can further extract a subsequence $(f_{k_\ell})_{\ell=1}^{\infty}$ converging locally uniformly to some meromorphic function $f$ on $Y \setminus F$ with respect to the spherical metric.\\ Since for each $k$ we have $f_k(X_k)=\mathbb{D}$, we have that $f(X)\subset \overline \mathbb{D}$ and hence $f$ is holomorphic on $X$. We also have $f_k(\partial X_k) = \mathbb{T}$ for each $k$ and hence $f(\partial X \setminus F) \subset \mathbb{T}$. Moreover, $f(\infty)=\lim_{\ell \to \infty} f_{k_\ell}(\infty)=0$, so that $f$ is not constant. By the maximum principle, we have $f(X) \subset \mathbb{D}$.\\ Let $w\in \mathbb{D}$ and let $\zeta$ be a zero of $f-w$ in $X$ of multiplicity $m$. Let $D \subset X$ be a closed disk centered at $\zeta$ and such that $f-w$ does not vanish on $D\setminus \{\zeta\}$. Since $f_{k_\ell} \to f$ uniformly on $\partial D$, there exists an $L$ such that $$|(f_{k_\ell}(z)-w) -(f(z) - w)| < |f(z)-w|$$ for all $z \in \partial D$ and all $\ell \geq L$.
By Rouch\'e's theorem, $f_{k_\ell}-w$ then has $m$ zeros in $D$ counting multiplicity. Since $f_k-w$ has exactly $n$ zeros in $X_k$ for each $k$, this implies that $f-w$ has a total of at most $n$ zeros in $X$. Therefore, $f$ has degree at most $n$ on $X$. The same is true on $J_j(X)$ for each $j$, since $f$ must be equal to $J\circ f \circ J_j$ there. Note that the above argument also implies that the zeros of $f_{k_\ell}$ converge to the zeros of $f$ when $f$ has degree exactly $n$.\\ By Picard's big theorem, $f$ cannot have essential singularities in $F$, and thus extends to a meromorphic function on $Y$. By continuity, we have $f(\partial X) \subset \mathbb{T}$. Since $f$ is not constant, the restriction $f : E_j \to \mathbb{T}$ to each boundary component is open. The image $f(E_j)$ is therefore open in $\mathbb{T}$ as well as compact and hence closed. By connectedness of the circle, we have $f(E_j)=\mathbb{T}$ for each $j$. This means that $f$ has degree at least $n$ and thus exactly $n$ on $\overline X$. The map $f:X \to \mathbb{D}$ is proper since it extends continuously to $\overline{X}$ with $f(\partial X) \subset \mathbb{T}$. It is thus a Grunsky map.\\ Let $U$ be any neighborhood of $\overline X$ with closure in $Y$ on which $f$ is bounded. The identities $f_{k_\ell}(J_j^{k_\ell}(z))=J(f_{k_\ell}(z))$ and $f(J_j(z))=J(f(z))$ for $z \in X$ imply that $f_{k_\ell}$ has a zero at $z_0 \in X$ if and only if it has a pole at $J_j^{k_\ell}(z_0)$ for each $j$, and similarly for $f$. Since the zeros of $f_{k_\ell}$ converge to the zeros of $f$ and $J_j^{k_\ell}$ converges to $J_j$ locally uniformly on $\widehat{\mathbb{C}}$ for each $j$, it follows that the poles of $f_{k_\ell}$ must converge to the poles of $f$. We may thus assume that $f_{k_\ell}$ is holomorphic on $U$ for all $\ell$. By the maximum principle, the sequence $(f_{k_\ell})_{\ell=1}^{\infty}$ converges to $f$ uniformly on $\overline U$. Indeed, $\partial U$ is compact and contained in $Y\setminus F$, where $(f_{k_\ell})_{\ell=1}^{\infty}$ converges to $f$ locally uniformly.\\ \end{proof} The following two corollaries are precisely what we need for the proof of Theorem \ref{MainTheorem}. \begin{corollary} \label{GrunskyBoundaryPoints1} Let $(X_k,E_1^k,...,E_n^k)$ and $(X,E_1,...,E_n)$ be non-degenerate $n$-connec\-ted circle domains containing infinity. Suppose that the center and radius of the circle $E_j^k$ converge to the center and radius of the circle $E_j$, as $k\to \infty$ for each $j$. Let $\alpha^k \in \prod_{j=1}^n E_j^k$ and $\alpha \in \prod_{j=1}^n E_j$ be such that $\alpha^k \to \alpha$. Let $f_k$ and $f$ be the Grunsky maps on $X_k$ and $X$ for $\alpha^k$ and $\alpha$ respectively. Then $(f_k)_{k=1}^{\infty}$ converges to $f$ locally uniformly on $X$. \end{corollary} \begin{proof} Let $(f_k)_{k\in S}$ be any subsequence of $(f_k)_{k=1}^{\infty}$. By Lemma \ref{GrunskyVariableDomain}, we can extract a subsequence $S' \subset S$ such that there is a Grunsky map $\varphi$ on $X$ and a neighborhood $U$ of $\overline X$ such that $\varphi$ and $f_k$ extend analytically to $U$ for all $k\in S'$ and $(f_k)_{k \in S'}$ converges to $\varphi$ uniformly on $U$ as $k \to \infty$ in $S'$. For each $j \in \{1,...,n \}$, we thus have $$ \varphi(\alpha_j)=\lim_{\substack{k\to\infty \\ k \in S'}} f_k(\alpha_j^k) = 1, $$ and hence $\varphi=f$ by Theorem \ref{normalizedBieberbach}. This proves that every subsequence of $(f_k)_{k=1}^{\infty}$ has a subsequence which converges to $f$ locally uniformly on $X$. 
Therefore $(f_k)_{k=1}^{\infty}$ converges to $f$ locally uniformly on $X$. \end{proof} \begin{corollary} \label{GrunskyBoundaryPoints2} Let $(X_k,E_1^k,...,E_n^k)$ and $(X,E_1,...,E_n)$ be $n$-connected circle domains containing infinity. Suppose that the center and radius of the circle $E_j^k$ converge to the center and radius of the circle $E_j$ for each $j$ as $k\to \infty$. Let $f_k$ and $f$ be Grunsky maps on $X_k$ and $X$ respectively. Let $\beta^k,\beta \in \mathbb{T}^n$ be such that $\beta^k \to \beta$ and let $\alpha_j^k \in E_j^k$ and $\alpha_j \in E_j$ be such that $f_k(\alpha_j^k)=\beta_j^k$ and $f(\alpha_j)=\beta_j$ for each $j$. If $(f_k)_{k=1}^{\infty}$ converges to $f$ locally uniformly on $X$, then $\alpha^k \to \alpha$. \end{corollary} \begin{proof} Fix $j \in \{1,...,n\}$, and let $(\alpha_j^k)_{k \in S}$ be any subsequence of $(\alpha_j^k)_{k\in \mathbb{N}}$. Since the circle $E_j^k$ converges to $E_j$, we can extract a subsequence $S' \subset S$ such that $\alpha_j^k$ converges to some $z_j \in E_j$ as $k\to \infty$ in $S'$. Moreover, by Lemma \ref{GrunskyVariableDomain}, there exists a further subsequence $S'' \subset S'$, a Grunsky map $\varphi$ on $X$ and a neighborhood $U$ of $\overline X$ to which $\varphi$ and $f_k$ extend for all $k\in S''$ and such that $(f_k)_{k \in S''}$ converges to $\varphi$ uniformly on $U$ as $k\to \infty$ in $S''$. Since $f_k$ converges locally uniformly to $f$ on $X$, we have $\varphi=f$. Moreover, by uniform convergence, we have that $$ f(z_j)=\lim_{\substack{k\to\infty \\ k\in S''}} f_k(\alpha_j^k)= \lim_{\substack{k\to\infty \\ k\in S''}} \beta_j^k = \beta_j. $$ Since the restriction $f : E_j \to \mathbb{T}$ is injective, $z_j = \alpha_j$. Therefore, every subsequence of $(\alpha_j^k)_{k\in \mathbb{N}}$ has a subsequence converging to $\alpha_j$ and hence $(\alpha_j^k)_{k=1}^{\infty}$ converges to $\alpha_j$.\\ \end{proof} \section{Ahlfors functions} If $g$ is a function holomorphic in a neighborhood of $\infty$ in $\widehat{\mathbb{C}}$, then we define $$ v_\infty(g):=g'(\infty):=\lim_{z\to \infty} z(g(z)-g(\infty)). $$ Let $X$ be a planar domain containing $\infty$. The \textit{analytic capacity} of $\widehat{\mathbb{C}} \setminus X$ is defined as $$ \gamma(\widehat{\mathbb{C}} \setminus X) := \sup \{ |g'(\infty)| : g \in \mathcal{O}(X,\overline \mathbb{D}) \}, $$ where $\mathcal{O}(X,Y)$ denotes the set of holomorphic maps from $X$ to $Y$. The analytic capacity $\gamma(\widehat{\mathbb{C}} \setminus X)$ is also known as the \textit{Carath\'eodory length} of the tangent vector $v_\infty$ in $X$.\\ If $\gamma(\widehat{\mathbb{C}} \setminus X)>0$, then there is a unique $f \in \mathcal{O}(X,\mathbb{D})$ such that $$ f'(\infty) = \gamma(\widehat{\mathbb{C}} \setminus X), $$ called the \textit{Ahlfors function} on $X$. It is easy to see that the Ahlfors function satisfies $f(\infty)=0$. Furthermore, if $h$ is univalent on $X$ and satisfies $h(\infty)=\infty$ and $\lim_{z\to \infty} h(z)/z = a$, then the Ahlfors function on $h(X)$ is $(|a|/a) f \circ h^{-1}$ and $$\gamma(\widehat{\mathbb{C}} \setminus h(X))=|a|\gamma(\widehat{\mathbb{C}} \setminus X).$$ We refer to this property as the \textit{transformation law} for analytic capacity. Finally, analytic capacity is \textit{outer regular}, in the sense that if $K_1 \supset K_2 \supset K_3 \dots$ is a decreasing sequence of compact sets, then $\gamma(\cap_j K_j) = \lim_{j \rightarrow \infty} \gamma(K_j)$. 
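For orientation, we record the classical example of a single disk (this is standard and is not needed in the sequel). If $X=\{z : |z-c|>r\}\cup\{\infty\}$, then $$ \gamma(\widehat{\mathbb{C}} \setminus X)=r $$ and the Ahlfors function on $X$ is $f(z)=r/(z-c)$. Indeed, given $g \in \mathcal{O}(X,\overline \mathbb{D})$, the function $h(w):=g(c+r/w)$ extends to an element of $\mathcal{O}(\mathbb{D},\overline \mathbb{D})$ with $g'(\infty)=rh'(0)$, so that $|g'(\infty)| \leq r$ by the Schwarz--Pick lemma ($|h'(0)| \leq 1-|h(0)|^2 \leq 1$), with equality only when $h$ is a rotation; the normalization $f'(\infty)>0$ then singles out $f(z)=r/(z-c)$.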
\\ The following is due to Ahlfors \cite{AHL} : \begin{theorem}[Ahlfors] \label{Ahlfors} If $X$ is a non-degenerate $n$-connected domain containing $\infty$, then the Ahlfors function on $X$ is a Grunsky map. In particular, if $X$ is bounded by $n$ disjoint analytic Jordan curves, then the Ahlfors function on $X$ extends analytically to a neighborhood of $\overline X$. \end{theorem} We will need continuous dependence of Ahlfors functions on their domain at least when the limiting domain is bounded by analytic Jordan curves. This is false in general even if each domain considered is a non-degenerate circle domain. \begin{example} Let $X:= \widehat{\mathbb{C}} \setminus \overline{\mathbb{D}}$ and let $\{x_k\}_{k=1}^{\infty}$ be a sequence dense in $\mathbb{D}$. Define $X_k$ to be the complement in $\widehat{\mathbb{C}}$ of disjoint closed disks centered at $x_1,x_2, \dots, x_k$ and contained in $\mathbb{D}$ of radius sufficiently small so that the analytic capacity of $\widehat{\mathbb{C}} \setminus X_k$ is less than $1/2$. This is always possible by outer-regularity of analytic capacity and by the fact that the analytic capacity of a finite set is zero, by Riemann's removable singularity theorem and Liouville's theorem. Then it is easy to see that $X_k \to X$ in the sense of Carath\'eodory, but the corresponding Ahlfors functions $f_k$ do not converge locally uniformly to the Ahlfors function on $X$, for otherwise we would have $\gamma(\widehat{\mathbb{C}} \setminus X_k) \to \gamma(\widehat{\mathbb{C}} \setminus X)=1$. \end{example} We thus need a stronger notion of convergence for domains. \begin{definition} Let $X$ and $X_k$ be domains in $\widehat{\mathbb{C}}$. We say that $X_k$ \textit{converges strongly to} $X$, and write $X_k \rightrightarrows X$, if for every compact set $K \subset X$ and every open set $U \supset \overline X$, we have $K \subset X_k \subset U$ for all but finitely many $k$. \end{definition} Note that if $\infty \in X$ and $X_k$ converges strongly to $X$, then $X_k$ converges to $X$ in the sense of Carath\'eodory. \begin{lemma} \label{AhlforsContinuous} Let $X$ be a planar domain containing $\infty$ and bounded by finitely many analytic Jordan curves. Suppose that $X_k \rightrightarrows X$, where $X_k$ are arbitrary domains containing $\infty$. Let $f_k$ and $f$ be the Ahlfors functions on $X_k$ and $X$ respectively. Then $(f_k)_{k=1}^{\infty}$ converges locally uniformly to $f$ on $X$. \end{lemma} \begin{proof} By Ahlfors' theorem, $f$ extends holomorphically to some neighborhood $V$ of $\overline X$. Then take a neighborhood $U\supset \overline X$ with $\overline U \subset V$, so that $f$ is bounded on $U$. \\ We show that every subsequence of $( f_k )_{k=1}^\infty$ has a subsequence which converges to $f$, which implies that $(f_k)_{k=1}^{\infty}$ converges to $f$.\\ By Montel's theorem, the uniformly bounded sequence $( f_k )_{k=1}^\infty$ forms a normal family. Therefore, every subsequence of $( f_k )_{k=1}^\infty$ has a subsequence which converges locally uniformly to a holomorphic function on $X$.\\ Let $g$ be the locally uniform limit of a subsequence. Then $g \in \mathcal{O}(X,\overline\mathbb{D})$, so $g'(\infty) \leq f'(\infty)$.\\ By hypothesis, $X_k \rightrightarrows X$, so if $k$ is large enough we have $X_k \subset U$ and $f$ is defined and holomorphic on $X_k$. Let $M_k := \sup \{|f(z)| : z \in X_k\}$. Then $M_k^{-1}f \in \mathcal{O}(X_k,\overline \mathbb{D})$, so that $M_k^{-1}f'(\infty) \leq f_k'(\infty)$. 
Of course, $M_k \to 1$ as $k \to \infty$, since $ X_k\rightrightarrows X$ and $f$ is continuous on $U$. Therefore, $$ f'(\infty) \leq \liminf f_k'(\infty) \leq g'(\infty) $$ and thus $g'(\infty)=f'(\infty)$. By uniqueness of the Ahlfors function, we have $g=f$. \end{proof} If we require that each domain $X_k$ in the sequence has the same connectivity as $X$, then we can replace strong convergence by Carath\'eodory convergence. \begin{theorem}\label{nconnectedAhlfors} Let $X$ be a non-degenerate $n$-connected domain containing infinity. Suppose that $X_k \to X$, where each $X_k$ is a non-degenerate $n$-connected domain containing infinity. Let $f_k$ and $f$ be the Ahlfors functions on $X_k$ and $X$ respectively. Then $(f_k)_{k=1}^{\infty}$ converges locally uniformly to $f$ on $X$. \end{theorem} \begin{proof} Let $h_k : X_k \to Y_k$ and $h : X \to Y$ be normalized Koebe representations. Then $(h_k)_{k=1}^{\infty}$ converges locally uniformly to $h$ on $X$ and $Y_k \to Y$, by Theorem \ref{convergencekoebe}. Since $Y_k$ and $Y$ are circle domains of connectivity $n$, we have in fact $Y_k \rightrightarrows Y$. Let $\varphi_k$ and $\varphi$ be the Ahlfors functions on $Y_k$ and $Y$ respectively. By Lemma \ref{AhlforsContinuous}, $(\varphi_k)_{k=1}^{\infty}$ converges to $\varphi$ locally uniformly on $Y$. By the transformation law, we have $f_k = \varphi_k \circ h_k$ and $f=\varphi \circ h$, so that $(f_k)_{k=1}^{\infty}$ converges to $f$ locally uniformly on $X$. \end{proof} \begin{remark} By the transformation law, the Carath\'eodory length of $v_\infty$ is a well-defined function on the moduli space $\mathcal{M}(n)/\Sigma_n$. The above theorem implies that this function is continuous. A similar but more general result is true for the Kobayashi--Poincar\'e length (see \cite{HEJ}). \end{remark} \section{Rational Ahlfors functions} Let $[X]\in \mathcal{M}(n)/\Sigma_n$. The Ahlfors function $f$ on $X$ is a Grunsky map by Theorem \ref{Ahlfors}. By Theorem \ref{normalizedBellrep}, there exists a unique $R \in \mathcal{R}(n)/\Sigma_n$ and a unique biholomorphism $$ g : X \to R^{-1}(\mathbb{D}) $$ tangent to the identity at infinity such that $f=R \circ g$. By the transformation law, $R$ is the Ahlfors function on $g(X)=R^{-1}(\mathbb{D})$. \begin{definition} If $Q \in \mathcal{R}(n)/\Sigma_n$ is such that $Q$ is the Ahlfors function on $Q^{-1}(\mathbb{D})$, we say that $Q$ is a \textit{normalized rational Ahlfors function}. \end{definition} Let $Q \in \mathcal{R}(n)/\Sigma_n$ be a normalized rational Ahlfors function such that there is a biholomorphism $$ h : X \to Q^{-1}(\mathbb{D}) $$ tangent to the identity at infinity. Then $Q \circ h$ is the Ahlfors function on $X$ by the transformation law, and thus $Q=R$ by the uniqueness part of Theorem \ref{normalizedBellrep}. Accordingly, we say that $R$ is the \textit{normalized rational Ahlfors function associated to X}.\\ If the curves bounding $X$ are labelled $E_1,...,E_n$, then we may define $A(X,E_1,...,E_n)$ as the unique $(a,b) \in \mathcal{R}(n)$ such that $R_{a,b}=R$ and $b_j$ is contained in $g(E_j)$ for each $j$. If $$h: (X,E_1,...,E_n) \to (Y,F_1,...,F_n)$$ is an isomorphism, then the Ahlfors function on $Y$ is given by $f \circ h^{-1}$ by the transformation law, and the latter factors uniquely as $R_{a,b} \circ (g \circ h^{-1})$. This shows that the map $$ \begin{array}{rcrcl} A &:& \mathcal{M}(n) &\to& \mathcal{R}(n)\\ & & [(X,E_1,...,E_n)] & \mapsto & (a,b) \end{array} $$ is well-defined. 
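For $n=1$, the map $A$ is completely explicit (our computation, continuing the $n=1$ examples given in sections 5 and 7): if $X=\{z : |z-c|>r\}\cup\{\infty\}$ with boundary circle $E_1$, then the Ahlfors function $r/(z-c)$ factors as $R_{r,0} \circ g$ with $g(z)=z-c$, so that $A([(X,E_1)])=(r,0)$. In particular, $\mathcal{A}(1)=\{(r,0) : r>0\}=\mathcal{R}^+(1)$, in accordance with the result of \cite{FBY} quoted in the introduction.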
Moreover, the image $\mathcal{A}(n):=A(\mathcal{M}(n))$ is the set of $(a,b) \in \mathcal{R}(n)$ such that $R_{a,b}$ is a normalized rational Ahlfors function. \\ The map $A$ is a right inverse for the map $P$ defined in section 4, since $$ g : (X,E_1,...,E_n) \to (R_{a,b}^{-1}(\mathbb{D}),g(E_1),...,g(E_n)) $$ is an isomorphism and $P(a,b)$ is by definition the isomorphism class of the latter.\\ We can use the map $A$ to construct a homeomorphism between $\mathcal{M}(n) \times \mathbb{T}^n$ and $\mathcal{R}(n)$. In turn, this will shed light on the topological properties of $A$. \begin{theorem}\label{MainTheorem} There is a homeomorphism $$ H : \mathcal{R}(n) \to \mathcal{M}(n) \times \mathbb{T}^n $$ which commutes with the action of $\Sigma_n$ and is such that the diagrams \begin{equation} \label{diagram1} \begin{tikzcd} \mathcal{R}(n) \arrow{r}{H} \arrow{dr}{P} & \mathcal{M}(n) \times \mathbb{T}^n \arrow{d}{\pi} \\ {} & \mathcal{M}(n) \end{tikzcd} \end{equation} and \begin{equation} \label{diagram2} \begin{tikzcd} \mathcal{R}(n) \arrow{r}{H} & \mathcal{M}(n) \times \mathbb{T}^n \\ {} & \mathcal{M}(n) \arrow{u}{\iota} \arrow{ul}{A} \end{tikzcd} \end{equation} commute, where $\pi : \mathcal{M}(n) \times \mathbb{T}^n \to \mathcal{M}(n)$ is the projection onto the first factor and $\iota: \mathcal{M}(n) \to \mathcal{M}(n) \times \mathbb{T}^n$ is the inclusion of $\mathcal{M}(n)$ as $\mathcal{M}(n) \times \{(1,...,1)\}$. \end{theorem} \begin{proof} Let $(a,b)\in \mathcal{R}(n)$. We define $H=(H_1,H_2)$ as follows. We take $$ H_1(a,b):=P(a,b)=[(R_{a,b}^{-1}(\mathbb{D}), F_1,...,F_n)] \in \mathcal{M}(n), $$ where $F_j$ is taken to be the boundary of the component of $R_{a,b}^{-1}(\widehat{\mathbb{C}} \setminus \mathbb{D})$ containing $b_j$. Let $\alpha_1,...,\alpha_n$ be the points in $F_1,...,F_n$ respectively with $R_{a,b}(\alpha_j)=1$. Also let $f: \overline{R_{a,b}^{-1}(\mathbb{D})} \to \overline{\mathbb{D}}$ be the Ahlfors function. We set $$H_2(a,b):=(f(\alpha_1),...,f(\alpha_n)) \in \mathbb{T}^n.$$ By construction, $H_2(a,b)=(1,...,1)$ if and only if $R_{a,b}$ is the Ahlfors function on $R_{a,b}^{-1}(\mathbb{D})$, i.e. if and only if $(a,b)=A(P(a,b))$.\\ We now construct an inverse $G$ to $H$. Given $[(X,E_1,...,E_n)] \in \mathcal{M}(n)$, let $f$ be the Ahlfors function on $X$. For $(\beta_1,...,\beta_n) \in \mathbb{T}^n$, let $\alpha_j$ be the unique point in $E_j$ such that $f(\alpha_j)=\beta_j$. By Theorem \ref{BiebBellRep}, there is a unique $(a,b) \in \mathcal{R}(n)$ and a unique isomorphism $g : (X,E_1,...,E_n) \to (R_{a,b}^{-1}(\mathbb{D}),F_1,...,F_n)$ such that $R_{a,b}(g(\alpha_j))=1$ for each $j$. We then set $ G([(X,E_1,...,E_n)],(\beta_1,...,\beta_n)):=(a,b).$\\ It is straightforward to verify that $G$ is the inverse of $H$, that both maps commute with the action of $\Sigma_n$, and that the diagrams commute.\\ By Lemma \ref{dimmoduli} and Lemma \ref{dimrational}, $\mathcal{R}(n)$ and $\mathcal{M}(n) \times \mathbb{T}^n$ are both manifolds of dimension $(4n-2)$. By Brouwer's invariance of domain, $H$ is continuous if and only if $G$ is.\\ Let us prove that $G$ is continuous. Let $(X_k,E_1^k,...,E_n^k)$ be a sequence of normalized circle domains converging to $(X,E_1,...,E_n)$ and let $\beta^k \in \mathbb{T}^n$ converge to $\beta$. Let $f_k$ and $f$ be the Ahlfors functions on $X_k$ and $X$ respectively. 
Let $\alpha_j^k$ be the point in $E_j^k$ such that $f_k(\alpha_j^k)=\beta_j^k$ and let $\alpha_j$ be the point in $E_j$ such that $f(\alpha_j)=\beta_j$.\\ By Theorem \ref{nconnectedAhlfors}, we have that $f_k$ converges to $f$ locally uniformly on $X$. By Corollary \ref{GrunskyBoundaryPoints2}, this implies that $\alpha^k \to \alpha$.\\ Let $g_k$ be the Grunsky map for $\alpha^k$ and $g$ the Grunsky map for $\alpha$. Then $g_k$ converges to $g$ locally uniformly on $X$ by Corollary \ref{GrunskyBoundaryPoints1}.\\ Let $(a^k,b^k)$ and $(a,b)$ be the parameters for the factorizations $R_{a^k,b^k} \circ h_k = g_k$ and $R_{a,b} \circ h = g$. Then $R_{a^k,b^k}$ converges to $R_{a,b}$ spherically uniformly on $\widehat{\mathbb{C}}$ by Theorem \ref{thmconv}. By Lemma \ref{LemmaRational}, we have $(a^k,b^k) \to (a,b)$. \end{proof} It follows that the maps $P$ and $A$ have the same topological properties as $\pi$ and $\iota$. \begin{corollary} The map $P$ is continuous and open, and for every $\sigma \in \mathcal{M}(n)$ the set $P^{-1}(\sigma)$ is homeomorphic to $\mathbb{T}^n$. \end{corollary} \begin{corollary} \label{ContinuousSection} The map $A : \mathcal{M}(n) \to \mathcal{R}(n)$ is a topological embedding with closed image. \end{corollary} \begin{remark} All the arrows in diagrams \ref{diagram1} and \ref{diagram2} commute with the action of the symmetric group $\Sigma_n$. Therefore, they descend to the quotients and the resulting diagrams $$ \begin{tikzcd} \mathcal{R}(n)/\Sigma_n \arrow{r}{\tilde H} \arrow{dr}{\tilde P} & (\mathcal{M}(n)/\Sigma_n) \times (\mathbb{T}^n/\Sigma_n) \arrow{d}{\tilde \pi} \\ {} & \mathcal{M}(n)/\Sigma_n \end{tikzcd} $$ and $$ \begin{tikzcd} \mathcal{R}(n)/\Sigma_n \arrow{r}{\tilde H} & (\mathcal{M}(n)/\Sigma_n) \times (\mathbb{T}^n/\Sigma_n) \\ {} & \mathcal{M}(n)/\Sigma_n \arrow{u}{\tilde \iota} \arrow{ul}{\tilde A} \end{tikzcd} $$ commute. The action of $\Sigma_n$ on $\mathcal{M}(n)$, $\mathcal{R}(n)$ and $\mathcal{A}(n)$ is properly disconti\-nuous and without fixed points. Therefore, the quotients $\mathcal{M}(n)/\Sigma_n$, $\mathcal{R}(n)/\Sigma_n$ and $\mathcal{A}(n)/\Sigma_n$ are manifolds without boundary. However, when $n>1$, the action of $ \Sigma_n$ on $\mathbb{T}^n$ has fixed points and the quotient $\mathbb{T}^n / \Sigma_n$ is a manifold with non-empty boundary (see \cite{MORTON}). For example, $\mathbb{T}^2 / \Sigma_2$ is the M\"obius band with boundary. \end{remark} \section{The positivity criterion} The manifold $\mathcal{A}(n):=A(\mathcal{M}(n))$ in $\mathbb{C}^n \times \mathbb{C}^n$ represents all Ahlfors functions on all non-degenerate $n$-connected domains. It is therefore of interest to determine this manifold explicitly.\\ Let $\mathcal{R}^+(n)$ denote the subset of $\mathcal{R}(n)$ consisting of parameters $(a,b)$ such that all the $a_j$'s are real and positive, that is, $\mathcal{R}^+(n) := \mathcal{R}(n) \cap ((\mathbb{R}_{>0})^n \times \mathbb{C}^n)$.\\ Here is a heuristic argument explaining why one should expect $\mathcal{A}(n)$ and $\mathcal{R}^+(n)$ to have anything to do with each other. For every $(a,b) \in \mathcal{R}(n)$, we have $$ R_{a,b}'(\infty) = \lim_{z\to \infty} z \sum_{j=1}^n \frac{a_j}{z-b_j}= \sum_{j=1}^n a_j. 
$$ Given $\sigma = [(X,E_1,...,E_n)]$ in $\mathcal{M}(n)$ and $(a,b) \in P^{-1}(\sigma)$ with biholomorphism $$ g : X \to R_{a,b}^{-1}(\mathbb{D}) $$ tangent to the identity at infinity, we have that $R_{a,b} \circ g \in \mathcal{O}(X,\mathbb{D})$ and hence $$ \Re \sum_{j=1}^n a_j = \Re R_{a,b}'(\infty) \leq |R_{a,b}'(\infty)|= |(R_{a,b}\circ g)'(\infty)| \leq \gamma(\widehat{\mathbb{C}} \setminus X). $$ Moreover, if the equality $\Re \sum_{j=1}^n a_j = \gamma(\widehat{\mathbb{C}} \setminus X)$ occurs, then $R_{a,b}\circ g$ is the Ahlfors function on $X$, so that $(a,b)=A(\sigma)$. In other words, $A(\sigma)$ is the unique parameter $(a,b) \in P^{-1}(\sigma)$ maximizing the quantity $\Re \sum_{j=1}^n a_j$. Intuitively, this parameter should have all $a_j$'s nearly positive in order to maximize the real part of the sum $\sum_{j=1}^n a_j$. Of course, this depends on the shape of the $n$-dimensional torus $P^{-1}(\sigma)$ sitting inside $\mathbb{C}^n \times \mathbb{C}^n$.\\ In fact, $\mathcal{A}(n)$ is equal to $\mathcal{R}^+(n)$ for $n=1,2$, as shown in \cite{FBY}. However, this fails in higher connectivity : \begin{lemma} \label{generalizedexample} For every $n\geq 3$, $\mathcal{R}^+(n) \setminus \mathcal{A}(n)$ is not empty. \end{lemma} \begin{proof} In \cite{FBY}, we give a numerical example of a $3$-good rational map with positive residues which is not Ahlfors. The specific example is $$ R(z) : = \frac{0.4}{z} + \frac{0.4}{z-(1+i)} + \frac{0.4}{z-6}. $$ \\ Choose distinct points $b_4,...,b_n$ in $R^{-1}(\mathbb{D})$. For $\varepsilon>0$, define $$ Q_\varepsilon(z) := R(z) + \sum_{j=4}^n \frac{\varepsilon}{z-b_j}. $$ Our claim is that when $\varepsilon$ is small enough, $Q_\varepsilon$ is $n$-good, but is not Ahlfors.\\ We have that $Q_\varepsilon \to R$ locally uniformly on $\widehat{\mathbb{C}} \setminus \{ b_4 ,..., b_n \}$ as $\varepsilon \to 0$. This implies that $Q_\varepsilon^{-1}(\mathbb{D})$ converges strongly to $R^{-1}(\mathbb{D}) \setminus \{ b_4 ,..., b_n\}$. In particular, when $\varepsilon$ is small enough, $Q_\varepsilon^{-1}(\widehat{\mathbb{C}} \setminus \mathbb{D})$ has at least $n$ connected components and thus exactly $n$ since $Q_\varepsilon$ has degree $n$. Therefore, $Q_\varepsilon$ is $n$-good. \\ Let $f$ be the Ahlfors function on $R^{-1}(\mathbb{D}) \setminus \{ b_4 ,..., b_n\}$. The singularities $b_4 ,..., b_n$ are removable so that $f$ is the Ahlfors function on $R^{-1}(\mathbb{D})$. Since $R$ is not the Ahlfors function on $R^{-1}(\mathbb{D})$, we have $f'(\infty) > R'(\infty)$. Now let $f_\varepsilon$ be the Ahlfors function on $Q_\varepsilon^{-1}(\mathbb{D})$. As in the proof of Lemma \ref{AhlforsContinuous}, we have that $f_\varepsilon \to f$ locally uniformly on $R^{-1}(\mathbb{D}) \setminus \{ b_4 ,..., b_n\}$ as $\varepsilon \to 0$. In particular, we have $f_\varepsilon'(\infty) \to f'(\infty)$.\\ On the other hand, $Q_\varepsilon'(\infty) = R'(\infty) + (n-3) \varepsilon \to R'(\infty)$ as $\varepsilon \to 0$. If $\varepsilon$ is small enough, we thus have $$ f_\varepsilon'(\infty) > Q_\varepsilon'(\infty) $$ so that $Q_\varepsilon$ is not the Ahlfors function on $Q_\varepsilon^{-1}(\mathbb{D})$. Precomposing $Q_\varepsilon$ with the appropriate translation yields an element in $(\mathcal{R}^+(n)/\Sigma_n) \setminus (\mathcal{A}(n)/\Sigma_n)$. \end{proof} We will prove that $\mathcal{A}(n)$ is not contained in $\mathcal{R}^+(n)$ either for $n\geq 3$. Beforehand, we need topological information on $\mathcal{R}^+(n)$. 
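The following script is our own illustrative aside and plays no role in the proofs. It checks numerically that the map $R$ above has $R'(\infty)=1.2$ and that all of its critical values lie in the unit disk; since $R$ has degree $3$, this is exactly the sufficient condition for $3$-goodness invoked in the proof of the next lemma, and the last number printed is the critical radius $\rho(a,b)$ defined there.
\begin{verbatim}
import numpy as np

# Sanity check (not part of the paper) for
#   R(z) = 0.4/z + 0.4/(z-(1+i)) + 0.4/(z-6).
a = np.array([0.4, 0.4, 0.4])
b = np.array([0.0, 1.0 + 1.0j, 6.0])
print("R'(infinity) = sum of residues =", a.sum())

# Critical points of R are the roots of
#   N(z) = sum_j a_j * prod_{k != j} (z - b_k)^2,
# the numerator of -R'(z) after clearing denominators.
N = np.poly1d([0.0])
for j in range(3):
    p = np.poly1d([1.0])
    for k in range(3):
        if k != j:
            p *= np.poly1d([1.0, -b[k]]) ** 2
    N += a[j] * p

crit_vals = np.array([np.sum(a / (z - b)) for z in N.roots])
print("critical radius rho(a,b) =", np.abs(crit_vals).max())
\end{verbatim}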
\begin{lemma} Let $K \subset \mathcal F_n \mathbb{C}$ be a compact set. There exists an $\varepsilon>0$ such that if $b \in K$ and $a=(a_1,...,a_n)$ satisfies $0<|a_j|\leq\varepsilon$ for each $j\in\{1,...,n\}$, then $R_{a,b}$ is $n$-good. \end{lemma} \begin{proof} For $(a,b) \in \mathbb{C}^n \times \mathbb{C}^n$, we define the critical radius $\rho(a,b)$ to be the largest modulus among the critical values of $R_{a,b}$. This is a continuous function. Note that $R_{\lambda a, b} = \lambda R_{a,b}$, so that $\rho(\lambda a , b) = |\lambda| \rho(a,b)$. Let $$ B:= \{ a \in \mathbb{C}^n : \|a\|_\infty \leq 1 \}. $$ Then $B$ is compact so that $\rho$ attains a maximum $M$ on $B \times K$. If $0< |a_j| \leq 1/(2M)$ for all $j$, then $2M a \in B$, so that for every $b\in K$ the critical radius of $(2M a, b)$ is at most $M$. The critical radius of $(a, b)$ is thus at most $1/2$, and hence $R_{a,b}$ is $n$-good since it has degree $n$ and all its critical values are in the unit disk. \end{proof} \begin{lemma} \label{PositiveConnected} $\mathcal{R}^+(n)$ is a closed connected submanifold of $\mathcal{R}(n)$ of dimension $(3n-2)$. \end{lemma} \begin{proof} The fact that $\mathcal{R}^+(n)$ is a closed submanifold of $\mathcal{R}(n)$ of dimension $(3n-2)$ is elementary.\\ We now prove that $\mathcal{R}^+(n)$ is path connected. Let $(a^0,b^0),(a^1,b^1) \in \mathcal{R}^+(n)$. To construct a path we first shrink the vector of residues $a^0$ while keeping the poles fixed. Once the residues are all small enough, we move the poles from $b^0$ to $b^1$ while adjusting the residues individually in such a way that we can then expand them by a common factor to end up with $a^1$.\\ It is well-known (and easy to prove by induction) that $\mathcal{F}_n\mathbb{C}$ is path-connected. Let $p : [0,1] \to \mathcal{F}_n\mathbb{C}$ be a path from $b^0$ to $b^1$. We can modify $p$ so that the sum of its coordinates is zero for all $t$. In other words, define $q(t)_j:= p(t)_j - \frac{1}{n}\sum_{i=1}^n p(t)_i$. Let $K:=q([0,1])$ denote the trace of the path, and let $0<\varepsilon \leq \min(\|a^0\|_\infty,\|a^1\|_\infty)$ be as in the previous lemma.\\ For each $j\in\{1,...,n\}$, define $$ b_j^t := \left\{ \begin{array}{ccl} b_j^0 & & t \in [0,1/3) \\ q(3t-1)_j & & t \in [1/3,2/3] \\ b_j^1 & & t\in (2/3,1] \end{array}\right. $$ and $$ a_j^t := \left\{ \begin{array}{ccl} \mu_t a_j^0 & & t \in [0,1/3) \\ (2-3t)\mu_{1/3} a_j^0 + (3t-1) \nu_{2/3} a_j^1 & & t \in [1/3,2/3] \\ \nu_t a_j^1 & & t\in (2/3,1] \end{array}\right., $$ where $$ \mu_t := 1-3(1- \varepsilon / \|a^0\|_\infty)t $$ and $$ \nu_t:= 3\varepsilon/\|a^1\|_\infty-2+3(1- \varepsilon / \|a^1\|_\infty)t $$ are positive, affine functions of $t$ such that $\mu_0=\nu_1=1$, $\mu_{1/3} a_j^0 \leq \varepsilon$ and $\nu_{2/3} a_j^1 \leq \varepsilon$ for all $j$.\\ The path $(a^t,b^t)$ is clearly continuous and is such that the points $b_1^t,...,b_n^t$ are distinct and add up to zero and $a_1^t,...,a_n^t >0$ for all $t$. The map $R_{a^t,b^t}$ is $n$-good for $t\in [0,1/3)$ since $$ R_{a^t,b^t} = \mu_t R_{a^0,b^0}, $$ and $\mu_t\in (0,1]$. It is $n$-good for $t\in (2/3,1]$ since $$ R_{a^t,b^t} = \nu_t R_{a^1,b^1}, $$ and $\nu_t \in (0,1]$. Finally, $R_{a^t,b^t}$ is $n$-good for $t \in [1/3,2/3]$ because $b^t \in K$ and $0<a_j^t \leq \varepsilon$ for each $j$.\\ Therefore, $(a^t,b^t)$ is a path from $(a^0,b^0)$ to $(a^1,b^1)$ inside $\mathcal{R}^+(n)$.
\end{proof} We can now prove: \begin{lemma} \label{reverseinclusion} For every $n\geq 3$, $\mathcal{A}(n) \setminus \mathcal{R}^+(n)$ is not empty. \end{lemma} \begin{proof} Suppose for a contradiction that $\mathcal{A}(n)$ is contained in $\mathcal{R}^+(n)$. The map $A : \mathcal{M}(n) \to \mathcal{R}^+(n)$ is then an embedding with closed image by Corollary \ref{ContinuousSection} and Lemma \ref{PositiveConnected}. Since $\mathcal{M}(n)$ and $\mathcal{R}^+(n)$ are both manifolds of dimension $(3n-2)$, the map $A$ is open by Brouwer's invariance of domain. Since $\mathcal{R}^+(n)$ is connected, $A$ must be surjective, which contradicts Lemma \ref{generalizedexample}. \end{proof} \begin{remark} The above proof is non-constructive. Indeed, we do not know any explicit example of a rational Ahlfors function whose residues are not all positive. \end{remark} Lemma \ref{generalizedexample} and Lemma \ref{reverseinclusion} together yield: \begin{theorem} \label{nonpositive} For every $n\geq 3$, neither $\mathcal{R}^+(n) \subset \mathcal{A}(n)$ nor $\mathcal{A}(n) \subset \mathcal{R}^+(n)$ holds. \end{theorem} \acknowledgments{The authors thank Jeremy Kahn and Thomas Ransford for helpful discussions.} \bibliographystyle{amsplain}
\section{#1} \label{sec-#2} \setcounter{equation}{0} \setcounter{theorem}{0} } \newcommand{\newsub}[2]{ \subsection{#1} \label{sub-#2} } \newcommand{\refeqn}[1]{ (\!\!~\ref{eq:#1}) } \newcommand{\refthm}[1]{ (\!\!~\ref{#1}) } \nc{\Holder}{H\"{o}lder\ } \newcommand{\vhat}[1]{ \hat{\mathbf{#1}} } \newcommand{\vnorm}[1]{ ||\vec{#1}|| } \newcommand{\parder}[2]{ \frac{\partial #1}{\partial #2} } \newcommand{\dotvecs}[2]{ \vec{#1} \bullet \vec{#2} } \newcommand{\crvecs}[2]{ \vec{#1} \times \vec{#2} } \nc{\ith}{ \ensuremath{\text{i}^{\text{th}}} } \nc{\jth}{ \ensuremath{\text{j}^{\text{th}}} } \nc{\kth}{ \ensuremath{\text{k}^{\text{th}}} } \nc{\curl}{ \nabla \times } \nc{\Div}{ \nabla \cdot } \nc{\Ppl}{ \mathcal{M}^{+} } \nc{\Pmn}{ \mathcal{M}^{-} } \nc{\smiley}{ $\stackrel{\because}{\smile} \;$ } \newcommand{\BVP}[4]{ \begin{equation} \begin{array}{rl} #1 & \ \text{in} \ \ #4 \vspace{.05in} \\ #2 & \ \text{on} \ \ \partial #4 \;. \end{array} \label{eq:#3} \end{equation} } \newcommand{\BVPom}[3]{ \BVP{#1}{#2}{#3}{ \Omega } } \newcommand{\BVPb}[3]{ \BVP{#1}{#2}{#3}{ B_{1} } } \newcommand{\BVPhao}[4]{ \begin{equation} \left\{ \begin{array}{ll} #1 & \ \text{in} \ \ #4 \vspace{.05in} \\ \ \\ #2 & \ \text{on} \ \ \partial #4 \;. \end{array} \right. \label{eq:#3} \end{equation} } \newcommand{\BVPK}[4]{ \begin{equation} \begin{array}{rl} #1 & \ \text{in} \ \ #4 \vspace{.05in} \\ #2 & \ \text{on} \ \ \partial #4 \;. \end{array} \label{eq:#3} \end{equation} } \newcommand{\BVPKb}[3]{ \BVP{#1}{#2}{#3}{ B_{3/4} } } \newcommand{\BVPc}[4]{ \begin{equation} \begin{array}{rl} #1 & \ \text{in} \ \ #4 \vspace{.05in} \\ #2 & \ \text{on} \ \ \partial #4 \;, \end{array} \label{eq:#3} \end{equation} } \newcommand{\BVPomc}[3]{ \BVPc{#1}{#2}{#3}{ \Omega } } \newcommand{\BVPbc}[3]{ \BVPc{#1}{#2}{#3}{ B_{1} } } \newcommand{\BVPcbsn}[3]{ \BVPc{#1}{#2}{#3}{ B_{s_0} } } \newcommand{\BVPn}[4]{ \begin{equation} \begin{array}{rl} #1 & \ \text{in} \ \ #4 \vspace{.05in} \\ #2 & \ \text{on} \ \ \partial #4 \end{array} \label{eq:#3} \end{equation} } \newcommand{\BVPomn}[3]{ \BVPn{#1}{#2}{#3}{ \Omega } } \newcommand{\BVPbn}[3]{ \BVPn{#1}{#2}{#3}{ B_{1} } } \begin{document} \numberwithin{equation}{section} \maketitle \begin{abstract} \noindent We study the obstacle problem with an elliptic operator in divergence form. We develop all of the basic theory of existence, uniqueness, optimal regularity, and nondegeneracy of the solutions. These results, in turn, allow us to begin the study of the regularity of the free boundary in the case where the coefficients are in VMO. \end{abstract} \newsec{Introduction}{Intro} \noindent We study minimizers of \begin{equation} \int_{B_1} a^{ij} D_i u D_j u \end{equation} among $u$ in the Hilbert space $W^{1,2}_0(B_1)$ which are constrained to lie above a fixed obstacle $\varphi \in C^{0}(\closure{B_1}).$ (We use Einstein summation notation throughout the paper.) We assume that our obstacle $\varphi < 0$ on $\partial B_1,$ and to avoid triviality we will assume that $\max \varphi > 0.$ We assume that at each $x \in B_1,$ the matrix $\mathcal{A} = (a^{ij})$ is symmetric and strictly and uniformly elliptic, i.e. 
\begin{equation} \mathcal{A} \equiv \mathcal{A}^{T} \ \ \text{and} \ \ 0 < \lambda I \leq \mathcal{A} \leq \Lambda I \;, \label{eq:UniformEllip} \end{equation} or, in coordinates: $$a^{ij} \equiv a^{ji} \ \ \text{and} \ \ 0 < \lambda |\xi|^2 \leq a^{ij} \xi_i \xi_j \leq \Lambda |\xi|^2 \ \ \text{for all} \ \xi \in \R^n, \ \xi \ne 0 \;.$$ If we let $Lv := D_{i}a^{ij}D_{j} v$ in the usual weak sense for a divergence form operator and we consider the case where $L\varphi \in L^{\infty}(B_1),$ then by letting $w:= u - \varphi$ and by letting $f := -L\varphi,$ the study of the minimizers above leads us to look at weak solutions of the obstacle-type problem: \begin{equation} Lw := D_{i}a^{ij}D_{j}w = \chisub{ \{ w > 0 \} }f \ \ \text{in} \ \ B_1 \;, \label{eq:BasicProb} \end{equation} where $\chisub{S}$ denotes the characteristic function of the set $S,$ and where we look for $w \geq 0.$ A weak solution to a second order partial differential equation is a weakly differentiable function which satisfies an appropriate equality when integrated against test functions. (See chapter 8 of \cite{GT}.) As an example, we will say that $w \in W^{1,2}(B_1)$ satisfies Equation\refeqn{BasicProb}if for any $\phi \in W^{1,2}_{0}(B_1)$ we have: \begin{equation} - \int_{B_1} a^{ij}D_{j}w D_{i}\phi = \int_{B_1} \phi \chisub{ \{ w > 0 \} }f \;. \label{eq:BasicProbWeakForm} \end{equation} Our motivations for studying this type of problem are primarily theoretical. Indeed, the obstacle problem is possibly the most fundamental and important free boundary problem, and it originally motivated the study of variational inequalities. On the other hand, the obstacle problem has well-established connections to the Stefan problem and the Hele-Shaw problem. (See \cite{C1} and \cite{BKM} for example.) Furthermore, as observed in \cite{MPS}, the mathematical modeling of numerous physical and engineering phenomena can lead to elliptic problems with discontinuous coefficients, and so the current case seems to allow some of the weakest possible solutions. Our main result is the following: \begin{theorem}[Free Boundary Regularity] \label{FBRi} We assume \begin{enumerate} \item $w \geq 0$ satisfies Equation\refeqn{BasicProb}\!\!, \item $a^{ij}$ satisfies Equation\refeqn{UniformEllip}\!\!, \item $0 < \lambda^{\ast} \leq f \leq \Lambda^{\ast},$ and \item $a^{ij}$ and $f$ belong to the space of vanishing mean oscillation (VMO). \end{enumerate} We let $S_r$ denote the set of regular points of the free boundary within $B_r,$ and assume $K \subset \subset S_{1/2}.$ Then $K$ is relatively Reifenberg vanishing with respect to $S_{1/2}.$ \end{theorem} \noindent The definition of relatively Reifenberg vanishing is found at the beginning of the fifth section. As a corollary of this result we will conclude that blowup limits at regular points will be rotations and scalings of the function $(x_n^{+})^2.$ Given that this function is homogeneous of degree 2, it is quite usual to use Weiss's celebrated monotonicity formula to prove this type of result. (See \cite{W}.) On the other hand, the weak nature of our equation, together with the weak $W^{1,2}$ convergence to blowup solutions, makes it difficult to estimate differences of the values of the Dirichlet integrals which appear in Weiss's formula. So, instead of using homogeneity to prove Reifenberg flatness, our paper goes in the opposite direction.
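Before the preliminaries, the following short script is our own illustrative sketch and is not part of the paper. It verifies Equation\refeqn{BasicProbWeakForm}numerically in the simplest one-dimensional model case $a \equiv 1$ and $f \equiv 1,$ where $w(x) = \frac{1}{2}\left(x^{+}\right)^2$ on $(-1,1)$ and $\phi$ is a smooth test function vanishing at the endpoints; here $w'' = \chisub{ \{ x > 0 \} }$ away from the origin and $w'$ is continuous, so integration by parts gives the identity exactly, and the script only confirms that a discretization reproduces it.
\begin{verbatim}
import numpy as np

# Check  -int w' phi' = int phi chi_{w>0} f  for w = (x^+)^2/2, a = 1, f = 1.
x = np.linspace(-1.0, 1.0, 20001)
w = 0.5 * np.maximum(x, 0.0) ** 2
phi = (1.0 - x ** 2) ** 2              # smooth, vanishes on the boundary
trap = lambda g: float(np.sum(0.5 * (g[1:] + g[:-1]) * np.diff(x)))
dw, dphi = np.gradient(w, x), np.gradient(phi, x)
print(-trap(dw * dphi))                # left-hand side
print(trap(phi * (w > 0)))             # right-hand side, = 8/15
\end{verbatim}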
\newsec{Preliminaries and Basic Results}{PrelBasRes} \noindent We will use the following basic notation throughout the paper: $$ \begin{array}{lll} \chisub{D} & \ & \text{the characteristic function of the set} \ D \\ \closure{D} & \ & \text{the closure of the set} \ D \\ \partial D & \ & \text{the boundary of the set} \ D \\ x & \ & (x_1, x_2, \ldots, x_n) \\ x^{\prime} & \ &(x_1, x_2, \ldots, x_{n-1}, 0) \\ B_{r}(x) & \ & \text{the open ball with radius} \ r \ \text{centered at the point} \ x \\ B_{r} & \ & B_{r}(0) \\ \Omega(w) & \ & \{ w > 0 \} \\ \Lambda(w) & \ & \{ w = 0 \} \\ FB(w) & \ & \partial \Omega(w) \cap \partial \Lambda(w) \\ \end{array} $$ Throughout the entire paper, $n, \lambda,$ and $\Lambda$ will remain fixed, and so we will omit all dependence on these constants in the statements of our theorems. We will typically work in the Sobolev spaces and the \Holder spaces, and we will follow all of the definitions and conventions found in the book by Gilbarg and Trudinger. (See \cite{GT}.) To simplify exposition slightly, for $u,v \in W^{1,2}(D)$ we will say that $u = v$ on $\partial D$ if $u - v \in W^{1,2}_{0}(D).$ We define the divergence form elliptic operator \begin{equation} L := D_j \; a^{ij}(x) D_i \;, \label{eq:Ldef} \end{equation} or, in other words, for a function $u \in W^{1,2}(\Omega)$ and $f \in L^2(\Omega)$ we say ``$Lu = f$ in $\Omega$'' if for any $\phi \in W_{0}^{1,2}(\Omega)$ we have: \begin{equation} - \int_{\Omega} a^{ij}(x) D_{i} u D_{j} \phi = \int_{\Omega} f \phi \;. \label{eq:Ldef2} \end{equation} (Notice that with our sign conventions we can have $L = \Delta$ but not $L = -\Delta.$) Next, we fix a function $\psi \in W_{loc}^{1,2}(\R^n)$ with $\psi \geq 0$ which we will use as boundary data, and we fix a function $\varphi \in C^{0}(\dclosure{B_1})$ which we will use as an obstacle. Define the functionals: $$D(u, \Omega) := \int_{\Omega} (a^{ij}D_i u D_j u) \;, \ \ \text{and}$$ $$J(w, \Omega) := \int_{\Omega} (a^{ij}D_i wD_j w + 2w) \;.$$ For any bounded set $\Omega \subset \R^n$ we will minimize these functionals in the following sets, respectively: $$S_{\Omega, \varphi} := \{ u \in W^{1,2}_{0}(\Omega) \; : \; u \geq \varphi \; \} \;, $$ $$H_{\Omega,\psi} := \{ w \in W^{1,2}(\Omega) \; : \; w - \psi \in W_{0}^{1,2}(\Omega) \; \} \;, \ \ \text{and}$$ $$K_{\Omega, \psi} := \{ \ w \in H_{\Omega, \psi} \; : \; w(x) \geq 0 \ \text{for all} \ x \in \Omega \; \}.$$ When it is clear on which set we are working, we will simply write ``$D(u)$'' in place of ``$D(u, \Omega)$'' and ``$S_{\varphi}$'' in place of ``$S_{\Omega, \varphi}$'' and so on. Probably the most classic version of the obstacle problem involves minimizing $D(u, B_1)$ within $S_{B_1,\varphi}$ in the case where $a^{ij} = \delta^{ij}.$ (Here we use $\delta^{ij}$ to denote the usual Kronecker delta function so that $D(u)$ simplifies to the usual Dirichlet integral. See \cite{C1}, \cite{C2}, \cite{C4}, and \cite{C5} for an analysis of this problem.)
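As a computational aside (our sketch, not from the paper), the constrained minimization of $D$ over $S_{\Omega,\varphi}$ can be approximated in one dimension by projected Gauss-Seidel sweeps: each pointwise update minimizes the discrete energy in one variable and is then projected back above the obstacle. The coefficient $a(x)$ and the obstacle below are invented test data.
\begin{verbatim}
import numpy as np

# Projected Gauss-Seidel for  min int_0^1 a(x)(u')^2 dx  over u >= phi,
# with u(0) = u(1) = 0.  Illustrative only.
n = 401
x = np.linspace(0.0, 1.0, n)
a = 1.0 + 0.5 * np.sin(2.0 * np.pi * x)    # uniformly elliptic coefficient
phi = 0.25 - 4.0 * (x - 0.5) ** 2          # obstacle, negative at the boundary
am = 0.5 * (a[:-1] + a[1:])                # midpoint coefficients a_{i+1/2}

u = np.maximum(phi, 0.0)                   # feasible initial guess
for sweep in range(5000):
    for i in range(1, n - 1):
        # one-variable energy minimizer, projected onto {u_i >= phi_i}
        t = (am[i - 1] * u[i - 1] + am[i] * u[i + 1]) / (am[i - 1] + am[i])
        u[i] = max(phi[i], t)

mask = (u == phi) & (phi > 0.0)            # contact (coincidence) set
print("contact set roughly [%.3f, %.3f]" % (x[mask].min(), x[mask].max()))
\end{verbatim}
In higher dimensions the same projected sweep can, in principle, be applied to any symmetric positive definite discretization of $D$; only the one-variable minimizer changes.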
Indeed, following the same arguments given at the beginning of \cite{C5}, but for the more general $a^{ij}$ considered here, we can establish the following theorem: \begin{theorem}[Basic Results] \label{BasicRes} Given an obstacle $\varphi \in W^{1,2}(B_1)$ which has a trace on $\partial B_1$ which is negative almost everywhere, there is a unique $u \in S_{B_1,\varphi}$ which minimizes $D(u, B_1).$ Furthermore, $u$ is a bounded supersolution to the problem $L(u) = 0.$ Finally, if $\varphi$ is continuous, then $u$ is almost everywhere equal to a function which is continuous on all of $\closure{B_1}.$ \end{theorem} \begin{pf} For the proof, just follow the beginning of \cite{C5}. (Note that the details of the proof of the mean value formula that Caffarelli uses can be found within \cite{BH}.) \end{pf} Turning to the regularity questions, we find it convenient to work with the height function $w$ which is the minimizer of $J$ within $K_{B_1,\psi}.$ On the other hand, one can ask if this is really the same problem as before. In the original problem with the Laplacian (in other words, with $a^{ij} = \delta^{ij}$), if the obstacle is twice differentiable, then it makes sense to take its Laplacian. In the current situation, it is not as simple to characterize the functions $\varphi$ for which $L\varphi$ makes sense. The obvious route, however, is to simply assume that $L \varphi = -f$ for a function $f$ with specified properties. If we assume that $L\varphi = -f,$ and that $f \in L^{\infty}(B_1),$ then the two problems are completely equivalent. We are most interested in the obstacle problem where we minimize $J$ within $K_{B_1,\psi}.$ Besides requiring existence and regularity, we need to know that the minimizer, $w,$ satisfies $w \geq 0$ and \BVPb{L(w) = \chisub{ \{ w > 0 \} }f}{w = \psi}{OPPDE} The proof of this fact and many of the related facts follows \cite{BH} very closely, and so we will only mention that the proof is carried out with a penalization argument. The details can be found with only very minor adjustments in \cite{BH}. To summarize the relevant facts we can state the following result: \begin{theorem}[Problem Equivalencies] \label{ProbEquiv} Let $\varphi$ be an obstacle which satisfies the following: \begin{enumerate} \item $\psi := -\varphi > 0$ on all of $\partial B_1.$ \item $f := -L\varphi \in L^{\infty}(B_1).$ \end{enumerate} Finally assume that $w = u - \varphi.$ Then the following are equivalent: \begin{enumerate} \item $w$ satisfies Equation\refeqn{OPPDE}\!\!. \item $w$ minimizes $J$ in $K_{B_1,\psi}.$ \item $u \in W^{1,2}_{0}(B_1)$ satisfies $Lu = -\chisub{ \{ u = \varphi \} }f.$ \item $u$ minimizes $D$ in $S_{B_1,\varphi}.$ \end{enumerate} \end{theorem} \noindent Now in order to get to the regularity of the free boundary we need two more basic facts which can also be found within \cite{BH}. At this point, having proven our theorem about the equivalencies between the problems, it is worth gathering a collection of assumptions that we will have for the rest of this paper.
We will always assume: \begin{equation} \begin{array}{l} \displaystyle{L(w) = \chisub{ \{ w > 0 \} }f \ \ \text{in} \ B_1 \;,} \\ \ \\ \displaystyle{a^{ij}(x) \equiv a^{ji}(x) \;,} \\ \ \\ \displaystyle{ 0 < \lambda |\xi|^2 \leq a^{ij} \xi_i \xi_j \leq \Lambda |\xi|^2 \ \ \text{for all} \ \xi \ne 0 \;, } \\ \ \\ \displaystyle{0 < \lambda^{\ast} \leq f \leq \Lambda^{\ast} \;, \ \ \text{and} } \\ \ \\ \displaystyle{w \geq 0} \end{array} \label{eq:Always} \end{equation} and we will frequently assume \begin{equation} 0 \in \partial \{w > 0\}. \label{eq:Freq} \end{equation} For the next two theorems we assume both Equation\refeqn{Always}and Equation\refeqn{Freq}\!\!. We start with a regularity statement which gives us compactness of quadratic rescalings. \begin{theorem}[Optimal Regularity] \label{opreg} For any $x \in B_{1/2}$ we have \begin{equation} w(x) \leq \tilde{C}|x|^2 \label{eq:OpReg} \end{equation} for a constant $\tilde{C} = \tilde{C}(n, \lambda, \Lambda, \lambda^{\ast}, \Lambda^{\ast} ).$ \end{theorem} On the other hand, there is a nondegeneracy statement which prevents quadratic rescalings from vanishing in the blowup limit. Namely, we have: \begin{theorem}[Nondegeneracy] \label{NonDeg} With $C = C(n,\lambda, \Lambda, \lambda^{\ast}, \Lambda^{\ast}) > 0,$ and for any $r \leq 1$ we have \begin{equation} \sup_{x \in B_r} w(x) \geq Cr^2 \;. \label{eq:NonDeg} \end{equation} \end{theorem} Although the optimal regularity statement can be proven by a straightforward adjustment of the proof for the case when $a^{ij} \equiv \delta^{ij},$ the proof of nondegeneracy is much easier in the case with the Laplacian because of the usefulness of the function $|x|^2.$ In the present case, in order to prove nondegeneracy one seems to need a polygonal curve argument, and this can be found in \cite{BH}. \newsec{Measure Stability}{meastab} Now we begin a measure theoretic study of regularity which will culminate in a measure theoretic version of the theorem proven by Caffarelli in 1977. (See \cite{C1}.) \begin{lemma}[Compactness I] \label{Lpoint} Let $\{a^{ij}_k\},$ $\{f_k\},$ and $\{w_k\}$ satisfy \begin{enumerate} \item $0 < \lambda I \leq a^{ij}_k \leq \Lambda I,$ \item $0 < \lambda^{\ast} \leq f_k \leq \Lambda^{\ast},$ \item $w_k \geq 0$, $D_ia^{ij}_kD_jw_k=\chisub{ \{ w_k>0 \} }f_k$ in $B_2,$ and $0\in\partial\{w_k>0\}$, \item $||w_k||_{W^{1,2}(B_2)} \leq \gamma < \infty$, and \item there exists an $f$ (with $0 < \lambda^{\ast} \leq f \leq \Lambda^{\ast}$) such that $f_k$ converges to $f$ strongly in $L^1.$ \end{enumerate} Then there exists a $w \in W^{1,2}(B_1)$ and a subsequence of $\{w_k\}$ such that along this subsequence (which we still label with ``k''), we have \begin{itemize} \item[A.] uniform convergence of $w_k$ to $w,$ and weak convergence in $W^{1,2},$ \item[B.] for any $\phi\in W^{1,2}_0(B_1)$ \begin{equation} \label{eq:goodchiconv} \int_{B_1}\chisub{ \{ w_k>0 \} } f_k \phi \rightarrow \int_{B_1}\chisub{ \{ w>0 \} } f \phi. \end{equation} \end{itemize} \end{lemma} \begin{pf} Item A follows by using standard functional analysis combined with De Giorgi-Nash-Moser theory. Since we can take a subsequence, we can assume without loss of generality that $f_k$ converges to $f$ pointwise almost everywhere.
In the interior of both $\{ w > 0 \}$ and $\{ w = 0 \}$ it is not hard to show that $\chisub{ \{ w_k>0 \} } f_k$ converges pointwise almost everywhere to $\chisub{ \{ w>0 \} } f$ (for the interior of $\{ w = 0 \}$ one needs to use the nondegeneracy statement), so by Lebesgue's dominated convergence theorem it suffices to prove that $\partial \{ w = 0 \}$ has no Lebesgue points. The proof of this fact is very similar to the proof of Lemma 5.1 of \cite{BT}, but we include it here for the convenience of the reader. Let $x_0\in\partial\{w=0\} \cap B_1,$ and choose $r > 0$ such that $$B_r(x_0)\subset B_1.$$ Define $W(x):= r^{-2}w(x_0+rx)$ and $W_k(x):= r^{-2}w_k(x_0+rx)$. After this change of coordinates, we have $0\in\partial\{W=0\},$ and so there exists $\{x_k\}\rightarrow 0$ such that $$W(x_k)>0, \ \text{for all}\ k.$$ Now fix $k$ so that $x_k\in B_{1/8}$, and take $J$ large enough such that $i,j\geq J$ implies \begin{equation} \label{eq:WjvsW} ||W_j-W||_{L^{\infty}(B_1)}\leq \frac{W(x_k)}{2}, \end{equation} and \begin{equation} \label{eq:WivsWj} ||W_i-W_j||_{L^{\infty}(B_1)}\leq \frac{\tilde{C}}{10}, \end{equation} where $\tilde{C} := \frac{C}{10}$ and $C$ is the constant from the nondegeneracy statement. Since $W_j\rightarrow W$ in $C^{\alpha}$, $W_J(x_k)>0$ and nondegeneracy imply the existence of $\tilde{x}\in B_{1/2}$ such that \begin{equation} \label{eq:towrdcontra} W_J(\tilde{x}) \geq C \left(\frac{1}{2} - \frac{1}{8} \right)^2 = \frac{9}{64} C > \tilde{C}. \end{equation} Now $i\geq J$ implies $W_i(\tilde{x})\geq \frac{9\tilde{C}}{10}$. Since $W_i$ satisfies a uniform $C^{\alpha}$ estimate, there exists an $\tilde{r}>0$ such that $W_i(y)\geq \frac{\tilde{C}}{2}$ for all $y\in B_{\tilde{r}}(\tilde{x})$ once $i\geq J$. From this we can conclude $B_{\tilde{r}}(\tilde{x})\subset \{W>0 \}$. Scaling back to the original functions, we conclude that $x_0$ is not a Lebesgue point. Since $x_0$ was an arbitrary point of the free boundary there are no Lebesgue points in $\partial\{w>0\}$. \end{pf} \begin{lemma}[Compactness II] \label{CompII} If we assume everything we did in the previous lemma, and we assume in addition that $A = (A^{ij})$ is a symmetric, constant matrix with $$0 < \lambda I \leq A \leq \Lambda I,$$ and such that $$||a^{ij}_k - A ^{ij}||_{L^1(B_1)} \rightarrow 0,$$ then the limiting functions $w$ and $f$ given in the last lemma satisfy: \begin{equation} D_i A^{ij} D_j w = \chisub{ \{ w > 0 \} }f \label{eq:FinalEqn} \end{equation} in $B_1.$ Furthermore, $0 \in \partial \{ w > 0 \}.$ \end{lemma} \begin{pf} Since $a^{ij}_k\rightarrow A^{ij}$, and there is a uniform $L^{\infty}$ bound on all of $a^{ij}_k$ and $A^{ij}$, we have \begin{equation} \label{eq:strgLq} a^{ij}_k\rightarrow A^{ij} \ \ \text{in}\ L^{q}(B_1) \end{equation} for any $q<\infty$, in particular $a^{ij}_k\rightarrow A^{ij}$ in $L^2$. We have for any $\phi\in W^{1,2}_{0}(B_1)$, \begin{align*} \int_{B_1}a^{ij}_kD_i w_k D_j\phi & =\int_{B_1}(a^{ij}_k-A^{ij})(D_i w_k-D_i w)D_j\phi\\ & +\int_{B_1}a^{ij}_k D_i w D_j\phi + \int_{B_1} A^{ij}(D_iw_k-D_iw)D_j\phi \end{align*} Since $a^{ij}_k\rightarrow A^{ij}$ in $L^2$ and $D_iw_k \rightharpoonup D_iw$, we have \begin{equation} \int_{B_1}(a^{ij}_k-A^{ij})(D_i w_k-D_i w)D_j\phi \rightarrow 0, \end{equation} \begin{equation} \int_{B_1}a^{ij}_k D_i w D_j\phi \rightarrow \int_{B_1}A^{ij}D_iwD_j\phi \end{equation} and \begin{equation} \int_{B_1} A^{ij}(D_iw_k-D_iw)D_j\phi \rightarrow 0.
\end{equation} Therefore, \begin{equation} \int_{B_1}a^{ij}_kD_i w_k D_j\phi \rightarrow \int_{B_1}A^{ij}D_i w D_j\phi . \end{equation} Together with Equation\refeqn{goodchiconv}\!\!, we proved $$D_i A^{ij} D_j w = \chisub{ \{ w > 0 \} }f.$$ Now in order to show that $0 \in \partial \{w > 0\}$ we observe first that $0\in\partial\{w_k > 0\}$ implies $$0\in\{w=0\}.$$ Next we suppose there exists $r_0$ such that $B_{2r_0}\subset\{w=0\}$. For any $k$, we have \begin{equation} \sup_{x \in B_{r_0}} w_k(x) \geq Cr_0^2 \;. \end{equation} By picking a convergent subsequence we get a contradiction to $w=0$ in $B_{2r_0}.$ Therefore, we have $0 \in \partial \{ w > 0 \}.$ \end{pf} \begin{theorem}[Measure Stability] \label{MeaStab} Fix positive constants $\gamma, \lambda, \Lambda, \lambda^{\ast},$ and $\Lambda^{\ast},$ and suppose $w$ satisfies Equation\refeqn{Always}\!\!, and for some constant $\mu \in [\lambda^{\ast}, \Lambda^{\ast}],$ assume that $u$ satisfies \begin{equation} \Delta u =\chisub{ \{ u>0 \} } \mu \ \ \text{in} \ B_1 \; \end{equation} with $$w=u \ \ \text{on} \ \partial B_1 \;,$$ where we assume in addition that $w$ satisfies $$||w||_{W^{1,2}(B_1)}\leq \gamma, \ \ \text{and} \ \ ||w||_{C^{\alpha}(\closure{B_1})} \leq \gamma.$$ Then there exists a modulus of continuity $\sigma(\epsilon)$, such that if \begin{equation} ||a^{ij}-\delta^{ij}||_{L^2(B_1)} < \sigma(\epsilon), \ \ \ \text{and} \ \ \ ||f - \mu||_{L^1(B_1)} < \sigma(\epsilon) \end{equation} then \begin{equation} |\{w=0\}\Delta\{u=0\}| < \epsilon. \end{equation} (We are abusing notation slightly by using $\mu$ to denote the function which is everywhere equal to $\mu$ in $B_1.$) \end{theorem} \begin{pf} The proof of Theorem 5.4 of \cite{BT} can be adapted to the current setting without too much difficulty, but we include it for the convenience of the reader. Suppose not. Then there exist $a^{ij}_k, \; w_k, \; f_k$ and $u_k$ such that \begin{enumerate} \item $D_i a^{ij}_k D_j w_k=\chisub{ \{ w_k>0 \} } f_k$ in $B_1,$ \item $a^{ij}_k\rightarrow \delta^{ij}$ in $L^2(B_1),$ \item $f_k \rightarrow \mu$ in $L^1(B_1),$ \item $\begin{cases} \Delta u_k=\chisub{ \{ u_k>0 \} } \mu \ & \text{in}\ B_1 \\ u_k=w_k \ & \text{on} \ \partial B_1, \ \ \ \text{and} \end{cases}$ \item $||w_k||_{W^{1,2}(B_1)}\leq \gamma,$ and $||w_k||_{C^{\alpha}(\closure{B_1})} \leq \gamma,$ \end{enumerate} but $ |\{w_k=0\}\Delta\{u_k=0\}| \geq \epsilon_0 $ for some fixed $\epsilon_0.$ By applying the previous compactness lemmas to an arbitrary subsequence, there exists a $w_{\infty}$ and a sub-subsequence such that $$w_k \rightharpoonup w_{\infty} \ \ \text{in} \ \ W^{1,2}(B_1)$$ and $$w_k\rightarrow w_{\infty} \ \ \text{in} \ \ C^{0}(\closure{B_1}),$$ which implies $w_k \rightarrow w_{\infty}$ in $L^2(B_1).$ (We will still use ``$w_k$'' for the sub-subsequence.)
Equation\refeqn{goodchiconv}is also satisfied with the constant function $\mu$ in place of $f.$ By standard comparison results for the obstacle problem (see for example Theorem 2.7a of \cite{B}), there exists $u$ such that \begin{equation} \label{eq:ukconvtou} u_k \rightarrow u \ \text{in} \ L^{\infty}(B_1) \;. \end{equation} We have for any $\phi\in W^{1,2}_{0}(B_1)$, \begin{align*} \int_{B_1} a^{ij}_k D_i w_k D_j\phi & =\int_{B_1}(a^{ij}_k - \delta^{ij})( D_i w_k - D_i w_{\infty} )D_j \phi\\ & + \int_{B_1} a^{ij}_k D_i w_{\infty} D_j \phi + \int_{B_1} \delta^{ij} (D_i w_k - D_i w_{\infty}) D_j \phi \end{align*} Since $a^{ij}_k \rightarrow \delta^{ij}$ in $L^2$ and $D_i w_k \rightharpoonup D_i w_{\infty}$, we have \begin{equation} \int_{B_1} (a^{ij}_k - \delta^{ij})(D_i w_k - D_i w_{\infty}) D_j\phi \rightarrow 0, \end{equation} \begin{equation} \int_{B_1} a^{ij}_k D_i w_{\infty} D_j \phi \rightarrow \int_{B_1} \delta^{ij} D_i w_{\infty} D_j \phi \end{equation} and \begin{equation} \int_{B_1} \delta^{ij}( D_i w_k - D_i w_{\infty} )D_j\phi \rightarrow 0. \end{equation} Therefore, \begin{equation} \int_{B_1}a^{ij}_kD_i w_k D_j\phi \rightarrow \int_{B_1}\delta^{ij} D_i w_{\infty} D_j\phi . \end{equation} By Equation\refeqn{goodchiconv}with $\mu$ in place of $f,$ we have \begin{equation} \int_{B_1} \chisub{ \{ w_k>0 \} } f_k \phi \rightarrow \int_{B_1} \chisub{ \{ w_{\infty} > 0 \} } \mu \phi, \end{equation} so $w_{\infty}$ satisfies \begin{equation} \Delta w_{\infty} =\chisub{ \{ w_{\infty} > 0 \} } \mu \ \ \text{in} \ B_1. \end{equation} We notice that by assumption, \begin{align*} 0<\epsilon_0 & \leq |\{w_k=0\}\Delta\{u_k=0\}| \\ & = ||\chisub{ \{ u_k>0 \} }-\chisub{ \{ w_k>0 \} }||_{L^1(B_1)} \\ & \leq ||\chisub{ \{ u_k>0 \} }-\chisub{ \{ w_{\infty}>0 \} }||_{L^1(B_1)} + ||\chisub{ \{ w_{\infty}>0 \} }-\chisub{ \{ w_k>0 \} }||_{L^1(B_1)} \\ &= I+I\!I \;. \end{align*} For $I$: the functions $u_k$ and $w_{\infty}$ satisfy \BVPhao{\Delta u_k = \chisub{ \{u_k > 0\} } \mu}{u_k = w_k}{PDEuk}{B_1} and \BVPhao{\Delta w_{\infty} = \chisub{ \{ w_{\infty} > 0\} }\mu}{w_{\infty} = u}{PDEw}{B_1} so by Theorem 2.7a of \cite{B}, we have \begin{equation} ||u_k - w_{\infty} ||_{L^\infty(B_1)}\leq ||u_k-u||_{L^\infty(\partial B_1)} \;, \label{eq:512analog} \end{equation} and since $u_k\rightarrow u$ in $L^\infty,$ we have \begin{equation} ||\chisub{ \{ u_k>0 \} } - \chisub{ \{ w_{\infty}>0 \} }||_{L^1(B_1)} \rightarrow 0, \label{eq:CHIniceconv} \end{equation} by Corollary 4 of \cite{C2}. For $I\!I,$ we know that inside $\{ w_{\infty}>0 \},$ $w_k$ will eventually be positive by the uniform convergence, so $\chisub{ \{ w_k>0 \} }= \chisub{ \{ w_{\infty} > 0 \} }$ there. In the interior of $\{ w_{\infty} = 0 \},$ $w_k$ will eventually be $0,$ since otherwise we would violate the nondegeneracy property, and so $\chisub{ \{ w_k > 0 \} }= \chisub{ \{ w_{\infty} > 0 \} }$ there. Finally, since $\partial \{w_{\infty}=0\}$ has finite $(n-1)$-dimensional Hausdorff measure (see \cite{C2}, \cite{C3}, or \cite{C5}), we must have $|\partial \{w_{\infty}=0\}| = 0,$ and therefore $I\!I \rightarrow 0.$ This convergence to $0$ gives us a contradiction, since $0 < \epsilon_0 \leq I + I\!I.$ \end{pf} \newsec{Weak Regularity of the Free Boundary}{WRFB} In this section we establish the existence of blowup limits, and use this result to show a measure-theoretic version of Caffarelli's free boundary regularity theorem. We will show the existence of blowup limits in the case where the $a^{ij}$ and the $f$ belong to VMO.
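Before the formal statements, a small numerical aside of our own: the quadratic rescaling $w_{\epsilon}(x) := \epsilon^{-2}w(\epsilon x)$ used below leaves the model half-space solution of $\Delta w = \mu \chisub{ \{ w > 0 \} }$ unchanged, which is one way to see why degree two homogeneous profiles are the natural candidates for blowup limits.
\begin{verbatim}
import numpy as np

# The model solution w(x) = (mu/2)(x^+)^2 (a 1-D slice of (mu/2)(x_n^+)^2)
# is invariant under the quadratic rescaling w_eps(x) = w(eps x)/eps^2.
mu, eps = 0.8, 1e-3
w = lambda s: 0.5 * mu * np.maximum(s, 0.0) ** 2
s = np.linspace(-2.0, 2.0, 41)
print(np.allclose(w(eps * s) / eps ** 2, w(s)))    # True

# And w'' = mu on {w > 0}: centered second difference at s = 1
h = 1e-4
print((w(1.0 + h) - 2.0 * w(1.0) + w(1.0 - h)) / h ** 2)   # ~ mu
\end{verbatim}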
We define VMO to be the subspace of BMO consisting of those $g$ for which, setting \begin{equation} \eta_{g}(r) := \sup_{\rho \leq r, \; y \in \R^n} \ \frac{1}{|B_{\rho}|} \int_{B_{\rho}(y)} |g(x) - g_{_{B_{\rho}(y)}}| \; dx \;, \label{eq:etadef} \end{equation} we have $\eta_{g}(r) \rightarrow 0$ as $r \rightarrow 0.$ For any $g \in \text{VMO},$ $\eta_{g}(r)$ is referred to as the VMO-modulus. For all conventions regarding VMO we follow \cite{BT}, which in turn follows \cite{MPS}. \begin{theorem}[Existence of Blowup Limits I] \label{blowup} Assume $w$ satisfies Equations\refeqn{Always}and\refeqn{Freq}\!\!, and assume in addition that $a^{ij}$ and $f$ belong to VMO. Define the usual rescaling $$w_{\epsilon}(x):=\epsilon^{-2}w(\epsilon x).$$ Then for any sequence $\{\epsilon_m\}\downarrow 0$, there exists a subsequence, a real number $\mu \in [\lambda^{\ast}, \Lambda^{\ast}],$ and a symmetric matrix $A = (A^{ij})$ with $$0<\lambda I\leq A\leq \Lambda I$$ such that for all $i,j$ we have \begin{equation} \label{eq:vmoconv} \newavint{B_{\epsilon_m}} a^{ij}(x) dx \rightarrow A^{ij} \end{equation} and \begin{equation} \label{eq:vmofconv} \newavint{B_{\epsilon_m}} f(x) dx \rightarrow \mu \;, \end{equation} and on any compact set, $w_{\epsilon_m}(x)$ converges strongly in $C^{\alpha}$ and weakly in $W^{1,2}$ to a function $w_{\infty}\in W^{1,2}_{loc}(\R^n)$, which satisfies: \begin{equation} \label{eq:blowuplmt} D_i A^{ij}D_j w_{\infty}= \chisub{ \{ w_{\infty}>0 \} } \mu \ \ \text{on}\ \R^n, \end{equation} and has 0 in its free boundary. \end{theorem} \begin{pf} This proof is so similar to the proof of Theorem 6.1 of \cite{BT} that we leave it as an exercise for the reader. \end{pf} \begin{remark}[Nonuniqueness of Blowup Limits] \label{NBL} Notice that the theorem does not claim that the blowup limit is unique. In fact, it is relatively easy to produce nonuniqueness even in the case with a constant right hand side, and it was done in \cite{BT} for the nondivergence form case, but that counter-example can be copied almost exactly for the divergence form case. In the case where the coefficients of $L$ are constant, one can use the counter-example in \cite{B} to show nonuniqueness of blowup limits when the right hand side is only assumed to be continuous. \end{remark} \begin{theorem}[Caffarelli's Alternative in Measure (Weak Form)] \label{CAMW} Assuming again Equations\refeqn{Always}and\refeqn{Freq}\!\!, the limit \begin{equation} \lim_{r \downarrow 0} \frac{ |\Lambda(w) \cap B_r| }{ |B_r| } \label{eq:densitystatement} \end{equation} exists and must be equal to either $0$ or $1/2.$ \end{theorem} \begin{pf} Here again our proof is almost identical to the proof of Theorem 6.3 of \cite{BT}, so we leave it to the reader. \end{pf} \begin{definition}[Regular and Singular Free Boundary Points] \label{RSFBP} A free boundary point where $\Lambda$ has density equal to $0$ is referred to as \textit{singular}, and a free boundary point where the density of $\Lambda$ is $1/2$ is referred to as \textit{regular}. \end{definition} The theorem above gives us the alternative, but we do not have any kind of uniformity in our convergence. Caffarelli stated his original theorem in a much more quantitative (and therefore useful) way, and so now we will state and prove a similar stronger version. We need the stronger version in order to show openness and stability under perturbation of the regular points of the free boundary.
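Returning briefly to the VMO condition, the following rough one-dimensional sketch of ours (with invented sample functions) approximates the modulus $\eta_g(r)$ of Equation\refeqn{etadef}on a grid: for a uniformly continuous function the printed values shrink with $r,$ while for the BMO function $\log|x|$ they do not.
\begin{verbatim}
import numpy as np

# Crude grid approximation of eta_g(r): sup over sampled intervals of
# radius <= r of the mean oscillation of g.
def eta(g, x, r):
    best = 0.0
    for rho in np.linspace(r / 8.0, r, 8):
        for c in x[::20]:                       # sampled interval centers
            vals = g[np.abs(x - c) <= rho]
            if vals.size > 1:
                best = max(best, np.mean(np.abs(vals - vals.mean())))
    return best

x = np.linspace(-1.0, 1.0, 4001)
for name, g in [("sin(pi x), VMO     ", np.sin(np.pi * x)),
                ("log|x|, BMO not VMO", np.log(np.abs(x) + 1e-12))]:
    print(name, [round(eta(g, x, r), 2) for r in (0.5, 0.1, 0.02)])
\end{verbatim}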
\begin{theorem}[Caffarelli's Alternative in Measure (Strong Form)] \label{CAMS} Once again assuming Equations\refeqn{Always}and\refeqn{Freq}\!\!, for any $\epsilon \in (0, 1/8),$ there exist an $r_0 \in (0,1)$ and a $\tau \in (0,1)$ such that \newline \indent if there exists a $t \leq r_0$ such that \begin{equation} \frac{|\Lambda(w) \cap B_t|}{|B_t|} \geq \epsilon \;, \label{eq:bigonce} \end{equation} \indent then for all $r \leq \tau t$ we have \begin{equation} \frac{|\Lambda(w) \cap B_r|}{|B_r|} \geq \frac{1}{2} - \epsilon \;, \label{eq:bigallatime} \end{equation} and in particular, $0$ is a regular point according to our definition. The $r_0$ and the $\tau$ depend on $\epsilon$ and on the $a^{ij},$ but they do \textit{not} depend on the function $w.$ \end{theorem} \begin{remark}[Another version] \label{AnoVer} The theorem above is equivalent to a version using a modulus of continuity. In that version there is a universal modulus of continuity $\sigma$ such that if \begin{equation} \frac{|\Lambda(w) \cap B_{\tilde{t}}|}{|B_{\tilde{t}}|} \geq \sigma(\tilde{t}) \label{eq:OtherVersion} \end{equation} holds for some $\tilde{t},$ then the density of $\Lambda(w)$ converges uniformly to $1/2$ once $B_{\tilde{t}}$ is scaled to $B_1.$ (Here we mean uniformly among all appropriate $w$'s.) \end{remark} \begin{pf} Here again we have a proof which is almost identical to the proof of Theorem 6.5 in \cite{BT}. On the other hand, in an effort to make things more convenient for the reader, since we use this theorem quite a bit, we will include the proof here. We start by assuming that we have a $t$ such that Equation\refeqn{bigonce}holds, and by rescaling if necessary, we can assume that $t = r_0.$ Next, arguing exactly as in the last theorem, assuming that $r_0$ is sufficiently small, and defining $s_0 := \sqrt{r_0},$ we can assume without loss of generality that \begin{equation} \myavinttwo{B_{s_0}} \left| a^{ij}(x) - \delta^{ij} \right| \; dx \label{eq:atbt} \end{equation} is as small as we like. Now we will follow the argument given for Theorem 4.5 in \cite{B} very closely. Applying our measure stability theorem on the ball $B_{s_0}$ we have the existence of a function $u$ which satisfies: \BVPcbsn{\Delta u = \chisub{\{u > 0\}}\mu}{u \equiv w}{newudef} and such that \begin{equation} |\{\Lambda(u) \Delta \Lambda(w)\} \cap B_{r_0}| \label{eq:damnsmall} \end{equation} is small enough to guarantee that \begin{equation} \frac{|\Lambda(u) \cap B_{r_0}|}{|B_{r_0}|} \geq \frac{\epsilon}{2} \;, \label{eq:uLbigonce} \end{equation} and therefore \begin{equation} m.d.(\Lambda(u) \cap B_{r_0}) \geq C(n) r_0 \epsilon \;. \label{eq:uLmdbigonce} \end{equation} (Here ``m.d.'' denotes the minimal diameter.) Now if $r_0$ is sufficiently small, then by Caffarelli's $C^{1,\alpha}$ regularity theorem for the obstacle problem (see \cite{C4} or \cite{C5}) we conclude that $\partial \Lambda(u)$ is $C^{1,\alpha}$ in an $r_0^2$ neighborhood of the origin. Furthermore, if we rotate coordinates so that $FB(u) = \{ (x', x_n) \; | \; x_n = g(x') \},$ then we have the following bound (in $B_{r_0^2}$): \begin{equation} ||g||_{_{C^{1,\alpha}}} \leq \frac{C(n)}{r_0} \;. \label{eq:fSmoothBd} \end{equation} On the other hand, because of this bound, there exists a $\gamma < 1$ such that if $\rho_0 := \gamma r_0 < r_0,$ then \begin{equation} \frac{ | \Lambda(u) \cap B_{\rho_0} | }{| B_{\rho_0} | } \; > \; \frac{1 - \epsilon}{2} \;.
\label{eq:BrideOfBigLamb} \end{equation} Now by once again requiring $r_0$ to be sufficiently small, we get \begin{equation} \frac{ | \Lambda(w) \cap B_{\rho_0} | }{| B_{\rho_0} | } \; > \; \frac{1}{2} - \epsilon \;. \label{eq:HoundOfBigLamb} \end{equation} (So you may note that here our requirement on the size of $r_0$ will be much smaller than it was before; we need it small both because of the hypotheses within Caffarelli's regularity theorems and because of the need to shrink the $L^p$ norm of $|a^{ij} - \delta^{ij}|$ and the $L^1$ norm of $|f - \mu|$ in order to use our measure stability theorem.) Now since $\frac{1}{2} - \epsilon$ is strictly greater than $\epsilon,$ we can rescale $B_{\rho_0}$ to a ball with a radius \textit{close to} $r_0,$ and then repeat. Since we have a little margin for error in our rescaling, after we repeat this process enough times we will have a small enough radius (which we call $\tau r_0$) to ensure that for all $r \leq \tau r_0$ we have $$\frac{ | \Lambda(w) \cap B_r |}{| B_r |} \; > \; \frac{1}{2} - \epsilon \;.$$ \end{pf} \begin{corollary}[The Set of Regular Points Is Open] \label{TSoRPIO} Still assuming Equations\refeqn{Always}and\refeqn{Freq}\!\!, the set of regular points of $FB(w)$ is an open subset of $FB(w).$ \end{corollary} \noindent The proof of this corollary is identical to the proof of Corollary 4.8 in \cite{B} except that in place of using Theorem 4.5 of \cite{B} we use Theorem\refthm{CAMS}from this work. \begin{theorem}[Existence of Blowup Limits II] \label{blowup2} We assume Equation\refeqn{Always}\!\!, and we assume $a^{ij}$ and $f$ belong to VMO. We let \begin{equation} S_r := \{ x \in FB(w) \cap B_r \; : \; x \ \text{is a regular point of} \ FB(w) \; \} \label{eq:Srdef} \end{equation} and we assume $S_{1/2} \ne \emptyset.$ Let $K \subset \subset S_{1/2},$ let $\{ x_m \} \subset K,$ and let $\epsilon_m \downarrow 0.$ Then there exists a constant $\mu \in [\lambda^{\ast}, \Lambda^{\ast}],$ a constant symmetric matrix $A = (A^{ij})$ with $0 < \lambda I \leq A \leq \Lambda I,$ and a strictly increasing sequence of natural numbers $\{ m_j \}$ such that the sequence of functions $\{ w_j \}$ defined by \begin{equation} w_j(x) := \epsilon_{m_j}^{-2}w(x_{m_j} + \epsilon_{m_j}x) \label{eq:wjdef} \end{equation} converges strongly in $C^{\alpha}$ (for some $\alpha > 0$) and weakly in $W^{1,2}$ on any compact set to a function $w_{\infty}$ which satisfies: \begin{equation} \label{eq:blowuplmt2} D_i A^{ij}D_j w_{\infty}= \chisub{ \{ w_{\infty}>0 \} } \mu \ \ \text{on}\ \R^n \;. \end{equation} Furthermore $0$ is a regular point of its free boundary. \end{theorem} \begin{pf} The existence of a function $w_{\infty} \geq 0$ satisfying Equation\refeqn{blowuplmt2}and the convergence of the $w_j$ to $w_{\infty}$ is carried out in exactly the same way as in the proof of Theorem\refthm{blowup}\!\!. Showing that $0$ is part of the free boundary of $w_{\infty}$ is also proven exactly as in Theorem\refthm{blowup}\!\!. It remains to show that $0$ is a regular point of the free boundary. To this end, we observe that since each $x_m$ belongs to the regular part of the free boundary, we know that there exists an $r_m$ such that \begin{equation} \label{eq:LamIs3/8} \frac{|\Lambda(w) \cap B_{r_m}(x_m)|}{|B_{r_m}|} \geq \frac{3}{8} \;. \end{equation} There exists a small $\rho > 0$ depending only on the dimension, $n,$ such that if $x \in B_{\rho r_m}(x_m),$ then \begin{equation} \label{eq:LamIs1/4} \frac{|\Lambda(w) \cap B_{r_m}(x)|}{|B_{r_m}|} \geq \frac{1}{4} \;. \end{equation} Now the closure of the set $\{ x_m \}$ is compact, and that set is covered by the open balls in the set $\{ B_{\rho r_m}(x_m) \}.$ By compactness, the set is still covered by a finite number of these balls, and their radii have a positive minimum, $\rho_0.$ So, once $\epsilon_{m_j} < \rho_0,$ we know that \begin{equation} \label{eq:LamIs1/4Applied} \frac{|\Lambda(w_j) \cap B_{r}|}{|B_{r}|} \geq \frac{1}{4} \;, \end{equation} for all $r$ which are less than $\tau$ times $\rho_0.$ Here $\tau$ is the constant given in the statement of Theorem\refthm{CAMS}\!\!. From this we can conclude that $0$ must be a regular point of $FB(w_{\infty}).$ \end{pf} \begin{remark}[Hausdorff Dimension] \label{HDim} Exactly as in \cite{BT}, the arguments above lead to the statement that the free boundary is strongly porous and therefore has Hausdorff dimension strictly less than $n.$ (See \cite{BT} and see \cite{M} for the definition of porosity.) \end{remark} \newsec{Finer Regularity of the Free Boundary}{SRFB} In this section we show finer properties of the free boundary at regular points. Since the counter-examples in \cite{B} and in \cite{BT} are easily extended to the current setting, we can have regular free boundary points where the blowup limit is not unique. In spite of this fact, we show that the regular free boundary points enjoy a flatness property which is based on Reifenberg flatness. Reifenberg flatness was introduced by Reifenberg in \cite{R}, and is studied in more detail by Toro and Kenig in several papers. (See \cite{KT1} and \cite{KT2} for example.) For the definitions surrounding Reifenberg vanishing sets we follow the conventions in section 6 of \cite{B}, but now we must introduce a notion of sets which are ``relatively Reifenberg flat.'' \begin{definition}[Reifenberg Flatness] \label{ReifFlat} Let $S \subset \R^n$ be a locally compact set, and let $\delta > 0.$ Then $S$ is $\mathit{\delta\!-\!\textit{Reifenberg flat}}$ if for each compact $K \subset \R^n,$ there exists a constant $R_K > 0$ such that for every $x \in K \cap S$ and every $r \in (0, R_K]$ we have a hyperplane $L(x,r)$ containing $x$ such that \begin{equation} D_{\mathcal{H}}(L(x,r) \cap B_r(x), \; S \cap B_r(x)) \leq 2r\delta \;. \label{DefReif} \end{equation} Here $D_{\mathcal{H}}$ denotes the Hausdorff distance: if $A, \; B \subset \R^n,$ then \begin{equation} D_{\mathcal{H}}(A,B) := \max\{\; \sup_{a \in A} d(a,B) \;, \; \sup_{b \in B} d(b,A) \; \} \;. \label{Hdist} \end{equation} We also define the following quantity, which we call the \textit{modulus of flatness,} to get a more quantitative and uniform measure of flatness: \begin{equation} \theta_K(r) := \sup_{0 < \rho \leq r} \left( \sup_{x \in S \cap K} \frac{D_{\mathcal{H}}(L(x,\rho) \cap B_{\rho}(x), \; S \cap B_{\rho}(x))}{\rho} \right) \;. \label{ThetaFlat} \end{equation} Finally, we will say that $S$ is a \textit{Reifenberg vanishing} set, if for any compact $K \subset S$ \begin{equation} \lim_{r \rightarrow 0}\theta_K(r) = 0 \;.
\label{ReifVan} \end{equation} \end{definition} \begin{definition}[Relatively Reifenberg Flat] \label{RelReifFlat} Let $S \subset \R^n$ be a locally compact set, let $K \subset \subset S,$ and let $\delta > 0.$ Then $K$ is \textit{relatively} $\mathit{\delta\!-\!}$\textit{Reifenberg flat with respect to} $\mathit{S}$ if there exists a constant $R > 0$ such that for every $x \in K$ and every $r \in (0, R]$ we have a hyperplane $L(x,r)$ containing $x$ such that \begin{equation} D_{\mathcal{H}}(L(x,r) \cap B_r(x), \; S \cap B_r(x)) \leq 2r\delta \;. \label{DefRelReif} \end{equation} We also define the \textit{modulus of flatness,} exactly as above, and then $K$ is \textit{relatively Reifenberg vanishing} if the modulus of flatness goes to zero as $r$ approaches $0.$ \end{definition} \begin{remark} \label{DiffK} It is worth noting that the compact set $K$ plays a very different role in the two definitions above. In the first case, $K$ allows us to look at bounded sets to get uniform bounds on the constant $R_K$ which bounds the radius, while in the second case, $K$ \textit{is} the set that we want to show is Reifenberg vanishing, but we are allowing all of $S$ when seeing if we are close to a plane. As a simple example, a point can never be Reifenberg flat, but viewed as a subset of a plane, it is relatively $\delta$-Reifenberg flat. \end{remark} First we need to show that our measure stability theorem can be used to show uniform closeness of our solutions to solutions of obstacle problems with constant coefficients and constant right hand side, as long as we have zoomed in far enough. In particular, we can say the following: \begin{theorem}[Uniform Closeness Result] \label{UCR} We assume Equation\refeqn{Always}\!\!, and we let $u \geq 0$ satisfy: \BVPb{\Delta u = \chisub{\{u > 0\}}\mu}{u \equiv w}{newerudef} We also assume that there is a fixed constant $\beta$ and an $\alpha \in (0,1)$ such that $||w||_{C^{\alpha}(\closure{B_1})} \leq \beta.$ For any $\epsilon > 0,$ there exists a $\delta > 0$ such that if \begin{equation} ||a^{ij}(x) - \delta^{ij}||_{L^1(B_1)} < \delta \ \ \ \text{and} \ \ \ ||f(x) - \mu||_{L^1(B_1)} < \delta \;, \label{eq:propercloseness} \end{equation} then \begin{equation} ||w - u||_{L^{\infty}(B_{3/4})} < \epsilon \;. \label{eq:uniformvictory} \end{equation} \end{theorem} \begin{pf} Some of the ideas in this proof were inspired by ideas of Li and Vogelius who in turn were following ideas of Caffarelli. (See \cite{LV} and \cite{C3}.) Letting $A(x)$ be the matrix determined by $a^{ij}(x),$ we have in $B_1$ (using ``divergence'' notation): \begin{alignat*}{1} \text{div} &\left[ A(x) \left( \nabla [w(x) - u(x)] \right) \right] \\ &= f(x) \chisub{ \{ w > 0 \} } - \text{div} \left[ \left( A(x) - I \right) \nabla u(x) \right] - \Delta u \\ &= f(x) \chisub{ \{ w > 0 \} } - \mu \chisub{ \{ u > 0 \} } - \text{div} \left[ \left( A(x) - I \right) \nabla u(x) \right] \\ &= f(x) \left( \chisub{ \{ w > 0 \} } - \chisub{ \{ u > 0 \} } \right) + \chisub{ \{ u > 0 \} } \left( f(x) - \mu \right) + \text{div} \left[ \left( I - A(x) \right) \nabla u(x) \right] \\ &= I + I\!I + \text{div} \left[ I\!I\!I \right] \;. \end{alignat*} After fixing $q \in (n, \infty),$ and by shrinking $\delta$ if necessary, we can use our measure stability theorem (Theorem\refthm{MeaStab}\!\!) and a simple interpolation to ensure that the $L^{q/2}$ norm of $I$ on $B_1$ is as small as we like.
Using our assumptions and shrinking $\delta$ if necessary, we can make the $L^{q/2}$ norm of $I\!I$ on $B_1$ as small as we like. (The fourth line of Equation\refeqn{Always}supplies the $L^{\infty}$ bound needed for the interpolation.) To control $I\!I\!I$ we need to shrink the ball slightly. First we observe that by De Giorgi-Nash-Moser theory (see Theorem 8.29 of \cite{GT}), there exists an $\alpha^{\prime} \in (0, \alpha)$ such that \begin{equation} ||u||_{C^{\alpha^{\prime}}(\closure{B_1})} \leq C(\beta, \Lambda^{\ast}) \;. \label{eq:GlobDNM} \end{equation} For any fixed $s \in (0,1/16)$ we then have \begin{equation} ||w-u||_{L^{\infty}(\partial B_{1-s})} \leq C(\beta, \Lambda^{\ast})s^{\alpha^{\prime}} \;. \label{eq:simpleHolder} \end{equation} For $\tilde{q} \in (n, \infty),$ we can use Calder\'{o}n-Zygmund theory along with the Sobolev imbedding to show \begin{equation} ||\nabla u||_{L^{\infty}(B_{1-s})} \leq C(\beta, \Lambda, s) \;. \label{eq:goodgradbound} \end{equation} Considering the boundary value problem that $w - u$ satisfies within $B_{1-s},$ we have the following: By shrinking $s$ we can make the boundary values as small as we like by Equation\refeqn{simpleHolder}\!\!. We already have the $L^{q/2}$ norm of $I$ and $I\!I$ as small as we like by making $\delta$ small. For $I\!I\!I$ we can use Equation\refeqn{goodgradbound}to ensure that $||\nabla u||_{L^{\infty}(B_{1-s})}$ is under control, and then shrink $\delta$ if necessary to ensure that $||A - I||_{L^{q}(B_1)}$ is as small as we like. Applying Theorem 8.16 of \cite{GT} yields the desired result. \end{pf} Now we have a standard corollary for obstacle type problems. \begin{corollary}[Free Boundaries Are Close] \label{FBAC} Assuming Equation\refeqn{Always}again, assuming $u$ is defined as in the previous theorem, and using $D_{\mathcal{H}}$ as the Hausdorff distance between sets defined at the beginning of this section, there exists a universal constant $C$ such that \begin{equation} D_{\mathcal{H}}(FB(w), FB(u)) \leq C\sqrt{\epsilon} \label{eq:FBsClose} \end{equation} where $\epsilon$ is the number given in Equation\refeqn{uniformvictory}\!\!. \end{corollary} \begin{pf} This result is a simple application of the nondegeneracy enjoyed by each function. Indeed, if there is a point $x$ where one function is positive and a ball $B_{r}(x)$ where the other function is zero, then nondegeneracy implies that the max of the first function is at least $Cr^2$ on $\partial B_{r}(x),$ and this must be smaller than $\epsilon.$ \end{pf} Now we prove the main theorem in this paper. \begin{theorem}[Free Boundary Regularity] \label{FBR} Once again we assume Equation\refeqn{Always}and we assume that $a^{ij}$ and $f$ belong to VMO. As in Equation\refeqn{Srdef}we define $S_r$ to be the set of regular points of the free boundary within $B_r.$ Let $K \subset \subset S_{1/2}.$ Then $K$ is relatively Reifenberg vanishing with respect to $S_{1/2}.$ \end{theorem} \begin{pf} Fix $\epsilon > 0.$ We will demonstrate that there is a radius $\tilde{r} > 0$ such that for any $x \in K,$ and any positive $r < \tilde{r}$ there is a hyperplane $H(r,x)$ such that \begin{equation} D_{\mathcal{H}}(FB(w) \cap B_r(x), H(r,x) \cap B_r(x)) \leq r\epsilon \;. \label{eq:WeWin} \end{equation} We start by using the compactness of $K$ in almost the same way as in Theorem\refthm{blowup2}\!\!. Namely, we know that for every $x \in K$ there exists an $r_x$ such that \begin{equation} \label{eq:LamIs3/8rx} \frac{|\Lambda(w) \cap B_{r_x}(x)|}{|B_{r_x}|} \geq \frac{49}{100} \;.
\end{equation} Next, there exists a small $\rho > 0$ depending only on the dimension, $n,$ such that if $y \in B_{\rho r_x}(x) \cap FB(w),$ then \begin{equation} \label{eq:LamIs1/4y} \frac{|\Lambda(w) \cap B_{r_x}(y)|}{|B_{r_x}|} \geq \frac{48}{100} \;. \end{equation} Now $K$ is compact, and is therefore covered by the open balls in the set $\{ B_{\rho r_x}(x) \}.$ By compactness, the set is still covered by a finite number of these balls, and their radii have a positive minimum, $\rho_0.$ Using Theorem\refthm{CAMS}guarantees that for all $r < \tau \rho_0,$ and for all $x \in K,$ we have \begin{equation} \label{eq:wdensg} \frac{|\Lambda(w) \cap B_{r}(x)|}{|B_{r}|} \geq \frac{48}{100} \;. \end{equation} Here $\tau$ is the constant given in the statement of Theorem\refthm{CAMS}\!\!. Henceforth, the argument becomes completely independent of whatever point of the free boundary we wish to consider, so we can fix $x_0 \in FB(w),$ and show flatness at that point. Also, given the VMO-modulus $\eta,$ we can be sure that every quantity that we wish to control below can be shrunk in a uniform and universal way by shrinking the radius that we are considering. We consider the situation in $B_{r}(x_0)$ and after a linear invertible change of coordinates with eigenvalues bounded away from $0$ and $\infty$ in a uniform way depending only on ellipticity, we can assume that the averages of $a^{ij}$ are $\delta^{ij}$ and the average of $f$ is $\mu.$ Then we let $v$ solve the boundary value problem: \BVP{\Delta v = \mu \chisub{ \{ v > 0 \} } }{ v = w }{ourbvp}{ B_{r}(x_0) } By the $L^1$ closeness of $a^{ij}$ to $\delta^{ij}$ and of $f$ to $\mu,$ which is controlled by the VMO-modulus, along with our measure stability theorem (Theorem\refthm{MeaStab}\!\!), we can guarantee (by assuming $r_1$ is sufficiently small) that \begin{equation} \label{eq:vdensg} \frac{|\Lambda(v) \cap B_{r_1}(x_0)|}{|B_{r_1}|} \geq \frac{47}{100} \;. \end{equation} Now it follows from Caffarelli's free boundary regularity theorem (see Theorem 7 of \cite{C4} or \cite{C5}) that if $r_2 \leq \tau_2 r_1$ where $\tau_2$ is suitably small, then $FB(v) \cap B_{r_2}(x_0)$ is uniformly $C^{1,\alpha}$ in $B_{r_2}(x_0).$ We can also assume that $FB(v)$ has a point as close to $x_0$ as we like by using the last corollary (and shrinking $r_1$ again if needed). Now zooming in on a uniformly $C^{1,\alpha}$ set will flatten it in a uniform way depending only on how much one zooms. Thus, after zooming in to $r_3 := \tau_3 r_2,$ where $\tau_3$ depends only on how uniformly $C^{1,\alpha}$ graphs flatten out as one zooms in, we can have $FB(v) \cap B_{r_3}(x_0)$ within $r_3 \cdot \epsilon/2$ of a plane. Now we invoke Corollary\refthm{FBAC}again to guarantee that $FB(w)$ is within $r_3 \cdot \epsilon/2$ of $FB(v)$ and we are done. \end{pf} \begin{remark}[Choosing $r$] \label{Chor} It is worth remarking that the $r_j$ that work for all of the estimates in the last proof must be found \textit{before} finding the function $v,$ and then in Equation\refeqn{ourbvp}we can use $r = r_3.$ \end{remark} \begin{remark}[Nondivergence Form Case] \label{NDFC} The Theorem above (and the next corollary) can be extended without any difficulty to the nondivergence form setting. On the other hand, in the nondivergence form setting, since the functions will have stronger convergence to their blowup limits, it is very likely that the Weiss-type Monotonicity formula can be used to give an easier proof.
In the divergence form case, the presence of the Dirichlet integral within the Weiss-type monotonicity functional coupled with the weak convergence in $W^{1,2}$ to the blowup limit makes it difficult to move back and forth from the original function to its blowup limit. \end{remark} \begin{corollary}[Blowup Classification] \label{BC} Any blowup found in Theorem\refthm{blowup2}must be homogeneous of degree two, and therefore in the right coordinate system, it will be a constant times $(x_n^{+})^2.$ \end{corollary} \begin{pf} By Theorem\refthm{FBR}any blowup found in Theorem\refthm{blowup2}will have to be a global solution to the obstacle problem with a free boundary which is a hyperplane. Then by applying a combination of the Cauchy-Kowalevski theorem and Holmgren's uniqueness theorem we conclude (after a possible rotation and change of coordinates) that the blowup limit is $C(x_n^{+})^2.$ \end{pf} \bibliographystyle{abbrv}
\subsection*{S1 Estimation of the residual specific heat} \begin{figure}\center \includegraphics[width=8.5cm]{figs1}\\ \caption{Temperature dependence of specific heat under zero field plotted as $C/T$ vs $T^2$ below 1 K. The solid line represents the linear extrapolation of the data down to 0 K.}\label{} \end{figure} Figure S1 shows the specific heat divided by temperature $C$/$T$ as a function of $T^2$ under zero magnetic field. The solid line represents the linear extrapolation of the data down to 0 K. The residual specific heat $\gamma_0$ is estimated as $\sim$ 0.5 mJ/mol$\cdot$K$^2$. \subsection*{S2 Noise level of the azimuthal angle dependence of specific heat} \begin{figure}\center \includegraphics[width=8.5cm]{figs2}\\ \caption{Azimuthal angle dependence of specific heat $\Delta C$/$T$ under 0.08 T at 0.35 K, the 2-fold signal of 1\% of $C_{\rm{e}}/T$ at 0.08 T, and the superposition of the data with 1\%, 2\%, and 3\% of the 2-fold signal of $C$/$T$.}\label{} \end{figure} Figure S2 shows the azimuthal angle dependence of specific heat under 0.08 T at 0.35 K (open black circles) together with the putative 2-fold signal of 1\% of $C_{\rm{e}}/T$ (solid line). The noise level is around 0.07 mJ/mol$\cdot$K$^2$. In this case, the fluctuation of the signal relative to ($\gamma_n$-$\gamma_0$) $\sim$ 8 mJ/mol$\cdot$K$^2$ is $\sim$ 0.07/8 = 0.88\%. Superpositions of the data with putative 2-fold signals with amplitudes of 1\%, 2\%, and 3\% of $C_{\rm{e}}/T$ are also shown in Fig. S2 in red, orange, and purple solid circles, respectively. Even when the amplitude of the 2-fold signal is comparable to the fluctuation of the data, it could still be observed if it really existed. \subsection*{S3 Magnetic field dependence of specific heat at 0.33 K for $H \parallel ab$} \begin{figure*}\center \includegraphics[width=14cm]{figs3}\\ \caption{(a) Magnetic field dependence of specific heat at 0.33 K for $H \parallel ab$-plane. (b) The plot of $C$/$T$ vs $H^{0.64}$, and the inset is the enlarged range of 0.05 T$^{0.64}$ $< H^{0.64} <$ 0.2 T$^{0.64}$.}\label{} \end{figure*} To check for possible $c$-axis point nodes, we measured the field-dependent specific heat with $H \parallel ab$-plane. (The measurement was done on another piece of crystal.) $C$/$T$ increases linearly with magnetic field, with different slopes in the low-field and high-field regions [see Fig. S3(a)], which is the behavior of a two-gap superconductor. This is clearly different from the $C$/$T$ $\propto H^{0.64}$ behavior expected for point nodes, as shown in Fig. S3(b) and in the enlarged plot over the range 0.05 T$^{0.64}$ $< H^{0.64} <$ 0.2 T$^{0.64}$ [see inset of Fig. S3(b)]. Therefore, $c$-axis point nodes can be excluded. For a typical two-gap superconductor, the linear increase of $C$/$T$ in the low-field region is dominated by the suppression of the gap with the smaller upper critical field, which is usually defined as the virtual upper critical field $H^{\ast}$ as shown in Fig. S3(a). Once the magnetic field is increased above $H^{\ast}$, $C$/$T$ increases linearly with field in the high-field region due to the suppression of the other gap, which has the larger upper critical field. The virtual upper critical field $H^{\ast}$ of PbTaSe$_2$ is estimated as 0.07 T as shown in Fig. S3(a). 
The value of $C$/$T$ at $H^{\ast}$ is $\sim$ 6.5 mJ/mol$\cdot$K$^2$, about 83\% of $\gamma_n$ $\sim$ 7.8 mJ/mol$\cdot$K$^2$; this is consistent with the ratio of the two gaps (84\% : 16\%) obtained by the fitting of $C_e/T$ [see Fig. 1(f)]. According to the $C_e/T$ fitting, the larger ratio (84\%) corresponds to the larger gap $\Delta_2$. Therefore, $H^{\ast}$ is the upper critical field for $\Delta_2$. On the other hand, the virtual upper critical field $H^{\ast}$ can be expressed as $H^{\ast}$ $\sim$ $\Phi_0/2\pi\xi_{ab}^{\ast}\xi_c^{\ast}$, while the upper critical field for $H \parallel ab$ can be expressed as $H_{c2}^{ab}$ $\sim$ $\Phi_0/2\pi\xi_{ab}\xi_c$. Here, $\xi_{ab}^{\ast}$ = $\hbar v_{F2}^{ab}$/$\pi\Delta_2$ and $\xi_{c}^{\ast}$ = $\hbar v_{F2}^{c}$/$\pi\Delta_2$ are the coherence lengths along the $ab$ plane and the $c$ axis for the larger gap $\Delta_2$ (0.58 meV), while $\xi_{ab}$ = $\hbar v_{F1}^{ab}$/$\pi\Delta_1$ and $\xi_{c}$ = $\hbar v_{F1}^{c}$/$\pi\Delta_1$ are the coherence lengths along the $ab$ plane and the $c$ axis for the smaller gap $\Delta_1$ (0.28 meV). $v_{Fi}$ ($i$ = 1, 2) is the Fermi velocity for each band. Then, the ratio $H^{\ast}$/$H_{c2}^{ab}$ can be expressed as \begin{equation} \label{eq.1} \frac{H^{\ast}}{H_{c2}^{ab}}=\frac{\xi_{ab}\xi_c}{\xi_{ab}^{\ast}\xi_c^{\ast}}=\frac{\Delta_2^2}{\Delta_1^2}\frac{v_{F1}^{ab}v_{F1}^{c}}{v_{F2}^{ab}v_{F2}^{c}}. \end{equation} $H_{c2}^{ab}$ is obtained as $\sim$ 0.3 T from Fig. S3(a). Assuming an isotropic Fermi velocity $v_{Fi}^{ab}$ = $v_{Fi}^c$, Eq. (\ref{eq.1}) then gives the ratio of the Fermi velocities of the two bands as \begin{equation*} \frac{v_{F2}}{v_{F1}}=\frac{\Delta_2}{\Delta_1}\sqrt{\frac{H_{c2}^{ab}}{H^{\ast}}}=\frac{0.58}{0.28}\sqrt{\frac{0.3}{0.07}}\approx 4.3. \end{equation*} Such a large difference in the Fermi velocity between different bands may originate from the topological band structure of PbTaSe$_2$, where the large $v_{F2}$ is from the Dirac band with linear dispersion, while the small $v_{F1}$ is from the traditional parabolic band. Indeed, a large difference in the effective mass of different bands has been observed by quantum oscillation measurements [20]. \end{document}
\section{Introduction} What do the bounded derived categories of the projective plane $\mathbb{P}^2_\mathbb{C}$, the $3$-Kronecker quiver $K_3$, and the group algebra of the symmetric group on $4$ letters over $\overline{\mathbb{F}}_2$ have in common? One answer is that we don't really understand any of them. For instance, we have very little information on what the possible localizations of these categories are. We don't understand what the possible kernels of such localizations, i.e.\ the thick subcategories, are\textemdash{}let alone what the resulting quotient looks like. Even in these small examples there is much to be done; it was only recently shown by Pirozhkov \cite{P2phantoms} that there are no phantom subcategories, i.e.\ thick subcategories not detected by additive invariants, for $\mathbb{P}^2$, and the corresponding question is still open for $3$-Kronecker. What we understand quite well are the \emph{monoidal} localizations. Both $\mathsf{D}^\mathrm{b}(\mathbb{P}^2)$ and $\mathsf{D}^\mathrm{b}(\overline{\mathbb{F}}_2S_4)$ are equipped with compatible symmetric monoidal structures and the possible monoidal localizations were classified by Thomason \cite{Thomclass} in the former case and Benson, Carlson, and Rickard in the latter \cite{BCR}. The study of these monoidal localizations comprises tensor triangular geometry, affectionately known as tt-geometry, which was introduced by Balmer \cite{BaSpec} and has rapidly matured, aided by the plethora of tantalizing examples, into a very active field of study. The idea is to functorially assign to every tensor triangulated category (tt-category) a space parameterizing thick ideals, i.e.\ thick subcategories closed under tensoring and which are radical. This creates opportunities to work geometrically and topologically to study the behaviour of tt-categories and to probe their objects in terms of an associated support theory. The secret sauce is \emph{distributivity} of the lattice of radical thick tensor ideals. Distributivity is the bare minimum for a lattice to be isomorphic to the lattice of open subsets of some space. One can interpret intersections of ideals in terms of the monoidal structure and so, as for radical ideals in a commutative ring, intersections of ideals distribute over sums of ideals. The insight is that the lattices in tt-geometry are always lattices of opens and so one can pass, without loss of information, from the lattice to the associated space\textemdash{}the spectrum. The aim of this paper is to initiate a systematic and general study of lattices of thick subcategories of triangulated categories. To this end we make a contribution to the taxonomy of properties that such a lattice may, or may not, possess, including distributivity and various weakenings thereof. This occupies Section~\ref{sec:shithappens}. Our first main observation is that for lattices of thick subcategories the bare minimum requirement, i.e.\ distributivity, suffices to be the lattice of opens of a topological space. \begin{thm*}[Corollary~\ref{cor:spatial}] Let $\mathsf{K}$ be an essentially small triangulated category with lattice of thick subcategories $\Thick(\mathsf{K})$. If $\Thick(\mathsf{K})$ is distributive then there is a sober topological space $X$, given by prime thick subcategories of $\mathsf{K}$, such that $\Thick(\mathsf{K})$ is isomorphic to the lattice of open subsets of $X$. \end{thm*} However, distributivity does not hold in many examples. 
None of the projective plane, $3$-Kronecker, or $\overline{\mathbb{F}}_2S_4$ has a bounded derived category with a distributive lattice of thick subcategories. Indeed, when there is some discrete combinatorial data involved in understanding objects, for example the twisting sheaves and their mutations on $\mathbb{P}^2$ or the preprojectives and preinjectives for $3$-Kronecker, this seems to rule out distributivity. So, how are we to approach thick subcategories in these settings? As a first step in this direction we construct, in a universal way, two spaces one can associate to \emph{any} essentially small triangulated category $\mathsf{K}$ in order to get a measure of its lattice of thick subcategories. In the absence of distributivity these spaces cannot faithfully reflect the structure of $\Thick(\mathsf{K})$. The first one, which is functorial with respect to all exact functors, adds information and the second loses information unless the lattice is distributive. Let us denote by $\mathsf{tcat}$ the category of essentially small triangulated categories and exact functors, and by $\mathsf{Sob}$ the category of sober spaces and continuous maps. \begin{thm*} There is a functor $\fSpcnt\colon \mathsf{tcat}^\mathrm{op} \to \mathsf{Sob}$, and an associated notion of support, such that for each $\mathsf{K} \in \mathsf{tcat}$ the pair $(\fSpcnt \mathsf{K}, \supp)$ is the terminal support theory. \end{thm*} One can think of $\fSpcnt \mathsf{K}$ as a coarse moduli space for thick subcategories of $\mathsf{K}$, where the topology reflects the various inclusion relations. The second space is, in some sense, a refinement of $\fSpcnt$ which is still free (in the technical sense), but subject to more constraints. \begin{thm*} For any essentially small triangulated category $\mathsf{K}$ there exists a sober space $\Spcnt \mathsf{K}$ such that if $\Thick(\mathsf{K})$ is distributive then $\mathcal{O}\Spcnt \mathsf{K}$, the lattice of open subsets of $\Spcnt \mathsf{K}$, is isomorphic to $\Thick(\mathsf{K})$. This space comes with a canonical comparison map $\Spcnt \mathsf{K} \to \fSpcnt \mathsf{K}$. \end{thm*} These spaces are constructed and studied in Sections \ref{sec:ff} and \ref{sec:pf} respectively. The space $\Spcnt \mathsf{K}$, which we dub the `non-tensor spectrum' of $\mathsf{K}$, admits natural comparison maps to the Balmer spectrum when $\mathsf{K}$ is a rigid tt-category and to the noncommutative spectrum of \cite{nakano2019} when $\mathsf{K}$ is a monoidal triangulated category such that every 2-sided thick $\otimes$-ideal is semiprime. It also recovers the spectrum defined by Matsui \cite{matsui2019} in examples, e.g.\ singularity categories of complete intersections. However, caveat emptor: the non-tensor spectrum can be empty in non-trivial examples! In fact, one of the original motivations for this work was to illustrate that, without distributivity hypotheses, one can't develop a reasonable theory of supports on a space which is functorial and compatible with other theories. One seems to require some refined notion combining the continuous and discrete pieces of the classification. A tempting direction for future work is to consider spaces with some form of decoration, e.g.\ those appearing in \cite{StevensonAntieau} and \cite{GratzZvonareva}. We conclude the paper with two sections discussing examples and a small selection of open problems respectively. 
\section{Preliminaries on lattices} In this section we recall various facts about lattices, spaces, and the duality between them that we will use throughout. We do not aim to give a complete overview, and recommend \cite{sketches} and \cite{stonespaces} as useful references. \subsection{The players} We begin by introducing various characters and the relations between them. \begin{defn}\label{clat} A \emph{lattice} $L$ is a poset which has a least upper bound and greatest lower bound for every finite non-empty set of elements, i.e.\ has finite non-empty joins and meets which we denote by $\vee$ and $\wedge$ respectively. We say $L$ is \emph{complete} if it admits meets of arbitrary sets of elements, and note that this implies that arbitrary joins also exist (equivalently $L$ is complete if it admits arbitrary joins and the existence of meets follows). We denote the top and bottom of a lattice, whenever they exist, by $1$ and $0$ respectively. Note that a complete lattice has both a top and a bottom. There are two different categories of complete lattices that we will be concerned with. We denote by $\mathsf{CjSLat}$ the category of complete lattices (thought of as complete join semilattices) and poset maps which preserve arbitrary joins. We note that the bottom element of a lattice, being the empty join, is preserved by any such map, and that we do not require the top element to be preserved. We denote by $\mathsf{CLat}$ the category of complete lattices and poset maps which preserve arbitrary joins and finite meets, and hence both the top and bottom. \end{defn} For our purposes complete lattices are not sufficient. We ultimately want to work with lattices that arise as the poset of open subsets of a space. This leads us to the notion of spatial frames. \begin{defn}\label{frm} A \emph{frame} $F$ is a complete lattice in which finite meets distribute over arbitrary joins. We denote by $\mathsf{Frm}$ the category of frames with poset maps which preserve finite meets and arbitrary joins. \end{defn} \begin{defn} Let $L$ be a lattice. An element $l\in L$ is \emph{compact} (or \emph{finitely presented}) if, for any set of elements $\{m_i \mid i\in I\} \subseteq L$, we have \[ l \leq \bigvee_{i\in I}m_i \text{ if and only if } \exists i_1,\ldots, i_n \text{ such that } l \leq \bigvee_{j=1}^n m_{i_j}. \] Viewing $L$ as a category this just says that $l$ is a finitely presented object, i.e.\ $L(l,-)$ commutes with filtered colimits. \end{defn} \begin{defn}\label{cohfrm} A frame $F$ is \emph{coherent} if its finitely presented objects form a bounded sublattice (in particular $1$ must be finitely presented) and generate $F$ under joins. \end{defn} \begin{rem} From the categorical perspective this is just asking that $F$ is locally finitely presented and the finitely presented objects of $F$ are closed under finite limits and colimits. \end{rem} There is an obvious fully faithful inclusion functor $\mathsf{Frm} \to \mathsf{CLat}$ and a (faithful) forgetful functor $\mathsf{Frm} \to \mathsf{CjSLat}$. We denote by $\mathbf{2}$ the unique frame with two elements $\{0 \leq 1\}$. \begin{defn}\label{pt} A \emph{point} of a complete lattice $L$ is a morphism $L \to \mathbf{2}$ in $\mathsf{CLat}$. The set of points of $L$ is thus $\mathsf{CLat}(L,\mathbf{2})$. \end{defn} Since $\mathsf{Frm}$ is a full subcategory of $\mathsf{CLat}$ this reduces to the usual notion for frames. We take this opportunity to recall another definition which is intimately connected to the notion of points. 
\begin{defn}\label{defn:meetprime} An element $\mathfrak{p}\neq 1$ of a lattice $L$ is called \emph{meet-prime} if, whenever $l \wedge m \leq \mathfrak{p}$, we have $l \leq \mathfrak{p}$ or $m \leq \mathfrak{p}$. \end{defn} \begin{rem} There is a bijection between points of $L$ and the set $\mathcal{P}$ of meet-prime elements of $L$. Explicitly, this is given by sending a meet-prime element $\mathfrak{p}$ to the point $p$ defined by \[ p(a) = \begin{cases} 0 & \text{ if } a\leq \mathfrak{p} \\ 1 & \text{ if } a\nleq \mathfrak{p}. \end{cases} \] The inverse is given by sending a point $p$ to the element \[ \mathfrak{p} = \bigvee\{ a \mid p(a)=0\} \] which is easily checked to be meet-prime. \end{rem} \begin{defn}\label{sfrm} A frame $F$ is \emph{spatial} if it has enough points in the sense that whenever $a\nleq b$ in $F$ there exists a point $p\colon F\to \mathbf{2}$ such that $p(a)=1$ and $p(b)=0$. We denote by $\mathsf{SFrm}$ the full subcategory of $\mathsf{Frm}$ consisting of the spatial frames. \end{defn} We now turn to the topological incarnation of spatial frames and the passage between spaces and frames. \begin{defn}\label{Sob} A topological space $X$ is \emph{sober} if every irreducible closed subset of $X$ has a unique generic point. We denote by $\mathsf{Sob}$ the category of sober spaces and continuous maps. \end{defn} We recall that $\mathsf{SFrm}$ and $\mathsf{Sob}$ are dual, i.e.\ there are quasi-inverse equivalences \begin{displaymath} \xymatrix{ \mathsf{SFrm}^\mathrm{op} \ar[rr]<0.75ex>^-{\pt} \ar@{<-}[rr]<-0.75ex>_-{\mathcal{O}} && \mathsf{Sob} } \end{displaymath} where for a frame $F$ \begin{displaymath} \pt(F) = \mathsf{Frm}(F, \mathbf{2}) \end{displaymath} with open subsets, indexed by $l\in F$, given by $U_l = \{p\colon F\to \mathbf{2} \mid p(l) = 1\}$, and for a sober space $X$ we define $\mathcal{O}(X)$ to be the frame of open subsets of $X$, which is spatial (for any space $X$). This equivalence is known as Stone duality. \begin{rem} Sometimes $\pt(F)$ is also denoted by $\Spec(F)$ and called the spectrum of $F$ in analogy with algebraic geometry. \end{rem} The following lemma is an immediate consequence of this duality. \begin{lem}\label{lem:SAFT} The category $\mathsf{SFrm}$ of spatial frames is cogenerated by $\mathbf{2}$. Moreover, it is complete, locally small, and well-powered, and so satisfies the hypotheses of the Special Adjoint Functor Theorem. \end{lem} Before moving on we briefly return to coherent frames as introduced in Definition~\ref{cohfrm}. A coherent frame is automatically spatial, as shown for instance in \cite{stonespaces}*{Theorem~II.3.4}. Thus, under Stone duality, the coherent frames correspond to a distinguished class of sober spaces. \begin{defn}\label{defn:spectral} The essential image of the coherent frames under Stone duality consists of the \emph{spectral spaces} (sometimes also called coherent spaces for coherence with the frame terminology). These are the quasi-compact sober spaces such that the quasi-compact open subsets are closed under finite intersections and form a basis for the topology. \end{defn} As shown by Hochster \cite{Hochster} the spectral spaces are precisely the spectra of commutative rings. Their relevance to us is that they appear naturally in the context of tt-geometry. There is a natural notion of duality for spectral spaces which will appear in the comparison of our work to the tt-framework. \begin{defn}\label{defn:Hochsterdual} Let $X$ be a spectral space. 
The \emph{Hochster dual} of $X$, denoted by $X^\vee$, has the same points as $X$ and the topology generated by taking the closed subsets of $X$ with quasi-compact open complement as a basis of open subsets for $X^\vee$. \end{defn} \begin{rem} Another point of view is given as follows: let $K(X)$ denote the lattice of quasi-compact open subsets of $X$, i.e.\ the compact objects of the coherent frame $\mathcal{O}(X)$. The dual $X^\vee$ corresponds to the space of points of $\Ind(K(X)^\mathrm{op})$, the coherent frame given by the ind-completion (i.e.\ lattice of ideals) of the opposite lattice of $K(X)$. \end{rem} \subsection{Limits}\label{ssec:limits} We now briefly discuss limits in the various categories we have introduced above. By \cite{sketches}*{Lemma~C.1.1.3} the forgetful functors to $\mathsf{Set}$ create limits for $\mathsf{Frm}$ and $\mathsf{CjSLat}$ (one defines the lattice operations pointwise on products) and the forgetful functor $\mathsf{Frm} \to \mathsf{CjSLat}$ has a left adjoint. One can easily check that the forgetful functor to $\mathsf{Set}$ also creates limits for $\mathsf{CLat}$. \begin{lem}\label{lem:limclosed} The full subcategory $\mathsf{SFrm}$ of $\mathsf{Frm}$ is closed under limits. \end{lem} \begin{proof} It is enough to check that $\mathsf{SFrm}$ is closed under products and equalizers. Suppose that $L_1$ and $L_2$ are spatial frames and we are given an equalizer diagram, taken in $\mathsf{Frm}$, \begin{displaymath} \xymatrix{ L\ar[r]^-e & L_1 \ar[r]<0.5ex>^-f \ar[r]<-0.5ex>_-g & L_2 } \end{displaymath} so that \begin{displaymath} L = \{l\in L_1 \mid f(l) = g(l)\} \end{displaymath} by the discussion above. If $l \nleq l'$ in $L$, then this is still true in $L_1$, and so there exists a point $p\colon L_1\to \mathbf{2}$ separating these elements. Thus one can separate these elements of $L$ via the point $L \stackrel{e}{\to} L_1 \stackrel{p}{\to} \mathbf{2}$. Suppose $\{L_i \mid i\in I\}$ is a set of spatial frames with product (taken in $\mathsf{Frm}$) $L$ and $l\nleq l'$ in $L$. Then, there exists some $i\in I$ such that in the $i$th component $l_i \nleq l'_i$ and so, since $L_i$ is spatial, a point $p\colon L_i \to \mathbf{2}$ separating $l_i$ and $l'_i$. Then precomposing with the canonical projection $L\to L_i$ gives a point of $L$ separating $l$ and $l'$. \end{proof} \subsection{The adjunctions} Our approach to studying lattices of thick subcategories will be centred around the existence of two adjoint functors; the point of this section is to exhibit them. We can immediately deduce the existence of the first from general nonsense. \begin{prop}\label{prop:adjoint1} The forgetful functor $i\colon \mathsf{SFrm} \to \mathsf{CjSLat}$ has a left adjoint $d$. \end{prop} \begin{proof} We have seen in Lemma~\ref{lem:limclosed} that the inclusion $\mathsf{SFrm} \to \mathsf{Frm}$ preserves limits. By \cite{sketches}*{Lemma~C.1.1.3} the canonical functor $\mathsf{Frm} \to \mathsf{CjSLat}$ preserves limits, and thus the composite $i\colon \mathsf{SFrm} \to \mathsf{CjSLat}$ preserves limits. The category $\mathsf{CjSLat}$ is certainly locally small and so by Lemma~\ref{lem:SAFT} the Special Adjoint Functor Theorem applies to yield the left adjoint $d$. \end{proof} It is possible to give a concrete construction of $d$. We sketch the construction here, in order that the examples we give later are not completely out of the blue, but do not provide full justifications. \begin{con}\label{cons:left} Let $L$ be a complete join semilattice. 
We define the collection of \emph{semipoints} of $L$ to be \begin{displaymath} \spt(L) = \mathsf{CjSLat}(L, \mathbf{2}), \end{displaymath} i.e.\ the collection of join (but not necessarily meet) preserving maps to the frame $\mathbf{2}$. As in the discussion of Stone duality (and as to come in Construction~\ref{cons:lefter}) we define for $l\in L$ a subset \begin{displaymath} U_l = \{p\in \spt(L) \mid p(l)=1\} \end{displaymath} and note that, for any collection of elements $\{l_\lambda \mid \lambda \in \Lambda\}$ of $L$, we have $U_{\vee_{\lambda} l_\lambda} = \cup_{\lambda} U_{l_\lambda}$. These subsets will, in general, not be closed under finite intersections (for example $\spt(L)$ need not be of this form), but we take the topology they generate (so the open subsets are unions of finite intersections of $U_l$'s) and endow $\spt(L)$ with the structure of a topological space. The free (spatial) frame on $L$, $d(L)$, is given by $\mathcal{O}\spt(L)$, the corresponding spatial frame of opens ($\mathcal{O}^\mathrm{op}\spt^\mathrm{op}$ to be precise about the action on maps, but we omit these op's throughout for readability). The unit of the adjunction $L \to d(L)$ is given by the obvious assignment $l \mapsto U_l$, and for a frame $F$ the counit is given by \begin{displaymath} \varepsilon\colon d(F) \to F \quad \quad \varepsilon(\bigcup_i (\bigcap_{j=1}^{n_i} U_{f_j})) = \bigvee_i (\bigwedge_{j=1}^{n_i} f_j). \end{displaymath} \end{con} We also wish to consider the fully faithful functor $j\colon \mathsf{SFrm} \to \mathsf{CLat}$ which, as we shall see, is also a right adjoint. This follows, as above, by noting that limits in $\mathsf{CLat}$ are created by the forgetful functor. However, we wish to give a direct construction of the left adjoint, which we denote by $\Omega \colon \mathsf{CLat} \to \mathsf{SFrm}$. \begin{con}\label{cons:lefter} Given a complete lattice $L\in \mathsf{CLat}$ we can construct from it a space exactly as if it were a frame. Indeed, the space of points of $L$ is $\mathsf{CLat}(L, \mathbf{2})$ (note that this is taken in $\mathsf{CLat}$ and so maps are required to preserve all joins and finite meets), with open subsets the $U_l = \{p\colon L\to \mathbf{2} \mid p(l)=1\}$. One checks easily (using that joins and meets are preserved as indicated) that these are the open subsets of a topology. As for a frame, we often denote this space by either $\pt(L)$ or $\Spec(L)$. We define $\Omega L$ to be $\mathcal{O}\mathsf{CLat}(L,\mathbf{2})$, the spatial frame of open subsets of this space. Both $\mathcal{O}$ and $\mathsf{CLat}(-,\mathbf{2})$ are (contravariantly) functorial and so we get a functor $\Omega\colon \mathsf{CLat} \to \mathsf{SFrm}$. Now let us construct the unit and counit of the adjunction. The unit for $L\in \mathsf{CLat}$ as above is given by \begin{displaymath} L \stackrel{\eta_L}{\to} j\Omega L \quad \text{via} \quad l \mapsto U_l. \end{displaymath} On the other hand, for a spatial frame $F$ we know that $\Omega jF$ is naturally isomorphic to $F$, via the duality with $\mathsf{Sob}$. We thus take for the counit $\varepsilon\colon \Omega jF \to F$ the counit of the equivalence $\mathsf{SFrm} \cong \mathsf{Sob}^\mathrm{op}$. \end{con} \begin{prop}\label{prop:adjoint2} The above construction determines an adjunction $\Omega \dashv j$ between $\mathsf{SFrm}$ and $\mathsf{CLat}$. \end{prop} \begin{proof} We check the triangle identities, which is straightforward in this case. 
We start with the composite \begin{displaymath} \xymatrix{ \Omega \ar[r]^-{\Omega(\eta)} & \Omega j \Omega \ar[r]^-{\varepsilon_\Omega} & \Omega. } \end{displaymath} Since $\Omega$ lands in $\mathsf{SFrm}$ this composite is the identity by Stone duality (using our choice of the counit). The other identity concerns the composite \begin{displaymath} \xymatrix{ j \ar[r]^-{\eta_j} & j \Omega j \ar[r]^-{j(\varepsilon)} & j, } \end{displaymath} which can be identified with $j$ applied to the composite $\id_{\mathsf{SFrm}} \to \Omega j \to \id_{\mathsf{SFrm}}$ by using that $j$ is fully faithful. This latter composite is again the identity by Stone duality. \end{proof} Before continuing with our discussion let us make the following observation which will be useful in the sequel; we use the concepts recalled in Definition~\ref{defn:meetprime} and the remark following it. \begin{lem}\label{lem:sober} Let $L$ be a complete lattice. The space $X = \mathsf{CLat}(L,\mathbf{2})$ is sober. \end{lem} \begin{proof} The closed subsets of $X$ are precisely those of the form \begin{displaymath} Z_l = \{p\in \mathsf{CLat}(L,\mathbf{2}) \mid p(l)=0\} \text{ for } l\in L. \end{displaymath} Under the bijection between points of $L$ and the set $\mathcal{P}$ of meet-prime elements we have $Z_l = \{\mathfrak{p} \in \mathcal{P} \mid l\leq \mathfrak{p}\}$. We note that, under this identification, it is clear that for $\mathfrak{q}\in \mathcal{P}$ the subset $Z_\mathfrak{q}$ is irreducible with generic point $\mathfrak{q}$. Suppose that $Z_l$ is irreducible, and consider $\mathfrak{q} = \bigwedge_{\mathfrak{p}\in Z_l}\mathfrak{p}$. We claim that $\mathfrak{q}$ is meet-prime. To see this, suppose that we are given $m,n\in L$ with $m\wedge n\leq \mathfrak{q}$ i.e.\ $m\wedge n \leq \mathfrak{p}$ for every $\mathfrak{p}\in Z_l$, i.e. $Z_l\subseteq Z_{m\wedge n} = Z_m \cup Z_n$. By irreducibility of $Z_l$ we have, without loss of generality, that $Z_l\subseteq Z_m$, i.e.\ $m\leq \mathfrak{p}$ for all $\mathfrak{p}\in Z_l$ and so $m\leq \mathfrak{q}$. Hence $\mathfrak{q}$ is meet-prime as claimed and we have $Z_l = Z_{\mathfrak{q}}$, so $Z_l$ has unique generic point $\mathfrak{q}$. \end{proof} So we have two ways of constructing a spatial frame from a complete lattice: the functor $\Omega$ leaves lattices which are already spatial frames unchanged but we need to be careful to take maps that also preserve finite meets, while $d$ is more brutal but we can work with arbitrary join preserving maps. However, $\Omega$ can fail to be kind to nondistributive lattices. \begin{ex}\label{ex:diamond} Consider the diamond lattice $D$ with Hasse diagram \begin{displaymath} \begin{tikzpicture}[scale=0.7] \node (v0) at (0,0) {}; \node (va) at (-2,2) {}; \node (vb) at (0,2) {}; \node (vc) at (2,2) {}; \node (v1) at (0,4) {}; \draw[fill] (v0) circle (2pt) node [below] {0}; \draw[fill] (va) circle (2pt) node [left] {$l$}; \draw[fill] (vb) circle (2pt) node [right] {$m$}; \draw[fill] (vc) circle (2pt) node [right] {$n$}; \draw[fill] (v1) circle (2pt) node [above] {$1$}; \path[-] (v0) edge node [above] {} (va); \path[-] (v0) edge node [above] {} (vb); \path[-] (v0) edge node [above] {} (vc); \path[-] (va) edge node [above] {} (v1); \path[-] (vb) edge node [above] {} (v1); \path[-] (vc) edge node [above] {} (v1); \end{tikzpicture} \end{displaymath} This is a complete nondistributive lattice. 
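Indeed, distributivity already fails for the triple $(l,m,n)$: we have \begin{displaymath} l \wedge (m \vee n) = l \wedge 1 = l, \quad \text{while} \quad (l \wedge m) \vee (l \wedge n) = 0 \vee 0 = 0. \end{displaymath}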
We have $\mathsf{CLat}(D,\mathbf{2})=\varnothing$ since preserving meets and joins implies that any point $p$ would, for each pair of elements $i,j\in \{l,m,n\}$, have to send one of them to $1$ and the other to $0$. This cannot be done compatibly for all three of $l,m$, and $n$. Thus $\Omega D$ is $\mathcal{O}(\varnothing) = \{\varnothing\} = \mathbf{1}$, which is the terminal frame. \end{ex} \begin{lem}\label{lem:serialkiller} Let $L\in \mathsf{CLat}$ be a complete lattice. If there is an embedding of complete lattices $D\to L$ then $\Omega L \cong \mathbf{1}$. \end{lem} \begin{proof} If $p\colon L\to \mathbf{2}$ were a point of $L$ then it would restrict to a point of $D$. This is impossible, and so we must have $\pt(L) = \varnothing$ and the conclusion follows. \end{proof} \begin{rem} The lemma requires that the embedding takes place in $\mathsf{CLat}$ and so it must preserve the top and bottom elements. As the following example shows, a lattice can be nondistributive, due to containing the diamond as a non-bounded sublattice (i.e.\ the diamond is a subposet), and be non-trivial under $\Omega$. \end{rem} \begin{ex} Consider the complete nondistributive lattice $L$ with Hasse diagram \begin{displaymath} \begin{tikzpicture}[scale=0.7] \node (v0) at (0,-2) {}; \node (vx) at (0,0) {}; \node (va) at (-2,2) {}; \node (vb) at (0,2) {}; \node (vc) at (2,2) {}; \node (v1) at (0,4) {}; \draw[fill] (vx) circle (2pt) node [right] {$x$}; \draw[fill] (v0) circle (2pt) node [below] {0}; \draw[fill] (va) circle (2pt) node [left] {$l$}; \draw[fill] (vb) circle (2pt) node [right] {$m$}; \draw[fill] (vc) circle (2pt) node [right] {$n$}; \draw[fill] (v1) circle (2pt) node [above] {$1$}; \path[-] (v0) edge node [above] {} (vx); \path[-] (vx) edge node [above] {} (va); \path[-] (vx) edge node [above] {} (vb); \path[-] (vx) edge node [above] {} (vc); \path[-] (va) edge node [above] {} (v1); \path[-] (vb) edge node [above] {} (v1); \path[-] (vc) edge node [above] {} (v1); \end{tikzpicture} \end{displaymath} This lattice has a point $p\colon L\to \mathbf{2}$ defined by $p(0) = 0$ and sending the remaining elements to $1$. In fact this is the unique point and so $\Spec L = \ast$, the terminal space, and $\Omega L = \mathbf{2}$. \end{ex} The other minimal nondistributive lattice, the pentagon lattice, is not killed by $\Omega$. \begin{ex}\label{ex:pentagon} Let $P$ be the pentagon lattice with Hasse diagram \begin{displaymath} \begin{tikzpicture}[scale=0.7] \node (v0) at (0,0) {}; \node (va) at (-1,1) {}; \node (vb) at (1,2) {}; \node (vc) at (-1,3) {}; \node (v1) at (0,4) {}; \draw[fill] (v0) circle (2pt) node [below] {0}; \draw[fill] (va) circle (2pt) node [left] {$m$}; \draw[fill] (vb) circle (2pt) node [right] {$l$}; \draw[fill] (vc) circle (2pt) node [left] {$n$}; \draw[fill] (v1) circle (2pt) node [above] {$1$}; \path[-] (v0) edge node [above] {} (va); \path[-] (v0) edge node [above] {} (vb); \path[-] (va) edge node [above] {} (vc); \path[-] (vb) edge node [above] {} (v1); \path[-] (vc) edge node [above] {} (v1); \end{tikzpicture} \end{displaymath} The lattice $P$ has precisely two points $p_1$ and $p_2$ given by \begin{displaymath} \begin{array}{cc} m,n & \mapsto 0 \\ l & \mapsto 1 \end{array} \quad \text{ and } \quad \begin{array}{cc} m,n &\mapsto 1 \\ l & \mapsto 0 \end{array} \end{displaymath} respectively, with the discrete topology. 
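Explicitly, the open subsets are $U_0 = \varnothing$, $U_m = U_n = \{p_2\}$, $U_l = \{p_1\}$, and $U_1 = \{p_1, p_2\}$.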
Thus $\Omega P$ is the lattice \begin{displaymath} \begin{tikzpicture}[scale=0.7] \node (v0) at (0,0) {}; \node (va) at (-1,1) {}; \node (vb) at (1,1) {}; \node (v1) at (0,2) {}; \draw[fill] (v0) circle (2pt) node [below] {0}; \draw[fill] (va) circle (2pt) node [left] {$x$}; \draw[fill] (vb) circle (2pt) node [right] {$y$}; \draw[fill] (v1) circle (2pt) node [above] {$1$}; \path[-] (v0) edge node [above] {} (va); \path[-] (v0) edge node [above] {} (vb); \path[-] (va) edge node [above] {} (v1); \path[-] (vb) edge node [above] {} (v1); \end{tikzpicture} \end{displaymath} where $x=\{p_2\}$ and $y = \{p_1\}$, and we see $m$ and $n$ have been collapsed to a single element $x$. \end{ex} \section{Preliminaries on triangulated categories}\label{sec:t} In this section, which serves mostly to fix notation and ideas, we introduce the lattices that will be our main focus. We denote by $\mathsf{tcat}$ the category of essentially small triangulated categories and exact functors between them. Throughout, all triangulated categories will be essentially small unless it is explicitly mentioned otherwise. Let $\mathsf{K}$ be a triangulated category. \begin{defn}\label{def:thick} A full subcategory $\mathsf{M}\subset \mathsf{K}$ is \emph{thick} if it is closed under: \begin{itemize} \item isomorphisms; \item all suspensions; \item cones; \item direct summands, \end{itemize} i.e.\ it is a triangulated subcategory closed under summands. Given a collection of objects $C \subseteq \mathsf{K}$ we denote by $\thick(C)$ the smallest thick subcategory containing $C$ (which exists by Lemma~\ref{lem:thicklattice}). We will instead use $\thick_\mathsf{K}(C)$ if we wish to emphasise the ambient triangulated category. \end{defn} We denote by $\Thick(\mathsf{K})$ the collection of thick subcategories of $\mathsf{K}$. This is naturally a poset when ordered by inclusion. \begin{lem}\label{lem:thicklattice} The poset $\Thick(\mathsf{K})$ is a complete lattice with meets given by intersections. The join of a family $\{\mathsf{M}_\lambda\mid \lambda \in \Lambda\}$ of thick subcategories is given by \begin{displaymath} \bigvee_{\lambda \in \Lambda} \mathsf{M}_\lambda = \thick(\bigcup_{\lambda \in \Lambda} \mathsf{M}_\lambda). \end{displaymath} \end{lem} \begin{proof} It is evident from the closure conditions defining a thick subcategory that any intersection of thick subcategories is again thick. Since a thick subcategory is determined by the objects it contains it follows immediately that this must give the meet. That the join is as specified can be checked directly or deduced from the formula for the join in terms of the meet. \end{proof} \begin{rem} The lattice $\Thick(\mathsf{K})$ is in many examples not even distributive, let alone a frame; we explore this in depth in Section~\ref{sec:shithappens}. \end{rem} \begin{lem}\label{lem:commute} Suppose $F\colon \mathsf{K} \to \mathsf{L}$ is an exact functor. For any collection of objects $C\subseteq \mathsf{K}$ there is an equality \begin{displaymath} \thick_{\mathsf{L}}(F\thick_{\mathsf{K}}(C)) = \thick_{\mathsf{L}}(FC). \end{displaymath} \end{lem} \begin{proof} It is clear that $\thick_{\mathsf{L}}(FC) \subseteq \thick_{\mathsf{L}}(F\thick_{\mathsf{K}}(C))$. For the other direction, consider the subcategory \begin{displaymath} \mathsf{M} = \{k\in \mathsf{K} \mid Fk \in \thick_{\mathsf{L}}(FC)\} \end{displaymath} of $\mathsf{K}$. By (the analogue for thick subcategories of) \cite{StevensonActions}*{Lemma~3.8} the subcategory $\mathsf{M}$ is thick. 
It contains $C$ by construction and hence contains $\thick_{\mathsf{K}}(C)$. Thus $F\thick_{\mathsf{K}}(C) \subseteq \thick_{\mathsf{L}}(FC)$ from which the remaining containment follows immediately. \end{proof} \begin{lem}\label{lem:thickfunctor} Suppose $F\colon \mathsf{K} \to \mathsf{L}$ is an exact functor. Then the assignment \begin{displaymath} T(F)\colon \Thick(\mathsf{K}) \to \Thick(\mathsf{L}) \quad \quad \mathsf{M} \mapsto \thick(F\mathsf{M}) =: T(F)\mathsf{M} \end{displaymath} is a map in $\mathsf{CjSLat}$, i.e.\ it preserves the order and arbitrary joins. \end{lem} \begin{proof} If $\mathsf{M} \subseteq \mathsf{N} \subseteq \mathsf{K}$ are thick subcategories then $F\mathsf{M} \subseteq F\mathsf{N}$ and so clearly $T(F)$ is order preserving. Given a family $\{\mathsf{M}_\lambda\mid \lambda \in \Lambda\}$ of thick subcategories of $\mathsf{K}$ we have \begin{align*} T(F) \bigvee_{\lambda \in \Lambda} \mathsf{M}_\lambda &= \thick_\mathsf{L}\left(F \thick_{\mathsf{K}}\left(\bigcup_{\lambda \in \Lambda} \mathsf{M}_\lambda\right)\right) \\ &= \thick_\mathsf{L}\left(F \bigcup_{\lambda \in \Lambda} \mathsf{M}_\lambda\right) \\ &= \thick_\mathsf{L}\left(\bigcup_{\lambda \in \Lambda} F\mathsf{M}_\lambda\right) \\ &= \thick_\mathsf{L}\left(\bigcup_{\lambda \in \Lambda} \thick_{\mathsf{L}}\left( F\mathsf{M}_\lambda\right)\right) \\ &= \bigvee_{\lambda \in \Lambda} T(F)\mathsf{M}_\lambda, \end{align*} where we have used Lemma~\ref{lem:commute} for the second equality. \end{proof} \begin{defn} We define a functor $T\colon \mathsf{tcat} \to \mathsf{CjSLat}$, by setting $T(\mathsf{K}) = \Thick(\mathsf{K})$ and using Lemma~\ref{lem:thickfunctor} to define the action on maps. \end{defn} One cannot, in general, improve this to a factorization through $\mathsf{CLat}$. If one wants the assignment $\mathsf{K} \mapsto \Thick(\mathsf{K})$ to be functorial with respect to all exact functors then the price one pays is that one has to ignore meets. This is fairly typical rather than pathological behaviour, which one should expect in examples where there is a `combinatorial' component to the classification problem for thick subcategories. \begin{ex}\label{ex:letsnotmeet} Consider the linearly oriented Dynkin quiver $A_2$ \begin{displaymath} \begin{tikzpicture}[shorten >= 2pt, shorten <= 2pt] \node (v1) at (1.5,0) {}; \node (v2) at (3,0) {}; \draw[fill] (v1) circle (2pt) node [above] {\footnotesize 1}; \draw[fill] (v2) circle (2pt) node [above] {\footnotesize 2}; \path[->] (v1) edge node [above] {} (v2); \end{tikzpicture} \end{displaymath} and the quiver endomorphism $f$ determined by $f(1) = 2 = f(2)$. Let $F$ denote the corresponding derived base change functor $\mathsf{D}^\mathrm{b}(kA_2) \to \mathsf{D}^\mathrm{b}(kA_2)$. Let $P_1$ and $P_2$ denote the indecomposable projective modules. We have, on one hand, \begin{align*} T(F)\thick(P_1) \wedge T(F) \thick(P_2) &= \thick(FP_1) \wedge \thick (FP_2) \\ &= \thick(P_2) \wedge \thick(P_2) \\ &= \thick(P_2). \end{align*} But, on the other hand, \begin{align*} T(F)(\thick(P_1) \wedge \thick(P_2)) = T(F)(0) = 0. \end{align*} \end{ex} \begin{rem} It is also natural (perhaps seemingly more so given one's training) to consider, given $F\colon \mathsf{K} \to \mathsf{L}$, the poset map \begin{displaymath} F^{-1}\colon \Thick(\mathsf{L}) \to \Thick(\mathsf{K}) \end{displaymath} which preserves arbitrary meets (but not necessarily joins). 
Ordering thick subcategories by reverse containment one can thus define a functor $\mathsf{tcat}^\mathrm{op} \to \mathsf{CjSLat}$. However, the choice is in some sense cosmetic: the map $F^{-1}$ is right adjoint to $T(F)$ and so the functor $T$ and the functor $\mathsf{tcat}^\mathrm{op} \to \mathsf{CjSLat}$, defined above, just differ by the standard duality on $\mathsf{CjSLat}$ (see \cite{sketches}*{Remark~1.1.7}). We regard the choice we make here, to prioritise $T$ when not inconvenient, as the correct one in the sense that it agrees with the choice implicitly made in algebraic and tt-geometry. This is a little counterintuitive, so let us sketch the reasoning in the case of geometry. Let $f\colon R\to S$ be a map of commutative rings. Let $\Zar(R)$ denote the Zariski frame of radical ideals. There is a corresponding frame map $f\colon \Zar(R) \to \Zar(S)$ which, analogously to the above, sends an ideal $I$ to $\sqrt{(fI)}$. This gives rise to a continuous map $F\colon \pt(\Zar(S)) \to \pt(\Zar(R))$ by precomposition. A point of $\Zar(S)$ is equivalent to a principal prime ideal in the lattice-theoretic sense, which is given by $I_\mathfrak{p} = \{J\in \Zar(S) \mid J\subseteq \mathfrak{p}\}$ for a usual prime ideal $\mathfrak{p}$ of $S$. The corresponding point of $\Zar(R)$, which is the composite $\Zar(R) \to \Zar(S) \to \mathbf{2}$, is then given precisely by the meet-prime \begin{displaymath} \bigvee \{K\in \Zar(R)\mid fK \leq \mathfrak{p}\} = \bigvee \{K\in \Zar(R) \mid K\subseteq f^{-1}\mathfrak{p}\} = f^{-1}\mathfrak{p}. \end{displaymath} Thus, as claimed, the counterintuitive map on Zariski frames gives rise to our beloved preimages on primes. \end{rem} Now let us give a very brief recapitulation of the examples motivating the constructions of this article. \begin{defn} By a \emph{tt-category} we mean a tensor triangulated category, i.e.\ a triangulated category equipped with a symmetric monoidal structure which is exact in each variable. \end{defn} There are by now many surveys on tt-categories and their corresponding tt-geometry, for instance \cite{BaICM} and \cite{Stevensontour} and the references therein, and one can see the work of Kock and Pitsch \cite{KockPitsch} for a more lattice-theoretic point of view; given this we omit the majority of the details here. The key facts are that the lattice of radical thick $\otimes$-ideals forms a coherent frame (by \cite{BKS}) and that under the functor $\pt$ of Stone duality this gives the Balmer spectrum. The following should be well known (e.g.\ the statement can be found in \cite{HPS}), but we provide a proof for completeness. \begin{lem}\label{lem:rigid} Let $\mathsf{T}$ be a closed tt-category. Then the full subcategory of rigid objects $\mathsf{T}^\mathrm{rig}$ is thick. In particular, we have $\thick(\mathbf{1})\subseteq \mathsf{T}^\mathrm{rig}$. \end{lem} \begin{proof} By definition $x\in \mathsf{T}$ is rigid if for all $y\in \mathsf{T}$ the natural map \begin{displaymath} \xymatrix{ x^\vee \otimes y \ar[r]^-{\alpha_{x,y}} & \hom(x,y) } \end{displaymath} is an isomorphism. Consider then, for fixed $y\in \mathsf{T}$, the subcategory \begin{displaymath} \mathsf{R}_y = \{x\in \mathsf{T} \mid \alpha_{x,y} \text{ is an isomorphism}\}. 
\end{displaymath} Since $\alpha_{x,y}$ is natural in $x$ and compatible with sums and suspension, the source functor is exact, and the target functor sends triangles to pre-triangles, we see that $\mathsf{R}_y$ is thick\textemdash{}this is a twist on the standard argument, exploiting the fact that, by \cite{NeeCat}*{Proposition~1.1.20}, one already has the 2-of-3 property for isomorphisms for maps of pre-triangles. It is then enough to note that \begin{displaymath} \mathsf{T}^\mathrm{rig} = \bigcap_{y\in \mathsf{T}} \mathsf{R}_y \end{displaymath} since an intersection of thick subcategories is thick. \end{proof} We close by discussing a class of examples that will be of use in the sequel. Let $\{\mathsf{K}_\lambda\mid \lambda\in \Lambda\}$ be a family of triangulated categories indexed by a set $\Lambda$. Then the coproduct and product, as additive categories, exist and inherit canonical pointwise triangulated structures. \begin{lem}\label{lem:product} Let $\{\mathsf{K}_\lambda\mid \lambda\in \Lambda\}$ be a family of triangulated categories with direct sum $\mathsf{K}$. Then the natural map \begin{displaymath} \Thick(\mathsf{K}) \stackrel{\sim}{\to} \prod_{\lambda\in \Lambda} \Thick(\mathsf{K}_\lambda), \end{displaymath} is an isomorphism of complete lattices, i.e.\ it is a poset isomorphism preserving finite meets and arbitrary joins. \end{lem} \begin{proof} The canonical inclusions $\mathsf{K}_\lambda \to \mathsf{K}$ give, via taking preimages, complete meet-semilattice maps $\Thick(\mathsf{K}) \to \Thick(\mathsf{K}_\lambda)$ and so there is a canonical map of meet-semilattices as indicated in the statement, which we call $\phi$. Let us prove that $\phi$ is bijective. If $X = \{\mathsf{M}_\lambda\mid\lambda\in \Lambda\}$ is an element of $\prod_{\lambda\in \Lambda} \Thick(\mathsf{K}_\lambda)$, then there is a fully faithful exact functor \begin{displaymath} \mathsf{M} = \bigoplus_\lambda \mathsf{M}_\lambda \to \mathsf{K}, \end{displaymath} sending an object on the left to the direct sum (which is finite) of its components, and we set $\psi(X) = \mathsf{M}$ which defines a map \[ \psi \colon \prod_{\lambda\in \Lambda} \Thick(\mathsf{K}_\lambda) \to \Thick(\mathsf{K}) \] It is clear that $\phi\psi(X) = X$. On the other hand, if $\mathsf{N}$ is a thick subcategory of $\mathsf{K}$ then, by summand closure, we certainly have $\psi\phi(\mathsf{N}) \subseteq \mathsf{N}$. This is an equality since every object of $\mathsf{K}$ is a finite sum of objects of the $\mathsf{K}_\lambda$. Finally, we need to check that $\phi$ also preserves all joins. There are several ways to deduce this; the most direct is to notice it follows, more or less immediately, from the componentwise nature of the triangulated structure. \end{proof} \begin{rem} Understanding the product $\prod_{\lambda} \mathsf{K}_\lambda$ is another story. This contains the direct sum as a thick subcategory, and hence a copy of $\prod_{\lambda} \Thick(\mathsf{K}_\lambda)$ as an (unbounded) sublattice, but the lattice $\Thick(\prod_{\lambda} \mathsf{K}_\lambda)$ is strictly larger (assuming infinitely many $\mathsf{K}_\lambda$ are non-zero) and seems significantly more complicated. \end{rem} \section{Most possibilities occur}\label{sec:shithappens} In this section we present examples illustrating that various properties may or may not hold for the lattice of thick subcategories of a triangulated category $\mathsf{K}$. We start with the best possible, from the point of view of pointless topology, and work our way down the hierarchy. 
\subsection{Frames} It is well known, by the initiated, that the lattice of thick radical $\otimes$-ideals in a tt-category forms a coherent frame (see e.g.\ \cite{BKS}). It follows that, in special cases, the entire lattice of thick subcategories forms a coherent frame. \begin{lem}\label{lem:unit} If $\mathsf{T}$ is a tt-category such that $\mathsf{T} = \thick(\mathbf{1})$ then every thick subcategory is a $\otimes$-ideal and $\mathsf{T}$ is rigid. Hence $\Thick(\mathsf{T})$ is a coherent frame. \end{lem} \begin{proof} The first part is standard, and the second is an application of Lemma~\ref{lem:rigid}. The final statement then follows by the identification of $\Thick(\mathsf{T})$ with the lattice of radical $\otimes$-ideals, which is isomorphic to the frame of Thomason subsets of $\Spc \mathsf{T}$ by \cite{BaSpec}*{Theorem~4.10}. The latter is always a coherent frame. \end{proof} \begin{ex} For instance, we could take $R$ to be a commutative ring and $\mathsf{K} = \mathsf{D}^\mathrm{perf}(R)$, which is generated by the tensor unit $R$. Then $\Thick(\mathsf{D}^\mathrm{perf}(R))$ is a coherent frame, and in fact is dual to the Zariski frame of radical ideals of $R$ (see \cite{KockPitsch} for further discussion). \end{ex} Upon seeing this one might hope that $\Thick(\mathsf{T})$ being a frame characterized those tt-categories where $\mathbf{1}$ generates. As the following example shows this is not at all the case. We can use Lemma~\ref{lem:product} to produce non-trivial examples of tt-categories $\mathsf{T}$ such that $\Thick(\mathsf{T})$ is a frame and $\Thick^\otimes(\mathsf{T})\subsetneq \Thick(\mathsf{T})$, so in particular they are not generated by $\mathbf{1}$. \begin{ex} Let $R$ be a commutative noetherian ring and let $G$ be a finite abelian group (abelian is only necessary so that the monoidal structure we obtain is symmetric). We construct the tt-analogue of the group algebra: consider the `group tt-category' $\mathsf{D}^\mathrm{perf}(R)G$ \begin{displaymath} \mathsf{D}^\mathrm{perf}(R)G = \bigoplus_{g\in G} \mathsf{D}^\mathrm{perf}(R)_g, \end{displaymath} where the $g$ is just an index to let us keep track of components. This is triangulated, with the componentwise triangulated structure, and we define the tensor product by \begin{displaymath} X_g \otimes Y_h = (X\otimes_R Y)_{gh} \end{displaymath} and additivity. One could, equivalently, enhance the situation and view this as the homotopy category of the functor category $[G,\mathsf{D}^\mathrm{perf}(R)]$ with the Day convolution monoidal structure, or as the compact objects of the derived category of $G$-graded $R$-modules, where $R$ is equipped with the trivial grading, with the derived tensor product of graded modules. By Lemma~\ref{lem:product} we have $\Thick(\mathsf{D}^\mathrm{perf}(R)G) \cong \prod_{g\in G} \Thick(\mathsf{D}^\mathrm{perf}(R))$, i.e. a product of $\vert G\vert$ copies of the frame of Thomason subsets of $\Spec R$. On the other hand $\Thick^\otimes(\mathsf{D}^\mathrm{perf}(R)G)$ is a single copy of the frame of Thomason subsets, as follows from either direct computation, \cite{DS13}*{Theorem~5.1} if viewing this as the derived category of $G$-graded modules, or \cite{aoki}*{Theorem~I} from the functor category point of view. \end{ex} \subsection{Distributivity} It is natural to ask if one can find examples where $\Thick(\mathsf{K})$ is only a frame, i.e.\ not coherent, or only distributive. 
It turns out that actually one gets a lot for free: we will show that distributivity of $\Thick(\mathsf{K})$ implies that $\Thick(\mathsf{K})$ is a frame. We start with a standard fact. \begin{lem}\label{lem:finite} Let $\{\mathsf{M}_\lambda\mid \lambda \in \Lambda\}$ be a family of thick subcategories and suppose that \begin{displaymath} k\in \bigvee_{\lambda \in \Lambda} \mathsf{M}_\lambda. \end{displaymath} Then there exist $\lambda_1,\ldots, \lambda_n$ such that $k\in \mathsf{M}_{\lambda_1} \vee \cdots \vee \mathsf{M}_{\lambda_n}$. \end{lem} \begin{proof} By definition \begin{displaymath} k\in \bigvee_{\lambda \in \Lambda} \mathsf{M}_\lambda = \thick\left( \: \bigcup_{\lambda \in \Lambda} \mathsf{M}_\lambda\right). \end{displaymath} We can construct the latter inductively, in the usual fashion, by starting with the $\mathsf{M}_\lambda$ and taking iterated cones. Each cone involves only a finite sum of objects from the $\mathsf{M}_\lambda$, and our fixed object $k$ occurs after finitely many steps, and so the statement follows. \end{proof} \begin{prop}\label{prop:dist} Let $\mathsf{K}$ be an essentially small triangulated category such that $\Thick(\mathsf{K})$ is distributive. Then $\Thick(\mathsf{K})$ is a frame. \end{prop} \begin{proof} Suppose we are given a thick subcategory $\mathsf{L}$ and a family $\{\mathsf{M}_\lambda\mid \lambda \in \Lambda\}$ of thick subcategories. It is always true that \begin{displaymath} \mathsf{L} \cap \bigvee_{\lambda \in \Lambda} \mathsf{M}_\lambda \supseteq \bigvee_{\lambda \in \Lambda} (\mathsf{L} \cap \mathsf{M}_\lambda), \end{displaymath} so it is sufficient to check the reverse inclusion. Let $k$ be an object of the category on the left. In particular, $k$ lies in $\bigvee_{\lambda \in \Lambda} \mathsf{M}_\lambda$ and so by Lemma~\ref{lem:finite} there exist $\lambda_1,\ldots, \lambda_n$ such that $k$ lies in $\bigvee_{i=1}^n \mathsf{M}_{\lambda_i}$. Thus \begin{displaymath} k \in \mathsf{L} \cap \left(\bigvee_{i=1}^n \mathsf{M}_{\lambda_i}\right) = \bigvee_{i=1}^n(\mathsf{L} \cap \mathsf{M}_{\lambda_i}) \end{displaymath} by distributivity of $\Thick(\mathsf{K})$. Of course we have \begin{displaymath} \bigvee_{i=1}^n(\mathsf{L} \cap \mathsf{M}_{\lambda_i}) \subseteq \bigvee_{\lambda \in \Lambda} (\mathsf{L} \cap \mathsf{M}_\lambda) \end{displaymath} and so $k$ lies in $\bigvee_{\lambda \in \Lambda} (\mathsf{L} \cap \mathsf{M}_\lambda)$, proving the desired equality. \end{proof} \begin{rem} This is really a special case of a more general statement about lattices (standard lore to a different crowd). If $L$ is an algebraic (aka compactly generated) lattice and is distributive, then it is automatically a frame (cf.\ Lemma~\ref{lem:compact} for the fact that $\Thick(\mathsf{K})$ is algebraic). In fact, one can even deduce that such a frame is spatial; we give a proof imminently. \end{rem} \subsection{Spatiality} We have seen that distributivity implies infinite distributivity for lattices of thick subcategories. We next discuss the property of being spatial; it turns out that this too is for free. \begin{thm}\label{thm:spatial} Suppose that $\Thick(\mathsf{K})$ is distributive. If $s\in \mathsf{K}$ and $\mathsf{M}$ is a thick subcategory of $\mathsf{K}$ with $s\notin \mathsf{M}$ then there exists a meet-prime thick subcategory $\mathsf{P}$ such that $\mathsf{M}\subseteq \mathsf{P}$ and $s\notin \mathsf{P}$. 
\end{thm} \begin{proof} Let us, with a view to using Zorn's lemma, consider the set \[ \mathcal{F} = \{\mathsf{L} \in \Thick(\mathsf{K}) \mid s\notin \mathsf{L} \text{ and } \mathsf{M}\subseteq \mathsf{L}\}. \] Clearly $\mathsf{M} \in \mathcal{F}$ and so $\mathcal{F}$ is not empty. We will show that every chain in $\mathcal{F}$ has an upper bound. Suppose then that $\{\mathsf{L}_i \mid i\in I\}$ is a chain in $\mathcal{F}$. Since being thick is a family of conditions on finite sets of objects, the category $\mathsf{L} = \cup_i \mathsf{L}_i$ is thick. As $s\notin \mathsf{L}_i$ for any $i\in I$ we have $s\notin \mathsf{L}$ and it is clear that $\mathsf{M}\subseteq \mathsf{L}$. Thus $\mathsf{L}\in \mathcal{F}$ and is an upper bound for the given chain. By Zorn's lemma $\mathcal{F}$ contains a maximal element $\mathsf{P}$, which we shall show is meet-prime. Suppose then that $\mathsf{A}$ and $\mathsf{B}$ are thick subcategories with $\mathsf{A}\cap \mathsf{B} \subseteq \mathsf{P}$. We have $\mathsf{P} \subseteq \mathsf{P} \vee \mathsf{A}$ and so if $\mathsf{A} \nsubseteq \mathsf{P}$, by maximality of $\mathsf{P}$ in $\mathcal{F}$, we must have $s\in \mathsf{P} \vee \mathsf{A}$ to avoid a contradiction, and similarly for $\mathsf{B}$. With this in mind, consider the equation \[ \mathsf{P} = \mathsf{P} \vee (\mathsf{A} \cap \mathsf{B}) = (\mathsf{P} \vee \mathsf{A}) \cap (\mathsf{P} \vee \mathsf{B}), \] where the first equality follows from $\mathsf{A}\cap \mathsf{B} \subseteq \mathsf{P}$ and the second from distributivity. If neither $\mathsf{A}$ nor $\mathsf{B}$ are contained in $\mathsf{P}$ then, by what we saw above, we have $s\in \mathsf{P} \vee \mathsf{A}$ and $s \in \mathsf{P} \vee \mathsf{B}$ and so $s\in \mathsf{P}$. But this is absurd, and so at least one of $\mathsf{A}$ or $\mathsf{B}$ is contained in $\mathsf{P}$. This shows that $\mathsf{P}$ is meet-prime as claimed. \end{proof} \begin{cor}\label{cor:spatial} If $\Thick(\mathsf{K})$ is distributive then it is a spatial frame. \end{cor} \begin{proof} Suppose that $\mathsf{M}, \mathsf{N} \in \Thick(\mathsf{K})$ are such that $\mathsf{N} \nsubseteq \mathsf{M}$. Then there is an $s\in \mathsf{N}$ with $s\notin \mathsf{M}$. By the Theorem we can find a meet-prime $\mathsf{P}$ with $\mathsf{M}\subseteq \mathsf{P}$ and $s\notin \mathsf{P}$. It follows that $\mathsf{N}\nsubseteq \mathsf{P}$ and the point $p$ of $\Thick(\mathsf{K})$ corresponding to $\mathsf{P}$ satisfies $p(\mathsf{M}) = 0$ and $p(\mathsf{N}) = 1$ as required. \end{proof} The following statement is immediate from this last corollary, but striking enough to bear repeating. \begin{cor}\label{cor:sober} If $\Thick(\mathsf{K})$ is distributive then there is a sober topological space, namely $X = \Spec \Thick(\mathsf{K})$, such that $\Thick(\mathsf{K})$ is isomorphic to the lattice of open subsets of $X$. \end{cor} \subsection{Coherence} We have shown that as soon as $\Thick(\mathsf{K})$ is distributive it is necessarily a spatial frame. It is then natural to ask about coherence. This turns out to be a more delicate issue and can fail, at least in principle, in a couple of ways. The first matter is to identify the compact thick subcategories, which is independent of any distributivity hypothesis. Such subcategories have appeared in various spots in the literature, usually under the moniker `finitely generated'. 
\begin{lem}\label{lem:compact} A thick subcategory $\mathsf{L}\in \Thick(\mathsf{K})$ is compact if and only if there exists a $k\in \mathsf{K}$ such that $\mathsf{L} = \thick(k)$. \end{lem} \begin{proof} Suppose first that $\mathsf{L}$ is compact. We have \begin{displaymath} \mathsf{L} = \bigvee_{l\in \mathsf{L}} \thick(l). \end{displaymath} By compactness there exist $l_1,\ldots, l_n$ in $\mathsf{L}$ such that \begin{displaymath} \mathsf{L} = \bigvee_{i=1}^n \thick(l_i) \end{displaymath} and so we can take $k= l_1\oplus \cdots \oplus l_n$. On the other hand, suppose that $\mathsf{L} = \thick(k)$ and we have a family of thick subcategories $\{\mathsf{M}_\lambda\mid \lambda \in \Lambda\}$ with \begin{displaymath} \mathsf{L} \subseteq \bigvee_{\lambda \in \Lambda} \mathsf{M}_\lambda. \end{displaymath} By Lemma~\ref{lem:finite} we can find $\lambda_1,\ldots, \lambda_n$ such that $k$ is in the thick subcategory generated by the $\mathsf{M}_{\lambda_i}$ for $1\leq i \leq n$. This subcategory is thick and contains $k$, and so contains $\mathsf{L} = \thick(k)$, i.e.\ we have \begin{displaymath} \mathsf{L} \subseteq \bigvee_{i=1}^n \mathsf{M}_{\lambda_i}. \end{displaymath} \end{proof} \begin{rem} We will sometimes adopt the common terminology mentioned above and call a thick subcategory $\mathsf{L}$ of the form $\mathsf{L} = \thick(k)$ finitely generated (due to the presence of finite direct sums, being finitely generated is the same as being principal). \end{rem} \begin{rem} We note that, since every thick subcategory is the union of the thick subcategories generated by each of its objects, $\Thick(\mathsf{K})$ is always an algebraic lattice. \end{rem} It follows from the lemma that any finite join of compact objects is again compact. However, coherence requires that the compact objects form a bounded sublattice; the following example illustrates the most naive obstruction. \begin{ex} Fix a field $k$ and consider $\mathsf{K} = \mathsf{D}_\mathrm{tors}^\mathrm{b}(k[x])$, the derived category of bounded complexes of $k[x]$-modules with finitely generated torsion cohomology. This is a thick subcategory of $\mathsf{D}^\mathrm{b}(k[x])$ and so $L = \Thick(\mathsf{K})$ is distributive, and hence a frame, by virtue of being a sublattice of the coherent frame $\Thick(\mathsf{D}^\mathrm{b}(k[x]))$. However, $L$ is not coherent. Indeed $\mathsf{D}_\mathrm{tors}^\mathrm{b}(k[x])$ is not finitely generated and hence not compact: there is a direct sum decomposition \[ \mathsf{D}_\mathrm{tors}^\mathrm{b}(k[x]) = \bigoplus_{\alpha} \thick(k(\alpha)) \] where $\alpha$ runs over the closed points of $\mathbb{A}^1_k$. The corresponding space, $\Spec L$, is in bijection with the closed points of $\mathbb{A}^1_k$ and has the discrete topology. In particular, it is not quasi-compact. \end{ex} There is also another potential subtlety: even if $\Thick(\mathsf{K})$ is distributive and $\mathsf{K}$ itself is finitely generated it is not clear that an intersection of finitely generated thick subcategories remains finitely generated. We are not aware of an example where this goes awry, but suspect such an example exists. \subsection{Modularity} Let $L$ be a complete lattice. We recall that $L$ is \emph{modular} if for $l,m,s\in L$ we have \begin{displaymath} l\leq m \text{ implies } l\vee (s\wedge m) = (l\vee s)\wedge m. \end{displaymath} This is a weakening of distributivity, and is satisfied in a number of contexts; for instance, lattices of submodules are always modular.
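\begin{rem} For orientation we recall a classical fact from lattice theory, due to Dedekind: a lattice is modular if and only if it contains no sublattice isomorphic to the pentagon $N_5$, the minimal non-modular lattice. The pentagon consists of elements $0 < a < c < 1$ together with an element $b$ satisfying \begin{displaymath} a\vee b = c\vee b = 1 \quad \text{and} \quad a\wedge b = c\wedge b = 0, \end{displaymath} and modularity fails for it since $a\leq c$ but \begin{displaymath} a\vee (b\wedge c) = a \neq c = (a\vee b)\wedge c. \end{displaymath} It is precisely such a pentagon that we exhibit in the next example. \end{rem}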
While the analogy between thick subcategories and lattices of ideals or submodules has been quite fruitful, it does not extend to include this observation. \begin{ex}\label{ex:notmodular} Consider $\mathsf{K} = \mathsf{D}^\mathrm{b}(\mathbb{P}^1)$ over some fixed, but nameless, ground field. The lattice of thick subcategories of $\mathsf{K}$ is given, as a set, by \begin{displaymath} \{\text{specialization closed subsets of } \mathbb{P}^1\} \coprod \mathbb{Z}, \end{displaymath} where the first component classifies tensor ideals, the second is given by the twisting sheaves, and the two pieces interact trivially; see \cite{KS19}*{Section~4.1} for further discussion. An embedding of the minimal non-modular lattice is given as follows: \begin{displaymath} \begin{tikzpicture}[scale=0.7] \node (v0) at (0,0) {}; \node (va) at (-1,1) {}; \node (vb) at (1,2) {}; \node (vc) at (-1,3) {}; \node (v1) at (0,4) {}; \draw[fill] (v0) circle (2pt) node [below] {0}; \draw[fill] (va) circle (2pt) node [left] {$\{x\}$}; \draw[fill] (vb) circle (2pt) node [right] {$\mathcal{O}$}; \draw[fill] (vc) circle (2pt) node [left] {$\{x,y\}$}; \draw[fill] (v1) circle (2pt) node [above] {$\mathsf{D}^\mathrm{b}(\mathbb{P}^1)$}; \path[-] (v0) edge node [above] {} (va); \path[-] (v0) edge node [above] {} (vb); \path[-] (va) edge node [above] {} (vc); \path[-] (vb) edge node [above] {} (v1); \path[-] (vc) edge node [above] {} (v1); \end{tikzpicture} \end{displaymath} where $x,y\in \mathbb{P}^1$ are distinct closed points, the subsets indicate the corresponding tensor ideals of objects supported on those points, and $\mathcal{O}$ is shorthand for the thick subcategory it generates. This exhibits a pentagon inside $\Thick(\mathsf{K})$, showing that this lattice is non-modular. \end{ex} In fact this lattice is not even \emph{semi-modular}. A lattice is semi-modular if, for all $l,m\in L$, whenever $l$ covers $l\wedge m$ then $l\vee m$ covers $m$. Indeed, take $l$ to be $\thick(\mathcal{O})$ and $m$ to be the tensor ideal generated by two distinct closed points: then $l$ covers $l\wedge m = 0$, but $l\vee m$ is the whole category, which does not cover $m$, giving a violation of this requirement. However, there are examples of lattices of thick subcategories which are not distributive but do satisfy the modular law. \begin{ex}\label{ex:onlymodular1} The lattice of thick subcategories of $\mathsf{D}^\mathrm{b}(kA_2)$, for a field $k$, is given by the diamond lattice (as depicted in Example~\ref{ex:diamond}). This is not distributive, but it is modular (for trivial reasons). \end{ex} \begin{rem} We note that $\Thick(\mathsf{D}^\mathrm{b}(kA_n))$ is not even semi-modular for $n\geq 3$. \end{rem} Due to our dearth of understanding of $\Thick(\mathsf{K})$ in non-trivial non-distributive examples it is difficult to provide further examples. However, modularity corresponds to a strong, but entirely natural, closure condition. \begin{lem}\label{lem:modular} Let $\mathsf{K}$ be a triangulated category. Then $\Thick(\mathsf{K})$ is modular if and only if the following condition holds: for $\mathsf{L},\mathsf{M},\mathsf{S} \in \Thick(\mathsf{K})$ with $\mathsf{L} \subseteq \mathsf{M}$, if $X\in \thick(\mathsf{L}, \mathsf{S})$ and $X\in \mathsf{M}$ then $X\in \thick(\mathsf{L}, \mathsf{S}\cap \mathsf{M})$, i.e.\ if $X\in \mathsf{M}$ is built from $\mathsf{L}$ and $\mathsf{S}$ then $X$ is built from objects in $\mathsf{L}$ and $\mathsf{S}\cap \mathsf{M}$. \end{lem} \begin{proof} Let $\mathsf{L},\mathsf{M}$, and $\mathsf{S}$ be as in the statement.
The modular inequality \begin{displaymath} \mathsf{L} \vee (\mathsf{S} \wedge \mathsf{M}) \leq (\mathsf{L} \vee \mathsf{S})\wedge \mathsf{M} \end{displaymath} always holds. Indeed, this reads \begin{displaymath} \thick(\mathsf{L},\mathsf{S}\cap \mathsf{M}) \subseteq \thick(\mathsf{L},\mathsf{S})\cap \mathsf{M}, \end{displaymath} which holds since the left-hand side is contained in $\mathsf{M}$, as $\mathsf{L}\subseteq \mathsf{M}$, and is evidently contained in $\thick(\mathsf{L},\mathsf{S})$, so is contained in their intersection. Thus $\Thick(\mathsf{K})$ is modular precisely if the right-hand side is contained in the left. This means that for $X\in \thick(\mathsf{L},\mathsf{S})\cap \mathsf{M}$ we must have $X\in \thick(\mathsf{L},\mathsf{S}\cap \mathsf{M})$, which is precisely the stated condition. \end{proof} \section{The fully functorial theory}\label{sec:ff} In this section we use the adjunction between $\mathsf{SFrm}$ and $\mathsf{CjSLat}$ to assign a sober space to any essentially small triangulated category. This construction is universal and gives some measure of the lattice of thick subcategories. As we shall see it is a rather imperfect measure, but it has the advantage of being functorial with respect to all exact functors. \subsection{The construction} We have constructed in Section~\ref{sec:t} the functor $T\colon \mathsf{tcat} \to \mathsf{CjSLat}$, which sends a triangulated category $\mathsf{K}$ to its lattice of thick subcategories $T(\mathsf{K}) = \Thick(\mathsf{K})$. We can combine this with our lattice theoretic noodling to get somewhere. \begin{defn}\label{def:fspcnt} We define the \emph{fully functorial non-tensor spectrum} \begin{displaymath} \fSpcnt\colon \mathsf{tcat}^\mathrm{op} \to \mathsf{Sob} \end{displaymath} to be the composite \begin{displaymath} \xymatrix{ \mathsf{tcat}^\mathrm{op} \ar[r]^-{T^\mathrm{op}} & \mathsf{CjSLat}^\mathrm{op} \ar[r]^-{d^\mathrm{op}} & \mathsf{SFrm}^\mathrm{op} \ar[r]^-{\sim} & \mathsf{Sob} } \end{displaymath} where $d$ is the adjoint of Proposition~\ref{prop:adjoint1}. \end{defn} \begin{rem} Let us explain the notation $\fSpcnt$: the f denotes functorial (or free, since this construction is as free as is reasonable in contrast to the construction of Section~\ref{sec:pf}) and the nt is for `no tensor' (to be read as $\Spc$n't, as in `tensorn't triangular geometry', with the consequent abbreviation tnt-geometry). \end{rem} \begin{ex}\label{ex:fspcnta2} Consider the Dynkin quiver $A_2$ as in Example~\ref{ex:letsnotmeet} and denote by $P_1$ and $P_2$ the indecomposable projectives and by $S_2$ the non-projective simple. Then the lattice of thick subcategories of $\mathsf{D}^\mathrm{b}(kA_2)$ is \begin{displaymath} \begin{tikzpicture} \node (v0) at (0,0) {}; \node (va) at (-2,2) {}; \node (vb) at (0,2) {}; \node (vc) at (2,2) {}; \node (v1) at (0,4) {}; \draw[fill] (v0) circle (2pt) node [below] {0}; \draw[fill] (va) circle (2pt) node [left] {$\thick(P_1)$}; \draw[fill] (vb) circle (2pt) node [right] {$\thick(P_2)$}; \draw[fill] (vc) circle (2pt) node [right] {$\thick(S_2)$}; \draw[fill] (v1) circle (2pt) node [above] {$\mathsf{D}^\mathrm{b}(kA_2)$}; \path[-] (v0) edge node [above] {} (va); \path[-] (v0) edge node [above] {} (vb); \path[-] (v0) edge node [above] {} (vc); \path[-] (va) edge node [above] {} (v1); \path[-] (vb) edge node [above] {} (v1); \path[-] (vc) edge node [above] {} (v1); \end{tikzpicture} \end{displaymath} Note that $L = T(\mathsf{D}^\mathrm{b}(kA_2))$ is not distributive, and hence not a frame.
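To see the failure of distributivity explicitly, one reads off from the diagram that \begin{displaymath} \thick(P_1) \cap \bigl(\thick(P_2) \vee \thick(S_2)\bigr) = \thick(P_1) \cap \mathsf{D}^\mathrm{b}(kA_2) = \thick(P_1), \end{displaymath} whereas \begin{displaymath} \bigl(\thick(P_1) \cap \thick(P_2)\bigr) \vee \bigl(\thick(P_1) \cap \thick(S_2)\bigr) = 0 \vee 0 = 0. \end{displaymath}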
We apply Construction~\ref{cons:left} to compute $\fSpcnt \mathsf{D}^\mathrm{b}(kA_2)$. For brevity, let us denote by $0$ and $1$ the bottom and top elements and set \begin{displaymath} a= \thick(P_1), \; b = \thick(P_2), \text{ and } c = \thick(S_2). \end{displaymath} We need to compute $\spt(L) = \mathsf{CjSLat}(L, \mathbf{2})$. Suppose that $p\in \spt(L)$ satisfies $p(1)=1$. Then, since $a\vee b = 1$, at least one of $a$ or $b$ must be sent to $1$. The roles of $a,b,c$ are interchangeable and so we see that if $p(1)=1$ then $p$ sends at most one of $a,b,c$ to $0$ and this determines $p$. This gives rise to $4$ points $\{p_a, p_b, p_c, p_\varnothing\}$ determined by sending $a$, $b$, and $c$ to $0$ in the first three instances, and sending all of $a,b,c$ to $1$ in the final instance. There is a fifth point $p_0$ sending all elements to $0$. A subbase of opens is given by \begin{displaymath} U_a = \{p_b, p_c, p_\varnothing\}, \; U_b = \{p_a, p_c, p_\varnothing\}, \; U_c = \{p_a,p_b,p_\varnothing\}, \text{ and } U_1 = \{p_a,p_b,p_c, p_\varnothing\}. \end{displaymath} Thus we obtain the following space (where an edge indicates a specialization relation, i.e.\ that the top vertex of an edge is in the closure of the bottom one): \begin{displaymath} \begin{tikzpicture}[scale=0.7] \node (v0) at (0,0) {}; \node (va) at (-2,2) {}; \node (vb) at (0,2) {}; \node (vc) at (2,2) {}; \node (v1) at (0,4) {}; \draw[fill] (v0) circle (2pt) node [below] {$p_\varnothing$}; \draw[fill] (va) circle (2pt) node [left] {$p_a$}; \draw[fill] (vb) circle (2pt) node [right] {$p_b$}; \draw[fill] (vc) circle (2pt) node [right] {$p_c$}; \draw[fill] (v1) circle (2pt) node [above] {$p_0$}; \path[-] (v0) edge node [above] {} (va); \path[-] (v0) edge node [above] {} (vb); \path[-] (v0) edge node [above] {} (vc); \path[-] (va) edge node [above] {} (v1); \path[-] (vb) edge node [above] {} (v1); \path[-] (vc) edge node [above] {} (v1); \end{tikzpicture} \end{displaymath} with a unique generic point $p_\varnothing$ and unique closed point $p_0$. \end{ex} This looks rather a lot like the lattice we started with. We show in Proposition~\ref{prop:L} that this is not a coincidence. \subsection{Failure to reconstruct the Balmer spectrum}\label{ssec:fail} In this section we discuss the price for working with arbitrary exact functors: if $\mathsf{K}$ already has a spatial frame of thick subcategories, and hence has a space controlling the thick subcategories, then $\fSpcnt$ fails to recover this space. This is for the simple reason that the forgetful functor $i\colon \mathsf{SFrm} \to \mathsf{CjSLat}$ is only faithful and not full. Thus, for a spatial frame $F$, the counit \begin{displaymath} \varepsilon_F\colon d i F \to F \end{displaymath} is an epimorphism, but cannot be an isomorphism in general. \begin{ex} We consider a pair of points, i.e.\ $\mathsf{D}^\mathrm{b}(k\times k) \cong \mathsf{D}^\mathrm{b}(k) \oplus \mathsf{D}^\mathrm{b}(k)$.
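Let us first record the lattice of thick subcategories: thick subcategories are closed under direct summands and every object decomposes canonically into its components in the two factors, so a thick subcategory of $\mathsf{D}^\mathrm{b}(k)\oplus \mathsf{D}^\mathrm{b}(k)$ is determined by its intersections with the two summands. Since $T(\mathsf{D}^\mathrm{b}(k))\cong \mathbf{2}$ this gives \begin{displaymath} T(\mathsf{D}^\mathrm{b}(k\times k)) \cong \mathbf{2}\times \mathbf{2}. \end{displaymath}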
The Balmer spectrum is a disjoint union of two points, and the corresponding frame (of opens or, as we label it, of radical $\otimes$-ideals) is \begin{displaymath} \begin{tikzpicture}[scale=0.7] \node (v0) at (0,0) {}; \node (va) at (-2,2) {}; \node (vc) at (2,2) {}; \node (v1) at (0,4) {}; \draw[fill] (v0) circle (2pt) node [below] {0}; \draw[fill] (va) circle (2pt) node [left] {$\mathsf{M}$}; \draw[fill] (vc) circle (2pt) node [right] {$\mathsf{N}$}; \draw[fill] (v1) circle (2pt) node [above] {$\mathsf{D}^\mathrm{b}(k\times k)$}; \path[-] (v0) edge node [above] {} (va); \path[-] (v0) edge node [above] {} (vc); \path[-] (va) edge node [above] {} (v1); \path[-] (vc) edge node [above] {} (v1); \end{tikzpicture} \end{displaymath} where $\mathsf{M}$ and $\mathsf{N}$ correspond to the two factors. The space $\fSpcnt\mathsf{D}^\mathrm{b}(k\times k)$, which is computed in a manner analogous to that used in Example~\ref{ex:fspcnta2}, is \begin{displaymath} \begin{tikzpicture}[scale=0.7] \node (v0) at (0,0) {}; \node (va) at (-2,2) {}; \node (vc) at (2,2) {}; \node (v1) at (0,4) {}; \draw[fill] (v0) circle (2pt) node [below] {$p_\varnothing$}; \draw[fill] (va) circle (2pt) node [left] {$p_\mathsf{M}$}; \draw[fill] (vc) circle (2pt) node [right] {$p_\mathsf{N}$}; \draw[fill] (v1) circle (2pt) node [above] {$p_0$}; \path[-] (v0) edge node [above] {} (va); \path[-] (v0) edge node [above] {} (vc); \path[-] (va) edge node [above] {} (v1); \path[-] (vc) edge node [above] {} (v1); \end{tikzpicture} \end{displaymath} which has acquired two extra points and become both irreducible and local. The lattice of open subsets of $X = \fSpcnt\mathsf{D}^\mathrm{b}(k\times k)$ is \begin{displaymath} \begin{tikzpicture}[scale=0.7] \node (00) at (0,-2) {}; \node (v0) at (0,0) {}; \node (va) at (-2,2) {}; \node (vc) at (2,2) {}; \node (v1) at (0,4) {}; \node (11) at (0,6) {}; \draw[fill] (v0) circle (2pt) node [right] {$U_\mathsf{M}\cap U_\mathsf{N}$}; \draw[fill] (va) circle (2pt) node [left] {$U_\mathsf{M}$}; \draw[fill] (vc) circle (2pt) node [right] {$U_\mathsf{N}$}; \draw[fill] (v1) circle (2pt) node [right] {$U_1$}; \draw[fill] (00) circle (2pt) node [right] {$U_0 = \varnothing$}; \draw[fill] (11) circle (2pt) node [right] {$X$}; \path[-] (v0) edge node [above] {} (va); \path[-] (v0) edge node [above] {} (vc); \path[-] (va) edge node [above] {} (v1); \path[-] (vc) edge node [above] {} (v1); \path[-] (00) edge node [above] {} (v0); \path[-] (11) edge node [above] {} (v1); \end{tikzpicture} \end{displaymath} which contains a copy of the original lattice $\Thick(\mathsf{D}^\mathrm{b}(k\times k))$ via the embedding $l\mapsto U_l$. \end{ex} \subsection{Sobriety and a simplification} We now further discuss the construction and analyse the space we produce; in particular, we see that the suggestive form of the examples we have treated so far is a general phenomenon. What we need works in complete generality, but is more enlightening and compelling with our construction in mind. Let $L$ be a complete join semi-lattice and let $p\colon L\to \mathbf{2}$ be a semipoint. Then, since $p$ is order preserving, $p^{-1}(0)$ is downward closed and, since $p$ preserves joins, $p^{-1}(0)$ is closed under all joins. Define $x\in L$ by $x = \bigvee p^{-1}(0)$. Then $x\in p^{-1}(0)$ by join closure, and for all $l\in p^{-1}(0)$ we have $l\leq x$. It follows that \begin{displaymath} p^{-1}(0) = [0,x] = \{l\in L\mid l\leq x\}.
\end{displaymath} On the other hand, if $x\in L$ one easily checks that there is an associated semipoint $p_x$ given by the characteristic function of $L\setminus [0,x]$. This proves the following lemma. \begin{lem} There is a canonical identification of sets $L = \spt(L)$. \end{lem} We will use this identification without further mention and turn to discussing the topology. A subbase of opens for $\spt(L)$ is given by the subsets \begin{displaymath} U_l = \{p\in \spt(L) \mid p(l) = 1\}. \end{displaymath} Under the identification with $L$ we see that \begin{align*} U_l = \{x\in L\mid p_x(l) = 1\} &= \{x\in L\mid l\notin [0,x]\} \\ &= \{x\in L\mid l\nleq x\} \\ &= L\setminus [l,1]. \end{align*} \begin{lem} Under the identification $L = \spt(L)$ the closure of $l\in L$ is $[l,1]$. \end{lem} \begin{proof} Suppose that $m\in \overline{\{l\}}$. This occurs if and only if for every closed subset $V$ of $L$ we have $l\in V$ implies $m\in V$, i.e.\ if for every open subset $U$ we have $m\in U$ implies $l\in U$. Thus, if $m\in U_l = L\setminus [l,1]$ we would have $l\in U_l$, which is nonsense. So $m\notin U_l$, i.e.\ $m\in [l,1]$. On the other hand, suppose that $m\in [l,1]$. If there were an $r\in L$ with $m\in U_r$, i.e.\ $r\nleq m$, but $l\notin U_r$, i.e.\ $r\leq l$, then since we have assumed $l\leq m$ we would obtain $r\leq m$, which is a contradiction. Thus if $m\in U_r$ we must have $l\in U_r$. Since every open is a union of finite intersections of the subsets $U_r$, it follows that if $m$ lies in an open subset then $l$ must also lie in that open. Hence $m\in \overline{\{l\}}$. \end{proof} Combining what we have learned from our two lemmas we deduce the following result. \begin{prop}\label{prop:L} The space $\spt(L)$ is homeomorphic to $L$ with the topology whose closed subsets are finite unions of principal upward closed subsets of $L$, i.e.\ the subsets $[l,1]$ for $l\in L$. In particular, $\spt(L)$ is sober. \end{prop} From sobriety of $\spt(L)$ it follows that \begin{displaymath} \fSpcnt = \pt \circ \mathcal{O} \circ \spt \circ T^\mathrm{op} \cong \spt \circ T^\mathrm{op}, \end{displaymath} i.e.\ in the context of our construction, for a triangulated category $\mathsf{K}$, $\fSpcnt(\mathsf{K})$ is the set $T(\mathsf{K}) = \Thick(\mathsf{K})$ with the topology described above. To reiterate: we have a canonical bijection between the points of $\fSpcnt(\mathsf{K})$ and the elements of $\Thick(\mathsf{K})$, and so $\fSpcnt(\mathsf{K})$ is, in a weak sense, a coarse moduli space for the lattice of thick subcategories, where the inclusion ordering of subcategories is recovered by the specialization ordering. This is good news in the sense that we haven't lost information, but it means that computation in new cases is likely to be challenging\textemdash{}we have not made our lives any easier. However, one can optimistically expect that there are cases where, although one cannot directly describe the elements of the lattice, the space can be constructed by some other means. \subsection{The universal property}\label{ssec:universal} As $\fSpcnt$ is freely constructed it satisfies a universal property: the unit of the adjunction $T(\mathsf{K}) \to i\mathcal{O}\fSpcnt(\mathsf{K})$ is the initial map from $T(\mathsf{K})$ to a spatial frame, i.e.\ any join preserving map from $T(\mathsf{K})$ to a spatial frame factors via the lattice of open subsets of $\fSpcnt(\mathsf{K})$. Let us describe how this can be converted to a support theoretic universal property in line with Balmer's original work.
It will be more convenient for us to work with open subsets, rather than Thomason subsets, and so our support theories will associate open, rather than closed, subsets to objects. This is, from the point of view of tt-geometry, purely cosmetic and corresponds to working with the Hochster dual of the Balmer spectrum. Let $X$ be a sober space with corresponding spatial frame $\mathcal{O}(X)$. \begin{defn} A $\vee$-support datum on $\mathsf{K}$ with values in $X$ is a map $\sigma\colon \Ob\mathsf{K} \to \mathcal{O}(X)$ from objects of $\mathsf{K}$ to open subsets of $X$ such that for all $x,y\in \mathsf{K}$: \begin{itemize} \item[(i)] $\sigma(0) = \varnothing$; \item[(ii)] $\sigma(\Sigma^i x) = \sigma(x)$ for all $i\in \mathbb{Z}$; \item[(iii)] $\sigma(x\oplus y) = \sigma(x)\cup \sigma(y)$; \item[(iv)] for every triangle $x\to y\to z$ we have $\sigma(y)\subseteq \sigma(x)\cup \sigma(z)$. \end{itemize} \end{defn} A support datum's purpose in life, whether it be of the kind described above or as in the usual tt-setting, is to describe a lattice map from some lattice of thick subcategories to $\mathcal{O}(X)$. \begin{prop}\label{prop:support} Giving a $\vee$-support datum on $\mathsf{K}$ with values in $X$ is equivalent to specifying a morphism $T(\mathsf{K}) \to \mathcal{O}(X)$ in $\mathsf{CjSLat}$. \end{prop} \begin{proof} Suppose we are given a $\vee$-support datum $\sigma$ on $\mathsf{K}$ with values in $X$. We define a morphism $\phi\colon T(\mathsf{K}) \to \mathcal{O}(X)$ by setting \[ \phi(\mathsf{M}) = \bigcup_{x\in \mathsf{M}} \sigma(x). \] Each $\phi(\mathsf{M})$ is clearly an open subset of $X$, and $\phi$ is order preserving by construction. Condition (i), that $\sigma(0)=\varnothing$, tells us that $\phi$ preserves the bottom element, i.e.\ the empty join. Suppose then that $\mathsf{M}_i$ for $i\in I$ is a family of thick subcategories with join $\mathsf{M}$. We need to show equality of \[ U = \phi( \mathsf{M}) \text{ and } V = \cup_i \phi(\mathsf{M}_i). \] Since each $\mathsf{M}_i$ is contained in $\mathsf{M}$ it is immediate from the construction that $V\subseteq U$. On the other hand, given $x\in \mathsf{M}$ we can find, by Lemma~\ref{lem:finite}, $x_1,\ldots,x_n \in \cup_i \mathsf{M}_i$ such that $x\in \thick(x_1,\ldots,x_n)$. It follows from axioms (ii)--(iv) that \[ \sigma(x) \subseteq \sigma(x_1)\cup \cdots \cup \sigma(x_n) \subseteq V \] showing that $U\subseteq V$. Given a morphism $\phi\colon T(\mathsf{K}) \to \mathcal{O}(X)$ in $\mathsf{CjSLat}$ we define \[ \sigma(x) = \phi(\thick(x)). \] This satisfies (ii) by construction, and (i), (iii), and (iv) follow from the fact that $\phi$ preserves joins (and is, in particular, order preserving). \end{proof} There is a natural $\vee$-support datum on $\mathsf{K}$ taking values in $\fSpcnt(\mathsf{K})$ which is given by sending $x$ to the open $U_{\thick(x)}$ corresponding to $\thick(x)$, and this is also universal in the appropriate sense. Given a $\vee$-support datum $\sigma$ on $\mathsf{K}$ taking values in $X$ there is a corresponding morphism $\phi\colon T(\mathsf{K}) \to i\mathcal{O}(X)$ in $\mathsf{CjSLat}$. The map $\phi$ corresponds, via adjunction, to a frame map $d T(\mathsf{K}) \to \mathcal{O}(X)$ and we can take points to obtain $X \to \fSpcnt(\mathsf{K})$ such that $\sigma$ is obtained by pulling back the universal support datum.
On the other hand, given $X \to \fSpcnt(\mathsf{K})$ we can take this pullback, i.e.\ consider the composite \[ T(\mathsf{K}) \to i\mathcal{O}(\fSpcnt(\mathsf{K})) \to i\mathcal{O}(X), \] which is the join-preserving map associated to the corresponding $\vee$-support datum. \section{The partially functorial theory}\label{sec:pf} As seen in Section~\ref{ssec:fail}, if one works with all functors, and hence with complete join semilattices, one cannot reconstruct the Balmer spectrum. In this section we develop a partially functorial theory based on $\mathsf{CLat}$ and $\Omega$ which addresses this issue by giving the correct result for categories where the lattice of thick subcategories is distributive. We have already seen in Example~\ref{ex:letsnotmeet} that $T\colon \mathsf{tcat} \to \mathsf{CjSLat}$ does not factor via $\mathsf{CLat}$, i.e.\ we may not get meet preserving maps out of $T$. The situation isn't improved in the case that the lattices involved are distributive. \begin{ex} Let $k$ be a field and consider the diagonal inclusion $f\colon k\to k\times k = R$ and the corresponding bounded derived categories. We know that all lattices of thick subcategories involved are coherent frames; indeed, the lattices are $\mathbf{2}$ and $\mathbf{2} \times \mathbf{2}$ (i.e.\ the powerset of $\{0,1\}$) respectively. However, the functor $f_*\colon \mathsf{D}^\mathrm{perf}(R) \to \mathsf{D}^\mathrm{perf}(k)$ does not induce a map of frames, i.e.\ does not give a map in $\mathsf{CLat}$. Indeed, writing $R = R_1\times R_2$ we have \begin{displaymath} \thick(f_*(\thick(R_1)\cap \thick(R_2))) = \thick(f_*0) = 0 \end{displaymath} but, on the other hand, \begin{displaymath} \thick(f_*R_1) \cap \thick(f_*R_2) = \mathsf{D}^\mathrm{perf}(k) \cap \mathsf{D}^\mathrm{perf}(k) = \mathsf{D}^\mathrm{perf}(k), \end{displaymath} exhibiting that $T(f_*)$ fails to preserve meets. \end{ex} We see that even with distributivity hypotheses arbitrary exact functors may fail to induce maps in $\mathsf{CLat}$. So, if we wish to work in the category of complete lattices we must restrict the functors we allow. \begin{defn}\label{defn:confluent} Let $F\colon \mathsf{K} \to \mathsf{L}$ be a morphism in $\mathsf{tcat}$, i.e.\ an exact functor between essentially small triangulated categories. We say that $F$ is a \emph{confluent functor} if $T(F)$ preserves finite meets, i.e.\ $F(\mathsf{K})$ generates $\mathsf{L}$ and for every pair of thick subcategories $\mathsf{M}$ and $\mathsf{N}$ of $\mathsf{K}$ \begin{displaymath} \thick(F\mathsf{M})\cap \thick(F\mathsf{N}) = \thick(F(\mathsf{M}\cap \mathsf{N})). \end{displaymath} \end{defn} \begin{rem} The condition that $F(\mathsf{K})$ generates $\mathsf{L}$ is forced by the fact that the empty meet, namely the top element, needs to be preserved by $T(F)$. The analogous constraint in tt-geometry is not serious, since the tensor unit always generates as a tensor ideal, but here it is a restriction. \end{rem} \begin{rem} We will sometimes say that an exact functor $F\colon \mathsf{K} \to \mathsf{L}$ is confluent with respect to some class $\mathcal{S}$ of thick subcategories of $\mathsf{K}$ (e.g.\ radical tensor ideals) if $T(F)$ preserves meets of members of $\mathcal{S}$. \end{rem} \begin{rem} We note that the containment \begin{displaymath} \thick(F\mathsf{M})\cap \thick(F\mathsf{N}) \supseteq \thick(F(\mathsf{M}\cap \mathsf{N})) \end{displaymath} always holds. \end{rem} \begin{ex} Let $f\colon A\to B$ be a map of commutative rings.
The induced functor $f^*\colon \mathsf{D}^\mathrm{perf}(A) \to \mathsf{D}^\mathrm{perf}(B)$ is confluent. The generation condition is clear as $f^*A \cong B$, and preservation of meets follows from the classification of thick subcategories via support, as in \cite{Thomclass}, and the formula, for $E \in \mathsf{D}^\mathrm{perf}(A)$, \[ \supp f^*E = \Spec(f)^{-1} \supp E \] (see Lemma~\ref{lem:ttconfluent} for details). \end{ex} \begin{ex}\label{ex:notok} Localizations are not necessarily confluent. Consider the localization \[ \pi\colon \mathsf{D}^\mathrm{b}(\mathbb{P}^1) \to \mathsf{D}^\mathrm{b}(\mathbb{A}^1) \] at $\thick(k(x))$ for some closed point $x\in \mathbb{P}^1$. Then \[ \thick(\pi(\thick(\mathcal{O}) \cap \thick(\mathcal{O}(1)))) = 0 \text{ but } \thick(\pi\mathcal{O}) \cap \thick(\pi \mathcal{O}(1)) = \mathsf{D}^\mathrm{b}(\mathbb{A}^1), \] showing that even passing to an open subset of the Balmer spectrum can behave poorly with respect to the whole lattice of thick subcategories. \end{ex} \begin{lem}\label{lem:subcat} Let $\mathsf{K} \xrightarrow{F} \mathsf{L} \xrightarrow{G} \mathsf{M}$ be exact functors. Then: \begin{itemize} \item[(a)] if $F$ is an equivalence it is confluent; \item[(b)] if $F$ and $G$ are confluent so is $GF$. \end{itemize} \end{lem} \begin{proof} We begin with (a): if $F$ is an equivalence then, for thick subcategories $\mathsf{N}_1$ and $\mathsf{N}_2$ of $\mathsf{K}$, we have \begin{align*} \thick(F(\mathsf{N}_1)) \cap \thick(F(\mathsf{N}_2)) &= F(\mathsf{N}_1) \cap F(\mathsf{N}_2) \\ &= F(\mathsf{N}_1 \cap \mathsf{N}_2) \\ &= \thick(F(\mathsf{N}_1\cap \mathsf{N}_2)). \end{align*} Here the first and last equalities hold because $F$ is an equivalence and so sends thick subcategories to thick subcategories, and the middle equality holds because $F$ is an equivalence and so induces a bijection on isomorphism classes of objects. There is a small lie here: strictly speaking we still need to close under isomorphisms, i.e.\ $F\mathsf{N}_1$ may fail to be closed under isomorphisms, but this can be accomplished without harm, or we can assume $\mathsf{K}$ and $\mathsf{L}$ are skeletal without changing the associated lattices. It is clear that $F(\mathsf{K})$ generates $\mathsf{L}$. Now for (b), suppose that $F$ and $G$ are confluent. We know that $F(\mathsf{K})$ generates $\mathsf{L}$, i.e.\ $\thick(F(\mathsf{K})) = \mathsf{L}$. By Lemma~\ref{lem:commute} and confluence of $G$ we have \begin{displaymath} \mathsf{M} = \thick(G(\mathsf{L})) = \thick(G\thick(F(\mathsf{K}))) = \thick(GF\mathsf{K}). \end{displaymath} Next let us turn to non-empty meets. We see that, for thick subcategories $\mathsf{N}_1$ and $\mathsf{N}_2$ of $\mathsf{K}$, \begin{align*} \thick(GF(\mathsf{N}_1)) \cap \thick(GF(\mathsf{N}_2)) &= \thick(G(\thick(F\mathsf{N}_1))) \cap \thick(G(\thick(F\mathsf{N}_2))) \\ &= \thick(G(\thick(F\mathsf{N}_1) \cap \thick(F\mathsf{N}_2))) \\ &= \thick(GF(\mathsf{N}_1 \cap \mathsf{N}_2)) \end{align*} where the first equality is Lemma~\ref{lem:commute}, the second is that $G$ is confluent, and the third is that $F$ is confluent (and another application of Lemma~\ref{lem:commute}). \end{proof} \begin{defn}\label{defn:htcat} By the Lemma, taking $\mathsf{tcat}_\wedge$ to be the collection of essentially small triangulated categories with confluent exact functors between them gives a subcategory of $\mathsf{tcat}$ which contains all equivalences. \end{defn} By construction the restriction of $T\colon \mathsf{tcat} \to \mathsf{CjSLat}$ to $\mathsf{tcat}_\wedge$ factors via $T_\wedge \colon \mathsf{tcat}_\wedge \to \mathsf{CLat}$.
\begin{defn}\label{defn:spcnt} We define the \emph{non-tensor spectrum} functor $\Spcnt\colon \mathsf{tcat}_\wedge^\mathrm{op} \to \mathsf{Sob}$ to be the composite \begin{displaymath} \mathsf{tcat}_\wedge^\mathrm{op} \xrightarrow{T_\wedge^\mathrm{op}} \mathsf{CLat}^\mathrm{op} \xrightarrow{\Omega^\mathrm{op}} \mathsf{SFrm}^\mathrm{op} \xrightarrow{\Spec} \mathsf{Sob} \end{displaymath} where $\Omega$ is the adjoint of Proposition~\ref{prop:adjoint2}. \end{defn} \begin{rem} We note that $\Spcnt$, like $\fSpcnt$, can be applied to \emph{any} essentially small triangulated category; the only issue is functoriality. \end{rem} We can actually simplify the construction of $\Spcnt$, as we did for $\fSpcnt$. We have noted in Lemma~\ref{lem:sober} that for any complete lattice $L$ the space $\mathsf{CLat}(L,\mathbf{2})$ is sober. Hence \begin{displaymath} \Spec\Omega L = \Spec \mathcal{O} \mathsf{CLat}(L,\mathbf{2}) \cong \mathsf{CLat}(L,\mathbf{2}) \end{displaymath} by Stone duality. Thus $\Spcnt(\mathsf{K}) \cong \mathsf{CLat}(T(\mathsf{K}),\mathbf{2})$. The non-tensor spectrum satisfies a universal property, which can be neatly expressed as follows. \begin{lem}\label{lem:universal} There is a natural isomorphism $\mathcal{O}\Spcnt \cong j\Omega T_\wedge$ (up to taking the opposite functors). Thus there is a natural transformation $T_\wedge \to \mathcal{O}\Spcnt$ exhibiting $\mathcal{O}\Spcnt$ as the spatial frame approximation $\Omega T_\wedge$ of $T_\wedge$ in $\mathsf{CLat}$. \end{lem} \begin{proof} The composite $\mathcal{O} \Spcnt$ is, by definition, \begin{displaymath} \mathcal{O} \circ \Spec \circ \Omega^\mathrm{op} \circ T_\wedge^\mathrm{op} \cong \Omega^\mathrm{op} \circ T_\wedge^\mathrm{op}, \end{displaymath} where the isomorphism is Stone duality, i.e.\ the fact that $\mathcal{O}\circ\Spec$ is naturally isomorphic to the identity on spatial frames. \end{proof} By Stone duality one can reinterpret this property in terms of continuous maps of spaces. However, despite being `obvious', i.e.\ given by intersections, the meets in $T(\mathsf{K})$ are somewhat subtle: we simply do not know how to produce a generator for the intersection $\thick(x)\cap \thick(y)$, if it even has one! Without a tensor product there is no obvious operation on objects which allows us to define a map $T_\wedge(\mathsf{K}) \to F$ in $\mathsf{CLat}$, with $F$ a spatial frame, purely in terms of objects of $\mathsf{K}$ as in Section~\ref{ssec:universal} or tt-geometry. By construction $\Spcnt$ doesn't lose any information in the case that $T(\mathsf{K})$ is distributive. \begin{thm}\label{thm:class} Suppose that $\mathsf{K}$ is such that $T(\mathsf{K})$ is distributive. Then the comparison map is a lattice isomorphism \begin{displaymath} T(\mathsf{K}) \xrightarrow{\sim} \mathcal{O}\Spcnt \mathsf{K}, \end{displaymath} i.e.\ thick subcategories of $\mathsf{K}$ are classified by $\Spcnt \mathsf{K}$. The inverse is given by sending an open subset $U$ to the thick subcategory \begin{displaymath} \bigcap_{\mathsf{P}\notin U} \mathsf{P} \end{displaymath} where we have identified $\Spcnt \mathsf{K}$ with the space of meet-prime thick subcategories. \end{thm} \begin{proof} By Corollary~\ref{cor:spatial} the lattice $T(\mathsf{K})$ is a spatial frame. Hence $\Omega T(\mathsf{K})\cong T(\mathsf{K})$, and so the claimed lattice isomorphism is immediate from Lemma~\ref{lem:universal}. The explicit description of the inverse is a standard fact, and can easily be verified directly. \end{proof} \begin{rem} The classification mirrors the one from tt-geometry, cf.\ \cite{BaSpec}*{Lemma~4.8} in particular for a similar description of the inverse.
\end{rem} \subsection{The comparison with tt-geometry} We now discuss the relation to tt-geometry in the case of tt-categories. Given a tt-category $\mathsf{T}$ we denote by $T^\otimes(\mathsf{T})$ the lattice of radical thick $\otimes$-ideals, and for a tt-functor $F\colon \mathsf{T}\to \mathsf{S}$ we denote by $T^\otimes(F)$ the corresponding map $T^\otimes(\mathsf{T})\to T^\otimes(\mathsf{S})$ sending $\mathsf{J}$ to the smallest radical thick $\otimes$-ideal containing $F\mathsf{J}$. As usual $\Spc \mathsf{T}$ denotes the Balmer spectrum, i.e.\ the space $(\Spec T^\otimes(\mathsf{T}))^\vee$. We know $T^\otimes(\mathsf{T})$ is a coherent frame. Moreover, $T^\otimes(F)$ is a map of frames preserving compact objects; we give a proof of the first of these assertions in the next lemma. \begin{lem}\label{lem:ttconfluent} Let $\mathsf{T}$ and $\mathsf{S}$ be tt-categories and $F\colon \mathsf{T} \to \mathsf{S}$ be a tt-functor. Then $F$ is confluent with respect to radical thick $\otimes$-ideals, i.e.\ $T^\otimes(F)$ preserves finite meets. \end{lem} \begin{proof} Suppose that $\mathsf{I}$ and $\mathsf{J}$ are radical $\otimes$-ideals of $\mathsf{T}$. Then, by \cite{BaSpec}*{Theorem~4.10}, we have $\mathsf{I} = \mathsf{T}_V$ and $\mathsf{J} = \mathsf{T}_W$, the ideals of objects supported on $V$ and $W$ respectively, where $V$ and $W$ are Thomason subsets of $\Spc \mathsf{T}$. By the classification and \cite{BaSpec}*{Proposition~3.6} we have \begin{displaymath} T^\otimes(F)\mathsf{I} = \mathsf{S}_{\supp F\mathsf{I}} = \mathsf{S}_{\Spc(F)^{-1}V} \text{ and } T^\otimes(F)\mathsf{J} = \mathsf{S}_{\supp F\mathsf{J}} = \mathsf{S}_{\Spc(F)^{-1}W}. \end{displaymath} Thus we see, using that $\mathsf{I}\cap\mathsf{J} = \mathsf{T}_{V\cap W}$, \begin{displaymath} T^\otimes(F)\mathsf{I} \cap T^\otimes(F)\mathsf{J} = \mathsf{S}_{\Spc(F)^{-1}V \cap \Spc(F)^{-1}W} = \mathsf{S}_{\Spc(F)^{-1}(V\cap W)} = T^\otimes(F)(\mathsf{I}\cap \mathsf{J}). \end{displaymath} It is clear that $T^\otimes(F)(\mathsf{T}) = \mathsf{S}$. \end{proof} Thus we get a functor $T^\otimes\colon \mathsf{ttcat} \to \mathsf{CohFrm}$, from the category of tt-categories and tt-functors to the category of coherent frames with maps preserving joins, finite meets, and compacts, which gives Balmer's theory of tt-geometry by composing with Stone duality (up to some pesky $\mathrm{op}$'s which we suppress, see Theorem~\ref{thm:comparisonB} for a precise statement). We do \emph{not} get a functor $T\colon \mathsf{ttcat} \to \mathsf{CLat}$ by sending $\mathsf{T}$ to its lattice of thick subcategories; a tt-functor may not respect meets of general thick subcategories, cf.\ Example~\ref{ex:notok}. However, we are able to produce a natural transformation between the restrictions of $T$ and $T^\otimes$ to a subcategory of $\mathsf{ttcat}$. \begin{lem}\label{lem:Bcomparison} Let $\mathsf{T}$ be a rigid tt-category. Then $T^\otimes(\mathsf{T})$ is a bounded sublattice of $T(\mathsf{T})$. \end{lem} \begin{proof} Both are collections of subcategories ordered by inclusion, so $T^\otimes(\mathsf{T})$ is a subposet of $T(\mathsf{T})$. Clearly the maximal elements coincide and so do the minimal ones\textemdash{}the key point is that rigidity of $\mathsf{T}$ implies that every $\otimes$-ideal is radical and so the zero ideal is radical. It remains to show that $T^\otimes(\mathsf{T})$ is closed under meets and joins in $T(\mathsf{T})$.
The former is clear: the intersection of radical $\otimes$-ideals is again a radical $\otimes$-ideal, and so $T^\otimes(\mathsf{T})$ is closed under arbitrary meets (and this doesn't require rigidity). For the latter, let $\mathsf{J}_i$ for $i\in I$ be a collection of radical $\otimes$-ideals and consider \begin{displaymath} \mathsf{X} = \bigcup_{i\in I}\mathsf{J}_i \text{ and } \mathsf{J} = \thick(\mathsf{X}), \end{displaymath} where $\mathsf{J}$ is the join of the $\mathsf{J}_i$ in $T(\mathsf{T})$. We observe that $\mathsf{X}$ is closed under the tensor product, and so $\mathsf{J}$ is a $\otimes$-ideal (see for instance \cite{StevensonActions}*{Lemma~3.8}). As $\mathsf{T}$ is rigid $\mathsf{J}$ is automatically radical, and so $\mathsf{J}\in T^\otimes(\mathsf{T})$. Therefore $\mathsf{J}$ must be the join in $T^\otimes(\mathsf{T})$ of the $\mathsf{J}_i$ as required. \end{proof} \begin{rem} The lemma does not hold in complete generality: $\otimes$-nilpotent objects will result in the minimal element $0$ of $T(\mathsf{T})$ failing to be a radical $\otimes$-ideal, and so the inclusion will not preserve the empty join. \end{rem} As Example~\ref{ex:notok} illustrates, the correct setting in which to compare $T$ and $T^\otimes$ is a delicate issue, at least if we want a natural transformation rather than just an objectwise comparison. The following is probably not optimal, but is sufficient to illustrate the situation. We denote by $\mathsf{rttcat}_\wedge^{\mathrm{loc}}$ the category of rigid tt-categories with exact monoidal localizations which are confluent, e.g.\ the monoidal equivalences. \begin{lem}\label{lem:naturalB} The inclusions of Lemma~\ref{lem:Bcomparison} assemble to give a natural transformation $T^\otimes \to T_\wedge$ of functors $\mathsf{rttcat}_\wedge^{\mathrm{loc}} \to \mathsf{CLat}$. \end{lem} \begin{proof} Suppose that $F\colon \mathsf{T}\to \mathsf{S}$ is a confluent monoidal localization and consider the corresponding square \begin{displaymath} \xymatrix{ T^\otimes(\mathsf{T}) \ar[r] \ar[d] & T^\otimes(\mathsf{S}) \ar[d] \\ T(\mathsf{T}) \ar[r] & T(\mathsf{S}) } \end{displaymath} Let $\mathsf{J}$ be a thick $\otimes$-ideal of $\mathsf{T}$. Then, because $F$ is essentially surjective, $F\mathsf{J}$ is closed under the tensor product in $\mathsf{S}$ and so $\thick(F\mathsf{J})$ is a $\otimes$-ideal of $\mathsf{S}$, and hence the smallest $\otimes$-ideal of $\mathsf{S}$ containing $F\mathsf{J}$; it is moreover radical, since $\mathsf{S}$, being a monoidal localization of a rigid tt-category, is again rigid. This shows \begin{displaymath} T^\otimes(F)(\mathsf{J}) = \thick^\otimes(F\mathsf{J}) = \thick(F\mathsf{J}) = T(F)(\mathsf{J}), \end{displaymath} proving the square commutes. \end{proof} Let $X$ be a spectral space. Recall from Definition~\ref{defn:Hochsterdual} the Hochster dual of $X$, which we denote by $X^\vee$. \begin{thm}\label{thm:comparisonB} There is a canonical natural transformation $\varphi\colon \Spcnt \to \Spc^\vee$ of the functors $(\mathsf{rttcat}_\wedge^{\mathrm{loc}})^\mathrm{op} \to \mathsf{Sob}$, i.e.\ for every rigid tt-category $\mathsf{T}$ there is a canonical comparison map \begin{displaymath} \varphi_\mathsf{T} \colon \Spcnt\mathsf{T} \to (\Spc \mathsf{T})^\vee \end{displaymath} which is natural with respect to confluent monoidal localizations. \end{thm} \begin{proof} The natural transformation of Lemma~\ref{lem:naturalB} can be postcomposed with $\Spec$ to yield a natural transformation \begin{displaymath} \Spcnt \to \Spec(T^\otimes(-)). \end{displaymath} The subtlety is that $\Spec(T^\otimes(-))$ is not quite $\Spc$.
Identifying $\Spec(T^\otimes(-))$ with the set of prime $\otimes$-ideals of $\mathsf{T}$ one checks that the closure of $\mathsf{P}$ is $Z_\mathsf{P} = \{\mathsf{Q}\mid \mathsf{P}\subseteq \mathsf{Q}\}$, rather than the Balmer-closed subset $\{\mathsf{Q}\mid \mathsf{Q}\subseteq \mathsf{P}\}$. This is rectified by reversing the order, i.e.\ taking the Hochster dual. \end{proof} \begin{rem} To give another perspective on the appearance of Hochster duality in the theorem, let us note that one recovers $T^\otimes$ from $\Spc$ via the lattice of Thomason subsets, whereas via Stone duality we would recover $T^\otimes$ via the lattice of open subsets. The Thomason subsets of $\Spc\mathsf{T}$ are precisely the open subsets of $(\Spc\mathsf{T})^\vee$. \end{rem} \begin{rem} In practice it is convenient to have $\Spc\mathsf{D}^\mathrm{perf}(R)\cong \Spec R$ and so we would like access to ``$\varphi^\vee\colon \Spcnt^\vee \to \Spc$''. However, as $\Spcnt$ need not, at least \emph{a priori}, be spectral we do not necessarily have access to Hochster duality for the non-tensor spectrum. \end{rem} We defer the discussion of examples to Section~\ref{sec:ex}. \subsection{The comparison with Matsui's work} There are other theories on the market which should be compared to our offering. We begin by discussing the work of Matsui \cite{matsui2019}. Matsui introduces a notion of support theory, based on the work of Balmer \cite{BaSpec}, and shows it classifies thick subcategories under suitable noetherian, sobriety, and distributivity hypotheses (the last of which is left implicit). As we now show, the non-tensor spectrum generalizes Matsui's machinery under strong (in a vacuum) but standard hypotheses. Let us begin by reinterpreting part of the setup of \cite{matsui2019}. Fix some triangulated category $\mathsf{K}$. Let $\MSpec \mathsf{K}$ ($M$ is for Matsui) denote the collection of finitely generated join-prime elements of $T(\mathsf{K})$, and define for each $x\in \mathsf{K}$ a basic closed subset \begin{displaymath} Z_x = \{\mathsf{J} \in \MSpec \mathsf{K} \mid \mathsf{J} \subseteq \thick(x)\}, \end{displaymath} and topologize $\MSpec \mathsf{K}$ by letting these determine the closed subsets. In general we have the following statement, expressing Matsui's construction as dual to ours when join-prime thick subcategories are under control. \begin{prop}\label{prop:matsui} Suppose that every join-prime thick subcategory of $\mathsf{K}$ is finitely generated. Then \begin{displaymath} \MSpec \mathsf{K} = \Spec (\Omega (T(\mathsf{K})^\mathrm{op})). \end{displaymath} \end{prop} \begin{proof} By hypothesis $\MSpec \mathsf{K}$ consists of all the join-prime elements of $T(\mathsf{K})$. A join-prime in $T(\mathsf{K})$ is precisely a meet-prime in $T(\mathsf{K})^\mathrm{op}$, and passing from $T(\mathsf{K})$ to $T(\mathsf{K})^\mathrm{op}$ sends our closed subsets to Matsui's closed subsets. Thus $\MSpec \mathsf{K}$ is the collection of meet-prime elements of $T(\mathsf{K})^\mathrm{op}$ together with the natural topology, or equivalently the spectrum of the spatial frame approximation $\Omega (T(\mathsf{K})^\mathrm{op})$. \end{proof} \begin{rem} We note that while join-primes are often finitely generated there is an asymmetry: in general meet-prime thick subcategories will \emph{not} be finitely generated. This is already the case for $\mathsf{D}^\mathrm{b}(\mathbb{Z})$.
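For instance, one checks that for a fixed prime $p$ the thick subcategory \begin{displaymath} \{E \in \mathsf{D}^\mathrm{b}(\mathbb{Z}) \mid E_p \simeq 0\} \end{displaymath} of complexes vanishing at $p$ is meet-prime; it is not finitely generated, as the support of any single one of its objects is a finite set of closed points of $\Spec \mathbb{Z}$, so a single object generates a strictly smaller thick subcategory.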
\end{rem} We have the following theorem which covers the main examples of \cite{matsui2019}, for instance, tt-categories with noetherian spectrum where the unit generates (cf.\ Lemma~\ref{lem:unit}) and singularity categories of complete intersections. \begin{thm}\label{thm:comparisonM} If $T(\mathsf{K})$ is a coherent frame and $(\Spcnt\mathsf{K})^\vee$ is noetherian, then $\MSpec \mathsf{K} \cong (\Spcnt\mathsf{K})^\vee$. \end{thm} \begin{proof} Suppose that $T(\mathsf{K})$ is a coherent frame such that $(\Spcnt\mathsf{K})^\vee$ is noetherian. Then, by Stone duality (see Theorem~\ref{thm:class}), $(\Spcnt\mathsf{K})^\vee$ gives a classifying support datum for $\mathsf{K}$ in the sense of \cite{matsui2019}*{Definition~2.9}. Matsui's Theorem~2.13 thus provides the claimed homeomorphism. \end{proof} \begin{rem} The Theorem, when combined with our Theorem~\ref{thm:comparisonB} and discussion of functoriality, complements \cite{matsui2019} by expressing part of the functoriality of the construction and making explicit the general comparison with Balmer's theory. \end{rem} \subsection{The comparison with the ``noncommutative tt-spectrum''} Finally, we discuss a potential lattice theoretic approach to \cite{nakano2019} and the corresponding comparison with the non-tensor spectrum. Let $(\mathsf{R},\otimes,\mathbf{1})$ be a monoidal triangulated category, such that the tensor product is exact in each variable. We do not assume that $\mathsf{R}$ is symmetric, or even braided, monoidal. By $\otimes$-ideal in this section we will always mean $2$-sided $\otimes$-ideal. Following noncommutative ring theory and \cite{nakano2019}, we say that a 2-sided thick $\otimes$-ideal $\mathsf{J}$ of $\mathsf{R}$ is \emph{semiprime} if, whenever $r\in \mathsf{R}$ satisfies $r\otimes s \otimes r\in \mathsf{J}$ for all $s\in \mathsf{R}$, we have $r\in \mathsf{J}$. Given a 2-sided $\otimes$-ideal $\mathsf{I}$ we denote by $\sqrt{\mathsf{I}}$ the smallest semiprime $\otimes$-ideal containing $\mathsf{I}$. Let us denote by $T^\otimes(\mathsf{R})$ the complete lattice of semiprime $\otimes$-ideals of $\mathsf{R}$, where the ordering is via inclusion and arbitrary meets are given by intersection. This is not in conflict with our earlier notation, as semiprime reduces to radical in the symmetric setting. A proper 2-sided $\otimes$-ideal $\mathsf{P}$ is \emph{prime} in the sense of \cite{nakano2019} if for 2-sided $\otimes$-ideals $\mathsf{I}$ and $\mathsf{J}$ we have $\mathsf{I}\otimes \mathsf{J} \subseteq \mathsf{P}$ implies $\mathsf{I}\subseteq \mathsf{P}$ or $\mathsf{J}\subseteq \mathsf{P}$. \begin{rem} In \cite{nakano2019} a semiprime $\otimes$-ideal is defined to be an intersection of prime $\otimes$-ideals, and their Theorem~3.4.2 shows the equivalence of that definition with the one given above. In particular, it is not clear from the definitions above that prime $\otimes$-ideals are semiprime, but they are. \end{rem} For us, the point is that this notion of prime is related to the corresponding lattice theoretic notion. \begin{lem}\label{lem:rad} Let $\mathsf{I},\mathsf{J}\in T^\otimes(\mathsf{R})$. Then \begin{displaymath} \sqrt{\thick(\mathsf{I}\otimes \mathsf{J})} = \mathsf{I} \cap \mathsf{J} = \sqrt{\thick(\mathsf{J}\otimes \mathsf{I})}. \end{displaymath} \end{lem} \begin{proof} By symmetry it is enough to prove the equality on the left. It is clear, by the ideal property, that $\mathsf{I}\otimes \mathsf{J}$ is contained in $\mathsf{I}\cap \mathsf{J}$.
Since $\mathsf{I}\cap \mathsf{J}$ is semiprime the left-hand side is contained in it. Suppose, on the other hand, that $j\in \mathsf{I}\cap \mathsf{J}$. Then, for any $r\in \mathsf{R}$, we have $j\otimes r\otimes j \in \mathsf{I}\otimes \mathsf{J}$. Hence, by the defining property of semiprime $\otimes$-ideals, $j\in \sqrt{\thick(\mathsf{I}\otimes \mathsf{J})}$, proving the claimed equality. \end{proof} We will say a $2$-sided $\otimes$-ideal $\mathsf{P}$ is \emph{prime with respect to semiprime $\otimes$-ideals} if it is semiprime, and for semiprime $\otimes$-ideals $\mathsf{I}$ and $\mathsf{J}$ we have $\mathsf{I}\otimes \mathsf{J} \subseteq \mathsf{P}$ implies one of $\mathsf{I}$ or $\mathsf{J}$ is contained in $\mathsf{P}$. \begin{lem}\label{lem:sameprimes} A 2-sided $\otimes$-ideal $\mathsf{P}$ is prime with respect to semiprime $\otimes$-ideals if and only if it is meet-prime in $T^\otimes(\mathsf{R})$. \end{lem} \begin{proof} Suppose that $\mathsf{P}$ is semiprime. Given semiprime $\otimes$-ideals $\mathsf{I}$ and $\mathsf{J}$ we have $\mathsf{I} \otimes \mathsf{J} \subseteq \mathsf{P}$ if and only if $\sqrt{\thick(\mathsf{I}\otimes \mathsf{J})}$ is contained in $\mathsf{P}$, by virtue of $\mathsf{P}$ being semiprime and thick, i.e.\ if and only if $\mathsf{I}\cap \mathsf{J} \subseteq \mathsf{P}$ by Lemma~\ref{lem:rad}. Thus asking for $\mathsf{I}\otimes \mathsf{J} \subseteq \mathsf{P}$ to imply that one of $\mathsf{I}$ or $\mathsf{J}$ is contained in $\mathsf{P}$ is the same as asking for $\mathsf{I}\cap \mathsf{J} \subseteq \mathsf{P}$ to imply the same. \end{proof} \begin{rem} In the symmetric setting we have a good description of the radical and one can deduce that being prime with respect to semiprime $\otimes$-ideals is equivalent to being prime. However, in the noncommutative setting this is missing. \end{rem} The dual topology on $\mathsf{CLat}(T^\otimes(-),\mathbf{2})$, which we denote by $D(\mathsf{CLat}(T^\otimes(-),\mathbf{2}))$, is given by declaring the closed subsets to be generated by the quasi-compact open subsets of $\mathsf{CLat}(T^\otimes(-),\mathbf{2})$. We note that if our space is spectral this is precisely Hochster duality. \begin{thm}\label{thm:comparisonN} Suppose that every $\otimes$-ideal of $\mathsf{R}$ is semiprime. Then the noncommutative Balmer spectrum $\Spc^\mathrm{nc}$ of \cite{nakano2019} is given by \begin{displaymath} D\Spec (\Omega T^\otimes(\mathsf{R})) = D\mathsf{CLat}(T^\otimes(\mathsf{R}),\mathbf{2}). \end{displaymath} \end{thm} \begin{proof} By Lemma~\ref{lem:sameprimes} the underlying set of $\Spc^{\mathrm{nc}}\mathsf{R}$ is precisely $\mathsf{CLat}(T^\otimes(\mathsf{R}),\mathbf{2})$, and unwinding the definitions one sees that (as in the case of the Balmer spectrum) the topology is the dual one. \end{proof} \begin{rem} By \cite{Nakano21}*{Proposition~4.1.1} if every object of $\mathsf{R}$ is left or right dualizable then every $\otimes$-ideal is semiprime and one can apply the theorem. \end{rem} Under the assumption of Theorem~\ref{thm:comparisonN}, $T^\otimes(\mathsf{R})$ is a sublattice of $T(\mathsf{R})$ and so there is a comparison map \[ \Spcnt \mathsf{R} \to \mathsf{CLat}(T^\otimes(\mathsf{R}),\mathbf{2}). \] However, as noted, the topology of $\mathsf{CLat}(T^\otimes(\mathsf{R}),\mathbf{2})$ does not necessarily agree with that of $\Spc^\mathrm{nc}(\mathsf{R})$. This can be compensated for in various cases.
For instance, if $T^\otimes(\mathsf{R}) \to T(\mathsf{R})$ preserves finite presentation, i.e.\ finite generation as an ideal implies finite generation as a thick subcategory, then there is an induced map \[ D(\Spcnt \mathsf{R}) \to D\mathsf{CLat}(T^\otimes(\mathsf{R}),\mathbf{2}) = \Spc^\mathrm{nc} \mathsf{R}, \] and in the case that $\Spc^\mathrm{nc} \mathsf{R}$ is spectral we get a comparison map \[ \Spcnt \mathsf{R} \to (\Spc^\mathrm{nc} \mathsf{R})^\vee. \] \section{Examples}\label{sec:ex} In this short section we discuss a few additional examples, which are the standard ones in various contexts, to illustrate the machinery. \begin{ex} Let us consider $\mathsf{K} = \mathsf{D}^\mathrm{perf}(R)$ for a commutative ring $R$. In this case, since the tensor unit $R$ generates and $\mathsf{K}$ is rigid, $T(\mathsf{K}) = T^\otimes(\mathsf{K})$ is a coherent frame (cf.\ Lemma~\ref{lem:unit}). We have \[ \Spcnt \mathsf{K} \cong (\Spec R)^\vee \] due to Theorem~\ref{thm:comparisonB}. Indeed, since $T(\mathsf{K})$ is already a coherent, and hence spatial, frame we have $\Omega T(\mathsf{K}) = T(\mathsf{K})$ and so \[ \Spcnt \mathsf{K} = \Spec \Omega T (\mathsf{K}) \cong \Spec T(\mathsf{K}) = (\Spc \mathsf{K})^\vee \cong (\Spec R)^\vee. \] We know from Proposition~\ref{prop:L} that $\fSpcnt \mathsf{K}$ is $T(\mathsf{K})$ with the topology generated by the opens $U_\mathsf{M}$ for $\mathsf{M}\in T(\mathsf{K})$ a thick subcategory. There is a comparison map \[ \Spcnt \mathsf{K} \to \fSpcnt\mathsf{K}, \quad \mathsf{P} \mapsto \mathsf{P}, \] where we identify $\Spcnt \mathsf{K}$ with the set of meet-prime thick subcategories. \end{ex} \begin{rem}\label{rem:SvsfS} Given any complete lattice $L$ there is an inclusion \[ \mathsf{CLat}(L, \mathbf{2}) \to \mathsf{CjSLat}(L, \mathbf{2}) \] which is a continuous map with respect to the corresponding topologies. Thus there is always a comparison map $\Spcnt \mathsf{K} \to \fSpcnt \mathsf{K}$ as described in the above example, and using the identification of Proposition~\ref{prop:L} this map is just the inclusion of the meet-prime elements of $L$ into $L$. \end{rem} \begin{ex} Suppose that $(R,\mathfrak{m},k)$ is a local complete intersection of codimension $c$ with isolated singularity, and let $\mathsf{K} = \mathsf{D}_\mathrm{sg}(R)$ be the singularity category. It is proved in \cite{Stevensonclass} that $T(\mathsf{K})$ is the coherent frame of Thomason subsets of $\mathbb{P}^{c-1}_k$. Thus $(\Spcnt \mathsf{K})^\vee \cong \mathbb{P}^{c-1}_k$ as topological spaces. \end{ex} \begin{ex}\label{ex:pp1} Consider $\mathsf{K} = \mathsf{D}^\mathrm{b}(\mathbb{P}^1_k)$, the bounded derived category of coherent sheaves on $\mathbb{P}^1_k$ for some field $k$, whose corresponding lattice $T(\mathsf{K})$ was discussed in Example~\ref{ex:notmodular}. We claim that $\Spcnt \mathsf{K} = \pt T(\mathsf{K}) = \varnothing$.
Indeed, given distinct integers $i,j$, and $l$ the lattice $T(\mathsf{K})$ contains a bounded sublattice \begin{displaymath} \begin{tikzpicture} \node (v0) at (0,0) {}; \node (va) at (-2,2) {}; \node (vb) at (0,2) {}; \node (vc) at (2,2) {}; \node (v1) at (0,4) {}; \draw[fill] (v0) circle (2pt) node [below] {0}; \draw[fill] (va) circle (2pt) node [left] {$\langle\mathcal{O}(i)\rangle$}; \draw[fill] (vb) circle (2pt) node [right] {$\langle\mathcal{O}(j)\rangle$}; \draw[fill] (vc) circle (2pt) node [right] {$\langle\mathcal{O}(l)\rangle$}; \draw[fill] (v1) circle (2pt) node [above] {$\mathsf{D}^\mathrm{b}(\mathbb{P}^1_k)$}; \path[-] (v0) edge node [above] {} (va); \path[-] (v0) edge node [above] {} (vb); \path[-] (v0) edge node [above] {} (vc); \path[-] (va) edge node [above] {} (v1); \path[-] (vb) edge node [above] {} (v1); \path[-] (vc) edge node [above] {} (v1); \end{tikzpicture} \end{displaymath} where $\langle\mathcal{O}(i)\rangle$ is shorthand for $\thick(\mathcal{O}(i))$. The claim that $\Spcnt \mathsf{K}$ is empty then follows from Lemma~\ref{lem:serialkiller}. \end{ex} \begin{ex}\label{ex:bicycle} Let $\Lambda$ be the path algebra of the $2$-cycle \[ \xymatrix{ 1 \ar[rr]<0.75ex>^-{\alpha} \ar@{<-}[rr]<-0.75ex>_-{\beta} && 2 } \] modulo the square of the radical, so $\Lambda$ is a $4$-dimensional self-injective algebra with $2$-dimensional projective right modules $P_1$ and $P_2$ (also known as the preprojective algebra of type $A_2$). We consider $\mathsf{K} = \mathsf{D}^\mathrm{perf}(\Lambda)$. The lattice $T(\mathsf{K})$ is \begin{displaymath} \begin{tikzpicture} \node (v0) at (0,0) {}; \node (va) at (-3,2) {}; \node (vb) at (-1,2) {}; \node (vc) at (1,2) {}; \node (vd) at (3,2) {}; \node (v1) at (0,4) {}; \draw[fill] (v0) circle (2pt) node [below] {0}; \draw[fill] (va) circle (2pt) node [left] {$\langle P_1 \rangle$}; \draw[fill] (vb) circle (2pt) node [right] {$\langle P_2 \rangle$}; \draw[fill] (vc) circle (2pt) node [right] {$\langle A \rangle$}; \draw[fill] (vd) circle (2pt) node [right] {$\langle B \rangle$}; \draw[fill] (v1) circle (2pt) node [above] {$\mathsf{K}$}; \path[-] (v0) edge node [above] {} (va); \path[-] (v0) edge node [above] {} (vb); \path[-] (v0) edge node [above] {} (vc); \path[-] (v0) edge node [above] {} (vd); \path[-] (va) edge node [above] {} (v1); \path[-] (vb) edge node [above] {} (v1); \path[-] (vd) edge node [above] {} (v1); \path[-] (vc) edge node [above] {} (v1); \end{tikzpicture} \end{displaymath} where $A = P_1 \stackrel{\alpha}{\to} P_2$ and $B = P_2 \stackrel{\beta}{\to} P_1$. It again follows from Lemma~\ref{lem:serialkiller} that $\Spcnt \mathsf{K} = \varnothing$. \end{ex} These last two examples seem quite typical: when the lattice of thick subcategories has some `combinatorial component', i.e.\ a non-distributive piece, it often seems to be embedded in such a way that $\Spcnt$ disintegrates. This is also the case, for instance, for the bounded derived category of an elliptic curve or for the bounded derived category of the path algebra of $A_n$ for $n\geq2$. However, one can dodge this issue and somehow separate the topological and combinatorial components, so to speak, in certain cases. \begin{ex}\label{ex:serre1} Consider again the case of $\mathsf{K} = \mathsf{D}^\mathrm{b}(\mathbb{P}^1_k)$, which has Serre functor $S = (-)\otimes \Sigma \mathcal{O}(-2)$. We can consider the bounded sublattice $T^S(\mathsf{K})$ of $T(\mathsf{K})$ consisting of those thick subcategories which are closed under $S$.
One easily checks that $T^S(\mathsf{K}) = T^\otimes(\mathsf{K})$ which is isomorphic to the coherent frame of Thomason subsets of $\mathbb{P}^1_k$. Thus we recover $\mathbb{P}^1_k$, up to Hochster duality, from the Serre functor invariant thick subcategories. \end{ex} \begin{ex}\label{ex:serre2} Let $\Lambda$ be the algebra of Example~\ref{ex:bicycle} and $\mathsf{K} = \mathsf{D}^\mathrm{perf}(\Lambda)$. The Serre functor $S$ of $\mathsf{K}$ is given by the Nakayama automorphism which interchanges $P_1$ and $P_2$. An easy computation shows that $T^S(\mathsf{K}) \cong \mathbf{2}$, i.e.\ the only Serre functor invariant thick subcategories are $0$ and $\mathsf{K}$. Thus $\Spec T^S(\mathsf{K}) \cong \ast$ consists of a single point. \end{ex} This example generalizes, for instance, to the situation of $\mathsf{D}^\mathrm{b}(kQ)$ for $Q$ a quiver of type $A_n$ or $D_n$ where the only Serre functor invariant thick subcategories are also $0$ and the whole category. These examples are discussed a little further in Section~\ref{ssec:serre}. \section{Our ignorance}\label{sec:questions} There is a lot we don't know, and so we don't attempt to give a particularly exhaustive list of open problems. Let us rather make a few further comments and pose a few concrete challenges (beyond the obvious, standard, and vexing challenge of performing meaningful computations in this setting). \subsection{Realization} It is natural to ask which lattices are of the form $T(\mathsf{K})$ for an essentially small triangulated category $\mathsf{K}$. \begin{qn} Given a complete lattice $L$ is there a triangulated category $\mathsf{K}$ such that $T(\mathsf{K})\cong L$? \end{qn} The answer to this question is already known in the nicest possible case. \begin{prop} Suppose that $F$ is a coherent frame. Then there is a commutative ring $R$ such that $F \cong T(\mathsf{D}^\mathrm{perf}(R))$. \end{prop} \begin{proof} By \cite{Hochster} there is a commutative ring $R$ such that $\Spec R \cong (\Spec F)^\vee$. By Thomason's theorem \cite{Thomclass} we have $\Spc \mathsf{D}^\mathrm{perf}(R) \cong \Spec R$. As the perfect complexes are generated by the tensor unit $R$, this gives $F \cong T(\mathsf{D}^\mathrm{perf}(R))$. \end{proof} However, we do not (to our knowledge) know the answer for any more general class of lattices that isn't obtained from the above by some sleight of hand. \subsection{Serre functor invariant thick subcategories}\label{ssec:serre} We learn from tt-geometry that, for a tt-category $\mathsf{K}$, although $T(\mathsf{K})$ might be non-distributive and resist our attempts to understand it, the lattice $T^\otimes(\mathsf{K})$ is as nice as can be and is accessible via Stone duality. We are led to ask if there are other natural ways to produce sublattices of $T(\mathsf{K})$ which are spatial frames? We have seen in Examples~\ref{ex:serre1} and \ref{ex:serre2} that sometimes taking the lattice of Serre functor invariant thick subcategories $T^S(-)$ accomplishes this. This point of view is actually related to tt-geometry. \begin{ex} Suppose that $X$ is a smooth irreducible projective scheme. Then $\mathsf{D}^\mathrm{perf}(X)$ is a tt-category and has a Serre functor $S$ which can be described as $(-)\otimes \Sigma^{\dim X}\omega_X$ where $\omega_X$ is the canonical bundle. Thus, being a Serre functor invariant thick subcategory just means being closed under tensoring with powers of $\omega_X$. If $\omega_X$ or $\omega_X^{-1}$ is ample this is nothing but being a tensor ideal. 
Thus in this setting $T^S(\mathsf{K}) = T^\otimes(\mathsf{K})$ is a coherent frame, and we see that this lattice is actually intrinsic to $\mathsf{D}^\mathrm{perf}(X)$ and not dependent on the monoidal structure. This is closely related to, and gives another route to, the reconstruction theorem of Bondal and Orlov \cite{BOreconstruction}. \end{ex} It is not always the case for an essentially small triangulated category $\mathsf{K}$ with a Serre functor $S$ that $T^S(\mathsf{K})$ is distributive. If $\mathsf{K}$ is Calabi-Yau, i.e.\ $S\cong \Sigma^n$ for some $n$, then $T^S(\mathsf{K}) = T(\mathsf{K})$ and there are examples in which this lattice is non-distributive. For instance this happens for the bounded derived category of an ellptic curve and in discrete cluster categories of infinite type A \cite{GratzZvonareva}. On the other hand small examples from the representation theory of finite dimensional algebras, e.g.\ Example~\ref{ex:serre2}, are quite suggestive and motivate the following question. \begin{qn} Let $\mathsf{K}$ be an essentially small triangulated category. \begin{itemize} \item[i)] If $\mathsf{K}$ has a Serre functor $S$ under which conditions is $T^S(\mathsf{K})$ distributive? \item[ii)] Can we identify, through other means, `interesting' distributive sublattices of $T(\mathsf{K})$? For instance, are there interesting maximal bounded distributive sublattices? \end{itemize} \end{qn} \begin{ex} If $E$ is an elliptic curve over a field $k$ then the maximal distributive sublattices of $\mathsf{D}^\mathrm{b}(E)$ are all isomorphic to one of $\mathbf{2} \times \mathbf{2}$ or $T^\otimes(\mathsf{D}^\mathrm{b}(E))$. This seems meaningful. \end{ex}
2024-02-18T23:40:54.799Z
2022-05-27T02:16:49.000Z
algebraic_stack_train_0000
3,667
19,322
proofpile-arXiv_066-1908
\section*{\label{Abstract}Abstract} \begin{abstract} In quantum information, it is of high importance to efficiently detect entanglement. Generally, it needs quantum tomography to obtain state density matrix. However, it would consumes a lot of measurement resources, and the key is how to reduce the consumption. In this paper, we discovered the relationship between convolutional layer of artificial neural network and the average value of an observable operator in quantum mechanics. Then we devise a branching convolutional neural network which can be applied to detect entanglement in 2-qubit quantum system. Here, we detect the entanglement of Werner state, generalized Werner state and general 2-qubit states, and observable operators which are appropriate for detection can be automatically found. Beside, compared with privious works, our method can achieve higher accuracy with fewer measurements for quantum states with specific form. The results show that the convolutional neural network is very useful for efficiently detecting quantum entanglement. \end{abstract} \maketitle \section*{\label{Introduction}Introduction} Nowadays, Machine Learning has become a powerful tool in tackling some complicated quantum physics problems, beacuse of its ability to find potential patterns in vast data. The breakthroughs have been made in multi-particle quantum system state ansatz \cite{S1-RBMstate,S2-NNtomography,S3-Purification-via-Neural-Density,S4-Eigenstate-extraction,S5-Reconstructing-quantum-states,S8-En-in-Deep-Learning,S9-CNNstate1,S10-CNNstate2,S44-NNdensity,S11-Phases-of-spinless-lattice,S12-HybridCNN,S23-Entanglement-in-NNS,G8-Efficient-representation-state}, discovering quantum phase transition \cite{LZXB-1,LZXB-2,LZXB-3,LZXB-4,LZXB-5,LZXB-6}, classifying quantum correlations \cite{G24-Ma,G25-sample-convex-hull,G26,G27-from-disorder-system,G28,G48-find-nonlocality,G49-branching-FC,G52-quantum-discord,G53-Ren-multiple-classify,G54-Ren-Steerability-detection}, detecting entanglement structure \cite{G29-Entanglement-Structure} and estimating the violation of multi-particle Bell inequalities \cite{G47-bellinequality}, etc. On the other hand, with the development of the quantum computer, scientists pay more attention to quantum Machine Learning algorithms, which will be implemented on quantum computer, such as quantum approximate optimization algorithm \cite{S56-QAOA}, variational quantum eigensolver \cite{S57-eigenvalue-solver}, quantum Boltzmann machine \cite{S59-QRBM}, and quantum neural network \cite{S47-QCNN,S58}. They will promote the development of Machine Learning. Among complicated quantum physics problems, the detection of quantum entanglement is an essential one. Quantum entanglement is the essential resource in application of quantum teleportation \cite{K55-quantum-teleportation}, quantum key distribution \cite{K57-keydistribution} and quantum computation \cite{K56-Quantum-Compute}. Although there are many entanglement detection criteria have been proposed such as positive partial transpose (PPT) criterion \cite{PPTcriterion1,PPTcriterion2}, computable cross norm criterion or realignment \cite{CCNR1,CCNR2} and entanglement witnesses \cite{Witness1,Witness2,Witness3,Witness4,Witness5} etc, complete classifying entanglement is still an NP-hard problem \cite{NPhard}. Recent years, scientists have also done many researches on using machine learning to study quantum entanglement. 
For instance, based on deterministic measurement operators, artificial neural network can be used to classify the entanglement in 2-qubits or 3-qubits systems \cite{G24-Ma,G53-Ren-multiple-classify}. Combined with supervised learning, convex hull approximation can sample a mass of separable pure state to approximate the shape of separable states set \cite{G25-sample-convex-hull}. Convolutional neural network (CNN) can estimate the entanglement entropy of disordered interacting quantum many-particle system \cite{G27-from-disorder-system}. And fully connected neural network (FC) can be used to predict the multipartite entanglement structure of states composed by random subsystems \cite{G29-Entanglement-Structure} or find semi-optimal measurements for entanglement detection \cite{2020Finding}. In this work, we successfully applied CNN to detect entanglement. CNN is one of the most representative neural networks which is considered more efficient than FC \cite{K22-CNNmore-efficient}. At present, CNN has been used to express quantum state of multi-particle system \cite{S9-CNNstate1,S10-CNNstate2,S11-Phases-of-spinless-lattice,S44-NNdensity}, estimate entanglement entropy of quantum many-particle system \cite{G27-from-disorder-system} and the parameters of multi-particle Hamiltonian \cite{S40-CNN-Hamiltonian}. Here, we first show why the observable operator of quantum system with discrete energy levels can be regarded as a special convolution kernel and how to use a convolution kernel to represent an observable operator and get the average of the operators. Afterward, we prove that the Hermiticity of convolutional layers can maintain in the training course if the input and the kernels are initialized as Hermitian matrix. Then, we devised branching convolutional neural network (BCNN) (depicted in Fig.\ref{convolutional_pathes_and_BCNN}(c)) which has a number of independent convolutional pathes. Every convolutional path accurately calculates average value of observable operators in inputted quantum state. According to the features of quantum state, the structures of convolutional pathes, it can automatically find appropriate observable operators whitch can extract information required by training goal. Because of that, it can decrease resource consumption in practice. We detected the entanglement of 2-qubits state and research the influence of the number of observable operators on the accuracy of our model. \section*{Results} \subsection*{Regard observable operator as a kernel} CNN extract features from input data by the kernels, whitch are composed of trainable parameters. Every kernel scans the input data according to a certain step size. In each step, its parameters are multiplied with the corresponding input data and all are added as output. We can see a simple example of convolution without bias and activation function in Fig.\ref{simple example and convolutional path0}(a). As we know, quantum states can completely describe a system. Observable operators can extract features such as momentum, spin, position, etc. from quantum states. From the point of feature extraction, we prove that the observable operator of discrete level system is a special convolution kernel. In Fig.\ref{simple example and convolutional path0}(b), we show how the convolutional layer can accurately calculate the average value of observable operator. 
We take state density matrix $\rho$ as the input of the neural network and the transpose of observable operator $M^T$ as the kernel, and the output of convolution without activation function and bias is \begin{figure} \centering \includegraphics[width=3.25in]{simple_example_and_convolutional_path0.eps} \caption{\label{simple example and convolutional path0} \textbf{(a),} A example of convolution without bias and activation function. The step size of kernel equal to 1. \textbf{(b),}The convolutional layer with the input $\rho$ and the kernel $M^T$, and its output is $tr(\rho M)$.} \end{figure} \begin{equation}\label{eq:1} \begin{split} \rho\ast M^T &=\sum_{ij}{\bra{i}\left( \sum_{kl}{\rho_{kl}M_{lk}\ket{k}\bra{l}}\right)\ket{j} }\\ &=\sum_{ij}{\rho_{ij}M_{ji}}\\ &=\langle M\rangle, \end{split} \end{equation} where $\ast$ means the convolution in the artificial neural network. If there are $2$ subsystem with the dimension $d^{(1)}$ and $d^{(2)}$, its state density matrix can be written as $\rho=\sum_{ij,kl}^{d^{(1)},d^{(2)}}{\rho_{ijkl}\ket{i}\bra{j}\otimes \ket{k}\bra{l} }$ and the observable operator $M$ can be written as $M=M^{(1)}\otimes M^{(2)}$. In the same way, we take $M^{(2)T}$ and $M^{(1)T}$ as the kernels of the first and the second convolutional layers, and their step sizes are exactly equal to their own dimensions. The output of the first convolutional layer is \begin{figure*} \centering \includegraphics[width=6.5in]{convolutional_path1.eps} \caption{\label{convolutional path1}Take the $3\times 2$ system as an example. \textbf{(a),} There are two convolutional layers without activation function and bias. The first layer has the kerner $M^{(2)T}$ and the input $\rho$. The second has the kernel $M^{(1)T}$. The output of the second convolutional layer is exactly equal to $\langle M^{(1)}\otimes M^{(2)}\rangle$. \textbf{(b),} The gradient $\delta M^{(2)T}$ of the kernel $M^{(2)T}$ can be obtained by calculating convolution of $\sum{ \rho_{ijkl}|k\rangle\langle l|\otimes |i\rangle\langle j|}$ and $\delta^{[1]}$, where $\sum{ \rho_{ijkl}|k\rangle\langle l|\otimes |i\rangle\langle j|}$ can be obtained by swapping the corner labels of $\rho$ and $\delta^{[1]}$ is the error propagated into the first convolutional layer.} \end{figure*} \begin{equation}\label{eq:2} \begin{split} O^{[1]}&=\rho\ast M^{(2)T}\\ &=\left(\sum_{ij,kl}^{d^{(1)},d^{(2)}}{ \rho_{ij,kl}\ket{i}\bra{j}\otimes \ket{k}\bra{l} }\right)\ast M^{(2)T} \\ &=\sum_{ij,kl}^{d^{(1)},d^{(2)}}{ \rho_{ij,kl}\ket{i}\bra{j}\otimes\left( \ket{k}\bra{l}\ast M^{(2)T}\right) } \\ &=\sum_{ij}^{d^{(1)}}\sum_{kl}^{d^{(2)}}{ \rho_{ij,kl}\bra{l}M^{(2)}\ket{k}\ket{i}\bra{j} } \\ &=tr_{(2)}\left(\rho\cdot I^{(1)}\otimes M^{(2)} \right). \end{split} \end{equation} i.e., the first convolutional layer calculate the partial trace for the subsystem $(2)$ of $\rho\cdot I^{(1)}\otimes M^{(2)}$, thus $O^{[1]}$ is Hermitian and its dimension is $d^{(1)}$. Then, the output of the second convolutional layer can be got as \begin{equation}\label{eq:3} \begin{split} O^{[2]}&=O^{[1]}\ast M^{(1)T}\\ &=\left(\rho\ast M^{(2)T}\right)\ast M^{(1)T} \\ &=tr_{(2)}\left(\rho\cdot I^{(1)}\otimes M^{(2)} \right) \ast M^{(1)T} \\ &=tr\left[ \rho\cdot\left(M^{(1)}\otimes M^{(2)} \right) \right] \\ &=\langle M^{(1)}\otimes M^{(2)} \rangle. 
\end{split} \end{equation} Similarly, suppose that there are $N$ subsystems with dimension $d^{(1)}, d^{(2)},\cdots, d^{(N)}$, and the observable operator $M$ can be written as $M=M^{(1)}\otimes M^{(2)}\otimes\cdots \otimes M^{(N)}$, it is possible to caculate its average via the convolutional path with $N$ convolutional layers. For $\forall n\leq N$, the kernel of the $n$-th convolutional layer is $M^{(N-n+1)T}$, and the step size equal to its dimensions. Therefore, for $\forall n< N$, the output $O^{[n]}$ is \begin{equation}\label{eq:4} \begin{aligned} O^{[n]}&=O^{[n-1]}\ast M^{(N-n+1)T}\\ &=tr_{(N-n+1)}(O^{[n-1]}\cdot I^{(1)}\otimes\cdots\otimes I^{(N-n)} \\ & \otimes M^{(N-n+1)} )\\ &=tr_{(N-n+1,\cdots,N)} (\rho I^{(1)}\otimes \cdots \otimes I^{(N-n)}\\ & \otimes M^{(N-n+1)}\otimes\cdots \otimes M^{(N)}) . \end{aligned} \end{equation} Likewise, it can be prove that $O^{[n]}$ is also Hermitian, and its dimension is $d^{(O)}= d^{(1)}\cdot d^{(2)}\cdots d^{(N-n)}=d^{(O')}\cdot d^{(M')}$, where $d^{(O')}$ and $d^{(M')}$ are the dimensions of the output $O^{(n+1)}$ and kernel $M^{(N-n)T}$ of next layer. In the same way, the output of the last layer also can be obtained \begin{equation}\label{eq:5} \begin{split} O^{[N]}=&\left(\left(\left(\rho \ast {_{}^{}}{M}{_{}^{(N)T}} \right)\ast {_{}^{}}{M}{_{}^{(N-1)T}}\right)\ast\cdots\right)\ast{_{}^{}}{M}{_{}^{(1)T}}\\ =&\langle {_{}^{}}{M}{_{}^{(1)}} \otimes {_{}^{}}{M}{_{}^{(2)}}\otimes \cdots \otimes {_{}^{}}{M}{_{}^{(N)}} \rangle . \end{split} \end{equation} Furthermore, considering that artificial neural networks are usually trained based on gradient descent, we prove that if the input and kernel of the convolutional layer are initialized as Hermitian matrixes, the gradient of the kernel will also be Hermitian. The calculation of kernel's gradient depends on the neural network error matrix from back propagation and the input of convolutional layer. We let $\delta^{[n]}$ be the error matrix whtich is propagated into the $n$-th convolutional layer. Its dimension always equal to dimension of $O^{[n]}$. Since the step size of the kernels here are equal to their dimensions, for $\forall n< N$, $\delta^{[n]}$ can be expressed as \begin{equation}\label{eq:6} \begin{split} \delta^{[n]}=\delta^{[n+1]} \otimes M^{(N-n)T}. \end{split} \end{equation} Because of the Hermitianity of $O^{[N]}$, the error $\delta^{[N]}$, which is propagated from the fully connected layer, must be a real number too. It means for $\forall n \leq N$, $\delta^{[n]}$ is Hermitian. Considering that, for $\forall n < N$, $d^{(O)}=d^{(O')}\cdot d^{(M')}$, so $O^{[n]}$ can be written as $\sum_{ij,kl}^{d^{(O')},d^{(M')}}{O_{ijkl}^{[n]}\ket{i}\bra{j}\otimes\ket{k}\bra{l}}$. Then according to the neural network back propagation theory, for $\forall n \leq N$, the kernel gradient is \begin{equation}\label{eq:7} \begin{split} \delta M^{(N-n+1)T}&=\sum_{ij,kl}^{d^{(O')},d^{(M')}}{O_{ij,kl}^{[n-1]} \ket{k}\bra{l}\otimes\ket{i}\bra{j}}\ast\delta^{[n]} \\ &=\sum_{kl}^{d^{(M')}}\sum_{ij}^{d^{(O')}}{O_{ij,kl}^{[n-1]} \bra{j}\delta^{[n]T} \ket{i} \ket{k}\bra{l}}. \end{split} \end{equation} Since $O^{[n-1]}$ and $\delta^{[n]}$ are Hermitian, so $\sum_{ij}{O_{ij,kl}^{[n-1]} \bra{j}\delta^{[n]T} \ket{i}}=( \sum_{ij}{O_{ij,lk}^{[n-1]} \bra{j}\delta^{[n]T} \ket{i}} )^\ast$, as well as, $\delta M^{(N-n+1)T}$ is also Hermitian. Therefore, in the process of updating based on gradient descent, the Hermitianity of the kernel will not change. 
So far, we prove that the observable operator of the discrete level system can indeed be regarded as a special convolution kernel, the convolutional layer can be used to calculate the average of observable operators, and these convolutional layers can naturally keep Hermitianity when trained by gradient-based optimization methods. \subsection*{Entanglement detection for 2-qubits state} \begin{figure*} \centering \includegraphics[width=6.5in]{BCNN.eps} \caption{\label{convolutional_pathes_and_BCNN} Branching convolutional neural network(BCNN). The input of the network is the density matrix $\rho$, and it goes through several independent convolutional paths (showed in red dotted box). Every convolutional path has two convolutional layers and each convolutional layer has several kernels. Each convolutional path outputs the average of all combinations of the kernels of its two convolutional layers. Then, we take the outputs of these convolutional paths as the input of fully connected layer to classify the entanglement of states. For Werner, G\uppercase\expandafter{\romannumeral1}-Werner and G\uppercase\expandafter{\romannumeral2}-Werner states, we use three fully connected layers. For general 2-qubits state, we add two fully connected layers in the former structure.} \end{figure*} Based on the content we introduced above, we devise the BCNN (depicted in Fig.\ref{convolutional_pathes_and_BCNN}) to classify the entanglement of 2-qubits state. BCNN consists of several convolution paths and the following fully connected layers. It can automatically find proper observable operators which can extract information needed for the training goal. Here, we use $(m; n_1, n_2)$ to describe the structure of convolutional paths, where $m$ means how many convolutional pathes the network has, $n_1$ and $n_2$ means there are two layers of convolutional layer in a convolutional path and they have $n_1$ and $n_2$ kernels respectively . After training, the trained observable operators can be obtained from these kernels. More details of BCNN is introduced in the section Methods\ref{Methods}. Our dataset consists of state density matrixes and corresponding entanglement labels. The labels are determinded by PPT criterion, which is necessary and sufficient for entanglement classification of $2\times 2$ and $2\times 3$ system \cite{necessary-sufficient}. Next, we will briefly introduce the quantum states we tested. The Werner state is \begin{equation}\label{eq:8} \rho=p \ket{\psi} \bra{\psi} + \frac{ \left( 1-p\right) I}{4}, \end{equation} where, $\ket{\psi}=\frac{1}{\sqrt{2}} \left( \ket{00}+\ket{11} \right)$, $p\in (0,1)$. It has only one free parameter $p$, when $p> \frac{1}{3}$ it is entangled \cite{Entanglement}. The first generalized Werner state which we called G\uppercase\expandafter{\romannumeral1}-Werner state is \begin{equation}\label{eq:9} \rho(\theta)=p\ket{\psi _\theta} \bra{\psi _\theta}+\frac{(1-p)I_A }{2} \otimes \rho_{B}^{\theta}, \end{equation} where, $\ket{\psi _\theta}=\cos\theta \ket{00}+\sin\theta \ket{11}$, $p\in (0,1)$, $\theta \in (0,2\pi)$, and ${_{}^{}}{\rho}{_{B}^{\theta}}=tr_A(\ket{\psi _\theta} \bra{\psi _\theta})$ is the reduced density matrix of the B system. The G\uppercase\expandafter{\romannumeral1}-Werner state has two free parameters $\theta$ and $p$, however its entanglement information only related to $p$. Like the Werner state, when $p> \frac{1}{3}$ it is entangled \cite{G53-Ren-multiple-classify}. 
The second generalized Werner state, which we call the G\uppercase\expandafter{\romannumeral2}-Werner state, is \begin{equation}\label{eq:10} \rho(\theta,\phi)=p\ket{\psi_{\theta,\phi}} \bra{\psi_{\theta,\phi}}+\frac{(1-p)I}{4}, \end{equation} where, $\ket{\psi_{\theta,\phi}}=\cos \frac{\theta}{2}\ket{00}+e^{i \phi} \sin \frac{\theta}{2}\ket{11}$, $p\in (0,1)$, $\theta \in (0,\pi)$, $\phi \in (0,2\pi)$. The G\uppercase\expandafter{\romannumeral2}-Werner state has three free parameters $\theta$, $p$ and $\phi$, but its entanglement information is only related to $\theta$ and $p$. It is entangled when $p> \frac{1}{(1+2\sin\theta)}$ \cite{G24-Ma}. Normally, it needs 15 observable operators to reconstruct a 2-qubits state density matrix. However, the number of free parameters of above three quantum states is less than 15, and that of parameters related to entanglement may be even less. In principle, if we can effectively extract and process the entanglement information, we can classify the entanglement of quantum states with the least resource consumption. \begin{figure*} \centering \includegraphics[width=5in]{G0G1G2werner_acuracys_and_errors.eps} \caption{\label{G0G1G2werner acuracys and errors} \textbf{(a),}The performance of BCNN for entanglement detection of Werner state, G\uppercase\expandafter{\romannumeral1}-Werner state and G\uppercase\expandafter{\romannumeral2}-Werner state. The accuracy increases with the number of observable operators. \textbf{(b),}The error distribution of the BCNN with 1 observable operators, when entanglement classification is performed on G\uppercase\expandafter{\romannumeral1}-Werner state. When $p> \frac{1}{3}$, the state is entangled. And the errors concentrate on the boundery of entanglement and separability. \textbf{(c)(d),}The error distribution of the BCNN with 2 and 3 observable operators, when entanglement classification is performed on G\uppercase\expandafter{\romannumeral2}-Werner state. We only drew the distribution when $\theta=0, 0.1\pi, \cdots, \pi$ for more clear view. The boundery of entanglement and separability is $p=\frac{1}{(1+2\sin\theta)}$. And the errors concentrate on the boundery and the area $\theta=0$ and $\pi$. } \end{figure*} We use the BCNN consisting of convolutional paths $(m\in \{1,2,3,4\}; n_1=1, n_2=1)$ and three fully connected layers to classify the entanglement of Werner state, G\uppercase\expandafter{\romannumeral1}-Werner state and G\uppercase\expandafter{\romannumeral2}-Werner state. The convolutional path uesd here has two convolutional layer, and each layer has just one kernel. It can train a observable operator and calculate its average. In practice, based on few observable operators, the BCNN can predict the entanglement of the these quantum states with high accuracy, whitch shown in Fig.\ref{G0G1G2werner acuracys and errors}(a). When classifying the entanglement of the Werner state, the accuracy of the BCNN achieve 99.7\% with only one observable operator $(m=1)$. For the G\uppercase\expandafter{\romannumeral1}-Werner state, FC has achieved 97\% accuracy with two selected observable operators \cite{G53-Ren-multiple-classify} and BCNN can achieve 99.8\% with only one observable operator $(m=1)$. For the G\uppercase\expandafter{\romannumeral2}-Werner state, BCNN can achieve 98.4\% with two observable operators $(m=2)$ and 99.6\% with three observable operators $(m=3)$, which is at the same level with the performance of FC with four selected observable operators \cite{G24-Ma}. 
(Compared with using a FC with four selected observable operators \cite{G24-Ma}, our results are about the same.) The error distributions of the BCNN are shown in Fig.\ref{G0G1G2werner acuracys and errors}(b-d). As we can see, the errors are concentrated on the boundary of entanglement and separability. Especially for G\uppercase\expandafter{\romannumeral2}-Werner state, the errors also occur when $\theta=0$ and $\pi$, whitch there are only separable states. The trained observable operators which used to extract the entanglement information can be acquired from the kernels. We show them in TABLE \ref{tab:1} and only keep two decimal places. \begin{table*} \centering \caption{\label{tab:1}Trained observable operators} \begin{tabular}{cccccccccc} \hline state & operator number & $X^{(1)}$ & $Y^{(1)}$ & $Z^{(1)}$ & $I^{(1)}$ & $X^{(2)}$ & $Y^{(2)}$ & $Z^{(2)}$ & $I^{(2)}$\\ \hline Werer & 1 & -1.26 & -0.97 & -1.40 & 0.61 & 0.49 & -0.18 & 1.63 & 0.62\\ \hline G\uppercase\expandafter{\romannumeral1}-Werner & 1 & -0.37 & 0.16 & 1.08 & 0.19 & -0.18 & 0.39 & 0.95 & -0.51\\ \hline \multirow{6}{*}{G\uppercase\expandafter{\romannumeral2}-Werner} & \multirow{2}{*}{2} & 0.04 & 0.17 & 0.49 & -1.57 & -0.05 & -0.22 & 1.00 & -0.06\\ \multirow{6}{*}{} & \multirow{2}{*}{} & -0.05 & -0.09 & 1.39 & 0.07 & 0.02 & 0.22 & 0.97 & 0.17\\ \\ \multirow{6}{*}{} & \multirow{3}{*}{3} & 2.71 & -0.27 & 0.50 & 0.56 & 2.40 & -0.96 & 0.48 & -0.53 \\ \multirow{6}{*}{} & \multirow{3}{*}{} & 0.71 & -1.23 & -1.33 & 1.11 & 0.95 & 0.11 & 0.83 & 0.69 \\ \multirow{6}{*}{} & \multirow{3}{*}{} & -0.20 & -0.64 & -0.30 & 0.10 & 3.38 & 0.29 & -0.25 & -0.08 \\ \hline \end{tabular} \end{table*} Finally, we apply BCNN to classify the entanglement of general 2-qubits state. For the state generation, we adopt the method of $\rho=\frac{\sigma \sigma^+}{tr(\sigma \sigma^+)}$, where $\sigma$ is a random complex matrix and keep the proportion of entangled states and separable states at 1:1. We show the performance of BCNN with three different convolutional path $(m=1; n_1=4, n_2=4)$, $(m\in [6, 15]; n_1=2, n_2=2)$ and $(m\in[6,15]; n_1=1, n_2=1)$ in Fig.\ref{general state accuracy error}. Since the general state is more complicated, we use five fully conneted layers in BCNN. For the structure $(m=1; n_1=4, n_2=4)$, we fix one kernel in each convolutional layer as the identity matrix, and other kernels are still trainable. The outputs of the convolutional path are the averages of 15 observable operators and a constant 1. In this case, the accuracy of BCNN can achieve 97.5\%. For $(m\in [6, 15]; n_1=2, n_2=2)$, we also fix one kernel in each convolutional layer as the identity matrix. Each convolutional path outputs 3 observable operator averages and a constant 1. When $m\geq 9$, the convolutional paths are able to get all the information about the quantum state, and the accuracy of BCNN can be higher than 96.0\%. For the structure $(m\in[6,15]; n_1=1, n_2=1)$, each convolutional path computes the average of just one observable operator. Therefore, with the increase of $m$, the accuracy of this structure raises lowest. And the BCNN still need 15 convolutional paths to extract all quantum state information and its accuracy can achieve 97.2\%. In Fig.\ref{general state accuracy error}(b), we show the error distribution of BCNN with three above convolutional paths when they are just able to extract all the information about quantum state. There only a few errors occur symmetrically around the boundary of entanglement and separability. 
As long as the structure of the convolutional paths allows all information to be extracted, the BCNN can train appropriate observable operators and detect the entanglement of a general quantum state with high accuracy, which is comparable to the results in article \cite{G24-Ma}. \begin{figure*} \centering \includegraphics[width=6.5in]{general_state_accuracy_and_error.eps} \caption{\label{general state accuracy error} \textbf{(a),}The performance of BCNN for entanglement detection of 2-qubits general state. The accuracy almost increases linearly with the number of observable operators. \textbf{(b),}The error distribution of the BCNN with 15 observable operators, when entanglement classification is performed on 2-qubits general state. The horizontal axis is the minimum eigenvalue $\lambda_{min}$. The error concentrates on the boundery of entanglement and separability. The error distribution is symmetric about the boundary, so our prediction is unbiased.} \end{figure*} \section*{Discussion} In this work, we prove the observable operator of discrete-level systems is a convolution kernel, which means that the convolutional layer of artificial neural network can accurately calculate the average value of observable operator in quantum state, and that the Hermiticity of the convolutional layer can be maintained with the optimization algorithm based on gradient descent. With the foundation of above, we propose a BCNN, which can obtain well-trained observable operators to efficiently extract entanglement information and classify entanglement. We classify the entanglement of 2-qubits states, and studied the accuracy and the error distribution of BCNN. We believe that CNN will be a promising tool for quantum physics. In our work, for Werner state, G\uppercase\expandafter{\romannumeral1}-Werner state and G\uppercase\expandafter{\romannumeral2}-Werner state, it can achieve 99.7\%, 99.8\%, and 98.4\% respectively when the numbers of observable operators are same with that of parameters related to entanglement. It is superior to previous work in reducing resource consumption. For general 2-qubits state, our model still needs 15 observable operators to achieve the accuracy of 97.2\%. In addition, we can extract the trained observable operators from kernels. These observable operators can be rewritten as the sum of the orthogonal normalization operators. We only keep two decimal places of the coefficient and input them into nerual network to test again, and find that the accuracy of the BCNN almost keep the original level. Since the convolutional layer can accurately calculate the average value of observable operator in quantum state, and this property applies to any dimension discrete-level system. It may be a powerful tool for solving measurement direction of the maximum violation of Bell inequality. We believe this property is likely to be used in other research and can bring new inspiration to the understanding of the relationship between quantum physics and artificial neural networks. 
\section*{\label{Methods}METHODS} \begin{table*} \centering \caption{\label{tab:2}Adam parameters} \begin{tabular}{l|cccccc} \hline state & operates number & lr & $\beta_1$ & $\beta_2$ & batch size & epoches\\ \hline Werer & 1-15 & 0.001 & 0.9 & 0.99 & 10 & 10\\ \hline G\uppercase\expandafter{\romannumeral1}-Werner & 1-15 & 0.001 & 0.35 & 0.99 & 10 & 10\\ \hline \multirow{3}{*}{G\uppercase\expandafter{\romannumeral2}-Werner}& 1 & 0.001 & 0.5 & 0.9 & 10 & 10\\ \multirow{3}{*}{}& 2 & 0.001 & 0.9 & 0.99 & 200 & 30\\ \multirow{3}{*}{}& 3-15 & 0.001 & 0.375 & 0.99 & 10 & 10\\ \hline \multirow{8}{*}{General} & 8 & 0.0003 & 0.325 & 0.825 & 400 & 20\\ \multirow{8}{*}{} & 9 & 0.0003 & 0.325 & 0.85 & 400 & 20\\ \multirow{8}{*}{} & 10 & 0.0003 & 0.325 & 0.87 & 400 & 20\\ \multirow{8}{*}{} & 11 & 0.0003 & 0.325 & 0.9 & 400 & 20\\ \multirow{8}{*}{} & 12 & 0.0003 & 0.325 & 0.95 & 400 & 20\\ \multirow{8}{*}{} & 13 & 0.0003 & 0.325 & 0.925 & 400 & 20\\ \multirow{8}{*}{} & 14 & 0.0003 & 0.325 & 0.925 & 400 & 20\\ \multirow{8}{*}{} & 15 & 0.0003 & 0.325 & 0.975 & 400 & 20\\ \hline \end{tabular} \end{table*} The BCNN has several independent convolutional paths and fully connected layers. A convolutional path has several convolutional layers. The kernel of each convolutional layer represents the transpose of a observable operator acting on the system or subsystem. Therefore their convolutional layers should not have activation functions and biases. Each convolutional layer outputs all the averages of all combinations of the kernels of its convolutional layers. Then, we take the outputs of the convolutional paths as the input of fully connected layers for entanglement detection. And the structure of convolutional paths and fully connected layers should be adjusted according to the task. In this work, we use the BCNN consists of the convolutional paths $(m; n_1=1,n_2=1)$ and three fully connected layers with the structure $(\alpha,1024,1)$ to detect the entanglement of Wenrer state, G\uppercase\expandafter{\romannumeral1}-Werner state and G\uppercase\expandafter{\romannumeral2}-Werner state. For general 2-qubits state, we test the BCNN consists of one of three different convolutional paths $(m; n_1=4, n_2=4)$, $(m; n_1=2, n_2=2)$ or $(m; n_1=1, n_2=1)$, and five fully connected layers ($\alpha$,1024,1024,1024,1). The $\alpha$ is the number of the input nodes of the first fully connected layer, which is related to the structure of convolutional paths. The first layer has no activation functions and bias. The final layer's activation function is sigmoid and the loss function is cross entropy \cite{cross-entropy}. For other layers, we take Relu \cite{relu} as the activation function. We use Adam \cite{2014Adam} as our optimizer to make it more likely to cross the saddle point and local minimum. We did not use the default recommended parameters of Adam, but adjust them according to the quantum state and the number of convolutional paths. Our adam parameter settings are listed in TABLE \ref{tab:2} for reference. In training process, we takes state density matrix as the input, and the kernels are initialized to a random Hermitian matrix. According to the features of the quantum state and the structure of convolutional paths, BCNN can automatically find appropriate observable operators for training task. In test process, we can directly calculate the average value of these trained observable operators, and input them into the following fully connected layers to detect entanglement. 
Of course, based on Eq.(\ref{eq:1}), single convolutional layer can be used to find global observable operators for entanglement detection, but the same task can already be completed by FC \cite{2020Finding}. Therefore, in this article, we will focus on using two convolutional layers to express the product observable operator.
2024-02-18T23:40:55.021Z
2022-05-31T02:04:49.000Z
algebraic_stack_train_0000
3,677
5,028
proofpile-arXiv_066-1939
\section{Introduction} Machine learning models have achieved remarkable results in critical care \citep{hyland2020early} but most models are evaluated on data similar to what they have been trained with. Common evaluation practices involve splitting the training and testing sets based on some criteria, ranging from completely random sampling to group-based cross-validation, especially when working with human-centered applications. Still, deployed models in healthcare tend to perform worse when compared to the training phase, due to the dissimilarity between training (in-distribution) data and data that the model is applied to after deployment \citep{castro2020causality, johnson2018generalizability}. Multiple factors can influence the deployment performance of a clinical model. Observing differences on withheld data from previous or later years constitutes a \textit{temporal shift} and has found to be an important factor of performance drop \citep{guo2022evaluation, nestor2019feature}. This can be due to various reasons: new types of data collection devices or terminologies (ICD-9 vs ICD-10 codes), internal changes in variable definitions when adopting a new Electronic Health Records (EHR) platform, changes in disease incidence (e.g., COVID-19 prevalence), or changing demographics (e.g., through hospital mergers) \citep{finlayson2021clinician}. Machine learning models are known to struggle to generalize OoD, failing in unexpected ways when tested outside of the training samples domain. For instance, self-driving cars are affected by variations in light or weather \citep{dai2018dark}. This undesirable behaviour can be attributed to inherent limitations of neural networks trained on pooled data that learn easier-to-fit spurious correlations instead of the causal factors of variation in data. A notable example are vision models focusing on the background of the image instead of the respective object (cows in `common' contexts such as alpine pastures are detected correctly, while cows in uncommon contexts such as the beach are not) \citep{beery2018recognition}. Therefore, the unpredictable behaviour of machine learning models given OoD inputs constitutes a significant obstacle to their deployment in critical applications such as in healthcare. Recent methods attempt to address these issues. In particular, the area of `domain generalization' assumes access to multiple `environments' during training, each of them containing samples in different conditions. A successful model should learn the invariances across these environments that will then generalize to held-out test domains. However, despite the introduction of multiple domain generalization methods \citep{arjovsky2019invariant, li2018learning}, there is evidence that they do not outperform traditional domain-unaware training \citep{gulrajani2020search}. A key challenge in this area is that domain generalization methods typically make strong assumptions about the nature of the dataset shift. When these assumptions are violated, it is not surprising that methods may not demonstrate improved performance. However, it is not always clear whether such assumptions can be applied to \emph{real-world} dataset shifts. One of the first works to apply domain generalization methods to critical care, concluded that the selected environment was \textit{not} OoD and instead resorted to inducing artificial shifts such as adding noise or resampling \citep{zhang2021empirical}. 
It is not clear however, whether these environments reflect real distribution shifts and can generalize in different hospital settings. In this work, we take a step back and hypothesize that without "true" OoD environments, the evaluation of distribution shift and domain generalization methods is not well founded. Therefore, we investigate realistic scenarios of OoD detection in critical care, using the eICU Database as a case study. In particular, we propose a model-based method to identify hospitals that perform poorly when used in evaluation (held-out) mode. Through Leave One Hospital Out (LoHo) training, we select potential OoD hospitals given an empirical threshold which corresponds to the delta between Out-of-domain test and In-domain test score. We argue that in order for an environment to be considered OoD, models with access to the "local" hospital data should outperform those trained with the rest of hospitals. Additionally, we generate alternative candidate environments based on gender, age, and their combinations, using the same hypothesis that models with access to a unique characteristic (e.g. old men) should outperform those trained with the rest of the demographics. For the above methods, we test 3 models with different levels of access to environments: only training, "local" candidate OoD, and all environments, which allows us to assess the `OoD-ness' of environment. In short, this paper makes the following contributions: \begin{itemize} \item We take a principled approach with regards to identifying OoD environments in real-world critical care data by proposing a model-based leave-one-hospital-out experiment and cross-sectional feature splits. \item We identify a set of potential OoD hospitals and analyze their characteristics in various dimensions including region, size, demographics, and teaching status, findings which could help future works using the eICU database. \item Then, we conduct extensive experiments with models on three different levels of access to OoD environments and assess the impact of data subsampling and model size in the task of mortality prediction. \item Despite the varying levels of performance drop in the first step of OoD candidate generation, we show that there is no significant performance improvement when using data from the OoD environment(s), which motivates further research on the suitability of this benchmark dataset for evaluating robust clinical models. \end{itemize} \section{Related work} Despite the abundance of machine learning benchmarks for most data types, there are considerably fewer benchmarks to evaluate OoD detection models. Recent attempts to evaluate models on different environments include \textit{DomainBed} and \textit{WILDS} \citep{gulrajani2020search, koh2021wilds}, which have curated datasets ranging from textual data, to satellite and medical images. However, there is limited work on critical care and electronic health record benchmarks. Due to this lack of standard OoD benchmarks, recent works introduce synthetic shifts to evaluate the generalization capabilities of models in unseen domains. The most relevant work to ours compared domain generalization models to traditional empirical risk minimization (ERM), using the eICU database \citep{zhang2021empirical}. One of the five available hospital regions ("South" region) was selected as OoD environment because its demographics appear to be the most distinct (mainly ethnicity). 
However, according to the authors, the performance of ERM on the eICU test set was on-par with the "oracles" that have access to this environment, indicating that this environment is likely not OoD. To overcome this limitation in this real-world dataset, the authors proposed a set of synthetic domain shifts such as noise-corrupted labels, feature-correlated corrupted labels, and biased subsampling. These artificial shifts were empirically found to be OoD, but the impact of domain generalization (DG) models was not significant over ERM. We posit that it is challenging to deconfound whether this is attributed to the models or the selected environments. Another important factor here is that even \emph{if} the DG methods had improved over ERM, because these were 'synthetic' OoD environments, it is not a given that those would work for real-world shifts. We should note that experiments with medical imaging data showed that OoD environments are easier to find but ERM still performs equally well. However, given our focus on critical care data, we believe that there is a research gap in appropriate benchmarks in this area and we build upon their framework by exploring more environments in a principled model-based approach. A similar sentiment is echoed with the MIMIC database \citep{johnson2016mimic}, where studies on temporal shifts have been inconclusive. We note that MIMIC contains EHR data from 2008 to 2019. A common experimental setup is to train models on a snapshot of data and evaluate them on subsequent years. However, domain generalization models struggle to outperform domain-unaware models in EHR tasks including mortality, sepsis, invasive ventilation, and length of stay \citep{guo2022evaluation}. More importantly, this study compared models trained on each individual year to a single model trained on the first snapshot and tested on the subsequent years. The findings hint that temporal dataset shift was not detected in three out of four tasks, where only the Sepsis task showed significant differences. This motivates the problem of a principled OoD identification method which is the main focus of our work. Other works focus on coming up with OoD environments through exclusion criteria that are medically meaningful. This involves excluding groups based on demographics, splitting features related to a dynamic clinical status, or artificially creating OoD groups by withholding them during training \citep{zadorozhny2021out}. We extend this line of work by proposing combinatorial criteria (e.g. both gender and age). However, we attempt to identify natural OoD environments, whereas Zadorozhny and colleagues' scope didn't include comparisons to assess the OoD-ness of the environments. Its focus was on benchmarking density estimation models in the AmsterdamUMC Database, which includes a mortality task like in our case \citep{zadorozhny2021out}. Another challenge in evaluating predictive models in clinical care is comparing across different medical centers. As discussed above, most popular datasets such as MIMIC are single-center whereas the eICU Database spans more than 200 hospitals \citep{pollard2018eicu}. A systematic comparison study applied cross-validation across different hospitals in the eICU and assessed how well EHR models transfer to held out hospitals as compared to locally developed models \citep{johnson2018generalizability}. We incorporate a similar cross-validation scenario to identify OOD hospitals and assess transferability. 
Also, the recalibration process involved transferring the feature scalers (ranges of vitals etc.) to other hospitals, whereas more recent approaches applied pre-training and fine-tuning but there were no experiments across hospitals \citep{mcdermott2021comprehensive}. \section{Methods} \begin{figure} \centering \includegraphics[width=0.5\textwidth]{figures/hospital_splits.pdf} \caption{Illustration of the data splitting strategy. Each environment (e.g. hospital) is assigned to a single \textit{vertical } training/validation/test set. Within each set there are further \textit{horizontal} train/validation/test splits to enable in-domain and out-of-domain comparisons.} \label{fig:splits} \end{figure} Following the notation introduced by Zhang et al. \citep{zhang2021empirical}, we assume labelled data $ \{(x_i^e,y_i^e )\}_{i=1}^n $, with $n$ examples in total, sourced from distinct training environments $e$. The goal of a robust predictor $f$ is to minimize the risk $R^e(f)$ (or loss) across all possible environments. This is usually assessed by reporting a performance metric in unseen test environment(s) $R^{e_{test}}(f)$ and associated validation environments $R^{e_{val}}(f)$. \subsection{Training scenarios} \label{training_scenarios} We compare the performance of three models with different levels of access to unseen OoD environments. The first one is traditional training and we report the performance of two "\textit{oracles}" which have access to the test environment(s) during training: \begin{itemize} \item \textbf{ERM} (Empirical Risk Minimization): traditional training where data from all training environments are pooled together. \item \textbf{ERMID}: an ERM model trained on the training split of the test environment(s). Assuming sufficient data from the test environment, we would expect this model to perform well, as it does not suffer from distribution shift. \item \textbf{ERMMerged}: an ERM model trained on the combination of all environments: training \textit{and} test environment(s). This model can be seen as an upper bound of performance when one has access to all data, which is an unrealistic scenario. \end{itemize} The difference in performance between the oracles and the ERM model is a proxy measure of how distinct the new environment is from the training environments, or in other words, a measure of OoD-ness of the environment. \subsection{Environment splits} \label{envsplits} We consider the following environment splits to evaluate the impact of the model selection to the generalization performance. We designate disjoint training, validation, and test splits, and each environment is assigned to one split (also see Figure \ref{fig:splits}). Within each split, there are three train/validation/test sets. In other words, every environment (hospital)\footnote{An environment doesn't \emph{always} align with hospital, however in most cases in this work --except from Section \ref{envs_demographics}--, an environment is a hospital.} belongs to only one split and is further split to three sets. The data from the validation environment is used only for early stopping and hyperparameter tuning as needed. Each model employs the following configuration: \begin{itemize} \item \textbf{Training only}: \textit{ERM} uses the pooled training splits of the training environments, the validation split of the validation environment(s) for model selection, and the testing split of the test environment(s) for OoD generalization. 
\item \textbf{Testing only}: \textit{ERMID} uses the training split of the test environment, the validation split of the test environment for model selection, and the testing split of the test environment for OoD generalization. \item \textbf{All environments}: \textit{ERMMerged} uses the pooled training splits of the combined training, validation and test environments (all), the combined validation splits of all environments, and the testing split of the test environment(s) for OoD generalization. \end{itemize} Note that the test split of the test environment is held out from all models, and can be used to compare performance. \subsection{Model-based OoD environment identification} Motivated by the problem of cross-hospital transfer of clinical models, we design a model-based OoD environment identification approach. To this end, we use model performance to evaluate OoDness. This is based on the observation that the model can be a good reflection of the underlying training data it has been trained on, including the non-linear interactions of multiple features that could be impossible to capture with bespoke exclusion criteria. Considering that it is impossible to know beforehand which environment is going to be OoD, we propose a Leave-One-Hospital-Out (LOHO) training setup to conduct an exhaustive search over possible environments. This way, we move away from single exclusion criteria (e.g. patients with high blood pressure), and instead we use the model performance as proxy for OoDness. To facilitate this, we are considering unique hospitals as candidates for those environments. Specifically, we iterate through all $m$ hospitals, treating each one as the test environment in turn. We use the remaining $m-1$ hospitals as the train environment, training $m$ predictors in a leave-one-out fashion. For each trained model, we compute the Out-of-Domain test performance ($P_{\text{out-domain}}$); computed on the test set of the test split (unseen data from the unseen environment) as well as the In-domain test performance ($P_{\text{in-domain}}$);, computed on the test set of the train split (unseen data from the same environment(s)). We consider the difference between these as indicative of the `OoD-ness' of the left-out hospital (test environment). We can then rank each hospital according to this difference: \begin{equation} P_{rank}^{m} = P_{in-domain} - P_{out-domain} \end{equation} A threshold $T$ is applied to $P$ as a cutoff value to select candidate environments. In practical terms, $T$ can be a quantile cutoff of the environments' distribution and its value can be traded off with the size of the resulting test set. We should take into account that in most ML experiments, the test set corresponds commonly to 20\% of the available data. Algorithm~\ref{algorithm} describes the steps of the model-based OoD environment detection approach. \begin{algorithm2e} \SetAlgoLined \SetKwInOut{Input}{Input} \SetKwInOut{Output}{Output} \Input{$ \{(x_i^e,y_i^e )\}_{i=1}^n $ (input data), $\{e_m\}_{m=1}^{|E|}$ (environments), $T$ (threshold), $P$ (env performance measure)} \Output{ $\{f^{E\setminus m}\}_{m=1}^{|E|}$ (predictors), $e_m' \subseteq e_m$ (candidate envs.) 
} \For{$e_m$ in $\{e_1, \dots, e_{|E|}\}$}{ train predictor $f^{E\setminus m}$ on environment $E \setminus m$ (with $ \{(x_i^{e},y_i^{e} )\}_{i=1; e \neq e_m}^{n} $)\; evaluate predictor based on performance measure $P$, to obtain $P^m_{\text{in-domain}}$ (on environment $E \setminus m$), and $P^m_{\text{out-domain}}$ (on environment $m$)\; compute $P_{rank}^m = P^m_{in-domain} - P^m_{out-domain}$\; } rank predictors $f^{E \setminus m}$ by $P_{rank}^m$\; apply threshold $T$ to $P_{rank}$ and generate candidate list $e_m'$ \; set $e_m'$ environments as test set for OoD validation\; \caption{Model-based OoD identification} \label{algorithm} \end{algorithm2e} The test and validation environments come from the thresholding step of Algorithm \ref{algorithm}. The intuition behind this choice is that validation environments should show OoD qualities, similar to those on the test set. We discuss more about these parameters in the next sections. \subsection{Comparing models on equal terms} Considering the different environment splitting strategies we discussed in \ref{envsplits}, we acknowledge that ERMID is typically trained on significantly less data compared to ERM and ERMMerged. ERMID has only access to the candidate OoD environment, or in the case of critical care, the local hospital(s). Machine learning models tend to perform better when trained on more data, so it would be difficult to assess whether potential performance gains are due to the choice of the environment or merely because the model has seen more examples. To compare models on equal terms we must control for the training data set size. This would entail subsampling the datasets used with ERM and ERMMerged. A first approach would be to apply naive subsampling to match the size of ERMID with that of ERM \& ERMMerged. However, by doing so, there is the possibility of discarding data from entire environments. To mitigate that, we randomly select data within each environment (e.g. hospital) so that they all add up to match the size of ERMID. \section{Data \& Experimental setup} The eICU Database\footnote{officially known as the eICU collaborative research database V2.0 \citep{pollard2018eicu}. In this work we call it \textit{eICU} for brevity.} includes data from patients in the Intensive Care Unit (ICU) spanning over 200,000 admissions from over 200 hospitals in the United States. The ICU environment is information-rich and the literature has sliced this dataset in various ways to explore many different research questions \citep{sjoding2020racial, rocheteau2021temporal}. Given its large number of hospitals, we use it as our testbed for OoD environment identification. \subsection{Mortality prediction} In this work, we focus on mortality prediction, borrowing the experimental setup and cohort selection as defined in previous works \citep{zhang2021empirical, sheikhalishahi2020benchmarking}. This is a binary prediction task which aims to forecast ahead of time whether a patient will die while in hospital, using data from the first 48 hours of the stay. We include patients who are between 18-89 years old and who are still alive after the first 48 hours. For those with multiple ICU stays, only the first stay is included. Each patient provides timeseries and static features. There are 10 continuous and 4 categorical timeseries features, as well as 3 numerical and 2 categorical static features. The observations are resampled to 1-hour windows and missing values are imputed from the previous observation with forward filling. 
The resulting dataset has 30,691 patients from 208 hospitals. To ensure there are enough datapoints per environment, we further exclude hospitals with fewer than 50 patient stays, ending up with 29,082 patients from 127 hospitals. Each hospital belongs to one of the following US regions: Midwest, West, Northeast, South, and a placeholder `Missing' region where no region is recorded.
\subsection{Model architecture}
To allow for comparisons with previous work \citep{zhang2021empirical}, we employ the same model architecture, which consists of bidirectional Gated Recurrent Unit (GRU) \citep{chung2014empirical} layers followed by a linear classification layer with a 2-unit output. Each categorical variable passes through its own embedding layer, and standard scaling is applied to the continuous features. The GRU layers receive the timeseries features along with the static features appended at each timestep. The model hyperparameters for our experiments are provided in Appendix \ref{appendix}.
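The following is a minimal PyTorch sketch of this architecture. The layer sizes, embedding dimensions, and input conventions are hypothetical placeholders (the actual hyperparameters are listed in Appendix \ref{appendix}), and the static features are assumed to have already been appended to the continuous inputs at each timestep.

\begin{verbatim}
import torch
import torch.nn as nn

class MortalityGRU(nn.Module):
    """Bidirectional GRU encoder with a 2-unit linear head.
    All sizes below are illustrative, not the tuned values."""
    def __init__(self, n_continuous, cat_cardinalities,
                 emb_dim=8, hidden=64, num_layers=2):
        super().__init__()
        # One embedding layer per categorical variable.
        self.embeddings = nn.ModuleList(
            nn.Embedding(card, emb_dim) for card in cat_cardinalities)
        input_dim = n_continuous + emb_dim * len(cat_cardinalities)
        self.gru = nn.GRU(input_dim, hidden, num_layers=num_layers,
                          batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden, 2)  # 2-unit output

    def forward(self, x_cont, x_cat):
        # x_cont: (batch, time, n_continuous), standard-scaled,
        #         with static features appended at each timestep.
        # x_cat:  (batch, time, n_categorical) integer codes.
        embs = [emb(x_cat[..., i])
                for i, emb in enumerate(self.embeddings)]
        x = torch.cat([x_cont] + embs, dim=-1)
        out, _ = self.gru(x)          # (batch, time, 2 * hidden)
        return self.head(out[:, -1])  # logits at the last timestep
\end{verbatim}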
\subsection{Assigning OoD candidates to splits}
After applying Algorithm \ref{algorithm} to the data, we end up with a candidate list containing the hospital IDs sorted by $P_{\text{rank}}$, with the largest values (i.e., the largest gap between in-domain and out-of-domain performance) marking the strongest OoD candidates. To assess whether these hospitals are actually OoD, we conduct experiments using the training scenarios described in Section~\ref{training_scenarios}. To this end, we assign hospitals to train/validation/test sets. We argue that the hospitals on which the leave-one-out models generalize worst, in terms of $P_{\text{rank}}$, should go to the test and validation sets. In practice, we use the 20\% of hospitals with the largest in-domain/out-of-domain gap in the $P_{\text{rank}}$ distribution to generate the OoD candidate list. Out of those hospitals, we assign the largest ones to the test set and the rest to the validation set, approximating a final split ratio of 85\%/10\%/5\% across the three sets, which is common in ML experiments. The full environment list is available in Appendix \ref{appendix}.
\subsection{Evaluation}
Given the binary prediction task, we employ the Area Under the Receiver Operating Characteristic curve (AUC-ROC), which can handle class imbalance; we note that the original dataset shows an 11\% mortality rate (positive class) \citep{zhang2021empirical}. We train for up to 100 epochs, stopping early if the validation AUC does not improve for 7 consecutive epochs. Unlike previous work, we employ bootstrapping by sampling with replacement on the test set (500 repetitions) to calculate 95\% confidence intervals.
\begin{figure*}
\centering
\includegraphics[width=1\textwidth]{figures/LOO-hospital-ranking.pdf}
\caption{Ranking all hospitals with Leave-One-Hospital-Out (LOHO) training. On the left tail we can see the mismatch between the in-domain test ROC (blue) and the out-of-domain test ROC (orange). These hospitals are OoD candidates because they show a significant performance drop when tested out of domain. Darker blue and orange lines denote 95\% bootstrap confidence intervals. }
\label{fig:LOOranking}
\end{figure*}
\section{Results}
\subsection{Ranking all environments}
After applying Algorithm \ref{algorithm} to eICU as described in the previous sections, we end up with a ranking of all hospitals, sorted from the highest $P_{\text{rank}}$ to the lowest. In other words, the top-ranked hospitals have high in-domain but low out-of-domain performance. In Figure \ref{fig:LOOranking}, these hospitals appear on the left tail of the ranking along with their IDs. We note that there are hospitals on the right tail of the ranking whose out-of-domain performance exceeds their in-domain performance. These are mainly smaller hospitals with a highly imbalanced mortality rate, which distorts the evaluation metric. Similarly, some hospitals on the left tail exhibit out-of-domain AUCs near or below 0.5, reflecting metric instability. We also observe that the confidence intervals on the test set are significantly broader than those on the in-domain set, which is more stable since it comes from the same environment(s). This is also because the in-domain test set is considerably bigger, so bootstrapping has lower variance. In Figure \ref{fig:splits_scatterplot} we show $P_{\text{rank}}$ as a function of hospital size (datapoints per hospital), which shows that none of the few large hospitals appears to be an OoD environment, whereas almost all potential OoD environments are concentrated among smaller hospitals with between 100 and 400 datapoints.
\begin{figure}
\includegraphics[width=\columnwidth]{figures/data_splits_Delta.pdf}
\caption{ Scatterplot of the assigned environment splits as a function of the performance delta between out-of-domain and in-domain AUC in the LOHO experiment. The dashed line denotes where the two metrics are equal (in-domain = out-of-domain). }
\label{fig:splits_scatterplot}
\end{figure}
\subsection{The characteristics of the worst-performing hospitals}
In Figure \ref{fig:aggregate} we aggregate the LOHO results and group the hospitals by region, teaching status, and number of beds. The first observation is that there are notable differences between the regions. In particular, hospitals in the `Missing' and `Midwest' regions tend to have lower test performance and therefore are potential OoD environments. On the other hand, the `Northeast' region shows the highest performance, followed by `West' and `South'. This result is also consistent with the inconclusive findings of previous studies using the `South' region as OoD \citep{zhang2021empirical}. Based on our LOHO approach, \textit{we recommend that future studies employing region-level environments focus on the `Missing' and `Midwest' environments}\footnote{Our focus is on hospital-level rather than region-level environments and therefore we do not explore this direction.}. Beyond regions, we observe smaller differences by teaching status, with non-teaching hospitals performing worse. Lastly, we see that hospitals with 250-499 beds tend to perform worst, followed by those with 100-249 beds. It is noteworthy that hospitals with fewer than 100 beds tend to perform better than every other category, pointing to non-linear relationships between the number of patient stays (or datapoints in Figure \ref{fig:splits_scatterplot}), the number of beds, and OoD performance. This relationship should be investigated further in future work. However, another observation across these groupings is that the number of hospitals seems to explain some of these differences: for example, the `Northeast' region features only 13 hospitals, and there are only 8 hospitals with fewer than 100 beds. Bigger groups (e.g. 250-499 beds) tend to include more hospitals, which predictably introduces more noise and more diverse patient demographics, both of which impact test performance. Demographics play an important role in the performance of clinical models \citep{finlayson2021clinician}, and therefore we investigate whether age or gender explain these differences. In Figure \ref{fig:demographics_scatterplot}, we present scatterplots of each hospital's average gender imbalance and age versus the test AUC.
We fit linear regression models to illustrate these trends. We observe no trend between OoD test performance and gender imbalance. For age, we observe a weak negative trend between test AUC and average age: hospitals with older patients tend to perform worse out of domain. In Figure \ref{fig:demographics_scatterplot}, we also plot the in-domain (test split of the training set) versus the out-of-domain (test split of the test set) performance at the hospital level. We observe a moderate negative trend: hospitals with higher in-domain AUC tend to have lower out-of-domain AUC, pointing to overfitting. In other words, these hospitals' models over-rely on the training data and struggle to generalize to new settings. We should note, though, that the in-domain AUC has a very limited range (0.84-0.88) compared to the out-of-domain AUC, which could confound this trend. This limited range is expected: since we only leave out one hospital at a time, the train environment, and hence the in-domain test set, is almost identical across runs.
\begin{figure*}
\centering
\includegraphics[width=.35\textwidth]{figures/regions_aggregate_AUC.pdf}\hfill
\includegraphics[width=.28\textwidth]{figures/teaching_aggregate_AUC.pdf}\hfill
\includegraphics[width=.35\textwidth]{figures/beds_aggregate_AUC.pdf}
\caption{Grouped performance of hospitals after applying the LOHO procedure. Instead of looking at individual OoD hospitals, here we examine whether region, teaching status, and number of beds affect OoD performance.}
\label{fig:aggregate}
\end{figure*}
\subsection{Are these environments really OoD?}
After assigning the candidate hospitals to the three splits, we train the ERM, ERMMerged, and ERMID models to investigate whether access to the test environments impacts OoD generalization. The statistics of the sets are provided in Table \ref{tab:sets_statistics}. Since the ERM and ERMMerged models have access to more environments, they are trained on more data, which could potentially bias the comparison. On that account, we evaluate an imbalanced and a resampled variant. Despite the test environment being putatively OoD according to the LOHO approach, both subsampling variants show no \emph{significant} difference across the three models (Table \ref{tab:resampling}). Predictably, the resampled experiment yields lower AUCs, given that all models have access to an order of magnitude fewer datapoints (data sizes available in Appendix \ref{appendix}). Focusing on the resampled variant, which we consider the fairer setup, we also investigate the impact of model overparameterization. We evaluate two model variants by scaling down the model size (reducing the GRU layers and units). More details about the networks are available in Appendix \ref{appendix}. In Table \ref{tab:param}, we see that reducing the network size leads to improved AUC, slightly outperforming the Imbalanced variant of Table \ref{tab:resampling}. In other words, reducing model complexity compensates for the smaller training set size of ERM and ERMMerged. Considering the broad confidence intervals derived from bootstrapping in the results described above, we also estimated variance by changing the random seed of training and model initialization, reporting mean and standard deviation. This experiment yielded an AUC of 0.62 ($\pm$0.01) for ERM, 0.61 ($\pm$0.04) for ERMMerged, and 0.58 ($\pm$0.05) for ERMID. In all cases, ERMID underperforms the other models, hinting that this test environment is not OoD.
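For reference, the bootstrap confidence intervals reported throughout can be computed as sketched below. This is a minimal sketch assuming arrays of test labels and predicted scores; the variable names are hypothetical, and 500 resamples are used as in our evaluation.

\begin{verbatim}
import numpy as np
from sklearn.metrics import roc_auc_score

def bootstrap_auc_ci(y_true, y_score, n_boot=500,
                     alpha=0.05, seed=0):
    """95% CI for the test AUC by resampling the test set
    with replacement."""
    rng = np.random.default_rng(seed)
    y_true = np.asarray(y_true)
    y_score = np.asarray(y_score)
    aucs = []
    for _ in range(n_boot):
        idx = rng.integers(0, len(y_true), size=len(y_true))
        if len(np.unique(y_true[idx])) < 2:  # need both classes
            continue
        aucs.append(roc_auc_score(y_true[idx], y_score[idx]))
    lo, hi = np.quantile(aucs, [alpha / 2, 1 - alpha / 2])
    return roc_auc_score(y_true, y_score), (lo, hi)
\end{verbatim}

On small test environments the resamples vary substantially, which is exactly the source of the broad intervals discussed above.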
\section{Further analysis}
\subsection{Cross-sectional features for OoD environment identification}
\label{envs_demographics}
As an alternative to the model-based leave-one-hospital-out approach, we explored a method based on cross-sectional features. As discussed earlier, previous approaches focused on using single demographic features as exclusion criteria \citep{zadorozhny2021out}. However, by simply training on males and testing on females, for example, we cannot assess the ability of domain generalization algorithms to learn invariances across \emph{multiple} environments. To mitigate this, we take continuous and categorical features and calculate cross-sectional quantiles across the entire dataset. The hypothesis here is that specific cross-sections (e.g. young women) will show higher degrees of OoDness. First, we start with a single feature (age), splitting it into ten quantiles of almost 3,000 patients each. Using the two oldest quantiles as the test set, ERMID again underperforms ERM and ERMMerged, indicating that this environment is not OoD. Similar results are obtained with the youngest quantile. To create more complex and numerous environments, we then explored cross-sectional feature environments by intersecting continuous and categorical features (e.g. age and gender) to produce equally-sized quantiles. We split the patient stays into twenty cross-sectional age and gender quantiles, where each quantile has almost 1,500 patients. Following a resampling procedure similar to the previous section, we found that when using `young women' as the test set, the ERMID model slightly outperforms the other models (0.82 [0.78-0.86], versus 0.81 [0.77-0.85] for ERM and 0.81 [0.77-0.86] for ERMMerged), which, given the CI overlaps, must however be considered marginal. The `old women' environment did not show any OoD properties.
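A sketch of this environment construction is given below, assuming a patient-level pandas DataFrame. The column names are illustrative, and we assume the twenty environments arise from ten age quantiles intersected with two gender categories; the actual binning in our experiments may differ.

\begin{verbatim}
import pandas as pd

def cross_sectional_envs(df, n_age_bins=10):
    """Assign each patient to a cross-sectional environment by
    intersecting age quantiles with gender, yielding roughly
    2 * n_age_bins equal-sized groups."""
    age_q = pd.qcut(df["age"], q=n_age_bins, labels=False)
    env = age_q.astype(str) + "_" + df["gender"].astype(str)
    return df.assign(environment=env)
\end{verbatim}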
\input{table_resampling}
\input{table_parameterization}
\section{Discussion}
We believe that the results presented above motivate further work on stress-testing OoD environments and models in clinical settings, and call for theoretical advances in automating the discovery of such environments. Our study has the following limitations. Even though bootstrapping with replacement is considered the gold standard for reporting medical results and comparing statistical models, the small test environment dataset sizes result in large confidence intervals and inconclusive findings. However, since real-world applications of domain-invariant methods are liable to be deployed in scenarios with limited test environment data, methods for quantifying dataset shift (e.g. OoDness) even on limited data are necessary. Another limitation is that the prediction task itself might contribute to the problem of identifying OoD environments in the eICU. The conditional probability of mortality given readily-measured variables in an ICU may not genuinely shift significantly between hospitals. For future work, we plan to apply our model-based approach to tasks more likely to be susceptible to operational factors, such as the prediction of Length of Stay in the ICU. Combined with the mortality task, we expect to identify environments and hospitals with more `distinct' characteristics that differ from the `average' hospital. With this study, we contribute to ongoing work on improving the evaluation of ML methods for health. We hope to see more studies on discovering `natural' distribution shifts in EHR data, towards making the most out of real-world clinical data that can be used to benchmark robust ML models.
\section{Conclusion}
We proposed a framework for Out-of-Distribution detection in multi-center critical care data, using the eICU Database as a case study. In particular, we argued that we lack principled ways to identify `natural' OoD environments in real-world non-imaging clinical datasets. To this end, we conducted extensive experiments in a Leave-One-Hospital-Out fashion to identify potential OoD hospitals and benchmarked three models with different levels of test-set access to validate our hypothesis. We found that access to OoD data does not improve test performance, which points to inherent limitations in defining OoD environments in the eICU Database, potentially due to the extensive data harmonization and processing applied during its collection. Our alternative training scenarios employed cross-sectional features as potential OoD environments and hinted that this approach might be promising for specific feature combinations, which, however, require domain knowledge to select. All in all, echoing similar results with other established clinical benchmarks in the literature, we believe that new methodological approaches along with new benchmarks are required for the evaluation of robust ML models in critical care.
\bibliographystyle{ACM-Reference-Format}
All but three ACM journals use the {\verb|acmsmall|} template style: \begin{itemize} \item {\verb|acmsmall|}: The default journal template style. \item {\verb|acmlarge|}: Used by JOCCH and TAP. \item {\verb|acmtog|}: Used by TOG. \end{itemize} The majority of conference proceedings documentation will use the {\verb|acmconf|} template style. \begin{itemize} \item {\verb|acmconf|}: The default proceedings template style. \item{\verb|sigchi|}: Used for SIGCHI conference articles. \item{\verb|sigchi-a|}: Used for SIGCHI ``Extended Abstract'' articles. \item{\verb|sigplan|}: Used for SIGPLAN conference articles. \end{itemize} \subsection{Template Parameters} In addition to specifying the {\itshape template style} to be used in formatting your work, there are a number of {\itshape template parameters} which modify some part of the applied template style. A complete list of these parameters can be found in the {\itshape \LaTeX\ User's Guide.} Frequently-used parameters, or combinations of parameters, include: \begin{itemize} \item {\verb|anonymous,review|}: Suitable for a ``double-blind'' conference submission. Anonymizes the work and includes line numbers. Use with the \verb|\acmSubmissionID| command to print the submission's unique ID on each page of the work. \item{\verb|authorversion|}: Produces a version of the work suitable for posting by the author. \item{\verb|screen|}: Produces colored hyperlinks. \end{itemize} This document uses the following string as the first command in the source file: \begin{verbatim} \documentclass[sigconf]{acmart} \end{verbatim} \section{Modifications} Modifying the template --- including but not limited to: adjusting margins, typeface sizes, line spacing, paragraph and list definitions, and the use of the \verb|\vspace| command to manually adjust the vertical spacing between elements of your work --- is not allowed. {\bfseries Your document will be returned to you for revision if modifications are discovered.} \section{Typefaces} The ``\verb|acmart|'' document class requires the use of the ``Libertine'' typeface family. Your \TeX\ installation should include this set of packages. Please do not substitute other typefaces. The ``\verb|lmodern|'' and ``\verb|ltimes|'' packages should not be used, as they will override the built-in typeface families. \section{Title Information} The title of your work should use capital letters appropriately - \url{https://capitalizemytitle.com/} has useful rules for capitalization. Use the {\verb|title|} command to define the title of your work. If your work has a subtitle, define it with the {\verb|subtitle|} command. Do not insert line breaks in your title. If your title is lengthy, you must define a short version to be used in the page headers, to prevent overlapping text. The \verb|title| command has a ``short title'' parameter: \begin{verbatim} \title[short title]{full title} \end{verbatim} \section{Authors and Affiliations} Each author must be defined separately for accurate metadata identification. Multiple authors may share one affiliation. Authors' names should not be abbreviated; use full first names wherever possible. Include authors' e-mail addresses whenever possible. 
Grouping authors' names or e-mail addresses, or providing an ``e-mail alias,'' as shown below, is not acceptable: \begin{verbatim} \author{Brooke Aster, David Mehldau} \email{dave,judy,[email protected]} \email{[email protected]} \end{verbatim} The \verb|authornote| and \verb|authornotemark| commands allow a note to apply to multiple authors --- for example, if the first two authors of an article contributed equally to the work. If your author list is lengthy, you must define a shortened version of the list of authors to be used in the page headers, to prevent overlapping text. The following command should be placed just after the last \verb|\author{}| definition: \begin{verbatim} \renewcommand{\shortauthors}{McCartney, et al.} \end{verbatim} Omitting this command will force the use of a concatenated list of all of the authors' names, which may result in overlapping text in the page headers. The article template's documentation, available at \url{https://www.acm.org/publications/proceedings-template}, has a complete explanation of these commands and tips for their effective use. Note that authors' addresses are mandatory for journal articles. \section{Rights Information} Authors of any work published by ACM will need to complete a rights form. Depending on the kind of work, and the rights management choice made by the author, this may be copyright transfer, permission, license, or an OA (open access) agreement. Regardless of the rights management choice, the author will receive a copy of the completed rights form once it has been submitted. This form contains \LaTeX\ commands that must be copied into the source document. When the document source is compiled, these commands and their parameters add formatted text to several areas of the final document: \begin{itemize} \item the ``ACM Reference Format'' text on the first page. \item the ``rights management'' text on the first page. \item the conference information in the page header(s). \end{itemize} Rights information is unique to the work; if you are preparing several works for an event, make sure to use the correct set of commands with each of the works. The ACM Reference Format text is required for all articles over one page in length, and is optional for one-page articles (abstracts). \section{CCS Concepts and User-Defined Keywords} Two elements of the ``acmart'' document class provide powerful taxonomic tools for you to help readers find your work in an online search. The ACM Computing Classification System --- \url{https://www.acm.org/publications/class-2012} --- is a set of classifiers and concepts that describe the computing discipline. Authors can select entries from this classification system, via \url{https://dl.acm.org/ccs/ccs.cfm}, and generate the commands to be included in the \LaTeX\ source. User-defined keywords are a comma-separated list of words and phrases of the authors' choosing, providing a more flexible way of describing the research being presented. CCS concepts and user-defined keywords are required for for all articles over two pages in length, and are optional for one- and two-page articles (or abstracts). \section{Sectioning Commands} Your work should use standard \LaTeX\ sectioning commands: \verb|section|, \verb|subsection|, \verb|subsubsection|, and \verb|paragraph|. They should be numbered; do not remove the numbering from the commands. 
Simulating a sectioning command by setting the first word or words of a paragraph in boldface or italicized text is {\bfseries not allowed.} \section{Tables} The ``\verb|acmart|'' document class includes the ``\verb|booktabs|'' package --- \url{https://ctan.org/pkg/booktabs} --- for preparing high-quality tables. Table captions are placed {\itshape above} the table. Because tables cannot be split across pages, the best placement for them is typically the top of the page nearest their initial cite. To ensure this proper ``floating'' placement of tables, use the environment \textbf{table} to enclose the table's contents and the table caption. The contents of the table itself must go in the \textbf{tabular} environment, to be aligned properly in rows and columns, with the desired horizontal and vertical rules. Again, detailed instructions on \textbf{tabular} material are found in the \textit{\LaTeX\ User's Guide}. Immediately following this sentence is the point at which Table~\ref{tab:freq} is included in the input file; compare the placement of the table here with the table in the printed output of this document. \begin{table} \caption{Frequency of Special Characters} \label{tab:freq} \begin{tabular}{ccl} \toprule Non-English or Math&Frequency&Comments\\ \midrule \O & 1 in 1,000& For Swedish names\\ $\pi$ & 1 in 5& Common in math\\ \$ & 4 in 5 & Used in business\\ $\Psi^2_1$ & 1 in 40,000& Unexplained usage\\ \bottomrule \end{tabular} \end{table} To set a wider table, which takes up the whole width of the page's live area, use the environment \textbf{table*} to enclose the table's contents and the table caption. As with a single-column table, this wide table will ``float'' to a location deemed more desirable. Immediately following this sentence is the point at which Table~\ref{tab:commands} is included in the input file; again, it is instructive to compare the placement of the table here with the table in the printed output of this document. \begin{table*} \caption{Some Typical Commands} \label{tab:commands} \begin{tabular}{ccl} \toprule Command &A Number & Comments\\ \midrule \texttt{{\char'134}author} & 100& Author \\ \texttt{{\char'134}table}& 300 & For tables\\ \texttt{{\char'134}table*}& 400& For wider tables\\ \bottomrule \end{tabular} \end{table*} Always use midrule to separate table header rows from data rows, and use it only for this purpose. This enables assistive technologies to recognise table headers and support their users in navigating tables more easily. \section{Math Equations} You may want to display math equations in three distinct styles: inline, numbered or non-numbered display. Each of the three are discussed in the next sections. \subsection{Inline (In-text) Equations} A formula that appears in the running text is called an inline or in-text formula. It is produced by the \textbf{math} environment, which can be invoked with the usual \texttt{{\char'134}begin\,\ldots{\char'134}end} construction or with the short form \texttt{\$\,\ldots\$}. You can use any of the symbols and structures, from $\alpha$ to $\omega$, available in \LaTeX~\cite{Lamport:LaTeX}; this section will simply show a few examples of in-text equations in context. Notice how this equation: \begin{math} \lim_{n\rightarrow \infty}x=0 \end{math}, set here in in-line math style, looks slightly different when set in display style. (See next section). 
\subsection{Display Equations} A numbered display equation---one set off by vertical space from the text and centered horizontally---is produced by the \textbf{equation} environment. An unnumbered display equation is produced by the \textbf{displaymath} environment. Again, in either environment, you can use any of the symbols and structures available in \LaTeX\@; this section will just give a couple of examples of display equations in context. First, consider the equation, shown as an inline equation above: \begin{equation} \lim_{n\rightarrow \infty}x=0 \end{equation} Notice how it is formatted somewhat differently in the \textbf{displaymath} environment. Now, we'll enter an unnumbered equation: \begin{displaymath} \sum_{i=0}^{\infty} x + 1 \end{displaymath} and follow it with another numbered equation: \begin{equation} \sum_{i=0}^{\infty}x_i=\int_{0}^{\pi+2} f \end{equation} just to demonstrate \LaTeX's able handling of numbering. \section{Figures} The ``\verb|figure|'' environment should be used for figures. One or more images can be placed within a figure. If your figure contains third-party material, you must clearly identify it as such, as shown in the example below. \begin{figure}[h] \centering \includegraphics[width=\linewidth]{sample-franklin} \caption{1907 Franklin Model D roadster. Photograph by Harris \& Ewing, Inc. [Public domain], via Wikimedia Commons. (\url{https://goo.gl/VLCRBB}).} \Description{A woman and a girl in white dresses sit in an open car.} \end{figure} Your figures should contain a caption which describes the figure to the reader. Figure captions are placed {\itshape below} the figure. Every figure should also have a figure description unless it is purely decorative. These descriptions convey what’s in the image to someone who cannot see it. They are also used by search engine crawlers for indexing images, and when images cannot be loaded. A figure description must be unformatted plain text less than 2000 characters long (including spaces). {\bfseries Figure descriptions should not repeat the figure caption – their purpose is to capture important information that is not already provided in the caption or the main text of the paper.} For figures that convey important and complex new information, a short text description may not be adequate. More complex alternative descriptions can be placed in an appendix and referenced in a short figure description. For example, provide a data table capturing the information in a bar chart, or a structured list representing a graph. For additional information regarding how best to write figure descriptions and why doing this is so important, please see \url{https://www.acm.org/publications/taps/describing-figures/}. \subsection{The ``Teaser Figure''} A ``teaser figure'' is an image, or set of images in one figure, that are placed after all author and affiliation information, and before the body of the article, spanning the page. If you wish to have such a figure in your article, place the command immediately before the \verb|\maketitle| command: \begin{verbatim} \begin{teaserfigure} \includegraphics[width=\textwidth]{sampleteaser} \caption{figure caption} \Description{figure description} \end{teaserfigure} \end{verbatim} \section{Citations and Bibliographies} The use of \BibTeX\ for the preparation and formatting of one's references is strongly recommended. Authors' names should be complete --- use full first names (``Donald E. Knuth'') not initials (``D. E. 
Knuth'') --- and the salient identifying features of a reference should be included: title, year, volume, number, pages, article DOI, etc. The bibliography is included in your source document with these two commands, placed just before the \verb|\end{document}| command: \begin{verbatim} \bibliographystyle{ACM-Reference-Format} \section{Introduction} ACM's consolidated article template, introduced in 2017, provides a consistent \LaTeX\ style for use across ACM publications, and incorporates accessibility and metadata-extraction functionality necessary for future Digital Library endeavors. Numerous ACM and SIG-specific \LaTeX\ templates have been examined, and their unique features incorporated into this single new template. If you are new to publishing with ACM, this document is a valuable guide to the process of preparing your work for publication. If you have published with ACM before, this document provides insight and instruction into more recent changes to the article template. The ``\verb|acmart|'' document class can be used to prepare articles for any ACM publication --- conference or journal, and for any stage of publication, from review to final ``camera-ready'' copy, to the author's own version, with {\itshape very} few changes to the source. \section{Template Overview} As noted in the introduction, the ``\verb|acmart|'' document class can be used to prepare many different kinds of documentation --- a double-blind initial submission of a full-length technical paper, a two-page SIGGRAPH Emerging Technologies abstract, a ``camera-ready'' journal article, a SIGCHI Extended Abstract, and more --- all by selecting the appropriate {\itshape template style} and {\itshape template parameters}. This document will explain the major features of the document class. For further information, the {\itshape \LaTeX\ User's Guide} is available from \url{https://www.acm.org/publications/proceedings-template}. \subsection{Template Styles} The primary parameter given to the ``\verb|acmart|'' document class is the {\itshape template style} which corresponds to the kind of publication or SIG publishing the work. This parameter is enclosed in square brackets and is a part of the {\verb|documentclass|} command: \begin{verbatim} \documentclass[STYLE]{acmart} \end{verbatim} Journals use one of three template styles. All but three ACM journals use the {\verb|acmsmall|} template style: \begin{itemize} \item {\verb|acmsmall|}: The default journal template style. \item {\verb|acmlarge|}: Used by JOCCH and TAP. \item {\verb|acmtog|}: Used by TOG. \end{itemize} The majority of conference proceedings documentation will use the {\verb|acmconf|} template style. \begin{itemize} \item {\verb|acmconf|}: The default proceedings template style. \item{\verb|sigchi|}: Used for SIGCHI conference articles. \item{\verb|sigchi-a|}: Used for SIGCHI ``Extended Abstract'' articles. \item{\verb|sigplan|}: Used for SIGPLAN conference articles. \end{itemize} \subsection{Template Parameters} In addition to specifying the {\itshape template style} to be used in formatting your work, there are a number of {\itshape template parameters} which modify some part of the applied template style. A complete list of these parameters can be found in the {\itshape \LaTeX\ User's Guide.} Frequently-used parameters, or combinations of parameters, include: \begin{itemize} \item {\verb|anonymous,review|}: Suitable for a ``double-blind'' conference submission. Anonymizes the work and includes line numbers. 
Use with the \verb|\acmSubmissionID| command to print the submission's unique ID on each page of the work. \item{\verb|authorversion|}: Produces a version of the work suitable for posting by the author. \item{\verb|screen|}: Produces colored hyperlinks. \end{itemize} This document uses the following string as the first command in the source file: \begin{verbatim} \documentclass[sigconf]{acmart} \end{verbatim} \section{Modifications} Modifying the template --- including but not limited to: adjusting margins, typeface sizes, line spacing, paragraph and list definitions, and the use of the \verb|\vspace| command to manually adjust the vertical spacing between elements of your work --- is not allowed. {\bfseries Your document will be returned to you for revision if modifications are discovered.} \section{Typefaces} The ``\verb|acmart|'' document class requires the use of the ``Libertine'' typeface family. Your \TeX\ installation should include this set of packages. Please do not substitute other typefaces. The ``\verb|lmodern|'' and ``\verb|ltimes|'' packages should not be used, as they will override the built-in typeface families. \section{Title Information} The title of your work should use capital letters appropriately - \url{https://capitalizemytitle.com/} has useful rules for capitalization. Use the {\verb|title|} command to define the title of your work. If your work has a subtitle, define it with the {\verb|subtitle|} command. Do not insert line breaks in your title. If your title is lengthy, you must define a short version to be used in the page headers, to prevent overlapping text. The \verb|title| command has a ``short title'' parameter: \begin{verbatim} \title[short title]{full title} \end{verbatim} \section{Authors and Affiliations} Each author must be defined separately for accurate metadata identification. Multiple authors may share one affiliation. Authors' names should not be abbreviated; use full first names wherever possible. Include authors' e-mail addresses whenever possible. Grouping authors' names or e-mail addresses, or providing an ``e-mail alias,'' as shown below, is not acceptable: \begin{verbatim} \author{Brooke Aster, David Mehldau} \email{dave,judy,[email protected]} \email{[email protected]} \end{verbatim} The \verb|authornote| and \verb|authornotemark| commands allow a note to apply to multiple authors --- for example, if the first two authors of an article contributed equally to the work. If your author list is lengthy, you must define a shortened version of the list of authors to be used in the page headers, to prevent overlapping text. The following command should be placed just after the last \verb|\author{}| definition: \begin{verbatim} \renewcommand{\shortauthors}{McCartney, et al.} \end{verbatim} Omitting this command will force the use of a concatenated list of all of the authors' names, which may result in overlapping text in the page headers. The article template's documentation, available at \url{https://www.acm.org/publications/proceedings-template}, has a complete explanation of these commands and tips for their effective use. Note that authors' addresses are mandatory for journal articles. \section{Rights Information} Authors of any work published by ACM will need to complete a rights form. Depending on the kind of work, and the rights management choice made by the author, this may be copyright transfer, permission, license, or an OA (open access) agreement. 
Regardless of the rights management choice, the author will receive a copy of the completed rights form once it has been submitted. This form contains \LaTeX\ commands that must be copied into the source document. When the document source is compiled, these commands and their parameters add formatted text to several areas of the final document: \begin{itemize} \item the ``ACM Reference Format'' text on the first page. \item the ``rights management'' text on the first page. \item the conference information in the page header(s). \end{itemize} Rights information is unique to the work; if you are preparing several works for an event, make sure to use the correct set of commands with each of the works. The ACM Reference Format text is required for all articles over one page in length, and is optional for one-page articles (abstracts). \section{CCS Concepts and User-Defined Keywords} Two elements of the ``acmart'' document class provide powerful taxonomic tools for you to help readers find your work in an online search. The ACM Computing Classification System --- \url{https://www.acm.org/publications/class-2012} --- is a set of classifiers and concepts that describe the computing discipline. Authors can select entries from this classification system, via \url{https://dl.acm.org/ccs/ccs.cfm}, and generate the commands to be included in the \LaTeX\ source. User-defined keywords are a comma-separated list of words and phrases of the authors' choosing, providing a more flexible way of describing the research being presented. CCS concepts and user-defined keywords are required for for all articles over two pages in length, and are optional for one- and two-page articles (or abstracts). \section{Sectioning Commands} Your work should use standard \LaTeX\ sectioning commands: \verb|section|, \verb|subsection|, \verb|subsubsection|, and \verb|paragraph|. They should be numbered; do not remove the numbering from the commands. Simulating a sectioning command by setting the first word or words of a paragraph in boldface or italicized text is {\bfseries not allowed.} \section{Tables} The ``\verb|acmart|'' document class includes the ``\verb|booktabs|'' package --- \url{https://ctan.org/pkg/booktabs} --- for preparing high-quality tables. Table captions are placed {\itshape above} the table. Because tables cannot be split across pages, the best placement for them is typically the top of the page nearest their initial cite. To ensure this proper ``floating'' placement of tables, use the environment \textbf{table} to enclose the table's contents and the table caption. The contents of the table itself must go in the \textbf{tabular} environment, to be aligned properly in rows and columns, with the desired horizontal and vertical rules. Again, detailed instructions on \textbf{tabular} material are found in the \textit{\LaTeX\ User's Guide}. Immediately following this sentence is the point at which Table~\ref{tab:freq} is included in the input file; compare the placement of the table here with the table in the printed output of this document. 
\begin{table} \caption{Frequency of Special Characters} \label{tab:freq} \begin{tabular}{ccl} \toprule Non-English or Math&Frequency&Comments\\ \midrule \O & 1 in 1,000& For Swedish names\\ $\pi$ & 1 in 5& Common in math\\ \$ & 4 in 5 & Used in business\\ $\Psi^2_1$ & 1 in 40,000& Unexplained usage\\ \bottomrule \end{tabular} \end{table} To set a wider table, which takes up the whole width of the page's live area, use the environment \textbf{table*} to enclose the table's contents and the table caption. As with a single-column table, this wide table will ``float'' to a location deemed more desirable. Immediately following this sentence is the point at which Table~\ref{tab:commands} is included in the input file; again, it is instructive to compare the placement of the table here with the table in the printed output of this document. \begin{table*} \caption{Some Typical Commands} \label{tab:commands} \begin{tabular}{ccl} \toprule Command &A Number & Comments\\ \midrule \texttt{{\char'134}author} & 100& Author \\ \texttt{{\char'134}table}& 300 & For tables\\ \texttt{{\char'134}table*}& 400& For wider tables\\ \bottomrule \end{tabular} \end{table*} Always use midrule to separate table header rows from data rows, and use it only for this purpose. This enables assistive technologies to recognise table headers and support their users in navigating tables more easily. \section{Math Equations} You may want to display math equations in three distinct styles: inline, numbered or non-numbered display. Each of the three are discussed in the next sections. \subsection{Inline (In-text) Equations} A formula that appears in the running text is called an inline or in-text formula. It is produced by the \textbf{math} environment, which can be invoked with the usual \texttt{{\char'134}begin\,\ldots{\char'134}end} construction or with the short form \texttt{\$\,\ldots\$}. You can use any of the symbols and structures, from $\alpha$ to $\omega$, available in \LaTeX~\cite{Lamport:LaTeX}; this section will simply show a few examples of in-text equations in context. Notice how this equation: \begin{math} \lim_{n\rightarrow \infty}x=0 \end{math}, set here in in-line math style, looks slightly different when set in display style. (See next section). \subsection{Display Equations} A numbered display equation---one set off by vertical space from the text and centered horizontally---is produced by the \textbf{equation} environment. An unnumbered display equation is produced by the \textbf{displaymath} environment. Again, in either environment, you can use any of the symbols and structures available in \LaTeX\@; this section will just give a couple of examples of display equations in context. First, consider the equation, shown as an inline equation above: \begin{equation} \lim_{n\rightarrow \infty}x=0 \end{equation} Notice how it is formatted somewhat differently in the \textbf{displaymath} environment. Now, we'll enter an unnumbered equation: \begin{displaymath} \sum_{i=0}^{\infty} x + 1 \end{displaymath} and follow it with another numbered equation: \begin{equation} \sum_{i=0}^{\infty}x_i=\int_{0}^{\pi+2} f \end{equation} just to demonstrate \LaTeX's able handling of numbering. \section{Figures} The ``\verb|figure|'' environment should be used for figures. One or more images can be placed within a figure. If your figure contains third-party material, you must clearly identify it as such, as shown in the example below. 
\begin{figure}[h] \centering \includegraphics[width=\linewidth]{sample-franklin} \caption{1907 Franklin Model D roadster. Photograph by Harris \& Ewing, Inc. [Public domain], via Wikimedia Commons. (\url{https://goo.gl/VLCRBB}).} \Description{A woman and a girl in white dresses sit in an open car.} \end{figure} Your figures should contain a caption which describes the figure to the reader. Figure captions are placed {\itshape below} the figure. Every figure should also have a figure description unless it is purely decorative. These descriptions convey what’s in the image to someone who cannot see it. They are also used by search engine crawlers for indexing images, and when images cannot be loaded. A figure description must be unformatted plain text less than 2000 characters long (including spaces). {\bfseries Figure descriptions should not repeat the figure caption – their purpose is to capture important information that is not already provided in the caption or the main text of the paper.} For figures that convey important and complex new information, a short text description may not be adequate. More complex alternative descriptions can be placed in an appendix and referenced in a short figure description. For example, provide a data table capturing the information in a bar chart, or a structured list representing a graph. For additional information regarding how best to write figure descriptions and why doing this is so important, please see \url{https://www.acm.org/publications/taps/describing-figures/}. \subsection{The ``Teaser Figure''} A ``teaser figure'' is an image, or set of images in one figure, that are placed after all author and affiliation information, and before the body of the article, spanning the page. If you wish to have such a figure in your article, place the command immediately before the \verb|\maketitle| command: \begin{verbatim} \begin{teaserfigure} \includegraphics[width=\textwidth]{sampleteaser} \caption{figure caption} \Description{figure description} \end{teaserfigure} \end{verbatim} \section{Citations and Bibliographies} The use of \BibTeX\ for the preparation and formatting of one's references is strongly recommended. Authors' names should be complete --- use full first names (``Donald E. Knuth'') not initials (``D. E. Knuth'') --- and the salient identifying features of a reference should be included: title, year, volume, number, pages, article DOI, etc. The bibliography is included in your source document with these two commands, placed just before the \verb|\end{document}| command: \begin{verbatim} \bibliographystyle{ACM-Reference-Format} \section{Introduction} ACM's consolidated article template, introduced in 2017, provides a consistent \LaTeX\ style for use across ACM publications, and incorporates accessibility and metadata-extraction functionality necessary for future Digital Library endeavors. Numerous ACM and SIG-specific \LaTeX\ templates have been examined, and their unique features incorporated into this single new template. If you are new to publishing with ACM, this document is a valuable guide to the process of preparing your work for publication. If you have published with ACM before, this document provides insight and instruction into more recent changes to the article template. 
The ``\verb|acmart|'' document class can be used to prepare articles for any ACM publication --- conference or journal, and for any stage of publication, from review to final ``camera-ready'' copy, to the author's own version, with {\itshape very} few changes to the source. \section{Template Overview} As noted in the introduction, the ``\verb|acmart|'' document class can be used to prepare many different kinds of documentation --- a double-blind initial submission of a full-length technical paper, a two-page SIGGRAPH Emerging Technologies abstract, a ``camera-ready'' journal article, a SIGCHI Extended Abstract, and more --- all by selecting the appropriate {\itshape template style} and {\itshape template parameters}. This document will explain the major features of the document class. For further information, the {\itshape \LaTeX\ User's Guide} is available from \url{https://www.acm.org/publications/proceedings-template}. \subsection{Template Styles} The primary parameter given to the ``\verb|acmart|'' document class is the {\itshape template style} which corresponds to the kind of publication or SIG publishing the work. This parameter is enclosed in square brackets and is a part of the {\verb|documentclass|} command: \begin{verbatim} \documentclass[STYLE]{acmart} \end{verbatim} Journals use one of three template styles. All but three ACM journals use the {\verb|acmsmall|} template style: \begin{itemize} \item {\verb|acmsmall|}: The default journal template style. \item {\verb|acmlarge|}: Used by JOCCH and TAP. \item {\verb|acmtog|}: Used by TOG. \end{itemize} The majority of conference proceedings documentation will use the {\verb|acmconf|} template style. \begin{itemize} \item {\verb|acmconf|}: The default proceedings template style. \item{\verb|sigchi|}: Used for SIGCHI conference articles. \item{\verb|sigchi-a|}: Used for SIGCHI ``Extended Abstract'' articles. \item{\verb|sigplan|}: Used for SIGPLAN conference articles. \end{itemize} \subsection{Template Parameters} In addition to specifying the {\itshape template style} to be used in formatting your work, there are a number of {\itshape template parameters} which modify some part of the applied template style. A complete list of these parameters can be found in the {\itshape \LaTeX\ User's Guide.} Frequently-used parameters, or combinations of parameters, include: \begin{itemize} \item {\verb|anonymous,review|}: Suitable for a ``double-blind'' conference submission. Anonymizes the work and includes line numbers. Use with the \verb|\acmSubmissionID| command to print the submission's unique ID on each page of the work. \item{\verb|authorversion|}: Produces a version of the work suitable for posting by the author. \item{\verb|screen|}: Produces colored hyperlinks. \end{itemize} This document uses the following string as the first command in the source file: \begin{verbatim} \documentclass[sigconf]{acmart} \end{verbatim} \section{Modifications} Modifying the template --- including but not limited to: adjusting margins, typeface sizes, line spacing, paragraph and list definitions, and the use of the \verb|\vspace| command to manually adjust the vertical spacing between elements of your work --- is not allowed. {\bfseries Your document will be returned to you for revision if modifications are discovered.} \section{Typefaces} The ``\verb|acmart|'' document class requires the use of the ``Libertine'' typeface family. Your \TeX\ installation should include this set of packages. Please do not substitute other typefaces. 
The ``\verb|lmodern|'' and ``\verb|ltimes|'' packages should not be used, as they will override the built-in typeface families. \section{Title Information} The title of your work should use capital letters appropriately - \url{https://capitalizemytitle.com/} has useful rules for capitalization. Use the {\verb|title|} command to define the title of your work. If your work has a subtitle, define it with the {\verb|subtitle|} command. Do not insert line breaks in your title. If your title is lengthy, you must define a short version to be used in the page headers, to prevent overlapping text. The \verb|title| command has a ``short title'' parameter: \begin{verbatim} \title[short title]{full title} \end{verbatim} \section{Authors and Affiliations} Each author must be defined separately for accurate metadata identification. Multiple authors may share one affiliation. Authors' names should not be abbreviated; use full first names wherever possible. Include authors' e-mail addresses whenever possible. Grouping authors' names or e-mail addresses, or providing an ``e-mail alias,'' as shown below, is not acceptable: \begin{verbatim} \author{Brooke Aster, David Mehldau} \email{dave,judy,[email protected]} \email{[email protected]} \end{verbatim} The \verb|authornote| and \verb|authornotemark| commands allow a note to apply to multiple authors --- for example, if the first two authors of an article contributed equally to the work. If your author list is lengthy, you must define a shortened version of the list of authors to be used in the page headers, to prevent overlapping text. The following command should be placed just after the last \verb|\author{}| definition: \begin{verbatim} \renewcommand{\shortauthors}{McCartney, et al.} \end{verbatim} Omitting this command will force the use of a concatenated list of all of the authors' names, which may result in overlapping text in the page headers. The article template's documentation, available at \url{https://www.acm.org/publications/proceedings-template}, has a complete explanation of these commands and tips for their effective use. Note that authors' addresses are mandatory for journal articles. \section{Rights Information} Authors of any work published by ACM will need to complete a rights form. Depending on the kind of work, and the rights management choice made by the author, this may be copyright transfer, permission, license, or an OA (open access) agreement. Regardless of the rights management choice, the author will receive a copy of the completed rights form once it has been submitted. This form contains \LaTeX\ commands that must be copied into the source document. When the document source is compiled, these commands and their parameters add formatted text to several areas of the final document: \begin{itemize} \item the ``ACM Reference Format'' text on the first page. \item the ``rights management'' text on the first page. \item the conference information in the page header(s). \end{itemize} Rights information is unique to the work; if you are preparing several works for an event, make sure to use the correct set of commands with each of the works. The ACM Reference Format text is required for all articles over one page in length, and is optional for one-page articles (abstracts). \section{CCS Concepts and User-Defined Keywords} Two elements of the ``acmart'' document class provide powerful taxonomic tools for you to help readers find your work in an online search. 
The ACM Computing Classification System --- \url{https://www.acm.org/publications/class-2012} --- is a set of classifiers and concepts that describe the computing discipline. Authors can select entries from this classification system, via \url{https://dl.acm.org/ccs/ccs.cfm}, and generate the commands to be included in the \LaTeX\ source. User-defined keywords are a comma-separated list of words and phrases of the authors' choosing, providing a more flexible way of describing the research being presented. CCS concepts and user-defined keywords are required for for all articles over two pages in length, and are optional for one- and two-page articles (or abstracts). \section{Sectioning Commands} Your work should use standard \LaTeX\ sectioning commands: \verb|section|, \verb|subsection|, \verb|subsubsection|, and \verb|paragraph|. They should be numbered; do not remove the numbering from the commands. Simulating a sectioning command by setting the first word or words of a paragraph in boldface or italicized text is {\bfseries not allowed.} \section{Tables} The ``\verb|acmart|'' document class includes the ``\verb|booktabs|'' package --- \url{https://ctan.org/pkg/booktabs} --- for preparing high-quality tables. Table captions are placed {\itshape above} the table. Because tables cannot be split across pages, the best placement for them is typically the top of the page nearest their initial cite. To ensure this proper ``floating'' placement of tables, use the environment \textbf{table} to enclose the table's contents and the table caption. The contents of the table itself must go in the \textbf{tabular} environment, to be aligned properly in rows and columns, with the desired horizontal and vertical rules. Again, detailed instructions on \textbf{tabular} material are found in the \textit{\LaTeX\ User's Guide}. Immediately following this sentence is the point at which Table~\ref{tab:freq} is included in the input file; compare the placement of the table here with the table in the printed output of this document. \begin{table} \caption{Frequency of Special Characters} \label{tab:freq} \begin{tabular}{ccl} \toprule Non-English or Math&Frequency&Comments\\ \midrule \O & 1 in 1,000& For Swedish names\\ $\pi$ & 1 in 5& Common in math\\ \$ & 4 in 5 & Used in business\\ $\Psi^2_1$ & 1 in 40,000& Unexplained usage\\ \bottomrule \end{tabular} \end{table} To set a wider table, which takes up the whole width of the page's live area, use the environment \textbf{table*} to enclose the table's contents and the table caption. As with a single-column table, this wide table will ``float'' to a location deemed more desirable. Immediately following this sentence is the point at which Table~\ref{tab:commands} is included in the input file; again, it is instructive to compare the placement of the table here with the table in the printed output of this document. \begin{table*} \caption{Some Typical Commands} \label{tab:commands} \begin{tabular}{ccl} \toprule Command &A Number & Comments\\ \midrule \texttt{{\char'134}author} & 100& Author \\ \texttt{{\char'134}table}& 300 & For tables\\ \texttt{{\char'134}table*}& 400& For wider tables\\ \bottomrule \end{tabular} \end{table*} Always use midrule to separate table header rows from data rows, and use it only for this purpose. This enables assistive technologies to recognise table headers and support their users in navigating tables more easily. 
\section{Math Equations} You may want to display math equations in three distinct styles: inline, numbered or non-numbered display. Each of the three is discussed in the next sections. \subsection{Inline (In-text) Equations} A formula that appears in the running text is called an inline or in-text formula. It is produced by the \textbf{math} environment, which can be invoked with the usual \texttt{{\char'134}begin\,\ldots{\char'134}end} construction or with the short form \texttt{\$\,\ldots\$}. You can use any of the symbols and structures, from $\alpha$ to $\omega$, available in \LaTeX~\cite{Lamport:LaTeX}; this section will simply show a few examples of in-text equations in context. Notice how this equation: \begin{math} \lim_{n\rightarrow \infty}x=0 \end{math}, set here in in-line math style, looks slightly different when set in display style. (See next section). \subsection{Display Equations} A numbered display equation---one set off by vertical space from the text and centered horizontally---is produced by the \textbf{equation} environment. An unnumbered display equation is produced by the \textbf{displaymath} environment. Again, in either environment, you can use any of the symbols and structures available in \LaTeX\@; this section will just give a couple of examples of display equations in context. First, consider the equation, shown as an inline equation above: \begin{equation} \lim_{n\rightarrow \infty}x=0 \end{equation} Notice how it is formatted somewhat differently when set as a display equation. Now, we'll enter an unnumbered equation: \begin{displaymath} \sum_{i=0}^{\infty} x + 1 \end{displaymath} and follow it with another numbered equation: \begin{equation} \sum_{i=0}^{\infty}x_i=\int_{0}^{\pi+2} f \end{equation} just to demonstrate \LaTeX's able handling of numbering. \section{Figures} The ``\verb|figure|'' environment should be used for figures. One or more images can be placed within a figure. If your figure contains third-party material, you must clearly identify it as such, as shown in the example below. \begin{figure}[h] \centering \includegraphics[width=\linewidth]{sample-franklin} \caption{1907 Franklin Model D roadster. Photograph by Harris \& Ewing, Inc. [Public domain], via Wikimedia Commons. (\url{https://goo.gl/VLCRBB}).} \Description{A woman and a girl in white dresses sit in an open car.} \end{figure} Your figures should contain a caption which describes the figure to the reader. Figure captions are placed {\itshape below} the figure. Every figure should also have a figure description unless it is purely decorative. These descriptions convey what's in the image to someone who cannot see it. They are also used by search engine crawlers for indexing images, and when images cannot be loaded. A figure description must be unformatted plain text less than 2000 characters long (including spaces). {\bfseries Figure descriptions should not repeat the figure caption -- their purpose is to capture important information that is not already provided in the caption or the main text of the paper.} For figures that convey important and complex new information, a short text description may not be adequate. More complex alternative descriptions can be placed in an appendix and referenced in a short figure description. For example, provide a data table capturing the information in a bar chart, or a structured list representing a graph.
For additional information regarding how best to write figure descriptions and why doing this is so important, please see \url{https://www.acm.org/publications/taps/describing-figures/}. \subsection{The ``Teaser Figure''} A ``teaser figure'' is an image, or set of images in one figure, that is placed after all author and affiliation information, and before the body of the article, spanning the page. If you wish to have such a figure in your article, place the command immediately before the \verb|\maketitle| command:
\begin{verbatim}
\begin{teaserfigure}
  \includegraphics[width=\textwidth]{sampleteaser}
  \caption{figure caption}
  \Description{figure description}
\end{teaserfigure}
\end{verbatim}
\section{Citations and Bibliographies} The use of \BibTeX\ for the preparation and formatting of one's references is strongly recommended. Authors' names should be complete --- use full first names (``Donald E. Knuth'') not initials (``D. E. Knuth'') --- and the salient identifying features of a reference should be included: title, year, volume, number, pages, article DOI, etc. The bibliography is included in your source document with these two commands, placed just before the \verb|\end{document}| command:
\begin{verbatim}
\bibliographystyle{ACM-Reference-Format}
\bibliography{bibfile}
\end{verbatim}
where ``\verb|bibfile|'' is the name, without the ``\verb|.bib|'' suffix, of the \BibTeX\ file.
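To illustrate these guidelines, here is a hypothetical \BibTeX\ entry for a well-known book (the citation key is arbitrary); note that the author's first name is given in full:
\begin{verbatim}
@book{Knuth:1997:ACP,
  author    = {Donald E. Knuth},
  title     = {The Art of Computer Programming,
               Volume 1: Fundamental Algorithms},
  edition   = {3rd},
  publisher = {Addison-Wesley},
  year      = {1997}
}
\end{verbatim}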
\section{Introduction} Throughout this note, we work over the field of complex numbers $\mathbb{C}$. A normal projective variety $X$ is called a {\it Fano variety} (resp., {\it weak Fano variety}) if the anti-canonical divisor $-K_X$ is ample (resp., nef and big). A terminal (resp. canonical) Fano variety is a Fano variety with at worst terminal (resp. canonical) singularities. A terminal (resp. canonical) weak Fano variety is a weak Fano variety with at worst terminal (resp. canonical) singularities. According to the minimal model program, Fano varieties form a fundamental class among the research objects of birational geometry. Motivated by the classification theory of $3$-dimensional algebraic varieties, we are interested in the study of the explicit geometry of terminal/canonical Fano $3$-folds. Given a terminal weak Fano $3$-fold $X$, it was proved in \cite[Theorem~1.1]{CC08} that $(-K_{X})^3\geq \frac{1}{330}$. This lower bound is optimal, as it is attained when $X=X_{66}\subset \mathbb{P}(1,5,6,22,33)$ is a general weighted hypersurface of degree $66$. Moreover, \cite[Theorem~1.1]{CC08} showed that when $(-K_{X})^3= \frac{1}{330}$, $X$ has exactly the same Reid basket of virtual orbifold singularities as $X_{66}$. So it is interesting to ask the following question: \begin{q}\label{main q} Let $X$ be a terminal (weak) Fano $3$-fold with $(-K_X)^3=\frac{1}{330}$. Is $X$ a $\mathbb{Q}$-Gorenstein deformation of a quasi-smooth weighted hypersurface of degree $66$ in $\mathbb{P}(1,5,6,22,33)$? \end{q} In \cite{330}, we give a partial answer to this question in the category of non-rational $\mathbb{Q}$-factorial terminal Fano $3$-folds with $\rho=1$. The main purpose of this note is to remove the non-rationality condition. The following is the main theorem of this note. \begin{thm}\label{mainthm} Let $X$ be a $\mathbb{Q}$-factorial terminal Fano $3$-fold with $\rho(X)=1$ and $(-K_X)^3=\frac{1}{330}$. Then $X$ is a weighted hypersurface in $\mathbb{P}(1,5,6,22,33)$ defined by a weighted homogeneous polynomial $F$ of degree $66$, where $$F(x, y, z, w, t)=t^2+F_0(x, y, z, w)$$ in suitable homogeneous coordinates $[x: y: z: w: t]$ of $\mathbb{P}(1, 5, 6, 22, 33)$. \end{thm} As mentioned in \cite[Remark~1.3]{330}, a similar method can be applied to characterize other weighted hypersurfaces in Iano-Fletcher's list \cite[16.6]{IF00}. We list them in Table~\ref{tableA}. The weighted hypersurfaces in Table~\ref{tableA} are all of the form $X_{6d}\subset \mathbb{P}(1,a,b,2d,3d)$ with $a+b=d$, and any such general hypersurface is a $\mathbb{Q}$-factorial terminal Fano $3$-fold with $\rho=1$. Their geometry is very interesting in the sense that the behavior of their anti-pluri-canonical systems matches perfectly with the theoretical estimates in our prediction (\cite[Example~5.12]{CJ16}, \cite[Example~4.4.12]{Phd}). \begin{longtable}{|l|l|l|l|} \caption{Fano $3$-folds from Iano-Fletcher's list}\label{tableA}\\ \hline No. & hypersurface & $-K^3$ & basket \\ \hline \endfirsthead \multicolumn{4}{l}{{ {\bf \tablename\ \thetable{}} \textrm{-- continued from previous page}}} \\ \hline No.
& hypersurface & $-K^3$ & basket \\ \hline \endhead \multicolumn{4}{l}{{\textrm{Continued on next page}}} \\ \hline \endfoot \hline \endlastfoot 14 & ${X_{12}}\subset \mathbb{P}(1,1,1,4,6)$ & $1/2$ & $ (1,2) $\\ \hline 34 & ${X_{18}}\subset \mathbb{P}(1,1,2,6,9)$ & $1/6$ &$ 3\times(1,2),(1,3) $\\ \hline 53 & ${X_{24}}\subset \mathbb{P}(1,1,3,8,12)$ & $1/12$ &$ 2\times(1,3), (1,4) $\\ \hline 70 & ${X_{30}}\subset \mathbb{P}(1,1,4,10,15)$ & $1/20$ &$ (1,2), (1,4), (1,5) $\\ \hline 72 & ${X_{30}}\subset \mathbb{P}(1,2,3,10,15)$ & $1/30$ &$ 3\times(1,2), (2, 5), 2\times(1,3)$\\ \hline 82 & ${X_{36}}\subset \mathbb{P}(1,1,5,12,18)$ & $1/30$ &$ (2,5), (1,6) $\\ \hline 88 & ${X_{42}}\subset \mathbb{P}(1,1,6,14,21)$ & $1/42$ &$ (1,2), (1,3), (1,7)$\\ \hline 89 & ${X_{42}}\subset \mathbb{P}(1,2,5,14,21)$ & $1/70$ &$ 3\times(1,2),(3,7), (1,5) $\\ \hline 90 & ${X_{42}}\subset \mathbb{P}(1,3,4,14,21)$ & $1/84$ &$ (1,2),2\times(1,3), (2,7), (1,4) $\\ \hline 92 & ${X_{48}}\subset \mathbb{P}(1,3,5,16,24)$ & $1/120$ &$ (3,8),2\times(1,3), (1,5) $\\ \hline 94 & ${X_{54}}\subset \mathbb{P}(1,4,5,18,27)$ & $1/180$ &$ (1,2), (2,5), (1,4), (2,9) $\\ \hline 95 & ${X_{66}}\subset \mathbb{P}(1,5,6,22,33)$ & $1/330$ &$ (1,2), (2,5), (1,3), (2,11) $ \end{longtable} We show the following result, saying that a $\mathbb{Q}$-factorial terminal Fano $3$-fold with $\rho=1$ which shares the same numerical information with a weighted hypersurface in Table~\ref{tableA} is always a weighted hypersurface of the same type. \begin{thm}\label{mainthm2} Let $X$ be a $\mathbb{Q}$-factorial terminal Fano $3$-fold with $\rho(X)=1$ such that $(-K_X)^3=(-K_{X_{6d}})^3$ and $B_{X}=B_{X_{6d}}$ for some $$X_{6d}\subset \mathbb{P}(1,a,b,2d, 3d)$$ as in Table~\ref{tableA}. Then $X$ is a weighted hypersurface in $\mathbb{P}(1,a,b,2d,3d)$ defined by a weighted homogeneous polynomial $F$ of degree $6d$, where $$F(x, y, z, w, t)=t^2+F_0(x, y, z, w)$$ in suitable homogeneous coordinates $[x: y: z: w: t]$ of $\mathbb{P}(1, a, b, 2d, 3d)$. \end{thm} According to \cite{330}, the idea of the proof is as follows: we construct a rational map $\Phi_{3d}: X\dashrightarrow \mathbb{P}(1, a, b, 2d, 3d)$ by general global sections of $H^0(X, -mK_X)$ and show that $\Phi_{3d}$ is indeed an embedding. The reason that we need to assume non-rationality of $X$ in \cite{330} is that we need to rule out the case that the induced map $X\dashrightarrow \mathbb{P}(1, a, b, 2d)$ is birational. To remove this condition, one idea is to directly show that $X$ is non-rational in this setting, which seems very difficult (as non-rationality of Fano varieties is always a difficult problem). Our new ingredient is to apply the theory in \cite{CJ16, Phd} to show directly that the map $X\dashrightarrow \mathbb{P}(1, a, b, 2d)$ cannot be birational. This involves detailed discussions of the behavior of the anti-pluri-canonical systems of $X$. At the end, we will discuss another, as yet unsuccessful, approach which is related to earlier work of Shokurov. \section{Reid's Riemann--Roch formula}\label{sec 2} A {\it basket} $B$ is a collection of pairs of integers (permitting weights), say $\{(b_i,r_i)\mid i=1, \cdots, s; b_i\ \text{is coprime to}\ r_i\}$. Let $X$ be a canonical weak Fano $3$-fold and $Y$ be a terminalization of $X$.
According to Reid \cite{YPG}, there is a basket of orbifold points (called the {\it Reid basket}) $$B_X=\bigg\{(b_i,r_i)\mid i=1,\cdots, s; 0<b_i\leq \frac{r_i}{2};b_i \text{ is coprime to } r_i\bigg\}$$ associated to $X$, which comes from locally deforming the singularities of $Y$ into cyclic quotient singularities, where a pair $(b_i,r_i)$ corresponds to a (virtual) orbifold point $Q_i$ of type $\frac{1}{r_i}(1,-1,b_i)$. Recall that for a Weil divisor $D$ on $X$, $$H^0(X, D)=\{f\in \mathbb{C}(X)^{\times}\mid \text{div}(f)+D\geq 0\}\cup \{0\}.$$ By Reid's Riemann--Roch formula and the Kawamata--Viehweg vanishing theorem, for any positive integer $m$, \begin{align*} h^0(X, -mK_X)={}&\chi(X, \OO_X(-mK_X))\\={}&\frac{1}{12}m(m+1)(2m+1)(-K_X)^3+(2m+1)-l(m+1) \end{align*} where $l(m+1)=\sum_i\sum_{j=1}^m\frac{\overline{jb_i}(r_i-\overline{jb_i})}{2r_i}$ and the first sum runs over all orbifold points in the Reid basket (\cite[2.2]{CJ16}). Here $\overline{jb_i}$ denotes the smallest non-negative residue of $jb_i \bmod r_i$. We will freely use the following lemma to compute anti-pluri-genera. \begin{lem}\label{lem Xd RR} Let $X$ be a canonical weak Fano $3$-fold such that $(-K_X)^3=(-K_{X_{d}})^3$ and $B_{X}=B_{X_{d}}$ for some weighted hypersurface $$X_{d}\subset \mathbb{P}(a_0,a_1,a_2,a_3, a_4)$$ as in Iano-Fletcher's list \cite[16.6]{IF00}. Then $$ \sum_{m\geq 0}h^0(X, -mK_X)q^m=\frac{1-q^{d}}{(1-q^{a_0})(1-q^{a_1})(1-q^{a_2})(1-q^{a_3})(1-q^{a_4})}. $$ \end{lem} \begin{proof} By Reid's Riemann--Roch formula, $h^0(X, -mK_X)$ depends only on $(-K_X)^3$ and $B_X$. Note that $\mathcal{O}_{{X_{d}}}(-K_{{X_{d}}})=\mathcal{O}_{\mathbb{P}}(1)|_{{X_{d}}}$ where $\mathcal{O}_{\mathbb{P}}(1)$ is the natural twisting sheaf of $\mathbb{P}(a_0,a_1,a_2,a_3, a_4)$. So \begin{align*} \sum_{m\geq 0}h^0(X, -mK_X)q^m={}&\sum_{m\geq 0}h^0({X_{d}}, \mathcal{O}_{\mathbb{P}}(m)|_{{X_{d}}} )q^m, \end{align*} and the last series can be computed by \cite[Theorem~3.4.4]{Dol82}. \end{proof}
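As a quick sanity check of these formulas (the following computation is only illustrative), consider No.~95 in Table~\ref{tableA}, where $(-K_X)^3=\frac{1}{330}$ and $B_X=\{(1,2),(2,5),(1,3),(2,11)\}$. For $m=1$, Reid's formula gives $$h^0(X, -K_X)=\frac{1}{12}\cdot 1\cdot 2\cdot 3\cdot \frac{1}{330}+3-\left(\frac{1\cdot 1}{2\cdot 2}+\frac{2\cdot 3}{2\cdot 5}+\frac{1\cdot 2}{2\cdot 3}+\frac{2\cdot 9}{2\cdot 11}\right)=\frac{1}{660}+3-\frac{1321}{660}=1,$$ which agrees with the coefficient of $q$ in $\frac{1-q^{66}}{(1-q)(1-q^{5})(1-q^{6})(1-q^{22})(1-q^{33})}$, coming from the unique monomial of degree $1$.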
\section{Generic finiteness and birationality}\label{section 3} In this section we recall the criteria for generic finiteness and birationality of anti-pluri-canonical systems of canonical weak Fano $3$-folds from \cite{CJ16, Phd}. We refer to \cite{CJ16} for basic definitions. Here we remind the reader that in \cite{CJ16}, a $\mathbb{Q}$-factorial terminal Fano $3$-fold with $\rho=1$ is called a {\it $\mathbb{Q}$-Fano $3$-fold}, and a terminal weak Fano $3$-fold is called a {\it weak $\mathbb{Q}$-Fano $3$-fold}. As a canonical weak Fano $3$-fold has a terminalization which is a crepant birational morphism from a terminal weak Fano $3$-fold, all results in \cite{CJ16}, which hold for terminal weak Fano $3$-folds, also hold for canonical weak Fano $3$-folds (see \cite[Remark~1.9]{CJ16}). Let $X$ be a canonical weak Fano $3$-fold and $m$ a positive integer such that $h^0(X, -mK_X)\geq 2$. We consider the rational map $\varphi_{-m}$ defined by the linear system $|-mK_X|$. We say that $|-mK_X|$ defines a generically finite map (resp. a birational map) if $\varphi_{-m}$ is generically finite (resp. birational) onto its image. We say that $|-mK_X|$ is {\it composed with a pencil} if the image of $\varphi_{-m}$ is a curve. \begin{set}\label{setup} \begin{enumerate} \item Let $X$ be a canonical weak Fano $3$-fold and $m_0$ a positive integer such that $h^0(X, -m_0K_X)\geq 2$. \item Suppose that $m_1\geq m_0$ is an integer with $h^0(X, -m_1K_X)\geq 2$ such that $|-m_1K_X|$ is not composed with a pencil. \item Take a resolution $\pi: W\to X$ such that $$ \pi^*(-mK_X)=M_{m}+F_{m} $$ where $M_m$ is free and $F_{m}$ is the fixed part, for all $m_0\leq m\leq m_1$. \item Pick a generic irreducible element $S$ of $|M_{m_0}|$ (a generic irreducible element means an irreducible component of a general element; see \cite{CJ16}). We have $m_0\pi^*(-K_X)\geq S$. \item Define the real number $$\mu_0:=\text{inf}\{t\in \mathbb{Q}^+ \mid t\pi^*(-K_X)-S\sim_{\mathbb{Q}} \text{effective}\ \mathbb{Q}\text{-divisor}\}.$$ It is clear that $\mu_0\leq m_0$. \item By the assumption on $|-m_1K_X|$, we know that $|{M_{m_1}}|_{S}|$ is a non-trivial base point free linear system. Denote by $C$ a generic irreducible element of $|{M_{m_1}}|_{S}|$. \item Define \begin{align*} \zeta{}&:=(\pi^*(-K_X)\cdot C);\\ \varepsilon(m){}&:=(m+1-\mu_0-m_1)\zeta. \end{align*} \end{enumerate} \end{set} Under this setup, we recall two propositions from \cite{CJ16, Phd}. \begin{prop}[{\cite[Proposition~5.7]{CJ16}}]\label{prop zeta} Let $X$ be a canonical weak Fano $3$-fold. Keep the notation in Setup~\ref{setup}. \begin{enumerate} \item If $g(C)>0$, then $\zeta\geq \frac{2g(C)-1}{\mu_0+m_1}$. \item If $g(C)=0$, then $\zeta\geq 2$. \end{enumerate} \end{prop} \begin{prop}\label{prop gen finite bir} Let $X$ be a canonical weak Fano $3$-fold with $h^0(X, -K_X)>0$. Keep the notation in Setup~\ref{setup}. Take an integer $m\geq m_0+m_1+1$. \begin{enumerate} \item If $\varepsilon(m) > \max\{2-g(C), 0\}$, then $|-mK_X|$ defines a generically finite map. \item If $\varepsilon(m) > 2$, then $|-mK_X|$ defines a birational map. \end{enumerate} \end{prop} \begin{proof} This follows from \cite[Theorem~5.5]{CJ16} and \cite[Theorem~4.4.4]{Phd}. Here we only need to explain that when $m\geq m_0+m_1+1$, \cite[Assumption~5.3]{CJ16} and \cite[Assumption~4.4.3]{Phd} hold. By the proof of \cite[Propositions~5.8 and 5.10]{CJ16}, these assumptions hold as long as $m\geq m_0+m_1+k_0$, where $k_0$ is an integer such that $h^0(X, -kK_X)>0$ for all $k\geq k_0$ (we take $k_0=6$ in \cite{CJ16, Phd}). As $h^0(X, -K_X)>0$, we can take $k_0=1$ in this proposition. \end{proof} By Propositions~\ref{prop zeta} and \ref{prop gen finite bir}, we get the following consequence on the behavior of anti-pluri-canonical systems. \begin{cor}\label{cor bir criterion} Let $X$ be a canonical weak Fano $3$-fold with $h^0(X, -K_X)>0$. Keep the notation in Setup~\ref{setup}. \begin{enumerate} \item If $m\geq 2m_0+2m_1$, then $|-mK_X|$ defines a generically finite map. \item If $g(C)\neq 1$ and $m\geq m_0+m_1+1$, then $|-mK_X|$ defines a generically finite map. \item If $m\geq 3m_0+3m_1$, then $|-mK_X|$ defines a birational map. \end{enumerate} \end{cor} \section{Proofs of main theorems} In this section, we prove Theorems~\ref{mainthm} and \ref{mainthm2}. We will analyze the behavior of anti-pluri-canonical systems. \begin{lem}\label{lem 1} Let $X$ be a canonical weak Fano $3$-fold such that $(-K_X)^3=(-K_{X_{6d}})^3$ and $B_{X}=B_{X_{6d}}$ for some $$X_{6d}\subset \mathbb{P}(1,a,b,2d, 3d)$$ as in Table~\ref{tableA}. Then \begin{enumerate} \item $h^0(X, -kK_X)>0$ for every positive integer $k$; \item $h^0(X, -aK_X)\geq 2$. \end{enumerate} \end{lem} \begin{proof} This can be verified directly by Lemma~\ref{lem Xd RR}. \end{proof} \begin{lem}\label{lem non-pencil} Let $X$ be a $\mathbb{Q}$-factorial terminal Fano $3$-fold with $\rho(X)=1$ such that $(-K_X)^3=(-K_{X_{6d}})^3$ and $B_{X}=B_{X_{6d}}$ for some $$X_{6d}\subset \mathbb{P}(1,a,b,2d, 3d)$$ as in Table~\ref{tableA}.
Then $|-bK_X|$ is not composed with a pencil. \end{lem} \begin{proof} If $b=1$, then $h^{0}(X, -K_X)=3$. Then $|-K_X|$ is not composed with a pencil by \cite[Theorem~3.2]{CJ16} (with $m=1$). If $a=1<b$, then $h^0(X, -K_X)=2$. Then $|-K_X|$ has no fixed part by \cite[Theorem~3.2]{CJ16} (with $m=1$). Moreover, by Lemma~\ref{lem Xd RR}, $h^0(X, -kK_X)=k+1$ for $1\leq k \leq b-1$ and $h^0(X, -bK_X)=b+2$. Then $|-bK_X|$ is not composed with a pencil by \cite[Theorem~3.4]{CJ16} (with $m=n_0=1$ and $l_0=b$). If $a>1$, then $h^0(X, -K_X)=1$. Then the unique element of $|-K_X|$ is a prime divisor by \cite[Theorem~3.2]{CJ16} (with $m=1$). Moreover, by Lemma~\ref{lem Xd RR}, $$h^0(X, -kK_X)=\begin{cases}1 & \text{if }1\leq k\leq a-1; \\ \lfloor k/a\rfloor+1 & \text{if } a\leq k\leq b-1;\\\lfloor b/a\rfloor+2 & \text{if } k=b.\end{cases}$$ Then $|-bK_X|$ is not composed with a pencil by \cite[Theorem~3.4]{CJ16} (with $m=1$, $n_0=a$, and $l_0=b$). \end{proof} Lemma~\ref{lem non-pencil} is the only place where we use the condition that $X$ is $\mathbb{Q}$-factorial terminal with $\rho(X)=1$. Its proof relies heavily on \cite[Theorem~3.2]{CJ16}, which generalizes \cite[Theorem~2.18]{Ale94}. Without this assumption, we have no idea how to show the non-pencilness of $|-bK_X|$ in Lemma~\ref{lem non-pencil}. On the other hand, once we know the non-pencilness of $|-bK_X|$, the geometry of $X$ is as good as we expect. So we make the following definition. \begin{definition} We say that a canonical weak Fano $3$-fold $X$ satisfies condition (IF$_{a, b}$) if the following conditions hold: \begin{enumerate} \item $(-K_X)^3=(-K_{X_{6d}})^3$ and $B_{X}=B_{X_{6d}}$ for some $$X_{6d}\subset \mathbb{P}(1,a,b,2d, 3d)$$ as in Table~\ref{tableA}. \item $|-bK_X|$ is not composed with a pencil. \end{enumerate} \end{definition} Then we can describe the behavior of the anti-pluri-canonical systems of weak Fano $3$-folds satisfying condition (IF$_{a,b}$), which is the same as that of the weighted hypersurfaces in Table~\ref{tableA} (compare \cite[Example~5.12]{CJ16}, \cite[Example~4.4.12]{Phd}). \begin{lem}\label{lem gen finite} Let $X$ be a canonical weak Fano $3$-fold satisfying condition (IF$_{a,b}$). Then \begin{enumerate} \item $|-2dK_X|$ defines a generically finite map; \item $|-3dK_X|$ defines a birational map. \end{enumerate} \end{lem} \begin{proof} This follows from Corollary~\ref{cor bir criterion} (taking $m_0=a$ and $m_1=b$). \end{proof} Combining with Reid's Riemann--Roch formula, we get algebraic information about the anti-canonical ring of $X$ from the anti-canonical geometry of $X$. \begin{lem}\label{lem fghp} Let $X$ be a canonical weak Fano $3$-fold satisfying condition (IF$_{a,b}$). Take general elements \begin{align*} {}&f\in H^0(X, -K_X)\setminus\{0\},\\ {}&g\in H^0(X, -aK_X)\setminus\{0\},\\ {}&h\in H^0(X, -bK_X)\setminus\{0\}, \\ {}&p\in H^0(X, -2dK_X)\setminus\{0\},\\ {}&q\in H^0(X, -3dK_X)\setminus\{0\}. \end{align*} For a positive integer $k$, denote $$ S_k=\{f^{s_1}g^{s_2}h^{s_3}p^{s_4}\mid s_1, \dots, s_4\in \mathbb{Z}_{\geq 0}, s_1+as_2+bs_3+2ds_4=k\}. $$ Then \begin{enumerate} \item The set $\{f, g, h, p\}$ is algebraically independent in the graded algebra $$R(X, -K_X)=\bigoplus_{k\geq 0}H^0(X, -kK_X);$$ \item for $1\leq k\leq 3d-1$, $H^{0}(X, -kK_X)$ has $S_k$ as a basis; \item $H^{0}(X, -3dK_X)$ has $S_{3d}\cup\{q\}$ as a basis. \end{enumerate} \end{lem} \begin{proof} First we show the following claim. \begin{claim}\label{claim fgh} The set $\{f, g, h\}$ is algebraically independent.
\end{claim} \begin{proof} If $a=b=1$, then as $h^0(X, -bK_X)=3$, $\{f, g, h\}$ is a basis of $H^0(X, -K_X)$. By assumption, $|-bK_X|=|-K_X|$ is not composed with a pencil, which is equivalent to saying that the rational map $X\dashrightarrow \mathbb{P}^2$ induced by $f, g, h$ is dominant. This implies that $\{f, g, h\}$ is algebraically independent. If $b>1$, we have $h^0(X, -aK_X)=2$. So $H^0(X, -aK_X)$ is spanned by $\{f^a, g\}$ and $\{f, g\}$ is algebraically independent. So $$S_b\setminus \{h\}=\{f^{b-as_2}g^{s_2}\mid 0\leq s_2\leq \lfloor b/a\rfloor\}$$ is linearly independent in $H^0(X, -bK_X)$. On the other hand, $h^0(X, -bK_X)=\lfloor b/a\rfloor+2=|S_b|$ by Lemma~\ref{lem Xd RR}. So $S_b$ is a basis of $H^0(X, -bK_X)$ by the generality of $h$. By the assumption that $|-bK_X|$ is not composed with a pencil, the transcendence degree of $\mathbb{C}(f, g, h)$ is larger than $2$, which implies that $\{f, g, h\}$ is algebraically independent. \end{proof} By Claim~\ref{claim fgh}, $$ S_k'=\{f^{s_1}g^{s_2}h^{s_3}\mid s_1, s_2, s_3\in \mathbb{Z}_{\geq 0}, s_1+as_2+bs_3=k\} $$ is linearly independent in $H^0(X, -kK_X)$ for any positive integer $k$. On the other hand, $h^0(X, -2dK_X)=|S'_{2d}|+1$ by Lemma~\ref{lem Xd RR}. Then $S_{2d}=S'_{2d}\cup \{p\}$ is a basis of $H^0(X, -2dK_X)$ by the generality of $p$. By Lemma~\ref{lem gen finite}, $|-2dK_X|$ defines a generically finite map, so the transcendence degree of $\mathbb{C}(f, g, h, p)$ is larger than $3$, which implies that $\{f, g, h, p\}$ is algebraically independent. This proves (1). In particular, $S_k$ is linearly independent in $H^0(X, -kK_X)$ for any positive integer $k$. So (2) follows from the computation that $h^0(X, -kK_X)=|S_{k}|$ for $1\leq k\leq 3d-1$ by Lemma~\ref{lem Xd RR}, and (3) follows from the computation that $h^0(X, -3dK_X)=|S_{3d}|+1$ and the generality of $q$. \end{proof} The following is the key lemma of this note, which allows us to drop the non-rationality condition in \cite{330}. \begin{lem}\label{lem deg=2} Let $X$ be a canonical weak Fano $3$-fold satisfying condition (IF$_{a,b}$). Then \begin{enumerate} \item $|-kK_X|$ does not define a generically finite map for $k<2d$; \item $|-2dK_X|$ defines a generically finite map of degree $2$. \item Let $\pi: W\to X$ be a resolution such that for $k=a, b, 2d$, $$\pi^*(-kK_X)=M_{k}+F_{k}$$ where $M_{k}$ is free and $F_{k}$ is the fixed part. Then $$(M_{a}\cdot M_{b}\cdot M_{2d})=2.$$ \end{enumerate} \end{lem} \begin{proof} By Lemma~\ref{lem fghp}, $S_k$ has transcendence degree at most $3$ for $k<2d$. So $|-kK_X|$ does not define a generically finite map for $k<2d$. Keep the same notation as in Setup~\ref{setup}. We may take $m_0=a$ and $m_1=b$. Then by construction, $S\in |M_a|$ and $C$ is a generic irreducible element of $|M_b|_S|$. If $g(C)\neq 1$, then by Corollary~\ref{cor bir criterion}, $|-(a+b+1)K_X|$ defines a generically finite map, which contradicts (1) as $a+b+1=d+1<2d$. So $g(C)=1$. If $|-2dK_X|$ defines a generically finite map of degree $d_0$, then since $M_{2d}$ is the free part of $|\pi^*(-2dK_X)|$ and $C$ is general, $|M_{2d}|_C|$ defines a generically finite map of degree $d_0$ on $C$. On the other hand, $$(M_{2d}\cdot C)\leq (M_{2d}\cdot M_a\cdot M_b)\leq 2abd\pi^*(-K_X)^3 =2.$$ Note that a divisor of degree $1$ on an elliptic curve is never movable, so $M_{2d}\cdot C=2$. Therefore, $(M_{2d}\cdot M_a\cdot M_b)=2$ and $d_0=2$. \end{proof} \begin{thm}\label{mainthm3} Let $X$ be a canonical weak Fano $3$-fold satisfying condition (IF$_{a,b}$).
Then $X$ is a weighted hypersurface in $\mathbb{P}(1,a,b,2d,3d)$ defined by a weighted homogeneous polynomial $F$ of degree $6d$, where $$F(x, y, z, w, t)=t^2+F_0(x, y, z, w)$$ in suitable homogeneous coordinates $[x: y: z: w: t]$ of $\mathbb{P}(1, a, b, 2d, 3d)$. \end{thm} \begin{proof} Keep the notation in Lemma~\ref{lem fghp}. We can define three rational maps by $\{f,g,h,p, q\}$: \begin{align*} \Phi_{b}: {}&X\dashrightarrow \mathbb{P}(1, a, b); \\ {}& P\mapsto [f(P):g(P):h(P)];\\ \Phi_{2d}: {}&X\dashrightarrow \mathbb{P}(1, a, b, 2d); \\ {}& P\mapsto [f(P):g(P):h(P):p(P)];\\ \Phi_{3d}: {}&X\dashrightarrow \mathbb{P}(1, a, b, 2d, 3d);\\ {}& P\mapsto [f(P):g(P):h(P):p(P): q(P)]. \end{align*} We claim that they have the following geometric properties. \begin{claim}Keep the above setting. \begin{enumerate} \item $\Phi_{b}$ is dominant; $\Phi_{2d}$ is dominant and generically finite of degree $2$; \item $\Phi_{3d}$ is birational onto its image; \item let $Y$ be the closure of $\Phi_{3d}(X)$ in $\mathbb{P}(1, a, b, 2d, 3d)$; then $Y$ is defined by a weighted homogeneous polynomial $F$ of degree $6d$, where $$F(x, y, z, w, t)=t^2+F_0(x, y, z, w)$$ in suitable homogeneous coordinates $[x: y: z: w: t]$ of $\mathbb{P}(1, a, b, 2d, 3d)$. \end{enumerate}\end{claim} \begin{proof} (1) By Lemma~\ref{lem fghp}, $\{f, g, h, p\}$ is algebraically independent. Hence $\Phi_{b}$ and $\Phi_{2d}$ are dominant. In particular, $\Phi_{2d}$ is generically finite for dimension reasons. The degree of $\Phi_{2d}$ is the number of points in the fiber over a general point in $\mathbb{P}(1, a, b, 2d)$. After taking a resolution as in Lemma~\ref{lem deg=2}(3), this number is just $(M_a\cdot M_b\cdot M_{2d})$. So $$ \deg \Phi_{2d}=(M_a\cdot M_b\cdot M_{2d})=2. $$ \medskip (2) By Lemma~\ref{lem gen finite}, $|-3dK_X|$ defines a birational map. As $q$ is general, it can separate two points in a general fiber of $\Phi_{2d}$, so $\Phi_{3d}$ is birational onto its image. \medskip (3) Note that $h^0(X, -6dK_X)=|S_{6d}|+|S_{3d}|$ by Lemma~\ref{lem Xd RR}. On the other hand, $$S_{6d}\sqcup (S_{3d}\cdot q)\sqcup \{q^2\}\subset H^0(X, -6dK_X).$$ So $S_{6d}\sqcup (S_{3d}\cdot q)\sqcup \{q^2\}$, having $|S_{6d}|+|S_{3d}|+1$ elements, is linearly dependent in $H^0(X, -6dK_X)$. In other words, there exists a weighted homogeneous polynomial $F(x, y, z, w, t)$ of degree $6d$ with $\text{wt}(x, y, z, w, t)=(1,a,b,2d,3d)$ such that $$F(f, g, h, p, q)=0.$$ So $Y$ is contained in $(F=0)\subset \mathbb{P}(1, a, b, 2d, 3d)$. Note that $Y$ is a hypersurface in $\mathbb{P}(1, a, b, 2d, 3d)$ for dimension reasons. We claim that $Y=(F=0)$ and that $t^2$ has non-zero coefficient in $F$. Otherwise, either $Y$ is defined by a weighted homogeneous polynomial of degree less than $6d$, or $t^2$ has zero coefficient in $F$. In either case, $Y$ is defined by a weighted homogeneous polynomial $\tilde{F}$ of the form $$\tilde{F}(x, y, z, w, t)=t\tilde{F}_1(x, y, z, w)+\tilde{F}_2(x, y, z, w).$$ Here $\tilde{F}_1\neq 0$, as $\{f, g, h, p\}$ is algebraically independent. Then $Y$ is birational to $\mathbb{P}(1, a, b, 2d)$ under the rational projection map \begin{align*} {}&\mathbb{P}(1, a, b, 2d, 3d)\dashrightarrow \mathbb{P}(1, a, b, 2d);\\ {}&[x:y:z:w:t]\mapsto [x:y:z:w]. \end{align*} But the induced map $X\dashrightarrow Y\dashrightarrow \mathbb{P}(1, a, b, 2d)$ coincides with $\Phi_{2d}$, which contradicts the fact that $\Phi_{2d}$ is not birational. So $Y=(F=0)$ and $t^2$ has non-zero coefficient in $F$. After a suitable coordinate change we may assume that $F=t^2+F_0(x, y, z, w)$.
\end{proof} Now we go back to the proof of Theorem~\ref{mainthm3}. By the above claim, $F$ is the only algebraic relation on $f, g, h, p, q$. Denote by $\mathcal{R}$ the graded sub-$\mathbb{C}$-algebra of $$R(X, -K_X)=\bigoplus_{m\geq 0}H^0(X, -mK_X)$$ generated by $\{f, g, h, p, q\}$. Then we have a natural isomorphism of graded $\mathbb{C}$-algebras $$ \mathcal{R}\simeq \mathbb{C}[x, y, z, w, t]/(t^2+F_0) $$ given by sending $f\mapsto x$, $g\mapsto y$, $h\mapsto z$, $p\mapsto w$, $q\mapsto t$, and the right hand side is exactly the weighted homogeneous coordinate ring of $Y$. Write $\mathcal{R}=\bigoplus_{m\geq 0}\mathcal{R}_m$ where $\mathcal{R}_m$ is the homogeneous part of degree $m$. Then by \cite[3.4.2]{Dol82}, $$ \sum_{m\geq 0}\dim_\mathbb{C} \mathcal{R}_m \cdot q^m= \frac{1-q^{6d}}{(1-q)(1-q^{a})(1-q^{b})(1-q^{2d})(1-q^{3d})}. $$ So by Lemma~\ref{lem Xd RR}, $\mathcal{R}_m=H^0(X, -mK_X)$ for any $m\in \mathbb{Z}_{\geq 0}$, and hence the inclusion $\mathcal{R}\subset R(X, -K_X)$ is an isomorphism. Since $-K_X$ is ample, this implies that $$X\simeq \operatorname{Proj} R(X, -K_X) = \operatorname{Proj}\mathcal{R}\simeq Y. $$ This finishes the proof. \end{proof} \begin{proof}[Proof of Theorem~\ref{mainthm2}] It follows from Lemma~\ref{lem non-pencil} and Theorem~\ref{mainthm3}. \end{proof} \begin{proof}[Proof of Theorem~\ref{mainthm}] By \cite[Theorem~1.1]{CC08}, if $(-K_X)^3=\frac{1}{330}$, then $B_{X}=B_{X_{66}}$ for $$X_{66}\subset \mathbb{P}(1,5,6,22, 33)$$ as in Table~\ref{tableA}. Hence the theorem is a special case of Theorem~\ref{mainthm2}. \end{proof} \section{Another approach via the Noether--Fano--Iskovskikh inequality and general elephants} In this section, we discuss another possible approach to Theorem~\ref{mainthm2}. Keeping the notation in Lemma~\ref{lem fghp} and following the proof of Theorem~\ref{mainthm2} and that in \cite{330}, the essential point is to show that the map \begin{align*} \Phi_{2d}: {}&X\dashrightarrow \mathbb{P}(1, a, b, 2d); \\ {}& P\mapsto [f(P):g(P):h(P):p(P)] \end{align*} is not birational. Suppose that $\Phi_{2d}$ is birational. Note that $\mathcal{O}_{\mathbb{P}}(2abd)$ is very ample on $\mathbb{P}(1, a, b, 2d)$ and the strict transform of $|\mathcal{O}_{\mathbb{P}}(2abd)|$ on $X$ is the movable linear system $\mathcal{M}\subset |-2abdK_X|$ generated by $S_{2abd}\subset H^0(X, -2abdK_X)$. As $X$ is a $\mathbb{Q}$-factorial terminal Fano $3$-fold with $\rho(X)=1$, the Noether--Fano--Iskovskikh inequality (\cite{Cor95}, \cite{CPR00}) implies that $(X, \frac{1}{2abd}\mathcal{M})$ is not canonical. So in order to show that $\Phi_{2d}$ is not birational, it suffices to show that $(X, \frac{1}{2abd}\mathcal{M})$ is canonical. Furthermore, by a Bertini-type theorem, it suffices to show that $(X, E)$ is canonical, where $E\in |-K_X|$ is defined by $f\in H^0(X, -K_X)$; this is equivalent to saying that a general element in $|-K_X|$ has at worst Du Val singularities. The latter statement is the {\it general elephant conjecture} proposed by Reid. If $X$ is a smooth Fano $3$-fold, then earlier work of Shokurov \cite{Sho} shows that a general element of $|-K_X|$ is smooth. In general there are counterexamples to the general elephant conjecture for terminal Fano $3$-folds (see for example \cite[Examples~4.2--4.5]{Sano}), but we might still hope that it holds for the Fano $3$-folds in Theorem~\ref{mainthm2}.
In fact, there is an interesting one-to-one correspondence between general elephants of the weighted hypersurface Fano $3$-folds in Iano-Fletcher's list \cite[16.6]{IF00} and the weighted hypersurface Du Val K3 surfaces in Reid's list \cite[13.3]{IF00}. See also \cite{Ale94} for discussions of the general elephant conjecture. Lastly, recall that our goal is to show that $(X, \frac{1}{2abd}\mathcal{M})$ is canonical, which is weaker than the general elephant conjecture. One can try to prove this by the methods in \cite{CP17}. The bad news is that we have no hypersurface structure on $X$ and it is not easy to find good divisors and curves as in \cite{CP17}; on the other hand, we know a lot about the singularities of $X$ and the behavior of its anti-pluri-canonical systems, which might help us to shape the geometry of $X$. For example, it might be possible to show that the movable linear system $\mathcal{M}$ is free (this is true by the conclusion of Theorem~\ref{mainthm2}, but we are looking for a different approach here), and hence that $(X, \frac{1}{2abd}\mathcal{M})$ is automatically canonical by the Bertini theorem. \section*{Acknowledgments} The author was supported by the National Key Research and Development Program of China (Grant No.~2020YFA0713200) and the NSFC for Innovative Research Groups (Grant No.~12121001).
\section{A Tribute to Wilhelm von Waldenfels} Wilhelm was a master of doing concrete, hands-on calculations and extracting abstract concepts from such calculations, leading to a deeper understanding of what is going on; see, for example, \cite{GivW,GlvW,vWa}. Much of my own work -- not only during my Heidelberg times as a PhD student of Wilhelm's, but more or less during my whole career -- was inspired by this role model. One example of this spirit was my investigation of the question: how many notions of non-commutative independence are there? Or, from my perspective: how special is free independence? This was something which was started in Heidelberg, mainly in discussions with Michael Sch\"urmann, and later resulted in the classification of non-commutative independences in works of Ben Ghorbal and Sch\"urmann \cite{BGS}, Muraki \cite{Mur}, and myself \cite{Spe}. It might be fair to say that my contribution is more in Wilhelm's spirit of concrete calculations, whereas Michael took over more of the abstract conceptual approach. Since those beginnings, when non-commutative life was nice and easy, there have emerged many new versions and generalizations of notions of non-commutative independence, most notably the theory of bi-freeness \cite{Voi} and the one of higher order freeness \cite{MSp}. In such theories, one is not just looking at an algebra equipped with a state (or linear functional), but there is more structure around -- like distinguished subalgebras in the case of bi-freeness or an additional bilinear functional in the case of second order freeness -- and the notion of independence should respect this additional structure. We have in bi-freeness or second order freeness some basic examples for such independences, but again we would like to see how many more there are. Again I would prefer if there are not so many, so that the theories we are working on can be sold as quite unique theories and not just as one possibility in a multiverse of many. Recently there has been quite some progress -- in particular by Var\u so \cite{Var} and by Gerhold, Hasebe, and Ulrich \cite{GHU} -- on those issues, most notably for bi-freeness. As is to be expected, the situation is much more complicated than in the ``classical'' case and there seem to be possibilities at least for non-trivial deformations of the known examples. Maybe there is still hope that modulo such deformations (which might be excluded by some more or less canonical extra normalization conditions) bi-freeness and its few relatives are unique theories. From time to time I also try applying my hands-on approach to these questions, in particular for the second order situation. There are at least some simple observations, which are still far away from a complete classification of such possibilities, but which might be a first step. I think it is in the spirit of Wilhelm if in the following I present some of these calculations and hope that others might see more in them. Let me close this introductory section with another important lesson which I learned from Wilhelm, namely that thinking, in particular on mathematics, is hard and it does not become easier when you get older. In those times in Heidelberg I could not really appreciate this insight and thought that age and experience would help to understand things better and more easily.
As I now experience myself, this is unfortunately not the case; in order to understand and see some structure, I still have to calculate many examples and hope to get some more insight by discussing also preliminary and inconclusive results with others. Just sitting together with others and talking about your problems often results in unexpected progress. That's what we did in Heidelberg back in the good old days \dots \quad \includegraphics[width=4.in]{heidelberg.jpeg} \quad \dots and that's what I am still trying to do nowadays. The following is such a discussion of my ignorance, in the spirit of Wilhelm's approach to mathematics. \section{Universal second order constructions} A non-commutative probability space $(\mathcal{A},\varphi)$ consists of a unital algebra $\mathcal{A}$ and a unital linear functional $\varphi:\mathcal{A}\to{\mathbb C}$. The notion of independence corresponds then to constructions which embed given non-commutative probability spaces into a bigger one; and this construction should be natural or universal in the sense that it does not use any internal structure of the considered non-commutative probability spaces, but works in the same way for all of them. In particular, this means that all elements in an algebra have to be considered equal; there is no special role for generators. In probabilistic terms this corresponds to the requirement that functions of independent variables should also be independent. One can formulate the requirements for such constructions in an abstract categorical setting (and this is done by Michael Sch\"urmann \cite{BGS}); but from my more naive approach a universal construction should just be given by formulas expressing mixed moments in terms of moments of the individual variables. The universality means then that those formulas cannot use more information than is visible in the mixed moment and, in particular, they must respect associativity. In order to go for the nicest situation I usually, and in particular for today, also restrict to unital situations, where the unit is respected, and to symmetric products, where the product is invariant under exchanging the non-commutative probability spaces. In this setting I could then show \cite{Spe} that the consistency of the calculation rules for mixed moments poses strong conditions on the possibilities, and the final result is that there are only two such constructions, namely classical (= tensor) independence and free independence. I think the astonishing point here is not that one can exclude other possibilities, but the fact that this exclusion does not bring you down to one, but to two possibilities. This means of course that at some point your arguments must be of a non-linear nature. Before the birth of free probability \cite{Voi1}, tensor independence was considered as the one and only form of independence in the non-commutative setting. That there is another one came as a surprise and showed that the ideas which one has about what a notion of independence should satisfy are not strong enough to make this unique. Then for a while the pendulum was moving in the opposite direction and there was the feeling around that if there are two possibilities then there should probably be many more. However, this is not true. There are just two. Let us now move forward to second-order probability spaces.
There a non-commutative probability space is equipped with an additional object, namely a functional with two arguments $$\varphi_2:\mathcal{A}\times \mathcal{A}\to{\mathbb C};\qquad (a_1,a_2)\mapsto \varphi_2(a_1,a_2),$$ which is linear in each argument and for which the unit also plays a special role, but in the following way: $$\varphi_2(a,1)=0=\varphi_2(1,a)$$ for all $a\in\mathcal{A}$. (Think of $\varphi_2$ as a covariance of traces of random matrices.) We also assume that $\varphi_2$ is symmetric in its two arguments and that it is tracial in each of its arguments, but I am not sure whether this is relevant for the universality question. In this setting it is then appropriate to also denote the state $\varphi$ from before as $\varphi_1$. A second-order non-commutative probability space is thus a triple $(\mathcal{A},\varphi_1,\varphi_2)$. Positivity might be assumed for $\varphi_1$ (but is usually not relevant for the arguments), but for $\varphi_2$ we don't even know what a good notion of (or replacement for) positivity would be. Let us now start our game and try to find out what the formulas for mixed moments could look like. We can of course restrict everything to first order, only dealing with $\varphi_1$. But then we are back to the old situation and we know that there are only two possibilities. Since we will allow that mixed moments of second order can also depend on moments of first order we have to specify which situation we want to consider for $\varphi_1$. Clearly, I vote for the free situation. (Though, it might be interesting to see whether there is a second order theory which goes with tensor independence.) Now we can look at second order mixed moments. In the following I will assume that I have two or more algebras and I will denote the elements from the first algebra by $a$, the elements from the second algebra by $b$ and so on ... The first mixed moment of second order is $\varphi_2(a,b)$. The only moments in $a$ or in $b$ which I see there are $\varphi_1(a)$ and $\varphi_1(b)$. Because our formulas must be multi-linear in all appearing arguments, the only possibility is $$\varphi_2(a,b)=\alpha \varphi_1(a)\varphi_1(b)$$ for some complex number $\alpha\in{\mathbb C}$. The important point here is of course that $\alpha$ is independent from the concretely considered algebras and elements. So, in particular, we can put $b=1$, where we get $$0=\varphi_2(a,1)=\alpha \varphi_1(a)\varphi_1(1)=\alpha\varphi_1(a).$$ Since $\varphi_1(a)$ can be arbitrary, we thus have $\alpha=0$. Hence the general formula for this simplest mixed moment must be $$\varphi_2(a,b)=0.$$ Now let's consider a mixed moment of the form $\varphi_2(a_1b,a_2)$. The only individual moments I can build from this are $\varphi_2(a_1,a_2)\varphi_1(b)$, $\varphi_1(a_1a_2)\varphi_1(b)$, and $\varphi_1(a_1)\varphi_1(a_2)\varphi_1(b)$; hence the formula for the mixed moment must be of the form $$\varphi_2(a_1b,a_2)=\alpha \varphi_2(a_1,a_2)\varphi_1(b)+\beta \varphi_1(a_1a_2)\varphi_1(b)+\gamma \varphi_1(a_1)\varphi_1(a_2)\varphi_1(b)$$ for three coefficients $\alpha,\beta,\gamma\in{\mathbb C}$. 
Putting $b=1$ gives \begin{align*} \varphi_2(a_1,a_2)&=\alpha \varphi_2(a_1,a_2)\varphi_1(1)+\beta \varphi_1(a_1a_2)\varphi_1(1)+\gamma \varphi_1(a_1)\varphi_1(a_2)\varphi_1(1)\\ &=\alpha \varphi_2(a_1,a_2)+\beta \varphi_1(a_1a_2)+\gamma \varphi_1(a_1)\varphi_1(a_2). \end{align*} Since the three different moments in this formula can be chosen independently (here one notices that insisting on positivity is in general not a good idea for such arguments), this implies that $\alpha=1$ and $\beta=\gamma=0$ and thus $$\varphi_2(a_1b,a_2)=\varphi_2(a_1,a_2)\varphi_1(b).$$ Now let's get more serious and consider \begin{multline*} \varphi_2(a_1b_1,a_2b_2)=\alpha_1\varphi_2(a_1,a_2)\varphi_2(b_1,b_2)\\+\alpha_2 \varphi_2(a_1,a_2)\varphi_1(b_1b_2)+\alpha_3\varphi_2(a_1,a_2)\varphi_1(b_1)\varphi_1(b_2)\\ + \alpha_4\varphi_1(a_1a_2)\varphi_2(b_1,b_2)+\alpha_5\varphi_1(a_1)\varphi_1(a_2)\varphi_2(b_1,b_2)\\+\alpha_6\varphi_1(a_1a_2)\varphi_1(b_1b_2)+\alpha_7\varphi_1(a_1a_2)\varphi_1(b_1)\varphi_1(b_2)\\+\alpha_8 \varphi_1(a_1)\varphi_1(a_2)\varphi_1(b_1b_2)+\alpha_9\varphi_1(a_1)\varphi_1(a_2)\varphi_1(b_1)\varphi_1(b_2). \end{multline*} Putting $b_2=1$ gives \begin{align*} \varphi_2(a_1,a_2)\varphi_1(b_1)&= \varphi_2(a_1b_1,a_2)\\ &=\alpha_2 \varphi_2(a_1,a_2)\varphi_1(b_1)+\alpha_3\varphi_2(a_1,a_2)\varphi_1(b_1)\\ &\quad+\alpha_6\varphi_1(a_1a_2)\varphi_1(b_1)+\alpha_7\varphi_1(a_1a_2)\varphi_1(b_1)\\ &\quad+\alpha_8 \varphi_1(a_1)\varphi_1(a_2)\varphi_1(b_1)+\alpha_9\varphi_1(a_1)\varphi_1(a_2)\varphi_1(b_1). \end{align*} This yields $$\alpha_2+\alpha_3=1,\qquad \alpha_6+\alpha_7=0,\qquad \alpha_8+\alpha_9=0.$$ Putting $b_1=1$ gives the same. So let us now put $a_2=1$ instead: \begin{align*} \varphi_1(a_1)\varphi_2(b_1,b_2)&= \varphi_2(a_1b_1,b_2)\\ &= \alpha_4\varphi_1(a_1)\varphi_2(b_1,b_2)+\alpha_5\varphi_1(a_1)\varphi_2(b_1,b_2)\\ &\quad+\alpha_6\varphi_1(a_1)\varphi_1(b_1b_2)+\alpha_7\varphi_1(a_1)\varphi_1(b_1)\varphi_1(b_2)\\ &\quad+\alpha_8 \varphi_1(a_1)\varphi_1(b_1b_2)+\alpha_9\varphi_1(a_1)\varphi_1(b_1)\varphi_1(b_2). \end{align*} This gives $$\alpha_4+\alpha_5=1,\qquad \alpha_6+\alpha_8=0,\qquad \alpha_7+\alpha_9=0.$$ By symmetry in $a$ and $b$ we also know that $$\alpha_2=\alpha_4,\qquad \alpha_3=\alpha_5,\qquad \alpha_7=\alpha_8.$$ Not all equations are linearly independent, so we do not get a unique solution from them. We need some additional argument(s). For this let us now use associativity of the universal constructions and consider $\varphi_2(a_1b_1c_1,a_2b_2c_2)$ (where three different algebras are now involved). We will see whether and how we can get the term $\varphi_1(a_1a_2)\varphi_1(b_1b_2)\varphi_2(c_1,c_2)$ in the final formula; first we do the calculation according to \begin{align*} \varphi_2(a_1(b_1c_1),a_2(b_2c_2))&=\alpha_4 \varphi_1(a_1a_2) \varphi_2(b_1c_1,b_2c_2)+\cdots\\ &=\alpha_4\varphi_1(a_1a_2) \alpha_4 \varphi_1(b_1b_2)\varphi_2(c_1,c_2)+\cdots \\ &=\alpha_4^2\varphi_1(a_1a_2) \varphi_1(b_1b_2)\varphi_2(c_1,c_2)+\cdots. \end{align*} Let us now do the calculation according to \begin{align*} \varphi_2((a_1b_1)c_1,(a_2b_2)c_2)&=\alpha_4\varphi_1(a_1b_1a_2b_2)\varphi_2(c_1,c_2)+\cdots\\ &=\alpha_4\cdot 0\cdot \varphi_1(a_1a_2)\varphi_1(b_1b_2)\varphi_2(c_1,c_2)+\cdots. \end{align*} In the last line we have used the fact that in first order we have freeness and thus the wanted term does not appear in the factorization of $\varphi_1(a_1b_1a_2b_2)$. This calculation thus gives $\alpha_4^2=0$ and thus $\alpha_4=0$.
From this we then also get $$\alpha_2=0,\qquad\alpha_3=1, \qquad\alpha_4=0,\qquad \alpha_5=1. $$ Still not enough information, but let us write down what we know up to now: \begin{multline*} \varphi_2(a_1b_1,a_2b_2)=\alpha\varphi_2(a_1,a_2)\varphi_2(b_1,b_2)\\+\varphi_2(a_1,a_2)\varphi_1(b_1)\varphi_1(b_2) +\varphi_1(a_1)\varphi_1(a_2)\varphi_2(b_1,b_2)\\+\beta[\varphi_1(a_1a_2)\varphi_1(b_1b_2)-\varphi_1(a_1a_2)\varphi_1(b_1)\varphi_1(b_2)\\- \varphi_1(a_1)\varphi_1(a_2)\varphi_1(b_1b_2)+\varphi_1(a_1)\varphi_1(a_2)\varphi_1(b_1)\varphi_1(b_2)], \end{multline*} where we have renamed $\alpha_1$ and $\alpha_6$ to $\alpha$ and $\beta$, respectively. We will now follow the term $\varphi_2(a_1,a_2)\varphi_1(b_1)\varphi_1(b_2)\varphi_1(c_1)\varphi_1(c_2)$ in the same calculation as before. This term can appear on one side as \begin{align*} \varphi_2((a_1b_1)c_1,(a_2b_2)c_2)&=\varphi_2(a_1b_1,a_2b_2)\varphi_1(c_1)\varphi_1(c_2)+\cdots\\ &=\varphi_2(a_1,a_2)\varphi_1(b_1)\varphi_1(b_2)\varphi_1(c_1)\varphi_1(c_2)+\cdots \end{align*} and on the other side as \begin{align*} &\varphi_2(a_1(b_1c_1),a_2(b_2c_2))\\&=\alpha\varphi_2(a_1,a_2)\varphi_2(b_1c_1,b_2c_2)+\varphi_2(a_1,a_2)\varphi_1(b_1c_1)\varphi_1(b_2c_2)+\cdots\\ &=[\alpha\beta+1]\varphi_2(a_1,a_2)\varphi_1(b_1)\varphi_1(b_2)\varphi_1(c_1)\varphi_1(c_2)+\cdots. \end{align*} Comparison of the coefficients gives $1=\alpha\beta+1$, thus $\alpha\beta =0$. So at least one of $\alpha$ or $\beta$ has to be zero. I am not sure whether there is a meaningful way to normalize the coefficients to 1, but let's do it, and so we are essentially left with two possibilities, namely \begin{multline*} \varphi_2(a_1b_1,a_2b_2)=\varphi_2(a_1,a_2)\varphi_1(b_1)\varphi_1(b_2) +\varphi_1(a_1)\varphi_1(a_2)\varphi_2(b_1,b_2)\\+[\varphi_1(a_1a_2)\varphi_1(b_1b_2)-\varphi_1(a_1a_2)\varphi_1(b_1)\varphi_1(b_2)\\- \varphi_1(a_1)\varphi_1(a_2)\varphi_1(b_1b_2)+\varphi_1(a_1)\varphi_1(a_2)\varphi_1(b_1)\varphi_1(b_2)], \end{multline*} or \begin{multline*} \varphi_2(a_1b_1,a_2b_2)=\varphi_2(a_1,a_2)\varphi_2(b_1,b_2)\\+\varphi_2(a_1,a_2)\varphi_1(b_1)\varphi_1(b_2) +\varphi_1(a_1)\varphi_1(a_2)\varphi_2(b_1,b_2). \end{multline*} The first case gives us our usual second order freeness. But what about the second case? Is there an argument which excludes the second possibility? And if we choose the first possibility here, are the formulas for the other mixed moments then determined, or do we have to make choices again and again? Experience with the classical case suggests that the concrete form of some mixed moments of small size determines everything, but experience might be misleading. This is the point where I am running out of steam and it would be good to sit together with Wilhelm and discuss what those calculations tell us and what to try next, in the hopes of getting some insight into the structure behind all this.
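In the spirit of Wilhelm's hands-on approach, one can also let a computer replay the linear bookkeeping above. The following small Python sketch (my own illustration, using the open-source SymPy library; the exact form of the printed solution may differ between versions) solves the collected linear constraints together with $\alpha_4=0$ and recovers exactly the one-parameter $\beta$-family displayed above:
\begin{verbatim}
from sympy import symbols, solve

# alpha1 (later renamed alpha) is only constrained by the
# nonlinear relation alpha*beta = 0 found at the very end.
a1, a2, a3, a4, a5, a6, a7, a8, a9 = symbols('alpha1:10')

constraints = [
    a2 + a3 - 1, a6 + a7, a8 + a9,  # from putting b_2 = 1
    a4 + a5 - 1, a6 + a8, a7 + a9,  # from putting a_2 = 1
    a2 - a4, a3 - a5, a7 - a8,      # symmetry in a and b
    a4,                             # associativity: alpha_4^2 = 0
]

print(solve(constraints, [a2, a3, a4, a5, a6, a7, a8, a9]))
# expected (up to the choice of free parameter):
# alpha2 = alpha4 = 0, alpha3 = alpha5 = 1,
# alpha6 = alpha9 = beta, alpha7 = alpha8 = -beta
\end{verbatim}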
\section{Overview and Motivation} \IEEEPARstart{H}{ands} or grippers are an essential component for robotic manipulation as they handle objects with certain positions, orientations and contact forces. They serve as the end-effector interfacing between target objects and robots. Dexterous grasping is a prerequisite for task-dependent manipulation, which requires the consideration of important factors such as the interaction force, stiffness/compliance, dexterity and number of degrees of freedom \cite{Raphael35}. \begin{figure}[t] \centering \includegraphics[width=.8\linewidth]{figure/vic.pdf} \caption{The soft humanoid hand and the grasping objects used in this work.} \vspace{-.5cm} \label{fig:handandobjects} \end{figure} Conventional rigid robotic hands for industrial applications are generally able to provide high accuracy in position thanks to their sophisticated actuation and sensing mechanisms. However, it is hard to control the contact force between a rigid hand and objects, as the rigid structure driven by electrical motors commonly generates large contact forces. In real-world scenarios, we usually require grippers to manipulate objects with uncertain shapes, sizes and poses in uncertain environments \cite{Xiaomin1}. Moreover, when the targeted objects are fragile or delicate, large contact forces can deform or even damage the objects. Other drawbacks of these rigid hands are their heavy weight and high cost. Thus, soft robotic hands with passive compliance have attracted attention for their inherently safe and adaptive contact. Soft robotic hands not only can easily adapt to objects of various shapes and sizes, but also can perform a self-adaptive contact without the sophisticated control that rigid hands require. Furthermore, their soft nature helps to minimize the damage to the manipulated objects. \begin{figure*}[t] \centering \includegraphics[width=1\linewidth]{figure/softHand.pdf} \caption{The proposed pneumatic soft humanoid hand with dexterous palm. (a) Illustrates the 3D model of our hand. (b) Shows air chambers: the red parts show the inner distribution of the air chambers, which will be pressurized based on the SoftRobots plugin. (c) The soft robotic hand prototype.} \vspace{-.5cm} \label{fig:overallStructure} \end{figure*} Soft bending actuators, used as fingers, are the main component of soft robotic hands/grippers. They can be classified into different types, such as \acp{fea}, cable-driven actuators, \acp{sma}, and electromagnetic/magnetic actuators \cite{Shintake30}. Amongst these, \acp{fea} have provided a particularly sizable push toward the utilization of compliant hands. \acp{fea} are mainly made of silicone rubber and are driven pneumatically or hydraulically. The pneumatic type of \acp{fea} is known as \acp{spa}. The most popular \acp{spa} are the \ac{pneunets} bending actuators designed by Whitesides et al. \cite{Mosadegh24} and the fibre-reinforced actuators designed by Galloway et al. \cite{6766586}. A \ac{pneunets} actuator is bonded from two layers: a silicone-based top layer containing numerous chambers inside (like networks) and an inextensible bottom layer. When the actuator is inflated, the top layer extends and the actuator achieves a bending motion. A fibre-reinforced actuator comprises an extensible chamber, an inextensible layer and fibres. Its bending mechanism is similar to that of the \ac{pneunets} actuator. The fibre reinforcement is used to restrict the chamber to axial extension instead of useless radial expansion.
Both types are simple in design, effective and easy to fabricate. In the literature, there are different designs and applications for soft robotic hands based on the \ac{pneunets} bending actuator \cite{Lotfiani7, Abondance855,Yilin97,Clark65} and on fibre-reinforced actuator prototypes \cite{Raphael35, Xiaomin1, Polygerinos94, Fras03, Mahmoud77,Galloway19,Weiping015}. However, compared with the fibre-reinforced actuator, the \ac{pneunets} actuator has a lower input-pressure capacity, because its top layer is entirely soft for the same wall thickness, which limits its maximal grasping force. In addition, the fibre-reinforced actuator has a lower bending efficiency, which limits its bending angle under the same pressure. To overcome these shortcomings, in this paper we design a novel \acf{hbsf} by integrating the inner chamber network structure inspired by \ac{pneunets} with the fibre-reinforcement method. Recently, there have been rapid developments in soft robotic hands and grippers. However, most of them focus on the study of soft fingers and overlook the importance of the palm. The fingers are usually assembled together and fixed to a rigid palm or base. However, the palm plays a considerable role in grasping. The fixed positions of the fingers greatly limit the grasping scope and poses of the hands. In order to achieve dexterous manipulation, we refer to the postural variability of the hand: the higher this variability, the more dexterous we consider a hand (for examples of grasping postures, see the grasp taxonomies proposed by Feix et al. \cite{Feix27}). A robotic hand with a changeable palm can adjust the positions and orientations of the fingers, which can significantly improve the postural flexibility of a robotic hand in terms of the sizes and shapes of objects which can be grasped. Sun et al. \cite{Yilin97} presented a flexible robotic gripper with a rigid changeable palm. The distance between the fingers can be adjusted using a slider-and-beam mechanism. An opposable thumb is important and useful to achieve dexterity in a robotic hand. However, a rigid changeable palm can only change the positions of the fingers. The RBO Hand 2 \cite{Raphael35} has a soft palmar actuator for enabling thumb abduction. The authors proposed a new PneuFlex actuator with fibre reinforcement as the soft fingers. In addition, they also used two connected PneuFlex actuators as the base of the thumb to achieve the dexterity of the thumb. The other four fingers were fixed on a 3D-printed scaffold. The assembly angle between the thumb and the other four fingers is about 120\degree, rather than them lying in a plane. The soft biomimetic prosthetic hand developed by Fras et al. \cite{Fras03} presents a similar design for thumb abduction. They used the PneuFlex actuators as in \cite{Raphael35}, applying one for the thumb and two for the palm. The exoskeleton for fixing the soft actuators is deformable and based on a 3D scan of a real human hand. Both hands have soft actuators for thumb abduction. However, the palm motion for the other four fingers was ignored: the four fingers of their hands can only bend in a certain direction. In contrast to \cite{Raphael35} and \cite{Fras03}, in order to augment the human-like palmar function of the soft hand, we present a pneumatic soft humanoid palm that enables thumb abduction, splaying of the other four fingers, and palm bending. In this paper, we propose a compact hybrid solution for a soft humanoid hand, as shown in Figure \ref{fig:handandobjects}.
For the sake of clarity, the main contributions of this paper are: \begin{itemize} \item [--] A novel design of \acf{hbsf} by integrating the inner chamber-network structure with the fibre-reinforcement method. \item [--] A novel design of a pneumatic soft humanoid palm. \end{itemize} The soft robotic hand is made of soft materials only. The \ac{hbsf} can robustly grasp a variety of objects with different weights, sizes, shapes and stiffnesses. The soft humanoid hand consists of 5 soft pneumatic fingers and 2 parts of the soft palm, all of which are independent and assembled together using silicone connections. \begin{figure*}[htbp] \centering \includegraphics[width=.9\linewidth]{figure/structure.pdf} \caption{The structure of the \ac{hbsf}. (a) Schematic illustration of the components of the \ac{hbsf}. The fibre-reinforcement structure is represented by the thin raised features on the main body's surface. (b) The sectional drawing (bottom view). The red chambers in (c)-(i) indicate the pressurized actuators in SOFA \cite{allard2007sofa}. (c) The original morphology. (d) Pressurizing the 2nd section of the index and middle fingers. (e) Pressurizing the two sections together. (f) Deflection of two fingers to enlarge the grasping region. (g) Deflection of two fingers to decrease the gap between them. (h)-(i) Deflection of two fingers to achieve wiggling to the left and to the right. } \vspace{-.5cm} \label{fig:designoffinger} \end{figure*} \section{Design and Simulation} \label{sec:design} \subsection{Soft hand} \label{sec:structure} The design of soft hands/grippers can be divided into two main morphological types: anthropomorphic hands and grippers with several spatially evenly distributed fingers. For our hand, we choose an anthropomorphic shape with a new dexterous soft robotic palm. Figure \ref{fig:overallStructure}(c) presents the prototype of the soft humanoid hand, which weighs only 300 g and is about 1.2 times the size of a typical human hand. As shown in Figure \ref{fig:overallStructure}(a), the soft humanoid hand has three functional components: the fingers, palm part A and palm part B. All these components are actuated pneumatically. The fingers are designed to grasp, grip and manipulate targeted objects through bending in the grasping axis and deflecting in the side-to-side axis. The five fingers of the hand comprise the thumb and four planar fingers (index, middle, ring and little fingers). Palm part A is used to splay the four planar fingers, which extends the distance between fingers and enlarges the grasping scope. Palm part B achieves two functions: palm bending, and thumb abduction and adduction. Simulation was conducted to guide the design and to ensure that the proposed design works as intended in real time. We used the \ac{fem} to simulate and analyze the whole soft hand, following the analysis in \cite{Duriez138}. The real-time simulation was implemented in the SOFA framework \cite{allard2007sofa} with the SoftRobots plugin \cite{Coevoet362}. The mesh of the soft hand consists of 40316 tetrahedra and 11894 nodes. \subsection{Soft finger} \label{subsec:finger} \subsubsection{Function} In this work, we aim to develop a humanoid pneumatic soft hand with dexterous soft fingers. Among the five fingers, the thumb, index and middle fingers are the main ones for grasping manipulation, while the ring and little fingers usually play an assisting role. 
Index and middle fingers have two functional goals: bending in the primary grasping axis (toward the soft palm) and deflecting from side to side (perpendicular to the primary grasping direction). The function of the ring and little fingers in this work is only the bending motion. Bending is the basic and main function for grasping; the bending angles are set to about 180\degree, like those of human fingers. The side-to-side deflection motion is also useful and necessary to perform splaying and gripping between two adjacent planar fingers, and rotary movement of an object during three-finger grasping. \subsubsection{Design} The soft fingers of the hand, which are able to both bend and deflect, are \acp{hbsf} designed with a modular approach. \acp{hbsf} are used for the thumb, index and middle fingers. The three fingers share the same structure: the index and middle fingers have a length $L$ of 90 mm, while the thumb is 20 mm shorter. The structure of the ring and little fingers is a simplified version of the \ac{hbsf}, which can only bend in the main direction: the two side-by-side chambers in each segment of the \ac{hbsf} are merged into a single chamber. In this work, the \ac{hbsf} combines the advantages of the \ac{pneunets} bending actuators \cite{Mosadegh24} and the fibre-reinforced actuators \cite{6766586}. For the configuration, we refer to the humanoid morphology of the soft fingers of Deimel et al. \cite{Raphael35}. As shown in Figure \ref{fig:designoffinger}(a), the \ac{hbsf} consists of a silicone main body, a strain-limiting layer of inextensible but flexible material (silkscreen) at its bottom, and the fibre-reinforcement thread. The strain-limiting layer determines the bending motion of the finger, and the fibre reinforcement protects the silicone chambers from excessive expansion. The tubes are used to pressurize the air cavities inside the fingers. A flex sensor (Spectra Symbol FS-L-0095-103-ST) is glued to the bottom flat face. The structure of the \ac{hbsf} is like a bellows. As seen from the sectional drawing in the bottom plane in Figure \ref{fig:designoffinger}(b), the version used for the index and middle fingers has 11 short bellows (called segments in this paper) with a wall thickness $t=2$ mm. The two sections of the finger have the same length, i.e., $l_1=l_2$. The small gap between segments, $a$, is equal to 1 mm, which is limited by the thin-wall strength of the 3D-printed molds. The distance between adjacent segments, $b$, is the key parameter affecting the bending and deflection performance of the \ac{hbsf}. An \ac{hbsf} is divided into four air cavities. \subsubsection{Simulation} Figure \ref{fig:designoffinger}(c)-(i) present the simulation results of the soft fingers. Because the SOFA framework is aimed at real-time simulation, the fibre-reinforcement structure is too thin to be meshed into \ac{fem} elements, so it is omitted in the simulation. As a result, the input air pressure in the SOFA simulation cannot match the real-world capability of the \ac{hbsf}, which limits the bending angle of the fingers in the simulation results. The fingers' deflection enables several useful collaborative motions between the index and middle fingers. 
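For reference, this kind of pressurization setup can be scripted in a SOFA Python scene. The following is only a minimal sketch of a single soft body with one pressurized cavity, assuming the SoftRobots plugin's \texttt{SurfacePressureConstraint} component; the mesh file names (\texttt{finger.vtk}, \texttt{chamber.stl}) and the material constants are illustrative placeholders, not the exact values of our model:
\begin{verbatim}
# Minimal SOFA Python scene sketch: one soft body with one
# pressurized cavity. File names and constants are placeholders.
def createScene(root):
    root.addObject('RequiredPlugin', name='SoftRobots')
    root.addObject('FreeMotionAnimationLoop')
    root.addObject('GenericConstraintSolver',
                   maxIterations=500, tolerance=1e-5)

    body = root.addChild('body')
    body.addObject('EulerImplicitSolver')
    body.addObject('SparseLDLSolver')
    body.addObject('MeshVTKLoader', name='loader',
                   filename='finger.vtk')      # volumetric mesh
    body.addObject('TetrahedronSetTopologyContainer', src='@loader')
    body.addObject('MechanicalObject', name='dofs')
    body.addObject('UniformMass', totalMass=0.03)   # placeholder mass
    body.addObject('TetrahedronFEMForceField',
                   youngModulus=180e3, poissonRatio=0.45)
    body.addObject('LinearSolverConstraintCorrection')

    cavity = body.addChild('cavity')
    cavity.addObject('MeshSTLLoader', name='cav',
                     filename='chamber.stl')   # cavity surface mesh
    cavity.addObject('MeshTopology', src='@cav')
    cavity.addObject('MechanicalObject')
    # Pressure actuation of the chamber (units depend on mesh scale).
    cavity.addObject('SurfacePressureConstraint',
                     value=30, valueType='pressure')
    cavity.addObject('BarycentricMapping')
\end{verbatim}
In an actual scene, one such cavity node would be created per chamber, and the pressure \texttt{value} would be varied over time to reproduce the actuation sequences of Figure \ref{fig:designoffinger}(c)-(i).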
Although the deflection motions in Figure \ref{fig:designoffinger}(f) and (g) cannot work as a gripper, due to the low stiffness of the soft fingers in the side-to-side axis, they play a key role in achieving the Feix \cite{Feix27} grasp postures 14, 17, 20, 21 and 23 in Figure \ref{fig:experimets photos}. In addition, the deflection of the thumb acts similarly to the motions shown in Figure \ref{fig:designoffinger}(h) and (i): the thumb deflects toward the palm for grasping small objects, and away from the palm for grasping bigger objects. \subsection{Soft palm} \label{subsec:palm} \subsubsection{Function} In human grasping, the palm always works collaboratively with the fingers to implement all kinds of manipulation. In this paper, the functions of the soft palm are inspired by human palm motion and include three main functions: thumb abduction, splaying of the four planar fingers and palm bending. A key feature of the human hand is thumb abduction and adduction. This feature allows the thumb to rotate into a proper position and orientation with respect to the manipulated object, which is essential for grasping. The splaying of the four fingers enlarges the capability of grasping objects of different sizes. By increasing the angle between adjacent fingers, the contact forces from the four planar fingers are evenly distributed on the object, and it also becomes possible to grip larger objects using two adjacent fingers. The bending of the whole plane of the palm facilitates the grasping of smaller objects: without the help of palm bending, the thumb of the soft hand cannot touch the other fingers even when all the fingers are pressurized at the highest pressure and bent by 180\degree. \subsubsection{Design} \label{subsubsec:Design of soft palm} \begin{figure}[tbp] \centering \includegraphics[width=.9\linewidth]{figure/softPalm.pdf} \caption{The structure and simulation of the soft palm. (b) and (d) show a comparison of the palm actuators in the inflated and deflated states. The red chambers in (b) and (d) indicate the pressurized actuators in SOFA.} \vspace{-.5cm} \label{fig:design of soft palm} \end{figure} The modular design method is applied in the design of the soft palm. The three functional goals are divided and embedded into two parts: palm part A and palm part B, as shown in Figure \ref{fig:design of soft palm}. The structural design of both parts is inspired by the mechanism of \ac{pneunets} actuators, consisting of network-like chambers inside the silicone body and a strain-limiting layer on the bottom. \begin{figure*}[htbp] \centering \includegraphics[width=1\linewidth]{figure/fig02.pdf} \caption{The fabrication process of the soft hand. \emph{Finger}: (a1)-(a4) show the fabrication of the main body of the \acp{hbsf}: it is made by first pouring silicone material into the bottom mold and then pressing the top mold onto it. In Step 2, the red part in (b3) is for clamping and positioning the main body. (c1)-(c4) fix the fibre and the flex sensor to prevent relative displacement when the finger bends. \emph{Palm part A}: (d1) The molds. (d2) Pouring the silicone material. (d3)-(d4) Unmolding. In Step 2, the 3D-printed support is used to hold the part and ensure the thickness of the bottom surface. \emph{Palm part B}: The process of making the main and bottom bodies is similar to that of the other parts. The 2 red sticks in (g1)-(g5) reserve the space for the air tubes and electric wires. The yellow support in (g2) plays a role similar to that of the red part in (b3). 
\emph{Hand assembly}: (j1) The assembly of the palm and the 4 planar fingers. (j4) The final result. } \vspace{-.5cm} \label{fig:fabricaiton} \end{figure*} Palm part A achieves the splaying of the four planar fingers, while part B implements the abduction of the thumb and the overall bending of the palm. Instead of bending like a \ac{pneunets} actuator, palm part A splays the four planar fingers with a small hump after being pressurized. As shown in Figure \ref{fig:design of soft palm}(a), the main body has four air chambers to be actuated. The four large grooves in front are used to fix the four fingers. The bottom cavity is designed to arrange the pneumatic tubes and sensor wires inside the hand. As for part B, the bending motion of the palm is achieved by enlarging the horizontal width of the \ac{pneunets} actuator. The air chambers along the palm were designed as wide as possible, to generate enough force to bend the palm together with the four planar fingers. The tubing tunnel connecting with the bottom cavity in part A is used to run the tubes and wires through the palm. The soft palm uses the same key geometric parameters as the soft fingers, such as the wall thickness $t$ and the small gap between segments $a$. The stiffness of the palm, which must provide a reliable support for the fingers, was considered and simulated in the SOFA framework. In addition, the humanoid appearance was designed last, after ensuring that the soft actuators work as intended and that the appearance does not affect the performance of the pneumatic actuators. \subsubsection{Simulation} Figure \ref{fig:design of soft palm}(b) and (d) present the simulation results of the two parts of the soft palm. The three functions of the palm can be clearly observed by comparing each actuator before and after pressurization. \section{Fabrication} \subsection{Actuator body molding} The soft body was fabricated using the silicone rubber Dragon Skin 10 Medium (Smooth-On), with a 10 Shore-A hardness. Parts A and B of the silicone were mixed in a 1:1 ratio in a plastic cup and then placed in a vacuum chamber. Under about 0.9 bar of vacuum produced by a vacuum pump, the air trapped in the mixed silicone expands into bubbles and finally collapses. Afterward, the silicone material was poured into the molds as described in Steps 1 and 2 of the finger and palm parts in Figure \ref{fig:fabricaiton}. The molds are generally left in an upright position to cure for five hours at room temperature. \subsection{Strain-limiting layer sealing} The strain-limiting layer is used to ensure that the actuators bend in the desired direction during pneumatic pressurization \cite{Mosadegh24}. Its material is silkscreen fabric. It is placed in the bottom mold of Step 2 in Figure \ref{fig:fabricaiton} and glued together with the bottom sealing portions of all three components. Subsequently, the fabrication of the main body of the actuators is finished and the chambers inside the silicone body are all sealed. Note that Step 3 of palm part B does not need molds: as shown in Figure \ref{fig:fabricaiton}(h1)-(h2), the silkscreen fabric is added manually in the reserved gap. Then, after pouring silicone into the gap and curing, the silkscreen fabric is combined with the main body. \subsection{Assembly} Figure \ref{fig:fabricaiton}(j1)-(j4) shows the main procedure of the hand assembly. There are five soft fingers and two soft palm parts in this hand. 
We used the same silicone material to bond the different components together, by manually pouring liquid silicone on their connecting surfaces. As shown in Figure \ref{fig:fabricaiton}(j1), the ends of the four planar fingers are fixed in the large grooves mentioned in Section \ref{subsubsec:Design of soft palm}. The tubes of each finger are arranged into the bottom cavity inside palm part A. Figure \ref{fig:fabricaiton}(j2) presents the assembly result of palm part B and a soft-rigid hybrid support. The support consists of a silicone skin fitting the shape of palm part B and a rigid 3D-printed skeleton inside. All the pneumatic tubes go through the support and run out of the hand together. The yellow 3D-printed skeleton of the support is used as the base to fix the hand to the robot arm. As a result, the three parts in Figure \ref{fig:fabricaiton}(j1)-(j3) are assembled into a soft humanoid hand, as shown in Figure \ref{fig:fabricaiton}(j4). \section{Control} \label{sec:control} To control the proposed soft hand under different operating conditions, a controller platform was constructed. As the designed soft actuators are pneumatic, both the pressure and the duration of the input pneumatic supply need to be controlled. The controller platform is implemented based on the design proposed in the soft robotics toolkit\footnote{Fluidic Control Board, \url{https://softroboticstoolkit.com/book/control-board}}. The architecture of our controller board is shown in Figure \ref{fig:controlboard}. \begin{figure}[!htbp] \centering \includegraphics[width=.8\linewidth]{figure/controller.png} \caption{The architecture of the implemented controller board. For clarity, we only show the connection scheme of Arduino 1. The connection scheme of Arduino 2 is identical to that of Arduino 1.} \vspace{-.5cm} \label{fig:controlboard} \end{figure} The controller board consists of a pneumatic regulator (which regulates the pressurized air supplied to the system) and a set of solenoid valves\footnote{\raggedright SMC-VQ110U-5M Solenoid valve, \url{https://www.smcpneumatics.com/VQ110U-5M.html}} (which open and close to direct the flow of air into the system). The valves are powered and driven by power FET switches. As we want the hand to have as many degrees of freedom as possible, we use 20 solenoid valves in total. Thus, each chamber of each finger can be pressurized independently, which in turn allows the hand to assume more postures. Two Arduino Mega 2560 REV3 controllers are used to enable users to interface with the hardware via a serial port connection. The board can be controlled manually (by adjusting switches and potentiometers) or automatically via software. The system pressure is regulated with \ac{pwm}, which controls the opening and closing times of the valves at a rate of 60 Hz through the Arduino boards. \ac{pwm} is a technique for obtaining analog results by digital means. One of the most important terms in \ac{pwm} is the duty cycle: the proportion of 'on' time within one regular interval, or 'period', of the signal. The duty cycle is expressed in percent, with 100\% being fully on and 0\% being fully off. By modulating the duty cycle, analog values can be achieved. For example, the valve is fully closed at a 0\% duty cycle, fully open at a 100\% duty cycle, and open for half of each period at a 50\% duty cycle. Thus, the fixed regulated input pressure is set to the desired value based on the duty cycle of the \ac{pwm} signal. 
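As an illustration of this duty-cycle logic, the sketch below implements one \ac{pwm} period of a single valve channel in Python. It is a simplified host-side example rather than our firmware: the callback \texttt{set\_valve} is a hypothetical stand-in for whatever command actually drives the corresponding FET switch (e.g., through the serial link to an Arduino).
\begin{verbatim}
import time

PWM_RATE_HZ = 60.0             # valve switching rate used above
PERIOD_S = 1.0 / PWM_RATE_HZ   # ~16.7 ms per PWM period

def pwm_period(channel, duty, set_valve):
    """Run one PWM period: valve open for `duty` of the period.

    duty: 0.0 = fully closed, 1.0 = fully open.
    set_valve(channel, state): hypothetical driver callback.
    """
    duty = min(max(duty, 0.0), 1.0)
    t_on = PERIOD_S * duty
    if t_on > 0.0:
        set_valve(channel, True)    # open the valve
        time.sleep(t_on)
    if t_on < PERIOD_S:
        set_valve(channel, False)   # close for the rest of the period
        time.sleep(PERIOD_S - t_on)

# Example: hold a ~50% duty cycle on channel 3 for 2 seconds.
# for _ in range(int(2 * PWM_RATE_HZ)):
#     pwm_period(3, 0.5, set_valve)
\end{verbatim}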
With this technique, the finger can easily be controlled to a certain bending angle. \section{Experiments} \label{sec:Experiments} \subsection{Analysis of the HBSF} The \acp{hbsf} in this paper exhibit a novel motion: they deflect in the side-to-side axis, while the bending motion in the grasping axis remains the main function of the soft fingers. After pressurization, the adjacent inflated segments generate a mutual force, which produces the deflection motion. In addition, the number of segments depends on the distance $b$, as shown in Figure \ref{fig:designoffinger}(b): for a given finger length, a larger $b$ leads to a smaller number of segments. When the finger has only 1 segment, it reverts to the original paradigm of the PneuFlex actuator with two chambers. In order to analyze the influence of the number of segments on the deflection, six soft fingers with 1, 3, 5, 7, 9 and 11 segments were fabricated and tested, as shown in Figure \ref{fig:experiment of finger}. The bending angle increases nonlinearly with the applied air pressure, with a relatively slow increase in the low-pressure region and a rapid increase once the pressure exceeds 20 kPa. It is evident that the force and the bending angle are closely and positively correlated. However, it is also noted that, under the same air pressure, the fingers with more segments generate a larger bending angle and force when both left and right chambers are pressurized, but generate a smaller angle and force when a single chamber is pressurized. This phenomenon reflects the different mechanisms of the bending and deflection motions: more segments mean more grooves that mutually swell during inflation, working as \ac{pneunets} actuators, and fewer small sections that extend during inflation, working as fibre-reinforced actuators. As shown in Figure \ref{fig:experiment of finger}(h), the generated grasping force of the S1 finger becomes larger than that of the multi-segment fingers when the applied air pressure goes from 20 to 50 kPa. Figure \ref{fig:experiment of finger}(i) shows the deflection displacement in the side-to-side axis. The S11 finger has a significantly better deflection performance, reaching up to 20 mm. Combining the experimental results on bending angle and force, the S11 finger ranks second in ease of actuation under double-chamber actuation and presents the minimum bending angle under single-chamber actuation. Both characteristics are conducive to enhancing the deflection region of the \acp{hbsf}. It is also notable that the finger first wiggles outwards in the initial stage of actuation at lower air pressures, and turns back when the bending angle of the \ac{hbsf} becomes large at higher air pressures, causing the deflection motion to lose efficacy and even producing the opposite displacement. In conclusion, we selected the \ac{hbsf} with 11 segments for our new hand. \begin{figure}[!tbp] \centering \includegraphics[width=1\linewidth]{figure/fingers.pdf} \caption{The analysis of the effect of the number of segments on the bending and deflection motions of the six fingers. (a) Comparison of the deflection extent, from front to back: S11, S9, $\cdots$, S1. (b) Comparison of the bending angle, from front to back: S1, S3, $\cdots$, S11. (c) The sectional drawing of the six fingers. (f) The experimental setup for measuring the force of the fingers while the top layer of the fingers was constrained. 
Variation of the (d) bending angle and (g) generated force of the fingers while applying air pressure equally to both left and right chambers. Variation of the (e) bending angle and (h) generated force of the fingers, and (i) displacement of the fingertip, while applying air pressure to a single chamber.} \vspace{-.5cm} \label{fig:experiment of finger} \end{figure} \subsection{Analysis of the palm} Figure \ref{fig:experiment of palm} presents the experimental results of the soft palm. Unlike a conventional fixed palm, almost every part of our soft palm is deformable, which greatly increases the diversity of the hand postures. Figure \ref{fig:experiment of palm}(e) shows the relationship between the deformation performance and the air pressure of palm part A. It indicates that the splaying angle and force increase slowly as the applied air pressure increases from 10 to 60 kPa, and rapidly afterward. Figure \ref{fig:experiment of palm}(a) shows the posture of the hand under 90 kPa air pressure, with a 50\degree splaying angle. The air pressure should not exceed 100 kPa, in order to avoid collapse. The influence of gravity on palm bending can be observed in Figure \ref{fig:experiment of palm}(b) and (c): the bending angles with the palm facing down are clearly larger than those with the palm facing up. The tested maximum angle of the palm bending motion is up to 68\degree, and that of the thumb abduction is about 90\degree. These two bending ranges can accommodate most common grasps. Without fibre reinforcement, the two actuators of palm part B should be driven within a safe air pressure range of less than 40 kPa. However, when an object is grasped and in contact with the hand, the pressure can be manually increased to enhance the reliability of the grasp. \begin{figure}[!tbp] \centering \includegraphics[width=\linewidth]{figure/palm.pdf} \caption{The validation of the palm performance in terms of bending angle and output force. (a,e) Palm splaying; (b,c,f) palm bending with the palm down and the palm up; (d,g) thumb abduction.} \vspace{-.5cm} \label{fig:experiment of palm} \end{figure} \subsection{Grasping in real-world} \begin{figure}[!hbp] \centering \includegraphics[width=\linewidth]{figure/fig-SpecialGrasping.pdf} \caption{(a)-(e) The capability of our soft hand prototype to lift a watering can (143 g) using 2 adjacent \acp{hbsf}. (f) Grasping and lifting a chair (541 g). (g,h) The safe compliance in human-robot interaction.} \vspace{-.5cm} \label{fig:experiment of special application} \end{figure} \begin{figure*}[htbp] \centering \includegraphics[width=.9\linewidth]{figure/taxonomy2.pdf} \caption{Enacted grasps of the Feix taxonomy. Grasps are numbered according to the Feix taxonomy \cite{Feix27}. Please check the attached video for the 32 grasps, \url{https://irobotics.aalto.fi/wp-content/uploads/2020/09/NewSoftHandvFast576p30.mp4} } \label{fig:experimets photos} \end{figure*} The deflection of the \acp{hbsf} allows two adjacent fingers to work collaboratively as a two-finger gripper, grasping and lifting objects that allow the insertion of fingers into a neck portion, as shown in Figure \ref{fig:experiment of special application}(a)-(e). From Figure \ref{fig:experiment of special application}(b) to (c), the index and middle \acp{hbsf} perform the deflection motion illustrated in Figure \ref{fig:designoffinger}(f), and the gap between the index and middle fingers enlarges visibly. 
The bending motion can be used to provide the supporting force against gravity in the case of the watering can. Besides, our hand is able to firmly grasp and lift the heavy, large chair in Figure \ref{fig:experiment of special application}(f). Figure \ref{fig:experiment of special application}(g) and (h) show the safety and compliance of the soft hand in human-robot interaction scenarios. \subsection{Grasp dexterity in Feix taxonomy} To test the grasping performance of our hand, we implemented grasping experiments according to the Feix taxonomy \cite{Feix27}, which includes 33 comprehensive grasp types. For every case, the pressures are adjusted to reach the desired posture, and the actuation sequences generally follow the principle of actuating the palm first and then driving the fingers. The actuation of the soft palm includes the splaying of the four planar fingers, palm bending and thumb abduction. It helps to first fit the shape and size of the target object and to move the fingers into the proper position and orientation. Then the fingers are pressurized to implement the grasping task. The grasp quality is judged by moving the hand up and down and rotating it with a Franka Emika Panda robotic arm at a speed of 40 mm/s. Furthermore, this procedure is repeated several times to evaluate the quality of the grasp. Figure \ref{fig:experimets photos} shows snapshots of the 33 grasping posture types in the Feix taxonomy \cite{Feix27}. Our hand failed to perform one grasping posture out of the 33: the ventral posture, numbered 32, failed because the object (the marker pen used in posture 32) is thin and long, so the hand could not grasp it firmly. As the bending profile of the soft fingers has a circular shape, the inner diameter of the bent fingers is larger than the diameter of the marker pen, even at the maximum bending angle. \section{Conclusion} In this paper, we have developed a new soft humanoid hand capable of robustly grasping different kinds of objects. The hand exhibits the advantages of large grasping force, low cost, light weight and potential applications to special cases. The pneumatic actuation enables a quick grasping response with high compliance, without damaging the objects. Meanwhile, we proposed a new soft finger design, the \ac{hbsf}, and a novel soft palm with 2 parts. The functions of each soft actuator were simulated based on the \ac{fem} in SOFA. The experimental results on the 6 fingers with different numbers of segments show that the soft finger with 11 segments is the best choice for achieving both the bending and deflection motions. The main advantage of this study is that the postures of the hand can be adjusted with the help of the soft palm, so that different configurations can be used to realise stable and dexterous grasping. The hand allowed us to achieve 32 out of the 33 grasp postures in the Feix taxonomy, which demonstrates the postural dexterity of our soft hand. A design limitation is that the thumb abduction angle and the finger splaying angle are limited by the silicone tensile strength and the wall thickness of the air chamber structure. In addition, the flex sensors embedded in the soft fingers are not yet fully exploited by the controller. Future work will address the optimisation of the soft palm, so that the thumb can touch the little finger, and the improvement of the control in terms of safety and sensing. \ifCLASSOPTIONcaptionsoff \newpage \fi \bibliographystyle{IEEEtran}
\section{Introduction} In gravitation two features are commonly associated: the absence of propagating degrees of freedom and the triviality of the curvature or, at most, constant curvature in the presence of a cosmological constant. Of course, this is the case of 2+1 general relativity. Nontrivial configurations are more likely associated with global effects, like the case of the Ba\~nados-Teitelboim-Zanelli black hole with negative cosmological constant \cite{Banados:1992wn}. In this paper we explore the relationship between the absence of local degrees of freedom and the triviality of the curvature in the framework of the Ho\v{r}ava gravity \cite{Horava:2009uw,Horava:2008ih}. This theory is based on a foliation of spacelike surfaces along a given direction of time, and the symmetry is given by the transformations that preserve the foliation. A spacetime metric is not mandatory as a fundamental object. Nevertheless, the gravitational fields are taken from the Arnowitt-Deser-Misner (ADM) decomposition of general relativity. The underlying gauge symmetry leads to a quantum theory with improved behavior in the ultraviolet, since terms of higher order in spatial derivatives can be incorporated in the action. Unitarity can be preserved since no terms of higher order in time derivatives are necessary to define the theory. The most studied formulation of the Ho\v{r}ava theory, both in its projectable and nonprojectable versions (the latter extended in Ref.~\cite{Blas:2009qj}), propagates one physical degree of freedom in addition to the ones of general relativity. Although this can be generically associated with the fact that the gauge symmetry group of the Ho\v{r}ava theory is smaller than the one of general relativity, actually this is not an unavoidable feature. There is a critical point in the space of coupling constants where the extra physical mode disappears (with an enhancement of the gauge symmetry the extra mode also disappears \cite{Horava:2010zj,Zhu:2011yu}). The kinetic term of the Lagrangian has the general form $\sqrt{g} N ( K_{ij} K^{ij} - \lambda ( g^{ij} K_{ij} )^2 ) $, where $\lambda$ is an arbitrary coupling constant. The extra physical mode is eliminated when $\lambda$ takes the critical value $\lambda = 1/d$, where $d$ is the spatial dimension. To distinguish this special case, we refer to the theory formulated under the $\lambda =1/d$ condition as the critical Ho\v{r}ava theory. The canonical formulation characterizing the degrees of freedom in the critical $d=3$ case can be found in Ref.~\cite{Bellorin:2013zbp}. Two additional second-class constraints arise in the critical case, which are not associated with gauge symmetries. These constraints eliminate the extra mode. Thus, the elimination of the extra mode in the critical case is a consequence of the dynamics, rather than the symmetry. In a foliation of $d=2$ spatial dimensions, one expects that the critical Ho\v{r}ava theory does not propagate any mode at all, since the would-be single scalar mode should be eliminated by the additional second-class constraints. Therefore, the critical Ho\v{r}ava theory in 2 spatial dimensions behaves like $2+1$ general relativity in the sense that local degrees of freedom are absent. Since this is a central feature of the critical theory, we study it rigorously by means of a canonical analysis of the large-distance effective action, which is of second order in time and spatial derivatives. 
The Hamiltonian formulation is also useful to discuss the gravitational energy and its relation to the asymptotically flat configurations. The absence of degrees of freedom raises a question about the solutions: whether or not the critical $2+1$ Ho\v{r}ava theory admits nonflat solutions. The qualitative comparison with general relativity is particularly relevant for the large-distance effective action, since both are of the same order. In principle, there is no reason to think that the vacuum field equations (without cosmological constant) lead to zero curvature, since the field equations of the effective Ho\v{r}ava theory are different to the ones of general relativity, even for the critical Ho\v{r}ava theory. The ADM fields enter the field equations in combinations that do not belong to the spacetime curvature. This can be cast in the equivalent formulation of the Einstein-aether theory \cite{Jacobson:2000xp}. If the aether field is restricted to be hypersurface orthogonal, the resulting theory is physically equivalent to the large-distance effective theory of the nonprojectable Ho\v{r}ava theory \cite{Blas:2009ck,Jacobson:2010mx,Jacobson:2013xta}. Since the gravitational field equations of the Einstein-aether theory incorporate the energy-momentum tensor of the aether field, it is clear that, whenever solutions with nonzero aether energy-momentum tensor exist, these solutions have nontrivial spacetime curvature. We remark that, in the framework of the Ho\v{r}ava theory, the aether energy-momentum tensor should not be interpreted as an external source, since it is the way of representing the intrinsically gravitational terms of the Ho\v{r}ava theory that are different to the ones of general relativity. Asymptotically flat solutions are of special importance. They represent the gravitational field of isolated sources. In $2+1$ general relativity there is the definition of asymptotic flatness given in Ref.~\cite{Ashtekar:1993ds}, which is motivated by the rest-particle solution of Ref.~\cite{Deser:1983tn}. In contrast to the $3+1$ case, in $2+1$ dimensions the dominant mode in the asymptotic expansion is not fixed to a unique metric at infinity. Instead, the asymptotically flat solutions approach the asymptotic region of a cone of variable conical angle, which can be a deficit or zero angle. The case of an excess angle is discarded by the condition of positive energy. The same definition of asymptotic flatness can be adopted for the $2+1$ Ho\v{r}ava theory \cite{Bellorin:2019zho}. Therefore, to undertake a general analysis of asymptotically flat configurations, one should take the general definition with the variability of the dominant mode. Moreover, in the case of the $2+1$ Ho\v{r}ava theory, there are more possibilities for positive energy among the asymptotically flat configurations. A property of the critical theory is that the coupling constant of the spatial Ricci scalar in the Lagrangian, denoted by $\beta$, has no defined sign. Due to this, the set of possible asymptotically flat solutions with positive energy is enhanced, allowing also the metrics that approach a cone with an excess angle to have positive energy. We analyse both the degrees of freedom and the asymptotically flat solutions of the $2+1$ nonprojectable Ho\v{r}ava theory defined at the critical point $\lambda = 1/2$. We develop a detailed canonical analysis, showing the self-consistency of the theory and the fact that it has no propagating degrees of freedom. 
In the analysis of the solutions, we emphasize the presence or absence of nonflat solutions. We find that a class of the asymptotically flat configurations is necessarily globally flat. This is the class of solutions that has nonnegative energy when the range $\beta > 0$ is considered in the space of coupling constants. Another related feature is the absence of a Newtonian force, as happens in $2+1$ general relativity. Despite these results, we also find that there are more possibilities for nonflat solutions: by relaxing the boundary conditions we find a nonflat solution that is not asymptotically flat. This solution is static and with rotational symmetry. It is evidence that the critical $2+1$ Ho\v{r}ava theory is a gravitational theory without local degrees of freedom that nevertheless possesses nonflat solutions at the level of the second-order action. The noncritical $2+1$ Ho\v{r}ava theory has been previously studied in many aspects. This theory propagates a scalar mode, hence the dynamics is different to the case we study here. A dynamical analysis for the noncritical case was done in Ref.~\cite{Sotiriou:2011dr}, where the physical propagating mode was characterized. Several solutions have been studied in the three-dimensional noncritical Ho\v{r}ava theory. Some of these works can be found in Refs.~\cite{Park:2012ev,Shu:2014eza,Sotiriou:2014gna,Basu:2016vyz}, where black holes and other solutions with diverse asymptotics have been studied. Among them, different models of Lagrangians have been adopted, including a cosmological constant term, which allows for more kinds of asymptotic geometries, i.~e.~de Sitter, anti-de Sitter and Lifshitz asymptotics. Quantum aspects of the $2+1$ Ho\v{r}ava theory have been developed, for example, in Refs.~\cite{Benedetti:2013pya,Barvinsky:2015kil,Griffin:2017wvh,Barvinsky:2017kob,Bellorin:2019gsc}. The three-dimensional Ho\v{r}ava gravity has been related to the gauging of some nonrelativistic algebras in Ref.~\cite{Hartong:2016yrf}. This paper is organized as follows: in section 2 we perform the canonical analysis, starting with the nonperturbative general analysis and then ending with a perturbative analysis under which the constraints can be solved explicitly. In section 3 we first study the asymptotically flat solutions, and then we find an explicit nonflat solution. We present some conclusions and in the appendix we discuss the Newtonian potential. \section{The absence of local degrees of freedom} \subsection{Nonperturbative canonical formulation} We perform a detailed canonical analysis of the large-distance effective action of the critical Ho\v{r}ava theory in two spatial dimensions, with the aim of showing that this theory has no local physical degrees of freedom. A foliation of spacelike surfaces along a given direction of time is assumed. We may use local coordinates $(t,\vec{x})$ on the foliation. The theory is defined in terms of the ADM variables $N(t,\vec{x})$, $N_i(t,\vec{x})$ and $g_{ij}(t,\vec{x})$. We consider the nonprojectable case, where the lapse function $N$ is allowed to depend on the time and the spatial point. We consider the effective action for large distances, which has a potential of $z=1$ order, according to the criterion of anisotropy introduced in Ref.~\cite{Horava:2009uw}. The action of the purely gravitational theory (without sources) is \begin{equation} S= \int dt d^2x \sqrt{g} N \left(G^{ijkl}K_{ij}K_{kl} + \beta R + \alpha a_i a^i \right) \,. 
\label{action} \end{equation} In this section we use the standard notation of Riemannian geometry to denote the objects associated with the two-dimensional spatial metric $g_{ij}$. Thus, $K_{ij}$ is the extrinsic curvature of the leaves, \begin{equation} \label{k} K_{ij}=\frac{1}{2N}(\dot{g}_{ij}-2\nabla_{(i}N_{j)}) \,, \end{equation} $K$ is its trace, $K = g^{ij} K_{ij}$, and the dot stands for the derivative with respect to time. The hypermatrix $G^{ijkl}$ is defined by \begin{equation} \label{G} G^{ijkl}=\frac{1}{2}\left(g^{ik}g^{jl}+g^{il}g^{jk}\right)-\lambda g^{ij}g^{kl} \,. \end{equation} It contains the arbitrary constant $\lambda$, which is the coupling constant of the kinetic term \cite{Horava:2009uw}. $R$ is the spatial Ricci scalar. $a_i$ is the FDiff-covariant vector $a_i = \partial_i \ln N$ \cite{Blas:2009qj}. $\lambda$, $\beta$ and $\alpha$ are the independent coupling constants of the $z=1$ model. From the identity in two spatial dimensions, \begin{equation} G^{ijkl}g_{kl}=(1-2\lambda)g^{ij} \,, \label{criticalcondition} \end{equation} it follows that for the value $\lambda=1/2$ the hypermatrix $G^{ijkl}$ is not invertible. This critical condition has profound consequences on the dynamics of the theory; this is our case of interest in this paper. The condition of asymptotic flatness in this theory is the same as in $2+1$ general relativity \cite{Ashtekar:1993ds}. If $x^1,x^2$ are Cartesian coordinates at spatial infinity and $r=\sqrt{x^k x^k}$, an asymptotically flat configuration behaves asymptotically as \begin{eqnarray} && g_{ij} = r^{-\mu} \left( \delta_{ij} + \mathcal{O}(r^{-1}) \right) \,, \label{asympgij} \\ && N = 1 + \mathcal{O}(r^{-1}) \,, \label{asympn} \\ && N^i = \mathcal{O}(r^{-1}) \,, \quad N_i = r^{-\mu} \mathcal{O}(r^{-1}) \,, \label{asympni} \end{eqnarray} where $\mu$ is an arbitrary constant that takes different values among the asymptotically flat configurations. The reason for using in the $2+1$ Ho\v{r}ava theory the same condition of asymptotic flatness as in $2+1$ general relativity is that the gravitational field of a point particle at rest is the same, and this solution is taken as the asymptotic reference to define the asymptotically flat condition. Since this is connected with the issue of the Newtonian potential, in the appendix we give more details on how this solution arises in the Ho\v{r}ava theory. With the coordinate system used in (\ref{asympgij}), there are three cases for the asymptotic cone, depending on the sign of $\mu$: for $\mu > 0$ a cone with a deficit angle is approached, for $\mu = 0$ the metric approaches the complete Euclidean metric without conical angle, and for $\mu < 0$ a cone with an excess angle is approached. There is an upper bound on $\mu$, $\mu < 2$ \cite{Ashtekar:1993ds}, which is needed for dynamical consistency. Below we comment that this bound is also needed in the critical Ho\v{r}ava theory. We perform the Legendre transformation for the critical case $\lambda=1/2$ to cast the theory in its canonical formulation. The phase space is spanned by the conjugated pairs $(g_{ij},\pi^{ij})$ and $(N,P_{N})$. The action does not depend on the time derivative of the lapse function $N$, hence we obtain the primary constraint $P_{N}=0$. The momentum conjugate to the spatial metric obeys the relation \begin{equation} \frac{\pi^{ij}}{\sqrt{g}}=G^{ijkl} K_{kl} \,. \label{pij} \end{equation} Due to (\ref{criticalcondition}), the trace of this relation yields another primary constraint, namely, \begin{equation} \pi \equiv g_{ij} \pi^{ij} = 0 \,. 
\label{pi} \end{equation} After the Legendre transformation, one integration by parts and the addition of the two primary constraints with the Lagrange multipliers $\sigma_1, \sigma_2$, we obtain the Hamiltonian \begin{eqnarray} H= \int d^2x \left[ \sqrt{g} N \left( \frac{\pi^{ij}\pi_{ij}}{g} - \beta R - \alpha a_i a^i \right) + N_{i}\mathcal{H}^{i} + \sigma_1 P_{N} + \sigma_2 \pi \right] + 2\pi\beta \mu \,. \label{H} \end{eqnarray} The last constant term is added in order to ensure the functional differentiability of the Hamiltonian under the asymptotically flat conditions (\ref{asympgij}) -- (\ref{asympni}) \cite{Ashtekar:1993ds} (the terms that depend on $a_i$ do not affect the differentiability of the Hamiltonian, as in the noncritical $2+1$ Ho\v{r}ava theory \cite{Bellorin:2019zho}). The definition of the momentum constraint $\mathcal{H}^i$ is extended in order to get the generator of spatial diffeomorphisms on the full phase space \cite{Donnelly:2011df}, hence one defines the momentum constraint \begin{equation} \mathcal{H}^{i} \equiv -2\nabla_{k}\pi^{ik}+P_{N}\partial^{i}N = 0 \,. \label{momentum} \end{equation} Next, we apply Dirac's procedure to obtain the full set of constraints. We first impose the preservation in time of the primary constraint $P_{N}=0$. This yields the Hamiltonian constraint \begin{equation} \mathcal{H} \equiv \frac{\pi^{ij}\pi_{ij}}{\sqrt{g}} - \sqrt{g}\beta R + \alpha \sqrt{g} ( 2 \nabla_k a^k + a_k a^k) = 0 \,. \label{HamiltonianConst} \end{equation} The preservation of the $\pi = 0$ constraint yields a new constraint, which we denote by $\mathcal{C}$. It is given by \begin{equation} \mathcal{C} \equiv \frac{N}{\sqrt{g}}\pi^{ij}\pi_{ij}-\beta\sqrt{g}\nabla^{2}N = 0 \,. \label{C} \end{equation} Dirac's procedure ends with the step of imposing the preservation of the secondary constraints $\mathcal{H}$ and $\mathcal{C}$. These conditions generate elliptic differential equations for the Lagrange multipliers $\sigma_1$ and $\sigma_2$. The two resulting equations are equivalent to the system \begin{eqnarray} 0 &=& \beta \nabla^{2} \sigma_2 + 4\alpha a_{k}\nabla^{k} \sigma_2 - \frac{2\alpha}{N} \nabla_{k} ( a^{k} \sigma_1 ) + \frac{\pi^{ij}\pi_{ij}}{g} \left( \frac{ 2\alpha }{\beta} \frac{\sigma_1}{N} - \left( 1 + \frac{2\alpha}{\beta} \right) \sigma_2 \right) \nonumber \\ && + 2 \left( 1 - \frac{\alpha}{\beta} \right) \frac{ N^2 \pi_{ij} }{ \sqrt{g} } a^i a^j \,, \\ 0 &=& \beta \nabla^{2}\sigma_1 - \frac{ \pi^{ij}\pi_{ij} }{g} ( \sigma_1 - N \sigma_2 ) - 2 \beta N^2 \frac{\pi_{ij}}{\sqrt{g}} \left( 2 \nabla^i a^j + \left( 1 - \frac{\alpha}{\beta} \right) a^{i}a^{j} \right) \,. \end{eqnarray} In order to keep this system elliptic in $\sigma_1,\sigma_2$, we must discard the possibility of vanishing $\beta$, ensuring in this way the consistency of the dynamics of the theory. Therefore, the set of constraints closes consistently. In summary, the nonreduced phase space of the nonprojectable Ho\v{r}ava theory in 2 spatial dimensions, formulated at the critical point $\lambda = 1/2$, is spanned by the variables $(g_{ij},\pi^{ij})$ and $(N,P_N)$, which amount to eight functional degrees of freedom. The theory possesses the momentum constraint $\mathcal{H}^i=0$, which is a first-class constraint, and the second-class constraints $P_N = 0$, $\pi = 0$, $\mathcal{H} = 0$ and $\mathcal{C} = 0$. This gives a total of six constraints that must be imposed. 
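Explicitly, this constraint structure implies the following counting of local degrees of freedom per point of the spatial slice:
\[
\underbrace{8}_{\mbox{\scriptsize canonical variables}} \;-\; \underbrace{6}_{\mbox{\scriptsize constraints}} \;-\; \underbrace{2}_{\mbox{\scriptsize gauge fixing}} \;=\; 0 \,.
\]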
Here, besides the six constraints, the two functional degrees of freedom corresponding to the gauge symmetry of spatial diffeomorphisms have been subtracted; no physical, propagating degree of freedom is left in the phase space. In other words, the reduced phase space of this theory has no dimension at all. With regard to the absence of local physical degrees of freedom, this theory behaves like 2+1 general relativity. The equations of motion in the canonical formalism are \begin{eqnarray} \dot{N} &=& \sigma_1 + N^{k}\nabla_{k}N \,, \label{dotn} \\ \dot{g}_{ij} &=& 2N\frac{\pi_{ij}}{\sqrt{g}} + 2\nabla_{(i}N_{j)} + \sigma_2 g_{ij} \,, \label{dotg} \\ \dot{\pi}^{ij} &=& - \frac{N}{2\sqrt{g}}\Big( 4 \pi^{k(i}\pi^{j)}{}_{k} - g^{ij}\pi^{kl}\pi_{kl} \Big) - \frac{\alpha}{2} \sqrt{g} N \Big( 2 a^{i}a^{j} - g^{ij} a_k a^k \Big) \nonumber \\ && + \beta\sqrt{g} \Big( \nabla^{ij} N-g^{ij}\nabla^{2}N \Big) - \sigma_2 \pi^{ij} - 2\nabla_{k} N^{(i} \pi^{j)k} + \nabla_{k}(N^{k}\pi^{ij}) \,. \label{dotpi} \end{eqnarray} To arrive at this form of the equations of motion, we have considered the Hamiltonian only with the primary constraints added, as shown in Eq.~(\ref{H}). The secondary constraints $\mathcal{H}$ and $\mathcal{C}$ can also be added, but, with suitable boundary conditions, one finds that the solution for their corresponding Lagrange multipliers is zero, which is equivalent to dropping these constraints from the Hamiltonian (an extended discussion about this issue in the 3+1 theory can be found in \cite{Bellorin:2017gzj}). Thus, the equations of motion (\ref{dotn}) -- (\ref{dotpi}) are the evolution equations of the theory on very general grounds. Viewed as a problem of initial data, the absence of local degrees of freedom means that the constraints, together with the choice of a gauge fixing the freedom of performing spatial diffeomorphisms, determine the initial data completely. The evolution equations (\ref{dotn}) -- (\ref{dotpi}) give the flow in time of the initial data, but the freedom to change this initial data has already been fixed by the constraints and the gauge chosen. In the next subsection we will see how this works explicitly, by means of a perturbative analysis. As happens in $2+1$ general relativity \cite{Ashtekar:1993ds}, the term of the Hamiltonian (\ref{H}) that is quadratic in the canonical momentum imposes an upper bound on $\mu$: in order to get a finite generator of time evolution, one must impose $\mu < 2$. Moreover, by using the constraint (\ref{HamiltonianConst}), the Hamiltonian (\ref{H}) can be written as a sum of constraints plus the boundary term, \begin{equation} H = \int d^2x \left( N \mathcal{H} + N_k \mathcal{H}^k + \sigma_1 P_N + \sigma_2 \pi \right) + 2 \pi \beta \mu \,. \label{HND} \end{equation} The constant term $2\pi\beta\mu$, already present in (\ref{H}), is the only boundary contribution. Hence, this constant gives the value of the gravitational energy in the same way as in $2+1$ general relativity. With regard to the sign of the energy and the asymptotic behavior, here there are more possibilities, due to the presence of the coupling constant $\beta$, whose sign is not restricted by symmetry, by physical-mode propagation, or by the existence of a Newtonian potential. In general relativity the symmetry fixes $\beta = 1$. In the (linearized) $3+1$ Ho\v{r}ava theory, $\beta$ is the square of the speed of the tensorial waves, hence it must be positive. But in the critical $2+1$ theory there are no gravitational waves, since there are no local degrees of freedom. 
Another theory one can compare with is the $2+1$ noncritical ($\lambda \neq 1/2$) Ho\v{r}ava theory, which has a propagating mode. The squared speed of this mode is proportional to $\beta^2$, hence it does not restrict the sign of $\beta$. Another way to fix the sign of $\beta$ would be the requirement of an attractive Newtonian potential, but in the appendix we discuss that in the critical theory there is no analogue of the Newtonian potential, as happens in $2+1$ general relativity. As a consequence, if $\beta < 0 $ one could have a configuration with $\mu < 0 $ that still has positive energy. In $2+1$ general relativity a theorem of positivity of the energy is given in Ref.~\cite{Ashtekar:1993ds}: all globally well-defined solutions that satisfy the asymptotic condition (\ref{asympgij}) and are coupled to matter that satisfies the energy conditions have nonnegative energy. A related study for the Ho\v{r}ava theory in $3+1$ dimensions has been presented in \cite{Garfinkle:2011iw}. \subsection{Linearized version} It is important to have a way of solving the constraints explicitly and checking that no propagation of free data is allowed. This can be achieved conveniently in the linearized theory. In this analysis we fix $\mu = 0$ for simplicity. The absence of fluctuations of the Minkowski background provides a clear example. The configuration that is the analogue of the Minkowski space in 2+1 dimensions (in Cartesian coordinates) is given by the setting $N = 1$, $g_{ij} = \delta_{ij}$ and $\pi^{ij} = 0$, with multipliers $N_i = \sigma_1 = \sigma_2 = 0$. This is an exact solution of all the constraints and the evolution equations of the theory shown in the previous section. We perform perturbations around this solution by means of the canonical variables \begin{equation} N=1+n \,, \qquad g_{ij}=\delta_{ij}+h_{ij} \,, \qquad \pi^{ij}=p^{ij} \,. \end{equation} The asymptotic decay is $h_{ij}\sim\mathcal{O}(r^{-1})$, $p^{ij}\sim\mathcal{O}(r^{-2})$ and $n \sim \mathcal{O}(r^{-1})$, where $r = \sqrt{ x^i x^i }$. We introduce the transverse-longitudinal decomposition \begin{equation} h_{ij}= \left( \delta_{ij} - \frac{\partial_{i}\partial_{j}}{ \Delta } \right) h^{T} + \partial_{(i}h_{j)} \,, \end{equation} where $\Delta = \partial_1^2 + \partial_2^2$ is the two-dimensional flat Laplacian, and similarly for $p^{ij}$. We impose the transverse gauge $\partial_{i}h_{ij}=0$, under which the longitudinal sector of the metric is eliminated, $h_{i} = 0$. The linearized momentum constraint (\ref{momentum}) becomes $\partial_{i}p^{ij} = 0$, hence $p_i = 0$, whereas the linearized version of the $\pi = 0$ constraint eliminates the transverse scalar mode, $p^T=0$. Thus, in the linearized theory the canonical momentum is completely frozen. The constraint $\mathcal{C}$, given in Eq.~(\ref{C}), becomes $\beta \Delta n = 0$. Since in this perturbative analysis we are considering $n = \mathcal{O}(r^{-1})$, the solution for $n$ is $n = 0$. After this, the Hamiltonian constraint, given in Eq.~(\ref{HamiltonianConst}), becomes $\beta \Delta h^T = 0$ (using that, in the transverse gauge, the linearized Ricci scalar is $R^{(1)} = \partial_i \partial_j h_{ij} - \Delta h = - \Delta h^T$), and since we consider $h^T = \mathcal{O}(r^{-1})$, this constraint fixes $h^T = 0$. Therefore, all the perturbations of the canonical variables are frozen by the constraints and the choice of a gauge condition; no fluctuations of the background are allowed. 
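Collecting these results, the full set of linearized constraints in the transverse gauge reads
\[
\partial_i p^{ij} = 0 \;\Rightarrow\; p_i = 0 \,, \qquad
p = 0 \;\Rightarrow\; p^{T} = 0 \,, \qquad
\beta \Delta n = 0 \,, \qquad
\beta \Delta h^{T} = 0 \,,
\]
whose only solutions with the decays assumed above are the trivial ones.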
Since all the perturbations of the canonical variables are frozen (and the background is static), the evolution equations (\ref{dotn}) -- (\ref{dotpi}) yield no relevant information, except for the fact that they fix the possible fluctuations of the Lagrange multipliers. This includes the shift vector, since we have already fixed the gauge symmetry of spatial diffeomorphisms. Let us see this explicitly. We suppose that $N_i$, $\sigma_1$ and $\sigma_2$ are variables of first order in perturbations. At linear order, Eq.~(\ref{dotn}) fixes $\sigma_1 = 0$, whereas Eq.~(\ref{dotpi}) yields no new information. The linearized equation (\ref{dotg}) has three different components, which are equations for the three unknowns $N_1$, $N_2$ and $\sigma_2$. Written in matrix form, these equations are \begin{equation} \left( \begin{array}{ccc} 2 \partial_1 & 0 & 1 \\ 0 & 2 \partial_2 & 1 \\ \partial_2 & \partial_1 & 0 \end{array} \right) \left( \begin{array}{c} N_1 \\ N_2 \\ \sigma_2 \end{array} \right) = 0 \,. \end{equation} By multiplying this equation from the left with \begin{equation} \frac{1}{2} \left( \begin{array}{ccc} \partial_1 & -\partial_1 & 2\partial_2 \\ -\partial_2 & \partial_2 & 2\partial_1 \\ 2\partial_2^2 & 2\partial_1^2 & - 4 \partial_1\partial_2 \end{array} \right) \,, \end{equation} we obtain \begin{equation} \Delta \left( \begin{array}{c} N_1 \\ N_2 \\ \sigma_2 \end{array} \right) = 0 \,. \end{equation} By assuming that the Lagrange multipliers decay fast enough at infinity, we find that the only solution is $N_1 = N_2 = \sigma_2 =0$. Therefore, the Lagrange multipliers are also frozen in the linearized theory. \section{Asymptotic conditions, flat and nonflat solutions} We want to contrast the previously shown feature that this theory has no local degrees of freedom with the presence or absence of nonflat solutions. This can be conveniently studied in the Lagrangian formulation, with the action (\ref{action}). The equations of motion derived from the action (\ref{action}) at the critical point $\lambda = 1/2$ are \begin{eqnarray} G^{ijkl} K_{ij} K_{kl} + \beta R + \alpha a_i a^i - 2 \alpha \frac{\nabla^{2} N}{N} &=& 0\,, \label{deltaN} \\ G^{ijkl} \nabla_{j} K_{kl} &=& 0 \,, \label{deltaNi} \\ \frac{1}{\sqrt{g}} \frac{\partial}{\partial t} \left( \sqrt{g} G^{ijkl} K_{kl} \right) + 2 G^{klm(i|} \nabla_k ( N^{|j)} K_{lm} ) - G^{ijkl} \nabla_{n}( N^{n} K_{kl} ) && \nonumber \\ + 2 N (K^i{}_k K^{jk} -\frac{1}{2} KK^{ij}) - \frac{1}{2} N g^{ij} G^{klmn} K_{kl} K_{mn} && \nonumber \\ - \beta \left( \nabla^{ij} N - g^{ij} \nabla^{2}N \right) + \alpha N \left( a^i a^j - \frac{1}{2} g^{ij} a_k a^k \right) &=& 0 \,. \label{deltag} \end{eqnarray} \subsection{Asymptotically flat solutions with $\mu \geq 0$ are globally flat} The main result about the asymptotically flat solutions can be stated as a theorem: the only regular solutions of Eqs.~(\ref{deltaN}) -- (\ref{deltag}) that satisfy the asymptotically flat conditions (\ref{asympgij}) -- (\ref{asympni}) with $\mu \geq 0$ are the totally flat configurations. We remark that, in the range $\beta > 0$, the configurations with $\mu \geq 0$ are the ones with nonnegative energy. To prove the theorem, since the spatial metric is two-dimensional, we may consider a conformally flat form in Cartesian coordinates, \begin{equation} ds^2 = \Omega^2(t,\vec{x}) ( (dx^1)^2 + (dx^2)^2 ) \,. 
\label{conformal} \end{equation} The leading mode of the asymptotic expansion of the spatial metric in (\ref{asympgij}) is conformal to the 2D Euclidean metric, hence we have the asymptotic behavior $\Omega^2 = r^{-\mu} \left( 1 + \mathcal{O}(r^{-1}) \right)$. An identity in the form of a sum of squares arises in the $2+1$ critical theory $\lambda =1/2$ with this conformally flat form of the spatial metric: if $\mathfrak{M}_{ij}$ is an arbitrary $2\times 2$ matrix, then \begin{equation} G^{ijkl} \mathfrak{M}_{ij} \mathfrak{M}_{kl} = \frac{\Omega^{-4}}{2} \left[ \left( \mathfrak{M}_{11} - \mathfrak{M}_{22} \right)^2 + \left( \mathfrak{M}_{12} + \mathfrak{M}_{21} \right)^2 \right]\,. \label{identity} \end{equation} We start by analysing the solution of Eq.~(\ref{deltaNi}). We recall that in the critical case $\lambda = 1/2$ we have the geometrical identity $G^{ijkl} g_{kl} = 0$. Since the spatial metric has been put in the conformally flat form (\ref{conformal}), this identity also holds for the time derivative, $G^{ijkl} \dot{g}_{kl} = 0$. Therefore, Eq.~(\ref{deltaNi}) reduces to \begin{equation} G^{ijkl} \nabla_j \left( N^{-1} \nabla_k N_l \right) = 0 \,. \label{eq} \end{equation} We pose this equation as an equation for the shift vector $N_i$, under the condition of asymptotic flatness defined in (\ref{asympgij}) -- (\ref{asympni}). We may contract this equation with $\sqrt{g} N_i$, and integrate over the whole spatial slice at an instant of time, obtaining \begin{equation} \int d^2x \sqrt{g} G^{ijkl} N_i \nabla_j \left( N^{-1} \nabla_k N_l \right) = 0 \,. \end{equation} By integrating by parts we get two terms, \begin{equation} \int d^2x \partial_j \left( \sqrt{g} G^{ijkl} N_i N^{-1} \nabla_k N_l \right) - \int d^2x \sqrt{g} N^{-1} G^{ijkl} \nabla_i N_j \nabla_k N_l = 0 \,. \end{equation} According to the asymptotic conditions (\ref{asympgij}) -- (\ref{asympni}), the first integral yields a boundary contribution that goes as $r^{-(2 + \mu)}$, hence it vanishes for $\mu \geq 0$. Thus, we arrive at the equation \begin{equation} \int d^2x \sqrt{g} N^{-1} G^{ijkl} \nabla_i N_j \nabla_k N_l = 0 \,. \end{equation} By using identity (\ref{identity}), this equation takes the explicit form \begin{equation} \frac{1}{2} \int d^2x\, \Omega^{-2} N^{-1} \left[ \left( \nabla_1 N_1 - \nabla_2 N_2 \right)^2 + \left( \nabla_1 N_2 + \nabla_2 N_1 \right)^2 \right] = 0 \,. \end{equation} Since the integrand is a pointwise sum of nonnegative quantities, and since we assume continuity of the functions under integration, this equation necessarily implies the two equations \begin{eqnarray} \nabla_1 N_1 - \nabla_2 N_2 &=& 0\,, \\ \nabla_1 N_2 + \nabla_2 N_1 &=& 0 \,. \end{eqnarray} Explicitly, these equations are \begin{eqnarray} \partial_1 N_1 - 2 \Omega^{-1} \partial_1 \Omega N_1 &=& \partial_2 N_2 - 2 \Omega^{-1} \partial_2 \Omega N_2 \,, \\ \partial_1 N_2 - 2 \Omega^{-1} \partial_1 \Omega N_2 &=& - \partial_2 N_1 + 2 \Omega^{-1} \partial_2 \Omega N_1 \,, \end{eqnarray} or, equivalently, \begin{eqnarray} \partial_1 \left( \frac{N_1}{\Omega^2} \right) &=& \partial_2 \left( \frac{N_2}{\Omega^2} \right) \,, \\ \partial_1 \left( \frac{N_2}{\Omega^2} \right) &=& -\partial_2 \left( \frac{N_1}{\Omega^2} \right) \,. 
\end{eqnarray} The integrability of these equations leads to the condition of harmonic functions with respect to the totally flat Euclidean Laplacian, $\Delta \equiv \partial_1^2 + \partial_2^2$, namely \begin{eqnarray} \Delta \left( \frac{N_1}{\Omega^2} \right) = 0 \,, \quad \Delta \left( \frac{N_2}{\Omega^2} \right) = 0 \,. \end{eqnarray} Note that $N_{1,2} / \Omega^2$ are $\mathcal{O}(r^{-1})$ asymptotically. The only continuous function of asymptotic order $\mathcal{O}(r^{-1})$ that is harmonic with respect to $\Delta$ is the zero function (such a function is bounded, hence constant by Liouville's theorem, and the decay forces the constant to vanish). Therefore, we have shown that the only solution of Eq.~(\ref{deltaNi}) satisfying the asymptotic condition (\ref{asympgij}) -- (\ref{asympni}) with $\mu \geq 0$ is $N_i = 0$. With this result the field equations (\ref{deltaN}) and (\ref{deltag}) simplify greatly. Since in this case $K_{ij} = \frac{\dot{g}_{ij}}{2N}$ and the metric has been put in conformal form, we have the identities $G^{ijkl} K_{kl} = 0$ and $K^i{}_k K^{jk} - \frac{1}{2} K K^{ij} = 0$. Equation (\ref{deltag}) then takes the form \begin{equation} \beta \left( \nabla^{ij} N - g^{ij} \nabla^{2}N \right) - \frac{\alpha}{N} \left(\nabla^i N \nabla^j N - \frac{1}{2} g^{ij} \nabla_{k} N \nabla^{k} N \right) = 0 \,. \label{asympflateq} \end{equation} Since the second term is traceless, the trace of this equation yields \begin{equation} \nabla^2 N = 0 \,. \label{Ntrivial} \end{equation} By a similar procedure of multiplying this equation by $\sqrt{g} N$ and integrating over the spatial slice, we get the equation \begin{equation} \int d^2x \partial_k \left( N \partial_k N \right) - \int d^2x \partial_k N \partial_k N = 0 \,. \label{ntrivialint} \end{equation} With the asymptotic condition (\ref{asympgij}) -- (\ref{asympn}) we have that the boundary term goes as $r^{-1}$, hence it is zero. Again, the integrand of the remaining integral is nonnegative. Thus, the only continuous solution of Eq.~(\ref{ntrivialint}) is $\partial_i N = 0$, which is fixed as $N=1$ by the value at infinity. By inserting all the results we have obtained so far in the field equation (\ref{deltaN}), we find that it reduces to the condition of zero spatial curvature, $R = 0$. After using the form of the metric (\ref{conformal}), this condition takes the explicit form \begin{equation} \Omega \Delta \Omega - \partial_k \Omega \partial_k \Omega = 0 \,. \label{eqOmega} \end{equation} By integrating this equation over a spatial slice and integrating by parts, we get \begin{equation} \int d^2x \partial_k ( \Omega \partial_k \Omega ) - 2 \int d^2x \partial_k \Omega \partial_k \Omega = 0 \,. \label{omegatrivial} \end{equation} The boundary integral is of order $\sim r^{-\mu} ( \mu + \mathcal{O}(r^{-1}) )$, hence it vanishes for $\mu \geq 0$. Thus, the behavior of $\Omega$ is the same as that of $N$: the only continuous solution of (\ref{omegatrivial}) is $\partial_i \Omega = 0$. Since $\Omega$ is constant, the only possibility left is $\mu = 0$. This completes the proof of the theorem; the only regular asymptotically flat solutions of the critical $2+1$ Ho\v{r}ava theory with $\mu \geq 0$ are the totally flat configurations. They have $\Omega = N = 1$ and $N_i = 0$. This can be stated in terms of the positivity of the energy: in the range of the space of coupling constants defined by $\beta > 0$, the only regular asymptotically flat vacuum solutions that have nonnegative energy are the flat configurations, which have zero energy. 
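For completeness, we note that the totally flat configurations are indeed solutions of the field equations: for static data with $N = 1$, $N_i = 0$ and $g_{ij} = \delta_{ij}$ one has \begin{equation} K_{ij} = 0 \,, \quad a_i = \frac{\partial_i N}{N} = 0 \,, \quad R = 0 \,, \end{equation} so that every term in Eqs.~(\ref{deltaN}) -- (\ref{deltag}) vanishes identically.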
\subsection{Nonflat solutions} \label{rotations} We may drop the previously imposed boundary conditions and look for more solutions under a specific ansatz. We start by evaluating Eqs.~(\ref{deltaN}) -- (\ref{deltag}) for the case of static configurations and imposing the condition $N_i = 0$. Equation (\ref{deltaNi}) is automatically solved. Equation (\ref{deltag}) takes exactly the form given in (\ref{asympflateq}), with its trace (\ref{Ntrivial}) yielding the condition of harmonicity on $N$. But, since in this part we do not impose boundary conditions, we continue solving the equations without fixing $N$ yet. We put together the resulting Eqs.~(\ref{deltaN}) and (\ref{deltag}), simplified by (\ref{Ntrivial}), \begin{eqnarray} \beta R + \alpha a_k a^k &=& 0 \,, \label{A1} \\ \beta \nabla_{ij} N - \alpha N \left( a_i a_j - \frac{1}{2} g_{ij} a_k a^k \right) &=& 0 \,. \label{B11} \end{eqnarray} This is the system of equations that must be solved for static configurations (the trace of Eq.~(\ref{B11}) reproduces Eq.~(\ref{Ntrivial})). Next, we introduce the ansatz of a static spatial metric with rotational symmetry, which in polar coordinates is \begin{equation} ds^{2} = f^{-1}(r) dr^{2}+r^{2} d\theta^{2} \,, \end{equation} and we assume that the lapse function depends only on the radius, $N=N(r)$. Under this ansatz, Eq.~(\ref{A1}) and the two diagonal components of Eq.~(\ref{B11}) take the form \begin{eqnarray} \beta \frac{f'}{r f} - \alpha \left( \frac{N'}{N} \right)^{2} &=& 0 \,, \label{A111} \\ \beta \frac{N''}{N} + \frac{\beta}{2} \frac{f' N'}{f N} - \frac{\alpha}{2} \left( \frac{N'}{N} \right)^2 &=& 0 \,, \label{rr} \\ \frac{N'}{N} \left( \alpha \frac{N'}{N} + \frac{2 \beta}{ r } \right) &=& 0 \,. \label{tetateta} \end{eqnarray} The off-diagonal component of Eq.~(\ref{B11}) vanishes identically. We will see that Eq.~(\ref{rr}) is implied by Eqs.~(\ref{A111}) and (\ref{tetateta}), hence the system is self-consistent. Equation (\ref{tetateta}) has only two solutions. One is $N' = 0$, hence $N = \mbox{constant}$, which inserted in Eq.~(\ref{A111}) gives also $f = \mbox{constant}$. This is a flat configuration and Eq.~(\ref{rr}) is automatically solved by it. The other possibility is the vanishing of the second factor in (\ref{tetateta}), \begin{equation} \alpha \frac{N'}{N} + \frac{2 \beta}{ r } = 0 \,. \label{nonflatcase} \end{equation} This condition requires, by consistency, that $\alpha \neq 0$. The solution for the lapse function $N$ is obtained by the direct integration of (\ref{nonflatcase}), and then the solution for $f$ by the integration of (\ref{A111}). In the integration of $N$, a multiplicative integration constant arises; it has no physical meaning since it can be absorbed by re-scaling the time parameter of the foliation. We put this integration constant equal to $1$ for simplicity. In contrast, the integration constant that arises by integrating $f$ cannot be absorbed by rescaling coordinates; we denote it by $r_0$. In this way we get the exact vacuum solution \begin{equation} N(r) = r^{ -2\beta/\alpha } \,, \quad f(r) = \left( \frac{ r }{ r_0 } \right)^{ 4\beta/\alpha } \,. \label{nonflatsol} \end{equation} Equation (\ref{rr}) is solved by this configuration. The functions $N$ and $f$ are singular at spatial infinity; depending on the sign of $\beta/\alpha$, they tend there either to zero or to infinity. We may compute a spacetime curvature for this last solution if we assume that a spacetime metric is built from the ADM fields, that is, $^{(3)}g_{00} = - N^2$ and $^{(3)}g_{ij} = g_{ij}$. 
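Before examining the curvature, let us verify directly that the configuration (\ref{nonflatsol}) solves Eq.~(\ref{rr}), as claimed. The profiles (\ref{nonflatsol}) give \begin{equation} \frac{N'}{N} = -\frac{2\beta}{\alpha r} \,, \quad \frac{N''}{N} = \frac{2\beta}{\alpha r^2} + \frac{4\beta^2}{\alpha^2 r^2} \,, \quad \frac{f'}{f} = \frac{4\beta}{\alpha r} \,, \end{equation} so that the left-hand side of Eq.~(\ref{rr}) becomes \begin{equation} \frac{2\beta^2}{\alpha r^2} + \frac{4\beta^3}{\alpha^2 r^2} - \frac{4\beta^3}{\alpha^2 r^2} - \frac{2\beta^2}{\alpha r^2} = 0 \,, \end{equation} while the two terms of Eq.~(\ref{A111}) cancel in the same way, $4\beta^2/(\alpha r^2) - 4\beta^2/(\alpha r^2) = 0$.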
The point we want to highlight is that the spacetime curvature is not zero for this solution. Indeed, the only nonzero component of the Ricci tensor is $^{(3)}R_{rr}$, which, together with the Ricci scalar, is given by \begin{equation} ^{(3)}R_{rr} = - \frac{4\beta}{\alpha} r^{-2} \,, \quad ^{(3)}R = R = - \frac{4\beta}{\alpha} \frac{ r^{4\beta/\alpha-2} }{ r_0^{ 4\beta/\alpha} } \,. \end{equation} There is a range in the space of parameters where the three-dimensional Ricci scalar is regular at all finite points and diverges at infinity, namely \begin{equation} \frac{2\beta}{\alpha} > 1 \,. \label{boundalpha} \end{equation} We may cast this range as a condition on $\alpha$. Therefore, at least in the range (\ref{boundalpha}), this solution is not asymptotically flat. In the same range of parameters, the three-dimensional Kretschmann scalar is also regular at all finite points and diverges asymptotically, \begin{equation} ^{(3)}R_{\alpha\beta\gamma\delta}\, {^{(3)}}R^{\alpha\beta\gamma\delta} = 3 \left(\frac{4\beta}{\alpha}\right)^2 \frac{ r^{8\beta/\alpha-4} }{ r_0^{8\beta/\alpha} } \,. \end{equation} The same bound (\ref{boundalpha}) on $\alpha$ has arisen in the nonprojectable Ho\v{r}ava theory in other contexts; see, for example, \cite{Blas:2009qj}. \section*{Conclusions} We have found that the critical $2+1$ nonprojectable Ho\v{r}ava theory has properties quite similar to those of $2+1$ general relativity, although some differences arise. We have studied the large-distance effective action, hence these effects manifest themselves at second order in derivatives. We have shown, by means of a rigorous canonical analysis, that this formulation of the Ho\v{r}ava theory does not propagate any physical local mode, like $2+1$ general relativity. Despite the fact that the field equations are different from the Einstein equations, we have found that, in the theory without sources, a class of regular asymptotically flat solutions is indeed globally flat. Moreover, these solutions are the ones that can be found among the asymptotically flat configurations with positive energy when $\beta > 0$. Thus, one sees that, in spite of being a different theory with a different gauge symmetry group, the critical theory in two spatial dimensions tends to exhibit a dynamical behavior similar to general relativity. Another feature that we have presented is the absence of a Newtonian potential (assuming again the condition of asymptotic flatness). It would be interesting to elucidate whether the condition of asymptotic flatness always leads to totally flat solutions, by completing the proof for negative $\mu$. In the contrary case, if regular asymptotically flat solutions with negative $\mu$ exist, then they possess positive energy in the range $\beta < 0$. The sign of $\beta$ is not restricted, at least by appealing to the symmetry, wave propagation or the Newtonian force. This is a particular feature of the $2+1$ Ho\v{r}ava theory. The only restriction is $\beta \neq 0$, which is a requisite for the consistency of the dynamics. On the other hand, we have seen that the critical theory admits an exact vacuum solution with nontrivial curvature that is not asymptotically flat. We have presented the specific solution, which is static and has rotational symmetry. Thus, there is a relationship between the choice of the boundary conditions and the closeness to general relativity.
\section{} Phonon-mediated particle detectors~\cite{enss:2008a} (often referred to as ``bolometers'') have nowadays important applications in neutrino physics,~\cite{giuliani:2012a,nucciotti:2014a} dark-matter searches~\cite{pirro:2017a} and rare nuclear decay investigations.~\cite{belli:2019a} They also provide outstanding $\alpha$, $\beta$, $\gamma$, X-ray and neutron spectroscopy.~\cite{enss:2008a,pirro:2017a,belli:2019a,bekker:2016a} Neutrinoless double-beta ($0\nu2\beta$) decay~\cite{dolinski:2019a} is a hypothetical rare nuclear transition of an even-even nucleus to an isobar with two more protons, with the emission of just two electrons. Its observation would provide a unique insight into neutrino physics.~\cite{vergados:2016a} Bolometers based on Li$_2$MoO$_4$~crystals are promising detectors for a next-generation $0\nu2\beta$~decay experiment.~\cite{bekker:2016a,armengaud:2017a,armengaud:2020a} They embed the favorable candidate $^{100}$Mo, maximising the detection efficiency. The $0\nu2\beta$~decay signature is a peak in the sum energy spectrum of the two emitted electrons, expected at 3.034~MeV for $^{100}$Mo. In a bolometer, the energy deposited by a particle in the crystal is converted into phonons, which are then detected by a suitable sensor. The greatest challenge in $0\nu2\beta$~decay searches is the control of the radioactive background, due to the long expected lifetime of the process ($> 10^{25}-10^{26}$~y).~\cite{gando:2016a,agostini:2020a,adams:2019a} The experiments are located underground under heavy shielding. To reduce the current background level of bolometric experiments, it is mandatory to reject $\alpha$ or $\beta$ events --- referred to as ``surface $\alpha$'s or $\beta$'s'' in the following for brevity --- induced by radioactive impurities located either close to the surface of the crystal itself or to that of the surrounding structure.\cite{artusa:2014a,alduino:2017a} Surface $\alpha$'s can be rejected in scintillating materials --- such as Li$_2$MoO$_4$~--- by detecting simultaneously scintillation and phonons for the same event~\cite{pirro:2006a,artusa:2014a,poda:2017a} and exploiting the generally lower light yield of $\alpha$'s with respect to $\beta$'s,~\cite{tretyak:2010a} but the rejection of surface $\beta$'s requires dedicated techniques capable of tagging surface events in bolometers.~\cite{foggetta:2005a,marnieros:2008a,nones:2010a,nones:2012a,agnese:2013a} In this letter, we report an effective method to identify both surface $\alpha$'s and $\beta$'s in Li$_2$MoO$_4$~bolometers. The discrimination is achieved by coating a Li$_2$MoO$_4$~crystal side with a metallic film acting as a pulse-shape modifier for events that release energy close to the coated face. When an ionizing event occurs in a dielectric crystal kept at mK temperature, the deposited energy is readily converted to athermal phonons with typical energies of the order of tens of meV, to be compared with the few $\mu$eV thermal-bath energy. The energy down-conversion of these athermal phonons occurs mainly by anharmonic decay and is progressively slowing down, as the phonon lifetime scales inversely with the fifth power of the energy.~\cite{orbach:1964a,bron:1982a} If a sensor sensitive mainly to thermal phonons is used (as in this work), the rise time of the signal is in the $\sim 10$~ms range, which corresponds to the typical thermalization time of the deposited energy. However, thermalization can speed up via a metallic film covering a crystal side. 
If the particle is absorbed close to the film, a significant fraction of its energy is trapped in the metal in the form of hot electrons, excited by the absorption of the particle-generated athermal phonons. The energy is quickly thermalised in the electron system, so that phonons of much lower energies are re-injected in the crystal from the film. Signals from events occurring close to the film will therefore present a shorter rise time and a modified time evolution. We show here that surface events can be tagged according to this approach. All the detectors in this work share a common basic structure, which is similar to that used in the $0\nu2\beta$~experiments CUORE,~\cite{adams:2019a} LUMINEU,~\cite{armengaud:2017a} CUPID-Mo,~\cite{armengaud:2020a,armengaud:2020b} CUPID-0~\cite{azzolini:2019a} and in the dark-matter experiment EDELWEISS~\cite{armengaud:2017b} as far as the phonon readout is concerned. The surface sensitivity was studied above ground with prototypes of reduced size with respect to the final $0\nu2\beta$~bolometers. The energy absorber of the bolometers described here is a single Li$_2$MoO$_4$~crystal~\cite{grigorieva:2017a} with a size of $20 \times 20 \times 10$~mm$^3$ and a mass of $\sim 12$~g. All the tests involve just a single $20 \times 20$~mm$^2$ coated side. \begin{figure}[t] \includegraphics[scale=0.26]{photo-set-up.pdf} \caption{\label{fig:detector-assembly} Scheme (left) and photograph (right) of the detector assembly with a bare Li$_2$MoO$_4$~crystal. A Ge thermistor and a Si:P heater (used to stabilize the bolometric response) are glued on the upper face of the crystal, which is held by polytetrafluoroethylene (PTFE) elements, not shown in the scheme. A uranium source --- visible in transparency in the photograph --- is placed below the crystal. A bolometric light detector --- removed to take the photograph --- faces the upper side of the crystal. The reflecting foil forms an optical cavity that aids light collection.} \end{figure} The phonon sensor is a neutron transmutation doped Ge thermistor~\cite{haller:1984a} (NTD) with a size of $3 \times 3 \times 1$~mm$^3$. Its resistivity increases exponentially as the temperature decreases.~\cite{efros:1975a} The NTD is glued on the crystal by means of a two-component epoxy. The glue provides a slow transmission interface, making the NTD sensitive mainly to thermal phonons. We used uranium radioactive sources to test the detector surface sensitivity. They were obtained by drying a drop of uranium acid solution on a copper foil. These sources provide two main $\alpha$ lines at $\sim 4.2$ and $\sim 4.7$~MeV from $^{238}$U and $^{234}$U respectively, affected by a significant straggling due to partial $\alpha$ absorption in the source residues and/or in the copper substrate. $^{238}$U disintegration is followed by two consecutive $\beta$ emissions, from $^{234}$Th (with a half-life of 24.1~d and an end-point of 0.27~MeV) and from $^{234m}$Pa (with a half-life of 1.2~min and an end-point of 2.27~MeV). The $^{238}$U $\alpha$ rate and the $^{234m}$Pa $\beta$ rate are extremely close. \begin{figure*}[t] \includegraphics[scale=0.48]{Pd-film.pdf} \caption{\label{fig:Pd-film} Particle identification obtained by a Li$_2$MoO$_4$~detector with a 10-nm-thick Pd coating (see inset on the left) exposed to a uranium source. Left: The pulse-shape parameter $m/S_m$ is plotted as a function of the heat energy (estimated by $S_m$) deposited in the Li$_2$MoO$_4$~crystal, with a $\gamma$-based calibration. 
Surface events appear as a population with lower $m/S_m$ values. They are selected by visual inspection and highlighted in red. The neutron-capture line from the reaction $^6$Li(n,t)$\alpha$ lays in the interior-event band. The $\alpha$ particles are mis-calibrated by about $20$~\% due to both pulse-shape effects and intrinsic different responses for $\alpha$'s and $\beta$/$\gamma$'s. Right: The Li$_2$MoO$_4$~scintillation light yield (LY) is plotted for the same pulses on the same energy scale. The LY is expressed by the ratio of the energy deposited in the light detector by scintillation photons (in keV) to that deposited in the Li$_2$MoO$_4$~crystal as heat (in MeV), for the same event. The same surface events highlighted in the left panel are shown in red. Surface $\beta$'s lay in the high-LY band, while $\alpha$'s and the neutron capture events are well separated in the low-LY band. } \end{figure*} The detector assembly is shown in Fig.~\ref{fig:detector-assembly}. A first test was conducted with a Li$_2$MoO$_4$~crystal without coating to establish the bare detector performance. All subsequent tests have adopted the configuration shown in Fig.~\ref{fig:detector-assembly}, where the metal-coated side, which is optically polished before the film deposition, faces the radioactive source. A bolometric light detector based on a Ge wafer~\cite{armengaud:2017a,armengaud:2020a} is used to separate $\alpha$ from $\beta$/$\gamma$/muon events by detecting the scintillation light.~\cite{pirro:2006a,artusa:2014a,poda:2017a} The detectors were cooled down in a dilution refrigerator located in IJCLab, Orsay, France.~\cite{mancuso:2014a} All the data discussed here have been collected with the detector thermalized to a copper plate at $\sim 22$~mK (Fig.~\ref{fig:detector-assembly}). A current of the order of $\sim 5$~nA is injected in the NTD, raising the detector temperature to about $\sim 25$~mK, at which the NTD resistance is about 0.5~M$\Omega$. The voltage signal amplitude across the NTD is $\sim 60$~$\mu$V/MeV for the bare crystal, corresponding to an NTD temperature change of $\sim 0.5$~mK/MeV. The pulse rise time (from 10\% to 90\% of maximum amplitude) is typically in the 3--10~ms range and the pulse decay time (from 90\% to 30\% of maximum amplitude) is tens of ms. The signals are read out by a DC-coupled low-noise voltage-sensitive amplifier.~\cite{arnaboldi:2002a} In all the tests, the Li$_2$MoO$_4$~detector is energy-calibrated using $\gamma$~peaks of the environmental background, and the light detector using the cosmic muons crossing the Ge wafer~\cite{novati:2019a}. In the Li$_2$MoO$_4$~heat channel, we routinely obtained good energy resolutions of 5--10~keV FWHM for environmental $\gamma$ peaks in the 0.2--1~MeV region. The first test to attain surface sensitivity was performed with a 10-$\mu$m-thick Al-film coating. The details of the achieved results are reported elsewhere.~\cite{bandac:2020a,khalife:2020a} We recall here that an excellent separation of surface $\alpha$ particles was demonstrated thanks to pulse-shape discrimination. The best separation between surface $\alpha$'s and any type of interior events was obtained via a specially developed pulse-shape parameter --- extensively used here --- that we will designate as $m/S_m$.~\cite{bandac:2020a} To construct it, the signals are passed through a digital optimal filter,~\cite{gatti:1986a} whose transfer function is built using the noise power spectrum and the pulse shape of an interior event. 
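For reference, we recall the standard form of such an optimal (matched) filter; the expression below is quoted from the general theory rather than from a specific analysis of the present data. Its transfer function can be written as $H(\omega) \propto s^{*}(\omega)\, e^{-i\omega t_M} / \mathcal{N}(\omega)$, where $s(\omega)$ is the Fourier transform of the reference pulse, $\mathcal{N}(\omega)$ is the noise power spectral density and $t_M$ is the time at which the filtered pulse reaches its maximum.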
This filter provides the best estimator of the signal amplitude $S_m$ (i.e. energy). An individual pulse $S(t)$ is plotted point by point against an average pulse $A(t)$ --- formed from a large sample of interior events and normalized to~1 --- obtaining approximately a straight line. The related slope parameter $m$ is an estimator of the pulse amplitude as well. The ratio $m/S_m$ turns out to be very sensitive to the pulse shape. Interior events have $m/S_m \sim 1$, as expected. On the contrary, $m/S_m$ deviates from~1 for surface $\alpha$ events. For the Al-coated Li$_2$MoO$_4$~crystal, the separation between the interior and the surface $\alpha$ events from a uranium source is better than $10 \sigma$ in terms of $m/S_m$ distributions.~\cite{bandac:2020a} Unfortunately, only a slight hint of separation of the surface $\beta$ events emitted by the same source was observed,~\cite{bandac:2020a,khalife:2021a} ruling out Al coating as a viable method for complete surface-event tagging. Aluminum was chosen as it is superconductive at the bolometer operation temperature, with a critical temperature $T_C (\mathrm{Al}) \sim 1.2$~K.~\cite{Cochran:1958a} This leads to a negligible contribution to the heat capacity of the full bolometer, as the electron specific heat of superconductors vanishes exponentially with the temperature. We remark that the heat capacity of a bolometer must be as low as possible to achieve high signal amplitudes. In fact, no deterioration of the detector sensitivity was observed with respect to the bare Li$_2$MoO$_4$~crystal. However, the behaviour of superconductors can spoil surface particle tagging. The prompt absorption of athermal phonons by the film breaks Cooper pairs and forms quasi-particles. Theoretically, the quasi-particle lifetime diverges as the temperature of the superconductor decreases,~\cite{kaplan:1976a,barends:2008a} although it is often experimentally found to saturate at low temperatures.~\cite{devisser:2014a,fyhrie:2020a} In aluminum, at very low temperatures such as ours ($T/T_C < 0.02$), we expect the quasi-particle lifetime to be as large as several ms,~\cite{schnagel:2000a,baselmans:2009a,fyhrie:2020a,devisser:2014a} similar to the thermalization time of interior events. This mechanism competes with the faster thermalization that should be provided by the film. Driven by these considerations, we tested a Li$_2$MoO$_4$~bolometer with a normal-metal coating. At low temperatures, the electron specific heat of normal metals is proportional to the temperature and tends to dominate over the crystal heat capacity, which scales as $T^3$ according to the Debye law. The thickness of normal-metal films must therefore be substantially smaller than that of the aluminum ones. We chose palladium as a coating material, as it can provide continuous thin films down to 2~nm thickness and no challenging radioactive isotopes are present in its natural composition. A thickness of 10~nm was chosen as a good compromise between heat capacity reduction and phonon absorption probability. The particle-identification results are encouraging, as shown in Fig.~\ref{fig:Pd-film}: both surface $\alpha$'s and $\beta$'s are well separated from the interior events. Unfortunately, the heat capacity of the Pd film~\cite{mizutani:2001a} competes with that of the Li$_2$MoO$_4$~crystal,~\cite{musikhin:2015a} seriously affecting the sensitivity of the detector, which was only $\sim 23$~$\mu$V/MeV, about one third of that achieved with the bare crystal. 
Therefore, this option is not viable for a full coating of the crystal. To overcome the heat-capacity problem, we developed a detector coated with an Al-Pd bi-layer (100~nm and 10~nm thick respectively, with Al on the top), which is superconducting by proximity effect below $T_C$(Al-Pd)~$= 0.65$~K. The superconductive gap induced in Pd by the Al film substantially reduces the Pd specific heat with respect to the normal state. This gap is, however, low enough to ensure the fast thermalization of the energy deposited by surface events. In fact, the surface-event discrimination capability was fully maintained (see Fig.~\ref{fig:Al-Pd}, left). The detector sensitivity was measured to be 43~$\mu$V/MeV, almost double that achieved with the pure Pd film. \begin{figure*}[t] \includegraphics[scale=0.49]{Al-Pd.pdf} \caption{\label{fig:Al-Pd} Particle identification obtained by a Li$_2$MoO$_4$~detector with an Al-Pd coating exposed to a uranium source. The $\alpha$ events are removed by a light-yield cut. Left: in a plot of the pulse-shape parameter $m/S_m$ versus energy, the surface events (in red) lay below a black curve defining a 3$\sigma$ acceptance region for the interior events (in blue). The analysis is carried out in the [0-3000]~keV energy interval, which is divided into several sub-intervals. For each of them, a double Gaussian fit of the $m/S_m$ distribution is performed to separate the two populations. An example is provided in the inset. The black point is located 3$\sigma$ to the left of the mean of the Gaussian of the interior events. The black curve fits the black points by a power-law function. Right: Energy spectra (with and without source) of the surface events selected according to the procedure illustrated on the left. The live times of the two measurements are normalized. The fit of the source data accounts for the two simulated $\beta$ contributions of the uranium source and that of the background. } \end{figure*} \begin{figure}[t] \includegraphics[scale=0.49]{alpha-source.pdf} \caption{\label{fig:alpha-source} Energy spectrum collected by a Li$_2$MoO$_4$~detector with an Al-Pd coating exposed to a uranium source after selection of the $\alpha$ events by a light-yield cut with $\sim 100$\% efficiency. The spectrum is calibrated using the $\alpha$-line positions. The measurement is the same that provided the source data shown in Fig.~\ref{fig:Al-Pd}. The straggling can be reproduced by assuming five source components in copper. Each component is a 6~mm diameter disk with a given thickness. The active nuclei are assumed to be uniformly distributed in each disk. The exact source structure is unknown, but our goal here is to set up a phenomenological model capable of explaining the observed straggling. $^{238}$U and $^{234}$U are not in secular equilibrium, as already observed in these types of liquid sources.} \end{figure} \begin{figure}[t] \includegraphics[scale=0.46]{grid.pdf} \caption{\label{fig:grid} Particle identification obtained by a Li$_2$MoO$_4$~detector with an Al-Pd grid coating exposed to a uranium source. The event selection is performed as in Fig.~\ref{fig:Al-Pd}, left. In the top inset, the grid-coated crystal is shown. In the bottom inset, pulses from a surface (red) and an interior (blue) event are shown, corresponding to a deposited energy of about 1~MeV.} \end{figure} We performed two runs with the bi-layer detector. In the first, a uranium source was present, while the second was a background measurement in the same configuration. 
The trigger rate was $\sim 0.2$~Hz with the source. The contribution of the source is at the level of $\sim 0.03$~Hz. First, we developed a method to separate the surface $\beta$ component. The events below the black curve in the left panel of Fig.~\ref{fig:Al-Pd} --- collected in a source run --- are selected as surface events, while those above represent more than 99\% of the interior event population. The same analysis was performed for the background run. By means of Geant4-based~\cite{agostinelli:2003a} Monte-Carlo simulations (using the G4RadioactiveDecay and Decay0~\cite{ponkratenko:2000a} event generators), we were then able to confirm that the surface $\beta$ events isolated at low energies actually come from the radioactive source. We built a model to predict the $\beta$ spectrum shape considering the observed $\alpha$ straggling (Fig.~\ref{fig:alpha-source}) and the $\beta$ interactions in the detector. We then fitted the experimental $\beta$ spectrum using the predicted shape and taking the background into account (right panel in Fig.~\ref{fig:Al-Pd}). The total number of $^{234m}$Pa decay events returned by the fit --- the only free parameter --- is 3526(81). To build the source model, we set a uranium-source depth profile capable of reproducing the observed $\alpha$ spectrum and the related straggling, as shown in Fig.~\ref{fig:alpha-source}. From the model and the experimental number of $\alpha$ counts it was possible to predict independently the expected total number of $^{234m}$Pa events, which turned out to be 3455(273), in excellent agreement with that deduced from the selection of the source $\beta$ events. The efficiency in selecting surface $\beta$ events can be estimated as 102(8)\%. The $\beta$-particle range in Li$_2$MoO$_4$~is of the order of 2~mm at 1~MeV and 4~mm at 2~MeV. Therefore, we can separate events that deposit a significant amount of energy up to $\sim 4$~mm from the film, well beyond its thickness. We then performed a final test by replacing the continuous Al-Pd film with an Al-Pd grid. The width of the grid lines was 70~$\mu$m and the spacing between each line was 700~$\mu$m (see inset in Fig.~\ref{fig:grid}). The purpose of using a grid is threefold: (1) further reduction of the heat capacity of the coating; (2) possibility to extract scintillation light through the coating; (3) availability of geometrical parameters to possibly tune the discrimination depth. The grid was tested with another uranium source, prepared with the same method as the first one, but about half as intense. The detector with grid coating can separate surface $\beta$ events (see Fig.~\ref{fig:grid}). The $\beta$ selection efficiency turned out to be 93(10)\%, in good agreement with the continuous-film results. In addition, we measured a discrimination power of about 4.5$\sigma$ for surface $\alpha$ events using the $m/S_m$ parameter. In terms of detector performance, we observed an almost full recovery of the detector sensitivity, which was $\sim 51$~$\mu$V/MeV for $\beta$/$\gamma$ events. Therefore, the grid method is currently our protocol for surface event discrimination. In conclusion, we have shown that both $\alpha$ and $\beta$ particles absorbed close to a metal-coated surface of a Li$_2$MoO$_4$~bolometer can be rejected with high efficiency by pulse-shape discrimination. The prospects of this approach for $0\nu2\beta$~searches are promising. 
In fact, the current background model of the future $0\nu2\beta$~experiment CUPID~\cite{CUPID:2019a} predicts a background level of 0.1~counts/(tonne~y~keV). Next-to-next generation experiments aim at a reduction by an additional factor of 10. Since surface $\beta$ events contribute significantly to the current background level, a necessary condition to achieve the desired reduction is to reject them with an efficiency of at least 90\%. This is achievable with the technique described here. This work is supported by the European Commission (Project CROSS, Grant ERC-2016-ADG, ID 742345). The ITEP group was supported by the Russian Scientific Foundation (grant No. 18-12-00003). F.A.D., V.I.T. and M.M.T. were supported in part by the National Research Foundation of Ukraine (grant No. 2020.02/0011). The PhD fellowship of H.K. has been partially funded by the P2IO LabEx (ANR-10-LABX-0038) managed by the Agence Nationale de la Recherche (France). The dilution refrigerator used for the tests and installed at IJCLab (Orsay, France) was donated by the Dipartimento di Scienza e Alta Tecnologia of the Insubria University (Como, Italy). The data that support the findings of this study are available from the corresponding author upon reasonable request. \nocite{*}
\section{Kleinian groups, limit sets, and the Poincar\'e exponent} For integers $n \geq 2$, $n$-dimensional hyperbolic space can be modelled by the Poincar\'e ball \[ \mathbb{D}^n = \{ z \in \mathbb{R}^n: |z| <1\} \] equipped with the hyperbolic metric $d_H$ given by \[ |ds| = \frac{2|dz|}{1-|z|^2}. \] The group of orientation preserving isometries of $(\mathbb{D}^n, d_H)$ is the group of conformal automorphisms of $\mathbb{D}^n$, which we denote by $\textup{con}^+(\mathbb{D}^n)$. A good way to get a handle on this group is to view it as the (orientation preserving) stabiliser of $\mathbb{D}^n$ as a subgroup of the M\"obius group acting on $\mathbb{R}^n \cup \{\infty\}$. This group consists of maps given by the composition of reflections in spheres. A group $\Gamma \leq \textup{con}^+(\mathbb{D}^n)$ is called \emph{Kleinian} if it is discrete. Kleinian groups generate beautiful fractal limit sets defined by \[ L(\Gamma) = \overline{\Gamma(0)} \setminus \Gamma(0) \] where $\Gamma(0) = \{ g(0) : g \in \Gamma\}$ is the orbit of 0 under $\Gamma$ and the closure is the Euclidean closure. Discreteness of $\Gamma$ implies that all $\Gamma$-orbits are locally finite in $\mathbb{D}^n$ and this ensures that $L(\Gamma) \subseteq S^{n-1}$. Here $S^{n-1}$ is the `boundary at infinity' of hyperbolic space. A Kleinian group is called \emph{non-elementary} if its limit set contains at least 3 points, in which case it is necessarily an uncountable perfect set. The \emph{Poincar\'e exponent} captures the coarse rate of accumulation to the limit set and is defined as the exponent of convergence of the \emph{Poincar\'e series} \[ P_\Gamma(s) = \sum_{g \in \Gamma } \exp(-sd_H(0,g(0))) = \sum_{g \in \Gamma} \left(\frac{1-|g(0)|}{1+|g(0)|} \right)^s \] for $s \geq 0$. The \emph{Poincar\'e exponent} is therefore \[ \delta(\Gamma) = \inf\{ s \geq 0 : P_\Gamma(s) <\infty\}. \] It is a simple exercise to show that the \emph{Poincar\'e series} may be defined using the orbit of an arbitrary $z \in \mathbb{D}^n$ at the expense of multiplicative constants depending only on $z$. In particular, the exponent of convergence does not depend on the choice of $z$. (The definition above uses $z=0$.) For more background on hyperbolic geometry and Kleinian groups see \cite{beardon, maskit}. There has been a great deal of interest in computing or estimating the fractal dimension of the limit set $L(\Gamma)$ (as a subset of Euclidean space $\mathbb{R}^n$) and the Poincar\'e exponent plays a central role. We write $\dim_{\textup{H}}, \, \overline{\dim}_{\textup{B}}, \, \dim_{\textup{A}}$ to denote the Hausdorff, upper box, and Assouad dimensions respectively. These constitute three distinct and well-studied notions of fractal dimension. See \cite{falconer} for more background on dimension theory and fractal geometry, especially the box and Hausdorff dimensions, and \cite{book} for the Assouad dimension. For all non-empty bounded sets $F \subseteq \mathbb{R}^n$, \[ 0 \leq \dim_{\textup{H}} F \leq \overline{\dim}_{\textup{B}} F \leq \dim_{\textup{A}} F \leq n. \] For all non-elementary Kleinian groups, \[ \delta(\Gamma) \leq \dim_{\textup{H}} L(\Gamma) \] and for non-elementary \emph{geometrically finite} Kleinian groups, \[ \delta(\Gamma) = \dim_{\textup{H}} L(\Gamma) = \overline{\dim}_{\textup{B}} L(\Gamma). \] See \cite{bowditch} for more details on geometric finiteness. Roughly speaking, it means that the Kleinian group admits a reasonable fundamental domain. 
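Before going further, we record for later use that the second expression for the Poincar\'e series given above follows at once from the explicit formula for the hyperbolic distance (recalled again in the proof below): for $w \in \mathbb{D}^n$, \[ d_H(0,w) = \log \frac{1+|w|}{1-|w|}, \qquad \text{hence} \qquad \exp(-sd_H(0,g(0))) = \left(\frac{1-|g(0)|}{1+|g(0)|} \right)^s. \]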
The equality of Hausdorff dimension and Poincar\'e exponent in the geometrically finite setting goes back to Sullivan \cite{sullivan}, see also Patterson \cite{patterson}. The coincidence with box dimension in this case was proved (rather later) independently by Bishop and Jones \cite{bishopjones} and Stratmann and Urba\'nski \cite{su}. The fact that the Poincar\'e exponent is always a lower bound for the Hausdorff dimension (without the assumption of geometric finiteness) is due to Bishop and Jones \cite{bishopjones}. See the survey \cite{stratmann}. In the presence of parabolic elements the Assouad dimension can be strictly greater than $\delta(\Gamma) $, even in the geometrically finite situation, see \cite{kleinian}. In the geometrically infinite setting, $\delta(\Gamma) < \dim_{\textup{H}} L(\Gamma) < \overline{\dim}_{\textup{B}} L(\Gamma)$ is possible, and it is an intriguing open problem to determine whether $\dim_{\textup{H}} L(\Gamma) = \overline{\dim}_{\textup{B}} L(\Gamma)$ for all finitely generated $\Gamma$ when $n \geq 4$. For $n = 3$, Bishop and Jones proved that for finitely generated, geometrically infinite $\Gamma$, $\dim_{\textup{H}} L(\Gamma) = \overline{\dim}_{\textup{B}} L(\Gamma)=2$, see \cite{bishopjones}. This result was extended by Bishop to \emph{analytically finite} $\Gamma$ \cite{bishop, bishopinvent}. Falk and Matsuzaki characterised the upper box dimension of an arbitrary non-elementary Kleinian group in terms of the \emph{convex core entropy} \cite{falk}. This can also be expressed as the exponent of convergence of an `extended Poincar\'e series', but is more complicated to introduce. Proving the general inequality $\delta(\Gamma) \leq \dim_{\textup{H}} L(\Gamma)$ involves carefully constructing a measure supported on the limit set and applying the mass distribution principle. Our investigation begins with the following question: since (upper) box dimension is a simpler concept than Hausdorff dimension, can we prove the weaker inequality $\delta(\Gamma) \leq \overline{\dim}_{\textup{B}} L(\Gamma)$ using only elementary methods? We provide a short and self-contained proof of this estimate in the sections which follow. It is instructive to think about why our proof fails to prove the equality $\delta(\Gamma) = \overline{\dim}_{\textup{B}} L(\Gamma)$ in general and what sort of extra assumptions on $\Gamma$ would be needed to `upgrade' the proof to yield this stronger conclusion. The (upper) box dimension of a non-empty bounded set $F \subseteq \mathbb{R}^n$ can be defined in terms of the asymptotic behaviour of the volume of the $r$-neighbourhood of $F$. Given $r>0$, the $r$-neighbourhood of $F$ is denoted by $F_r$ and consists of all points in $\mathbb{R}^n$ which are at Euclidean distance less than or equal to $r$ from a point in $F$. Write $V_E$ to denote the Euclidean volume, that is, $n$-dimensional Lebesgue measure. If $V_E(F) =0$, then $V_E(F_r) \to 0$ as $r \to 0$. The upper box dimension of $F$ captures this rate of decay and is defined formally by \[ \overline{\dim}_{\textup{B}} F = n-\liminf_{r \to 0} \frac{\log V_E(F_r) }{\log r}. \] Another elementary proof of the estimate $\delta(\Gamma) \leq \overline{\dim}_{\textup{B}} L(\Gamma)$, at least for $n=2,3$, can be found in \cite[Lemmas 2.1 and 3.1]{bishop}. Here the connection is made via `Whitney squares'. \section{Proof of dimension estimate} Let $\Gamma$ be an arbitrary non-elementary Kleinian group acting on the Poincar\'e ball and let $\delta(\Gamma)$ denote the associated Poincar\'e exponent. 
We prove the following (well-known) inequality: \begin{equation} \label{main} \delta(\Gamma) \leq \overline{\dim}_{\textup{B}} L(\Gamma). \end{equation} Throughout we write $A \lesssim B$ to mean there is a constant $c >0$ such that $A \leq cB$. Similarly, we write $A \gtrsim B$ if $B \lesssim A$ and $A \approx B$ if $A \lesssim B$ and $A \gtrsim B$. The implicit constants may depend on $\Gamma$ and other fixed parameters, but it will be crucial that they never depend on the scale $r>0$ used to compute the box dimension or on a specific element $g \in \Gamma$. \subsection{Elementary estimates from hyperbolic geometry} Since $\Gamma$ is non-elementary, it is easy to see that it must contain a loxodromic element, $h$. Loxodromic elements have precisely two fixed points on the boundary at infinity. Let $z \in \mathbb{D}^{n}$ be a point lying on the (doubly infinite) geodesic ray joining the fixed points of $h$. We may assume $z$ is not fixed by any elliptic elements in $\Gamma$ since it is an elementary fact that the set of elliptic fixed points is discrete. Choose $a>0$ such that the set \[ \{ B_{H}(g(z),a)\}_{g \in \Gamma} \] is pairwise disjoint, where $B_H(g(z),a)$ denotes the closed hyperbolic ball centred at $g(z)$ with radius $a$. To see that such an $a$ exists recall that the orbit $\Gamma(z)$ is locally finite. As such, $a$ can be chosen such that $B_{H}(z,2a)$ contains only one point from the orbit $\Gamma(z)$, namely $z$ itself. Then the pairwise disjointness of the collection $\{ B_{H}(g(z),a)\}_{g \in \Gamma}$ is guaranteed since if $y \in B_{H}(g_1(z), a) \cap B_{H}(g_2(z), a) $ for distinct $g_1,g_2 \in \Gamma$, then \[ d_{H}(z, g_1^{-1}g_2(z)) = d_{H}(g_1(z), g_2(z)) \leq d_{H}(g_1(z), y)+d_{H}(y, g_2(z)) \leq 2a \] which gives $g_1^{-1}g_2(z) \in B_{H}(z,2a)$, a contradiction. We will use the simple volume estimate \begin{equation} \label{size} V_E(B_{H}(g(z),a)) \approx (1-|g(z)|)^n \end{equation} for all $g \in \textup{con}^+(\mathbb{D}^n)$, where the implicit constants are independent of $g$ and $z$, but depend on $a$ and $n$. This follows since $B_{H}(g(z),a)$ is a \emph{Euclidean} ball with diameter comparable to $1-|g(z)|$ (most likely not centred at $g(z)$). To derive this explicitly it is useful to recall the (well-known and easily derived) formula for hyperbolic distance \[ d_H(0,w) = \log \frac{1+|w|}{1-|w|}, \qquad (w \in \mathbb{D}^n). \] The next result says that if $1-|g(z)|$ is small, then the image of a fixed set under $g$ must be contained in a comparably small neighbourhood of the limit set. This is the only point in the proof where the fact that the group is non-elementary is used. It is instructive to find an example of an elementary group where the conclusion fails. \begin{lma} \label{volume} Let $a,z$ be as above. There exists a constant $c>0$ depending only on $\Gamma$, $a$ and $z$ such that if $g \in \Gamma$ is such that $1-|g(z)|< 2^{-k+1}$ for a positive integer $k$, then \[ B_{H}(g(z),a) \subseteq L(\Gamma)_{c2^{-k}}. \] \end{lma} \begin{proof} The idea is that there must be a loxodromic fixed point close to $g(z)$ and loxodromic fixed points are necessarily in the limit set. Indeed, $g(z)$ lies on the geodesic ray joining the fixed points of the loxodromic map $g h g^{-1}$. These fixed points are the images of the fixed points of $h$ under $g$ and at least one of them must lie in the smallest Euclidean sphere passing through $g(z)$ and intersecting the boundary $S^{n-1}$ at right angles. 
This uses the fact that geodesic rays are orthogonal to the boundary and $g$ is conformal. The diameter of this sphere is \[ \lesssim 1-|g(z)| < 2^{-k+1} \] and the result follows, recalling that the Euclidean diameter of $B_{H}(g(z),a)$ is $\approx 1-|g(z)| $. \end{proof} \subsection{Estimating the Poincar\'e series using the limit set} Let $s>t>\overline{\dim}_{\textup{B}} L(\Gamma)$. Then by definition \begin{equation} \label{box} V_E(L(\Gamma)_{r}) \lesssim r^{n-t} \end{equation} for all $0<r<c/2$ with implicit constant independent of $r$ but depending on $t$ and where $c$ is the constant from Lemma \ref{volume}. Then \begin{eqnarray*} P_\Gamma(s) &\approx& \sum_{g \in \Gamma} \left(\frac{1-|g(z)|}{1+|g(z)|} \right)^s \\ \\ &\approx& \sum_{k=1}^\infty \sum_{\substack{g \in \Gamma: \\2^{-k} \leq 1-|g(z)| < 2^{-k+1}}} (1-|g(z)|)^s \\ \\ &\approx& \sum_{k=1}^\infty 2^{-k(s-n)} \sum_{\substack{g \in \Gamma: \\2^{-k} \leq 1-|g(z)| < 2^{-k+1}}} (1-|g(z)|)^n \\ \\ &\lesssim& \sum_{k=1}^\infty 2^{-k(s-n)} \sum_{\substack{g \in \Gamma: \\2^{-k} \leq 1-|g(z)| < 2^{-k+1}}} V_E(B_{H}(g(z),a)) \qquad \text{(by \eqref{size})} \\ \\ &\lesssim& \sum_{k=1}^\infty 2^{-k(s-n)} V_E(L(\Gamma)_{c2^{-k}}) \qquad \text{(by Lemma \ref{volume} and choice of $a$)}\\ \\ &\lesssim& \sum_{k=1}^\infty 2^{-k(s-n)} 2^{-k(n-t)} \qquad \text{(by \eqref{box})} \\ \\ &=& \sum_{k=1}^\infty 2^{-k(s-t)}\\ \\ &<&\infty. \end{eqnarray*} Therefore $\delta(\Gamma) \leq s$ and letting $s \to \overline{\dim}_{\textup{B}} L(\Gamma)$ proves \eqref{main}.
\section{Introduction} Given a metric space $M$, by $\F(M)$ we denote the \emph{Lipschitz-free space over~$M$}. This is a Banach space whose linear structure somehow reflects the metric structure of~$M$. The study of Banach space theoretical properties of Lipschitz-free spaces was initiated by a paper by G. Godefroy and N. Kalton \cite{gk03}, where the authors proved, using this notion, e.g. that if a separable Banach space~$Y$ is isometric (not necessarily linearly) to a subset of a Banach space~$X$, then $Y$ is already linearly isometric to a subspace of $X$. Soon after, the study of Lipschitz-free Banach spaces became an active field of research, see e.g. \cite{glz}, \cite[Section 10]{ost} and the results mentioned below. However, the structure of these spaces is still very poorly understood to this day. For example, it is not known whether $\F(\er^2)$ is isomorphic to $\F(\er^3)$. Let us recall the construction of $\F(M)$. Choose a distinguished ``base point'' $0\in M$ and denote by $\Lip_0(M)$ the space of all real-valued Lipschitz functions on $M$ that map $0\in M$ to $0\in\er$. It becomes a Banach space if we define the norm of $f$ to be its minimal Lipschitz constant. For any $x\in M$ we denote by $\delta(x)\in\Lip_0(M)^*$ the evaluation functional, i.e. $\ev{\delta(x)}f=f(x)$ for $f\in\Lip_0(M)$. It is easy to see that $\delta$ is an isometric embedding of $M$ into $\Lip_0(M)^*$. The space $\F(M)$ is defined to be the closed linear span of $\{\delta(x)\setsep x\in M\}$ with the dual space norm denoted simply by $\norm{\cdot}$. For details and additional properties see Section 2 or \cite[Section~1]{CDW}. There is a large number of results on the structure of Lipschitz-free space (some of them are recalled below), but an explicit isometric representation of $\F(M)$ is known only for very special $M$. It is known that $\F(\er)$ is isometric to $L^1(\er)$ (cf. \cite[page 128]{gk03} or \cite[page 33]{glz}) and there is an isometric representation for certain discrete spaces. Our main result is an explicit isometric representation of $\F(\Omega)$ where $\Omega$ is a nonempty convex domain in a finite-dimensional normed space. It reads as follows: \begin{thm}\label{t:main} Let $E$ be a real normed space of dimension $d\in\en$ and $\Omega\subset E$ be a nonempty convex open subset. Then the Lipschitz-free space $\F(\Omega)$ is canonically isometric to the quotient space $$L^1(\Omega,E)/\{\g\in L^1(\Omega,E)\setsep\div\g=0\mbox{ in the sense of distributions on }\er^d\}.$$ Moreover, if $\o\in\Omega$ is the base point and $\a\in\Omega$ is arbitrary, then in this identification we have \begin{multline*}\delta(\a)=[\g]\mbox{ if and only if }\g\in L^1(\Omega,E)\\\mbox{ and }\div\g=\ep_\o-\ep_\a\mbox{ in the sense of distributions on }\er^d,\end{multline*} where $\ep_{\x}$ denotes the Dirac measure supported at $\x$. \end{thm} Let us explain the notation and terminology used in the theorem. First, by $[\g]$ we denote the equivalence class of $\g$ as an element of the quotient space. Further, since $L^1(\Omega,E)$ is canonically isometrically embedded into $L^1(\er^d,E)$ (just extend any $\g\in L^1(\Omega,E)$ by zero outside $\Omega$), any $\g\in L^1(\Omega,E)$ can be viewed either as a regular distribution on $\Omega$ or as a regular distribution on $\er^d$. Thus, $\div\g$ in the sense of distributions on $\er^d$ is the distributional divergence of the regular distribution on $\er^d$ induced by the described extension of $\g$. 
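To spell this out, for $\g\in L^1(\Omega,E)$ the divergence in the sense of distributions on $\er^d$ acts on test functions by $$\ev{\div\g}{\varphi}=-\int_\Omega \g\cdot\nabla\varphi\di\lambda^d,\quad \varphi\in\Dc(\er^d),$$ where, crucially, the test functions $\varphi$ are not required to vanish near the boundary of $\Omega$.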
This is quite an important matter, since the result would become false if we considered divergence in the sense of distributions on $\Omega$. The above theorem is new for $d\ge2$, but it also covers the known case $d=1$. Indeed, if $E=\er$ and $\Omega=(a,b)$ where $-\infty\le a<b\le\infty$, then $\F((a,b))$ is canonically identified with $L^1((a,b))$ and, if $0\in (a,b)$, then in this identification we have $$\delta(x)=\begin{cases} \chi_{(0,x)},& x\in(0,b),\\ 0, & x=0,\\ -\chi_{(-x,0)},& x\in(a,0).\end{cases}$$ This case is covered by our theorem, since in dimension one the divergence is just the derivative and the only $f\in L^1(a,b)$ whose derivative in the sense of distributions on $\er$ is zero is the constant zero function. (We point out that we work in distributions on $\er$, not in distributions on $(a,b)$. If $(a,b)$ is a bounded interval, then constant functions on $(a,b)$ belong to $L^1(a,b)$ and their derivative in the sense of distributions on $(a,b)$ is zero, unlike their derivative in the sense of distributions on $\er$.) Moreover, it is clear that the distributional derivative of the characteristic function $\chi_{(u,v)}$ is $\ep_u-\ep_v$. Our result was motivated in part by a result of N.~Lerner \cite{lerner} who proved that $\Lip_0(\er^d)$ is isomorphic to the dual of the quotient described in our theorem. This is closely related to our result, since $\F(M)^*$ is isometric to $\Lip_0(M)$ for any metric space $M$ (see the next section). However, it was not known to us whether $\F(M)$ is the unique predual of $\Lip_0(M)$; therefore, to give a representation of $\F(\er^d)$, we described the mapping $\delta$. Let us note that it has been proved recently by N. Weaver \cite{weaver} that $\F(M)$ actually is the unique predual of $\Lip_0(M)$ if $M$ is a convex set in a Banach space; however, the description of the mapping $\delta$ could be of independent interest. We further extended the results to other norms on $\er^d$ (which is easy) and to general convex domains (which requires some additional work). Let us note that during the review process of this paper we found out that the problem of characterizing $\F(\er^d)$ has been independently investigated in the Master thesis \cite{flores}, where some partial results were obtained, and that similar ideas to those we use in our paper were also used in \cite[Appendix: The Sobolev space $W^{-1,1}$]{maly}, where the author describes the Sobolev space $W^{-1,1}$. Let us summarize what is known to the authors about the structure of Lipschitz-free spaces over subsets of $\er^d$. If $X$ and $Y$ are Banach spaces, we write $X\equiv Y$, $X\cong Y$ and $X\hookrightarrow Y$ if $X$ is linearly isometric with $Y$, linearly isomorphic with $Y$ and isomorphic to a subspace of $Y$, respectively. If this is not the case, we write $X\nequiv Y$, $X\ncong Y$ and $X\not \hookrightarrow Y$. \begin{fact} \begin{enumerate} \item $\F(\er)\equiv L^1(\er)$. \item For any measure $\mu$, we have $\F(\er^2)\not \hookrightarrow L^1(\mu)$. Moreover, there exists a convergent sequence $K\subset\er^2$ such that $\F(K)\not\hookrightarrow L^1(\er)$ \item If $M\subset\er^d$ has a nonempty interior, then $\F(M)\cong \F(\er^d)$. \item For every $M\subset\er^d$, the space $\F(M)$ is weakly sequentially complete. In particular, $c_0\not\hookrightarrow \F(M)$. \item $\F(\er^d)$ has a Schauder basis. Moreover, for every $M\subset\er^d$ bounded and convex, $\F(M)$ has a Schauder basis as well. 
\item For every $M\subset \er^d$, the space $\F(M)$ has the bounded approximation property (BAP). Moreover, if $M$ is compact and convex or $M = \er^d$, then $\F(M)$ has the metric approximation property (MAP) with respect to any norm on $\er^d$. \item If $K\subset \er^d$ is a countable compact set, then $\F(K)$ is a dual space which has the MAP. On the other hand, if $M\subset\er^d$ is convex and not reduced to a point, then $L^1([0,1])\hookrightarrow \F(M)$; in particular, $\F(M)$ is not a dual space. \item For a metric space $M$, the space $\F(M)$ is isometric to a subspace of an $L^1$ space if and only if $M$ isometrically embeds into an $\er$-tree. \end{enumerate} \end{fact} The assertion (1) is well known, see e.g. \cite[page 128]{gk03} or \cite[page 33]{glz}, the proof is easy using the above description and we recall it at the beginning of the following section. The first result in (2) was shown by A. Naor and G. Schechtman \cite{naoSch}. As observed in \cite[Remark 4.2]{CDW}, this result actually follows already from \cite{kis} using minor modifications. The ``moreover'' part of (2) follows from \cite[Theorem 1.2 and Remark 4.5]{CDW}. The assertion (3) is proved in \cite[Corollary 3.5]{kauf} and the assertion (4) in \cite[Theorem 1.3]{CDW}. The assertion (5) is a result of E. Perneck\'a and P. H\'ajek, see Theorem 3.1 and page 645 in \cite{hp}. The first statement of (6) follows from \cite[Proposition 2.3]{lp}, the case of $\F(\er^d)$ from \cite[Proposition 5.1]{gk03}, and the remaining case is proved in \cite[Corollary 1.2]{ps}. The first part of (7) is a special case of a result of A. Dalet \cite[Theorem 2.1]{aude}. On the other hand, if $M$ is a convex set not reduced to a point, then $[0,1]$ bi-Lipschitz embeds into $M$; hence, $L^1([0,1])\equiv\F([0,1])\hookrightarrow \F(M)$. Therefore, $\F(M)$ is a separable space which fails the Radon-Nikod\'ym property; hence it is not a dual space. The assertion (8) is proved in \cite[Theorem 4.2]{godard}. It seems that the main open problems are whether $\F(\ell_1)$ is complemented in its bidual and whether $\F(M)$ has BAP for every uniformly discrete metric space, see \cite[Problem 16 and Problem 18]{glz}. \section{Preliminaries} In this section we introduce some notation we need to formulate and prove our results. Let us recall some basic facts concerning the Lipschitz-free spaces (for the proofs we refer to~\cite[Section~1]{CDW}). Let $(M,\rho,0)$ be a pointed metric space, i.e. a metric space with a distinguished ``base point'' denoted by $0$. The space $\F(M)$ described in the introduction is uniquely characterized by the following universal property: Let $X$ be a Banach space and suppose that $L\colon M\to X$ is a Lipschitz mapping satisfying $L(0)=0$. Then there exists a unique bounded linear operator $\widehat{L}\colon\F(M)\to X$ extending $L$, i.e. the following diagram commutes: \begin{center}\begin{tikzpicture} \matrix (m) [matrix of math nodes,row sep=3em,column sep=4em,minimum width=2em] { M & X \\ \F(M) & X \\}; \path[-stealth] (m-1-1) edge node [left] {$\delta_M$} (m-2-1) edge node [above] {$L$} (m-1-2) (m-2-1.east|-m-2-2) edge [dashed] node [above] {$\widehat{L}$} (m-2-2) (m-1-2) edge node [right] {$\mathrm{id}_X$} (m-2-2); \end{tikzpicture}\end{center} Using this universal property of $\F(M)$ for $X=\er$ it can be rather easily shown that $\F(M)^*$ is linearly isometric to $\Lip_0(M)$. The notation and terminology we use are relatively standard. 
For a Banach space $X$, $x\in X$ and $x^*\in X^*$ we denote by $\ip{x^*}{x}$ or by $\ip{x}{x^*}$ the application of the functional $x^*$ on the vector $x$; that is, $\ip{x^*}{x} = \ip{x}{x^*} = x^*(x)$. Our basic setting is the following: Let $E$ be a finite-dimensional real normed space with a fixed basis $\e_1,\dots,\e_d$, where $d\ge 2$. Let $E^*$ be the dual space to $E$ and $\e^*_1,\dots,\e^*_d$ be the dual basis. Using the coordinates with respect to these bases we can canonically identify both $E$ and $E^*$ with the space $\er^d$ equipped with the corresponding norms. On $\er^d$ we will consider the standard Lebesgue $d$-dimensional measure $\lambda^d$ and the standard $(d-1)$-dimensional Hausdorff measure $\H^{d-1}$. These measures are transferred to $E$ and $E^*$ via the mentioned identification. If $U\subset \er^d$ (or $U\subset E$) is a nonempty open set, by $\Dc(U)$ we denote the space of real-valued $\C^\infty$-smooth functions with a compact support in $U$. The symbol $\Dcp(U)$ denotes the respective space of distributions. Finally, let $(u_n)$ be a fixed approximate unit in $\Dc(\er^d)=\Dc(E)$, i.e., $u_n(\x)=n^d\rho(n\x)$, where $\rho\in\Dc(\er^d)$ is a non-negative function with $\int_{\er^d}\rho=1$. We will use also vector-valued functions. $\Dc(U,\er^d)$ will denote the space of $\C^\infty$-smooth vector fields with a compact support in $U$ and with values in $\er^d$, i.e., $d$-tuples of elements of $\Dc(U)$. If $\g = (g_1,\ldots,g_d)$ is a $d$-tuple of distributions, then $\div\g = \sum_{i=1}^d \partial_ig_i$. By $L^1(U,E)$ we denote the space of all (equivalence classes of) integrable $E$-valued functions defined on $U$. The norm on this space is defined by $$\norm{\f}=\int_{U} \norm{\f(\x)}_E\di\lambda^d(\x),\quad \f\in L^1(U,E).$$ Recall that $L^1(U,E)$ is canonically embedded into $L^1(\er^d,E)$, where each $f\in L^1(U,E)$ is extended by zero outside $U$. The space $L^\infty(U,E^*)$ is the space of all (equivalence classes of) essentially bounded measurable $E^*$-valued functions defined on $U$. The norm is defined by $$\norm{\g}=\esssup_{\x\in U} \norm{\g(\x)}_{E^*},\quad \g\in L^\infty(U,E^*).$$ The space $L^\infty(U,E^*)$ is canonically isometric to the dual of $L^1(U,E)$, the duality being defined by $$\ip{\g}{\f} = \int_U \ip{\g(\x)}{\f(\x)}\di\lambda^d(\x), \quad \g\in L^\infty(U,E^*),\f\in L^1(U,E).$$ Finally, let $\Omega$ be a fixed nonempty convex open subset of $E$ with a base point $\o\in\Omega$. Without loss of generality we may and shall suppose that $\o$ is the origin. \section{Proof of the main result} The idea of the proof of our main result, Theorem \ref{t:main}, is to mimic the known and easy proof in dimension one. Let us recall it: It is well known that the mapping $T:\Lip_0(\er)\ni f\mapsto f'\in L^\infty(\er)$ is an onto isometry. Consider the adjoint $T^*$. If $x>0$, then $T^*(\chi_{(0,x)})=\delta(x)$ since for any $F\in\Lip_0(\er)$ we have $$\ip{T^*\chi_{(0,x)}}{F} =\ip{\chi_{(0,x)}}{TF}=\ip{\chi_{(0,x)}}{F'}=\int_0^x F'=F(x)=\ip{\delta(x)}{F}.$$ Similarly, $T^*(-\chi_{(x,0)})=\delta(x)$ for $x<0$. Since $\delta(\er)$ is linearly dense in $\F(\er)$ and characteristic functions of intervals are linearly dense in $L^1(\er)$, it easily follows that $T^*$ isometrically maps $L^1(\er)$ onto $\F(\er)$. Hence, we try to mimic this approach in higher dimension and for general convex domains. We consider the mapping $T:\Lip_0(\Omega)\ni f\mapsto \nabla f\in L^\infty(\Omega,E^*)$. It is rather a standard fact that $T$ is an into isometry (cf. 
Proposition~\ref{P:2}(i)). However, it is not onto; the range is described in Proposition \ref{P:2}(ii). It turns out that the range is weak$^*$-closed in $L^\infty(\Omega,E^*)$, and in Proposition \ref{P:3} we describe its pre-annihilator; hence, the respective quotient is a predual. Finally, in Proposition~\ref{P:main} we show, using the auxiliary Proposition~\ref{P:div}, that the adjoint map $T^*$ maps this predual isometrically onto $\F(\Omega)$ and give a representation of the mapping $\delta$. A large part of the first two steps, i.e., Proposition~\ref{P:2}(ii) and Proposition~\ref{P:3}, for the case $\Omega=\er^d$ is due to \cite{lerner}.

We start with the following proposition. The parts (i) and (ii) are classical well-known facts; we provide references for the proofs. The assertion (iii) is essentially standard as well; we provide the proof for the sake of completeness.

\begin{prop}\label{P:L1}
Let $F:\Omega\to\er$ be an $L$-Lipschitz function. Then the following hold:
\begin{itemize}
\item[(i)] For almost all $\x\in\Omega$ there exists the Fr\'echet derivative $F'(\x)\in E^*$. Moreover, $\norm{F'(\x)}\le L$ whenever $F'(\x)$ exists.
\item[(ii)] The mapping $F':\x\mapsto F'(\x)$ is an almost everywhere defined measurable function. Moreover, it is the gradient of $F$ in the sense of distributions on $\Omega$, i.e.,
$$\int_\Omega F'\cdot \varphi \di\lambda^d=-\int_\Omega F\cdot \nabla\varphi\di\lambda^d,\quad \varphi\in\Dc(\Omega).$$
\item[(iii)] For each $\x,\y\in\Omega$ we have
$$F(\y)-F(\x)=\lim_{n\to\infty} \int_0^1 \ip{(F'*u_n)(\x+t(\y-\x))}{\y-\x}\di t.$$
\end{itemize}
\end{prop}

\begin{proof}
(i) $F$ is differentiable almost everywhere by the classical Rademacher theorem (see, e.g., \cite[Theorem 3.1.6]{federer} or \cite[Theorem 30.3]{LM}). The estimate of the norm is obvious.

(ii) The set of Fr\'echet-differentiability points is of full measure, hence Lebesgue measurable, by (i) (in fact, it is a Borel set by \cite[Lemma 30.2]{LM}; for a more general result see \cite[Theorem 2]{zaj91}). Moreover, since the Fr\'echet derivative is determined by the partial derivatives and partial derivatives of a continuous function are clearly of the first Baire class, the mapping $F'$ is measurable. The integral formula follows easily using the Fubini theorem and integration by parts for absolutely continuous functions.

(iii) Let us first show the formula in the case $\Omega=\er^d$. Then for each $n\in\en$ the convolution $F*u_n$ is a $\C^\infty$ function on $\er^d$ and its gradient satisfies $\nabla(F*u_n)= F'*u_n$. Hence we have
$$(F*u_n)(\y)-(F*u_n)(\x)=\int_0^1 \ip{(F'*u_n)(\x+t(\y-\x))}{\y-\x}\di t,\ \x,\y\in\er^d,n\in\en.$$
Since $F*u_n\to F$ pointwise (in fact, uniformly on compact sets), we can take the limit to obtain the formula.

Next suppose that $\Omega\subsetneqq\er^d$. In this case the convolutions including functions defined on $\Omega$ are understood, as usual, in the sense that the respective functions are extended by zero outside $\Omega$. Let $\tilde F$ be a Lipschitz extension of $F$ defined on $\er^d$. Such an extension exists, for example, due to \cite[Theorem 30.5]{LM}. (In fact, one can preserve the Lipschitz constant, but it is not important at this point.)
Hence, using the case $\Omega=\er^d$, the formula holds with $\tilde F$ in place of $F$, i.e.,
$$\tilde F(\y)-\tilde F(\x)=\lim_{n\to\infty} \int_0^1 \ip{(\tilde F'*u_n)(\x+t(\y-\x))}{\y-\x}\di t,\quad \x,\y\in\er^d.$$
Given $\x,\y\in\Omega$, the segment $[\x,\y]$ is a compact subset of $\Omega$ and hence we can find $\varepsilon>0$ with $\varepsilon<\dist([\x,\y],\er^d\setminus\Omega)$. Further, by the properties of the approximate unit, there is some $n_0\in\en$ such that $\spt u_n\subset U(\o,\varepsilon)$ for $n\ge n_0$. Then for $n\ge n_0$ one has
$$ (F'*u_n)(\x+t(\y-\x))=(\tilde F'*u_n)(\x+t(\y-\x)) \mbox{ for }t\in[0,1].$$
Hence the formula follows.
\end{proof}

\begin{prop}\label{P:2}
For any $F\in\Lip_0(\Omega)$ set $T(F)=F'$. Then the following hold.
\begin{itemize}
\item[(i)] $T$ is a linear isometry of $\Lip_0(\Omega)$ into $L^\infty(\Omega,E^*)$.
\item[(ii)] The range of $T$ is
$$X(\Omega)=\{\f=(f_i)_{i=1}^d\in L^\infty(\Omega,E^*)\setsep \partial_i f_j=\partial_j f_i\mbox{ for }i,j=1,\dots,d\},$$
where the derivatives are considered in the sense of distributions on $\Omega$.
\item[(iii)] The inverse operator $T^{-1}\colon X(\Omega)\to \Lip_0(\Omega)$ is defined by
$$T^{-1}(\f)(\x)= \lim_{n\to\infty}\int_0^1 \ip{(\f*u_n)(t\x)}{\x}\di t,\ \f\in X(\Omega),\x\in\Omega.$$
\end{itemize}
\end{prop}

\begin{proof}
It follows from Proposition~\ref{P:L1} that $T$ is a linear operator from $\Lip_0(\Omega)$ to $L^\infty(\Omega,E^*)$ with $\norm{T}\le 1$. Further, the formula for the inverse mapping follows from Proposition~\ref{P:L1}(iii). It remains to identify the range and to show that $T$ is an isometry.

Let us start by proving that $T$ is an isometry. Fix $F\in \Lip_0(\Omega)$ and set
$$L=\norm{F'}_{L^\infty(\Omega,E^*)} =\esssup_{\x\in\Omega}\norm{F'(\x)}_{E^*}.$$
Then $\norm{F'*u_n}_{L^\infty(\Omega,E^*)}\le L$ for each $n\in\en$. Indeed, fix $n\in\en$, $\x\in\Omega$ and $\h\in E$ with $\norm{\h}\le 1$. Then
$$\begin{aligned} \abs{\ip{(F'*u_n)(\x)}{\h}} &=\abs{\sum_{i=1}^d (\partial_i F * u_n)(\x) h_i} =\abs{\sum_{i=1}^d \int_{\er^d}\partial_i F(\y) u_n(\x-\y)\di \y \cdot h_i} \\&=\abs{ \int_{\er^d}\sum_{i=1}^d\partial_i F(\y)\cdot h_i\cdot u_n(\x-\y)\di \y } =\abs{\int_{\er^d}\ip{F'(\y)}{\h} u_n(\x-\y)\di \y } \\&\le\int_{\er^d}\abs{\ip{F'(\y)}{\h}} u_n(\x-\y)\di \y \le \int_{\er^d}\norm{F'(\y)}_{E^*} u_n(\x-\y)\di \y \le L.\end{aligned}$$
(During the computation we again used the convention that the functions defined on $\Omega$ are extended by zero on $\er^d\setminus\Omega$.) Hence, indeed, $\norm{(F'*u_n)(\x)}\le L$ for each $n\in\en$ and $\x\in\Omega$. Therefore, for each $n\in\en$ and $\x,\y\in\Omega$ we have
$$\begin{aligned}\abs{\int_0^1 \ip{(F'*u_n)(\x+t(\y-\x))}{\y-\x}\di t}&\le \int_0^1 \abs{\ip{(F'*u_n)(\x+t(\y-\x))}{\y-\x}}\di t \\&\le \int_0^1 \norm{(F'*u_n)(\x+t(\y-\x))}_{E^*}\cdot\norm{\y-\x}_E\di t \\&\le L\norm{\y-\x}_E, \end{aligned}$$
hence by the formula in Proposition~\ref{P:L1}(iii) the function $F$ is $L$-Lipschitz.

It remains to prove the assertion (ii). On one hand, it is obvious that the range of $T$ is contained in $X(\Omega)$, since, in the sense of distributions on $\Omega$, always $\partial_i\partial_j F=\partial_j\partial_i F$ for $F\in \Lip_0(\Omega)$ (in fact, for any distribution $F\in\Dcp(\Omega)$). Conversely, suppose that $\f\in X(\Omega)$. Consider $\f$ to be extended by $0$ on $\er^d\setminus\Omega$.
For $i,j\in\{1,\dots,d\}$ set
$$U_{i,j}=\partial_j f_i\mbox{ in }\Dcp(\Omega)\mbox{\quad and\quad}V_{i,j}=\partial_j f_i\mbox{ in }\Dcp(\er^d).$$
Clearly $U_{i,j}=V_{i,j}|_{\Dc(\Omega)}$ and $U_{i,j}=U_{j,i}$ for $i,j\in\{1,\dots,d\}$.

Suppose without loss of generality that $\spt u_n\subset U(\o,\frac1n)$ for each $n\in\en$. For $n\in\en$ set $\Omega_n=\{\x\in\Omega\setsep \dist(\x,\er^d\setminus\Omega)>\frac1n\}$. Then $\Omega_n$ is a convex open set and, moreover, $\o\in\Omega_n$ for $n$ large enough. Consider the functions $u_n*\f$ for $n\in\en$. Then $u_n*\f$ is a $\C^\infty$ mapping defined on $\er^d$. Moreover,
$$\partial_j(u_n*f_i)(\x)=\partial_i(u_n*f_j)(\x),\quad i,j\in\{1,\dots,d\},\x\in \Omega_n,n\in\en.$$
Indeed, fix $n\in\en$, $\x\in\Omega_n$ and $i,j\in\{1,\dots,d\}$. Then
$$\partial_j(u_n*f_i)(\x)=(u_n*V_{i,j})(\x)=V_{i,j}(\y\mapsto u_n(\x-\y))=U_{i,j}(\y\mapsto u_n(\x-\y)),$$
since the support of $\y\mapsto u_n(\x-\y)$ equals
$$\x-\spt u_n\subset U(\x,\tfrac1n)\subset\Omega.$$
Thus we can conclude by recalling $U_{i,j}=U_{j,i}$.

It follows that there is a $\C^\infty$-function $F_n\colon \Omega_n\to\er$ with $\nabla F_n=u_n*\f$ on $\Omega_n$. Without loss of generality we can suppose $F_n(\o)=0$ for each $n\in\en$. Further, since $\norm{u_n*\f}_{L^\infty(\Omega,E^*)}\le \norm{\f}_{L^\infty(\Omega,E^*)}$ for each $n\in\en$, each $F_n$ is $L$-Lipschitz, where $L=\norm{\f}_{L^\infty(\Omega,E^*)}$. Since the sequence $(F_n)$ is uniformly Lipschitz and $F_n(\o)=0$ for each $n\in\en$, it is easy to check that it is locally uniformly bounded on $\Omega$. Therefore, by the Arzel\`a-Ascoli theorem one can find a subsequence which converges to an $L$-Lipschitz function $F$ uniformly on compact subsets of $\Omega$. (Given $K\subset \Omega$ compact, then $K\subset\Omega_n$ for large $n$ and hence there is $n_0$ such that $(F_n)_{n\ge n_0}$ is uniformly bounded and uniformly Lipschitz on $K$, so there is a subsequence uniformly convergent on $K$. A diagonal argument completes the construction.) Without loss of generality suppose $F_n\to F$. Then for any $\varphi\in\Dc(\Omega)$ we have
$$\int_{\Omega} F\cdot \nabla\varphi =\lim_{n\to\infty} \int_{\Omega_n} F_n\cdot \nabla\varphi =-\lim_{n\to\infty} \int_{\Omega_n} (u_n*\f)\cdot \varphi =-\int_{\Omega} \f\cdot \varphi,$$
where we used that $\spt\varphi\subset\Omega_n$ for $n$ large enough. Hence $F'=\f$.
\end{proof}

\begin{remark}
The assertion (ii) of the previous proposition for the case $\Omega=\er^d$ is the content of \cite[Lemma 2]{lerner}. The proof of the nontrivial inclusion is done in a different way. First, if $\f\in X(\er^d)$, by \cite[Proposition 4.3.9]{horvath} there is a distribution $U\in\Dcp(\er^d)$ such that $\f$ is the distributional gradient of $U$. This means that $U$ belongs to the space $L^1_\infty(\er^d)$ defined in \cite[Section~1.1]{Mazya}. It is shown in \cite[Theorem~1.1.2]{Mazya} that then $U\in W^{1,p}_{loc}(\er^d)$ for any $p>1$. Let $K=\sqrt d\int_0^1 t^{-d/p}\di t$. Lemma~4.28 in \cite{Adams-Fournier} gives
\begin{equation}\label{eq:embedd}
\forall \x,\y\in\er^d,p>d, u\in C^\infty(\er^d): |u(\x)-u(\y)|\leq 2K\|\x-\y\|^{1-\frac dp}\|\nabla u\|_{L^p(Q)},
\end{equation}
where $Q$ is a cube with sidelength $2\|\x-\y\|$ that contains $\x$ and $\y$. From \cite[Theorem~4.12, Part I]{Adams-Fournier} the continuous embedding of $W^{1,p}_{loc}(\er^d)$ into $C(\er^d)$ follows. Consequently, \eqref{eq:embedd} holds for all $u\in W^{1,p}_{loc}(\er^d)$ whenever $p>d$.
Application of \eqref{eq:embedd} to $U$ together with H\"older's inequality and the fact that $\nabla U\in L^\infty(\er^d)$ gives
$$ \forall \x,\y\in\er^d,p>d: |U(\x)-U(\y)|\leq 2K\|\x-\y\|^{1-\frac dp}\|\nabla U\|_{L^p(Q)}\leq 2^{1+\frac dp}K\|\x-\y\|\,\|\f\|_\infty , $$
i.e., $U$ is a Lipschitz function. A similar argument could probably be carried out for any convex domain, but it would be more technical and we have not found any explicit reference for it. Therefore we give above a direct proof of the assertion (ii) for a general convex domain.

We further remark that $X(\Omega)$ is a known object: the elements of $X(\Omega)$ are called \emph{closed $L^\infty(\Omega)$ currents}.
\end{remark}

\begin{prop}\label{P:3}
Set
$$Y(\Omega)=\{\g\in L^1(\Omega,E)\setsep \div \g=0\mbox{ in }\Dcp(\er^d)\}.$$
Then the following hold:
\begin{itemize}
\item $X(\Omega)=Y(\Omega)^\perp$ and $Y(\Omega)=(X(\Omega))_\perp$ in the standard duality $(L^1(\Omega,E))^*=L^\infty(\Omega,E^*)$.
\item $Y(\Omega)\cap\Dc(\Omega,E)$ is dense in $Y(\Omega)$.
\end{itemize}
\end{prop}

\begin{proof}
Let us recall that any $\g\in L^1(\Omega,E)$ is canonically identified with an element of $L^1(\er^d,E)$ ($\g$ is extended by zero outside $\Omega$). Hence, if $\g\in L^1(\Omega,E)$, then $\div \g$ in $\Dcp(\er^d)$ is the distributional divergence of the described extension of $\g$.

Let us start the proof by showing $X(\Omega)=(Y(\Omega)\cap \Dc(\Omega,E))^\perp$:
\begin{itemize}
\item[$\supset$:] Fix $\f\in (Y(\Omega)\cap \Dc(\Omega,E))^\perp$. To prove that $\f\in X(\Omega)$, take arbitrary $i,j\in\{1,\dots,d\}$ distinct and $\varphi\in\Dc(\Omega)$. Set $g_i=\partial_j\varphi$, $g_j=-\partial_i\varphi$ and $g_k=0$ for $k\in\{1,\dots,d\}\setminus\{i,j\}$. Then
$$\ip{\partial_i f_j-\partial_j f_i}{\varphi}=-\ip{f_j}{\partial_i\varphi}+\ip{f_i}{\partial_j\varphi}=\ip{\f}{\g}=0.$$
\item[$\subset$:] Suppose that $\f\in X(\Omega)$ and take $\g\in Y(\Omega)\cap\Dc(\Omega,E)$. We will show that $\ip{\f}{\g}=0$. By Proposition~\ref{P:2} we know that there is $F\in\Lip_0(\Omega)$ with $F'=\f$. Then
$$\ip{\f}{\g}=\ip{F'}{\g}=-\ip{F}{\div\g}=0.$$
\end{itemize}
Hence, by the Hahn-Banach theorem we have $(X(\Omega))_\perp=\overline{Y(\Omega)\cap\Dc(\Omega,E)}$. Therefore, to complete the proof it is enough to show that $Y(\Omega)\subset(X(\Omega))_\perp$.

Let us prove it first in the case $\Omega=E$. Take $\f\in X(\Omega)$ and $\g\in Y(\Omega)$. In the first step suppose that $\g\in\C^\infty(\er^d,E)$. Fix some $\psi\in\Dc(\er^d)$ with $\psi=1$ on $U(\o,1)$, $\spt\psi\subset U(\o,2)$ and $0\le\psi\le 1$ on $\er^d$. (The balls are taken with respect to the norm of $E$.) For $n\in\en$ set $\g_n(\x)=\psi(\frac{\x}{n})\cdot\g(\x)$. Let $F\in\Lip_0(\er^d)$ be such that $\f=F'$. Denote by $L$ the Lipschitz constant of $F$ (i.e., $L=\norm{\f}_{L^\infty(\er^d,E^*)}$).
Then
$$\begin{aligned} \abs{\ip{\f}{\g_n}}& = \abs{\ip{F'}{\g_n}}=\abs{\ip{F}{\div\g_n}} \\&=\abs{\int_{\er^d}F(\x)\cdot\left(\psi(\tfrac{\x}{n})\div\g(\x) + \tfrac1n\ip{\g(\x)}{\nabla\psi(\tfrac{\x}{n})}\right)\di\x} \\&=\abs{\int_{\er^d}F(\x)\cdot \frac1n\ip{\g(\x)}{\nabla\psi(\frac{\x}{n})}\di\x} \\&=\abs{\int_{U(\o,2n)\setminus U(\o,n)}F(\x)\cdot \tfrac1n\ip{\g(\x)}{\nabla\psi(\tfrac{\x}{n})}\di\x} \\&\le\frac1n\cdot\sup_{\x\in U(\o,2n)}\abs{F(\x)}\cdot\norm{\nabla\psi}_{L^\infty(\er^d,E^*)}\cdot \int_{\er^d\setminus U(\o,n)} \norm{\g(\x)}_E\di \x \\&\le 2L\cdot\norm{\nabla\psi}_{L^\infty(\er^d,E^*)}\cdot \int_{\er^d\setminus U(\o,n)} \norm{\g(\x)}_E\di \x\to 0\end{aligned}$$
as $n\to\infty$. Since $\g_n\to\g$ in $L^1(\er^d,E)$, we conclude that $\ip{\f}{\g}=0$.

In the second step let $\g\in Y(E)$ be arbitrary. Then for each $n\in\en$ we have $u_n*\g\in\C^\infty(\er^d,E)$, $\div(u_n*\g)=u_n*\div\g=0$, hence $\ip{\f}{u_n*\g}=0$. Since $u_n*\g\to\g$ in $L^1(\er^d,E)$, necessarily $\ip{\f}{\g}=0$.

Finally, let $\Omega$ be arbitrary, $\f\in X(\Omega)$ and $\g\in Y(\Omega)$. Let $F\in\Lip_0(\Omega)$ be such that $\f=F'$. Let $\tilde F\in\Lip_0(E)$ be an extension of $F$. Then, using the case $\Omega=E$ and the assumption $\div \g=0$ in $\Dcp(\er^d)$ (and not just in $\Dcp(\Omega)$), we have
$$\begin{aligned}\ip{\f}{\g}&=\int_{\Omega}\ip{F'(\x)}{\g(\x)}\di\x =\int_{\Omega}\ip{\tilde F'(\x)}{\g(\x)}\di\x \\&=\int_{\er^d}\ip{\tilde F'(\x)}{\g(\x)}\di\x =\ip{\tilde F'}{\g}=0.\end{aligned}$$
\end{proof}

\begin{remark}
(1) By Proposition \ref{P:2}, $\Lip_0(\Omega)$ is isometric to $X(\Omega)$, and by Proposition \ref{P:3}, $X(\Omega)$ is isometric to $(L^1(\Omega,E)/Y(\Omega))^*$. Hence $\Lip_0(\Omega)$ is isometric to $(L^1(\Omega,E)/Y(\Omega))^*$. For the case $\Omega=\er^d$ this is the content of \cite[Theorem 3]{lerner}.

(2) In \cite{lerner} the proof of the case $\Omega=\er^d$ of Proposition~\ref{P:3} is contained in Lemmata 4 and 5, although the equality $X(\er^d)=Y(\er^d)^\perp$ is not explicitly mentioned there.
\end{remark}

\begin{prop}\label{P:div}
Let $\a\in\Omega$ be fixed. Then there is $\g\in L^1(\Omega,\er^d)$ with compact support in $\Omega$ such that $\div\g=\ep_\o-\ep_\a$ in $\Dcp(\er^d)$.
\end{prop}

\begin{proof}
In this proof we will consider $\er^d$ with the Euclidean norm. Set
$$\h(\x)=\frac{\x}{d\kappa_d\norm{\x}^d},\quad \x\in\er^d\setminus\{\o\},$$
where $\kappa_d$ is the volume of the $d$-dimensional unit ball. Then $\h$ is the gradient of the standard fundamental solution of the Laplace equation in $\er^d$. In particular,
\begin{gather}
\div\h=\ep_\o \mbox{ in }\Dcp(\er^d),\\
\h\in\C^\infty(\er^d\setminus\{\o\}), \\
\div\h(\x)=0\mbox{ for } \x\in\er^d\setminus\{\o\}. \label{eq:3}
\end{gather}
Further, for any $r>0$ we have
\begin{equation} \label{eq:4}\int_{\partial U(\o,r)} \ip{\h(\x)}{\nu(\x)}\di\H^{d-1}(\x) =1, \end{equation}
where $\nu(\x)$ denotes the outer normal at the point $\x$ and $U(\o,r)$ is the Euclidean ball centered at $\o$ with radius $r$.
Indeed, since $\nu(\x)=\frac{\x}{\norm{\x}}$, one has
$$\begin{aligned}\int_{\partial U(\o,r)} \ip{\h(\x)}{\nu(\x)}\di\H^{d-1}(\x)&=\int_{\partial U(\o,r)} \frac{1}{d \kappa_d\norm{\x}^{d-1}}\di\H^{d-1}(\x)\\&=\frac{\H^{d-1}(\partial U(\o,r))}{d\kappa_d r^{d-1}} =\frac{\H^{d-1}(\partial U(\o,1))}{d\kappa_d}=1.\end{aligned}$$
The equation \eqref{eq:4} can be extended to more general domains:
\begin{equation} \label{eq:5}\begin{aligned} \mbox{If $U$ is a bounded convex domain containing $\o$,}&\\ \mbox{then }\int_{\partial U} \ip{\h(\x)}{\nu(\x)}&\di\H^{d-1}(\x) =1.\end{aligned} \end{equation}
Indeed, let $r>0$ be so small that $\overline{U(\o,r)}\subset U$. Then, by \eqref{eq:3} and the Gauss theorem \cite[Theorem 37.22]{LM} (note that the boundary of a convex domain is Lipschitz) we have
$$0=\int_{U\setminus\overline{U(\o,r)}} \div\h=\int_{\partial U} \ip{\h(\x)}{\nu(\x)}\di\H^{d-1}(\x)-\int_{\partial U(\o,r)} \ip{\h(\x)}{\nu(\x)}\di\H^{d-1}(\x),$$
thus \eqref{eq:5} follows from \eqref{eq:4}.

We continue by setting
$$\h_\a(\x)=\h(\x)-\h(\x-\a),\quad \x\in\er^d\setminus\{\a,\o\}.$$
Then clearly
\begin{gather}
\div\h_\a=\ep_\o-\ep_\a \mbox{ in }\Dcp(\er^d),\label{eq:div}\\
\h_\a\in\C^\infty(\er^d\setminus\{\o,\a\}), \\
\div\h_\a(\x)=0\mbox{ for } \x\in\er^d\setminus\{\o,\a\}. \label{eq:8}
\end{gather}
and, moreover, by \eqref{eq:5} we get
\begin{equation}\label{eq:6} \begin{aligned}\mbox{If $U$ is a bounded convex domain containing $\{\o,\a\}$, then }\\\int_{\partial U} \ip{\h_\a(\x)}{\nu(\x)}\di\H^{d-1}(\x) =0.\end{aligned} \end{equation}

Since the segment $[\o,\a]$ is a compact subset of $\Omega$, there is $r>0$ such that
$$\overline{[\o,\a]+U(\o,r)}\subset \Omega.$$
Choose some $\eta\in\Dc(\er^d)$ such that
$$\eta=1 \mbox{ on } [\o,\a]+U(\o,\tfrac r2) \mbox{ and }\spt\eta\subset[\o,\a]+U(\o,\tfrac34r).$$
Then we have
\begin{itemize}
\item $\eta\h_\a\in L^1(\Omega,\er^d)$ and has a compact support in $\Omega$;
\item $\eta\h_\a\in\C^\infty(\er^d\setminus\{\o,\a\})$;
\item $\div \eta\h_\a(\x)=0$ for $\x\in ([\o,\a]+U(\o,\tfrac r2))\setminus\{\o,\a\}$;
\item $\div \eta\h_\a(\x)=0$ for $\x\in \er^d\setminus([\o,\a]+U(\o,\frac34r))$.
\end{itemize}
Hence, if we set $U_1=[\o,\a]+U(\o,\frac14r)$ and $U_2=[\o,\a]+U(\o,r)$, we get
\begin{multline*}\int_{U_2\setminus\overline{U_1}}\div(\eta\h_\a)\di\lambda^d= \int_{\partial U_2} \ip{\eta(\x)\h_\a(\x)}{\nu(\x)}\di\H^{d-1}(\x) \\\qquad \qquad -\int_{\partial U_1} \ip{\eta(\x)\h_\a(\x)}{\nu(\x)}\di\H^{d-1}(\x) =0-\int_{\partial U_1} \ip{\h_\a(\x)}{\nu(\x)}\di\H^{d-1}(\x)=0.\end{multline*}
Since $\div(\eta\h_\a)$ restricted to $U_2\setminus \overline{U_1}$ is a $\C^\infty$-function with a compact support, by \cite[Auxiliary lemma 3.15]{NoSt} there is $\g_0\in\Dc(U_2\setminus \overline{U_1},\er^d)$ with $\div\g_0(\x)=\div(\eta\h_\a)(\x)$ for $\x\in U_2\setminus \overline{U_1}$.

Set $\g=\eta\h_\a-\g_0$. Then $\g\in L^1(\Omega,\er^d)$ with compact support in $\Omega$. Moreover, we will show that $\div\g=\ep_\o-\ep_\a$ in $\Dcp(\er^d)$. Due to \eqref{eq:div} it is enough to show that $\div(\g-\h_\a)=0$ in $\Dcp(\er^d)$. But
$$\g-\h_\a=(\eta-1)\h_\a-\g_0$$
is a $\C^\infty$ vector field on $\er^d$ (note that $\eta-1=0$ on $[\o,\a]+U(\o,\frac{r}{2})$), hence its distributional divergence is a $\C^\infty$ function and can be computed pointwise. Let us consider the following three sets covering $\er^d$:

(i) On an open neighborhood of $\overline{U_1}$ (namely on $([\o,\a]+U(\o,\tfrac r2))\setminus\spt\g_0$) we have $\eta=1$ and $\g_0=0$, hence $(\eta-1)\h_\a-\g_0=0$ there.
Thus $\div((\eta-1)\h_\a-\g_0)(\x)=0$ for each $\x\in\overline{U_1}$.

(ii) On $\er^d\setminus U_2$ we have $\eta=0$ and $\g_0=0$, hence $(\eta-1)\h_\a-\g_0=-\h_\a$, therefore the divergence is zero at each point of this set by \eqref{eq:8}.

(iii) For $\x\in U_2\setminus\overline{U_1}$ we have
$$\div((\eta-1)\h_\a-\g_0)(\x)=\div(\eta\h_\a)(\x)-\div \h_\a(\x)-\div\g_0(\x)=0$$
by \eqref{eq:8} and the choice of $\g_0$.

This completes the proof.
\end{proof}

The final ingredient is the following proposition; the proof of Theorem~\ref{t:main} then follows immediately.

\begin{prop}\label{P:main}
Let $T:F\mapsto F'$ be the isometry from Proposition~\ref{P:2}. Then the dual mapping $T^*$ maps $L^1(\Omega,E)/Y(\Omega)$ onto $\F(\Omega)$. Moreover, for $\g\in L^1(\Omega,E)$ one has $T^*([\g])=\delta(\a)$ if and only if $\div \g=\ep_\o-\ep_\a$ in $\Dcp(\er^d)$.
\end{prop}

\begin{proof}
Fix $\a\in\Omega$. Let $\g\in L^1(\Omega,E)$ be a mapping with compact support and satisfying $\div\g=\ep_\o-\ep_\a$ in $\Dcp(\er^d)$. (Such a mapping exists by Proposition~\ref{P:div}.) For any $F\in\Lip_0(\Omega)$ we then have
$$\begin{aligned} \ip{T^*([\g])}{F}&=\ip{[\g]}{TF}=\ip{[\g]}{F'} =\int_\Omega \ip{\g(\x)}{F'(\x)}\di\x \\&=\lim_{n\to\infty} \int_\Omega \ip{(u_n*\g)(\x)}{F'(\x)}\di\x =\lim_{n\to\infty} \ip{(u_n*\g)}{F'} \\& =-\lim_{n\to\infty} \ip{\div (u_n*\g)}{F} =-\lim_{n\to\infty} \int_\Omega F(\x)\cdot \div (u_n*\g)(\x)\di\x \\&=-\lim_{n\to\infty} \int_\Omega F(\x)\cdot (u_n*\div\g)(\x)\di\x \\&=-\lim_{n\to\infty} \int_\Omega F(\x)\cdot (u_n*(\ep_\o-\ep_\a))(\x)\di\x \\&=-\lim_{n\to\infty}\int_\Omega F(\x)\cdot (u_n(\x)-u_n(\x-\a))\di\x \\&= \lim_{n\to\infty} ((F*\widecheck{u_n})(\a)-(F*\widecheck{u_n})(\o)) = F(\a)-F(\o)=\ip{\delta(\a)}{F}. \end{aligned}$$
The first three equalities are just applications of definitions. The fourth one follows from the fact that $u_n*\g\to\g$ in the $L^1$-norm. Since $\g$ has compact support in $\Omega$, $u_n*\g\in\Dc(\Omega,\er^d)$ for large $n$; hence the fifth one is just rewriting the expression in the sense of distributions and the sixth one follows from the differentiation rules for distributions. Then a standard calculation follows. Let us point out that by $\widecheck{u_n}$ we mean the function defined by the formula $\widecheck{u_n}(\x)=u_n(-\x)$, $\x\in\er^d$, and that we use the obvious fact that the sequence $(\widecheck{u_n})$ is also an approximate unit.

Hence, we conclude that $T^*([\g])=\delta(\a)$. Now, let $\g_1\in L^1(\Omega,E)$ be arbitrary. Since $T^*$ is an isometry, $T^*([\g_1])=\delta(\a)$ if and only if $[\g_1]=[\g]$, i.e. $\div(\g_1-\g)=0$ in $\Dcp(\er^d)$, or, equivalently, $\div\g_1=\ep_\o-\ep_\a$ in $\Dcp(\er^d)$.

To conclude the proof, observe that $(T^*)^{-1}(\delta(\a))\in L^1(\Omega,E)/Y(\Omega)$ for any $\a\in\Omega$; hence $(T^*)^{-1}$ maps $\F(\Omega)$ into $L^1(\Omega,E)/Y(\Omega)$. Let $S$ be the restriction of $(T^*)^{-1}$ to $\F(\Omega)$. Since $S^*=T^{-1}$ is an onto isometry, necessarily $S$ is also an onto isometry. This completes the proof.
\end{proof}

\section*{Acknowledgement}

We are grateful to Professor N.~Lerner for sending us his preprint \cite{lerner}.
\section{Introduction}

The problem of estimating the state of a quantum system from observed measurement outcomes has been around since the dawn of quantum mechanics. In recent years, driven by technological advances in building and controlling quantum systems, this question has received renewed interest, especially in the context of quantum information processing (QIP). Since QIP relies on one's ability to prepare arbitrary multi-qubit quantum states, verifying experimentally that a quantum state has indeed been prepared acceptably close (by some metric) to a target is of paramount importance. As a result, methods built on classical statistical parameter estimation procedures have been adopted in order to satisfy a demand for quantum state characterization tools.

Arguably the most widespread quantum state estimation approach relies on the classical maximum likelihood estimation (MLE) technique. Pioneered in~\cite{Hradil}, it offers great simplicity in numerical implementation but suffers from several dangerous flaws. Most prominently, MLE is prone to rank-deficient estimates of a density matrix from a {\it finite} number of measurement samples~\cite{blume2010optimal}, which in turn implies that the probability of observing certain states is zero -- a statement that is only valid in the limit of an {\it infinite} number of observations. Also, MLE does not provide a straightforward way to place error bars on an estimated quantum state. Fortunately, there is an alternative parameter estimation technique called the Bayesian mean estimate (BME) that can be applied to quantum state characterization~\cite{blume2010optimal,jones1991principles,Granade2016} and is free of the aforementioned shortcomings. In addition, BME minimizes the mean square error (MSE)~\cite{blume2010optimal,jaynes2003probability}, i.e., the average square difference between the parameter and its estimate. Thus, BME offers a more accurate estimate of a quantum state.

But BME poses an implementation challenge. It computes a posterior distribution over quantum states for given measurement data by using Bayes' rule, which in turn requires one to calculate the probability of the data by integrating over the manifold of all physical quantum states. While it may be possible to carry out the multi-dimensional integration analytically, the ultimate evaluation still may be computationally prohibitive. Thus, numerical routines that use Monte Carlo (MC) methods are applied in order to sample from the posterior distribution. There is a trade-off between the speed and accuracy of BME depending on which MC algorithm is used. To our knowledge, two types of MC algorithms have been proposed for quantum state BME so far. The first one, the Metropolis-Hastings (MH) algorithm~\cite{MetropolisHastings}--an example of Markov Chain MC (MCMC)--was adopted in~\cite{blume2010optimal}. The second one, sequential MC (SMC)~\cite{del2006sequential}--an importance sampling based algorithm--was recently used for adaptive quantum state tomography~\cite{adaptive}. The MH algorithm is known for its ability to reproduce probability distributions very accurately at the expense of slow convergence. On the other hand, the SMC algorithm is fast but may converge to a sample that does not faithfully represent the distribution of interest. When to apply the MLE or the BME depends on many factors.
For instance, for a small measurement data set and a large number of unknown parameters--a typical situation for multi-qubit systems--the BME is superior to the MLE, as we demonstrate in Section~\ref{performance} of this paper. But perhaps even more crucially, the applicability of the BME approach depends on the choice of the form and parametrization of the likelihood function. The experimental likelihood most often used in applications is a simple multinomial that connects the observed data set directly with the quantum probabilities using Born's rule \cite{Hradil}. This approach assumes that the data set--the observed measurement outcomes--results directly and only from the quantum state and unitary operations. This is not always the case, since the measurement apparatus often introduces operation bias (not always unitary) and inefficiencies that modify the probability of an experimental observation. Previously, James et al. \cite{James01} accounted for bias in qubit operations in their two-photon tomography method using MLE. More recent MLE works such as Gate Set Tomography \cite{stark2014self,blume2013robust} assume nothing about the qubit state or operations (gates) other than their dimension. However, previous methods stop short of fully accounting for non-unitary operations such as qubit loss. Thus, experimentalists may find themselves applying normalizing constants to account for deficiencies in the defined likelihood. These normalizing constants require preliminary experiments to obtain, and the method for obtaining these constants is often not well defined nor reported.

In this paper we develop a BME-based quantum state reconstruction method that utilizes the slice sampling (SS) algorithm~\cite{sliceSampling}, which has the accuracy of the MH algorithm but demonstrates faster convergence~\cite{NealConv} and is more robust in numerical implementation. We show that by using the hyperspherical parameterization of the manifold of density matrices the BME of a state of a single qubit can be computed analytically by using a uniform prior. For a two-qubit system, in a situation when individual qubits may be lost during the measurement process, we apply the SS algorithm to the same parameterization and an experiment-specific likelihood, demonstrating a computationally stable and efficient way of sampling from the posterior distribution over the density matrices. We compare the resulting BME estimates to the corresponding MLE estimates as a function of the number of measurements and observe the superiority of the BME method, especially in the limit of small sample sizes.

We begin this paper with a quick outline of our method in Section~\ref{Sec:outline}. We derive a closed-form BME for the ideal single-qubit experiment in Section~\ref{Sec:singlequbit}. This approachable example illustrates our method and contrasts it with traditional MLE methods. It may also inspire further research into closed-form BME solutions of higher dimensional quantum systems. Next, in Section~\ref{Sec:twoqubits}, we derive a likelihood for a finite-data two-photon experiment where detector inefficiencies and experimental asymmetries are taken into account. Utilization of this approach results in the real-world benefit of eliminating the need to perform preliminary experiments to determine normalization constants. Subsequently, in Section~\ref{performance} we simulate a multitude of two-qubit photon experiments, generating data sets from which we compare the performance of various MLE and BME approaches.
Lastly, we apply our estimation method to a real-world two-photon experiment in Section~\ref{Sec:Experimental}. In the Appendices, we describe a common MLE approach using a traditional likelihood, detail numerical procedures for sampling density matrices from the true state distribution using slice sampling, and describe the optimization method used in likelihood maximization.

\section{Approach Outline}\label{Sec:outline}

The components of our quantum state estimation pipeline are outlined in Fig. \ref{outline}. First, we define a model of our experiment by enumerating all the possible outcomes. This enumeration allows us to specify an experiment-specific likelihood $P(\mathcal{D}|\alpha)$, the probability of observing a specific data set $\mathcal{D}$ given the experimental parameters $\alpha=\{\alpha_{1},\cdots,\alpha_{N}\}$. In our case the parameters $\alpha$ are elements of a density matrix $\rho$ representing the quantum state to be estimated. Bayes' rule,
\begin{equation}P(\alpha|\mathcal{D})=\frac{P(\mathcal{D}|\alpha)P(\alpha)}{P(\mathcal{D})}\end{equation}
then allows us to express $P(\alpha|\mathcal{D})$, a posterior distribution (PD) for the variables $\alpha$, given an observed data set $\mathcal{D}$ and a prior probability distribution $P(\alpha)$.

\begin{figure}[t]
\centering
\includegraphics[width=\textwidth]{summaryfigure.pdf}
\caption{Our Bayesian mean estimation of density matrices is outlined above. a) We define our experiment by specifying a likelihood function $\!P\left(\mathcal{D}|\alpha\right)$ of data $\mathcal{D}$ given parameters $\alpha$ and b) express a corresponding posterior distribution of the parameters $\alpha$, which define the density matrix, given the data using Bayes' rule. c) We parametrize the density matrix such that any choice of parameters in a specified range leads to a valid physical state. d) We represent our posterior distribution using these new parameters; now the BME of the density matrix can be formally written down as an integral using a Haar-invariant measure. Unfortunately, the integral's analytical solution is typically computationally intensive to evaluate. e) Thus, we use computationally efficient numerical slice sampling to draw samples of $\rho$ from the posterior distribution $\!P\left(\tau|\mathcal{D}\right)$. f) As $R$$\rightarrow$$\infty$ we tend to the true mean $\overline{\rho}$.\label{outline}}\end{figure}

Next, the BME for a specific parameter $\alpha_i$ given a data set $\mathcal{D}$ is
\begin{equation}\overline{\alpha_i}=\int d\alpha P(\alpha|\mathcal{D})\times \alpha_i \textrm{.}\end{equation}
We expand our analysis to quantum systems by assuming that the parameters $\alpha$ are entries of a density matrix $\rho$ ($\rho_{ij}=\alpha_{k}$) describing a valid quantum state (i.e. $\rho\ge 0$, $\rho=\rho^{\dagger}$, $\textrm{Tr}(\rho)=1$). Therefore, the $\alpha$'s are not independent, as we must enforce quantum constraints. To achieve this in a computationally tractable fashion, instead of using the Cartesian parameterization given by the $\alpha$'s we parametrize a density matrix $\rho$ utilizing a Cholesky decomposition (see panel {\bf c} in Fig.~\ref{outline}) and hyperspherical parameters as suggested by Daboul \cite{daboul1967conditions}. We abbreviate this parametrization with $\tau$ to distinguish it from the Cartesian parametrization $\alpha$.
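Before quantum constraints enter, the Bayes-rule-plus-mean pipeline above can be illustrated on a one-parameter classical toy problem. The following sketch (our own illustration with made-up counts, not code from this work) approximates the posterior and its mean on a grid with a uniform prior--precisely the structure that the parametrized quantum versions below inherit:
\begin{verbatim}
import numpy as np

# Toy BME: estimate a single outcome probability from k successes in
# n trials, assuming a uniform prior P(p) = const on [0, 1].
k, n = 7, 10
p = np.linspace(1e-6, 1 - 1e-6, 10_000)       # parameter grid

likelihood = p**k * (1 - p)**(n - k)          # P(D|p) for Bernoulli counts
posterior = likelihood / likelihood.sum()     # Bayes' rule on the grid

p_bme = (posterior * p).sum()                 # Bayesian mean estimate
print(p_bme)   # ~ (k + 1)/(n + 2) = 0.667, Laplace's rule of succession
\end{verbatim}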
Next, in order to compute the BME estimator, we need to select a prior probability distribution over density matrices $P(\tau)\equiv P(\rho(\tau))$ and an integration measure over the set of all physical quantum states $d\tau$ such that $d\mu(\rho(\tau))=P(\tau)d\tau$ is a valid probability measure, i.e. $\int d\mu(\rho(\tau)) = 1$. We use a non-informative prior $P(\tau) = \textrm{const}$ and derive the integration measure $d\tau$ induced by the Riemannian metric $g_{ij}$ computed from the Euclidean length element between density matrices $(ds)^{2} = \textrm{Tr}\left(d\rho(\tau)\cdot d\rho^{\dagger}(\tau)\right)$~\cite{fyodorov2005introduction}. This choice of the integration measure guarantees Haar invariance of the probability measure over the set of density matrices. Thus, the probability of a state $\rho(\tau)$ is invariant under an arbitrary unitary rotation $U$, i.e. $P(\rho(\tau))=P(U\rho(\tau) U^{\dagger})$. Then the BME of an unknown quantum state reads
\begin{equation}\overline{\rho}=\int d\tau P(\tau|\mathcal{D})\times \rho(\tau).\end{equation}
The latter expression for the BME can, in principle, be evaluated analytically. However, in practice the required computational resources and (or) time almost surely prohibit an analytical evaluation. In this case, an estimate can be obtained using numerical sampling from the posterior distribution $P(\tau|\mathcal{D})$. For example, in later sections we utilize numerical slice sampling \cite{sliceSampling} to arrive at approximate estimates for a two-photon experiment.

\section{An example: Bayesian mean estimation of an ideal single qubit}\label{Sec:singlequbit}

Consider an ideal single-qubit experiment. In this experiment we can reliably and repetitively prepare a qubit in an unknown state $\rho$ and measure the value of any desired observable $M$ without error or qubit loss. If $M$ is a two-outcome POVM defined by operators $M_i$ with $i\in\{0,1\}$, $M_0 + M_1 = I$, then the respective probabilities $p_i$ to observe outcome $i$ are determined by the unknown quantum state via $p_i = \textrm{Tr}\left(\rho\cdot M_i\right)$. To fully describe a single qubit we need to measure a set of informationally complete POVMs which will fully define the density matrix. For concreteness let us consider the case in which the qubit is represented by the polarization degree of freedom of a single photon. In this case a complete state description can be achieved by estimating the probability of observing one of two orthogonal outcomes in the rectilinear basis ($Z$, horizontal ($h$) and vertical ($v$) polarization), the diagonal basis ($X$, diagonal ($d$) and anti-diagonal ($a$) polarization), and the circular basis ($Y$, left ($l$) and right ($r$) circular polarization). The likelihood of observing a data set $\mathcal{D}$ from these measurements given we know the probabilities of each outcome exactly is
\begin{equation}P(\mathcal{D}|\alpha)=p_h^{c_h} (1\textrm{-} p_h)^{c_v} p_d^{c_d}(1\textrm{-} p_d)^{c_a} p_l^{c_l} (1\textrm{-} p_l)^{c_r}\label{likelihood}\end{equation}
where $\alpha=\{p_h,p_d,p_l\}$ and $\mathcal{D}=\{c_h,c_v,c_d,c_a,c_l,c_r\}$, and we have enforced the single-basis requirement that the sum of orthogonal probabilities is unity, $p_{h,d,l}+p_{v,a,r}=1$. Using Bayes' rule, the distribution for $\alpha$ given $\mathcal{D}$ is
\begin{equation}P(\alpha|\mathcal{D})=\frac{P(\mathcal{D}|\alpha)P(\alpha)}{\int d\alpha P(\mathcal{D}|\alpha)P(\alpha)}\end{equation}
which has no quantum constraints, i.e., the associated density matrices may not be physical.
A physical density matrix $\rho$ for the single qubit must fulfill the constraints
\begin{eqnarray}\textrm{Tr}\left(\rho\right)=1 \quad &&\textrm{probabilities sum to 1} \nonumber \\ \left\langle \phi \right| \rho \left|\phi\right\rangle \geq 0 \quad &&\textrm{positive semi-definite} \nonumber \\ \rho=\rho^\dagger \quad &&\textrm{Hermitian} \textrm{.}\nonumber \end{eqnarray}
These can all be fulfilled by parametrizing the density matrix as suggested by Daboul \cite{daboul1967conditions}. For the single qubit the parametrized matrix is
\begin{equation}\rho(\tau)\!=\!\left(\!\begin{array}{cc} \cos^2\left(u\right) & \frac{1}{2}\cos\left(\theta\right)\sin\left(2u\right)e^{i\phi} \\ \frac{1}{2}\cos\left(\theta\right)\sin\left(2u\right)e^{-i\phi} & \sin^2\left(u\right) \end{array}\!\right)\label{ideal_rho}\end{equation}
where the parameter ranges $\tau=\{u,\theta,\phi\}$, $u\in[0,\frac{\pi}{2}]$, $\theta\in[0,\frac{\pi}{2}]$, and $\phi\in[0,2\pi]$ ensure there is no state redundancy (states having multiple representations). This matrix heeds all quantum constraints for any values of the parameters. The parameters $\alpha$ in terms of the new parameters $\tau$ are
\begin{align}p_h(\tau) &= \cos^2\left(u\right)\label{pH}\\ p_d(\tau)&=\frac{1}{2}+\frac{1}{2}\sin\left(2u\right)\cos\left(\theta\right)\cos\left(\phi\right)\label{pV}\\ p_l(\tau)&=\frac{1}{2}+\frac{1}{2}\sin\left(2u\right)\cos\left(\theta\right)\sin\left(\phi\right)\textrm{.}\label{pL}\end{align}
This results in the new likelihood
\begin{equation}P(\mathcal{D}|\tau)=p_h(\tau)^{c_h} (1\textrm{-} p_h(\tau))^{c_v} p_d(\tau)^{c_d}(1\textrm{-} p_d(\tau))^{c_a} p_l(\tau)^{c_l} (1\textrm{-} p_l(\tau))^{c_r}\label{tau_likelihood}\textrm{.}\end{equation}
To complete the new description we must define a new integration measure in $\tau$ space. Our original probability space has the infinitesimal length element $(ds)^2=(dp_h)^2+(dp_d)^2+(dp_l)^2$. The measure in this case reduces to the volume element in Cartesian coordinates $d\alpha=dp_h\, dp_d\, dp_l$. This space can be considered a ``cube'' that includes both physical and unphysical states. Within this cube, the new space is a ball containing all and only the physical density matrices. The length element in this space is \cite{fyodorov2005introduction}
\begin{equation}(ds)^2=\textrm{Tr}\left(d\rho \cdot d\rho^\dagger\right)=\sum\limits_{i,j}\textrm{Tr}\left(\frac{\partial \rho}{\partial \tau_i}\cdot\frac{\partial \rho}{\partial \tau_j}\right)d\tau_i d\tau_j\label{measure}\end{equation}
where $\tau_i\in\{u,\theta,\phi\}$. The new measure, the infinitesimal volume, is
\begin{equation}d\tau = d\tau_0\; d\tau_1 \cdots d\tau_m \sqrt{\textrm{Det}\,g} \label{dTau}\end{equation}
where
\begin{equation}g_{i j}=\textrm{Tr}\left(\frac{\partial \rho}{\partial \tau_i}\cdot\frac{\partial \rho}{\partial \tau_j}\right)\textrm{.}\label{gIJ}\end{equation}
The integration measure in the ideal single-qubit experiment is
\begin{equation}d\tau = du\; d\theta\; d\phi\;\frac{\sin^3\left(2u\right)\sin\left(2\theta\right)}{2\sqrt{2}} \textrm{.}\nonumber\end{equation}
As described earlier, this measure is Haar invariant.
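With Eqs. \ref{pH}--\ref{pL}, the likelihood Eq. \ref{tau_likelihood}, and this measure in hand, the BME is already pinned down numerically. The following minimal sketch (our own brute-force Riemann sum over a $(u,\theta,\phi)$ grid with illustrative counts; not the analytical solution derived next) makes this explicit--the constant grid-cell volume cancels when the posterior is normalized:
\begin{verbatim}
import numpy as np

# Grid over the hyperspherical parameters tau = (u, theta, phi)
u = np.linspace(0, np.pi/2, 80)
th = np.linspace(0, np.pi/2, 80)
ph = np.linspace(0, 2*np.pi, 160)
U, TH, PH = np.meshgrid(u, th, ph, indexing="ij")

p_h = np.cos(U)**2                                   # Eq. (pH)
p_d = 0.5 + 0.5*np.sin(2*U)*np.cos(TH)*np.cos(PH)    # Eq. (pV)
p_l = 0.5 + 0.5*np.sin(2*U)*np.cos(TH)*np.sin(PH)    # Eq. (pL)

c_h, c_v, c_d, c_a, c_l, c_r = 7, 3, 7, 3, 0, 10     # illustrative counts
like = (p_h**c_h * (1 - p_h)**c_v * p_d**c_d * (1 - p_d)**c_a
        * p_l**c_l * (1 - p_l)**c_r)                 # likelihood in tau
meas = np.sin(2*U)**3 * np.sin(2*TH) / (2*np.sqrt(2))  # invariant measure

post = like * meas
post /= post.sum()                                   # posterior on the grid

rho00 = (post * np.cos(U)**2).sum()
rho01 = (post * 0.5*np.cos(TH)*np.sin(2*U)*np.exp(1j*PH)).sum()
rho_bme = np.array([[rho00, rho01], [rho01.conjugate(), 1 - rho00]])
print(np.round(rho_bme, 3))   # trace 1 and positivity hold by construction
\end{verbatim}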
We will also consider how this parametrization relates to the Pauli operators
\begin{equation}\sigma_z= \left(\!\begin{array}{cc} 1 & 0\\ 0 & -1 \\ \end{array}\!\right) \quad \sigma_x= \left(\!\begin{array}{cc} 0 & 1\\ 1 & 0 \\ \end{array}\!\right)\quad \sigma_y= \left(\!\begin{array}{cc} 0 & -i\\ i & 0 \\ \end{array}\!\right) \end{equation}
and their expectations
\begin{align}z&=\textrm{Tr}\left(\sigma_z\cdot\rho\right)=\cos(2u)\label{pauliZ}\\ x&=\textrm{Tr}\left(\sigma_x\cdot\rho\right)=\sin(2u)\cos(\theta)\cos(\phi)\label{pauliX}\\ y&=\textrm{Tr}\left(\sigma_y\cdot\rho\right)=\sin(2u)\cos(\theta)\sin(\phi)\label{pauliY}\textrm{.} \end{align}

With our likelihood defined, one estimation technique is to approximate the true distribution utilizing Laplace's method \cite{mackay2003information}, also known as the saddle-point approximation. This is a multivariate Gaussian centered on the MLE and defined by $k$ parameters. The MLE is found by simultaneously solving the $k$ equations of the form
\begin{equation}\frac{\partial P(\mathcal{D}|\tau)}{\partial \tau_i}=0\end{equation}
and verifying that this point represents the global maximum. The uncertainty in the parameters can be captured utilizing the covariance matrix, whose inverse we estimate as
\begin{equation}A_{ij}=\left.-\frac{\partial^2 \log\left(P(\mathcal{D}|\tau)\right)}{\partial \tau_i \partial \tau_j}\right|_{\tau=\tau_{\textrm{mle}}}\textrm{.}\end{equation}
The approximate distribution is then
\begin{equation}P(\mathcal{D}|\tau)\approx\sqrt{\frac{(2\pi)^k}{\det\left(\mathbf{A}\right)}}e^{-\frac{1}{2}\left(\mathbf{\tau}-\mathbf{\tau}_{\textrm{mle}}\right)^T \cdot A \cdot\left(\mathbf{\tau}-\mathbf{\tau}_{\textrm{mle}}\right)}\end{equation}
where $\mathbf{\tau}$ is a column vector. For the ideal single qubit we find the unbounded MLE
\begin{equation}u_{\textrm{uml}}=\frac{\arccos\left(z_f\right)}{2}\quad\quad \theta_{\textrm{uml}}=\arccos\left(\sqrt{\frac{x_f^2+y_f^2}{1-z_f^2}}\right)\quad\quad \phi_{\textrm{uml}}=\arctan\left(x_f,y_f\right)\label{unbounded}\end{equation}
where $z_f$, $x_f$, and $y_f$ are the frequency-based linear inversion estimates (LIE) of the Pauli operator expectations
\begin{equation} z_f=\frac{c_h-c_v}{c_h+c_v}\quad\quad\quad x_f=\frac{c_d-c_a}{c_d+c_a}\quad\quad\quad y_f=\frac{c_l-c_r}{c_l+c_r}\textrm{.}\label{unbounded2} \end{equation}
When $x_f^2+y_f^2+z_f^2\leq1$ these LIE are the correct MLE. However, the parameter set given in Eqs. \ref{unbounded} and \ref{unbounded2} is undefined for unphysical states, when $x_f^2+y_f^2+z_f^2>1$. When this is the case, the MLE is found on the boundary of the Bloch sphere due to the concavity of the likelihood given by Eq. \ref{tau_likelihood}. This point is not necessarily the one with the smallest Euclidean distance to the unbounded MLE. Determination of the boundary MLE is accomplished by setting $\theta=0$, restricting us to the boundary, and maximizing the parametrized likelihood Eq. \ref{tau_likelihood} over the parameter ranges $u\in\left[0,\frac{\pi}{2}\right]$ and $\phi\in\left[0,2\pi\right]$. Next, we derive a closed-form BME which always results in an estimate that obeys the quantum bounds.
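Before turning to that derivation, the MLE case distinction just described can be condensed into a short sketch (our own illustration with made-up counts; SciPy's bounded quasi-Newton optimizer stands in for the boundary maximization, with a simple multi-start to guard against local optima):
\begin{verbatim}
import numpy as np
from scipy.optimize import minimize

c_h, c_v, c_d, c_a, c_l, c_r = 7, 3, 7, 3, 0, 10   # illustrative counts

# Linear-inversion estimates, Eq. (unbounded2)
z = (c_h - c_v)/(c_h + c_v)
x = (c_d - c_a)/(c_d + c_a)
y = (c_l - c_r)/(c_l + c_r)

if x*x + y*y + z*z <= 1:            # physical: the LIE is already the MLE
    u_ml = 0.5*np.arccos(z)
    th_ml = np.arccos(np.sqrt((x*x + y*y)/(1 - z*z)))
    ph_ml = np.arctan2(y, x) % (2*np.pi)
else:                               # unphysical: boundary MLE with theta = 0
    def nll(v):                     # negative log-likelihood on the boundary
        u, ph = v
        eps = 1e-12                 # guards log(0) at the parameter edges
        p_h = np.cos(u)**2
        p_d = 0.5 + 0.5*np.sin(2*u)*np.cos(ph)
        p_l = 0.5 + 0.5*np.sin(2*u)*np.sin(ph)
        return -(c_h*np.log(p_h + eps) + c_v*np.log(1 - p_h + eps)
                 + c_d*np.log(p_d + eps) + c_a*np.log(1 - p_d + eps)
                 + c_l*np.log(p_l + eps) + c_r*np.log(1 - p_l + eps))
    starts = [(0.5, 0.5), (0.8, 2.0), (1.2, 4.5)]  # multi-start guesses
    best = min((minimize(nll, s, bounds=[(0, np.pi/2), (0, 2*np.pi)])
                for s in starts), key=lambda r: r.fun)
    u_ml, ph_ml, th_ml = best.x[0], best.x[1], 0.0
\end{verbatim}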
To calculate the BME for the single-qubit density matrix we first evaluate the normalizing constant
\begin{equation}P(\mathcal{D})=\int d\tau P(\mathcal{D}|\tau)P(\tau)\end{equation}
and then estimate our mean density matrix
\vspace{-5pt}
\begin{align}\overline{\rho}&=\frac{1}{P(\mathcal{D})}\int d\tau P(\mathcal{D}|\tau)P(\tau)\times \rho(\tau)\nonumber\\ &=\frac{1}{P(\mathcal{D})}\int du\; d\theta\; d\phi\; \frac{\sin^3\left(2u\right)\sin\left(2\theta\right)}{2\sqrt{2}}\left(\cos^2 u\right)^{c_h}\left(\sin^2 u\right)^{c_v} \nonumber \\ &\quad\times \left(\frac{1}{2}+\frac{1}{2}\sin\left(2u\right)\cos\left(\theta\right)\cos\left(\phi\right)\right)^{c_d}\left(\frac{1}{2}-\frac{1}{2}\sin\left(2u\right)\cos\left(\theta\right)\cos\left(\phi\right)\right)^{c_a}\nonumber \\ &\quad\times \left(\frac{1}{2}+\frac{1}{2}\sin\left(2u\right)\cos\left(\theta\right)\sin\left(\phi\right)\right)^{c_l}\left(\frac{1}{2}-\frac{1}{2}\sin\left(2u\right)\cos\left(\theta\right)\sin\left(\phi\right)\right)^{c_r}\nonumber \\ &\quad\times \left(\!\begin{array}{cc} \cos^2\left(u\right) & \frac{1}{2}\cos\left(\theta\right)\sin\left(2u\right)e^{i\phi} \\ \frac{1}{2}\cos\left(\theta\right)\sin\left(2u\right)e^{-i\phi} & \sin^2\left(u\right) \end{array}\!\right)\nonumber \textrm{.} \end{align}
Using the binomial theorem we can rewrite this as
\small
\begin{align}&\overline{\rho}\;=\frac{1}{P(\mathcal{D})}\int_0^{\pi/2}\!\!\!\!\!du\int_0^{\pi/2}\!\!\!\!\!d\theta\int_0^{2\pi}\!\!\!\!\!d\phi\; \frac{8\; \sin^3\left(u\right)\cos^3\left(u\right)\sin\left(\theta\right)\cos\left(\theta\right)}{\sqrt{2}}\left(\cos^2 u\right)^{c_h}\left(\sin^2 u\right)^{c_v} \nonumber \\ &\times\!\!\sum_{k_d=0}^{c_d}\!\!\binom{c_d}{k_d} 2^{-k_d} \!\left(\sin(u)\cos(u)\cos(\theta)\cos(\phi)\right)^{c_d-k_d}\sum_{k_a=0}^{c_a}\!\!\binom{c_a}{k_a} 2^{\textrm{-} k_a} \!\left(\textrm{-}\sin(u)\cos(u)\cos(\theta)\cos(\phi)\right)^{c_a-k_a}\nonumber \\ &\times\!\!\sum_{k_l=0}^{c_l}\!\!\binom{c_l}{k_l} 2^{-k_l} \!\left(\sin(u)\cos(u)\cos(\theta)\sin(\phi)\right)^{c_l-k_l}\sum_{k_r=0}^{c_r}\!\!\binom{c_r}{k_r} 2^{\textrm{-} k_r} \!\left(\textrm{-}\sin(u)\cos(u)\cos(\theta)\sin(\phi)\right)^{c_r-k_r}\nonumber \\ &\quad\times \left(\!\begin{array}{cc} \cos^2\!\left(u\right) & \frac{1}{2}\cos\!\left(\theta\right)\sin\!\left(2u\right)e^{i\phi} \\ \frac{1}{2}\cos\!\left(\theta\right)\sin\!\left(2u\right)e^{-i\phi} & \sin^2\!\left(u\right) \end{array}\!\right)\nonumber \textrm{.} \end{align}
\normalsize
The integral over $u$ has the solution
\begin{equation}\int_0^{\pi/2} du \sin^x\!u \cos^y\!u =\frac{1}{2}\;\textrm{Beta}\left(\frac{1+x}{2},\frac{1+y}{2}\right)\end{equation}
and similarly for $\theta$. The integral over $\phi$ can be shown to be
\begin{equation}\int_0^{2 \pi} d\phi\; \sin^x \phi \cos^y \phi= \frac{\left(1+(\textrm{-}1)^x\right)\left(1+(\textrm{-}1)^y\right)}{2}\;\textrm{Beta}\left(\frac{1+x}{2},\frac{1+y}{2}\right)\end{equation}
which is zero for odd $x$ or $y$.
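As a quick sanity check of the $\phi$ integral (an elementary verification we add for the reader), the lowest even cases reproduce the familiar values
$$\int_0^{2\pi}\!d\phi=2\,\textrm{Beta}\left(\tfrac12,\tfrac12\right)=2\pi,\qquad \int_0^{2\pi}\!\sin^2\!\phi\; d\phi=2\,\textrm{Beta}\left(\tfrac32,\tfrac12\right)=\pi\textrm{,}$$
since $\textrm{Beta}\left(\tfrac12,\tfrac12\right)=\pi$ and $\textrm{Beta}\left(\tfrac32,\tfrac12\right)=\tfrac{\pi}{2}$.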
To ease representation of the solutions, define
\scriptsize
\begin{align}&F_{u_0,u_1,\theta_0,\theta_1,\phi_0,\phi_1}\nonumber \\ &=\int_0^{\pi/2}\!\!\!\!\!du\int_0^{\pi/2}\!\!\!\!\!d\theta\int_0^{2\pi}\!\!\!\!\!d\phi\; \frac{8\; \sin^3\left(u\right)\cos^3\left(u\right)\sin\left(\theta\right)\cos\left(\theta\right)}{\sqrt{2}} \left(\cos^2 u\right)^{c_h}\left(\sin^2 u\right)^{c_v} \nonumber \\ &\quad\times\!\!\sum_{k_d=0}^{c_d}\!\!\binom{c_d}{k_d} 2^{-k_d} \!\left(\sin(u)\cos(u)\cos(\theta)\cos(\phi)\right)^{c_d-k_d} \sum_{k_a=0}^{c_a}\!\!\binom{c_a}{k_a} 2^{-k_a} \!\left(-\sin(u)\cos(u)\cos(\theta)\cos(\phi)\right)^{c_a-k_a}\nonumber \\ &\quad\times\!\!\sum_{k_l=0}^{c_l}\!\!\binom{c_l}{k_l} 2^{-k_l} \!\left(\sin(u)\cos(u)\cos(\theta)\sin(\phi)\right)^{c_l-k_l} \sum_{k_r=0}^{c_r}\!\!\binom{c_r}{k_r} 2^{-k_r} \!\left(-\sin(u)\cos(u)\cos(\theta)\sin(\phi)\right)^{c_r-k_r}\nonumber \\ &\quad\times \cos^{u_0}(u)\sin^{u_1}(u)\cos^{\theta_0}(\theta)\sin^{\theta_1}(\theta)\cos^{\phi_0}(\phi)\sin^{\phi_1}(\phi) \nonumber \\ &=\sum_{k_d=0}^{c_d}\sum_{k_a=0}^{c_a}\sum_{k_l=0}^{c_l}\sum_{k_r=0}^{c_r}\binom{c_d}{k_d}\binom{c_a}{k_a}\binom{c_l}{k_l}\binom{c_r}{k_r} 2^{-k_d-k_a-k_l-k_r}(\textrm{-} 1)^{c_a+c_r-k_a-k_r}\!\left(\!1\!+\!(\textrm{-} 1)^{n_d \textrm{-} k_d \textrm{-} k_a \+ \phi_0}\right)\left(\!1\!+\!(\textrm{-} 1)^{n_c \textrm{-} k_l \textrm{-} k_r \+ \phi_1}\right)\nonumber \\ &\quad\times\textrm{Beta}\left(\frac{4 \+ 2\;c_h \+ n_d \+ n_c \textrm{-} k_d \textrm{-} k_a \textrm{-} k_l \textrm{-} k_r \+ u_0}{2},\frac{4 \+ 2\;c_v \+ n_d \+ n_c \textrm{-} k_d \textrm{-} k_a \textrm{-} k_l \textrm{-} k_r \+ u_1}{2}\right)\nonumber \\ &\quad\times\textrm{Beta}\left(\frac{2+\theta_1}{2},\frac{2 \+ n_d \+ n_c \textrm{-} k_d \textrm{-} k_a \textrm{-} k_l \textrm{-} k_r \+ \theta_0}{2}\right)\;\textrm{Beta}\left(\frac{1 \+ n_d \textrm{-} k_d \textrm{-} k_a \+ \phi_0}{2},\frac{1 \+ n_c \textrm{-} k_l \textrm{-} k_r \+ \phi_1}{2}\right)\nonumber\textrm{,} \end{align}
\normalsize
where $n_d=c_d+c_a$ and $n_c=c_l+c_r$ denote the total numbers of counts in the diagonal and circular bases, respectively. The BME for our ideal single qubit is
\begin{equation}\overline{\rho}= \frac{1}{F_{0,0,0,0,0,0}} \left(\!\begin{array}{cc} F_{2,0,0,0,0,0} & F_{1,1,1,0,1,0}\+ i F_{1,1,1,0,0,1}\\ F_{1,1,1,0,1,0}\textrm{-} i F_{1,1,1,0,0,1} & F_{0,2,0,0,0,0} \\\end{array}\!\right)\textrm{.}\nonumber \end{equation}
This is the best possible estimation of the ideal single qubit given a set of data $\mathcal{D}$ and a uniform prior.

\begin{figure}[b]
\centering
\includegraphics[scale=0.5]{bloch_sphere_XZ.pdf}
\includegraphics[scale=0.5]{bloch_sphere_YZ.pdf}
\includegraphics[scale=0.5]{bloch_sphere_XY.pdf}
\includegraphics[scale=0.26]{bloch_legend.pdf}
\caption{We plot posterior marginal distributions $P(x,z|\mathcal{D})$, $P(y,z|\mathcal{D})$, and $P(x,y|\mathcal{D})$ at top left, top right, and bottom, respectively. The contour plots show the relative probability of Bloch sphere coordinates. The plotted dots represent the locations of the true state, MLE, LIE, and BME. These plots are given to emphasize the physicality of the posterior distribution used to calculate the BME--the posterior distributions conform to quantum bounds. \label{bloch}}
\end{figure}

To illustrate the physicality of the distribution $P(\tau|\mathcal{D})$ and the differences between the MLE, LIE, and the BME, consider a true quantum state $\rho_0$ defined by the parameters $u_0=0.864$, $\theta_0=0.393$, and $\phi_0=5.18$. For visualization we use the Bloch sphere, where the state is represented by the expectations of the Pauli operators given in Eqs. \ref{pauliZ}--\ref{pauliY}.
We simulated taking 10 measurements in the $Z$, $X$, and $Y$ bases from which we generated counts $c_h=7$, $c_v=3$, $c_d=7$, $c_a=3$, $c_l=0$, and $c_r=10$. We plot the distributions $P(x,z|\mathcal{D})$, $P(y,z|\mathcal{D})$, and $P(x,y|\mathcal{D})$ in Fig. \ref{bloch}. The coordinates for each estimate are given in Table \ref{tableEstimates}. This small data set emphasizes the parametrized distribution's physicality and the qualitative difference between the MLE and BME. The gray locations correspond to unphysical states. As can be seen in the top right and bottom plots in Fig. \ref{bloch}, the LIE can be unphysical. To correct this, the MLE is found on the boundary, a pure state. In contrast, the BME will always be located within the physical space. This illustration is not made to emphasize the performance of any specific approach. Performance is addressed in Section \ref{performance}.

\begin{table}[t]
\centering
\small
\begin{tabular}{|c|c|c|c|c|}
\hline
 & $z$ & $x$ & $y$ & $\sqrt{z^2+x^2+y^2}$ \\ \hline
true &-0.156&0.414&-0.813& 0.925\\\hline
MLE &0.263& 0.263& -0.928& 1.00\\\hline
LIE &0.400& 0.400& -1.00& 1.15$^\dagger$\\\hline
BME &0.226&0.216&-0.695&0.762\\\hline \hline
\end{tabular}
\normalsize
\caption{Bloch sphere coordinates for the true state, MLE, LIE, and BME. $\dagger$In this case, the LIE is unphysical.\label{tableEstimates}}
\end{table}

In order to utilize the ideal single-qubit formalism with single-qubit experiments, the data can be renormalized to the lowest-efficiency measurement, similarly to the procedure used in Appendix \ref{mleAppendix}. This method does not fully utilize the available information and has the additional complication that preliminary experiments must transpire to determine the measurement efficiencies. In the remainder of our manuscript we address qubit estimation for real experiments.

\section{Bayesian mean estimation for multi-qubit experiments}\label{Sec:twoqubits}

In an experiment the probability of observing an outcome depends not only on the quantum state but also on the measurement apparatus itself. In this case imperfections and asymmetries in the measurement process prohibit the type of ``perfect'' estimate we investigated in the last section. Our experiment of investigation is the common two-photon experiment, for which we introduce the fundamental assumptions and model below. James et al. \cite{measureQubits2001} previously reported an MLE approach to this experiment as well as higher dimensional experiments. In contrast to that method, we account for qubit loss within our defined likelihood and enable determination of the BME, the best estimate on average \cite{blume2010optimal}, which avoids MLE pitfalls such as ``zero'' probabilities, i.e., impossible outcomes.

\subsection{Estimating parameters in a single-basis two-photon experiment}\label{A}

In this section, we give an example of estimating parameters in a single-basis experiment. To begin, we assume the existence of a photon pair. A member of this pair is sent to Alice and the other one to Bob, each of whom has chosen a measurement basis as seen in Fig. \ref{setup}. A single photon can result in one of two observable orthogonal outcomes, $0$ or $1$, or one unobservable outcome: the photon is lost. All observable outcomes have probabilities of occurrence proportional to the joint probabilities $p_{00}$, $p_{01}$, $p_{10}$, and $p_{11}$ as seen in Fig. \ref{bayes_tree}a. Additionally, Fig.
\ref{bayes_tree}b illustrates the four possible outcomes for a given ``destiny'' when pathway efficiencies are considered. The possibilities include both photons being counted, giving one coincidence count and two singles counts; one photon being counted and one lost, giving one singles count; or both photons being lost, giving no counts.

\begin{figure}[tb]
\centering
\includegraphics[width=0.6\linewidth]{unknownN.pdf}
\caption{The two-photon experiment is illustrated above. One member of a photon pair, a qubit, is sent to both Alice and Bob who have each chosen a measurement basis. An individual qubit can result in one of two orthogonal outcomes, $0$ or $1$, or the qubit can be lost.\label{setup}}
\end{figure}

Alice and Bob record event $0$ or event $1$ with numbers $A_0,A_1\leq N$ and $B_0,B_1\leq N$, respectively, since typically some portion of the $N$ photons are lost. Losses are due to Alice and Bob's suboptimal pathway efficiencies $\left\{a_0,a_1,b_0,b_1\right\}\in\left[0,1\right]$. In the event both members of a photon pair are detected, Alice and Bob observe joint results, giving coincidence totals $c_{00},c_{01},c_{10},$ and $c_{11}$.

\begin{figure}[tb]
\centering
\includegraphics[width=\linewidth]{little_treeNJP.pdf}
\caption{a) Our Bayesian tree begins with the existence of a photon pair. This pair is then ``destined'' to the joint outcome $ij$ according to probability $p_{ij}$. b) A closer view of each tree branch shows that each has four possible terminations due to pathway inefficiencies. These possibilities include a joint event, a single event (one photon is lost), and no event (both photons are lost).\label{bayes_tree}}
\end{figure}

From this data we may enumerate the number of each type of event. The numbers of joint coincidence events are straightforward, given by the $c_{ij}$, with the probability of such an event being $a_i b_j p_{ij}$. The number of events where Alice registers result $i$ and Bob loses his photon is $A_i\textrm{-} c_{i0}\textrm{-} c_{i1}$. The probability of this occurrence is $a_i\left[(1-b_0)p_{i0}+(1-b_1)p_{i1}\right]$. The terms for Bob registering a photon and Alice losing her photon are similar. The number of events where both photons are lost is $N\textrm{-} A_{0}\textrm{-} A_{1}\textrm{-} B_{0}\textrm{-} B_{1}\+ c_{00}\+ c_{01}\+ c_{10}\+ c_{11}$ with probability
\begin{equation}p_{\substack{pair\\lost}}=(1\textrm{-} a_0)(1\textrm{-} b_0)p_{00}+(1\textrm{-} a_0)(1\textrm{-} b_1)p_{01}+(1\textrm{-} a_1)(1\textrm{-} b_0)p_{10}+(1\textrm{-} a_1)(1\textrm{-} b_1)p_{11}\textrm{.}\nonumber\end{equation}
For now, assume the photon number $N$ is known.
In this case, using Bayes' rule, the event numbers, and the probabilities given above, the PD is
\begin{equation}P\left(\alpha|\mathcal{D},N\right)=\frac{P\left(\mathcal{D},N|\alpha\right)P(\alpha)}{P\left(\mathcal{D},N\right)}\end{equation}
where $\alpha$=$\left\{p_{00},p_{01},p_{10},p_{11},a_0,a_1,b_0,b_1\right\}$ are the unknown parameters (joint probabilities and pathway efficiencies), and the data set $\mathcal{D}$=$\left\{c_{00},c_{01},c_{10},c_{11},A_0,A_1,B_0,B_1\right\}$ consists of the known singles and coincidence count values totalling
\begin{equation}s=A_0+A_1+B_0+B_1\quad\quad\quad n=c_{00}+c_{01}+c_{10}+c_{11}\textrm{,}\nonumber\end{equation}
respectively,
\begin{align} &P(\mathcal{D},N|\alpha)=\nonumber\\ &\gamma\!\left(N\right)(a_0b_0p_{00})^{c_{00}}(a_0b_1p_{01})^{c_{01}}(a_1b_0p_{10})^{c_{10}}(a_1b_1p_{11})^{c_{11}}\nonumber\\ &\qquad\times [a_0\left(p_{00}(1-b_0)+p_{01}(1-b_1)\right)]^{A_0-c_{00}-c_{01}}[a_1\left(p_{10}(1-b_0)+p_{11}(1-b_1)\right)]^{A_1-c_{10}-c_{11}}\nonumber\\ &\qquad\times [b_0\left(p_{00}(1-a_0)+p_{10}(1-a_1)\right)]^{B_0-c_{00}-c_{10}}[b_1\left(p_{01}(1-a_0)+p_{11}(1-a_1)\right)]^{B_1-c_{01}-c_{11}}\nonumber\\ &\qquad\times[p_{00}(1\textrm{-} a_0)(1\textrm{-} b_0)+p_{01}(1\textrm{-} a_0)(1\textrm{-} b_1)+p_{10}(1\textrm{-} a_1)(1\textrm{-} b_0)+p_{11}(1\textrm{-} a_1)(1\textrm{-} b_1)]^{N-(s-n)}\textrm{,}\nonumber\\ &P(\alpha)=1\textrm{,}\nonumber\\ &P(\mathcal{D},N)=\!\!\!\!\int\!\!d\alpha \;P(\mathcal{D},N|\alpha)P(\alpha)\textrm{,}\nonumber\\ &\gamma\!\left(N\right)=\frac{N!}{(N\!-\!(s\!-\!n))!(A_0\textrm{-} c_{00}\textrm{-} c_{01})!(A_1\textrm{-} c_{10}\textrm{-} c_{11})!(B_0\textrm{-} c_{00}\textrm{-} c_{10})!(B_1\textrm{-} c_{01}\textrm{-} c_{11})!c_{00}!c_{01}!c_{10}!c_{11}!}\textrm{.}\nonumber\end{align}
The likelihood $P(\mathcal{D},N|\alpha)$ consists of the probability of each type of event with a multiplicity equal to the number of times it occurred. Both the probabilities and numbers of events were described in the preceding paragraph. We have retained the full form of the likelihood that includes $\gamma(N)$ for use below.

It is typical in two-photon experiments that the photon number $N$ is not known. If $N$ is known, the following step may be skipped and the above PD is the appropriate choice. Otherwise, we must make $N$ an unobserved parameter or seek a way to eliminate it. Fortunately, there is an analytical method to remove $N$ from the PD completely \cite{jaynes2003probability} by taking an average over the $N$ distribution using the summation formula
\begin{equation}\sum_{m=0}^\infty\binom{m+y}{m}m^zx^m=\left(x\frac{d}{dx}\right)^z(1-x)^{-(y+1)}\textrm{.}\label{sumOverN}\end{equation}
Since the average is taken over the distribution, see Fig. \ref{estimates}, only probable values of $N$ will have appreciable contribution.
Applying this formula, $N$ is removed giving PD \begin{equation}P\left(\alpha|\mathcal{D}\right)=\frac{\sum_{N=s-n}^{\infty}P\left(\mathcal{D},N|\alpha\right)P(\alpha)}{P\left(\mathcal{D}\right)}=\frac{P\left(\mathcal{D}|\alpha\right)P(\alpha)}{P\left(\mathcal{D}\right)}\label{PD}\end{equation} where \begin{align}& P(\mathcal{D}|\alpha)=a_0^{A_0}a_1^{A_1}b_0^{B_0}b_1^{B_1}p_{00}^{c_{00}}p_{01}^{c_{01}}p_{10}^{c_{10}}p_{11}^{c_{11}}\nonumber\\ &\qquad\times[p_{00}(1-b_0)+p_{01}(1-b_1)]^{A_0-c_{00}-c_{01}}[p_{10}(1-b_0)+p_{11}(1-b_1)]^{A_1-c_{10}-c_{11}}\nonumber\\ &\qquad\times[p_{00}(1-a_0)+p_{10}(1-a_1)]^{B_0-c_{00}-c_{10}}[p_{01}(1-a_0)+p_{11}(1-a_1)]^{B_1-c_{01}-c_{11}}\nonumber\\ &\qquad\times[1\textrm{-} p_{00}(1\textrm{-} a_0)(1\textrm{-} b_0)\textrm{-} p_{01}(1\textrm{-} a_0)(1\textrm{-} b_1)\textrm{-} p_{10}(1\textrm{-} a_1)(1\textrm{-} b_0)\textrm{-} p_{11}(1\textrm{-} a_1)(1\textrm{-} b_1)]^{-s+n-1}\textrm{,}\label{likelihood2}\\ &P(\alpha)=1\textrm{,}\nonumber\\ &P(\mathcal{D})=\int\!\!d\alpha\;P(\mathcal{D}|\alpha)P(\alpha)\textrm{,}\nonumber\end{align} and, in this specific case, \begin{equation}\int\! d\alpha \!\equiv\! \!\int_0^1\!\!\!\!da_0\!\int_0^1\!\!\!\!da_1\!\int_0^1\!\!\!\!db_0\!\int_0^1\!\!\!\!db_1\!\!\int_{0}^1\!\!\!\! dp_{00}\!\! \int_{0}^{1-p_{00}}\hspace{-25pt}dp_{01}\!\!\int_{0}^{1-p_{00}-p_{01}}\hspace{-42pt}dp_{10}\nonumber \end{equation} with $p_{11}=1-p_{00}-p_{01}-p_{10}$. We omitted all constants. Assuming the integral can be carried out, we can make estimates of any parameter via its mean value, for instance, \begin{equation}\overline{p_{00}}=\int\!\!d\alpha P\left(\alpha|\mathcal{D}\right)\times p_{00}.\end{equation} Likewise, any other parameter mean $\;\overline{p_{ij}}\;$, $\;\overline{a_{i}}\;$, or $\;\overline{b_{i}}\;$ as well as their standard deviations may be estimated. One exception is the mean value $\overline{N}$. We find this mean by setting $z=1$ in Eq. (\ref{sumOverN}), \begin{equation}\overline{N}=\int\!\!d\alpha P\left(\alpha|\mathcal{D}\right)\times\frac{s-n+g(\alpha)}{1-g(\alpha)}\end{equation} where \begin{equation}g(\alpha)=p_{00}(1\textrm{-} a_0)(1\textrm{-} b_0)+p_{01}(1\textrm{-} a_0)(1\textrm{-} b_1)+p_{10}(1\textrm{-} a_1)(1\textrm{-} b_0)+p_{11}(1\textrm{-} a_1)(1\textrm{-} b_1)\textrm{.}\end{equation} In principle, all of the above integrals have analytical solutions via the multinomial theorem, \begin{equation}(x_0+x_1+\cdots+x_m)^n =\sum_{k_0+k_1+\cdots+k_m=n}\!\!\binom{n}{k_0,k_1,\ldots,k_m}x_0^{k_0}x_1^{k_1}\cdots x_m^{k_m}\textrm{,}\nonumber\end{equation} which gives exact answers in the form of sums of Beta and Gamma functions. However, the computation needed to carry out the resultant sums is prohibitive. If we cannot efficiently make our parameter estimations analytically, we can utilize numerical sampling to approximate the BMEs of interest. We discuss this in detail in Appendix \ref{ss}. 
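For numerical work it is convenient to evaluate Eq. \ref{likelihood2} in log space. The following Python sketch is our own illustrative helper, with the parameter and data orderings chosen purely for convenience:

\begin{verbatim}
import numpy as np

def log_likelihood(alpha, D):
    """log P(D|alpha), Eq. (likelihood2), up to a constant.
    alpha = (p00, p01, p10, p11, a0, a1, b0, b1)
    D     = (c00, c01, c10, c11, A0, A1, B0, B1)"""
    p00, p01, p10, p11, a0, a1, b0, b1 = alpha
    c00, c01, c10, c11, A0, A1, B0, B1 = D
    s, n = A0 + A1 + B0 + B1, c00 + c01 + c10 + c11
    g = (p00*(1-a0)*(1-b0) + p01*(1-a0)*(1-b1)
         + p10*(1-a1)*(1-b0) + p11*(1-a1)*(1-b1))
    return (A0*np.log(a0) + A1*np.log(a1)
            + B0*np.log(b0) + B1*np.log(b1)
            + c00*np.log(p00) + c01*np.log(p01)
            + c10*np.log(p10) + c11*np.log(p11)
            + (A0-c00-c01)*np.log(p00*(1-b0) + p01*(1-b1))
            + (A1-c10-c11)*np.log(p10*(1-b0) + p11*(1-b1))
            + (B0-c00-c10)*np.log(p00*(1-a0) + p10*(1-a1))
            + (B1-c01-c11)*np.log(p01*(1-a0) + p11*(1-a1))
            + (n - s - 1)*np.log(1 - g))
\end{verbatim}

Given this, samples from $P\left(\alpha|\mathcal{D}\right)$ can be drawn with any standard Markov chain Monte Carlo method. As a simple stand-in for the slice sampler of Appendix \ref{ss}, a random-walk Metropolis sketch over the seven free parameters (with $p_{11}$ fixed by normalization and a flat prior over the allowed domain) is:

\begin{verbatim}
def sample_posterior(D, steps=50000, sigma=0.01, seed=2):
    """Metropolis sampler; illustrative only, step size
    and burn-in need tuning for any particular data set."""
    rng = np.random.default_rng(seed)
    full = lambda x: (x[0], x[1], x[2],
                      1 - x[0] - x[1] - x[2],
                      x[3], x[4], x[5], x[6])
    ok = lambda x: (x.min() > 0 and x[:3].sum() < 1
                    and x[3:].max() < 1)
    x = np.array([0.25, 0.25, 0.25, 0.5, 0.5, 0.5, 0.5])
    ll, chain = log_likelihood(full(x), D), []
    for _ in range(steps):
        xp = x + rng.normal(scale=sigma, size=7)
        if ok(xp):
            llp = log_likelihood(full(xp), D)
            if np.log(rng.random()) < llp - ll:
                x, ll = xp, llp
        chain.append(full(x))
    return np.array(chain)

# counts from the single-basis simulation below
D = (829, 89, 1245, 1624, 1079, 4553, 4474, 2565)
chain = sample_posterior(D)
means = chain[10000:].mean(axis=0)  # discard burn-in
\end{verbatim}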
If the probability of obtaining a sample $\alpha^{(r)}$ tends to the true probability $P(\alpha^{(r)}|\mathcal{D})$, the mean estimations can then be made by repetitive sampling, \begin{align}\overline{\alpha}=\frac{1}{R}\sum_{r=1}^R\alpha^{(r)}&=\frac{1}{R}\sum_{r=1}^R\left\{p_{00}^{(r)},p_{01}^{(r)},p_{10}^{(r)},p_{11}^{(r)},a_0^{(r)},a_1^{(r)},b_0^{(r)},b_1^{(r)}\right\}\nonumber\\ &=\left\{\overline{p_{00}},\;\overline{p_{01}},\;\overline{p_{10}},\;\overline{p_{11}},\;\overline{a_{0}},\;\overline{a_{1}},\;\overline{b_{0}},\;\overline{b_{1}}\right\}\textrm{.}\nonumber\end{align} \subsection{Single-basis simulation} \begin{figure}[b] \centering \includegraphics[width=0.32\linewidth]{prob_histogram_paper_colors.pdf} \includegraphics[width=0.32\linewidth]{eta_histogram_paper_colors.pdf} \includegraphics[width=0.32\linewidth]{N_histogram_paper_colors.pdf} \caption{Left. Sample histograms approximating the probability distribution for each parameter $p_{ij}$, whose true value is given by the black vertical line. Middle. Sample histograms approximating the probability distribution for each efficiency parameter $a_{0}$, $a_{1}$, $b_{0}$, and $b_{1}$, whose true value is given by the black vertical line. Right. A histogram of samples approximating the probability distribution for the photon number $N$.\label{estimates}} \end{figure} Consider a single-basis simulation where unbeknownst to Alice and Bob a source generates $N$=$10,000$ photon pairs with joint probabilities and pathway efficiencies \begin{align}p_{00}&=0.3\quad\quad p_{01}=0.05\quad\quad p_{10}=0.2\quad\quad p_{11}=0.45\nonumber\\ a_{0}&=0.3\quad\quad\; a_{1}=0.7\quad\quad \;\;\;b_{0}=0.9\quad\quad \;\;b_{1}=0.5\;\textrm{.}\nonumber\end{align} The only information available to Alice and Bob are their count numbers \begin{align}A_0&=1079\quad\quad A_{1}=4553\quad\quad B_{0}=4474\quad\quad B_{1}=2565\nonumber\\ c_{00}&=829\phantom{0}\quad\quad c_{01}=89\phantom{00}\quad\quad c_{10}=1245\quad\quad c_{11}\!=1624\nonumber\textrm{.}\end{align} Numerical sampling, see Appendix \ref{ss}, is used to produce samples $\alpha^{(r)}$ from $P\left(\alpha|\mathcal{D}\right)$. Fig. \ref{estimates} includes histograms for each parameter from 25,200 $\alpha$ samples. Each parameter histogram contains 100 bins. This large sample size was chosen to illustrate that the samples do come from a distribution. For the typical application a much smaller sample size would likely be adequate. From the distribution $P\left(\alpha|\mathcal{D}\right)$ the parameter mean values are found to be \begin{align}\overline{p_{00}}=0.300\pm 0.008& \quad\overline{p_{01}}=0.059 \pm 0.007 \quad \overline{p_{10}}=0.191\pm0.007&&\quad\overline{p_{11}}=0.450\pm0.009\nonumber\\ \overline{a_{0}}=0.303\pm0.011&\quad \overline{a_{1}}=0.716\pm0.014 \quad \overline{b_{0}}=0.918\pm0.018&&\quad \overline{b_{1}}=0.508\pm0.011\nonumber\end{align} and mean photon number $\overline{N}=9926\pm 50.9$. Comparison with the above true values shows qualitative agreement. \subsection{Parametrizing the n-dimensional density matrix}\label{C} For approachability, we obscured the construction of the density matrix for the ideal single qubit given in Eq. \ref{ideal_rho}. We briefly describe this construction here; for a full proof with discussion see \cite{daboul1967conditions}. We note that this construction is similar to that recently proposed by Seah et al. 
\cite{MCsamplingQStates_II} whose density matrix sampling application is similar to our approach. Daboul's parametrization can be extended to quantum systems of any dimension. For an $n$-dimensional Hilbert space, the density matrix is formed using the Cholesky decomposition, requiring $\rho=L(\!\tau\!)L(\!\tau\!)^\dagger$ with \begin{equation}L(\!\tau\!)\! =\!\!\left(\!\begin{array}{ccccc} L_{11} (\!\tau\!)\!\!& 0& 0 & \cdots& 0 \\ L_{21}(\!\tau\!)\! \! & L_{22}(\!\tau\!)\!\! & 0 & \cdots& 0 \\ L_{31}(\!\tau\!)\! \! & L_{32}(\!\tau\!)\!\!& L_{33}(\!\tau\!)\!\! & \cdots& 0 \\ \vdots& \vdots& \vdots & \ddots& \vdots \\ L_{n1}(\!\tau\!)\!\! & L_{n2}(\!\tau\!)\!\! & L_{n3}(\!\tau\!)\!\! &\cdots& \!\!\! L_{nn}(\!\tau\!) \end{array}\right)\nonumber\vspace{5pt}\end{equation} being a lower triangular matrix with positive real diagonal elements. The parameter set $\tau$ includes $n^2-1$ parameters, which describe a unique density matrix. The elements $L_{ij}$ may be written as \begin{align} L_{ij} (\!\tau\!)&=U_i V_{ij}\phantom{0}\quad\quad (j\leq i)\nonumber \\ L_{ij} (\!\tau\!)&=0\phantom{U_i V_{ij}}\quad\quad (j> i) \nonumber \end{align} where \begin{align}U_{1}&=\cos\left(u_1\right) \hspace{0.3\linewidth} V_{ii}=1 \nonumber \\ U_{k}&=\cos\left(u_k\right)\prod_{j=1}^{k-1}\sin\left(u_j\right)\quad \!\!\textrm{\scriptsize $(1<k<n)$} \;\hspace{0.05\linewidth} V_{i1}=\cos\left(\theta_{i1}\right)e^{i\phi_{i1}}\quad (i>1)\nonumber \\ U_{n}&=\prod_{j=1}^{n-1}\sin\left(u_j\right) \hspace{0.26\linewidth} V_{ik}=\cos\left(\theta_{ik}\right)e^{i\phi_{ik}}\prod_{j=1}^{k-1}\sin\left(\theta_{ij}\right)\quad \!\!\textrm{\scriptsize $(1<k<i)$} \textrm{.}\nonumber \end{align} Consider the case of two qubits with dimension $n$=$4$; the parametrized matrix elements of $L(\!\tau\!)$ are \begin{align}L_{11}(\!\tau\!)\!&=\!\cos(u_1)\nonumber\\ L_{21}(\!\tau\!)\!&=\!\sin(u_1)\cos(u_2)\cos(\theta_{21})e^{i \phi_{21}}\nonumber\\ L_{22}(\!\tau\!)\!&=\!\sin(u_1)\cos(u_2) \sin(\theta_{21})\nonumber\\ L_{31}(\!\tau\!)\!&=\!\sin(u_1)\sin(u_2)\cos(u_3)\cos(\theta_{31})e^{i \phi_{31}}\nonumber\\ L_{32}(\!\tau\!)\!&=\!\sin(u_1)\sin(u_2)\cos(u_3)\sin(\theta_{31})\cos(\theta_{32})e^{i \phi_{32}}\nonumber\\ L_{33}(\!\tau\!)\!&=\!\sin(u_1)\sin(u_2)\cos(u_3)\sin(\theta_{31})\sin(\theta_{32})\nonumber\\ L_{41}(\!\tau\!)\!&=\!\sin(u_1)\sin(u_2)\sin(u_3)\cos(\theta_{41})e^{i \phi_{41}}\nonumber\\ L_{42}(\!\tau\!)\!&=\!\sin(u_1)\sin(u_2)\sin(u_3)\sin(\theta_{41})\cos(\theta_{42})e^{i \phi_{42}}\nonumber\\ L_{43}(\!\tau\!)\!&=\sin(u_1)\sin(u_2)\sin(u_3)\sin(\theta_{41})\sin(\theta_{42})\cos(\theta_{43})e^{i \phi_{43}}\nonumber\\ L_{44}(\!\tau\!)\!&=\!\sin(u_1)\sin(u_2)\sin(u_3)\sin(\theta_{41})\sin(\theta_{42})\sin(\theta_{43}) \nonumber \end{align} with $u_{i}\in[0,\frac{\pi}{2}]$, $\theta_{ij}\in[0,\frac{\pi}{2}]$, and $\phi_{ij}\in[0,2\pi]$. Indeed, one could instead change the $u_i$ and $\theta_{ij}$ trigonometric terms to \begin{equation}\cos(u_i)\rightarrow \sqrt{u_i'}\quad\quad \sin(u_i)\rightarrow \sqrt{1-u_i'}\quad\quad \cos(\theta_i)\rightarrow \sqrt{\theta_i'}\quad\quad \sin(\theta_i)\rightarrow \sqrt{1-\theta_i'}\nonumber \end{equation} with $u_i'\in[0,1]$, $\theta_{ij}'\in[0,1]$. The complex terms involving $\phi_{ij}$ remain unchanged. A similar adjustment was used by Chung and Trueman \cite{ChungTrueman}. 
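As a concrete check of this parametrization, the short sketch below (0-based indices; our own conventions) builds $L(\tau)$ for $n$=$4$ from random angles and confirms that $\rho=L(\tau)L(\tau)^\dagger$ has unit trace and is positive semidefinite.

\begin{verbatim}
import numpy as np

def L_tau(u, theta, phi):
    """Cholesky factor L(tau) for n = 4; u holds 3 angles,
    theta/phi are keyed by 0-based (row, col), col < row."""
    U = np.array([
        np.cos(u[0]),
        np.sin(u[0])*np.cos(u[1]),
        np.sin(u[0])*np.sin(u[1])*np.cos(u[2]),
        np.sin(u[0])*np.sin(u[1])*np.sin(u[2])])
    L = np.zeros((4, 4), dtype=complex)
    for i in range(4):
        V = 1.0  # running product of sin(theta_ij)
        for j in range(i):
            L[i, j] = (U[i]*V*np.cos(theta[i, j])
                       *np.exp(1j*phi[i, j]))
            V *= np.sin(theta[i, j])
        L[i, i] = U[i]*V  # V_ii = 1 times accumulated sines
    return L

rng = np.random.default_rng(0)
u = rng.uniform(0, np.pi/2, 3)
theta = {(i, j): rng.uniform(0, np.pi/2)
         for i in range(4) for j in range(i)}
phi = {(i, j): rng.uniform(0, 2*np.pi)
       for i in range(4) for j in range(i)}
L = L_tau(u, theta, phi)
rho = L @ L.conj().T
assert np.isclose(np.trace(rho).real, 1.0)
assert np.linalg.eigvalsh(rho).min() > -1e-12
\end{verbatim}

The unit trace follows because $\sum_j |L_{ij}|^2=|U_i|^2$ and $\sum_i |U_i|^2=1$, so the $3+6+6=15=n^2-1$ angles indeed parametrize a valid two-qubit state.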
\subsection{Estimating parameters in a multi-basis two-photon experiment}\label{multi} In sections \ref{A} and \ref{C}, respectively, we defined our experimental likelihood for the single-basis experiment and detailed the parametrization of any $n$-dimensional density matrix. To make estimations using data from a multi-basis two-photon experiment we will use both of these pieces. Our example will be full-state tomography. Other multi-basis experiments will have similar estimation constructions. In the case that the data set is incomplete, our method will still return an estimate true to both the given data and all quantum constraints. To complete full-state tomography Alice and Bob each take measurements in bases $Z$, $X$, and $Y$ such that all outcomes are observable in each basis combination $ZZ$, $ZX$, $XZ$, $ZY$, $YZ$, $XX$, $XY$, $YX$, and $YY$. Thus, Alice and Bob's data set will include the data from all 9 basis combinations. The likelihood is a product of the single-basis likelihoods, Eq. \ref{likelihood2}, from each of these basis combinations, \begin{equation}P(\mathcal{D}|\alpha)\!=\!P(\mathcal{D}_{ZZ}|\alpha_{ZZ})P(\mathcal{D}_{ZX}|\alpha_{ZX})\cdots P(\mathcal{D}_{YY}|\alpha_{YY})\label{likelihood_product}\end{equation} where $\alpha$ includes the probabilities of all measurement outcomes and the four experimental pathway efficiencies, which we assume are the same over all bases. However, in the experimental section, Section \ref{Sec:Experimental}, we do not make this assumption. Next, we parametrize our density matrix using the hyperspherical parameters described in the previous section, \begin{equation}P(\mathcal{D}|\alpha)\rightarrow P(\mathcal{D}|\tau)\textrm{.}\end{equation} This parametrization comes with a new measure defined by Eqs. \ref{measure}, \ref{dTau}, and \ref{gIJ}. Putting it all together we can make any BME of interest, for instance the mean density matrix \begin{equation}\overline{\rho}\;= \frac{1}{P(\mathcal{D})}\int d\tau P(\mathcal{D}|\tau) P(\tau) \times \rho(\tau) \label{hardintegral}\end{equation} where $P(\mathcal{D})=\int d\tau P(\mathcal{D}|\tau) P(\tau)$. If it is not computationally convenient to evaluate the integrals of the type given in Eq. \ref{hardintegral}, we can utilize numerical sampling. If we can draw samples $\rho^{(r)}$ from the distribution $P(\tau|\mathcal{D})$ we can estimate the BME of our density matrix as \begin{equation}\overline{\rho}\;= \lim_{R\rightarrow\infty}\frac{1}{R} \sum_{r=1}^{R} \rho^{(r)}\label{PDlast}\textrm{.}\end{equation} We address our numerical sampling approach in Appendix \ref{ss}. \subsection{State certainty} When reporting the values of experimental measurements such as the visibility of an interference curve $V$ or the value of the Bell parameter $S$, it is typical to provide a standard deviation to describe the uncertainty in the parameter, e.g. $V=0.98\pm 0.01$ or $S=2.65\pm 0.05$. This gives a quantification of the uncertainty in the estimate. 
When the BME is multi-dimensional the uncertainty can be represented by a covariance matrix \cite{blume2010optimal,Granade2016} \begin{equation}\Delta \rho(\tau)\!=\!\left(\!\begin{array}{cccc} \Delta \tau_0^2 & \Delta \tau_0 \tau_1 & \cdots &\Delta \tau_0 \tau_k\\ \Delta \tau_1 \tau_0 & \Delta \tau_1^2 & \cdots &\Delta \tau_1 \tau_k\\ \vdots & \vdots & \ddots & \vdots\\ \Delta \tau_k \tau_0 & \Delta \tau_k \tau_1 & \cdots &\Delta \tau_k^2\\ \end{array}\!\right)\end{equation} with each element being a covariance, \begin{equation}\Delta \tau_i \tau_j= \overline{\tau_i \tau_j}-(\overline{\tau_i})(\overline{\tau_j})\end{equation} where $\overline{\tau_i}$ is the expectation (mean) value of $\tau_i$. For $i=j$ this is just the usual variance. Here the $k$ parameters include the $n^2-1$ parameters needed to define the density matrix as well as any additional experimental parameters such as the efficiencies. We can also define a single quantity that provides a compact representation of the certainty in the estimation. We use the \emph{trace distance deviation} $\Delta D$, which is the mean trace distance of the distribution from the distribution's mean density matrix $\overline{\rho}$, \begin{equation}\Delta D=\int d\tau D\textrm{\large$($}\;\overline{\rho},\rho(\tau)\textrm{\large$)$}\textrm{.}\label{dd}\end{equation} The trace distance is \begin{equation}D(\rho,\sigma)=\frac{1}{2}\textrm{Tr}\left(\sqrt{\left(\rho-\sigma\right)^2}\right)=\frac{1}{2}\sum_{i}\left|\lambda_i\right|\label{traceD}\end{equation} where the $\lambda_i$ are the eigenvalues of $\rho-\sigma$. We approximate $\Delta D$ with numerical sampling using the formula \begin{equation}\Delta D\approx\frac{1}{R}\sum_{r=1}^R D\textrm{\large$($}\;\overline{\rho},\rho^{(r)}\textrm{\large$)$}\textrm{.}\end{equation} When the certainty is high, all samples will be close to the mean value, giving $\Delta D\rightarrow 0$; as with the usual standard deviation, smaller is better. This measure is also useful when there is no particular state with which to compare the estimate, as one does when reporting a fidelity. \section{BME performance with numerical sampling}\label{performance} To characterize the performance of the presented estimation methods we used the following procedure. For each $N\in\{10,10^2,10^3,10^4,10^5\}$ the following steps are repeated: \begin{enumerate}[1.] \item A density matrix $\rho$ is sampled from a uniform distribution using a Haar measure in the hyperspherical parameter space. \item A random set of pathway efficiencies $a_0, a_1, b_0,$ and $b_1$ are chosen from the range $\left[0,1\right]$. These are chosen to be the same across all bases. \item We simulate a two-photon experiment for $N$ identical states $\rho$ in each of the 9 bases given in Section \ref{multi}, $9N$ total identical states, to generate a data set $\mathcal{D}$. Each simulated experiment for a single basis is the same as described by the Bayesian tree in Section \ref{A}. \item Using $\mathcal{D}$ we find the MLE using a traditional likelihood and the actual randomly chosen pathway efficiencies, as described in Appendix~\ref{mleAppendix}. Also with $\mathcal{D}$, we find the BME and MLE using the experiment-specific likelihood in which the pathway efficiencies are not known, as described in Section \ref{multi}. Thus, the traditional MLE has the unfair and unrealistic advantage of knowing the pathway efficiencies exactly. \item The distance $D$, Eq. \ref{traceD}, is found between each estimate and the true state $\rho$. 
\item If the experiment-specific BME or MLE is closer to the true state than the traditional MLE, that estimation type has a win tallied. \item Steps 1.-6. are repeated for $1000$ repetitions. \item The average distance $\overline{D}$ over all $1000$ repetitions is found for the traditional MLE approach and the experiment-specific BME and MLE. The total wins versus the traditional MLE are also recorded. \end{enumerate} For these simulations the average distance $\overline{D}$ results are given at top in Fig.~\ref{results}, and the win percentages for the experiment-specific likelihood MLE and BME versus the traditional MLE are given at bottom in Fig.~\ref{results}. The MLE was found using a gradient ascent method described in Appendix~\ref{searchAppendix}. We emphasize that these results are conservative, since we give the traditional MLE process the pathway efficiencies exactly; these would not be known exactly in an experiment. \begin{figure}[tbh] \centering \includegraphics[width=0.6\linewidth]{avgDistanceData.pdf} \includegraphics[width=0.6\linewidth]{wins.pdf} \caption{We generated data from simulating two-photon experiments for various states and photon pair number $N$ as outlined in this section. Top. We have plotted the average distance estimate for photon pair number $N$ over 1000 randomly sampled states. Bottom. We give the win percentage for the experiment-specific BME and MLE versus the traditional maximum likelihood method. The best performance is achieved using the experiment-specific Bayesian mean estimate. Another observation from this data is that an experimentalist can achieve a better estimate by switching to an experiment-specific likelihood, which allows them to forgo any preliminary experiments to determine normalizing constants.\label{results}}\end{figure} The states $\rho$ were drawn from a uniform distribution which is Haar invariant when using the measure obtained from Eqs. \ref{measure}, \ref{dTau}, and \ref{gIJ}. We also use the same distribution as a prior to compute our BME estimate. We also made estimates using a non-Haar invariant measure to evaluate the significance of prior selection. This results in a drastically different prior relative to that used in generating the random state. In Fig. \ref{results} the experiment-specific BME with the original advantageous prior and the ``bad'' prior are both plotted. As can be seen, there is possibly a small gain from using the advantageous prior for the smaller photon pair number estimations. But it also highlights that the prior choice can be made effectively inconsequential given enough data \cite{jaynes2003probability}. The prior certainly can improve the estimate when little data is available. Granade and colleagues discuss this in depth \cite{Granade2016}. \section{Experimental tomography}\label{Sec:Experimental} We performed state tomography on a two-photon polarization entangled target state \begin{equation}\left|\Psi^+\right\rangle=\frac{1}{\sqrt{2}}\left(\left|H_A\right\rangle\otimes\left|V_B\right\rangle+\left|V_A\right\rangle\otimes\left|H_B\right\rangle\right)\end{equation} generated by pumping a periodically poled potassium titanyl phosphate (PPKTP) nonlinear crystal inside a Sagnac loop with two counterpropagating 405nm pump beams \cite{SagnacSource,tamperSeal}. 
The two possibilities of Type II \footnote{The signal and idler photons are produced with orthogonal polarizations in Type II SPDC.} spontaneous parametric downconversion (SPDC), namely that either the clockwise or the counter-clockwise beam generated an 810nm photon pair, lead to a polarization entangled state output into the idler and signal modes received by Alice and Bob, respectively. Alice and Bob each choose a basis by inclusion or omission of waveplates. Since this requires a physical adjustment to our apparatus for each basis choice, we assume in our likelihood that the efficiencies are independent parameters in each basis. The half-wave and quarter-wave plate matrix operations are, respectively, \begin{equation}\textrm{H}=\left( \begin{array}{cc} \frac{1}{\sqrt{2}} & \frac{1}{\sqrt{2}} \\ \frac{1}{\sqrt{2}} & \frac{-1}{\sqrt{2}} \end{array}\right) \quad \textrm{Q}=\left( \begin{array}{cc} 1 & 0 \\ 0 & i \end{array}\right)\textrm{.}\nonumber \end{equation} To measure in basis $Z$ Alice omits her waveplates. To measure in $X$ she includes the half-wave plate, operating on her single photon with $H$. Finally, to measure in the $Y$ basis she includes both waveplates, operating with $Q$ then $H$. Single-photon detectors record the detection mode, orthogonal outcomes 0 or 1 in each basis. \begin{figure}[t] \centering \includegraphics[width=0.38\linewidth]{experiment.pdf} \caption{Our two-photon polarization entangled state is generated by pumping a nonlinear PPKTP crystal inside a Sagnac loop with two counterpropagating pump beams, each of which may generate Type II SPDC pairs. This leads to a polarization entangled state shared by Alice and Bob. Alice and Bob each choose a basis by inclusion or omission of waveplates. Single-photon detectors record the detection mode, 0 or 1. ds$\equiv$dichroic splitter, pbs$\equiv$polarizing beamsplitter, pf$\equiv$pump filter, hwp$\equiv$ half-wave plate, qwp$\equiv$ quarter-wave plate\label{experiment}} \end{figure} From the experimental data given in Table 1, our mean density matrix is found to be \begin{equation}\overline{\rho}\!=\!\! \left(\!\begin{array}{cccc} 0.01 & 0.03\+ i0.00 & 0.03\+ i0.00 & \textrm{-} 0.00\textrm{-} i0.01 \\ 0.02\textrm{-} i0.00 & 0.48 & 0.48\textrm{-} i0.02 &\textrm{-} 0.01\textrm{-} i0.04 \\ 0.03\textrm{-} i0.00 & 0.48\+ i0.02 & 0.49 & \textrm{-} 0.01\textrm{-} i0.05 \\ \textrm{-}0.00\+ i0.01 & \textrm{-}0.01\+ i0.04 & \textrm{-} 0.01\+ i0.05 & 0.02 \end{array}\!\right)\nonumber\end{equation} with trace distance deviation $\Delta D=0.006$ defined in Eq. \ref{dd}. We have reported only 2 significant digits in $\;\overline{\rho}\;$ for brevity. Every element has a finite value, i.e. every outcome has a non-zero probability of occurrence. The fidelity of our mean $\overline{\rho}$ with the intended state $\Psi^+$ is \begin{equation}\mathcal{F}=\sqrt{\left\langle \Psi^+ \right|\;\overline{\rho}\;\left|\Psi^+\right\rangle }=0.9838\pm0.0005 \textrm{.}\nonumber\end{equation} We have not removed accidental coincidences from our estimation; we have assumed this contribution is negligible. 
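These statements are easy to check numerically. The sketch below (our own illustrative code) verifies the waveplate algebra, recomputes the fidelity from the displayed $\;\overline{\rho}\;$, and implements the trace distance of Eq. \ref{traceD}. Note that, with the $Q$ matrix as printed, the $Y$ measurement is realized up to an overall sign, i.e. a relabeling of the two outcomes; sign conventions for the waveplates vary.

\begin{verbatim}
import numpy as np

X = np.array([[0, 1], [1, 0]])
Y = np.array([[0, -1j], [1j, 0]])
Z = np.array([[1, 0], [0, -1]])
H = np.array([[1, 1], [1, -1]])/np.sqrt(2)  # half-wave plate
Q = np.array([[1, 0], [0, 1j]])             # quarter-wave plate

# a Z measurement after H is an X measurement; after Q then H
# it is a Y measurement up to outcome relabeling (the sign)
assert np.allclose(H.conj().T @ Z @ H, X)
assert np.allclose((H @ Q).conj().T @ Z @ (H @ Q), -Y)

# mean density matrix as displayed above (2 digits)
rho_bar = np.array(
    [[ 0.01,        0.03+0.00j,  0.03+0.00j, -0.00-0.01j],
     [ 0.02-0.00j,  0.48,        0.48-0.02j, -0.01-0.04j],
     [ 0.03-0.00j,  0.48+0.02j,  0.49,       -0.01-0.05j],
     [-0.00+0.01j, -0.01+0.04j, -0.01+0.05j,  0.02      ]])
psi = np.array([0, 1, 1, 0])/np.sqrt(2)  # |Psi+>
F = np.sqrt((psi.conj() @ rho_bar @ psi).real)
# F ~ 0.982, matching the quoted 0.9838 at this rounding

def trace_distance(rho, sigma):
    """D(rho, sigma), Eq. (traceD)."""
    lam = np.linalg.eigvalsh(rho - sigma)
    return 0.5*np.abs(lam).sum()

# Delta D of Eq. (dd) from posterior samples rho_r
# (rho_r is a hypothetical array of sampled states):
# dd = np.mean([trace_distance(rho_bar, r) for r in rho_r])
\end{verbatim}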
\begin{table}[t] \centering \small \begin{tabular}{|c|c|c|c|c|c|c|c|c|} \hline Basis & $A_0$ & $A_1$ & $B_0$ & $B_1$ & $c_{00}$ & $c_{01}$ & $c_{10}$ & $c_{11}$ \\ \hline $ZZ$ &47718&50367&45793&44942&189&7302&7903&250\\\hline $ZX$ &47117&50726&45467&45831&2735&3826&4075&5061\\\hline $XZ$ &45985&51051&45509&44441&4077&3643&3806&4317\\\hline $ZY$ &47775&51018&46149&45415&2579&4382&4650&4545\\\hline $YZ$ &44564&49626&45739&44157&3382&4155&4414&3505\\\hline $XX$ &46547&50920&45186&45658&6801&104&148&9083\\\hline $XY$ &45630&50932&44970&44155&3131&3770&3309&4638\\\hline $YX$ &44553&49430&45364&45428&3775&3318&2909&4650\\\hline $YY$ &44499&49666&45718&45152&6586&61&177&8915\\\hline $\textrm{Dark}$ &418&460&406&440&0&0&0&0\\ \hline \end{tabular} \normalsize \caption{Experimental tomography data for our two-photon experiment. Each count was accumulated over 1 second. A final dark count was taken with the photon source blocked.} \end{table} \section{Conclusions} We have presented a novel method of Bayesian mean estimation using hyperspherical parametrization and an experiment-specific likelihood. This method has allowed us to derive a closed-form BME for the ideal single qubit and to develop a numerical approach to approximating the BME for a two-qubit experiment using numerical slice sampling. Our approach offers the real-world benefit of eliminating the need for preliminary experiments in common two-photon experiments by accounting for qubit loss within the likelihood. Our method is also scalable beyond two-qubit systems. Finally, we illustrated our approach by applying it to the measurement data obtained from a real-world two-photon entangled state. \section{Acknowledgements} We would like to thank Nick Peters, Ryan Bennink, and Robin Blume-Kohout for comments, criticisms, and suggestions regarding this manuscript. This work was supported by the Oak Ridge National Laboratory Postdoctoral Program. This manuscript has been authored by UT-Battelle, LLC, under Contract No. DE-AC05-00OR22725 with the U.S. Department of Energy. \section{References} \bibliographystyle{unsrt}
\section{\label{sec:introduction}Introduction} In the past decades, outstanding progress has been made in reproducing the properties of the strong interaction by numerically calculating QCD correlators on a Euclidean spacetime lattice. One goal of such calculations is to extract various aspects of nuclear structure from the underlying theory, and a target quantity here is the nucleon axial charge, $g_A$, defined via \begin{equation} \label{eq:gAdef} \langle N, \textbf p,\sigma' \vert A^{a}_\mu(0) \vert N, \textbf p, \sigma \rangle = g_A \overline u_{\sigma'}(\textbf p) \Gamma_{A,\mu}^{a} u_{\sigma}(\textbf p) \,, \end{equation} where $A^{ a}_\mu \equiv \overline {\mathcal Q}\Gamma_{A,\mu}^{a } \mathcal Q$ and $\Gamma_{A,\mu}^{a } = T^a \gamma_\mu \gamma_5$. Here $T^a = \tau^a/2$ are the generators of $SU(2)$ isospin and $\mathcal Q$ is a doublet containing the up and down quarks. We have also introduced single nucleon states with momentum $\textbf p$ and spin $\sigma, \sigma'$ as well as their corresponding spinors $\overline u_{\sigma'}(\textbf p)$ and $ u_{\sigma}(\textbf p)$. These are also isospin doublets built from the proton and neutron. In this work we use Euclidean conventions for the gamma matrices, $\{\gamma_\mu , \gamma_\nu \}=2 \delta_{\mu \nu}$. The axial charge is in many ways an ideal quantity for lattice QCD (LQCD). In particular, it can be directly accessed from plateaus in Euclidean correlators and does not contain the noisy quark-disconnected diagrams. However, as a nuclear quantity, it suffers from the signal-to-noise problem and this is only made worse in the three-point function required to create a nucleon, couple it to the axial current and then annihilate it. For some time now, lattice calculations of $g_A$ have been prone to underestimate the quantity.% \footnote{See, for example, Refs.~\cite{MarthaRev2014,GreenRev2016,ETMAvx2016,BhattacharyaAxial2016,QCDSFAxial2015,EigoAMA2016}.} % Possible explanations for this include underestimated systematic uncertainties from extrapolation to the physical pion mass, from finite-volume effects and from excited-state contamination. This work is concerned with the latter. Specifically we are interested in excited-state contamination in the context of a ratio of Euclidean three- and two-point correlators, constructed to satisfy \begin{multline} \label{eq:introesc} R(T,t) \underset{T \gg t \gg 0}{\longrightarrow} g_A \\ + \sum_{n=2} \left ( b_n \big (e^{- \Delta E_n (T - t)}+ e^{- \Delta E_n t} \big ) + c_n e^{- \Delta E_n T} \right ) \,, \end{multline} where we have dropped subleading exponentials as we explain in detail in the following section. Here we introduce $T$ as the nucleon source-sink separation, $t$ as the current insertion time, and $\Delta E_n$ as the gap between the nucleon mass and the $(n-1)$th excited state. The coefficients $b_n$ and $c_n$ are related to finite-volume matrix elements as given in Eqs.~(\ref{eq:bndef}) and (\ref{eq:cndef}) below. The excited-state contribution to $R(T,t)$ has been recently studied in both non-relativistic \cite{BrianGA} and relativistic \cite{BarTwoPoint,BarGA} baryon chiral perturbation theory (ChPT). In both cases the authors find that the leading-order (LO) ChPT predictions are independent of the form of the nucleon interpolators.% \footnote{This assumes local three-quark operators. 
As is carefully discussed in Ref.~\cite{BarTwoPoint}, the prediction also holds for smeared operators, provided that the smearing radius is sufficiently small.} % This leads to the universal prediction that $b_n>0$, and thus that the excited-state contamination is positive. Since the predictions for $b_n$ and $c_n$ depend only on $g_A$, the pion decay constant, $f_\pi$, and known kinematic quantities, the ChPT expressions could in principle be used to remove the leading excited-state contribution in order to more accurately extract $g_A$. To make use of the LO ChPT results, however, one must ensure that these describe present-day numerical LQCD data. As $g_A$ is often extracted from the central value, $R(T,T/2)$, or by fitting a constant to a range of central values, determining the $T$ values needed for $R(T,T/2)$ to enter the regime of LO ChPT is particularly useful. If the source-sink separation is too small, then the set of finite-volume states needed to estimate $R(T,T/2)$ goes beyond the region described by the leading-order prediction. Indeed, the curvature of nearly all available numerical LQCD data for $R(T,T/2)$ as a function of $T$ is negative, indicating negative excited-state contamination, in contradiction with the LO ChPT prediction.% \footnote{Again see Refs.~\cite{MarthaRev2014,GreenRev2016,ETMAvx2016,BhattacharyaAxial2016,QCDSFAxial2015,EigoAMA2016}. One exception here is the curvature of the correlator data of Ref.~\cite{Ohta2015}. It is unclear why the results of this work differ from the rest. One possibility is that, as compared to other calculations, the interpolators used in this study have enhanced coupling to the lower excited states.} % Similarly, at fixed $T$, $R(T,t)$ is consistently observed to have negative curvature as a function of the current-insertion time, $t$. We take this as strong evidence that, in present day LQCD calculations, the values of $T$ are too small for $R(T,t)$ to be well described by the LO ChPT results. In this paper we show that, under plausible assumptions, one can reproduce the qualitative behavior of numerical LQCD correlators by including the contributions of higher-energy states, taking into account $N \pi$ final-state interactions, and postulating a sign change in the infinite-volume axial-vector transition amplitude, $\langle N \pi, \mathrm{out} \vert A_\mu \vert N \rangle$. Using experimentally-determined $N \pi$ scattering data in a generalization of L\"uscher's quantization condition \cite{Luscher1986, Luscher1990}, we predict the energies of the finite-volume excited states entering $\Delta E_n$. We then use a generalization of the Lellouch-L\"uscher formalism, again with experimental scattering data, to relate the finite-volume matrix elements in $b_n$ and $c_n$ to infinite-volume matrix elements involving $N \pi$ asymptotic states \cite{Lellouch2000}. To complete the construction, we estimate the remaining infinite-volume matrix elements in a model based on LO ChPT, supplemented by the scattering data. Within this set-up we find that a large number of excited states give an important contribution to $R(T,T/2)$ for realistic $T$ values, and that a sign flip in the axial-vector transition can readily accommodate the empirically observed negative excited-state contamination [see Figs.~\ref{fig:esc1} and \ref{fig:esc2} below]. We find that, for physical pion masses, $T \gtrsim 2 \mathrm{\, fm}$ is needed to enter the regime where LO ChPT describes the lattice correlators. 
This analysis suffers from various limitations that prevent us from offering reliable quantitative predictions. The most important limitation is the neglect of $N \pi \pi$ states. Here we only study the energies and matrix elements of finite-volume $N \pi$ states. Both the L\"uscher quantization condition and the Lellouch-L\"uscher formalism hold only for energies below the three-particle production threshold, but in this work we also include energies above $N \pi \pi$ threshold where the relations develop uncontrolled systematic uncertainties. There is evidence that the breakdown of the formalism turns on slowly as one crosses multi-particle thresholds,% \footnote{See, for example, the phase-shifts extracted above multi-particle thresholds in Ref.~\cite{WilsonCoupRho2015}.} % but in the vicinity of the Roper resonance, the neglected three-particle states could have a significant contribution. [See also the discussion in the paragraph following Eq.~(\ref{eq:omNfree}) below.] Other limitations of this study include the modeling of the infinite-volume matrix elements, explained in detail in Sec.~\ref{sec:infME}, as well as the restriction to physical pion masses. The latter is a natural limitation given our approach of working with experimental scattering data. As a result, the predictions for $R(T,t)$ discussed in Sec.~\ref{sec:contam} are most directly applicable to ensembles near the physical point. As an aside we comment that, in order to have a solid theoretical foundation for this work, it was necessary to make contact with the LO ChPT results derived in Refs.~\cite{BarTwoPoint,BarGA}. In these earlier publications, $\Delta E_n$ is approximated using non-interacting $N \pi$ states in a finite volume, so that the work is concerned only with predicting the coefficients, $b_n$ and $c_n$. Since we are using the Lellouch-L\"uscher formalism to predict the coefficients in this study, it was necessary to first understand how this formalism can be used to reproduce the LO ChPT results. We were able to make this connection in detail, re-deriving some of the expressions reported in Refs.~\cite{BarTwoPoint,BarGA}. This is interesting in its own right as it shows how the Lellouch-L\"uscher formalism provides a shortcut for extracting ChPT predictions of these and related quantities. In particular, the numerous one-loop diagrams needed to determine $b_n$ in Ref.~\cite{BarGA} are replaced in the present approach by five tree-level diagrams. Details are given in the appendix. The remainder of this article is organized as follows. In the following section we define the correlators, the ratio $R$ and the parameters $\Delta E_n$, $b_n$ and $c_n$ that describe the excited states. In Sec.~\ref{sec:es} we use experimental partial wave data to estimate the interacting energy gaps $\Delta E_n$ associated with $N \pi$ states. Then in Sec.~\ref{sec:LL} we give estimates for the coefficients $b_n$ and $c_n$. This leads to estimates of the excited state contamination for typical present-day lattice set-ups, presented in Sec.~\ref{sec:contam}. In the appendix we detail the derivation of various ChPT expressions used in the main text. \section{\label{sec:extractGA}Extracting $g_A$ from the lattice} Various methods exist for using numerical LQCD to determine $g_A$. 
Common to all approaches is the determination of two- and three-point correlators of the form \begin{align} \begin{split} \label{eq:C3def} C^{}_3(T,t) & \equiv \int d^3 \textbf x \int d^3 \textbf y \ \Gamma'_{\mu, \alpha \beta} \\ & \hspace{50pt} \times \langle \mathcal O_\beta(\textbf x, T) A^{3}_\mu(\textbf y, t) \overline {\mathcal O}_\alpha(0) \rangle \,, \end{split} \\ C^{}_2(T) & \equiv \int d^3 \textbf x \ \Gamma_{\alpha \beta} \langle \mathcal O_\beta(\textbf x, T) \overline {\mathcal O}_\alpha(0) \rangle \,, \label{eq:C2def} \end{align} where $\overline {\mathcal O}_\alpha$, ${\mathcal O}_\beta$ are proton interpolating fields, $A^{3}_\mu$ is the third isospin component of the axial vector current, and $\Gamma'$ and $\Gamma$ are projectors. In this work we restrict attention to states that have zero three-momentum in the finite-volume frame. Defining $\widetilde {\mathcal O}^{}_\beta(T) \equiv \int d^3 \textbf x \ \mathcal O_\beta(\textbf x, T)$, $\widetilde A^{3}_\mu(t) \equiv \int d^3 \textbf y \ A^{3}_\mu(\textbf y, t) $, and performing a spectral decomposition, we reach \begin{align} \label{eq:sd3} \begin{split} C^{}_3(T,t) & \equiv L^{-3} \sum_{n,m} \ \Gamma'_{\mu, \alpha \beta} \langle 0 \vert \widetilde {\mathcal O}^{}_\beta \vert n \rangle \langle n \vert \widetilde A^{3}_\mu \vert m \rangle \\[-10pt] & \hspace{60pt} \times \langle m \vert \widetilde {\overline {\mathcal O}}^{}_\alpha \vert 0 \rangle e^{- E_n(T-t)} e^{- E_m t} \,, \end{split} \\[5pt] C^{}_2(T) & \equiv L^{-3} \sum_{n} \ \Gamma_{\alpha \beta} \langle 0 \vert \widetilde {\mathcal O}^{}_\beta \vert n \rangle \langle n \vert \widetilde{ \overline {\mathcal O}}^{}_\alpha \vert 0 \rangle e^{- E_n T} \,, \label{eq:sd2} \end{align} where we have assumed $T>t>0$ and have used the shorthand $\widetilde {\mathcal O}^{}_\beta \equiv \widetilde {\mathcal O}^{}_\beta(0)$ and similar for $\widetilde A^{3}_\mu$. To treat the fields equivalently we have Fourier transformed $\overline {\mathcal O}_\alpha$ over the spatial volume but have also divided by volume to preserve the definitions. Throughout this work all finite-volume states are normalized as $\langle n \vert n \rangle = 1$. We next observe that the lowest state in the sum, denoted by $n,m=1$, is the single nucleon state. From this it follows that the ratio of the $n,m=1$ terms in $C_3(T,t)$ and $C_2(T)$ gives $g_A$ \begin{equation} g_A \equiv \frac{\Gamma'_{\mu, \alpha \beta} \langle 0 \vert \widetilde {\mathcal O}^{}_\beta \vert 1 \rangle \langle 1 \vert \widetilde A^{3}_\mu \vert 1 \rangle \langle 1 \vert \widetilde {\overline {\mathcal O}}^{}_\alpha \vert 0 \rangle}{\Gamma_{\alpha \beta} \langle 0 \vert \widetilde {\mathcal O}^{}_\beta \vert 1 \rangle \langle 1 \vert \widetilde{\overline {\mathcal O}}^{}_\alpha \vert 0 \rangle} \,. \end{equation} This relies on the definitions of $\Gamma$ and $\Gamma'$, which are constructed to ensure that the result holds. It follows that $g_A$ can be accessed by identifying a plateau in the ratio \begin{equation} R^{}(T,t) \equiv \frac{C^{}_3(T,t)}{C^{}_2(T)} \,. 
\end{equation} Substituting the spectral decompositions, Eqs.~(\ref{eq:sd3}) and (\ref{eq:sd2}), taking $T \gg t \gg 0$ and expanding the denominator, we find \begin{multline} \label{eq:Rdecom} R^{}(T,t) = g_A + \sum_{n=2}^\infty \bigg [ b_n \big ( e^{- \Delta E_n (T - t)} + e^{- \Delta E_n t} \big ) \\ + c_n e^{- \Delta E_n T } + \cdots \bigg ] \,, \end{multline} where $\Delta E_n \equiv E_n - E_1 = E_n - m_N + \mathcal O(e^{- M_\pi L})$, with $E_n$ the energy of the $(n-1)$th excited state, $m_N$ the nucleon mass and $M_\pi$ the pion mass. Here we have introduced $L$ as the linear spatial extent of the volume and have used the fact that finite-volume corrections to the nucleon mass are exponentially suppressed. We neglect such corrections throughout. In Eq.~(\ref{eq:Rdecom}) we have also introduced \begin{align} \label{eq:bndef} b_n & \equiv \frac{ \Gamma'_{\mu, \alpha \beta} \langle 0 \vert {\widetilde {\mathcal O}}^{}_\beta \vert n \rangle \langle n \vert \widetilde A^{3}_\mu \vert 1 \rangle \langle 1 \vert \widetilde {\overline {\mathcal O}}^{}_\alpha \vert 0 \rangle}{ \Gamma_{ \alpha \beta} \langle 0 \vert {\widetilde {\mathcal O}}^{}_\beta \vert 1 \rangle \langle 1 \vert \widetilde {\overline {\mathcal O}}^{}_\alpha \vert 0 \rangle } \,, \\ c_n & \equiv - g_A c_{2,n} + c_{3,n} \,, \label{eq:cndef} \end{align} where \begin{align} c_{2,n} & = \frac{\Gamma_{ \alpha \beta} \langle 0 \vert {\widetilde {\mathcal O}}^{}_\beta \vert n \rangle \langle n \vert \widetilde {\overline {\mathcal O}}^{}_\alpha \vert 0 \rangle }{\Gamma_{ \alpha \beta} \langle 0 \vert {\widetilde {\mathcal O}}^{}_\beta \vert 1 \rangle \langle 1 \vert \widetilde {\overline {\mathcal O}}^{}_\alpha \vert 0 \rangle} \,, \label{eq:c2ndef}\\ \label{eq:c3ndef} c_{3,n} & = \frac{\Gamma'_{\mu, \alpha \beta} \langle 0 \vert {\widetilde {\mathcal O}}^{}_\beta \vert n \rangle \langle n \vert \widetilde A^{3}_\mu \vert n \rangle \langle n \vert \widetilde {\overline {\mathcal O}}^{}_\alpha \vert 0 \rangle}{\Gamma_{ \alpha \beta} \langle 0 \vert {\widetilde {\mathcal O}}^{}_\beta \vert 1 \rangle \langle 1 \vert \widetilde {\overline {\mathcal O}}^{}_\alpha \vert 0 \rangle} \,. \end{align} Note that the definition for $b_n$, Eq.~(\ref{eq:bndef}), directly arises from the coefficient on the first exponential, $ e^{- \Delta E_n (T - t)} $, whereas the factor multiplying the second exponential, $ e^{- \Delta E_n t}$, has a different definition. However, as long as $\Gamma'_{\mu}$ is anti-hermitian and $\Gamma$ is hermitian, then Euclidean definitions of charge-conjugation and time-reversal invariance imply $R(T,t) = R(T,T-t)=R^*(T,t)$. Thus the two coefficients are identically equal and we take $b_n$ as the coefficient for both source-to-current and current-to-sink time dependence. As can be seen by comparing the definitions, Eqs.~(\ref{eq:bndef}) and (\ref{eq:cndef}), the matrix elements required to access the source-to-sink coefficient, $c_n$, are more complicated than those needed for $b_n$. The first term in the definition of $c_n$, proportional to $c_{2,n}$ defined in Eq.~(\ref{eq:c2ndef}), arises from expanding the excited-state contamination of $C_2(T)$ in the denominator. This term depends on the same matrix elements that appear in the definition of $b_n$ and can be studied using the same approach. The second term in $c_n$, $c_{3,n}$ defined in Eq.~(\ref{eq:c3ndef}), arises from source-to-sink contributions in $C_3(T,t)$ and is thus more complicated. 
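Before estimating the ingredients of Eq.~(\ref{eq:Rdecom}) individually, it is convenient to be able to evaluate the truncated sum numerically. A minimal Python sketch follows; all inputs are user-supplied and the numbers shown are purely illustrative, not predictions.

\begin{verbatim}
import numpy as np

def R(T, t, gA, dE, b, c):
    """Eq. (Rdecom), truncated to the supplied tower of
    excited states. T, t in GeV^-1 if dE is in GeV."""
    dE, b, c = map(np.asarray, (dE, b, c))
    return gA + np.sum(
        b*(np.exp(-dE*(T - t)) + np.exp(-dE*t))
        + c*np.exp(-dE*T))

fm = 5.068  # GeV^-1 per fm
print(R(T=1.5*fm, t=0.75*fm, gA=1.27,
        dE=[0.35], b=[0.05], c=[-0.02]))  # illustrative
\end{verbatim}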
The term $c_{3,n}$ turns out to be numerically suppressed in LO ChPT and thus unimportant in our qualitative study. With this in mind, in this work we simply use the LO ChPT result for $c_{3,n}$ and only apply the Lellouch-L\"uscher-like analysis to $b_n$ and $c_{2,n}$. The ellipsis in Eq.~(\ref{eq:Rdecom}) stands for terms suppressed by additional factors of $e^{- \Delta E_m t}$, $e^{- \Delta E_m (T-t)}$ or $e^{- \Delta E_m T}$. These neglected terms arise for two reasons. One contribution is from higher orders in the expansion of $C_2(T)$ in the denominator. This expansion is not required but is a good approximation and simplifies the resulting expressions. The second neglected contribution is from terms in Eq.~(\ref{eq:sd3}) with $n \neq m$, with both indices corresponding to excited states. Such terms involve two-to-two (rather than one-to-two) axial-vector matrix elements and are expected to be suppressed relative to those we keep. We caution that these two-to-two transitions are not necessarily volume suppressed. For example, LO ChPT predicts the same volume dependence for $c_{2,n}$ and $c_{3,n}$. This is the case because, at leading order, the current mediating the two-to-two transition couples only to one of the two particles. When the current couples to both particles an extra factor of volume suppression does arise.% \footnote{For full details on the generalization of the Lellouch-L\"uscher approach to two-to-two matrix elements with one-to-one subprocesses, see Ref.~\cite{BH2to2}.}% The aim of this work is to estimate the value of the sum in Eq.~(\ref{eq:Rdecom}) for given $T$ and $t$. In the following section we study $\Delta E_n$ and in Sec.~\ref{sec:LL} we turn to $b_n$ and $c_n$. \section{\label{sec:es} Estimating the excited-state energies} The finite-volume quantization condition derived by L\"uscher~\cite{Luscher1986,Luscher1990} has since been extended to include moving frames, non-identical and non-degenerate particles, coupled two-particle channels, and particles with spin~\cite{Rummukainen1995, KSS2005, Christ2005, Lage2009, Bernard2010, Doring2011, HSmultiLL, BricenoTwoPart2012, Fu2012, Gockeler2012, BricenoSpin, BHOneToTwoSpin}. These extensions can be used to estimate the finite-volume energies that appear in $R(T,t)$. In particular, in the range $m_N + M_\pi < E_n < m_N + 2 M_\pi$, the finite-volume energies can be determined using the L\"uscher quantization condition by inputting the experimentally determined phase shift for $N \pi$ scattering. It is useful to consider these energies relative to the energies of non-interacting particles in a finite volume. The non-interacting levels are determined by constraining the momentum to satisfy $\textbf p = 2 \pi \textbf n/L$, where $L$ is the linear extent of the volume and $\textbf n$ is a three-vector of integers. This constraint is appropriate for a cubic finite spatial volume with periodic boundary conditions, and in this work we restrict ourselves to this simplest set-up. 
In Fig.~\ref{fig:freelevels} we display the non-interacting energies as a function of $M_\pi L$, given by \begin{multline} E_n \in \Big \{ \{ \omega_{\pi, \textbf n} + \omega_{N, \textbf n} \} , \{ \omega_{\pi, \textbf n}+ \omega_{\pi, \textbf m} + \omega_{N, \textbf n + \textbf m} \}, \cdots \Big \} \,, \end{multline} where \begin{align} \omega_{\pi, \textbf n} & \equiv \sqrt{M_\pi^2 + (2 \pi/L)^2 \textbf n^2} \,, \\[5pt] \omega_{N, \textbf n} & \equiv \sqrt{m_N^2 + (2 \pi/L)^2 \textbf n^2} \,, \label{eq:omNfree} \end{align} and where the ellipsis indicates four- (or more) particle states. As described in the figure caption, we are interested in states that have the quantum numbers of a single nucleon, $I(J^{P}) = \nicefrac[]{1}{2} (\nicefrac[]{1}{2}^{+})$. For this reason the state with a pion and nucleon both at rest does not contribute. This state only couples to the $s$-wave and thus has negative parity due to the intrinsic parity of the pion. \begin{figure} \begin{center} \vspace{20pt} \includegraphics[scale=0.45]{figs/fig1.pdf} \end{center} \caption{Energy levels of non-interacting finite-volume states, with quantum numbers of a single nucleon at rest in the finite-volume frame. The location of the $N \pi$ threshold is indicated by the dashed horizontal line. The state with this energy is not included because its parity is opposite that of the nucleon. The lowest solid horizontal line indicates the single nucleon energy, and the gap from here determines the size of the contributions to $R(T,t)$. Finally, we have included three different types of finite-volume states, distinguished by three colors. Blue levels are back-to-back $N \pi$ states, green levels are $N \pi \pi$ states with one pion at rest, and magenta are $N \pi \pi$ states with the nucleon at rest. For the latter two sets only the first few levels are shown to avoid clutter.} \label{fig:freelevels} \end{figure} Also apparent from Fig.~\ref{fig:freelevels} is that, for physical pion masses and realistic values of $M_\pi L$, the L\"uscher formalism only rigorously describes, at most, the first excited state. For $E_n> m_N + 2 M_\pi$, an extended three-particle formalism is required. This has recently been developed by one of us for three-pion states, and the extension to general three-particle systems is underway \cite{LtoK,KtoM}. Because the three-particle formalism is not yet directly applicable to $N \pi \to N \pi \pi$, in this work we restrict attention to the two-particle formalism, but also apply it above threshold where the predictions suffer from systematic uncertainties. As we explain more in Sec.~\ref{sec:contam}, an important conclusion of our analysis is that finite-volume states in the vicinity of the Roper resonance can contribute significantly to $R(T,t)$. Given that the Roper has a $\sim\!40\%$ branching fraction to $N \pi \pi$ \cite{PDG}, three-particle states certainly need to be included to offer reliable quantitative predictions in this region. However, barring delicate cancellations between two- and three-particle states, the {qualitative} conclusions presented here are expected to hold. Three-particle contributions may also be enhanced when the energy of an $N \pi$ pair within $N \pi \pi$ is close to the delta resonance. Indeed, in the ChPT analysis of Ref.~\cite{BrianGA} the delta resonance was also included and was found to reduce the value of $R(T,T/2)$. 
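Returning to the non-interacting spectrum: the back-to-back $N \pi$ levels of Fig.~\ref{fig:freelevels} are simple to tabulate, e.g. with the sketch below (approximate physical masses; the $N \pi \pi$ levels are built analogously from momenta summing to zero). Only integers $\textbf n^2$ realizable as a sum of three squares occur.

\begin{verbatim}
import numpy as np

M_PI, M_N = 0.14, 0.94  # GeV, approximate masses

def free_Npi_levels(mpiL, nmax=4):
    """E = omega_pi(n) + omega_N(n), p = (2 pi/L)|n|;
    n = 0 is excluded (wrong parity)."""
    L = mpiL/M_PI
    rng = range(-nmax, nmax + 1)
    n2s = sorted({nx*nx + ny*ny + nz*nz
                  for nx in rng for ny in rng
                  for nz in rng} - {0})
    p2 = (2.0*np.pi/L)**2*np.array(n2s, float)
    return np.sqrt(M_PI**2 + p2) + np.sqrt(M_N**2 + p2)

print(free_Npi_levels(4.0))  # GeV, at M_pi L = 4
\end{verbatim}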
Finally we stress that one can only use the L\"uscher quantization condition with scattering amplitudes that take the unitary form of elastic two-particle scattering, Eq.~(\ref{eq:MJdef}). Thus, in this approximation we must also neglect the inelasticity in the two-particle scattering amplitude. [See also Footnote 6.] In the following subsections we present our prediction for the finite-volume energy gaps $\Delta E_n$. First, in Sec.~\ref{sec:qc}, we give the quantization condition for general two-particle systems and show how it can be reduced to describe the $N \pi$ states of interest. Then, in Sec.~\ref{sec:expt}, we use the experimental phase-shift data to predict the finite-volume spectrum. \subsection{Reducing the quantization condition} \label{sec:qc} The quantization condition for particles with spin is a straightforward generalization of L\"uscher's original result. Indeed a wide class of generalizations are all described by the same basic form~\cite{Luscher1986, Luscher1990, Rummukainen1995, KSS2005, Christ2005, Bernard2008, Lage2009, Bernard2010, Doring2011, HSmultiLL, BricenoTwoPart2012, Fu2012, Gockeler2012, BricenoSpin, BHOneToTwoSpin} \begin{equation} \label{eq:qc} \det \Big[\mathcal M^{-1}(E_n) + F(E_n, L) \Big ] = 0 \,. \end{equation} Here $\mathcal M$ is the two-to-two scattering amplitude and $F$ is a known geometric function. This result describes any two-particle system, with any number of two-particle channels with identical or non-identical particles, degenerate or non-degenerate masses and with arbitrary spin. To describe a specific system one need only specify the exact definitions, and in particular the index space, for $\mathcal M$ and $F$. As a preliminary example we consider a system with one channel of two non-identical {\em scalars} with masses $M_\pi$ and $m_N$. In this case both $\mathcal M$ and $F$ have two sets of spherical harmonic indices. The scattering amplitude is a diagonal matrix in this space, whose entries are related in the standard way to scattering phase shifts. $F$, by contrast, has on- and off-diagonal entries. This encodes the mixing of partial waves due to the reduced symmetry of the box. $F$ can be written as a sum-integral-difference~\cite{Luscher1986, Luscher1990, Rummukainen1995, KSS2005, Christ2005} \begin{multline} \label{eq:Fmatrix} F_{\ell', m'; \ell, m}(E, L) \equiv \bigg[ \frac{1}{L^3} \sum_{\textbf k} - \int \frac{d^3 \textbf k}{(2 \pi)^3} \bigg ] \\ \times \frac{4 \pi Y^*_{\ell', m'}(\hat{\textbf k} ) Y_{\ell,m}(\hat{\textbf k} ) }{2 \omega_\pi 2 \omega_N (E - \omega_\pi - \omega_N + i \epsilon) } \left ( \frac{k}{p} \right )^{\ell + \ell'} \,, \end{multline} where \begin{align} \omega_\pi \equiv \sqrt{M_\pi^2 + k^2} \,, \ \ \omega_N \equiv \sqrt{m_N^2 + k^2} \,, \end{align} $k = \vert \textbf k \vert$, $\hat {\textbf k} = \textbf k/k$, and the sum runs over all ${\textbf k} = (2\pi/L) {\textbf n}$, ${\textbf n} \in \mathbb{Z}^3$. In Eq.~(\ref{eq:Fmatrix}) an ultraviolet regulator is needed to make the quantity well defined. Since the sum and integral have the same ultraviolet divergence, a universal result is recovered as the regulator is removed. Here $p$ is the magnitude of CM frame momentum for particles with energy $E$ and masses $M_\pi$ and $m_N$ \begin{equation} \label{eq:pdef} E \equiv \sqrt{M_\pi^2 + p^2} + \sqrt{m_N^2 + p^2} \,. \end{equation} To incorporate spin in this system it is most straightforward to first work in the basis where the nucleon is polarized along some fixed direction in its CM frame. 
This new degree of freedom, denoted by $\sigma$, can be accommodated with two simple modifications. First, the amplitude gains an additional index, $\mathcal M = \mathcal M_{\ell', m', \sigma'; \ell, m, \sigma}$. Second, the kinematic matrix $F$ is multiplied with a Kronecker delta, $\delta_{\sigma' \sigma}$. This completely defines the scalar-nucleon quantization condition. Indeed, the arbitrary-spin quantization condition is given by simply multiplying the $F$ matrices with Kronecker deltas \cite{BricenoSpin,BHOneToTwoSpin}. \bigskip Next, to connect with experimental phase shifts, it is convenient to change to the basis of total angular momentum, $J$, orbital angular momentum, $\ell$, and azimuthal component of total angular momentum, $\mu$. The basis change is effected by contracting both sets of indices with standard Clebsch-Gordan coefficients. The amplitude in the new basis can be written% \footnote{Above three-particle threshold this expression no longer applies and an additional parameter must be introduced to parametrize the inelasticity. Here we are neglecting the inelasticity, even above multi-particle threshold. This approximation, which is consistent with the neglect of $N \pi \pi$ states in the L\"uscher quantization condition, breaks down as the energy increases. } \begin{equation} \label{eq:MJdef} \mathcal M_{J', \ell', \mu'; J, \ell, \mu} \equiv \delta_{J' J} \delta_{\ell' \ell} \delta_{\mu' \mu} \frac{8 \pi E}{p \cot \delta_{J,\ell}(p) - i p} \,. \end{equation} Note that the conservation of orbital angular momentum is special to the meson-baryon system. Generally orbital angular momenta will mix, but in this case conservation of total angular momentum implies that $\ell$ could at most couple with $\ell \pm 1$. Since changing by one unit flips parity, this coupling vanishes and $\ell$ is conserved. $F$ in the new basis is given by~\cite{Gockeler2012,BricenoSpin} \begin{multline} \label{eq:FJdef} F_{J', \ell', \mu'; J, \ell, \mu} \equiv \\ \sum_{m,\sigma,m'} \langle \ell \ m, \nicefrac12 \ \sigma \vert J \mu \rangle \langle \ell' \ m', \nicefrac12 \ \sigma \vert J' \mu' \rangle F_{ \ell', m'; \ell, m} \,. \end{multline} We make one final simplification before introducing approximations. One can show that the imaginary parts of $\mathcal M^{-1}$ and $F$ perfectly cancel in Eq.~(\ref{eq:qc}), giving \begin{equation} \det \Big [ \overline F_{J' \ell' \mu'; J \ell \mu} + \delta_{J' J} \delta_{\ell' \ell} \delta_{\mu' \mu} \cot \delta_{J,\ell}(p) \Big ] = 0 \,, \end{equation} where $\overline F = 8 \pi E \mathrm{Re}[F] /p$. We now reduce the quantization condition to a determinant of a finite-dimensional matrix by ignoring high partial waves. It turns out that, in the even-parity sector, we reach the simplest possible truncation by neglecting $\delta_{J,\ell}$ for $\ell \geq 3$. Then the system is truncated to the $\ell=1$ space. In this space $\overline F_{J' \ell' \mu'; J \ell \mu}$ is a $6 \times 6$ matrix: a $2 \times 2$ block for $\ell=1,J=1/2$, a $4 \times 4$ block for $\ell=1,J=3/2$. To determine its specific form we first note that \begin{equation} \overline F_{\ell'=1, m'; \ell=1, m} = - \frac{1}{q \pi^{3/2}} Z_{00}(1,q^2) \delta_{m'm} \,, \end{equation} where $Z_{00}$ is the L\"uscher zeta-function described in Ref.~\cite{Luscher1990} and $q \equiv p L/(2 \pi)$. The fact that $\overline F$ is proportional to the identity matrix in the $\ell=1$ subspace is preserved when we change to the $J$ basis. 
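This can be verified directly: the Clebsch-Gordan transformation of Eq.~(\ref{eq:FJdef}) is real and orthogonal, so a matrix proportional to the identity in the $(m,\sigma)$ basis remains proportional to the identity in the $(J,\mu)$ basis. A small sketch, using \texttt{sympy} for the Clebsch-Gordan coefficients:

\begin{verbatim}
import numpy as np
from sympy import S
from sympy.physics.quantum.cg import CG

half = S(1)/2
ms = [(m, s) for m in (-1, 0, 1) for s in (half, -half)]
Jmu = ([(half, mu) for mu in (half, -half)]
       + [(3*half, mu)
          for mu in (3*half, half, -half, -3*half)])

# U[a, b] = <l=1 m, 1/2 sigma | J mu>, cf. Eq. (FJdef)
U = np.array([[float(CG(1, m, half, s, J, mu).doit())
               for (m, s) in ms]
              for (J, mu) in Jmu])

assert np.allclose(U @ U.T, np.eye(6))  # real orthogonal
# hence U (x*I) U^T = x*I: the identity structure survives
# and the J = 1/2 and 3/2 blocks decouple
\end{verbatim}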
Thus, both matrices in the quantization condition are diagonal and the final result is two independent, one-dimensional equations \begin{equation} \overline F_{11;11} + \cot \delta_{J, \ell=1}(p) = 0 \,, \end{equation} for $J=1/2$ or $3/2$. These can be reexpressed as \begin{equation} \label{eq:simplestqc} \phi(q) + \delta_{J, \ell=1}(p) = n \pi \,, \end{equation} where $n$ is an arbitrary integer and \begin{equation} \label{eq:phidef} \cot \phi(q) = \overline F_{11;11} = - \frac{1}{q \pi^{3/2}} Z_{00}(1,q^2) \,. \end{equation} We comment that this has the same form as the s-wave, scalar quantization condition. The quantity $\phi$ is often referred to as the pseudophase. The fact that the $J=1/2$ and $J=3/2$ sectors decouple can be explained by examining the symmetry group of the finite-volume system. For the case of one scalar and one spin-half particle in a finite cubic box with zero total momentum, the symmetry group is ${}^2O \otimes S_2$ and the irreps are $G_1^{\pm}, G_2^{\pm}$ and $H^{\pm}$~\cite{Bernard2008}. If we neglect $\ell \geq 3$ and thus also neglect $J \geq 5/2$, then we find a perfect correspondence between finite- and infinite-volume irreps $G_1^- \equiv (J=1/2)$ and $H^- \equiv (J=3/2)$. This implies that, within this approximation, the two partial waves cannot mix, as we have seen by explicit calculation. \subsection{Predicting the spectrum from the experimental $N \pi$ phase shift} \label{sec:expt} To predict the finite-volume spectrum of $N \pi$ states we use experimental data made available by the George Washington University Institute for Nuclear Studies. Their data analysis center is available online at \url{http://gwdac.phys.gwu.edu/analysis/pin_analysis.html}. In this study we use their partial wave analysis WI08 solution. The relevant phase shift data are plotted in Fig.~\ref{fig:pshifts}. For detailed information about the experimental data set and the WI08 fit solution see Refs.~\cite{arndt_extended_2006, paris_toward_2010}. \begin{figure} \includegraphics[scale=0.45]{figs/fig2.pdf} \caption{The experimental phase shift for $N \pi$ scattering with $I(J^{P}) = \nicefrac[]{1}{2} (\nicefrac[]{1}{2}^{+})$. The slow rise through $\pi/2$ is associated with the broad Roper resonance.} \label{fig:pshifts} \end{figure} \begin{figure} \includegraphics[scale=0.45]{figs/fig3.pdf} \caption{Interacting finite-volume $N \pi$ states with $I(J^{P}) = \nicefrac[]{1}{2} (\nicefrac[]{1}{2}^{+})$. The dashed, black curves show the non-interacting energy levels.} \label{fig:spec} \end{figure} Substituting this phase shift into Eq.~(\ref{eq:simplestqc}) we reach the prediction for the two-particle energies, shown in Fig.~\ref{fig:spec}. Note that, relative to the gap to the single-nucleon state, the shift between free and interacting levels is small. This means that it makes little difference whether one uses the free or interacting finite-volume spectrum for the values of $\Delta E_n$ that enter $R(T,t)$.\footnote{In this work we do not plot an explicit comparison but, as we comment in Sec.~\ref{sec:contam} below, if one uses LO ChPT for the infinite-volume matrix elements, then the effect of interactions in the energies and Lellouch-L\"uscher factors affects the prediction for $R(T,T/2)$ at the percent level.} Also apparent from Fig.~\ref{fig:spec} is that no avoided level crossing is visible. This is because the Roper resonance is too broad to generate such an effect. 
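To make the above concrete, the sketch below solves Eq.~(\ref{eq:simplestqc}) numerically. Two caveats: the zeta function is evaluated naively, by truncating the sum and subtracting its leading $4\pi\Lambda$ divergence (slowly convergent; production codes use exponentially accelerated representations), and we substitute a crude Breit-Wigner-like stand-in for the $P_{11}$ phase in place of the WI08 solution, so the output is qualitative only.

\begin{verbatim}
import numpy as np
from scipy.optimize import brentq

M_PI, M_N = 0.14, 0.94  # GeV, approximate masses

def Z00(q2, cut=15):
    """Naive Z_00(1, q^2): truncated sum minus the
    leading 4*pi*cut divergence."""
    n = np.arange(-cut, cut + 1)
    nx, ny, nz = np.meshgrid(n, n, n, indexing="ij")
    n2 = (nx*nx + ny*ny + nz*nz).ravel()
    n2 = n2[n2 < cut*cut]
    return ((np.sum(1.0/(n2 - q2)) - 4.0*np.pi*cut)
            /np.sqrt(4.0*np.pi))

def E_of_q2(q2, L):
    k2 = (2.0*np.pi/L)**2*q2
    return np.sqrt(M_PI**2 + k2) + np.sqrt(M_N**2 + k2)

def delta_toy(E):
    """Toy P11 phase rising slowly through pi/2 near the
    Roper; a stand-in for the experimental solution."""
    return np.pi/2 + np.arctan((E - 1.44)/0.175)

def qc(q2, L):
    """Proportional to sin(phi + delta); vanishes at the
    interacting levels, cf. Eqs. (simplestqc), (phidef)."""
    cot_phi = -Z00(q2)/(np.sqrt(q2)*np.pi**1.5)
    d = delta_toy(E_of_q2(q2, L))
    return cot_phi*np.sin(d) + np.cos(d)

def spectrum(L, nlevels=3, eps=1e-4):
    """Interacting energies, each root bracketed between
    consecutive poles of Z00 at integer q^2."""
    return [E_of_q2(
        brentq(qc, n2 + eps, n2 + 1 - eps, args=(L,)), L)
        for n2 in range(nlevels)]

print(spectrum(L=4.0/M_PI))  # GeV, at M_pi L = 4
\end{verbatim}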
It follows that, near the physical point, no direct association between LQCD energies and the resonance can be made, and a careful L\"uscher-based analysis will be needed to extract resonance properties from LQCD. To better understand these results consider the form of the pseudophase curves, plotted together with the experimental phase shift for $M_\pi L = 4$ in Fig.~\ref{fig:phaseandps}. The interacting energies, at this $L$ value, are given by the intersections of the curves. This shows that there are universal features for the levels predicted by certain types of phase shifts. In particular, for any phase shift that slowly rises from $0$ to $\pi$, the spectrum is given by a smooth deformation of the free levels. When $\delta(p)$ is near $0$ or $\pi$ the energies coincide with free values. As one follows a given interacting level from high energies to low (by increasing $M_\pi L$) it rises by one free level. This implies that, for any slowly rising phase shift, interacting levels tend to be separated from their neighbors on each side by levels of the free theory. Also, the rise of the phase shift from $0$ to $\pi$ results in exactly one additional energy level relative to the free theory. Finally, as we have already stressed above, in this prediction of the energy levels we neglect the effects of crossing the three-particle production threshold. Roughly speaking, crossing this threshold has two effects. First, three-particle states appear on top of the two-particle states shown in the figure. Second, the positions of all energies are modified relative to those predicted by the two-particle L\"uscher formula. Strictly speaking, it does not make sense to distinguish between two- and three-particle states. All finite-volume states will have both two- and three-particle components once the energy exceeds $2 M_\pi + m_N$. However, for sufficiently weak two-to-three couplings, the levels are well described as two-particle or three-particle like in certain regions, with avoided level crossings occurring whenever a given level changes from one type to the other. The overlap of the interpolator onto a given state is also expected to be suppressed when the state has a large three-particle component, possibly with the exception of energies near the Roper. These observations, and the limitations of the available formalism, motivate us to use the effective spectrum plotted in Fig.~\ref{fig:spec} in our study of excited-state contamination. \begin{figure} \includegraphics[scale=0.45]{figs/fig4.pdf} \vspace{-10pt} \caption{Experimental scattering phase together with L\"uscher pseudophase curves, $(n \pi - \phi(q))$, for $M_\pi L = 4$. Each intersection gives an interacting level in terms of $q^2$, which can be converted to energy via $E = \sqrt{M_\pi^2 + (2 \pi/L)^2 q^2} + \sqrt{m_N^2 + (2 \pi/L)^2 q^2}$.} \label{fig:phaseandps} \end{figure} \section{Estimating the finite-volume matrix elements} \label{sec:LL} In this section we use experimental scattering data, together with LO ChPT and a model to describe $N \pi$ final-state interactions, in order to estimate the finite-volume matrix elements entering $b_n$ and $c_n$. The finite-volume two-particle states, denoted by $\vert n \rangle$, arise from insertions of the identity in Eqs.~(\ref{eq:sd3}) and (\ref{eq:sd2}) and always appear as an outer product of the form $\vert n \rangle \langle n \vert$. This is exactly the structure that is readily accommodated by the generalized Lellouch-L\"uscher formalism, as we now describe.
The original work by Lellouch and L\"uscher gives the relation between a finite-volume matrix element and the $K \to \pi \pi$ decay rate \cite{Lellouch2000} \begin{equation} \label{eq:LLresult} \vert \langle \pi \pi, E_n, L \vert \mathcal H(0) \vert K,L \rangle \vert^2 = \frac{\vert \mathcal R \vert}{ 2 M_K L^6} \vert \mathcal H^{\mathrm{out}} \vert^2 \,, \end{equation} where $M_K$ is the kaon mass, $\mathcal H$ is the weak Hamiltonian density in position space, $\mathcal H^{\mathrm{out}} \equiv \langle \pi \pi, \mathrm{out} \vert \mathcal H(0) \vert K \rangle $ is the corresponding infinite-volume matrix element and \begin{equation} \vert \mathcal R \vert = \frac{p}{16 \pi M_K} \left [ \frac{\partial}{\partial E} \left( \phi + \delta_{\pi \pi} \right ) \right ]_{E=M_K}^{-1} \,, \end{equation} where $\phi$ is defined in Eq.~(\ref{eq:phidef}) above and $\delta_{\pi \pi}$ is the s-wave phase shift for elastic pion scattering. The finite-volume states on the left-hand side of Eq.~(\ref{eq:LLresult}) are unit normalized, whereas the infinite-volume states within $\mathcal H^{\mathrm{out}}$ satisfy the standard relativistic normalization. In the derivation of Lellouch and L\"uscher the box size must be tuned so that the two-pion and kaon states are degenerate, $E_n = M_K$. As has become increasingly clear through various subsequent studies, see for example Refs.~\cite{Lellouch2000, KSS2005, Christ2005, MeyerTimelike2011, HarveyPhotodis, HSmultiLL, BricenoTwoPart2012, BernardUnstable2012, BHWOneToTwo, BHOneToTwoSpin}, the conversion factor, $\vert \mathcal R \vert$, can be understood as a relation between the two-particle states defined in finite and infinite volume. This means that essentially the same relation holds even after relaxing a number of assumptions going into the original derivation. In particular one can define a Lellouch-L\"uscher-like relation for operators with generic quantum numbers and with nonzero energy and momentum.% \footnote{In the context of $K \to \pi \pi$ this relaxes the need to tune the two-pion state to be degenerate. However, if one does not perform this tuning then the resulting infinite-volume matrix element has an energy difference between incoming and outgoing states, and thus loses its straightforward physical interpretation.} % For products of matrix elements that involve an outer product of finite-volume states, the relation can be derived without taking magnitudes of matrix elements \cite{BHWOneToTwo}. Thus, information about the relative sign of transition processes can also be obtained. In the context of this study, the relevant relation, given in Ref.~\cite{BHOneToTwoSpin}, takes the form \begin{multline} \label{eq:genLL} \langle 0 \vert \widetilde {\mathcal O}^{}_\beta(0) \vert n \rangle \langle n \vert \widetilde A^{3}_\mu(0) \vert 1 \rangle = \\ \langle 0 \vert {\mathcal O}_\beta(0) \vert N \pi, \mathrm{in} \rangle \frac{L^{3/2} \mathcal R(E_n,L)}{\sqrt{2 m_N} } \\ \times \langle N \pi, \mathrm{out} \vert A^{3}_\mu(0) \vert N \rangle \,, \end{multline} where $\mathcal R(E_n,L)$ is a matrix generalization of $\vert \mathcal R \vert $, defined in the following subsection. This is the conversion needed for $b_n$.
The analog for $c_{2,n}$ is given by \begin{multline} \label{eq:genLL00} \langle 0 \vert \widetilde {\mathcal O}^{}_\beta(0) \vert n \rangle \langle n \vert \widetilde {\overline {\mathcal O}}_\alpha(0) \vert 0 \rangle = \\ \langle 0 \vert {\mathcal O}_\beta(0) \vert N \pi, \mathrm{in} \rangle L^{3} \mathcal R(E_n,L) \\ \times \langle N \pi, \mathrm{out} \vert {\overline {\mathcal O}}_\alpha(0) \vert 0 \rangle \,. \end{multline} The key limitation of Eqs.~(\ref{eq:genLL}) and (\ref{eq:genLL00}) is that these only hold for $E_n < 2 M_\pi + m_N$. For such two-particle energies the relation is valid up to exponentially suppressed corrections of the form $e^{- M_\pi L}$, but above three-particle threshold a generalized form with three-particle matrix elements is required. As in the previous section, here we again apply the two-particle formalism outside of its region of validity. We expect this to give a qualitative indication of the nature of excited-state contamination, but only by applying a rigorous three-particle formalism can one reach a reliable quantitative prediction, especially in the vicinity of the Roper. Applying Eqs.~(\ref{eq:genLL}) and (\ref{eq:genLL00}), with $\mathcal R(E_n,L)$ determined using experimental scattering data, it remains only to analyze the matrix elements of the nucleon interpolating operator, $\mathcal O$, and of the axial-vector current, $A_\mu$, in infinite volume. In this way, the details of the finite-volume set-up are factored out. To explain this in detail we find it convenient to assume specific forms for the projectors entering Eqs.~(\ref{eq:C3def}) and (\ref{eq:C2def}). In particular we take $\Gamma = u_+(\textbf 0) \bar u_+(\textbf 0)/m_N$ and $\Gamma'_\mu = \delta_{3\mu} 2 i \Gamma$, where $u$ and $\overline u$ are standard nucleon spinors, already used in Eq.~(\ref{eq:gAdef}). One then finds \begin{align} \label{eq:bnignor} b_n & = B(E_n) \mathcal C(E_n,L) \mathcal A(E_n) \,, \\ c_{2,n} & = 2 m_N \omega_N \omega_\pi L^3 B(E_n) \mathcal C(E_n,L) B^\dagger(E_n) \,, \end{align} where \begin{align} B(E_n) & \equiv \frac{2i}{ 2 \omega_{N} 2 \omega_{\pi} L^3 } \frac{ \langle 0 \vert \mathcal O_+(0) \vert N \pi, E_n, \mathrm{in} \rangle e^{- i \delta} }{ \langle 0 \vert \mathcal O_+(0) \vert N \rangle } \,, \\[5pt] \label{eq:Cdef} \mathcal C(E_n, L) & \equiv 2 \omega_{\pi} 2 \omega_{N} L^3 e^{ i \delta} \mathcal R(E_n, L) e^{ i \delta} \,, \\[10pt] \mathcal A(E_n) & \equiv e^{- i \delta} \langle N \pi, E_n, \mathrm{out} \vert A^{a=3}_{\mu=3}(0) \vert N \rangle \,. \end{align} Here the first factor, $B(E_n)$, is understood as a row vector on the $J, \ell, \mu$ index space labeling the two-particle state. It depends on the spin-projected interpolator \begin{equation} {\mathcal O}_+ \equiv \frac{1}{\sqrt{m_N}} \, \bar u_+(\textbf 0) \cdot {\mathcal O} \,, \end{equation} as well as the kinematic factors $\omega_\pi$ and $\omega_N$, evaluated at momentum $p$ as defined in Eq.~(\ref{eq:pdef}). The middle factor, $\mathcal C(E_n,L)$, is a matrix on the $J, \ell, \mu$ space. We discuss its definition in detail in the following subsection. Finally $\mathcal A(E_n)$, understood as a column vector on the same index space, is the infinite-volume axial-vector transition amplitude. We comment that all three of the factors entering Eq.~(\ref{eq:bnignor}) are dimensionless, real functions. The latter claim holds due to the diagonal matrices $e^{- i \delta}$ included in the definitions. Here $\delta$ is a diagonal matrix of $N \pi$ scattering phase shifts.
For example, the $J=1/2, \ell=1$ entry is plotted in Fig.~\ref{fig:pshifts}. Watson's theorem states that the complex phase of a two-particle matrix element (below three-particle production threshold) is equal to the elastic $N \pi$ scattering phase in the same channel~\cite{Watson}. Thus the phase matrices in the definitions cancel those in the infinite-volume matrix elements. Above three-particle threshold this no longer holds, but in this work we model the matrix elements with a form satisfying this two-particle unitarity constraint. In other words we build in the approximation that Watson's theorem persists above threshold. [See Sec.~\ref{sec:infME} for details.] Similarly the factors of $e^{ i \delta}$ in Eq.~(\ref{eq:Cdef}) cancel the intrinsic phase in $\mathcal R(E_n,L)$ as we show in the next section. This ensures that $\mathcal C(E_n,L)$ is also a real function. In the following subsection we give the matrix definition of $\mathcal R(E_n, L)$ and $\mathcal C(E_n,L)$ and explain that, in the present case, one can truncate these to single entries by applying the same truncation used for the scattering amplitude in the previous section. In Sec.~\ref{sec:predLL} we then use experimental scattering data to calculate the interacting values of $\mathcal C(E_n, L)$. Finally in Sec.~\ref{sec:infME} we use a model, based on LO ChPT supplemented by the experimental scattering data, to estimate both $B(E)$ and $\mathcal A(E)$. We then apply these results in Sec.~\ref{sec:contam}, to give predictions for the excited-state contamination to $g_A$. \subsection{Reducing the Lellouch-L\"uscher-like relation} \label{sec:redLL} We begin this subsection by defining $\mathcal R(E_n, L)$, introduced in Eq.~(\ref{eq:genLL}) above. In this equation the right-hand side should be understood as the product of a row vector $ \langle 0 \vert {\mathcal O}_\beta(0) \vert N \pi, \mathrm{in} \rangle $, followed by the matrix $\mathcal R(E_n, L)$, followed by a column vector $ \langle N \pi, \mathrm{out} \vert A^{3}_\mu(0) \vert N \rangle$. Each of these quantities is defined on the $J, \ell, \mu$ index space, where the three labels correspond to total angular momentum, orbital angular momentum, and azimuthal total angular momentum respectively. The matrix $\mathcal R(E_n,L)$ is defined by \begin{equation} \label{eq:Rintro} \mathcal R(E_{n}, L) \equiv \lim_{E \rightarrow E_{n}} \left[ (E - E_{n}) \frac{1}{F^{-1}(E, L) + \mathcal M(E)}\right] \,, \end{equation} with $\mathcal M$ and $F$ defined in Eqs.~(\ref{eq:MJdef}) and (\ref{eq:FJdef}) respectively. $\mathcal R$ has both on- and off-diagonal elements and, in the context of Eq.~(\ref{eq:genLL}), gives a linear combination of infinite-volume matrix elements that equals a particular finite-volume matrix element. The same matrix structure holds in Eq.~(\ref{eq:bnignor}). To truncate $\mathcal R$ we first observe that the operator $A^3_3(0)$ acting on the infinite-volume single-nucleon state generates a state which couples to both $J=1/2$ and $J=3/2$. In the corresponding finite-volume matrix element this state couples to two-particle finite-volume states in the $G_1^- = 1/2 \oplus \cdots$ and $H^- = 3/2 \oplus \cdots$ representations.
Thus, if we choose the two-particle state to transform in the $G_1^-$ irrep, then the right-hand sides of Eqs.~(\ref{eq:genLL}) and (\ref{eq:bnignor}) will contain one term, depending on the $J=1/2$ two-particle scattering state \begin{multline} \langle 0 \vert \widetilde {\mathcal O}_\beta(0) \vert n, G_1^- \rangle \langle n, G_1^- \vert \widetilde A^3_3(0) \vert 1 \rangle = \\ \langle 0 \vert {\mathcal O}_\beta(0) \vert N \pi, J=1/2, \mathrm{in} \rangle \frac{L^{3/2} \mathcal R_{J=1/2}(E_n,L)}{\sqrt{2 m_N} } \\ \times \langle N \pi, J=1/2, \mathrm{out} \vert A^3_3(0) \vert N \rangle \,. \end{multline} Given this truncation we are left only to determine the on-diagonal $J=1/2, \ell=1$ entry of $\mathcal R$. In principle this single entry depends on the full matrix structure of $F^{-1}$ and $\mathcal M$, since they enter via a matrix inverse. However, if we apply the p-wave truncation on $\mathcal M$, as in the previous section, then $\mathcal M$ and $F$ both truncate to single-entry matrices. We find \cite{Lellouch2000} \begin{align} \mathcal R(E_{n},L) & = \left [ \frac{\partial}{\partial E} \left( F^{-1}(E,L) + \mathcal M(E) \right ) \right ]_{E=E_n}^{-1} \,, \\ & \hspace{-40pt} = -\frac{p}{8 \pi E} \left [ \sin^2\! \delta \ e^{ 2i \delta } \frac{\partial}{\partial E} \left( \cot\phi+ \cot\delta \right ) \right ]_{E=E_n}^{-1} \,, \\ & \hspace{0pt} = \frac{p}{8 \pi E} e^{-2i \delta } \left [ \frac{\partial}{\partial E} \left( \phi + \delta \right ) \right ]_{E=E_n}^{-1} \,, \label{eq:R1D} \end{align} where $\delta(p) = \delta_{J=1/2,\ell=1}(p)$ is the $N \pi$ phase shift, shown in Fig.~\ref{fig:pshifts}. To understand the phase in $\mathcal R$, we recall from Watson's theorem that, at energies where only two-particle elastic scattering can occur, the complex phase of zero-to-two and one-to-two transition amplitudes is given by the two-to-two strong scattering phase~\cite{Watson}. Thus the phase in $\mathcal R$ perfectly cancels the phase in the matrix element \begin{align} e^{-i \delta} \langle N \pi, \mathrm{out} \vert A^3_3(0) \vert N \rangle \in \mathbb R \,. \end{align} We conclude by discussing the rescaled quantity $\mathcal C(E_n,L)$, defined in Eq.~(\ref{eq:Cdef}). Substituting Eq.~(\ref{eq:R1D}) into the definition and simplifying, we reach \begin{equation} \label{eq:Cres} \mathcal C(E,L) = 4 \pi^2 q^3 \left( q \frac{\partial \phi}{\partial q} + p \frac{\partial \delta}{\partial p} \right )^{-1} \,. \end{equation} In the next section we will also be interested in the non-interacting limit, and thus define \begin{equation} \label{eq:Cres2} \mathcal C^{\mathrm{NI}}(q^2) \equiv 4 \pi^2 q^2 \left( \frac{\partial \phi}{\partial q} \right )^{-1} \,, \end{equation} where $q \equiv p L/(2 \pi)$ was already introduced above. Note that in Eqs.~(\ref{eq:Cres}) and (\ref{eq:Cres2}) we have implicitly extended the definition of $\mathcal C(E,L)$ to all energies. As is clear from Eqs.~(\ref{eq:genLL}) and (\ref{eq:bnignor}), the quantity only has physical \mbox{meaning} when evaluated at the energies of the finite-volume spectrum. However, understanding the continuous form of the function is useful for predicting how $\mathcal C(E_n,L)$ will vary with the strength of the particle interaction. \begin{figure} \begin{center} \vspace{0pt} \includegraphics[scale=0.45]{figs/fig5.pdf} \end{center} \caption{Non-interacting Lellouch-L\"uscher curve. This curve, defined in Eq.~(\ref{eq:Cres2}), only has a clear physical meaning at the non-interacting finite-volume energies, indicated by vertical lines.
Here it coincides with the degeneracy of the finite-volume state, $\nu_n$. Considering the form of the curve everywhere is useful for understanding the effect of interactions, as shown in Fig.~\ref{fig:intLL}.} \label{fig:nonintLL} \end{figure} \subsection{\label{sec:predLL}Predicting the Lellouch-L\"uscher factors} In this section we give numerical predictions for the values of $\mathcal C(E_n,L)$ and in doing so also build some intuition about the meaning of this quantity. We begin with the non-interacting version, $\mathcal C^{\mathrm{NI}}$. This is plotted in Fig.~\ref{fig:nonintLL} as a function of the dimensionless squared momentum, $q^2$. The energies for which this curve has physical meaning correspond to $q^2 = \textbf n^2$ with $\textbf n \in \mathbb Z^3$. At these values our rescaled Lellouch-L\"uscher factor takes on particularly simple values \begin{equation} \label{eq:Cfree} \mathcal C^{\mathrm{NI}}(n) = \nu_n \,, \end{equation} where $\nu_n$ is the degeneracy of the $n$th state, equivalently the number of integer vectors that satisfy $\textbf n^2 = n$. The first few values of $\nu_n$ are given in Table~\ref{tab:deg}. These degeneracies are also indicated by the horizontal tick marks crossing each vertical line in Fig.~\ref{fig:nonintLL}. \begin{table} \begin{tabular}{c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c} \hline \\[-9pt] \hline $n$ & 0 & 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9 & 10 & 11 & 12 & 13 & 14 & 15 \\ $\nu_n$ & 1& 6& 12& 8& 6& 24& 24& 0& 12& 30& 24& 24& 8& 24& 48& 0 \\ \hline \end{tabular} \caption{Degeneracies of states with $q^2 = n$.} \label{tab:deg} \end{table} \begin{figure} \begin{center} \vspace{-25pt} \includegraphics[scale=0.45]{figs/fig6.pdf} \vspace{-25pt} \end{center} \caption{These plots summarize the effect of interactions on $\mathcal C$ for (a) $M_\pi L=4$ and (b) $M_\pi L=5$ as a function of $q^2 = (p L)^2/(2 \pi)^2$. We also indicate the two-particle energies corresponding to certain $q^2$ values. The interacting curve (blue) is lower than the non-interacting (gray) due to a shift proportional to $d \delta(p)/dp$ in the inverse. For ease of comparison we have plotted the phase shift over the same range for each $M_\pi L$ value. However, the most important effect for the interacting value of $\mathcal C$ is the shift in energies. Since the curve is rapidly oscillating, these shifts result in large discrepancies between the interacting and non-interacting $\mathcal C$ values. } \label{fig:intLL} \end{figure} We next turn to the interacting values of $\mathcal C(E_n,L)$, plotted in Fig.~\ref{fig:intLL}. In contrast to $\mathcal C^{\mathrm{NI}}$, the factor in the interacting case depends independently on $E$ and $L$. Here we plot $\mathcal C(E,L)$ as a function of $q^2$ at fixed $M_\pi L = 4$ [Fig.~\ref{fig:intLL}(a)] and $M_\pi L = 5$ [Fig.~\ref{fig:intLL}(b)]. Note that two effects are important in the shift from non-interacting to interacting systems. First, the curve characterizing the Lellouch-L\"uscher factors is reduced, due to the addition of a term proportional to $d \delta/dp$ in the inverse. Comparing the light gray and blue curves, we see that this has a relatively small numerical effect. Second, the physically relevant values on the curve (the finite-volume energies) are shifted to new locations. Interestingly, this second shift has a large effect for both $M_\pi L=4$ and $M_\pi L=5$ (assuming physical pion masses).
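Both Table~\ref{tab:deg} and the interacting values of $\mathcal C$ are simple to reproduce numerically. The sketch below counts the degeneracies directly and evaluates Eq.~(\ref{eq:Cres}) by central finite differences; it is meant to run in the same session as the earlier sketch, whose masses and functions zeta00() and delta() it reuses, and it inherits all of the caveats stated there.
\begin{verbatim}
import numpy as np
from collections import Counter

def degeneracies(nmax=15):
    # nu_n = number of integer vectors with nx^2 + ny^2 + nz^2 = n
    r = int(np.sqrt(nmax)) + 1
    cnt = Counter(x*x + y*y + z*z
                  for x in range(-r, r + 1)
                  for y in range(-r, r + 1)
                  for z in range(-r, r + 1))
    return [cnt.get(n, 0) for n in range(nmax + 1)]

def C_interacting(q2, MpiL, h=1e-5):
    # Eq. (Cres); physically meaningful only when q2 corresponds
    # to a finite-volume energy, as stressed in the text
    L, q = MpiL/Mpi, np.sqrt(q2)
    p = 2.0*np.pi*q/L
    phi = lambda x: np.arctan2(x*np.pi**1.5, -zeta00(x*x))
    E = lambda y: np.sqrt(Mpi**2 + y**2) + np.sqrt(mN**2 + y**2)
    dphi = (phi(q + h) - phi(q - h))/(2.0*h)            # d(phi)/dq
    ddel = (delta(E(p + h)) - delta(E(p - h)))/(2.0*h)  # d(delta)/dp
    return 4.0*np.pi**2*q**3/(q*dphi + p*ddel)

print(degeneracies())  # [1, 6, 12, 8, 6, 24, 24, 0, 12, 30, ...]
\end{verbatim}
Evaluating C_interacting at the roots returned by spectrum() should reproduce the qualitative pattern of Fig.~\ref{fig:intLL}: the dominant effect is where the energies land on the rapidly varying curve, not the $d\delta/dp$ term.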
In particular, the contribution to $R(T,t)$ from the seventh excited state is significantly enhanced due to the large value of $\mathcal C(E_n,L)$. \subsection{\label{sec:infME}Modeling the matrix elements} We now turn to the remaining building blocks entering the definitions of $b_n$ and $c_n$: the overlap factor $B(E_n)$ and the axial-vector transition amplitude $\mathcal A(E_n)$. As already mentioned above, the matrix elements within $B(E)$ depend on the details of the interpolator, $\mathcal O_+$, and cannot be constrained using experimental data. The precise definition of $\mathcal O_+$ depends on the set-up of a particular lattice calculation, and the only clear defining property is that it has the quantum numbers to annihilate a single nucleon. In a variational approach, this operator is designed to reduce the size of the two-particle matrix elements shown in Eq.~(\ref{eq:genLL}). Thus by choosing a sufficiently large basis one can make this contribution arbitrarily small.% \footnote{Given that $\mathcal O$ can be optimized to minimize excited-state contamination, the universal result found by Refs.~\cite{BrianGA,BarTwoPoint,BarGA} may seem surprising. We stress, however, that the ChPT predictions only hold for local and smeared operators. The results thus suggest that $N \pi$ interpolating operators are probably required to systematically reduce the overlap to the low-lying excited states.} By contrast, $\mathcal A(E)$ is a physical matrix element that can in principle be accessed in a scattering experiment. In fact, since we are focusing on the case where both the nucleon and the $N \pi$ state are at rest, parity guarantees that the corresponding matrix element with a vector current must vanish. Thus we can equally well think of $\mathcal A(E)$ as the matrix element of the left-handed (vector-minus-axial) current. The kinematics of $\mathcal A(E)$ correspond to a process such as $p \, \pi^- \to p \, W^- \to p \, e^- \, \bar \nu_e$, in which the current is evaluated at time-like four-momentum. Such a transition is difficult to extract in experiment and the present data is insufficient to constrain the amplitude. We also note that the value of this matrix element at space-like momenta is of great experimental relevance for determining QCD effects in neutrino-nucleon scattering, $\nu_\ell p \to p \pi^+ \ell^-$.% \footnote{See for example Ref.~\cite{AalvarezRuso2014}.} In this work we rely on LO ChPT, together with a model that incorporates the experimental $N \pi$ scattering data, in order to gain insight into the behavior of both the interpolator overlap $B(E)$ and the axial-vector transition amplitude $\mathcal A(E)$. Beginning with $B(E)$, we first give the expression predicted by LO covariant baryon ChPT, derived in the appendix \begin{equation} \label{eq:BnChPT} B_{\mathrm{ChPT}}(E) = \frac{\sqrt{3}}{4 \sqrt{2} f_\pi \omega_{\pi } \omega_{N} L^3 } \left (\frac{\omega_N}{m_N} - 1 \right)^{1/2} (1 - \bar g_A) \,, \end{equation} where \begin{equation} \label{eq:gAbardef} \bar g_A \equiv g_A \frac{E_n+m_N}{E_n-m_N} = g_A \frac{\omega_N + \omega_\pi +m_N}{\omega_N + \omega_\pi -m_N} \,, \end{equation} is a convenient shorthand introduced in Ref.~\cite{BarTwoPoint}, and $f_\pi=93{\,\mathrm{MeV}}$ is the pion decay constant. This result predicts the part of $b_n$ that depends on $\mathcal O$ to be negative, with magnitude $\sim10^{-3}$.
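For orientation, Eq.~(\ref{eq:BnChPT}) is straightforward to evaluate; the short sketch below does so at rough physical parameter values (the inputs $g_A = 1.27$, $f_\pi = 93{\,\mathrm{MeV}}$ and the masses are the only assumptions).
\begin{verbatim}
import numpy as np

Mpi, mN, fpi, gA = 139.6, 938.9, 93.0, 1.27  # MeV; rough inputs

def B_chpt(q2, MpiL):
    # LO ChPT overlap factor, Eq. (BnChPT), for a level with
    # q^2 = (p L / 2 pi)^2; dimensionless by construction
    L = MpiL/Mpi
    p = 2.0*np.pi*np.sqrt(q2)/L
    w_pi = np.sqrt(Mpi**2 + p**2)
    w_N = np.sqrt(mN**2 + p**2)
    gA_bar = gA*(w_N + w_pi + mN)/(w_N + w_pi - mN)  # Eq. (gAbardef)
    pref = np.sqrt(3.0)/(4.0*np.sqrt(2.0)*fpi*w_pi*w_N*L**3)
    return pref*np.sqrt(w_N/mN - 1.0)*(1.0 - gA_bar)

for n in (1, 2, 3):
    print(n, B_chpt(n, 4.0))  # negative, |B| of order 1e-3
\end{verbatim}
Since $\bar g_A > 1$ throughout the elastic region, the factor $(1 - \bar g_A)$ fixes the sign.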
In addition, as was already pointed out in Refs.~\cite{BrianGA,BarTwoPoint,BarGA}, the leading-order prediction is independent of the details of the interpolator used. Again, we stress that this only holds for a three-quark interpolating field and, in particular, does not apply to any interpolator built from multi-particle operators, for example $N \pi$- or $N \pi \pi$-like operators. Eq.~(\ref{eq:BnChPT}) is expected to break down at higher energies, when next-to-leading-order ChPT corrections become important. For instance, if $\mathcal O$ is optimized to couple to a single-nucleon state then, depending on the nature of the Roper, the overlap of the operator with two-particle states may also be enhanced in the vicinity of the resonance. This enhancement is not visible in LO ChPT. To model the effect of the Roper we first consider the Bethe-Salpeter equation for the two-particle matrix element within $B(E)$ \begin{multline} \langle 0 \vert \mathcal O_+(0) \vert N \pi, \mathrm{in} \rangle = \langle 0 \vert \mathcal O_+(0) \vert N \pi, \mathrm{in} \rangle_{2\mathrm{PI}} \ + \\ \int \frac{d^4 k}{(2 \pi)^4} \langle 0 \vert \mathcal O_+(0) \vert N \pi, \mathrm{in} \rangle_{2\mathrm{PI}} \, \Delta(k) S(P-k) \, i \mathcal M(E,k) \,, \label{eq:BetheSalpteterBn} \end{multline} where the subscript $2\mathrm{PI}$ refers to the sum of all diagrams that are two-particle irreducible in the s-channel. To reach the full matrix element, the 2PI quantity must be attached, via a two-particle loop, to the two-to-two scattering amplitude, $\mathcal M$. In the loop both the pion [$\Delta (k)$] and nucleon [$S(P-k)$] propagators should be fully dressed and evaluated at the off-shell momenta sampled by the integral. The scattering amplitude is also sampled at off-shell momenta. We cannot evaluate this quantity without making a number of approximations. First we evaluate the $k^0$ integral, approximating the matrix element and $\mathcal M$ to have no important analytic structure, so that we need only encircle the poles in the two-particle loop. This gives \begin{multline} \langle 0 \vert \mathcal O_+(0) \vert N \pi, \mathrm{in} \rangle = \langle 0 \vert \mathcal O_+(0) \vert N \pi, \mathrm{in} \rangle_{2\mathrm{PI}} \ + \\ \int \frac{d^3 \textbf k}{(2 \pi)^3} \langle 0 \vert \mathcal O_+(0) \vert N \pi, \mathrm{in} \rangle_{2\mathrm{PI}} \, \\ \times \left[ -\frac{u(\textbf k) \overline u(\textbf k)}{2 \omega_N 2 \omega_\pi ( E - \omega_N - \omega_{\pi} + i \epsilon)} + \mathcal S(\textbf k) \right ] \mathcal M(E,k) \,, \end{multline} where $\mathcal S$ is a smooth function below three-particle production threshold. If we assume the dominant contribution comes from the first term and drop the smooth part, then we are left with a contribution in which both the matrix element and the scattering amplitude are projected on shell. We find \begin{multline} \langle 0 \vert \mathcal O_+(0) \vert N \pi, \mathrm{in} \rangle = \langle 0 \vert \mathcal O_+(0) \vert N \pi, \mathrm{in} \rangle_{2\mathrm{PI}} \\ \times [1 + \mathcal I_R(E) \mathcal M(E) ] \,, \end{multline} where \begin{equation} \mathcal I_R(E) = \frac{i p}{8 \pi E} - \mathrm{PV} \! \int_{R} \! \! \frac{d^3 \textbf k}{(2 \pi)^3} \frac{1}{2 \omega_N 2 \omega_\pi ( E - \omega_N - \omega_{\pi} )} \,, \end{equation} and where PV indicates a principal-value pole prescription. The subscript $R$ indicates that this loop integral requires a regulator, an artifact that has been introduced by our approximations.
In this work we choose to regulate by subtracting the integrand evaluated at threshold \begin{multline} \mathcal I(E) \equiv \frac{i p}{8 \pi E} - \mathrm{PV} \! \int \! \! \frac{d^3 \textbf k}{(2 \pi)^3} \frac{1}{2 \omega_N 2 \omega_\pi } \\ \times \left[\frac{1}{ E - \omega_N - \omega_{\pi}} - \frac{1}{ m_N + M_\pi - \omega_N - \omega_{\pi}} \right ] \,. \end{multline} This subtraction is motivated by the observation that the second term in Eq.~(\ref{eq:BetheSalpteterBn}) should not play a role at low energies. Note also that the on-shell restriction projects the scattering amplitude down to its $I(J^{P}) = \nicefrac[]{1}{2} (\nicefrac[]{1}{2}^{+})$ component. To complete the construction of our model we use the fact that the diagrams in the LO ChPT calculation of $B(E)$ are two-particle irreducible, and thus also give the leading-order contribution to the 2PI restriction of this quantity. This leads us to define \begin{equation} B(E,\gamma) \equiv \mathrm{Re} \left [ e^{- i \delta} B_{ \mathrm{ChPT}}(E) \left[ 1 + \gamma \mathcal I(E) \mathcal M_{}(E) \right ] \right ] \,, \end{equation} where we have introduced the free parameter $\gamma$ to partially compensate for the {\em ad hoc} procedure that led us to this expression. Here we have also included the phase factor $e^{- i \delta}$ that is needed to cancel the phase that appears in the two-particle matrix element.% \footnote{This simple phase structure is only strictly valid below three-particle production threshold. In the present model we are using the elastic form for the two-particle scattering amplitude and this has the consequence that the phase is preserved also above production threshold. This is consistent with the neglect of three-particle states in the L\"uscher quantization condition.} % To evaluate both the $e^{- i \delta}$ factor and the scattering amplitude $\mathcal M$ we use the experimentally determined $N \pi$ scattering phase, plotted in Fig.~\ref{fig:pshifts}. Note also that we must discard a small imaginary part that arises in this model. In the case of $\gamma=0$ we omit the $e^{- i \delta}$ phase factor and thus recover the leading order ChPT result. For $\gamma>0$ the rising phase shift causes the matrix element to flip sign roughly in the region of the Roper, with the energy value at the node dependent on the specific value chosen for $\gamma$. For $\gamma<0$ the sign of the ChPT prediction is preserved and a peak is observed in the vicinity of the Roper. Past this energy range the matrix element can flip sign, but for $\gamma<-3$ the crossing is well outside the relevant energy range. In Fig.~\ref{fig:Bnmodels} we plot the energy dependence predicted by various values of $\gamma$. We now turn to the matrix element of the axial-vector current, $\mathcal A(E)$. As we derive in the appendix, the LO ChPT prediction for this quantity is \begin{multline} \mathcal A(E)_{\mathrm{ChPT}} = \frac{ \sqrt{m_N}(\omega_{N} - m_N)^{1/2} }{ 2 \sqrt{6} f_\pi } \\ \times \left[4-\frac{8}{3} g_A \left(\overline g_A-\frac{g_A M_\pi^2}{ 4 \omega_{\pi} m_N-2 M_\pi^2 }\right)\right] \,. \label{eq:AxialChPT} \end{multline} To estimate this quantity beyond ChPT we apply the same model used for the overlap factor \begin{equation} \mathcal A(E,\alpha) = \mathrm{Re} \left [ e^{- i \delta } \mathcal A(E)_{\mathrm{ChPT}} [1 + \alpha \mathcal I(E) \mathcal M(E)] \right ]\,.
\end{equation} As with $B(E,\gamma)$, $\alpha = 0$ gives the LO ChPT prediction, $\alpha<0$ preserves the sign of the matrix element and enhances the magnitude near the resonance, and $\alpha>0$ gives a zero-crossing roughly in the range of the resonance. Also as with $B(E,\gamma)$, we include the phase factor needed to cancel the phase in the matrix element, and discard a small imaginary contribution that arises as an artifact of our model. For $\alpha=0$ the phase factor is not included. In Fig.~\ref{fig:Amodels} we plot the energy dependence of the \mbox{axial} transition amplitude for various choices of $\alpha$. Note that we restrict attention to a range of $\alpha$ that is smaller than that considered for $\gamma$. The LO ChPT prediction for $B(E)$ has a magnitude that decreases with energy whereas $\mathcal A(E)$ is nearly constant at higher energies. This has the consequence that varying $\alpha$ over a given range has a larger effect than varying $\gamma$. We choose the parameters such that the models have a maximum magnitude roughly within a factor of two of the maximum predicted by LO ChPT. We stress again that, unlike with the overlap factor, the functional form of $\mathcal A(E)$ is independent of the lattice set-up and is of direct experimental relevance. In this study we are particularly interested in whether $\mathcal A(E)$ has a node (zero crossing) at some energy, as predicted by the $\alpha>0$ models. Interestingly, such a node is observed in the CLAS scattering data for $e p \to e' \pi^+ \pi^- p'$ in their analysis of the electromagnetic transition amplitude as a function of photon virtuality, $Q^2$ \cite{RoperCLAS}. This node is not directly relevant for $\mathcal A(E)$ because (1) it concerns the electromagnetic transition and (2) it is for space-like momenta. It is nonetheless interesting to note that such crossings are observed. Given that LO ChPT predicts the same sign for $B(E)$ and $\mathcal A(E)$, and thus a positive value for $b_n = B(E_n) \mathcal C(E_n,L) \mathcal A(E_n)$, and given also that the curvature in LQCD correlator data indicates important contributions from states with $b_n<0$, we postulate that a node in $\mathcal A(E)$ might provide a reasonable explanation for the apparent discrepancy. More generally one can identify four basic scenarios: (i) neither $B(E)$ nor $\mathcal A(E)$ cross zero in the relevant energy range, (ii) both cross zero, (iii) only the overlap factor $B(E)$ has a node, or (iv) only the transition amplitude $\mathcal A(E)$ has a node. The first two cases lead to positive excited-state contamination and thus fail to describe present-day numerical LQCD correlator data. The third and fourth scenarios can both explain the empirically observed excited-state contamination, as we explain in the next section. \begin{figure} \begin{center} \vspace{20pt} \hspace{-15pt}\includegraphics[scale=0.45]{figs/fig7.pdf} \vspace{-15pt} \end{center} \caption{Three different scenarios for the overlap factor $B(E)$ with $\gamma$ values of $-4$ (lowest), $0$ (middle) and $4$ (highest). The vertical lines indicate the finite-volume energies in a box of size $M_\pi L=4$.
The thickness of these lines is proportional to the value of $\mathcal C(E_n,L)$ and thus indicates how the state is weighted in the excited-state contamination.} \label{fig:Bnmodels} \end{figure} \begin{figure} \begin{center} \vspace{19pt} \hspace{-10pt} \includegraphics[scale=0.45]{figs/fig8.pdf} \hspace{10pt} \vspace{-25pt} \end{center} \caption{Three different scenarios for the axial matrix element $\mathcal A(E)$ with $\alpha$ values of $-3$ (lowest), $0$ (middle) and $1$ (highest). As in Fig.~\ref{fig:Bnmodels}, the vertical lines indicate finite-volume energies for $M_\pi L=4$ with line thickness proportional to $\mathcal C(E_n,L)$.} \label{fig:Amodels} \end{figure} \section{\label{sec:contam}Estimating the excited-state contamination} We are now ready to combine the results of the previous section to estimate the ratio $R(T,t)$ and the excited-state contamination to $g_A$. \begin{figure} \vspace{20pt} \hspace{-25pt} \includegraphics[scale=0.48]{figs/fig9.pdf} \hspace{145pt} \vspace{-20pt} \caption{Excited-state contamination for various values of $\alpha$ [parametrizing $\mathcal A(E)$] and $\gamma$ [parametrizing $B(E)$] for $M_\pi L = 4$ (solid) and $M_\pi L = 6$ (dashed). The top pair of curves shows the leading order ChPT prediction, but with the interacting values for $\mathcal C(E_n,L)$; the middle pair shows the leading order ChPT value for $B(E)$ together with the $\alpha=1$ (zero-crossing) scenario for $\mathcal A(E)$. Finally, the bottom pair shows the result of setting $\alpha=1$ together with $\gamma=-4$. Of the parameter sets considered here this choice most closely reproduces the observed LQCD correlator data. This lowest curve compares favorably, for example, with the $M_\pi = 190 \mathrm{\,MeV}$ and $M_\pi L = 3.9$ ensemble in Ref.~\cite{EigoAMA2016}.} \label{fig:esc1} \end{figure} In Fig.~\ref{fig:esc1} we show values for the ratio $R(T,T/2)/g_A$, given three different scenarios for the matrix elements. In each case we show the results for both $M_\pi L = 4$ (solid lines) and $M_\pi L=6$ (dashed lines). To provide comparable predictions, in both cases we sum all excited states up to an energy of 1800 MeV. In the case of $M_\pi L = 4$ this corresponds to the first 8 excited states and for $M_\pi L = 6$ to the first 18. In both cases we find that this number of states is both necessary and sufficient to estimate the saturated value of $R(T, T/2)/g_A$ within a few percent, for the models considered. We also see that the excited-state contamination from the two different volumes is in very good agreement. The highest pair of curves in Fig.~\ref{fig:esc1} shows the prediction from LO ChPT, but with the interacting values of the Lellouch-L\"uscher factors. The excited-state contamination here is comparable to that given in Fig.~5 of Ref.~\cite{BarGA}, in particular to the result of that reference that includes ten excited states. Thus we find that, if one uses LO ChPT for the overlap and the axial-vector matrix element, then the effect of interactions on the energies and Lellouch-L\"uscher factors leads to a small (percent level) correction to the predicted value of $R(T,T/2)$. As is also stressed in Ref.~\cite{BarGA}, including excited states beyond the first few requires sampling the LO ChPT predictions outside their expected region of validity. For example, for $M_\pi L=4$ it is well motivated to trust the first two excited states.
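Whichever set of states one retains, assembling curves like those of Fig.~\ref{fig:esc1} from the ingredients of the previous sections is mechanical. The sketch below evaluates Eq.~(\ref{eq:Rdecom}) at the midpoint $t = T/2$; the gaps and coefficients shown are invented placeholders that illustrate the bookkeeping, not the values behind Fig.~\ref{fig:esc1}.
\begin{verbatim}
import numpy as np

def R_over_gA(T, gaps, b, c, gA=1.27):
    # Eq. (Rdecom) at t = T/2:
    # R(T,T/2) = gA + sum_n [2 b_n e^{-dE_n T/2} + c_n e^{-dE_n T}]
    gaps, b, c = (np.asarray(x, dtype=float) for x in (gaps, b, c))
    s = np.sum(2.0*b*np.exp(-0.5*gaps*T) + c*np.exp(-gaps*T))
    return 1.0 + s/gA

# invented gaps (converted from MeV to fm^-1) and coefficients,
# chosen only to mimic a b_n sign flip at higher energies
gaps = np.array([330.0, 510.0, 680.0, 830.0])/197.327
b = np.array([0.010, 0.006, -0.030, -0.025])
c = np.array([-0.002, -0.002, -0.001, -0.001])
for T in (1.0, 1.5, 2.0):   # source-sink separations in fm
    print(T, R_over_gA(T, gaps, b, c))
\end{verbatim}
With these placeholders the negative contributions of the higher states dominate at small $T$, qualitatively mimicking the downward curvature observed empirically.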
These first two states have a gap of $\sim 100 \, \mathrm{MeV}$ to the Roper and, as the latter is not included in the ChPT prediction, it is reasonable to require a separation from this state. We thus infer that one can only predict $R(T,T/2)$ using LO ChPT for source-sink separations large enough that the first two states dominate. For physical pion masses this may mean separations of $T > 2 \mathrm{\,fm}$. The middle pair of curves in Fig.~\ref{fig:esc1} is the prediction from combining the LO ChPT prediction for the overlap, $B(E,\gamma=0)$, with the zero-crossing model for the axial-current transition amplitude, $\mathcal A(E,\alpha=1)$. The curve is very flat because there are large cancellations between the positive contributions from lower states and the negative contributions from higher-energy states. The lowest pair of curves in Fig.~\ref{fig:esc1} gives the scenario most consistent with observed LQCD correlators. This follows from combining $\mathcal A(E,\alpha=1)$ and $B(E,\gamma=-4)$ [see again Figs.~\ref{fig:Bnmodels} and \ref{fig:Amodels}]. The negative contribution from the higher excited states overpowers the positive contribution from the first few. \begin{figure} \begin{center} \vspace{20pt} \includegraphics[scale=0.45]{figs/fig10.pdf} \end{center} \caption{Contribution of individual excited states in the $\alpha=1$, $\gamma=-4$ scenario, for $M_\pi L=4$. The seven curves show the excited-state contamination predicted by summing the contributions from the first state through the $n$th state. The thickness of a given curve is proportional to the number of states included in the sum. The three blue curves show the result from including the first (lowest blue), first two (middle blue) and first three (highest blue) excited states. The fourth excited state is the first with $b_n<0$, so that the predicted contamination falls once this state is included (highest green curve). The set of green curves then ranges from the sum up to the fourth state (highest green) to the sum up to the seventh state (lowest green). The effects of including additional states beyond the seventh are negligible, so that the lowest green curve gives a good indication of the full excited-state contamination predicted by this model. } \label{fig:esc2} \end{figure} In Fig.~\ref{fig:esc2} we show the importance of higher excited states in the $\mathcal A(E,\alpha=1)$ and $B(E,\gamma=-4)$ scenario. In particular, for $M_\pi L=4$, we find that the contamination predicted by summing fewer than seven states differs significantly from the saturated curve. Notably, the seventh excited state has a significant contribution due to the large value of $\mathcal C(E_n,L)$. \section{Conclusion} In this work we have studied the excited-state contamination in LQCD correlators used to extract the nucleon axial charge, $g_A$. Combining various finite-volume formalisms with experimental scattering data, LO ChPT and a model for the infinite-volume matrix elements, we find that the excited-state behavior empirically observed in lattice correlators can be reproduced by postulating a sign change in the infinite-volume axial-vector transition amplitude, $\langle N \pi, \mathrm{out} \vert A \vert N \rangle$. Such nodes are observed experimentally in other transition amplitudes, but the data is insufficient to make a definitive statement about the quantity at hand.
Our findings additionally indicate that a large number of finite-volume excited states, including those at energies around the Roper resonance, give important contributions in the ratios used to access $g_A$. This is based on mild assumptions about how the nucleon interpolators couple to states near the Roper and on the observation that the Lellouch-L\"uscher factors, governing the relation between finite- and infinite-volume states, can be significantly enhanced. The results presented here serve to further emphasize the great importance of using optimal interpolators to minimize the coupling to excited states. Based on numerical LQCD calculations in the meson sector, the most promising approach seems to be the variational method, in which a large basis of operators is used to disentangle the excited states. The situation will also benefit from further advances in improving the signal-to-noise ratio in nucleon correlators. Finally we emphasize that the nature of excited-state contamination depends heavily on the quantity under consideration. Indeed for many quantities, for example the average momentum fraction $\langle x \rangle$, the LQCD data indicates positive excited-state contamination \cite{ETMAvx2016}. For this observable the same overlap factor $B(E)$ appears, but a different transition amplitude arises due to the differing current insertion. The observed, positive excited-state contamination can be accommodated if we suppose the matrix element has the same sign as the axial vector at low energies and does not cross zero in the relevant energy window. Another interesting example is the iso-singlet octet axial-vector current. ChPT predictions indicate that matrix elements of this current should be highly suppressed relative to those of the iso-triplet studied here. The iso-singlet suffers from other sources of systematic uncertainty, in particular quark-disconnected diagrams. Given the potential severity of excited-state contaminations, it will be interesting to compare the systematic error budgets for these quantities as methods on both sides improve. \acknowledgements{We thank J. Green, T. Harris, G. von Hippel, P. Junnarkar, D. Mohler, D. Robaina, H. Wittig and all our colleagues in the Mainz lattice group for helpful discussions, encouragement and support. We thank Oliver B\"ar for very helpful comments on the first version of this manuscript.} \section{\label{sec:introduction}Introduction} In the past decades, outstanding progress has been made in reproducing the properties of the strong interaction by numerically calculating QCD correlators on a Euclidean spacetime lattice. One goal of such calculations is to extract various aspects of nuclear structure from the underlying theory, and a target quantity here is the nucleon axial charge, $g_A$, defined via \begin{equation} \label{eq:gAdef} \langle N, \textbf p,\sigma' \vert A^{a}_\mu(0) \vert N, \textbf p, \sigma \rangle = g_A \overline u_{\sigma'}(\textbf p) \Gamma_{A,\mu}^{a} u_{\sigma}(\textbf p) \,, \end{equation} where $A^{ a}_\mu \equiv \overline {\mathcal Q}\Gamma_{A,\mu}^{a } \mathcal Q$ and $\Gamma_{A,\mu}^{a } = T^a \gamma_\mu \gamma_5$. Here $T^a = \tau^a/2$ are the generators of $SU(2)$ isospin and $\mathcal Q$ is a doublet containing the up and down quarks. We have also introduced single-nucleon states with momentum $\textbf p$ and spin $\sigma, \sigma'$ as well as their corresponding spinors $\overline u_{\sigma'}(\textbf p)$ and $ u_{\sigma}(\textbf p)$. These are also isospin doublets built from the proton and neutron.
In this work we use Euclidean conventions for the gamma matrices, $\{\gamma_\mu , \gamma_\nu \}=2 \delta_{\mu \nu}$. The axial charge is in many ways an ideal quantity for lattice QCD (LQCD). In particular, it can be directly accessed from plateaus in Euclidean correlators and does not contain the noisy quark-disconnected diagrams. However, as a nuclear quantity, it suffers from the signal-to-noise problem and this is only made worse in the three-point function required to create a nucleon, couple it to the axial current and then annihilate it. For some time now, lattice calculations of $g_A$ have been prone to underestimate the quantity.% \footnote{See, for example, Refs.~\cite{MarthaRev2014,GreenRev2016,ETMAvx2016,BhattacharyaAxial2016,QCDSFAxial2015,EigoAMA2016}.} % Possible explanations for this include underestimated systematic uncertainties from extrapolation to the physical pion mass, from finite-volume effects and from excited-state contamination. This work is concerned with the latter. Specifically we are interested in excited-state contamination in the context of a ratio of Euclidean three- and two-point correlators, constructed to satisfy \begin{multline} \label{eq:introesc} R(T,t) \underset{T \gg t \gg 0}{\longrightarrow} g_A \\ + \sum_{n=2} \left ( b_n \big (e^{- \Delta E_n (T - t)}+ e^{- \Delta E_n t} \big ) + c_n e^{- \Delta E_n T} \right ) \,, \end{multline} where we have dropped subleading exponentials as we explain in detail in the following section. Here we introduce $T$ as the nucleon source-sink separation, $t$ as the current insertion time, and $\Delta E_n$ as the gap between the nucleon mass and the $(n-1)$th excited state. The coefficients $b_n$ and $c_n$ are related to finite-volume matrix elements as given in Eqs.~(\ref{eq:bndef}) and (\ref{eq:cndef}) below. The excited-state contribution to $R(T,t)$ has been recently studied in both non-relativistic \cite{BrianGA} and relativistic \cite{BarTwoPoint,BarGA} baryon chiral perturbation theory (ChPT). In both cases the authors find that the leading-order (LO) ChPT predictions are independent of the form of the nucleon interpolators.% \footnote{This assumes local three-quark operators. As is carefully discussed in Ref.~\cite{BarTwoPoint}, the prediction also holds for smeared operators, provided that the smearing radius is sufficiently small.} % This leads to the universal prediction that $b_n>0$, and thus that the excited-state contamination is positive. Since the predictions for $b_n$ and $c_n$ depend only on $g_A$, the pion decay constant, $f_\pi$, and known kinematic quantities, the ChPT expressions could in principle be used to remove the leading excited-state contribution in order to more accurately extract $g_A$. To make use of the LO ChPT results, however, one must ensure that these describe present-day numerical LQCD data. As $g_A$ is often extracted from the central value, $R(T,T/2)$, or by fitting a constant to a range of central values, determining the $T$ values needed for $R(T,T/2)$ to enter the regime of LO ChPT is particularly useful. If the source-sink separation is too small, then the set of finite-volume states needed to estimate $R(T,T/2)$ goes beyond the region described by the leading-order prediction. 
Indeed, the curvature of nearly all available numerical LQCD data for $R(T,T/2)$ as a function of $T$ is negative, indicating negative excited-state contamination, in contradiction with the LO ChPT prediction.% \footnote{Again see Refs.~\cite{MarthaRev2014,GreenRev2016,ETMAvx2016,BhattacharyaAxial2016,QCDSFAxial2015,EigoAMA2016}. One exception here is the curvature of the correlator data of Ref.~\cite{Ohta2015}. It is unclear why the results of this work differ from the rest. One possibility is that, as compared to other calculations, the interpolators used in this study have enhanced coupling to the lower excited states.} % Similarly, at fixed $T$, $R(T,t)$ is consistently observed to have negative curvature as a function of the current-insertion time, $t$. We take this as strong evidence that, in present-day LQCD calculations, the values of $T$ are too small for $R(T,t)$ to be well described by the LO ChPT results. In this paper we show that, under plausible assumptions, one can reproduce the qualitative behavior of numerical LQCD correlators by including the contributions of higher-energy states, taking into account $N \pi$ final-state interactions, and postulating a sign change in the infinite-volume axial-vector transition amplitude, $\langle N \pi, \mathrm{out} \vert A_\mu \vert N \rangle$. Using experimentally determined $N \pi$ scattering data in a generalization of L\"uscher's quantization condition \cite{Luscher1986, Luscher1990}, we predict the energies of the finite-volume excited states entering $\Delta E_n$. We then use a generalization of the Lellouch-L\"uscher formalism, again with experimental scattering data, to relate the finite-volume matrix elements in $b_n$ and $c_n$ to infinite-volume matrix elements involving $N \pi$ asymptotic states \cite{Lellouch2000}. To complete the construction, we estimate the remaining infinite-volume matrix elements in a model based on LO ChPT, supplemented by the scattering data. Within this set-up we find that a large number of excited states give an important contribution to $R(T,T/2)$ for realistic $T$ values, and that a sign flip in the axial-vector transition can readily accommodate the empirically observed negative excited-state contamination [see Figs.~\ref{fig:esc1} and \ref{fig:esc2} below]. We find that, for physical pion masses, $T \gtrsim 2 \mathrm{\, fm}$ is needed to enter the regime where LO ChPT describes the lattice correlators. This analysis suffers from various limitations that prevent us from offering reliable quantitative predictions. The most important limitation is the neglect of $N \pi \pi$ states. Here we only study the energies and matrix elements of finite-volume $N \pi$ states. Both the L\"uscher quantization condition and the Lellouch-L\"uscher formalism hold only for energies below three-particle production threshold, but in this work we also include energies above $N \pi \pi$ threshold where the relations develop uncontrolled systematic uncertainties. There is evidence that the breakdown of the formalism turns on slowly as one crosses multi-particle thresholds,% \footnote{See, for example, the phase shifts extracted above multi-particle thresholds in Ref.~\cite{WilsonCoupRho2015}.} % but in the vicinity of the Roper resonance, the neglected three-particle states could have a significant contribution. [See also the discussion in the paragraph following Eq.~(\ref{eq:omNfree}) below.]
Other limitations of this study include the modeling of the infinite-volume matrix elements, explained in detail in Sec.~\ref{sec:infME}, as well as the restriction to physical pion masses. The latter is a natural limitation given our approach of working with experimental scattering data. As a result, the predictions for $R(T,t)$ discussed in Sec.~\ref{sec:contam} are most directly applicable to ensembles near the physical point. As an aside we comment that, in order to have a solid theoretical foundation for this work, it was necessary to make contact with the LO ChPT results derived in Refs.~\cite{BarTwoPoint,BarGA}. In these earlier publications, $\Delta E_n$ is approximated using non-interacting $N \pi$ states in a finite volume, so that the work is concerned only with predicting the coefficients, $b_n$ and $c_n$. Since we are using the Lellouch-L\"uscher formalism to predict the coefficients in this study, it was necessary to first understand how this formalism can be used to reproduce the LO ChPT results. We were able to make this connection in detail, re-deriving some of the expressions reported in Refs.~\cite{BarTwoPoint,BarGA}. This is interesting in its own right as it shows how the Lellouch-L\"uscher formalism provides a shortcut for extracting ChPT predictions of these and related quantities. In particular, the numerous one-loop diagrams needed to determine $b_n$ in Ref.~\cite{BarGA} are replaced in the present approach by five tree-level diagrams. Details are given in the appendix. The remainder of this article is organized as follows. In the following section we define the correlators, the ratio $R$ and the parameters $\Delta E_n$, $b_n$ and $c_n$ that describe the excited states. In Sec.~\ref{sec:es} we use experimental partial-wave data to estimate the interacting energy gaps $\Delta E_n$ associated with $N \pi$ states. Then in Sec.~\ref{sec:LL} we give estimates for the coefficients $b_n$ and $c_n$. This leads to estimates of the excited-state contamination for typical present-day lattice set-ups, presented in Sec.~\ref{sec:contam}. In the appendix we detail the derivation of various ChPT expressions used in the main text. \section{\label{sec:extractGA}Extracting $g_A$ from the lattice} Various methods exist for using numerical LQCD to determine $g_A$. Common to all approaches is the determination of two- and three-point correlators of the form \begin{align} \begin{split} \label{eq:C3def} C^{}_3(T,t) & \equiv \int d^3 \textbf x \int d^3 \textbf y \ \Gamma'_{\mu, \alpha \beta} \\ & \hspace{50pt} \times \langle \mathcal O_\beta(\textbf x, T) A^{3}_\mu(\textbf y, t) \overline {\mathcal O}_\alpha(0) \rangle \,, \end{split} \\ C^{}_2(T) & \equiv \int d^3 \textbf x \ \Gamma_{\alpha \beta} \langle \mathcal O_\beta(\textbf x, T) \overline {\mathcal O}_\alpha(0) \rangle \,, \label{eq:C2def} \end{align} where $\overline {\mathcal O}_\alpha$, ${\mathcal O}_\beta$ are proton interpolating fields, $A^{3}_\mu$ is the third isospin component of the axial vector current, and $\Gamma'$ and $\Gamma$ are projectors. In this work we restrict attention to states that have zero three-momentum in the finite-volume frame.
Defining $\widetilde {\mathcal O}^{}_\beta(T) \equiv \int d^3 \textbf x \ \mathcal O_\beta(\textbf x, T)$,\\ $\widetilde A^{3}_\mu(t) \equiv \int d^3 \textbf y \ A^{3}_\mu(\textbf y, t) $, and performing a spectral decomposition, we reach \begin{align} \label{eq:sd3} \begin{split} C^{}_3(T,t) & \equiv L^{-3} \sum_{n,m} \ \Gamma'_{\mu, \alpha \beta} \langle 0 \vert \widetilde {\mathcal O}^{}_\beta \vert n \rangle \langle n \vert \widetilde A^{3}_\mu \vert m \rangle \\[-10pt] & \hspace{60pt} \times \langle m \vert \widetilde {\overline {\mathcal O}}^{}_\alpha \vert 0 \rangle e^{- E_n(T-t)} e^{- E_m t} \,, \end{split} \\[5pt] C^{}_2(T) & \equiv L^{-3} \sum_{n} \ \Gamma_{\alpha \beta} \langle 0 \vert \widetilde {\mathcal O}^{}_\beta \vert n \rangle \langle n \vert \widetilde{ \overline {\mathcal O}}^{}_\alpha \vert 0 \rangle e^{- E_n T} \,, \label{eq:sd2} \end{align} where we have assumed $T>t>0$ and have used the shorthand $\widetilde {\mathcal O}^{}_\beta \equiv \widetilde {\mathcal O}^{}_\beta(0)$ and similar for $\widetilde A^{3}_\mu$. To treat the fields equivalently we have Fourier transformed $\overline {\mathcal O}_\alpha$ over the spatial volume, but have also divided by the volume to preserve the definitions. Throughout this work all finite-volume states are normalized as $\langle n \vert n \rangle = 1$. We next observe that the lowest state in the sum, denoted by $n,m=1$, is the single-nucleon state. From this it follows that the ratio of the $n,m=1$ terms in $C_3(T,t)$ and $C_2(T)$ gives $g_A$ \begin{equation} g_A \equiv \frac{\Gamma'_{\mu, \alpha \beta} \langle 0 \vert \widetilde {\mathcal O}^{}_\beta \vert 1 \rangle \langle 1 \vert \widetilde A^{3}_\mu \vert 1 \rangle \langle 1 \vert \widetilde {\overline {\mathcal O}}^{}_\alpha \vert 0 \rangle}{\Gamma_{\alpha \beta} \langle 0 \vert \widetilde {\mathcal O}^{}_\beta \vert 1 \rangle \langle 1 \vert \widetilde{\overline {\mathcal O}}^{}_\alpha \vert 0 \rangle} \,. \end{equation} This relies on the definitions of $\Gamma$ and $\Gamma'$, which are constructed to ensure that the result holds. It follows that $g_A$ can be accessed by identifying a plateau in the ratio \begin{equation} R^{}(T,t) \equiv \frac{C^{}_3(T,t)}{C^{}_2(T)} \,. \end{equation} Substituting the spectral decompositions, Eqs.~(\ref{eq:sd3}) and (\ref{eq:sd2}), taking $T \gg t \gg 0$ and expanding the denominator, we find \begin{multline} \label{eq:Rdecom} R^{}(T,t) = g_A + \sum_{n=2}^\infty \bigg [ b_n \big ( e^{- \Delta E_n (T - t)} + e^{- \Delta E_n t} \big ) \\ + c_n e^{- \Delta E_n T } + \cdots \bigg ] \,, \end{multline} where $\Delta E_n \equiv E_n - E_1 = E_n - m_N + \mathcal O(e^{- M_\pi L})$, with $E_n$ the energy of the $(n-1)$th excited state, $m_N$ the nucleon mass and $M_\pi$ the pion mass. Here we have introduced $L$ as the linear spatial extent of the volume and have used the fact that finite-volume corrections to the nucleon mass are exponentially suppressed. We neglect such corrections throughout.
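As a quick consistency check on this expansion, the following toy sketch compares the exact ratio $C_3/C_2$ for a system with a single excited state against the expanded form, anticipating the decomposition $c_n = -g_A c_{2,n} + c_{3,n}$ given below. All numbers are invented, and the overlap factors are absorbed into the coefficients.
\begin{verbatim}
import numpy as np

# toy spectrum: ground state plus one excited state (lattice units)
gA, b2, c22, c32 = 1.27, 0.05, 0.04, 0.01   # invented values
mN, E2 = 0.50, 0.80
dE = E2 - mN

def C2(T):       # two-point function, cf. Eq. (sd2)
    return np.exp(-mN*T)*(1.0 + c22*np.exp(-dE*T))

def C3(T, t):    # three-point function, cf. Eq. (sd3)
    return np.exp(-mN*T)*(gA
                          + b2*(np.exp(-dE*(T - t)) + np.exp(-dE*t))
                          + c32*np.exp(-dE*T))

def R_expanded(T, t):   # Eq. (Rdecom) with c_2 = c32 - gA*c22
    return (gA + b2*(np.exp(-dE*(T - t)) + np.exp(-dE*t))
            + (c32 - gA*c22)*np.exp(-dE*T))

T = 20.0
for t in (5.0, 10.0, 15.0):
    print(t, C3(T, t)/C2(T), R_expanded(T, t))
\end{verbatim}
The two columns agree up to the neglected higher-order exponentials, illustrating why expanding the denominator is a good approximation at large $T$.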
In Eq.~(\ref{eq:Rdecom}) we have also introduced \begin{align} \label{eq:bndef} b_n & \equiv \frac{ \Gamma'_{\mu, \alpha \beta} \langle 0 \vert {\widetilde {\mathcal O}}^{}_\beta \vert n \rangle \langle n \vert \widetilde A^{3}_\mu \vert 1 \rangle \langle 1 \vert \widetilde {\overline {\mathcal O}}^{}_\alpha \vert 0 \rangle}{ \Gamma_{ \alpha \beta} \langle 0 \vert {\widetilde {\mathcal O}}^{}_\beta \vert 1 \rangle \langle 1 \vert \widetilde {\overline {\mathcal O}}^{}_\alpha \vert 0 \rangle } \,, \\ c_n & \equiv - g_A c_{2,n} + c_{3,n} \,, \label{eq:cndef} \end{align} where \begin{align} c_{2,n} & = \frac{\Gamma_{ \alpha \beta} \langle 0 \vert {\widetilde {\mathcal O}}^{}_\beta \vert n \rangle \langle n \vert \widetilde {\overline {\mathcal O}}^{}_\alpha \vert 0 \rangle }{\Gamma_{ \alpha \beta} \langle 0 \vert {\widetilde {\mathcal O}}^{}_\beta \vert 1 \rangle \langle 1 \vert \widetilde {\overline {\mathcal O}}^{}_\alpha \vert 0 \rangle} \,, \label{eq:c2ndef}\\ \label{eq:c3ndef} c_{3,n} & = \frac{\Gamma'_{\mu, \alpha \beta} \langle 0 \vert {\widetilde {\mathcal O}}^{}_\beta \vert n \rangle \langle n \vert \widetilde A^{3}_\mu \vert n \rangle \langle n \vert \widetilde {\overline {\mathcal O}}^{}_\alpha \vert 0 \rangle}{\Gamma_{ \alpha \beta} \langle 0 \vert {\widetilde {\mathcal O}}^{}_\beta \vert 1 \rangle \langle 1 \vert \widetilde {\overline {\mathcal O}}^{}_\alpha \vert 0 \rangle} \,. \end{align} Note that the definition for $b_n$, Eq.~(\ref{eq:bndef}), directly arises from the coefficient on the first exponential, $ e^{- \Delta E_n (T - t)} $, whereas the factor multiplying the second exponential, $ e^{- \Delta E_n t}$, has a different definition. However, as long as $\Gamma'_{\mu}$ is anti-hermitian and $\Gamma$ is hermitian, Euclidean definitions of charge-conjugation and time-reversal invariance imply $R(T,t) = R(T,T-t)=R^*(T,t)$. Thus the two coefficients are identically equal and we take $b_n$ as the coefficient for both source-to-current and current-to-sink time dependence. As can be seen by comparing the definitions, Eqs.~(\ref{eq:bndef}) and (\ref{eq:cndef}), the matrix elements required to access the source-to-sink coefficient, $c_n$, are more complicated than those needed for $b_n$. The first term in the definition of $c_n$, proportional to $c_{2,n}$ defined in Eq.~(\ref{eq:c2ndef}), arises from expanding the excited-state contamination of $C_2(T)$ in the denominator. This term depends on the same matrix elements that appear in the definition of $b_n$ and can be studied using the same approach. The second term in $c_n$, $c_{3,n}$ defined in Eq.~(\ref{eq:c3ndef}), arises from source-to-sink contributions in $C_3(T,t)$ and is thus more complicated. This term turns out to be numerically suppressed in LO ChPT and thus unimportant in our qualitative study. With this in mind, in this work we simply use the LO ChPT result for $c_{3,n}$ and only apply the Lellouch-L\"uscher-like analysis to $b_n$ and $c_{2,n}$. The ellipsis in Eq.~(\ref{eq:Rdecom}) stands for terms suppressed by additional factors of $e^{- \Delta E_m t}$, $e^{- \Delta E_m (T-t)}$ or $e^{- \Delta E_m T}$. These neglected terms arise for two reasons. One contribution is from higher orders in the expansion of $C_2(T)$ in the denominator. This expansion is not required but is a good approximation and simplifies the resulting expressions. The second neglected contribution is from terms in Eq.~(\ref{eq:sd3}) with $n \neq m$, with both indices corresponding to excited states.
Such terms involve two-to-two (rather than one-to-two) axial-vector matrix elements and are expected to be suppressed relative to those we keep. We caution that these two-to-two transitions are not necessarily volume suppressed. For example, LO ChPT predicts the same volume dependence for $c_{2,n}$ and $c_{3,n}$. This is the case because, at leading order, the current mediating the two-to-two transition couples only to one of the two particles. When the current couples to both particles an extra factor of volume suppression does arise.% \footnote{For full details on the generalization of the Lellouch-L\"uscher approach to two-to-two matrix elements with one-to-one subprocesses, see Ref.~\cite{BH2to2}.}% The aim of this work is to estimate the value of the sum in Eq.~(\ref{eq:Rdecom}) for given $T$ and $t$. In the following section we study $\Delta E_n$ and in Sec.~\ref{sec:LL} we turn to $b_n$ and $c_n$. \section{\label{sec:es} Estimating the excited-state energies} The finite-volume quantization condition derived by L\"uscher~\cite{Luscher1986,Luscher1990} has since been extended to include moving frames, non-identical and non-degenerate particles, coupled two-particle channels, and particles with spin~\cite{Rummukainen1995, KSS2005, Christ2005, Lage2009, Bernard2010, Doring2011, HSmultiLL, BricenoTwoPart2012, Fu2012, Gockeler2012, BricenoSpin, BHOneToTwoSpin}. These extensions can be used to estimate the finite-volume energies that appear in $R(T,t)$. In particular, in the range $m_N + M_\pi < E_n < m_N + 2 M_\pi$, the finite-volume energies can be determined using the L\"uscher quantization condition by inputting the experimentally determined phase shift for $N \pi$ scattering. It is useful to consider these energies relative to the energies of non-interacting particles in a finite volume. The non-interacting levels are determined by constraining the momentum to satisfy $\textbf p = 2 \pi \textbf n/L$, where $L$ is the linear extent of the volume and $\textbf n$ is a three-vector of integers. This constraint is appropriate to a cubic finite spatial volume with periodic boundary conditions, and in this work we restrict ourselves to this simplest set-up. In Fig.~\ref{fig:freelevels} we display the non-interacting energies as a function of $M_\pi L$, given by \begin{multline} E_n \in \Big \{ \{ \omega_{\pi, \textbf n} + \omega_{N, \textbf n} \} , \{ \omega_{\pi, \textbf n}+ \omega_{\pi, \textbf m} + \omega_{N, \textbf n + \textbf m} \}, \cdots \Big \} \,, \end{multline} where \begin{align} \omega_{\pi, \textbf n} & \equiv \sqrt{M_\pi^2 + (2 \pi/L)^2 \textbf n^2} \,, \\[5pt] \omega_{N, \textbf n} & \equiv \sqrt{m_N^2 + (2 \pi/L)^2 \textbf n^2} \,, \label{eq:omNfree} \end{align} and where the ellipsis indicates four- (or more) particle states. As described in the figure caption, we are interested in states that have the quantum numbers of a single nucleon, $I(J^{P}) = \nicefrac[]{1}{2} (\nicefrac[]{1}{2}^{+})$. For this reason the state with a pion and nucleon both at rest does not contribute. This state only couples to the $s$-wave and thus has negative parity due to the intrinsic parity of the pion. \begin{figure} \begin{center} \vspace{20pt} \includegraphics[scale=0.45]{figs/fig1.pdf} \end{center} \caption{Energy levels of non-interacting finite-volume states, with quantum numbers of a single nucleon at rest in the finite-volume frame. The location of the $N \pi$ threshold is indicated by the dashed horizontal line.
The state with this energy is not included because its parity is opposite to that of the nucleon. The lowest solid horizontal line indicates the single nucleon energy, and the gap from here determines the size of the contributions to $R(T,t)$. Finally, we have included three different types of finite-volume states, distinguished by three colors. Blue levels are back-to-back $N \pi$ states, green levels are $N \pi \pi$ states with one pion at rest, and magenta levels are $N \pi \pi$ states with the nucleon at rest. For the latter two sets only the first few levels are shown to avoid clutter.} \label{fig:freelevels} \end{figure} Also apparent from Fig.~\ref{fig:freelevels} is that, for physical pion masses and realistic values of $M_\pi L$, the L\"uscher formalism only rigorously describes, at most, the first excited state. For $E_n> m_N + 2 M_\pi$, an extended three-particle formalism is required. This has recently been developed by one of us for three-pion states, and the extension to general three-particle systems is underway \cite{LtoK,KtoM}. Because the three-particle formalism is not yet directly applicable to $N \pi \to N \pi \pi$, in this work we restrict attention to the two-particle formalism, but also apply it above threshold where the predictions suffer from systematic uncertainties. As we explain more in Sec.~\ref{sec:contam}, an important conclusion of our analysis is that finite-volume states in the vicinity of the Roper resonance can contribute significantly to $R(T,t)$. Given that the Roper has a $\sim\!40\%$ branching fraction to $N \pi \pi$ \cite{PDG}, three-particle states certainly need to be included to offer reliable quantitative predictions in this region. However, barring delicate cancellations between two- and three-particle states, the \emph{qualitative} conclusions presented here are expected to hold. Three-particle contributions may also be enhanced when the energy of an $N \pi$ pair within $N \pi \pi$ is close to the delta resonance. Indeed, in the ChPT analysis of Ref.~\cite{BrianGA} the delta resonance was also included and was found to reduce the value of $R(T,T/2)$. Finally we stress that one can only use the L\"uscher quantization condition with scattering amplitudes that take the unitary form of elastic two-particle scattering, Eq.~(\ref{eq:MJdef}). Thus, in this approximation we must also neglect the inelasticity in the two-particle scattering amplitude. [See also Footnote 6.] In the following subsections we present our prediction for the finite-volume energy gaps $\Delta E_n$. First, in Sec.~\ref{sec:qc}, we give the quantization condition for general two-particle systems and show how it can be reduced to describe the $N \pi$ states of interest. Then, in Sec.~\ref{sec:expt}, we use the experimental phase-shift data to predict the finite-volume spectrum. \subsection{Reducing the quantization condition} \label{sec:qc} The quantization condition for particles with spin is a straightforward generalization of L\"uscher's original result. Indeed a wide class of generalizations is described by the same basic form~\cite{Luscher1986, Luscher1990, Rummukainen1995, KSS2005, Christ2005, Bernard2008, Lage2009, Bernard2010, Doring2011, HSmultiLL, BricenoTwoPart2012, Fu2012, Gockeler2012, BricenoSpin, BHOneToTwoSpin} \begin{equation} \label{eq:qc} \det \Big[\mathcal M^{-1}(E_n) + F(E_n, L) \Big ] = 0 \,. \end{equation} Here $\mathcal M$ is the two-to-two scattering amplitude and $F$ is a known geometric function.
This result describes any two-particle system, with any number of two-particle channels with identical or non-identical particles, degenerate or non-degenerate masses and with arbitrary spin. To describe a specific system one need only specify the exact definitions, and in particular the index space, for $\mathcal M$ and $F$. As a preliminary example we consider a system with one channel of two non-identical {\em scalars} with masses $M_\pi$ and $m_N$. In this case both $\mathcal M$ and $F$ have two sets of spherical harmonic indices. The scattering amplitude is a diagonal matrix in this space, whose entries are related in the standard way to scattering phase shifts. $F$, by contrast, has on- and off-diagonal entries. This encodes the mixing of partial waves due to the reduced symmetry of the box. $F$ can be written as a sum-integral-difference~\cite{Luscher1986, Luscher1990, Rummukainen1995, KSS2005, Christ2005} \begin{multline} \label{eq:Fmatrix} F_{\ell', m'; \ell, m}(E, L) \equiv \bigg[ \frac{1}{L^3} \sum_{\textbf k} - \int \frac{d^3 \textbf k}{(2 \pi)^3} \bigg ] \\ \times \frac{4 \pi Y^*_{\ell', m'}(\hat{\textbf k} ) Y_{\ell,m}(\hat{\textbf k} ) }{2 \omega_\pi 2 \omega_N (E - \omega_\pi - \omega_N + i \epsilon) } \left ( \frac{k}{p} \right )^{\ell + \ell'} \,, \end{multline} where \begin{align} \omega_\pi \equiv \sqrt{M_\pi^2 + k^2} \,, \ \ \omega_N \equiv \sqrt{m_N^2 + k^2} \,, \end{align} $k = \vert \textbf k \vert$, $\hat {\textbf k} = \textbf k/k$, and the sum runs over all ${\textbf k} = (2\pi/L) {\textbf n}$, ${\textbf n} \in \mathbb{Z}^3$. In Eq.~(\ref{eq:Fmatrix}) an ultraviolet regulator is needed to make the quantity well defined. Since the sum and integral have the same ultraviolet divergence, a universal result is recovered as the regulator is removed. Here $p$ is the magnitude of the CM-frame momentum for particles with energy $E$ and masses $M_\pi$ and $m_N$ \begin{equation} \label{eq:pdef} E \equiv \sqrt{M_\pi^2 + p^2} + \sqrt{m_N^2 + p^2} \,. \end{equation} To incorporate spin in this system it is most straightforward to first work in the basis where the nucleon is polarized along some fixed direction in its CM frame. This new degree of freedom, denoted by $\sigma$, can be accommodated with two simple modifications. First, the amplitude gains an additional index, $\mathcal M = \mathcal M_{\ell', m', \sigma'; \ell, m, \sigma}$. Second, the kinematic matrix $F$ is multiplied with a Kronecker delta, $\delta_{\sigma' \sigma}$. This completely defines the scalar-nucleon quantization condition. Indeed, the arbitrary-spin quantization condition is given by simply multiplying the $F$ matrices with Kronecker deltas \cite{BricenoSpin,BHOneToTwoSpin}. \bigskip Next, to connect with experimental phase shifts, it is convenient to change to the basis of total angular momentum, $J$, orbital angular momentum, $\ell$, and azimuthal component of total angular momentum, $\mu$. The basis change is effected by contracting both sets of indices with standard Clebsch-Gordan coefficients. The amplitude in the new basis can be written% \footnote{Above three-particle threshold this expression no longer applies and an additional parameter must be introduced to parametrize the inelasticity. Here we are neglecting the inelasticity, even above multi-particle threshold. This approximation, which is consistent with the neglect of $N \pi \pi$ states in the L\"uscher quantization condition, breaks down as the energy increases.
} \begin{equation} \label{eq:MJdef} \mathcal M_{J', \ell', \mu'; J, \ell, \mu} \equiv \delta_{J' J} \delta_{\ell' \ell} \delta_{\mu' \mu} \frac{8 \pi E}{p \cot \delta_{J,\ell}(p) - i p} \,. \end{equation} Note that the conservation of orbital angular momentum is special to the meson-baryon system. Generally orbital angular momenta will mix, but in this case conservation of total angular momentum implies that $\ell$ could at most couple with $\ell \pm 1$. Since changing by one unit flips parity, this coupling vanishes and $\ell$ is conserved. $F$ in the new basis is given by~\cite{Gockeler2012,BricenoSpin} \begin{multline} \label{eq:FJdef} F_{J', \ell', \mu'; J, \ell, \mu} \equiv \\ \sum_{m,\sigma,m'} \langle \ell \ m, \nicefrac12 \ \sigma \vert J \mu \rangle \langle \ell' \ m', \nicefrac12 \ \sigma \vert J' \mu' \rangle F_{ \ell', m'; \ell, m} \,. \end{multline} We make one final simplification before introducing approximations. One can show that the imaginary parts of $\mathcal M^{-1}$ and $F$ perfectly cancel in Eq.~(\ref{eq:qc}), giving \begin{equation} \det \Big [ \overline F_{J' \ell' \mu'; J \ell \mu} + \delta_{J' J} \delta_{\ell' \ell} \delta_{\mu' \mu} \cot \delta_{J,\ell}(p) \Big ] = 0 \,, \end{equation} where $\overline F = 8 \pi E \mathrm{Re}[F] /p$. We now reduce the quantization condition to a determinant of a finite-dimensional matrix by ignoring high partial waves. It turns out that, in the even-parity sector, we reach the simplest possible truncation by neglecting $\delta_{J,\ell}$ for $\ell \geq 3$. Then the system is truncated to the $\ell=1$ space. In this space $\overline F_{J' \ell' \mu'; J \ell \mu}$ is a $6 \times 6$ matrix: a $2 \times 2$ block for $\ell=1,J=1/2$ and a $4 \times 4$ block for $\ell=1,J=3/2$. To determine its specific form we first note that \begin{equation} \overline F_{\ell'=1, m'; \ell=1, m} = - \frac{1}{q \pi^{3/2}} Z_{00}(1,q^2) \delta_{m'm} \,, \end{equation} where $Z_{00}$ is the L\"uscher zeta-function described in Ref.~\cite{Luscher1990} and $q \equiv p L/(2 \pi)$. The fact that $\overline F$ is proportional to the identity matrix in the $\ell=1$ subspace is preserved when we change to the $J$ basis. Thus, both matrices in the quantization condition are diagonal and the final result is two independent, one-dimensional equations \begin{equation} \overline F_{11;11} + \cot \delta_{J, \ell=1}(p) = 0 \,, \end{equation} for $J=1/2$ or $3/2$. These can be reexpressed as \begin{equation} \label{eq:simplestqc} \phi(q) + \delta_{J, \ell=1}(p) = n \pi \,, \end{equation} where $n$ is an arbitrary integer and \begin{equation} \label{eq:phidef} \cot \phi(q) = \overline F_{11;11} = - \frac{1}{q \pi^{3/2}} Z_{00}(1,q^2) \,. \end{equation} We comment that this has the same form as the s-wave, scalar quantization condition. The quantity $\phi$ is often referred to as the pseudophase. The fact that the $J=1/2$ and $J=3/2$ sectors decouple can be explained by examining the symmetry group of the finite-volume system. For the case of one scalar and one spin-half particle in a finite cubic box with zero total momentum, the symmetry group is ${}^2O \otimes S_2$ and the irreps are $G_1^{\pm}, G_2^{\pm}$ and $H^{\pm}$~\cite{Bernard2008}. If we neglect $\ell \geq 3$ and thus also neglect $J \geq 5/2$, then we find a perfect correspondence between finite- and infinite-volume irreps $G_1^- \equiv (J=1/2)$ and $H^- \equiv (J=3/2)$. This implies that, within this approximation, the two partial waves cannot mix, as we have seen by explicit calculation.
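Anticipating the next subsection, the following Python sketch illustrates how Eqs.~(\ref{eq:simplestqc}) and (\ref{eq:phidef}) can be solved numerically. Three simplifications are made for brevity, each an assumption of the sketch rather than of our analysis: the zeta function is evaluated with a slowly converging cutoff regularization, roots are found by linear interpolation on a grid, and a crude Roper-like parametrization stands in for the experimental phase shift used below.
\begin{verbatim}
import numpy as np

M_PI, M_N = 0.1396, 0.9383        # physical masses in GeV
NMAX = 24                         # cutoff on integer vectors in Z00

_n = np.arange(-NMAX, NMAX + 1)
_nsq = (_n[:, None, None]**2 + _n[None, :, None]**2
        + _n[None, None, :]**2).ravel()
_nsq = _nsq[_nsq <= NMAX**2].astype(float)

def z00(qsq):
    """Cutoff-regularized Z00(1,q^2); converges ~1/NMAX (enough here)."""
    return (np.sum(1.0 / (_nsq - qsq))
            - 4.0 * np.pi * NMAX) / np.sqrt(4.0 * np.pi)

def delta_stand_in(E, E_R=1.44, width=0.35):
    """Crude Roper-like stand-in for the J=1/2, l=1 N pi phase (radians)."""
    return np.arctan2(0.5 * width, E_R - E)  # rises slowly from 0 towards pi

def spectrum(mpiL, levels=4):
    """Solve phi(q) + delta(p) = m*pi, Eq. (eq:simplestqc), on a grid."""
    L = mpiL / M_PI                           # box size in GeV^-1
    p = np.linspace(0.02, 0.75, 3000)         # CM momenta in GeV
    E = np.sqrt(M_PI**2 + p**2) + np.sqrt(M_N**2 + p**2)
    q = p * L / (2.0 * np.pi)
    # cot(phi) = -Z00/(q pi^{3/2}), Eq. (eq:phidef); unwrap for continuity
    phi = np.array([np.arctan(-qi * np.pi**1.5 / z00(qi * qi)) for qi in q])
    phi = np.unwrap(2.0 * phi) / 2.0
    lhs = phi + delta_stand_in(E)
    return [float(np.interp(m * np.pi, lhs, E))
            for m in range(1, levels + 1) if lhs[0] < m * np.pi < lhs[-1]]

print(spectrum(mpiL=4.0))   # first few interacting N pi levels in GeV
\end{verbatim}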
\subsection{Predicting the spectrum from the experimental $N \pi$ phase shift} \label{sec:expt} To predict the finite-volume spectrum of $N \pi$ states we use experimental data made available by the George Washington University Institute for Nuclear Studies. Their data analysis center is available online at \url{http://gwdac.phys.gwu.edu/analysis/pin_analysis.html}. In this study we use their partial wave analysis WI08 solution. The relevant phase shift data are plotted in Fig.~\ref{fig:pshifts}. For detailed information about the experimental data set and the WI08 fit solution see Refs.~\cite{arndt_extended_2006, paris_toward_2010}. \begin{figure} \includegraphics[scale=0.45]{figs/fig2.pdf} \caption{The experimental phase shift for $N \pi$ scattering with $I(J^{P}) = \nicefrac[]{1}{2} (\nicefrac[]{1}{2}^{+})$. The slow rise through $\pi/2$ is associated with the broad Roper resonance.} \label{fig:pshifts} \end{figure} \begin{figure} \includegraphics[scale=0.45]{figs/fig3.pdf} \caption{Interacting finite-volume $N \pi$ states with $I(J^{P}) = \nicefrac[]{1}{2} (\nicefrac[]{1}{2}^{+})$. The dashed, black curves show the non-interacting energy levels.} \label{fig:spec} \end{figure} Substituting this phase shift into Eq.~(\ref{eq:simplestqc}) we reach the prediction for the two-particle energies, shown in Fig.~\ref{fig:spec}. Note that, relative to the gap to the single-nucleon state, the shift between free and interacting levels is relatively small. This means that it makes little difference whether one uses the free or interacting finite-volume spectrum for the values of $\Delta E_n$ that enter $R(T,t)$.\footnote{In this work we do not plot an explicit comparison but, as we comment in Sec.~\ref{sec:contam} below, if one uses LO ChPT for the infinite-volume matrix elements, then the effect of interactions in the energies and Lellouch-L\"uscher factors affects the prediction for $R(T,T/2)$ at the percent level.} Also apparent from Fig.~\ref{fig:spec} is that no avoided level crossing is visible. This is because the Roper resonance is too broad to generate such an effect. It follows that, near the physical point, no direct association between LQCD energies and the resonance can be made and a careful L\"uscher-based analysis will be needed to extract resonance properties from LQCD. To better understand these results, consider the form of the pseudophase curves, plotted together with the experimental phase shift for $M_\pi L = 4$ in Fig.~\ref{fig:phaseandps}. The interacting energies, at this $L$ value, are given by the intersections of the curves. This shows that there are universal features for the levels predicted by certain types of phase shifts. In particular, for any phase shift that slowly rises from $0$ to $\pi$, the spectrum is given by a smooth deformation of the free levels. When $\delta(p)$ is near $0$ or $\pi$ the energies coincide with free values. As one follows a given interacting level from high energies to low (by increasing $M_\pi L$) it rises by one free level. This implies that, for any slowly rising phase shift, interacting levels tend to be separated from their neighbors on each side by levels of the free theory. Also, the rise of the phase shift from $0$ to $\pi$ results in exactly one additional energy level relative to the free theory. Finally, as we have already stressed above, in this prediction of the energy levels we neglect the effects of crossing the three-particle production threshold. Roughly speaking, crossing this threshold has two effects.
First, three-particle states appear on top of the two-particle states shown in the figure. Second, the positions of all energies are modified relative to those predicted by the two-particle L\"uscher formula. Strictly speaking, it does not make sense to distinguish between two- and three-particle states. All finite-volume states will have both two- and three-particle components once the energy exceeds $2 M_\pi + m_N$. However, for sufficiently weak two-to-three couplings, the levels are well described as two-particle- or three-particle-like in certain regions, with avoided level crossings occurring whenever a given level changes from one type to the other. The overlap of the interpolator on a given state is also expected to be suppressed when the state has a large three-particle component, possibly with the exception of energies near the Roper. These observations, and the limitations of the available formalism, motivate us to use the effective spectrum plotted in Fig.~\ref{fig:spec} in our study of excited-state contamination. \begin{figure} \includegraphics[scale=0.45]{figs/fig4.pdf} \vspace{-10pt} \caption{Experimental scattering phase together with L\"uscher pseudophase curves, $(n \pi - \phi(q))$, for $M_\pi L = 4$. Each intersection gives an interacting level in terms of $q^2$, which can be converted to energy via $E = \sqrt{M_\pi^2 + (2 \pi/L)^2 q^2} + \sqrt{m_N^2 + (2 \pi/L)^2 q^2}$.} \label{fig:phaseandps} \end{figure} \section{Estimating the finite-volume matrix elements} \label{sec:LL} In this section we use experimental scattering data, together with LO ChPT and a model to describe $N \pi$ final-state interactions, in order to estimate the finite-volume matrix elements entering $b_n$ and $c_n$. The finite-volume two-particle states, denoted by $\vert n \rangle$, arise from insertions of the identity in Eqs.~(\ref{eq:sd3}) and (\ref{eq:sd2}) and always appear as an outer product of the form $\vert n \rangle \langle n \vert$. This is exactly the structure that is readily accommodated by the generalized Lellouch-L\"uscher formalism, as we now describe. The original work by Lellouch and L\"uscher gives the relation between a finite-volume matrix element and the $K \to \pi \pi$ decay rate \cite{Lellouch2000} \begin{equation} \label{eq:LLresult} \langle \pi \pi, E_n, L \vert \mathcal H(0) \vert K,L \rangle^2 = \frac{\vert \mathcal R \vert}{ 2 M_K L^6} \vert \mathcal H^{\mathrm{out}} \vert^2 \,, \end{equation} where $M_K$ is the kaon mass, $\mathcal H$ is the weak hamiltonian density in position space, $\mathcal H^{\mathrm{out}} \equiv \langle \pi \pi, \mathrm{out} \vert \mathcal H(0) \vert K \rangle $ is the corresponding infinite-volume matrix element and \begin{equation} \vert \mathcal R \vert = \frac{p}{16 \pi M_K} \left [ \frac{\partial}{\partial E} \left( \phi + \delta_{\pi \pi} \right ) \right ]_{E=M_K}^{-1} \,, \end{equation} where $\phi$ is defined in Eq.~(\ref{eq:phidef}) above and $\delta_{\pi \pi}$ is the s-wave phase-shift for elastic pion scattering. The finite-volume states on the right-hand side of Eq.~(\ref{eq:LLresult}) are unit normalized, whereas the infinite-volume states within $\mathcal H^{\mathrm{out}}$ satisfy the standard relativistic normalization. In the derivation of Lellouch and L\"uscher the box size must be tuned so that the two-pion and kaon states are degenerate, $E_n = M_K$.
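In practice the only nontrivial ingredient in $\vert \mathcal R \vert$ is the energy derivative of $\phi + \delta_{\pi \pi}$. A minimal Python sketch of the numerical evaluation, in which this combination must be supplied by the user (for instance from a zeta-function routine together with a parametrization of the elastic $\pi \pi$ phase), might read as follows; the linear toy phase in the final line exists only to demonstrate the call.
\begin{verbatim}
import numpy as np

def ll_factor(phase_sum, M_K, p, dE=1e-4):
    """|R| = p/(16 pi M_K) / [d(phi + delta_pipi)/dE], evaluated at E = M_K.

    phase_sum : callable, E -> phi(q) + delta_pipi(p) in radians
    M_K, p    : kaon mass and CM momentum of the degenerate two-pion state
    """
    slope = (phase_sum(M_K + dE) - phase_sum(M_K - dE)) / (2.0 * dE)
    return p / (16.0 * np.pi * M_K) / slope

# Demonstration with a toy linear phase (slope 12 GeV^-1); GeV units
M_K = 0.4977
p = np.sqrt(M_K**2 / 4.0 - 0.1396**2)   # back-to-back pion momentum
print(ll_factor(lambda E: 12.0 * E, M_K, p))
\end{verbatim}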
As has become increasingly clear through various subsequent studies, see for example Refs.~\cite{Lellouch2000, KSS2005, Christ2005, MeyerTimelike2011, HarveyPhotodis, HSmultiLL, BricenoTwoPart2012, BernardUnstable2012, BHWOneToTwo, BHOneToTwoSpin}, the conversion factor, $\vert \mathcal R \vert$, can be understood as a relation between the two-particle states defined in finite and infinite volume. This means that essentially the same relation holds even after relaxing a number of assumptions going into the original derivation. In particular one can define a Lellouch-L\"uscher-like relation for operators with generic quantum numbers and with nonzero energy and momentum.% \footnote{In the context of $K \to \pi \pi$ this relaxes the need to tune the two-pion state to be degenerate. However, if one does not perform this tuning then the resulting infinite-volume matrix element has an energy difference between incoming and outgoing states, and thus loses its straightforward physical interpretation.} % For products of matrix elements that involve an outer product of finite-volume states, the relation can be derived without taking magnitudes of matrix elements \cite{BHWOneToTwo}. Thus, information about the relative sign of transition processes can also be obtained. In the context of this study, the relevant relation, given in Ref.~\cite{BHOneToTwoSpin}, takes the form \begin{multline} \label{eq:genLL} \langle 0 \vert \widetilde {\mathcal O}^{}_\beta(0) \vert n \rangle \langle n \vert \widetilde A^{3}_\mu(0) \vert 1 \rangle = \\ \langle 0 \vert {\mathcal O}_\beta(0) \vert N \pi, \mathrm{in} \rangle \frac{L^{3/2} \mathcal R(E_n,L)}{\sqrt{2 m_N} } \\ \times \langle N \pi, \mathrm{out} \vert A^{3}_\mu(0) \vert N \rangle \,, \end{multline} where $\mathcal R(E_n,L)$ is a matrix generalization of $\vert \mathcal R \vert $, defined in the following subsection. This is the conversion needed for $b_n$. The analog for $c_{2,n}$ is given by \begin{multline} \label{eq:genLL00} \langle 0 \vert \widetilde {\mathcal O}^{}_\beta(0) \vert n \rangle \langle n \vert \widetilde {\overline {\mathcal O}}_\alpha(0) \vert 0 \rangle = \\ \langle 0 \vert {\mathcal O}_\beta(0) \vert N \pi, \mathrm{in} \rangle L^{3} \mathcal R(E_n,L) \\ \times \langle N \pi, \mathrm{out} \vert {\overline {\mathcal O}}_\alpha(0) \vert 0 \rangle \,. \end{multline} The key limitation of Eqs.~(\ref{eq:genLL}) and (\ref{eq:genLL00}) is that these only hold for $E_n < 2 M_\pi + m_N$. For such two-particle energies the relation is valid up to exponentially suppressed corrections of the form $e^{- M_\pi L}$, but above three-particle threshold a generalized form with three-particle matrix elements is required. As in the previous section, here we again apply the two-particle formalism outside of its region of validity. We expect this to give a qualitative indication of the nature of excited-state contamination, but only by applying a rigorous three-particle formalism can one reach a reliable quantitative prediction, especially in the vicinity of the Roper. Applying Eqs.~(\ref{eq:genLL}) and (\ref{eq:genLL00}), with $\mathcal R(E_n,L)$ determined using experimental scattering data, it remains only to analyze the matrix elements of the nucleon interpolating operator, $\mathcal O$, and of the axial-vector current, $A_\mu$, in infinite volume. In this way, the details of the finite-volume set-up are factored out.
To explain this in detail we find it convenient to assume specific forms for the projectors entering Eqs.~(\ref{eq:C3def}) and (\ref{eq:C2def}). In particular we take $\Gamma = u_+(\textbf 0) \bar u_+(\textbf 0)/m_N$ and $\Gamma'_\mu = \delta_{3\mu} 2 i \Gamma$, where $u$ and $\overline u$ are standard nucleon spinors, already used in Eq.~(\ref{eq:gAdef}). One then finds \begin{align} \label{eq:bnignor} b_n & = B(E_n) \mathcal C(E_n,L) \mathcal A(E_n) \,, \\ c_{2,n} & = 2 m_N \omega_N \omega_\pi L^3 B(E_n) \mathcal C(E_n,L) B^\dagger(E_n) \,, \end{align} where \begin{align} B(E_n) & \equiv \frac{2i}{ 2 \omega_{N} 2 \omega_{\pi} L^3 } \frac{ \langle 0 \vert \mathcal O_+(0) \vert N \pi, E_n, \mathrm{in} \rangle e^{- i \delta} }{ \langle 0 \vert \mathcal O_+(0) \vert N \rangle } \,, \\[5pt] \label{eq:Cdef} \mathcal C(E_n, L) & \equiv 2 \omega_{\pi} 2 \omega_{N} L^3 e^{ i \delta} \mathcal R(E_n, L) e^{ i \delta} \,, \\[10pt] \mathcal A(E_n) & \equiv e^{- i \delta} \langle N \pi, E_n, \mathrm{out} \vert A^{a=3}_{\mu=3}(0) \vert N \rangle \,. \end{align} Here the first factor, $B(E_n)$, is understood as a row vector on the $J, \ell, \mu$ index space labeling the two-particle state. It depends on the spin-projected interpolator \begin{equation} {\mathcal O}_+ \equiv \frac{1}{\sqrt{m_N}} \, \bar u_+(\textbf 0) \cdot {\mathcal O} \,, \end{equation} as well as the kinematic factors $\omega_\pi$ and $\omega_N$, evaluated at momentum $p$ as defined in Eq.~(\ref{eq:pdef}). The middle factor, $\mathcal C(E_n,L)$, is a matrix on the $J, \ell, \mu$ space. We discuss its definition in detail in the following subsection. Finally $\mathcal A(E_n)$, understood as a column vector on the same index space, is the infinite-volume axial-vector transition amplitude. We comment that all three of the factors entering Eq.~(\ref{eq:bnignor}) are dimensionless, real functions. The latter claim holds due to the diagonal matrices $e^{- i \delta}$ included in the definitions. Here $\delta$ is a diagonal matrix of $N \pi$ scattering phase shifts. For example, the $J=1/2, \ell=1$ entry is plotted in Fig.~\ref{fig:pshifts}. Watson's theorem states that the complex phase of a two-particle matrix element (below three-particle production threshold) is equal to the elastic $N \pi$ scattering phase in the same channel~\cite{Watson}. Thus the phase matrices in the definitions cancel those in the infinite-volume matrix elements. Above three-particle threshold this no longer holds, but in this work we model the matrix elements with a form satisfying this two-particle unitarity constraint. In other words we build in the approximation that Watson's theorem persists above threshold. [See Sec.~\ref{sec:infME} for details.] Similarly the factors of $e^{ i \delta}$ in Eq.~(\ref{eq:Cdef}) cancel the intrinsic phase in $\mathcal R(E_n,L)$ as we show in the next section. This ensures that $\mathcal C(E_n,L)$ is also a real function. In the following subsection we give the matrix definition of $\mathcal R(E_n, L)$ and $\mathcal C(E_n,L)$ and explain that, in the present case, one can truncate these to single entries by applying the same truncation used for the scattering amplitude in the previous section. In Sec.~\ref{sec:predLL} we then use experimental scattering data to calculate the interacting values of $\mathcal C(E_n, L)$. Finally in Sec.~\ref{sec:infME} we use a model, based on LO ChPT supplemented by the experimental scattering data, to estimate both $B(E)$ and $\mathcal A(E)$.
We then apply these results in Sec.~\ref{sec:contam} to give predictions for the excited-state contamination to $g_A$. \subsection{Reducing the Lellouch-L\"uscher-like relation} \label{sec:redLL} We begin this subsection by defining $\mathcal R(E_n, L)$, introduced in Eq.~(\ref{eq:genLL}) above. In this equation the right-hand side should be understood as the product of a row vector $ \langle 0 \vert {\mathcal O}_\beta(0) \vert N \pi, \mathrm{in} \rangle $, followed by the matrix $\mathcal R(E_n, L)$, followed by a column vector $ \langle N \pi, \mathrm{out} \vert A^{3}_\mu(0) \vert N \rangle$. Each of these quantities is defined on the $J, \ell, \mu$ index space, where the three labels correspond to total angular momentum, orbital angular momentum, and azimuthal total angular momentum, respectively. The matrix $\mathcal R(E_n,L)$ is defined by \begin{equation} \label{eq:Rintro} \mathcal R(E_{n}, L) \equiv \lim_{E \rightarrow E_{n}} \left[ (E - E_{n}) \frac{1}{F^{-1}(E, L) + \mathcal M(E)}\right] \,, \end{equation} with $\mathcal M$ and $F$ defined in Eqs.~(\ref{eq:MJdef}) and (\ref{eq:FJdef}), respectively. $\mathcal R$ has both on- and off-diagonal elements and, in the context of Eq.~(\ref{eq:genLL}), gives a linear combination of infinite-volume matrix elements that equals a particular finite-volume matrix element. The same matrix structure holds in Eq.~(\ref{eq:bnignor}). To truncate $\mathcal R$ we first observe that the operator $A^3_3(0)$ acting on the infinite-volume single-nucleon state generates a state which couples to both $J=1/2$ and $J=3/2$. In the corresponding finite-volume matrix element this state couples to two-particle finite-volume states in the $G_1^- = 1/2 \oplus \cdots$ and $H^- = 3/2 \oplus \cdots$ representations. Thus, if we choose the two-particle state to transform in the $G_1^-$ irrep, then the right-hand sides of Eqs.~(\ref{eq:genLL}) and (\ref{eq:bnignor}) will contain one term, depending on the $J=1/2$ two-particle scattering state \begin{multline} \langle 0 \vert \widetilde {\mathcal O}_\beta(0) \vert n, G_1^- \rangle \langle n, G_1^- \vert \widetilde A^3_3(0) \vert 1 \rangle = \\ \langle 0 \vert {\mathcal O}_\beta(0) \vert N \pi, J=1/2, \mathrm{in} \rangle \frac{L^{3/2} \mathcal R_{J=1/2}(E_n,L)}{\sqrt{2 m_N} } \\ \times \langle N \pi, J=1/2, \mathrm{out} \vert A^3_3(0) \vert N \rangle \,. \end{multline} Given this truncation we are left only to determine the on-diagonal $J=1/2, \ell=1$ entry of $\mathcal R$. In principle this single entry depends on the full matrix structure of $F^{-1}$ and $\mathcal M$, since they enter via a matrix inverse. However, if we apply the p-wave truncation on $\mathcal M$, as in the previous section, then $\mathcal M$ and $F$ both truncate to single entry matrices. We find \cite{Lellouch2000} \begin{align} \mathcal R(E_{n},L ) & = \left [ \frac{\partial}{\partial E} \left( F^{-1}(E,L) + \mathcal M(E) \right ) \right ]_{E=E_n}^{-1} \,, \\ & \hspace{-40pt} = -\frac{p}{8 \pi E} \left [ \sin^2\! \delta \ e^{ 2i \delta } \frac{\partial}{\partial E} \left( \cot\phi+ \cot\delta \right ) \right ]_{E=E_n}^{-1} \,, \\ & \hspace{0pt} = \frac{p}{8 \pi E} e^{-2i \delta } \left [ \frac{\partial}{\partial E} \left( \phi + \delta \right ) \right ]_{E=E_n}^{-1} \,, \label{eq:R1D} \end{align} where $\delta(p) = \delta_{J=1/2,\ell=1}(p)$ is the $N \pi$ phase shift, shown in Fig.~\ref{fig:pshifts}.
To understand the phase in $\mathcal R$, we recall from Watson's theorem that, at energies where only two-particle elastic scattering can occur, the complex phase of zero-to-two and one-to-two transition amplitudes is given by the two-to-two strong scattering phase~\cite{Watson}. Thus the phase in $\mathcal R$ perfectly cancels the phase in the matrix element \begin{align} e^{-i \delta} \langle N \pi, \mathrm{out} \vert A^3_3(0) \vert N \rangle \in \mathbb R \,. \end{align} We conclude by discussing the rescaled quantity $\mathcal C(E_n,L)$, defined in Eq.~(\ref{eq:Cdef}). Substituting Eq.~(\ref{eq:R1D}) into the definition and simplifying, we reach \begin{equation} \label{eq:Cres} \mathcal C(E,L) = 4 \pi^2 q^3 \left( q \frac{\partial \phi}{\partial q} + p \frac{\partial \delta}{\partial p} \right )^{-1} \,. \end{equation} In the next section we will also be interested in the non-interacting limit, and thus define \begin{equation} \label{eq:Cres2} \mathcal C^{\mathrm{NI}}(q^2) \equiv 4 \pi^2 q^2 \left( \frac{\partial \phi}{\partial q} \right )^{-1} \,, \end{equation} where $q \equiv p L/(2 \pi)$ was already introduced above. Note that in Eqs.~(\ref{eq:Cres}) and (\ref{eq:Cres2}) we have implicitly extended the definition of $\mathcal C(E,L)$ to all energies. As is clear from Eqs.~(\ref{eq:genLL}) and (\ref{eq:bnignor}), the quantity only has physical \mbox{meaning} when evaluated at the energies of the finite-volume spectrum. However, understanding the continuous form of the function is useful for predicting how $\mathcal C(E_n,L)$ will vary with the strength of the particle interaction. \begin{figure} \begin{center} \vspace{0pt} \includegraphics[scale=0.45]{figs/fig5.pdf} \end{center} \caption{Non-interacting Lellouch-L\"uscher curve. This curve, defined in Eq.~(\ref{eq:Cres2}), only has a clear physical meaning at the non-interacting finite-volume energies, indicated by vertical lines. Here it coincides with the degeneracy of the finite-volume state, $\nu_n$. Considering the form of the curve everywhere is useful for understanding the effect of interactions, as shown in Fig.~\ref{fig:intLL}.} \label{fig:nonintLL} \end{figure} \subsection{\label{sec:predLL}Predicting the Lellouch-L\"uscher factors} In this section we give numerical predictions for the values of $\mathcal C(E_n,L)$ and in doing so also build some intuition about the meaning of this quantity. We begin with the non-interacting version, $\mathcal C^{\mathrm{NI}}$. This is plotted in Fig.~\ref{fig:nonintLL} as a function of the dimensionless squared momentum, $q^2$. The energies for which this curve has physical meaning correspond to $q^2 = \textbf n^2$ with $\textbf n \in \mathbb Z^3$. At these values our rescaled Lellouch-L\"uscher factor takes on particularly simple values \begin{equation} \label{eq:Cfree} \mathcal C^{\mathrm{NI}}(n) = \nu_n \,, \end{equation} where $\nu_n$ is the degeneracy of the $n$th state, equivalently the number of integer vectors that satisfy $\textbf n^2 = n$. The first few values of $\nu_n$ are given in Table~\ref{tab:deg}. These degeneracies are also indicated by the horizontal tick marks crossing each vertical line in Fig.~\ref{fig:nonintLL}.
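The entries of Table~\ref{tab:deg}, and with them the non-interacting values $\mathcal C^{\mathrm{NI}}(n) = \nu_n$ of Eq.~(\ref{eq:Cfree}), can be verified directly by counting integer vectors, as in the following short Python check:
\begin{verbatim}
from collections import Counter
from itertools import product

# nu_n = #{n in Z^3 : |n|^2 = n}; components in [-4,4] suffice for n <= 15
counts = Counter(x*x + y*y + z*z
                 for x, y, z in product(range(-4, 5), repeat=3))
print([counts.get(n, 0) for n in range(16)])
# -> [1, 6, 12, 8, 6, 24, 24, 0, 12, 30, 24, 24, 8, 24, 48, 0]
\end{verbatim}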
\begin{table} \begin{tabular}{c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c} \hline \\[-9pt] \hline $n$ & 0 & 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9 & 10 & 11 & 12 & 13 & 14 & 15 \\ $\nu_n$ & 1& 6& 12& 8& 6& 24& 24& 0& 12& 30& 24& 24& 8& 24& 48& 0 \\ \hline \end{tabular} \caption{Degeneracies of states with $q^2 = n$.} \label{tab:deg} \end{table} \begin{figure} \begin{center} \vspace{-25pt} \includegraphics[scale=0.45]{figs/fig6.pdf} \vspace{-25pt} \end{center} \caption{These plots summarize the effect of interactions on $\mathcal C$ for (a) $M_\pi L=4$ and (b) $M_\pi L=5$ as a function of $q^2 = (p L)^2/(2 \pi)^2$. We also indicate the two-particle energies corresponding to certain $q^2$ values. The interacting curve (blue) is lower than the non-interacting (gray) due to a shift proportional to $d \delta(p)/dp$ in the inverse. For ease of comparison we have plotted the phase shift over the same range for each $M_\pi L$ value. However, the most important effect for the interacting value of $\mathcal C$ is the shift in energies. Since the curve is rapidly oscillating, these shifts result in large discrepancies between the interacting and non-interacting $\mathcal C$ values. } \label{fig:intLL} \end{figure} We next turn to the interacting values of $\mathcal C(E_n,L)$, plotted in Fig.~\ref{fig:intLL}. In contrast to $\mathcal C^{\mathrm{NI}}$, the factor in the interacting case depends independently on $E$ and $L$. Here we plot $\mathcal C(E,L)$ as a function of $q^2$ at fixed $M_\pi L = 4$ [Fig.~\ref{fig:intLL}(a)] and $M_\pi L = 5$ [Fig.~\ref{fig:intLL}(b)]. Note that two effects are important in the shift from non-interacting to interacting systems. First, the curve characterizing the Lellouch-L\"uscher factors is reduced, due to the addition of a term proportional to $d \delta/dp$ in the inverse. Comparing the light gray and blue curves, we see that this has a relatively small numerical effect. Second, the physically relevant values on the curve (the finite-volume energies) are shifted to new locations. Interestingly, this second shift has a large effect for both $M_\pi L=4$ and $M_\pi L=5$ (assuming physical pion masses). In particular, the contribution to $R(T,t)$ from the seventh excited state is significantly enhanced due to the large value of $\mathcal C(E_n,L)$. \subsection{\label{sec:infME}Modeling the matrix elements} We now turn to the remaining building blocks entering the definitions of $b_n$ and $c_n$: the overlap factor $B(E_n)$ and the axial-vector transition amplitude $\mathcal A(E_n)$. As already mentioned above, the matrix elements within $B(E)$ depend on the details of the interpolator, $\mathcal O_+$, and cannot be constrained using experimental data. The precise definition of $\mathcal O_+$ depends on the set-up of a particular lattice calculation, and the only clear defining property is that it has the quantum numbers to annihilate a single nucleon. In a variational approach, this operator is designed to reduce the size of the two-particle matrix elements shown in Eq.~(\ref{eq:genLL}). Thus by choosing a sufficiently large basis one can make this contribution arbitrarily small.% \footnote{Given that $\mathcal O$ can be optimized to minimize excited-state contamination, the universal result found by Refs.~\cite{BrianGA,BarTwoPoint,BarGA} may seem surprising. We stress, however, that the ChPT predictions only hold for local and smeared operators.
The results thus suggest that $N \pi$ interpolating operators are probably required to systematically reduce the overlap to the low-lying excited states.} By contrast, $\mathcal A(E)$ is a physical matrix element that can in principle be accessed in a scattering experiment. In fact, since we are focusing on the case where both the nucleon and the $N \pi$ state are at rest, parity guarantees that the corresponding matrix element with a vector current must vanish. Thus we can equally well think of $\mathcal A(E)$ as the matrix element of the left-handed (vector-minus-axial) current. The kinematics of $\mathcal A(E)$ correspond to a process such as $p \, \pi^- \to p \, W^- \to p \, e^- \, \bar \nu_e$, in which the current is evaluated at time-like four-momentum. Such a transition is difficult to extract in experiment and the present data is insufficient to constrain the amplitude. We also note that the value of this matrix element at space-like momenta is of great experimental relevance for determining QCD effects in neutrino-nucleon scattering, $\nu_\ell p \to p \pi^+ \ell^-$.% \footnote{See for example Ref.~\cite{AalvarezRuso2014}.} In this work we rely on LO ChPT, together with a model that incorporates the experimental $N \pi$ scattering data, in order to gain insight into the behavior of both the interpolator overlap $B(E)$ and the axial-vector transition amplitude $\mathcal A(E)$. Beginning with $B(E)$, we first give the expression predicted by LO covariant baryon ChPT, derived in the appendix \begin{equation} \label{eq:BnChPT} B_{\mathrm{ChPT}}(E) = \frac{\sqrt{3}}{4 \sqrt{2} f_\pi \omega_{\pi } \omega_{N} L^3 } \left (\frac{\omega_N}{m_N} - 1 \right)^{1/2} (1 - \bar g_A) \,, \end{equation} where \begin{equation} \label{eq:gAbardef} \bar g_A \equiv g_A \frac{E_n+m_N}{E_n-m_N} = g_A \frac{\omega_N + \omega_\pi +m_N}{\omega_N + \omega_\pi -m_N} \,, \end{equation} is a convenient shorthand introduced in Ref.~\cite{BarTwoPoint}, and $f_\pi=93{\,\mathrm{MeV}}$ is the pion decay constant. This result predicts the part of $b_n$ that depends on $\mathcal O$ to be negative and have magnitude $\sim10^{-3}$. In addition, as was already pointed out in Refs.~\cite{BrianGA,BarTwoPoint,BarGA}, the leading-order prediction is independent of the details of the interpolator used. Again, we stress that this only holds for a three-quark interpolating field and, in particular, does not apply to any interpolator built from multi-particle operators, for example $N \pi$- or $N \pi \pi$-like operators. Eq.~(\ref{eq:BnChPT}) is expected to break down at higher energies, when next-to-leading-order ChPT corrections become important. For instance, if $\mathcal O$ is optimized to couple to a single-nucleon state then, depending on the nature of the Roper, the overlap of the operator with two-particle states may also be enhanced in the vicinity of the resonance. This enhancement is not visible in LO ChPT. To model the effect of the Roper we first consider the Bethe-Salpeter equation for the two-particle matrix element within $B(E)$ \begin{multline} \langle 0 \vert \mathcal O_+(0) \vert N \pi, \mathrm{in} \rangle = \langle 0 \vert \mathcal O_+(0) \vert N \pi, \mathrm{in} \rangle_{2\mathrm{PI}} \ + \\ \int \frac{d^4 k}{(2 \pi)^4} \langle 0 \vert \mathcal O_+(0) \vert N \pi, \mathrm{in} \rangle_{2\mathrm{PI}} \, \Delta(k) S(P-k) \, i \mathcal M(E,k) \,, \label{eq:BetheSalpteterBn} \end{multline} where the subscript $2\mathrm{PI}$ refers to the sum of all diagrams that are two-particle irreducible in the s-channel.
To reach the full matrix element, the 2PI quantity must be attached, via a two-particle loop, to the two-to-two scattering amplitude, $\mathcal M$. In the loop both the pion [$\Delta (k)$] and nucleon [$S(P-k)$] propagators should be fully dressed and evaluated at the off-shell momenta sampled by the integral. The scattering amplitude is also sampled at off-shell momenta. We cannot evaluate this quantity without making a number of approximations. First we evaluate the $k^0$ integral, approximating the matrix element and $\mathcal M$ to have no important analytic structure, so that we need only encircle the poles in the two-particle loop. This gives \begin{multline} \langle 0 \vert \mathcal O_+(0) \vert N \pi, \mathrm{in} \rangle = \langle 0 \vert \mathcal O_+(0) \vert N \pi, \mathrm{in} \rangle_{2\mathrm{PI}} \ + \\ \int \frac{d^3 \textbf k}{(2 \pi)^3} \langle 0 \vert \mathcal O_+(0) \vert N \pi, \mathrm{in} \rangle_{2\mathrm{PI}} \, \\ \times \left[ -\frac{u(\textbf k) \overline u(\textbf k)}{2 \omega_N 2 \omega_\pi ( E - \omega_N - \omega_{\pi} + i \epsilon)} + \mathcal S(\textbf k) \right ] \mathcal M(E,k) \,, \end{multline} where $\mathcal S$ is a smooth function below three-particle production threshold. If we assume the dominant contribution comes from the first term and drop the smooth part, then we are left with a contribution in which both the matrix element and the scattering amplitude are projected on shell. We find \begin{multline} \langle 0 \vert \mathcal O_+(0) \vert N \pi, \mathrm{in} \rangle = \langle 0 \vert \mathcal O_+(0) \vert N \pi, \mathrm{in} \rangle_{2\mathrm{PI}} \\ \times [1 + \mathcal I_R(E) \mathcal M(E) ] \,, \end{multline} where \begin{equation} \mathcal I_R(E) = \frac{i p}{8 \pi E} - \mathrm{PV} \! \int_{R} \! \! \frac{d^3 \textbf k}{(2 \pi)^3} \frac{1}{2 \omega_N 2 \omega_\pi ( E - \omega_N - \omega_{\pi} )} \,, \end{equation} and where PV indicates a principal-value pole prescription. The subscript $R$ indicates that this loop integral requires a regulator, an artifact that has been introduced by our approximations. In this work we choose to regulate by subtracting the integrand evaluated at threshold \begin{multline} \mathcal I(E) \equiv \frac{i p}{8 \pi E} - \mathrm{PV} \! \int \! \! \frac{d^3 \textbf k}{(2 \pi)^3} \frac{1}{2 \omega_N 2 \omega_\pi } \\ \times \left[\frac{1}{ E - \omega_N - \omega_{\pi}} - \frac{1}{ m_N + m_\pi - \omega_N - \omega_{\pi}} \right ] \,. \end{multline} This subtraction is motivated by the observation that the second term in Eq.~(\ref{eq:BetheSalpteterBn}) should not play a role at low energies. Note also that the on-shell restriction projects the scattering amplitude down to its $I(J^{P}) = \nicefrac[]{1}{2} (\nicefrac[]{1}{2}^{+})$ component. To complete the construction of our model we use the fact that the diagrams in the LO ChPT calculation of $B(E)$ are two-particle irreducible, and thus also give the leading-order contribution to the 2PI restriction of this quantity. This leads us to define \begin{equation} B(E,\gamma) \equiv \mathrm{Re} \left [ e^{- i \delta} B_{ \mathrm{ChPT}}(E) \left[ 1 + \gamma \mathcal I(E) \mathcal M_{}(E) \right ] \right ] \,, \end{equation} where we have introduced the free parameter $\gamma$ to partially compensate the {\em ad hoc} procedure that led us to this expression.
Here we have also included the phase factor $e^{- i \delta}$ that is needed to cancel the phase that appears in the two-particle matrix element.% \footnote{This simple phase structure is only strictly valid below three-particle production threshold. In the present model we are using the elastic form for the two-particle scattering amplitude and this has the consequence that the phase is preserved also above production threshold. This is consistent with the neglect of three-particle states in the L\"uscher quantization condition.} % To evaluate both the $e^{- i \delta}$ factor and the scattering amplitude $\mathcal M$ we use the experimentally determined $N \pi$ scattering phase, plotted in Fig.~\ref{fig:pshifts}. Note also that we must discard a small imaginary part that arises in this model. In the case of $\gamma=0$ we omit the $e^{- i \delta}$ phase factor and thus recover the leading order ChPT result. For $\gamma>0$ the rising phase shift causes the matrix element to flip sign roughly in the region of the Roper, with the energy value at the node dependent on the specific value chosen for $\gamma$. For $\gamma<0$ the sign of the ChPT prediction is preserved and a peak is observed in the vicinity of the Roper. At still higher energies the matrix element can flip sign, but for $\gamma<-3$ the crossing lies well outside the relevant energy range. In Fig.~\ref{fig:Bnmodels} we plot the energy dependence predicted by various values of $\gamma$. We now turn to the matrix element of the axial-vector current, $\mathcal A(E)$. As we derive in the appendix, the LO ChPT prediction for this quantity is \begin{multline} \mathcal A(E)_{\mathrm{ChPT}} = \frac{ \sqrt{m_N}(\omega_{N} - m_N)^{1/2} }{ 2 \sqrt{6} f_\pi } \\ \times \left[4-\frac{8}{3} g_A \left(\bar g_A-\frac{g_A M_\pi^2}{ 4 \omega_{\pi} m_N-2 M_\pi^2 }\right)\right] \,. \label{eq:AxialChPT} \end{multline} To estimate this quantity beyond ChPT we apply the same model used for the overlap factor \begin{equation} \mathcal A(E ,\alpha) = \mathrm{Re} \left [ e^{- i \delta } \mathcal A(E)_{\mathrm{ChPT}} [1 + \alpha \mathcal I(E) \mathcal M(E)] \right ]\,. \end{equation} As with $B(E,\gamma)$, $\alpha = 0$ gives the LO ChPT prediction, $\alpha<0$ preserves the sign of the matrix element and enhances the magnitude near the resonance, and $\alpha>0$ gives a zero-crossing roughly in the range of the resonance. Also as with $B(E,\gamma)$, we include the phase factor needed to cancel the phase in the matrix element, and discard a small imaginary contribution that arises as an artifact of our model. For $\alpha=0$ the phase factor is not included. In Fig.~\ref{fig:Amodels} we plot the energy dependence of the \mbox{axial} transition amplitude for various choices of $\alpha$. Note that we restrict attention to a range of $\alpha$ that is smaller than that considered for $\gamma$. The LO ChPT prediction for $B(E)$ has a magnitude that decreases with energy whereas $\mathcal A(E)$ is nearly constant at higher energies. This has the consequence that varying $\alpha$ over a given range has a larger effect than varying $\gamma$. We choose the parameters such that the models have a maximum magnitude roughly within a factor of two of the maximum predicted by LO ChPT. We stress again that, unlike with the overlap factor, the functional form of $\mathcal A(E)$ is independent of the lattice set-up and is of direct experimental relevance.
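For orientation, the LO ChPT building blocks entering these models, Eqs.~(\ref{eq:BnChPT}), (\ref{eq:gAbardef}) and (\ref{eq:AxialChPT}), are simple enough to tabulate directly, as in the Python sketch below. The dressing factor $[1 + \gamma \mathcal I(E) \mathcal M(E)]$ is omitted here, since evaluating the principal-value integral in $\mathcal I(E)$ requires additional numerical machinery, and the sample energies in the final loop are arbitrary illustrative choices.
\begin{verbatim}
import numpy as np

M_PI, M_N = 0.1396, 0.9383     # GeV
F_PI, G_A = 0.093, 1.27        # GeV, dimensionless

def kinematics(E):
    """Single-particle energies of an N pi state of CM energy E, Eq. (eq:pdef)."""
    psq = (E**2 - (M_N + M_PI)**2) * (E**2 - (M_N - M_PI)**2) / (4.0 * E**2)
    return np.sqrt(M_PI**2 + psq), np.sqrt(M_N**2 + psq)

def gA_bar(E):
    """Shorthand of Eq. (eq:gAbardef)."""
    return G_A * (E + M_N) / (E - M_N)

def B_chpt(E, L):
    """LO ChPT overlap factor, Eq. (eq:BnChPT); negative since gA_bar > 1."""
    w_pi, w_N = kinematics(E)
    pref = np.sqrt(3.0) / (4.0 * np.sqrt(2.0) * F_PI * w_pi * w_N * L**3)
    return pref * np.sqrt(w_N / M_N - 1.0) * (1.0 - gA_bar(E))

def A_chpt(E):
    """LO ChPT axial transition amplitude, Eq. (eq:AxialChPT)."""
    w_pi, w_N = kinematics(E)
    bracket = 4.0 - (8.0 / 3.0) * G_A * (
        gA_bar(E) - G_A * M_PI**2 / (4.0 * w_pi * M_N - 2.0 * M_PI**2))
    return np.sqrt(M_N) * np.sqrt(w_N - M_N) / (2.0 * np.sqrt(6.0) * F_PI) * bracket

L = 4.0 / M_PI                   # M_pi L = 4, in GeV^-1
for E in (1.2, 1.35, 1.5):       # representative N pi energies, GeV
    print(E, B_chpt(E, L), A_chpt(E))
\end{verbatim}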
In this study we are particularly interested in whether $\mathcal A(E)$ has a node (zero crossing) at some energy, as predicted by the $\alpha>0$ models. Interestingly, such a node is observed in the CLAS scattering data for $e p \to e' \pi^+ \pi^- p'$ in their analysis of the electromagnetic transition amplitude as a function of photon virtuality, $Q^2$ \cite{RoperCLAS}. This node is not directly relevant for $\mathcal A(E)$ because (1) it concerns the electromagnetic transition and (2) it is for space-like momenta. It is nonetheless interesting to note that such crossings are observed. Given that LO ChPT predicts the same sign for $B(E)$ and $\mathcal A(E)$, and thus a positive value for $b_n = B(E_n) \mathcal C(E_n,L) \mathcal A(E_n)$, and given also that the curvature in LQCD correlator data indicates important contributions from states with $b_n<0$, we postulate that a node in $\mathcal A(E)$ might provide a reasonable explanation for the apparent discrepancy. More generally one can identify four basic scenarios: (i) neither $B(E)$ nor $\mathcal A(E)$ cross zero in the relevant energy range, (ii) both cross zero, (iii) only the overlap factor $B(E)$ has a node, or (iv) only the transition amplitude $\mathcal A(E)$ has a node. The first two cases lead to positive excited-state contamination and thus fail to describe present-day numerical LQCD correlator data. The third and fourth scenarios can both explain the empirically observed excited-state contamination, as we explain in the next section. \begin{figure} \begin{center} \vspace{20pt} \hspace{-15pt}\includegraphics[scale=0.45]{figs/fig7.pdf} \vspace{-15pt} \end{center} \caption{Three different scenarios for the overlap factor $B(E)$ with $\gamma$ values of $-4$ (lowest), $0$ (middle) and $4$ (highest). The vertical lines indicate the finite-volume energies in a box of size $M_\pi L=4$. The thickness of these lines is proportional to the value of $\mathcal C(E_n,L)$ and thus indicates how the state is weighted in the excited-state contamination.} \label{fig:Bnmodels} \end{figure} \begin{figure} \begin{center} \vspace{19pt} \hspace{-10pt} \includegraphics[scale=0.45]{figs/fig8.pdf} \hspace{10pt} \vspace{-25pt} \end{center} \caption{Three different scenarios for the axial matrix element $\mathcal A(E)$ with $\alpha$ values of $-3$ (lowest), $0$ (middle) and $1$ (highest). As in Fig.~\ref{fig:Bnmodels}, the vertical lines indicate finite-volume energies for $M_\pi L=4$ with line thickness proportional to $\mathcal C(E_n,L)$.} \label{fig:Amodels} \end{figure} \section{\label{sec:contam}Estimating the excited-state contamination} We are now ready to combine the results of the previous section to estimate the ratio $R(T,t)$ and the excited-state contamination to $g_A$. \begin{figure} \vspace{20pt} \hspace{-25pt} \includegraphics[scale=0.48]{figs/fig9.pdf} \hspace{145pt} \vspace{-20pt} \caption{Excited-state contamination for various values of $\alpha$ [parametrizing $\mathcal A(E)$] and $\gamma$ [parametrizing $B(E)$] for $M_\pi L = 4$ (solid) and $M_\pi L = 6$ (dashed). The top pair of curves shows the leading order ChPT prediction, but with the interacting values for $\mathcal C(E_n,L)$, the middle pair shows the leading order ChPT value for $B(E)$ together with the $\alpha=1$ (zero-crossing) scenario for $\mathcal A(E)$. Finally, the bottom pair shows the result of setting $\alpha=1$ together with $\gamma=-4$. Of the parameter sets considered here this choice most closely reproduces the observed LQCD correlator data.
This lowest curve compares favorably, for example, with the $M_\pi = 190 \mathrm{\,MeV}$ and $M_\pi L = 3.9$ ensemble in Ref.~\cite{EigoAMA2016}.} \label{fig:esc1} \end{figure} In Fig.~\ref{fig:esc1} we show values for the ratio $R(T,T/2)/g_A$, given three different scenarios for the matrix elements. In each case we show the results for both $M_\pi L = 4$ (solid lines) and $M_\pi L=6$ (dashed lines). To provide comparable predictions, in both cases we sum all excited states up to an energy of 1800 MeV. In the case of $M_\pi L = 4$ this corresponds to the first 8 excited states and for $M_\pi L = 6$ to the first 18. In both cases we find that this number of states is both necessary and sufficient to estimate the saturated value of $R(T, T/2)/g_A$ within a few percent, for the models considered. We also see that the excited-state contaminations from the two different volumes are in very good agreement. The highest pair of curves in Fig.~\ref{fig:esc1} shows the prediction from LO ChPT, but with the interacting values of the Lellouch-L\"uscher factors. The excited-state contamination here is comparable to that given in Fig.~5 of Ref.~\cite{BarGA}, in particular the result of that reference that includes ten excited states. Thus we find that, if one uses LO ChPT for the overlap and the axial-vector matrix element, then the effect of interactions on the energies and Lellouch-L\"uscher factors leads to a small (percent level) correction to the predicted value of $R(T,T/2)$. As is also stressed in Ref.~\cite{BarGA}, including excited states beyond the first few requires sampling the LO ChPT predictions outside their expected region of validity. For example, for $M_\pi L=4$ it is well motivated to trust the first two excited states. These have a gap of $\sim 100 \, \mathrm{MeV}$ to the Roper and, as the latter is not included in the ChPT prediction, it is reasonable to require a separation from this state. We thus infer that one can only predict $R(T,T/2)$ using LO ChPT for source-sink separations large enough that the first two states dominate. For physical pion masses this may mean separations of $T > 2 \mathrm{\,fm}$. The middle pair of curves in Fig.~\ref{fig:esc1} is the prediction from combining the LO ChPT prediction for the overlap, $B(E,\gamma=0)$, with the zero-crossing model for the axial-current transition amplitude, $\mathcal A(E,\alpha=1)$. The curve is very flat because there are large cancellations between the positive contributions from lower states and the negative contributions from higher-energy states. The lowest pair of curves in Fig.~\ref{fig:esc1} gives the scenario most consistent with observed LQCD correlators. This follows from combining $\mathcal A(E,\alpha=1)$ and $B(E,\gamma=-4)$ [see again Figs.~\ref{fig:Bnmodels} and \ref{fig:Amodels}]. The negative contribution from the higher excited states overpowers the positive contribution from the first few. \begin{figure} \begin{center} \vspace{20pt} \includegraphics[scale=0.45]{figs/fig10.pdf} \end{center} \caption{Contribution of individual excited states in the $\alpha=1$, $\gamma=-4$ scenario, for $M_\pi L=4$. The seven curves show the excited-state contamination predicted by summing the contributions from the first state through the $n$th state. The thickness of a given curve is proportional to the number of states included in the sum. The three blue curves show the result from including the first (lowest blue), first two (middle blue) and first three (highest blue) excited states.
The fourth excited state is the first with $b_n<0$ so that the predicted contamination falls once this state is included (highest green curve). The set of green curves then ranges from the sum up to the fourth state (highest green) to the sum up to the seventh state (lowest green). The effects of including additional states beyond the seventh are negligible, so that the lowest green curve gives a good indication of the full excited-state contamination predicted by this model. } \label{fig:esc2} \end{figure} In Fig.~\ref{fig:esc2} we show the importance of higher excited states in the $\mathcal A(E,\alpha=1)$ and $B(E,\gamma=-4)$ scenario. In particular, for $M_\pi L=4$, we find that the contamination predicted by summing fewer than seven states differs significantly from the saturated curve. Notably, the seventh excited state makes a significant contribution due to the large value of $\mathcal C(E_n,L)$. \section{Conclusion} In this work we have studied the excited-state contamination in LQCD correlators used to extract the nucleon axial charge, $g_A$. Combining various finite-volume formalisms with experimental scattering data, LO ChPT and a model for the infinite-volume matrix elements, we find that the excited-state behavior empirically observed in lattice correlators can be reproduced by postulating a sign change in the infinite-volume axial-vector transition amplitude, $\langle N \pi, \mathrm{out} \vert A \vert N \rangle$. Such nodes are observed experimentally in other transition amplitudes, but the data are insufficient to make a definitive statement about the quantity at hand. Our findings additionally indicate that a large number of finite-volume excited states, including those at energies around the Roper resonance, give important contributions to the ratios used to access $g_A$. This is based on mild assumptions about how the nucleon interpolators couple to states near the Roper and on the observation that the Lellouch-L\"uscher factors, governing the relation between finite- and infinite-volume states, can be significantly enhanced. The results presented here serve to further emphasize the great importance of using optimal interpolators to minimize the coupling to excited states. Based on numerical LQCD calculations in the meson sector, the most promising approach seems to be the variational method, in which a large basis of operators is used to disentangle the excited states. The situation will also be improved by further advances in improving the signal-to-noise ratio in nucleon correlators. Finally we emphasize that the nature of excited-state contamination depends heavily on the quantity under consideration. Indeed for many quantities, for example average $x$, the LQCD data indicates positive excited-state contamination \cite{ETMAvx2016}. For this observable the same overlap factor $B(E)$ appears, but a different transition amplitude arises due to the differing current insertion. The observed, positive excited-state contamination can be accommodated if we suppose the matrix element has the same sign as the axial vector at low energies and does not cross zero in the relevant energy window. Another interesting example is the iso-singlet octet axial-vector. ChPT predictions indicate that matrix elements of this current should be highly suppressed relative to those of the iso-triplet studied here. The iso-singlet suffers from other sources of systematic uncertainty, in particular quark-disconnected diagrams. 
Given the potential severity of excited-state contaminations, it will be interesting to compare the systematic error budgets for these quantities as methods on both sides improve. \acknowledgements{We thank J. Green, T. Harris, G. von Hippel, P. Junnarkar, D. Mohler, D. Robaina, H. Wittig and all our colleagues in the Mainz lattice group for helpful discussions, encouragement and support. We thank Oliver B\"ar for very helpful comments on the first version of this manuscript.}
\section{Introduction} Accurate and robust 3D object detection is crucial and indispensable for many real-world applications, such as autonomous driving~\cite{geiger2012we} and augmented reality (AR)~\cite{Park:2008:MOT:1605298.1605357}. State-of-the-art methods can achieve a high average precision (AP) in 2D object detection~\cite{ren2015faster,redmon2016you} and have achieved honorable results on public datasets such as KITTI~\cite{geiger2012we} and COCO~\cite{chen2015microsoft}. However, directly extending 2D detection methods to 3D is nontrivial due to the sparseness and irregularity of point clouds. How to process point cloud data together with the semantic information from RGB data remains an open and challenging problem. Currently, researchers have explored several methods to tackle these problems, which aim to obtain geometric information such as target position, size and posture in 3D space. Some works~\cite{chen2016monocular,mousavian20173d,xu2018multi,chen20173d,li2018stereo} make full use of the characteristics of RGB images and propose corresponding networks. However, the key problem of the image-based methods is that the depth information cannot be directly obtained, which results in a large positioning error of the object in 3D space. Even stereo vision~\cite{li2019stereo} is very sensitive to factors such as illumination variations and occlusions, which lead to deviations in depth calculations. Compared with image data, LIDAR point cloud data have accurate depth information and spatial features. At present, most state-of-the-art 3D object detection algorithms have focused on processing LIDAR point clouds through projection~\cite{simon2019complexer,chen2017multi,ku2018joint,liang2018deep,yang2018hdnet,yang2018pixor} or voxelization~\cite{li20173d,engelcke2017vote3deep,zhou2018voxelnet}. However, these works either suffer from the information loss during projection and quantization~\cite{liu2019point} or heavily depend on the performance of 2D object detectors~\cite{qi2018frustum}. Recently, some works~\cite{shi2019pointrcnn,yang2019std,Point-GNN} propose to operate only on the point clouds to fulfill 3D object detection. But they achieve an inferior performance, especially on cyclists and pedestrians, due to the loss of information from the image plane. Different from the aforementioned methods, we observe that stereo cameras can provide large-scale perception from two views and LIDAR sensors can capture accurate 3D structures, and their combination can exploit their respective advantages while compensating for their shortcomings. In other words, the left and right images can provide a more accurate receptive field while achieving comparable depth and position accuracy. Furthermore, we find that the most commonly used PointNet~\cite{qi2017pointnet,qi2017pointnet++} fails to capture local feature information at variable scales and leads to the loss of local features, because it processes 3D points independently to maintain permutation invariance. In this way, it ignores the distance metric between the points. Although the more recent SAWnet~\cite{kaul2019sawnet} integrates global features from a shared Multi-Layer Perceptron (MLP) with the dynamic locality information from Dynamic Graph CNNs (DGCNNs)~\cite{Wang:2019:DGC:3341165.3326362}, it is unable to focus on important features and suppress unnecessary ones in its residual connections~\cite{he2016deep}. 
\par Motivated by these observations, we present the Stereo RGB and Deeper LIDAR (SRDL) network for 3D object detection, which takes stereo RGB images and LIDAR point clouds as input and utilizes an attention mechanism to achieve robust and accurate 3D detection. Specifically, the left and right views generate, from different angles, proposals that do not completely overlap. They can mutually correct each other, and a more precise region can be generated during the fusion phase. Considering that the fused proposals may contain noisy points and excess space around the objects, a feature-oriented segmentation network in 3D point clouds is designed to strip the object point clouds from the background. Given the segmented object points and cropped proposals, we propose to encode the bounding boxes by adding more constraints in a novel compact manner. This design helps remove redundancy and locate the size of the objects more accurately while reducing the feature dimensions. The main contributions of our work can be summarized as follows: \begin{itemize} \item To the best of our knowledge, we are the first to propose a novel framework that combines semantic information from stereo images and spatial information from raw point clouds for 3D object detection. \item We propose a residual attention learning mechanism to optimize the segmentation network, which can extract deeper geometric features of different levels from the original irregular 3D point clouds. \item We propose a novel 3D bounding box encoding scheme that regresses the oriented 3D boxes in a more compact manner, ensuring higher 3D localization accuracy. \item Our proposed SRDL network achieves comparable results with state-of-the-art image-based and LIDAR-based methods on the challenging KITTI 3D detection dataset. \end{itemize} \section{Related work} {\bf Image-based 3D object detection.} For processing RGB images, there are two main streams: monocular-based and stereo-based methods. In terms of monocular-based methods, many works~\cite{ma2019accurate,chen2016monocular,mousavian20173d,brazil2019m3d} share a similar framework with 2D detection. Surprisingly, there are only a few works utilizing stereo vision for 3D object detection~\cite{chen20173d, li2018stereo}. Typically, Li et al.~\cite{li2019stereo} propose Stereo RCNN to detect and associate objects in stereo images by both semantic properties and dense constraints of objects, extending Faster RCNN~\cite{ren2015faster} to stereo inputs. However, none of the above approaches combines stereo images with point clouds properly to exploit both of their advantages, and they fail to achieve superior performance because of the lack of accurate depth information. {\bf LIDAR-based object detection.} Generally, there are two major ways to process the point clouds from 3D LIDAR sensors: voxelization and direct processing of raw point clouds. The voxelization-based methods~\cite{liu2019tanet,zhou2018voxelnet,yan2018second,shi2019pv} usually take voxels as input and apply either 2D or 3D convolutions to make predictions. VoxelNet~\cite{zhou2018voxelnet} is one of the first methods to apply a PointNet-like network to learn low-level geometric features with several stacked VFE layers in a 3D voxelized space. However, the network structure is computationally inefficient, as the shallow 3D CNN layers~\cite{li20173d} are not enough to extract deeper 3D features. 
Even though SECOND~\cite{yan2018second} applies sparse convolution to accelerate VoxelNet, it is still unable to overcome the 3D convolution bottleneck. \par Besides, PointNet~\cite{qi2017pointnet} and PointNet++~\cite{qi2017pointnet++} are the two pioneering works that directly operate on raw points to extract features without converting them to other formats. With PointNet as the backbone network, several methods infer 3D objects directly from point clouds~\cite{shi2019pointrcnn,chen2019fast,lang2019pointpillars,shi2019part,yang2019std}. Very recently, Point-GNN~\cite{Point-GNN} even proposes a graph neural network to detect objects from point clouds. {\bf LIDAR and RGB image fusion based object detection.} The majority of state-of-the-art 3D object detection methods adopt a LIDAR and mono-image fusion scheme to provide accurate 3D information, where they process the raw LIDAR input in different representations. Many methods~\cite{chen2017multi,ku2018joint,liang2018deep,yang2018pixor,yang2018hdnet} project point clouds to the bird's eye view or front view and utilize 2D CNNs to obtain denser information for 3D box generation. However, these methods still have limitations when detecting small objects such as pedestrians and cyclists, and they do not deal with cases with multiple objects in the depth direction. F-PointNet~\cite{qi2018frustum} is the first method to utilize mature 2D detectors together with raw point clouds to predict 3D objects; PointNet is then employed to process the point cloud within every cropped image region to detect 3D objects. However, a mono image cannot provide features as comprehensive as binocular ones, and PointNet lacks the ability to capture local feature information in the metric space. \section{Proposed method} \label{sec:sim} \begin{figure*}[ht] \centering \includegraphics[height=4cm,width=12cm]{fig1} \caption{Architecture of the proposed SRDL network, which contains three modules: (a) Stereo proposal fusion, which takes stereo RGB images as input and utilizes CNNs to generate 2D proposals, followed by RoIpooling in the two views respectively. (b) The 3D point cloud segmentation module stacks several attention-based layers to separate the points of objects from the background according to the projected proposals after the fusion operation. (c) After refining the boxes, the 3D bounding box regression module encodes the bounding box in a more accurate scheme to get the final detection results.} \label{fig:1} \end{figure*} \begin{wrapfigure}{r}{6.5cm} \centering \includegraphics[width=0.4\textwidth]{fig2} \caption{ Proposals from the left and right views. The proposals from the left and right views do not completely overlap, and the final fused proposal is more accurate than either of them.} \label{fig:2} \end{wrapfigure} In this paper, we propose the Stereo RGB and Deeper LIDAR (SRDL) network for 3D object detection, which includes three modules: stereo proposal fusion, 3D point cloud segmentation and 3D bounding box regression, as shown in Figure~\ref{fig:1}. In the following subsections, we introduce these modules in detail. \subsection{Stereo proposal fusion} In our framework, we take the stereo images as input and leverage a mature 2D object detector to generate 2D object proposals for the left and right images respectively. At the same time, we apply convolution-deconvolution in each view to acquire features at a higher resolution. Combining the 2D proposals, RoIpooling is employed for each view to obtain features of the same size. 
Finally, we fuse the two cropped feature maps output by RoIpooling via an element-wise mean operation. As Figure~\ref{fig:2} shows, the two outputs of the left and right branches do not completely overlap. Instead, each of them generates different proposals from different views. With a known camera projection matrix to offer accurate depth information, each bounding box can be projected into 3D space to form an overlapping object region. Through the final element-wise fusion, the final proposal contains less space and fewer points, and is more accurate than either of the initial ones thanks to mutual supervision and correction. \subsection{3D point cloud segmentation} \label{sec:sim4} As illustrated in Figure~\ref{fig:1}, the fused proposal is fed into the second module together with the depth range to outline an appropriate location in 3D space. Given the 2D image region and its corresponding 3D locations, we design a 3D segmentation network to separate the 3D point clouds from the background for further 3D coordinate regression. \subsubsection{Architecture overview} The input of the segmentation network is fed into a transformer net that uses attention-based layers to regress a $4\times4$ transformation matrix, the elements of which are the learnt affine transformation values for point cloud alignment. The aligned points are then fed into several stacked attention-based layers to generate a permutation-invariant embedding of the points. Among them, the residual attention module serves as a linking bridge between two adjacent layers to transfer information. After that, the outputs of all the previous 1024-D attention-based layers are concatenated together and max pooling is used to obtain the final global information aggregation for the point clouds. The information is then fed into an MLP layer to predict an $N\times p$ score matrix and make a point-wise prediction. More details of the specific architecture are described in the supplementary. \subsubsection{Attention-based layer} \label{sec:sim2} \begin{wrapfigure}{r}{6cm} \centering \includegraphics[height=7.3cm,width=4.5cm]{fig3} \caption{ Illustration of how information propagates between two adjacent layers. Residual attention connections transfer local and global information from the previous layer to the current layer individually.} \label{fig:3} \end{wrapfigure} The architecture of the attention-based layers is shown in Figure~\ref{fig:3}, in which the current layer is considered as an intermediate layer and the features are transmitted not only by the mainstream information flow but also by the residual attention link from the previous layer. The point embedding from the current layer is input into two parallel layers, a local EdgeConv layer and a global MLP layer. The local EdgeConv layer constructs a dynamic graph and incorporates information from the $k$ nearest local neighbors. The global MLP layer operates on each point independently and subsequently applies a symmetric function to accumulate features. The outputs of these two layers are connected to the outputs of the same branch of the previous layer in an element-wise manner before being concatenated together. The two layers also individually transfer information to the next embedding layer using residual attention connections. \par Specifically, consider $n$ points in a $D$-dimensional embedding point set $P=\{p_1,p_2,...,p_n\}$, where $D$ can simply be set to 3, which means each point $p_i$ contains three coordinates $(x_i,y_i,z_i)$. 
These points are processed in parallel by the local EdgeConv layer and the global MLP layer in each attention-based layer. In the branch of the local EdgeConv layer, let $h(k)$ denote the input in terms of the $k$ nearest neighbours in the dynamic graph and $e_i(i=1,2)$ denote the edgeconv operations that evaluate a point's dependency on its $k$ nearest neighbours. The extracted edge features are fed into batch normalization and ReLU. The output can be represented as: \begin{equation} \label{eqn:01} E_1=MLP(h(k))=MLP(e_1(h(k)))=\sigma(W_1(h(k))). \end{equation} After applying another edgeconv-BN layer, the output can be represented as: \begin{equation} \label{eqn:02} E_2 =MLP(E_1)=MLP(e_2(E_1))=W_2(E_1), \end{equation} where $\sigma$ denotes the ReLU function and $W_1$, $W_2$ are the weights of the two MLPs. After max pooling over the output, the attention map from the residual module is combined with the output $E_2$ in a point-wise manner, which can be written as: \begin{equation} \label{eqn:03} L=(1+R_1) \otimes E_2. \end{equation} Similarly, in the global MLP layer, $f(t)$ denotes the transformation of the input points by the shared weighted MLP denoted as $s$. The output of the first shared MLP layer is: \begin{equation} \label{eqn:04} M_1=MLP(f(t))=\sigma(W_1(s(k))). \end{equation} Applying another MLP-BN layer, the output is: \begin{equation} \label{eqn:05} M_2=MLP(M_1)=W_2(s(M_1)). \end{equation} This output is connected to the attention-aware feature in the residual attention module in the same way: \begin{equation} \label{eqn:06} G=(1+R_2)\otimes M_2, \end{equation} where $\otimes$ denotes element-wise multiplication and $R_i(i\in\{1,2\})$ vary between 0 and 1 in response to different features. Different from the original ResNet, the output of our residual attention module $R_i(i\in\{1,2\})$ acts as a feature filter that weakens noisy features and amplifies informative ones. Note that the outputs $L, G$ from the two branches have the same dimension and are also transferred to the next layer. Finally, they are concatenated together and the embedded points are fed into the next layer as the input. \subsubsection{Attention module} \begin{wrapfigure}{r}{5cm} \centering \includegraphics[height=6.1cm,width=3.8cm]{fig4} \caption{ Architecture of the attention module, which mainly consists of residual units in a bottom-up and top-down manner.} \label{fig:4} \end{wrapfigure} The attention module not only attempts to emphasize meaningful features but also enhances different representations of objects at certain locations. We design our attention module with a bottom-up and a top-down structure, as shown in Figure~\ref{fig:4}. The bottom-up operation aims to collect global information and the top-down operation combines the global information with the original feature maps. We use the residual unit in~\cite{he2016identity} as the basic unit of our attention module. The attention module contains three blocks. In block1, max pooling and a residual unit are applied to enlarge the receptive field. After reaching the lowest resolution, a symmetrical top-down architecture is designed in block2 to infer each pixel and obtain dense features. Besides, we append skip connections between bottom-up and top-down feature maps to capture features at different scales. In block3, a bilinear interpolation is inserted after a residual unit to up-sample the output. Finally, we use the sigmoid function to normalize the output after two consecutive $1\times1$ convolution layers to balance the dimensions. 
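To make the information flow through one attention-based layer concrete, we give a minimal PyTorch-style sketch below. It is an illustration under stated assumptions rather than the exact implementation: the helper \texttt{knn\_graph\_features}, the channel sizes, the default $k=20$, and the omission of batch normalization are our simplifications; only the two-branch structure and the residual attention modulation $L=(1+R_1)\otimes E_2$ and $G=(1+R_2)\otimes M_2$ follow the description above.
\begin{verbatim}
import torch
import torch.nn as nn

def knn_graph_features(x, k):
    # x: (B, C, N). Build (x_j - x_i, x_i) edge features over the k
    # nearest neighbours of every point (self included, which is harmless).
    pts = x.transpose(1, 2)                                     # (B, N, C)
    idx = torch.cdist(pts, pts).topk(k, largest=False).indices  # (B, N, k)
    B, N, C = pts.shape
    nbrs = torch.gather(pts.unsqueeze(1).expand(B, N, N, C), 2,
                        idx.unsqueeze(-1).expand(B, N, k, C))   # (B, N, k, C)
    center = pts.unsqueeze(2).expand_as(nbrs)
    return torch.cat([nbrs - center, center], dim=-1)           # (B, N, k, 2C)

class AttentionBasedLayer(nn.Module):
    """Two parallel branches (local EdgeConv, global shared MLP), each
    modulated by a residual attention map R via (1 + R) * features."""
    def __init__(self, c_in, c_out, k=20):
        super().__init__()
        self.k = k
        self.edge_mlp = nn.Sequential(nn.Linear(2 * c_in, c_out), nn.ReLU(),
                                      nn.Linear(c_out, c_out))   # e_1, e_2
        self.point_mlp = nn.Sequential(nn.Linear(c_in, c_out), nn.ReLU(),
                                       nn.Linear(c_out, c_out))  # shared MLP

    def forward(self, x, r_local, r_global):
        # x: (B, C_in, N); r_local, r_global: attention maps, (B, C_out, N)
        e2 = self.edge_mlp(knn_graph_features(x, self.k)).max(dim=2).values
        m2 = self.point_mlp(x.transpose(1, 2))                   # (B, N, C_out)
        local_out = (1 + r_local.transpose(1, 2)) * e2           # L
        global_out = (1 + r_global.transpose(1, 2)) * m2         # G
        return torch.cat([local_out, global_out], dim=-1).transpose(1, 2)
\end{verbatim}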
\subsection{3D bounding box regression} Given the segmented object points, this part regresses the final 3D bounding box using a more accurate bounding box encoding scheme after proposal refining. \subsubsection{3D proposal refining} \label{sec:sim1} After the segmentation operation on the point cloud, we can separate the object points from the background and acquire the points inside the bounding box at the location given by the first module. However, the combination of the predefined proposals from the first module and the segmentation network for the points yields only a relatively rough box. Therefore, we propose to pool 3D points and their corresponding features to rescale the proposal. For each 3D box proposal, $b_i=(x_i,y_i,z_i,w_i,h_i,l_i,\theta_i)$, we define a new 3D box by adding a constant $\xi$ to $w_i,h_i,l_i$ respectively to resize the box. For each point, a validation test is performed to decide whether it is inside the resized box or not. If it is, the point and its features are kept for refining the box proposal. A further ablation study illustrates the effectiveness of this operation in improving performance. \subsubsection{3D bounding box encoding} \label{sec:sim3} \begin{wrapfigure}{r}{7cm} \centering \includegraphics[width=0.5\textwidth]{fig5} \caption{ Comparison between different methods for encoding the bounding box. We propose to encode the bounding box with three points (two corners + one center) and two heights to reduce redundancy and keep physical connections.} \label{fig:5} \end{wrapfigure} To determine the orientation of a 3D bounding box, we keep consistent with AVOD, which computes $(\cos\theta,\sin\theta)$ to solve the problem of angle wrapping. As for the box encoding, there are several different methods to encode the bounding box, as shown in Figure~\ref{fig:5}. The axis-aligned encoding is first proposed in~\cite{song2016deep}, which encodes the box with its center and size. In MV3D~\cite{chen2017multi}, Chen et al. claim that the 8-corner box encoding works better than the axis-aligned one. And in AVOD~\cite{ku2018joint}, Ku et al. replace the 8 corners with 4 corners and 2 heights to encode the box efficiently. However, the 8-corner encoding needs a 24-D vector normalized by the diagonal length of the proposal box and neglects the physical constraints. The 4 corners + heights encoding method does not take the physical connections between the 4 corners within a plane into account. To reduce more redundancy and keep physical connections, we propose to encode the bounding box with three points (two corners + one center) and two heights representing the offsets from the proposal box to the ground plane. The three points are on the diagonal of the cube, where $c_2$ is the center point of the cube. Therefore, the regression targets are $\{\Delta x_i,\Delta y_i,\Delta z_i,\Delta h_j;i=1,2,3;j=1,2\}$. Although our 11-D representation vector is slightly larger than the 10-D one, we not only use fewer points but also encode the bounding box compactly through the following constraints between these parameters. When regressing $w,l$, we should consider: \begin{enumerate} \item $(\Delta x_{c1},\Delta x_{c3})\rightarrow w$, $(|\Delta x_{c2}-\Delta x_{c1}|,|\Delta x_{c3}-\Delta x_{c2}|)\rightarrow w$. \item $(\Delta y_{c1},\Delta y_{c3})\rightarrow l$, $(|\Delta y_{c2}-\Delta y_{c1}|,|\Delta y_{c3}-\Delta y_{c2}|)\rightarrow l$. 
\end{enumerate} When regressing $h$, we should ensure that the following equations hold true: \begin{enumerate} \item $|\Delta z_{c3}-\Delta z_{c1}|=|\Delta h_2-\Delta h_1|\rightarrow h$. \item $|\Delta z_{c2}-\Delta z_{c1}|=|\Delta z_{c3}-\Delta z_{c2}|=\frac{1}{2}|\Delta h_2-\Delta h_1|\rightarrow h$. \end{enumerate} where $\rightarrow$ denotes that there exist constraints in the regression process. A small decoding sketch illustrating these constraints is given below. \subsection{Loss function} We use a multi-task loss to train our network. Our total loss is composed of three main components from the three modules: the fused loss $L_{fuse}$, the segmentation loss $L_{seg}$ and the bounding box regression loss $L_{box}$, \begin{equation} \label{eqn:07} \begin{aligned} L_{total}&=\alpha L_{fuse}+\beta L_{seg}+\chi L_{box} \\ &=\alpha(L_{cls\_1}+L_{reg\_1})+\beta L_{seg}+\chi(L_{cls\_2}+L_{reg\_2}), \end{aligned} \end{equation} where the weighting parameters $\alpha$, $\beta$ and $\chi$ are used to balance the relative importance of the different parts, and their values are set to 1, 4 and 2 respectively. $L_{cls\_1}$ and $L_{cls\_2}$ are the object classification losses. $L_{reg\_1}$ and $L_{reg\_2}$ are box regression losses. In our practice, we apply binary cross entropy for all classification losses. As for regression, we employ the Smooth L1 loss for all bounding box and orientation vector regression. For segmentation, we use the focal loss~\cite{lin2017focal} to handle the imbalance problem. More details of the specific losses are described in the supplementary. \section{Experiments} We evaluate our method on the 3D detection benchmark and the bird's eye view detection benchmark of the KITTI test server~\cite{geiger2012we}. For evaluation, we use the average precision (AP) metric to compare with different methods and use the official 3D IoU thresholds of 0.7, 0.5, and 0.5 for the categories of car, cyclist, and pedestrian, respectively. In this section, we introduce the experimental results. More description of the datasets, implementation and training details is given in the supplementary. 
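Before turning to the comparisons, we give the small decoding sketch promised in Sec.~\ref{sec:sim3}. It is a NumPy illustration under stated assumptions, not the training code: the function name \texttt{decode\_box} and the axis-aligned simplification are ours, and in the network all quantities are regressed as offsets while the orientation is handled separately via $(\cos\theta,\sin\theta)$.
\begin{verbatim}
import numpy as np

def decode_box(c1, c2, c3, h1, h2):
    # c1, c3: opposite corners on the cube diagonal; c2: cube center;
    # h1, h2: offsets of the bottom/top faces from the ground plane.
    c1, c2, c3 = (np.asarray(c, dtype=float) for c in (c1, c2, c3))
    w = abs(c3[0] - c1[0])          # width from the x-offsets
    l = abs(c3[1] - c1[1])          # length from the y-offsets
    h = abs(h2 - h1)                # height from the two height offsets
    # constraint: c2 must be the midpoint of the diagonal c1-c3
    assert np.allclose(c2, 0.5 * (c1 + c3))
    # constraint: the z-extent of the diagonal must equal the height
    assert np.isclose(abs(c3[2] - c1[2]), h)
    return {"center": c2, "size": (w, l, h)}

# usage: an 11-D target = 3 points x 3 coordinates + 2 heights
box = decode_box(c1=(0, 0, 0), c2=(1, 2, 1), c3=(2, 4, 2), h1=0.0, h2=2.0)
\end{verbatim}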
\subsection{Comparing with state-of-the-art methods} \begin{wraptable}{r}{8.5cm} \setlength{\abovecaptionskip}{-5pt} \setlength{\belowcaptionskip}{-0pt} \tiny \caption{Performance comparison on KITTI 3D object detection for cars, pedestrians and cyclists. The evaluation metric is the average precision (AP) on the official test set.} \label{tab:one} \begin{center} \setlength{\tabcolsep}{0.2mm} \begin{tabular}{ccccccccccc} \toprule \multirow{2}{*}{Method} & \multirow{2}{*}{Modality} & \multicolumn{3}{c}{$AP_{car}(\%)$} & \multicolumn{3}{c}{$AP_{pedestrian}(\%)$} & \multicolumn{3}{c}{$AP_{cyclist}(\%)$} \\ \cmidrule(r){3-5} \cmidrule(r){6-8} \cmidrule(r){9-11} & & Easy & Moderate & Hard & Easy & Moderate & Hard & Easy & Moderate & Hard\\ \hline M3D-RPN\cite{brazil2019m3d} & Mono & 15.52 & 11.44 & 9.62 & - & - & - & - & - & -\\ CE3R\cite{ma2019accurate}& Mono & 21.48 & 16.08 & 15.26 & - & - & - & - & - & -\\\hline Stereo RCNN\cite{li2019stereo} & Stereo & 49.23 & 34.05 & 28.39 & - & - & - & - & - & -\\\hline MV3D\cite{chen2017multi} & Mono+Lidar & 71.09 & 62.35 & 55.12 & - & - & - & - & - & -\\ F-Pointnet\cite{qi2018frustum} & Mono+Lidar & 81.20 & 70.39 & 62.19 & 51.21 & 44.89 & 40.23 & 71.96 & 56.77 & 50.39\\ AVOD-FPN\cite{ku2018joint} & Mono+Lidar & 81.94 & 71.88 & 66.38 & 50.80 & 42.81 & 40.88 & 64.00 & 52.18 & 46.61\\ F-ConvNet\cite{wang2019frustum} & Mono+Lidar & 85.88 & 76.51 & 68.08 & 52.37 & 45.61 & 41.49 & {\bf79.58} & 64.68 & 57.03\\ MMF\cite{liang2019multi} & Mono+Lidar & 86.81 & 76.75 & 68.41 & - & - & - & - & - & -\\\hline Voxelnet\cite{zhou2018voxelnet} & Lidar & 77.47 & 65.11 & 57.73 & 39.48 & 33.69 & 31.51 & 61.22 & 48.36 & 44.37\\ SECOND\cite{yan2018second} & Lidar & 83.13 & 73.66 & 66.20 & 51.07 & 42.56 & 37.29 & 70.51 & 53.85 & 46.90\\ PointPillars\cite{lang2019pointpillars} & Lidar & 79.05 & 74.99 & 68.30 & 52.08 & 43.43 & 41.49 & 75.78 & 59.07 & 52.92\\ PointRCNN\cite{shi2019pointrcnn} & Lidar & 85.94 & 75.76 & 68.32 & 49.43 & 41.78 & 38.63 & 73.93 & 59.60 & 53.59\\ STD\cite{yang2019std} & Lidar & 86.61 & 77.63 & {\bf76.06} & 53.08 & 44.24 & 41.97 & 78.89 & 62.53 & 55.77\\ Point-GNN\cite{Point-GNN} & Lidar & 88.33 & 79.47 & 72.29 & 51.92 & 43.77 & 40.14 & 78.60 & 63.48 & 57.08\\\hline SRDL(ours) &Stereo+Lidar& {\bf89.27} & {\bf79.95} & 73.79 & {\bf53.44} & {\bf45.91} & {\bf42.61} & 78.68 & {\bf64.88} & {\bf57.74}\\ \bottomrule \end{tabular} \end{center} \end{wraptable} For the 3D object detection and the bird's eye view detection test benchmarks, shown in Table~\ref{tab:one} and Table~\ref{tab:two}, our proposed method achieves decent results compared with other state-of-the-art methods for all categories on all three difficulty levels. For cars, our method achieves results better than or comparable to most of the methods. For pedestrians and cyclists, SRDL achieves large gains over the mono+Lidar methods by combining stereo images, especially on the moderate and hard sets. For the most important car category, we also report the performance of our method on the KITTI val split; the results are shown in the supplementary. \begin{wraptable}{r}{8.5cm} \setlength{\abovecaptionskip}{-35pt} \setlength{\belowcaptionskip}{-5pt} \tiny \caption{Performance comparison on KITTI bird's eye view detection for cars, pedestrians and cyclists. 
The evaluation metric is the average precision (AP) on the official test set.} \label{tab:two} \begin{center} \setlength{\tabcolsep}{0.4mm} \begin{tabular}{ccccccccccc} \toprule \multirow{2}{*}{Method} & \multirow{2}{*}{Modality} & \multicolumn{3}{c}{$AP_{car}(\%)$} & \multicolumn{3}{c}{$AP_{pedestrian}(\%)$} & \multicolumn{3}{c}{$AP_{cyclist}(\%)$} \\ \cmidrule(r){3-5} \cmidrule(r){6-8} \cmidrule(r){9-11} & & Easy & Moderate & Hard & Easy & Moderate & Hard & Easy & Moderate & Hard\\ \hline M3D-RPN\cite{brazil2019m3d} & Mono & 21.29 & 15.23 & 13.16 & - & - & - & - & - & -\\\hline Stereo RCNN\cite{li2019stereo} & Stereo & 61.27 & 43.87 & 36.44 & - & - & - & - & - & -\\\hline MV3D\cite{chen2017multi} & Mono+Lidar & 86.02 & 76.90 & 68.49 & - & - & - & - & - & -\\ F-Pointnet\cite{qi2018frustum} & Mono+Lidar & 88.70 & 84.00 & 75.33 & 58.09 & 50.22 & 47.20 & 75.38 & 61.96 & 54.68\\ AVOD-FPN\cite{ku2018joint} & Mono+Lidar & 88.53 & 83.79 & 77.90 & 58.75 & 51.05 & 47.54 & 68.09 & 57.48 & 50.77\\ F-ConvNet\cite{wang2019frustum} & Mono+Lidar & 89.69 & 83.08 & 74.56 & 58.90 & 50.48 & 46.72 & 82.59 & 68.62 & 60.62\\ MMF\cite{liang2019multi} & Mono+Lidar & 89.49 & 87.47 & 79.10 & - & - & - & - & - & -\\\hline Voxelnet\cite{zhou2018voxelnet} & Lidar & 89.35 & 79.26 & 77.39 & 46.13 & 40.74 & 38.11 & 66.70 & 54.76 & 50.55\\ SECOND\cite{yan2018second} & Lidar & 88.07 & 79.37 & 77.95 & 55.10 & 46.27 & 44.76 & 73.67 & 56.04 & 48.78\\ PointPillars\cite{lang2019pointpillars} & Lidar & 88.35 & 86.10 & 79.83 & 58.66 & 50.23 & 47.19 & 79.14 & 62.25 & 56.00\\ PointRCNN\cite{shi2019pointrcnn} & Lidar & 89.47 & 85.58 & 79.10 & - & - & - & 81.52 & 66.77 & {\bf60.78}\\ STD\cite{yang2019std} & Lidar & 89.66 & 87.76 & {\bf86.89} & {\bf60.99} & 51.39 & 45.89 & 81.04 & 65.32 & 57.85\\ Point-GNN\cite{Point-GNN} & Lidar & {\bf93.11} & 89.17 & 83.90 & 55.36 & 47.07 & 44.61 & 81.17 & 67.28 & 59.67\\\hline SRDL(ours) &Stereo+Lidar& 90.82 & {\bf89.74} & 81.93 & 59.62 & {\bf51.46} & {\bf48.32} & {\bf82.61} & {\bf69.11} & 60.37\\ \bottomrule \end{tabular} \end{center} \end{wraptable} \subsection{Qualitative results} We present some qualitative results of our proposed SRDL network on the test split of the KITTI dataset in Figure~\ref{fig:6}. From the figures we can see that our proposed network estimates accurate 3D bounding boxes in different scenes. Notably, we observe that our method can still achieve satisfactory detection results even with very sparse point clouds and severe occlusion. \subsection{Ablation studies} In this section, we vary components of our proposed SRDL through extensive ablation studies on the validation split of KITTI. We follow the convention and use the car class, which contains the largest amount of training examples. The evaluation metric is the average precision (AP \%) on the val set. \begin{figure}[htbp] \centering \includegraphics[width=14cm]{fig6} \caption{Qualitative 3D detection results of SRDL on the KITTI test set. The detected objects are shown with green 3D bounding boxes and the relative labels. The upper row in each image is the 3D object detection result projected onto the RGB image and the bottom is the result in the corresponding point clouds.} \label{fig:6} \end{figure} {\bf Effect of Different Design Choices in the Whole Network.} We illustrate the importance of different components of our network by removing one part and keeping all the others unchanged, as shown in Table~\ref{tab:3}. 
Without the stereo images as input (removing ``Stereo'' means using a mono image instead), the performance of SRDL drops dramatically, which shows that the stereo images provide rich feature information for locating objects. Similarly, AP decreases significantly by 11.65\%, 12.27\%, 16.33\% respectively for easy, moderate and hard, which confirms the indispensability of the 3D bounding box encoding. And the performance degradation caused by the absence of either the local or the global convolution for segmentation proves that only their combination produces the best results. \begin{table} \begin{minipage}[t]{0.35\linewidth} \tiny \caption{Performance of removing different parts of our network. $\times$ denotes removing and $\checkmark$ denotes retaining.} \label{tab:3} \begin{center} \setlength{\tabcolsep}{0.7mm}{ \begin{tabular}{ccccccc} \toprule Stereo & Local & Global & Encoding & Easy & Moderate & Hard \\\hline $\times$ & $\checkmark$ & $\checkmark$ & $\checkmark$ & 83.64 & 73.59 & 67.48 \\ $\checkmark$ & $\times$ & $\checkmark$ & $\checkmark$ & 86.77 & 72.32 & 71.65\\ $\checkmark$ & $\checkmark$ & $\times$ & $\checkmark$ & 88.46 & 76.71 & 75.69 \\ $\checkmark$ & $\checkmark$ & $\checkmark$ & $\times$ & 78.63 & 67.55 & 62.32 \\ $\checkmark$ & $\checkmark$ & $\checkmark$ & $\checkmark$ & {\bf90.28} & {\bf79.82} & {\bf78.65} \\ \bottomrule \end{tabular}} \end{center} \end{minipage}% \hfill \begin{minipage}[t]{0.28\linewidth} \tiny \caption{Performance comparison of different fusion methods with the attention mechanism.} \label{tab:4} \begin{center} \setlength{\tabcolsep}{0.6mm}{ \begin{tabular}{cccc} \toprule Fusion Method & Easy & Moderate & Hard \\\hline Global & 83.85 & 71.08 & 65.73 \\ Local & 83.89 & 71.27 & 66.54 \\ Global+Local & 84.51 & 71.33 & 68.79 \\ Global+Attention & 86.77 & 72.32 & 71.65 \\ Local+Attention & 88.46 & 76.71 & 75.69\\ Global+Local+Attention & {\bf90.28} & {\bf79.82} & {\bf78.65} \\ \bottomrule \end{tabular}} \end{center} \end{minipage} \hfill \begin{minipage}[t]{0.25\linewidth} \tiny \caption{Performance comparison of different bounding box encoding methods.} \label{tab:5} \begin{center} \setlength{\tabcolsep}{0.5mm}{ \begin{tabular}{cccc} \toprule Encoding Method & Easy & Moderate & Hard \\\hline Axis & 79.13 & 68.42 & 65.36\\ 8 Corners & 83.09 & 74.51 & 70.82 \\ 4 Corners+2 Heights & 89.61 & 78.37 & 77.64 \\ 3 Points+2 Heights & {\bf90.28} & {\bf79.82} & {\bf78.65} \\ \bottomrule \end{tabular}} \end{center} \end{minipage} \end{table} {\bf Effect of Attention Module.} To show the importance of the attention module in Section~\ref{sec:sim2}, we add the attention module to three different designs. As shown in Table~\ref{tab:4}, with the attention module transferring features between connected layers, the fused models outperform the original ones by 1.24\%, 5.44\%, 8.49\% respectively on the moderate difficulty. And our final fusion method with the attention mechanism outperforms the attention-free alternative by 5.77\%, 8.49\%, 9.86\% on the three difficulties. \begin{wraptable}{r}{5.5cm} \setlength{\abovecaptionskip}{-5pt} \setlength{\belowcaptionskip}{-5pt} \tiny \caption{Performance of adopting different sizes of $\xi$ for 3D box refining. The ``-'' half denotes shrinking and the other half denotes enlarging. 
0m is the original size.} \label{tab:6} \begin{center} \setlength{\tabcolsep}{1mm}{ \begin{tabular}{cccc} \toprule refining size($\xi$) & Easy & Moderate & Hard \\\hline 1.5m & 72.62 & 69.86 & 68.57 \\ 1.0m & 79.43 & 70.53 & 70.82\\ 0.8m & 84.59 & 72.58 & 71.94 \\ 0.5m & 87.26 & 76.25 & 74.85 \\ 0m & 89.47 & 78.84 & 77.62 \\ -0.5m & {\bf90.28} & {\bf79.82} & {\bf78.65} \\ -0.8m & 89.12 & 78.76 & 77.38 \\ -1.0m & 87.69 & 77.17 & 75.93 \\ -1.5m & 84.81 & 73.91 & 73.11 \\ \bottomrule \end{tabular}} \end{center} \end{wraptable} {\bf Effect of Different Refining Sizes $\xi$.} In Section~\ref{sec:sim1}, we propose to refine the proposals by adding a constant $\xi$ to the size of the box. Table~\ref{tab:6} shows the results with different sizes. $\xi=-0.5$m performs best in our network, which indicates that we should shrink the original box by 0.5m. Note that when we enlarge the size of the box, especially over 1m, the value of AP drops sharply. This indicates that the original box already contains redundant space, and continuing to enlarge the box will only include more unrelated areas. At the same time, shrinking the box with too large a $\xi$ also leads to poor performance, since a small region may exclude relevant areas. {\bf Effect of Bounding Box Encoding Method.} As stated in Section~\ref{sec:sim3}, there are different bounding box encoding methods, including the one we propose. We use the four different methods to encode boxes in our proposed network. From Table~\ref{tab:5}, we note that although the 4 corners + 2 heights method uses slightly fewer dimensions, its performance is worse than that of our method. For one thing, the 4 corners + 2 heights method does not take into account the coordinate relationship between the four corners, so the number of points is redundant. For another, the constraint relationship between the coordinates of the corners and the heights cannot be established. Our method establishes four sets of constraint relationships to constrain the length, width, and height respectively. \section{Conclusions} In this paper, we have proposed a novel stereo RGB and deeper LIDAR (SRDL) network for 3D object detection in autonomous driving scenarios. Our method takes full advantage of the merits of stereo RGB images and point clouds to form an end-to-end framework. The combination of semantic information from stereo images and spatial information from point clouds contributes to improving the performance. Extensive experiments on the challenging KITTI 3D detection benchmark demonstrate the effectiveness of our method. In future research, we will optimize the inference speed, focus more on integrating RGB and point-wise features, and explore different operations on the point clouds to further improve our detection framework. \section*{Broader Impact} This article addresses a subtask of autonomous driving research. The accurate detection of vehicles and people on the road can greatly promote the development of autonomous driving. Our goal is to accurately detect 3D objects on the road to avoid various accidents. In this paper, stereo RGB images and point cloud data are used for joint detection. The dual data from the optical cameras and the LIDAR sensor jointly ensure the detection result, which is helpful for the further deployment of this application. 
For autonomous driving companies, the methods in this article can provide inspiration for further improving the safety of autonomous driving, and this article also provides researchers with a new way to make full use of road environment data. However, we have to admit that the method proposed in this article uses a variety of data, so the hardware requirements in the implementation are relatively high, which will bring some costs. In addition, once the data from one of the sensors is missing as input, the algorithm in this paper will immediately fail. \begin{ack} This work is supported by a grant from the National Natural Science Foundation of China (No. 61872068, 61720106004), by a grant from the Science \& Technology Department of Sichuan Province of China (No. 2018GZ0071, 2019YFG0426), and by a grant from the Fundamental Research Funds for the Central Universities (No. 2672018ZYGX2018J014). \end{ack} \small \bibliographystyle{abbrv}
\section{Introduction} \label{sec:intro} Local features, specifically referring to local point features in this paper, are extensively employed in a large number of computer vision applications, such as image stitching~\cite{Brown2007}, content-based image retrieval~\cite{videogoogle_2003}, image-based localization~\cite{Worldwide2012,localization_song2016}, structure-from-motion (SfM)~\cite{Agarwal:2011:BRD}, and simultaneous localization and mapping (SLAM)~\cite{zhang2019localization}. In these applications, the quality of the local feature module significantly influences the overall system performance and thus must be studied and optimized in depth. \begin{figure}[t] \center \includegraphics[width=7.8cm]{./fig/teaser.pdf} \caption{Desired properties of local features. Detector repeatability (1.1): a visible scene point should be detected in all images. Descriptor repeatability (1.2): the descriptor of the same point is invariant over different images. Detector reliability (2.1): given the descriptor, detected keypoints can be distinguished by their descriptors. Descriptor reliability (2.2): given the detector, descriptors can distinguish detected keypoints.} \label{fig:teaser} \end{figure} In general, a standard local feature algorithm can be divided into two modules, \textit{i.e.}, keypoint detection and description. For each keypoint, its inner-image location is determined via the detection module, while its descriptor is calculated by summarizing the local context information via the description module. Early works on local features primarily originated from hand-crafted methodologies, and representative methods include SIFT~\cite{SIFT_2004_ijcv}, SURF~\cite{surf_eccv06}, KAZE~\cite{kaze_eccv12}, AKAZE~\cite{akaze_bmvc13}, BRISK~\cite{brisk_iccv11}, ORB~\cite{orb_iccv11}, and so on. Although hand-crafted features have been widely used in various computer vision tasks, their rule-based design prevents further performance enhancement along with increasing model representation ability. Inspired by the great successes of DNNs on a variety of computer vision tasks~\cite{krizhevsky2012imagenet,ren2015faster,chen2017deeplab}, researchers have been actively working on designing and learning advanced local feature models. Since a local feature algorithm consists of both detection and description, each module can be individually replaced and improved by DNN-based methods~\cite{keynet_iccv19,ddesc_2015_iccv}. Alternatively, both modules can also be jointly designed using one DNN model. That can be done either by sequentially connected neural networks that first calculate keypoint locations and subsequently compute descriptors~\cite{lift_eccv16,lfnet_nips18}, or by a single network with a shared backbone and two separate branches for regressing detectors and descriptors respectively~\cite{delf_iccv17,superpoint_cvpr18,d2net_cvpr19,r2d2_nips19}. However, unlike on most tasks, existing DNN-based local features have not achieved such great progress compared with hand-crafted methods, which indicates that it is very challenging to exploit DNNs for local feature learning. As one local feature algorithm consists of two modules, we partly attribute this difficulty to the insufficient utilization of their inherent and interactive properties. To alleviate this problem, we analyze the desired properties of local features, including the detector, the descriptor, and their mutual relations. 
As demonstrated in Fig.~\ref{fig:teaser}, the properties can be summarized into two sets, \textit{i.e.}, `repeatability' and `reliability', and explained as: \noindent \textbf{Property 1} \textit{ Repeatability property of local feature. } \noindent \textbf{Property 1.1} \textit{ Detector repeatability: If a scene point is detected as a keypoint in one image, it should be detected in all images where it is visible. } \noindent \textbf{Property 1.2} \textit{ Descriptor repeatability: The descriptor of a scene point should be invariant across all images. } \noindent \textbf{Property 2} \textit{ Reliability property of local feature. } \noindent \textbf{Property 2.1} \textit{ Detector reliability: Given a descriptor method, the detector should localize the points which could be reliably distinguished by their descriptors. } \noindent \textbf{Property 2.2} \textit{ Descriptor reliability: Given a detector method, the descriptor could reliably distinguish the detected keypoints. } The repeatability is an inherent property of the detector and the descriptor, respectively, and the reliability is the interactive property between them. We also note that similar analyses and properties have been adopted to guide algorithm design in previous works~\cite{superpoint_cvpr18,d2net_cvpr19,r2d2_nips19}. However, instead of optimizing the detector and descriptor at the same time, we propose to optimize each module in turn. When optimizing the detector or descriptor, both its inherent repeatability property and its interactive reliability property are exploited to design the training strategies. Specifically, we figure out keypoints with reliable descriptors from all points. These keypoints are taken as ground truth to optimize the detector, which is guided by the detector reliability property. The optimized detector is then taken to detect keypoints from images. The descriptor is then optimized to reliably distinguish the detected keypoints, which is guided by the descriptor reliability property. This process is iterated until the learned model converges. Moreover, several strategies are also adopted to ensure the repeatability property and the convergence of the whole process. This training process is self-evolving, as it needs no additional supervised signals. Extensive experiments have been conducted to compare our model with state-of-the-art methods via performing homography estimation, relative pose estimation, and structure-from-motion tasks on public datasets; the results verify the effectiveness of our algorithm. Our main contributions can be concluded as follows: \begin{enumerate}[nosep] \item We propose a self-evolving framework guided by the properties of local features, by which an advanced model can be trained effectively using unannotated images. \item Training strategies are elaborately designed and deployed to ensure that the computed local feature model aligns with the desired properties. \item Extensive experiments verify the effectiveness of our framework and training strategies by outperforming state-of-the-art methods. \end{enumerate} \section{Related Work} In this section, we briefly review well-known local features, which can be categorized into four main groups: hand-crafted methods and three sets of DNN-based approaches. \textbf{Hand-crafted methods}. Early works on local features primarily rely on hand-crafted rules. 
One of the most well-known local feature algorithms is SIFT~\cite{SIFT_2004_ijcv}, which builds its detector on difference-of-Gaussian operators and calculates descriptors via computing orientation histograms. After SIFT, plenty of algorithms have been proposed for either approximating the image processing operators to gain computational efficiency or seeking performance gains by re-designing the detector or descriptor. Representative methods include SURF~\cite{surf_eccv06}, KAZE~\cite{kaze_eccv12}, AKAZE~\cite{akaze_bmvc13}, BRISK~\cite{brisk_iccv11}, and ORB~\cite{orb_iccv11}. To date, despite their rule-based design, hand-crafted features can still achieve leading performance in specific applications~\cite{8584423}. \textbf{DNN-based two-stage methods}. Hand-crafted local feature algorithms typically first detect keypoints in images and subsequently calculate descriptors around each keypoint by cropping and summarizing the local context information. This procedure can also be used in designing DNN-based methods by using sequentially connected neural networks~\cite{lift_eccv16,lfnet_nips18}. Each network has its own training strategy, optimized for the detector or descriptor, respectively. We name this kind of method two-stage methods, which can utilize previous expert knowledge in this area. The major disadvantage of the two-stage design is its computational inefficiency, since sequentially connected networks cannot share a large number of computations and parameters or enable fully parallel computing. \textbf{DNN-based one-stage methods}. To improve the efficiency of DNN-based local features, researchers have proposed the one-stage paradigm, which typically connects a backbone network with two lightweight head branches~\cite{delf_iccv17,superpoint_cvpr18,d2net_cvpr19,r2d2_nips19}. Since the backbone network shares most computations for both the detector and descriptor calculation, this type of algorithm can achieve significantly lower runtime. The two lightweight branches can be designed either using small neural networks~\cite{delf_iccv17,superpoint_cvpr18,r2d2_nips19} or by hand-crafted methods~\cite{d2net_cvpr19}. In terms of training strategies, all these methods require annotated information for conducting supervised learning. \cite{delf_iccv17} adopted a landmark image dataset with image-level annotations. \cite{d2net_cvpr19,r2d2_nips19} obtained ground-truth correspondences between images via SfM reconstruction. And \cite{superpoint_cvpr18} relied on synthetic images with generated `corner'-style keypoints. \textbf{DNN-based individual detector/descriptor methods}. There are also a number of methods that only focus on a DNN-based detector or descriptor, \textit{e.g.}, \cite{tilde_cvpr15,quadnet_cvpr17,kcnn_cvpr18,keynet_iccv19} proposed DNN-based keypoint detectors, and \cite{ddesc_2015_iccv,hardnet_nips17,l2net_cvpr17,geodesc_eccv18,contextdesc_cvpr19,song2019learning} worked on descriptor computation. However, we usually employ one local feature algorithm as a whole, since the detector and descriptor influence each other's performance. Those methods can be considered as pluggable modules and used in a two-stage algorithm. In this paper, we focus on developing an advanced DNN-based one-stage model. 
\section{Formulation and Network Architecture} \label{sec:network} To describe our method better, we first introduce basic notations along with the network architecture, while the self-evolving framework and training strategies are elaborated in the next section. As shown in Fig.~\ref{fig:network}, our network consists of a shared backbone $\mathcal{N}_{b}$ and two lightweight head branches, \textit{i.e.}, a detector branch $\mathcal{N}_{det}$ and a descriptor branch $\mathcal{N}_{des}$. The backbone $\mathcal{N}_{b}$ consists of 1 convolutional layer and 9 ResNet-v2 blocks \cite{resnet_v2_eccv16}, and extracts feature maps $\leftidx{^{\frac{1}{4}}}{\mathcal{F}} \in \mathbb{R}^{\mathtt{C} \times \frac{\mathtt{H}}{4} \times \frac{\mathtt{W}}{4}}$ from the input image $\mathcal{I} \in \mathbb{R}^{\mathtt{H} \times \mathtt{W}}$. In the above notations, $\mathtt{H}, \mathtt{W}$ are the height and width of the input image $\mathcal{I}$ respectively, and $\mathtt{C}$ is the number of channels of the extracted feature maps. The hidden feature maps at the initial and $\frac{1}{k}$ scales are denoted as $\leftidx{^{1}}{\mathcal{F}}$ and $\leftidx{^{\frac{1}{k}}}{\mathcal{F}}$ respectively. The detector branch $\mathcal{N}_{det}$ consists of 2 deconvolutional layers and 1 softmax layer that predicts the keypoint probability map $\mathcal{P} \in \mathbb{R}^{2 \times \mathtt{H} \times \mathtt{W}}$ from the feature maps $\leftidx{^{\frac{1}{4}}}{\mathcal{F}}$. Moreover, this branch also contains two shortcut links from low-level features to enhance its localization ability. The descriptor branch $\mathcal{N}_{des}$ consists of 1 ResNet-v2 block and 1 bi-linear up-sampling layer that extracts a descriptor $\mathcal{F}_{\left(h,w\right)}$ of dimension $\mathtt{C}$ for each pixel $\left(h, w\right)$, where $\mathcal{F} \in \mathbb{R}^{\mathtt{C} \times \mathtt{H} \times \mathtt{W}}$ and $\mathcal{F}_{\left( h,w \right) } \in \mathbb{R}^{\mathtt{C}}$. Benefiting from this network structure, our detector and descriptor can share most parameters and computations. \begin{figure}[t] \center \includegraphics[width=7cm]{./fig/network.pdf} \caption{Overview of our network, which consists of a heavy shared backbone and two lightweight head branches for detection and description respectively.} \label{fig:network} \end{figure} \section{Self-Evolving Framework} To train the network constructed in Sec.~\ref{sec:network}, two types of supervisory signals should be pre-provided. The first is the location of each keypoint, and the second is the keypoint correspondences between different images. With the desired properties of local features in mind, we propose to figure out the points with reliable descriptors as keypoints. And pairs of images, along with their correspondences, can be obtained via affine transformation. Then, the network can be trained only using unlabeled images. However, as the training data have no additional annotation information, we must carefully design the training strategies to ensure good performance. 
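Before detailing the training steps, we give a minimal PyTorch-style sketch of the network layout of Sec.~\ref{sec:network} as a concrete reference. It is a simplification under stated assumptions rather than the exact implementation: each ResNet-v2 block is replaced by a single convolution, the channel widths and the single-channel input are illustrative, and the final $\mathtt{L}_2$ normalization of the descriptors is our addition; only the shared backbone, the detector head with two deconvolutional layers and shortcut links, and the descriptor head with bilinear up-sampling follow the description above.
\begin{verbatim}
import torch
import torch.nn as nn
import torch.nn.functional as F

class LocalFeatureNet(nn.Module):
    """Shared backbone with a detector head and a descriptor head."""
    def __init__(self, c=128):
        super().__init__()
        self.stem = nn.Conv2d(1, 32, 3, padding=1)               # N_b, scale 1
        self.down1 = nn.Conv2d(32, 64, 3, stride=2, padding=1)   # scale 1/2
        self.down2 = nn.Conv2d(64, c, 3, stride=2, padding=1)    # scale 1/4
        # detector head: two deconvolutions with shortcut links
        self.up1 = nn.ConvTranspose2d(c, 64, 2, stride=2)
        self.up2 = nn.ConvTranspose2d(64, 32, 2, stride=2)
        self.det_out = nn.Conv2d(32, 2, 1)                       # 2-way scores
        # descriptor head: one block, then bilinear up-sampling
        self.des = nn.Conv2d(c, c, 3, padding=1)

    def forward(self, img):                  # img: (B, 1, H, W), H, W % 4 == 0
        f1 = F.relu(self.stem(img))                              # ^1 F
        f2 = F.relu(self.down1(f1))                              # ^1/2 F
        f4 = F.relu(self.down2(f2))                              # ^1/4 F
        d = F.relu(self.up1(f4)) + f2                            # shortcut link
        d = F.relu(self.up2(d)) + f1                             # shortcut link
        prob = F.softmax(self.det_out(d), dim=1)                 # P: (B,2,H,W)
        desc = F.interpolate(self.des(f4), scale_factor=4,
                             mode='bilinear', align_corners=False)
        return prob, F.normalize(desc, dim=1)                    # F: (B,C,H,W)
\end{verbatim}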
The overview of our framework is shown in Fig.~\ref{fig:framework}, which mainly consists of four steps: (a) compute the keypoint probability map $\mathcal{P}$ using the current detector and subsequently filter the keypoints via the non-maximum suppression (NMS) algorithm; (b) update the descriptor branch using the detected keypoints via heightening their descriptors' repeatability and reliability properties; (c) compute keypoints by figuring out points with reliable (both repeatable and distinct) descriptors; (d) update the detector using the newly computed keypoints following the detector repeatability and reliability properties. In what follows, we present each step in detail. \begin{figure*}[t] \center \includegraphics[width=12cm]{./fig/framework.pdf} \caption{Overview of our self-evolving framework, which consists of four main steps: (a) detect keypoints using the current detector, (b) update the descriptor with the detected keypoints, (c) compute keypoints with reliable (both repeatable and distinct; the reliability metric is the ratio between the distinctiveness metric and the repeatability metric) descriptors, and (d) refine the detector using newly computed keypoints. } \label{fig:framework} \end{figure*} \subsection{Detect Keypoints using Detector} \label{sec:step1} For an input image $\mathcal{I}$, the backbone network $\mathcal{N}_{b}$ extracts feature maps $\leftidx{^1}{\mathcal{F}}, \leftidx{^{\frac{1}{2}}}{\mathcal{F}}, \leftidx{^{\frac{1}{4}}}{\mathcal{F}}$ via \begin{equation} \leftidx{^1}{\mathcal{F}}, \leftidx{^{\frac{1}{2}}}{\mathcal{F}}, \leftidx{^{\frac{1}{4}}}{\mathcal{F}} = \mathcal{N}_{b} \left( \mathcal{I} \right). \end{equation} The feature maps are subsequently used by the detector branch $\mathcal{N}_{det}$ to estimate the keypoint probability map $\mathcal{P}$ as \begin{equation} \label{eq:p} \mathcal{P} = \mathcal{N}_{det} \left( \leftidx{^1}{\mathcal{F}}, \leftidx{^{\frac{1}{2}}}{\mathcal{F}}, \leftidx{^{\frac{1}{4}}}{\mathcal{F}} \right). \end{equation} A strong response at a pixel of the probability map $\mathcal{P}$ indicates a potential keypoint; the candidates are further filtered by non-maximum suppression (NMS). We set the suppression radius to 4 pixels in all experiments and set the maximum number of keypoints to $1,000$ during the training process. However, the above process is not designed to ensure robust detection of the same keypoints under varying conditions. In other words, the detection process is not optimized to satisfy the detector repeatability property 1.1 and might lead to sub-optimal results. To this end, we adopt a dedicated data augmentation strategy, namely affine adaptation \cite{superpoint_cvpr18}. Specifically, we first apply a random affine transformation and color jitter to each input image, and calculate the keypoint probability map. This process is repeated several times, and an average detection result \begin{equation} \label{eq:det:affine} \overline{\mathcal{P}} = \mathtt{AVG}\left(\leftidx{_1}{\mathcal{P}}, \leftidx{_2}{\mathcal{P}}, \ldots, \leftidx{_m}{\mathcal{P}} \right) \end{equation} is computed as the final output, where $\leftidx{_1}{\mathcal{P}}$ corresponds to the initial image and the others correspond to the transformed counterparts. Representative examples of the detection process are demonstrated in Fig.~\ref{fig:detection:result}. Note that the affine adaptation is only applied during training. 
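A minimal sketch of this affine-adaptation step is given below, using OpenCV warps. The helper \texttt{detect\_fn} (which stands for running the network of Eq.~\eqref{eq:p} and returning the keypoint probability as an \texttt{(H, W)} array), the sampling ranges, and the number of rounds are assumptions for illustration, and color jitter is omitted. Note that averaging maps produced from warped images is only meaningful in a common frame, so the sketch warps each map back before averaging; we take this to be implied by Eq.~\eqref{eq:det:affine}.
\begin{verbatim}
import numpy as np
import cv2

def affine_adapted_probability(img, detect_fn, num_rounds=10):
    # Average detector responses over randomly warped copies of the image.
    h, w = img.shape[:2]
    acc = detect_fn(img).astype(np.float64)          # identity-warp round
    for _ in range(num_rounds - 1):
        # random small affine: rotation, scale and translation (assumed ranges)
        ang = np.random.uniform(-30, 30)
        s = np.random.uniform(0.8, 1.2)
        M = cv2.getRotationMatrix2D((w / 2, h / 2), ang, s)
        M[:, 2] += np.random.uniform(-0.05 * w, 0.05 * w, size=2)
        warped = cv2.warpAffine(img, M, (w, h))
        p = detect_fn(warped)                        # (H, W) probability map
        # invert the affine to bring the map back to the original frame
        acc += cv2.warpAffine(p, cv2.invertAffineTransform(M), (w, h))
    return acc / num_rounds
\end{verbatim}
The averaged map $\overline{\mathcal{P}}$ is then filtered by NMS with the 4-pixel suppression radius, keeping at most $1,000$ keypoints, as described above.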
\begin{figure}[t]
\center
\includegraphics[width=8cm]{./fig/detection_example.pdf}
\caption{Representative examples of the keypoint detection process. Our detector operates on both the input image and its affine-transformed counterparts, and the average detection result is taken as the final output.}
\label{fig:detection:result}
\end{figure}
As the detector has not been optimized at iteration 0, another problem is how to detect keypoints at the start. As shown in Fig.~\ref{fig:framework} (a$_{0}$), we simply select random keypoints for each input image. Even so, we show in the experiments that the proposed self-evolving framework converges quickly within just a few iterations.
\subsection{Update Keypoint Descriptor}
\label{sec:step2}
A keypoint descriptor is a vector associated with each keypoint, used both for re-identifying the same keypoint and for distinguishing different keypoints across images. These descriptor properties are summarized by the repeatability property 1.2 and the reliability property 2.2 in Sec.~\ref{sec:intro}, which are used as guidelines in our descriptor training process. For each image $\mathcal{I}$, the keypoint detection process described in Sec.~\ref{sec:step1} provides a set of keypoints $\mathbf{Q} = \left\lbrace \mathtt{Q}_{i} \,|\, \mathtt{Q}_{i} = \left\langle h_i, w_i \right\rangle \right\rbrace$. The training process starts by applying a random affine transformation and color jitter $\mathcal{H}$ on both ${\mathcal{I}}$ and ${\mathbf{Q}}$, leading to
\begin{equation}
\hat{\mathcal{I}} = \mathcal{H} \left( \mathcal{I} \right),
\end{equation}
and
\begin{equation}
\hat{\mathbf{Q}} = \left\lbrace \hat{\mathtt{Q}}_{i} \,|\, \hat{\mathtt{Q}}_{i} = \mathcal{H}\left(h_i, w_i \right) \right\rbrace.
\end{equation}
Denoting by $\left\langle \cdot,\cdot \right\rangle$ a pair of keypoints, $\left\langle \mathtt{Q}_{i}, \hat{\mathtt{Q}}_{i} \right\rangle$ represents a pair of `ground-truth' matched keypoints. According to the descriptor repeatability property 1.2, their descriptors $\mathcal{F}_{\mathtt{Q}_{i}}, \mathcal{F}_{\hat{\mathtt{Q}}_{i}}$ should be close to each other. On the other hand, according to the descriptor reliability property 2.2, $\mathcal{F}_{\mathtt{Q}_{i}}$ should be distinct from all other descriptors except that of its matched keypoint, $\mathcal{F}_{\hat{\mathtt{Q}}_{i}}$. Representative examples of the matched and distinct cases are shown in Fig.~\ref{fig:framework}(b) by green and red lines, respectively. Inspired by HardNet \cite{hardnet_nips17}, we use a triplet loss along with a hard example mining strategy to train the descriptor.
Specifically, the loss function is defined as
\begin{equation}
\label{eq:loss:des0}
\mathcal{L}_{des} = \dfrac{1}{n} \sum_{i} \max \left( 0, \mathtt{D}_{i,i} - \min\left( \mathtt{D}_{i,\tilde{i}}, \mathtt{D}_{\tilde{i},i}\right) + m \right),
\end{equation}
where $n$ is the number of keypoints, $m=0.8$ denotes the margin parameter, $\|\cdot\|_2$ represents the $\mathtt{L}_{2}$ distance, and
\begin{equation}
\label{eq:loss:des01}
\mathtt{D}_{i,i} = \| \mathcal{F}_{\mathtt{Q}_{i}} - \hat{\mathcal{F}}_{\hat{\mathtt{Q}}_{i}} \|_2,
\end{equation}
\begin{equation}
\label{eq:loss:des02}
\mathtt{D}_{i,\tilde{i}} = \min_{j \neq i} \ \| \mathcal{F}_{\mathtt{Q}_{i}} - \hat{\mathcal{F}}_{\hat{\mathtt{Q}}_{j}} \|_2,
\end{equation}
\begin{equation}
\label{eq:loss:des03}
\mathtt{D}_{\tilde{i},i} = \min_{j \neq i} \ \| \mathcal{F}_{\mathtt{Q}_{j}} - \hat{\mathcal{F}}_{\hat{\mathtt{Q}}_{i}} \|_2.
\end{equation}
The triplet loss function \eqref{eq:loss:des0} endows the descriptor with both the repeatability property (via~\eqref{eq:loss:des01}) and the reliability property (via~\eqref{eq:loss:des02} and~\eqref{eq:loss:des03}). In addition, as our network shares a common backbone to simultaneously perform keypoint detection and description, the detector branch should also be considered when training the descriptor. To this end, we add a regularization loss term
\begin{equation}
\mathcal{L}_{det}' = \dfrac{1}{2} \left( \mathtt{MSE} \left(\mathcal{P}, \mathcal{P}' \right) + \mathtt{MSE} \left(\hat{\mathcal{P}}, \hat{\mathcal{P}}' \right) \right)
\end{equation}
to keep the detection results unchanged, where $\mathcal{P}$ is given by~\eqref{eq:p} and
\begin{equation}
\label{eq:3p}
\mathcal{P}' \!\!=\!\! \mathcal{N}_{det}' \left( \mathcal{N}_{b}' \left( {\mathcal{I}} \right) \right), \hat{\mathcal{P}} \!\!=\!\! \mathcal{N}_{det} ( \mathcal{N}_{b} ( \hat{\mathcal{I}} ) ), \hat{\mathcal{P}}' \!\!=\!\! \mathcal{N}_{det}' ( \mathcal{N}_{b}' ( \hat{\mathcal{I}} ) ),
\end{equation}
and $\mathcal{N}_{det}'$, $\mathcal{N}_{b}'$ and $\mathcal{N}_{det}$, $\mathcal{N}_{b}$ are the networks before and after this descriptor training step, respectively. The final loss to update the descriptor is
\begin{equation}
\mathcal{L}_{1} = \mathcal{L}_{des} + \alpha \mathcal{L}_{det}',
\end{equation}
where $\alpha$ is the parameter balancing the two losses and is empirically set to $1$.
\subsection{Compute Keypoints via Descriptor}
\label{sec:step3}
The next step of our self-evolving framework is to compute keypoints from the descriptor maps, which remains a challenging problem in the research community. In our work, we propose to compute keypoints by evaluating the repeatability property 1.2 and the reliability property 2.2 of their corresponding descriptors. Furthermore, as the reliability property implicitly involves the repeatability property, these two properties can be summarized as a single reliability criterion with two aspects, namely repeatability and distinctness. Specifically, given the outputs of the descriptor branch $\mathcal{F}_{\mathtt{P}}$ and $\hat{\mathcal{F}}_{\hat{\mathtt{P}}}$ from the original image $\mathcal{I}$ and its affine-transformed counterpart $\hat{\mathcal{I}}$, the descriptor repeatability can be evaluated at each point as
\begin{equation}
\label{eq:desc:rep}
\mathtt{D}_{i,i} = \| \mathcal{F}_{\mathtt{P}_{i}} - \hat{\mathcal{F}}_{\hat{\mathtt{P}}_{i}} \|_2.
\end{equation}
We point out that the lower $\mathtt{D}_{i,i}$ is, the more repeatable the descriptor is.
In addition, the distinctness of a descriptor can be evaluated as
\begin{equation}
\label{eq:desc:dis}
\mathtt{D}_{i,\tilde{i}} = \min_{j \neq i} \ \| \mathcal{F}_{\mathtt{P}_{i}} - \hat{\mathcal{F}}_{\hat{\mathtt{P}}_{j}} \|_2.
\end{equation}
Similarly, the higher $\mathtt{D}_{i,\tilde{i}}$ is, the more distinct the descriptor is. As a reliable descriptor should be both repeatable and distinct, we combine the repeatability and distinctness metrics into a single reliability metric via the ratio term
\begin{equation}
\label{eq:desc:ratio}
\mathcal{R}_{i} = \frac{\mathtt{D}_{i,\tilde{i}}}{\mathtt{D}_{i,i}}.
\end{equation}
Representative examples of the computed maps $\mathtt{D}_{i,i}, \mathtt{D}_{i,\tilde{i}}$, and $\mathcal{R}_{i}$ are shown in Fig.~\ref{fig:ratio}. One may notice that this ratio term \eqref{eq:desc:ratio} is the same as the ratio used in the ratio-test algorithm \cite{SIFT_2004_ijcv}, which is a well-known method for finding keypoint correspondences. This means that points with higher ratios can be reliably distinguished by subsequent keypoint correspondence algorithms. Such points should therefore be detected by the detector as often as possible. Consequently, strongly responsive elements on the ratio map $\mathcal{R}$ are selected as keypoints by applying the NMS algorithm.
\begin{figure}[t]
\center
\includegraphics[width=\linewidth]{./fig/ratio_example.pdf}
\caption{Representative maps of the repeatability metric $\mathtt{D}_{i,i}$, the distinctness metric $\mathtt{D}_{i,\tilde{i}}$, and the reliability metric $\mathcal{R}_{i}$.}
\label{fig:ratio}
\end{figure}
Moreover, to ensure high-quality performance, three strategies are applied in the keypoint computation process. Firstly, we note that the ratio map $\mathcal{R}$ does not cover all points in image $\mathcal{I}$, since some elements have no correspondence in the affine-transformed image $\hat{\mathcal{I}}$; in addition, computing keypoints from a single ratio map $\mathcal{R}$ is not robust. To this end, we adopt a data augmentation strategy similar to the affine adaption described in Sec.~\ref{sec:step1}. Specifically, we randomly warp the input image via an affine transformation, calculate the ratio map, and repeat the same process multiple times to generate an average ratio map
\begin{equation}
\label{eq:ratio:affine}
\overline{\mathcal{R}} = \mathtt{AVG}\left(\leftidx{_1}{\mathcal{R}}, \leftidx{_2}{\mathcal{R}}, \ldots, \leftidx{_m}{\mathcal{R}} \right),
\end{equation}
where $\leftidx{_i}{\mathcal{R}}$ corresponds to the $i$th result. An example of computing the average ratio map is given in Fig.~\ref{fig:ratio:adaption}. Secondly, computing $\mathtt{D}_{i,\tilde{i}}$ exactly is an extremely heavy task. To reduce the computation, we modify $\mathtt{D}_{i,\tilde{i}}$ as
\begin{equation}
\mathtt{D}_{i,\tilde{i}} = \min_{j \neq i, \hat{\mathtt{P}}_{j} \in \mathrm{\Omega} \left( \hat{\mathtt{P}}_{i} \right)} \ \| \mathcal{F}_{\mathtt{P}_{i}} - \hat{\mathcal{F}}_{\hat{\mathtt{P}}_{j}} \|_2,
\end{equation}
where $ \mathrm{\Omega} \left( \hat{\mathtt{P}}_{i} \right)$ contains the local neighbors of point $\hat{\mathtt{P}}_{i}$. Thirdly, the feature maps $\mathcal{F}$ are usually too coarse for keypoint computation, as the descriptor branch restores full resolution with a bi-linear up-sampling layer.
To this end, we actually use the feature maps $\leftidx{^{\frac{1}{4}}}{\mathcal{F}}$ and $\leftidx{^{1}}{\mathcal{F}}$ to compute a coarse-scale and a fine-scale ratio map, respectively, and fuse them to obtain the final result.
\begin{figure}[t]
\center
\includegraphics[width=\linewidth]{./fig/ratio_affine_adaption.pdf}
\caption{Representative examples of the average reliability map $\overline{\mathcal{R}}$.}
\label{fig:ratio:adaption}
\end{figure}
\subsection{Update Keypoint Detector}
\label{sec:step4}
After the keypoints have been computed via their descriptor reliability, they can be taken as ground truth to train the detector following the detector reliability property 2.1. We formulate keypoint detection as a per-pixel classification task that determines whether each pixel is a keypoint. Since keypoints are very sparse among all the points, we adopt the focal loss \cite{focal_loss_iccv17} as
\begin{equation}
\label{eq:det:det:init}
\mathcal{L}_{det} = \mathtt{FL}\left(\mathcal{P},\mathcal{Y}\right),
\end{equation}
where $\mathcal{Y}$ denotes the computed keypoint labels. Besides the detector reliability property 2.1, the detector should also satisfy the repeatability property 1.1. To this end, we further apply an affine transformation to the input image and obtain its transformed image $\hat{\mathcal{I}}$ and detection output $\hat{\mathcal{P}}$. As the detector should also correctly detect the keypoints in image $\hat{\mathcal{I}}$, the detection loss \eqref{eq:det:det:init} is modified as
\begin{equation}
\mathcal{L}_{det} = \dfrac{1}{2} \left( \mathtt{FL}\left(\mathcal{P},\mathcal{Y}\right) + \mathtt{FL}\left(\hat{\mathcal{P}},\hat{\mathcal{Y}}\right) \right),
\end{equation}
where $\hat{\mathcal{Y}} = \mathcal{H}\left( \mathcal{Y} \right)$. To further enhance the repeatability property 1.1, we minimize the difference between the detection probabilities of corresponding keypoints via the loss
\begin{equation}
\label{eq:det:rep}
\mathcal{L}_{rep} = \dfrac{1}{2} \sum_{i} \left( \mathtt{KLD}\left(\mathcal{P}_{\mathtt{Q}_i} \| \hat{\mathcal{P}}_{\hat{\mathtt{Q}}_i} \right) + \mathtt{KLD}\left(\hat{\mathcal{P}}_{\hat{\mathtt{Q}}_i} \| \mathcal{P}_{\mathtt{Q}_i} \right) \right),
\end{equation}
where $\mathtt{KLD}\left(\cdot \| \cdot\right)$ denotes the Kullback--Leibler divergence. To keep the description results unchanged, we also add a regularization term
\begin{equation}
\mathcal{L}_{des}' = \dfrac{1}{2} \left( \mathtt{MSE} \left(\mathcal{F}, \mathcal{F}' \right) + \mathtt{MSE} \left(\hat{\mathcal{F}}, \hat{\mathcal{F}}' \right) \right),
\end{equation}
where $ \mathcal{F}', \hat{\mathcal{F}}'$ are computed by the network before this detector training step. The final loss to update the detector is
\begin{equation}
\mathcal{L}_{2} = \mathcal{L}_{det} + \beta \mathcal{L}_{rep} + \lambda \mathcal{L}_{des}',
\end{equation}
where we empirically set $\beta = 1$ and $\lambda = 10^{-3}$ in our experiments.
\section{Experiments and Comparisons}
\label{sec:exp}
In this section, we first present the training details of our local feature model, and then compare it with 11 popular methods on homography estimation, relative pose estimation (stereo), and structure-from-motion tasks. Finally, we conduct an ablation study to examine the effectiveness of the key training strategies.
\subsection{Experimental Details and Comparison Methods}
Our local feature model is trained on the Microsoft COCO validation dataset \cite{cocodataset_eccv14}, which consists of $5,000$ real-world images.
We repeat the self-evolving iteration $5$ times to prevent under-fitting or over-fitting. In each iteration, we train the detector and the descriptor in turn for $20$ epochs each, with an initial learning rate of $0.001$. The learning rate is multiplied by $0.1$ once the average loss has not decreased for $2$ epochs. The whole training process takes about 45 hours on a GPU server with two NVIDIA-Tesla-P100 GPUs. To test the inference speed, we deploy our model on a desktop machine with one NVIDIA-GTX-1080Ti GPU to process 10K images at a resolution of $480 \times 640$; on average, our model processes 301 images per second. Our algorithm is implemented in the PyTorch framework \cite{PyTorchNIPS2017}. For affine adaption, we uniformly sample the in-plane rotation, shear, translation, and scale parameters from $\left[-40\degree, +40\degree \right], \left[-40\degree, +40\degree \right], \left[-0.04, +0.04 \right], \left[0.7, 1.4 \right]$, respectively. For color jitter, we uniformly sample the brightness, contrast, saturation, and hue parameters from $\left[0.6, 1.4 \right], \left[0.6, 1.4 \right], \left[0.6, 1.4 \right], \left[-0.2, 0.2 \right]$, respectively. For comparison methods, we select 6 hand-crafted methods, \textit{i.e.}, ORB~\cite{orb_iccv11}, AKAZE \cite{akaze_bmvc13}, BRISK~\cite{brisk_iccv11}, SURF~\cite{surf_eccv06}, KAZE~\cite{kaze_eccv12}, and SIFT~\cite{SIFT_2004_ijcv}, which are implemented directly using OpenCV. We also select 5 recently proposed DNN-based methods, \textit{i.e.}, D2-Net \cite{d2net_cvpr19}, DELF \cite{delf_iccv17}, LF-Net \cite{lfnet_nips18}, SuperPoint \cite{superpoint_cvpr18}, and R2D2 \cite{r2d2_nips19}. We run these methods using the code and models released by the authors. All of these methods can perform keypoint detection and description. Standalone detector or descriptor algorithms are not included in the comparison, since their possible combinations are numerous and a fair comparison with the methods mentioned above would be difficult. Before comparing the performance, we first review the training data (fewer constraints are better), model size (smaller is better), and descriptor dimension (lower is better) of each DNN-based method in Tab.~\ref{tab:method:detail}. On all of these aspects, our method is superior or comparable to the other methods.
\begin{table}[h]
\small
\begin{center}
\caption{The training data (fewer constraints are better), model size (smaller is better), and descriptor dimension (lower is better) of each DNN-based method. On all of these aspects, our method is superior or comparable to the other methods. }
\label{tab:method:detail}
\begin{tabular}{@{\hspace{0mm}}l@{\hspace{1mm}}l@{\hspace{1mm}}c@{\hspace{1mm}}c@{\hspace{0mm}}}
\toprule
Method & Training Data & Model(MB) & Dim. Desc. \\
\midrule
D2-Net \cite{d2net_cvpr19} & SfM data & 30.5 & 512 float \\
DELF \cite{delf_iccv17} & landmarks data & 36.4 & 1024 float~~ \\
LF-Net \cite{lfnet_nips18} & SfM data & 31.7 & 256 float \\
SuperPoint \cite{superpoint_cvpr18} & rendered\&web imgs & ~~5.2 & 256 float \\
R2D2 \cite{r2d2_nips19} & web imgs, SfM data & ~~2.0 & 128 float \\
SEKD (ours) & web imgs & ~~2.7 & 128 float \\
\bottomrule
\end{tabular}
\end{center}
\end{table}
\subsection{Performance on Homography Estimation}
\label{sec:exp:homography}
Following many previous works, \textit{e.g.}, \cite{SIFT_2004_ijcv,superpoint_cvpr18}, we evaluate our method and compare it with previous methods on the homography estimation task.
As the benchmark dataset, HPatches \cite{hpatches_cvpr17} is adopted, since it is the largest and most popular dataset for this task. It includes 116 sequences of images, where each sequence consists of one reference image and five target images. The homography between the reference image and each target image has been carefully calibrated. There are 57 sequences of images changing only in illumination, and 59 sequences of images changing only in viewpoint. We follow most of the experimental setups and use the homography accuracy metric of \cite{superpoint_cvpr18}. To estimate the homography, we use our model and the 11 comparison methods to extract the top-500 most confident keypoints from each input image. Keypoint correspondences are constructed via nearest-neighbor matching of the descriptors. A cross-check step is further applied to eliminate unstable matches. Then the homography is estimated using the RANSAC algorithm with default parameters by directly calling the $\mathtt{findHomography}\left(\right)$ function in OpenCV.
\begin{figure*}[t]
\center
\includegraphics[width=13cm]{./fig/hpatches_homography.pdf}
\caption{The homography accuracy curves of our SEKD model and 11 comparison methods under reprojection error thresholds from 1 through 10 on the HPatches overall data, the Illumination subset, and the Viewpoint subset, respectively.}%
\label{fig:homography}
\end{figure*}
As shown in Fig.~\ref{fig:homography}, we plot the homography accuracy curve of each method under reprojection error thresholds from 1 through 10. The average homography accuracy (Avg.HA@1:10) is also calculated and presented in Tab.~\ref{tab:comparisons}. The results on the Illumination subset and the Viewpoint subset are also presented. The results show that our SEKD model achieves the best overall performance. On the Illumination subset, DELF \cite{delf_iccv17} achieves the best result; however, its performance on the Viewpoint subset is the worst due to its poor keypoint localization ability. On the Viewpoint subset, our SEKD model outperforms all comparison methods.
\begin{table*}[t]
\small
\begin{center}
\caption{The average homography accuracy (Avg.HA) of our SEKD model and 11 comparison methods on the HPatches dataset, and the mean average accuracy (mAA) of relative pose estimation (stereo) and structure-from-motion (SfM) on the IMC dataset.}%
\label{tab:comparisons}
\begin{tabular}{@{\hspace{1mm}}l@{\hspace{8mm}}c@{\hspace{2mm}}c@{\hspace{2mm}}c@{\hspace{8mm}}c@{\hspace{2mm}}c@{\hspace{2mm}}c@{\hspace{1mm}}}
\toprule
\multirow{2}{*}{Method} & \multicolumn{3}{l}{Avg.HA@1:10 on HPatches} & \multicolumn{3}{c}{mAA on IMC} \\
& Mean & ILL. & VIEW.
& Mean & Stereo & SfM \\
\midrule
ORB \cite{orb_iccv11} & 48.96\% & 60.28\% & 38.03\% & 0.064 & 0.032 & 0.097 \\
AKAZE \cite{akaze_bmvc13} & 59.22\% & 70.63\% & 48.20\% & 0.190 & 0.079 & 0.302 \\
BRISK \cite{brisk_iccv11} & 61.15\% & 71.08\% & 51.55\% & 0.111 & 0.040 & 0.183 \\
SURF \cite{surf_eccv06} & 66.77\% & 78.94\% & 55.01\% & 0.238 & 0.149 & 0.328 \\
KAZE \cite{kaze_eccv12} & 68.10\% & 81.82\% & 54.84\% & 0.270 & 0.169 & 0.371 \\
SIFT \cite{SIFT_2004_ijcv} & 74.13\% & 84.28\% & \underline{64.33\%} & 0.342 & \underline{0.258} & 0.427 \\
D2-Net \cite{d2net_cvpr19} & 30.96\% & 47.12\% & 15.35\% & 0.025 & 0.025 & 0.025\\
DELF \cite{delf_iccv17}\tablefootnote{On the IMC dataset, we reduce the dimension of the DELF descriptor from 1024 to 512 using PCA, as the benchmark code refuses to take longer descriptors as input.} & 50.84\% & \textbf{98.52\%} & ~4.77\% & 0.048 & 0.043 & 0.053 \\
LF-Net \cite{lfnet_nips18} & 70.31\% & 84.49\% & 56.61\% & 0.176 & 0.137 & 0.216 \\
SuperPoint \cite{superpoint_cvpr18} & \underline{77.65\%} & 93.15\% & 62.67\% & \underline{0.395} & 0.231 & \textbf{0.559} \\
R2D2 \cite{r2d2_nips19}\tablefootnote{R2D2 adopts an image pyramid as input for better performance. For a fair comparison, we only report results taking the original image as input. With an image pyramid as input, the mean results of R2D2 and our method become 72.81\%, 0.442 and 79.74\%, 0.496 on HPatches and IMC, respectively. This has no influence on the conclusions.} & 72.15\% & 93.75\% & 51.28\% & 0.338 & 0.221 & 0.455 \\
SEKD (ours) & \textbf{79.98\%} & \underline{95.29\%} & \textbf{65.18\%} & \textbf{0.430} & \textbf{0.307} & \underline{0.553} \\
\bottomrule
\end{tabular}
\end{center}
\end{table*}
\subsection{Performance on Stereo and SfM}
The HPatches dataset is planar, and the relation between a pair of images is a homography. However, images from unconstrained real environments usually do not satisfy this constraint. To this end, we resort to the Image Matching Challenge (IMC) dataset \cite{imc_2020}, which consists of images from 26 scenes, where each image is annotated with a ground-truth 6-DoF pose. For each scene, IMC collected sufficient images to reconstruct the scene and estimated the pose of each image using an SfM algorithm. The estimated poses are taken as pseudo ground truth. Then only a subset of the images is selected for evaluation via the relative pose estimation and structure-from-motion tasks. By adjusting the error thresholds from 1 to 10 degrees, IMC calculates the mean Average Accuracy (mAA) as the metric for comparing each method. Please see the website \cite{imc_2020} for more details about this dataset. We adopt the validation set, since both its images and ground truth have been released at the moment. It consists of three scenes, \textit{i.e.}, sacre coeur, st peters square, and reichstag. We extract up to 2K keypoints from each image using each comparison method. The keypoint correspondences between each pair of images are then constructed via the same matching algorithm, which in our experiments is the ratio-test for float descriptors and nearest-neighbor matching for binary descriptors. The mAA metrics are then computed by evaluating the relative pose estimation and structure-from-motion results. For a fair comparison, all processes other than keypoint extraction are implemented using the benchmark code released by IMC \cite{imc_2020} with the same experimental setups and parameters.
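Both this section and Sec.~\ref{sec:exp:homography} rely on descriptor matching followed by geometric verification. As a concrete reference, a minimal sketch of the cross-checked matching and RANSAC homography estimation protocol of Sec.~\ref{sec:exp:homography} is given below; the keypoint arrays \verb|kp1, kp2| (of shape $N \times 2$) and the float descriptor arrays \verb|des1, des2| are assumed to be produced by any of the compared methods.
\begin{verbatim}
import cv2
import numpy as np

def estimate_homography(kp1, des1, kp2, des2):
    # mutual nearest-neighbor matching (the cross-check step)
    matcher = cv2.BFMatcher(cv2.NORM_L2, crossCheck=True)
    matches = matcher.match(des1.astype(np.float32),
                            des2.astype(np.float32))
    if len(matches) < 4:       # a homography needs at least 4 pairs
        return None
    src = np.float32([kp1[m.queryIdx] for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp2[m.trainIdx] for m in matches]).reshape(-1, 1, 2)
    # RANSAC with OpenCV's default parameters, as described above
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC)
    return H
\end{verbatim}
For the IMC experiments, only the matching step differs (ratio-test instead of cross-check for float descriptors), and the geometric evaluation is delegated to the IMC benchmark code.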
As demonstrated in Tab.~\ref{tab:comparisons}, our SEKD achieves the best overall performance on the IMC dataset and outperforms the second-place method, \textit{i.e.}, SuperPoint \cite{superpoint_cvpr18}, by a large margin of 0.035. Specifically, on the relative pose estimation task, our method outperforms the second place by a large margin of 0.049. On the structure-from-motion task, SuperPoint \cite{superpoint_cvpr18} slightly outperforms our method by 0.006; however, it achieves an unsatisfactory result on the relative pose estimation task, which is 0.076 lower than our method. This experiment indicates that, although our SEKD model is trained only on web images with synthetic affine transformations, it generalizes fairly well to 3D datasets and problems.
\subsection{Effectiveness of Each Training Strategy}
To examine the effectiveness of each key training strategy in our framework, we further conduct an ablation experiment on the homography estimation task with the HPatches dataset. As shown in Tab.~\ref{tab:ablation:study}, when we replace the descriptor repeatability \eqref{eq:desc:rep} or the descriptor distinctness \eqref{eq:desc:dis} with the constant value $1$, the Avg.HA@1:10 decreases dramatically, which verifies the rationality of our algorithm. When we remove the detector repeatability loss \eqref{eq:det:rep} or the affine adaption \eqref{eq:det:affine}\&\eqref{eq:ratio:affine}, the performance also decreases, which verifies that these two strategies improve the stability of our framework as well as the trained model.
\begin{table}[t]
\small
\begin{center}
\caption{Ablation experiment. We remove each critical training strategy to examine its influence on the homography estimation task via the Avg.HA@1:10 metric.}
\label{tab:ablation:study}
\begin{tabular}{@{\hspace{0mm}}l@{\hspace{1mm}}c@{\hspace{1mm}}c@{\hspace{1mm}}c@{\hspace{0mm}}}
\toprule
Model & Mean & ILL. & VIEW. \\
\midrule
w/o descriptor repeatability \eqref{eq:desc:rep} & 66.58\% & 81.12\% & 52.54\% \\
w/o descriptor distinctness \eqref{eq:desc:dis} & 78.03\% & 93.68\% & 62.91\% \\
w/o detector repeatability \eqref{eq:det:rep} & 78.03\% & 93.92\% & 62.67\% \\
w/o affine adaption \eqref{eq:det:affine}\&\eqref{eq:ratio:affine} & 79.05\% & 94.24\% & 64.37\% \\
full method & \textbf{79.98\%} & \textbf{95.29\%} & \textbf{65.18\%} \\
\bottomrule
\end{tabular}
\end{center}
\end{table}
\section{Discussion and Conclusion}
In this paper, we analyze the inherent and interactive properties of local feature detectors and descriptors. Guided by these properties, a self-evolving framework is elaborately designed to update the detector and descriptor iteratively using unlabeled images. Extensive experiments verify the effectiveness of our method on both planar and 3D datasets, even though our model is trained only on planar data. Moreover, as our framework works well using only unlabeled data, it can in principle also be adopted to discover novel local features from other types of data, \textit{e.g.}, medical images, infrared images, and remote sensing images. We leave these as future work.
{\small \bibliographystyle{ieee_fullname}
\section{Introduction}
\begin{figure}[!t]
\centering
\includegraphics[width=1.0\columnwidth]{fig1}
\caption{Comparison between the incoherent (left) and coherent (right) visual-semantic embedding spaces. Existing methods (left) pull the totally-relevant sentence (a) close to the query image, while pushing away all other sentences (b, c, and d) equally. Therefore, the relative proximities of (b, c, and d) are not necessarily consistent with their relevance degrees to the query (solid black dot). On the contrary, our approach (right) explicitly preserves the proper relevance order in the retrieval results.}
\label{fig_first}
\end{figure}
Visual-semantic embedding aims to map images and their descriptive sentences into a common space, so that we can retrieve sentences given query images or vice versa, namely cross-modal retrieval~\cite{ji2017cross}. Recently, advances in deep learning have brought significant progress to visual-semantic embedding~\cite{Kiros1,Karpathy1,Karpathy2,VSEPP}. Generally, images are represented by Convolutional Neural Networks (CNN), and sentences are represented by Recurrent Neural Networks (RNN). A triplet ranking loss is subsequently optimized to make the corresponding representations as close as possible in the embedding space~\cite{schroff2015facenet,sohn2016improved}. For visual-semantic embedding, previous methods~\cite{hadsell2006dimensionality,schroff2015facenet} tend to treat the relevance between queries and candidates in a bipolar way: for a query image, only the corresponding ground-truth sentence is regarded as \textbf{relevant}, and all other sentences are \emph{equally} regarded as \textbf{irrelevant}. Therefore, with the triplet ranking loss, only the relevant sentence is pulled close to the query image, while all the irrelevant sentences are pushed away \emph{equally}, \emph{i.e.}, pushed from the query by an equal margin. However, among those so-called \textbf{irrelevant} sentences, some are more relevant to the query than others and thus should be treated accordingly. Similarly, it is arguably a disadvantage of recent retrieval evaluation metrics that they disregard the ordering/ranking of retrieved ``irrelevant'' results. For example, the most popular metric, Recall@K (\emph{i.e.}, R@K)~\cite{Kiros1,Karpathy1,VSEPP}, is purely based on the ranking position of the ground-truth candidates (denoted as \emph{totally-relevant} candidates in this paper), while \emph{neglecting} the ranking order of all other candidates. However, the user experience of a practical cross-modal retrieval system can be heavily impacted by the ranking order of all top-$N$ candidates, including the ``irrelevant'' ones, as it is often challenging to retrieve enough totally-relevant candidates in the top-$N$ results (known as the long-tail query challenge~\cite{downey2007heads}). Given a query from the user, when an exact matching candidate does not exist in the database, a model trained with only bipolar supervision information will likely fail to retrieve the somewhat-relevant candidates and produce a badly ordered ranking result. As demonstrated in Fig.~\ref{fig_first}, given a query image (solid black dot), the ground-truth sentence (a) is the totally-relevant one, which should occupy the top of the retrieved list. Beyond that, sentence (b) is notably more relevant than (c) or (d), so ideally (b) should be ranked before (c), and (d) should be ranked at the bottom.
Therefore, it is beneficial to formulate the semantic \textbf{relevance degree} as a continuous variable rather than a binary one (\emph{i.e.}, relevant or irrelevant). The relevance degree should then be incorporated into embedding space learning, so that candidates with higher relevance degrees are closer to the query than those with lower degrees. In this paper, we first propose to measure the relevance degree between images and sentences, based on which we design the \textbf{ladder loss} to learn a \emph{coherent} embedding space. Here, ``coherent'' means that the similarities between queries and candidates conform to their relevance degrees. Specifically, in the conventional triplet loss~\cite{VSEPP}, the similarity between the query image $i_q$ and its totally-relevant sentence $t_q$ is encouraged to be greater than the similarity between $i_q$ and any other sentence $t_p$. Likewise, with the ladder loss formulation, we consider the relevance degrees of all sentences and extend the inequality $s(i_q, t_q)>s(i_q, t_p)$ to an inequality chain, \emph{i.e.}, $s(i_q, t_q)>s(i_q,t_{p_1})>s(i_q,t_{p_2})>\dots>s(i_q,t_{p_L})$, where $t_{p_l}$ is more relevant to $i_q$ than $t_{p_{l+1}}$, and $s(\cdot,\cdot)$ denotes cosine similarity. Using the inequality chain, we design the ladder loss so that sentences with lower relevance degrees are pushed away by larger margins than those with higher relevance degrees. As a result, a coherent embedding space is learnt, in which both the totally-relevant and the somewhat-relevant sentences can be properly ranked. In order to better evaluate the quality of retrieval results, we propose a new \textbf{Coherent Score (CS)} metric, which is designed to measure the alignment between the real ranking order and the expected ranking order. The expected ranking order is decided according to the relevance degrees, so that the CS can properly reflect the user experience of cross-modal retrieval results. In brief, our contributions are:
\begin{enumerate}
\item We propose to formulate the relevance degree as a continuous rather than a binary variable, which leads to a coherent embedding space, where both the totally-relevant and the somewhat-relevant candidates can be retrieved and ranked in a proper order.
\item To learn a coherent embedding space, a ladder loss is proposed by extending the inequality in the triplet loss to an inequality chain, so that candidates with different relevance degrees are treated differently.
\item A new metric, the Coherent Score (CS), is proposed to evaluate the ranking results, which better reflects the user experience in a cross-modal retrieval system.
\end{enumerate}
\section{Related Work}
\textbf{Visual-semantic embedding}, as a kind of multi-modal joint embedding, enables a wide range of tasks in image and language understanding, such as image-caption retrieval~\cite{Karpathy2,Kiros1,VSEPP}, image captioning, and visual question-answering~\cite{Malinowski_2015_ICCV}. Generally, the methods for visual-semantic embedding can be divided into two categories. The first category is based on Canonical Correlation Analysis (CCA) \cite{hardoon2004canonical,gong2014multi,gong2014improving,klein2014fisher}, which finds linear projections that maximize the correlation between the projected vectors from the two modalities. Extensions of CCA to deep learning frameworks have also been proposed \cite{andrew2013deep,yan2015deep}.
The second category involves metric learning-based embedding space learning~\cite{Frome,DeepSP,VSEPP}. DeViSE~\cite{Frome,Socher2} learns linear transformations of visual and textual features into the common space. After that, Deep Structure-Preserving (DeepSP)~\cite{DeepSP} was proposed for image-text embedding, which combines cross-view ranking constraints with within-view neighborhood structure preservation. In \cite{Niu2017}, Niu {\em et al.} propose to learn a hierarchical multimodal embedding space where not only full sentences and images but also phrases and image regions are mapped into the space. Recently, Faghri {\em et al.} \cite{VSEPP} incorporate hard negatives in the ranking loss function, which yields significant gains in retrieval performance. Compared to CCA-based methods, metric learning-based methods scale better to large datasets with stochastic optimization during training. \textbf{Metric learning} has many other applications, such as face recognition~\cite{schroff2015facenet} and fine-grained recognition~\cite{oh2016deep,wu2017sampling,yuan2017hard}. The design of the loss function in metric learning can be a subtle problem. For example, the contrastive loss~\cite{hadsell2006dimensionality} pulls all positives close, while all negatives are separated by a fixed distance. However, it can be severely restrictive to enforce such a fixed distance for all negatives. This motivated the triplet loss~\cite{schroff2015facenet}, which only requires negatives to be farther away than any positives on a per-example basis, \emph{i.e.}, a less restrictive relative distance constraint. After that, many variants of the triplet loss have been proposed. For example, PDDM~\cite{huang2016local} and Histogram Loss~\cite{ustinova2016learning} use quadruplets. Beyond that, the n-pair loss~\cite{sohn2016improved} and Lifted Structure~\cite{oh2016deep} define constraints on all images in a batch. However, all the aforementioned methods formulate relevance as a binary variable. Thus, our ladder loss can be used to boost those methods.
\section{Our Approach}
Given a set of image-sentence pairs $\mathcal{D}=\{(i_n,t_n)_{n=1}^N\}$, visual-semantic embedding aims to map both the images $\{(i_n)_{n=1}^N\}$ and the sentences $\{(t_n)_{n=1}^N\}$ into a common space. In previous methods, for each image $i_q$, only the corresponding sentence $t_q$ is regarded as relevant, and the others $\{t_p, (p\in \mathcal{N}^{-q})\}$ are all regarded as irrelevant, where $\mathcal{N}^{-q}=\{n \,|\, 1\leq n \leq N \text{ and } n\neq q\}$. Thus, only the inequality $s(i_q,t_q)>s(i_q,t_p), (p\in \mathcal{N}^{-q})$ is enforced in previous methods. In contrast, our approach measures the semantic relevance degree between $i_q$ and each sentence in $\{t_p, (p\in \mathcal{N}^{-q})\}$. Intuitively, the corresponding sentence $t_q$ has the highest relevance degree, while the other sentences have varying degrees. Thus, in our coherent embedding space, the similarity of an image-sentence pair with a higher relevance degree is desired to be greater than that of a pair with a lower degree. To this end, we first define a continuous variable to measure the semantic relevance degree between images and sentences (Sec.~\ref{SRD}). Subsequently, to learn a coherent embedding space, we design a novel ladder loss to push different candidates away by distinct margins according to their relevance degrees (Sec.~\ref{LL}).
Finally, we propose the Coherent Score metric to properly measure whether the ranking order is aligned with the relevance degrees (Sec.~\ref{CS}). Our approach relies only on a customized loss function and places no restrictions on the image/sentence representations, so it can be flexibly incorporated into any neural network architecture.
\subsection{Relevance Degree}
\label{SRD}
In our approach, we need to measure the semantic relevance degree of image-sentence pairs. The ideal ground truth for an image-sentence pair is human annotation, but it is infeasible to annotate such a multi-modal pairwise relevance dataset due to the combinatorial explosion in the number of possible pairs. On the other hand, single-modal relevance measurement (\emph{i.e.}, between sentences) is often much easier than cross-modal measurement (\emph{i.e.}, between sentences and images). For example, many recently proposed Natural Language Processing (NLP) models~\cite{devlin2018bert,ELMo,MTDNN} have achieved very impressive results~\cite{glue} on various NLP tasks. Specifically, on the sentence similarity task, BERT~\cite{devlin2018bert} has nearly reached human performance. Compared to single-modal metric learning in the image modality, natural language similarity measurement is more mature. Hence, we cast the image-sentence relevance problem as a sentence-sentence relevance problem. Intuitively, for an image $i_q$, the relevance degree of its corresponding sentence $t_q$ is supposed to be the highest, and $t_q$ is regarded as a reference when measuring the relevance degrees between $i_q$ and other sentences. In other words, measuring the relevance degree between the image $i_q$ and a sentence $t_p,~(p\in \mathcal{N})$ is cast as measuring the relevance degree (\emph{i.e.}, similarity) between the two sentences $t_q$ and $t_p, ~(p\in \mathcal{N})$. To this end, we employ Bidirectional Encoder Representations from Transformers (BERT)~\cite{devlin2018bert}. Specifically, the BERT model we use is fine-tuned on the Semantic Textual Similarity Benchmark (STS-B) dataset \cite{2017STS,devlin2018bert}. The Pearson correlation coefficient of our fine-tuned BERT on the STS-B validation set is $0.88$, which indicates good alignment between the predictions and human perception. In short, the relevance degree between an image $i_q$ and a sentence $t_p$ is calculated as the similarity score between $t_q$ and $t_p$ with our fine-tuned BERT model:
\begin{equation}
R(i_q,t_p) = R(t_q,t_p)= \text{BERT}(t_q, t_p).\label{eq:bertrd}
\end{equation}
\subsection{Ladder Loss Function}
\label{LL}
In this section, the conventional triplet loss is briefly reviewed, followed by our proposed ladder loss.
\subsubsection{Triplet Loss}
Let $v_q$ be the visual representation of a query image $i_q$, and let $h_p$ denote the representation of the sentence $t_p$. In the triplet loss formulation, for the query image $i_q$, only its corresponding sentence $t_q$ is regarded as the positive (\emph{i.e.}, relevant) sample, while all other sentences $\{t_p, (p\in \mathcal{N}^{-q})\}$ are deemed negative (\emph{i.e.}, irrelevant).
Therefore, in the embedding space, the similarity between $v_q$ and $h_q$ is encouraged to be greater than the similarity between $v_q$ and $h_p, (p\in \mathcal{N}^{-q})$ by a margin $\alpha$,
\begin{equation}
s(v_q,h_q) - s(v_q,h_p) > \alpha, (p\in \mathcal{N}^{-q}) ,
\label{ieq_t}
\end{equation}
which can be transformed into the triplet loss function,
\begin{equation}
L_{tri}(q) = \sum_{p\in \mathcal{N}^{-q} } [\alpha- s(v_q,h_q) + s(v_q,h_p)]_+ ,
\label{loss_t}
\end{equation}
where $[x]_+$ indicates $\max\{0, x\}$. Considering the dual roles of the query and the candidates (either modality can serve as the query), the full triplet loss is
\begin{equation}
\begin{aligned}
\mathcal{L}_{tri}(q) = & \sum_{p\in N^{-q} } [\alpha- s(v_q,h_q) + s(v_q,h_p)]_+ \\
+ & \sum_{p\in N^{-q} } [\alpha- s(h_q,v_q) + s(h_q,v_p)]_+ .
\label{tri_full}
\end{aligned}
\end{equation}
\subsubsection{Ladder Loss}
\begin{figure*}[t!]
\centering
\includegraphics[width=0.95\linewidth]{fig4.png}
\caption{Comparison of the sentence-to-image top-$30$ retrieval results between VSE++ (baseline, $1$st row) and CVSE++ (ours, $2$nd row). For each query sentence, the ground-truth image is shown on the left; the totally-relevant and totally-irrelevant retrieval results are marked by blue and red overlines/underlines, respectively. Although both methods retrieve the totally-relevant images at identical ranking positions, the baseline VSE++ method includes more totally-irrelevant images in the top-$30$ results, while our proposed CVSE++ method mitigates this problem. }
\label{fig_vis2}
\end{figure*}
We first calculate the relevance degrees between the image $i_q$ and each sentence $t_p, (p\in \mathcal{N}^{-q})$. After that, these relevance degree values are divided into $L$ levels with thresholds $\theta_l, (l=1,2,\dots,L-1)$. As a result, the sentence index set $\mathcal{N}^{-q}$ is divided into $L$ subsets $\mathcal{N}^{-q}_1,\mathcal{N}^{-q}_2,\dots,\mathcal{N}^{-q}_L$, where sentences in $\mathcal{N}^{-q}_{l}$ are more relevant to the query than sentences in $\mathcal{N}^{-q}_{l+1}$. To learn a coherent embedding space, the more relevant sentences should be pulled closer to the query than the less relevant ones. To this end, we extend the single inequality Eq.~\eqref{ieq_t} to an inequality chain,
\begin{equation}
\begin{aligned}
s(v_q,h_q) - s(v_q,h_i) & > \alpha_1, (i\in \mathcal{N}^{-q}_1), \\
s(v_q,h_i) - s(v_q,h_j) & > \alpha_2, (i\in \mathcal{N}^{-q}_1, j\in \mathcal{N}^{-q}_2), \\
s(v_q,h_j) - s(v_q,h_k) & > \alpha_3, (j\in \mathcal{N}^{-q}_2, k\in \mathcal{N}^{-q}_3), \\
& \cdots,
\end{aligned}
\end{equation}
where $\alpha_1,\dots,\alpha_L$ are the margins between the different non-overlapping sentence subsets. In this way, sentences with distinct relevance degrees are pushed away by distinct margins. For example, sentences in $\mathcal{N}^{-q}_1$ are pushed away by the margin $\alpha_1$, while sentences in $\mathcal{N}^{-q}_2$ are pushed away by the margin $\alpha_1+\alpha_2$. Based on this inequality chain, we can define the ladder loss function.
For simplicity, we show the ladder loss with a three-subset partition (\emph{i.e.}, $L=3$) as an example,
\begin{eqnarray}
& L_{lad}(q) = \beta_1 L_{lad}^1(q) + \beta_2 L_{lad}^2(q) + \beta_3 L_{lad}^3(q), \label{loss_tradeoff} \\
& L_{lad}^1(q) = \sum_{i\in \mathcal{N}^{-q}_{1:L}} [\alpha_1- s(v_q,h_q) + s(v_q,h_i)]_+ \nonumber, \\
& L_{lad}^2(q) = \sum_{i\in \mathcal{N}^{-q}_1, j\in \mathcal{N}^{-q}_{2:L}} [\alpha_2- s(v_q,h_i) + s(v_q,h_j)]_+ , \label{loss_lad_2} \\
& L_{lad}^3(q) = \sum_{j\in \mathcal{N}^{-q}_2, k\in \mathcal{N}^{-q}_{3:L}} [\alpha_3- s(v_q,h_j) + s(v_q,h_k)]_+ \nonumber ,
\end{eqnarray}
where $\beta_1$, $\beta_2$ and $\beta_3$ are the weights of $L_{lad}^1(q)$, $L_{lad}^2(q)$ and $L_{lad}^3(q)$, respectively, and $\mathcal{N}^{-q}_{l:L}$ indicates the union from $\mathcal{N}^{-q}_l$ to $\mathcal{N}^{-q}_L$. As can be expected, the $L_{lad}^1(q)$ term alone is identical to the original triplet loss, {\em i.e.}, the ladder loss degenerates to the triplet loss if $\beta_2=\beta_3=0$. Note that the dual problem, with a sentence as the query and images as candidates, also exists. Similar to obtaining the full triplet loss Eq.~\eqref{tri_full}, we can easily write the full ladder loss $\mathcal{L}_{lad}(q)$, which is omitted here.
\subsubsection{Ladder Loss with Hard Contrastive Sampling}
For visual-semantic embedding, the hard negative sampling strategy~\cite{simo2015discriminative,wu2017sampling} has been validated to induce significant performance improvements, where selected hard samples (instead of all samples) are utilized for the loss computation. Inspired by~\cite{wu2017sampling,VSEPP}, we develop a similar strategy of selecting hard contrastive pairs for the ladder loss computation, which is termed \textbf{hard contrastive sampling (HC)}. Taking $L_{lad}^2(q)$ in Eq.~\eqref{loss_lad_2} as an example, instead of conducting the sum over the sets $i\in \mathcal{N}^{-q}_1$ and $j\in \mathcal{N}^{-q}_{2:L}$, we sample one or several pairs $(h_i,h_j)$ with $i\in \mathcal{N}^{-q}_1$ and $j\in \mathcal{N}^{-q}_{2:L}$. Our proposed HC sampling strategy chooses the $h_j$ closest to the query in $\mathcal{N}^{-q}_{2:L}$ and the $h_i$ farthest from the query in $\mathcal{N}^{-q}_1$ for the loss computation. Thus, the ladder loss part $L_{lad}^2(q)$ with hard contrastive sampling can be written as
\begin{equation}
\begin{aligned}
L_{lad-HC}^2(q) &= [\alpha_2- s(v_q,h_{i^*}) + s(v_q,h_{j^*})]_+ ,\\
j^* &= \argmax_{j\in \mathcal{N}^{-q}_{2:L}}{s(v_q,h_j)} ,\\
i^* &= \argmin_{i\in \mathcal{N}^{-q}_1}{s(v_q,h_i)} ,
\end{aligned}
\end{equation}
where $(i^*,j^*)$ is the index of the hardest contrastive pair $(h_{i^*},h_{j^*})$ and the margin $\alpha_2$ is the same as in Eq.~\eqref{loss_lad_2}. According to our empirical observation, this HC strategy not only reduces the complexity of the loss computation but also improves the overall performance.
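To make the computation concrete, a minimal PyTorch sketch of the ladder loss with hard contrastive sampling is given below for the two-ladder case ($L=2$) and the image-to-sentence direction only; the dual direction is symmetric. The per-query similarity vector and the level-$1$ membership mask (relevance degree above $\theta_1$) are assumed to be precomputed, and the variable names are illustrative rather than taken from any released implementation.
\begin{verbatim}
import torch

def ladder_loss_hc(sim, gt, level1, a1=0.2, a2=0.01, b1=1.0, b2=0.25):
    """sim: (N,) cosine similarities s(v_q, h_p) to all sentences;
    gt: index of the ground-truth sentence t_q;
    level1: (N,) bool mask of candidates in N_1^{-q}."""
    mask = torch.ones_like(sim, dtype=torch.bool)
    mask[gt] = False                     # exclude the totally-relevant t_q
    # ladder 1: hardest negative among all non-ground-truth candidates
    l1 = torch.clamp(a1 - sim[gt] + sim[mask].max(), min=0)
    # ladder 2: hard contrastive pair between N_1^{-q} and N_{2:L}^{-q}
    in_l1 = mask & level1                # somewhat-relevant candidates
    in_l2 = mask & ~level1               # less-relevant candidates
    if in_l1.any() and in_l2.any():
        i_star = sim[in_l1].min()        # farthest somewhat-relevant
        j_star = sim[in_l2].max()        # closest less-relevant
        l2 = torch.clamp(a2 - i_star + j_star, min=0)
    else:
        l2 = sim.new_zeros(())
    return b1 * l1 + b2 * l2
\end{verbatim}
The default margins and weights follow the values reported in Sec.~\ref{EXP} ($\alpha_1=0.2$, $\alpha_2=0.01$, $\beta_1=1$, $\beta_2=0.25$).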
\begin{table*}[!t]
\centering
\resizebox{\textwidth}{!}{%
\begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|c|c|}
\hline
\multicolumn{13}{|c|}{MS-COCO (1000 Test Samples)}\tabularnewline
\hline
\multirow{2}{*}{Model} & \multicolumn{6}{c|}{Image$\rightarrow$Sentence} & \multicolumn{6}{c|}{Sentence$\rightarrow$Image}\tabularnewline
\cline{2-13}
& CS@100 & CS@1000 & Mean R & R@1 & R@5 & R@10 & CS@100 & CS@1000 & Mean R & R@1 & R@5 & R@10\tabularnewline
\hline
Random & 0.018 & 0.009 & 929.9 & 0.0 & 0.3 & 0.5 & 0.044 & 0.005 & 501.0 & 0.1 & 0.5 & 0.9\tabularnewline
\hline
VSE++ (VGG19) & 0.235 & 0.057 & 5.7 & 56.7 & 83.9 & 92.0 & 0.237 & 0.057 & 9.1 & 42.6 & 76.5 & 86.8\tabularnewline
\hline
CVSE++ (VGG19) & 0.256 & 0.347 & 4.1 & 56.8 & 83.6 & 92.2 & 0.257 & 0.223 & 7.3 & 43.2 & 77.5 & 88.1\tabularnewline
\hline
VSE++ (VGG19,FT) & 0.253 & 0.047 & 2.9 & 62.5 & 88.2 & 95.2 & 0.246 & 0.042 & 6.5 & 49.9 & 82.8 & 91.2\tabularnewline
\hline
CVSE++ (VGG19,FT) & 0.256 & 0.419 & 2.8 & 63.2 & 89.9 & 95.0 & 0.251 & 0.287 & 5.3 & 50.5 & 83.6 & 92.8\tabularnewline
\hline
VSE++ (Res152) & 0.238 & 0.079 & 2.8 & 63.2 & 88.9 & 95.5 & 0.236 & 0.080 & 7.3 & 47.4 & 80.3 & 89.9\tabularnewline
\hline
CVSE++ (Res152) & 0.265 & 0.358 & 2.8 & 66.7 & 90.2 & 94.0 & 0.256 & 0.236 & 6.1 & 48.4 & 81.0 & 90.0\tabularnewline
\hline
VSE++ (Res152,FT) & 0.241 & 0.071 & 2.4 & 68.0 & 91.9 & 97.4 & 0.239 & 0.068 & 6.3 & 53.5 & 85.1 & 92.5\tabularnewline
\hline
CVSE++ (Res152,FT) & 0.265 & 0.446 & 2.4 & 69.1 & 92.2 & 96.1 & 0.255 & 0.275 & 4.7 & 55.6 & 86.7 & 93.8\tabularnewline
\hline
\hline
\multicolumn{13}{|c|}{MS-COCO (5000 Test Samples)}\tabularnewline
\hline
\multirow{2}{*}{Model} & \multicolumn{6}{c|}{Image$\rightarrow$Sentence} & \multicolumn{6}{c|}{Sentence$\rightarrow$Image}\tabularnewline
\cline{2-13}
& CS@500 & CS@5000 & Mean R & R@1 & R@5 & R@10 & CS@500 & CS@5000 & Mean R & R@1 & R@5 & R@10\tabularnewline
\hline
VSE++ (Res152) & 0.227 & 0.078 & 10.6 & 36.3 & 66.8 & 78.7 & 0.224 & 0.084 & 30.9 & 25.6 & 54.0 & 66.9\tabularnewline
\hline
CVSE++ (Res152) & 0.253 & 0.354 & 9.7 & 39.3 & 69.1 & 80.3 & 0.246 & 0.239 & 25.2 & 25.8 & 54.0 & 67.3\tabularnewline
\hline
VSE++ (Res152,FT) & 0.231 & 0.073 & 7.7 & 40.2 & 72.5 & 83.3 & 0.228 & 0.073 & 25.1 & 30.7 & 60.7 & 73.3 \tabularnewline
\hline
CVSE++ (Res152,FT) & 0.255 & 0.439 & 7.4 & 43.2 & 73.5 & 84.1 & 0.242 & 0.280 & 18.6 & 32.4 & 62.2 & 74.6\tabularnewline
\hline
\end{tabular}
}
\caption{Comparison between VSE++ and CVSE++ in terms of CS@K and R@K on MS-COCO.}
\label{tab_coco}
\end{table*}
\subsection{Coherent Score}
\label{CS}
In previous methods, the most popular metric for visual-semantic embedding is R@K, which only accounts for the ranking position of the ground-truth candidates (\emph{i.e.}, the totally-relevant candidates) while neglecting all others. Therefore, we propose a novel metric, the Coherent Score (CS), to properly measure the ranking order of all top-$N$ candidates (including the ground-truth and the other candidates). CS@K is defined to measure the alignment between the real ranking list $r_1,r_2,\dots,r_K$ and its expected ranking list $e_1,e_2,\dots,e_K$, where the expected ranking list is decided according to the relevance degrees.
We adopt Kendall's rank correlation coefficient $\tau,~(\tau\in[-1,1])$~\cite{kendall} as the criterion. Specifically, any pair of $(r_i,e_i)$ and $(r_j,e_j)$ with $i<j$ is defined to be concordant if both $r_i>r_j$ and $e_i>e_j$, or if both $r_i<r_j$ and $e_i<e_j$. Conversely, it is defined to be discordant if the ranks of the two elements disagree. Kendall's rank correlation $\tau$ is computed from the numbers of concordant and discordant pairs. When $\tau=1$, the alignment is perfect, \emph{i.e.}, the two ranking lists are identical. Thus, a high CS@K score indicates good quality of the learnt embedding space and the retrieval results in terms of coherence, hence good user experience, and a model that achieves a high CS@K score is expected to perform better in long-tail query challenges~\cite{downey2007heads}, where a perfect match to the query does not necessarily exist in the database.
\section{Experiments}
\label{EXP}
\begin{table*}[!t]
\centering
\resizebox{0.95\textwidth}{!}{%
\begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|c|c|}
\hline
\multirow{2}{*}{Model} & \multicolumn{6}{c|}{Image$\rightarrow$Sentence} & \multicolumn{6}{c|}{Sentence$\rightarrow$Image}\tabularnewline
\cline{2-13}
& CS@100 & CS@1000 & Mean R & R@1 & R@5 & R@10 & CS@100 & CS@1000 & Mean R & R@1 & R@5 & R@10\tabularnewline
\hline
\hline
Random & 0.02 & -0.005 & 988.3 & 0.0 & 0.3 & 0.4 & -0.033 & -0.003 & 503.0 & 0.2 & 0.6 & 1.1\tabularnewline
\hline
VSE++ (VGG19) & 0.116 & 0.139 & 18.2 & 40.7 & 68.4 & 78.0 & 0.115 & 0.124 & 26.9 & 28.7 & 58.6 & 69.8 \tabularnewline
\hline
CVSE++ (VGG19) & 0.129 & 0.255 & 16.4 & 42.8 & 69.2 & 78.9 & 0.127 & 0.144 & 26.4 & 29.0 & 59.2 & 71.1\tabularnewline
\hline
VSE++ (VGG19,FT) & 0.128 & 0.130 & 14.7 & 44.6 & 73.3 & 82.0 & 0.125 & 0.110 & 22.8 & 31.9 & 63.0 & 74.5\tabularnewline
\hline
CVSE++ (VGG19,FT) & 0.133 & 0.260 & 13.0 & 44.8 & 73.1 & 82.3 & 0.131 & 0.160 & 20.8 & 33.8 & 63.9 & 75.1\tabularnewline
\hline
VSE++ (Res152) & 0.126 & 0.127 & 10.2 & 49.3 & 78.9 & 86.4 & 0.115 & 0.112 & 20.0 & 35.9 & 65.9 & 75.6\tabularnewline
\hline
CVSE++ (Res152) & 0.133 & 0.247 & 9.3 & 50.2 & 78.8 & 87.3 & 0.120 & 0.147 & 20.0 & 37.1 & 66.9 & 76.4\tabularnewline
\hline
VSE++ (Res152,FT) & 0.130 & 0.122 & 7.8 & 54.1 & 81.0 & 88.7 & 0.122 & 0.114 & 16.2 & 39.8 & 70.0 & 79.0\tabularnewline
\hline
CVSE++ (Res152,FT) & 0.141 & 0.273 & 7.4 & 56.6 & 82.5 & 90.2 & 0.126 & 0.172 & 15.7 & 42.4 & 71.6 & 80.8\tabularnewline
\hline
\end{tabular}
}
\caption{Comparison between VSE++ and CVSE++ in terms of CS@K and R@K on Flickr30K.}
\label{tab_f30k}
\end{table*}
Following related works, the Flickr30K~\cite{Flickr30k} and MS-COCO~\cite{coco,coco2} datasets are used in our experiments. The two datasets contain $31,000$ and $123,000$ images, respectively, and each image in them is annotated with $5$ sentences using AMT. For Flickr30K, we use $1,000$ images for validation, $1,000$ for testing and the rest for training, which is consistent with \cite{VSEPP}. For MS-COCO, we also follow \cite{VSEPP} and use $5,000$ images for both validation and testing. Meanwhile, the remaining $30,504$ images in the original validation set are used for training ($113,287$ training images in total) in our experiments, following~\cite{VSEPP}. Our experimental settings follow those of VSE++~\cite{VSEPP}, which is the state-of-the-art approach for visual-semantic embedding.
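Throughout the experiments, the CS@K metric of Sec.~\ref{CS} is computed as in the following minimal sketch, where the relevance degrees of the top-$K$ retrieved candidates (listed in their retrieved order) are assumed to be precomputed with the mechanism of Sec.~\ref{SRD}; ties in the relevance degrees are broken arbitrarily here.
\begin{verbatim}
from scipy.stats import kendalltau
import numpy as np

def coherent_score(retrieved_rel, K):
    """retrieved_rel: relevance degrees of the candidates,
    listed in their retrieved order."""
    rel = np.asarray(retrieved_rel[:K])
    real_order = np.arange(K)                      # ranks as retrieved
    expected_order = np.argsort(np.argsort(-rel))  # ranks by relevance
    tau, _ = kendalltau(real_order, expected_order)
    return tau                                     # CS@K, in [-1, 1]
\end{verbatim}
The returned value equals $1$ when the retrieved list is already sorted by descending relevance degree, matching the perfect-alignment case described above.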
Note that, in terms of image-sentence cross-modal retrieval, SCAN~\cite{SCAN} achieves better performance, but it does not learn a joint embedding space for full sentences and full images, and it suffers from a combinatorial explosion in the number of sample pairs to be evaluated. VGG-19~\cite{VGG}- or ResNet-152~\cite{He2015resnet}-based image representations are used in our experiments (both pre-trained on ImageNet). Following common practice, we extract $4096$- or $2048$-dimensional feature vectors directly from the penultimate fully connected layer of these networks. We also adopt random cropping for data augmentation, where all images are first resized to $256\times 256$ and randomly cropped $10$ times at $224\times 224$ resolution. For the sentence representation, we use a Gated Recurrent Unit (GRU) similar to the one used in \cite{VSEPP}. The dimension of the GRU and the joint embedding space is set to $D=1024$. The dimension of the word embeddings used as input to the GRU is set to $300$. Additionally, the Adam solver is used for optimization, with the learning rate set to \verb|2e-4| for $15$ epochs and then decayed to \verb|2e-5| for another $15$ epochs. We use a mini-batch size of $128$ in all experiments in this paper. Our algorithm is implemented in PyTorch~\cite{paszke2017automatic}.
\subsection{Relevance Degree}
\label{exp_srd}
BERT inference is highly computationally expensive ({\em e.g.}, a single NVIDIA Titan Xp GPU can compute similarity scores for only approximately $65$ sentence pairs per second). Therefore, it is computationally infeasible to directly use Eq.~\eqref{eq:bertrd} in practice due to the combinatorial explosion in the number of sentence pairs. In this paper, we mitigate the problem by introducing a coarse-to-fine mechanism. For each sentence pair, we first employ the conventional CBoW~\cite{glue} method to coarsely measure the relevance degree. If the value is larger than a predefined threshold, Eq.~\eqref{eq:bertrd} is used to refine the relevance degree calculation. The CBoW method first calculates each sentence's representation by averaging the GloVe~\cite{glove} word vectors of all its tokens, and then computes the cosine similarity between the representations of each sentence pair. With this mechanism, the false-positive ``relevant'' pairs found by the CBoW method are suppressed by BERT, while the important truly relevant pairs are assigned more accurate relevance degrees. Thus, the speed of CBoW and the accuracy of BERT are properly combined. We empirically fix the predefined threshold at $0.8$ in our experiments, as the mechanism achieves $0.79$ in Pearson correlation on STS-B.
\subsection{Results on MS-COCO}
\label{exp_coco}
We compare VSE++ (re-implemented) and our Coherent Visual-Semantic Embedding (CVSE++) on the MS-COCO dataset, where VSE++ only focuses on the ranking position of the totally-relevant candidates while our approach cares about the ranking order of all top-$N$ candidates. The VSE++ method~\cite{VSEPP} is our baseline, since it is the state-of-the-art approach for learning visual-semantic embeddings. For a fair comparison, we use both Recall@K (denoted as ``R@K'') and CS@K as evaluation metrics, and we also fine-tune (denoted by ``FT'') the CNNs following the baseline. In our approach, the hard contrastive sampling strategy is used. Experiments without the hard negative or hard contrastive sampling strategies are omitted, because they perform much worse in terms of R@K, as reported in \cite{VSEPP}.
In our approach, we need to determine the ladder number $L$ in the loss function, which depends on how many top-ranked candidates (the value of $N$) we care about, termed the scope-of-interest in this paper. With a small scope-of-interest, \emph{e.g.}, top-$100$, only a few ladders are required, \emph{e.g.}, $L=2$; but with a larger scope-of-interest, \emph{e.g.}, top-$200$, we need more ladders, \emph{e.g.}, $L=3$, so that the low-level ladder, \emph{e.g.}, $L_{lad}^2(q)$ in Eq.~\eqref{loss_tradeoff}, is responsible for optimizing the ranking order of the very top candidates, \emph{e.g.}, top-$1$ $\sim$ top-$100$, while the high-level ladder, \emph{e.g.}, $L_{lad}^3(q)$ in Eq.~\eqref{loss_tradeoff}, is responsible for optimizing the ranking order of the subsequent candidates, \emph{e.g.}, top-$100$ $\sim$ top-$200$. A detailed discussion of the scope-of-interest and the choice of the ladder number $L$ is provided in the next section. In practice, we limit our illustrated results to $L=2$, both for computational savings and due to the limited scope-of-interest of most human users. With the ladder number $L$ fixed at $2$, the parameters can be empirically determined on the validation set, {\em e.g.}, the threshold $\theta_1$ for splitting $\mathcal{N}^{-q}_1$ and $\mathcal{N}^{-q}_2$ is fixed at $0.63$, the margins are $\alpha_{1}=0.2$ and $\alpha_2=0.01$, and the loss weights are $\beta_1=1$ and $\beta_2=0.25$. With our proposed CS@K metric, significantly larger $K$ values are chosen than those ({\em e.g.}, $1, 5, 10$) used in the classical R@K metric. For instance, we report CS@100 and CS@1000 with 1000 test samples. Such choices of $K$ allow more insight into both the local and global order-preserving effects in the embedding space. In addition, the conventional R@K metrics are also included to measure the ranking performance of the totally-relevant candidates.
\begin{table*}[ht!]
\centering
\resizebox{0.9\textwidth}{!}{%
\begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|c|c|}
\hline
\multirow{2}{*}{$\beta_2$} & \multicolumn{6}{c|}{Image$\rightarrow$Sentence} & \multicolumn{6}{c|}{Sentence$\rightarrow$Image}\tabularnewline
\cline{2-13}
& CS@100 & CS@1000 & Mean R & R@1 & R@5 & R@10 & CS@100 & CS@1000 & Mean R & R@1 & R@5 & R@10\tabularnewline
\hline
\hline
0.0 & 0.238 & 0.079 & 2.8 & 63.2 & 88.9 & 95.5 & 0.236 & 0.08 & 7.3 & 47.4 & 80.3 & 89.9\tabularnewline
\hline
0.25 & 0.265 & 0.358 & 2.8 & 66.7 & 90.2 & 94.0 & 0.256 & 0.236 & 6.1 & 48.4 & 81.0 & 90.0\tabularnewline
\hline
1.0 & 0.266 & 0.417 & 3.9 & 64.0 & 88.2 & 93.1 & 0.259 & 0.264 & 6.2 & 47.4 & 79.0 & 88.9 \tabularnewline
\hline
\end{tabular}
}
\caption{Performance of the proposed CVSE++(Res152) with respect to the parameter $\beta_2$ (on the MS-COCO dataset).}
\label{tab_beta}
\end{table*}
\begin{table*}[ht!]
\centering
\resizebox{1.0\textwidth}{!}{%
\begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|}
\hline
\multirow{2}{*}{L} & \multicolumn{7}{c|}{Image$\rightarrow$Sentence} & \multicolumn{7}{c|}{Sentence$\rightarrow$Image}\tabularnewline
\cline{2-15}
 & CS@100 & CS@200 & CS@1000 & Mean R & R@1 & R@5 & R@10 & CS@100 & CS@200 & CS@1000 & Mean R & R@1 & R@5 & R@10\tabularnewline
\hline
\hline
1 & 0.238 & 0.188 & 0.079 & 2.8 & 63.2 & 88.9 & 95.5 & 0.236 & 0.189 & 0.08 & 7.3 & 47.4 & 80.3 & 89.9\tabularnewline
\hline
2 & 0.265 & 0.252 & 0.358 & 2.8 & 66.7 & 90.2 & 94.0 & 0.256 & 0.253 & 0.236 & 6.1 & 48.4 & 81.0 & 90.0\tabularnewline
\hline
3 & 0.267 & 0.274 & 0.405 & 3.2 & 65.7 & 89.3 & 94.1 & 0.261 & 0.258 & 0.244 & 6.3 & 48.4 & 80.3 & 89.4\tabularnewline
\hline
\end{tabular}
}
\caption{Performance of the proposed CVSE++(Res152) with respect to the ladder number $L$ (on the MS-COCO dataset).}
\label{tab_numlad}
\end{table*}
The experimental results on the MS-COCO dataset are presented in Tab.~\ref{tab_coco}, where the proposed CVSE++ approaches evidently outperform their corresponding VSE++ counterparts in terms of CS@K, \emph{e.g.}, from VSE++(Res152): $0.238$ to CVSE++(Res152): $0.265$ in terms of CS@100 for image$\rightarrow$sentence retrieval with $1000$ MS-COCO test samples. Moreover, the performance improvements are more significant with the larger scope-of-interest at CS@1000, \emph{e.g.}, ``CVSE++ (Res152,FT)'' achieves an over $5$-fold increase over ``VSE++ (Res152,FT)'' (from $0.071$ to $0.446$) in image$\rightarrow$sentence retrieval. These results indicate that, with our proposed ladder loss, a coherent embedding space can be effectively learnt, which produces significantly better ranking results, especially in the global scope. At the same time, a less expected phenomenon can be observed from Tab.~\ref{tab_coco}: our proposed CVSE++ variants achieve comparable or marginally better performance than their VSE++ counterparts in terms of R@K, \emph{e.g.}, from VSE++(Res152): $63.2$ to CVSE++(Res152): $66.7$ in terms of R@1 for image$\rightarrow$sentence retrieval with $1000$ MS-COCO test samples. The overall change in R@K is small because R@K only accounts for the ranking positions of the ground-truth candidates, which CVSE++ is not specifically designed to improve. Based on these results, we speculate that the ladder loss is beneficial (or at least not harmful) to the inference of totally-relevant candidates. Nevertheless, there are still hyper-parameters ($\beta_1, \beta_2, \cdots, \beta_L$) controlling the balance between the totally-relevant and somewhat-relevant candidates, which are further analyzed in the next section. To provide a visual comparison between VSE++ and CVSE++, several sentences are randomly sampled from the validation set as queries, and their corresponding retrievals are illustrated in Fig.~\ref{fig_vis2} (sentence$\rightarrow$image). Evidently, our CVSE++ places more somewhat-relevant candidates and fewer totally-irrelevant candidates in the top-$N$ retrieval list, enhancing the user experience.
\subsection{Results on Flickr30K}
Our approach is also evaluated on the Flickr30K dataset and compared with the baseline VSE++ variants, as shown in Tab.~\ref{tab_f30k}.
The hyper-parameter settings are identical to those used for Tab.~\ref{tab_coco} with MS-COCO ($1000$ test samples). As expected, these experimental results demonstrate similar performance improvements in terms of both CS@K and R@K for our proposed CVSE++ variants.
\section{Parameter Sensitivity Analysis}
In this section, a parameter sensitivity analysis is carried out on two groups of hyper-parameters, {\em i.e.}, the balancing parameters $\beta_1, \beta_2, \cdots, \beta_L$ in Eq.~\eqref{loss_tradeoff} and the ladder number $L$.
\subsection{Balancing Totally Relevant and Others}
\label{exp_beta}
In Eq.~\eqref{loss_tradeoff}, the hyper-parameters $\beta_1, \beta_2, \cdots, \beta_L$ control the trade-off between optimizing the ranking positions of the totally-relevant candidates and those of the other candidates. With $\beta_2=\cdots=\beta_L=0$, the ladder loss degenerates to the triplet loss, and all emphasis is put on the totally-relevant candidates. Conversely, relatively larger $\beta_2, \cdots, \beta_L$ values put more emphasis on the somewhat-relevant candidates. With the other parameters fixed ($L$ fixed at $2$, $\beta_1$ fixed at $1$), a sensitivity analysis is carried out on $\beta_2$ only. From Tab.~\ref{tab_beta}, we can see that the CS@K metrics improve with larger $\beta_2$, but the R@K metrics degrade as $\beta_2$ approaches $1.0$. Based on the three $\beta_2$ settings in Tab.~\ref{tab_beta}, we speculate that the CS@K and R@K metrics do not necessarily peak at the same $\beta_2$ value. We also observe that with excessively large $\beta_2$ values, the R@K metrics drop dramatically. Generally, the ranking orders of the totally-relevant candidates catch the user's attention first, so they should be optimized with high priority. We therefore select $\beta_2=0.25$ in all our other experiments to strike a balance between R@K and CS@K performance.
\subsection{The Scope-of-interest for Ladder Loss}
\label{sec_discuss}
Our approach focuses on improving the ranking order of all top-$N$ retrieved results (instead of just the totally-relevant ones). Thus, there is an important parameter, \emph{i.e.}, the scope-of-interest $N$, or the size of the desired retrieval list. If the user of the retrieval system only cares about a few top-ranked results (\emph{e.g.}, top-$100$), two ladders (\emph{e.g.}, $L=2$) are practically sufficient; if a larger scope-of-interest (\emph{e.g.}, top-$200$) is required, more ladders are needed in the ladder loss. For example, with $L=3$, the low-level ladder $L_{lad}^2(q)$ is responsible for optimizing the ranking order of the very top candidates, \emph{e.g.}, top-$1$ $\sim$ top-$100$, while the high-level ladder $L_{lad}^3(q)$ is responsible for optimizing the ranking order of the subsequent candidates, \emph{e.g.}, top-$100$ $\sim$ top-$200$. Inevitably, a larger ladder number results in higher computational complexity, so a compromise between the scope-of-interest and the computational complexity has to be reached. For the sensitivity analysis of the ladder number $L = 1, 2, 3$, we evaluate our CVSE++ (Res152) approach by comparing the top-$100$, top-$200$ and top-$1000$ results, measured by CS@100, CS@200 and CS@1000, respectively. The other parameters $\theta_2$, $\alpha_3$ and $\beta_3$ are empirically fixed at $0.56$, $0.01$ and $0.125$, respectively. The experimental results are summarized in Tab.~\ref{tab_numlad}.
With a small scope-of-interest ($N=100$), we find that two ladders ($L=2$) are effective for optimizing the CS@100 metric, and a third ladder only brings marginal improvements. However, with a larger scope-of-interest, {\em e.g.}, top-$200$, CS@200 can be further improved by adding one more ladder, {\em i.e.}, $L=3$. Apart from that, a notable side effect can be observed with too many ladders (\emph{e.g.}, $L=5$): the R@K performance drops evidently. We speculate that with more ladders, the ladder loss is likely to be dominated by the high-level ladder terms, which hampers the optimization of the low-level ladder term. These results indicate that the choice of $L$ should be proportional to the scope-of-interest, \emph{i.e.}, more ladders for a larger scope-of-interest and vice versa.
\section{Conclusion}
In this paper, the relevance between queries and candidates is formulated as a continuous variable instead of a binary one, and a new ladder loss is proposed to push different candidates away by distinct margins. As a result, we can learn a coherent visual-semantic space in which both the totally-relevant and the somewhat-relevant candidates are retrieved and ranked in a proper order. In particular, our ladder loss improves the ranking quality of all top-$N$ results without degrading the ranking positions of the ground-truth candidates. Besides, the scope-of-interest can be flexibly adjusted via the number of ladders. Extensive experiments on multiple datasets validate the efficacy of our proposed method, and our approach achieves state-of-the-art performance in terms of both CS@K and R@K. For future work, we plan to extend the ladder loss-based embedding to other metric learning applications.
\subsection{Acknowledgements}
This work was supported partly by National Key R\&D Program of China Grant 2018AAA0101400, NSFC Grants 61629301, 61773312, 61976171, and 61672402, China Postdoctoral Science Foundation Grant 2019M653642, and Young Elite Scientists Sponsorship Program by CAST Grant 2018QNRC001.
\bibliographystyle{aaai}
\section{Introduction}
Introducing appropriate inductive biases on deep learning models is a well--known approach to improving sample efficiency and generalization performance \cite{battaglia2018relational}. Graph neural networks (GNNs) represent a general computational framework for imposing such inductive biases when the problem structure can be encoded as a graph, or in settings where prior knowledge about the entities composing a target system can itself be described as a graph \cite{li2018combinatorial,gasse2019exact,sanchez2018graph}. GNNs have shown remarkable results in various application areas such as node classification \cite{zhuang2018dual,gallicchio2019fast}, graph classification \cite{yan2018spatial} and forecasting \cite{li2017diffusion,wu2019graph}, as well as generative tasks \cite{li2018learning,you2018graphrnn}.
\begin{wrapfigure}{r}{0.5\textwidth}
\centering
\includegraphics[scale=0.75]{catchy2.pdf}
\caption{A \textit{graph neural ordinary differential equation} (GDE) models vector fields defined on graphs, both when the structure is fixed and when it changes in time, by utilizing a continuum of \textit{graph neural network} (GNN) layers.}
\label{fig:vek}
\end{wrapfigure}
A different but equally important class of inductive biases is concerned with the type of temporal behavior of the systems from which the data is collected, i.e., discrete or continuous dynamics. Although deep learning has traditionally been a field dominated by discrete models, recent advances propose a treatment of neural networks equipped with a continuum of layers \cite{haber2017stable,chen2018neural}. This view allows a reformulation of the forward and backward pass as the solution of the initial value problem of an \textit{ordinary differential equation} (ODE). Such approaches allow direct modeling of ODEs and enhance the performance of neural networks on tasks involving continuous--time processes \cite{rubanova2019latent}. In this work we propose the system--theoretic framework of \textit{graph neural ordinary differential equations} (GDEs) by defining ODEs parametrized by GNNs. GDEs are designed to inherit the ability of GNNs to impose relational inductive biases while retaining the dynamical system perspective of continuous--depth models. The structure--dependent vector field learned by GDEs offers a data--driven approach to the modeling of dynamical networked systems \cite{lu2005time,andreasson2014distributed}, particularly when the governing equations are highly nonlinear and therefore challenging to approach with analytical methods. On tasks that explicitly involve dynamical systems, GDEs can adapt the prediction horizon by adjusting the integration interval of the ODE, allowing the model to track the evolution of the underlying system from irregular observations. In general, no assumptions on the continuous nature of the data generating process are necessary for GDEs to be effective. Indeed, following recent work connecting different discretization schemes of ODEs \cite{lu2017beyond} to previously known architectures such as FractalNets \cite{larsson2016fractalnet}, we show that GDEs can equivalently be utilized as high--performance general purpose models. In this setting, GDEs offer a grounded approach to the embedding of black--box numerical schemes inside the forward pass of GNNs.
Moreover, we show that training GDEs with adaptive ODE solvers leads to deep GNN models without the need to specify the number of layers a priori, sidestepping known depth limitations of GNNs \cite{li2019deepgcns}. We summarize our contributions as follows:
\begin{itemize}
\item We introduce \textit{graph neural ordinary differential equations} (GDEs), continuous--depth counterparts to \textit{graph neural networks} (GNNs). We show that the proposed framework is compatible with most common GNN models and allows for the use of additional inductive biases in the form of governing equations.
\item We extend the GDE framework to the spatio--temporal setting and formalize a general autoregressive GDE model as a \textit{hybrid dynamical system}.
\item We validate GDEs experimentally on a static semi--supervised node classification task as well as on spatio--temporal forecasting tasks. GDEs are shown to outperform their discrete GNN analogues; the different sources of performance improvement are identified and analyzed separately for static and dynamic settings.
\end{itemize}
\section{Background}
\paragraph{Notation} Let $\Nat$ be the set of natural numbers and $\R$ the set of reals. Scalars are indicated as lowercase letters, vectors as bold lowercase, matrices and tensors as bold uppercase, and sets with calligraphic letters. Indices of arrays and matrices are reported as superscripts in round brackets. Let $\V$ be a finite set with $|\V| = n$ whose elements are called \textit{nodes} and let $\E$ be a finite set of tuples of $\V$ elements. Its elements are called \textit{edges} and are such that $\forall e_{ij}\in\E,~e_{ij} = (v_i,v_j)$ and $v_i,v_j\in\V$. A graph $\G$ is defined as the collection of nodes and edges, i.e. $\G := (\V,\E)$. The \textit{adjacency} matrix $\mathbf{A}\in\R^{n\times n}$ of a graph is defined as
\[
\mathbf{A}^{(ij)} = \left\{
\begin{matrix*}[l]
1 & e_{ij}\in\E\\
0 & e_{ij}\not\in\E
\end{matrix*}
\right.~.
\]
If $\G$ is an \textit{attributed graph}, the \textit{feature vector} of each $v\in\V$ is $\mathbf{x}_v\in\R^d$. All the feature vectors are collected in a matrix $\mathbf{X}\in\R^{n\times d}$. Note that the features of graphs often exhibit temporal dependency, i.e. $\mathbf{X} := \mathbf{X}_t$.
\paragraph{Neural ordinary differential equations} Continuous--depth neural network architectures are built upon the observation that, for particular classes of discrete models such as ResNets \cite{he2016deep}, the inter--layer dynamics:
\begin{equation}
\mathbf{h}(s+1) = \mathbf{h}(s) +\mathbf{f}\left(\mathbf{h}(s), \bm\theta(s)\right),~~~ s\in\Nat,
\end{equation}
resemble the Euler discretization of an ordinary differential equation (ODE). The continuous counterpart of neural network layers with equal input and output dimensions can therefore be described by a first order ODE of the type:
\begin{equation}\label{eq:node}
\frac{d\mathbf{h}}{ds} = \mathbf{f}\left( s, \mathbf{h}(s), \bm\theta\right),~~~ s\in\Sa\subset\R,
\end{equation}
where $\mathbf{f}$ is in general a multi--layer neural network. It has been noted that the choice of discretization scheme for (\ref{eq:node}) recovers previously known discrete multi--step architectures \cite{lu2017beyond}. As a result, the \textit{neural ordinary differential equation} (NODE) \cite{chen2018neural} framework is not limited to the modeling of differential equations and can guide the discovery of novel general purpose models.
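To make this correspondence concrete, the following schematic sketch (illustrative only; the width, depth and step size are arbitrary) unrolls the explicit Euler discretization of (\ref{eq:node}), recovering a weight--tied residual update of the type above:
\begin{verbatim}
import torch
import torch.nn as nn

f = nn.Sequential(nn.Linear(64, 64), nn.Tanh(), nn.Linear(64, 64))

def euler_forward(h, steps=10, ds=0.1):
    # each explicit Euler step h <- h + ds * f(h) acts as a
    # (weight-tied) residual layer
    for _ in range(steps):
        h = h + ds * f(h)
    return h

y = euler_forward(torch.randn(32, 64))
\end{verbatim}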
For the sake of compact notation, $\frac{d\mathbf{h}}{ds}$ will be denoted as $\dot\mathbf{h}$ throughout the paper.
\paragraph{Related work} There exists a concurrent line of work \cite{zhuang2020ordinary} proposing a continuous variant of \textit{graph convolution networks} (GCNs) \cite{kipf2016semi} with a focus on static node classification tasks. Furthermore, \cite{sanchez2019hamiltonian} proposes using graph networks (GNs) \cite{battaglia2018relational} and ODEs to track Hamiltonian functions, whereas \cite{deng2019continuous} introduces a GNN version of continuous normalizing flows for generative modeling. Our goal is to develop a unified system--theoretic framework for continuous--depth GNNs covering the main variants of static and spatio--temporal GNN models. We evaluate on both static and dynamic tasks with the primary aim of uncovering the sources of performance improvement of GDEs in each setting.
\section{Graph Neural Ordinary Differential Equations}
We begin by introducing the general formulation of \textit{graph neural ordinary differential equations} (GDEs).
\subsection{General Framework}
\paragraph{Definition of GDE} Without any loss of generality, the inter--layer dynamics of a GNN node feature matrix can be represented in the form:
\begin{equation*}
\left\{
\begin{matrix*}[l]
\mathbf{H}(s+1) = \mathbf{H}(s) + \mathbf{F}_{\G}\left( s, \mathbf{H}(s), \bm\Theta(s)\right) \\
\mathbf{H}(0) = \mathbf{X}_e
\end{matrix*}
\right.,~~s\in\Nat,
\end{equation*}
where $\mathbf{X}_e\in\R^{n\times h}$ is an embedding of $\mathbf{X}$\footnote{$\mathbf{X}_e$ can be obtained from $\mathbf{X}$, e.g., with a single linear layer: $\mathbf{X}_e := \mathbf{X}\mathbf{W}$, $\mathbf{W}\in\R^{d\times h}$, or with another GNN layer.}, $\mathbf{F}_{\G}$ is a matrix--valued nonlinear function conditioned on graph $\mathcal{G}$ and $\bm\Theta(s)\in\R^p$ is the tensor of trainable parameters of the $s$-th layer. Note that the explicit dependence of the dynamics on $s$ is justified in some graph architectures, such as diffusion graph convolutions \cite{atwood2016diffusion}. A \textit{graph neural ordinary differential equation} (GDE) is defined as the following Cauchy problem:
\begin{equation}\label{eq:GDE}
\left\{
\begin{matrix*}[l]
\dot\mathbf{H}(s) = \mathbf{F}_{\G}\left( s, \mathbf{H}(s), \bm\Theta\right) \\
\mathbf{H}(0) = \mathbf{X}_e
\end{matrix*}
\right.,~~s\in\Sa\subset\R,
\end{equation}
where $\mathbf{F}_{\G}:\Sa\times\R^{n\times h}\times\R^p\rightarrow\R^{n\times h}$ is a depth--varying vector field defined on graph $\mathcal{G}$. To reduce the complexity and stiffness of learned vector fields, thereby alleviating the computational burden of adaptive ODE solvers, the node features can be augmented \cite{dupont2019augmented} by concatenating additional dimensions or prepending input layers to the GDE.
\paragraph{Well--posedness} Let $\Sa:=[0,1]$. Under mild conditions on $\mathbf{F}$, namely Lipschitz continuity with respect to $\mathbf{H}$ and uniform continuity with respect to $s$, for each initial condition (GDE embedded input) $\mathbf{X}_e$, the ODE in \eqref{eq:GDE} admits a unique solution $\mathbf{H}(s)$ defined on the whole of $\Sa$. Thus there is a mapping $\bm\Psi$ from $\R^{n\times h}$ to the space of absolutely continuous functions $\Sa\to\R^{n\times h}$ such that $\mathbf{H} := \bm\Psi(\mathbf{X}_e)$ satisfies the ODE in \eqref{eq:GDE}. This implies that the output $\mathbf{Y}$ of the GDE satisfies
\[
\mathbf{Y} = \bm\Psi(\mathbf{X}_e)(1).
\]
Symbolically, the output of the GDE is obtained by the following:
\begin{equation*}
\mathbf{Y} = \mathbf{X}_e + \int_{\Sa}\mathbf{F}_{\G}(\tau,\mathbf{H}(\tau),\bm\Theta)d\tau.
\end{equation*}
Note that applying an output layer to $\mathbf{Y}$ before passing it to downstream applications is generally beneficial.
\paragraph{Integration domain} We restrict the integration interval to $\Sa=[0,1]$, given that any other integration time can be considered a rescaled version of $\Sa$. Following \cite{chen2018neural}, we use the \textit{number of function evaluations} (NFE) of the numerical solver used to solve (\ref{eq:GDE}) as a proxy for model depth. In applications where $\Sa$ acquires a specific meaning (i.e., forecasting with irregular timestamps), the integration domain can be appropriately tuned to evolve GDE dynamics between \textit{arrival times} \cite{rubanova2019latent} without assumptions on the functional form of the underlying vector field, as is the case, for example, with the exponential decay in GRU--D \cite{che2018recurrent}.
\paragraph{GDE training} GDEs can be trained with a variety of methods: standard backpropagation through the computational graph, the adjoint sensitivity method \cite{pontryagin1962mathematical} for $\mathcal{O}(1)$ memory efficiency \cite{chen2018neural}, or backpropagation through a relaxed spectral elements discretization \cite{quaglino2019accelerating}. Numerical instability in the form of errors accumulating on the adjoint ODE during the backward pass of Neural ODEs has been observed in \cite{gholami2019anode}. A proposed solution is a hybrid checkpointing--adjoint scheme commonly employed in scientific computing \cite{wang2009minimal}, where the adjoint trajectory is reset at predetermined points in order to control the error dynamics.
\paragraph{Incorporating governing differential equation priors} GDEs belong to the toolbox of scientific deep learning \cite{innes2019zygote}, along with Neural ODEs and other continuous--depth models. Scientific deep learning is concerned with merging prior, incomplete knowledge about governing equations with data--driven models to enhance prediction performance, sample efficiency and interpretability \cite{rackauckas2020universal}. Within this framework, GDEs can be extended to settings involving dynamical networks evolving according to different classes of differential equations, such as \textit{second--order} differential equations in the case of mechanics:
\begin{equation}\label{eq:2GDE}
\left\{
\begin{matrix*}[l]
\ddot\mathbf{H}(s) = \mathbf{F}_{\G}\left( s, \mathbf{H}(s)\right) \\
\left[\dot\mathbf{H}(0),\mathbf{H}(0)\right] = \mathbf{X}_e\\
\end{matrix*}
\right.~,
\end{equation}
where the feature matrix contains the full state (i.e., position and velocity) of each node in the dynamical network. In this setting, GDEs enforce inductive biases on the ``physics'' of the data generating process, in addition to those on its intrinsic geometric structure. This approach can also be seen as an early conceptual extension of \cite{yildiz2019ode} to GNNs.
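As a minimal illustration (not a verbatim excerpt of our implementation), (\ref{eq:2GDE}) can be rewritten as a first--order system on the stacked state $[\dot{\mathbf{H}}, \mathbf{H}]$ and solved with any black--box ODE solver; in the sketch below, {\tt gnn} is a placeholder for a GNN layer assumed to close over the graph $\G$, and the solver call follows the {\tt torchdiffeq} interface:
\begin{verbatim}
import torch
import torch.nn as nn
from torchdiffeq import odeint

class SecondOrderGDE(nn.Module):
    # first-order rewriting of the second-order GDE on the stacked
    # state [H_dot, H], matching the initial condition [H_dot(0), H(0)]
    def __init__(self, gnn):
        super().__init__()
        self.gnn = gnn  # placeholder for F_G; assumed to close over G

    def forward(self, s, state):
        h_dot, h = state.chunk(2, dim=-1)
        # d/ds [H_dot, H] = [F_G(H), H_dot]
        return torch.cat([self.gnn(h), h_dot], dim=-1)

n, d = 10, 8
gnn = nn.Linear(d, d)          # stand-in for an actual graph layer
X_e = torch.randn(n, 2 * d)    # stacked velocities and positions
Y = odeint(SecondOrderGDE(gnn), X_e, torch.linspace(0., 1., 2))[-1]
\end{verbatim}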
\subsection{Static Models}
\paragraph{Graph convolution differential equations} Based on graph spectral theory \cite{shuman2013emerging,sandryhaila2013discrete}, the residual version of \textit{graph convolution network} (GCN) \cite{kipf2016semi} layers takes the form:
\begin{equation*}
\mathbf{H}(s+1) = \mathbf{H}(s) + \sigma\left[\tilde{\mathbf{D}}^{-\frac{1}{2}}\tilde\mathbf{A}\tilde\mathbf{D}^{-\frac{1}{2}}\mathbf{H}(s)\bm\Theta(s)\right],
\end{equation*}
where $\tilde\mathbf{A}:= \mathbf{A} + \mathbb{I}_n$, $\tilde{\mathbf{D}}$ is the diagonal matrix defined as $\tilde\mathbf{D}^{(ii)}:=\sum_j\tilde \mathbf{A}^{(ij)}$, and $\sigma:\R\rightarrow\R$ is an activation applied element--wise to its argument. The corresponding continuous counterpart, the \textit{graph convolution differential equation} (GCDE), is therefore defined as
\begin{equation}\label{eq:gde}
\dot\mathbf{H}(s) = \mathbf{F}_{\tt GCN}(\mathbf{H}(s), \bm\Theta) := \sigma\left(\tilde{\mathbf{D}}^{-\frac{1}{2}}\tilde\mathbf{A}\tilde\mathbf{D}^{-\frac{1}{2}}\mathbf{H}(s)\bm\Theta\right).
\end{equation}
Note that other convolution filters can be applied as alternatives to the first--order approximation of the Chebyshev filter; see, e.g., \cite{bruna2013spectral,defferrard2016convolutional,levie2018cayleynets, zhuang2018dual}. Diffusion--type convolution layers \cite{li2017diffusion} are also compatible with the continuous--depth formulation.
\paragraph{Additional models and considerations} We include additional derivations of continuous counterparts of common static GNN models, such as \textit{graph attention networks} (GAT) \cite{velivckovic2017graph} and general message passing GNNs, as supplementary material. While the definition of GDE models is given with $\mathbf{F}$ made up of a single layer, in practice multi--layer architectures can also be used.
\begin{figure*}[t]
\centering
\includegraphics[width=1\linewidth]{figures/traj_new.pdf}
\caption{Node embedding trajectories defined by a forward pass of GCDE--dpr5 on Cora, Citeseer and Pubmed. Color differentiates between node classes.}
\label{fig:node_emb_traj}
\end{figure*}
\subsection{Spatio--Temporal Models}
For settings involving a temporal component (i.e., modeling dynamical systems), the depth domain of GDEs coincides with the time domain, $s\equiv t$, and can be adapted depending on the requirements. For example, given a time window $\Delta t$, the prediction performed by a GDE assumes the form:
\begin{equation*}
\mathbf{H}(t + \Delta t) = \mathbf{H}(t) + \int_t^{t + \Delta t}\mathbf{F}\left( \tau,\mathbf{H}(\tau),\bm\Theta \right) d\tau,
\end{equation*}
regardless of the specific GDE architecture employed. Here, GDEs represent a natural model class for autoregressive modeling of sequences of graphs $\{\G_t\}$ and seamlessly link to dynamical network theory.
\paragraph{Spatio--temporal GDEs as hybrid systems} This line of reasoning naturally leads to an extension of classical spatio--temporal architectures in the form of \textit{hybrid dynamical systems} \cite{van2000introduction,goebel2008}, i.e., systems characterized by interacting continuous-- and discrete--time dynamics. Let $(\K, >)$, $(\T, >)$ be linearly ordered sets; namely, $\K\subset\Nat\setminus\{0\}$ and $\T$ is a set of time instants, $\T:=\{t_k\}_{k\in\K}$.
We suppose to be given a \textit{state--graph data stream}, i.e., a sequence of the form $\left\{\left(\mathbf{X}_t,\G_t\right)\right\}_{t\in\T}$. Our aim is to build a continuous model predicting, at each $t_k\in\T$, the value of $\mathbf{X}_{t_{k+1}}$, given $\left(\mathbf{X}_t,\G_t\right)$. Let us also define a \textit{hybrid time domain} as the set $\I:= \bigcup_{k\in\K}\left([t_k, t_{k+1}],k\right)$ and a \textit{hybrid arc} on $\I$ as a function $\bm\Phi$ such that for each $k\in\K$, $t\mapsto\bm\Phi(t,k)$ is absolutely continuous on $\{t:(t,k)\in\dom\bm\Phi\}$. The core idea is to have a GDE smoothly steering the latent node features between two time instants and then apply some discrete operator, resulting in a ``jump'' of $\mathbf{H}$ which is then processed by an output layer. Therefore, solutions of the proposed continuous spatio--temporal model are hybrid arcs.
\begin{figure}[!h]
\centering
\begin{tikzpicture}
\fill[gray!20, draw = black,thick] (0,0) ellipse (1.7cm and 1.2cm);
\draw (0,0) node[align = center](flow) {$\dot{\mathbf{H}} = \mathbf{F}_{\G_{t_k}}\left(\mathbf{H}(s),\bm\Theta\right)$\\$(\dot s= 1)$};
\draw [thick,->] plot [smooth, tension=1.2] coordinates { (.6,1.15) (1.75,1.2) (1.7cm,0)};
\draw (3.9,-0.2) node[align = left] {{$k\leftarrow k+1$}\\ $\mathbf{H}^+ = \mathbf{G}_{\G_{t_k}}(\mathbf{H}(s),\mathbf{X}_{t_{k}})$};
\draw (.6,1.5) node {$s={t_{k}}$};
\end{tikzpicture}
\vspace{-3mm}
\caption{Schematic of an autoregressive GDE as a hybrid automaton.}
\label{fig:aut}
\end{figure}
\paragraph{Autoregressive GDEs} The solution of a general autoregressive GDE model can be symbolically represented by:
\begin{equation}\label{eq:hybrid}
\left\{
\begin{matrix*}[l]
\dot{\mathbf{H}}({s}) &= \mathbf{F}_{\G_{t_k}}(\mathbf{H}(s), \bm\Theta) & s\in[t_{k-1}, t_{k}] \\[3pt]
\mathbf{H}^+(s) &= \mathbf{G}_{\G_{t_k}}(\mathbf{H}(s), \mathbf{X}_{t_k}) & s = t_{k}\\[3pt]
\mathbf{Y} &= \mathbf{K}(\mathbf{H}(s)) & s=t_{k}
\end{matrix*}
\right.k\in\K,
\end{equation}
where $\mathbf{F}, \mathbf{G}, \mathbf{K}$ are GNN--like operators or general neural network layers\footnote{More formal definitions of the hybrid model in the form of \textit{hybrid inclusions} can indeed be given; however, the technicalities involved are beyond the scope of this paper.} and $\mathbf{H}^+$ represents the value of $\mathbf{H}$ after the discrete transition. The evolution of system (\ref{eq:hybrid}) is indeed a sequence of hybrid arcs defined on a hybrid time domain. A graphical representation of the overall system is given by the \textit{hybrid automaton} shown in Fig. \ref{fig:aut}. Compared to standard recurrent models, which are only equipped with discrete jumps, system (\ref{eq:hybrid}) incorporates a continuous flow of latent node features $\mathbf{H}$ between jumps. This feature of autoregressive GDEs allows them to track the evolution of dynamical systems from observations with irregular time steps. Different combinations of $\mathbf{F}, \mathbf{G}, \mathbf{K}$ can yield continuous variants of most common spatio--temporal GNN models. It should be noted that the operators $\mathbf{F}, \mathbf{G}, \mathbf{K}$ can themselves have a multi--layer structure.
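A schematic sketch of one possible rollout implementing (\ref{eq:hybrid}) is given below; {\tt F}, {\tt G} and {\tt K} are placeholder callables (with {\tt F} assumed to close over the current graph and to follow the {\tt (t, y)} vector-field signature), and the solver call follows the {\tt torchdiffeq} interface used in our experiments:
\begin{verbatim}
import torch
from torchdiffeq import odeint

def rollout(F, G, K, X_seq, t_seq, H0):
    # flow the latent node features with F between arrival times,
    # jump with G at each observation, read out predictions with K
    H, preds = H0, []
    for k in range(1, len(t_seq)):
        span = torch.tensor([t_seq[k - 1], t_seq[k]])
        H = odeint(F, H, span)[-1]  # continuous flow on [t_{k-1}, t_k]
        H = G(H, X_seq[k])          # discrete jump H^+
        preds.append(K(H))          # output Y at s = t_k
    return preds
\end{verbatim}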
\paragraph{GCDE--GRU} We illustrate the generality of (\ref{eq:hybrid}) by deriving the continuous--depth version of GCGRUs \cite{zhao2018deep} as:
\begin{equation*}
\label{eq:gcgru}
\left\{
\begin{matrix*}[l]
\dot\mathbf{H}({s}) &= \mathbf{F}_{\tt GCN}(\mathbf{H}(s), \bm\Theta) & s\in[t_{k-1}, t_{k}] \\[3pt]
\mathbf{H}^+(s) &= {\tt GCGRU}(\mathbf{H}(s), \mathbf{X}_{t_{k}}) & s=t_{k}\\
\mathbf{Y} &= \mathbf{K}(\mathbf{H}(s)) & s=t_{k}
\end{matrix*}
\right.k\in\K,
\end{equation*}
where $\mathbf{K}$ is a fully--connected neural network. The complete description of a GCGRU layer of computation is included as supplementary material. We refer to this model as GCDE--GRU. Similarly, GCDE--RNNs or GCDE--LSTMs can be obtained by replacing the GRU cell with other commonly used recurrent modules, such as vanilla RNNs or LSTMs \cite{hochreiter1997long}.
\section{Experiments}
We evaluate GDEs on a suite of different tasks. The experiments and their primary objectives are summarized below:
\begin{itemize}
\item Semi--supervised node classification on the static, standard benchmark datasets Cora, Citeseer and Pubmed \cite{sen2008collective}. We investigate the usefulness of the proposed method in a static setting via an ablation analysis that directly compares GCNs and analogous GCDEs solved with fixed--step and adaptive solvers.
\item Trajectory extrapolation on a synthetic multi--agent dynamical system. We compare Neural ODEs and GDEs while providing a motivating example for the framework of scientific deep learning in the form of second--order models (\ref{eq:2GDE}).
\item Traffic forecasting on an undersampled version of the PeMS \cite{yu2018spatio} dataset. We measure the performance improvement obtained from a correct inductive bias on continuous dynamics, as well as the robustness to irregular timestamps.
\end{itemize}
The code will be open--sourced after the review phase and is included in the submission.
\begin{table}[t]
\small
\centering
\setlength\tabcolsep{4pt}
\begin{tabular}{lrrr}
\toprule
Model (NFE) & Cora & Citeseer & Pubmed\\
\midrule
GCN & $81.4 \pm 0.5\%$ & $70.9 \pm 0.5\%$ & $79.0 \pm 0.3\%$\\
GCN$^*$ & $82.8 \pm 0.3\%$ & $71.2 \pm 0.4\%$ & $79.5 \pm 0.4\%$ \\
\midrule
GCDE--rk2 (2) & $83.0 \pm 0.6\%$ & $72.3 \pm 0.5\%$ & $\textbf{79.9} \pm 0.3\%$\\
GCDE--rk4 (4) & $\textbf{83.8} \pm 0.5\%$ & $\textbf{72.5} \pm 0.5\%$ & $79.5 \pm 0.4\%$\\
GCDE--dpr5 \textbf{(158)} & $81.8 \pm 1.2\%$ & $68.3\pm 1.2\%$ & $78.5 \pm 0.7\%$\\
\bottomrule
\end{tabular}
\vspace{3mm}
\caption{Test results in percentages across 100 runs (mean and standard deviation). All models have hidden dimension set to $64$.}
\label{tab:allresone}
\end{table}
\subsection{Transductive Node Classification}
\paragraph{Experimental setup} The first task involves performing semi--supervised node classification on static graphs collected from the baseline datasets Cora, Pubmed and Citeseer \cite{sen2008collective}. The main goal of these experiments is to perform an ablation study on the source of possible performance advantages of the GDE framework in settings that do not involve continuous dynamical systems. The $L_2$ weight penalty is set to $5\cdot 10^{-4}$ on Cora and Citeseer, and to $10^{-3}$ on Pubmed, as a strong regularizer due to the small size of the training set \cite{monti2017geometric}. We report mean and standard deviation across $100$ training runs.
Since our experimental setup follows \cite{kipf2016semi} to allow for a fair comparison, other baselines present in the recent GNN literature can be directly compared with Table \ref{tab:allresone}.
\paragraph{Models and baselines} All convolution--based models are equipped with a latent dimension of $64$. We include results for the best performing vanilla GCN baseline presented in \cite{velivckovic2017graph}. To avoid flawed comparisons, we further evaluate an optimized version of GCN, GCN$^*$, sharing the exact architecture as well as the training and validation hyperparameters of the GCDE models. We experimented with different numbers of layers for GCN$^*$ ($2, 3, 4, 5, 6$) and select $2$, since it achieves the best results. The performance of the \textit{graph convolution differential equation} (GCDE) is assessed with both a fixed--step solver, Runge--Kutta \cite{runge1895numerische,kutta1901beitrag}, and an adaptive--step solver, Dormand--Prince \cite{dormand1980family}. The resulting models are denoted as GCDE--rk4 and GCDE--dpr5, respectively. We utilize the {\tt torchdiffeq} \cite{chen2018neural} PyTorch package to solve and backpropagate through the ODE solver.
\paragraph{Continuous--depth models in static tasks} The evaluation of continuous variants of GNNs in concurrent work \cite{zhuang2020ordinary} exclusively considers the static setting, where continuous--depth models do not have an inherent modeling advantage. As an example, ensuring a low--error solution to the ODE parametrized by the model with adaptive--step solvers does not offer particular advantages in image classification tasks \cite{chen2018neural} compared to equivalent discrete models. While there is no reason to expect performance improvements solely from the transition away from discrete architectures, the continuous--depth formulation allows for the embedding of numerical ODE solvers in the forward pass. Multi--step architectures have previously been linked to ODE solver schemes \cite{lu2017beyond} and routinely outperform their single--step counterparts \cite{larsson2016fractalnet,lu2017beyond}. We therefore investigate the performance gains obtained by employing the GDE framework in static settings as a straightforward approach to embedding numerical ODE solvers in GNNs.
\paragraph{Results} The variants of GCDEs solved with fixed--step schemes are shown to outperform or match GCN$^*$ across all datasets, with the margin of improvement being highest on Cora and Citeseer. Introducing GCDE--rk2 and GCDE--rk4 is observed to provide the most significant accuracy increases on more densely connected graphs or with larger training sets. In particular, GCDE--rk4 outperforming GCDE--rk2 indicates that, given equivalent network architectures, higher--order ODE solvers are generally more effective, provided the graph is dense enough to benefit from the additional computation. Additionally, training GCDEs with adaptive--step solvers naturally leads to deeper models than possible with vanilla GCNs, whose performance degrades greatly with layer depth. However, the high \textit{number of function evaluations} (NFE) of GCDE--dpr5, necessary to stay within the ODE solver tolerances, causes the model to overfit and therefore generalize poorly. We visualize the first two components of the GCDE--dpr5 node embedding trajectories in Figure \ref{fig:node_emb_traj}. The trajectories are divergent, suggesting non--decreasing classification performance for GCDE models trained with longer integration times. We provide complete visualizations of the accuracy curves in Appendix C.
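To make the solver ablation concrete, the following sketch instantiates the convolutional vector field of (\ref{eq:gde}) with a single layer and contrasts a fixed--step and an adaptive--step {\tt torchdiffeq} call; the identity placeholder for the normalized graph operator and the choice of ReLU as $\sigma$ are illustrative assumptions:
\begin{verbatim}
import torch
import torch.nn as nn
from torchdiffeq import odeint

class GCNFunc(nn.Module):
    # single-layer GCDE vector field sigma(L_hat H Theta), with a
    # precomputed normalized graph operator L_hat
    def __init__(self, L_hat, hidden):
        super().__init__()
        self.L_hat = L_hat
        self.theta = nn.Linear(hidden, hidden, bias=False)

    def forward(self, s, H):
        return torch.relu(self.L_hat @ self.theta(H))

n, h = 100, 64
L_hat = torch.eye(n)          # placeholder for D^{-1/2} A~ D^{-1/2}
f, X_e = GCNFunc(L_hat, h), torch.randn(n, h)
t = torch.tensor([0., 1.])    # integration domain S = [0, 1]
Y_rk4 = odeint(f, X_e, t, method='rk4')[-1]     # fixed step, GCDE--rk4
Y_dp5 = odeint(f, X_e, t, method='dopri5')[-1]  # adaptive, GCDE--dpr5
\end{verbatim}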
\begin{wrapfigure}{r}{0.45\textwidth}
\centering
\includegraphics[width=1\linewidth]{figures/Trk4.pdf}
\caption{Cora accuracy of GCDE models with different integration times $S$. Higher values of $S$ do not affect performance negatively but require a higher number of epochs.}
\label{fig:trk4}
\end{wrapfigure}
\paragraph{Resilience to integration time} For each integration time $S \in \{1, 5, 10\}$, we train $100$ GCDE--dpr5 models on Cora and report average metrics, along with $1$ standard deviation confidence intervals, in Figure \ref{fig:trk4}. GCDEs are shown to be resilient to changes in $S$; however, GCDEs with longer integration times require more training epochs to achieve comparable accuracy. This result suggests that, indeed, GDEs are immune to node oversmoothing \cite{oono2019graph}.
\subsection{Multi--Agent Trajectory Extrapolation}
\paragraph{Experimental setup} We evaluate GDEs and a collection of deep learning baselines on the task of extrapolating the dynamical behavior of a synthetic mechanical multi--particle system. Particles interact with a viscoelastic force within a certain radius. Outside the mutual interactions, captured by a time--varying adjacency matrix $\mathbf{A}_t$, the particles would follow a periodic motion. The adjacency matrix $\mathbf{A}_t$ is computed along the trajectory as:
\[
\mathbf{A}_t^{(ij)} = \left\{\begin{matrix*}[l]
1 & 2\|\mathbf{x}_i(t)-\mathbf{x}_j(t)\|\leq r\\
0 & \text{otherwise}
\end{matrix*}\right.~,
\]
\begin{wrapfigure}{r}{0.52\textwidth}
\centering
\includegraphics[width=.9\linewidth]{figures/traj_multiagent_better.pdf}
\caption{Example position and velocity trajectories of the multi--particle system.}
\label{fig:ma_traj}
\end{wrapfigure}
where $\mathbf{x}_i(t)$ is the position of node $i$ at time $t$. Therefore, $\mathbf{A}_t$ is symmetric, $\mathbf{A}_t = \mathbf{A}_t^\top$, and yields an undirected graph. The dataset is collected by integrating the system for $T = 5s$ with a fixed step--size of $dt = 1.95\cdot10^{-3}$ and is split evenly into a training and a test set. We consider $10$ particle systems. An example trajectory is shown in Figure \ref{fig:ma_traj}. All models are optimized to minimize the mean squared error (MSE) of 1--step predictions using Adam \cite{kingma2014adam} with constant learning rate $0.01$. We measure the test \textit{mean absolute percentage error} (MAPE) of model predictions in different extrapolation settings. \textit{Extrapolation steps} denotes the number of predictions each model $\Phi$ has to perform without access to the nominal trajectory. This is achieved by recursively letting inputs at time $t$ be model predictions at time $t - \Delta t$, i.e., $\hat{\mathbf{Y}}_{t+\Delta t} = \Phi(\hat{\mathbf{Y}}_{t})$, for a certain number of extrapolation steps, after which the model is fed the actual nominal state $\mathbf{X}$ and the cycle is repeated until the end of the test trajectory. For a robust comparison, we report mean and standard deviation across 10 seeded training and evaluation runs. Additional experimental details, including the analytical formulation of the dynamical system, are provided as supplementary material.
\paragraph{Models and baselines} As the vector field depends only on the state of the system, available in full during training, the baselines do not include recurrent modules. We consider the following models:
\begin{itemize}
\item A 3--layer fully--connected neural network, referred to as \textit{Static}.
No assumptions on the dynamics are made.
\item A vanilla Neural ODE with the vector field parametrized by the same architecture as \textit{Static}. An ODE assumption on the dynamics is made.
\item A 3--layer convolutional GDE, GCDE. The dynamics are assumed to be determined by a blend of graphs and ODEs.
\item A 3--layer, second--order GCDE as described by (\ref{eq:2GDE}), referred to as \textit{GCDE-II}. The GCDE assumptions are made, in addition to second--order ODE dynamics.
\end{itemize}
A grid hyperparameter search over the number of layers, ODE solver tolerances and learning rate is performed to optimize \textit{Static} and Neural ODEs. We use the same hyperparameters for GDEs.
\paragraph{Results} Figure \ref{fig:mape} shows the growth rate of the test MAPE as the number of extrapolation steps is increased. \textit{Static} fails to extrapolate beyond the 1--step setting seen during training. Neural ODEs overfit spurious particle interaction terms, and their error grows rapidly as the number of extrapolation steps is increased. GCDEs, on the other hand, are able to effectively leverage relational information to track the system, as shown in Figure \ref{fig:extrapo_traj}.
\begin{figure}[!h]
\centering
\includegraphics[width=0.6\linewidth]{figures/MA_MC.pdf}
\caption{Test extrapolation MAPE averaged across 10 experiments. Shaded area and error bars indicate 1--standard deviation intervals.}
\label{fig:mape}
\end{figure}
Lastly, GCDE-IIs outperform first--order GCDEs, as their structure inherently encodes the relationship between the positions and velocities of the observed dynamical system. To further improve performance, additional information about the governing equations can be encoded in the model \cite{rackauckas2020universal}.
\begin{wrapfigure}{r}{0.45\textwidth}
\centering
\includegraphics[width=1\linewidth]{figures/ma_extrapo.pdf}
\caption{Test extrapolation, 5 steps. Trajectory predictions of Neural ODEs and GDEs. The extrapolation is terminated after 5 steps and the nominal state is fed to the model, as described in the experimental setup.}
\label{fig:extrapo_traj}
\end{wrapfigure}
\subsection{Traffic Forecasting}
\paragraph{Experimental setup} We evaluate the effectiveness of autoregressive GDE models on forecasting tasks by performing a series of experiments on the established PeMS traffic dataset. We follow the setup of \cite{yu2018spatio}, in which a subsampled version of PeMS, PeMS7(M), is obtained via selection of 228 sensor stations and aggregation of their historical speed data into time series with a regular 5 minute frequency. We construct the adjacency matrix $\mathbf{A}$ by thresholding the Euclidean distance between observation stations, i.e., when two stations are closer than the threshold distance, an edge between them is included. The threshold is set to the 40$^{\text{th}}$ percentile of the distribution of station distances. To simulate a challenging environment with missing data and irregular timestamps, we undersample the time series by performing independent Bernoulli trials on each data point. Results for 3 increasingly challenging experimental setups are provided: undersampling with $30\%$, $50\%$ and $70\%$ removal. In order to provide a robust evaluation of model performance in regimes with irregular data, the testing is repeated $20$ times per model, each time with a different undersampled version of the test dataset. We collect the \textit{root mean square error} (RMSE) and MAPE.
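The preprocessing described above can be summarized by the following sketch; thresholds, shapes and function names are illustrative assumptions rather than excerpts of our pipeline:
\begin{verbatim}
import torch

def build_adjacency(dist):
    # edge between two stations whenever their Euclidean distance falls
    # below the 40th percentile of the pairwise distance distribution
    thr = dist.flatten().quantile(0.40)
    return (dist < thr).float()

def undersample(X, t, p_remove):
    # independent Bernoulli trial on each data point, simulating
    # missing data and irregular timestamps
    keep = torch.rand(X.shape[0]) >= p_remove
    return X[keep], t[keep]
\end{verbatim}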
More details about the chosen metrics and data are included as supplementary material.
\paragraph{Models and baselines} In order to measure the performance gains obtained by GDEs in settings with data generated by continuous--time systems, we employ a GCDE--GRU as well as its discrete counterpart, GCGRU \cite{zhao2018deep}. To contextualize the effectiveness of introducing graph representations, we also include the performance of GRUs, which do not directly utilize structural information about the system when predicting outputs. Unlike GCDE--GRU, neither baseline model has an innate mechanism for handling timestamp information. For a fair comparison, we include timestamp differences between consecutive samples and \textit{sine--encoded} \cite{petnehazi2019recurrent} absolute time information as additional features. All models receive an input sequence of $5$ graphs to perform the prediction.
\begin{table*}[b]
\small
\centering
\setlength\tabcolsep{3pt}
\begin{tabular}{lrrrrrr}
\toprule
Model & MAPE$_{70\%}$ & RMSE$_{70\%}$ & MAPE$_{50\%}$ & RMSE$_{50\%}$ & MAPE$_{30\%}$ & RMSE$_{30\%}$ \\
\midrule
GRU & $27.14 \pm 0.45$ & $13.25 \pm 0.11$ & $27.08 \pm 0.26$ & $13.218 \pm 0.065$ & $27.24 \pm 0.191$ & $13.28 \pm 0.05$\\
GCGRU & $23.60 \pm 0.38$ & $11.97 \pm 0.03$ & $22.86 \pm 0.22$ & $11.779 \pm 0.056$ & $21.324 \pm 0.16$ & $11.20 \pm 0.04$\\
GCDE--GRU & $\textbf{22.95} \pm 0.37$ & $\textbf{11.67} \pm 0.10$ & $\textbf{21.25} \pm 0.21$ & $\textbf{11.04} \pm 0.05$ & $\textbf{20.94} \pm 0.14$ & $\textbf{10.95} \pm 0.04$\\
\bottomrule
\end{tabular}
\caption{Forecasting test results across 20 runs (mean and standard deviation). MAPE$_i$ indicates $i\%$ undersampling of the test set.}
\label{tab:traffic_preds}
\end{table*}
\paragraph{Results} Non--constant differences between timestamps result in a challenging forecasting task for a single model, since the average prediction horizon changes drastically over the course of training and testing. Traffic systems are intrinsically dynamic and continuous in nature; therefore, a model able to track the continuous underlying dynamics is expected to offer improved performance. Since GCDE--GRUs and GCGRUs are designed to match in structure, we can measure this performance increase from the results shown in Table \ref{tab:traffic_preds}. GCDE--GRUs outperform GCGRUs and GRUs in all undersampling regimes. Additional details and prediction visualizations are included in Appendix C.
\section{Discussion}
We discuss future extensions compatible with the GDE framework.
\paragraph{Unknown or dynamic topology} Several lines of work exist that are concerned with learning the graph structure directly from data, either by inferring the adjacency matrix within a probabilistic framework \cite{kipf2018neural} or by using a soft--attention \cite{vaswani2017attention} mechanism \cite{choi2017gram,li2018adaptive,wu2019graph}. In particular, the latter represents a commonly employed approach for estimating a dynamic adjacency matrix in spatio--temporal settings. Due to the algebraic nature of the relation between the attention operator and the node features, GDEs are compatible with its use inside the GNN layers parametrizing the vector field. Thus, if an optimal adaptive graph representation $\mathbf{S}(s, \mathbf{H})$ is computed through some attentive mechanism, standard convolution GDEs can be replaced by:
\begin{equation}
\dot\mathbf{H} = \sigma\left(\mathbf{S}\mathbf{H}\bm\Theta\right).
\end{equation}
\paragraph{Introduction of control terms} The GDE formulation allows for the introduction of control inputs:
\begin{equation}\label{eq:contGDE}
\left\{
\begin{matrix*}[l]
\dot\mathbf{H}(s) = \mathbf{F}_{\G}\left( s, \mathbf{H}(s), \bm\Theta\right) + \mathbf{U}\left( s \right) \\
\mathbf{H}(0) = \mathbf{X}_e
\end{matrix*}
\right.,~~s\in\Sa.
\end{equation}
System (\ref{eq:contGDE}) encompasses a variety of previously proposed approaches involving special residual connections \cite{zhang2019gresnet} and multi--stage or jumping knowledge GNNs \cite{xu2018representation}, unifying them under a control--theoretic point of view. In particular, the control input can be parametrized by an additional neural network, $\mathbf{U}\left( s\right) := \mathbf{U}_{\G}\left( s, \mathbf{X} \right)$.
\section{Conclusion}
In this work we introduce \textit{graph neural ordinary differential equations} (GDEs), the continuous--depth counterpart to \textit{graph neural networks} (GNNs), in which the inputs are propagated through a continuum of GNN layers. The GDE formulation is general, as it can be adapted to include many static and autoregressive GNN models. GDEs are designed to offer a data--driven modeling approach for \textit{dynamical networks}, whose dynamics are defined by a blend of discrete topological structures and differential equations. In sequential forecasting problems, GDEs can accommodate irregular timestamps and track the underlying continuous dynamics, whereas in static settings they offer computational advantages by allowing for the embedding of black--box numerical solvers in their forward pass. GDEs have been evaluated on both static and dynamic tasks and have been shown to outperform their discrete counterparts. Future directions include extending GDEs to other classes of differential equations, as well as to settings where the number of graph nodes evolves in time.
\bibliographystyle{unsrt}
This view allows a reformulation of the forward and backward pass as the solution of the initial value problem of an \textit{ordinary differential equation} (ODE). Such approaches allow direct modeling of ODEs and can guide discovery of novel general purpose deep learning models. \newpage \paragraph{Blending graphs and differential equations} In this work we propose the system--theoretic framework of \textit{graph neural ordinary differential equations} (GDEs) by defining ODEs parametrized by GNNs. GDEs are designed to inherit the ability to impose relational inductive biases of GNNs while retaining the dynamical system perspective of continuous--depth models. We validate GDEs experimentally on a static semi--supervised node classification task as well as spatio--temporal forecasting tasks. GDEs are shown to outperform their discrete GNN analogues: the different sources of performance improvement are identified and analyzed separately for static and dynamic settings. \paragraph{Sequences of graphs} We extend the GDE framework to the spatio--temporal setting and formalize a general autoregressive GDE model as a \textit{hybrid dynamical system}. The structure--dependent vector field learned by GDEs offers a data--driven approach to the modeling of dynamical networked systems \citep{lu2005time,andreasson2014distributed}, particularly when the governing equations are highly nonlinear and therefore challenging to approach with analytical methods. Autoregressive GDEs can adapt the prediction horizon by adjusting the integration interval of the ODE, allowing the model to track the evolution of the underlying system from irregular observations. \paragraph{GDEs as general--purpose models} In general, no assumptions on the continuous nature of the data generating process are necessary in order for GDEs to be effective. Indeed, following recent work connecting different discretization schemes of ODEs \citep{lu2017beyond} to previously known architectures such as FractalNets \citep{larsson2016fractalnet}, we show that GDEs can equivalently be utilized as high--performance general purpose models. In this setting, GDEs offer a grounded approach to the embedding of classic numerical schemes inside the forward pass of GNNs. \section{Graph Neural Ordinary Differential Equations} We begin by introducing the general formulation of GDEs. \subsection{General Framework} \paragraph{Definition of GDE} Without any loss of generality, the inter--layer dynamics of a GNN node feature matrix can be represented in the form: \begin{equation*} \left\{ \begin{matrix*}[l] \mathbf{H}{(s+1)} = \mathbf{H}(s) + \mathbf{F}_{\G}\left( s, \mathbf{H}(s), \bm\Theta(s)\right) \\ \mathbf{H}(0) = \mathbf{X}_e \end{matrix*} \right.,~~s\in\Nat, \end{equation*} where $\mathbf{X}_e\in\R^{n\times h}$ is an embedding of $\mathbf{X}$\footnote{$\mathbf{X}_e$ can be obtained from $\mathbf{X}$, e.g. with a single linear layer: $\mathbf{X}_e := \mathbf{X}\mathbf{W}$, $\mathbf{W}\in\R^{d\times h}$ or with another GNN layer.}, $\mathbf{F}_{\G}$ is a matrix--valued nonlinear function conditioned on graph $\mathcal{G}$ and $\bm\Theta(s)\in\R^p$ is the tensor of trainable parameters of the $s$-th layer. Note that the explicit dependence on $s$ of the dynamics is justified in some graph architectures, such as diffusion graph convolutions \citep{atwood2016diffusion}. 
A \textit{graph neural differential ordinary equation} (GDE) is defined as the following Cauchy problem: \begin{equation}\label{eq:GDE} \left\{ \begin{matrix*}[l] \dot\mathbf{H}(s) = \mathbf{F}_{\G}\left( s, \mathbf{H}(s), \bm\Theta\right) \\ \mathbf{H}(0) = \mathbf{X}_e \end{matrix*} \right.,~~s\in\Sa\subset\R, \end{equation} where $\mathbf{F}_{\G}:\Sa\times\R^{n\times h}\times\R^p\rightarrow\R^{n\times h}$ is a depth--varying vector field defined on graph $\mathcal{G}$. To reduce stiffness of learned vector fields, alleviating the computational burden of adaptive ODE solvers, the node features can be augmented in several ways \citep{dupont2019augmented,massaroli2020dissecting} by concatenating additional dimensions or prepending input layers to the GDE. \paragraph{Well--posedness} Let $\Sa:=[0,1]$. Under mild conditions on $\mathbf{F}$, namely Lipsichitz continuity with respect to $\mathbf{H}$ and uniform continuity with respect to $s$, for each initial condition (GDE embedded input) $\mathbf{X}_e$, the ODE in \eqref{eq:GDE} admits a unique solution $\mathbf{H}(s)$ defined in the whole $\Sa$. Thus there is a mapping $\bm\Psi$ from $\R^{n\times h}$ to the space of absolutely continuous functions $\Sa\to\R^{n\times h}$ such that $\mathbf{H} := \bm\Psi(\mathbf{X}_e)$ satisfies the ODE in \eqref{eq:GDE}. This implies the the output $\mathbf{Y}$ of the GDE satisfies \[ \mathbf{Y} = \bm\Psi(\mathbf{X}_e)(1). \] Symbolically, the output of the GDE is obtained by the following \begin{equation*} \mathbf{Y} = \mathbf{X}_e + \int_{\Sa}\mathbf{F}_{\G}(\tau,\mathbf{H}(\tau),\bm\Theta)d\tau. \end{equation*} Note that applying an output layer or network to $\mathbf{Y}$ before passing it to downstream applications is generally beneficial. \paragraph{Integration domain} We restrict the integration interval to $\Sa=[0,1]$, given that any other integration time can be considered a rescaled version of $\Sa$. Following \citep{chen2018neural} we use the \textit{number of function evaluations} (NFE) of the numerical solver utilized to solve (\ref{eq:GDE}) as a proxy for model depth. In applications where $\Sa$ acquires a specific meaning (i.e forecasting with irregular timestamps) the integration domain can be appropriately tuned to evolve GDE dynamics between \textit{arrival times} \citep{rubanova2019latent} without assumptions on the functional form of the underlying vector field, as is the case for example with exponential decay in GRU--D \citep{che2018recurrent}. \paragraph{GDE training} GDEs can be trained with a variety of methods. Standard backpropagation through the computational graph, adjoint sensitivity method \citep{pontryagin1962mathematical} for $\mathcal{O}(1)$ memory efficiency \citep{chen2018neural}, or backpropagation through a relaxed spectral elements discretization \citep{quaglino2019accelerating}. Numerical instability in the form of accumulating errors on the adjoint ODE during the backward pass of Neural ODEs has been observed in \citep{gholami2019anode}. A proposed solution is a hybrid checkpointing--adjoint scheme commonly employed in scientific computing \citep{wang2009minimal}, where the adjoint trajectory is reset at predetermined points in order control the error dynamics. \section{Taxonomy of GDEs} In the following, we taxonomize GDEs models distinguishing them into \textit{static} and \textit{spatio--temporal} (autoregressive) variants. 
\subsection{Static Models} \paragraph{Graph convolution differential equations} Based on graph spectral theory \citep{shuman2013emerging,sandryhaila2013discrete}, the residual version of \textit{graph convolution network} (GCN) \citep{kipf2016semi} layers are in the form: \begin{equation}\label{eq:gcn} \mathbf{H}{(s+1)} = \mathbf{H}(s) + \sigma\left(\mathbf{L}_{\G} \mathbf{H}(s) \bm\Theta(s)\right) \end{equation} where $\mathbf{L}_{\G}\in\R^{n\times n}$ is the graph \textit{Laplacian} and $\sigma$ is as nonlinear activation function. We denote with $\C_{\G}$ the graph convolution operator, i.e. $\C_{\G}\mathbf{H}(s)= \mathbf{L}_{\G} \mathbf{H}(s) \bm\Theta(s)$. A general formulation of the continuous counterpart of GCNs, \textit{graph convolution differential equation} (GCDE), is therefore obtained by defining $\mathbf{F}_{\G}$ as a multilayer convolution, i.e. \begin{equation}\label{eq:gde} \dot\mathbf{H}{(s)} = \mathbf{F}_{\tt GCN}(\mathbf{H}(s), \bm\Theta) := \C^N_{\G}\circ ~\sigma\circ\C^{N-1}_{\G}\circ\cdots\circ\sigma\circ\C^1_{\G}\mathbf{H}(s) \end{equation} Note that the Laplacian $\mathbf{L}_{\G}$ can be computed in different ways, see e.g. \citep{bruna2013spectral,defferrard2016convolutional,levie2018cayleynets, zhuang2018dual}. Diffusion--type convolution layers \citep{li2017diffusion} are also compatible with the continuous--depth formulation. \paragraph{Additional models and considerations} We include additional derivation of continuous counterparts of common static GNN models such as \textit{graph attention networks} (GAT) \citep{velivckovic2017graph} and general message passing GNNs as supplementary material. \subsection{Spatio--Temporal Models} For settings involving a temporal component (i.e., modeling dynamical systems), the depth domain of GDEs coincides with the time domain $s\equiv t$ and can be adapted depending on the requirements. For example, given a time window $\Delta t$, the prediction performed by a GDE assumes the form: \begin{equation*} \mathbf{H}{(t + \Delta t)} = \mathbf{H}(t) + \int_t^{t + \Delta t}\mathbf{F}\left( \tau,\mathbf{H}(\tau),\bm\Theta \right) d\tau, \end{equation*} regardless of the specific GDE architecture employed. Here, GDEs represent a natural model class for autoregressive modeling of sequences of graphs $\{\G_t\}$ and seamlessly link to dynamical network theory. This line of reasoning naturally leads to an extension of classical spatio--temporal architectures in the form of \textit{hybrid dynamical systems} \citep{van2000introduction,goebel2008}, i.e., systems characterized by interacting continuous and discrete--time dynamics. Let $(\K, >)$, $(\T, >)$ be linearly ordered sets; namely, $\K\subset\Nat\setminus\{0\}$ and $\T$ is a set of time instants, $\T:=\{t_k\}_{k\in\K}$. We suppose to be given a \textit{state--graph data stream} which is a sequence in the form $\left\{\left(\mathbf{X}_t,\G_t\right)\right\}_{t\in\T}$. Let us also define a \textit{hybrid time domain} as the set $\I:= \bigcup_{k\in\K}\left([t_k, t_{k+1}],k\right)$ and a \textit{hybrid arc} on $\I$ as a function $\bm\Phi$ such that for each $k\in\K$, $t\mapsto\bm\Phi(t,k)$ is absolutely continuous in $\{t:(t,j)\in\dom\bm\Phi\}$. Our aim is to build a continuous model predicting, at each $t_k\in\T$, the value of $\mathbf{X}_{t_{k+1}}$, given $\left(\mathbf{X}_t,\G_t\right)$. 
The core idea is to have a GDE smoothly steering the latent node features between two time instants and then apply some discrete operator, resulting in a ``jump'' of $\mathbf{H}$ which is then processed by an output layer. Solutions of the proposed continuous spatio--temporal model are therefore hybrid arcs.
\begin{wrapfigure}[10]{r}{0.4\textwidth}
\centering
\begin{tikzpicture}%
\fill[gray!20, draw = black,thick] (0,0) ellipse (1.7cm and 1.2cm);
\draw (0,0) node[align = center](flow) {$\dot{\mathbf{H}} = \mathbf{F}_{\G_{t_k}}\left(\mathbf{H}(s),\bm\Theta\right)$\\\\$(\dot s= 1)$};
\draw [thick,-latex] (1.7,0) to [in=-30, out=-50, distance=1.2cm] (.9,-1.);
\draw (2,-1.75) node[align = left] {\\{$k\leftarrow k+1$}\\ $\mathbf{H}^+ = \mathbf{G}_{\G_{t_k}}(\mathbf{H}(s),\mathbf{X}_{t_{k}})$};
\draw (2.5,0.1) node {$s={t_{k}}$};
%
\end{tikzpicture}
\vspace{-7mm}
\caption{Schematic of autoregressive GDEs as hybrid automata.}
\label{fig:aut}
\end{wrapfigure}
\paragraph{Autoregressive GDEs} The solution of a general autoregressive GDE model can be symbolically represented by: \begin{equation}\label{eq:hybrid} \left\{ \begin{matrix*}[l] \dot{\mathbf{H}}({s}) &= \mathbf{F}_{\G_{t_k}}(\mathbf{H}(s), \bm\Theta) & s\in[t_{k-1}, t_{k}] \\[3pt] \mathbf{H}^+(s) &= \mathbf{G}_{\G_{t_k}}(\mathbf{H}(s), \mathbf{X}_{t_k}) & s = t_{k}\\[3pt] \mathbf{Y} &= \mathbf{K}(\mathbf{H}(s)) & s=t_{k} \end{matrix*} \right.~~k\in\K, \end{equation} where $\mathbf{F}, \mathbf{G}, \mathbf{K}$ are GNN--like operators or general neural network layers\footnote{More formal definitions of the hybrid model in the form of \textit{hybrid inclusions} can indeed be easily given. However, the technicalities involved are beyond the scope of this paper.} and $\mathbf{H}^+$ represents the value of $\mathbf{H}$ after the discrete transition. The evolution of system (\ref{eq:hybrid}) is indeed a sequence of hybrid arcs defined on a hybrid time domain. A graphical representation of the overall system is given by the \textit{hybrid automaton} shown in Fig.~\ref{fig:aut}. Compared to standard recurrent models, which are only equipped with discrete jumps, system (\ref{eq:hybrid}) incorporates a continuous flow of latent node features $\mathbf{H}$ between jumps. This feature of autoregressive GDEs allows them to track the evolution of dynamical systems from observations with irregular time steps. In the experiments we consider $\mathbf{G}$ to be a GRU cell \citep{cho2014learning}, obtaining the \textit{graph convolutional differential equation--GRU} (GCDE--GRU). Alternatives such as GCDE--RNNs or GCDE--LSTMs can be similarly obtained by replacing $\mathbf{G}$ with other common recurrent modules, such as vanilla RNNs or LSTMs \citep{hochreiter1997long}. It should be noted that the operators $\mathbf{F}, \mathbf{G}, \mathbf{K}$ can themselves have a multi--layer structure. \section{Experiments} We evaluate GDEs on a suite of different tasks. The experiments and their primary objectives are summarized below: \begin{itemize} \item Semi--supervised node classification on the static, standard benchmark datasets Cora, Citeseer, Pubmed \citep{sen2008collective}. We investigate the usefulness of the proposed method in a static setting via an ablation analysis that directly compares GCNs and analogue GCDEs solved with fixed--step and adaptive solvers. \item Trajectory extrapolation task on a synthetic multi--agent dynamical system.
We compare Neural ODEs and GDEs, providing a motivating example for the introduction of additional biases in the form of second--order models \citep{yildiz2019ode,massaroli2020dissecting}. \item Traffic forecasting on an undersampled version of the PeMS \citep{yu2018spatio} dataset. We measure the performance improvement obtained by a correct inductive bias on continuous dynamics and the robustness to irregular timestamps. \end{itemize} The code will be open--sourced after the review phase and is included in the submission.
\begin{figure*}[t]
\centering
\includegraphics[width=1\linewidth]{figures/traj.png}
\caption{Node embedding trajectories defined by a forward pass of GCDE--dpr5 on Cora, Citeseer and Pubmed. Color differentiates between node classes.}
\label{fig:node_emb_traj}
\end{figure*}
\begin{table}[b]
\footnotesize \centering \setlength\tabcolsep{4pt}
\begin{tabular}{lrrr}
\toprule
Model (NFE) & Cora & Citeseer & Pubmed\\
\midrule
GCN & $81.4 \pm 0.5\%$ & $70.9 \pm 0.5\%$ & $79.0 \pm 0.3\%$\\
GCN$^*$ & $82.8 \pm 0.3\%$ & $71.2 \pm 0.4\%$ & $79.5 \pm 0.4\%$ \\
\midrule
GCDE--rk2 (2) & $83.0 \pm 0.6\%$ & $72.3 \pm 0.5\%$ & $\textbf{79.9} \pm 0.3\%$\\
GCDE--rk4 (4) & $\textbf{83.8} \pm 0.5\%$ & $\textbf{72.5} \pm 0.5\%$ & $79.5 \pm 0.4\%$\\
GCDE--dpr5 \textbf{(158)} & $81.8 \pm 1.2\%$ & $68.3\pm 1.2\%$ & $78.5 \pm 0.7\%$\\
\bottomrule
\end{tabular}
\vspace{3mm}
\caption{Test results across 100 runs ($\mu$ and $\sigma$). All models have hidden dimension $64$.}
\label{tab:allresone}
\end{table}
\subsection{Transductive Node Classification} \paragraph{Experimental setup} The first task involves performing semi--supervised node classification on static graphs collected from the baseline datasets Cora, Pubmed and Citeseer \citep{sen2008collective}. The main goal of these experiments is to perform an ablation study on the source of possible performance advantages of the GDE framework in settings that do not involve continuous dynamical systems. The $L_2$ weight penalty is set to $5\cdot 10^{-4}$ on Cora, Citeseer and $10^{-3}$ on Pubmed as a strong regularizer due to the small size of the training set \citep{monti2017geometric}. We report mean and standard deviation across $100$ training runs. Since our experimental setup follows \citep{kipf2016semi} to allow for a fair comparison, other baselines present in recent GNN literature can be directly compared with Table \ref{tab:allresone}. \paragraph{Models and baselines} All convolution--based models are equipped with a latent dimension of $64$. We include results for the best performing vanilla GCN baseline presented in \citep{velivckovic2017graph}. To avoid flawed comparisons, we further evaluate an optimized version of GCN, GCN$^*$, sharing its exact architecture, as well as training and validation hyperparameters, with the GCDE models. We experimented with different numbers of layers for GCN$^*$ (2, 3, 4, 5, 6) and selected $2$, since it achieves the best results. The performance of the \textit{graph convolution differential equation} (GCDE) is assessed with both a fixed--step solver, Runge--Kutta \citep{runge1895numerische,kutta1901beitrag}, and an adaptive--step solver, Dormand--Prince \citep{dormand1980family}. The resulting models are denoted as GCDE--rk4 and GCDE--dpr5, respectively. We utilize the {\tt torchdiffeq} \citep{chen2018neural} PyTorch package to solve and backpropagate through the ODE solver.
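For reference, a minimal sketch of the GCDE forward pass solved with a fixed--step scheme follows (the {\tt GCNLayer} helper is a simplified stand--in for the graph convolution of \eqref{eq:gcn}; the full architecture, including input/output layers and dropout, is detailed in the supplementary material):
\begin{verbatim}
import torch
from torchdiffeq import odeint

class GCNLayer(torch.nn.Module):
    # simplified graph convolution: L H Theta (no skip connection)
    def __init__(self, L, dim):
        super().__init__()
        self.L = L
        self.lin = torch.nn.Linear(dim, dim, bias=False)

    def forward(self, H):
        return self.L @ self.lin(H)

class GCDEField(torch.nn.Module):
    # two-layer convolutional vector field F_GCN
    def __init__(self, L, dim):
        super().__init__()
        self.c1, self.c2 = GCNLayer(L, dim), GCNLayer(L, dim)

    def forward(self, s, H):
        return self.c2(torch.nn.functional.softplus(self.c1(H)))

L = torch.eye(10)                        # placeholder normalized Laplacian
f = GCDEField(L, 64)
H0 = torch.randn(10, 64)
s = torch.linspace(0., 1., 2)
H1 = odeint(f, H0, s, method='rk4')[-1]  # GCDE-rk4 forward pass
\end{verbatim}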
\paragraph{Continuous--depth models in static tasks} Ensuring a low error solution to the ODE parametrized by the model with adaptive--step solvers does not offer particular advantages in image classification tasks \citep{chen2018neural} compared to equivalent discrete models. While there is no reason to expect performance improvements solely from the transition away from discrete architectures, continuous--depth allows for the embedding of numerical ODE solvers in the forward pass. Multi--step architectures have previously been linked to ODE solver schemes \citep{lu2017beyond} and routinely outperform their single--step counterparts \citep{larsson2016fractalnet,lu2017beyond}. We investigate the performance gains obtained by employing the GDE framework in static settings as a straightforward approach to the embedding of numerical ODE solvers in GNNs. \paragraph{Results} The variants of GCDEs solved with fixed--step schemes are shown to outperform or match GCN$^*$ across all datasets, with the margin of improvement being highest on Cora and Citeseer. GCDE--rk2 and GCDE--rk4 are observed to provide the most significant accuracy increases in more densely connected graphs or with larger training sets. In particular, GCDE--rk4 outperforming GCDE--rk2 indicates that, given equivalent network architectures, higher order ODE solvers are generally more effective, provided the graph is dense enough to benefit from the additional computation. Additionally, training GCDEs with adaptive--step solvers naturally leads to deeper models than is possible with vanilla GCNs, whose performance greatly degrades with layer depth. However, the high \textit{number of function evaluations} (NFE) of GCDE--dpr5, necessary to stay within the ODE solver tolerances, causes the model to overfit and therefore generalize poorly. We visualize the first two components of GCDE--dpr5 node embedding trajectories in Figure \ref{fig:node_emb_traj}. The trajectories are divergent, suggesting a non--decreasing classification performance for GCDE models trained with longer integration times. We provide a complete visualization of accuracy curves in Appendix C.
\begin{wrapfigure}[10]{r}{0.42\textwidth}
\vspace{-5mm}
\centering
\includegraphics[width=1\linewidth]{figures/Trk4.pdf}
\caption{Cora accuracy of GCDE models with different integration times $s$.}
\label{fig:trk4}
\end{wrapfigure}
\paragraph{Resilience to integration time} For each integration time $S \in \{1, 5, 10\}$, we train $100$ GCDE--dpr5 models on Cora and report average metrics, along with $1$ standard deviation confidence intervals, in Figure \ref{fig:trk4}. GCDEs are shown to be resilient to changes in $S$; however, GCDEs with longer integration times require more training epochs to achieve comparable accuracy. This result suggests that, indeed, GDEs are immune to node oversmoothing \citep{oono2019graph}. \subsection{Multi--Agent Trajectory Extrapolation} \paragraph{Experimental setup} We evaluate GDEs and a collection of deep learning baselines on the task of extrapolating the dynamical behavior of a synthetic mechanical multi--particle system. Particles interact within a certain radius with a viscoelastic force. Outside the mutual interactions, captured by a time--varying adjacency matrix $\mathbf{A}_t$, the particles would follow a periodic motion.
The adjacency matrix $\mathbf{A}_t$ is computed along the trajectory as: \[ \mathbf{A}_t^{(ij)} = \left\{\begin{matrix*}[l] 1 & 2\|\mathbf{x}_i(t)-\mathbf{x}_j(t)\|\leq r\\ 0 & \text{otherwise} \end{matrix*}\right.~, \]
\begin{wrapfigure}[11]{r}{0.45\textwidth}
\vspace{-5mm}
\centering
\includegraphics[width=1\linewidth]{figures/traj_multiagent_better.pdf}
\caption{Example position and velocity trajectories of the multi--particle system.}
\label{fig:ma_traj}
\end{wrapfigure}
where $\mathbf{x}_i(t)$ is the position of node $i$ at time $t$. Therefore, $\mathbf{A}_t$ is symmetric, $\mathbf{A}_t = \mathbf{A}_t^\top$, and yields an undirected graph. The dataset is collected by integrating the system for $T = 5s$ with a fixed step--size of $dt = 1.95\cdot10^{-3}$ and is split evenly into a training and test set. We consider a system of $10$ particles. An example trajectory is shown in Figure \ref{fig:ma_traj}. All models are optimized to minimize the mean--squared--error (MSE) of 1--step predictions using Adam \citep{kingma2014adam} with constant learning rate $0.01$. We measure the test \textit{mean average percentage error} (MAPE) of model predictions in different extrapolation settings. \textit{Extrapolation steps} denotes the number of predictions each model $\bm\Phi$ has to perform without access to the nominal trajectory. This is achieved by recursively letting inputs at time $t$ be model predictions at time $t - \Delta t$, i.e., $\hat{\mathbf{Y}}_{t+\Delta t} = \bm\Phi (\hat{\mathbf{Y}}_{t})$, for a certain number of extrapolation steps, after which the model is fed the actual nominal state $\mathbf{X}$ and the cycle is repeated until the end of the test trajectory. For a robust comparison, we report mean and standard deviation across 10 seeded training and evaluation runs. Additional experimental details, including the analytical formulation of the dynamical system, are provided as supplementary material. \paragraph{Models and baselines} As the vector field depends only on the state of the system, available in full during training, the baselines do not include recurrent modules. We consider the following models: \begin{itemize} \item A 3--layer fully--connected neural network, referred to as \textit{Static}. No assumptions on the dynamics are made. \item A vanilla Neural ODE with the vector field parametrized by the same architecture as \textit{Static}. An ODE assumption on the dynamics is made. \item A 3--layer convolution GDE, GCDE. The dynamics are assumed to be determined by a blend of graphs and ODEs. \item A 3--layer, second--order \citep{yildiz2019ode,massaroli2020dissecting} GCDE, referred to as \textit{GCDE--II}. GCDE assumptions in addition to second--order ODE dynamics. \end{itemize} A grid hyperparameter search on the number of layers, ODE solver tolerances, and learning rate is performed to optimize \textit{Static} and Neural ODEs. We use the same hyperparameters for GDEs.
\begin{wrapfigure}[13]{r}{0.45\textwidth}
\vspace{-5mm}
\centering
\includegraphics[width=1\linewidth]{figures/MA_MC.pdf}
\caption{Test extrapolation MAPE averaged across 10 experiments. Shaded area and error bars indicate 1--standard deviation intervals.}
\label{fig:mape}
\end{wrapfigure}
\paragraph{Results} Figure \ref{fig:mape} shows the growth of the test MAPE as the number of extrapolation steps is increased. \textit{Static} fails to extrapolate beyond the 1--step setting seen during training. Neural ODEs overfit spurious particle interaction terms and their error rapidly grows as the number of extrapolation steps is increased.
GCDEs, on the other hand, are able to effectively leverage relational information to track the system: we provide a complete visualization of extrapolation trajectory comparisons in the supplementary material. Lastly, GCDE--IIs outperform first--order GCDEs, as their structure inherently encodes the relationship between positions and velocities present in the observed dynamical system. \subsection{Traffic Forecasting} \paragraph{Experimental setup} We evaluate the effectiveness of autoregressive GDE models on forecasting tasks by performing a series of experiments on the established PeMS traffic dataset. We follow the setup of \citep{yu2018spatio}, in which a subsampled version of PeMS, PeMS7(M), is obtained via selection of 228 sensor stations and aggregation of their historical speed data into regular 5 minute frequency time series. We construct the adjacency matrix $\mathbf{A}$ by thresholding the Euclidean distance between observation stations, i.e., when two stations are closer than the threshold distance, an edge between them is included. The threshold is set to the 40$^{\text{th}}$ percentile of the station distances distribution. To simulate a challenging environment with missing data and irregular timestamps, we undersample the time series by performing independent Bernoulli trials on each data point. Results for 3 increasingly challenging experimental setups are provided: undersampling with $30\%$, $50\%$ and $70\%$ of the data removed. In order to provide a robust evaluation of model performance in regimes with irregular data, the testing is repeated $20$ times per model, each with a different undersampled version of the test dataset. We collect the \textit{root mean square error} (RMSE) and MAPE. More details about the chosen metrics and data are included as supplementary material. \paragraph{Models and baselines} In order to measure the performance gains obtained by GDEs in settings with data generated by continuous time systems, we employ a GCDE--GRU--dopri5 as well as its discrete counterpart GCGRU \citep{zhao2018deep}. To contextualize the effectiveness of introducing graph representations, we include the performance of GRUs, since they do not directly utilize structural information of the system in predicting outputs. In contrast to GCDE--GRU, neither baseline model has an innate mechanism for handling timestamp information. For a fair comparison, we include timestamp differences between consecutive samples and \textit{sine--encoded} \citep{petnehazi2019recurrent} absolute time information as additional features. All models receive an input sequence of $5$ graphs to perform the prediction.
\begin{table*}[b]
\tiny \centering \setlength\tabcolsep{3pt}
\begin{tabular}{lrrrrrrrr}
\toprule
Model & MAPE$_{30\%}$ & RMSE$_{30\%}$ & MAPE$_{50\%}$ & RMSE$_{50\%}$ & MAPE$_{70\%}$ & RMSE$_{70\%}$ & MAPE$_{100\%}$ & RMSE$_{100\%}$\\
\midrule
GRU & $27.14 \pm 0.45$ & $13.25 \pm 0.11$ & $27.08 \pm 0.26$ & $13.22 \pm 0.07$ & $27.24 \pm 0.19$ & $13.28 \pm 0.05$ & $27.20$ & $13.29$\\
GCGRU & $23.60 \pm 0.38$ & $11.97 \pm 0.03$ & $22.86 \pm 0.22$ & $11.78 \pm 0.06$ & $21.33 \pm 0.16$ & $11.20 \pm 0.04$ & $20.92$ & $10.87$\\
GCDE--GRU & $\mathbf{22.95} \pm 0.37$ & $\mathbf{11.67} \pm 0.10$ & $\mathbf{21.25} \pm 0.21$ & $\mathbf{11.04} \pm 0.05$ & $\mathbf{20.94} \pm 0.14$ & $\mathbf{10.95} \pm 0.04$ & $\mathbf{20.46}$ & $\mathbf{10.77}$\\
\bottomrule
\end{tabular}
\caption{Forecasting test results across 20 runs (mean and standard deviation).
MAPE$_i$ indicates $i\%$ undersampling of the test set.} \label{tab:traffic_preds} \end{table*} \paragraph{Results} Non--constant differences between timestamps result in a challenging forecasting task for a single model, since the average prediction horizon changes drastically over the course of training and testing. Traffic systems are intrinsically dynamic and continuous in nature and, therefore, a model able to track continuous underlying dynamics is expected to offer improved performance. Since GCDE--GRUs and GCGRUs are designed to match in structure, we can measure this performance increase from the results shown in Table \ref{tab:traffic_preds}. GCDE--GRUs outperform GCGRUs and GRUs in all undersampling regimes. Additional details and prediction visualizations are included in Appendix C. \section{Related work} There exists a concurrent line of work \citep{xhonneux2019continuous} introducing a GNN variant evaluated on static node classification tasks, where the output is the analytical solution of a linear ODE. \cite{sanchez2019hamiltonian} proposes using graph networks (GNs) \citep{battaglia2018relational} and ODEs to track Hamiltonian functions, whereas \citep{deng2019continuous} introduces a GNN version of continuous normalizing flows \citep{chen2018neural,grathwohl2018ffjord} for generative modeling, extending \citep{liu2019graph}. Our goal is to develop a unified system--theoretic framework for continuous--depth GNNs covering the main variants of static and spatio--temporal GNN models. Our work provides extensive experimental evaluations on both static as well as dynamic tasks, with the primary aim of uncovering the sources of performance improvement of GDEs in each setting. \section{Discussion} \paragraph{Unknown or dynamic topology} Several lines of work concerned with learning the graph structure directly from data exist, either by inferring the adjacency matrix within a probabilistic framework \citep{kipf2018neural} or by using a soft--attention \citep{vaswani2017attention} mechanism \citep{choi2017gram,li2018adaptive,wu2019graph}. In particular, the latter represents a commonly employed approach to the estimation of a dynamic adjacency matrix in spatio--temporal settings. Due to the algebraic nature of the relation between the attention operator and the node features, GDEs are compatible with its use inside the GNN layers parametrizing the vector field. Thus, if an optimal adaptive graph representation $\mathbf{S}(s, \mathbf{H})$ is computed through some attentive mechanism, standard convolution GDEs can be replaced by $\dot\mathbf{H} = \sigma\left(\mathbf{S}\mathbf{H}\bm\Theta\right).$ \paragraph{Addition or removal of nodes} GDE variants operating on sequences of dynamically changing graphs can, without changes to the formulation, directly accommodate the addition or removal of nodes, as long as the number of nodes remains constant during each flow. In fact, the size of the parameter matrix $\bm\Theta$ depends exclusively on the node feature dimension, resulting in resilience to a varying number of nodes. \section{Conclusion} In this work we introduce \textit{graph neural ordinary differential equations} (GDE), the continuous--depth counterpart to \textit{graph neural networks} (GNN), where the inputs are propagated through a continuum of GNN layers. The GDE formulation is general, as it can be adapted to include many static and autoregressive GNN models.
GDEs are designed to offer a data--driven modeling approach for \textit{dynamical networks}, whose dynamics are defined by a blend of discrete topological structures and differential equations. In sequential forecasting problems, GDEs can accommodate irregular timestamps and track the underlying continuous dynamics, whereas in static settings they offer computational advantages by allowing for the embedding of black--box numerical solvers in their forward pass. GDEs have been evaluated on both static and dynamic tasks and have been shown to outperform their discrete counterparts. \newpage \bibliographystyle{abbrvnat} \section{Graph Neural Ordinary Differential Equations} \paragraph{Notation} Let $\Nat$ be the set of natural numbers and $\R$ the set of reals. Scalars are indicated as lowercase letters, vectors as bold lowercase, matrices and tensors as bold uppercase, and sets with calligraphic letters. Indices of arrays and matrices are reported as superscripts in round brackets. Let $\V$ be a finite set with $|\V| = n$ whose elements are called \textit{nodes} and let $\E$ be a finite set of tuples of $\V$ elements. Its elements are called \textit{edges} and are such that $\forall e_{ij}\in\E,~e_{ij} = (v_i,v_j)$ and $v_i,v_j\in\V$. A graph $\G$ is defined as the collection of nodes and edges, i.e. $\G := (\V,\E)$. The \textit{adjacency} matrix $\mathbf{A}\in\R^{n\times n}$ of a graph is defined as \[ \mathbf{A}^{(ij)} = \left\{ \begin{matrix*}[l] 1 & e_{ij}\in\E\\ 0 & e_{ij}\not\in\E \end{matrix*} \right.~. \] If $\G$ is an \textit{attributed graph}, the \textit{feature vector} of each $v\in\V$ is $\mathbf{x}_v\in\R^{n_x}$. All the feature vectors are collected in a matrix $\mathbf{X}\in\R^{n\times n_x}$. Note that, often, the features of graphs exhibit temporal dependency, i.e. $\mathbf{X} := \mathbf{X}_t$. \clearpage \subsection{General Static Formulation} For clarity and as an easily accessible reference, we include below a general formulation table for the static case.
\begin{CatchyBox}{Graph Neural Ordinary Differential Equations}
\begin{minipage}[h]{0.45\linewidth}%
\begin{equation*}\label{eq:1}%
{ \left\{ \begin{aligned} \dot\mathbf{H}(s) &= \mathbf{F}_{\G}\left( s, \mathbf{H}(s), \bm\Theta\right)\\ \mathbf{H}(0) &= \mathbf{X}_e\\ \mathbf{Y}(s) &= \mathbf{K}(\mathbf{H}(s)) \end{aligned} \right.~~ s\in S}
\end{equation*}
\hfill
\end{minipage}
\hfill
\begin{minipage}[h]{.51\linewidth}\small
\centering
\begin{tabular}{r|c|c}
Input & $\mathbf{X}$ & $\R^{n\times n_x}$\\\hline
Embedded Input & $\mathbf{X}_e$ & $\R^{n\times h}$\\\hline
Output & $\mathbf{Y}(s)$ & $\R^{n\times n_y}$\\\hline
Graph & $\G$ & \\\hline
Node features & $\mathbf{H}$ & $\R^{n\times h}$\\\hline
Parameters & $\bm\Theta$ & $\R^{h\times h}$\\\hline
Neural Vector Field & $\mathbf{F}_{\G}$ & $\R^h \rightarrow \R^h$\\\hline
Output Network & $\mathbf{K}$ & $\R^h \rightarrow \R^{n_y}$\\
\end{tabular}
\end{minipage}
\end{CatchyBox}
Note that the general formulation provided in \eqref{eq:hybrid} can similarly serve as a reference for the spatio--temporal case. \subsection{Computational Overhead} As is the case for other models sharing the continuous--depth formulation \citep{chen2018neural}, the computational overhead required by GDEs depends mainly on the numerical methods utilized to solve the differential equations. We can define two general cases for \textit{fixed--step} and \textit{adaptive--step} solvers.
\paragraph{Fixed--step} In the case of fixed--step solvers of \textit{k--th} order, e.g., \textit{Runge--Kutta--k} \citep{runge1895numerische}, the time complexity is $O(nk)$, where $n := S/\epsilon$ defines the number of steps necessary to cover $[0, S]$ in fixed steps of size $\epsilon$. \paragraph{Adaptive--step} For general adaptive--step solvers, the computational overhead ultimately depends on the error tolerances. While the worst--case computation is not bounded \citep{dormand1980family}, a maximum number of steps can usually be set algorithmically. \subsection{Additional GDEs} \paragraph{Message passing GDEs} Let us consider a single node $v\in\V$ and define the set of neighbors of $v$ as $\N(v): = \{u\in\V~:~(v,u)\in\E \lor (u,v)\in\E\}$. Message passing neural networks (MPNNs) perform a spatial convolution on node $v$ as \begin{equation}\label{eq:MPNN} \mathbf{h}^{(v)}{(s+1)} = \mathbf{u}\left[\mathbf{h}^{(v)}(s), \sum_{u\in\N(v)}\mathbf{m}\left({\mathbf{h}^{(v)}(s),\mathbf{h}^{(u)}(s)}\right)\right], \end{equation} where, in general, $\mathbf{h}^{(v)}(0) = \mathbf{x}_v$, while $\mathbf{u}$ and $\mathbf{m}$ are functions with trainable parameters. For clarity of exposition, let $\mathbf{u}(\mathbf{x},\mathbf{y}) := \mathbf{x} + \mathbf{g}(\mathbf{y})$, where $\mathbf{g}$ is the actual parametrized function. Then (\ref{eq:MPNN}) becomes \begin{equation} \mathbf{h}^{(v)}{(s+1)} = \mathbf{h}^{(v)}(s) + \mathbf{g}\left[\sum_{u\in\N(v)}\mathbf{m}\left({\mathbf{h}^{(v)}(s),\mathbf{h}^{(u)}(s)}\right)\right], \end{equation} and its continuous--depth counterpart, the \textit{graph message passing differential equation} (GMDE), is: \begin{equation*} \dot\mathbf{h}^{(v)}{(s)} = \mathbf{f}^{(v)}_{\tt MPNN}(\mathbf{H}, \bm\Theta) := \mathbf{g}\left[\sum_{u\in\N(v)}\mathbf{m}\left({\mathbf{h}^{(v)}(s),\mathbf{h}^{(u)}}(s)\right)\right]. \end{equation*} \paragraph{Attention GDEs} Graph attention networks (GATs) \citep{velivckovic2017graph} perform convolution on node $v$ as \begin{equation} \mathbf{h}^{(v)}{(s+1)} = \sigma\left(\sum_{u\in\N(v)\cup v}{\alpha_{vu}\bm\Theta(s)\mathbf{h}^{(u)}}(s)\right). \end{equation} Similarly to GCNs, a \textit{virtual} skip connection can be introduced, allowing us to define the \textit{graph attention differential equation} (GADE): \begin{equation*} \dot\mathbf{h}^{(v)}{(s)} = \mathbf{f}^{(v)}_{\tt GAT}(\mathbf{H},\bm\Theta):= \sigma\left(\sum_{u\in\N(v)\cup v}{\alpha_{vu}\bm\Theta\mathbf{h}^{(u)}}(s)\right), \end{equation*} where the $\alpha_{vu}$ are attention coefficients, which can be computed following \citep{velivckovic2017graph}. \section{Spatio--Temporal GDEs} We include a complete description of GCGRUs to clarify the model used in our experiments.
\begin{figure*}[t]
\centering
\includegraphics[width=1\linewidth]{figures/cora_test_acc.pdf}
\includegraphics[width=1\linewidth]{figures/citeseer_test_acc.pdf}
\caption{Test accuracy curves on Cora and Citeseer (100 experiments). Shaded area indicates the 1 standard deviation interval.}
\label{fig:curves}
\end{figure*}
\subsection{GCGRU Cell} Following GCGRUs \citep{zhao2018deep}, we perform an instantaneous jump of $\mathbf{H}$ at each time $t_{k}$ using the next input features $\mathbf{X}_{t_{k}}$. Let $\mathbf{L}_{\G_{t_k}}$ be the graph Laplacian of graph $\G_{t_k}$, which can be computed in several ways \citep{bruna2013spectral,defferrard2016convolutional,levie2018cayleynets, zhuang2018dual}.
Then, let \begin{equation} \begin{aligned} \mathbf{Z} &:=\sigma\left(\mathbf{L}_{\G_{t_k}}\mathbf{X}_{t_k}\bm\Theta_{xz} + \mathbf{L}_{\G_{t_k}}\mathbf{H}\bm\Theta_{hz}\right), \\ \mathbf{R} &:=\sigma\left(\mathbf{L}_{\G_{t_k}}\mathbf{X}_{t_k}\bm\Theta_{xr} + \mathbf{L}_{\G_{t_k}}\mathbf{H}\bm\Theta_{hr}\right), \\ \tilde{\mathbf{H}} &:=\tanh \left(\mathbf{L}_{\G_{t_k}}\mathbf{X}_{t_k}\bm\Theta_{xh} + \mathbf{L}_{\G_{t_k}}\left(\mathbf{R} \odot \mathbf{H}\right)\bm\Theta_{hh}\right). \end{aligned} \end{equation} Finally, the \textit{post--jump} node features are obtained as \begin{equation} \begin{aligned} \mathbf{H}^+ &={\tt GCGRU}(\mathbf{H},\mathbf{X}_t):=\mathbf{Z} \odot \mathbf{H}+(\mathbb{1}-\mathbf{Z}) \odot \tilde{\mathbf{H}}, \end{aligned} \end{equation} where $\bm\Theta_{xz},~\bm\Theta_{hz},~\bm\Theta_{xr},~\bm\Theta_{hr},~\bm\Theta_{xh},~\bm\Theta_{hh}$ are matrices of trainable parameters, $\sigma$ is the standard sigmoid activation, and $\mathbb{1}$ is the all--ones matrix of suitable dimensions. \section{Additional experimental details} \paragraph{Computational resources} We carried out all experiments on a cluster of 4x12GB NVIDIA\textsuperscript{\textregistered} Titan Xp GPUs and CUDA 10.1. The models were trained on GPU. \subsection{Node Classification} \paragraph{Training hyperparameters} All models are trained for $2000$ epochs using Adam~\citep{kingma2014adam} with learning rate $lr = 10^{-3}$ on Cora, Citeseer and $lr = 10^{-2}$ on Pubmed due to its training set size. The reported results are obtained by selecting the model with the lowest validation loss after convergence (i.e., in the epoch range $1000$ -- $2000$). Test metrics are not utilized in any way during the experimental setup. For the experiments testing resilience to integration time changes, we set a higher learning rate for all models, i.e., $lr = 10^{-2}$, to reduce the number of epochs necessary to converge. \paragraph{Architectural details} \textit{SoftPlus} is used as the activation for GDEs. Smooth activations have been observed to reduce the stiffness of the ODE \citep{chen2018neural} and therefore the number of function evaluations (NFE) required for a solution that is within acceptable tolerances. All the other activation functions are \textit{rectified linear units} (ReLU). The exact input and output dimensions for the GCDE architectures are reported in Table \ref{tab:gde_arch}. The vector field $\mathbf{F}$ of GCDEs--rk2 and GCDEs--rk4 is parameterized by two GCN layers. GCDEs--dopri5 shares the same structure without GDE--2 (GCN). Input GCN layers use a dropout rate of $0.6$, whereas the GCN layers parametrizing $\mathbf{F}$ use $0.9$.
\begin{table}
\centering \setlength\tabcolsep{3pt}
\begin{tabular}[t]{lrrr}
\toprule
Layer & Input dim. & Output dim. & Activation \\
\midrule
GCN--in & dim. in & $64$ & ReLU \\
GDE--1 (GCN) & $64$ & $64$ & Softplus \\
GDE--2 (GCN) & $64$ & $64$ & None \\
GCN--out & $64$ & dim. out & None \\
\bottomrule
\end{tabular}
\vspace{2mm}
\caption{General architecture for GCDEs on node classification tasks. GCDEs applied to different datasets share the same architecture. The vector field $\mathbf{F}$ is parameterized by two GCN layers. GCDEs--dopri5 shares the same structure without GDE--2 (GCN).}
\label{tab:gde_arch}
\end{table}
\begin{figure*}[t]
\centering
\includegraphics[width=1\linewidth]{figures/adj.pdf}
\vspace{-5mm}
\caption{Snapshots of the evolution of the adjacency matrix $\mathbf{A}_t$ throughout the dynamics of the multi--particle system.
Yellow indicates the presence of an edge and therefore a reciprocal force acting on the two bodies.} \label{fig:adj} \end{figure*} \subsection{Multi--Agent System Dynamics} \paragraph{Dataset} Let us consider a planar multi--agent system with states $\mathbf{x}_i$ ($i = 1,\dots,n$) and second--order dynamics: \begin{align*} \ddot \mathbf{x}_i &= -\mathbf{x}_i - \sum_{j\in\N_i}\mathbf{f}_{ij}(\mathbf{x}_i,\mathbf{x}_j,\dot\mathbf{x}_i,\dot\mathbf{x}_j), \end{align*} where \begin{align*} &\mathbf{f}_{ij} = -\left[\alpha\left( \|\mathbf{x}_i - \mathbf{x}_j\| - r \right) + \beta\frac{\langle\dot\mathbf{x}_i - \dot\mathbf{x}_j, \mathbf{x}_i - \mathbf{x}_j\rangle}{\|\mathbf{x}_i - \mathbf{x}_j\|}\right]\mathbf{n}_{ij},\\ &\mathbf{n}_{ij} = \frac{\mathbf{x}_i - \mathbf{x}_j}{\|\mathbf{x}_i - \mathbf{x}_j\|},\quad \alpha,~\beta,~r > 0, \end{align*} and \[ \quad \N_i := \left\{j:2\|\mathbf{x}_i-\mathbf{x}_j\|\leq r \land j\not = i\right\}. \] The force $\mathbf{f}_{ij}$ resembles that of a spatial spring with drag interconnecting the two agents. The term $-\mathbf{x}_i$ is used to stabilize the trajectory and avoid the ``explosion'' of the phase space. Note that $\mathbf{f}_{ij} = -\mathbf{f}_{ji}$. The adjacency matrix $\mathbf{A}_t$ is computed along a trajectory as \[ \mathbf{A}_t^{(ij)} = \left\{\begin{matrix*}[l] 1 & 2\|\mathbf{x}_i(t)-\mathbf{x}_j(t)\|\leq r\\ 0 & \text{otherwise} \end{matrix*}\right., \] which is indeed symmetric, $\mathbf{A}_t = \mathbf{A}_t^\top$, and thus yields an undirected graph. Figure \ref{fig:adj} visualizes an example trajectory of $\mathbf{A}_t$.
\begin{table*}
\centering \setlength\tabcolsep{3pt}
\begin{tabular}[t]{lrrrrrrr}
\toprule
Model & MAPE$_{1}$ & MAPE$_{3}$ & MAPE$_{5}$ & MAPE$_{10}$ & MAPE$_{15}$ & MAPE$_{20}$ & MAPE$_{50}$ \\
\midrule
Static & $26.12$ & $160.56$ & $197.20$ & $235.21$ & $261.56$ & $275.60$ & $360.39$ \\
Neural ODE & $26.12$ & $52.26$ & $92.31$ & $156.26$ & $238.14$ & $301.85$ & $668.47$ \\
GDE & $13.53$ & $15.22$ & $18.76$ & $27.76$ & $33.90$ & $42.22$ & $77.64$ \\
GDE--II & $13.46$ & $14.75$ & $17.81$ & $27.77$ & $32.28$ & $40.64$ & $73.75$ \\
\bottomrule
\end{tabular}
\caption{Mean MAPE results across the 10 multi--particle dynamical system experiments. MAPE$_i$ indicates results for $i$ extrapolation steps on the full test trajectory.}
\label{tab:mape_table}
\end{table*}
\begin{wrapfigure}[25]{r}{0.45\textwidth}
\centering
\includegraphics[scale=0.4]{figures/ma_extrapo.pdf}
\caption{Test extrapolation, $5$ steps. Trajectory predictions of Neural ODEs and GDEs. The extrapolation is terminated after 5 steps and the nominal state is fed to the model.}
\label{fig:extrapo_traj}
\end{wrapfigure}
We collect a single rollout with $T = 5$, $dt = 1.95\cdot10^{-3}$ and $n = 10$. The particle radius is set to $r = 1$. \paragraph{Architectural details} Node feature vectors are $4$ dimensional, corresponding to the dimension of the state, i.e., position and velocity. Neural ODEs and \textit{Static} share an architecture made up of 3 fully--connected layers: $4n$, $8n$, $8n$, $4n$, where $n = 10$ is the number of nodes. The last layer is linear. We evaluated different hidden layer dimensions: $8n$, $16n$, $32n$, and found $8n$ to be the most effective. Similarly, the architecture of first--order GCDEs is composed of 3 GCN layers: $4$, $16$, $16$, $4$. Second--order GCDEs, on the other hand, are augmented by $4$ dimensions: $8$, $32$, $32$, $8$.
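As a reference for the \textit{Dataset} paragraph above, a minimal NumPy sketch of the data--generating dynamics follows (forward Euler is used purely for illustration, since the integration scheme is not specified; $\alpha = \beta = 1$ are assumed values, while $r = 1$, $dt$, and $T$ match the reported setup):
\begin{verbatim}
import numpy as np

def accel(x, v, alpha=1.0, beta=1.0, r=1.0):
    # acceleration of each agent: -x_i - sum_j f_ij
    # (alpha and beta are assumed values; not reported in the text)
    n = len(x)
    a = -x.copy()                         # stabilizing term -x_i
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            d = x[i] - x[j]
            dist = np.linalg.norm(d)
            if 0.0 < 2.0 * dist <= r:     # j is a neighbor of i
                nij = d / dist
                mag = alpha * (dist - r) \
                    + beta * np.dot(v[i] - v[j], nij)
                a[i] += mag * nij         # -f_ij = +[...] n_ij
    return a

n, dt, T = 10, 1.95e-3, 5.0
rng = np.random.default_rng(0)
x, v = rng.standard_normal((n, 2)), np.zeros((n, 2))
for _ in range(int(T / dt)):              # forward Euler, for illustration
    x, v = x + dt * v, v + dt * accel(x, v)
\end{verbatim}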
We experimented with different ways of encoding the adjacency matrix $\mathbf{A}$ information into Neural ODEs and \textit{Static} but found that in all cases it led to worse performance. \paragraph{Additional results} We report in Figure~\ref{fig:extrapo_traj} test extrapolation predictions of $5$ steps for GDEs and the various baselines. Neural ODEs fail to track the system, particularly in regions of the state space where interaction forces strongly affect the dynamics. GDEs, on the other hand, closely track both positions and velocities of the particles. \subsection{Traffic Forecasting} \paragraph{Dataset and metrics} The timestamp differences between consecutive graphs in the sequence vary due to undersampling. The distribution of timestamp deltas (5 minute units) for the three different experiment setups (30\%, 50\%, 70\% undersampling) is shown in Figure \ref{fig:traffic_delta_ts}. As a result, the GRU takes 230--dimensional vector inputs (228 sensor observations + 2 additional features) at each sequence step. Both GCGRU and GCDE--GRU receive graph inputs with 3--dimensional node features (observation + 2 additional features). The additional time features are excluded from the loss computations. We include MAPE and RMSE test measurements, defined as follows:
\begin{figure*}[t]
\centering
\includegraphics{figures/traffic_train.pdf}
\caption{Traffic data training results of 50\% undersampling.}
\label{fig:traffic_train}
\end{figure*}
\begin{figure*}[h!]
\centering
\begin{tikzpicture}
\definecolor{color0}{rgb}{0.12156862745098,0.466666666666667,0.705882352941177}
\begin{groupplot}[group style={group size=3 by 1}]
\nextgroupplot[
width = 4.5 cm, height = 3cm, tick align=outside, tick pos=left, title={Keep probability 30\%}, x grid style={white!69.0196078431373!black}, xlabel={Delta Time Stamp}, xmin=0, xmax=22, xtick style={color=black}, y grid style={white!69.0196078431373!black}, ylabel={Frequency}, ymin=0, ymax=0.916538592896175, ytick style={color=black}
]
\draw[draw=none,fill=color0] (axis cs:1,0) rectangle (axis cs:3,0.253164556962025);
\draw[draw=none,fill=color0] (axis cs:3,0) rectangle (axis cs:5,0.124472573839662);
\draw[draw=none,fill=color0] (axis cs:5,0) rectangle (axis cs:7,0.0638185654008439);
\draw[draw=none,fill=color0] (axis cs:7,0) rectangle (axis cs:9,0.0313818565400844);
\draw[draw=none,fill=color0] (axis cs:9,0) rectangle (axis cs:11,0.01292194092827);
\draw[draw=none,fill=color0] (axis cs:11,0) rectangle (axis cs:13,0.00659282700421941);
\draw[draw=none,fill=color0] (axis cs:13,0) rectangle (axis cs:15,0.00395569620253165);
\draw[draw=none,fill=color0] (axis cs:15,0) rectangle (axis cs:17,0.00210970464135021);
\draw[draw=none,fill=color0] (axis cs:17,0) rectangle (axis cs:19,0.000791139240506329);
\draw[draw=none,fill=color0] (axis cs:19,0) rectangle (axis cs:21,0.000791139240506329);
\nextgroupplot[
width = 4.5cm, height = 3cm, tick align=outside, tick pos=left, title={Keep probability 50\%}, x grid style={white!69.0196078431373!black}, xlabel={Delta Time Stamp}, xmin=0.45, xmax=12.55, xtick style={color=black}, y grid style={white!69.0196078431373!black}, ymin=0, ymax=0.916538592896175, ytick style={color=black}
]
\draw[draw=none,fill=color0] (axis cs:1,0) rectangle (axis cs:2.1,0.682529743268629);
\draw[draw=none,fill=color0] (axis cs:2.1,0) rectangle (axis cs:3.2,0.117265327033643);
\draw[draw=none,fill=color0] (axis cs:3.2,0) rectangle (axis cs:4.3,0.0537940456537826);
\draw[draw=none,fill=color0] (axis cs:4.3,0) rectangle (axis cs:5.4,0.0281778334376957);
\draw[draw=none,fill=color0] (axis cs:5.4,0) rectangle (axis cs:6.5,0.0128081061080435);
\draw[draw=none,fill=color0] (axis cs:6.5,0) rectangle (axis cs:7.6,0.00626174076393237);
\draw[draw=none,fill=color0] (axis cs:7.6,0) rectangle (axis cs:8.7,0.00313087038196618);
\draw[draw=none,fill=color0] (axis cs:8.7,0) rectangle (axis cs:9.8,0.00313087038196619);
\draw[draw=none,fill=color0] (axis cs:9.8,0) rectangle (axis cs:10.9,0.00113849832071498);
\draw[draw=none,fill=color0] (axis cs:10.9,0) rectangle (axis cs:12,0.000853873740536233);
\nextgroupplot[
width = 4.5cm, height = 3cm, tick align=outside, tick pos=left, title={Keep probability 70\%}, x grid style={white!69.0196078431373!black}, xlabel={Delta Time Stamp}, xmin=0.6, xmax=9.4, xtick style={color=black}, y grid style={white!69.0196078431373!black}, ymin=0, ymax=0.916538592896175, ytick style={color=black}
]
\draw[draw=none,fill=color0] (axis cs:1,0) rectangle (axis cs:1.8,0.872893897996357);
\draw[draw=none,fill=color0] (axis cs:1.8,0) rectangle (axis cs:2.6,0.255009107468124);
\draw[draw=none,fill=color0] (axis cs:2.6,0) rectangle (axis cs:3.4,0.0836748633879781);
\draw[draw=none,fill=color0] (axis cs:3.4,0) rectangle (axis cs:4.2,0.0281762295081967);
\draw[draw=none,fill=color0] (axis cs:4.2,0) rectangle (axis cs:5,0);
\draw[draw=none,fill=color0] (axis cs:5,0) rectangle (axis cs:5.8,0.00711520947176684);
\draw[draw=none,fill=color0] (axis cs:5.8,0) rectangle (axis cs:6.6,0.00199225865209472);
\draw[draw=none,fill=color0] (axis cs:6.6,0) rectangle (axis cs:7.4,0.000569216757741348);
\draw[draw=none,fill=color0] (axis cs:7.4,0) rectangle (axis cs:8.2,0.000284608378870674);
\draw[draw=none,fill=color0] (axis cs:8.2,0) rectangle (axis cs:9,0.000284608378870674);
\end{groupplot}
\end{tikzpicture}
\caption{Distribution of deltas between timestamps $t_{k+1} - t_k$ in the undersampled dataset. The time scale of required predictions varies greatly during the task.}
\label{fig:traffic_delta_ts}
\end{figure*}
\begin{equation} \text{MAPE}(\mathbf{y}, \hat{\mathbf{y}}) = \frac{100 \%}{pT}\norm{\sum_{t=1}^{T}(\mathbf{y}_t - \hat{\mathbf{y}}_t) \oslash \mathbf{y}_t}_1, \end{equation} where $\mathbf{y}_t, \hat{\mathbf{y}}_t \in \mathbb R^p$ denote the vectorized targets and model predictions, respectively, while $\oslash$ and $\norm{\cdot}_1$ denote Hadamard (element--wise) division and the 1--norm of a vector. \begin{align*} \text{RMSE}(\mathbf{y}, \hat{\mathbf{y}}) &= \frac{1}{p}\norm{\sqrt{\frac{1}{T}\sum_{t=1}^{T} (\mathbf{y}_t - \hat{\mathbf{y}}_t)^2}}_1, \end{align*} where $(\cdot)^2$ and $\sqrt{\cdot}$ denote the element--wise square and square root of the input vector, respectively.
\begin{figure*}[t]
\centering
\input{figures/traffic_preds.tex}
\caption{Traffic data prediction results of 50\% undersampling. GCDE--GRUs are able to evolve the latents between timestamps and provide a more accurate fit.}
\label{fig:traffic_preds}
\end{figure*}
\paragraph{Architectural details} We employed two baseline models to contextualize the importance of key components of GCDE--GRU. \textit{GRU} architectures are equipped with 1 GRU layer with hidden dimension 50 and a 2--layer fully--connected head to map latents to predictions. \textit{GCGRUs} employ a GCGRU layer with hidden dimension 46 and a 2--layer fully--connected head.
Lastly, GCDE--GRU shares the same architecture as GCGRU, with the addition of the flow $\mathbf{F}$ tasked with evolving the hidden features between arrival times. $\mathbf{F}$ is parametrized by 2 GCN layers, one with a tanh activation and the second without activation. ReLU is used as the general activation function. \paragraph{Training hyperparameters} All models are trained for 40 epochs using Adam~\citep{kingma2014adam} with $lr = 10^{-2}$. We schedule $lr$ using the cosine annealing method \citep{loshchilov2016sgdr} with $T_0=10$. The optimization is carried out by minimizing the \textit{mean square error} (MSE) loss between predictions and the corresponding targets. \paragraph{Additional results} Training curves of the models are presented in Fig.~\ref{fig:traffic_train}. All models achieved an RMSE of nearly 13 during training and fit the dataset. However, due to the lack of dedicated spatial modeling modules, GRUs were unable to generalize to the test set and resulted in a mean--value prediction.
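To make the preceding architectural description concrete, a minimal PyTorch sketch of the GCGRU jump defined in the GCGRU--cell section follows (randomly initialized parameter matrices; the Laplacian {\tt L} is assumed precomputed by one of the constructions cited above):
\begin{verbatim}
import torch

class GCGRUCell(torch.nn.Module):
    # discrete jump H+ = GCGRU(H, X) following the GCGRU equations
    def __init__(self, nx, nh):
        super().__init__()
        p = lambda i, o: torch.nn.Parameter(0.01 * torch.randn(i, o))
        self.Txz, self.Thz = p(nx, nh), p(nh, nh)   # update gate Z
        self.Txr, self.Thr = p(nx, nh), p(nh, nh)   # reset gate R
        self.Txh, self.Thh = p(nx, nh), p(nh, nh)   # candidate state

    def forward(self, L, X, H):
        Z = torch.sigmoid(L @ X @ self.Txz + L @ H @ self.Thz)
        R = torch.sigmoid(L @ X @ self.Txr + L @ H @ self.Thr)
        Ht = torch.tanh(L @ X @ self.Txh + L @ (R * H) @ self.Thh)
        return Z * H + (1.0 - Z) * Ht                # post-jump features
\end{verbatim}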
\section{Introduction} Photon upconversion \cite{Bloembergen1959, Brown1969, Auzel1984, Ovsyakin1966}, the photoluminescence process in which the emission wavelength is shorter than the excitation wavelength, has exciting applications in many fields such as bio-imaging \cite{Li2006, Chatterjee2010}, anti-counterfeiting \cite{Liu2011, You2015}, and not least in improving the efficiency of solar cells \cite{Shalav2005, Goldschmidt2015, Balling2018}. Different mechanisms are known to be able to upconvert light, but among the most promising is upconversion from trivalent lanthanide ions embedded in a glass or crystalline host material. The multitude of 4f-4f electronic transitions in the lanthanide series allows for a wide range of upconversion wavelengths, and the real intermediate states provide the opportunity to upconvert incoherent and less intense light. However, the active 4f-4f transitions in lanthanide ions are dipole forbidden, causing low absorption and thus low external quantum efficiency \cite{Auzel2004}, which remains an obstacle, especially for applications in solar cells -- the main focus of this investigation. Several pathways for enhancing the upconversion process have been proposed during the past decade, some of which focus on increasing the absorption by co-doping with a sensitizer \cite{Chen2015-3}, while others utilize photonic enhancement through wave-guiding effects and Bragg stacks \cite{Johnson2011, Hofmann2018} or through surface-plasmon resonances \cite{Wu2014, Park2015}. Common to all photonic enhancements of lanthanide-based upconversion is that they affect the upconversion process in two ways: First, they can influence the relaxation rates both positively, through enhanced coupling to the radiation field, and negatively, through quenching via ohmic heating in the surrounding material \cite{Wu2014, Park2015}. Second, they can cause an increase in the electric field at the location of the lanthanide ions and hence also increase the absorption, which in turn will improve the upconversion efficiency \cite{Wu2014, Park2015}. \begin{figure*}[t!] \centering \includegraphics[scale=1]{Figure1.pdf} \caption{In a), the excitation process is sketched for two erbium ions. Initial ground-state absorption excites two erbium ions to the intermediate state $^4I_{13/2}$, indicated with green vertical arrows. In $^4I_{13/2}$, the two ions can interact with each other via an energy-transfer-upconversion process, illustrated with red curved arrows, which leaves one ion in the ground state and the other in the doubly-excited state $^4I_{9/2}$. From here, a two-step relaxation process, indicated with curly blue arrows, is most likely to occur. First, the ion will predominantly relax to the $^4I_{11/2}$ level by nonradiative relaxation, due to the small energy difference between the $^4I_{9/2}$ and $^4I_{11/2}$ levels. Second, the ion will relax to the ground state either nonradiatively (loss channel) or radiatively, by emitting a photon of higher energy than was initially absorbed. In b), the model setup is shown. The gold nanostructure (indicated in black) is placed in a \SI{50}{\nano\meter}-high design domain on top of a \SI{320}{\nano\meter}-thick film of erbium-doped \ce{TiO2} (green) on top of a \SI{0.5}{\milli\meter} \ce{SiO2} substrate (gray).
In c), two spectra exemplify the measured upconversion luminescence with (red) and without (blue) the gold nanostructure present on the film surface.} \label{fig:overview} \end{figure*} This work considers an upconverting material consisting of erbium ions (\ce{Er^3+}) doped into a \ce{TiO_2} thin film (\ce{TiO_2}:\ce{Er}). The \ce{Er^3+} ions are able to absorb light at wavelengths in the vicinity of \SI{1500}{\nano\meter} \cite{Goldschmidt2015} and in turn emit upconverted electromagnetic (EM) radiation at \SI{980}{\nano\meter} due to the process sketched in Fig.~\ref{fig:overview}a. Carefully designed periodic gold nanostructures are placed on the surface of the \ce{TiO_2}:\ce{Er} film, as sketched in Fig.~\ref{fig:overview}b, in order to couple incident \SI{1500}{\nano\meter} EM radiation into the film and thereby concentrate it. These nanostructures lead to an increased emission of upconverted radiation, which is directly measurable, as exemplified in Fig.~\ref{fig:overview}c. Since the wavelength of the upconverted light falls within the absorption range of crystalline silicon solar cells, the upconversion process has the potential to improve the efficiency of such solar cells, and the ability to enhance the upconversion process constitutes an important step toward this goal. There are two main results of this work: First, it is demonstrated that a well-chosen film geometry, which supports wave-guided modes of \SI{1500}{\nano\meter} EM radiation, in combination with light concentration, facilitated by the well-chosen gold nanostructures, leads to an unprecedented enhancement of the upconversion-luminescence (UCL) yield. Second, it is demonstrated how the numerical calculation of electric fields within the upconverting film, in combination with a recently developed analytical model for the upconversion process, can account rather accurately for the observed UCL enhancement for varying intensities, incidence angles, and polarization orientations. As a result, the relation between the measured enhancements and the concentration factor of the incident radiation becomes apparent, which enables a justified prediction of the steps required before such upconverting materials can reach efficiencies relevant for improving the performance of solar cells. \section{Theory} The upconversion process has a nonlinear dependence on the excitation intensity, which saturates as the intensity is increased. For two-photon upconversion, the intensity dependence of the upconversion will saturate from a quadratic dependence in the low-excitation limit toward a linear dependence in the high-excitation limit \cite{Pollnau2000}. In a recent paper \cite{Christiansen2019-1}, we have shown that for a bare upconverting film the intensity dependence of the UCL yield, $Y_{\mathrm{UCL}}$, is given by \begin{equation} \label{eq:Ysat} \begin{split} Y_{\mathrm{UCL}} &= A\left\lbrace 1 + \ln \left[\frac{\sqrt{1 + 2 I/I_{\mathrm{sat}}} + 1}{2}\right] \right. \\ &\left. + \frac{I}{2I_{\mathrm{sat}}} - \sqrt{1+ 2I/I_{\mathrm{sat}}} \right\rbrace \\ &\equiv A f \left( I/I_{\mathrm{sat}} \right) \end{split} \end{equation} where $A$ is an amplitude factor, accounting for material parameters and detection efficiency, and $I_{\mathrm{sat}}$ is the excitation intensity where the upconversion process starts to saturate.
A good upconverter has a low saturation intensity, exemplified by the fact that an increased absorption cross section decreases the saturation intensity, whereas increased nonradiative relaxation increases it \cite{Christiansen2019-1}. When gold nanostructures are added to the film surface, causing the enhanced UCL, as exemplified in Fig.~\ref{fig:overview}c, the model becomes more involved. However, the UCL enhancement is straightforward to determine experimentally by measuring the total UCL yield (the area under each curve in Fig.~\ref{fig:overview}c) with and without the nanostructures present on the surface of the upconverter (red and blue curves, respectively) at the same excitation conditions. In other words, the enhancement is a very convenient tool for characterizing the impact of photonic nanostructures, and it is, therefore, worthwhile to seek a theoretical understanding of this enhancement. By combining the above-mentioned analytical model with simulated electric-field distributions inside the film, the theoretical UCL enhancement, $L_\mathrm{UCL}$, defined as the ratio of $Y_\mathrm{UCL}$ with and without gold nanostructures present on the film surface, can be calculated as \begin{equation} \label{eq:Lsat} L_{\mathrm{UCL}} = \frac{ {\displaystyle \int_{\mathrm{\mathcal{V}_{\mathrm{UC}}}} } f\left( \frac{|\mathbf{\tilde{E}}|^2} {\overline{|\mathbf{\tilde{E}}_{\mathrm{s}}|^2}} \frac{I}{I_\mathrm{sat}} \right) \mathrm{d}V } { {\displaystyle \int_{\mathrm{\mathcal{V}_{\mathrm{UC}}}} } f\left( \frac{|\mathbf{\tilde{E}}_\mathrm{b}|^2} {\overline{|\mathbf{\tilde{E}}_{\mathrm{s}}|^2}} \frac{I}{I_\mathrm{sat}} \right) \mathrm{d}V }, \end{equation} where $f$ is the saturation function defined in Eq.~\eqref{eq:Ysat}, and the integration is taken over the volume, $\mathcal{V}_{\mathrm{UC}}$, of the upconverting film below one unit cell of the gold nanostructures, see Fig.~\ref{fig:overview}b. All electric fields in this expression are simulated according to the relevant experimental conditions (i.e., angle of incidence and polarization), and $\mathbf{\tilde{E}}$ and $\mathbf{\tilde{E}}_{\mathrm{b}}$ correspond to the fields in the presence and absence of gold nanostructures, respectively. Due to the linearity of the absorption process, the squared electric field inside the upconverting film is proportional to the intensity, $I$, of the incoming laser beam and hence only needs to be simulated once. The correct scaling is calibrated by reference to the laser-beam intensity, $I_{\mathrm{sat}}$, which drives the upconverter material into saturation, and the corresponding simulated electric field, $\mathbf{\tilde{E}_{\mathrm{s}}}$, for the experimental conditions of this calibration measurement. The bar over $|\mathbf{\tilde{E}_{\mathrm{s}}}|^2$ in Eq.~\eqref{eq:Lsat} denotes volume-averaging over $\mathcal{V}_{\mathrm{UC}}$. Variations in the relaxation rates, e.g., quenching, are neglected, since these effects will only be significant for the small population of erbium ions in close vicinity of the gold nanostructure, in agreement with Ref.~\cite{Eriksen2019}. A detailed derivation of Eq.~\eqref{eq:Lsat} is available in Sec.~4.1 of the supporting information; later in this manuscript, we shall explain how the enhancement relates to the more general description of light concentration and improvement of photovoltaics.
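Numerically, evaluating Eq.~\eqref{eq:Lsat} amounts to averaging the saturation function $f$ of Eq.~\eqref{eq:Ysat} over the simulated field distributions. A minimal sketch follows (assuming the squared field magnitudes are sampled on a regular grid of equal-volume voxels; the arrays shown are illustrative placeholders, not simulation output):
\begin{verbatim}
import numpy as np

def f(x):
    # saturation function of the UCL-yield model above; f(0) = 0
    s = np.sqrt(1.0 + 2.0 * x)
    return 1.0 + np.log((s + 1.0) / 2.0) + x / 2.0 - s

def L_UCL(E2, E2_b, E2_s_mean, I, I_sat):
    # E2, E2_b: |E|^2 with/without gold nanostructures, on V_UC
    # E2_s_mean: volume-averaged |E_s|^2 of the calibration simulation
    # equal-volume voxels: the volume integrals reduce to means
    num = np.mean(f(E2 / E2_s_mean * I / I_sat))
    den = np.mean(f(E2_b / E2_s_mean * I / I_sat))
    return num / den

E2 = 3.0 * np.random.rand(64, 64, 32)    # placeholder field data
E2_b = np.ones((64, 64, 32))
print(L_UCL(E2, E2_b, 1.0, 323.0, 20.0))
\end{verbatim}
Note that the amplitude factor $A$ cancels in the ratio, so only $f$ and the field distributions enter the computation.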
Fig.~\ref{fig:overview}b shows the model setup for the simulation of the electric fields, consisting of an $(x,y)$-periodic unit cell of a \SI{320}{\nano\meter}-thick \ce{TiO_2}:\ce{Er} film and \SI{50}{\nano\meter}-tall gold nanostructures placed on top, in the design domain. The goal is to obtain nanostructure designs that enable efficient coupling of the incident light to guided modes in the \ce{TiO_2}:\ce{Er} film and thereby enhance the light intensity inside the film. The first step in designing such structures is to calculate the electric field accurately. Assuming nonmagnetic, linear, and isotropic materials, the Maxwell equations can be recast into the time-harmonic vector-wave equation \cite{Novotny2006} \begin{equation} \label{eq:wave_equation} \nabla \times (\nabla \times \bm{E} ) - \omega^2\mu_0\epsilon(\bm{r}) \bm{E} = 0, \end{equation} where $\omega$ is the angular frequency, $\mu_{0}$ is the free-space permeability, and $\epsilon(\bm{r})$ is the position-dependent (complex) electric permittivity, with $\bm{r}=(x,y,z)$ denoting the spatial position. Eq.~\eqref{eq:wave_equation} is solved numerically using the finite element method \cite{Jin2014}. Once the electric field can be calculated, the next step is to optimize the nanostructure design to enhance the UCL yield, which is numerically evaluated through an appropriate merit function (to be introduced). The approach taken here, known as topology optimization, is to define a material distribution in the design domain and maximize the merit function by optimizing this distribution. This is achieved by changing $\epsilon(\bm{r})$ in Eq.~\eqref{eq:wave_equation} through a spatially dependent nonphysical field $\rho \in [0,1]$ and expressing the permittivity as $\epsilon(\eta(\rho),\kappa(\rho))$ \cite{RChristiansen2019}, where $\eta$ and $\kappa$ are the refractive index and extinction coefficient, respectively. The design domain is divided into equally sized voxels (cubes), and a single $\rho$-value is assigned to each voxel, with a value of $\rho = 0$ corresponding to air and $\rho = 1$ to gold. Since $\rho$ is allowed to be continuous, gradient-based optimization algorithms can be used to solve the optimization problem efficiently. Any nonphysical material mixes (where $\rho\neq 0$ and $\rho\neq 1$) are gradually removed over the course of the optimization using standard penalization tools \cite{Guest2004,Wang2011,Christiansen2015}, eventually forcing $\rho$ toward either 0 or 1. This introduction outlines the general idea behind topology optimization; for an in-depth presentation of topology optimization applied to nano- and microscale electromagnetism, the reader is referred to Ref.~\cite{Jensen2010}. We choose to optimize the upconversion performance in the middle of the excitation regime, between the linear and quadratic intensity dependence, that is, for a cubic dependence on the electric field. The optimization problem is therefore formulated with the merit function \cite{Madsen2015,Johannsen2015,VesterPetersen2017,VesterPetersen2018} \begin{equation} \Phi_{i,j} = \frac{\int_{\mathcal{V}_{\mathrm{UC}}}\vert\bm{\tilde{E}}^i(\bm{\rho},\lambda_j)\vert^3~\mathrm{d}V} {\int_{\mathcal{V}_{\mathrm{UC}}}\vert\bm{\tilde{E}}^i_{\mathrm{b}}(\bm{0},\lambda_j)\vert^3~\mathrm{d}V}, \label{eq:objective_function} \end{equation} where $i$ and $\lambda_j$ parameterize the polarization and wavelength of the incoming field, respectively.
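On a discretized field, Eq.~\eqref{eq:objective_function} reduces to a ratio of voxel sums; a minimal sketch of its evaluation follows (uniform voxel volumes are assumed, so the volume elements cancel, and the random arrays are placeholders for simulated fields):
\begin{verbatim}
import numpy as np

def merit(E, E_b):
    # Phi: ratio of integrated |E|^3 with and without the nanostructure
    # E, E_b: complex fields (3 vector components per voxel)
    mag3 = np.sum(np.linalg.norm(E, axis=-1) ** 3)
    mag3_b = np.sum(np.linalg.norm(E_b, axis=-1) ** 3)
    return mag3 / mag3_b

shape = (32, 32, 16, 3)                   # placeholder voxel grid
E = np.random.randn(*shape) + 1j * np.random.randn(*shape)
E_b = np.random.randn(*shape) + 1j * np.random.randn(*shape)
print(merit(E, E_b))
\end{verbatim}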
In order to promote a design with reasonable robustness against slight experimental fabrication errors, the optimization problem is formulated as a \textit{min-max} problem such that $i \in \{\mathrm{s},\mathrm{p}\}$ and $\lambda_j \in \{\SI{1490}{\nano\meter},\SI{1500}{\nano\meter},\SI{1510}{\nano\meter}\}$ are considered simultaneously. By minimizing the worst performance in the set of all realizations, $\left\{-\Phi_{i,j}\right\}$, the sensitivity toward variations in polarization and wavelength, or equivalently size variations in the realized nanostructure array, is decreased. \section{Materials and methods} \subsection{Numerical calculations} The topology optimization was performed with a voxel size of ($5 \times 5 \times 5$) \si{\nano\meter\cubed} and with a 2D $\rho$-field, extruded to a height of \SI{50}{\nano\meter}, to ensure that the designs can be fabricated using electron-beam lithography (EBL). A unit-cell period of \SI{780}{\nano\meter} was chosen to allow for efficient coupling to waveguide modes in the film \cite{VesterPetersen2018}. Since the optimization problem is nonconvex, different geometries may emerge for different initial $\rho$'s. A set of calculations has been made, each with a random initial material distribution, resulting in a (smaller) set of converged geometries. These geometries have been numerically characterized, and we have chosen one that shows robustness toward a change in incidence angle while still yielding high enhancements. In this procedure, we have deliberately excluded designs such as a simple disk grating, which may work better for a particular wavelength and angle of incidence but whose extremely narrow-band response makes such structures irrelevant for developing devices that must utilize the broad-bandwidth solar radiation. A movie showing the course of the topology-optimization process leading to the chosen design is found in the supplementary material, and the chosen design, denoted P780*, can be seen in Fig.~\ref{fig:designs}a. Inspired by the simple square-ring structure of P780*, we have shape-optimized designs consisting of a square and a ring by tuning the square width, ring radius, and ring thickness using a derivative-free optimization algorithm (Nelder-Mead \cite{Arora2017}) to maximize the merit function in Eq.~\eqref{eq:objective_function}, as for P780*. Unit-cell periods of \SI{780}{\nano\meter}, \SI{800}{\nano\meter}, and \SI{1000}{\nano\meter} were chosen, and all other parameters were kept the same as for the topology-optimized sample P780*. The resulting three geometries, denoted P780, P800, and P1000, can be seen in Fig.~\ref{fig:designs}b-d. More details of the numerical calculations are available in Sec.~1 of the supporting information. \subsection{Sample fabrication} \label{subsec:sample_fabrication} Starting with a \SI{0.5}{\milli\meter}-\ce{SiO_2} substrate, the \ce{TiO_2}:\ce{Er} was deposited using a radio-frequency (RF) magnetron-sputtering system. The targets were commercially produced from powders of \ce{TiO_2} and \ce{Er_2O_3} at an erbium concentration of 5.1 at.\%. The sputtering process was conducted in an argon atmosphere with \SI{2}{\percent} oxygen at a pressure of \SI{3}{\mmHg}. The sputtering was done with a fixed RF power of \SI{100}{\watt} and with the substrate temperature fixed at \SI{350}{\degreeCelsius}. These conditions were found to minimize unwanted nonradiative relaxations in the thin films \cite{Lakhotiya2018-JAP}. The deposition time was calibrated to achieve a film thickness of \SI{320}{\nano\meter}.
The gold nanostructures were fabricated by EBL in combination with physical-vapor deposition on a ($1.2 \times 1.2$) \si{\square\milli\meter} area of the thin films to allow upconversion measurements on and off the gold nanostructures on the same sample by moving the laser spot on the surface, and to minimize the fabrication time. The EBL process was corrected for proximity effects using the procedure described in Ref.~\cite{Eriksen2018} to obtain a better resolution. The fabricated gold nanostructures had sidewalls with an inclination angle of \ang{75} as opposed to the vertical sidewalls (\ang{90}) assumed in the topology optimization. For more information on the EBL, the reader is referred to Sec.~2.2 of the supporting information. \subsection{Optical diffraction measurements} \label{subsec:Opt_Difrac} After EBL fabrication, the mean unit-cell period of the nanostructures was measured using optical diffraction. The measurements were conducted by transmitting a helium-neon laser beam through the nanostructure array onto a screen where the resulting diffraction pattern could be seen. The unit-cell period was then calculated from Bragg's law using the wavelength of the laser, the distance to the screen, and the distance from the zeroth- to the first-order diffraction peak on the screen. This investigation showed an asymmetry in the horizontal and vertical directions (defined by the SEM image), which leads to ambiguities when measuring the optical properties at different polarization orientations and angles of incidence. We have chosen to present here the results with the electric field of the excitation source parallel to the vertical period of the nanostructures, while the case with the electric field parallel to the horizontal period can be seen in Fig.~S2 in the supporting information. A thorough explanation of these measurements and results is available in Sec.~3.1 of the supporting information. \subsection{Upconversion luminescence measurements} \label{subsec:UCLMeas} The UCL yield was measured by illuminating the samples with \SI{1500}{\nano\meter} laser light and recording the luminescence with an integrated spectrometer and CCD camera. The UCL-enhancement measurements were conducted with the samples placed in an integrating sphere with a diameter of \SI{150}{\milli\meter} to obtain identical collection efficiencies for all used geometries. The UCL yield is determined by integrating across the \SI{980}{\nano\meter} peak shown in Fig.~\ref{fig:overview}c. The enhancement is defined as the ratio of the UCL yield on and off the gold nanostructures. The UCL enhancements were measured at varying angles of incidence between \ang{0} and \ang{25}, for both s- and p-polarization, and for two excitation intensities, \SI{323 \pm 97}{\watt\per\square\centi\meter} and \SI{5.8 \pm 0.5}{\watt\per\square\centi\meter}, obtained using two different laser-beam areas. The rather high uncertainty in the high excitation intensity originates from the uncertainty in the laser-beam area; see Sec.~3.3 of the supporting information. The enhancements with the horizontal period parallel to the electric field are shown in Fig.~S2 of the supporting information.
To allow comparison with Eq.~\eqref{eq:Lsat}, the saturation intensity of the sample was measured at a \ang{50} angle of incidence and p-polarized excitation, and by fitting UCL intensity-dependence data to Eq.~\eqref{eq:Ysat}, we have determined the corresponding saturation intensity to be $I_{\mathrm{sat}} = \SI{20 \pm 6}{\watt\per\square\centi\meter}$; see the data and fit in Fig.~S3 of the supporting information. A thorough account of the UCL measurements, including the determination of the saturation intensity, is available in Sec.~3.2 of the supporting information. \section{Results} \label{sec:results} \begin{figure*}[!ht] \centering \includegraphics[width = \textwidth]{Figure2.pdf} \caption{Optimized and fabricated designs a) P780*, b) P780, c) P800, and d) P1000. In the top row, the unit-cell designs are shown with white representing air and black representing gold, with the gray frame indicating the unit-cell boundary. In the middle row, the polarization-averaged energy-density enhancement in the film, $1/2 \sum_{\mathrm{s},\mathrm{p}} |\bm{E}|^2/|\bm{E}_{\mathrm{b}}|^2$, at $\lambda = \SI{1500}{\nano\meter}$ is shown. In the bottom row, SEM images of the EBL-fabricated gold nanostructures are shown with the scale bars indicating \SI{1}{\micro\meter}.} \label{fig:designs} \end{figure*} In Fig.~\ref{fig:designs}a, the topology-optimized design is shown in the upper row with the corresponding energy-density enhancement distribution underneath, and a scanning-electron microscope (SEM) image of the EBL-produced sample in the lowest row. The samples presented in Fig.~\ref{fig:designs}b-d will be discussed later. \begin{figure*}[ht!] \centering \includegraphics[scale=1]{Figure3.pdf} \caption{UCL enhancements for all four nanostructure designs at the high excitation intensity of \SI{323}{\watt\per\square\centi\meter} (upper row) and the low intensity of \SI{5.8}{\watt\per\square\centi\meter} (lower row). The measurements are plotted as circles, and the calculated values are represented by solid curves. Red is used for the p-polarized case and blue for the s-polarized case.} \label{fig:Enh_Y} \end{figure*} The measured UCL enhancements with the vertical period parallel to the electric field are presented in Fig.~\ref{fig:Enh_Y}, with the high- and low-intensity cases in the upper and lower panels, respectively. The theoretically predicted upconversion enhancements, $L_{\mathrm{UCL}}$, shown as solid curves in Fig.~\ref{fig:Enh_Y}, were calculated using Eq.~\eqref{eq:Lsat}. A reasonable qualitative agreement between the measurements and simulations is observed: the order of magnitude, as well as the trend of the variations with incidence angle and polarization, mostly agree. We stress that the calculated UCL enhancements are computed from only the simulated electric-field distribution and the experimentally determined saturation intensity. Moreover, we emphasize that the electric-field calculations are made from the ideal unit-cell design with the periods and slanting angle scaled to the measured values, as explained in Secs.~\ref{subsec:sample_fabrication} and \ref{subsec:Opt_Difrac}. The fabrication imperfections observed by comparing the designs and SEM images in the first and third rows of Fig.~\ref{fig:designs}, respectively, are thus not accounted for in the calculations, and the model is hence only expected to be able to describe the general trend of the UCL enhancement caused by the nanostructures. A significant UCL enhancement has been found for a wide range of incidence angles, especially for p-polarized excitation.
This is in stark contrast to what is generally reported in the literature, where high UCL enhancements are typically coupled to a strong angular dependence \cite{Hofmann2018, Verhagen2009}. A broad angular acceptance is essential when considering the end goal of improving photovoltaics by concentrating solar radiation. It is also noteworthy that the enhancements reported here are obtained in thick samples (\SI{320}{\nano\meter}), contrary to the case of previously reported high UCL enhancements, which were recorded from upconverting film thicknesses below \SI{25}{\nano\meter} \cite{Verhagen2009, Zhang2012}. Due to the low absorption cross section of erbium, thick upconverters are essential to achieve adequate external quantum efficiency. From the absorption cross section of erbium \cite{Miniscalco1991} and the typical concentration level of erbium-doped upconverters, the absorption length is on the order of \SI{1}{\milli\meter} at \SI{1500}{\nano\meter}; hence, even thicker films will be needed to achieve adequate external quantum efficiencies. The results reported here nevertheless provide an initial step on the way to the realization of upconversion-enhanced solar cells. The three additional designs in Fig.~\ref{fig:designs}b-d were found using parameter optimization to investigate how the unit-cell period affects the UCL yield, as they exhibit, based on a simple grating-coupling condition (see the sketch below), an efficient, a moderate, and a weak waveguide coupling at normal incidence, respectively. The additional designs further allow us to validate the UCL model under different illumination conditions. The obtained designs have been fabricated, and their upconversion properties measured similarly to P780*. Looking at the field enhancements in the second row of Fig.~\ref{fig:designs}, it is clear that the field distribution of P780 (and P780*) resembles that of a waveguide mode, with field enhancement spanning the entire film. In contrast, P1000 shows a field distribution with distinctly localized enhancements typically attributed to plasmonic resonances. In Fig.~\ref{fig:Enh_Y}, large UCL enhancements are observed both experimentally and numerically at normal incidence for P780 (and P780*), while the samples P800 and P1000 show significantly lower values at normal incidence and additionally have their peak efficiency at a nonzero angle of incidence. This observation can be attributed to the phase-matching criterion for coupling the incoming light to the waveguide mode \cite{VesterPetersen2018}. The criterion is not exactly met at normal incidence but is instead fulfilled better at some nonzero angle. The UCL enhancements in Fig.~\ref{fig:Enh_Y} generally show a reasonable agreement between measured and calculated values in all cases, except for P800 at angles where the numerical data show strong resonances, which may not be present in the fabricated samples or are simply not captured due to the finite number of measuring angles. This generally good agreement between measurements and calculations is further backed by the measured and calculated extinction cross sections available in Fig.~S5 of the supporting information. For all designs, we observe a drastic increase in the UCL enhancement upon decreasing the excitation intensity (compare the upper and lower panels of Fig.~\ref{fig:Enh_Y}), indicating that the data are measured under the influence of saturation.
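To illustrate the grating-coupling condition invoked above, the following minimal sketch evaluates the first-order phase-matching angle $\theta$ from the standard grating relation $n_{\mathrm{eff}} = \sin\theta + \lambda/\Lambda$ for the three periods. The effective mode index $n_{\mathrm{eff}} \approx 1.92$ is an assumed value, chosen only so that the \SI{780}{\nano\meter} period phase-matches at normal incidence; it is not taken from the simulations in this paper.
\begin{verbatim}
import numpy as np

lam = 1500.0                   # excitation wavelength in nm
n_eff = lam / 780.0            # assumed effective mode index (~1.92)
for period in (780.0, 800.0, 1000.0):
    s = n_eff - lam / period   # sin(theta) at first-order phase matching
    theta = np.degrees(np.arcsin(s))
    print(f"P{period:.0f}: coupling angle ~ {theta:.1f} deg")
\end{verbatim}
Under this assumption the sketch reproduces the observed trend: P780 phase-matches at normal incidence, while P800 and P1000 phase-match at roughly \ang{3} and \ang{25}, qualitatively consistent with their peak efficiencies appearing at nonzero angles.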
To demonstrate this saturation behavior, we investigate the best-performing sample, P780, at the optimal conditions, i.e., \ang{1} angle of incidence and s-polarized excitation, at even lower excitation intensities. The laser power, and hence the intensity, is attenuated with neutral-density (ND) filters, thus maintaining the same laser-beam area as was the case for the results presented in the lower panels of Fig.~\ref{fig:Enh_Y}. In this case, we measured a maximum UCL enhancement factor of \num{913 \pm 51} (see the blue open-faced circles in Fig.~\ref{fig:RecordEnhancement}) at an excitation intensity of \SI{1.7}{\watt\per\square\centi\meter}. The blue data point in Fig.~\ref{fig:RecordEnhancement} at high intensity and low UCL enhancement, measured at a similar angle of incidence and polarization but with the small laser-beam area and taken from the upper panel of Fig.~\ref{fig:Enh_Y}b, is included to present all relevant data. The measurements indicate that even stronger enhancements could be measured by decreasing the excitation intensity further, which is, however, impractical due to the signal-to-noise ratio of the integrating-sphere setup used. Nevertheless, we have measured UCL enhancement factors between 30 and 913 for the same photonic structure at the same polarization and angle of incidence of the excitation source by merely varying the excitation intensity. This considerable span of enhancement factors demonstrates the arbitrariness of stating a UCL enhancement factor alone, without explaining the proper experimental context. We wish to remove this arbitrariness, once and for all, such that it is possible to make a sensible quantification of the ability of a photonic structure to enhance the UCL. To achieve this, we shall define below a light-concentration factor $C_{\mathrm{ns}}$ as the proper quantification of the photonic structure. In order to get there, we remind the reader that the UCL enhancement is defined by the ratio $Y_{\mathrm{UCL}}^{\mathrm{on}}/Y_{\mathrm{UCL}}^{\mathrm{off}}$, where we know from Eq.~\eqref{eq:Ysat} that $Y_{\mathrm{UCL}}^{\mathrm{off}} \propto f(I/I_{\mathrm{sat}})$. It is thus tempting to investigate whether we can find a similar relation for the UCL enhancement on the nanostructure. Therefore, we carefully measure the intensity dependence of $Y_{\mathrm{UCL}}$, as exemplified in Fig.~\ref{fig:overview}c, both on and off the gold nanostructures at the same polarization and incidence angle as in the enhancement measurements just described. These intensity-dependence measurements are conducted without the use of an integrating sphere to improve the signal-to-noise ratio significantly, and the results are shown in Fig.~\ref{fig:RecordEnhancement} on (red circles) and off (red squares) the gold nanostructures. For both data sets, the solid black curves correspond to fits to Eq.~\eqref{eq:Ysat}. In other words, the experimental signals follow the model $Y_{\mathrm{UCL}}^{\mathrm{off}}(I) = A^{\mathrm{off}}f(I/I_{\mathrm{sat}}^{\mathrm{off}})$ and $Y_{\mathrm{UCL}}^{\mathrm{on}}(I) = A^{\mathrm{on}}f(I/I_{\mathrm{sat}}^{\mathrm{on}})$, where the fitted saturation intensities are $I_{\mathrm{sat}}^{\mathrm{on}} = \SI{0.59 \pm 0.05}{\watt\per\square\centi\meter}$ and $I_{\mathrm{sat}}^{\mathrm{off}} = \SI{39 \pm 11}{\watt\per\square\centi\meter}$, respectively\footnote{The difference between the saturation intensity found off the nanostructure here and the value stated in Sec.~\ref{subsec:UCLMeas} is due to the different polarization and incidence angle of the excitation laser.}.
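As an illustration of the fitting procedure just described, a minimal sketch using \texttt{scipy.optimize.curve\_fit} is given below. The saturation shape $f(x) = x^2/(1+x)$ (quadratic at low intensity, linear at high intensity) is our own illustrative assumption; the actual $f$ is the one defined through Eq.~\eqref{eq:Ysat} and its reference, and the data arrays here are hypothetical.
\begin{verbatim}
import numpy as np
from scipy.optimize import curve_fit

def ucl_yield(I, A, I_sat):
    # Saturating yield model Y = A * f(I / I_sat); the shape of f below
    # is an assumption made for illustration only.
    x = I / I_sat
    return A * x**2 / (1 + x)

# Hypothetical measured intensities (W/cm^2) and UCL yields:
rng = np.random.default_rng(0)
I_meas = np.array([0.1, 0.3, 1.0, 3.0, 10.0, 30.0])
Y_meas = ucl_yield(I_meas, 1.0, 0.6) * (1 + 0.03 * rng.standard_normal(6))

(A_fit, I_sat_fit), _ = curve_fit(ucl_yield, I_meas, Y_meas, p0=[1.0, 1.0])
print(f"A = {A_fit:.3g}, I_sat = {I_sat_fit:.3g} W/cm^2")
\end{verbatim}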
The proportionality factors $A^{\mathrm{off}}$ and $A^{\mathrm{on}}$ depend on the experimental light collection efficiency, which is not the same on and off the nanostructures due to, among other things, variations in the light-emission pattern. While we expect from Ref.~\cite{Christiansen2019-1} that Eq.~\eqref{eq:Ysat} provides a good description of the intensity-dependence measurement off the nanostructures, it is quite surprising, at first glance, that a satisfactory fit can also be obtained on the nanostructures. One may be tempted to conclude that the approximation $\overline{f\left( x \right)} \approx f\left( \overline{x} \right)$ is valid even when the intensity distribution in the film is highly nonuniform under the influence of the nanostructures. However, in Sec.~4.2 of the supporting information, we argue that a better approximation is $\overline{ f\left(x \right)} \approx \zeta f \left(\overline{x}/\zeta \right)$, where $\zeta$ is a measure of the inhomogeneity of the intensity distribution inside the upconverting film and must be chosen appropriately in the range from 0.38 to 1. This fact explains why the UCL-intensity dependence on the gold nanostructures follows Eq.~\eqref{eq:Ysat}, and Sec.~4.2 of the supporting information also explains that the corresponding on-structure saturation intensity is given by \begin{equation} \label{eq:Isat_on} I_{\mathrm{sat}}^{\mathrm{on}} = I_{\mathrm{sat}}^{\mathrm{off}}\zeta \cdot \frac{\overline{|\bm{E}_{\mathrm{b}}|^2}}{\overline{|\bm{E}|^2}} \equiv I_{\mathrm{sat}}^{\mathrm{off}}\zeta/C_{\mathrm{ns}}, \end{equation} where we define the concentration factor $C_{\mathrm{ns}} \equiv \overline{|\bm{E}|^2}/\overline{|\bm{E}_\mathrm{b}|^2}$ of the mean EM energy density. Hence, if we can determine the value of $\zeta$, it is possible in turn to calculate $C_{\mathrm{ns}}$ as a reliable, intensity-independent measure of the performance of the photonic structure. In order to achieve this, we show in Sec.~4.2 of the supporting information that the above-mentioned success of fitting the red data points in Fig.~\ref{fig:RecordEnhancement} to the $f$-function in Eq.~\eqref{eq:Ysat} immediately leads to the conclusion that the theoretically predicted enhancement in Eq.~\eqref{eq:Lsat} can be simplified to \begin{equation} \label{eq:Lsattilde} \tilde{L}_{\mathrm{UCL}} = \zeta \frac{ f \left( \frac{I}{I_{\mathrm{sat}}^{\mathrm{on}}} \right) } { f \left( \frac{I}{I_{\mathrm{sat}}^{\mathrm{off}}} \right)}, \end{equation} where $f \left( \frac{I}{I_{\mathrm{sat}}^{\mathrm{on}}} \right)$ and $f \left( \frac{I}{I_{\mathrm{sat}}^{\mathrm{off}}} \right)$ are exactly the functions known from the fitting in Fig.~\ref{fig:RecordEnhancement}. It should be noted that using the same polarization and incidence angle for all data points in Fig.~\ref{fig:RecordEnhancement} is required for this relation to be valid. Hence, to establish the value of $\zeta$, we only need to calibrate Eq.~\eqref{eq:Lsattilde} to the experimental UCL-enhancement measurements. The dotted blue curve in Fig.~\ref{fig:RecordEnhancement} is obtained exactly in this way by fitting Eq.~\eqref{eq:Lsattilde} to the blue data points (excluding the outlier) while treating $\zeta$ as the only free parameter. With this, we have determined a reasonable $\zeta$-value of 0.48, well within the allowed range.
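With the fitted quantities at hand, extracting the concentration factor from Eq.~\eqref{eq:Isat_on} is a one-line computation. A minimal sketch is shown below, reusing the assumed saturation shape from the previous listing; only the three input numbers are taken from the text, and the exact shape of the predicted enhancement curve depends on the true $f$.
\begin{verbatim}
zeta, I_sat_on, I_sat_off = 0.48, 0.59, 39.0   # fitted values from the text

# Rearranged saturation relation: C_ns = zeta * I_sat_off / I_sat_on (~32)
C_ns = zeta * I_sat_off / I_sat_on
print(f"C_ns = {C_ns:.0f}")

# Predicted enhancement curve L_UCL(I) = zeta * f(I/I_on) / f(I/I_off);
# f(x) = x**2 / (1 + x) is only an assumed stand-in for the true f.
f = lambda x: x**2 / (1 + x)
L_ucl = lambda I: zeta * f(I / I_sat_on) / f(I / I_sat_off)
\end{verbatim}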
The outlier is measured at such high excitation intensities that deviations from Eq.~\eqref{eq:Ysat} are expected to occur due to significant excitation to higher-energy states of \ce{Er^3+} not included in the model. This was previously observed for a core-shell \ce{NaYF_4}:\ce{Er} sample with a saturation intensity similar to the $I_{\mathrm{sat}}^{\mathrm{on}}$ stated above \cite{Christiansen2019-1}. With the $\zeta$-value at hand, we can now determine $C_{\mathrm{ns}} = \zeta I_{\mathrm{sat}}^{\mathrm{off}} /I_{\mathrm{sat}}^{\mathrm{on}} = \num{32 \pm 10}$. In comparison, the simulated concentration factor is moderately smaller at a value of \num{23}, consistent with the fact that the experimental UCL enhancements are somewhat higher than the simulation for P780 at these experimental conditions; see Fig.~\ref{fig:Enh_Y}b. \begin{figure}[t!] \centering \includegraphics[scale = 1]{figure4.pdf} \caption{The UCL enhancement for the P780 sample is plotted against excitation intensity as blue open-faced points. The red circles and squares are intensity-dependence curves measured on and off the gold nanostructure, respectively, for the P780 sample, with corresponding fits shown by the black solid curves. The blue dotted curve showing the theoretically expected enhancement is obtained by fitting the blue data points (excluding the outlier) to Eq.~\eqref{eq:Lsattilde} with $\zeta$ as the only fitting parameter.} \label{fig:RecordEnhancement} \end{figure} \section{Discussion} \label{sec:Discussion} Let us use our new understanding to consider the applicability of existing upconverters for solar-cell improvements under realistic excitation conditions, i.e., one sun. The introduced concentration factor, $C_{\mathrm{ns}}$, becomes important for two reasons: First, the total absorption rate of an upconverting film is enhanced exactly by $C_{\mathrm{ns}}$ when any kind of photonic enhancement is present; see Eq.~(25) in Sec.~4.2 of the supporting information. Second, together with the $\zeta$-parameter, the concentration factor can be interpreted as a lowering of the saturation intensity by the factor $\zeta/C_{\mathrm{ns}}$, which, as a result, brings the excitation closer to the ideal condition of $I \approx 10I_{\mathrm{sat}}$ suggested in Ref.~\cite{Christiansen2019-1}. Nevertheless, the impressive $C_{\mathrm{ns}}$-factor reported in this article is still not enough for a working solar-cell device with currently available erbium-based upconverters. To arrive at this conclusion, consider that the available radiation energy from the sun in the absorption band of trivalent erbium is about \SI{3e-3}{\watt\per\square\centi\meter}. This excitation intensity is roughly four orders of magnitude lower than the desired excitation regime for a \ce{TiO_2}:\ce{Er} upconverter. Even if we had chosen the more efficient upconverter system of \ce{NaYF_4}:\ce{Er}, with a significantly lower saturation intensity \cite{Christiansen2019-1}, we would still be between one and two orders of magnitude short in concentration factor. This estimate even assumes that we could achieve a similar photonic concentration in \ce{NaYF_4}, which is highly unlikely due to the smaller refractive index limiting the waveguiding efficiencies \cite{VesterPetersen2018}. Moreover, not much can be gained from further photonic concentration, since this must come at the expense of lowering the bandwidth of the photonic structure.
A naive estimate of the bandwidth lowering is $\SI{1500}{\nano\meter}/C_{\mathrm{ns}} \approx \SI{50}{\nano\meter}$, which is already similar to the absorption bandwidth of \ce{Er^3+}. Instead, new materials could be introduced to improve the performance, such as fluorescent concentrators, which concentrate the entire available EM radiation from the sun in the range from \SIrange{1100}{1450}{\nano\meter} into the absorption band of \ce{Er^3+}, as proposed in Ref.~\cite{Goldschmidt2008}. Most desirable, however, would be to find new upconverting materials or hosts with larger absorption cross sections of the active material and, ideally, with a high refractive index ensuring efficient waveguide coupling. This would lower the saturation intensity as well as increase the total absorption rate, and thus significantly lower the demand for a high concentration of the EM radiation in large upconverting volumes. \section{Conclusion} \label{sec:conclusion} In conclusion, we have designed and fabricated highly efficient photonic structures for enhancing the upconversion process in thin films of \ce{TiO_2}:\ce{Er}. The dependence of the UCL enhancement on the unit-cell period of the nanostructure is studied through parametrically optimized designs inspired by the topology optimization. An unprecedented UCL enhancement factor of \num{913 \pm 51} has been measured at an excitation intensity of \SI{1.7}{\watt\per\square\centi\meter}. A model for the UCL enhancement is developed, which agrees reasonably with the measured UCL enhancements of the fabricated structures. The model further allows for an experimental determination of the concentration of the mean EM energy density in the upconverting film, which is an essential parameter when assessing the applicability of photonically enhanced upconverters for solar-cell applications. \section{Acknowledgments} This work is supported by Innovation Fund Denmark under the project ``SunTune''.
\section{Introduction} Consider a bounded subset $K$ of $\mathbb{R}^n$. We would like to find an arrangement of $m$ affine hyperplanes in $\mathbb{R}^n$ that cut through $K$ as evenly as possible; see Figure~\ref{fig:tessellation} for an illustration. The intuitive notion of an ``even cut'' can be expressed more formally in the following way: The fraction of the hyperplanes separating any pair $x,y \in K$ should be proportional (up to a small additive error) to the Euclidean distance between $x$ and $y$. What is the smallest possible number $m=m(K)$ of hyperplanes with this property? Besides having a natural theoretical appeal, this question is directly motivated by a certain problem of information theory which we will describe later. \begin{figure}[htp] \centering \includegraphics[height=2.7cm]{tessellation.pdf} \caption{A hyperplane tessellation of a set in the plane} \label{fig:tessellation} \end{figure} In the beginning it will be most convenient to work with subsets $K$ of the unit Euclidean sphere $S^{n-1}$, but we will lift this restriction later. Let $d(x,y)$ denote the normalized geodesic distance on $S^{n-1}$, so the distance between the opposite points on the sphere equals $1$. A (linear) hyperplane in $\mathbb{R}^n$ can be expressed as $a^\perp$ for some $a \in \mathbb{R}^n$. We say that points $x,y \in \mathbb{R}^n$ are separated by the hyperplane\footnote{For convenience of presentation we prefer the sign function to take values $\{-1,1\}$, so we define it as $\sign(t)=1$ for $t \ge 0$ and $\sign(t) = -1$ for $t<0$.} if $\sign\< a,x\> \ne \sign\< a,y\> $. \begin{definition}[Uniform tessellation] Consider a subset $K \subseteq S^{n-1}$ and an arrangement of $m$ hyperplanes in $\mathbb{R}^n$. Let $d_A(x,y)$ denote the fraction of the hyperplanes that separate points $x$ and $y$ in $\mathbb{R}^n$. Given $\delta>0$, we say that the hyperplanes provide a {\em $\delta$-uniform tessellation} of $K$ if \begin{equation} \label{eq:uniform tessellation} | d_A(x,y) - d(x,y) | \le \delta, \quad x,y \in K. \end{equation} \end{definition} The main result of this paper is a bound on the minimal number $m = m(K,\delta)$ of hyperplanes that provide a uniform tessellation of a set $K$. It turns out that for a fixed accuracy $\delta$, an almost optimal estimate on $m$ depends only on one global parameter of $K$, namely the mean width. Recall that the {\em Gaussian mean width} of $K$ is defined as \begin{equation} \label{eq:mean width} w(K) = \E \sup_{x \in K} |\< g,x \> | \end{equation} where $g \sim \mathcal{N}(0, I_n)$ is a standard Gaussian random vector in $\mathbb{R}^n$. \begin{theorem}[Random uniform tessellations] \label{thm:tessellations} Consider a subset $K \subseteq S^{n-1}$ and let $\delta > 0$. Let $$ m \ge C \delta^{-6} w(K)^2 $$ and consider an arrangement of $m$ independent random hyperplanes in $\mathbb{R}^n$ uniformly distributed according to the Haar measure. Then with probability at least $1-2\exp(-c \delta^2 m)$, these hyperplanes provide a $\delta$-uniform tessellation of $K$. Here and later $C, c$ denote positive absolute constants. \end{theorem} \begin{remark}[Tessellations in stochastic geometry] By the rotation invariance of the Haar measure, it easily follows that $\E d_A(x,y) = d(x,y)$ for each pair $x,y \in \mathbb{R}^n$. Theorem~\ref{thm:tessellations} states that with high probability, $d_A(x,y)$ almost matches its expected value {\em uniformly} over all $x,y \in K$. 
This observation highlights the principal difference between the problems studied in this paper and the classical problems on random hyperplane tessellations studied in stochastic geometry. The classical problems concern the shape of a {\em specific} cell (usually the one containing the origin) or certain {\em statistics} of cells (e.g.,~``how many cells have volume greater than a fixed number?''); see \cite{Calka}. In contrast to this, the concept of uniform tessellation we propose in this paper concerns {\em all} cells simultaneously; see Section~\ref{s:cells} for a vivid illustration. \end{remark} \subsection{Embeddings into the Hamming cube} Theorem~\ref{thm:tessellations} has an equivalent formulation in the context of metric embeddings. It yields that {\em every subset $K \subseteq S^{n-1}$ can be almost isometrically embedded into the Hamming cube $\{-1,1\}^m$ with $m = O(w(K)^2)$}. \smallskip To explain this statement, let us recall a few standard notions. An $\varepsilon$-isometry (or almost isometry) between metric spaces $(X,d_X)$ and $(Y, d_Y)$ is a map $f : X \to Y$ which satisfies $$ | d_Y(f(x),f(x')) - d_X(x,x') | \le \varepsilon, \quad x,x' \in X, $$ and such that for every $y \in Y$ one can find $x \in X$ satisfying $d_Y(y,f(x)) \le \varepsilon$. A map $f : X \to Y$ is an $\varepsilon$-isometric embedding of $X$ into $Y$ if the map $f : X \to f(X)$ is an $\varepsilon$-isometry between $(X,d_X)$ and the subspace $(f(X),d_Y)$. It is not hard to show that $X$ can be $2 \varepsilon$-isometrically embedded into $Y$ (by means of a suitable map $f$) if $X$ has Gromov-Hausdorff distance at most $\varepsilon$ from some subset of $Y$. Conversely, if there is an $\varepsilon$-isometry between $X$ and $f(X)$ then the Gromov-Hausdorff distance between $X$ and $f(X)$ is bounded by $\varepsilon$. Finally, recall that the Hamming cube is the set $\{-1,1\}^m$ with the (normalized) Hamming distance $d_{H}(u,v) = \frac{1}{m} \sum_{i=1}^m \mathbb{1}_{\{u_i \neq v_i\}}$, the fraction of the coordinates where $u$ and $v$ differ. \smallskip An arrangement of $m$ hyperplanes in $\mathbb{R}^n$ defines a {\em sign map} $f : \mathbb{R}^n \to \{-1,1\}^m$ which sends $x \in \mathbb{R}^n$ to the sign vector of the orientations of $x$ with respect to the hyperplanes. The sign map is uniquely defined up to the isometries of the Hamming cube. Let $a_1,\ldots, a_m \in \mathbb{R}^n$ be normals of the hyperplanes, and consider the $m \times n$ matrix $A$ with rows $a_i$. The sign map can be expressed as $$ f(x) = \sign Ax, \quad f: \mathbb{R}^n \to \{-1,1\}^m, $$ where $\sign Ax$ denotes the vector of signs of the coordinates $\< a_i, x\> $ of $Ax$. The fraction $d_A(x,y)$ of the hyperplanes that separate points $x$ and $y$ thus equals $$ d_A(x,y) = d_H(\sign Ax, \sign Ay), \quad x,y \in \mathbb{R}^n. $$ Then looking back at the definition of uniform tessellations, we observe the following fact: \begin{fact}[Embeddings by uniform tessellations] \label{fact:embeddings} Consider a $\delta$-uniform tessellation of a set $K \subseteq S^{n-1}$ by $m$ hyperplanes. Then the set $K$ (with the induced geodesic distance) can be $\delta$-isometrically embedded into the Hamming cube $\{-1,1\}^m$. The sign map provides such an embedding. \qed \end{fact} This allows us to state Theorem~\ref{thm:tessellations} as follows: \begin{theorem}[Embeddings into the Hamming cube] \label{thm:embeddings} Consider a subset $K \subseteq S^{n-1}$ and let $\delta > 0$. Let $$ m \ge C \delta^{-6} w(K)^2.
$$ Then $K$ can be $\delta$-isometrically embedded into the Hamming cube $\{-1,1\}^m$. Moreover, let $A$ be an $m \times n$ random matrix with independent $\mathcal{N}(0,1)$ entries. Then with probability at least $1-2\exp(-c \delta^2 m)$, the sign map \begin{equation} \label{eq:f} f(x) = \sign Ax, \quad f : K \to \{-1,1\}^m \end{equation} is a $\delta$-isometric embedding. \qed \end{theorem} \subsection{Almost isometry of $K$ and the tessellation graph.} The image of the sign map $f$ in \eqref{eq:f} has a special meaning. When the Hamming cube $\{-1,1\}^m$ is viewed as a graph (in which two points $u$, $v$ are connected if they differ in exactly one coordinate), the image of $f$ defines a subgraph of $\{-1,1\}^m$, which is called the {\em tessellation graph} of $K$. The tessellation graph has a vertex for each cell and an edge for each pair of adjacent cells, see Figure~\ref{fig:tessellation-graph}. Notice that the graph distance in the tessellation graph equals the number of hyperplanes that separate the two cells. Therefore the definition of a uniform tessellation yields: \begin{fact}[Graphs of uniform tessellations] \label{fact:graph} Consider a $\delta$-uniform tessellation of a set $K \subseteq S^{n-1}$. Then $K$ is $\delta$-isometric to the tessellation graph of $K$. \qed \end{fact} Hence we can read the conclusion of Theorem~\ref{thm:tessellations} as follows: {\em $K$ is $\delta$-isometric to the graph of its tessellation by $m$ random hyperplanes, where $m \sim \delta^{-6} w(K)^2$}. \begin{figure}[htp] \centering \includegraphics[height=2.7cm]{tessellation-graph.pdf} \caption{The graph of a tessellation of a set in the plane. The dashed lines represent the edges.} \label{fig:tessellation-graph} \end{figure} \subsection{Computing mean width} Powerful methods to estimate the mean width $w(K)$ have been developed in connection with stochastic processes. These methods include Sudakov's and Dudley's inequalities, which relate $w(K)$ to the covering numbers of $K$ in the Euclidean metric, and the sharp technique of majorizing measures (see \cite{LT, T Chaining}). Mean width has a simple (and known) geometric interpretation. By the rotational invariance of the Gaussian random vector $g$ in \eqref{eq:mean width}, one can replace $g$ with a random vector $\theta$ that is uniformly distributed on $S^{n-1}$, as follows: $$ w(K) = c_n \sqrt{n} \cdot \bar{w}(K), \quad \text{where} \quad \bar{w}(K) = \E \sup_{x \in K} |\< \theta,x\> |. $$ Here $c_n$ are constants that depend only on $n$ and satisfy $c_n \le 1$ and $\lim_{n \to \infty} c_n = 1$. We may refer to $\bar{w}(K)$ as the {\em spherical mean width} of $K$. Let us assume for simplicity that $K$ is symmetric with respect to the origin. Then $2\sup_{x \in K} |\< \theta,x\> |$ is the width of $K$ in the direction $\theta$, which is the distance between the two supporting hyperplanes of $K$ with normal $\theta$. The spherical mean width $\bar{w}(K)$ is then twice the average width of $K$ over all directions. \subsection{Dimension reduction} Our results are already non-trivial in the particular case $K = S^{n-1}$. Since $w(S^{n-1}) \le \sqrt{n}$, Theorems~\ref{thm:tessellations} and \ref{thm:embeddings} hold with $m \sim n$. But more importantly, many interesting sets $K \subset S^{n-1}$ satisfy $w(K) \ll \sqrt{n}$ and therefore make our results hold with $m \sim w(K)^2 \ll n$.
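As a side remark, the Gaussian mean width in \eqref{eq:mean width} is straightforward to estimate by Monte Carlo whenever the supremum over $K$ can be computed. A minimal sketch for a finite set $K$ follows; the set used in the example is a hypothetical collection of random points, chosen only to illustrate the $\sqrt{2\log|K|} \ll \sqrt{n}$ behavior.
\begin{verbatim}
import numpy as np

def mean_width(K, n_samples=2000, seed=0):
    # Monte Carlo estimate of w(K) = E sup_{x in K} |<g, x>| for a finite
    # set K given as an (N, n) array of points.
    rng = np.random.default_rng(seed)
    G = rng.standard_normal((n_samples, K.shape[1]))  # rows g ~ N(0, I_n)
    return np.abs(G @ K.T).max(axis=1).mean()

# Example: 100 random unit vectors in R^1000; the estimate comes out near
# sqrt(2 log 100) ~ 3, far below sqrt(1000) ~ 32.
rng = np.random.default_rng(1)
K = rng.standard_normal((100, 1000))
K /= np.linalg.norm(K, axis=1, keepdims=True)
print(mean_width(K))
\end{verbatim}
For structured sets of this kind, the effective dimension $w(K)^2$ is thus far below the ambient dimension.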
In such cases, one can view the sign map $f(x) = \sign Ax$ in Theorem~\ref{thm:embeddings} as a dimension reduction mechanism that transforms an $n$-dimensional set $K$ into a subset of $\{-1,1\}^m$. A heuristic reason why dimension reduction is possible is that the quantity $w(K)^2$ measures the {\em effective dimension} of a set $K \subseteq S^{n-1}$. The effective dimension $w(K)^2$ of a set $K \subseteq S^{n-1}$ is always bounded by the algebraic dimension, but it may be much smaller, and it is robust with respect to perturbations of $K$. In this regard, the notion of effective dimension is parallel to the notion of effective rank of a matrix from numerical linear algebra (see e.g. \cite{RV JACM}). With these observations in mind, it is not surprising that the ``true'' effective dimension of $K$ would be revealed (and would be the only obstruction according to Theorem~\ref{thm:embeddings}) when $K$ is being squeezed into a space of smaller dimension. \smallskip Let us illustrate dimension reduction with the example of finite sets $K \subset S^{n-1}$. Since $w(K) \le C \sqrt{\log |K|}$ (see e.g. \cite[(3.13)]{LT}), Theorem~\ref{thm:embeddings} holds with $m \sim \log |K|$, and we can state it as follows. \begin{corollary}[Dimension reduction for finite sets] \label{cor:JL} Let $K \subset S^{n-1}$ be a finite set. Let $\delta>0$ and $m \ge C \delta^{-6} \log |K|$. Then $K$ can be $\delta$-isometrically embedded into the Hamming cube $\{-1,1\}^m$. \qed \end{corollary} This fact should be compared to the {\em Johnson-Lindenstrauss lemma} for finite subsets $K \subset \mathbb{R}^n$ (\cite{JL}, see \cite[Section~15.2]{Matousek}) which states that if $m \ge C \delta^{-2} \log |K|$ then $K$ can be Lipschitz embedded into $\mathbb{R}^m$ as follows: $$ \big| \|\bar{A}x - \bar{A}x'\|_2 - \|x-x'\|_2 \big| \le \delta \|x-x'\|_2, \quad x,x' \in K. $$ Here $\bar{A} = m^{-1/2} A$ is the rescaled random Gaussian matrix $A$ from Theorem~\ref{thm:embeddings}. Note that while the Johnson-Lindenstrauss lemma involves a Lipschitz embedding from $\mathbb{R}^n$ to $\mathbb{R}^m$, it is generally impossible to provide a Lipschitz embedding from subsets of $\mathbb{R}^n$ to the Hamming cube (if there are points $x,x' \in K$ that are very close to each other); this is why we consider $\delta$-isometric embeddings. Like the Johnson-Lindenstrauss lemma, Corollary~\ref{cor:JL} can be proved directly by combining concentration inequalities for $d_A(x,y)$ with a union bound over $|K|^2$ pairs $(x,y) \in K \times K$. In fact, this method of proof allows for the weaker requirement $m \geq C \delta^{-2} \log |K|$. However, as we discuss later, this argument cannot be generalized in a straightforward way to prove Theorem~\ref{thm:embeddings} for general sets $K$. The Hamming distance $d_A(x,y)$ is highly discontinuous, which makes it difficult to extend estimates from points $x,y$ in an $\varepsilon$-net of $K$ to nearby points. \subsection{Cells of uniform tessellations} \label{s:cells} We mentioned two nice features of uniform tessellations in Facts~\ref{fact:embeddings} and \ref{fact:graph}. Let us observe one more property: all cells of a uniform tessellation have small diameter. Indeed, $d_A(x,y) = 0$ iff the points $x,y$ are in the same cell, so by \eqref{eq:uniform tessellation} we have: \begin{fact}[Cells are small] \label{fact:cells} Every cell of a $\delta$-uniform tessellation has diameter at most $\delta$.
\qed \end{fact} With this, Theorem~\ref{thm:tessellations} immediately implies the following: \begin{corollary}[Cells of random uniform tessellations] \label{cor:cells} Consider a tessellation of a subset $K \subseteq S^{n-1}$ by $m \geq C\delta^{-6} w(K)^2$ random hyperplanes. Then, with probability at least $1-\exp(-c \delta^2 m)$, all cells of the tessellation have diameter at most $\delta$. \end{corollary} This result also has a direct proof, which moreover gives a slightly better bound $m \sim \delta^{-4} w(K)^2$. We present this ``curvature argument'' in Section~\ref{sec:curvature}. \subsection{Uniform tessellations in $\mathbb{R}^n$} So far, we have only worked with subsets $K \subseteq S^{n-1}$. It is not difficult to extend our results to bounded sets $K \subset \mathbb{R}^n$. This can be done by embedding such a set $K$ into $S^{n}$ (the sphere in one more dimension) with small bi-Lipschitz distortion. This elementary argument is presented in Section~\ref{sec:Rn}, and it yields the following version of Theorem~\ref{thm:tessellations}: \begin{theorem}[Random uniform tessellations in $\mathbb{R}^n$] \label{thm:tessellations Rn} Consider a bounded subset $K \subset \mathbb{R}^n$ with $\diam(K) = 1$. Let \begin{equation} \label{eq:tessellations Rn m} m \ge C \delta^{-12} w(K-K)^2. \end{equation} Then there exists an arrangement of $m$ affine hyperplanes in $\mathbb{R}^n$ and a scaling factor $\lambda>0$ such that $$ \big| \lambda \cdot d_A(x,y) - \|x-y\|_2 \big| \le \delta, \quad x,y \in K. $$ Here $d_A(x,y)$ denotes the fraction of the affine hyperplanes that separate $x$ and $y$. \end{theorem} \begin{remark}[Mean width in $\mathbb{R}^n$] \label{rem:K-K} While the quantity $w(K-K)$ appearing in \eqref{eq:tessellations Rn m} is clearly bounded by $2w(K)$, it is worth noting that the quantity $w(K-K)$ captures more accurately than $w(K)$ the geometric nature of the ``mean width'' of $K$. Indeed, $w(K-K) = \E h(g)$ where $h(g) = \sup_{x \in K} \< g,x\> - \inf_{x \in K} \< g,x\> $ is the distance between the two parallel supporting hyperplanes of $K$ orthogonal to the random direction $g$, scaled by $\|g\|_2$. \end{remark} \subsection{Optimality} The main object of our study is $m(K) = m(K,\delta)$, the smallest number of hyperplanes that provide a $\delta$-uniform tessellation of a set $K \subseteq S^{n-1}$. One has \begin{equation} \label{eq:upper lower} \log_2 N(K,\delta) \le m(K,\delta) \le C \delta^{-6} w(K)^2, \end{equation} where $N(K,\delta)$ denotes the covering number of $K$, i.e.~the smallest number of balls of radius $\delta$ that cover $K$. The upper bound in \eqref{eq:upper lower} is the conclusion of Theorem~\ref{thm:tessellations}. The lower bound holds because a $\delta$-uniform tessellation provides a decomposition of $K$ into at most $2^m$ cells, each of which lies in a ball of radius $\delta$ by Fact~\ref{fact:cells}. To compare the upper and lower bounds in \eqref{eq:upper lower}, recall Sudakov's inequality \cite[Theorem~3.18]{LT}, which yields $$ \log N(K,\delta) \le C \delta^{-2} w(K)^2. $$ While Sudakov's inequality cannot be reversed in general, there are many situations where it is sharp. Moreover, according to Dudley's inequality (see \cite[Theorem 11.17]{LT} and \cite[Lemma~2.33]{M}), Sudakov's inequality can always be reversed for some scale $\delta>0$ and up to a logarithmic factor in $n$. (See also \cite{LMPT} for a discussion of the sharpness of Sudakov's inequality.)
So the two sides of \eqref{eq:upper lower} are often close to each other, but there is in general some gap. We conjecture that the optimal estimate is $$ c w(K)^2 \le \sup_{\delta>0} \delta^2 m(K,\delta) \le C w(K)^2, $$ so the mean width of $K$ seems to be completely responsible for the uniform tessellations of $K$. \smallskip Note that the lower bound in \eqref{eq:upper lower} holds in greater generality. Namely, it is not possible to have $m < \log_2 N(K,\delta)$ for {\em any} decomposition of $K$ into $2^m$ pieces of diameter at most $\delta$. However, from the upper bound we see that with a slightly larger value $m \sim w(K)^2$, {\em an almost optimal decomposition of $K$ is achieved by a random hyperplane tessellation}. \smallskip In this paper we have not tried to optimize the dependence of $m(K,\delta)$ on $\delta$. This interesting problem is related to the open question on the optimal dependence on distortion in Dvoretzky's theorem. We comment on this in Section~\ref{sec:Dvoretzky}. \subsection{Related work: embeddings of $K$ into normed spaces} Embeddings of subsets $K \subseteq S^{n-1}$ into normed spaces were studied in geometric functional analysis \cite{KM, Sch}. In particular, Klartag and Mendelson \cite{KM} were concerned with embeddings into $\ell_2^m$. They showed that for $m \ge C \delta^{-2} w(K)^2$ there exists a linear map $A: \mathbb{R}^n \to \mathbb{R}^m$ such that $$ \big| m^{-1/2} \|Ax\|_2 - 1 \big| \le \delta, \quad x \in K. $$ One can choose $A$ to be an $m \times n$ random matrix with Gaussian entries as in Theorem~\ref{thm:embeddings}, or with sub-gaussian entries. Schechtman \cite{Sch} gave a simpler argument for a Gaussian matrix, which also works for embeddings into general normed spaces $X$. In the specific case of $X = \ell_1^m$, Schechtman's result states that for $m \ge C \delta^{-2} w(K)^2$ one has $$ \big| m^{-1} \|Ax\|_1 - 1 \big| \le \delta, \quad x \in K. $$ This result also follows from Lemma~\ref{lem:concentration} below. \subsection{Related work: one-bit compressed sensing} Our present work was motivated by the development of {\em one-bit compressed sensing} in \cite{BB, JLBB, PV}, where Theorem~\ref{thm:embeddings} is used in the following context. The vector $x$ represents a signal; the matrix $A$ represents a measurement map $\mathbb{R}^n \to \mathbb{R}^m$ that produces $m \ll n$ linear measurements of $x$; taking the sign of $Ax$ represents quantization of the measurements (an extremely coarse, one-bit quantization). The problem of one-bit compressed sensing is to recover the signal $x$ from the quantized measurements $f(x) = \sign Ax$. The problem of one-bit compressed sensing was introduced by Boufounos and Baraniuk \cite{BB}. Jacques, Laska, Boufounos and Baraniuk \cite{JLBB} realized a connection of this problem to uniform tessellations of the set of sparse signals $K = \{x \in S^{n-1}:\; |\supp(x)| \le s\}$, and to almost isometric embedding of $K$ into the Hamming cube $\{-1,1\}^m$. For this set $K$, they proved Corollary~\ref{cor:cells} with $m \sim \delta^{-1} s \log(n/\delta)$ and a version of Theorem~\ref{thm:embeddings} for $m \sim \delta^{-2} s \log(n/\delta)$. The authors of the present paper analyzed in \cite{PV} a bigger set of ``compressible'' signals $K' = \{ x \in S^{n-1}:\; \|x\|_1 \le \sqrt{s} \}$ and proved for $K'$ a version of Corollary~\ref{cor:cells} with $m \sim \delta^{-4} s \log(n/s)$.
Since the mean widths of both sets $K$ and $K'$ are of the order $\sqrt{s \log(n/s)}$, Theorem~\ref{thm:embeddings} holds for these sets with $m \sim \delta^{-6} s \log(n/s)$. In other words, apart from the dependence on $\delta$ (which is an interesting problem), the prior results follow as special cases of Theorem~\ref{thm:embeddings}. It is important to note that Theorem~\ref{thm:embeddings} addresses only the theoretical aspect of the one-bit compressed sensing problem, which guarantees that the quantized measurement map $f(x) = \sign Ax$ preserves the geometry of signals well. But one also faces an algorithmic challenge: how to recover $x$ from $f(x)$ efficiently, and specifically in polynomial time. We will not touch on this algorithmic aspect here but rather refer the reader to \cite{PV} and to our forthcoming work, which is based on the results of this paper. \subsection{Related work: locality-sensitive hashing} \textit{Locality-sensitive hashing} is a method of dimension reduction. One takes a set of high-dimensional vectors in $\mathbb{R}^n$, and the goal is to hash nearby vectors to the same bin with high probability. More generally, one may desire that the distance between bins be nearly proportional to the distance between the original items. There have been a number of papers which suggest creating such mappings onto the Hamming cube \cite{GW, AP, Charikar, ASS, KOR}, some of which use a random hyperplane tessellation as defined in this paper. The new challenge considered herein is to create a locality-sensitive hashing scheme for an infinite set. \subsection{Overview of the argument} Let us briefly describe our proof of the results stated above. Since the distance in the Hamming cube $\{-1,1\}^m$ can be expressed as $(2m)^{-1}\|x-y\|_1$, the Hamming cube is isometrically embedded in $\ell_1^m$. Before trying to embed $K \subseteq S^{n-1}$ into the Hamming cube as claimed in Theorem~\ref{thm:embeddings}, we shall first take a simpler step and embed $K$ almost isometrically into the bigger space $\ell_1^m$ with $m \sim \delta^{-2} w(K)^2$. A result of this type was given by Schechtman \cite{Sch}. In Section~\ref{sec:into ell1} we prove a similar result by a simple and direct argument in probability in Banach spaces. Our next and non-trivial step is to re-embed the set from $\ell_1^m$ into its subset, the Hamming cube $\{-1,1\}^m$. In Section~\ref{sec:curvature} we give a simple ``curvature argument'' that allows us to deduce Corollary~\ref{cor:cells} on the diameter of cells, and even with a better dependence on $\delta$, namely $m \sim \delta^{-4} w(K)^2$. However, a genuine limitation of the curvature argument makes it too weak to deduce Theorem~\ref{thm:tessellations} this way. We instead attempt to prove Theorem~\ref{thm:tessellations} by an $\varepsilon$-net argument, which typically proceeds as follows: (a) show that $d_A(x,y) \approx d(x,y)$ holds for a fixed pair $x,y \in K$ with high probability; (b) take the union bound over all pairs $x,y$ in a finite $\varepsilon$-net $N_\varepsilon$ of $K$; (c) extend the estimate from $N_\varepsilon$ to $K$ by approximation. Unfortunately, as we indicate in Section~\ref{sec:soft}, the approximation step (c) must fail due to the discontinuity of the Hamming distance $d_A(x,y)$. A solution proposed in \cite{B, JLBB} was to choose $\varepsilon$ so small that none of the random hyperplanes pass near points $x,y \in N_\varepsilon$ with high probability.
This strategy was effective for the set $K = \{x \in S^{n-1}:\; |\supp(x)| \le s\}$ because the covering number of this specific set $K$ has a mild (logarithmic) dependence on $\varepsilon$, namely $\log N(K,\varepsilon) \le s \log (Cn/\varepsilon s)$. However, adapting this strategy to general sets $K$ would cause our estimate on $m$ to increase by a factor of $n$. The solution we propose in the present paper is to ``soften'' the Hamming distance; see Section~\ref{sec:soft} for the precise notion. The {\em soft Hamming distance} enjoys some continuity properties, as described in Lemmas~\ref{lem:continuity} and \ref{lem:continuity L1}. In Section~\ref{sec:proof tessellations} we develop the $\varepsilon$-net argument for the soft Hamming distance. Interestingly, the approximation step (c) for the soft Hamming distance will be based on the embedding of $K$ into $\ell_1^m$, which incidentally was our point of departure. \subsection{Notation} Throughout the paper, $C$, $c$, $C_1$, etc.~denote positive absolute constants whose values may change from line to line. For an integer $n$, we denote $[n] = \{1,\ldots,n\}$. The $\ell_p$ norms of a vector $x \in \mathbb{R}^n$ for $p \in \{0,1,2,\infty\}$ are defined as\footnote{Note that, strictly speaking, $\|\cdot\|_0$ is not a norm on $\mathbb{R}^n$.} $$ \|x\|_0 = |\supp(x)| = | \{ i \in [n]: x_i \ne 0 \} |, \; \|x\|_1 = \sum_{i=1}^n |x_i|, \; \|x\|_2 = \big( \sum_{i=1}^n x_i^2 \big)^{1/2}, \; \|x\|_\infty = \max_{i \in [n]} |x_i|. $$ We shall work with the normed spaces $\ell_p^n = (\mathbb{R}^n, \|\cdot\|_p)$ for $p \in \{1,2,\infty\}$. The unit Euclidean ball in $\mathbb{R}^n$ is denoted $B_2^n = \{ x \in \mathbb{R}^n:\; \|x\|_2 \le 1 \}$ and the unit Euclidean sphere is denoted $S^{n-1} = \{ x \in \mathbb{R}^n:\; \|x\|_2 = 1 \}$. As usual, $\mathcal{N}(0,1)$ stands for the univariate normal distribution with zero mean and unit variance, and $\mathcal{N}(0,I_n)$ stands for the multivariate normal distribution in $\mathbb{R}^n$ with zero mean and whose covariance matrix is the identity $I_n$. \section{Embedding into $\ell_1$} \label{sec:into ell1} \begin{lemma}[Concentration] \label{lem:concentration} Consider a bounded subset $K \subset \mathbb{R}^n$ and independent random vectors $a_1,\ldots,a_m \sim \mathcal{N}(0,I_n)$ in $\mathbb{R}^n$. Let $$ Z = \sup_{x \in K} \Big| \frac{1}{m} \sum_{i=1}^m |\< a_i,x\> | - \sqrt{\frac{2}{\pi}} \|x\|_2 \Big|. $$ (a) One has \begin{equation} \label{eq:L1 embedding} \E Z \le \frac{4 w(K)}{\sqrt{m}}. \end{equation} (b) The following deviation inequality holds: \begin{equation} \label{eq:isoperimetric deviation inequality} \Pr{Z > \frac{4 w(K)}{\sqrt{m}} + u} \le 2 \exp \Big(-\frac{m u^2}{2 d(K)^2} \Big), \qquad u>0 \end{equation} where $d(K) = \max_{x \in K} \|x\|_2$. \end{lemma} \begin{proof} (a) Note that $\E |\< a_i,x\> | = \sqrt{\frac{2}{\pi}} \|x\|_2$ for all $i$. Let $\varepsilon_1, \hdots, \varepsilon_m$ be a sequence of i.i.d.\ Rademacher random variables. A standard symmetrization argument (see \cite[Lemma 6.3]{LT}) followed by the contraction principle (see \cite[Theorem 4.12]{LT}) yields that $$ \E Z \le 2\E \sup_{x \in K} \Big| \frac{1}{m} \sum_{i=1}^m \varepsilon_i \abs{\< a_i,x\>} \Big| \le 4\E \sup_{x \in K} \Big| \frac{1}{m} \sum_{i=1}^m \varepsilon_i \< a_i,x\> \Big| = 4\E \sup_{x \in K} \Big| \Big\langle \frac{1}{m} \sum_{i=1}^m \varepsilon_i a_i, x \Big\rangle \Big|.
$$ By the rotational invariance of the Gaussian distribution, $\frac{1}{m} \sum_{i=1}^m \varepsilon_i a_i$ is distributed identically to $g/\sqrt{m}$ where $g \sim \mathcal{N}(0,I_n)$. Therefore $$ \E Z \le \frac{4}{\sqrt{m}} \E \sup_{x \in K} |\< g,x\> | = \frac{4 w(K)}{\sqrt{m}}. $$ This proves the upper bound in \eqref{eq:L1 embedding}. (b) We combine the result of (a) with the Gaussian concentration inequality. To this end, we must first show that the map $A \mapsto Z = Z(A)$ is Lipschitz, where $A = (a_1,\ldots,a_m)$ is considered as a matrix in the space $\mathbb{R}^{nm}$ equipped with the Frobenius norm $\|\cdot\|_F$ (which coincides with the Euclidean norm on $\mathbb{R}^{nm}$). It follows from two applications of the triangle inequality followed by two applications of the Cauchy-Schwarz inequality that for $A = (a_1,\ldots,a_m), \, B = (b_1,\ldots,b_m) \in \mathbb{R}^{nm}$ we have $$ \abs{Z(A) - Z(B)} \leq \sup_{x \in K} \frac{1}{m} \sum_{i=1}^m \abs{\< a_i - b_i, x \> } \leq \frac{d(K)}{m} \sum_{i=1}^m \twonorm{a_i - b_i} \leq \frac{d(K)}{\sqrt{m}} \fronorm{A - B}. $$ Thus $Z$ has Lipschitz constant bounded by $d(K)/\sqrt{m}$. We may now bound the deviation probability for $Z$ using the Gaussian concentration inequality (see \cite[Equation 1.6]{LT}) as follows: \[\Pr{\abs{Z - \E Z} \geq u} \leq 2 \exp(-mu^2/(2d(K)^2)).\] The deviation inequality \eqref{eq:isoperimetric deviation inequality} now follows from the bound on $\E Z$ from (a). \end{proof} \begin{remark}[Random matrix formulation] \label{rem:rmt} One can state Lemma~\ref{lem:concentration} in terms of random matrices. Indeed, let $A$ be an $m \times n$ random matrix with independent $\mathcal{N}(0,1)$ entries. Then its rows $a_i$ satisfy the assumption of Lemma~\ref{lem:concentration}, and we can express $Z$ as \begin{equation} \label{eq:Z} Z = \sup_{x \in K} \Big| \frac{1}{m} \|Ax\|_1 - \sqrt{\frac{2}{\pi}} \|x\|_2 \Big|. \end{equation} \end{remark} Using this remark for the set $K-K$, we obtain a linear embedding of $K$ into $\ell_1$: \begin{corollary}[Embedding into $\ell_1$] \label{cor:into L1} Consider a subset $K \subset \ell_2^n$ and let $\delta>0$. Let $$ m \ge C \delta^{-2} w(K)^2. $$ Then, with probability at least $1 - 2 \exp(-m\delta^2/32)$, the linear map $f : K \to \ell_1^m$ defined as $f(x) = \frac{1}{m} \sqrt{\frac{\pi}{2}} Ax$ is a $\delta$-isometry. Thus $K$ can be linearly embedded into $\ell_1^m$ with Gromov-Hausdorff distortion at most $\delta$. \end{corollary} \begin{proof} Let $A$ be the random matrix as in Remark~\ref{rem:rmt}. Using Lemma~\ref{lem:concentration} for $K-K$ and noting the form of $Z$ in \eqref{eq:Z}, we conclude that the following event holds with probability at least $1 - 2 \exp(-m\delta^2/32)$: $$ \Big| \frac{1}{m} \|Ax-Ay\|_1 - \sqrt{\frac{2}{\pi}} \|x-y\|_2 \Big| \le \frac{8w(K-K)}{\sqrt{m}} \le \frac{16 w(K)}{\sqrt{m}} \le \delta, \qquad x,y \in K. $$ \end{proof} \begin{remark} The above argument shows in fact that Corollary~\ref{cor:into L1} holds for $$ m \ge C \delta^{-2} w(K-K)^2. $$ As we noticed in Remark~\ref{rem:K-K}, the quantity $w(K-K)$ more accurately reflects the geometric meaning of the mean width than $w(K)$. \end{remark} \begin{remark}[Low M$^*$ estimate] Note that for the subspace $E = \ker A$ we have from \eqref{eq:Z} that $Z \ge \sup_{x \in K \cap E} \sqrt{\frac{2}{\pi}} \|x\|_2 = \sqrt{\frac{2}{\pi}} \, d(K \cap E)$. Then Lemma~\ref{lem:concentration} implies that \begin{equation} \label{eq:low M*} \E d(K \cap E) \le \frac{6 w(K)}{\sqrt{m}}.
\end{equation} By the rotation invariance of the Gaussian distribution, inequality \eqref{eq:low M*} holds for a random subspace $E$ in $\mathbb{R}^n$ of given codimension $m \le n$, uniformly distributed according to the Haar measure. This result recovers (up to the absolute constant $6$, which can be improved) the so-called {\em low M$^*$ estimate} from geometric functional analysis, see \cite[Section 15.1]{LT}. \end{remark} \begin{remark}[Dimension reduction] As we emphasized in the introduction, for many sets $K \subset \mathbb{R}^n$ one has $w(K) \ll \sqrt{n}$. In such cases Corollary~\ref{cor:into L1} works for $m \ll n$. The embedding of $K$ into $\ell_1^m$ yields dimension reduction for $K$ (from $n$ to $m\ll n$ dimensions). For example, if $K$ is a finite set then $w(K) \le C \sqrt{\log |K|}$ (see e.g. \cite[(3.13)]{LT}), and so Corollary~\ref{cor:into L1} applies with $m \sim \log |K|$. This gives the following variant of the {\em Johnson-Lindenstrauss Lemma}: every finite subset of a Euclidean space can be linearly embedded in $\ell_1^m$ with $m \sim \log|K|$ and with small distortion in the Gromov-Hausdorff metric. Stronger variants of the Johnson-Lindenstrauss lemma are known for {\em Lipschitz} rather than Gromov-Hausdorff embeddings into $\ell_2^m$ and $\ell_1^m$ \cite{AC, Sch}. However, for general sets $K$ (in particular for any set with nonempty interior) a Lipschitz embedding into lower dimensions is clearly impossible; still, a Gromov-Hausdorff embedding exists due to Corollary~\ref{cor:into L1}. \end{remark} \section{Proof of Corollary~\ref{cor:cells} by a curvature argument} \label{sec:curvature} In this section we give a short argument that leads to a version of Corollary~\ref{cor:cells} with a slightly better dependence of $m$ on $\delta$. \begin{theorem}[Cells of random uniform tessellations] \label{thm:cells} Consider a subset $K \subseteq S^{n-1}$ and let $\delta > 0$. Let $$ m \ge C \delta^{-4} w(K)^2 $$ and consider an arrangement of $m$ independent random hyperplanes in $\mathbb{R}^n$ that are uniformly distributed according to the Haar measure. Then, with probability at least $1-2\exp(-c \delta^4 m)$, all cells of the tessellation have diameter at most $\delta$. \end{theorem} The argument is based on Lemma~\ref{lem:concentration}. If points $x,y \in K$ belong to the same cell, then the midpoint $z = \frac{1}{2}(x+y)$ also belongs to the same cell (after normalization). Using Lemma~\ref{lem:concentration} one can then show that $\|z\|_2 \approx \frac{1}{2}(\|x\|_2 + \|y\|_2) = 1$. Due to the curvature of the sphere, this forces the length $\|x-y\|_2$ of the interval to be small, which means that the diameter of the cell is small. The formal argument is below. \begin{proof} We represent the random hyperplanes as $\{a_i\}^\perp$, where $a_1,\ldots,a_m \sim \mathcal{N}(0,I_n)$ are independent random vectors in $\mathbb{R}^n$. Let $\delta, m$ be as in the assumptions of the theorem. We shall apply Lemma~\ref{lem:concentration} for the sets $K$ and $\frac{1}{2}(K+K)$ and for $u = \varepsilon/2$, where we set $\varepsilon = \delta^2/16$. Since both of these sets are contained in the unit Euclidean ball, we obtain that with probability at least $1-2\exp(-c \delta^4 m)$ the following event holds: \begin{equation} \label{eq:v} \Big| \sqrt{\frac{\pi}{2}} \frac{1}{m} \sum_{i=1}^m |\< a_i,v\> | - \|v\|_2 \Big| < \varepsilon, \qquad v \in K \cup \frac{1}{2}(K+K). \end{equation} Assume that the event \eqref{eq:v} holds.
Consider a pair of points $x,y \in K$ that belong to the same cell of the tessellation, which means that $$ \sign \< a_i,x\> = \sign \< a_i, y\> , \qquad i \in [m]. $$ To complete the proof it suffices to show that $\|x-y\|_2 \le \delta$. This will give the desired diameter $\delta$ in the Euclidean metric. Furthermore, since for small $\delta$ the Euclidean and the geodesic distances are equivalent, the conclusion will hold for the geodesic distance as well. We shall use \eqref{eq:v} for $x, y \in K$ and for the midpoint $z := \frac{1}{2}(x+y) \in \frac{1}{2}(K+K)$. Clearly $\sign \< a_i, z\> = \sign \< a_i,x\> = \sign \< a_i, y\> $, hence $$ |\< a_i,z\> | = \frac{1}{2} \big( |\< a_i, x\> | + |\< a_i,y\> | \big), \qquad i \in [m]. $$ Therefore we obtain from \eqref{eq:v} that \begin{align} \label{eq:z large} \|z\|_2 &\ge \sqrt{\frac{\pi}{2}} \frac{1}{m} \sum_{i=1}^m |\< a_i,z\> | - \varepsilon = \frac{1}{2} \Big[ \sqrt{\frac{\pi}{2}} \frac{1}{m} \sum_{i=1}^m |\< a_i,x\> | + \sqrt{\frac{\pi}{2}} \frac{1}{m} \sum_{i=1}^m |\< a_i,y\> | \Big] - \varepsilon \\ &\ge \frac{1}{2} (\|x\|_2-\varepsilon + \|y\|_2-\varepsilon) - \varepsilon = 1-2\varepsilon. \nonumber \end{align} By the parallelogram law, we conclude that $$ \|x-y\|_2^2 = 4 - \|x+y\|_2^2 = 4(1-\|z\|_2^2) \le 16\varepsilon = \delta^2. $$ This completes the proof. \end{proof}
\subsection{Limitations of the curvature argument}
Unfortunately, the curvature argument does not lend itself to proving the more general result, Theorem~\ref{thm:tessellations} on uniform tessellations. To see why, suppose $x,y \in K$ do not belong to the same cell but instead $d_A(x,y) = d$ for some small $d \in (0,1)$. Consider the set of mismatched signs $$ T := \big\{ i \in [m]:\; \sign \< a_i,x\> \ne \sign \< a_i, y\> \big\}; \qquad \frac{|T|}{m} = d. $$ These signs create an additional error term in the right hand side of \eqref{eq:z large}, which is \begin{equation} \label{eq:mismatch} \sqrt{\frac{\pi}{2}} \frac{1}{m} \sum_{i \in T} |\< a_i,v_i\> | \qquad \text{where } v_i \in \{x,y\}. \end{equation} By analogy with Lemma~\ref{lem:concentration}, we can expect that this term should be approximately equal to $|T|/m = d$. If this is true, then \eqref{eq:z large} becomes in our situation $\|z\|_2 \ge 1 - 2\varepsilon - d$, which leads as before to $\|x-y\|_2^2 \lesssim \varepsilon + d$. Ignoring $\varepsilon$, we see that the best estimate the curvature argument can give is $d(x,y) \lesssim \sqrt{d_A(x,y)}$ rather than $d(x,y) \lesssim d_A(x,y)$ that is required in Theorem~\ref{thm:tessellations}. \smallskip The weak point of this argument is that it takes into account the size of $T$ but ignores the nature of $T$. For every $i \in T$, the hyperplane $\{a_i\}^\perp$ passes through the arc connecting $x$ and $y$. If the length of the arc $d(x,y)$ is small, this creates a strong constraint on $a_i$. Conditioning the distribution of $a_i$ on the constraint that $i \in T$ creates a bias toward smaller values of $|\< a_i,x\> |$ and $|\< a_i,y\> |$. As a result, the conditional expected value of the error term \eqref{eq:mismatch} should be smaller than $d$. Computing this conditional expectation is not a problem for a given pair $x,y$, but it seems to be difficult to carry out a uniform argument over $x,y \in K$ where the (conditional) distribution of $a_i$ depends on $x,y$. We instead propose a different and somewhat more conceptual way to deduce Theorem~\ref{thm:tessellations} from Lemma~\ref{lem:concentration}. This argument will be developed in the rest of this paper.
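\begin{remark}[Numerical illustration]
The two concentration effects used above are easy to observe empirically. The following Python sketch (an illustration only; the dimension $n$, the number of hyperplanes $m$, and the seed are chosen arbitrarily) draws Gaussian hyperplanes and checks that the fraction of hyperplanes separating two points of $S^{n-1}$ is close to the normalized geodesic distance, and that $\frac{1}{m}\|Ax\|_1$ is close to $\sqrt{2/\pi}\,\|x\|_2$, i.e. that the quantity $Z$ in \eqref{eq:Z} is small.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
n, m = 50, 20000
A = rng.standard_normal((m, n))      # rows a_i ~ N(0, I_n)

x = rng.standard_normal(n); x /= np.linalg.norm(x)
y = rng.standard_normal(n); y /= np.linalg.norm(y)

# Hamming distance d_A(x, y): fraction of separating hyperplanes
d_A = np.mean(np.sign(A @ x) != np.sign(A @ y))
# normalized geodesic distance d(x, y) on the sphere
d = np.arccos(np.clip(x @ y, -1.0, 1.0)) / np.pi
print(d_A, d)                        # agree up to ~ m^(-1/2)

# Z of eq. (eq:Z) is small: (1/m)||Ax||_1 ~ sqrt(2/pi) ||x||_2
print(np.abs(A @ x).mean(), np.sqrt(2.0 / np.pi))
\end{verbatim}
\end{remark}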
\subsection{Dvoretzky theorem and dependence on $\delta$} \label{sec:Dvoretzky}
The unusual dependence $\delta^{-4}$ in Theorem~\ref{thm:cells} is related to the open problem of the optimal dependence on distortion in the Dvoretzky theorem. Indeed, consider the special case of the tessellation problem where $K = S^{n-1}$ and $w(K) \sim \sqrt{n}$. Then Lemma~\ref{lem:concentration} in its geometric formulation (see equation \eqref{eq:Z} and Corollary~\ref{cor:into L1}) states that $\ell_2^n$ embeds into $\ell_1^m$ whenever $m \ge C \varepsilon^{-2} n$, meaning that $$ (1-\varepsilon) \|x\|_2 \le \|\Phi x\|_1 \le (1+\varepsilon) \|x\|_2, \qquad x \in \mathbb{R}^n, $$ where $\Phi = \sqrt{\frac{\pi}{2}} \frac{1}{m} A$. Equivalently, there exists an $n$-dimensional subspace of $\ell_1^m$ that is $(1+\varepsilon)$-Euclidean, where $n \sim \varepsilon^2 m$. This result recovers the well-known Dvoretzky theorem in V.~Milman's formulation (see \cite[Theorem 4.2.1]{GM}) for the space $\ell_1^m$, and with the best known dependence on $\varepsilon$. However, it is not known whether $\varepsilon^2$ is the optimal dependence for $\ell_1^m$; see \cite{Sch} for a discussion of the general problem of dependence on $\varepsilon$ in Dvoretzky theorem. These observations suggest that we can reverse our logic. Suppose one can prove Dvoretzky theorem for $\ell_1^m$ with a better dependence on $\varepsilon$, thereby constructing a $(1+\varepsilon)$-Euclidean subspace of dimension $n \sim f(\varepsilon) m$ with $f(\varepsilon) \gg \varepsilon^2$. Then such a construction can replace Lemma~\ref{lem:concentration} in the curvature argument. This will lead to Theorem~\ref{thm:cells} for $K = S^{n-1}$ with an improved dependence on $\delta$, namely with $m \sim n/f(\delta^2)$. Concerning lower bounds, the best possible dependence of $m$ on $\delta$ should be $\delta^{-1}$, which follows by considering the case $n=2$. This dependence will be achieved if Dvoretzky theorem for $\ell_1^m$ is valid with $n \sim \varepsilon^{1/2} m$. This is unknown.
\section{Toward Theorem~\ref{thm:tessellations}: a soft Hamming distance} \label{sec:soft}
Our proof of Theorem~\ref{thm:tessellations} will be based on a covering argument. A standard covering argument of geometric functional analysis would proceed in our situation as follows: \begin{enumerate}[(a)] \item Show that $d_A(x,y) \approx d(x,y)$ with high probability for a fixed pair $x,y$. This can be done using standard concentration inequalities. \item Prove that $d_A(x,y) \approx d(x,y)$ uniformly for all $x,y$ in a finite $\varepsilon$-net $N_\varepsilon$ of $K$. Sudakov's inequality can be used to estimate the cardinality of $N_\varepsilon$ via the mean width $w(K)$. The conclusion will follow from step (a) by the union bound over $(x,y) \in N_\varepsilon \times N_\varepsilon$. \item Extend the estimate $d_A(x,y) \approx d(x,y)$ from $x,y \in N_\varepsilon$ to $x,y \in K$ by approximation. \end{enumerate} While the first two steps are relatively standard, step (c) poses a challenge in our situation. The Hamming distance $d_A(x,y)$ is a discontinuous function of $x,y$, so it is not clear whether the estimate $d_A(x,y) \approx d(x,y)$ can be extended from a pair of points $x,y \in N_\varepsilon$ to a pair of nearby points. In fact, for some tessellations this task is impossible.
Figure~\ref{fig:non-uniform-tessellation} shows that there exist very non-uniform tessellations that are nevertheless very uniform for an $\varepsilon$-net, namely one has $d_A(x,y) = d(x,y)$ for all $x,y \in N_\varepsilon$. The set $K$ in that example is a subset of the plane $\mathbb{R}^2$, and one can clearly embed such a set into the sphere $S^2$ as well.
\begin{figure}[htp] \centering \includegraphics[height=2.2cm]{non-uniform-tessellation.pdf} \caption{This hyperplane tessellation of the set $K =[-\frac{1}{2}, \frac{1}{2}] \times [-\frac{\varepsilon}{2},\frac{\varepsilon}{2}]$ is very non-uniform, as all cells have diameter at least $1$. The tessellation is nevertheless very uniform for the $\varepsilon$-net $N_\varepsilon = \varepsilon\mathbb{Z}^2 \cap K$, as $d_A(x,y) = \|x-y\|_2$ for all $x,y \in N_\varepsilon$.} \label{fig:non-uniform-tessellation} \end{figure}
To overcome the discontinuity problem, we propose to work with a soft version of the Hamming distance. Recall that $m$ hyperplanes are determined by their normals $a_1,\ldots,a_m \in \mathbb{R}^n$, which we organize in an $m \times n$ matrix $A$ with rows $a_i$. Then the usual (``hard'') Hamming distance $d_A(x,y)$ on $\mathbb{R}^n$ with respect to $A$ can be expressed as \begin{equation} \label{eq:hard} d_A(x,y) = \frac{1}{m} \sum_{i=1}^m {\bf 1}_{\mathcal{E}_i}, \quad \text{where} \quad \mathcal{E}_i = \{ \sign\< a_i, x\> \ne \sign\< a_i, y\> \}. \end{equation}
\begin{definition}[Soft Hamming distance] Consider an $m \times n$ matrix $A$ with rows $a_1,\ldots,a_m$, and let $t \in \mathbb{R}$. The {\em soft Hamming distance} $d_A^t(x,y)$ on $\mathbb{R}^n$ is defined as \begin{gather} d_A^t(x,y) = \frac{1}{m} \sum_{i=1}^m {\bf 1}_{\mathcal{F}_i}, \quad \text{where} \nonumber\\ \mathcal{F}_i = \{ \< a_i, x\> > t, \; \< a_i, y\> < -t \} \cup \{ -\< a_i, x\> > t, \; -\< a_i, y\> < -t \}. \label{eq:soft} \end{gather} \end{definition}
Both positive and negative $t$ may be considered. For positive $t$ the soft Hamming distance counts the hyperplanes that separate $x,y$ well enough; for negative $t$ it counts the hyperplanes that separate or nearly separate $x,y$.
\begin{remark}[Comparison of soft and hard Hamming distances] Clearly $d_A^t(x,y)$ is a non-increasing function of $t$. Moreover, \begin{align*} d_A^t(x,y) = d_A(x,y) \quad &\text{for } t = 0; \\ d_A^t(x,y) \le d_A(x,y) \quad &\text{for } t \ge 0; \\ d_A^t(x,y) \ge d_A(x,y) \quad &\text{for } t \le 0. \end{align*} \end{remark}
The soft Hamming distance for a fixed $t$ is as discontinuous as the usual (hard) Hamming distance. However, some version of continuity emerges when we allow $t$ to vary slightly:
\begin{lemma}[Continuity] \label{lem:continuity} Let $x,y,x',y' \in \mathbb{R}^n$, and assume that $\|Ax'\|_\infty \le \varepsilon$, $\|Ay'\|_\infty \le \varepsilon$ for some $\varepsilon>0$. Then for every $t \in \mathbb{R}$ one has $$ d_A^{t+\varepsilon}(x,y) \le d_A^t(x+x',y+y') \le d_A^{t-\varepsilon}(x,y). $$ \end{lemma}
\begin{proof} Consider the events $\mathcal{F}_i = \mathcal{F}_i (x,y,t)$ from the definition of the soft Hamming distance \eqref{eq:soft}. By the assumptions, we have $|\< a_i, x'\> | \le \varepsilon$, $|\< a_i, y'\> | \le \varepsilon$ for all $i \in [m]$. This implies by the triangle inequality that $$ \mathcal{F}_i(x,y,t+\varepsilon) \subseteq \mathcal{F}_i(x+x',y+y',t) \subseteq \mathcal{F}_i(x,y,t-\varepsilon). $$ The conclusion of the lemma follows.
\end{proof} We are ready to state a stronger version of Theorem~\ref{thm:tessellations} for the soft Hamming distance.
\begin{theorem}[Random uniform tessellations: soft version] \label{thm:tessellations soft} Consider a subset $K \subseteq S^{n-1}$ and let $\delta > 0$. Let $$ m \ge C \delta^{-6} w(K)^2 $$ and pick $t \in \mathbb{R}$. Consider an $m \times n$ random (Gaussian) matrix $A$ with independent rows $a_1,\ldots,a_m \sim \mathcal{N}(0,I_n)$. Then with probability at least $1-\exp(-c \delta^2 m)$, one has $$ | d_A^t(x,y) - d(x,y) | \le \delta + 2|t|, \quad x,y \in K. $$ \end{theorem}
Note that if we take $t = 0$ in the above theorem, we recover Theorem~\ref{thm:tessellations}. However, we find it easier to prove the result for general $t$, since in our argument we will work with different values of $t$ for the soft Hamming distance. Theorem~\ref{thm:tessellations soft} is proven in the next section.
\section{Proof of Theorem~\ref{thm:tessellations soft} on the soft Hamming distance}
We will follow the covering argument outlined in the beginning of Section~\ref{sec:soft}, but instead of $d_A(x,y)$ we shall work with the soft Hamming distance $d_A^t(x,y)$.
\subsection{Concentration of distance for a given pair}
At the first step, we will check that $d_A^t(x,y) \approx d(x,y)$ with high probability for a fixed pair $x,y$. Let us first verify that this estimate holds in expectation, i.e. that $\E d_A^t(x,y) \approx d(x,y)$. One can easily check that \begin{equation} \label{eq:Ed again} \E d_A(x,y) = d(x,y), \end{equation} so we may just compare $\E d_A^t(x,y)$ to $\E d_A(x,y)$. Here is a slightly stronger result:
\begin{lemma}[Comparing soft and hard Hamming distances in expectation] \label{eq:Edt Ed} Let $A$ be a random Gaussian matrix as in Theorem~\ref{thm:tessellations soft}. Then, for every $t \in \mathbb{R}$ and every $x,y \in \mathbb{R}^n$, one has $$ | \E d_A^t(x,y) - d(x,y) | \le \E |d_A^t (x,y) - d_A(x,y)| \le 2|t|. $$ \end{lemma}
\begin{proof} The first inequality follows from \eqref{eq:Ed again} and Jensen's inequality. To prove the second inequality, we use the events $\mathcal{E}_i$ and $\mathcal{F}_i$ from Equations \eqref{eq:hard}, \eqref{eq:soft} defining the hard and soft Hamming distances, respectively. It follows that \begin{align*} \E |d_A^t (x,y) - d_A(x,y)| &= \E \Big| \frac{1}{m} \sum_{i=1}^m ({\bf 1}_{\mathcal{E}_i} - {\bf 1}_{\mathcal{F}_i}) \Big| \\ &\le \E |{\bf 1}_{\mathcal{E}_1} - {\bf 1}_{\mathcal{F}_1}| \qquad \text{(by triangle inequality and identical distribution)} \\ &= \mathbb{P} \{ \mathcal{E}_1 \bigtriangleup \mathcal{F}_1\} \\ &\le \mathbb{P} \{ |\< a_1,x\> | \le |t| \} + \mathbb{P} \{ |\< a_1,y\> | \le |t| \} \\ &\le 2 \mathbb{P} \{ |g| \le |t| \} \qquad \text{(where $g \sim \mathcal{N}(0,1)$)} \\ &\le 2|t| \qquad \text{(by the density of the normal distribution).} \qedhere \end{align*} \end{proof}
Now we upgrade Lemma~\ref{eq:Edt Ed} to a concentration inequality:
\begin{lemma}[Concentration of distance] \label{lem:concentration dist} Let $A$ be a random Gaussian matrix as in Theorem~\ref{thm:tessellations soft}. Then, for every $t \in \mathbb{R}$ and every $x,y \in \mathbb{R}^n$, the following deviation inequality holds: $$ \mathbb{P} \big\{ |d_A^t(x,y) - d(x,y)| > 2|t| + \delta \big\} \le 2 \exp(-2 \delta^2 m), \quad \delta > 0. $$ \end{lemma}
\begin{proof} By definition, $m \cdot d_A^t(x,y)$ has the binomial distribution $\text{Bin}(m,p)$.
The parameter $p = \E d_A^t(x,y)$ satisfies by Lemma~\ref{eq:Edt Ed} that $$ |p-d(x,y)| \le 2|t|. $$ A standard Chernoff bound for binomial random variables states that $$ \mathbb{P} \big\{ |d_A^t(x,y) - p| > \delta \big\} \le 2 \exp(-2 \delta^2 m), \quad \delta > 0, $$ see e.g. \cite[Corollary~A.1.7]{AS}. The triangle inequality completes the proof. \end{proof}
\subsection{Concentration of distance over an $\varepsilon$-net} \label{sec:over net}
Let us fix a small $\varepsilon>0$ whose value will be determined later. Let $N_\varepsilon$ be an $\varepsilon$-net of $K$ in the Euclidean metric. By Sudakov's inequality (see \cite[Theorem~3.18]{LT}), we can arrange the cardinality of $N_\varepsilon$ to satisfy \begin{equation} \label{eq:net} \log |N_\varepsilon| \le C \varepsilon^{-2} w(K)^2. \end{equation} We can decompose every vector $x \in K$ into a {\em center} $x_0$ and a {\em tail} $x'$ so that \begin{equation} \label{eq:decomposition} x = x_0 + x', \quad \text{where} \quad x_0 \in N_\varepsilon, \quad x' \in (K-K) \cap \varepsilon B_2^n. \end{equation} We first control the centers by taking a union bound in Lemma~\ref{lem:concentration dist} over the net $N_\varepsilon$:
\begin{lemma}[Concentration of distance over a net] \label{lem:concentration net} Let $A$ be a random Gaussian matrix as in Theorem~\ref{thm:tessellations soft}. Let $N_\varepsilon$ be a subset of $S^{n-1}$ whose cardinality satisfies \eqref{eq:net}. Let $\delta>0$, and assume that \begin{equation} \label{eq:concentration net m} m \ge C \varepsilon^{-2} \delta^{-2} w(K)^2. \end{equation} Let $t \in \mathbb{R}$. Then the following holds with probability at least $1 - 2\exp(-\delta^2 m)$: $$ |d_A^t(x_0,y_0) - d(x_0,y_0)| \le 2|t| + \delta, \quad x_0,y_0 \in N_\varepsilon. $$ \end{lemma}
\begin{proof} By Lemma~\ref{lem:concentration dist} and a union bound over the set of pairs $(x_0,y_0) \in N_\varepsilon \times N_\varepsilon$, we obtain $$ \mathbb{P} \Big\{ \sup_{x,y \in N_\varepsilon} |d_A^t(x,y) - d(x,y)| > 2|t| + \delta \Big\} \le |N_\varepsilon|^2 \cdot 2 \exp(-2 \delta^2 m) \le 2 \exp(-\delta^2 m) $$ where the last inequality follows by \eqref{eq:net} and \eqref{eq:concentration net m}. The proof is complete. \end{proof}
\subsection{Control of the tails}
Now we control the tails $x' \in (K-K) \cap \varepsilon B_2^n$ in decomposition \eqref{eq:decomposition}.
\begin{lemma}[Control of the tails] \label{lem:tails} Consider a subset $K \subseteq S^{n-1}$ and let $\varepsilon>0$. Let $$ m \ge C \varepsilon^{-2} w(K)^2. $$ Consider independent random vectors $a_1,\ldots,a_m \sim \mathcal{N}(0,I_n)$. Then with probability at least $1-2\exp(-cm)$, one has $$ \frac{1}{m} \sum_{i=1}^m |\< a_i,x'\> | \le \varepsilon \quad \text{for all } x' \in (K-K) \cap \varepsilon B_2^n. $$ \end{lemma}
\begin{proof} Let us apply Lemma~\ref{lem:concentration} for the set $T = (K-K) \cap \varepsilon B_2^n$ instead of $K$, and for $u = \varepsilon/8$. Since $d(T) = \max_{x' \in T} \|x'\|_2 \le \varepsilon$, we obtain that the following holds with probability at least $1 - 2\exp(-cm)$: \begin{align} \sup_{x' \in T} \frac{1}{m} \sum_{i=1}^m |\< a_i,x'\> | &\le \sup_{x' \in T} \Big| \frac{1}{m} \sum_{i=1}^m |\< a_i,x'\> | - \sqrt{\frac{2}{\pi}} \|x'\|_2 \Big| + \sqrt{\frac{2}{\pi}} \, \varepsilon \nonumber\\ &\le \frac{4 w(T)}{\sqrt{m}} + \frac{\varepsilon}{8} + \sqrt{\frac{2}{\pi}} \, \varepsilon. \label{eq:over T} \end{align} Note that $w(T) \le w(K-K) \le 2w(K)$.
So using the assumption on $m$ we conclude that the quantity in \eqref{eq:over T} is bounded by $\varepsilon$, as claimed. \end{proof} \subsection{Approximation} Now we establish a way to transfer the distance estimates from an $\varepsilon$-net $N_\varepsilon$ to the full set $K$. This is possible by a continuity property of the soft Hamming distance, which we outlined in Lemma~\ref{lem:continuity}. This result requires the perturbation to be bounded in $L_\infty$ norm. However, in our situation the perturbations are going to be bounded only in $L_1$ norm due to Lemma~\ref{lem:tails}. So we shall prove the following relaxed version of continuity: \begin{lemma}[Continuity with respect to $L_1$ perturbations] \label{lem:continuity L1} Let $x,y,x',y' \in \mathbb{R}^n$, and assume that $\|Ax'\|_1 \le \varepsilon m$, $\|Ay'\|_1 \le \varepsilon m$ for some $\varepsilon>0$. Then for every $t \in \mathbb{R}$ and $M \ge 1$ one has \begin{equation} \label{eq:continuity L1} d_A^{t+M\varepsilon}(x,y) - \frac{2}{M} \le d_A^t(x+x',y+y') \le d_A^{t-M\varepsilon}(x,y) + \frac{2}{M}. \end{equation} \end{lemma} \begin{proof} Consider the events $\mathcal{F}_i = \mathcal{F}_i (x,y,t)$ from the definition of the soft Hamming distance \eqref{eq:soft}. By the assumptions, we have $$ \sum_{i=1}^m |\< a_i, x'\> | \le \varepsilon m, \quad \sum_{i=1}^m |\< a_i, y'\> | \le \varepsilon m. $$ Therefore, the set $$ T := \big\{ i \in [m]:\; |\< a_i, x'\> | \le M\varepsilon, \; |\< a_i, y'\> | \le M\varepsilon \big\} \quad \text{satisfies} \quad |T^c| \le 2m/M. $$ By the triangle inequality, we have $$ \mathcal{F}_i(x,y,t+M\varepsilon) \subseteq \mathcal{F}_i(x+x',y+y',t) \subseteq \mathcal{F}_i(x,y,t-M\varepsilon), \quad i \in T. $$ Therefore \begin{align*} d_A^{t+M\varepsilon}(x,y) &= \frac{1}{m} \sum_{i=1}^m {\bf 1}_{\mathcal{F}_i(x,y,t+M\varepsilon)} \le \frac{|T^c|}{m} + \frac{1}{m} \sum_{i \in T} {\bf 1}_{\mathcal{F}_i(x,y,t+M\varepsilon)} \\ &\le \frac{2}{M} + \frac{1}{m} \sum_{i \in T} {\bf 1}_{\mathcal{F}_i(x+x',y+y',t)} \le \frac{2}{M} + d_A^t(x+x',y+y'). \end{align*} This proves the first inequality in \eqref{eq:continuity L1}. The proof of the second inequality is similar. \end{proof} \subsection{Proof of Theorem~\ref{thm:tessellations soft}.} \label{sec:proof tessellations} Now we are ready to combine all the pieces and prove Theorem~\ref{thm:tessellations soft}. To this end, consider the set $K$, numbers $\delta$, $m$, $t$, and the random matrix $A$ as in the theorem. Choose $\varepsilon = \delta^2/100$ and $M = 10/\delta$. Consider an $\varepsilon$-net $N_\varepsilon$ of $K$ as we described in the beginning of Section~\ref{sec:over net}. Let us apply Lemma~\ref{lem:concentration net} that controls the distances on $N_\varepsilon$ along with Lemma~\ref{lem:tails} that controls the tails. By the assumption on $m$ in the theorem and by our choice of $\varepsilon$, both requirements on $m$ in these lemmas hold. By a union bound, with probability at least $1-4\exp(-c\delta^2 m)$ the following event holds: for every $x_0, y_0 \in N_\varepsilon$ and $x',y' \in (K-K) \cap \varepsilon B_2^n$, one has \begin{gather} |d_A^{t-M\varepsilon}(x_0,y_0) - d(x_0,y_0)| \le 2|t-M\varepsilon| + \delta/2, \label{eq:t-Me}\\ |d_A^{t+M\varepsilon}(x_0,y_0) - d(x_0,y_0)| \le 2|t+M\varepsilon| + \delta/2, \nonumber\\ \|Ax'\|_1 \le \varepsilon m, \quad \|Ay'\|_1 \le \varepsilon m. \label{eq:tail bounds} \end{gather} Let $x,y \in K$. 
As we described in \eqref{eq:decomposition}, we can decompose the vectors as \begin{equation} \label{eq: decomposition x y} x = x_0 + x', \quad y = y_0 + y', \quad \text{where} \quad x_0,y_0 \in N_\varepsilon, \quad x',y' \in (K-K) \cap \varepsilon B_2^n. \end{equation} The bounds in \eqref{eq:tail bounds} guarantee that the continuity property \eqref{eq:continuity L1} in Lemma~\ref{lem:continuity L1} holds. This gives \begin{align*} d_A^t(x,y) &\le d_A^{t-M\varepsilon}(x_0,y_0) + \frac{2}{M} \\ &\le d(x_0,y_0) + 2|t| + 2M\varepsilon + \frac{\delta}{2} + \frac{2}{M} \qquad \text{(by \eqref{eq:t-Me} and the triangle inequality).} \end{align*} Furthermore, using \eqref{eq: decomposition x y} we have $$ |d(x_0,y_0) - d(x,y)| \le d(x_0,x) + d(y_0,y) \le \|x_0-x\|_2 + \|y_0-y\|_2 \le 2 \varepsilon. $$ It follows that $$ d_A^t(x,y) \le d(x,y) + 2|t| + 2M\varepsilon + \frac{\delta}{2} + \frac{2}{M} + 2\varepsilon. $$ Finally, by the choice of $\varepsilon$ and $M$ we obtain $$ d_A^t(x,y) \le d(x,y) + 2|t| + \delta. $$ A similar argument shows that $$ d_A^t(x,y) \ge d(x,y) - 2|t| - \delta. $$ We conclude that $$ | d_A^t(x,y) - d(x,y) | \le \delta + 2|t|. $$ This completes the proof of Theorem~\ref{thm:tessellations soft}. \qed
\section{Proof of Theorem~\ref{thm:tessellations Rn} on tessellations in $\mathbb{R}^n$} \label{sec:Rn}
In this section we deduce Theorem~\ref{thm:tessellations Rn} from Theorem~\ref{thm:tessellations} by an elementary lifting argument into $\mathbb{R}^{n+1}$. We shall use the following notation: Given a vector $x \in \mathbb{R}^n$ and a number $t \in \mathbb{R}$, the vector $x \oplus t \in \mathbb{R}^{n} \oplus \mathbb{R} = \mathbb{R}^{n+1}$ is the concatenation of $x \in \mathbb{R}^n$ and $t$. Furthermore, $K \oplus t$ denotes the set of all vectors $x \oplus t$ where $x \in K$. Assume $K \subset \mathbb{R}^n$ has $\diam(K) = 1$. Translating $K$ if necessary we may assume that $0 \in K$; then \begin{equation} \label{eq:radius} \frac{1}{2} \le \sup_{x \in K} \|x\|_2 \le 1. \end{equation} Also note that by assumption we have \begin{equation} \label{eq:m affine} m \ge C \delta^{-12} w(K-K)^2 \ge C \delta^{-12} w(K)^2. \end{equation} Fix a large number $t \ge 2$ whose value will be chosen later and consider the set $$ K' = Q(K \oplus t) \subseteq S^{n} $$ where $Q: \mathbb{R}^{n+1} \to S^n$ denotes the spherical projection map $Q(u) = u/\|u\|_2$. We have \begin{align*} w(K') &\le t^{-1} w(K \oplus t) \qquad \text{(as $\|u\|_2 \ge t$ for all $u \in K \oplus t$)} \\ &\le t^{-1} (w(K) + t \E|\gamma|) \qquad \text{(where $\gamma \sim \mathcal{N}(0,1)$)} \\ &= t^{-1} w(K) + \sqrt{2/\pi} \le 3w(K) \end{align*} where the last inequality holds because $w(K) \ge \sqrt{2/\pi} \sup_{x \in K} \|x\|_2 \ge 1/\sqrt{2\pi}$ by \eqref{eq:radius}. Then Theorem~\ref{thm:tessellations} implies that if $m \ge C \delta_0^{-6} w(K)^2$ for some $\delta_0>0$, then there exists an arrangement of $m$ hyperplanes in $\mathbb{R}^{n+1}$ such that \begin{equation} \label{eq:x'y'} |d_A(x',y') - d(x',y')| \le \delta_0, \quad x',y' \in K'. \end{equation} Consider arbitrary vectors $x$ and $y$ in $K$ and the corresponding vectors $x' = Q(x \oplus t)$ and $y' = Q(y \oplus t)$ in $K'$. Let us relate the distances between $x'$ and $y'$ appearing in \eqref{eq:x'y'} to the corresponding distances between $x$ and $y$. Let $a_i \oplus b_i \in \mathbb{R}^{n+1}$ denote the normals of the hyperplanes. Clearly, $x'$ and $y'$ are separated by the $i$-th hyperplane if and only if $x \oplus t$ and $y \oplus t$ are.
This in turn happens if and only if $x$ and $y$ are separated by the affine hyperplane that consists of all $u \in \mathbb{R}^n$ satisfying $\< a_i \oplus b_i, u \oplus t\> = \< a_i, u\> + b_i t = 0$. In other words, the hyperplane tessellation of $K'$ induces an {\em affine} hyperplane tessellation of $K$, and the fraction $d_A(x',y')$ of the hyperplanes separating $x'$ and $y'$ equals the fraction of the affine hyperplanes separating $x$ and $y$. With a slight abuse of notation, we express this observation as \begin{equation} \label{eq:linear affine} d_A(x',y') = d_A(x,y). \end{equation} Next we analyze the normalized geodesic distance $d(x',y')$, which satisfies \begin{equation} \label{eq:geodesic Euclidean} \big| \pi \cdot d(x',y') - \|x'-y'\|_2 \big| \le C_0 \|x'-y'\|_2^2. \end{equation} Denoting $t_x = \|x \oplus t\|_2$ and $t_y = \|y \oplus t\|_2$ and using the triangle inequality, we obtain \begin{align} \varepsilon := \big| \|x'-y'\|_2 - t^{-1} \|x-y\|_2 \big| &= \big| \big\| t_x^{-1} (x \oplus t) - t_y^{-1} (y \oplus t) \big\|_2 - \|t^{-1} x - t^{-1} y\|_2 \big| \nonumber \\ &\le \|x\| \, |t_x^{-1} - t^{-1}| + \|y\| \, |t_y^{-1} - t^{-1}| + t \, |t_x^{-1} - t_y^{-1}|. \label{eq:contracted dist} \end{align} Note that \eqref{eq:radius} yields that $t \le t_x, t_y \le \sqrt{t^2+1}$. It follows that $|t_x^{-1} - t^{-1}| \le 0.5 t^{-3}$ and the same bound holds for the other two similar terms in \eqref{eq:contracted dist}. Using this and \eqref{eq:radius} we conclude that $\varepsilon \le t^{-2}$. Putting this into \eqref{eq:geodesic Euclidean} and using the triangle inequality twice, we obtain $$ \big| \pi \cdot d(x',y') - t^{-1} \|x-y\|_2 \big| \le C_0 \big( t^{-1} \|x-y\|_2 + \varepsilon \big)^2 + \varepsilon \le C_0 \big( 2t^{-1} + t^{-2} \big)^2 + t^{-2} \le C_1 t^{-2}. $$ Finally, we use this bound and \eqref{eq:linear affine} in \eqref{eq:x'y'}, which gives \begin{equation} \label{eq:final dist} \big| \pi t \cdot d_A(x,y) - \|x-y\|_2 \big| \le \pi t \delta_0 + C_1 t^{-1}. \end{equation} Now we can assign the values $t := 2C_1/\delta$ and $\delta_0 = \delta^2/(4\pi C_1)$ so that the right-hand side of \eqref{eq:final dist} is bounded by $\delta$, as required. Note that the condition $m \ge C \delta_0^{-6} w(K)^2$ that we used above in order to apply Theorem~\ref{thm:tessellations} is satisfied by \eqref{eq:m affine}. This completes the proof of Theorem~\ref{thm:tessellations Rn}. \qed
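\begin{remark}[Numerical illustration of the lifting]
The lifting used in the proof above can also be tested numerically. The sketch below (an illustration only; the set $K$, the dimensions, and the value of $t$ are chosen arbitrarily) realizes the affine hyperplanes $\< a_i, u\> + b_i t = 0$ directly and checks that $\pi t \cdot d_A(x,y)$ approximates $\|x-y\|_2$, with an error that shrinks as $m$ and $t$ grow.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
n, m, t = 20, 100000, 10.0
A = rng.standard_normal((m, n))   # first n coordinates of the normals
b = rng.standard_normal(m)        # last coordinate b_i of each normal

x = rng.uniform(-0.1, 0.1, n)     # two points of a set K, diam(K) < 1
y = rng.uniform(-0.1, 0.1, n)

# fraction of affine hyperplanes <a_i, u> + b_i t = 0 separating x, y
d_A = np.mean(np.sign(A @ x + b * t) != np.sign(A @ y + b * t))
print(np.pi * t * d_A, np.linalg.norm(x - y))  # close for large m, t
\end{verbatim}
\end{remark}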
\section{Introduction}
A classical nova is a cataclysmic nuclear explosion on the surface of a white dwarf star resulting from the accretion of hydrogen-rich gas from a companion star. A~sequence of nuclear reactions produces a sudden luminosity increase by up to a factor of a million and ejects matter from the white dwarf. The time scales of explosive hydrogen burning processes are influenced by the duration of reaction cycles closed by ($p$,$\alpha$) reactions, with break-out via ($p$,$\gamma$) reactions competing with $\beta$-decays. The SiP cycle is one such cycle, which is of particular interest for understanding novae like Nova Her 1991 that are observed to exhibit high sulfur abundances compared to solar values \cite{Wil94,Mat94}. In the SiP cycle the $^{31}$S($p$,$\gamma$)$^{32}$Cl reaction is believed to be the dominant break-out reaction~\cite{Vou94}. The rate of the $^{31}$S($p$,$\gamma$)$^{32}$Cl reaction at nova temperatures is dominated by resonances corresponding to states in the compound nucleus $^{32}$Cl. The~$^{31}$S($p$,$\gamma$)$^{32}$Cl reaction rate was previously calculated based on some measured resonance properties and estimates of others based upon the mirror nucleus $^{32}$P~\cite{Ili99}. Uncertainties in the rate were provided in a subsequent reanalysis~\cite{Ili10}. The $^{32}$Cl excitation energies near the proton threshold of $Q_p$ = 1581.3(6)~keV~\cite{Bha08} used by Refs.~\cite{Ili99,Ili10} were based on earlier measurements that disagreed at the $2\sigma$ level (by 10--20 keV) \cite{Vou94,Jea89}, and the resonance strengths were only constrained based upon properties of states in the mirror. A recent study of 5 states below 2.3~MeV via the $^{32}$S($^{3}$He,$t$)$^{32}$Cl reaction \cite{Wr210} is in agreement with the excitation energies reported by Ref.~\cite{Vou94}. However, significant uncertainties remain regarding resonance strengths and the size of systematic uncertainties in the resonance energies. We have studied proton-unbound states in $^{32}$Cl using the~$^{32}$S($^{3}$He,$t$)$^{32}$Cl charge-exchange reaction. Excitation energies and proton-branching ratios for states of astrophysical interest for the $^{31}$S($p$,$\gamma$)$^{32}$Cl reaction rate were determined. In the following sections we describe the experiment and new results for states in $^{32}$Cl, including the observation of a~predicted, but previously unobserved, level. We then present calculations of a new recommended $^{31}$S($p$,$\gamma$)$^{32}$Cl reaction rate based on these new results including statistical and systematic uncertainties.
\section{Experiment}
\begin{figure*} \begin{center} \includegraphics[width=0.9\linewidth]{all} \caption{Spectra showing position of tritons at the focal plane measured by the ionization chamber with the Enge spectrograph set at 3$^{\circ}$. Three spectra show results from measurements with three targets, 240 $\mu$g/cm$^2$ ZnS, 350 $\mu$g/cm$^2$ ZnS and a~300~$\mu$g/cm$^2$ Si target. Peaks are identified with the final level in the nucleus produced. } \label{f:spectra} \end{center} \end{figure*}
We used the charge-exchange reaction $^{32}$S($^{3}$He,$t$)$^{32}$Cl to populate states in $^{32}$Cl. A~30-MeV $^{3}$He$^{2+}$ beam from the Extended Stretched TransUranium (ESTU) tandem Van de Graaff accelerator at the Wright Nuclear Structure Laboratory (WNSL) at Yale University bombarded ZnS targets.
Targets with thicknesses of 240 $\mu$g/cm$^2$ and 350 $\mu$g/cm$^2$ both on 5 $\mu$g/cm$^2$ carbon substrates were produced via evaporation at Oak Ridge National Laboratory and used in the experiment at the WNSL. Their thicknesses were determined with about 10\% uncertainty via energy loss measurements of $\alpha$ particles from a $^{241}$Am source. A~150 $\mu$g/cm$^2$ CdS target on a 20 $\mu$g/cm$^2$ substrate was used at one angle (5$^{\circ}$) but had worse energy resolution and lower counting rates and was therefore not used at other angles. Additional data were taken with a 300 $\mu$g/cm$^2$ natural Si target for calibration and a~900 $\mu$g/cm$^2$ Zn target for background analysis. Reaction products were separated and analyzed with the Enge split-pole spectrograph at the WNSL. The~spectrograph separates particles according to their magnetic rigidities, $B \rho$, so that discrete positions at the focal plane correspond to discrete momenta. The positions of detected particles at the Enge focal plane were determined by a~position-sensitive ionization drift chamber filled with 150 Torr of isobutane gas~\cite{Par06}. As the ions pass through the gas, the charge collected by the cathode provides a measure of the energy lost in the detector. The residual energy was measured by a thick plastic scintillator located behind the ionization chamber. Measurements were conducted at spectrograph (laboratory) angles of 3$^{\circ}$, 5$^{\circ}$, 10$^{\circ}$ and 20$^{\circ}$. At the 3$^{\circ}$ setting the protons emitted from excited states in $^{32}$Cl were detected in coincidence with outgoing tritons by the Yale Lamp Shade Array (YLSA)~\cite{Vis03} consisting of four 16-strip silicon detectors arranged in a lamp-shade configuration. The YLSA detectors covered an angular range of $\theta_{lab}$ = 131$^{\circ}$ to $\theta_{lab}$ = 166$^{\circ}$ and were calibrated with $\alpha$ particles from a $^{241}$Am source.
\section{Level Energies}
Tritons were identified at the focal plane of the spectrograph using the~$E_{res}$ vs. $\Delta E$ relationship from the ionization chamber and the~scintillator. Focal-plane position spectra gated on tritons are shown in Fig.~\ref{f:spectra}, and states in $^{32}$Cl populated via the $^{32}$S($^{3}$He,$t$)$^{32}$Cl charge-exchange reaction are labeled. Tritons corresponding to states in $^{16}$F resulting from the $^{16}$O($^{3}$He,$t$)$^{16}$F reaction due to oxygen contamination in the targets are also seen in Fig.~\ref{f:spectra}. Positions at the focal plane were calibrated using known states in $^{28}$P populated from the $^{28}$Si($^{3}$He,$t$)$^{28}$P reaction on a silicon target. The 424 keV $^{16}$F state populated from oxygen contaminants in the targets, and the ground and 90 keV first excited states in $^{32}$Cl were also used to calibrate the position spectra, providing 14 calibration points spanning the region of interest. Centroids were obtained by fitting a Gaussian function to each peak with a linear background, and the magnetic rigidities of the calibration peaks were fit as a function of their centroids. Both 2$^{nd}$ and 3$^{rd}$ order polynomial functions were fit. Excitation energies for all states of astrophysical interest agreed within 1 keV from both fits, but the 3$^{rd}$ order fit function was adopted as it provided a better fit to the lowest and highest energy calibration peaks.
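The calibration procedure can be sketched as follows (an illustrative Python snippet with synthetic placeholder numbers; the measured centroids and rigidities are not reproduced here):
\begin{verbatim}
import numpy as np

# 14 synthetic calibration peaks (placeholders, not measured values)
centroids = np.linspace(100.0, 900.0, 14)               # channels
brho = 0.80 + 2.0e-4 * centroids - 3.0e-8 * centroids**2  # B*rho [T m]

# 3rd-order polynomial calibration: B*rho as a function of centroid
coef = np.polynomial.polynomial.polyfit(centroids, brho, deg=3)

# evaluate the calibration at the centroid of a peak of interest
print(np.polynomial.polynomial.polyval(450.0, coef))
\end{verbatim}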
The magnetic rigidities of the recoiling tritons were calculated from the reaction kinematics using the newest atomic mass values~\cite{AW03,nucmas}, including recently obtained values for $^{28}$P and $^{32}$Cl with mass excesses of $-7147.5(12)$~keV and $-13333.8(12)$~keV, respectively~\cite{Wre10}. Small corrections ($<$30~keV) were made for the energy loss of the incident $^{3}$He ions and recoiling tritons in the target using the energy loss code STOPIT~\cite{stopit}, assuming the reaction happened in the center of the target. The fit function was then used to obtain magnetic rigidities from the centroids for the peaks of interest. The energies of the levels were calculated from reaction kinematics taking energy losses into account. The statistical uncertainty in the energy of each state, $\Delta E_{i}$, was calculated as a combination of the uncertainties originating from the centroid determination and the fitting function uncertainties estimated from covariance analysis. The energy resolution for each triton peak was typically about 40~keV (FWHM), and no peaks wider than the instrumental resolution were observed. Excitation energies were obtained for each level at several angles and with the different targets. The final weighted average for each state was calculated from these energies. The uncertainty in the excitation energy was assigned as the larger of the average uncertainty \begin{equation} \Delta E_{av} = \sqrt{ \frac{\displaystyle 1} {\displaystyle \sum_{i=1}^n \frac{\displaystyle 1}{\displaystyle (\Delta E_i)^2}} } \end{equation} and the scatter uncertainty \begin{equation} \Delta E_{scat} = \sqrt{ \frac{1}{n-1} \frac{\displaystyle \sum_{i=1}^n \frac{(E - E_i)^2}{\displaystyle (\Delta E_i)^2} } {\displaystyle \sum_{i=1}^n \frac{\displaystyle 1}{\displaystyle (\Delta E_i)^2}} } . \end{equation} Systematic uncertainties were estimated using a Monte Carlo simulation, where the target thicknesses and mass values were randomly varied according to Gaussian probability distributions. Mass and thickness uncertainties were used as standard deviations, $\sigma$. The systematic error was determined as the standard deviation of the Gaussian fit for the level energy distributions obtained in the Monte Carlo simulation. The systematic uncertainty was found to be 4~keV for all states, except for the 462 keV and 1167 keV states, where the systematic uncertainty was determined to be 2~keV and 3~keV, respectively. The excitation energies determined from this measurement with statistical uncertainties are given in Table~\ref{t:results}. They are compared with other measurements \cite{Jea89, Vou94,Wr210} and the evaluation by \cite{End98}. The evaluation favored \cite{Vou94} over \cite{Jea89} where results from both measurements were available, and generally added 11~keV to \cite{Jea89} where only that measurement was available. The level energies determined in $^{32}$Ar($\beta ^+$p) experiments include 461.1(1)~keV, 1168.5(2)~keV, and 4076(10)~keV from~\cite{Bjo85} and 4072(9)~keV from~\cite{Sch93}, all agreeing well with our results. A recent measurement also using the $^{32}$S($^{3}$He,$t$)$^{32}$Cl reaction provides energies for five levels between 1.3 and 2.3 MeV that are systematically higher than the current work by about 4~keV~\cite{Wr210}.
\begin{table}[b] \caption{\label{t:results} Excitation energies in $^{32}$Cl measured in this work compared to other measurements and evaluations. All listed uncertainties are statistical.
Systematic uncertainties have been estimated in this work as 4~keV for all states, except for the 462~keV state (2~keV) and the 1167~keV state (3~keV).}
\setlength{\extrarowheight}{1.3pt}
\begin{tabular}{ccccc}
\hline
this & Jeanperrin & Vouzoukas & Endt & Wrede\\
work & \cite{Jea89} & \cite{Vou94} & \cite{End98} & \cite{Wr210} \\
\ [keV] & [keV] & [keV] & [keV] & [keV] \\
\hline
462.0(10) & 447(7) & & 461.1(1) & \\
1167.0(21) & 1157(5) & & 1168.5(2) & \\
1327.4(29) & 1326(5) & 1329(3) & 1331(3) & 1331.2(5) \\
1734.2(14) & 1719(4) & 1735(3) & 1733(2) & 1736.7(6) \\
2127.5(19) & 2122(5) & 2129(3) & 2130(3) & 2131.1(4) \\
2203.1(28) & 2193(7) & 2213(3) & 2212(3) & 2209.5(5) \\
2278.6(25) & 2270(5) & 2281(3) & 2281(3) & 2283.5(5) \\
2610.9(30) & & & & \\
2677.0(16) & 2665(10) & & 2676(10) & \\
2859.2(14) & 2858(5) & & 2869(5) & \\
2931.5(17) & 2941(5) & & 2952(5) & \\
3054.7(14) & 3056(5) & & 3067(5) & \\
3163.9(11) & 3166(5) & & 3177(5) & \\
3280.8(23) & 3290(10) & & 3301(10) & \\
3695.0(9) & 3692(7) & & 3703(7) & \\
3874.8(17) & 3883(5) & & 3894(5) & \\
3999.5(12) & 4002(6) & & 4013(6) & \\
4073.6(11) & 4080(7) & & 4074(7) & \\
4349.9(23) & 4356(7) & & 4367(7) & \\
4577.1(30) & 4590(8) & & 4601(8) & \\
\hline
\end{tabular}
\end{table}
We have also discovered an excited state in $^{32}$Cl at 2611(5)~keV, shown in Fig.~\ref{f:newsbranch}. The measured excitation energy for the new state is in good agreement with a $1^{+}$ state predicted at 2574(50)~keV based on the mirror nucleus $^{32}$P and estimated using the IMME equation~\cite{Ili99}.
\begin{figure}[tb] \begin{center} \includegraphics[width=0.9\linewidth]{newstate2} \caption{(Color online) Focal-plane position spectrum with the Enge spectrograph set at 10$^{\circ}$ showing the newly discovered 2611 keV state in $^{32}$Cl produced in the 350 $\mu$g/cm$^2$ ZnS target.} \label{f:newsbranch} \end{center} \end{figure}
\section{Proton Unbound States}
The YLSA silicon-detector array was installed covering backward angles at the target position to measure decays of proton-unbound states in $^{32}$Cl in coincidence with tritons detected with the spectrograph at $\theta_{lab}$ = 3$^{\circ}$. Figure~\ref{f:tritonproton} shows the energy of the protons detected by YLSA versus the position (corresponding to momentum) of the tritons at the focal plane.
\begin{figure}[b] \begin{center} \includegraphics[width=1\linewidth]{tp} \caption{(Color online) Triton energy -- proton energy coincidence spectrum. The gated band corresponds to the~$^{32}$S($^{3}$He,$t$)$^{32}$Cl$^*$($p$)$^{31}$S reaction.} \label{f:tritonproton} \end{center} \end{figure}
A gate is shown around events corresponding to the proton decay of $^{32}$Cl$^{*}$ to the ground state of $^{31}$S ($J^{\pi} = 1/2^+$). Proton decay to excited states of $^{31}$S was not possible for $^{32}$Cl excited states below 2.8~MeV due to the $^{32}$Cl proton-separation energy of $Q_p$ = 1581.3(6)~keV~\cite{Bha08} and the first excited state in $^{31}$S being at 1248.9(2)~keV~\cite{End90}. The events below the gated region in Fig.~\ref{f:tritonproton} correspond to the proton decay of $^{16}$F$^{*}$, while events above are caused by leakage of deuterons into the triton window in the particle identification cut.
\begin{figure*}[tb] \begin{center} \includegraphics[width=0.75\linewidth]{distribution} \caption{(Color online) Triton-proton angular correlation probabilities from the $^{32}$S($^{3}$He,$t$)$^{32}$Cl$^{*}$($p$)$^{31}$S reaction at 30 MeV for the various states in $^{32}$Cl as listed in Table~\ref{t:results}. Red squares are the experimentally determined values with uncertainties; the black lines are the fits with even Legendre-polynomial terms. The polynomial orders are listed in Table~\ref{t:protons}. Although the 2$^{nd}$ order Legendre polynomial (dashed line) for the 2859 keV and 3695 keV states describes the data well, we assume the 4$^{th}$ order Legendre polynomial to be correct (solid line).} \label{f:protonangular} \end{center} \end{figure*}
The angular probability distributions of the emitted protons for the $^{32}$Cl states between 2.1 and 3.9 MeV are shown in Fig.~\ref{f:protonangular}. Two neighboring YLSA strips were coupled together to reduce scatter due to the low statistics. The values come from the ratios of the number of events in the $t$-$p$ coincidence peaks to the total number of tritons populating a given state in $^{32}$Cl, both with background subtracted. These numbers were then divided by the YLSA efficiency estimated via the use of a Monte Carlo model~\cite{Vis03}. Angular correlations of isolated nuclear levels can be described by a linear combination of even terms of the Legendre polynomials, $P_{2k}\left[\cos ( \theta_{c.m.})\right]$, of the center-of-mass angle $\theta_{c.m.}$ up to two times the proton orbital angular momentum $\ell$, i.e. \begin{equation} W(\theta) = \sum_{k=0}^{\ell} A_k P_{2k} \left[\cos (\theta_{c.m.})\right], \label{eq:angular} \end{equation} which is symmetric around $\theta_{c.m.} = 90^{\circ}$. The minimum order of the Legendre polynomial needed to fit the data is defined by statistical significance testing with a p-value~\cite{Wal02} required to be $>$ 0.05, corresponding to $\chi^2 < $ 14.07, 12.59, and 11.07 for 7, 6, and 5 free parameters, respectively. This function was constrained to be positive at each point, and the integral over the full solid angle must be $\le 1$. The function was then integrated over the full solid angle to obtain a total proton-branching ratio, $b_p$, as \begin{equation} b_p = \int_{\theta=0}^{\pi} 2 \pi \sin(\theta) W(\theta) \, d\theta. \end{equation} The results are shown in Table~\ref{t:protons} with $P_{fit}$ being the order of the Legendre polynomial used in the fit. The quoted proton branching-ratio uncertainties come from the uncertainties in~the~fit parameters. The proton-branching ratio for the first excited state above the proton separation energy, $E_{x}$ = 1734.2~keV, is expected to be small. However, even if it were a significant branch, the proton energy resulting from the $E_{r}$ = 153 keV resonance is below the YLSA detector threshold, and would not be observed. The angular distributions of states with excitation energies of 2128~keV, 2203~keV, 2279~keV, 2611~keV, 2677~keV, and 2932~keV are well fit with just a constant function, requiring only the first term in Eq.~\ref{eq:angular}. The new state at 2611~keV presents an interesting case. Fits are shown in Fig.~\ref{f:protonangular} using only an isotropic term (solid line) and the 4$^{th}$ order Legendre polynomial (dashed line). The angular distribution for this state is better fit with the 4$^{th}$ order Legendre polynomial, but the isotropic fit fulfills the p-value test~\cite{Wal02} at the 95\% confidence level and cannot be ruled out.
As the excitation energy agrees well with that expected for a state corresponding to a 1$^{+}$ state in the mirror \cite{Ili99}, we adopt the 1$^{+}$ assignment and use the isotropic angular distribution required for an $\ell = 0$ proton orbital angular momentum.
\begin{table}[b] \caption{\label{t:protons} Proton-branching ratios for states in $^{32}$Cl. Spin-parities are assigned based on~\cite{Ili99}, except for the 3695 keV and 3875 keV states, which we assigned based on mirror symmetry.}
\setlength{\extrarowheight}{1.3pt}
\begin{tabular}{cccc}
\hline
$E_x$ [keV] & $J^{\pi}$ & $P_{fit}$ & $b_p$ [\%] \\
\hline
2128 & 3$^+$ & 0 & 7 $\pm$ 4 \\
2203 & 1$^+$ & 0 & 54 $\pm$ 7 \\
2279 & 2$^+$ & 0 & 66 $\pm$ 13 \\
2611 & 1$^+$ & 0 & $>$62\footnote{Fit result is (95$\pm$32)\%.} \\
2677 & 2$^+$ & 0 & $>$78\footnote{Fit result is (94$\pm$16)\%.} \\
2859 & 3$^+$ & 4 & $>$95 \\
2932 & 2$^-$ & 0 & $>$88 \\
3055 & 4$^-$ & 4 & $>$97 \\
3164 & 3$^-$ & 4 & $>$96 \\
3281 & 2$^+$ & 4 & $>$88 \\
3695 & 2$^+$ & 4 & $>$96 \\
3875 & 3$^+$ & 4 & $>$94 \\
\hline
\end{tabular}
\end{table}
\begin{table} \caption{\label{t:strength} Properties of proton-unbound states in $^{32}$Cl and corresponding resonances relevant for the $^{31}$S($p$,$\gamma$)$^{32}$Cl reaction. Energies (including systematic and statistical uncertainties) are the results from this work, except for 3767(10)~keV from~\cite{Bjo85} and 3397(50)~keV predicted by~\cite{Ili99}. The spin-parity assignments from~\cite{Ili99} were used for the states up to 3.5~MeV and from~\cite{Bjo85} for the 3767 keV state; the assignments for the 3695 and 3875 keV states were made tentatively based on mirror symmetry. }
\setlength{\extrarowheight}{1.4pt}
\begin{tabular}{cccccccc}
\hline
$E_x$ & $E_r$ & $J^\pi$ & $\Gamma_\gamma$ & $\Gamma_{p}$ & $\omega \gamma$ & $\sigma(\omega \gamma)$ & \\% & $\sigma_{log}(\omega \gamma)$ \\
\ [keV] & [keV] & &[meV] & [meV] & [meV] & [meV] \\
\hline
1734 & 153(5) & 3$^+$ & 1.0 & 2.8$\times$10$^{-8}$ & 4.9$\times$10$^{-8}$ & 1.0$\times$10$^{-8}$ \footnote{The uncertainty distribution of the 153 keV resonance strength includes an~additional log-normal component with $\sigma_{log}(\omega \gamma) = 0.58$.} \\% & 0.58 \\
2128 & 546(5) & 3$^+$ & 7.9 & 0.59 & 0.96 & 0.61 \\% & -- \\
2203 & 622(5) & 1$^+$ & 15.5 & 18.2 & 6.3 & 3.6 \\% & -- \\
2279 & 697(5) & 2$^+$ & 3.1 & 6.0 & 2.54 & 0.76 \\% & --\\
2611 & 1030(5) & 1$^+$ & 20.2 & & 14.4 & 7.5 \\% & -- \\
2677 & 1096(5) & 2$^+$ & 57.9 & & 68 & 30 \\% -- \\
2859 & 1278(4) & 3$^+$ & 5.4 & & 9.5 & 3.7 \\% & -- \\
2932 & 1350(5) & 2$^-$ & 2.3 & & 2.84 & 0.76 \\% & -- \\
3055 & 1473(5) & 4$^-$ & 0.8 & & 1.81 & 0.42 \\% & -- \\
3164 & 1583(4) & 3$^-$ & 2 & & 3.51 & 0.96 \\% & -- \\
3281 & 1700(5) & 2$^+$ & 15 & & 18.3 & 8.5 \\% & -- \\
3397 & 1816(50) & 4$^+$ & 1.8 & & 4.1 & 1.4 \\% & -- \\
3695 & 2114(4) & 2$^+$ & 28 & & 35 & 20 \\% & -- \\
3767 & 2186(10) & 1$^+$ & 110 & & 83 & 34 \\% & -- \\
3875 & 2294(4) & 3$^+$ & 59 & & 104 & 65 \\% & -- \\
\hline
\end{tabular}
\end{table}
For the 2859 keV state, the 2$^{nd}$ order Legendre polynomial fulfills the p-value test giving a total proton-branching ratio of 75$\pm$5 \%. However, a fit with the 4$^{th}$ order Legendre polynomial (dashed line in Fig.~\ref{f:protonangular}) differs from the previous one only outside the area covered by our data points, giving a total proton-branching ratio $>$95\%. The neutron spectroscopic factor for the mirror state in $^{32}$P has been measured to be 0.03 \cite{Gas73} and 0.008 \cite{Eck89}.
While there is a discrepancy between the measurements, even the lower value implies an expected proton width for the 2859 keV level that would be about 3 orders of magnitude larger than the expected gamma width. Therefore, we adopt the result from the 4$^{th}$ order Legendre polynomial fit. The situation is similar for the 3695 keV state. We find a total proton-branching ratio of 59$\pm$4 \% coming from the fit with the 2$^{nd}$ order Legendre polynomial and a branching ratio of $>$96\% using a 4$^{th}$ order Legendre polynomial. While the mirror assignment is not as clear for the 3695 keV state, the most likely candidate, the 3880 keV ($2^+$), has a measured spectroscopic factor of 0.028~\cite{Gas73}, in agreement with $\approx$0.03 predicted for the $^{32}$Cl state by shell model calculations~\cite{Bro11}. This is a factor of 15 larger than the spectroscopic factor required for the 59\% branching ratio, and we therefore adopt $>$96\% from the 4$^{th}$ order Legendre polynomial fit. For the 3055, 3164, 3281 and 3875 keV states, a 4$^{th}$ order Legendre polynomial fit is required, and the resulting branching ratio is consistent with $b_p =$ 100\%. The lower limit for these states is then statistically estimated based on the number of events. The obtained values for the minimum proton orbital angular momenta are in good agreement with the assumed spins and parities. The spin-parity assignments for states with $E_x < 3.5$~MeV are taken from~\cite{Ili99}. The spins of the 3695 and 3875 keV states were tentatively assigned based on mirror symmetry, corresponding to the 3880.3 keV 2$^+$ and 3989.8 keV $(3^+)$ states in $^{32}$P~\cite{End98}, as the 3796.1 keV $(1^+)$ in $^{32}$P is assumed to be the mirror state to the 3767 keV state in $^{32}$Cl that is known to be $J^\pi=1^+$~\cite{Bjo85}.
\section{$^{31}$S(\lowercase{p},$\gamma$)$^{32}$C\lowercase{l} Reaction Rate}
At nova temperatures, $T\approx$0.1--0.3~GK, the states just above the proton-separation energy ($Q_{p}$ = 1581.3~keV) dominate the $^{31}$S($p$,$\gamma$)$^{32}$Cl reaction rate. As the resonances are generally narrow and well separated, the resonant component of the reaction rate (in cm$^{3}$mol$^{-1}$s$^{-1}$) can be approximated by \begin{eqnarray} N_A \langle \sigma v \rangle & = & 1.54 \times 10^{11} (\mu T_9)^{-3/2} \nonumber \\ & & \times \sum_r (\omega \gamma)_r \exp(-11.605 E_r/T_9), \label{eq:rate} \end{eqnarray} where $T_9$ is the temperature in GK, $E_r$ is the energy of the $^{32}$Cl resonance in MeV, $\mu$ is the reduced mass in atomic mass units, and $(\omega \gamma)_r$ is the resonance strength in MeV, given by the spin of the resonance, $J_r$, and its partial ($\Gamma_p$,$\Gamma_{\gamma}$) and total ($\Gamma$) widths as \begin{equation} (\omega \gamma)_r = \frac{(2 J_r +1)}{4} \frac{\Gamma_{p} \Gamma_{\gamma}}{\Gamma} . \label{eq:strength} \end{equation} The resonance reaction rate depends exponentially on the resonance energies, $E_r$, and linearly on the partial widths through the resonance strengths, though the proton partial width, $\Gamma_p$, also has an exponential dependence on energy through the penetrability. Therefore, our improved measurement of the resonance energies and the proton-branching ratios, corresponding to $\Gamma_{p} / \Gamma = \Gamma_{p} / (\Gamma_{p} + \Gamma_\gamma)$, has a direct impact on the uncertainty in the $^{31}$S($p$,$\gamma$)$^{32}$Cl reaction rate.
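As a concrete illustration of Eq.~\ref{eq:rate}, the short Python snippet below evaluates the resonant rate at $T_9$ = 0.3 from the four lowest resonance strengths listed in Table~\ref{t:strength}; the reduced mass is approximated from the atomic mass numbers. Since direct capture is negligible at this temperature, the result agrees with the recommended rate in Table~\ref{t:totalrate} to within a few percent.
\begin{verbatim}
import numpy as np

mu = 30.97 * 1.008 / (30.97 + 1.008)  # 31S + p reduced mass [u], approx.
E_r = np.array([0.153, 0.546, 0.622, 0.697])       # E_r [MeV]
wg = np.array([4.9e-8, 0.96, 6.3, 2.54]) * 1.0e-9  # strengths, meV -> MeV

T9 = 0.3
rate = 1.54e11 * (mu * T9) ** -1.5 \
       * np.sum(wg * np.exp(-11.605 * E_r / T9))
print(rate)   # ~ 9.7e-7 cm^3 mol^-1 s^-1 at 0.3 GK
\end{verbatim}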
\begin{table}[t] \caption{\label{t:totalrate} Recommended stellar reaction rates as a function of the temperature $T$ for the reaction $^{31}$S($p$,$\gamma$)$^{32}$Cl. Lower and upper limits cover the 68.2\% confidence level. }
\setlength{\extrarowheight}{-3pt}
\begin{tabular}{cccc}
\hline
\rule{0cm}{0.25cm} Tempe- & Recommended & Low & High \\
rature & rate & rate & rate \\
$T$ & $N_A \langle \sigma v \rangle$ & $N_A \langle \sigma v \rangle$ & $N_A \langle \sigma v \rangle$ \\
\ [GK] & [cm$^{3}$mol$^{-1}$s$^{-1}$] & [cm$^{3}$mol$^{-1}$s$^{-1}$] & [cm$^{3}$mol$^{-1}$s$^{-1}$] \\
\hline
\rule{0cm}{0.3cm} 0.01 & 3.68$\times 10^{-44}$ & 2.33$\times 10^{-44}$ & 5.12$\times 10^{-44}$ \\
0.015 & 1.84$\times 10^{-37}$ & 1.17$\times 10^{-37}$ & 2.56$\times 10^{-37}$ \\
0.02 & 3.08$\times 10^{-33}$ & 1.95$\times 10^{-33}$ & 4.28$\times 10^{-33}$ \\
0.03 & 6.33$\times 10^{-28}$ & 4.36$\times 10^{-28}$ & 1.02$\times 10^{-27}$ \\
0.04 & 5.39$\times 10^{-23}$ & 1.39$\times 10^{-23}$ & 2.12$\times 10^{-22}$ \\
0.05 & 2.69$\times 10^{-19}$ & 8.50$\times 10^{-20}$ & 8.10$\times 10^{-19}$ \\
0.06 & 7.56$\times 10^{-17}$ & 2.86$\times 10^{-17}$ & 1.91$\times 10^{-16}$ \\
0.07 & 4.10$\times 10^{-15}$ & 1.76$\times 10^{-15}$ & 9.13$\times 10^{-15}$ \\
0.08 & 7.98$\times 10^{-14}$ & 3.77$\times 10^{-14}$ & 1.62$\times 10^{-13}$ \\
0.09 & 7.86$\times 10^{-13}$ & 4.00$\times 10^{-13}$ & 1.48$\times 10^{-12}$ \\
0.1 & 4.82$\times 10^{-12}$ & 2.59$\times 10^{-12}$ & 8.61$\times 10^{-12}$ \\
0.15 & 9.73$\times 10^{-10}$ & 6.17$\times 10^{-10}$ & 1.48$\times 10^{-9}$ \\
0.2 & 1.22$\times 10^{-8}$ & 8.42$\times 10^{-9}$ & 1.71$\times 10^{-8}$ \\
0.3 & 9.86$\times 10^{-7}$ & 5.70$\times 10^{-7}$ & 1.45$\times 10^{-6}$ \\
0.4 & 1.41$\times 10^{-4}$ & 7.90$\times 10^{-5}$ & 2.05$\times 10^{-4}$ \\
0.5 & 2.99$\times 10^{-3}$ & 1.73$\times 10^{-3}$ & 4.26$\times 10^{-3}$ \\
0.6 & 2.27$\times 10^{-2}$ & 1.33$\times 10^{-2}$ & 3.20$\times 10^{-2}$ \\
0.7 & 9.47$\times 10^{-2}$ & 5.61$\times 10^{-2}$ & 1.33$\times 10^{-1}$ \\
0.8 & 2.73$\times 10^{-1}$ & 1.63$\times 10^{-1}$ & 3.81$\times 10^{-1}$ \\
0.9 & 6.17$\times 10^{-1}$ & 3.70$\times 10^{-1}$ & 8.59$\times 10^{-1}$ \\
1 & 1.18$\times 10^{0}$ & 7.16$\times 10^{-1}$ & 1.63$\times 10^{0}$ \\
1.5 & 8.39$\times 10^{0}$ & 5.59$\times 10^{0}$ & 1.11$\times 10^{1}$ \\
2 & 2.37$\times 10^{1}$ & 1.69$\times 10^{1}$ & 3.01$\times 10^{1}$ \\
3 & 6.99$\times 10^{1}$ & 5.25$\times 10^{1}$ & 8.71$\times 10^{1}$ \\
4 & 1.20$\times 10^{2}$ & 9.08$\times 10^{1}$ & 1.48$\times 10^{2}$ \\
5 & 1.63$\times 10^{2}$ & 1.25$\times 10^{2}$ & 2.00$\times 10^{2}$ \\
6 & 1.97$\times 10^{2}$ & 1.53$\times 10^{2}$ & 2.41$\times 10^{2}$ \\
7 & 2.23$\times 10^{2}$ & 1.75$\times 10^{2}$ & 2.70$\times 10^{2}$ \\
8 & 2.41$\times 10^{2}$ & 1.91$\times 10^{2}$ & 2.91$\times 10^{2}$ \\
9 & 2.53$\times 10^{2}$ & 2.02$\times 10^{2}$ & 3.06$\times 10^{2}$ \\
10 & 2.61$\times 10^{2}$ & 2.08$\times 10^{2}$ & 3.14$\times 10^{2}$ \\
\hline
\end{tabular}
\end{table}
We have calculated the gamma widths for the states in $^{32}$Cl using the known half-lives of mirror $^{32}$P states, $T_{1/2}$, as well as $\gamma$-branching ratios, $b_{\gamma}$, and energies, $E_{\gamma_i}$, of the corresponding transitions from the states \cite{Kan97}.
Assuming that the reduced transition probabilities, $B(E\lambda)$ and $B(M\lambda)$, are the same for both mirror nuclei, one can calculate the $\gamma$ width of a state in the mirror nucleus as a sum over all possible final-state transitions: \begin{equation} \Gamma_\gamma \left(^{32}{\text {Cl}}\right) = \sum_i \frac{E_{\gamma_i}^{2\lambda_i+1}(^{32}{\text {Cl}})}{E_{\gamma_i}^{2\lambda_i+1}(^{32}{\text P})} \frac{b_{\gamma_i} \hbar \ln(2)}{T_{1/2}(^{32}{\text P})}, \end{equation} where $\lambda_i$ is the electric or magnetic multipolarity of transition $i$. This follows a prescription similar to that used in \cite{Ili99,Ili10}. The lowest possible multipolarities were assumed. In the case of $M1/E2$ transitions, studies of the mirror nucleus~\cite{Kan97} showed that $M1$ transitions mostly dominate, and thus $M1$ transitions were adopted in the present reaction rate calculations. For the excited states between 2.1~MeV and 2.3~MeV, where the proton-branching ratio was determined to be finite but less than 100\%, we calculated the proton widths, $\Gamma_{p}$, directly from the gamma widths with our measured proton-branching ratios. For the higher energies, the resonance strength becomes insensitive to the proton width as $\Gamma_{p} \gg \Gamma_\gamma$ and $\Gamma_{p} \Gamma_{\gamma} / \Gamma \sim \Gamma_{\gamma}$. The 1734 keV state, corresponding to the 153 keV resonance, is the one state where the proton decay width is important, but where no information was extracted from our measurement. In this case we calculated the proton width using the prescription also followed in \cite{Ili99}, \begin{equation} \Gamma_{p} = 2 \frac{\hbar^2}{\mu a_c^2}P_c C^2 S_p \theta_{sp}^2 , \end{equation} where $\mu$ is the reduced mass and $a_c = 5.6$~fm is the channel radius. The single-particle reduced width, $\theta_{sp}^2 = 0.32$, was derived from the parameterization for nuclei with the mass number $A$ = 12--50 and bombarding energies $E \le$ 1000~keV based on optical-model computations and $R$-matrix expressions~\cite{Ili97}. The penetrability was calculated to be $P_c = 2.9 \times 10^{-15}$. The spectroscopic factor, $C^2 S_p$, was obtained from reaction studies of the mirror nucleus $^{32}$P produced via neutron transfer, $^{31}$P($d$,$p$)$^{32}$P, at deuteron energies of 10~MeV \cite{Gas73} and 20~MeV~\cite{Eck89}. The spectroscopic factors reported in these measurements are discrepant, 0.011 \cite{Gas73} and 0.0054 \cite{Eck89}, and the average value was adopted in \cite{Ili99,Ili10}. However, we conducted a reanalysis of the experimental cross section data from \cite{Gas73} and \cite{Eck89} using the FRESCO code~\cite{Tho88}. We find the differential cross-sections from both experiments to be best fit with a spectroscopic factor of 0.011, in agreement with \cite{Gas73}. Thus, we adopt the higher value for the spectroscopic factor and from this calculate the proton width of the 153(5) keV resonance to be 2.8$\times$10$^{-8}$ meV. The calculated proton and gamma widths, as well as resonance strengths calculated using Eq.~\ref{eq:strength}, are listed in Table~\ref{t:strength}. The resonance energies determined in this work have been used for the calculations, except for the 3767 keV state measured in the $^{32}$Ar $\beta$-decay studies \cite{Bjo85} and the 3397 keV state predicted by~\cite{Ili99} based on the mirror symmetry. To cover temperatures below the regions dominated by the resonances, we have adopted the direct capture parameterization from~\cite{Ili99}.
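The proton width quoted above for the 153~keV resonance can be cross-checked directly from the stated inputs (an illustration with rounded constants; the reduced mass is approximated from the atomic mass numbers):
\begin{verbatim}
# Cross-check of Gamma_p for the 153 keV resonance:
# Gamma_p = 2 (hbar^2 / (mu a_c^2)) P_c C^2S_p theta_sp^2
hbar_c = 197.327      # MeV fm
u_MeV = 931.494       # MeV per atomic mass unit
mu = 30.97 * 1.008 / (30.97 + 1.008)   # reduced mass [u], approximate
a_c = 5.6             # channel radius [fm]
P_c, C2S, theta2 = 2.9e-15, 0.011, 0.32

gamma_p = 2.0 * hbar_c**2 / (mu * u_MeV * a_c**2) * P_c * C2S * theta2
print(gamma_p * 1.0e9)   # [meV]; about 2.8e-8, as quoted above
\end{verbatim}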
The recommended total $^{31}$S($p$,$\gamma$)$^{32}$Cl reaction rate for the stellar temperature range $T$ = 0.01--10 GK is given in Table~\ref{t:totalrate}. The individual contributions of direct and resonant capture are illustrated in Fig.~\ref{f:ratecontr}.
\begin{figure*}[t]
\begin{center}
\includegraphics[width=0.9\linewidth]{ratecontr}
\caption{(Color online) $^{31}$S($p$,$\gamma$)$^{32}$Cl reaction rate as a function of the stellar temperature $T$. Resonances with a contribution of at least 10\% are shown. }
\label{f:ratecontr}
\end{center}
\end{figure*}
The rate has been parameterized in the Reaclib format~\cite{Thi87} as the sum of three exponentials, each with a set of seven parameters, in the form
\begin{eqnarray}
N_A \langle \sigma v \rangle = \exp & \left[ a_0 + a_1/T_9 + a_2/T_9^{1/3} + a_3 T_9^{1/3} \right. \nonumber \\
& \left. + a_4 T_9 + a_5 T_9^{5/3} + a_6 \ln(T_9) \right]
\end{eqnarray}
where $T_9$ is the temperature in GK, using the tools in~\cite{nucastro}. The fit agrees with the data within 1.5\%. The new Reaclib parameters are listed in Table~\ref{t:reaclib}. Uncertainties in the reaction rate have been estimated using a Monte Carlo technique, as a~combination of normal and log-normal distributions of uncertainties complicates the analysis. Uncertainties in the resonance energies contribute to the log-normal distribution, as the energy enters the exponential of Eq.~\ref{eq:rate}. The gamma width contributes to the normal distribution, as its uncertainty originates from the half-life uncertainty, the mirror-symmetry assumption (estimated to be 20\% based on the mirror states in the neighboring nuclei), and the uncertainty of the proton-branching ratio. The uncertainty in the proton width has a purely normal distribution if extracted from the proton-branching ratio. For the 153 keV resonance, the exponential dependence of the penetrability on the energy contributes to the log-normal distribution, but normally distributed contributions also originate from the spectroscopic-factor uncertainty, which we have estimated to be 30\% (the spectroscopic factor differs between the mirror nuclei due to the effect of Coulomb and other isospin-nonconserving interactions, $\sim$10\%), a small uncertainty of the wavelength in the penetrability ($\sim$2\%), and the reduced-width uncertainty, which we have estimated to be 20\%. For the 3695 keV and 3875 keV states, the uncertainty in the spins is considered. Estimated uncertainties for the resonance strengths are shown in Table~\ref{t:strength}. Direct capture uncertainties were taken directly from~\cite{Ili10}.
\begin{table}[t]
\caption{\label{t:reaclib} Recommended Reaclib parameters for the $^{31}$S($p$,$\gamma$)$^{32}$Cl reaction rate within $T$ = 0.01--10 GK.}
\begin{tabular}{cccccccc}
\hline
set & a$_0$ & a$_1$ & a$_2$ & a$_3$ & a$_4$ & a$_5$ & a$_6$ \\
\hline
1 & 297.13 & -2.6702 & 114.07 & -485.67 & 63.95 & -5.699 & 140.32 \\
2 & -35.362 & 4.1263 & -324.5 & 373.08 & -18.087 & 0.9159 & -205.99 \\
3 & 1315.5 & -1.8787 & 330.21 & -1911.9 & 261.17 & -25.381 & 511.02 \\
\hline
\end{tabular}
\end{table}
\begin{figure}[t]
\begin{center}
\includegraphics[height=17cm]{ratedist}
\caption{(Color online) Total reaction rate probability density functions as a~result of the Monte Carlo simulation of the input-uncertainty propagation for various temperatures.
The left panels show the distributions on a logarithmic scale normalized to the recommended value; the right panels show the same distributions on a logarithmic scale without normalization. }
\label{f:ratedistr}
\end{center}
\end{figure}
In the simulation, the values of the energies and resonance strengths were varied randomly following Gaussian distributions. Correlations among quantities depending on the same parameters were taken into account. The resulting reaction rate distributions (see Fig.~\ref{f:ratedistr}) have various shapes for different temperatures, including a nearly pure log-normal distribution at $T$ = 0.05~GK and a normal distribution at $T$ = 5~GK. To give final uncertainties that correspond to the standard deviation $\sigma$ in both cases, we take the lower limit at the 15.9th percentile and the upper limit at the 84.1st percentile, covering the 68.2\% confidence level. The results from the Monte Carlo simulation are listed as the low and high rates in Table~\ref{t:totalrate}. \section{Discussion} The total $^{31}$S($p$,$\gamma$)$^{32}$Cl reaction rate and individual contributions based upon this work are illustrated in Fig.~\ref{f:ratecontr}. Direct capture dominates the reaction rate up to $T \sim $ 0.03~GK. The 153 keV, 546 keV, 622 keV, and 1096 keV resonances, corresponding to the 1734 keV, 2128 keV, 2203 keV, and 2677 keV levels in $^{32}$Cl, dominate the rate over nearly all temperatures. The 697 keV resonance contributes more than 10\% at nova temperatures, and the 1030 keV resonance must be considered at X-ray burst temperatures, $T\approx 2$~GK. The 2186 keV and 2294 keV resonances do not contribute except at very high temperatures, $T > 5$~GK, and the 2859 keV and 3695 keV levels (which had some ambiguity as to the shape of the proton angular distribution) make a negligible contribution to the reaction rate. The $^{31}$S($p$,$\gamma$)$^{32}$Cl reaction rate with uncertainties was recently calculated by \cite{Ili10} using the previous work of~\cite{Ili99} with the evaluated level energies from~\cite{End98}. The ratio of the rate from Ref.~\cite{Ili10} to our rate is shown in Fig.~\ref{f:ratecomp}. The uncertainties in both rates are illustrated by the hashed regions. The agreement for $T < 0.03$~GK is expected, as the direct capture rate was calculated based on the same parameters~\cite{Ili99}, and the lower and upper limits are taken from~\cite{Ili10}.
\begin{figure}[t]
\begin{center}
\includegraphics[width=1\linewidth]{ratecomp}
\caption{(Color online) The $^{31}$S($p$,$\gamma$)$^{32}$Cl reaction rate with uncertainties calculated by~\cite{Ili10} (red), with only statistical uncertainties considered, is compared to the results from this work (blue), with both statistical and systematic uncertainties considered. Values are shown normalized to the recommended rate from this work. }
\label{f:ratecomp}
\end{center}
\end{figure}
Over much of the range of nova temperatures our recommended rate is significantly greater than even the ``high rate'' recommended in Ref.~\cite{Ili10}. This arises from the contributions of individual resonances. In Fig.~\ref{f:individualres} the individual resonance reaction rates from Ref.~\cite{Ili10} are compared to our results. Our higher reaction rate at most nova temperatures arises from the fact that the resonance energies adopted in Ref.~\cite{Ili99} (and derived from \cite{End98}) are greater than our energies by 6--15 keV (9 keV on average).
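The size of this effect is easy to estimate: for a narrow resonance the rate scales as $\exp(-11.605\,E_r/T_9)$ with $E_r$ in MeV, so a shift $\Delta E_r$ rescales the rate by $\exp(-11.605\,\Delta E_r/T_9)$. The short sketch below (illustrative only; it uses this standard scaling rather than the full rate integral) quantifies the suppression implied by the 9 keV average shift:
\begin{verbatim}
# Ratio of narrow-resonance rates for a resonance-energy shift dE,
# using the standard scaling exp(-11.605 E_r[MeV]/T9). Illustrative only.
import numpy as np

def rate_ratio(dE_keV, T9):
    return np.exp(-11.605 * dE_keV * 1.0e-3 / T9)

for T9 in (0.1, 0.2, 0.3, 0.4):
    print(f"T9 = {T9}: ratio = {rate_ratio(9.0, T9):.2f}")
\end{verbatim}
At $T_9 = 0.1$ the 9 keV shift alone suppresses the rate by roughly a factor of three, which is of the same order as the deviations visible in Fig.~\ref{f:individualres}.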
The excitation energies adopted by \cite{End98} and \cite{Ili99} primarily reflect a weighted average of \cite{Vou94} and \cite{Jea89} after the results of \cite{Jea89} were shifted to match the 1168 keV excitation energy. The 3--5 keV uncertainties in the adopted excitation energies do not properly reflect systematic uncertainties, such as uncertainties in the calibration used or the discrepancies between measurements.
\begin{figure}[]
\begin{center}
\includegraphics[width=1\linewidth]{individualrescomp}
\caption{(Color online) The ratio of the individual resonance reaction rates from Ref.~\cite{Ili10} to rates from this work as a function of stellar temperature for the most important resonances contributing to the $^{31}$S($p$,$\gamma$)$^{32}$Cl rate.}
\label{f:individualres}
\end{center}
\end{figure}
Our rate also differs from that of \cite{Ili10} due to improved values for resonance strengths. It should be noted that while we adopt a resonance strength for the 153 keV resonance that is nearly the same as in \cite{Ili99,Ili10}, this arises from two significant changes that largely cancel each other. Our lower resonance energy results in a significantly smaller penetrability, but we recommend a significantly greater spectroscopic factor based on our reanalysis of the transfer data of~\cite{Gas73,Eck89}. Our reaction rate near peak nova temperatures, $T \approx 0.3$~GK, becomes smaller than that of \cite{Ili10} due to our improved values for the proton branching ratios of the 546 and 622 keV resonances. We find $\Gamma_p$ for the 546 keV resonance to be about 30\% smaller than estimated by \cite{Ili99} and $\Gamma_p$ for the 622 keV resonance to be about 2 orders of magnitude smaller. The smaller proton widths result in smaller resonance strengths and reaction rates, though the effect is mitigated by the lower resonance energies for these states, which make the decrease in the reaction rate smaller than it would otherwise be. At higher temperatures (above about 2 GK) our rate increases in comparison to~\cite{Ili99} due to our inclusion of resonances above 2~MeV that were not previously considered. In summary, we have significantly improved the resonance energies and resonance strengths for some of the most important resonances in the $^{31}$S($p$,$\gamma$)$^{32}$Cl reaction. An important aspect of the current work is that we have given careful consideration to uncertainties, including systematic uncertainties in the level energies, the states used for calibration, and target thickness effects. The largest uncertainties in the reaction rate at nova temperatures arise from the systematic uncertainty in the resonance energies and the resonance strength of the 153 keV resonance. Our excitation energy for the 1734.2(14) keV state (corresponding to the 153 keV resonance) differs from the value of 1736.7(6) keV reported by \cite{Wr210} (which used a slightly different set of calibration reactions) by 2.5 keV, or slightly more than $1\sigma$. However, we estimate the systematic uncertainty in the resonance energies to be 4 keV, which is consistent with the fact that the excitation energies for all levels reported by \cite{Wr210} are higher on average than those from this work by about 4 keV. Additional experimental information leading to an improvement in the resonance energies would therefore be valuable.
As the states most important for novae have substantial branches for gamma decay, an accurate measurement of gamma-ray energies using a complementary approach, for example as in \cite{Sew07}, would be particularly helpful in reducing the systematic uncertainties that arise largely from Q-value uncertainties in reaction studies like this one. A direct measurement of the resonance strength of the 153 keV resonance (or of the corresponding proton width) is also desirable, but would be experimentally challenging. \section*{Acknowledgments} The authors would like to thank the staff of the WNSL for their help and support during the measurement. The authors also thank B.~Alex Brown for his help with the state configurations in $^{32}$P needed for the DWBA calculation, the estimate of the effect of the Coulomb and other isospin-nonconserving interactions on the spectroscopic factor, as well as the shell-model spectroscopic factor calculation for the 3695~keV state, and Christian Iliadis for helpful discussions and for providing unpublished information regarding details of the analysis in \cite{Ili99}. This work is supported by the U.S. Department of Energy Office of Nuclear Physics under Contracts No. DE-AC02-06CH11357 and No. DE-FG02-91ER40609, and Grant No. DE-FG02-96ER40978.
\section{Introduction} Nuclear energy density functionals (EDF) provide a microscopic, globally accurate, and yet economic description of ground-state properties and collective excitations over the whole nuclide chart. Even though it originates in the effective interaction between nucleons, a generic density functional is not necessarily related to any given nucleon-nucleon (NN) potential and, in fact, some of the most successful modern functionals are entirely empirical \cite{Bender:2003jk}. Of course it is very much desirable to have a fully microscopic foundation for a universal density functional, and this is certainly one of the major challenges in low-energy nuclear structure physics \cite{review}. The EDF approach to nuclear structure is analogous to Kohn-Sham density functional theory (DFT) and, because it includes correlations, goes beyond the Hartree-Fock approximation. Kohn-Sham DFT has the advantage of being a local scheme, but its usefulness crucially depends on our ability to construct accurate approximations for the most important part of the functional, that is, the universal exchange-correlation functional \cite{DG.90}. In a series of recent articles \cite{FKV.03,FKV.04,FKV.06} concepts of effective field theory and density functional theory have been used to derive a microscopic relativistic EDF-based model of nuclear many-body dynamics constrained by in-medium QCD sum rules and chiral symmetry. The density dependence of the effective nucleon-nucleon couplings in this model (called FKVW in the following) is determined from the long- and intermediate-range interactions generated by one- and two-pion exchange processes. They are computed using in-medium chiral perturbation theory, explicitly including $\Delta(1232)$ degrees of freedom \cite{Fri.04}. Divergent contributions to the nuclear matter energy density, calculated at three-loop level, are absorbed by few contact terms. These constants are understood to encode unresolved short-distance dynamics. The relativistic FKVW model has successfully been employed in studies of ground-state properties of spherical and deformed nuclei using the Relativistic Hartree-Bogoliubov framework (RHB \cite{VALR.05}). In the description of open-shell nuclei, in particular, a hybrid model has been used with the FKVW Kohn-Sham potential in the particle-hole ($ph$) channel and, like in most applications of RHB-based models, the pairing part of the empirical Gogny force~\cite{BGG.91} in the particle-particle ($pp$) channel. Even though this approach has been very successful, it is not theoretically consistent because of the choice of the empirical effective interaction in the $pp$ channel. Quite recently, as part of a larger program to develop a framework of fully microscopic nuclear energy density functionals, much effort has been devoted to designing non-empirical pairing functionals \cite{Duguet:2007be,Les.09,Heb.09, Pair3b,Lesinski:2011rn}. The aim of this work is to formulate a consistent microscopic framework for open-shell nuclei, in which both the {\it ph} and the {\it pp} channels of the effective inter-nucleon interaction are determined by chiral pion-nucleon dynamics. Thus we consider a separable $pp$ interaction based on a microscopic pairing interaction constrained by chiral dynamics (see Ref. 
\cite{KNV.05} for previous calculations involving the N$^2$LO chiral potential), combine it with the FKVW functional in the $ph$ channel and, employing the corresponding RHB model, present a study of pairing gaps in isotopic and isotonic chains of spherical open-shell nuclei. We will use the realistic NN potential developed by the Idaho group at next-to-next-to-next-to-leading order (N$^3$LO) in the chiral expansion \cite{Machleidt:2011zz} (see also Ref. \cite{Epelbaum:2008ga}), and a two-body density-dependent potential derived from the relevant diagrams at the N$^2$LO order in the three-body sector \cite{Holt:2009ty} (see also Refs. \cite{Hebeler:2009iv,Hebeler:2010xb,Lesinski:2011rn,Gandolfi,Lovato:2010ef} for similar approaches and pertinent details). The paper is organized as follows. In Sec. \ref{sec_inf} we discuss results for the pairing gap of nuclear matter in the BCS approximation. Sec. \ref{sec_tyan} briefly reviews the method introduced by Y. Tian {\it et al.} \cite{TMR.09a, TMR.09b} to apply realistic pairing interactions to calculations of finite nuclei. In Sec. \ref{sec_res} we analyze pairing gaps in spherical nuclei for several isotopic and isotonic chains. Sec. \ref{sec_con} summarizes the principal results. \section{Pairing gap in a homogeneous infinite system} \label{sec_inf} The momentum and density-dependent pairing field $\Delta(k,k_F)$ in infinite matter is determined by the solution of the BCS gap equation
\begin{equation} \label{gapeq}
\Delta(k,k_F) = -\frac{1}{4\pi^2} \int_0^\infty{\frac{p^2 V(p,k) \Delta(p,k_F)} {\sqrt{[ {\cal E}(p,k_F)-{\cal E}(k_F,k_F)]^2+\Delta(p,k_F)^2}}\; {\rm d}p} \;,
\end{equation}
where $V(p,k)$ represents the off-shell pairing potential in momentum space, ${\cal E}(p,k_F)$ is the quasiparticle energy, and ${\cal E}(k_F,k_F)$ is the Fermi energy. The effective force in the pairing channel is, in principle, generated by the sum of all particle-particle irreducible Feynman diagrams~\cite{Mig.67}. In most applications to nuclear and neutron matter, however, only the lowest-order term, which corresponds to the bare nucleon-nucleon interaction, is retained~\cite{DH-J.03}. Terms of higher order in the effective pairing interaction represent screening corrections to the bare force, caused by medium polarization effects (see Refs. \cite{Sch.96,Lombardo:2005sw} and references therein). In the present analysis we only consider the bare interaction, while a study of polarization effects will be carried out in a forthcoming paper. For the pairing potential $V(p,k)$ we employ the simple ansatz:
\begin{equation} \label{V_pk}
V(p,k) = V_{2B} (p,k) + V_{3B}(p,k,m) \simeq V_{2B} (p,k) + \bar{V}_{2B} (k_F,p,k) \; ,
\end{equation}
where the three-body potential is approximated by an effective two-body density-dependent potential $\bar{V}_{2B}$ derived by Holt {\it et al.} in Ref.~\cite{Holt:2009ty}. These authors showed that in the singlet channel ($^1S_0$) the overall effect of $\bar{V}_{2B} (k_F,p,k)$ is to reduce the strong S-wave attraction (cf. Fig. 6 of Ref. \cite{Holt:2009ty}). As suggested in Ref. \cite{Holt:2009ty}, here we neglect a possible isotopic dependence that, in any case, is expected to be rather small for the nuclei considered in the present analysis (see also Ref. \cite{Lesinski:2011rn}). For both terms in Eq.~(\ref{V_pk}) we follow standard procedures for the regulator functions, and refer the reader to the original papers for details.
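For orientation, Eq.~(\ref{gapeq}) can be solved by plain fixed-point iteration on a momentum grid. The sketch below is purely illustrative: a toy attractive Gaussian potential stands in for the chiral $V(p,k)$, the bare nucleon mass replaces $M^\ast(k_F)$, and the quadratic spectrum adopted below is anticipated:
\begin{verbatim}
# Fixed-point iteration of the BCS gap equation (Eq. gapeq) on a grid.
# Toy attractive potential and bare nucleon mass: illustrative only.
import numpy as np

HB2M = 20.736                       # hbar^2/(2M) [MeV fm^2]
kF = 0.8                            # Fermi momentum [fm^-1]
p = np.linspace(1e-3, 8.0, 1600)    # momentum grid [fm^-1]
dp = p[1] - p[0]

Vmat = -900.0 * np.exp(-0.6 * (p[:, None]**2 + p[None, :]**2))  # MeV fm^3
ek = HB2M * (p**2 - kF**2)          # quadratic spectrum about E_F

delta = np.full_like(p, 1.0)        # initial guess [MeV]
for _ in range(300):
    Eqp = np.sqrt(ek**2 + delta**2)
    new = -(Vmat @ (p**2 * delta / Eqp)) * dp / (4.0 * np.pi**2)
    delta = 0.5 * delta + 0.5 * new # linear mixing for stability

print("Delta(kF) ~", np.interp(kF, p, delta), "MeV")
\end{verbatim}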
For the single-particle spectrum that appears in the denominator of the gap equation (\ref{gapeq}) we employ the simple quadratic form
\begin{equation}
{\cal{E}} (p, k_F) - {\cal{E}} (k_F,k_F) = \frac{p^2 -k_F^2}{2M^\ast(k_F)} \;.
\end{equation}
This approximation should suffice because the momenta $p$ around $k_F$ give the dominant contribution to the integral in Eq. (\ref{gapeq}). The effective nucleon mass $M^\ast(k_F)$ was obtained in a very recent calculation (Fig. 9 of Ref. \cite{Holt:2011nj}), in which the nuclear energy density functional was derived to first order in the two- and three-nucleon interaction using a density matrix expansion\footnote{In Ref. \cite{Holt:2011nj} the two-body interaction comprises long-range one- and two-pion exchange contributions and a set of contact terms contributing up to fourth power in the momenta (the N$^3$LOW potential, developed by lowering the cutoff scale to $\Lambda = 414$ MeV). In addition, the authors employ the leading-order chiral three-nucleon interaction with the corresponding parameters $c_E$, $c_D$ and $c_{1,3,4}$ adjusted in calculations of few-body systems. Even though the results are in good agreement with previous calculations \cite{Fritsch:2004nx}, one should note that higher-order corrections could have non-negligible effects \cite{Holt:2011nj2}. }. Fig. \ref{fig1} displays the pairing gap $\Delta(k_F,k_F)$ in symmetric nuclear matter as a function of the Fermi momentum $k_F$. We plot results of the complete calculation that includes two- and three-body forces (solid curve), and the pairing gap obtained with only the two-body NN potential at N$^3$LO (dashed curve). Our results are shown in comparison with those obtained in Ref. \cite{Sedrakian:2003cc} using the $V_{\rm lowk}$ potential (with single-particle energies computed in Brueckner-Hartree-Fock theory). \section{Mapping procedure} \label{sec_tyan} To implement the chiral NN potential at N$^3$LO in the pairing channel of the RHB framework for finite nuclei, we adopt the approach of Refs.~\cite{TMR.09a,TMR.09b}, where a separable form of the pairing interaction was introduced, with parameters adjusted to reproduce the pairing properties of the Gogny force in nuclear matter. In nuclear matter the pairing force is taken to be separable in momentum space:
\begin{equation}
\left\langle {\bm k}\right\vert V^{^{1}S_{0}}\left\vert {\bm k}^{\prime}\right\rangle =-Gp(k)p(k^{\prime})\;.
\label{sep_pair}
\end{equation}
By assuming a simple Gaussian ansatz $p(k)=e^{-a^{2}k^{2}}$, the two parameters $G$ and $a$ have been adjusted to reproduce the density dependence of the gap at the Fermi surface, calculated with a Gogny force. For the D1S parameterization~\cite{BGG.91} of the Gogny force: $G=728\;\mathrm{MeV\,fm}^{3}$ and $a=0.644\;\mathrm{fm}$. Here we apply the same procedure to the chiral NN potential at the N$^3$LO order.
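A convenient feature of the separable ansatz~(\ref{sep_pair}) is that the gap equation collapses to $\Delta(k)=\Delta_0\,p(k)$, leaving a single nonlinear condition on $\Delta_0$. A minimal sketch, using the D1S-mapped values quoted above and, for simplicity only, the bare nucleon mass in place of $M^\ast(k_F)$:
\begin{verbatim}
# For <k|V|k'> = -G p(k) p(k') the gap equation reduces to
#   1 = G/(4 pi^2) Int p^2 p(p)^2 / sqrt(eps(p)^2 + D0^2 p(p)^2) dp,
# with Delta(k) = D0 p(k). Bare nucleon mass used for illustration.
import numpy as np
from scipy.optimize import brentq

G, a = 728.0, 0.644                 # MeV fm^3, fm (D1S mapping above)
HB2M = 20.736                       # hbar^2/(2M) [MeV fm^2]

def condition(delta0, kF):
    p = np.linspace(1e-3, 10.0, 4000)
    dp = p[1] - p[0]
    pk2 = np.exp(-2.0 * a**2 * p**2)          # p(p)^2
    ek = HB2M * (p**2 - kF**2)
    integrand = p**2 * pk2 / np.sqrt(ek**2 + delta0**2 * pk2)
    return G / (4.0 * np.pi**2) * integrand.sum() * dp - 1.0

kF = 0.8
d0 = brentq(condition, 1e-4, 60.0, args=(kF,))
print("Delta(kF) =", d0 * np.exp(-a**2 * kF**2), "MeV")
\end{verbatim}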
For finite nuclei, when the pairing force Eq.~(\ref{sep_pair}) is transformed from momentum to coordinate space, it takes the form:
\begin{equation}
V({\mbox{\boldmath $r$}}_{1},{\mbox{\boldmath $r$}}_{2},{\mbox{\boldmath $r$}}_{1}^{\prime},{\mbox{\boldmath $r$}}_{2}^{\prime})=G\delta\left( {\mbox{\boldmath $R$}}-{\mbox{\boldmath $R$}}^{\prime}\right) P(r)P(r^{\prime})\frac{1}{2}\left( 1-P^{\sigma}\right) ,
\label{pp-force}
\end{equation}
where ${\mbox{\boldmath $R$}}=\frac{1}{2}\left( {\mbox{\boldmath $r$}}_{1}+{\mbox{\boldmath $r$}}_{2}\right)$ and ${\mbox{\boldmath $r$}}={\mbox{\boldmath $r$}}_{1}-{\mbox{\boldmath $r$}}_{2}$ denote the center-of-mass and the relative coordinates, $P(r)$ is the Fourier transform of $p(k)$:
\begin{equation}
P(r)=\frac{1}{\left( 4\pi a^{2}\right)^{3/2}}e^{-r^{2}/4a^{2}}\;,
\label{P3D}
\end{equation}
and the factor $\frac{1}{2}\left(1-P^{\sigma}\right)$ projects onto the $^{1}S_0$ channel. The pairing force has a finite range and, because of the presence of the factor $\delta\left( {\mbox{\boldmath $R$}}-{\mbox{\boldmath $R$}}^{\prime}\right)$, it preserves translational invariance. Even though $\delta\left( {\mbox{\boldmath $R$}}-{\mbox{\boldmath $R$}}^{\prime}\right)$ implies that this force is not completely separable in coordinate space, the corresponding anti-symmetrized $pp$ matrix elements can be represented as a sum of a finite number of separable terms, using a method developed by Talmi and Moshinsky. When the nucleon wave functions are expanded in a harmonic oscillator basis, spherical or deformed, the sum converges relatively quickly. A relatively small number of separable terms reproduces with high accuracy the result of a calculation performed in the complete basis. We refer the reader to \cite{TMR.09a} for more details. The parameters of the separable pairing force take the values: $G = 892.0$ MeV\,fm$^{3}$ and $a = 0.74$ fm for the N$^3$LO potential, and $G = 1045.0$ MeV\,fm$^{3}$ and $a = 0.86$ fm for the complete potential $V(p,k)$. We note that recently a similar approach was employed in Refs. \cite{Les.09,Lesinski:2011rn}, where a low-rank separable representation was used to reproduce directly $V_{\rm low k}$ and $V_{3N}$ in the $^1S_0$ channel (for $V_{3N}$ the density dependence was parametrized by a polynomial in the Fermi momentum). \section{Results for finite nuclei} \label{sec_res} Employing the RHB model with the FKVW functional in the $ph$ channel and the separable pairing force Eq.~(\ref{pp-force}) in the $pp$ channel, we have calculated the self-consistent ground-state solutions for several sequences of isotopes (nickel, tin and lead) and isotones ($N=28$, $N=50$ and $N=82$). The total binding energies and average pairing gaps are compared to available data in Figs.~\ref{fig2} and \ref{fig3}. The experimental masses are from Ref.~\cite{AW.03}, and the average proton and neutron gaps \cite{Bender:2000xk}
\begin{equation}
\bar \Delta = \frac{ \sum_k \Delta_k u_k v_k}{\sum_k v^2_k}
\label{av-gap}
\end{equation}
are compared to empirical values determined using the 5-point formula \cite{Moller:1992zz} for even-even nuclei
\begin{equation}
\label{delta5}
\Delta^{(5)} (N_0) = -\frac{1}{8} \left[ E(N_0+2) - 4E(N_0+1) + 6E(N_0) - 4E(N_0-1) + E(N_0-2) \right] \;.
\end{equation}
$E(N_0)$ denotes the experimental binding energy of a nucleus with $N_0$ neutrons ($Z_0$ for protons).
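As a small illustration, Eq.~(\ref{delta5}) is a one-line finite difference once the energies are tabulated. The numbers below are hypothetical, not data from~\cite{AW.03}, and $E(N)$ is taken here as the negative ground-state energy so that the formula as written yields a positive gap:
\begin{verbatim}
# Five-point empirical gap, Eq. (delta5). E maps neutron number N -> E(N)
# in MeV (same formula with Z for protons); sign convention as in Eq. (delta5),
# with E(N) the negative ground-state energy. Values below are hypothetical.
def delta5(E, N0):
    return -0.125 * (E[N0 + 2] - 4.0 * E[N0 + 1] + 6.0 * E[N0]
                     - 4.0 * E[N0 - 1] + E[N0 - 2])

E = {64: -1008.2, 65: -1015.6, 66: -1024.5, 67: -1031.4, 68: -1039.9}
print(delta5(E, 66), "MeV")   # ~0.89 MeV for these placeholder energies
\end{verbatim}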
In Eq.~(\ref{av-gap}) the sum is over proton or neutron canonical states, $\Delta_k$ is the diagonal matrix element of the pairing field in the canonical state $k$, and $v_k$ denotes the corresponding eigenvalue of the one-body density matrix (occupation factor)\footnote{ In Refs. \cite{Heb.09,Pair3b,Lesinski:2011rn} a somewhat different prescription was used for the pairing gaps. The theoretical gap $\Delta_{LCS}$ (Lowest Canonical State) corresponds to the diagonal pairing matrix element $\Delta_i$ in the canonical single-particle state $\phi_i$ whose quasi-particle energy is the lowest, whereas the experimental gaps are deduced from binding energies using three-point mass differences centered on odd-mass nuclei. }. The theoretical gaps shown in Figs.~\ref{fig2} and \ref{fig3} have been calculated using the values of the parameters $G$ and $a$ that correspond to the nuclear matter pairing gaps in Fig.~\ref{fig1}. The gaps calculated by including only the interaction $V_{2B}$ (blue diamonds) reproduce on a quantitative level the isotopic and isotonic trends of the empirical gaps. Including the three-nucleon interaction $V_{3B}$ induces a sizable reduction of the calculated gaps (green diamonds), and we note that similar conclusions were also reached in Ref. \cite{Lesinski:2011rn}. The calculated gaps for the isotopic chains $Z=28$, $Z=50$ and $Z=82$ indicate that missing contributions like, for instance, particle-vibration coupling could play an important role \cite{Barranco:2005yk,Gori:2005ym}. Fig.~\ref{fig3} displays similar results for the proton pairing gaps of the isotonic chains $N=28$, $N=50$ and $N=82$ (we note that here the contribution of the Coulomb interaction in the pairing channel is neglected). The subshell closures that appear at $N=40$ in the nickel chain \cite{Broda}, and at $Z=58$ in the $N=82$ chain \cite{Long:2006dj}, lead to a strong reduction of pairing correlations in the corresponding calculated ground states. The influence of three-body forces on the total binding energy is much less pronounced, as shown in the lower panels of Figs.~\ref{fig2} and \ref{fig3}, where we display the absolute deviations of the calculated binding energies from the experimental values. On the one hand, this is because pairing correlations contribute much less than the mean-field self-energies to the total binding. On the other hand, this is a well-known characteristic of self-consistent calculations: for a given nucleus, a reduction of pairing results in an effective enhancement of the mean-field contribution to the total energy, and vice versa. In general, the combination of the FKVW $ph$ effective interaction and the separable pairing force Eq.~(\ref{pp-force}) produces results for the total binding energies that are comparable to those obtained with the best empirical non-relativistic and relativistic energy density functionals. For the nickel isotopes the largest deviations are in the region $Z \approx N$, where one expects additional contributions from proton-neutron correlations that are not explicitly included in the FKVW functional. In the tin isotopes the calculated masses start deviating from the data in neutron-rich nuclei beyond the major shell closure at $N=82$, whereas for the lead nuclei the deviations are most pronounced in the lightest, neutron-deficient isotopes that are characterized by soft potentials and shape coexistence.
\section{Conclusions} \label{sec_con} A consistent microscopic approach to the structure of open-shell nuclei has been introduced, in which both the {\it ph} and the {\it pp} channels of the effective nuclear interaction are fully determined by chiral pion-nucleon dynamics. By employing an ansatz for the pairing force that is separable in momentum space, we have performed an efficient mapping of the chiral potential in the pairing channel (at the N$^3$LO order and the N$^2$LO in the two-body and three-body sectors, respectively) to an effective $pp$ interaction for finite nuclei. The two parameters of the separable pairing force are adjusted to reproduce the density dependence of the pairing gaps in symmetric nuclear matter. The resulting effective pairing interaction thus enables, on the one hand, the treatment of pairing correlations in finite nuclei using pairing functionals constrained by chiral dynamics and, on the other hand, calculations in the $pp$ channel with a finite-range interaction. The significant advantage is that the computational cost is greatly reduced when compared to nonlocal finite-range forces like, for instance, the empirical Gogny force. A noteworthy result of the present investigation is that it confirms the important role of three-body forces in determining pairing gaps in finite nuclei. This work was partly supported by INFN, MIUR and MZOS (project 1191005-1010). We acknowledge useful discussions with N. Kaiser, W. Weise and E. Vigezzi, and would like to thank J. W. Holt for providing numerical values for the three-body potential. \newpage
\section{Introduction} Newton's inverse-square force law for the universal attraction of gravity successfully explained Kepler's laws of planetary motion and other solar-system observations. However, with the advent of Lorentz invariance, Einstein's general theory of relativity integrated Newtonian gravitation into a consistent relativistic framework that naturally explained the excess perihelion precession of Mercury and has thus far successfully withstood various observational challenges. On the other hand, on galactic scales, the existence of dark matter has been assumed for decades in order to resolve observational problems, such as the ``flat'' rotation curves of spiral galaxies. As the nature of dark matter is still unknown, we take the view that what appears as dark matter may very well be a novel aspect of the gravitational interaction. Starting from first principles, a nonlocal generalization of Einstein's theory of gravitation has been developed in recent papers~\cite{nonlocal, NonLocal, BCHM, BM}, where it is demonstrated that nonlocal gravity can naturally simulate dark matter. However, the main motivation for the development of a nonlocal theory of gravitation came from the principle of equivalence, once the general outline of \emph{nonlocal special relativity} had become clear~\cite{BM2}. In nonlocal special relativity, nonlocality is induced by the \emph{acceleration} of observers in Minkowski spacetime. Inertia and gravitation are intimately linked in accordance with the principle of equivalence of inertial and gravitational masses. Hence, gravity is expected to be nonlocal as well. That nonlocality can simulate dark matter emerged later in the course of studying the physical implications of nonlocal general relativity~\cite{nonlocal, NonLocal}. In this theory of nonlocal gravity, gravitation is described by a local field that satisfies nonlocal integro-differential equations. Thus gravity is in this sense fundamentally nonlocal and its nonlocality is introduced through a ``constitutive'' kernel. \emph{This nonlocal kernel is a basic ingredient of the gravitational interaction that must ultimately be determined from observation.} A \emph{direct} nonlocal generalization of the standard form of Einstein's general relativity would encounter severe difficulties~\cite{Bahram:2007}; however, it turns out that one can also arrive at general relativity via a special case of the translational gauge theory of gravity, namely, the teleparallel equivalent of general relativity. It can be shown that this theory is amenable to generalization through the introduction of a constitutive kernel. In fact, the simplest case of a \emph{scalar} constitutive kernel has been employed in~\cite{nonlocal, NonLocal, BCHM, BM} to develop a consistent nonlocal generalization of Einstein's theory of gravitation. In this approach to nonlocal gravity, nonlocality survives even in the Newtonian regime and appears to provide a natural explanation for ``dark matter''~\cite{SR, C, S, B, G}; that is, nonlocal gravity simulates dark matter. Indeed, the nonlocal theory naturally contains the Tohline-Kuhn phenomenological extension of Newtonian gravity to the galactic realm~\cite{Tohline, Kuhn, Jacob1988}.
Poisson's equation of Newtonian gravity, \begin{equation}\label{1} \nabla^2\Phi_{N}(t,\mathbf{x})=4\pi G \rho(t,\mathbf{x}) \end{equation} is modified in the nonlocal theory to read~\cite{BCHM, BM} \begin{equation}\label{2} \nabla^2\Phi(\mathbf{x})+\sum_i\int\frac{\partial \Bbbk(\mathbf{x},\mathbf{y})}{\partial x^i}\frac{\partial\Phi (\mathbf{y})}{\partial y^i}d^3y=4\pi G\rho(\mathbf{x})\,. \end{equation} Here $\Phi_{N}$ is the Newtonian gravitational potential and any possible temporal dependence of the gravitational potential $\Phi$ and matter density $\rho$ has been suppressed in Eq.~\eqref{2} for the sake of simplicity. Moreover, the nonlocal kernel $\Bbbk$ is a smooth function of $\mathbf{u}$ and $v$, so that $\Bbbk(\mathbf{x},\mathbf{y}) = K(\mathbf{u},v)$, where \begin{equation}\label{3} \mathbf{u} := \mathbf{x}-\mathbf{y}, \quad v := \frac{|\nabla_{\mathbf{y}}\Phi(\mathbf{y})|}{|\nabla_{\mathbf{x}} \Phi(\mathbf{x})|}. \end{equation} Thus $v$ is the source of nonlinearity in Eq.~\eqref{2}. This nonlocal and nonlinear modification of Poisson's equation is invariant under a constant change of the potential, namely, $\Phi \mapsto \Phi + C$, where $C$ is a constant. In addition, Eq.~\eqref{2} satisfies a scaling law; that is, if the matter density is scaled by a \emph{constant} scale factor $s$, $\rho \mapsto s\rho$, then the potential is scaled by the same constant factor, $\Phi \mapsto s \Phi$. The functional form of the nonlocal kernel is unknown. Let us tentatively assume that it does have a \emph{dominant} linear component $k(\mathbf{u})$ for some gravitational systems; that is, \begin{equation}\label{3a} \Bbbk = k(\mathbf{u})+\kappa(\mathbf{u}, v)\,, \end{equation} where $\kappa(\mathbf{u}, v)$ is a relatively small nonlinear perturbation. The physical justification for this supposition is that the implications of the \emph{linear} form of Eq.~\eqref{2}, for the corresponding linear gravitational potential $\Phi_{\ell}$, compare favorably with the physics of the flat rotation curves of spiral galaxies~\cite{nonlocal, NonLocal, BCHM, BM}. In the linear approximation, the scalar constitutive kernel $\Bbbk$ is only a function of $\mathbf{u}$ and we expect intuitively that even nonlocal gravity would weaken with increasing distance, so that $\Bbbk$ should go to zero as $u:=|\mathbf{u}| \to \infty$. Let us therefore start with Eq.~\eqref{2} and consider, for simplicity, a linear kernel of the form $\Bbbk(\mathbf{x},\mathbf{y}) := k(\mathbf{u})$, so that $\partial \Bbbk / \partial x^i = - \partial \Bbbk / \partial y^i$ in this case. Furthermore, let us assume that as $y:=|\mathbf{y}| \to \infty$, $|\Bbbk(\mathbf{x},\mathbf{y})\nabla_{\mathbf{y}}\Phi_{\ell}(\mathbf{y})|$ falls off to zero faster than $1/y^2$; then, using integration by parts and Gauss's theorem, Eq.~\eqref{2} can be written in this case as \begin{equation}\label{2a} \nabla^2\Phi_{\ell}(\mathbf{x})+ \int k(\mathbf{x}-\mathbf{y}) \nabla^2\Phi_{\ell}(\mathbf{y})d^3y = 4\pi G\rho(\mathbf{x})\,. \end{equation} We assume that this Fredholm integral equation of the second kind has a unique solution~\cite{T}, which can be expressed by means of the reciprocal convolution kernel $q(\mathbf{u})$ as \begin{equation}\label{2b} \nabla^2\Phi_{\ell}(\mathbf{x}) = 4\pi G\rho(\mathbf{x})+ 4\pi G \int q(\mathbf{x}-\mathbf{y}) \rho(\mathbf{y})d^3y\,. \end{equation} The conditions for the validity of this assumption will be examined in the following section. 
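As a numerical aside, the convolution term of Eq.~\eqref{2b} is straightforward to evaluate by FFT on a periodic grid. The sketch below is illustrative only: a smooth toy density and a softened $u^{-2}$ profile stand in for the actual reciprocal kernel, which is constrained only later in the text:
\begin{verbatim}
# Effective source rho + q*rho of Eq. (2b), via FFT convolution on a
# periodic box; lengths in kpc. Toy density and kernel; wrap-around
# from periodicity is ignored in this sketch.
import numpy as np

n, L = 64, 200.0
dx = L / n
x = dx * (np.arange(n) - n // 2)
X, Y, Z = np.meshgrid(x, x, x, indexing="ij")
r = np.sqrt(X**2 + Y**2 + Z**2)

rho = np.exp(-r**2 / 8.0)                        # toy matter density
lam, eps = 10.0, 1.0                             # kpc
q = 1.0 / (4.0 * np.pi * lam * (r**2 + eps**2))  # softened 1/u^2 kernel

F, Fi = np.fft.fftn, np.fft.ifftn
shift, ishift = np.fft.fftshift, np.fft.ifftshift
conv = shift(np.real(Fi(F(ishift(rho)) * F(ishift(q))))) * dx**3

print("bare mass    :", rho.sum() * dx**3)
print("bare + induced:", (rho + conv).sum() * dx**3)
\end{verbatim}
Because the kernel falls off only as $u^{-2}$, the induced contribution keeps growing with the box size, anticipating the remark below that a pure $u^{-2}$ kernel extended over all space simulates an unbounded amount of dark matter.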
On the other hand, it is an immediate consequence of Eq.~\eqref{2b} that nonlocal gravity acts like dark matter; that is, the gravitational potential in the linear regime is due to the presence of matter of density $\rho$ as well as ``dark matter'' of density $\rho_D$ given by
\begin{equation}\label{2c}
\rho_D(\mathbf{x}) = \int q(\mathbf{x}-\mathbf{y}) \rho(\mathbf{y})d^3y\,.
\end{equation}
Thus the Laplacian of the gravitational potential is given by $4\pi G(\rho+\rho_D)$, where, in the linear approximation, $\rho_D$ is the convolution of $\rho$ and the reciprocal kernel $q$. In particular, for a point mass $M$, $\rho(\mathbf{y})=M\delta(\mathbf{y})$, we have $\rho_D=Mq$. A similar, but more intricate, result holds when nonlinearity is taken into consideration; that is, nonlocality still simulates dark matter, but the connection between $\rho_D$ and $\rho$ goes beyond Eq.~\eqref{2c}. This assertion is based on the assumption that the nonlocal kernel is of the form of Eq.~\eqref{3a}; then, Eq.~\eqref{2} takes a form similar to Eq.~\eqref{2a}, but with an extra source term ${\cal S}(\mathbf{x})$ due to nonlinearity. That is,
\begin{equation}\label{2ca}
\nabla^2\Phi(\mathbf{x})+ \int k(\mathbf{x}-\mathbf{y}) \nabla^2\Phi(\mathbf{y})d^3y = 4\pi G[\rho(\mathbf{x})+ {\cal S}(\mathbf{x})]\,.
\end{equation}
Here ${\cal S}(\mathbf{x})$ can be expressed as the divergence of a vector field, namely,
\begin{equation}\label{2cb}
{\cal S}=-\nabla \cdot \boldsymbol {\nu}, \quad \boldsymbol {\nu}(\mathbf{x})= \frac{1}{4\pi G}\int \kappa(\mathbf{u}, v)\nabla_{\mathbf{y}}\Phi(\mathbf{y})d^3y\,.
\end{equation}
Assuming as before that Eq.~\eqref{2ca} has a unique solution via the reciprocal convolution kernel $q(\mathbf{u})$, we find
\begin{equation}\label{2cc}
\nabla^2\Phi(\mathbf{x}) = 4\pi G\{\rho(\mathbf{x})+ {\cal S}(\mathbf{x}) + \int q(\mathbf{x}-\mathbf{y}) [\rho(\mathbf{y}) + {\cal S}(\mathbf{y})]d^3y\}\,.
\end{equation}
It follows that the density of ``dark matter'' does have contributions from the nonlinear part of the nonlocal kernel. Neglecting nonlinearities, we note that the \emph{linear} relationship between $\Phi_{\ell}$ and $\rho$ implies that one can write
\begin{equation}\label{2d}
\Phi_{\ell}(\mathbf{x}) = \int {\cal G}(\mathbf{x}-\mathbf{y}) \rho(\mathbf{y})d^3y\,.
\end{equation}
Here the Green function ${\cal G}(\mathbf{u})$ is a solution of
\begin{equation}\label{2e}
\nabla^2{\cal G}(\mathbf{u}) = 4\pi G [\delta(\mathbf{u})+q(\mathbf{u})]\,.
\end{equation}
If $\rho(\mathbf{x})=M\delta(\mathbf{x})$ for a point mass $M$ in Eq.~\eqref{2b}, we have $\Phi_{\ell}=M{\cal G}$, which means that once the nonlocal gravitational potential is known for a \emph{point mass}, one can determine the potential for any mass distribution via linearity as expressed by Eq.~\eqref{2d}. Assuming that $q(\mathbf{u})$ can be determined from observational data, the inverse problem must be solved to find the linear kernel $k(\mathbf{u})$ from $q(\mathbf{u})$. In this paper, we study this inverse problem in connection with the rotation curves of spiral galaxies; furthermore, we provide a general discussion of the solutions of Eq.~\eqref{2} and investigate some of their physical implications. Consider the motion of stars within the disk of a spiral galaxy in accordance with the Kepler-Newton-Einstein tradition.
For revolution on a circle of radius $r:=|\mathbf{x}|$ about the central spherical bulge, the speed of rotation $V$ of a star is given by $V^2=G{\cal M}/r$, where ${\cal M}$ is the mass of the bulge, which we take to be the effective mass of the galaxy. Observational data indicate that $V^2$ is nearly constant; therefore, keeping the standard theory forces us to assume that mass ${\cal M}$ has a dark component that increases linearly with $r$. Assuming spherical symmetry for the distribution of this dark matter, we find that
\begin{equation}\label{2f}
\rho_D(\mathbf{x}) = \frac{V_0^2}{4\pi G} \frac{1}{r^2}\,,
\end{equation}
where $V_0$ is the constant asymptotic velocity of stars in the disk of the spiral galaxy. If dark matter does not exist, but what appears to be dark matter is in fact a manifestation of the nonlocal character of the gravitational interaction, then Eq.~\eqref{2c} together with Eq.~\eqref{2f} implies that $q(\mathbf{u})=u^{-2}/(4\pi \lambda)$ for $\rho(\mathbf{x})=M\delta(\mathbf{x})$. Here $M$ is the mass of the point source that represents the spiral galaxy and $\lambda=GM/V_0^2$ is a constant length. With this explicit form for the reciprocal kernel $q$, Eq.~\eqref{2b} becomes identical to an equation that was first introduced by Kuhn in the phenomenological Tohline-Kuhn approach to modified gravity as an alternative to dark matter. Let us digress here and briefly mention some relevant aspects of the \emph{linear} phenomenology associated with the flat rotation curves of spiral galaxies. To avoid the necessity of introducing dark matter into astrophysics, Tohline~\cite{Tohline} suggested in 1983 that the Newtonian gravitational potential of a point mass $M$ (representing, in effect, the mass contained in the nuclear bulge of a spiral galaxy) could instead be modified by a logarithmic term of the form
\begin{equation}\label{2g}
\Phi_{\ell}(r)=-\frac{GM}{r} +\frac{GM}{\lambda}\ln\left(\frac{r}{\lambda}\right) \,.
\end{equation}
Here $GM/ \lambda=V_0^2$, where $V_0$ is the approximately constant rotation velocity of stars in the disk of a spiral galaxy of mass $M$. Thus the constant length $\lambda$ is of the order of 1\,kpc; henceforth, we will assume for the sake of definiteness that $\lambda \approx$ 10 kpc. It follows that in this modified gravity model $M\propto V_0^2$, which disagrees with the empirical Tully-Fisher law~\cite{TullyFisher}. The Tully-Fisher relation involves a correlation between the infrared luminosity of a spiral galaxy and the corresponding asymptotic rotation speed $V_0$. This relation, in combination with other observational data regarding the mass-to-light ratio for spiral galaxies, roughly favors $M\propto V_0^4$, instead of the $M\propto V_0^2$ scaling that follows from Tohline's proposal. On the physical side, however, it should be clear that the Tully-Fisher empirical relation is based on the electromagnetic radiation emitted by galactic matter and thus contains much more physics than just the law of gravity for a point mass~\cite{Kuhn, Ton}. Tohline's suggestion was taken up and generalized several years later by Kuhn and his collaborators---see~\cite{Kuhn} and an illuminating review of the Tohline-Kuhn work in~\cite{Jacob1988}.
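Since Eq.~\eqref{2g} underlies much of what follows, it is worth verifying the circular speed it implies: $V^2 = r\,d\Phi_\ell/dr = GM/r + GM/\lambda$, which tends to the constant $V_0^2 = GM/\lambda$ at large $r$. A quick numerical check, with $V_0 = 200$ km/s as an illustrative (not fitted) choice:
\begin{verbatim}
# Circular speed from the Tohline potential (Eq. 2g):
# V^2 = r dPhi/dr = GM/r + GM/lambda -> V0^2 = GM/lambda at large r.
import numpy as np

lam = 10.0                 # kpc
V0 = 200.0                 # km/s, illustrative asymptotic speed
GM = V0**2 * lam           # (km/s)^2 kpc

for r in (1.0, 3.0, 10.0, 30.0, 100.0):
    V = np.sqrt(GM / r + GM / lam)
    print(f"r = {r:6.1f} kpc   V = {V:6.1f} km/s")
\end{verbatim}
The speed flattens toward $V_0$ beyond a few $\lambda$, reproducing the flat-rotation-curve behavior described above.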
Indeed, in Kuhn's linear phenomenological scheme of modified gravity~\cite{Jacob1988}, a nonlocal term is introduced into Poisson's equation, namely,
\begin{equation}\label{2h}
\nabla^2\Phi_{\ell}=~4 \pi G \Big [\rho+ \frac{1}{4\pi\lambda} \int \frac{\rho(\mathbf{y})}{|\mathbf{x}-\mathbf{y}|^2}d^3y\Big] \,,
\end{equation}
such that for a point source, $\rho(\mathbf{x})=M\delta(\mathbf{x})$, Eq.~\eqref{2g} is a solution of Eq.~\eqref{2h}. It follows immediately from a comparison of Eq.~\eqref{2b} with Eq.~\eqref{2h} that
\begin{equation}\label{2i}
q(|\mathbf{x}-\mathbf{y}|)=\frac{1}{4\pi\lambda} \frac{1}{|\mathbf{x}-\mathbf{y}|^2}\,.
\end{equation}
Therefore, to make contact with observational data regarding the rotation curves of spiral galaxies, we suppose, as in previous work~\cite{nonlocal, NonLocal, BCHM, BM}, that the reciprocal kernel $q$ in Eq.~\eqref{2b} is approximately given by the Kuhn kernel in Eq.~\eqref{2i} from the bulge out to the edge of spiral galaxies. A remark is in order here regarding the nature of $\lambda$. While for a \emph{point} mass, $\lambda$ in Eq.~\eqref{2g} is a universal constant, the situation may be different for the \emph{interior} potential of a bounded distribution of matter. Consider, for instance, the Newtonian gravitational acceleration for a uniform spherical distribution of density $\rho_0$ and radius $R$. The exterior acceleration has the universal form $d\Phi_{N}/dr = GM/r^2$ for $r>R$, where $M = (4\pi\rho_{0}/3)R^3$; as expected, this is identical to the acceleration for a point source of fixed mass $M$. For $0\le r\le R$, the interior Newtonian gravitational acceleration at a fixed radius $r$ is given by $d\Phi_{N}/dr = (GM/R^3)r$, which decreases with increasing $R$ when $M$ and $r$ are held fixed. Extrapolating from this natural consequence of Newtonian gravity to the nonlocal domain, one might expect that in the interior of spiral galaxies, for instance, $\lambda$ in Eq.~\eqref{2h} might depend on the size of the system. Indeed, in Kuhn's work, $\lambda$, with a magnitude of roughly 10 kpc in Eq.~\eqref{2h}, is taken to be larger for larger systems~\cite{Kuhn, Jacob1988}. Rotation curves of spiral galaxies can thus provide some information regarding the nature of the reciprocal kernel $q(\mathbf{u})$ in the linear case, where Eq.~\eqref{2b} can be directly compared with the Tohline-Kuhn scheme. However, to determine the corresponding nonlocal kernel $k(\mathbf{u})$, we need to know the functional form of $q(\mathbf{u})$ over all space. The simplest possibility would be to assume that $q(u)=u^{-2}/(4\pi \lambda)$ holds over all space; however, the corresponding $k(u)$ does \emph{not} exist---see~\cite{NonLocal}, especially Appendix E, for a detailed discussion of this point. We therefore take up the crucial question of the existence of the linear kernel $k(u)$ for spiral galaxies in sections II and III. The general \emph{nonlinear} inverse problem of nonlocal gravity is beyond the scope of this paper; therefore, we concentrate in sections II and III on the linear problem of finding $k(u)$ from certain very simple extensions of $q(u)$ beyond the $q(u)\propto u^{-2}$ form that is implied (in the galactic disk) by the flat rotation curves of spiral galaxies. We then turn to the other main goal of this paper, which is to tackle the general \emph{nonlinear} form of Eq.~\eqref{2}. Unfortunately, the general form of the nonlocal kernel is unknown at present. Nevertheless, a formal treatment of Eq.~\eqref{2} is developed in Sec.
IV without recourse to Eq.~\eqref{3a} or any specific assumption about the nature of the nonlocal kernel. As an application of this new approach, we consider the gravitational potential of a \emph{point mass} in Sec. V. Physically, we regard the point mass as an idealization for a spherical distribution of matter; in practice, it will stand, for instance, for the mass of a spiral galaxy, most of which is usually concentrated in a central spherical bulge---see, in this connection, section IV of~\cite{BCHM}. The \emph{linear} regime, where the kernel is independent of $v$, will be investigated in the first subsection of Sec. V; in fact, we illustrate the effectiveness of our procedure by demonstrating how previous results can be recovered in the new setting. Specifically, we show that the Tohline-Kuhn scheme can be recovered in this case from the general treatment of Sec. IV. Finally, Sec. VI contains a discussion of our results. \section{Inverse Problems in Linear Nonlocal Gravity} The nonlocal theory of gravitation under consideration in this paper is based on the existence of a suitable ``constitutive'' kernel $\Bbbk$. For the gravitational potential of spiral galaxies, we assume at the outset that $\Bbbk$ contains a dominant linear part $k(u)$ and a nonlinear part that we can ignore in the context of the present discussion. The reciprocal of $k(u)$ for a spiral galaxy is expected to be of the form of the Kuhn kernel~\eqref{2i} in order that the nonlocal theory can simulate dark matter and therefore be consistent with observational data. It turns out that with the simple form of $q$ given in Eq.~\eqref{2i}, the corresponding reciprocal kernel does \emph{not} exist. That is, if we naturally extend the simple Kuhn kernel to the whole space beyond a galaxy, then there is an infinite amount of simulated dark matter and we have a $q(u)\propto u^{-2}$ for which there is no finite $k(u)$. However, the nonlocal theory is based on the existence of a finite smooth nonlocal kernel. This important problem is taken up in this section. To determine $k(u)$ from its reciprocal, it is necessary to extend the functional form of kernel~\eqref{2i} so that it becomes smoothly applicable over all space and falls off rapidly to zero at infinity. Let $q(u)$ be this extended reciprocal kernel. We must ensure that its reciprocal $k(u)$, the constitutive kernel of nonlocal gravity in the linear regime, indeed exists and properly falls off to zero as $u\to \infty$. To this end, we show in this section that it is sufficient to require that $q(u)$ and $k(u)$ be smooth, absolutely integrable, and square-integrable functions over all space, so that their Fourier integral transforms exist as well. In the \emph{linear} Newtonian regime of nonlocal gravity, the nonlocally modified Poisson equation is a Fredholm integral equation of the second kind, cf. Eq.~\eqref{2a},
\begin{equation}\label{30}
g(\mathbf{x})+ \int k(\mathbf{x}-\mathbf{y}) g(\mathbf{y})d^3y = f(\mathbf{x})\ ,
\end{equation}
which we assume has a unique solution that is expressible by means of the reciprocal kernel $q$ as
\begin{equation}\label{31}
f(\mathbf{x})+ \int q(\mathbf{x}-\mathbf{y}) f(\mathbf{y})d^3y = g(\mathbf{x})\ .
\end{equation}
The inverse problem, however, involves finding $k(\mathbf{u})$ once $q(\mathbf{u})$ is known; that is, we wish to obtain Eq.~\eqref{30} starting from Eq.~\eqref{31}. Let us note that our methods can be used in either direction due to the obvious symmetry of Eqs.~\eqref{30} and~\eqref{31}.
It is useful to express Eq.~\eqref{30} in operator form as $(I+{\cal K})g=f$, where $I$ is the identity operator and $\cal K$ is the convolution operator ${\cal K}g=k\star g$. Formally, we expect the solution of this equation to be given by $(I+{\cal Q})f=g$, where ${\cal Q}f=q\star f$. Moreover, $f=(I+{\cal K})g= (I+{\cal Q})^{-1}g$ would be equivalent to Eq.~\eqref{30}. From a comparison of Eq.~\eqref{30} with Eq.~\eqref{2a}, we see that the quantities of interest are $f=4\pi G \rho$ and $g= \nabla^2\Phi_{\ell}$. Here, the function $\rho$ models the density of matter in space and $\Phi_{\ell}$ is the linear gravitational potential. Both of these functions can be considered to be \emph{smooth} in the continuum limit for the matter distributions under consideration throughout this work. \subsection{Liouville-Neumann Method} It is \emph{formally} possible to obtain Eq.~\eqref{30} from Eq.~\eqref{31}, or the other way around, by the application of the Liouville-Neumann method of successive substitutions~\cite{T}. That is, we start with Eq.~\eqref{31} and write it in the form
\begin{equation}\label{D1}
f(\mathbf{x}) = g(\mathbf{x})- \int q(\mathbf{x}-\mathbf{y}) f(\mathbf{y})d^3y\ .
\end{equation}
Then, we replace $f(\mathbf{y})$ in the integrand by its value given by Eq.~\eqref{D1}. Iterating this process eventually results in an infinite series---namely, the Neumann series---that may or may not converge. A uniformly convergent Neumann series leads to a unique solution of the Fredholm integral equation~\eqref{D1}. Moreover, one can determine $k$ in terms of $q$. The procedure for calculating the kernel $k$ of Eq.~\eqref{30} in terms of the iterated $q$ kernels has been discussed, for instance, in~\cite{BCHM}; however, a sign error there in the formula for iterated kernels must be corrected: an overall minus sign is missing on the right-hand side of Eq. (3) of~\cite{BCHM}. The spherical symmetry of $q$, in the cases under consideration in this section, implies that all iterated kernels are functions of $u$. Thus let $q_n(u)$, $n=1,2,...$, be the relevant iterated kernels such that $q_1=q$ and
\begin{equation}\label{D2}
q_{n+1}(|\mathbf{x}-\mathbf{y}|) = - \int q(|\mathbf{x}-\mathbf{z}|) q_n(|\mathbf{z}-\mathbf{y}|)d^3z\ .
\end{equation}
It follows that in this case the nonlocal kernel is
\begin{equation}\label{D3}
k(|\mathbf{x}-\mathbf{y}|) = -\sum_{n=1}^{\infty} q_n(|\mathbf{x}-\mathbf{y}|)\ .
\end{equation}
In trying to determine $k$ from $q$, one can therefore start from the study of the Neumann series. Using the approach developed in~\cite{T}, and working in the Hilbert space of square-integrable functions, it can be shown by means of the Schwarz inequality that the Neumann series converges and the nonlocal kernel exists for $\lVert q \rVert <1$; moreover, the solution of the Fredholm integral equation~\eqref{31} by means of the Neumann series is \emph{unique}. However, the kernels of the convolution operators of interest in our work are not square integrable, since
\begin{equation}\label{32}
\lVert q \rVert^2= \int q^{2}(\mathbf{x}-\mathbf{y}) d^3x d^3y = \int d^3x \int q^2(\mathbf{u})d^3u=\infty \,,
\end{equation}
so that the standard Hilbert space theory developed in~\cite{T} is not applicable here. That is, the $L^2$ norm of $q$ could be finite, but the norm of the corresponding convolution operator in $L^2$ is infinite.
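The iterated-kernel construction of Eqs.~\eqref{D2}--\eqref{D3} is easy to test on a discretized one-dimensional analogue with a small-norm toy kernel, for which (unlike the galactic case discussed next) the series converges:
\begin{verbatim}
# One-dimensional discretized check of Eqs. (D2)-(D3): build k from the
# Neumann series and verify (I + K)(I + Q) = I. Toy small-norm kernel.
import numpy as np

n, L = 400, 40.0
dx = L / n
x = dx * (np.arange(n) - n // 2)
Q = dx * 0.05 * np.exp(-(x[:, None] - x[None, :])**2)  # Q_ij = q(x_i-x_j) dx

K = np.zeros_like(Q)
term = -Q.copy()                    # q_1 -> -Q;  q_{n+1} = -q * q_n
for _ in range(50):
    K += term
    term = -term @ Q

I = np.eye(n)
print("max |(I+K)(I+Q) - I| =", np.abs((I + K) @ (I + Q) - I).max())
\end{verbatim}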
On the other hand, let us suppose that the space of functions of interest is a Banach space ${\cal B}$ and that ${\cal Q}$ is a bounded linear operator on ${\cal B}$, ${\cal Q}: {\cal B} \to {\cal B}$, with $\lVert {\cal Q} \rVert<1$; then, one can show that $(I+{\cal Q})$ has an inverse given by $\sum_{n=0}^{\infty}(-{\cal Q})^n$, where the series converges uniformly in the set of all bounded linear operators on ${\cal B}$. In this formula, $(-{\cal Q})^n$ in the series corresponds to the iterated kernel $q_n$, $n=1,2,3,...,$ in the Neumann series and the convergence of the series is equivalent to the existence of the kernel $k$. It seems that the sufficient condition $\lVert {\cal Q} \rVert<1$ for the convergence of the Neumann series cannot be satisfied for any physically reasonable extension of the Kuhn kernel~\eqref{2i}; that is, our various attempts in this direction have been unsuccessful. In short, the Neumann series diverges; therefore, we resort to the Fourier transform method in this work. We caution that our mathematical approach may not be unique, as the theory may work in other function spaces that we have not considered here. The general mathematical problem is, of course, beyond the scope of this paper. \subsection{Fourier Transform Method} Following well-known mathematics (see, for example,~\cite[\S 9.6]{PS}), precise conditions can be determined for our convolution operators to be invertible. The basic idea is to use the Fourier transform $\mathcal{F}$ defined for $L^1$ functions (that is, functions which are absolutely integrable over all of space) by
\begin{equation}\label{33}
\mathcal{F}[f](\boldsymbol{\xi}) =\hat{f} (\boldsymbol{\xi}) = \int f(\mathbf{x}) e^{-i \boldsymbol{\xi} \cdot \mathbf{x}}~ d^3x\,
\end{equation}
to prove a basic lemma: \emph{If $q$ is in $L^1$, then its Fourier transform $\hat q$ is continuous and the convolution operator ${\cal Q}$ given by ${\cal Q}f=q \star f$ is a bounded operator on $L^2$ whose spectrum is the closure of the range of $\hat{q}$.} The proof outlined here uses standard results of $L^2$ theory. By an elementary argument, the Fourier transform of an arbitrary $L^1$ function is continuous. A deeper result (Plancherel's theorem) states that the Fourier transform can be extended to a bounded invertible operator on $L^2$ that preserves the $L^2$ norm; in other words, the extended Fourier transform is an isometric isomorphism of the Hilbert space $L^2$. This extended operator (also denoted by $\mathcal{F}$ and simply called the Fourier transform) maps convolutions to products: ${\cal F}[f\star g]={\cal F}[f] {\cal F}[g]$. Let $\hat{\mathcal{Q}}$ be the (multiplication) operator defined on $L^2$ by $\hat{\mathcal{Q}} f=\hat{q} f$, where $\hat q$ is the Fourier transform of the kernel of $\cal Q$. It follows from the definitions and the action of the Fourier transform on convolutions that $\mathcal{Q}= {\cal F}^{-1} \hat{ \mathcal{Q}} {\cal F}$; therefore, the spectra of the operators $\mathcal{Q}$ and $\hat{\mathcal{Q}}$ coincide. The spectrum of the multiplication operator $\hat{ \mathcal{Q}}$ (with its continuous multiplier $\hat q$) on $L^2$ is the closure of the range of $\hat{q}$, as required. A corollary of the lemma is the result that we require to analyze our integral equations: \emph{If the number $-1$ is not in the closure of the range of $\hat q$, then $I+\mathcal{Q}$ is invertible.}
To see how these ideas can be used to obtain Eq.~\eqref{30} more explicitly, we consider integral equation~\eqref{31} in the form $f+q\star f=g$ and apply the Fourier transform---under the assumption that $f$ and $g$ are in $L^2$ and $q$ is in $L^1$---to obtain the equivalent equation
\begin{equation}\label{34} (1+\hat{q})\hat{f} = \hat{g}\,. \end{equation}
If $1+\hat{q}\ne 0$, then
\begin{equation}\label{35} \hat{f}=\frac{1}{1+\hat{q}} \hat {g}= \Big(1+\frac{-\hat{q}}{1+\hat{q}}\Big) \hat {g}= \hat {g}+\tilde{ k} \hat {g}\,, \end{equation}
where
\begin{equation}\label{35.1} \tilde k:=\frac{-\hat{q}}{1+\hat{q}}\,. \end{equation}
Applying the inverse Fourier transform, it follows that
\begin{equation}\label{36} f= g+ {\cal F}^{-1}[\tilde k \hat g]\,. \end{equation}
\emph{If there is an $L^1$ function $k$ such that ${\cal F}[k]=\tilde k$, then}
\begin{equation}\label{38} f=g+k\star g \end{equation}
\emph{and $I+\mathcal{K}$, where $\mathcal{K} g=k\star g$, is the inverse of $I+\mathcal{Q}$.} These results are illustrated in detail in the next section via specific examples.

In these applications, we will employ a useful lemma regarding the Fourier sine transform. Consider the integral
\begin{equation}\label{40} J(\xi)= \int_0^{\infty} h(x, \xi)\sin(\xi x)dx\,. \end{equation}
\emph{For each $\xi \in (0, \infty)$, let $h(x,\xi)$ be a smooth positive integrable function that monotonically decreases over the interval of integration; then, $J(\xi)>0$.} To prove this assertion for each $\xi >0$, we divide the integration interval in Eq.~\eqref{40} into segments $(2\pi \xi^{-1}n, 2\pi \xi^{-1}n +2\pi \xi^{-1})$ for $n = 0, 1, 2, ...$. In each such segment, the corresponding sine function, $\sin(\xi x)$, goes through a complete cycle and is positive in the first half and negative in the second half. On the other hand, the monotonically decreasing function $h(x, \xi)>0$ is consistently larger in the first half of the cycle than in the second half; therefore, the result of the integration over each full cycle is positive and consequently $J(\xi)>0$. For $\xi \to 0$, $\sin (\xi x) \to 0$ and hence $J(0)=0$, while as $\xi \to \infty$ the integration segments shrink to zero and $J$ tends to $0$, provided that the corresponding limit of $h(x, \xi)$ is finite everywhere over the integration domain. This latter conclusion is, of course, a variation on the Riemann-Lebesgue lemma.

\section{Existence of the Linear Kernel: Examples}

We wish to determine $k(u)$ from a knowledge of $q(u)$. Let us note that if $q(\mathbf{u})$ is only a function of the radial coordinate $u$, Eq.~\eqref{33} reduces to
\begin{equation}\label{B2} \hat{q}(\xi)=\frac{4\pi}{\xi}\int_0^{\infty}rq(r)\sin(\xi r)dr\,, \end{equation}
where $\xi:0\to \infty$ is the magnitude of $\boldsymbol{\xi}$. That is, in Eq.~\eqref{33}, we introduce spherical polar coordinates $(r, \theta, \phi)$ and imagine that the coordinate system is so oriented that $\boldsymbol{\xi}$ points along the polar axis; then, the angular integrations can be simply carried out using the fact that
\begin{equation}\label{B3} \int_0^{\pi}e^{-i \xi r \cos \theta}\sin \theta d\theta=2 \frac{\sin(\xi r)}{\xi r}\,. \end{equation}
In general, the Fourier transform $\hat{q}(\boldsymbol{\xi})$ of a square-integrable function $q(\mathbf{u})$ is square integrable. In case $1+\hat{q}\ne 0$ and $\hat{k}(\boldsymbol{\xi})$ is an $L^2$ function, the Fourier transform method of the previous section becomes applicable here.
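The chain of relations \eqref{34}--\eqref{38}, together with the radial formula \eqref{B2}, translates directly into a simple numerical recipe for passing from a given reciprocal kernel $q$ to the nonlocal kernel $k$. The following Python sketch illustrates the procedure for an assumed Gaussian kernel (chosen purely for illustration, and not one of the galactic models treated below); the inverse transform used for $k$ is the radial inverse of Eq.~\eqref{B2}.

\begin{verbatim}
import numpy as np
from scipy.integrate import quad

# Illustrative radial kernel (an assumed Gaussian, normalized so that
# the integral of q over all space is 1); any radial q(r) can be used.
def q(r):
    return np.exp(-r**2) / np.pi**1.5

def q_hat(xi):
    # Eq. (B2): radial Fourier transform via a sine transform
    if xi < 1e-8:
        return 4 * np.pi * quad(lambda r: r**2 * q(r), 0, np.inf)[0]
    return (4 * np.pi / xi) * quad(lambda r: r * q(r) * np.sin(xi * r),
                                   0, np.inf)[0]

def k_hat(xi):
    # Eq. (35.1): Fourier transform of the nonlocal kernel
    return -q_hat(xi) / (1.0 + q_hat(xi))

def k(r, xi_max=40.0):
    # radial inverse of Eq. (B2); valid when 1 + q_hat never vanishes
    return quad(lambda xi: xi * k_hat(xi) * np.sin(xi * r),
                0, xi_max, limit=200)[0] / (2 * np.pi**2 * r)

print(q_hat(0.0), k(1.0))   # here 1 + q_hat > 0 everywhere, so k exists
\end{verbatim}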
Thus a suitable nonlocal kernel $k(\mathbf{x})$ exists in this case and is given by \begin{equation}\label{B4} k(\mathbf{x}) = \frac{1}{(2\pi)^3} \int \hat{k} (\boldsymbol{\xi}) e^{i \boldsymbol{\xi} \cdot \mathbf{x}}~ d^3\boldsymbol{\xi}\,, \quad \hat{k} (\boldsymbol{\xi})=-\frac{\hat{q} (\boldsymbol{\xi})}{1+\hat{q} (\boldsymbol{\xi})}\,. \end{equation} For the explicit radial examples under consideration, Eq.~\eqref{B4} takes the form \begin{equation}\label{B5} k(r)=-\frac{1}{2\pi^2 r}\int_0^{\infty}\frac{\xi\hat{q}(\xi)}{1+\hat{q}(\xi)}\sin(\xi r)d\xi\,. \end{equation} \subsection{An Example} To extend Kuhn's kernel, Eq.~\eqref{2i}, smoothly over all space, one may consider, for instance, \begin{equation}\label{BB1} q(u)=\frac{1}{4\pi \lambda}\frac{d}{du}\Big[-\frac{F(u)}{a+u}\Big]\,. \end{equation} Here $a$ is a constant length scale characteristic of the radius of nuclei of spiral galaxies, while $F(u)$ is a smooth function that is nearly unity over much of the interval $(0, A)$, but then rapidly drops off to zero as $u\to \infty$. The constant $A$ represents another length scale characteristic of the radius of galactic disks. Thus $a\ll \lambda < A$ under physically reasonable conditions, and for $a\ll u < A$, one can show that Eq.~\eqref{BB1} essentially coincides with Kuhn's kernel. To recover the flat rotation curves of spiral galaxies, Eq.~\eqref{BB1} should agree with Kuhn's kernel from the bulge, which is, say, at a distance of about $\lambda \approx 10$ kpc from the galactic center, out to the edge of the galaxy, which is, say, at a distance of about $3\lambda$. The function $F(u)$ can be chosen so as to render $1+\hat{q} > 0$; to illustrate this point, we choose \begin{equation}\label{BB2} F(u)= e^{-u/A}\,. \end{equation} Let us now consider the application of Eqs.~\eqref{B2} and~\eqref{B5} to the particular case of Eqs.~\eqref{BB1} -- \eqref{BB2}. Regarding the parameters that appear in these equations, it is sufficient to suppose at the outset only that $\lambda>0$, $a>0$ and $A>0$ are length scales of interest; moreover, we introduce, for the sake of simplicity, \begin{equation}\label{B6} \alpha :=\frac{1}{A}\,. \end{equation} We note that $q(u)$ is smooth and positive everywhere and rapidly decreases to zero at infinity; indeed, $q(u)$ is integrable as well as square integrable. It is preferable to work with dimensionless quantities; thus, we let all lengths---such as $r$, $u$, $\lambda$, $a$ and $A$---be expressed in units of $\lambda$. Then, $\lambda^3q$, $\lambda \xi$, $\lambda \alpha$ and $\hat{q}$ are dimensionless. Similarly, $\lambda^3k$ and $\hat{k}$ are dimensionless. Henceforth, we deal with these dimensionless quantities; in effect, this means that $\lambda=1$ in the following formulas. \emph{It is possible to show that in this case for any $\xi \ge 0$, $\hat{q}(\xi)> -a$.} Substituting Eqs.~\eqref{BB1} and~\eqref{BB2} in Eq.~\eqref{B2} and integrating by parts, we find \begin{equation}\label{B7} \hat{q}(\xi)=\frac{1}{\xi}\int_0^{\infty}\frac{e^{-\alpha r}}{a+r}~\frac{d}{dr}[r\sin(\xi r)]~dr\,. 
\end{equation} Next, differentiating $r\sin(\xi r)$ and noting that \begin{equation}\label{B8} \sin(\xi r) + \xi r \cos(\xi r)=[\sin(\xi r)-a \xi \cos(\xi r)]+(a+r)\xi \cos(\xi r)\,, \end{equation} Eq.~\eqref{B7} can be written as \begin{equation}\label{B9} \hat{q}(\xi)= {\cal I} + \int_0^{\infty}e^{-\alpha r}\cos(\xi r)dr\,, \end{equation} where \begin{equation}\label{B10} \int_0^{\infty}e^{-\alpha r}\cos(\xi r)dr =\frac{\alpha}{\alpha^2 + \xi^2}\,, \end{equation} according to formulas 3.893 on page 477 of Ref.~\cite{G+R}, and \begin{equation}\label{B11} {\cal I}=\frac{1}{\xi}\int_0^{\infty}\frac{e^{-\alpha r}}{a+r}~[\sin(\xi r)-a \xi \cos(\xi r)]~dr\,. \end{equation} Let us now introduce an angle $\gamma$ connected with $a\xi$ such that \begin{equation}\label{B12} a\xi :=\tan \gamma\,, \end{equation} and note that as $\xi: 0\to\infty$, we have $\gamma: 0\to \pi/2$ and $\gamma/\xi: a\to 0$. It is useful to introduce a new variable $X$ in Eq.~\eqref{B11}, $r=X+\gamma/\xi$, since \begin{equation}\label{B13} \sin(\xi r)-a \xi \cos(\xi r)=\frac{1}{\cos \gamma}\sin(\xi r - \gamma)\,. \end{equation} Then, Eq.~\eqref{B11} can be written as \begin{equation}\label{B14} {\cal I}=\frac{e^{-\alpha \gamma/\xi}}{\xi \cos \gamma}\int_{-\gamma/\xi}^{\infty}~\frac{e^{-\alpha X}}{a+\frac{\gamma}{\xi}+X}~\sin(\xi X)~dX\,. \end{equation} In this expression, the integration from $X=-\gamma/\xi$ to $\infty$ can be expressed as a sum of two terms, one from $0$ to $\infty$ and the other from $X=-\gamma/\xi$ to $0$. That is, ${\cal I}={\cal P}+{\cal N}$, where \begin{equation}\label{B15} {\cal P}=\frac{e^{-\alpha \gamma/\xi}}{\xi \cos \gamma}\int_0^{\infty}~\frac{e^{-\alpha X}}{a+\frac{\gamma}{\xi}+X}~\sin(\xi X)~dX \end{equation} is positive by the argument presented at the end of Sec. II, since $\exp(-\alpha X)/(X+a+\gamma/\xi)$ is a smooth positive integrable function that monotonically decreases for $X: 0\to \infty$, and \begin{equation}\label{B16} {\cal N}=\frac{e^{-\alpha \gamma/\xi}}{\xi \cos \gamma}\int_{-\gamma/\xi}^0~\frac{e^{-\alpha X}}{a+\frac{\gamma}{\xi}+X}~\sin(\xi X)~dX\,, \end{equation} which is negative. This latter point can be made explicit by introducing a new variable $\xi X = - Y$ into Eq.~\eqref{B16}; then, we have \begin{equation}\label{B17} -\xi \cos \gamma ~ {\cal N}(\xi)=\int_0^{\gamma}~\frac{e^{-\frac{\alpha}{\xi}(\gamma-Y)}}{(\gamma-Y)+a\xi}~\sin Y~dY\,. \end{equation} The right-hand side of this equation involves an integrand that increases monotonically from $0$ to $\sin \gamma/(a\xi)$ as $Y: 0\to \gamma$. Thus the right-hand side of Eq.~\eqref{B17} is less than $\gamma \sin \gamma/(a\xi)$; consequently, ${\cal N}(\xi)> -\gamma / \xi$ by Eq.~\eqref{B12}. As $0\le \gamma / \xi \le a$, we conclude that ${\cal N}(\xi)> -a$. Collecting our results, we therefore have that $\hat{q}> - a$ and \begin{equation}\label{B18} 1+\hat{q}(\xi) > 1-a\,. \end{equation} \emph{Hence the Fourier transform method of the previous section is applicable to Eq.~\eqref{BB1} if $a<1$. It then follows from Eq.~\eqref{B4} that $|\hat{k}|<|\hat{q}|/(1-a)$, so that $\hat{k}(\xi)$ is in $L^2$ as well, and we can find the nonlocal kernel $k$ from Eq.~\eqref{B5}.} In connection with the rotation curves of spiral galaxies, it is useful to consider the amount of sham dark matter that is associated with such a model. 
An estimate of the net amount of simulated dark matter $M_D$ can be obtained from the integration of Eq.~\eqref{2c} over all space, where we set $\rho(\mathbf{y})=M\delta(\mathbf{y})$ for the sake of simplicity. Thus \begin{equation}\label{B19} \frac{M_D}{M} \approx 4\pi \int_0^\infty r^2q(r)dr\,. \end{equation} For the example under consideration, we find from Eqs.~\eqref{BB1},~\eqref{BB2} and~\eqref{B19} via integration by parts that \begin{equation}\label{B20} \frac{M_D}{M} \approx \frac{2}{\alpha} \int_0^\infty\frac{\zeta e^{-\zeta}}{a\alpha + \zeta}d\zeta\,, \end{equation} where the definite integral can be expressed in terms of the exponential integral function as $1+a\alpha \exp(a\alpha) {\rm Ei} (- a\alpha)$---see, for instance, page 311 of Ref.~\cite{G+R}. Thus nonlocality in this case simulates, in effect, a net amount of dark matter that is nearly $2/\alpha$ times the actual mass $M$ of the galaxy, since the integral term on the right-hand side of Eq.~\eqref{B20} is nearly unity for physically reasonable values of the parameters, namely, $0<a\alpha \ll 1$. Indeed, for $0<x\ll 1$, ${\rm Ei}(-x)\approx {\cal C} + \ln x$, where ${\cal C}=0.577...$ is the Euler-Mascheroni constant---see page 927 of Ref.~\cite{G+R}. Further considerations involving kernels $q$ and $k$ are relegated to Appendix A. Choosing dimensionless parameters $\alpha= 0.1$ and $a= 0.001$ in this example, the corresponding numerical results for $\hat{q}$ and $k$ are presented in Figures 1 and 2. \subsection{A Second Example} The purpose of this subsection is to discuss a second example of a nonlocal kernel. We start with the reciprocal kernel \begin{equation}\label{C1} q(u)=\frac{1}{4\pi \lambda}~ \frac{1+\alpha(a+u)}{u(a+u)}~e^{-\alpha u}\,. \end{equation} Here $\alpha :=1/A$, as before. This reciprocal kernel has a \emph{central cusp} and behaves as $1/u$ for $u\ll a$, which is reminiscent of the density of dark matter in certain dark matter models---see, for instance, Ref.~\cite{S}. Moreover, for $a \ll u<A$, $q$ behaves like the Kuhn kernel, while for $u\gg A$, it falls off exponentially to zero. Kernel~\eqref{C1} is a smooth positive integrable function that is in $L^2$. Using dimensionless quantities, Eq.~\eqref{B2} takes the form \begin{equation}\label{C2} \hat{q}(\xi)=\frac{1}{\xi}\int_0^{\infty}(\alpha + \frac{1}{a+r})e^{-\alpha r}\sin(\xi r)dr\,. \end{equation} From formulas 3.893 on page 477 of Ref.~\cite{G+R}, we find \begin{equation}\label{C3} \hat{q}(\xi)=\frac{\alpha}{\alpha^2 + \xi^2} + \frac{1}{\xi}\int_0^{\infty}\frac{e^{-\alpha r}}{a+r}\sin(\xi r)dr\,. \end{equation} Here we can directly use the lemma given at the end of Sec. II, since $\exp(-\alpha r)/(a+r)$ is a smooth positive integrable function that decreases monotonically for $r: 0 \to \infty$, to conclude that for $0\le \xi <\infty$, $\hat{q}(\xi)> 0$, while $\hat{q}(\xi)\to 0$ as $\xi \to \infty$ by the Riemann-Lebesgue lemma. It follows that $|\hat{k}(\xi)| \le |\hat{q}(\xi)|$, so that $\hat{k}$ is in $L^2$ as well and the nonlocal kernel $k$ can be determined via Eq.~\eqref{B5}. \begin{figure}\label{Fig:1} \includegraphics[scale=0.8,angle=0]{Fig1.eps} \caption{Plot of $\hat{q}$ versus $\lambda \xi$ for the reciprocal kernel $q$ given in Eqs.~\eqref{BB1} and~\eqref{BB2}. The parameter values are $\lambda \alpha = 0.1$ and $a/\lambda = 0.001$. 
The function $\hat{q}$ starts from $\hat{q}(0) \approx 20$ and rapidly falls off initially, but then slowly decreases to zero as $\lambda \xi \to \infty$.} \end{figure} \begin{figure}\label{Fig:2} \includegraphics[scale=0.8,angle=0]{Fig2.eps} \caption{Plot of $-\lambda^3 k$ versus $u/\lambda$ when the reciprocal kernel $q$ is given by Eqs.~\eqref{BB1} and~\eqref{BB2}. The parameter values are $\lambda \alpha = 0.1$ and $a/\lambda = 0.001$, just as in Figure 1. The function $-\lambda^3 k$ starts from $\approx 80101$ at $u=0$ and drops off to nearly zero very fast; in fact, for $u/\lambda \ge 2.5$ it is essentially zero at the level of accuracy of this plot.} \end{figure} \begin{figure}\label{Fig:3} \includegraphics[scale=0.8,angle=0]{Fig3.eps} \caption{Plot of $\hat{q}$ versus $\lambda \xi$ for the reciprocal kernel $q$ given in Eq.~\eqref{C1}. The parameter values are $\lambda \alpha = 0.1$ and $a/\lambda = 0.001$, just as in Figure 1. As pointed out in Appendix A, $\hat{q}$ is in this case always slightly larger than the one in Figure 1, but this is hardly noticeable for the parameter values under consideration. For instance, at $\lambda \xi =2$, $\hat{q} \approx 0.773$ in Figure 1, while here $\hat{q} \approx 0.779$.} \end{figure} \begin{figure}\label{Fig:4} \includegraphics[scale=0.8,angle=0]{Fig4.eps} \caption{Plot of $-\lambda^3 k$ versus $u/\lambda$ when the reciprocal kernel $q$ is given by Eq.~\eqref{C1}. The parameter values are $\lambda \alpha = 0.1$ and $a/\lambda = 0.001$, just as in Figure 3. The function $-\lambda^3 k$ starts from $\infty$ at $u=0$ and drops off to nearly zero very fast; in fact, for $u/\lambda \ge 2.5$ it is essentially zero at the level of accuracy of this plot. Though this figure appears to be indistinguishable from Figure 2 in the plotted range, their numerical values are indeed different.} \end{figure} In this case, the analog of Eq.~\eqref{B19} is given by \begin{equation}\label{C4} \frac{M_D}{M} \approx \frac{2}{\alpha} + a e^{a\alpha} {\rm Ei}(-a\alpha)\,, \end{equation} which is, for $0<a\alpha \ll 1$, nearly the same as in Eq.~\eqref{B20}. For this second example, the numerical results involving $\hat{q}$ and $k$ for $\alpha= 0.1$ and $a= 0.001$ are presented in Figures 3 and 4. Appendix A contains further useful mathematical results relevant to the examples described in this section and the corresponding numerical work presented in the figures. The nonlocal ``constitutive'' kernel $k(u)$ turns out to be negative for models of spiral galaxies under consideration in this work. Moreover, as Figures 2 and 4 indicate, nonlocality in this case involves sampling sufficiently close spatial regions. Indeed, around any point $\mathbf{x}$, the influence of the field amplitude at point $\mathbf{y}$ may be significant only when $u=|\mathbf{x}-\mathbf{y}|$ is smaller than around $2.5\lambda$, or about 25 kpc. \emph{In fact, as expected, at any given field point, the nonlocal influence of the field amplitude at a nearby point decreases with increasing distance extremely fast.} \section{Nonlocal and Nonlinear Poisson's Equation} The purpose of this section is to present the main outlines of a formal approach to the modified Poisson equation with a general nonlinear kernel. That is, we do \emph{not} assume here that the nonlocal kernel consists of a dominant linear part together with a small nonlinear perturbation. The right-hand side of Eq.~\eqref{2} can be replaced by the Laplacian of the Newtonian potential via Eq.~\eqref{1}. 
Furthermore, it is straightforward to see that the nonlocal contribution to Eq.~\eqref{2} can be written as the divergence of a vector field. It follows from these remarks that modified Poisson's equation can thus be written as $\nabla \cdot \boldsymbol{\Psi}=0$, where \begin{equation}\label{4} \boldsymbol{\Psi} = \nabla \Phi + \int \Bbbk(\mathbf{x},\mathbf{y})\nabla_{\mathbf{y}}\Phi(\mathbf{y})d^3y - \nabla \Phi_{N}\,. \end{equation} For a bounded matter distribution, we can write the solution of Eq.~\eqref{1} as \begin{equation}\label{4a} \Phi_{N}(\mathbf{x}) = -G \int \frac{\rho(\mathbf{y})}{|\mathbf{x} - \mathbf{y}|}d^3y, \end{equation} so that, as is customary, the Newtonian gravitational potential is assumed to be zero at infinity~\cite{Kellogg}. We are interested in the solution of the nonlinear integral equation \begin{equation}\label{5} \nabla \Phi + \int \Bbbk(\mathbf{x},\mathbf{y})\nabla_{\mathbf{y}}\Phi(\mathbf{y})d^3y = \nabla \Phi_{N} + \boldsymbol{\Psi}\,. \end{equation} The divergence-free vector field $\boldsymbol{\Psi}$ must be such that Eq.~\eqref{5} is integrable. Indeed, the integrability condition for Eq.~\eqref{5} is that \begin{equation}\label{6} \int \nabla_{\mathbf{x}} \Bbbk(\mathbf{x},\mathbf{y}) \times \nabla_{\mathbf{y}}\Phi(\mathbf{y})d^3y = \nabla \times \boldsymbol{\Psi}\,. \end{equation} In effect, the modified Poisson equation has thus been once integrated and reduced to Eqs.~\eqref{5} and \eqref{6}, which are, however, still unwieldy. Let $\mathbf{U}$ represent the left-hand side of Eq.~\eqref{6}; then, one can express Eq.~\eqref{6} as $\nabla \times \boldsymbol{\Psi}=\mathbf{U}$, where $\mathbf{U}$ is divergence-free. The divergence of $\boldsymbol{\Psi}$ vanishes and its curl is $\mathbf{U}$; therefore, the curl of $-\mathbf{U}$ is the Laplacian of $\boldsymbol{\Psi}$, \begin{equation}\label{7} \nabla^2\boldsymbol{\Psi}=-\nabla \times \mathbf{U}\,. \end{equation} If $\mathbf{U}(\mathbf{x})$ is bounded for small $r=|\mathbf{x}|$, falls off to zero faster than $1/r$ for large $r$ and $\boldsymbol{\Psi} \to 0$ as $r \to \infty$, then \begin{equation}\label{8} \boldsymbol{\Psi}(\mathbf{x})=\frac{1}{4\pi} \nabla \times \int \frac{\mathbf{U}(\mathbf{y})} {|\mathbf{x}-\mathbf{y}|}d^3y\,, \quad \mathbf{U}(\mathbf{x})=\nabla \times \int \Bbbk(\mathbf{x},\mathbf{y})\nabla_{\mathbf{y}}\Phi(\mathbf{y})d^3y\,. \end{equation} In this way, one can determine $\boldsymbol{\Psi}$ from the integrability condition, namely, Eq.~\eqref{6}, and substitute it back into Eq.~\eqref{5}. \subsection{Formal Solution via Successive Approximations} The solution of the integral equation for the gravitational potential $\Phi$ is expected to consist of the Newtonian gravitational potential $\Phi_N$ together with nonlocal corrections as in a Neumann series; therefore, it is natural to devise a formal solution of Eq.~\eqref{5} using the method of successive approximations~\cite{T}. In view of the divergence of the Neumann series in connection with the considerations of the first part of this paper, we must assume here that the gravitational system under consideration in this subsection cannot be approximated by a point mass. Let $\Phi_0, \Phi_1, ..., \Phi_n, ...$ be a series of approximations to the gravitational potential such that $\Phi_0:=\Phi_N$ and $\Phi_n$ approaches $\Phi$ in the limit as $n \to \infty$. 
Moreover, for $n>0$, we define $\Phi_n$ to be such that
\begin{equation}\label{9} \nabla \Phi_1 = \nabla \Phi_N + \boldsymbol{\Psi}_0 - \int K(\mathbf{u}, v_0)\nabla_{\mathbf{y}}\Phi_0(\mathbf{y})d^3y, \end{equation}
\[ \vdots \]
\begin{equation}\label{10} \nabla \Phi_n = \nabla \Phi_N + \boldsymbol{\Psi}_{n-1} - \int K(\mathbf{u}, v_{n-1})\nabla_{\mathbf{y}}\Phi_{n-1}(\mathbf{y})d^3y, \end{equation}
\begin{equation}\label{11} \nabla \Phi_{n+1} = \nabla \Phi_N + \boldsymbol{\Psi}_n - \int K(\mathbf{u}, v_n)\nabla_{\mathbf{y}}\Phi_n(\mathbf{y})d^3y, \end{equation}
and so on. Let us recall here that $\Bbbk(\mathbf{x},\mathbf{y})=K(\mathbf{u},v)$ and we have extended the definition of $v$ in Eq.~\eqref{3} such that $v_n=|\nabla_{\mathbf{y}}\Phi_n(\mathbf{y})| / |\nabla_{\mathbf{x}} \Phi_n(\mathbf{x})|$. Moreover, $\boldsymbol{\Psi}_0, \boldsymbol{\Psi}_1, ..., \boldsymbol{\Psi}_n, ...$ are such that the integrability condition is satisfied at each step of the approximation process, namely,
\begin{equation}\label{12} \nabla \times \boldsymbol{\Psi}_n=\int \nabla_{\mathbf{x}} K(\mathbf{u},v_n) \times \nabla_{\mathbf{y}}\Phi_n(\mathbf{y})d^3y \end{equation}
for $n=0,1,2,...$. Here $\boldsymbol{\Psi}_n$, for instance, can be expressed in terms of $\Phi_n$ using the method described in Eqs.~\eqref{7} and \eqref{8}; then, the result may be employed in the expression for $\nabla \Phi_{n+1}$ in Eq.~\eqref{11} of the successive approximation scheme. As $n \to \infty$, we expect that $\Phi_n$ approaches $\Phi$ and $\boldsymbol{\Psi}_n$ approaches $\boldsymbol{\Psi}$, so that the limiting form of Eq.~\eqref{12} coincides with Eq.~\eqref{6}. The \emph{convergence} of this successive approximation process depends of course upon the nature of the kernel and its treatment is beyond the scope of this work, as the general form of kernel $K(\mathbf{u},v)$ is unknown at present. The general solution of Eq.~\eqref{2} presented in this section can be used, in principle, to restrict the form of kernel $K(\mathbf{u},v)$ on the basis of observational data. Nonlocal gravity simulates dark matter~\cite{nonlocal, NonLocal, BCHM, BM}; therefore, it may be possible to determine the general \emph{nonlinear} kernel $K$ from the comparison of our general solution of Eq.~\eqref{2} with astrophysical data regarding dark matter. However, the treatment of this general inverse problem of nonlocal gravity is a task for the future.

\section{Gravitational Potential of a Point Mass}

As a simple application of the formal procedure developed in the previous section, we will consider here the gravitational potential due to a point mass $M$ at the origin of spatial coordinates, so that $\rho(\mathbf{x})=M\delta(\mathbf{x})$. The corresponding Newtonian potential is $\Phi_N=-GM/r$ and we expect that $\Phi$ is also just a function of $r$ as a consequence of the spherical symmetry of the point source. Similarly, it is natural to assume that the kernel's dependence on $\mathbf{u}$ is only through its magnitude $u$ due to the isotropy of the source. A detailed investigation reveals that $\boldsymbol{\Psi}=0$ in this case; we outline below the main steps in this analysis. In computing the integral term in Eq.~\eqref{5}, we introduce the spherical polar coordinate system $(y, \vartheta, \varphi)$ in which the polar axis is taken to be along the $\mathbf{x}$ direction.
The kernel in Eq.~\eqref{5} is then just a function of $r$, $y$ and $\cos\vartheta$; moreover, $\nabla_{\mathbf{y}}\Phi(\mathbf{y})$ equals $d\Phi(y)/dy$ times the unit vector in the $y$ direction. The azimuthal components of this unit vector vanish upon integration over all angles and only its polar component remains. Therefore, $\boldsymbol{\Psi}$ is purely radial in this case, namely,
\begin{equation}\label{13} \boldsymbol{\Psi} = \chi(r) \mathbf{x}\,, \end{equation}
which satisfies the integrability condition given in Eq.~\eqref{6}, since in this case the curl of $\boldsymbol{\Psi}$ identically vanishes. Here $\chi(r)$ can be determined from the requirement that $\nabla \cdot \boldsymbol{\Psi}=0$. It then follows that $\chi=m/r^3$, where $m$ is an integration constant. Thus $\boldsymbol{\Psi}=m\mathbf{x}/r^3$; that is, the right-hand side of Eq.~\eqref{5} is radial in direction and is given by $(GM+m)\mathbf{x}/r^3$. The resulting $\boldsymbol{\Psi}$ in effect indicates the presence of an extra delta-function source at the origin of spatial coordinates. We therefore set $m$, which is effectively a new mass parameter, equal to zero, as it simply renormalizes the mass of the source. Thus $\boldsymbol{\Psi}=0$ and Eq.~\eqref{5} reduces in this case to
\begin{equation}\label{14a} \frac{d\Phi}{dr} + 2 \pi \int_0^{\pi} \int_0^{\infty}K(u, v) \frac{d\Phi(y)}{dy}y^2dy \cos \vartheta \sin \vartheta d\vartheta = \frac{GM}{r^2}, \end{equation}
where $u$ and $v$ are given by
\begin{equation}\label{14b} u = \sqrt{r^2+y^2-2ry\cos \vartheta} \,, \quad v = |\frac{d\Phi(y)}{dy}/\frac{d\Phi(r)}{dr}|\,. \end{equation}
The extra factor of $\cos \vartheta$ in Eq.~\eqref{14a} is due to the fact that in Eq.~\eqref{5} the component of $\mathbf{y}/y$ along the polar axis is $\cos \vartheta$.

\subsection{Linear Kernel}

Let us assume, for the sake of simplicity, that $\Bbbk(\mathbf{x},\mathbf{y})=k(u)$, so that in this subsection we are only concerned with a nonlocally modified Poisson's equation that is \emph{linear} with a kernel that depends only on $u$ as a result of the spherical symmetry of the point source. Then, Eq.~\eqref{14a} for the \emph{linear} gravitational potential $\Phi_{\ell}$ reduces to
\begin{equation}\label{14} \frac{d\Phi_{\ell}}{dr} + 2 \pi \int_0^{\pi} \int_0^{\infty}k(\sqrt{r^2+y^2-2ry\cos \vartheta} \,)\frac{d\Phi_{\ell}(y)}{dy}y^2dy \cos \vartheta \sin \vartheta d\vartheta = \frac{GM}{r^2}\,, \end{equation}
which means that a linear integral operator with kernel $k$ acting on $d\Phi_{\ell}/dr$ results in $GM/r^2$. We note in passing that the successive approximation method of the previous section leads in this case to the standard Liouville-Neumann solution of Eq.~\eqref{14} via iterated kernels of the Fredholm integral equation of the second kind~\cite{T}; however, as discussed in Sec. II, the Neumann series diverges in this case and the corresponding solution does not exist under physically reasonable conditions. Therefore, we adopt the Fourier transform method and let $q(u)$ be the kernel that is reciprocal to $k(u)$; then, $d\Phi_{\ell}/dr$ is given by the linear integral operator, with $k$ replaced by $q$, acting on $GM/r^2$. That is,
\begin{equation}\label{15} \frac{d\Phi_{\ell}}{dr} = \frac{GM}{r^2} + 2 \pi GM \int_0^{\pi} \int_0^{\infty}q(\sqrt{r^2+y^2-2ry\cos \vartheta} \,)dy \cos \vartheta \sin \vartheta d\vartheta \,.
\end{equation} Substituting the Kuhn kernel~\eqref{2i} for $q$ in Eq.~\eqref{15} and performing the $y$-integration in the resulting integral first, we find that for $\vartheta \in (0, \pi]$, \begin{equation}\label{19} \int_0^{\infty}\frac{dy}{(y-r\cos \vartheta)^2+r^2\sin^2 \vartheta}=\frac{\pi - \vartheta}{r \sin \vartheta}\,.\end{equation} The $\vartheta$-integration is then straightforward and the end result is \begin{equation}\label{20} \frac{d\Phi_{\ell}}{dr}=\frac{GM}{r^2} + \frac{GM}{\lambda} \frac{1}{r}\,, \end{equation} in agreement with the radial derivative of Eq.~\eqref{2g}. In this way, starting from our general solution of the modified Poisson equation, we again recover the Tohline-Kuhn scheme of modified gravity. \subsection{Nonlinear Kernel} To gain some insight into the role of nonlinearity in Eq.~\eqref{14a}, let us suppose that nonlinearity constitutes a very small perturbation on a background linear kernel. In fact, we set $K(u,v) = k(u) + \epsilon P(u, v_{\ell})$, where $\epsilon$, $0<\epsilon \ll 1$, is a sufficiently small parameter and $v_{\ell}$ is obtained from $v$ in Eq.~\eqref{14b} by replacing $\Phi$ with $\Phi_{\ell}$. We thus expand $\Phi$ to first order in $\epsilon$ and thereby develop a simple linear perturbation theory for Eq.~\eqref{14a} such that \begin{equation}\label{21} \Phi =\,\Phi_{\ell} + \epsilon \Phi_{n\ell}\,. \end{equation} Here $\Phi_{\ell}(r)$ is given in general by Eq.~\eqref{15} and $\Phi_{n\ell}$ is the perturbation potential due to nonlinearity. Moreover, Eq.~\eqref{14a} implies that \begin{equation}\label{22} \frac{d\Phi_{n\ell}}{dr} + 2 \pi \int_0^{\pi} \int_0^{\infty}k(\sqrt{r^2+y^2-2ry\cos \vartheta} \,)\frac{d\Phi_{n\ell}(y)}{dy}y^2dy \cos \vartheta \sin \vartheta d\vartheta = N(r)\,, \end{equation} where $N(r)$ is due to the nonlinear part of the kernel and is given by \begin{equation}\label{23} N(r)=- 2 \pi \int_0^{\pi} \int_0^{\infty} P(u, v_{\ell})\frac{d\Phi_{\ell}(y)}{dy}y^2dy \cos \vartheta \sin \vartheta d\vartheta \,. \end{equation} As in the previous subsection, Eq.~\eqref{22} can be solved by means of kernel $q(u)$ that is reciprocal to $k(u)$ and we find \begin{equation}\label{24} \frac{d\Phi_{n\ell}}{dr} = N(r) + 2 \pi \int_0^{\pi} \int_0^{\infty}q(\sqrt{r^2+y^2-2ry\cos \vartheta} \,)N(y)y^2 dy \cos \vartheta \sin \vartheta d\vartheta \,. \end{equation} A consequence of this result should be noted here: Inspection of Eqs.~\eqref{15}, \eqref{23} and \eqref{24} reveals that $\Phi_{n\ell}(r)$ is simply proportional to the gravitational constant $G$. This feature is an example of the general scaling property of Eq.~\eqref{2}, which implies that any solution $\Phi$ of Eq.~\eqref{2} must be proportional to $G$. It is therefore possible to see that our nonlocal as well as nonlinear modification of Newtonian gravity cannot behave as in the Modified Newtonian Dynamics (MOND) approach to the breakdown of Newtonian gravity~\cite{M, SM, Ibata}. From the scaling property of our modified Poisson's equation, we expect that the gravitational potential $\Phi$ is in general proportional to the gravitational constant $G$, since the source term in Eq.~\eqref{2} is proportional to $G$. Therefore, the nonlocal theory, as a consequence of its particular \emph{nonlinear} form in the Newtonian domain, does \emph{not} contain a MOND regime, where the gravitational potential would then be proportional to $G^{1/2}$. 
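The Kuhn-kernel reduction leading to Eq.~\eqref{20} is straightforward to verify numerically. A short Python sketch (with $G=M=\lambda=1$ assumed purely for illustration) spot-checks Eq.~\eqref{19} and the remaining $\vartheta$-integration:

\begin{verbatim}
import numpy as np
from scipy.integrate import quad

# Spot-check of Eq. (19) at sample values of (r, theta):
r, theta = 2.0, 1.0
num = quad(lambda y: 1.0 / ((y - r * np.cos(theta))**2
                            + (r * np.sin(theta))**2), 0, np.inf)[0]
print(num, (np.pi - theta) / (r * np.sin(theta)))   # the two should agree

# With Eq. (19) inserted, the integral term of Eq. (15) for the Kuhn
# kernel becomes (GM / (2 lam r)) * Int_0^pi (pi - t) cos(t) dt; the
# remaining integral equals 2, which reproduces GM/(lam r) in Eq. (20).
print(quad(lambda t: (np.pi - t) * np.cos(t), 0, np.pi)[0])   # ~2.0
\end{verbatim}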
\section{Discussion} In recent papers~\cite{nonlocal, NonLocal, BCHM, BM}, nonlocality has been introduced into classical gravitation theory via a scalar kernel $k$. However, observational data can provide information about its reciprocal kernel $q$. This is similar to the situation in general relativity, where gravitation is identified with spacetime curvature, but observations generally do not directly measure the curvature of spacetime, except possibly in relativistic gravity gradiometry. We make a beginning in this paper in the treatment of the inverse problem of nonlocal gravity. The scalar nonlocal kernel $k$ must be determined from observational data that involve the reciprocal kernel $q$. Our preliminary study involves the Newtonian regime, where the nonlocally modified Poisson's equation is investigated in its linearized convolution form. We present a detailed mathematical analysis of the resulting Fredholm integral equation using the Fourier transform method and prove the existence of the nonlocal convolution kernel $k(\mathbf{u})$ when its reciprocal $q(\mathbf{u})$ satisfies certain physically reasonable conditions. Simple explicit examples are worked out in connection with the linear gravitational potential of spiral galaxies. To extend our treatment beyond the Newtonian domain, it would be necessary to consider relativistic generalizations of the Kuhn kernel along the lines indicated in Sec. III of Ref.~\cite{BCHM}. Next, we present a general treatment of the nonlocal and \emph{nonlinear} modification of Poisson's equation that represents nonlocal gravity in the Newtonian regime. The method of successive approximations is then employed to provide a formal solution. The utility of this general approach is illustrated for the determination of the gravitational potential of a point mass when nonlinearities are assumed to be relatively small. In this case, we recover anew the Tohline-Kuhn phenomenological modified gravity approach to the dark matter problem in astrophysics~\cite{Tohline,Kuhn,Jacob1988}. To place our work in the proper context, we note that nonlocal special relativity (developed since 1993, cf.~\cite{BM2}) and the principle of equivalence imply the necessity of a nonlocal generalization of Einstein's theory of gravitation. Here nonlocality is encoded in a nonlocal ``constitutive" kernel $k$ that must be determined from observation. In working out the physical consequences of nonlocal gravity, it was soon discovered~\cite{nonlocal, NonLocal} that it reproduces the 1980s Tohline-Kuhn phenomenological approach to dark matter as modified gravity~\cite{Tohline, Kuhn}. This connection is the most fundamental contact of the new theory with observation and indicates to us that we are on the right physical track. To verify this, we must compute the nonlocal kernel $k$ from the rotation curves of spiral galaxies and show that it has the proper physical properties expected of such a kernel. Our present paper accomplishes this task. That is, we extend the Kuhn kernel $q$ analytically to all space and then use the result to solve the inverse problem of finding kernel $k$ by means of Fourier integral transforms. The resulting $k$ has indeed just the expected properties and puts the nonlocal theory of gravity on a more solid observational foundation. \begin{acknowledgments} B.M. is grateful to F.W. Hehl and J.R. Kuhn for valuable comments and helpful correspondence. \end{acknowledgments}
\section{Introduction}

HH~1 and 2 were the first detected HH objects (Herbig 1951; Haro 1952). Since then, they have played an important role in the study of outflows from young stars, particularly because most of the general characteristics of HH outflows were first seen in HH~1 and 2 (see the review of Raga et al. 2011). The first measurement of proper motions in HH objects (Herbig \& Jones 1981) showed that HH~1 and 2 formed part of a bipolar outflow. The source of this bipolar outflow (centered between HH~1 and 2) was discovered in the radio continuum by Pravdo et al. (1985). This ``VLA~1'' source was later shown to have a jet-like structure (of $\sim 1''\times 0''.2$, see Rodr\'\i guez et al. 2000), aligned with the HH~1/2 axis. A jet-like structure directed towards HH~1 (the ``HH1-jet'') is observed at optical wavelengths (Strom et al. 1985 point out this feature, which was also visible in older images of the region). The base of this jet-like structure approaches the position of the outflow source (VLA~1) in images taken at progressively longer IR wavelengths (Roth et al. 1989). Reipurth et al. (2000) presented optical and IR images of the HH1-jet region obtained with the HST. Their NICMOS H$_2$~2.12$\mu m$ and [Fe~II]~1.64$\mu m$ images show that the emission of the HH1-jet extends to within $\sim 2''$ from the VLA~1 outflow source. It is well known that all these multiwavelength jets are a manifestation of the same astrophysical phenomenon (see e.g. the review by Raga et al. 2010). The fact that the observed HH1-jet emission does not extend to the position of the VLA~1 source and that a counterjet (directed towards HH~2) is not detected appears to be due to the presence of a dense molecular structure approximately centered on VLA~1 (see Torrelles et al. 1994; Choi \& Lee 1998 and Cernicharo et al. 2000). In this paper we present new {\it Spitzer} observations in which the counterjet (directed from the VLA~1 source towards HH~2) is detected for the first time, ending nearly thirty years of speculation about the nature of its absence.

\section{Observations}

The observations of the HH~1/2 system come from the GTO Spitzer Space Telescope program by Giovanni Fazio (PID 43) on the 'Orion Streamers', obtained with the infrared camera IRAC (Fazio et al. 2004) in February 2004. The data, consisting of the basic calibrated frames or BCDs, have been recovered from the Spitzer Legacy Archive, version S8.18. The original surveyed area is large and covers approximately 0.75\hbox{$^\circ$}$\times$3.3\hbox{$^\circ$} in the four bands (1, 2, 3, 4) = (3.6, 4.5, 5.8, 8.0) \mum. The data were collected using the High-Dynamic-Range (HDR) mode with a 12 sec integration time for the 'long' frames and 0.6 sec for the 'short' ones. For this work we have used the 'long' frames only, with a total integration time of 48 sec per pixel. The BCDs were then reprocessed with the HiRes deconvolution software AWAIC (A WISE Astronomical Image Co-Adder) developed by the Wide-field Infrared Survey Explorer (WISE) team for the creation of their Atlas images (see e.g. Masci \& Fowler 2009).\footnote[3]{http://wise2.ipac.caltech.edu/staff/fmasci/awaicpub.html} The AWAIC software optimizes the coaddition of individual frames by making use of the Point Response Function (PRF) as an interpolation kernel, to avoid flux losses in undersampled arrays like those of IRAC, and also allows a resolution enhancement (HiRes) of the final image, by removing the effect of the PRF from the data in the deconvolution process.
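Conceptually, the HiRes step behaves like a PRF-based image deconvolution. As a rough illustration only (the actual AWAIC algorithm, described by Masci \& Fowler 2009, is considerably more elaborate and operates on the full stack of frames), a Richardson-Lucy iteration with an assumed Gaussian PRF can be sketched as follows:

\begin{verbatim}
import numpy as np
from scipy.signal import fftconvolve

# Toy Richardson-Lucy deconvolution with an assumed Gaussian PRF; a
# schematic stand-in for PRF-based resolution enhancement, not AWAIC.
def gaussian_prf(size=25, fwhm=4.0):
    x = np.arange(size) - size // 2
    xx, yy = np.meshgrid(x, x)
    p = np.exp(-4.0 * np.log(2.0) * (xx**2 + yy**2) / fwhm**2)
    return p / p.sum()

def richardson_lucy(image, prf, n_iter=25):
    est = np.full_like(image, image.mean(), dtype=float)
    prf_mirror = prf[::-1, ::-1]
    for _ in range(n_iter):
        model = fftconvolve(est, prf, mode='same')
        ratio = image / np.maximum(model, 1e-12)
        est *= fftconvolve(ratio, prf_mirror, mode='same')
    return est

# e.g. sharp = richardson_lucy(coadded_frame, gaussian_prf(), n_iter=25)
\end{verbatim}

The choice of 25 iterations in this sketch mirrors the stopping point adopted below for the AWAIC HiRes maps.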
A similar method has been applied, quite successfully, to the {\it Spitzer} data of young stellar outflows like HH 46/47 (Noriega-Crespo et al. 2004a; Velusamy et al. 2007) and Cep E (Moro-Mart\'\i n et al. 2001; Noriega-Crespo et al. 2004b; Velusamy et al. 2011). On IRAC images, the HiRes processing enhances the angular resolution from the standard $\sim 2$\hbox{$^{\prime\prime}$}~to $\sim$ 0.6\hbox{$^{\prime\prime}$}~- 0.8\hbox{$^{\prime\prime}$}~(Velusamy et al. 2007). In Figures 1 and 2 we show the images of the HH~1/2 system obtained in the four IRAC bands (3.6, 4.5, 5.8 and 8.0\mum) before (i.e. standard coaddition of the BCDs) and after the HiRes reprocessing (iteration 25), respectively. At iteration 25 the HiRes AWAIC processing reaches an optimal angular resolution, simultaneously preserving most of the structure of the surrounding diffuse emission, for instance that of the arc near the South of the images. Some small artifacts, however, are present in all bands, indicating that no further iterations are needed. Figure 3 shows a closer view of the 4.5\mum~image, where the newly discovered counterjet is brightest, marking as well some of the well known optical knots of the HH~1 and 2 objects and the position of the VLA~1 source. The bright infrared source along the outflow symmetry axis, the so-called Cohen-Schwartz star (C-S), which was once thought to be the driving source (Cohen \& Schwartz 1979), is also indicated, as is the VLA~2 source that drives the HH~144 outflow (Reipurth et al. 1993). One can appreciate the performance of the HiRes processing and the ability of the mid-IR observations to discover new features by comparing the processed IRAC 4.5\mum~image with archival optical and near-IR observations of the HH~1/2 system obtained by the {\it Hubble Space Telescope}, with $\sim 5$ times better angular resolution (Fig. 4). The optical image was taken by the Wide Field Planetary Camera 2 (WFPC2) in 2007 (Hartigan et al. 2011) at [S~II] 6717/31~\AA, while the near-IR image was taken with the NICMOS3 camera in 1998 in the v=1-0 H$_2$ 2.12\mum~emission line (Reipurth et al. 2000). It is quite remarkable how similar the vibrational (2.12\mum) and rotational H$_2$ emission are; the latter arises from the S(9), S(10), S(11) and S(12) transitions at 4.952, 4.408, 4.180 and 3.996\mum, covered by the IRAC 4.5\mum~band. About halfway between the VLA~1 source and HH~2, there are two or three condensations detectable at 2.12 and 4.5\mum~that, given their positions, could belong to the counterflow as well.

\begin{figure} \centerline{ \includegraphics[width=0.475\textwidth]{f1.pdf}} \caption{IRAC maps of the HH~1/2 system, from top left clockwise, at 3.6, 4.5, 5.8 and 8.0\mum, using an inverse grayscale where dark regions represent high intensity. The field-of-view is $\sim$ 5\hbox{$^\prime$}. North is up and East is left.} \label{fig1} \end{figure}

\begin{figure} \centerline{ \includegraphics[width=0.475\textwidth]{f2.pdf}} \caption{IRAC observations of the HH~1/2 system as in Fig. 1, after 25 iterations using the HiRes AWAIC algorithm.} \label{fig2} \end{figure}

\section{The HH~1 jet and counterjet}

A closer view of the [S~II] 6717/31~\AA, 2.12, 4.5 and 8.0\mum~emission around the jet/counterjet region is shown in Fig. 5. A detailed analysis of the optical and near-IR HH~1-jet properties has been carried out by Reipurth et al. (2000), who showed that the H$_2$ 2.12\mum~and [Fe~II] 1.64\mum~emission arises closer to the VLA~1 source, to within 2.5\hbox{$^{\prime\prime}$}~NW of it, than that of [S~II].
Their analysis of the extinction, using the fact that the [Fe~II] 1.64\mum~and [S~II] 0.67\mum~lines have similar excitation energies and ionization potentials, and that their ratio is nearly constant for weak shocks, allowed them to determine a 4 mag increase in extinction over the $\sim 5$\hbox{$^{\prime\prime}$}~by which the near-IR emission gets closer to the VLA~1 source. In the mid-IR at 4.5\mum~both the jet and counterjet are clearly detected, and both can be traced back to the VLA~1 source. The jet/counterjet path, unfortunately, passes very close to two bright infrared sources: the C-S source on the NE, and a fainter one $\sim 10$\hbox{$^{\prime\prime}$}~SE of VLA~1. It is therefore possible that a couple of knots lying very close to the circular edges, where the AWAIC HiRes algorithm suppresses these sources, are affected by this artifact. Conservatively, the jet has a $\sim 12.9$\hbox{$^{\prime\prime}$}~length in the NW direction arising from VLA~1, while the counterjet has a $\sim 10$\hbox{$^{\prime\prime}$}~length in the opposite direction. Both the optical and the mid-IR jet extend out to the same NW position, the A$_{j}$ knot (Eisl\"offel, Mundt \& B\"ohm 1994). And finally, at both 4.5 and 8.0\mum, a couple of arcseconds NW of VLA~1, the emission broadens following the structure observed at the base of the 2.12\mum~jet, the so-called 'X' nebula (Reipurth et al. 2000). This nebula seems to be connected with the knots East of the jet; they do have a similar structure at 2.12 and 4.5\mum.

\begin{figure} \centerline{ \includegraphics[width=0.475\textwidth]{f3.pdf}} \caption{A closer view of the HH~1/2 system at 4.5\mum, after the HiRes AWAIC processing, where the jet and counterjet are best detected. The names of some of the well known optical knots of HH~2 (SE) and HH~1 (NW) are included (see e.g. Raga, Barnes \& Mateo 1990), as well as those of the HH~144 flow (Reipurth et al. 2000), its driving source (VLA~2), and the bright IR Cohen-Schwartz (C-S) source (Cohen \& Schwartz 1979).} \label{fig3} \end{figure}

\subsection{The Medium Surrounding the Counterjet}

In Figure 6, we present 4.5 and 5.8\mum\ intensity tracings along the jet/counterjet system. In order to obtain these tracings, we have defined an axis parallel to the direction of the outflow (the $x$-axis of Figure 6, with $x=0$ corresponding to the position of the VLA~1 source, and positive $x$ towards the NW), and averaged the intensity in a direction perpendicular to the outflow, in a box extending $\pm 3''$ to each side of the outflow axis. A background was computed from contiguous boxes on each side of the outflow (of $1''.5$ widths), and subtracted at all positions $x$ from the axial box in order to obtain the jet emission (the extraction is sketched below).
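In terms of array operations, this extraction can be outlined as follows (a schematic Python sketch with assumed inputs, namely an image array, pixel scale, outflow position angle and source position, rather than the actual measurement code):

\begin{verbatim}
import numpy as np

# Schematic intensity-tracing extraction. 'img' is an assumed 2D map,
# 'scale' its pixel size in arcsec, 'pa_deg' the outflow position angle,
# and (x0, y0) the pixel coordinates of VLA 1.
def tracing(img, x0, y0, pa_deg, scale,
            half_width=3.0, bg_width=1.5, ds=0.6):
    yy, xx = np.indices(img.shape)
    dx, dy = (xx - x0) * scale, (yy - y0) * scale
    pa = np.radians(pa_deg)
    s = dx * np.sin(pa) + dy * np.cos(pa)   # offset along the outflow axis
    t = dx * np.cos(pa) - dy * np.sin(pa)   # signed offset across the axis
    on = np.abs(t) <= half_width            # +/- 3 arcsec axial box
    off = (np.abs(t) > half_width) & (np.abs(t) <= half_width + bg_width)
    xs, prof = [], []
    for lo in np.arange(s.min(), s.max(), ds):
        sel = (s >= lo) & (s < lo + ds)
        if (sel & on).any() and (sel & off).any():
            xs.append(lo + 0.5 * ds)
            # average across the jet minus the side-box background
            prof.append(img[sel & on].mean() - img[sel & off].mean())
    return np.array(xs), np.array(prof)
\end{verbatim}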
\begin{figure} \centerline{ \includegraphics[width=270pt,height=160pt,angle=0]{f4.pdf}} \caption{\label{fig4} {A comparison of the HH~1/2 system, from left to right at optical (WFPC2 [S~II] 6717+31 \AA), near-IR (NICMOS 2.12\mum) and mid-IR wavelengths (IRAC 4.5\mum~HiRes [25 iterations] processing). The position of the VLA~1 source is marked with a circle 3\hbox{$^{\prime\prime}$}~in diameter.}} \end{figure}

\begin{figure} \centerline{ \includegraphics[width=260pt,height=130pt,angle=0]{f5.pdf}} \caption{\label{fig5} {A closer view of the jet/counterjet region, using once again the {\it Hubble Space Telescope} observations in [S~II] and 2.12\mum, compared with the IRAC 4.5\mum~HiRes (25 iterations) processing. The position of the VLA~1 source is marked with a circle 1\hbox{$^{\prime\prime}$}~in diameter.}} \end{figure}

The intensity tracings show that the emission from the NW jet is generally stronger than the counterjet emission. The bottom frame of Figure 6 shows the jet/counterjet intensity ratios as a function of distance from the VLA~1 source at 4.5 (left; broken line) and 5.8\mum~(right; solid line). It is clear that the jet/counterjet intensity ratio is $>1$ at 5.8\mum (at all positions). At 4.5\mum, the jet/counterjet intensity ratio is $>1$ out to $\sim 8''$ from the source, and $<1$ at $\sim 10''$ from VLA~1. This region (with jet/counterjet intensity ratio $<1$) is associated with the last knot along the counterjet, seen close to the stellar source $\sim 12''$ to the SE of VLA~1 (see Figures 3 and 5), and might therefore be contaminated by the deconvolved point spread function of the star. Therefore, we conclude that the observations are consistent with jet/counterjet intensity ratios $>1$ at all positions both at 4.5 and 5.8\mum\ (at least out to $\sim 8''$ from the VLA~1 source, see Figure 6).

If one assumes that the jet and counterjet emission is intrinsically symmetric, one would attribute the position-dependence of the jet/counterjet intensity ratio ($I_j/I_{cj}$, see the bottom frame of Figure 6) to the extinction produced by a high density clump surrounding VLA~1. The fact that $I_j/I_{cj}$ reaches a peak (with a value of $\approx 4.5$ at 4.5\mum, and of $\approx 3$ at 5.8\mum, see Figure 6) at $x\sim 3''$ would then indicate that the high density clump has a projected diameter of $\sim 6''$, and that for larger values of $x$ the counterjet emerges from behind the clump (lowering the observed $I_j/I_{cj}$ ratio). This $\sim 6''$ size is consistent with the extension along the outflow axis of the flattened H$^{13}$CO$^+$ clump observed by Choi \& Lee (1998), surrounding VLA~1.

Stellar outflows arising from young stellar objects (YSOs) are expected to be symmetric, since there is a preferential rotational axis and, to the best of our understanding, the formation of protostellar jets requires a magnetohydrodynamical coupling of the accreted gas with the spherically symmetric protostar, which is likely to have a symmetric bipolar magnetic structure as well (see e.g. Pudritz et al. 2007, and references therein). The accreted gas is provided by a disk-like structure created as a result of the spinning of the original cloud during the protostellar collapse (see e.g. Klein et al. 2007, and references therein). One could imagine, however, that the accretion process does not need to be symmetric, and could feed the protostar in a time dependent, alternating way (e.g. one stellar pole at a time). If this were the case, then the symmetry of the stellar jet could be broken, at least over a certain period of time, with no material ejected in one direction or another. One could then trace back the steps to understand where (e.g. in the morphology of the protostar's magnetic field, in its coupling with the accretion disk, or in the transfer of gas and angular momentum) this symmetry is broken. A truly asymmetric young stellar jet could have profound consequences for our understanding of the low-mass star formation process.

For nearly thirty years we have wondered if the HH~1/2 system had a counterjet, although, as mentioned in the introduction, evidence for a very dense gas (n(H$_2$) = $10^4$\cc) structure around VLA~1, using ammonia as a tracer (Torrelles et al.
1994), suggested that extinction was playing a major role in hiding that component of the flow. The NH$_3$ observations have an angular resolution of $\sim 4$\hbox{$^{\prime\prime}$}~($\sim 2\times10^3$ AU at the distance of Orion), sampling very well a 2\hbox{$^\prime$}~$\times$3\hbox{$^\prime$}~region covering the HH~1/2 outflow, and enough to distinguish a pancake-like structure perpendicular to the outflow axis and an East-West temperature gradient in it that indicates a further asymmetry. It is not a simple disk-like structure, but certainly a dense structure consistent with the idea of ``a collapsing interstellar ($\sim$ 0.4pc) toroid around VLA~1'' (Torrelles et al. 1994). NH$_3$ has been used as a tracer of high density and low temperature molecular gas around young stellar outflows for more than two decades. The NH$_3$(1,1) transition requires H$_2$ densities higher than 5$\times 10^3$\cc~to be detected reliably (Torrelles et al. 1983). Typical values around YSOs range from n(H$_2$) = 5$\times 10^3$\cc~to $\sim 10^5$\cc, with temperatures of 15 - 30 K (Torrelles et al. 1983). The structure around VLA~1 in HH~1/2 has n(H$_2$) $\sim 10^4$\cc~and T $\sim$ 12 K. A similar conclusion was reached by Choi \& Zhou (1997) using another high density molecular tracer (HCO$^+$). Slightly higher angular resolution observations (4.3\hbox{$^{\prime\prime}$}$\times$2.8\hbox{$^{\prime\prime}$}~beam; Choi \& Lee 1998) have traced the toroidal structure even closer to the VLA~1 source. Very close to the VLA~1 source, within a diameter of 4 AU, Cernicharo et al. (2000) estimated a very high visual extinction (80-100 magnitudes), but they were less concerned with the ``toroidal structure''. In Figure 7 we show a comparison of the NH$_3$(1,1) observations by Torrelles et al. (1994) with the mid-IR emission at 4.5\mum. Two things are worth noticing. First, the high density molecular gas does indeed prevent us from detecting the counterjet at shorter wavelengths; and second, the morphology of this dense structure is likely to prevent us from detecting the HH~144 counterflow as well. Independently of the detailed structure of the ``toroid'', it is clear that a large column density of material lies along the line-of-sight of the counterjet, higher than that along the line-of-sight of the visible jet.

An estimate of the extinction around the counterjet is possible, although it requires some assumptions. For instance, based on symmetry one could assume that the emission properties at 3.6 and 4.5\mum~of the jet and counterjet are very similar over the same area, and that any differences between the jet (North) and the counterjet (South) are due only to extinction. A comparison of the ratio of the surface brightness of the 3.6 to 4.5\mum~images indeed shows smaller values on the South with respect to the North. The average ratios of the North and South sections of the jet, using a 2.4\hbox{$^{\prime\prime}$}$\times$39.0\hbox{$^{\prime\prime}$}~box, have values of 0.40$\pm$0.27 and 0.20$\pm$0.16, respectively; i.e. the counterjet is a factor of two fainter at 3.6\mum~relative to 4.5\mum~when compared to the jet. Two recent studies have analyzed the properties of the mid-IR extinction (Indebetouw et al. 2005; Cambresy et al. 2011) in regions located in the Galactic plane, i.e. RCW 49 and the Trifid, with very similar results, so one can assume that such an extinction law approximately holds in the environment around the HH~1/2 system. Taking the ratios of extinction at visual, K$_s$ (2.2\mum), 3.6 and 4.5\mum~to be A$_K$/A$_V$ = 0.112, A$_{3.6}$/A$_K$ = 0.611 and A$_{4.5}$/A$_K$ = 0.500 (Cambresy et al. 2011), and neglecting the extinction at 4.5\mum~(since both jet and counterjet are detected), then at 3.6\mum~for the counterjet one obtains A$_{3.6}$ = 0.75, which corresponds to A$_K$ = 1.2 and A$_V$ = 11.0 magnitudes, respectively.
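These numbers follow from simple magnitude arithmetic; a minimal check (assuming, as above, negligible differential extinction at 4.5\mum):

\begin{verbatim}
import numpy as np

# Extinction toward the counterjet from the 3.6/4.5 um surface-brightness
# ratios of the jet (0.40) and counterjet (0.20), with A(4.5) ~ 0 and the
# Cambresy et al. (2011) ratios A_3.6/A_K = 0.611 and A_K/A_V = 0.112.
r_jet, r_cjet = 0.40, 0.20
A36 = 2.5 * np.log10(r_jet / r_cjet)   # ~0.75 mag at 3.6 um
A_K = A36 / 0.611                      # ~1.2 mag
A_V = A_K / 0.112                      # ~11 mag
print(A36, A_K, A_V)
\end{verbatim}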
\begin{figure} \centerline{ \includegraphics[width=0.475\textwidth]{f6.pdf}} \caption{Top frame: surface brightness (averaged across the width of the jet, in MJy/sr) at 4.5 (dashed line) and 5.8\mum~(solid line) along the NW jet (positive $x$) and SE counterjet (negative $x$) centered on the VLA~1 source, showing the asymmetry of the two outflow lobes. Bottom frame: the jet/counterjet intensity ratio as a function of distance from the VLA~1 source at 4.5 (left panel; broken line) and 5.8 \mum~(right panel; solid line).} \label{fig6} \end{figure}

\begin{figure} \centerline{ \includegraphics[width=0.475\textwidth]{f7.pdf}} \caption{A comparison of the dense structure detected in NH$_3$(1,1) at 10.5 \kms~(dark contours; from Torrelles et al. 1994) with the emission at 4.5\mum~around the HH~1/2 jet/counterjet region.} \label{fig7} \end{figure}

\section{The HH~1/2 mid-IR knots}

Since some of the first IRAC images of stellar outflows from {\it Spitzer} (e.g. Noriega-Crespo et al. 2004a,b; Morris et al. 2004), it has been clear that the bands at 4.5 and 5.8\mum\ are particularly suitable for studying them. There are seven pure rotational H$_2$ emission lines, from S(12) at 3.996\mum~to S(6) at 6.107\mum, that fall within their bandpasses. The 8\mum~band includes some bright v=0-0 H$_2$ lines (e.g. S(5) at 6.907\mum), but the emission in this band tends to be dominated by broad band features from very small UV stochastically heated dust particles or Polycyclic Aromatic Hydrocarbons (PAHs) (see e.g. Tielens 2008). Recently, Ybarra \& Lada (2009) have used a combination of IRAC colors to uniquely identify the thermal emission arising from collisionally excited H$_2$ as a function of the HI gas density and temperature, and thus provide a reliable way to study the thermal structure of the stellar outflows by using the ([3.6]$-$[4.5]) and ([4.5]$-$[5.8]) colors. Some of the knots of the HH~1/2 system are clearly compact, and to a first approximation one can treat them as point sources and transform their photometric fluxes into magnitudes (see e.g. ``IRAC Instrument Handbook'' \footnote[4]{http://irsa.ipac.caltech.edu/data/SPITZER/docs/irac/iracinstrumenthandbook} , \S 4.11.1, 'Best Practices for Extended Sources'). One needs to be aware that converting surface brightness measurements into colors can be off by 5\% - 10\% (``IRAC Instrument Handbook'', \S 4.11.1). We adopt a 10\% uncertainty across bands in our measurements, and use the standard IRAC zero points (``IRAC Instrument Handbook'', Table 4.1) of F$_{\nu0}$ = 280.9, 179.7, 115.0 and 64.9 Jy for the 3.6, 4.5, 5.8 and 8.0\mum~bands, respectively. The results are presented in Table 1 and displayed in a color-color diagram in Figure 8, following Ybarra \& Lada (2009).
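The conversion itself is just $m = -2.5\log_{10}(F_{\nu}/F_{\nu 0})$ in each band; a minimal sketch (the fluxes below are made-up placeholders, not the Table 1 measurements):

\begin{verbatim}
import numpy as np

# IRAC colors from band fluxes in Jy, m = -2.5 log10(F / F0), using the
# zero points quoted above (IRAC Instrument Handbook, Table 4.1).
F0 = {'3.6': 280.9, '4.5': 179.7, '5.8': 115.0, '8.0': 64.9}

def irac_colors(F):
    m = {band: -2.5 * np.log10(F[band] / F0[band]) for band in F}
    return m['3.6'] - m['4.5'], m['4.5'] - m['5.8']

# hypothetical knot fluxes, for illustration only
print(irac_colors({'3.6': 0.005, '4.5': 0.012, '5.8': 0.010, '8.0': 0.008}))
\end{verbatim}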
The diagram shows the properties of shocked excited H$_2$ as a function of constant HI density (dotted line) and temperature (solid line). At a temperature higher than 4000 K, the H$_2$ is likely to be dissociated (Ybarra \& Lada 2009). The excitation properties of the HH~1/2 knots at optical and UV wavelengths are very well known (see e.g. Solf, B\"ohm \& Raga 1998; Solf et al. 1991; B\"ohm \& Solf 1992; Schwartz et al. 1993; B\"ohm, Noriega-Crespo \& Solf 1993; Moro-Martin et al. 1996; Molinari \& Noriega-Crespo 2002) and are the result of their shock velocity and relative geometry. In general, high excitation knots are compact and interact through strong shocks (60 - 100 \kms) with the surrounding medium, while low excitation emission arises from weaker shocks or the ``wings'' of bowshock-like condensations. For example, knots like HH~1F, HH~2A, HH~2G and HH~2H are high excitation, while HH~2I and HH~2L are low excitation (Raga, B\"ohm, \& Cant\'o 1996). For the time-variable jet, where the relative velocity between knots essentially determines the shock velocity, the overall excitation is known to be low and the shocks to be weak (Reipurth et al. 2000). Stronger shocks will produce higher post-shock gas densities and temperatures, but could also dissociate H$_2$; indeed, stronger H$_2$ emission is expected to arise from low velocity shocks, either J-type or C-type (see e.g. Draine \& McKee 1993, for a review).

\begin{figure} \centerline{\includegraphics[width=0.475\textwidth]{f8.pdf}} \caption{IRAC color-color diagram (after Ybarra \& Lada 2009) of compact knots in the HH 1/2 outflow with optical counterparts, including HH~1F (1F), HH~2 (A, E, F, G, H, I, K, and Z) and regions around the jet (Jet), counterjet (CtrJet) and the VLA~1 source (see Table 1).} \label{fig8} \end{figure}

So how do the properties of the mid-IR knots, dominated by the emission of H$_2$ rotational lines at 4.5 and 5.8\mum, compare with those of the optical knots? One of the highest optical excitation knots, HH~1F, is slightly above the 4000 K model, the region of the diagram where indeed H$_2$ dissociates. The jet and counterjet, low excitation regions, do fall within the low temperature ($\sim 1500$ K) and low density ($\sim 10^3$ \cc) models. Notice also that their IRAC colors are very close to each other, providing support to the assumption that their emission properties are nearly identical. The low excitation HH~2F and the region around VLA~1 are closer to the low 1500 K temperature model, but at higher gas densities, $\sim 10^5$ and $10^4$ \cc, respectively. The rest of the mid-IR knots are found between these two extremes. We notice that HH~2H and G, which lie along the highest density model (10$^5$\cc), are also colder than the low excitation knots HH~2I and L, and this suggests that faster cooling at higher densities could play a role in reducing the excitation in the mid-IR.

\section{Summary and Conclusions}

We present previously unpublished {\it Spitzer} observations of the HH~1/2 outflow. The outflow is detected in the four IRAC channels (3.6, 4.5, 5.8 and 8.0\mum). In the 4.5\mum\ channel, a well collimated jet/counterjet system is seen emanating from the VLA~1 radio source, which represents the discovery of the SE counterjet (previous optical and IR observations only detected the NW jet directed towards HH~1). We find that the ratio between the jet and counterjet emission is strongly dependent on distance from the VLA~1 source. The jet/counterjet ratio is $\sim 2$ times larger for the 4.5\mum\ than for the 5.8\mum\ emission. This result is consistent with an extinction effect (which would be stronger at shorter wavelengths). The fact that the jet/counterjet ratio peaks at a distance of $\sim 3''$ from the VLA~1 source indicates that the extinction is probably produced by a dense structure having an extent of $\sim 6''$ along the outflow axis. Such a dense molecular structure around VLA~1 has been observed by Choi \& Lee (1998) and Cernicharo et al. (2000).
We also show that the large-scale structure of the dense gas traced by NH$_3$ plays a role in hiding both the HH~1/2 and HH~144 counterjets. Under the assumption that the ratio of 3.6 to 4.5\mum~is constant for the jet/counterjet, we estimate A$_V$ = 11 mag. Finally, by using the IRAC colors ([3.6] - [4.5]) and ([4.5] - [5.8]) and assuming that the outflow emission is dominated by shock-excited H$_2$ (Ybarra \& Lada 2009), we show that some of the compact mid-IR knots share excitation properties similar to those determined for the optical knots (Raga, B\"ohm \& Cant\'o 1996). \acknowledgements We thank Frank Masci for the development of the AWAIC HiRes software and for making it available to us. We also thank Dr. Sean Carey for useful conversations, and the anonymous referee for her/his careful reading of the manuscript. This work is based in part on observations made with the {\it Spitzer Space Telescope}, which is operated by the Jet Propulsion Laboratory, California Institute of Technology under NASA contract 1407. It is also based on observations made with the NASA/ESA Hubble Space Telescope, obtained from the Hubble Legacy Archive, which is a collaboration between the Space Telescope Science Institute (STScI/NASA), the Space Telescope European Coordinating Facility (ST-ECF/ESA) and the Canadian Astronomy Data Centre (CADC/NRC/CSA). The work of AR was supported by the CONACyT grants 61547, 101356 and 101975. \setlength{\bibhang}{2.0em}
\section{Introduction} The goal of this paper is to describe, using matched asymptotics, the asymptotic behavior near blow-up points of a class of nonradially symmetric solutions of the following Keller-Segel system. \begin{subequations} \begin{align} u_{t} & =\Delta u-\nabla\left( u\nabla v\right) , \;\;\;\;x\in \mathbb{R}^{2}, \ t>0,\label{S1E1}\\ 0 & =\Delta v + u,\;\;\;\;x\in\mathbb{R}^{2},\ t>0.\label{S1E2} \end{align} \end{subequations} The Keller-Segel system, which was introduced in \cite{KS}, is a classical model of chemotactic aggregation. In this model $u$ is the density of a biological organism and $v$ is the concentration of a chemical substance produced by it, which has chemoattractant properties. It was conjectured in \cite{Childress} and rigorously proven in \cite{JL}, in the case of bounded domains, that solutions of \eqref{S1E1}-\eqref{S1E2} may blow-up in finite time, a phenomenon that is usually interpreted as the formation of a high density aggregate of cells. The mathematical properties of \eqref{S1E1}-\eqref{S1E2} have been extensively studied by many authors. One of the most peculiar features of \eqref{S1E1}-\eqref{S1E2} is the existence of a critical mass $m_{0}$ such that for solutions with initial total mass of organism $\int u_{0}$ larger than $m_{0}$ blow-up takes place, whereas smaller values of $\int u_{0}$ yield global existence of solutions (cf. \cite{Biler,Nagai1} for bounded domains, and \cite{DP,Nagai2} in the case of $\mathbb{R}^{2}$). It has already been proven that blow-up consists of the formation of a Dirac mass in finite time, with an amount of mass larger than $4\pi$ in the case of Neumann boundary conditions and blow-up taking place at the boundary of the domain, and larger than $8\pi$ in the case of blow-up taking place at interior points (cf.~\cite{SS}). The literature about the Keller-Segel system is huge and we will not attempt to summarize here all the existing research concerning singularity formation and global existence for \eqref{S1E1}, \eqref{S1E2}. Some of the main results in this direction can be found in \cite{Biler,BCM,DP,JL,Nagai1,Nagai2}. In the case of radially symmetric solutions, the asymptotic behavior of solutions of \eqref{S1E1}-\eqref{S1E2} near blow-up points was obtained in \cite{HV1} using asymptotic methods, and a rigorous construction of such solutions was given in \cite{HV2}. Actually, the paper \cite{HV1} formally describes the asymptotics of the blow-up solutions also in the parabolic-parabolic case, in which (\ref{S1E2}) is replaced by a parabolic equation. The rigorous construction of the corresponding solutions is given in \cite{HV3}. The solutions constructed in \cite{HV2} produce the aggregation of a Dirac mass with mass $8\pi$. On the other hand, continuation of solutions after blow-up has been considered using formal arguments in \cite{V2,V3}, and rigorous mathematical analysis in \cite{DS,LSV}. We will describe in this paper the asymptotics of solutions of (\ref{S1E1}), (\ref{S1E2}) yielding the formation of Dirac masses whose amount of mass is $8\pi N$ with $N=2,3,4,...$ These solutions will be obtained by means of the coalescence at time $t=T$ of $N$ peaks of mass placed at distances of order $\sqrt{T-t},$ each of the peaks containing an amount of mass asymptotically close to $8\pi.$ The behavior of such solutions will be obtained using matched asymptotics.
The peaks where most of the mass is concentrated near the blow-up time are placed at the vertices of some polygons to be described in detail later. We summarize the main result of this paper in the following Theorem. We emphasize that the results of this paper are obtained at the level of formal asymptotic expansions; they do not constitute a rigorous theorem in the sense of mathematical analysis. \begin{theorem} \label{Mainresult} It is possible to find formal asymptotic expansions for solutions of the Keller-Segel system (\ref{S1E1}), (\ref{S1E2}) that blow up at the time $t=T$ at the point $x=x_{0}$, such that at each time $t<T$ the mass is concentrated around the points $x_{j}\left( t\right) ,\;\;j=1,2$, where: \[ x_{j}\left( t\right) =x_{0}+\left( -1\right) ^{j+1}\mathbf{a}\sqrt {T-t},\quad\;\;\;j=1,2 \] with $\mathbf{a}=(2,0)\in\mathbb{R}^{2}$. More precisely, the formal solutions described by the asymptotics found in this paper have the following property. For any $\nu>0$ arbitrarily small, there exists $R>0$ sufficiently large such that: \[ \lim_{t\rightarrow T^{-}}\left\vert \int_{B_{R\delta\left( t\right) }\left( x_{j}\left( t\right) \right) }u\left( x,t\right) dx-8\pi\right\vert \leq\nu \] with \[ \delta\left( t\right) =\sqrt{T-t}e^{-\alpha\left\vert \log\left( T-t\right) \right\vert }\ \ \] for some $\alpha>0.$ Moreover, the total amount of mass concentrating at the point $x=x_{0}$ as $t\rightarrow T^{-}$ is $16\pi.$ More precisely, for any function $\eta\left( t\right) $ such that $\lim_{t\rightarrow T^{-}} \eta\left( t\right)/\sqrt{T-t} =\infty$ and $\lim_{t\rightarrow T^{-}}\eta\left( t\right) =0$ one has: \[ \lim_{t\rightarrow T^{-}}\int_{B_{\eta\left( t\right) }\left( x_{0}\right) }u\left( x,t\right) dx=16\pi. \] \end{theorem} \begin{remark} \label{constantA}The argument used in the construction suggests that it would be possible to obtain solutions yielding the aggregation of an arbitrary number of multiples of $8\pi.$ However, the feasibility of such a construction requires checking that a certain elliptic problem, associated with suitable singular self-similar solutions of \eqref{S1E1}, \eqref{S1E2} (cf.~Section \ref{Asympt}), satisfies a sign condition that will be discussed in detail in Section \ref{outer} for the case in which two peaks aggregate. We have checked that this sign condition holds in this particular case by solving numerically an elliptic equation. Analogous sign conditions should be checked for aggregations of multiple peaks, which we have not attempted in this paper. Precise asymptotic formulas for the solutions described in Theorem \ref{Mainresult} will be given in the rest of the paper. In particular, we will derive precise formulas for the width of the regions around the points $x_{j}\left( t\right) $ where the mass concentrates. The final profile of the solution at the blow-up time will be described in Remark \ref{Final}. \end{remark} The results of this paper are of a local nature. For this reason we restrict our analysis to the case in which the system is solved in the whole $\mathbb{R}^{2}.$ Similar results could be derived for the Cauchy-Neumann problem in bounded domains with no-flux boundary conditions (cf.~Section \ref{CNprob}). We finally remark that numerical simulations showing aggregation of several peaks at the time of the singularity formation were obtained in \cite{S}. \section{\label{Asympt}Notation and preliminaries.} As indicated in the Introduction, we will denote by $T$ the blow-up time.
We will use repeatedly in the rest of the paper the following self-similar variables: \begin{subequations} \begin{align} u\left( x,t\right) & =\frac{1}{T-t}\Phi\left( y,\tau\right) ,\quad\ v\left( x,t\right) =W\left( y,\tau\right) ,\label{S1E3}\\ y & =\frac{x-x_{0}}{\sqrt{T-t}},\quad\;\tau=-\log\left( T-t\right) . \label{S1E4} \end{align} \end{subequations} The system (\ref{S1E1}), (\ref{S1E2}) becomes in these variables: \begin{subequations} \begin{align} \Phi_{\tau} & =\Delta\Phi-\frac{y\nabla\Phi}{2}-\nabla\left( \Phi\nabla W\right) -\Phi,\label{S1E5}\\ 0 & =\Delta W+\Phi. \label{S1E6} \end{align} \end{subequations} It is natural to expect a self-similar behavior for the solutions of (\ref{S1E5}), (\ref{S1E6}). Self-similar solutions of (\ref{S1E1}), (\ref{S1E2}) solve: \begin{subequations} \begin{align} \Delta\Phi-\frac{y\nabla\Phi}{2}-\nabla\left( \Phi\nabla W\right) -\Phi & =0,\label{S1E7}\\ \Delta W+\Phi & =0 \label{S1E8} \end{align} \end{subequations} in the variables \eqref{S1E3}, \eqref{S1E4}. The solutions that we construct in this paper approach asymptotically as $\tau\rightarrow\infty$ the singular steady states \begin{equation} \Phi_{s}=8\pi\sum_{\ell=1}^{N}\delta\left( y-y_{\ell}\right) \label{U1E1} \end{equation} with the points $y_{\ell}$ satisfying: \begin{equation} \frac{y_{j}}{2} - 4\sum_{\ell=1,\;\ell\neq j}^{N} \frac{y_{j}-y_{\ell}}{\left\vert y_{j}-y_{\ell}\right\vert^{2}} =0,\quad\ j=1,2,...,N. \label{U1E2} \end{equation} The solutions \eqref{U1E1}, \eqref{U1E2} solve \eqref{S1E7}, \eqref{S1E8} in the sense that they can be obtained as a limit of bounded solutions $\left( \Phi_{n},W_{n}\right) $ of \eqref{S1E7}, \eqref{S1E8} in bounded domains $B_{R_{n}}$ with $R_{n}\rightarrow\infty$ as $n\rightarrow\infty.$ The reason for requiring the solutions to be obtained in such a way is that we want these solutions to appear as a limit of bounded solutions of (\ref{S1E5}), (\ref{S1E6}) as $\tau\rightarrow\infty.$ Seemingly this implies that the mass at each aggregation point must be $8\pi.$ We will not attempt to give a precise meaning to these solutions in this paper, although it is likely that they could be given a precise meaning using some of the methods used in \cite{DS,LSV,SSbook} to define solutions of the two-dimensional Keller-Segel system for measures containing Dirac masses. Another alternative seems to be to use ideas analogous to the ones obtained in \cite{GrTa}. The solutions obtained in this paper will behave asymptotically as in (\ref{U1E1}), (\ref{U1E2}) as $\tau\rightarrow\infty.$ A particular case of these solutions corresponds to the case of radially symmetric solutions considered in \cite{HV1,HV2}. An alternative way of deriving the asymptotics of these solutions can be found in \cite{V1}. In this radially symmetric case, the corresponding solution of \eqref{S1E7}, \eqref{S1E8} has the form: \begin{equation} \Phi_{r,s}\left( y\right) =8\pi\delta\left( y\right) . \label{S4E1} \end{equation} As was seen in \cite{HV1,V1}, the solutions of \eqref{S1E1}-\eqref{S1E2} with the asymptotics near the blow-up characterized by (\ref{S4E1}) have the mass concentrated in a region of size \begin{equation} \varepsilon\left( \tau\right) =Ke^{-\sqrt{\frac{\tau}{2}}}, \label{S4E3} \end{equation} where $K=2e^{-\frac{2+\gamma}{2}}$ and $\gamma$ is the classical Euler constant.
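The balance condition \eqref{U1E2} and the constant $K$ in (\ref{S4E3}) can be checked numerically; the following is a minimal sketch in Python (the helper names are ours, and the regular-polygon family of radius $2\sqrt{N-1}$ is our illustration, consistent with the polygonal configurations mentioned in the Introduction). \begin{verbatim}
import numpy as np

def residual(points):
    """Maximum norm of the left-hand side of (U1E2)."""
    res = []
    for j, yj in enumerate(points):
        s = sum((yj - yl) / np.dot(yj - yl, yj - yl)
                for l, yl in enumerate(points) if l != j)
        res.append(yj / 2 - 4 * s)
    return np.abs(res).max()

# Two peaks at y = +/- (2,0), the configuration used below
print(residual(np.array([[2.0, 0.0], [-2.0, 0.0]])))       # 0.0

# A regular N-gon of radius 2*sqrt(N-1) also satisfies (U1E2)
N = 3
a = 2 * np.pi * np.arange(N) / N
poly = 2 * np.sqrt(N - 1) * np.column_stack([np.cos(a), np.sin(a)])
print(residual(poly))                                      # ~1e-16

# The constant K = 2*exp(-(2+gamma)/2) in (S4E3)
print(2 * np.exp(-(2 + np.euler_gamma) / 2))               # ~0.5512
\end{verbatim}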
The region where the mass aggregates can be described by means of a rescaling with a factor $\varepsilon\left( \tau\right) $ of the following stationary solution found in \cite{Childress}: \begin{equation} u_{s}\left( x\right) =\frac{8}{\left( 1+\left\vert x\right\vert ^{2}\right) ^{2}},\quad\ v_{s}\left( x\right) =-2\log\left( 1+\left\vert x\right\vert ^{2}\right) . \label{S4E2} \end{equation} In this paper we will give most of the details concerning the asymptotics of solutions of \eqref{S1E1}-\eqref{S1E2} which are bounded for $t<T$ and blow up at $t=T$ in the particular case of a limit function $\Phi_{s}$, a solution of \eqref{S1E7}, \eqref{S1E8} with the form (\ref{U1E1}) concentrated in two peaks (i.e., $N=2$). The reason is twofold. First, the computations become more cumbersome for an increasing number of peaks, but without requiring essentially different ideas. On the other hand, the construction requires checking a sign condition for a suitable elliptic problem, as indicated in Remark \ref{constantA}, and we have done this numerically only in the case of two peaks. In any case, solutions of \eqref{S1E7}, \eqref{S1E8} with the form \eqref{U1E1} will be discussed in Section \ref{selfSimSing}. Due to the symmetry of the problem under rotations we can restrict ourselves to the case in which $\Phi_{s}$ is given by \begin{equation} \Phi_{s}\left( y\right) =8\pi\left[ \delta\left( y-y_{1}\right) +\delta\left( y-y_{2}\right) \right] ,\quad\ y_{1}=\mathbf{a},\quad y_{2}=-\mathbf{a},\quad\mathbf{a}=\left( 2,0\right) . \label{U1E3a} \end{equation} The detailed structure near the points $y_{\ell},\ \ell=1,2$, can be computed by introducing boundary layers having many similarities to the ones described in \cite{HV1,V1}. The rescaling factor $\varepsilon\left( \tau\right) $ will have a form similar to the one given in (\ref{S4E3}), although the value of the constant $K$ will in general differ from the one obtained for the radially symmetric case. Actually, in the case of the asymptotics given by (\ref{U1E1}), the value of this constant could be different for each of the aggregation points. This will not be the case if $\Phi\left( y,\tau\right) $ approaches the singular stationary solution $\Phi_{s}$ in (\ref{U1E3a}), due to symmetry considerations. A large portion of this paper consists of the detailed description of the boundary layers describing the regions of mass aggregation near the points $y_{1}, y_{2}.$ The computation of these layers will be made using the methods developed in \cite{V1}, because the validity of some of the arguments in \cite{HV1} is restricted to the radially symmetric case. We now briefly describe our strategy to compute the asymptotics of the solutions near the blow-up points. We will obtain outer and inner expansions for the solutions. The outer expansion is valid in the region where $\left\vert y\right\vert \approx1$ and $\left\vert y-y_{\ell}\right\vert \gg e^{-\alpha\sqrt{\tau}}$ as $\tau\rightarrow\infty,$ $\ell=1,2,$ for some $\alpha>0$ to be determined later. The inner expansion is valid in the regions where $\left\vert y-y_{\ell}\right\vert \approx e^{-\alpha\sqrt{\tau}},\ \ell=1,2.$ Both expansions are obtained under the assumption that the mass aggregating near the points $y_{\ell}$ concentrates in a region of width $\varepsilon_{\ell}\left( \tau\right) \ll 1,$ whose precise value will be computed later. This assumption will be shown to be self-consistent with the derived asymptotics.
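Since $\log u_{s}=\log 8+v_{s}$, the chemotactic flux $\nabla u_{s}-u_{s}\nabla v_{s}$ vanishes identically, so that \eqref{S4E2} is indeed a steady state of (\ref{S1E1}), (\ref{S1E2}), and its total mass is $8\pi$. A minimal symbolic check of these identities, in the radial variable $r=\left\vert x\right\vert$: \begin{verbatim}
import sympy as sp

r = sp.symbols('r', positive=True)
u = 8 / (1 + r**2)**2
v = -2 * sp.log(1 + r**2)

# zero chemotactic flux: u_s' - u_s v_s' = 0 (radially)
print(sp.simplify(sp.diff(u, r) - u * sp.diff(v, r)))          # 0
# 2D radial Laplacian: Delta v_s + u_s = 0
print(sp.simplify(sp.diff(v, r, 2) + sp.diff(v, r)/r + u))     # 0
# total mass of u_s over R^2
print(sp.integrate(2 * sp.pi * r * u, (r, 0, sp.oo)))          # 8*pi
\end{verbatim}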
There is a common region of validity where both outer and inner expansions make sense. The matching condition between both types of expansion in that intermediate region provides a set of differential equations for the functions $\varepsilon_{\ell}\left( \tau\right)$, and these equations yield the asymptotics of such functions. We make extensive use of asymptotic notation. We write $f\ll g$ as $x\rightarrow x_{0}$ to indicate $\lim_{x\rightarrow x_{0}} f/g =0,$ whereas $f\sim g$ as $x\rightarrow x_{0}$ to denote $\lim_{x\rightarrow x_{0}}f/g =1.$ The notation $f\approx g$ as $x\rightarrow x_{0}$ indicates that the terms $f$ and $g$ have a comparable order of magnitude, that is, the existence of $C>0$ such that $1/C \leq\lim\inf_{x\rightarrow x_{0}}f/g \leq\lim\sup_{x\rightarrow x_{0}} f/g \leq C.$ \section{\label{Innerexp}Inner expansions.} \subsection{Expansion of the solutions.} We compute the asymptotics of the functions $\Phi,\;W$ defined in \eqref{S1E3}, \eqref{S1E4}. In the case of radially symmetric solutions it is assumed that $\nabla\Phi\left( y_{\ell},\tau\right) =0$ with $y_{\ell}=0.$ However, due to the lack of symmetry, the points where the maxima of $\Phi$ are attained could change in time. We assume the existence of functions $\left\{ \bar{y}_{\ell}\left( \tau\right) :\ell=1,2,...,N \right\}$ such that: \begin{align} \nabla \Phi \left( \bar{y}_{\ell}\left( \tau \right) , \tau\right) & =0, \label{S4E4}\\ \lim_{\tau\rightarrow\infty}\bar{y}_{\ell}\left( \tau\right) & = y_{\ell}. \label{S4E5} \end{align} It will be checked later that all these assumptions are self-consistent, as usual in matched asymptotics. Let us introduce the following set of variables to describe the inner solutions near each point $y_{\ell}$: \begin{subequations} \begin{align} \xi & = \frac{y-\bar{y}_{\ell}\left( \tau\right)}{\varepsilon_{\ell}\left(\tau\right)}, \label{S4E6}\\ \Phi\left( y,\tau\right) & =\frac{1}{\left( \varepsilon_{\ell}\left( \tau\right) \right)^{2}}U\left( \xi,\tau\right). \label{S4E7} \end{align} \end{subequations} On the other hand, we will write, with a slight abuse of notation, $W\left( y, \tau \right) = W\left( \xi,\tau\right)$. Notice that the variables $\xi,\;U\left( \xi,\tau\right)$, and $W\left( \xi,\tau\right)$ depend on $\ell$, but these dependencies will not be written explicitly unless needed. Using \eqref{S1E5}, \eqref{S1E6}, \eqref{S4E6}, and \eqref{S4E7} we obtain: \begin{subequations} \begin{align} \varepsilon_{\ell}^{2} \frac{\partial U}{\partial \tau} & = \Delta_{\xi} U - \nabla_{\xi} \left( U \nabla_{\xi} W \right) + \left( 2\varepsilon_{\ell} \varepsilon_{\ell,\tau} - \varepsilon_{\ell}^{2}\right) \left( U + \frac{\xi\nabla_{\xi} U}{2} \right) + \left( \varepsilon_{\ell} \bar{y}_{\ell,\tau} - \frac{\varepsilon_{\ell} \bar{y}_{\ell}}{2} \right) \nabla_{\xi} U, \label{S4E8}\\ 0 & =\Delta_{\xi}W+U. \label{S4E9} \end{align} \end{subequations} We will now assume that the function $\varepsilon_{\ell}\left( \tau\right)$ satisfies: \begin{align} \varepsilon_{\ell}\left( \tau\right) & \ll 1 \quad \text{ as }\tau \rightarrow\infty, \label{S5E1}\\ \left\vert \varepsilon_{\ell,\tau\tau}\right\vert +\left\vert \varepsilon_{\ell,\tau}\right\vert & \ll\varepsilon_{\ell} \quad \text{ as }\tau \rightarrow\infty. \label{S5E3} \end{align} Assumptions similar to (\ref{S5E1}), (\ref{S5E3}) are made in \cite{V1}. In addition, we will also assume in this paper \begin{equation} \left\vert \bar{y}_{\ell,\tau}\right\vert \ll 1 \quad \text{ as }\tau\rightarrow\infty.
\label{S5E4} \end{equation} We now define in a precise manner the functions $\varepsilon_{\ell}\left( \tau\right)$. We expect $U,\, W$ to behave like the stationary solution (\ref{S4E2}). The steady states of (\ref{S1E1}), (\ref{S1E2}) are defined only up to a rescaling. Therefore the functions $\varepsilon_{\ell}\left( \tau\right) $ can be computed only up to a rescaling factor. The assumption $U\left( \xi,\tau\right) \rightarrow u_{s}\left( \xi\right)$ as $\tau\rightarrow\infty$ would prescribe uniquely the leading order asymptotics of $\varepsilon_{\ell}\left( \tau\right) .$ Moreover, we can prescribe uniquely the function $U$ by imposing the normalization \begin{equation} U\left( 0,\tau\right) =8 \label{S5E5} \end{equation} or, in an equivalent manner, \begin{equation} \Phi\left( \bar{y}_{\ell}\left( \tau\right) ,\tau\right) =\frac{8}{\left( \varepsilon_{\ell}\left( \tau\right) \right) ^{2}}. \label{S5E6} \end{equation} We then look for solutions of the system (\ref{S4E8}), (\ref{S4E9}) in the form of the following expansions: \begin{align} U\left( \xi,\tau\right) & =u_{s}\left( \xi\right) +U_{1}\left( \xi ,\tau\right) +U_{2}\left( \xi,\tau\right) +U_{3}\left( \xi,\tau\right) +U_{4}\left( \xi,\tau\right) +...,\label{S5E7}\\ W\left( \xi,\tau\right) & =v_{s}\left( \xi\right) +W_{1}\left( \xi ,\tau\right) +W_{2}\left( \xi,\tau\right) +W_{3}\left( \xi,\tau\right) +W_{4}\left( \xi,\tau\right) +..., \label{S5E8} \end{align} where $(u_{s},\,v_{s})$ is the stationary solution in (\ref{S4E2}). Notice that the function $v_{s}$ is prescribed up to the addition of an arbitrary constant, but this can be ignored due to the form of the system \eqref{S1E1}-\eqref{S1E2}. On the other hand, it will be assumed, as in \cite{V1}, that the terms $U_{1},W_{1}$ contain terms whose order of magnitude is $\varepsilon_{\ell}$ and that the terms $U_{2},W_{2}$ contain terms whose order of magnitude is $\left( \varepsilon_{\ell}\right) ^{2}$ or $\varepsilon_{\ell}\bar{y}_{\ell,\tau}$, up to logarithmic corrections like $\left\vert \log \varepsilon_{\ell}\right\vert^{\beta}$, $\tau^{\beta}$, or similar ones. Such logarithmic corrections will arise from terms like $\varepsilon_{\ell,\tau}/\varepsilon_{\ell}$ or similar ones. The notation introduced in \cite{V1}, and used also in this paper, consists of writing all these terms as $\varepsilon_{\ell}^{2}$ $(w.l.a)$ (with logarithmic accuracy). We will include in $U_{1},W_{1}$ also the terms whose order of magnitude is $\varepsilon_{\ell}\ \left( w.l.a\right) $. Therefore: \begin{equation} \left( U_{1},W_{1}\right) \approx\varepsilon_{\ell}\ \left( w.l.a\right) \text{ \ as\ }\tau\rightarrow\infty.
\label{S5E10} \end{equation} On the other hand we will include in $U_{2},W_{2}$ also the terms whose order of magnitude is $\varepsilon_{\ell}\bar{y}_{\ell,\tau}$ $\left( w.l.a\right) .$ Therefore: \begin{equation} \left( U_{2},W_{2}\right) \approx\varepsilon_{\ell}^{2}+\varepsilon_{\ell}\bar{y}_{\ell,\tau}\;\left( w.l.a\right) \text{ \ as\ }\tau\rightarrow\infty. \label{S5E10a} \end{equation} In a similar manner, including in $\left( U_{3},W_{3}\right) $ terms of order $\varepsilon_{\ell}^{3},\ \varepsilon_{\ell}^{2}\bar{y}_{\ell,\tau}$ and $\varepsilon_{\ell}\bar{y}_{\ell,\tau}^{2}$ $\left( w.l.a\right) $ and including in $\left( U_{4},W_{4}\right) $ terms of order $\varepsilon_{\ell}^{4},\ \varepsilon_{\ell}^{3}\bar{y}_{\ell,\tau},$ $\varepsilon_{\ell}^{2}\bar{y}_{\ell,\tau}^{2}$ $\left( w.l.a\right) $ we obtain \begin{gather} \left( U_{3},W_{3}\right) \approx\varepsilon_{\ell}^{3}+\varepsilon_{\ell}^{2}\bar{y}_{\ell,\tau}+\varepsilon_{\ell}\bar{y}_{\ell,\tau}^{2}\;\left( w.l.a\right) \quad \text{ as }\tau\rightarrow\infty, \label{S5E11-bis}\\ \left( U_{4},W_{4}\right) \approx\varepsilon_{\ell}^{4}+\varepsilon_{\ell}^{3}\bar{y}_{\ell,\tau}+\varepsilon_{\ell}^{2}\bar{y}_{\ell,\tau}^{2}\;\left( w.l.a\right) \quad \text{ as }\tau\rightarrow\infty. \label{S5E11} \end{gather} Making the assumptions (\ref{S5E10})-(\ref{S5E11}) it follows that the functions $(U_{1},W_{1})$, $(U_{2},W_{2})$, $(U_{3},W_{3})$, and $(U_{4},W_{4})$ satisfy respectively the following systems: \begin{subequations} \begin{align} 0 & =\Delta_{\xi}U_{1}-\nabla_{\xi}\left( u_{s}\nabla_{\xi}W_{1}\right) -\nabla_{\xi}\left( U_{1}\nabla_{\xi}v_{s}\right) -\frac{\varepsilon_{\ell}\bar{y}_{\ell}}{2}\nabla_{\xi}u_{s},\label{S6E1-1}\\ 0 & =\Delta_{\xi}W_{1}+U_{1}, \label{S6E2-2} \end{align} \end{subequations} \begin{subequations} \begin{align} 0 & =\Delta_{\xi}U_{2}-\nabla_{\xi}\left( u_{s}\nabla_{\xi}W_{2}\right) -\nabla_{\xi}\left( U_{1}\nabla_{\xi}W_{1}\right) -\nabla_{\xi}\left( U_{2}\nabla_{\xi}v_{s}\right) \nonumber\\ & +\left( 2\varepsilon_{\ell}\varepsilon_{\ell,\tau}-\varepsilon_{\ell}^{2}\right) \left( u_{s}+\frac{\xi\nabla_{\xi}u_{s}}{2}\right) +\varepsilon_{\ell}\bar{y}_{\ell,\tau}\nabla_{\xi}u_{s}-\frac{\varepsilon_{\ell}\bar{y}_{\ell}}{2}\nabla_{\xi}U_{1}~,\label{S6E1}\\ 0 & =\Delta_{\xi}W_{2}+U_{2}, \label{S6E2} \end{align} \end{subequations} \begin{subequations} \begin{align} 0 & =\Delta_{\xi}U_{3}-\nabla_{\xi}\left( u_{s}\nabla_{\xi}W_{3}\right) -\nabla_{\xi}\left( U_{1}\nabla_{\xi}W_{2}\right) -\nabla_{\xi}\left( U_{2}\nabla_{\xi}W_{1}\right) -\nabla_{\xi}\left( U_{3}\nabla_{\xi}v_{s}\right) \nonumber\\ & +\left( 2\varepsilon_{\ell}\varepsilon_{\ell,\tau}-\varepsilon_{\ell}^{2}\right) \left( U_{1}+\frac{\xi\nabla_{\xi}U_{1}}{2}\right) -\varepsilon_{\ell}^{2}\frac{\partial U_{1}}{\partial\tau}+\varepsilon_{\ell}\bar{y}_{\ell,\tau}\nabla_{\xi}U_{1}-\frac{\varepsilon_{\ell}\bar{y}_{\ell}}{2}\nabla_{\xi}U_{2}\ ,\label{S6E3}\\ 0 & =\Delta_{\xi}W_{3}+U_{3}, \label{S6E4} \end{align} \end{subequations} \begin{subequations} \begin{align} 0 & =\Delta_{\xi}U_{4}-\nabla_{\xi}\left( u_{s}\nabla_{\xi}W_{4}\right) -\nabla_{\xi}\left( U_{1}\nabla_{\xi}W_{3}\right) -\nabla_{\xi}\left( U_{2}\nabla_{\xi}W_{2}\right) -\nabla_{\xi}\left( U_{3}\nabla_{\xi}W_{1}\right) -\nabla_{\xi}\left( U_{4}\nabla_{\xi}v_{s}\right) \nonumber\\ & +\left( 2\varepsilon_{\ell}\varepsilon_{\ell,\tau}-\varepsilon_{\ell}^{2}\right) \left( U_{2}+\frac{\xi\nabla_{\xi}U_{2}}{2}\right) -\varepsilon_{\ell}^{2}\frac{\partial U_{2}}{\partial\tau}+\varepsilon_{\ell}
\bar{y}_{\ell,\tau}\nabla_{\xi}U_{2}-\frac{\varepsilon_{\ell}\bar{y}_{\ell}}{2}\nabla_{\xi}U_{3},\label{S6E3-1}\\ 0 & =\Delta_{\xi}W_{4}+U_{4}. \label{S6E4-2} \end{align} \end{subequations} \subsection{Computation of $\left( U_{1},W_{1}\right) ,\ \left( U_{2},W_{2}\right).$} Due to \eqref{S5E5} we must solve \eqref{S6E1}-\eqref{S6E4-2} with the conditions: \begin{equation} U_{1}\left( 0,\tau\right) =0,\quad U_{2}\left( 0,\tau\right) =0,\quad U_{3}\left( 0,\tau\right) =0,\quad U_{4}\left( 0,\tau\right) =0. \label{U2E2} \end{equation} We can easily obtain an exact solution of \eqref{S6E1-1}-\eqref{S6E2-2}: \begin{equation} U_{1}(\xi,\tau)=0,\ \ W_{1}(\xi,\tau)=-\frac{\varepsilon_{\ell}\bar{y}_{\ell}}{2}\xi. \label{U2E2a} \end{equation} In order to compute $\left( U_{2},W_{2}\right) $ we notice that, due to the linearity of \eqref{S6E1}, \eqref{S6E2}, we can split its solution as \[ U_{2} = U_{2,1}+U_{2,2}+U_{2,3},\qquad W_{2} = W_{2,1}+W_{2,2}+W_{2,3}, \] where $\left( U_{2,j},W_{2,j}\right) $, $j=1,2,3,$ solve respectively \begin{subequations} \begin{gather} 0 = \Delta_{\xi}U_{2,1}-\nabla_{\xi}\left( u_{s}\nabla_{\xi}W_{2,1}\right) -\nabla_{\xi}\left( U_{2,1}\nabla_{\xi}v_{s}\right) +\left( 2\varepsilon_{\ell}\varepsilon_{\ell,\tau}-\varepsilon_{\ell}^{2}\right) \left( u_{s}+\frac{\xi\nabla_{\xi}u_{s}}{2}\right) ,\label{S6E5a}\\ 0 =\Delta_{\xi}W_{2,1}+U_{2,1}, \label{S6E5b}\\ 0 =\Delta_{\xi}U_{2,2}-\nabla_{\xi}\left( u_{s}\nabla_{\xi}W_{2,2}\right) -\nabla_{\xi}\left( U_{2,2}\nabla_{\xi}v_{s}\right) +\varepsilon_{\ell}\bar{y}_{\ell,\tau}\nabla_{\xi}u_{s},\label{S6E6a}\\ 0 =\Delta_{\xi}W_{2,2}+U_{2,2}, \label{S6E6b}\\ 0 =\Delta_{\xi}U_{2,3}-\nabla_{\xi}\left( u_{s}\nabla_{\xi}W_{2,3}\right) -\nabla_{\xi}\left( U_{2,3}\nabla_{\xi}v_{s}\right), \label{S6E7a}\\ 0 =\Delta_{\xi}W_{2,3}+U_{2,3}. \label{S6E7b} \end{gather} \end{subequations} We will check later that the term $\bar{y}_{\ell,\tau}$ is of order $\left( \varepsilon_{\ell}\right) ^{2}$ $\left( w.l.a\right) .$ Therefore $U_{2,2}$ will be of order $\left( \varepsilon_{\ell}\right) ^{3}$ $\left( w.l.a\right) .$ Notice that this means that the terms $U_{k}$ do not have a dependence $\left( \varepsilon_{\ell}\right) ^{k}$ $\left( w.l.a\right) .$ On the other hand, at first glance the system for $\left( U_{2,3},W_{2,3}\right) $ could seem a bit odd because of the absence of source terms. Actually, $\left( U_{2,3},W_{2,3}\right) $ will be chosen as a solution of the problem \eqref{S6E7a}, \eqref{S6E7b} which is smooth for bounded values of $\left\vert \xi\right\vert $, but such that $W_{2,3}$ becomes unbounded as $\left\vert \xi\right\vert \rightarrow\infty.$ The contribution of $\left( U_{2,3},W_{2,3}\right) $ will be required to obtain a matching with some quadratic terms of the outer expansion having the angular dependencies proportional to $\left\{ \cos\left( 2\theta\right) ,\sin\left( 2\theta\right) \right\} $ and giving corrections of order $\varepsilon_{\ell}^{2}$ $\left( w.l.a\right) $. A detailed analysis of the matching conditions for the terms with this order of magnitude shows that, after a suitable rotation of the coordinate system, we may assume that the angular dependencies of the term $\left( U_{2,3},W_{2,3}\right) $ are proportional to $\cos\left( 2\theta\right) $. We will then assume this angular dependence in the following. Due to (\ref{U2E2}) we must have: \begin{equation} U_{2,k}\left( 0,\tau\right) =0, \qquad k =1,2,3.
\label{sS6E7} \end{equation} The solution of \eqref{S6E5a}, \eqref{S6E5b} satisfying the first condition in \eqref{sS6E7} was obtained in \cite{V1} (where a slightly different notation was used). This solution has the form: \begin{equation} U_{2,1}\left( \xi,\tau\right) =Q_{2,1}\left( r,\tau\right) ,\ \ W_{2,1}\left( \xi,\tau\right) =V_{2,1}\left( r,\tau\right) ,\quad r=\left\vert \xi\right\vert , \label{S6E8a} \end{equation} where: \begin{subequations} \begin{align} g_{1}\left( r,\tau\right) & = r\frac{\partial V_{2,1}}{\partial r}, \label{S6E8d}\\ 0 & = \frac{1}{r}\frac{\partial}{\partial r}\left( r\frac{\partial V_{2,1}}{\partial r} \right) + Q_{2,1} \label{S6E8c} \end{align} with: \begin{equation} g_{1}\left( r,\tau\right) =\left( 2\varepsilon_{\ell}\varepsilon_{\ell,\tau}-\varepsilon_{\ell}^{2}\right) \frac{r^{2}}{\left( 1+r^{2}\right) ^{2}}\int_{0}^{r^{2}} \frac{\left( 1+t\right) ^{2}}{t^{2}}\left[ \log\left( 1+t\right) -\frac{t}{1+t}\right] dt. \label{S6E9b} \end{equation} \end{subequations} According to the formulas (3.26) and (3.27) in \cite{V1}, we have the following asymptotics: \begin{align} Q_{2,1}\left( r,\tau\right) & =\left( 2\varepsilon_{\ell}\varepsilon_{\ell,\tau}-\varepsilon _{\ell}^{2}\right) \left[ -\frac{2}{r^{2}}+O\left( \frac{(\log r )^{2}}{r^{4}}\right) \right] \quad \text{ as }r\rightarrow \infty,\label{S6E10a1}\\ \frac{\partial V_{2,1}}{\partial r}\left( r,\tau\right) & =\left( 2\varepsilon_{\ell }\varepsilon_{\ell,\tau}-\varepsilon_{\ell}^{2}\right) \left[ \frac {\log\left( r^{2}\right) }{r}-\frac{2}{r}+O\left( \frac{(\log r)^{2}}{r^{3}}\right) \right] \quad \text{ as }r\rightarrow\infty. \label{S6E10b1} \end{align} The solution of the system \eqref{S6E6a}, \eqref{S6E6b} is given by the following simple formula: \begin{equation} U_{2,2}(\xi,\tau)=0,\quad W_{2,2}(\xi,\tau)=\varepsilon_{\ell}\bar{y}_{\ell,\tau}\cdot\xi. \label{U2E1} \end{equation} We now consider the function $(U_{2,3},W_{2,3}).$ As explained before, this function, which is unbounded at infinity, is just a homogeneous solution of the linearized problem. It will be needed due to the effect of the other singular points on the point under consideration. More precisely, the function $W$ due to the points placed near $\bar{y}_{k}$ with $k\neq\ell$ gives a contribution as $\left\vert \xi\right\vert \rightarrow\infty$ that will be matched with the term $W_{2,3}.$ The angular dependence of this term is $\cos\left( 2\theta\right) $ and its size is of order $\varepsilon_{\ell}^{2}\ \left( w.l.a\right) $. Therefore we look for a solution $(U_{2,3},W_{2,3})$ of \eqref{S6E7a}, \eqref{S6E7b} of the form: \begin{equation} U_{2,3}(\xi,\tau)=Q_{2,3}\left( r,\tau\right) \cos\left( 2\theta\right) , \quad W_{2,3}(\xi,\tau)=V_{2,3}\left( r,\tau\right) \cos\left( 2\theta\right) , \label{M0E2} \end{equation} where $\left( r,\theta\right) $ denote polar coordinates for $\xi$. The system \eqref{S6E7a}, \eqref{S6E7b} then reads: \begin{subequations} \begin{align} \frac{1}{r}\frac{\partial}{\partial r}\left( r\frac{\partial Q_{2,3}}{\partial r}\right) -\frac{4}{r^{2}}Q_{2,3}-\frac{du_{s}}{dr}\frac{\partial V_{2,3}}{\partial r}+2u_{s}Q_{2,3}-\frac{dv_{s}}{dr}\frac{\partial Q_{2,3}}{\partial r} & =0,\label{M1E4}\\ \frac{1}{r}\frac{\partial}{\partial r}\left( r\frac{\partial V_{2,3}}{\partial r}\right) -\frac{4}{r^{2}}V_{2,3}+Q_{2,3} & =0. \label{M1E5} \end{align} \end{subequations} The smoothness of $(U_{2,3},\ W_{2,3})$ at the origin (cf. also \eqref{S6E7a}, \eqref{S6E7b}) implies \begin{equation} Q_{2,3}\left( 0,\tau\right) =V_{2,3}\left( 0,\tau\right) =0.
\label{M1E6} \end{equation} It was seen in \cite{V1} (cf.~Theorem \ref{Linear} below) that the space of solutions of (\ref{M1E4}), (\ref{M1E5}) is a four-dimensional linear space spanned by the set of functions $\left\{ \left( \psi_{k},\omega_{k}\right) :k=1,2,3,4\right\} $. (We remark that the notation $\left( \psi_{k},V_{k}\right) $ was used in \cite{V1} instead, but we modify it here to avoid a clash of notation.) The condition (\ref{M1E6}) implies that \[ \left( Q_{2,3},V_{2,3}\right) =K_{1}\left( \psi_{1},\omega_{1}\right) +K_{3}\left( \psi_{3},\omega_{3}\right) \] for some constants $K_{1},\ K_{3}\in\mathbb{R}$. If $K_{3}\neq0$, the growth of $\left( \psi_{3},\omega_{3}\right) $ as $\left\vert \xi\right\vert \rightarrow\infty$ would imply that $\left( \Phi,W\right) $ are very large for $\left\vert y\right\vert $ of order one, and this would contradict the hypothesis that $\Phi$ approaches the steady state in (\ref{U1E1}) as $\tau\rightarrow\infty.$ Therefore $K_{3}=0$ and $\left( Q_{2,3},V_{2,3}\right) $ is given by \begin{equation} Q_{2,3}\left( r,\tau\right) =\frac{8B_{2,3}r^{2}}{\left( r^{2}+1\right) ^{3}}\left( r^{2}+3\right), \quad\ V_{2,3}\left( r,\tau\right) =\frac{B_{2,3}r^{2}}{\left( r^{2}+1\right)}\left( r^{2}+3\right), \label{M2E2} \end{equation} where $B_{2,3}=B_{2,3}(\tau)\in\mathbb{R}$. Actually, $B_{2,3}$ can be expected to be a function of $\tau$ changing slowly with respect to this variable. By this we mean that $B_{2,3}(\tau)$ does not have a factor like $e^{-\kappa\tau}$ with $\kappa\not =0$. The precise value of $B_{2,3}$ will be obtained later by matching the inner and the outer expansions. It will turn out to be of order $\left( \varepsilon_{\ell}\right) ^{2}\left( w.l.a\right) .$ Finally, notice that the formulas (\ref{M2E2}) have been obtained for functions with angular dependence $\cos\left( 2\theta\right) ,$ but similar formulas could be obtained if the angular dependence is replaced by $\sin\left( 2\theta\right) .$ The corresponding coefficient will be denoted for functions with such an angular dependence as $\bar{B}_{2,3}.$ In the following arguments, several more variables $B_{4,2},\ \bar{B}_{4,2},\ c_{3}\left( \infty\right) ,\ldots$ will appear. They have some dependence on $\tau,$ but we will not write this dependence explicitly unless needed. We remark also that the solutions of \eqref{S6E7a}, \eqref{S6E7b} cannot contain any radially symmetric contribution, nor any contribution with angular dependence $\cos\theta$. Indeed, arguing as in the derivation of (\ref{M2E2}), and using the fact that it is always possible to add a constant to $V,$ it follows that such a contribution would yield an additional term in $U_{2,3}$ of the form $K_{1}\left( r^{2}-1\right) \left( r^{2}+1\right)^{-3} + K_{2}r \left( r^{2}+1\right)^{-3}\cos \theta.$ However, if $K_{1}\neq0$ or $K_{2}\neq 0$ there would be a contradiction with (\ref{S4E4}), (\ref{S5E5}). Similar arguments exclude angular dependences $\cos\left( \ell \theta \right)$ with $\ell>2$, since they would imply large values for $\Phi,\ W$ in the outer region where $\left\vert y\right\vert $ is of order one. \subsection{Computation of $\left( U_{3},W_{3}\right) .$} Since $U_{1}=0$ and $-\nabla_{\xi}\left( U_{2}\nabla_{\xi}W_{1}\right) - 2^{-1}\varepsilon_{\ell}\bar{y}_{\ell}\nabla_{\xi}U_{2}=0$ by \eqref{U2E2a}, the system \eqref{S6E3}, \eqref{S6E4} reads: \begin{align*} 0 & =\Delta_{\xi}U_{3}-\nabla_{\xi}\left( u_{s}\nabla_{\xi}W_{3}\right) -\nabla_{\xi}\left( U_{3}\nabla_{\xi}v_{s}\right) ,\\ 0 & =\Delta_{\xi}W_{3}+U_{3}.
\end{align*} This system is similar to \eqref{S6E7a}, \eqref{S6E7b}. In order to obtain the matching of these terms with the corresponding ones in the outer region, we need an angular dependence proportional to $\cos\left( 3\theta\right) $. This dependence is the only one consistent with the rate of growth of these functions required to obtain the right matching with the outer part. We then write: \begin{equation} U_{3}\left( \xi,\tau\right) =Q_{3}\left( r, \tau\right) \cos\left( 3\theta\right) , \quad\ W_{3}\left( \xi, \tau\right) =V_{3}\left( r, \tau\right) \cos\left( 3\theta\right), \label{M1E7a} \end{equation} where $(r,\theta )$ are as before. The function $(Q_{3} , V_{3} )$ satisfies: \begin{subequations} \begin{align} \frac{1}{r}\frac{\partial}{\partial r}\left( r\frac{\partial Q_{3}}{\partial r}\right) -\frac{9}{r^{2}}Q_{3}-\frac{du_{s}}{dr}\frac{\partial V_{3} }{\partial r}+2u_{s}Q_{3}-\frac{dv_{s}}{dr}\frac{\partial Q_{3}}{\partial r} & =0,\label{M1E7}\\ \frac{1}{r}\frac{\partial}{\partial r}\left( r\frac{\partial V_{3}}{\partial r}\right) -\frac{9}{r^{2}}V_{3}+Q_{3} & =0, \label{M1E8} \end{align} \end{subequations} together with the conditions implied by the regularity properties of $U_{3},\ W_{3}$: \begin{equation} Q_{3} \left( 0,\tau\right) =V_{3}\left( 0,\tau\right) =0. \label{M1E9} \end{equation} Using the solutions of these equations obtained in \cite[Theorems 4.1--4.3]{V1} we have \begin{equation} Q_{3}\left( r, \tau \right) =\frac{8B_{3}r^{3}}{\left( r^{2}+1\right) ^{3}}\left( 2r^{2}+4\right), \quad V_{3}\left( r, \tau \right) =\frac{B_{3}r^{3}}{r^{2}+1}\left( 2r^{2}+4\right), \label{M2E1} \end{equation} for some $B_{3}\in\mathbb{R}.$ As in the case of $B_{2,3},$ $B_{3}$ could have some slow (meaning non-exponential in $\tau$) dependence on $\tau$. More precisely, it will behave like $\varepsilon_{\ell}^{3}\left( w.l.a\right) .$ We have just written the terms with angular dependence $\cos\left( 3\theta \right) ,$ but there are similar terms with dependence $\sin\left( 3\theta\right)$, characterized by means of a coefficient $\bar{B}_{3}.$ \subsection{Computation of $\left( U_{4},W_{4}\right) .$} Using \eqref{S6E3-1}, \eqref{S6E4-2} and (\ref{U2E2a}): \begin{subequations} \begin{align} 0 = & \Delta_{\xi}U_{4}-\nabla_{\xi}\left( u_{s}\nabla_{\xi}W_{4}\right) -\nabla_{\xi}\left( U_{2}\nabla_{\xi}W_{2}\right) -\nabla_{\xi}\left( U_{3}\nabla_{\xi}W_{1}\right) -\nabla_{\xi}\left( U_{4}\nabla_{\xi}v_{s}\right) +\nonumber\\ & +\left( 2\varepsilon_{\ell}\varepsilon_{\ell,\tau}-\varepsilon_{\ell}^{2}\right) \left( U_{2}+\frac{\xi\nabla_{\xi}U_{2}}{2}\right) -\varepsilon_{\ell}^{2}\frac{\partial U_{2}}{\partial\tau}+\varepsilon_{\ell}\bar{y}_{\ell,\tau}\nabla_{\xi}U_{2}-\frac{\varepsilon_{\ell}\bar{y}_{\ell}}{2}\nabla_{\xi}U_{3}, \label{S6E3-1...}\\ 0 = & \Delta_{\xi}W_{4}+U_{4}. \label{S6E4-2...} \end{align} \end{subequations} Using (\ref{U2E2a}) and (\ref{U2E1}) we observe that: \begin{align*} -\nabla_{\xi}\left( U_{3}\nabla_{\xi}W_{1}\right) -\frac{\varepsilon_{\ell}\bar{y}_{\ell}}{2}\nabla_{\xi}U_{3} & =0\ ,\\ -\nabla_{\xi}\left( U_{2}\nabla_{\xi}W_{2,2}\right) +\varepsilon_{\ell}\bar{y}_{\ell,\tau}\nabla_{\xi}U_{2} & =0.
\end{align*} Then \eqref{S6E3-1...}, \eqref{S6E4-2...} yield: \begin{subequations} \begin{align} 0 = & \Delta_{\xi}U_{4}-\nabla_{\xi}\left( u_{s}\nabla_{\xi}W_{4}\right) -\nabla_{\xi}\left( U_{2}\nabla_{\xi}W_{2,1}\right) -\nabla_{\xi}\left( U_{2}\nabla_{\xi}W_{2,3}\right) -\nonumber\\ & -\nabla_{\xi}\left( U_{4}\nabla_{\xi}v_{s}\right) +\left( 2\varepsilon_{\ell}\varepsilon_{\ell,\tau}-\varepsilon_{\ell}^{2}\right) \left( U_{2}+\frac{\xi\nabla_{\xi}U_{2}}{2}\right) -\varepsilon_{\ell}^{2}\frac{\partial U_{2}}{\partial\tau}, \label{S6E3-10}\\ 0 = & \Delta_{\xi}W_{4}+U_{4}. \label{S6E4-20} \end{align} \end{subequations} It is now convenient to split $U_{4},\ W_{4}$ as \[ U_{4}=U_{4,1}+U_{4,2},\quad W_{4}=W_{4,1}+W_{4,2}, \] where \begin{subequations} \begin{align} 0 = & \Delta_{\xi}U_{4,1}-\nabla_{\xi}\left( u_{s}\nabla_{\xi}W_{4,1}\right) -\nabla_{\xi}\left( U_{2,1}\nabla_{\xi}W_{2,1}\right)-\nonumber\\ & -\nabla_{\xi}\left( U_{4,1}\nabla_{\xi}v_{s}\right) +\left( 2\varepsilon_{\ell}\varepsilon_{\ell,\tau}-\varepsilon_{\ell}^{2}\right) \left( U_{2,1}+\frac{\xi\nabla_{\xi}U_{2,1}}{2}\right) -\varepsilon_{\ell}^{2} \frac{\partial U_{2,1}}{\partial\tau}, \label{M3E1}\\ 0 = & \Delta_{\xi}W_{4,1}+U_{4,1}, \label{M3E2} \end{align} \end{subequations} \begin{subequations} \begin{align} 0 & =\Delta_{\xi}U_{4,2}-\nabla_{\xi}\left( u_{s}\nabla_{\xi}W_{4,2}\right) -\nabla_{\xi}\left( U_{4,2}\nabla_{\xi}v_{s}\right) +S_{4,2}\left( \xi,\tau\right), \label{M3E4a}\\ 0 & =\Delta_{\xi}W_{4,2}+U_{4,2} \label{M3E3a} \end{align} with \begin{multline} S_{4,2}\left( \xi,\tau\right) = -\nabla_{\xi}\left( U_{2,3}\nabla_{\xi}W_{2,1}\right) - \nabla_{\xi}\left( U_{2}\nabla_{\xi}W_{2,3}\right) + \\ + \left( 2\varepsilon_{\ell}\varepsilon_{\ell,\tau} - \varepsilon_{\ell}^{2}\right) \left( U_{2,3}+\frac{\xi\nabla_{\xi}U_{2,3}}{2}\right) -\varepsilon_{\ell}^{2}\frac{\partial U_{2,3}}{\partial\tau}. \label{M3E3c} \end{multline} \end{subequations} The system \eqref{M3E1}, \eqref{M3E2} is the same as (3.16)--(3.18) in \cite{V1} and the solution can be obtained as indicated in that paper (although a slightly different notation for the functions is used there). The relevant information that we will need in this paper is the asymptotics of the solutions for large values of $\left\vert \xi\right\vert $, which can be computed as follows. We define: \begin{equation} U_{4,1}=-\frac{1}{r}\frac{\partial g_{2}}{\partial r}\ ,\quad\ \frac{\partial W_{4,1}}{\partial r}=\frac{g_{2}}{r}. \label{Z1E1} \end{equation} Then: \begin{equation} g_{2}=\varepsilon_{\ell}^{2}\left( 2\varepsilon_{\ell}\varepsilon_{\ell,\tau }-\varepsilon_{\ell}^{2}\right) _{\tau}\left[ \frac{r^{2}\log r}{4}-\frac{7r^{2}}{16}+O\left( \left( \log r\right) ^{2}\right) \right] +\left( 2\varepsilon_{\ell}\varepsilon_{\ell,\tau}-\varepsilon_{\ell}^{2}\right) ^{2}\left[ -\frac{r^{2}}{8}+O\left( \left( \log r\right) ^{2}\right) \right] \label{Z1E2} \end{equation} as $r\rightarrow\infty.$ Similar asymptotic formulas can be obtained for $\partial g_{2}/\partial r$. In order to solve \eqref{M3E3a}-\eqref{M3E3c} we need to compute $S_{4,2}\left( \xi,\tau\right) $.
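The angular bookkeeping behind this computation is elementary: quadratic interactions of the $\cos\left( 2\theta\right) $ mode $\left( U_{2,3},V_{2,3}\right) $ generate a radially symmetric part and a $\cos\left( 4\theta\right) $ part, while products with the radial mode $\left( U_{2,1},V_{2,1}\right) $ remain in the $\cos\left( 2\theta\right) $ sector. A minimal symbolic check of the two trigonometric identities involved (the variable names are ours): \begin{verbatim}
import sympy as sp

t = sp.symbols('theta', real=True)
c2, s2, c4 = sp.cos(2*t), sp.sin(2*t), sp.cos(4*t)

# cos^2(2t) = 1/2 + cos(4t)/2  -> radial (G1) and cos(4t) (G3) parts
print(sp.simplify(c2**2 - (sp.Rational(1, 2) + c4/2)))  # 0
# sin^2(2t) = 1/2 - cos(4t)/2  (from products of the gradients)
print(sp.simplify(s2**2 - (sp.Rational(1, 2) - c4/2)))  # 0
\end{verbatim}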
Using \eqref{S6E8a} and \eqref{M0E2} we obtain, after some elementary but tedious computations, \begin{equation} S_{4,2}\left( \xi,\tau\right) = G_{1}\left( r,\tau\right) + G_{2}\left( r,\tau\right) \cos\left( 2\theta \right) + G_{3}\left( r,\tau\right) \cos\left( 4\theta \right), \label{Z1E3} \end{equation} where: \begin{subequations} \begin{align} G_{1}\left( r,\tau\right) = & -\frac{1}{2r}\frac{\partial}{\partial r} \left( rQ_{2,3} \frac{\partial V_{2,3}}{\partial r} \right), \label{Z1E4} \\ G_{2}\left( r,\tau\right) = & \frac{ 4Q_{2,1} V_{2,3} }{ r^{2} } -\frac{1}{r} \frac{\partial}{\partial r} \left( r Q_{2,1} \frac{\partial V_{2,3} }{ \partial r }\right) -\frac{1}{r }\frac{\partial}{\partial r} \left( rQ_{2,3} \frac{\partial V_{2,1}}{\partial r} \right) - \notag \\ & -\varepsilon_{\ell}^{2} \frac{\partial Q_{2,3}}{\partial\tau} + \left( 2\varepsilon_{\ell}\varepsilon_{\ell,\tau} - \varepsilon_{\ell}^{2}\right) \frac{48B_{2,3} r^{2}}{\left( r^{2}+1\right)^{4}}, \label{Z1E5}\\ G_{3}\left( r,\tau\right) = & \frac{4Q_{2,3} V_{2,3}}{r^{2}} - \frac{1}{2r}\frac{\partial}{\partial r}\left( rQ_{2,3} \frac{\partial V_{2,3}}{\partial r}\right). \label{Z1E6} \end{align} \end{subequations} The form of $S_{4,2}\left( \xi,\tau\right)$ in (\ref{Z1E3}) suggests splitting $\left( U_{4,2},W_{4,2}\right)$ as: \[ U_{4,2}=U_{4,2,1}+U_{4,2,2}+U_{4,2,3},\quad\ W_{4,2}=W_{4,2,1}+W_{4,2,2} + W_{4,2,3} \] with $\left\{ \left( U_{4,2,k},W_{4,2,k}\right) :k=1,2,3\right\}$ having the angular dependencies $\cos\left( 2\left( k-1\right) \theta\right) $. Then: \begin{subequations} \begin{align} 0 & =\Delta_{\xi}U_{4,2,k}-\nabla_{\xi}\left( u_{s}\nabla_{\xi} W_{4,2,k}\right) -\nabla_{\xi}\left( U_{4,2,k}\nabla_{\xi}v_{s}\right) +G_{k}\cos\left( 2\left( k-1\right) \theta\right), \label{Z1E7}\\ 0 & =\Delta_{\xi}W_{4,2,k}+U_{4,2,k} \label{Z1E8} \end{align} \end{subequations} with boundary conditions \begin{equation} U_{4,2,k}\left( 0,\tau\right) =0,\qquad k=1,2,3. \label{Z2E1} \end{equation} The boundary conditions for $U_{4,2,2},\ U_{4,2,3}$ are just consequences of the angular dependence of these functions and their smoothness properties, whereas condition (\ref{Z2E1}) for $U_{4,2,1}$ is just a consequence of (\ref{S5E5}). On the other hand, the angular dependencies of the functions $W_{4,2,2},\,W_{4,2,3}$ yield: \begin{equation} W_{4,2,2}\left( 0,\tau\right) =W_{4,2,3}\left( 0,\tau\right) =0, \label{Z2E2} \end{equation} whereas (\ref{S4E4}) implies: \begin{equation} \frac{\partial W_{4,2,1}}{\partial r}\left( 0,\tau\right) =0. \label{Z2E3} \end{equation} \subsection{Computation of $\left( U_{4,2,1},\, W_{4,2,1}\right) $.} \begin{lemma} Under the conditions \eqref{Z2E1} and \eqref{Z2E3}, the system \eqref{Z1E7}, \eqref{Z1E8} with $k = 1$ has a unique exact solution: \vspace{-0.2cm} \begin{equation} U_{4,2,1} =2 \left( B_{2,3} \right)^{2}\frac{r^{4}\left( r^{4} + 4r^{2} + 9 \right)}{\left( r^{2} + 1 \right)^{4}}, \ \quad \frac{\partial W_{4,2,1}}{\partial r} = -\left( B_{2,3}\right)^{2}\frac{r^{5}\left( r^{2}+3\right) }{\left( 1+r^{2}\right) ^{3}}, \label{Z2E9} \end{equation} where $r = |\xi|$ and $B_{2,3}$ is the parameter in \eqref{M2E2}.
\end{lemma} \begin{proof} Using (\ref{Z1E4}) and (\ref{Z1E7}) we obtain: \[ 0 =\frac{1}{r}\frac{\partial}{\partial r}\left( r\frac{\partial U_{4,2,1}}{\partial r}\right) -\frac{1}{r}\frac{\partial}{\partial r}\left( ru_{s}\frac{\partial W_{4,2,1}}{\partial r}\right) -\frac{1}{r}\frac{\partial}{\partial r}\left( rU_{4,2,1}\frac{d v_{s}}{dr}\right) -\frac{1}{2r}\frac{\partial}{\partial r}\left( rQ_{2,3}\frac{\partial V_{2,3}}{\partial r}\right) . \] Integrating this equation and using (\ref{M2E2}) as well as (\ref{Z2E1}) we obtain \begin{equation} r\frac{\partial U_{4,2,1}}{\partial r}-u_{s}\left( r\frac{\partial W_{4,2,1}}{\partial r}\right) -rU_{4,2,1}\frac{d v_{s}}{dr} =\frac{rQ_{2,3}}{2}\frac{\partial V_{2,3}}{\partial r}. \label{Z2E4} \end{equation} On the other hand, defining \begin{equation} M_{4,2,1}=r\frac{\partial W_{4,2,1}}{\partial r} \label{Z2E4a} \end{equation} and using (\ref{Z1E8}), we obtain \begin{equation} U_{4,2,1}=-\frac{1}{r}\frac{\partial M_{4,2,1}}{\partial r}. \label{T8E1} \end{equation} The smoothness of the function $W_{4,2,1}$ implies $M_{4,2,1}\left( 0, \tau\right) =0$. Using (\ref{Z2E1}) we then obtain \begin{equation} M_{4,2,1}\left( r, \tau\right) =o\left( r^{2}\right) \quad \text{ as } r\rightarrow0. \label{T8E2} \end{equation} Plugging (\ref{Z2E4a}) and (\ref{T8E1}) into (\ref{Z2E4}) we obtain \begin{equation} r\frac{\partial}{\partial r}\left( \frac{1}{r}\frac{\partial M_{4,2,1}}{\partial r}\right) +\frac{8}{\left( 1+r^{2}\right) ^{2}}M_{4,2,1}+\frac{4r}{1+r^{2}}\frac{\partial M_{4,2,1}}{\partial r}=-\frac{rQ_{2,3}}{2}\frac{\partial V_{2,3}}{\partial r}. \label{T8E2a} \end{equation} In order to solve this equation we use the change of variables \begin{equation} M_{4,2,1}= \frac{r^{2}F_{4,2,1}}{\left( 1+r^{2}\right) ^{2}}. \label{T8E3} \end{equation} Note that (\ref{T8E2}) implies \begin{equation} F_{4,2,1}=o\left( 1\right) \text{ as } r \rightarrow0. \label{T8E4} \end{equation} Plugging (\ref{T8E3}) into (\ref{T8E2a}), we obtain \begin{equation} \frac{r^{2}}{\left( 1+r^{2}\right) ^{2}}\frac{\partial^{2}F_{4,2,1} }{\partial r^{2}}-\frac{r}{\left( r^{2}+1\right) ^{3}}\left( r^{2} -3\right) \frac{\partial F_{4,2,1}}{\partial r}=-\frac{rQ_{2,3}}{2} \frac{\partial V_{2,3}}{\partial r}. \label{Z2E5} \end{equation} Using now (\ref{M2E2}) to compute the right-hand side of (\ref{Z2E5}) we arrive at: \[ r\frac{\partial^{2}F_{4,2,1}}{\partial r^{2}} -\frac{r^{2}-3}{r^{2} + 1}\frac{\partial F_{4,2,1}}{\partial r} =-8\left( B_{2,3}\right)^{2}\frac{r^{3}\left( r^{2}+3\right) }{\left( r^{2}+1\right) ^{3}}\left( r^{4}+2r^{2}+3\right) . \] This is a first order linear differential equation for $\partial F_{4,2,1}/\partial r$ that can be integrated explicitly. After some computations we obtain: \begin{equation} \frac{\partial F_{4,2,1}}{\partial r} =-\frac{4\left( B_{2,3}\right)^{2} r^{3}\left( r^{4}+3r^{2}+3\right) }{\left( r^{2}+1\right)^{2}}. \label{Z2E6} \end{equation} In the derivation of (\ref{Z2E6}) we have used that, due to (\ref{T8E4}), the value of $F_{4,2,1}\left( 0, \tau\right) $ must be finite. Integrating now (\ref{Z2E6}) and using also (\ref{T8E4}) we obtain \[ F_{4,2,1} = -\left( B_{2,3}\right)^{2} \frac{r^{4}\left( r^{2}+3\right) }{r^{2}+1}. \] Using now (\ref{T8E2}) and (\ref{T8E3}) we have \[ M_{4,2,1}=-\left( B_{2,3}\right) ^{2}\frac{r^{6}\left( r^{2}+3\right) }{\left( 1+r^{2}\right) ^{3}}, \] whence (\ref{Z2E4a}) and (\ref{T8E1}) yield the desired result.
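The final differentiation can also be double-checked symbolically; a minimal sympy sketch (the variable names are ours) recovering (\ref{Z2E9}) from $M_{4,2,1}$ via (\ref{T8E1}): \begin{verbatim}
import sympy as sp

r, B = sp.symbols('r B', positive=True)

# M_{4,2,1} obtained at the end of the proof (B stands for B_{2,3})
M = -B**2 * r**6 * (r**2 + 3) / (1 + r**2)**3

# (T8E1): U_{4,2,1} = -(1/r) dM/dr; compare with (Z2E9)
U = -sp.diff(M, r) / r
target = 2 * B**2 * r**4 * (r**4 + 4*r**2 + 9) / (r**2 + 1)**4
print(sp.simplify(U - target))   # 0
\end{verbatim}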
\end{proof} \subsection{Computation of $\left( U_{4,2,2},W_{4,2,2}\right) .$} \subsubsection{Reduction of the problem to ODEs for $Q_{4,2,2},\ V_{4,2,2}.$} The functions $U_{4,2,2},\ W_{4,2,2}$ satisfy the system \eqref{Z1E7}, \eqref{Z1E8} with $k=2$, together with the conditions (\ref{Z2E1}), (\ref{Z2E2}). In order to remove the angular dependence we look for solutions of these equations in the form: \[ U_{4,2,2}=Q_{4,2,2}\left( r,\tau\right) \cos\left( 2\theta\right),\quad W_{4,2,2}=V_{4,2,2}\left( r,\tau\right) \cos\left( 2\theta\right), \] where $(r, \theta )$ are as before. It then follows from \eqref{Z1E7} and \eqref{Z1E8} with $k=2$, as well as \eqref{S4E2}, that: \begin{subequations} \begin{align} 0 & =\frac{1}{r}\frac{\partial}{\partial r}\left( r\frac{\partial Q_{4,2,2}}{\partial r}\right) -\frac{4}{r^{2}}Q_{4,2,2}+\frac{32r}{\left( r^{2}+1\right) ^{3}}\frac{\partial V_{4,2,2}}{\partial r}+\frac{4r}{r^{2}+1}\frac{\partial Q_{4,2,2}}{\partial r}+\frac{16}{\left( 1+r^{2}\right) ^{2}}Q_{4,2,2}+G_{2},\label{Z2E10}\\ 0 & =\frac{1}{r}\frac{\partial}{\partial r}\left( r\frac{\partial V_{4,2,2}}{\partial r}\right) -\frac{4}{r^{2}}V_{4,2,2}+Q_{4,2,2}. \label{Z2E11} \end{align} \end{subequations} The precise formula for $G_{2}$ may be computed, using (\ref{S6E8c}), (\ref{S6E8d}) and (\ref{Z1E5}), as: \begin{multline} G_{2} =-\frac{8B_{2,3}r\left( r^{4}+4r^{2}+9\right) }{\left(r^{2}+1\right)^{3}} \frac{\partial g_{1}}{\partial r} +\frac{32B_{2,3}\left( r^{2}-3\right)}{\left( r^{2}+1\right)^{4}}g_{1} + \\ + \frac{8B_{2,3}\left( 2\varepsilon_{\ell}\varepsilon_{\ell,\tau} - \varepsilon_{\ell}^{2}\right) r^{2}}{\left( r^{2}+1\right) ^{4}} \left( r^{4}+2r^{2}+9\right) -\left( B_{2,3}\right)_{\tau} \frac{8\varepsilon_{\ell}^{2}r^{2}\left( r^{2}+3\right)}{\left( r^{2}+1\right)^{3}}, \label{S8E4} \end{multline} where $g_{1}$ is the function in (\ref{S6E9b}). The derivation of \eqref{S8E4} requires just a long but elementary computation. On the other hand, the conditions (\ref{Z2E1}) and (\ref{Z2E2}) imply: \begin{equation} Q_{4,2,2}\left( 0,\tau\right) =V_{4,2,2}\left( 0,\tau\right) =0. \label{Z2E12} \end{equation} The system \eqref{Z2E10}, \eqref{Z2E11} is a nonhomogeneous linear system with source term $G_{2}$ as in (\ref{S8E4}). In order to study the asymptotics of its solutions we examine in detail the solutions of the homogeneous part of this system. \subsubsection{Study of the homogeneous system.} The homogeneous part of the system \eqref{Z2E10}, \eqref{Z2E11} can be written as the linear system \begin{subequations} \begin{align} 0 & =\frac{1}{r}\frac{d}{dr}\left( r\frac{d\psi}{dr}\right) - \frac{L^{2}}{r^{2}}\psi+\frac{32r}{\left( r^{2}+1\right)^{3}}\frac{d\omega}{dr} +\frac{4r}{r^{2}+1}\frac{d\psi}{dr} +\frac{16}{\left( 1+r^{2}\right) ^{2}} \psi,\label{Z3E3}\\ 0 & =\frac{1}{r}\frac{d}{dr}\left( r\frac{d\omega}{dr}\right) -\frac{L^{2}}{r^{2}}\omega+\psi \label{Z3E4} \end{align} \end{subequations} with $L=2.$ This system was studied in detail in \cite{V1}. Four linearly independent solutions $\left( \psi_{k},\omega_{k}\right) ,\,k=1,2,3,4$, were obtained and their asymptotics for large and small $r$ were computed there. We need to compute an error term in the asymptotics of $\omega_{4}$ for $L=2,3,4,...$ in a manner more detailed than in \cite{V1}. The following result is basically a reformulation of \cite[Theorem 4.3]{V1}.
\begin{theorem} \label{Linear} Suppose that $L=2,3,4,...$ A general solution of \eqref{Z3E3}, \eqref{Z3E4} is a linear combination of four particular functions $\left\{ \left( \psi_{k},\omega_{k}\right) ,\ k=1,2,3,4\right\}$, whose asymptotics are given by: \begin{subequations} \begin{gather} \psi_{1}\left( r\right) = \frac{8r^{L}}{\left( r^{2}+1\right) ^{3}} \left[ \left( L-1\right) r^{2} + L+1 \right] ,\ \ \omega_{1}\left( r\right) =\frac{r^{L}}{r^{2}+1}\left[ \left( L-1\right) r^{2} + L + 1 \right], \label{Z3E5}\\ \psi_{2}\left( r\right) = \frac{8}{r^{L}\left( r^{2}+1\right) ^{3}}\left[ \left( L+1\right) r^{2} + L-1 \right],\ \ \omega_{2} \left( r\right) =\frac{1}{r^{L}\left( r^{2}+1\right) }\left[ \left( L+1\right) r^{2} + L-1 \right], \label{Z3E6}\\ \psi_{3}\left( r\right) \sim 8r^{L} \ \text{ as } r\rightarrow 0^{+}, \quad \omega_{3}\left( r\right) \sim -r^{L} \ \text{ as } r\rightarrow0^{+}, \label{Z3E7}\\ \psi_{3} \left( r\right) \sim 16K_{L}r^{\sqrt{4+L^{2}}-2} \ \text{ as } r\rightarrow\infty, \quad \omega_{3}\left( r\right) \sim -4K_{L}r^{\sqrt{4+L^{2}}} \text{ as } r\rightarrow\infty, \label{Z3E8}\\ \psi_{4}\left( r\right) \sim8r^{-L} \ \text{ as } r \rightarrow 0^{+}, \quad \omega_{4}\left( r\right) \sim -r^{-L} \ \text{ as } r \rightarrow0^{+}, \label{Z3E9}\\ \psi_{4}\left( r\right) \sim 16C_{L}r^{-\sqrt{4+L^{2}}-2} \ \text{ as } r \rightarrow\infty, \quad \omega_{4}\left( r\right) \sim C_{L}\left( \kappa_{L}r^{-L}-4r^{-\sqrt{4+L^{2}}}\right) + o\left( r^{-L-2}\right) \ \text{ as } r \rightarrow \infty \label{Z3E10} \end{gather} \end{subequations} for some real numbers $C_{L}$, $K_{L}$ and $\kappa_{L}.$ \end{theorem} \begin{remark} We have $C_{L}, K_{L} >0$, whereas $\kappa_{L}$ could be zero for some $L$. \end{remark} \begin{remark} The only difference between Theorem \ref{Linear} and \cite[Theorem 4.3]{V1} is that the last formula in \eqref{Z3E10} is written as $\omega_{4}=o\left( r^{-L}\right)$ as $r\rightarrow\infty$ in \cite[Theorem 4.3]{V1}. There is a typo there. The correct formula intended in that paper is $\omega_{4}=O\left( r^{-L}\right) $ as $r\rightarrow\infty.$ Formula \eqref{Z3E10} provides precise asymptotics for $\omega_{4}.$ \end{remark} \begin{proof} We need to prove only (\ref{Z3E10}). To show it we define, given a solution $\left( \psi,\omega\right) $ of \eqref{Z3E3}, \eqref{Z3E4}, two functions $F,G$ by means of: \begin{equation} \psi=\frac{8r^{-L}}{\left( r^{2}+1\right) ^{3}}\left( F+G\right) ,\quad\omega=\frac{r^{-L}}{r^{2}+1}\left( F-G\right) . \label{S9E5a} \end{equation} Then: \begin{gather} \frac{d^2 G}{dr^2} +\left( \frac{1-2L}{r}-\frac{8r}{r^{2}+1}\right) \frac{dG}{dr} +\left( \frac{8L+12}{r^{2}+1}-\frac{16}{\left( r^{2}+1\right)^{2}}\right) G =0,\label{S9E5}\\ \frac{d^2 F}{dr^2} +\left( \frac{1-2L}{r}-\frac{4r}{r^{2}+1}\right) \frac{dF}{dr} + \frac{4\left( L+1\right)}{r^{2}+1}F =\frac{4}{ r^{2} + 1 }\left( r \frac{dG}{dr}-\left( L+2\right) G\right) \equiv S\left( r\right). \label{S9E6} \end{gather} It was proven in \cite{V1} that there exists a unique solution of (\ref{S9E5}) satisfying \[ G_{\beta}\left( 0\right) =1,\quad G_{\beta}\left( r\right) \sim C_{L}r^{\beta_{L}} \quad\text{ as }r\rightarrow\infty,\quad \beta_{L}=4+L-\sqrt{4+L^{2}}.
\] Moreover, since the point $r=\infty$ is a regular singular point for (\ref{S9E5}), we can use Frobenius theory to compute a power series expansion for $G_{\beta}\left( r\right) $ as $r\rightarrow\infty$ to obtain \begin{equation} G_{\beta}\left( r\right) =C_{L}\left[ r^{\beta_{L}}+\frac{2\sqrt{L^{2}+4}-1}{\sqrt{L^{2}+4}+1}r^{\beta_{L}-2}+O\left( r^{\beta_{L}-4}\right) \right] \quad\text{ as }r\rightarrow\infty \label{S9E7} \end{equation} as well as similar bounds for the derivatives. Two independent solutions of the homogeneous equation associated to (\ref{S9E6}) are given by: \begin{equation} F_{1,h}\left( r\right) =\left( L+1\right) r^{2}+\left( L-1\right) , \quad F_{2,h}\left( r\right) =r^{2L}\left[ \left( L-1\right) r^{2}+\left( L+1\right) \right] . \label{Z4E0} \end{equation} We then look for solutions of (\ref{S9E6}) of the form: \begin{equation} F_{\beta} \left( r\right) =a_{1}\left( r\right) F_{1,h}\left( r\right) +a_{2}\left( r\right) F_{2,h}\left( r\right) \label{Z4E0a} \end{equation} under the constraints on $a_{1}$ and $a_{2}$: \begin{align*} a_{1}^{\prime}\left( r\right) F_{1,h}\left( r\right) +a_{2}^{\prime }\left( r\right) F_{2,h}\left( r\right) & =0,\\ a_{1}^{\prime}\left( r\right) F_{1,h}^{\prime}\left( r\right) +a_{2}^{\prime}\left( r\right) F_{2,h}^{\prime}\left( r\right) & =S\left( r\right) . \end{align*} We set \begin{equation} \Delta_{F}\left( r\right) =\left\vert \begin{array}[c]{cc} F_{1,h}\left( r\right) & F_{2,h}\left( r\right) \\ F_{1,h}^{\prime}\left( r\right) & F_{2,h}^{\prime}\left( r\right) \end{array} \right\vert =2\left( L+1\right) L\left( L-1\right) \left( r^{2}+1\right) ^{2}r^{2L-1}. \label{Z4E1} \end{equation} Using then Cramer's formula as well as (\ref{Z4E0}) and (\ref{Z4E1}), we obtain \begin{align} a_{1}^{\prime}\left( r\right) & =-\frac{F_{2,h}\left( r\right) }{\Delta_{F}\left( r\right) }S\left( r\right) =-\frac{r\left[ \left( L-1\right) r^{2}+\left( L+1\right) \right] }{2\left( L+1\right) L\left( L-1\right) \left( r^{2}+1\right) ^{2}}S\left( r\right), \label{Z4E3}\\ a_{2}^{\prime}\left( r\right) & =\frac{F_{1,h}\left( r\right) }{\Delta_{F}\left( r\right) }S\left( r\right) =\frac{\left( L+1\right) r^{2}+\left( L-1\right) }{2\left( L+1\right) L\left( L-1\right) \left( r^{2}+1\right) ^{2}r^{2L-1}}S\left( r\right). \label{Z4E4} \end{align} Notice that $r=0$ is also a regular singular point for (\ref{S9E5}). Then $G_{\beta}\left( r\right) =1+O\left( r^{2}\right) $ as $r\rightarrow0$, as well as similar estimates for the derivatives. We then observe that the function $S$ in (\ref{S9E6}) satisfies \begin{equation} \left\vert S\left( r\right) \right\vert \leq C\left( 1+r^{\beta_{L}-2}\right) ,\quad r>0. \label{Z4E5} \end{equation} Note that $\beta_{L}\in\left( 2,4\right)$ for each $L=2,3,...$ We can then obtain a particular solution of (\ref{S9E6}), choosing $a_{1},\ a_{2}$ in (\ref{Z4E0a}) as \begin{align*} a_{1}\left( r\right) & =-\frac{1}{2\left( L+1\right) L\left( L-1\right) }\int_{0}^{r}\frac{\left[ \left( L-1\right) \eta^{2}+\left( L+1\right) \right] \eta}{\left( \eta^{2}+1\right) ^{2}}S\left( \eta\right) d\eta,\\ a_{2}\left( r\right) & =-\frac{1}{2\left( L+1\right) L\left( L-1\right) }\int_{r}^{\infty}\frac{\left[ \left( L+1\right) \eta ^{2}+\left( L-1\right) \right] \eta}{\left( \eta^{2}+1\right) ^{2}\eta^{2L}}S\left( \eta\right) d\eta. \end{align*} The convergence of both integrals is a consequence of (\ref{Z4E5}).
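As a sanity check, the explicit homogeneous solutions (\ref{Z4E0}) and the determinant (\ref{Z4E1}) can be verified symbolically; a minimal sympy sketch (the helper names are ours, checking a few integer values of $L$): \begin{verbatim}
import sympy as sp

r = sp.symbols('r', positive=True)

def hom(F, L):
    # homogeneous part of (S9E6)
    return (sp.diff(F, r, 2)
            + ((1 - 2*L)/r - 4*r/(r**2 + 1)) * sp.diff(F, r)
            + 4*(L + 1)/(r**2 + 1) * F)

for L in (2, 3, 4):
    F1 = (L + 1)*r**2 + (L - 1)              # F_{1,h} in (Z4E0)
    F2 = r**(2*L)*((L - 1)*r**2 + (L + 1))   # F_{2,h} in (Z4E0)
    assert sp.simplify(hom(F1, L)) == 0
    assert sp.simplify(hom(F2, L)) == 0
    # determinant (Z4E1)
    D = sp.simplify(F1*sp.diff(F2, r) - sp.diff(F1, r)*F2)
    target = 2*(L + 1)*L*(L - 1)*(r**2 + 1)**2*r**(2*L - 1)
    assert sp.expand(D - target) == 0
print("(Z4E0), (Z4E1) verified for L = 2, 3, 4")
\end{verbatim}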
Notice that \eqref{S9E7} as well as the definition of $S$ in \eqref{S9E6} implies the asymptotics
\begin{equation}
S(r) = -\frac{4L^{2}C_{L}r^{\beta_{L}-2}}{2+\sqrt{4+L^{2}}} + O\left( r^{\beta_{L}-4}\right) \quad\text{ as } r\rightarrow\infty. \label{Z4E6}
\end{equation}
Then:
\[
\frac{\left[ (L-1)r^{2}+(L+1)\right] r}{(r^{2}+1)^{2}}S(r) = -\frac{4(L-1)L^{2}C_{L}r^{\beta_{L}-3}}{2+\sqrt{4+L^{2}}} + O\left( r^{\beta_{L}-5}\right) \quad\text{ as } r\rightarrow\infty,
\]
whence
\[
a_{1}(r) = \frac{C_{L}\left[ (2+L)+\sqrt{4+L^{2}}\right] r^{\beta_{L}-2}}{2(L+1)\left( 2+\sqrt{4+L^{2}}\right)} + \int_{0}^{r}W(\eta)\,d\eta,
\]
with
\[
\left\vert W(r)\right\vert \leq \frac{C}{1+r^{5-\beta_{L}}}
\]
for some $C>0$. Since $\beta_{L}\in(2,4)$, it follows that $\int_{0}^{\infty}\left\vert W(r)\right\vert dr<\infty$. Therefore
\[
a_{1}(r) - \frac{C_{L}\left[ (2+L)+\sqrt{4+L^{2}}\right] r^{\beta_{L}-2}}{2(L+1)\left( 2+\sqrt{4+L^{2}}\right)} \to \frac{C_{L}\kappa_{L}}{L+1} = \int_{0}^{\infty}W(\eta)\,d\eta
\]
as $r\rightarrow\infty$, where the last equality defines $\kappa_{L}\in\mathbb{R}$. We then have:
\begin{equation}
a_{1}(r) = \frac{C_{L}\left[ (2+L)+\sqrt{4+L^{2}}\right] r^{\beta_{L}-2}}{2(L+1)\left( 2+\sqrt{4+L^{2}}\right)} + \frac{C_{L}\kappa_{L}}{L+1} + O\left( r^{\beta_{L}-4}\right) \quad\text{ as } r\rightarrow\infty. \label{Z4E7}
\end{equation}
On the other hand, using \eqref{Z4E6} we obtain:
\[
\frac{\left[ (L+1)r^{2}+(L-1)\right] r}{(r^{2}+1)^{2}r^{2L}}S(r) = -\frac{4L^{2}(L+1)C_{L}r^{\beta_{L}-3-2L}}{2+\sqrt{4+L^{2}}} + O\left( r^{\beta_{L}-5-2L}\right)
\]
as $r\rightarrow\infty$. Therefore
\begin{equation}
a_{2}(r) = \frac{C_{L}\left[ (2-L)+\sqrt{4+L^{2}}\right] r^{\beta_{L}-2-2L}}{2(L-1)\left( 2+\sqrt{4+L^{2}}\right)} + O\left( r^{\beta_{L}-4-2L}\right) \quad\text{ as } r\rightarrow\infty. \label{Z4E8}
\end{equation}
Combining \eqref{S9E7}--\eqref{Z4E0a} and \eqref{Z4E7} with \eqref{Z4E8} we arrive, after a somewhat lengthy computation, at
\begin{equation}
F_{\beta}(r) = C_{L}\left[ r^{\beta_{L}} + \kappa_{L}r^{2} - \frac{2\sqrt{L^{2}+4}+5}{\sqrt{L^{2}+4}+1}\,r^{\beta_{L}-2} + O(1)\right] \quad\text{ as } r\rightarrow\infty. \label{Z4E9}
\end{equation}
Using \eqref{S9E7} and \eqref{Z4E9} in \eqref{S9E5a} we obtain \eqref{Z3E10}. This concludes the proof.
\end{proof}

\begin{proposition} \label{Auxlem} Let $C_{L}$ and $K_{L}$ with $L\geq 2$ be the constants in \eqref{Z3E8} and \eqref{Z3E10} respectively.
Then the following identity holds:
\begin{equation}
C_{L}K_{L} = \frac{L}{\sqrt{L^{2}+4}}. \label{KCrel}
\end{equation}
\end{proposition}

\begin{proof}
Set
\[
\mathcal{M}(r) = \left(
\begin{array}[c]{cccc}
\psi_{1}(r) & \psi_{2}(r) & \psi_{3}(r) & \psi_{4}(r)\\
\omega_{1}(r) & \omega_{2}(r) & \omega_{3}(r) & \omega_{4}(r)\\
\psi_{1}'(r) & \psi_{2}'(r) & \psi_{3}'(r) & \psi_{4}'(r)\\
\omega_{1}'(r) & \omega_{2}'(r) & \omega_{3}'(r) & \omega_{4}'(r)
\end{array}
\right), \qquad \Delta_{L}(r) = \det\left( \mathcal{M}(r)\right).
\]
Using the system \eqref{Z3E3}, \eqref{Z3E4} we obtain
\[
\frac{d\Delta_{L}}{dr}(r) = -\left[ \frac{2}{r} + \frac{4r}{r^{2}+1}\right]\Delta_{L}(r).
\]
The solution of this equation is given by
\[
\Delta_{L}(r) = \frac{E_{L}}{r^{2}(r^{2}+1)^{2}}
\]
for some $E_{L}\in\mathbb{R}$. On the other hand, using \eqref{Z3E5}, \eqref{Z3E6}, \eqref{Z3E7}, and \eqref{Z3E9} we obtain
\[
\Delta_{L}(r) = -\frac{2^{10}(L+1)(L-1)L^{2}}{r^{2}}\left( 1+o(1)\right) \quad\text{ as } r\rightarrow 0.
\]
Therefore $E_{L} = -2^{10}(L+1)(L-1)L^{2}$ and
\begin{equation}
\Delta_{L}(r) = -\frac{2^{10}(L+1)(L-1)L^{2}}{r^{2}(r^{2}+1)^{2}}. \label{Delta-L}
\end{equation}
On the other hand, the asymptotics of $\psi_{i}$, $\omega_{i}$, $i=1,2,3,4$, given in \eqref{Z3E5}, \eqref{Z3E6}, \eqref{Z3E8}, and \eqref{Z3E10} imply
\[
\Delta_{L}(r) = -2^{10}L(L-1)(L+1)\sqrt{L^{2}+4}\,K_{L}C_{L}r^{-6}\left( 1+o(1)\right) \quad\text{ as } r\rightarrow\infty,
\]
whence
\[
E_{L} = -2^{10}(L+1)(L-1)L^{2} = -2^{10}L(L-1)(L+1)\sqrt{L^{2}+4}\,C_{L}K_{L}
\]
and the result follows.
\end{proof}

\subsubsection{\label{SolutionU422} Asymptotics of $\left( Q_{4,2,2},\ V_{4,2,2}\right)$.}

\begin{lemma}\label{Lem80}
For any fixed $\tau$, the problem \eqref{Z2E10}-\eqref{Z2E11}-\eqref{Z2E12} has a one-dimensional family of solutions, parameterized by $B_{4,2}=B_{4,2}(\tau)$. Their asymptotics as $r\rightarrow\infty$ are given by:
\begin{subequations}
\begin{align}
Q_{4,2,2}(r,\tau) &= 16K_{2}B_{4,2}r^{2\sqrt{2}-2} - 2\sqrt{2}\,C_{2}K_{2}B_{2,3}\left( 2\varepsilon_{\ell}\varepsilon_{\ell,\tau} - \varepsilon_{\ell}^{2} + \frac{(B_{2,3})_{\tau}}{B_{2,3}}\varepsilon_{\ell}^{2}\right) + O\left( \frac{\varepsilon_{\ell}^{4}}{r^{4-2\sqrt{2}}}\right)\ \ (w.l.a), \label{Q422}\\
V_{4,2,2}(r,\tau) &= -4B_{4,2}K_{2}r^{2\sqrt{2}} + \frac{C_{2}K_{2}B_{2,3}}{2}\left( 2\varepsilon_{\ell}\varepsilon_{\ell,\tau} - \varepsilon_{\ell}^{2} + \frac{(B_{2,3})_{\tau}}{B_{2,3}}\varepsilon_{\ell}^{2}\right) r^{2}\log r + \dots\ \ (w.l.a) \label{Z6E10b}
\end{align}
as $r\rightarrow\infty$, where $C_{2}$ and $K_{2}$ are the constants of Theorem \ref{Linear} and $B_{2,3}=B_{2,3}(\tau)$ is the parameter in \eqref{M2E2}.
\end{subequations}
\end{lemma}

\begin{proof}
Although the functions $Q_{4,2,2}$ and $V_{4,2,2}$ depend on $\tau$ through the source term $G_{2}=G_{2}(r,\tau)$, we do not write this dependence explicitly in the proof: the argument relies purely on standard ODE techniques, and the dependence on $\tau$ plays no role in it. We look for solutions of the form
\[
Q_{4,2,2}(r) = \sum_{i=1}^{4}b_{i}(r)\psi_{i}(r), \qquad V_{4,2,2}(r) = \sum_{i=1}^{4}b_{i}(r)\omega_{i}(r),
\]
where the functions $\left( \psi_{i},\omega_{i}\right)$ are as in Theorem \ref{Linear} with $L=2$. We impose the constraints
\[
\sum_{i=1}^{4}b_{i}'(r)\psi_{i}(r) = 0, \qquad \sum_{i=1}^{4}b_{i}'(r)\omega_{i}(r) = 0.
\]
Then:
\begin{equation}
Q_{4,2,2}'(r) = \sum_{i=1}^{4}b_{i}(r)\psi_{i}'(r), \qquad V_{4,2,2}'(r) = \sum_{i=1}^{4}b_{i}(r)\omega_{i}'(r), \label{Z5E1}
\end{equation}
and using \eqref{Z2E10}, \eqref{Z2E11} as well as the fact that the functions $\left( \psi_{i},\omega_{i}\right)$ solve the homogeneous system \eqref{Z3E3}, \eqref{Z3E4}, we obtain
\begin{equation}
\sum_{i=1}^{4}b_{i}'(r)\psi_{i}'(r) + G_{2}(r) = 0, \qquad \sum_{i=1}^{4}b_{i}'(r)\omega_{i}'(r) = 0. \label{Z5E2}
\end{equation}
We can rewrite \eqref{Z5E1}, \eqref{Z5E2} in the vector form
\begin{equation}
\mathcal{M}(r)\frac{d\mathcal{B}(r)}{dr} = \mathcal{S}(r), \label{S8E9}
\end{equation}
where
\[
\mathcal{B}(r) = \left(
\begin{array}[c]{c}
b_{1}(r)\\ b_{2}(r)\\ b_{3}(r)\\ b_{4}(r)
\end{array}
\right), \quad
\mathcal{S}(r) = \left(
\begin{array}[c]{c}
0\\ 0\\ -G_{2}(r)\\ 0
\end{array}
\right),
\]
and $\mathcal{M}(r)$ is the matrix that appeared in the proof of Proposition \ref{Auxlem}. We denote by $\mathcal{M}_{k}\left( r;\mathcal{S}(r)\right)$ the matrix obtained by replacing the $k$-th column of $\mathcal{M}(r)$ by $\mathcal{S}(r)$. Cramer's formula then yields:
\begin{equation}
b_{k}'(r) = \frac{\det\left( \mathcal{M}_{k}\left( r;\mathcal{S}(r)\right)\right)}{\Delta_{2}(r)}, \quad k=1,2,3,4, \label{Z5E6}
\end{equation}
where $\Delta_{2}(r)$ is the Wronskian given in \eqref{Delta-L} with $L=2$. In order to avoid lengthy formulas we will use the following notation. We denote by $D_{2,m}$, $m=1,2,3,4$, the determinant
\begin{equation}
D_{2,m}(r) = \left\vert
\begin{array}[c]{ccc}
\psi_{i}(r) & \psi_{j}(r) & \psi_{k}(r)\\
\omega_{i}(r) & \omega_{j}(r) & \omega_{k}(r)\\
\omega_{i}'(r) & \omega_{j}'(r) & \omega_{k}'(r)
\end{array}
\right\vert, \quad i,j,k\in\left\{ 1,2,3,4\right\}\setminus\left\{ m\right\}, \quad i<j<k. \label{Z5E7}
\end{equation}
Then:
\begin{equation}
b_{m}'(r) = \frac{(-1)^{m+1}G_{2}(r)D_{2,m}(r)r^{2}(r^{2}+1)^{2}}{3\cdot 2^{12}}, \quad m=1,2,3,4.
\label{Z5E8}
\end{equation}
Our next goal is to compute the asymptotics of the functions $b_{m}'(r)$ as $r\rightarrow 0$. To this end we first compute the asymptotics of $D_{2,m}(r)$ and $G_{2}(r)$. Using \eqref{Z3E5}, \eqref{Z3E6}, \eqref{Z3E7}, \eqref{Z3E9} in Theorem \ref{Linear} with $L=2$ we obtain
\begin{subequations}
\begin{align}
D_{2,1}(r) &= -\frac{2^{6}}{r^{3}}\left( 1+O(r)\right), \label{Z5E9a}\\
D_{2,2}(r) &= -3\cdot 2^{6}\cdot r\left( 1+O(r)\right), \label{Z5E9b}\\
D_{2,3}(r) &= -\frac{3\cdot 2^{6}}{r^{3}}\left( 1+O(r)\right), \label{Z5E9c}\\
D_{2,4}(r) &= -3\cdot 2^{6}\cdot r\left( 1+O(r)\right) \label{Z5E9d}
\end{align}
\end{subequations}
as $r\to 0$. On the other hand, \eqref{S6E9b} and \eqref{S8E4} imply
\begin{equation}
G_{2}(r) = 3\cdot 2^{3}\left( 3\left( 2\varepsilon_{\ell}\varepsilon_{\ell,\tau} - \varepsilon_{\ell}^{2}\right) B_{2,3} - (B_{2,3})_{\tau}\varepsilon_{\ell}^{2}\right) r^{2}\left( 1+O(r)\right) \label{Z5E10}
\end{equation}
as $r\to 0$. Combining \eqref{Z5E8}--\eqref{Z5E10} we obtain
\begin{subequations}
\begin{align}
b_{1}'(r) &= -\frac{r}{2^{3}}\left( 3\left( 2\varepsilon_{\ell}\varepsilon_{\ell,\tau} - \varepsilon_{\ell}^{2}\right) B_{2,3} - (B_{2,3})_{\tau}\varepsilon_{\ell}^{2}\right)\left( 1+O(r)\right), \label{S9E1}\\
b_{2}'(r) &= \frac{3r^{5}}{2^{3}}\left( 3\left( 2\varepsilon_{\ell}\varepsilon_{\ell,\tau} - \varepsilon_{\ell}^{2}\right) B_{2,3} - (B_{2,3})_{\tau}\varepsilon_{\ell}^{2}\right)\left( 1+O(r)\right), \label{S9E2}\\
b_{3}'(r) &= -\frac{3r}{2^{3}}\left( 3\left( 2\varepsilon_{\ell}\varepsilon_{\ell,\tau} - \varepsilon_{\ell}^{2}\right) B_{2,3} - (B_{2,3})_{\tau}\varepsilon_{\ell}^{2}\right)\left( 1+O(r)\right), \label{S9E3}\\
b_{4}'(r) &= \frac{3r^{5}}{2^{3}}\left( 3\left( 2\varepsilon_{\ell}\varepsilon_{\ell,\tau} - \varepsilon_{\ell}^{2}\right) B_{2,3} - (B_{2,3})_{\tau}\varepsilon_{\ell}^{2}\right)\left( 1+O(r)\right) \label{S9E4}
\end{align}
\end{subequations}
as $r\rightarrow 0$. Integrating the equations \eqref{S9E1}--\eqref{S9E4} and using \eqref{Z2E12} we obtain
\begin{subequations}
\begin{align}
b_{1}(r) &= \beta_{1} - \frac{3\left( 2\varepsilon_{\ell}\varepsilon_{\ell,\tau} - \varepsilon_{\ell}^{2}\right) B_{2,3} - (B_{2,3})_{\tau}\varepsilon_{\ell}^{2}}{2^{4}}r^{2}\left( 1+O(r)\right), \label{Z6E1a}\\
b_{2}(r) &= \frac{3\left( 2\varepsilon_{\ell}\varepsilon_{\ell,\tau} - \varepsilon_{\ell}^{2}\right) B_{2,3} - (B_{2,3})_{\tau}\varepsilon_{\ell}^{2}}{2^{4}}r^{6}\left( 1+O(r)\right), \label{Z6E1b}\\
b_{3}(r) &= \beta_{3} - \frac{3\left( 3\left( 2\varepsilon_{\ell}\varepsilon_{\ell,\tau} - \varepsilon_{\ell}^{2}\right) B_{2,3} - (B_{2,3})_{\tau}\varepsilon_{\ell}^{2}\right)}{2^{4}}r^{2}\left( 1+O(r)\right), \label{Z6E1c}\\
b_{4}(r) &= \frac{3\left( 2\varepsilon_{\ell}\varepsilon_{\ell,\tau} - \varepsilon_{\ell}^{2}\right) B_{2,3} - (B_{2,3})_{\tau}\varepsilon_{\ell}^{2}}{2^{4}}r^{6}\left( 1+O(r)\right) \label{Z6E1d}
\end{align}
\end{subequations}
as $r\rightarrow 0$, for some $\beta_{1},\beta_{3}\in\mathbb{R}$ to be determined.
Notice that we can then compute the functions $b_{m}$ by means of
\begin{subequations}
\begin{align}
b_{m}(r) &= \beta_{m} + \frac{(-1)^{m+1}}{3\cdot 2^{12}}\int_{0}^{r}G_{2}(\eta)\eta^{2}(\eta^{2}+1)^{2}D_{2,m}(\eta)\,d\eta, \quad m=1,2,3,4; \label{Z6E2}\\
\beta_{2} &= \beta_{4} = 0. \label{Z6E3}
\end{align}
\end{subequations}
We now proceed to compute the asymptotics of the terms $b_{m}(r)$ as $r\rightarrow\infty$. To this end we need the asymptotics of the determinants $D_{2,m}$. We begin with $D_{2,1}$. Expanding the determinant in \eqref{Z5E7} with respect to its first row we obtain
\begin{equation}
D_{2,1}(r) = \psi_{2}(r)\left\vert
\begin{array}[c]{cc}
\omega_{3}(r) & \omega_{4}(r)\\
\omega_{3}'(r) & \omega_{4}'(r)
\end{array}
\right\vert
- \psi_{3}(r)\left\vert
\begin{array}[c]{cc}
\omega_{2}(r) & \omega_{4}(r)\\
\omega_{2}'(r) & \omega_{4}'(r)
\end{array}
\right\vert
+ \psi_{4}(r)\left\vert
\begin{array}[c]{cc}
\omega_{2}(r) & \omega_{3}(r)\\
\omega_{2}'(r) & \omega_{3}'(r)
\end{array}
\right\vert. \label{Z6E4}
\end{equation}
Using \eqref{Z3E6}, \eqref{Z3E8}, and \eqref{Z3E10} we obtain:
\begin{align}
\psi_{2}(r)\left\vert
\begin{array}[c]{cc}
\omega_{3}(r) & \omega_{4}(r)\\
\omega_{3}'(r) & \omega_{4}'(r)
\end{array}
\right\vert &= O\left( r^{2\sqrt{2}-9}\right), \label{Z6E5a}\\
\psi_{4}(r)\left\vert
\begin{array}[c]{cc}
\omega_{2}(r) & \omega_{3}(r)\\
\omega_{2}'(r) & \omega_{3}'(r)
\end{array}
\right\vert &= -3\cdot 2^{7}K_{2}C_{2}\left( \sqrt{2}+1\right) r^{-5} + o\left( r^{-5}\right) \label{Z6E5b}
\end{align}
as $r\rightarrow\infty$. Using the asymptotics \eqref{Z3E6} and \eqref{Z3E10} we obtain:
\begin{equation}
\omega_{2}(r)\omega_{4}'(r) - \omega_{2}'(r)\omega_{4}(r) = 2^{3}\cdot 3(\sqrt{2}-1)C_{2}r^{-2\sqrt{2}-3}\left( 1+o(1)\right) \label{Z6E5d}
\end{equation}
as $r\rightarrow\infty$, whence, using \eqref{Z6E4}--\eqref{Z6E5d} and taking into account that $2\sqrt{2}<4$, we conclude
\begin{subequations}
\begin{equation}
D_{2,1}(r) = -2^{8}\cdot 3\,C_{2}K_{2}r^{-5}\left( 1+o(1)\right) \quad\text{ as } r\rightarrow\infty. \label{Z6E6}
\end{equation}
The asymptotics of $D_{2,m}(r)$, $m=2,3,4$, can be computed in a similar manner. The following asymptotics then follow:
\begin{align}
D_{2,2}(r) &= 2^{6}C_{2}K_{2}\kappa_{2}r^{2\sqrt{2}-3}\left( 1+o(1)\right) \quad\text{ as } r\rightarrow\infty, \label{Z6E7a}\\
D_{2,3}(r) &= -2^{6}\cdot 3\,C_{2}r^{-2\sqrt{2}-3}\left( 1+o(1)\right) \quad\text{ as } r\rightarrow\infty, \label{Z6E7b}\\
D_{2,4}(r) &= -2^{6}\cdot 3\,K_{2}r^{2\sqrt{2}-3}\left( 1+o(1)\right) \quad\text{ as } r\rightarrow\infty.
\label{Z6E7c}
\end{align}
\end{subequations}
Our next goal is to compute the asymptotics of the functions $b_{m}$ for large $r$. To this end, we first compute the asymptotics of $G_{2}(r)$ as $r\rightarrow\infty$. Using \eqref{S8E4} as well as \eqref{S6E9b} we obtain
\begin{equation}
G_{2}(r) \sim -\frac{8B_{2,3}}{r^{2}}\left( \left( 2\varepsilon_{\ell}\varepsilon_{\ell,\tau} - \varepsilon_{\ell}^{2}\right) + \frac{(B_{2,3})_{\tau}}{B_{2,3}}\varepsilon_{\ell}^{2}\right) \quad\text{ as } r\rightarrow\infty. \label{Z6E8}
\end{equation}
Using now \eqref{Z6E6}--\eqref{Z6E7c} as well as \eqref{Z3E5}, \eqref{Z3E6}, \eqref{Z3E8}, \eqref{Z3E10}, \eqref{Z6E2}, and \eqref{Z6E3} we obtain the following asymptotics:
\begin{equation}
b_{1}(r) \sim \frac{C_{2}K_{2}B_{2,3}}{2}\left( \left( 2\varepsilon_{\ell}\varepsilon_{\ell,\tau} - \varepsilon_{\ell}^{2}\right) + \frac{(B_{2,3})_{\tau}}{B_{2,3}}\varepsilon_{\ell}^{2}\right)\log r \quad\text{ as } r\rightarrow\infty, \label{Z6E9a}
\end{equation}
\begin{equation}
b_{2}(r) \sim \frac{C_{2}K_{2}\kappa_{2}B_{2,3}}{3\cdot 2^{4}\left( \sqrt{2}+1\right)}\left( \left( 2\varepsilon_{\ell}\varepsilon_{\ell,\tau} - \varepsilon_{\ell}^{2}\right) + \frac{(B_{2,3})_{\tau}}{B_{2,3}}\varepsilon_{\ell}^{2}\right) r^{2\sqrt{2}+2} \quad\text{ as } r\rightarrow\infty. \label{Z6E9b}
\end{equation}
In the case of $b_{3}(r)$ we have that $\int_{0}^{\infty}\left\vert G_{2}(\eta)\right\vert \eta^{2}(\eta^{2}+1)^{2}\left\vert D_{2,3}(\eta)\right\vert d\eta < \infty$. Assuming that $\beta_{3} = O\left( \varepsilon_{\ell}^{4}\right)\ (w.l.a)$, as corresponds to this class of terms, we then obtain
\begin{equation}
b_{3}(r) \rightarrow B_{4,2} = \beta_{3} + \frac{1}{3\cdot 2^{12}}\int_{0}^{\infty}G_{2}(\eta)\eta^{2}(\eta^{2}+1)^{2}D_{2,3}(\eta)\,d\eta, \label{Z6E9c}
\end{equation}
where $B_{4,2} = O\left( \varepsilon_{\ell}^{4}\right)\ (w.l.a)$ uniformly for large $r$. Finally:
\begin{equation}
b_{4}(r) \sim -\frac{K_{2}B_{2,3}}{2^{4}\left( \sqrt{2}+1\right)}\left( \left( 2\varepsilon_{\ell}\varepsilon_{\ell,\tau} - \varepsilon_{\ell}^{2}\right) + \frac{(B_{2,3})_{\tau}}{B_{2,3}}\varepsilon_{\ell}^{2}\right) r^{2\sqrt{2}+2} \quad\text{ as } r\rightarrow\infty. \label{Z6E9d}
\end{equation}
We now compute the asymptotics of $Q_{4,2,2}(r),\ V_{4,2,2}(r)$. We use \eqref{Z3E5}, \eqref{Z3E6}, \eqref{Z3E8}, \eqref{Z3E10} combined with \eqref{Z6E9a}--\eqref{Z6E9d} to obtain the asymptotics of $V_{4,2,2}$ as in \eqref{Z6E10b} and
\begin{align}
Q_{4,2,2}(r) ={}& 16K_{2}B_{4,2}r^{2\sqrt{2}-2} - \frac{C_{2}K_{2}B_{2,3}}{\sqrt{2}+1}\left( \left( 2\varepsilon_{\ell}\varepsilon_{\ell,\tau} - \varepsilon_{\ell}^{2}\right) + \frac{(B_{2,3})_{\tau}}{B_{2,3}}\varepsilon_{\ell}^{2}\right) - \nonumber\\
& - \frac{\psi_{3}(r)}{3\cdot 2^{12}}\int_{r}^{\infty}G_{2}(\eta)\eta^{2}(\eta^{2}+1)^{2}D_{2,3}(\eta)\,d\eta + O\left( \frac{\varepsilon_{\ell}^{4}}{r^{4-2\sqrt{2}}}\right)\ \ (w.l.a) \label{Z6E9e}
\end{align}
as $r\rightarrow\infty$.
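Before estimating the remaining integral term, we record explicitly the leading-order cancellation in $b_{2}\omega_{2} + b_{4}\omega_{4}$ that enters the asymptotics of $V_{4,2,2}$; this short check uses only \eqref{Z3E6}, \eqref{Z3E10} (with $L=2$), \eqref{Z6E9b}, and \eqref{Z6E9d}, and the abbreviation
\[
X := \left( 2\varepsilon_{\ell}\varepsilon_{\ell,\tau} - \varepsilon_{\ell}^{2}\right) + \frac{(B_{2,3})_{\tau}}{B_{2,3}}\varepsilon_{\ell}^{2}
\]
is introduced here only for brevity. Since $\omega_{2}(r)\sim 3r^{-2}$ and $\omega_{4}(r) = C_{2}\kappa_{2}r^{-2} + O\left( r^{-2\sqrt{2}}\right)$ as $r\rightarrow\infty$, we get
\[
b_{2}(r)\omega_{2}(r) \sim \frac{C_{2}K_{2}\kappa_{2}B_{2,3}X}{2^{4}\left( \sqrt{2}+1\right)}r^{2\sqrt{2}}, \qquad
b_{4}(r)\omega_{4}(r) = -\frac{C_{2}K_{2}\kappa_{2}B_{2,3}X}{2^{4}\left( \sqrt{2}+1\right)}r^{2\sqrt{2}} + O\left( B_{2,3}Xr^{2}\right),
\]
so the contributions of order $r^{2\sqrt{2}}$ cancel, and only $b_{3}(r)\omega_{3}(r) \sim -4K_{2}B_{4,2}r^{2\sqrt{2}}$ survives at that order.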
This cancellation of the leading order of $b_{2}(r)\omega_{2}(r) + b_{4}(r)\omega_{4}(r)$ is the key point in the computation of the asymptotics of $V_{4,2,2}$. We now estimate the integral term on the right-hand side of \eqref{Z6E9e}. Its leading order is computed as
\[
\int_{r}^{\infty}G_{2}(\eta)\eta^{2}(\eta^{2}+1)^{2}D_{2,3}(\eta)\,d\eta \sim 2^{8}\cdot 3\,C_{2}B_{2,3}\left( \left( 2\varepsilon_{\ell}\varepsilon_{\ell,\tau} - \varepsilon_{\ell}^{2}\right) + \frac{(B_{2,3})_{\tau}}{B_{2,3}}\varepsilon_{\ell}^{2}\right)\frac{r^{2-2\sqrt{2}}}{\sqrt{2}-1}
\]
as $r\rightarrow\infty$. Combining this formula with \eqref{Z6E9e} as well as \eqref{Z3E8} we obtain \eqref{Q422}.
\end{proof}

\begin{remark}
It will be seen later, in Subsection \ref{Match4}, that in the outer region the terms $K_{2}B_{4,2}r^{2\sqrt{2}}$ would give a contribution to $\Phi$ of order $B_{4,2}\varepsilon_{\ell}^{-(2+2\sqrt{2})}\left\vert y-\bar{y}_{\ell}\right\vert^{2\sqrt{2}}$ in the outer variables. In order for this term to be smaller than $\varepsilon_{\ell}^{2}$ we would need $B_{4,2} = O\left( \varepsilon_{\ell}^{4+2\sqrt{2}}\right)$; that is, $B_{4,2}$ is very small in this region.
\end{remark}

\subsection{Computation of $\left( U_{4,2,3},W_{4,2,3}\right)$.}

We now compute the functions $U_{4,2,3}, W_{4,2,3}$, which satisfy \eqref{Z1E7}, \eqref{Z1E8} with $k=3$ together with the boundary conditions \eqref{Z2E1}, \eqref{Z2E2}. We can ignore the presence of homogeneous terms of the equations, because all such terms can be included in the parameters $B_{2,3},\ \bar{B}_{2,3}$ (cf.\ \eqref{M2E2}). We can also assume that the angular dependence of the functions $U_{4,2,3}, W_{4,2,3}$ is $\cos\left( 4\theta\right)$:
\begin{equation}
U_{4,2,3} = Q_{4,2,3}(r,\tau)\cos\left( 4\theta\right), \quad W_{4,2,3} = V_{4,2,3}(r,\tau)\cos\left( 4\theta\right). \label{Z7E1}
\end{equation}
Plugging \eqref{Z7E1} into the system \eqref{Z1E7}, \eqref{Z1E8} with $k=3$, we obtain:
\begin{subequations}
\begin{align}
0 &= \frac{1}{r}\frac{\partial}{\partial r}\left( r\frac{\partial Q_{4,2,3}}{\partial r}\right) - \frac{16}{r^{2}}Q_{4,2,3} - \frac{du_{s}}{dr}\frac{\partial V_{4,2,3}}{\partial r} + 2u_{s}Q_{4,2,3} - \frac{\partial Q_{4,2,3}}{\partial r}\frac{dv_{s}}{dr} + G_{3}, \label{Z7E2}\\
0 &= \frac{1}{r}\frac{\partial}{\partial r}\left( r\frac{\partial V_{4,2,3}}{\partial r}\right) - \frac{16}{r^{2}}V_{4,2,3} + Q_{4,2,3}. \label{Z7E3}
\end{align}
The conditions \eqref{Z2E1} and \eqref{Z2E2} respectively imply
\begin{equation}
Q_{4,2,3}(0,\tau) = 0, \quad V_{4,2,3}(0,\tau) = 0. \label{Z2E23bound}
\end{equation}
\end{subequations}

\begin{lemma}
The problem \eqref{Z7E2}-\eqref{Z7E3}-\eqref{Z2E23bound} has a two-dimensional family of solutions, parametrized by $c_{1}(\infty),\ c_{3}(\infty)$.
Moreover, their asymptotics as $r\to\infty$ are given by
\begin{subequations}
\begin{align}
Q_{4,2,3}(r,\tau) &= 16K_{4}r^{2\sqrt{5}-2}c_{3}(\infty) + \left( 24c_{1}(\infty) + \sqrt{5}\,C_{4}K_{4}\left( B_{2,3}\right)^{2}\right) + o(1), \label{F-Q423}\\
V_{4,2,3}(r,\tau) &= -4K_{4}c_{3}(\infty)r^{2\sqrt{5}} + 3c_{1}(\infty)r^{4} + \left[ 2c_{1}(\infty) - \frac{\sqrt{5}\,C_{4}K_{4}\left( B_{2,3}\right)^{2}}{24}\right] r^{2} + o\left( r^{2}\right) \label{F-V423}
\end{align}
\end{subequations}
as $r\rightarrow\infty$, where $C_{4}$ and $K_{4}$ are the constants of Theorem \ref{Linear} and $B_{2,3}=B_{2,3}(\tau)$ is the parameter in \eqref{M2E2}.
\end{lemma}

\begin{proof}
For the same reason as in the proof of Lemma \ref{Lem80} we avoid writing the dependence on $\tau$ explicitly in the proof unless it is needed. The solutions of the homogeneous equations associated to \eqref{Z7E2}, \eqref{Z7E3} have been described in Theorem \ref{Linear}. We then look for solutions of \eqref{Z7E2}, \eqref{Z7E3} in the form
\[
Q_{4,2,3}(r) = \sum_{i=1}^{4}c_{i}(r)\psi_{i}(r), \quad V_{4,2,3}(r) = \sum_{i=1}^{4}c_{i}(r)\omega_{i}(r)
\]
under the constraint
\[
\sum_{i=1}^{4}c_{i}'(r)\psi_{i}(r) = \sum_{i=1}^{4}c_{i}'(r)\omega_{i}(r) = 0.
\]
Arguing as in the proof of Lemma \ref{Lem80}, we then obtain
\[
c_{k}'(r) = \frac{(-1)^{k}G_{3}(r)D_{4,k}(r)}{\Delta_{4}(r)},
\]
where
\[
D_{4,m}(r) = \left\vert
\begin{array}[c]{ccc}
\psi_{i}(r) & \psi_{j}(r) & \psi_{k}(r)\\
\omega_{i}(r) & \omega_{j}(r) & \omega_{k}(r)\\
\omega_{i}'(r) & \omega_{j}'(r) & \omega_{k}'(r)
\end{array}
\right\vert, \quad i,j,k\in\left\{ 1,2,3,4\right\}\setminus\left\{ m\right\}, \quad i<j<k,
\]
and where $\Delta_{4}(r)$ is the Wronskian given in \eqref{Delta-L} with $L=4$. Hence:
\begin{equation}
c_{m}'(r) = \frac{(-1)^{m+1}r^{2}(r^{2}+1)^{2}G_{3}(r)D_{4,m}(r)}{2^{14}\cdot 3\cdot 5}, \quad m=1,2,3,4. \label{G1E1}
\end{equation}
By Theorem \ref{Linear} we obtain the following asymptotics:
\begin{subequations}
\begin{align}
D_{4,1}(r) &\sim -\frac{2^{7}\cdot 3}{r^{5}}, \qquad D_{4,2}(r) \sim -2^{7}\cdot 5\cdot r^{3}, \label{G1E2a}\\
D_{4,3}(r) &\sim -\frac{2^{7}\cdot 3\cdot 5}{r^{5}}, \qquad D_{4,4}(r) \sim -2^{7}\cdot 3\cdot 5\cdot r^{3} \label{G1E2b}
\end{align}
\end{subequations}
as $r\to 0$. Using now \eqref{M2E1} and \eqref{Z1E6} we have:
\begin{equation}
G_{3}(r) \sim 2^{8}\cdot 3\cdot\left( B_{2,3}\right)^{2}r^{4} \quad\text{ as } r\rightarrow 0. \label{G1E3}
\end{equation}
Combining \eqref{G1E1} with \eqref{G1E2a}, \eqref{G1E2b} we deduce the asymptotics
\begin{align*}
c_{1}'(r) &\sim -\frac{6\left( B_{2,3}\right)^{2}r}{5}, \qquad c_{2}'(r) \sim 2\left( B_{2,3}\right)^{2}r^{9},\\
c_{3}'(r) &\sim -6\left( B_{2,3}\right)^{2}r, \qquad c_{4}'(r) \sim 6\left( B_{2,3}\right)^{2}r^{9}
\end{align*}
as $r\to 0$.
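As a sanity check of these rates (using only the constants displayed in \eqref{G1E1}--\eqref{G1E3}), consider $m=3$: since $(r^{2}+1)^{2} = 1+O(r^{2})$ as $r\to 0$,
\[
c_{3}'(r) = \frac{(-1)^{4}r^{2}(r^{2}+1)^{2}G_{3}(r)D_{4,3}(r)}{2^{14}\cdot 3\cdot 5}
\sim \frac{r^{2}\cdot 2^{8}\cdot 3\left( B_{2,3}\right)^{2}r^{4}\cdot\left( -\frac{2^{7}\cdot 3\cdot 5}{r^{5}}\right)}{2^{14}\cdot 3\cdot 5}
= -\frac{2^{15}\cdot 3^{2}\cdot 5\left( B_{2,3}\right)^{2}r}{2^{14}\cdot 3\cdot 5} = -6\left( B_{2,3}\right)^{2}r,
\]
in agreement with the third formula above; the remaining three rates follow in the same way.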
Using then the conditions \eqref{Z2E23bound} as well as \eqref{Z3E6} and \eqref{Z3E10}, we arrive at
\begin{subequations}
\begin{align}
c_{1}(r) &= \gamma_{1} + \frac{1}{2^{14}\cdot 3\cdot 5}\int_{0}^{r}\xi^{2}(\xi^{2}+1)^{2}G_{3}(\xi)D_{4,1}(\xi)\,d\xi, \label{ciasympt1}\\
c_{2}(r) &= -\frac{1}{2^{14}\cdot 3\cdot 5}\int_{0}^{r}\xi^{2}(\xi^{2}+1)^{2}G_{3}(\xi)D_{4,2}(\xi)\,d\xi, \label{ciasympt2}\\
c_{3}(r) &= \gamma_{3} + \frac{1}{2^{14}\cdot 3\cdot 5}\int_{0}^{r}\xi^{2}(\xi^{2}+1)^{2}G_{3}(\xi)D_{4,3}(\xi)\,d\xi, \label{ciasympt3}\\
c_{4}(r) &= -\frac{1}{2^{14}\cdot 3\cdot 5}\int_{0}^{r}\xi^{2}(\xi^{2}+1)^{2}G_{3}(\xi)D_{4,4}(\xi)\,d\xi. \label{ciasympt4}
\end{align}
\end{subequations}
We then proceed to compute the asymptotics of $c_{m}(r)$ as $r\rightarrow\infty$. To this end, we compute the asymptotics of $G_{3}(r)$ and $D_{4,m}(r)$ as $r\rightarrow\infty$. Using \eqref{M2E1} and \eqref{Z1E6} we obtain:
\begin{equation}
G_{3}(r) = \frac{32\left( B_{2,3}\right)^{2}}{r^{2}}\left( 1+O\left( \frac{1}{r^{2}}\right)\right) \quad\text{ as } r\rightarrow\infty. \label{G3asp}
\end{equation}
In order to compute the asymptotics of $D_{4,1}(r)$ as $r\rightarrow\infty$ we write:
\begin{equation}
D_{4,1}(r) = \psi_{2}(r)\left\vert
\begin{array}[c]{cc}
\omega_{3}(r) & \omega_{4}(r)\\
\omega_{3}'(r) & \omega_{4}'(r)
\end{array}
\right\vert
- \psi_{3}(r)\left\vert
\begin{array}[c]{cc}
\omega_{2}(r) & \omega_{4}(r)\\
\omega_{2}'(r) & \omega_{4}'(r)
\end{array}
\right\vert
+ \psi_{4}(r)\left\vert
\begin{array}[c]{cc}
\omega_{2}(r) & \omega_{3}(r)\\
\omega_{2}'(r) & \omega_{3}'(r)
\end{array}
\right\vert. \label{G1E4}
\end{equation}
We now have, using Theorem \ref{Linear}, the following asymptotics:
\begin{subequations}
\begin{align}
\left\vert
\begin{array}[c]{cc}
\omega_{3}(r) & \omega_{4}(r)\\
\omega_{3}'(r) & \omega_{4}'(r)
\end{array}
\right\vert &= 8C_{4}K_{4}\kappa_{4}\left( 2+\sqrt{5}\right) r^{2\sqrt{5}-5}\left( 1+o(1)\right),\\
\left\vert
\begin{array}[c]{cc}
\omega_{2}(r) & \omega_{3}(r)\\
\omega_{2}'(r) & \omega_{3}'(r)
\end{array}
\right\vert &= -40K_{4}\left( 2+\sqrt{5}\right) r^{2\sqrt{5}-5}\left( 1+o(1)\right)
\end{align}
\end{subequations}
as $r\to\infty$. In the computation of the second term on the right-hand side of \eqref{G1E4} we must take into account the cancellation of the leading order terms. Theorem \ref{Linear} yields:
\[
\left\vert
\begin{array}[c]{cc}
\omega_{2}(r) & \omega_{4}(r)\\
\omega_{2}'(r) & \omega_{4}'(r)
\end{array}
\right\vert
= \left\vert
\begin{array}[c]{cc}
5r^{-4}\left( 1+O\left( r^{-2}\right)\right) & C_{4}\left( \kappa_{4}r^{-4} - 4r^{-2\sqrt{5}} + O\left( r^{-6}\right)\right)\\
-20r^{-5}\left( 1+O\left( r^{-2}\right)\right) & C_{4}\left( -4\kappa_{4}r^{-5} + 8\sqrt{5}r^{-2\sqrt{5}-1} + O\left( r^{-7}\right)\right)
\end{array}
\right\vert
\]
as $r\to\infty$.
Using that $\sqrt{4+L^{2}}-L<2$ as well as the fact that $L=4$, we obtain
\begin{equation}
\left\vert
\begin{array}[c]{cc}
\omega_{2}(r) & \omega_{4}(r)\\
\omega_{2}'(r) & \omega_{4}'(r)
\end{array}
\right\vert = 40\left( \sqrt{5}-2\right) C_{4}r^{-2\sqrt{5}-5} + O\left( r^{-10}\right) \label{G1E5a}
\end{equation}
as $r\to\infty$. The use of \eqref{G1E4}--\eqref{G1E5a} as well as the asymptotics of $\psi_{i}$ in Theorem \ref{Linear} yields
\begin{subequations}
\begin{equation}
D_{4,1}(r) = -5\cdot 2^{8}\cdot\sqrt{5}\,C_{4}K_{4}r^{-7}\left( 1+o(1)\right) \label{G1E5}
\end{equation}
as $r\to\infty$. On the other hand, a direct computation using Theorem \ref{Linear} gives:
\begin{align}
D_{4,2}(r) &= 3\cdot 2^{7}C_{4}K_{4}\kappa_{4}r^{2\sqrt{5}-3} - 3\cdot 2^{8}\cdot\sqrt{5}\,C_{4}K_{4}r\left( 1+o(1)\right), \label{G1E6}\\
D_{4,3}(r) &= -5\cdot 3\cdot 2^{7}C_{4}r^{-2\sqrt{5}-3}\left( 1+o(1)\right), \label{G1E7}\\
D_{4,4}(r) &= -5\cdot 3\cdot 2^{7}K_{4}r^{2\sqrt{5}-3} - 5\cdot 3\cdot 2^{8}K_{4}r^{2\sqrt{5}-5}\left( 1+o(1)\right) \label{G1E8}
\end{align}
\end{subequations}
as $r\to\infty$. We can now compute the asymptotics of the functions $c_{m}(r)$ as $r\rightarrow\infty$. Since $c_{1}(r)$ and $c_{3}(r)$ converge to finite limits as $r\rightarrow\infty$, we may write them respectively as
\begin{subequations}
\begin{align}
c_{1}(r) &= c_{1}(\infty) - \frac{1}{2^{14}\cdot 3\cdot 5}\int_{r}^{\infty}\xi^{2}(\xi^{2}+1)^{2}G_{3}(\xi)D_{4,1}(\xi)\,d\xi, \label{c1asp71}\\
c_{3}(r) &= c_{3}(\infty) - \frac{1}{2^{14}\cdot 3\cdot 5}\int_{r}^{\infty}\xi^{2}(\xi^{2}+1)^{2}G_{3}(\xi)D_{4,3}(\xi)\,d\xi, \label{c1asp73}
\end{align}
with
\begin{align}
c_{1}(\infty) &= \gamma_{1} + \frac{1}{2^{14}\cdot 3\cdot 5}\int_{0}^{\infty}\xi^{2}(\xi^{2}+1)^{2}G_{3}(\xi)D_{4,1}(\xi)\,d\xi,\\
c_{3}(\infty) &= \gamma_{3} + \frac{1}{2^{14}\cdot 3\cdot 5}\int_{0}^{\infty}\xi^{2}(\xi^{2}+1)^{2}G_{3}(\xi)D_{4,3}(\xi)\,d\xi.
\end{align}
\end{subequations}
Since \eqref{G3asp} and \eqref{G1E5} imply
\[
\frac{1}{2^{14}\cdot 3\cdot 5}\int_{r}^{\infty}\xi^{2}(\xi^{2}+1)^{2}G_{3}(\xi)D_{4,1}(\xi)\,d\xi = -\frac{\sqrt{5}\,C_{4}K_{4}\left( B_{2,3}\right)^{2}}{2\cdot 3\cdot 4}\frac{1}{r^{2}}\left( 1+o(1)\right),
\]
it follows from \eqref{c1asp71} that
\[
c_{1}(r) = c_{1}(\infty) + \frac{\sqrt{5}\,C_{4}K_{4}\left( B_{2,3}\right)^{2}}{2\cdot 3\cdot 4}\frac{1}{r^{2}}\left( 1+o(1)\right) \quad\text{ as } r\rightarrow\infty,
\]
whence:
\begin{equation}
c_{1}(r)\omega_{1}(r) = 3c_{1}(\infty)r^{4} + \left[ \frac{\sqrt{5}\,C_{4}K_{4}\left( B_{2,3}\right)^{2}}{8} + 2c_{1}(\infty)\right] r^{2} + o(r^{2}) \quad\text{ as } r\rightarrow\infty.
\label{Y6E2}
\end{equation}
The full formulas for $c_{2}(r), c_{4}(r)$ given in \eqref{ciasympt2}, \eqref{ciasympt4} as well as Theorem \ref{Linear} show that
\begin{align*}
& c_{2}(r)\omega_{2}(r) + c_{4}(r)\omega_{4}(r)\\
={}& -\frac{1}{2^{14}\cdot 3}\frac{1}{r^{4}}\int_{0}^{r}\xi^{2}(\xi^{2}+1)^{2}G_{3}(\xi)\left[ D_{4,2}(\xi) + \frac{C_{4}\kappa_{4}D_{4,4}(\xi)}{5}\right] d\xi -\\
& -\frac{1}{2^{14}\cdot 3}\frac{1}{r^{4}}O\left( \frac{1}{r^{2}}\right)\int_{0}^{r}\xi^{2}(\xi^{2}+1)^{2}G_{3}(\xi)D_{4,2}(\xi)\,d\xi +\\
& +\frac{C_{4}}{2^{14}\cdot 3\cdot 5}\frac{1}{r^{4}}\left( 4r^{-2\sqrt{5}+4} + o\left( \frac{1}{r^{2}}\right)\right)\int_{0}^{r}\xi^{2}(\xi^{2}+1)^{2}G_{3}(\xi)D_{4,4}(\xi)\,d\xi\\
\equiv{}& -I_{1} - I_{2} + I_{3}.
\end{align*}
A quick check using \eqref{G3asp} and Theorem \ref{Linear} shows that $I_{2}$ grows at most at the rate $O(r^{2\sqrt{5}-4})$ as $r\rightarrow\infty$. On the other hand, it is readily seen from \eqref{G3asp} and \eqref{G1E8} that
\[
I_{3} = -\frac{C_{4}K_{4}\left( B_{2,3}\right)^{2}}{2\left( \sqrt{5}+1\right)}r^{2}\left( 1+o(1)\right)
\]
as $r\rightarrow\infty$. To compute $I_{1}$ we note that
\[
D_{4,2}(r) + \frac{C_{4}\kappa_{4}D_{4,4}(r)}{5} = -3\cdot 2^{8}\cdot\sqrt{5}\,C_{4}K_{4}r\left( 1+o(1)\right)
\]
as $r\to\infty$, due to \eqref{G1E6} and \eqref{G1E8}. Using this as well as \eqref{G3asp}, we obtain:
\[
-I_{1} = \frac{\sqrt{5}\,C_{4}K_{4}\left( B_{2,3}\right)^{2}}{12}r^{2}\left( 1+o(1)\right)
\]
as $r\rightarrow\infty$. Summarizing, we have obtained the asymptotics
\begin{equation}
c_{2}(r)\omega_{2}(r) + c_{4}(r)\omega_{4}(r) = \frac{C_{4}K_{4}\left( B_{2,3}\right)^{2}}{12}\cdot\frac{\sqrt{5}-1}{\sqrt{5}+1}r^{2} + o\left( r^{2}\right) \label{Y6E2alpha}
\end{equation}
as $r\rightarrow\infty$. Since
\[
c_{3}(r) = c_{3}(\infty) - \frac{1}{2^{14}\cdot 3\cdot 5}\int_{r}^{\infty}\xi^{2}(\xi^{2}+1)^{2}G_{3}(\xi)D_{4,3}(\xi)\,d\xi,
\]
it follows from \eqref{G3asp} and \eqref{G1E7} that
\[
c_{3}(r) = c_{3}(\infty) + \frac{C_{4}\left( B_{2,3}\right)^{2}}{2^{3}\left( \sqrt{5}-1\right)}r^{-2\sqrt{5}+2}\left( 1+o(1)\right)
\]
as $r\to\infty$, whence:
\begin{equation}
c_{3}(r)\omega_{3}(r) = -4K_{4}c_{3}(\infty)r^{2\sqrt{5}} - \frac{C_{4}K_{4}\left( B_{2,3}\right)^{2}}{2\left( \sqrt{5}-1\right)}r^{2}\left( 1+o(1)\right) \label{Y6E2beta}
\end{equation}
as $r\to\infty$. We can then compute the whole asymptotics of $V_{4,2,3}(r)$ as in \eqref{F-V423}. By \eqref{Y6E2}--\eqref{Y6E2beta} we conclude \eqref{F-Q423}.
\end{proof}

\begin{remark}
To see the contribution due to $c_{3}(r)\omega_{3}(r)$ we need to study the asymptotics of $Q_{4,2,3}$. Using \eqref{Z3E8} we obtain the matching condition
\[
Q_{4,2,3}(r,\tau) \sim 16c_{3}(\infty)K_{4}r^{2\sqrt{5}-2} \quad\text{ as } r\rightarrow\infty.
\]
This contribution would give terms of order $\varepsilon_{\ell}^{4-2\sqrt{5}+2} = \varepsilon_{\ell}^{6-2\sqrt{5}}\gg\varepsilon_{\ell}^{2}$ in the self-similar region where $\vert y-\bar{y}_{\ell}\vert$ is of order one.
Then the contribution of this term to $\Phi$ would be much larger than one unless $c_{3}(\infty)$ is small as $\tau\rightarrow\infty$. It will be seen in Subsection \ref{Match4} that $c_{3}(\infty) = O\left( \varepsilon_{\ell}^{2\sqrt{5}+2}\right)$.
\end{remark}

\section{Outer expansions.\label{outer}}

In the analysis of the inner expansions we derived the asymptotics of the solution for a general number of peaks, but we will compute the outer expansions only in the case of two peaks, for the reason stated in the previous sections. In the analysis of the inner expansions the singularities were denoted by $\{y_{\ell}\}_{\ell=1}^{N}$, a solution of \eqref{U1E2}. In the particular case $N=2$ we will write $y_{1}=\mathbf{a}$ and $y_{2}=-\mathbf{a}$ with $\vert\mathbf{a}\vert = 2$. We may assume, without loss of generality, that $\mathbf{a}=(2,0)$.

In this section we derive outer expansions for the solution of \eqref{S1E1}, \eqref{S1E2}, i.e.\ for regions where $\left\vert y\right\vert$ is of order one. To this end we argue as in the derivation of (3.40)--(3.48) in \cite{V1}. We look for expansions of the form:
\begin{subequations}
\begin{align}
\Phi(y,\tau) &= \varepsilon_{\ell}^{2}\Omega(y) + \varepsilon_{\ell}\varepsilon_{\ell,\tau}Z(y) + \cdots, \label{Y1E1}\\
W(y,\tau) &= \mathcal{W}_{0}(y,\tau) + \varepsilon_{\ell}^{2}\mathcal{W}_{1}(y,\tau) + \cdots. \label{Y1E1a}
\end{align}
\end{subequations}
Then, to the leading order, we obtain
\[
-\Delta\mathcal{W}_{0} = 0, \quad y\neq y_{\ell},
\]
with matching condition
\[
\nabla_{y}\mathcal{W}_{0}(y) \sim -\frac{4(y\mp\mathbf{a})}{|y\mp\mathbf{a}|^{2}} \quad\text{ as } y\rightarrow\pm\mathbf{a},
\]
where $\mathbf{a}=(2,0)$. Then
\begin{equation}
\nabla_{y}\mathcal{W}_{0}(y) = -\left( \frac{4(y-\mathbf{a})}{|y-\mathbf{a}|^{2}} + \frac{4(y+\mathbf{a})}{|y+\mathbf{a}|^{2}}\right). \label{Y2E4}
\end{equation}
Neglecting the terms of order $\varepsilon_{\ell}^{2}$ we obtain
\begin{equation}
\Phi_{\tau} = \Delta\Phi - \frac{y\cdot\nabla\Phi}{2} + \left( \frac{4(y-\mathbf{a})}{|y-\mathbf{a}|^{2}} + \frac{4(y+\mathbf{a})}{|y+\mathbf{a}|^{2}}\right)\cdot\nabla\Phi - \Phi. \label{Y1E2}
\end{equation}
Plugging the expansion \eqref{Y1E1} into \eqref{Y1E2} we obtain the equations
\begin{equation}
L(\Omega) = 0, \quad y\neq\pm\mathbf{a}, \label{Y1E3}
\end{equation}
\begin{equation}
L(Z) = 2\Omega, \quad y\neq\pm\mathbf{a}, \label{M1E1}
\end{equation}
where
\begin{equation}
L(\Omega) = \Delta\Omega - \frac{y\cdot\nabla\Omega}{2} + \left( \frac{4(y-\mathbf{a})}{|y-\mathbf{a}|^{2}} + \frac{4(y+\mathbf{a})}{|y+\mathbf{a}|^{2}}\right)\cdot\nabla\Omega - \Omega. \label{Y1E4}
\end{equation}
Due to \eqref{S4E2} and \eqref{S5E7} we should impose the following matching condition on $\Omega$:
\begin{equation}
\Omega(y) \sim \frac{8}{\left\vert y\mp\mathbf{a}\right\vert^{4}} \quad\text{ as } y\rightarrow\pm\mathbf{a}. \label{Y1E5}
\end{equation}
On the other hand, in order to obtain matchings in the regions where $\left\vert x\right\vert$ is of order one, we must assume that $\Omega(y)$ grows at most algebraically as $\left\vert y\right\vert\rightarrow\infty$. The function $\Omega$ cannot be computed by means of a closed formula as in the radial case considered in \cite{V1}. Nevertheless, it is possible to prove that the problem \eqref{Y1E3}-\eqref{Y1E4}-\eqref{Y1E5} defines uniquely a function $\Omega$ with the properties required to describe the leading asymptotics of $\Phi$ in the outer region.
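Although no closed formula for $\Omega$ is available, the truncated boundary value problems used in the proof of Lemma \ref{Prop1} below (see \eqref{T3E2a}--\eqref{T3E3c}) suggest a direct numerical route to $\Omega$ and to the constant $A$ of \eqref{T2E5b}: solve $L(\Omega_{\delta,R})=0$ on $D_{\delta,R}$ with the singular expansion as Dirichlet data on the inner circles and read off the remainder near $\mathbf{a}$. The following sketch (in Python with NumPy/SciPy, and therefore not the Matlab PDE Toolbox computation mentioned in Remark \ref{RemonA} below; the mesh parameters, the crude imposition of the inner boundary data on a Cartesian grid, and the read-off radius are all illustrative assumptions, not taken from \cite{V1}) indicates how such a computation can be organized:
\begin{verbatim}
import numpy as np
from scipy.sparse import lil_matrix
from scipy.sparse.linalg import spsolve

# Illustrative parameters only; none of these values come from the text.
R, delta, h = 8.0, 0.5, 0.1      # outer radius, excised radius, mesh size
D1 = D2 = 8.0                    # strengths, as in Remark RemonA

# Cell-centered grid, so no node coincides with the poles (+-2, 0).
M = int(round(2.0 * (R + 2.0 * h) / h))
xs = (np.arange(M) + 0.5) * h - (R + 2.0 * h)
X, Y = np.meshgrid(xs, xs, indexing="ij")
r0, r1, r2 = np.hypot(X, Y), np.hypot(X - 2.0, Y), np.hypot(X + 2.0, Y)

def sing(x, y, s):
    """Dirichlet data 1/|Y|^4 + Psi1 + s*Psi2 + Psi3 near the pole (2s, 0)."""
    Yx, Yy = x - 2.0 * s, y
    q = Yx**2 + Yy**2                # |Y|^2
    aY = 2.0 * Yx                    # a . Y with a = (2, 0)
    p1 = (2.0 / q + aY**2 / q**2) / 16.0
    p2 = aY / (96.0 * q) * (3.0 - aY**2 / q)
    p3 = aY**4 / (256.0 * q**2)
    return 1.0 / q**2 + p1 + s * p2 + p3

interior = (r0 < R) & (r1 > delta) & (r2 > delta)
g = np.zeros_like(X)                 # Dirichlet values on non-interior nodes
g[r1 <= delta] = D1 * sing(X[r1 <= delta], Y[r1 <= delta], +1.0)
g[r2 <= delta] = D2 * sing(X[r2 <= delta], Y[r2 <= delta], -1.0)

# Drift of L:  b(y) = -y/2 + 4(y-a)/|y-a|^2 + 4(y+a)/|y+a|^2.
bx = -X / 2.0 + 4.0 * (X - 2.0) / r1**2 + 4.0 * (X + 2.0) / r2**2
by = -Y / 2.0 + 4.0 * Y / r1**2 + 4.0 * Y / r2**2

idx = -np.ones(X.shape, dtype=int)
ii, jj = np.where(interior)
idx[ii, jj] = np.arange(ii.size)
Lh = lil_matrix((ii.size, ii.size))
rhs = np.zeros(ii.size)
for k in range(ii.size):             # 5-point Laplacian + centred drift
    i, j = ii[k], jj[k]
    Lh[k, k] = -4.0 / h**2 - 1.0     # Laplacian diagonal and the -Omega term
    for di, dj, b in ((1, 0, bx[i, j]), (-1, 0, -bx[i, j]),
                      (0, 1, by[i, j]), (0, -1, -by[i, j])):
        c = 1.0 / h**2 + b / (2.0 * h)
        n = idx[i + di, j + dj]
        if n >= 0:
            Lh[k, n] = c
        else:
            rhs[k] -= c * g[i + di, j + dj]   # known boundary neighbour
om = g.copy()
om[interior] = spsolve(Lh.tocsr(), rhs)

# Crude read-off of the constant A from (T2E5b) on a thin ring around +a.
ring = np.abs(r1 - 0.8) < h
print("A approx:", np.mean(om[ring] - D1 * sing(X[ring], Y[ring], +1.0)) / D1)
\end{verbatim}
The printed value is only a rough consistency check: one should extrapolate $\delta\to 0$, $R\to\infty$ and shrink the read-off ring before comparing quantitatively with the range $A\in(-1,-0.9)$ reported in Remark \ref{RemonA}.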
More precisely, the following analytical result holds.

\begin{lemma}\label{Prop1}
Assume that $\left\vert\mathbf{a}\right\vert = 2$. Then for every $D_{1}, D_{2}\in\mathbb{R}$ there exists a unique solution of \eqref{Y1E3}, \eqref{Y1E4} satisfying:
\begin{align}
\Omega(y) &\sim \frac{D_{1}}{\left\vert y-\mathbf{a}\right\vert^{4}} \quad\text{ as } y\rightarrow\mathbf{a}, \quad \Omega(y) \sim \frac{D_{2}}{\left\vert y+\mathbf{a}\right\vert^{4}} \quad\text{ as } y\rightarrow-\mathbf{a}, \label{Y1E6a}\\
\left\vert\Omega(y)\right\vert &\leq \left\vert y\right\vert^{m} \quad\text{ for } \left\vert y\right\vert\geq 5 \text{ and some } m>0. \label{Y1E6b}
\end{align}
Moreover, the asymptotics of $\Omega$ near the singular points $\pm\mathbf{a}$ are given by:
\begin{subequations}
\begin{align}
\Omega(y) &\sim D_{1}\left[ \frac{1}{\left\vert y-\mathbf{a}\right\vert^{4}} + \Psi_{1}(y-\mathbf{a}) + \Psi_{2}(y-\mathbf{a}) + \Psi_{3}(y-\mathbf{a}) + A\right], \label{T2E5b}\\
\Omega(y) &\sim D_{2}\left[ \frac{1}{\left\vert y+\mathbf{a}\right\vert^{4}} + \Psi_{1}(y+\mathbf{a}) - \Psi_{2}(y+\mathbf{a}) + \Psi_{3}(y+\mathbf{a}) + A\right], \label{T2E5}
\end{align}
\end{subequations}
where $A=A\left( D_{1},D_{2}\right)$ is a constant and
\[
\Psi_{1}(Y) = \frac{1}{16}\left[ \frac{2}{\left\vert Y\right\vert^{2}} + \frac{(\mathbf{a}\cdot Y)^{2}}{\left\vert Y\right\vert^{4}}\right], \quad
\Psi_{2}(Y) = \frac{(\mathbf{a}\cdot Y)}{96\left\vert Y\right\vert^{2}}\left[ 3 - \frac{(\mathbf{a}\cdot Y)^{2}}{\left\vert Y\right\vert^{2}}\right], \quad
\Psi_{3}(Y) = \frac{1}{256}\frac{(\mathbf{a}\cdot Y)^{4}}{\left\vert Y\right\vert^{4}}.
\]
\end{lemma}

\begin{remark}\label{RemonA}
Concerning the constant $A$ in the asymptotics \eqref{T2E5b}, \eqref{T2E5}, we have computed its value using the PDE Toolbox solver from the Matlab package. We have observed that the numerical value of $A$ lies between $-1$ and $-0.9$ for $D_{1}=D_{2}=8$. The crucial fact is that $A<0$: this negativity is a sufficient condition to ensure that a certain differential equation satisfied by $\varepsilon_{\ell}$ has solutions approaching zero as $\tau\rightarrow\infty$ (see Subsection \ref{ODE}).
\end{remark}

\begin{remark}
A result analogous to Lemma \ref{Prop1} could be proved by similar methods for singularities of $\Omega$ at an arbitrary number of points, or more precisely for operators of the form
\begin{equation}
\bar{L}(\Omega) = \Delta\Omega - \frac{y\cdot\nabla\Omega}{2} + \sum_{j=1}^{N}\frac{4(y-\mathbf{a}_{j})}{\left\vert y-\mathbf{a}_{j}\right\vert^{2}}\cdot\nabla\Omega - \Omega \label{T1E3a}
\end{equation}
with $\mathbf{a}_{j}\in\mathbb{R}^{2}$, $j=1,\dots,N$, $\mathbf{a}_{j}\neq\mathbf{a}_{k}$ for $j\neq k$, satisfying
\begin{equation}
\frac{\mathbf{a}_{k}}{2} = \sum_{j=1,\ j\neq k}^{N}\frac{4(\mathbf{a}_{k}-\mathbf{a}_{j})}{\left\vert\mathbf{a}_{k}-\mathbf{a}_{j}\right\vert^{2}}. \label{T1E3}
\end{equation}
If \eqref{T1E3} did not hold, the asymptotics \eqref{T2E5b}, \eqref{T2E5} would contain additional terms with the homogeneity of $1/\left\vert y-\mathbf{a}_{j}\right\vert^{3}$. Actually the condition $\left\vert\mathbf{a}\right\vert = 2$ in Lemma \ref{Prop1} is just the condition \eqref{T1E3} in the case of two peaks, i.e.\ $N=2$.
The functions $\Psi_{i}$ that would appear in the study of the general case \eqref{T1E3a} have homogeneity properties similar to the ones described in Lemma \ref{Prop1}, but slightly different functional forms.
\end{remark}

\begin{remark}
A characteristic feature of the expansions \eqref{T2E5b}, \eqref{T2E5} is the absence of logarithmic terms in $\Psi_{3}$. Very likely this property holds in general under the assumption \eqref{T1E3} for every integer $N\geq 2$; we prove this absence of logarithmic terms, however, only in the case of two peaks.
\end{remark}

\begin{proof}
In what follows we write $Y_{1}=y-\mathbf{a}$, $r_{1}=\left\vert Y_{1}\right\vert$, $Y_{2}=y+\mathbf{a}$, $r_{2}=\left\vert Y_{2}\right\vert$ for notational simplicity. The key point is to define suitable sub- and supersolutions having the expected asymptotics near the singular points. To this end we define auxiliary functions $\hat{W}_{j}$ by:
\begin{equation}
\hat{W}_{j}(y) = \left[ \frac{1}{r_{j}^{4}} + \Psi_{1}(Y_{j}) + (-1)^{j+1}\Psi_{2}(Y_{j}) + \Psi_{3}(Y_{j}) + \omega_{j}(Y_{j})\right]\eta(Y_{j}), \quad j=1,2, \label{T3E0}
\end{equation}
where $\eta(\xi)$ is a $C^{\infty}$ cutoff function satisfying $\eta(\xi)=1$ for $\left\vert\xi\right\vert\leq 1$, $\eta(\xi)=0$ for $\left\vert\xi\right\vert\geq 2$, $0\leq\eta\leq 1$, and where the functions $\omega_{j}(Y_{j})$ will be defined later. We construct a supersolution $\Omega^{+}$ and a subsolution $\Omega^{-}$ of the form:
\begin{subequations}
\begin{align}
\Omega^{+}(y) &= D_{1}\hat{W}_{1}(y) + D_{2}\hat{W}_{2}(y) + K, \label{T3E0super}\\
\Omega^{-}(y) &= D_{1}\hat{W}_{1}(y) + D_{2}\hat{W}_{2}(y) - K \label{T3E0sub}
\end{align}
\end{subequations}
with a constant $K>0$ to be selected later. Some explicit but rather tedious computations yield:
\begin{align}
L\left( \frac{1}{r_{j}^{4}} + \Psi_{1}(Y_{j})\right) ={}& -\frac{(\mathbf{a}\cdot Y_{j})^{2}}{2r_{j}^{4}r_{\tau(j)}^{2}} - \frac{2(\mathbf{a}\cdot Y_{j})^{3}}{r_{j}^{6}r_{\tau(j)}^{2}} - \frac{1}{2}\Psi_{1}(Y_{j}) + \frac{(Y_{1}\cdot Y_{2})}{r_{j}^{4}r_{\tau(j)}^{2}} + \frac{4(\mathbf{a}\cdot Y_{j})}{r_{j}^{4}r_{\tau(j)}^{2}} +\nonumber\\
& + \frac{1}{64r_{j}^{4}}\left[ 1 - \frac{16}{r_{\tau(j)}^{2}}\right]\left[ \left\vert\mathbf{a}\right\vert^{2}(Y_{1}\cdot Y_{2}) - 2(\mathbf{a}\cdot Y_{1})(\mathbf{a}\cdot Y_{2}) + 4(\mathbf{a}\cdot Y_{j})^{2}\frac{(Y_{1}\cdot Y_{2})}{r_{j}^{2}}\right] \label{T3E1}
\end{align}
for $j=1,2$, where $\tau(j)=3-j$ for $j=1,2$. In the derivation of these formulas we have used:
\[
-\frac{1}{4}Y_{\tau(j)}\cdot\nabla\left( \frac{1}{r_{j}^{4}}\right) + \frac{4Y_{\tau(j)}}{r_{\tau(j)}^{2}}\cdot\nabla\left( \frac{1}{r_{j}^{4}}\right) = O\left( \frac{1}{r_{j}^{4}}\right) \quad\text{ as } r_{j}\rightarrow 0, \quad j=1,2.
\]
This formula holds due to the assumption that $\left\vert\mathbf{a}\right\vert = 2$. In all these computations we repeatedly use
\[
r_{\tau(j)}^{2} - 16 = r_{j}^{2} - 4(-1)^{j}(\mathbf{a}\cdot Y_{j}), \quad j=1,2.
\]
It follows from \eqref{T3E1} that
\[
L\left( \frac{1}{r_{j}^{4}} + \Psi_{1}(Y_{j})\right) = \frac{(-1)^{j+1}}{8}\frac{(\mathbf{a}\cdot Y_{j})}{r_{j}^{6}}\left[ 3r_{j}^{2} - (\mathbf{a}\cdot Y_{j})^{2}\right] + O\left( \frac{1}{r_{j}^{2}}\right) \quad\text{ as } r_{j}\rightarrow 0, \quad j=1,2.
\]
We now use
\[
\Delta\left( \Psi_{2}\right)(Y) + \frac{4}{\left\vert Y\right\vert^{2}}\left( Y\cdot\nabla\right)\left( \Psi_{2}\right)(Y) = -\frac{(\mathbf{a}\cdot Y)}{8\left\vert Y\right\vert^{6}}\left[ 3\left\vert Y\right\vert^{2} - (\mathbf{a}\cdot Y)^{2}\right]
\]
as well as the fact that the terms of $L$ other than $\Delta$ and $\frac{4}{\left\vert Y\right\vert^{2}}\left( Y\cdot\nabla\right)$ yield only lower order contributions. Therefore, after some computations, it follows that:
\[
L\left( \frac{1}{r_{j}^{4}} + \Psi_{1}(Y_{j}) + (-1)^{j+1}\Psi_{2}(Y_{j})\right) = -\frac{3(\mathbf{a}\cdot Y_{j})^{2}}{16r_{j}^{4}} + \frac{(\mathbf{a}\cdot Y_{j})^{4}}{16r_{j}^{6}} + O\left( \frac{1}{r_{j}}\right) \quad\text{ as } r_{j}\rightarrow 0.
\]
Using then that
\[
\Delta\Psi_{3} + \frac{4\left( Y\cdot\nabla\right)\Psi_{3}}{\left\vert Y\right\vert^{2}} = \frac{3(\mathbf{a}\cdot Y)^{2}}{16\left\vert Y\right\vert^{4}} - \frac{(\mathbf{a}\cdot Y)^{4}}{16\left\vert Y\right\vert^{6}},
\]
as well as the fact that the remaining part of $L$ gives only lower order contributions, we obtain:
\begin{equation}
L\left( \frac{1}{r_{j}^{4}} + \Psi_{1}(Y_{j}) + (-1)^{j+1}\Psi_{2}(Y_{j}) + \Psi_{3}(Y_{j})\right) = \sum_{k=0}^{5}\beta_{k}\frac{(\mathbf{a}\cdot Y_{j})^{k}}{\left\vert Y_{j}\right\vert^{k+1}} + g_{j}(Y_{j}), \quad j=1,2, \label{T3E2}
\end{equation}
with $g_{j}\in L^{\infty}\left( B_{2}\left( (-1)^{j+1}\mathbf{a}\right)\right)$ and suitable $\beta_{k}\in\mathbb{R}$. Using a separation of variables argument we can construct functions $\omega_{j}$, $j=1,2$, of the form
\[
\omega_{j}(Y_{j}) = \sum_{k=0}^{5}\kappa_{k}\frac{(\mathbf{a}\cdot Y_{j})^{k}}{\left\vert Y_{j}\right\vert^{k-1}}, \quad j=1,2.
\]
The constants $\kappa_{k}$ are chosen so that the functions $\omega_{j}$ satisfy:
\begin{equation}
\left( \Delta + \frac{4\left( Y_{j}\cdot\nabla\right)}{\left\vert Y_{j}\right\vert^{2}}\right)\omega_{j}(Y_{j}) = -\sum_{k=0}^{5}\beta_{k}\frac{(\mathbf{a}\cdot Y_{j})^{k}}{\left\vert Y_{j}\right\vert^{k+1}}, \quad j=1,2. \label{T3E3}
\end{equation}
We then define the functions $\hat{W}_{j}$, $j=1,2$, as in \eqref{T3E0}. It follows from \eqref{T3E2} and \eqref{T3E3} that
\[
L(\hat{W}_{j}) = f_{j}(y), \quad j=1,2,
\]
with $\Vert f_{j}\Vert_{L^{\infty}\left( \mathbb{R}^{2}\right)}<\infty$. Using the boundedness of the $f_{j}$ as well as the fact that the functions $\hat{W}_{j}$, $j=1,2$, are compactly supported, we see that $\Omega^{+}, \Omega^{-}$ in \eqref{T3E0super}, \eqref{T3E0sub} are respectively super- and subsolutions of \eqref{Y1E3} in $\mathbb{R}^{2}\setminus\left\{ -\mathbf{a},\mathbf{a}\right\}$ if $K$ is chosen sufficiently large. We now define a family of domains $D_{\delta,R}$ by:
\begin{equation}
D_{\delta,R} = B_{R}(0)\setminus\left[ B_{\delta}(-\mathbf{a})\cup B_{\delta}(\mathbf{a})\right], \quad 0<\delta<1, \quad R>8.
\label{T3E2a}
\end{equation}
Let us consider the following family of boundary value problems:
\begin{subequations}
\begin{align}
L\left( \Omega_{\delta,R}\right) &= 0 \quad\text{ in } D_{\delta,R}, \label{T3E3a}\\
\Omega_{\delta,R} &= D_{j}\left[ \frac{1}{r_{j}^{4}} + \Psi_{1}(Y_{j}) + (-1)^{j+1}\Psi_{2}(Y_{j}) + \Psi_{3}(Y_{j})\right] \text{ on } \partial B_{\delta}\left( (-1)^{j+1}\mathbf{a}\right),\ j=1,2, \label{T3E3b}\\
\Omega_{\delta,R} &= 0 \quad\text{ on } \partial B_{R}(0). \label{T3E3c}
\end{align}
\end{subequations}
Classical results on elliptic equations (cf.~\cite[Corollary 9.18]{GTr} for instance) show that the functions $\Omega_{\delta,R}$ are uniquely defined for any $\delta$ and $R$ as in \eqref{T3E2a}. Moreover, since $\Omega^{-}<\Omega_{\delta,R}<\Omega^{+}$ on $\left[ \bigcup_{j=1,2}\partial B_{\delta}\left( (-1)^{j+1}\mathbf{a}\right)\right]\cup\partial B_{R}(0)$ for $K>0$ sufficiently large independent of $\delta$ and $R$, it follows by comparison that:
\begin{equation}
\Omega^{-} < \Omega_{\delta,R} < \Omega^{+} \quad\text{ in } D_{\delta,R}. \label{T3E4}
\end{equation}
Classical regularity theory for elliptic equations implies that $\left\vert\nabla^{k}\Omega_{\delta,R}\right\vert$, $k=1,2,3$, are bounded on compact subsets of $D_{\delta,R}$. A compactness argument then shows that there exist a smooth function $\Omega$ satisfying $\Omega^{-}<\Omega<\Omega^{+}$ in $\mathbb{R}^{2}\setminus\left\{ -\mathbf{a},\mathbf{a}\right\}$ and a subsequence $\left\{ \left( \delta_{\ell},R_{\ell}\right)\right\}$ such that $\left( \delta_{\ell},R_{\ell}\right)\rightarrow\left( 0,\infty\right)$ and
\[
\Omega_{\delta_{\ell},R_{\ell}}\rightarrow\Omega \quad\text{ as } \ell\rightarrow\infty \quad\text{ in } C^{2}\left( \mathcal{K}\right)
\]
for each compact $\mathcal{K}\subset\mathbb{R}^{2}\setminus\left\{ -\mathbf{a},\mathbf{a}\right\}$. Therefore $L\left( \Omega\right) = 0$ in $\mathbb{R}^{2}\setminus\left\{ -\mathbf{a},\mathbf{a}\right\}$. Moreover, the functions
\[
\phi_{j}(y) \equiv \Omega(y) - D_{j}\left[ \frac{1}{r_{j}^{4}} + \Psi_{1}(Y_{j}) + (-1)^{j+1}\Psi_{2}(Y_{j}) + \Psi_{3}(Y_{j})\right], \quad j=1,2,
\]
are bounded in a neighborhood of the points $\left\{ -\mathbf{a},\mathbf{a}\right\}$ and satisfy
\[
L\left( \phi_{j}\right) = \frac{Q_{j}(y)}{\left\vert Y_{j}\right\vert},
\]
where
\begin{equation}
\left\vert Q_{j}(y)\right\vert \leq C \quad\text{ in } 0<\left\vert Y_{j}\right\vert\leq 1, \quad j=1,2. \label{T3E5a}
\end{equation}
To conclude the proof it only remains to show that the limits $\lim_{y\rightarrow\mathbf{a}}\phi_{1}(y)$ and $\lim_{y\rightarrow-\mathbf{a}}\phi_{2}(y)$ exist. To this end we estimate the derivatives of $\phi_{j}$ as follows. For each $0<R<1$ we define $\varphi_{R,j}(\xi) = \phi_{j}\left( \pm\mathbf{a} + R\xi\right)$. Then
\[
\Delta_{\xi}\varphi_{R,j} + \frac{4\xi\cdot\nabla_{\xi}\varphi_{R,j}}{\left\vert\xi\right\vert^{2}} + a_{R}(\xi)\cdot\nabla_{\xi}\varphi_{R,j} = O(R), \quad \frac{1}{4}\leq\left\vert\xi\right\vert\leq 4,
\]
where $\left\vert a_{R}(\xi)\right\vert\leq CR$. Classical regularity theory for elliptic equations yields $\left\vert\nabla_{\xi}\varphi_{R,j}\right\vert\leq C$ in $1/2\leq\left\vert\xi\right\vert\leq 2$.
Therefore:
\begin{equation}
\left\vert\phi_{j}(y)\right\vert + \left\vert Y_{j}\right\vert\left\vert\nabla\phi_{j}(y)\right\vert \leq C \quad\text{ in } 0<\left\vert Y_{j}\right\vert\leq 1, \quad j=1,2, \label{T3E5}
\end{equation}
for some $C>0$. In order to prove the existence of the limits $\lim_{y\rightarrow\pm\mathbf{a}}\phi_{j}(y)$ we now use a Fourier analysis argument. We use polar coordinates defined by
\[
Y_{j} = y\mp\mathbf{a} = \rho_{j}\left( \cos\theta_{j}, \sin\theta_{j}\right), \quad j=1,2.
\]
We then write
\[
\phi_{j}(y) = \Phi_{j}\left( \rho_{j},\theta_{j}\right) = \sum_{n=-\infty}^{\infty}c_{n}\left( \rho_{j}\right) e^{in\theta_{j}}.
\]
The functions $c_{n}\left( \rho_{j}\right)$ solve a second order ODE, which can be solved explicitly:
\[
c_{n}\left( \rho_{j}\right) = A_{1,n}\rho_{j}^{\alpha_{n}^{+}} + A_{2,n}\rho_{j}^{\alpha_{n}^{-}} - \int_{\rho_{j}}^{1}\left( \frac{\rho_{j}}{s}\right)^{\alpha_{n}^{+}}\frac{Q_{j,n}(s)}{\alpha_{n}^{+}-\alpha_{n}^{-}}\,ds - \int_{\rho_{j}}^{1}\left( \frac{\rho_{j}}{s}\right)^{\alpha_{n}^{-}}\frac{Q_{j,n}(s)}{\alpha_{n}^{-}-\alpha_{n}^{+}}\,ds, \quad j=1,2,
\]
where
\begin{align}
& Q_{j,n}(s) = \frac{1}{2\pi}\int_{0}^{2\pi}Q_{j}\left( \pm\mathbf{a} + s\left( \cos\theta_{j},\ \sin\theta_{j}\right)\right) e^{-in\theta_{j}}\,d\theta_{j}, \label{T3E6a}\\
& \alpha_{n}^{+} = -2+\sqrt{4+n^{2}}, \quad \alpha_{n}^{-} = -2-\sqrt{4+n^{2}}, \quad n\in\mathbb{Z}, \label{T3E6}
\end{align}
and where $A_{1,n},\ A_{2,n}$ are constants related to the Fourier coefficients of the functions $\Phi_{j}\left( 1,\theta_{j}\right)$. Since these functions are in $C^{\infty}\left( S^{1}\right)$, for every $\beta>0$ there exists a constant $C_{\beta}>0$ such that
\begin{equation}
\left\vert A_{1,n}\right\vert + \left\vert A_{2,n}\right\vert \leq \frac{C_{\beta}}{1+\left\vert n\right\vert^{\beta}}, \quad n\in\mathbb{Z}. \label{T3E7}
\end{equation}
On the other hand, due to \eqref{T3E5a} and \eqref{T3E5}, the coefficients $c_{n}\left( \rho_{j}\right)$ and $Q_{j,n}\left( \rho_{j}\right)$ are bounded for $0<\rho_{j}<1$. This implies
\[
A_{2,n} = \int_{0}^{1}\left( \frac{1}{s}\right)^{\alpha_{n}^{-}}\frac{Q_{j,n}(s)}{\alpha_{n}^{-}-\alpha_{n}^{+}}\,ds,
\]
whence
\[
c_{n}\left( \rho_{j}\right) = A_{1,n}\rho_{j}^{\alpha_{n}^{+}} - \int_{\rho_{j}}^{1}\left( \frac{\rho_{j}}{s}\right)^{\alpha_{n}^{+}}\frac{Q_{j,n}(s)}{\alpha_{n}^{+}-\alpha_{n}^{-}}\,ds + \int_{0}^{\rho_{j}}\left( \frac{\rho_{j}}{s}\right)^{\alpha_{n}^{-}}\frac{Q_{j,n}(s)}{\alpha_{n}^{-}-\alpha_{n}^{+}}\,ds, \quad j=1,2, \quad n\in\mathbb{Z}.
\]
Using \eqref{T3E5a}--\eqref{T3E7} we obtain
\[
\left\vert c_{n}\left( \rho_{j}\right) - A_{1,0}\delta_{n,0}\right\vert \leq \frac{C\rho_{j}^{\sqrt{5}-2}}{1+\left\vert n\right\vert^{2}} + \frac{C\rho_{j}}{1+\left\vert n\right\vert^{2}} \leq \frac{C\rho_{j}^{\sqrt{5}-2}}{1+\left\vert n\right\vert^{2}},
\]
where the symbol $\delta_{n,0}$ stands for the Kronecker delta. Then
\[
\left\vert\phi_{j}(y) - A_{1,0}\right\vert \leq C\rho_{j}^{\sqrt{5}-2}\sum_{n=-\infty}^{\infty}\frac{1}{1+\left\vert n\right\vert^{2}} \leq C\rho_{j}^{\sqrt{5}-2},
\]
and therefore the limits $\lim_{y\rightarrow\pm\mathbf{a}}\phi_{j}(y)$ exist. The fact that the value of $A$ is the same in \eqref{T2E5b} and \eqref{T2E5} follows by a symmetry argument. To prove the uniqueness result we construct a supersolution for \eqref{Y1E3}.
Consider a function $\Omega^{+}$ as in \eqref{T3E0super} with $K$ sufficiently large. We then modify the function $\Omega^{+}$ so that the constant $K$ becomes the polynomial $\left\vert y\right\vert^{m}$ for large values of $\left\vert y\right\vert$. Since the main terms in the operator $L$ for large values of $\left\vert y\right\vert$ are $2^{-1}y\cdot\nabla\Omega$ and $-\Omega$, the modified function $\bar{\Omega}^{+}$ satisfies $L( \bar{\Omega}^{+}) \leq 0$. This modification is possible because these leading terms yield positive contributions. The difference of two solutions of \eqref{Y1E3} satisfying \eqref{Y1E6a} may be estimated by $\varepsilon\bar{\Omega}^{+}$ for $y\rightarrow\pm\mathbf{a}$ and for $\left\vert y\right\vert =R$ with $\varepsilon>0$ arbitrarily small and $R>0$ large enough. A comparison argument then shows that the difference is bounded by $\varepsilon\bar{\Omega}^{+}$ in the regions $B_{R}\left( 0\right) \setminus B_{\delta}\left( \pm\mathbf{a}\right)$ for $\delta$ small. Taking the limit $\varepsilon\rightarrow0$ we conclude that both solutions coincide, whence the uniqueness follows.
\end{proof}

\begin{remark}\label{RemAs}
Equation \eqref{Y1E3} suggests that $\Omega\left( y\right) \sim\varphi\left( \theta\right) /\left\vert y\right\vert ^{2}$ as $\left\vert y\right\vert \rightarrow\infty,$ for some function $\varphi\left( \theta\right)$ whose precise formula does not seem easy to derive. However, we will not attempt to compute this function in detail in this paper.
\end{remark}

We also need to study the function $Z$, a solution of \eqref{M1E1}, satisfying:
\begin{align}
Z\left( y\right) & =o\left( \frac{1}{\left\vert y\mp\mathbf{a}\right\vert^{4}}\right) \quad\text{ as }y\rightarrow\pm\mathbf{a}, \label{M1E2}\\
\left\vert Z\left( y\right) \right\vert & \leq\left\vert y\right\vert^{m}\quad\text{ for }\left\vert y\right\vert \geq5\quad\text{ for some }m>0. \label{M1E3}
\end{align}

\begin{lemma}
Suppose that $\left\vert \mathbf{a}\right\vert =2$. Let $\Omega,\ D_{1}$, and $D_{2}$ be as in Lemma \ref{Prop1}. Then there exists a unique solution of \eqref{M1E1} satisfying \eqref{M1E2} and \eqref{M1E3}. Its asymptotic behavior near the singular points $\left\{ -\mathbf{a},\mathbf{a}\right\}$ is given by
\begin{subequations}
\begin{align}
Z\left( y\right) & \sim D_{1}\left[ -\frac{1}{2}\frac{1}{\left\vert y-\mathbf{a}\right\vert^{2}}+\frac{1}{8}\log\left\vert y-\mathbf{a}\right\vert -\frac{1}{16}\frac{\left( \mathbf{a}\cdot\left( y-\mathbf{a}\right) \right)^{2}}{\left\vert y-\mathbf{a}\right\vert ^{2}}+B\right] \quad\text{as }y\rightarrow\mathbf{a},\label{Y4E6}\\
Z\left( y\right) & \sim D_{2}\left[ -\frac{1}{2}\frac{1}{\left\vert y+\mathbf{a}\right\vert^{2}}+\frac{1}{8}\log\left\vert y+\mathbf{a}\right\vert -\frac{1}{16}\frac{\left( \mathbf{a}\cdot\left( y+\mathbf{a}\right) \right)^{2}}{\left\vert y+\mathbf{a}\right\vert ^{2}}+B\right] \quad\text{as }y\rightarrow-\mathbf{a} \label{Y4E7}
\end{align}
for some constant $B\in\mathbb{R}.$
\end{subequations}
\end{lemma}

\begin{proof}
The proof is similar to the one of Lemma \ref{Prop1}.
Due to the linearity and the symmetry of the problem it is enough to consider the case $D_{1}=1,$ $D_{2}=0.$ We can obtain sub- and supersolutions of (\ref{M1E1}) with the help of the auxiliary function
\begin{equation}
\tilde{W}\left( y\right) =\left[ -\frac{1}{2}\frac{1}{|Y_{1}|^{2}}+\frac{1}{8}\log|Y_{1}|+\frac{1}{8}-\frac{1}{16}\frac{\left( \mathbf{a}\cdot Y_{1}\right)^{2}}{\left\vert Y_{1}\right\vert ^{2}}\right] \eta\left( Y_{1}\right) , \label{Y2E1}
\end{equation}
where $\eta\left( \xi\right)$ is a $C^{\infty}$ cutoff function as in Lemma \ref{Prop1}. We then construct sub- and supersolutions in the form
\[
Z^{+}\left( y\right) =\tilde{W}\left( y\right) +K,\qquad Z^{-}\left( y\right) =\tilde{W}\left( y\right) -K.
\]
The terms between the brackets in (\ref{Y2E1}) have been chosen in order to balance terms of the function $\Omega\left( y\right) .$ This requires some tedious, but otherwise straightforward, computations. Arguing then as in the proof of Lemma \ref{Prop1} we obtain the result.
\end{proof}

\bigskip

We also need to study the asymptotics of the function $\mathcal{W}_{1}$ in (\ref{Y1E1a}), which solves the equation:
\begin{equation}
-\Delta\mathcal{W}_{1}=\Omega,\quad y\neq\pm\mathbf{a}. \label{Y3E6}
\end{equation}
We obtain the following result:

\begin{lemma}\label{Prop3}
Suppose that $\left\vert \mathbf{a}\right\vert =2$. Let $\Omega$ be as in Lemma \ref{Prop1}. Then for every $M_{1,\mathcal{W}_{1}}^{(\mathbf{a})},M_{1,\mathcal{W}_{1}}^{(-\mathbf{a})}\in\mathbb{R}$, there exists at least one solution of \eqref{Y3E6} satisfying
\begin{subequations}
\begin{align}
& \mathcal{W}_{1}\left( y\right) -D_{1}G\left( y-\mathbf{a}\right) -D_{1}M_{1,\mathcal{W}_{1}}^{\left( \mathbf{a}\right) }\log\left\vert y-\mathbf{a}\right\vert =O\left( 1\right) \quad\text{ as }y\rightarrow\mathbf{a},\label{Y3E7a}\\
& \mathcal{W}_{1}\left( y\right) -D_{2}G\left( y+\mathbf{a}\right) -D_{2}M_{1,\mathcal{W}_{1}}^{\left( -\mathbf{a}\right) }\log\left\vert y+\mathbf{a}\right\vert =O\left( 1\right) \quad\text{ as }y\rightarrow-\mathbf{a},\label{Y3E7b}\\
& \lim_{\left\vert y\right\vert \rightarrow\infty}\frac{\left\vert \mathcal{W}_{1}\left( y\right) \right\vert }{\left\vert y\right\vert }=0, \label{Y3E7c}
\end{align}
where
\begin{equation}
G\left( Y\right) =-\frac{1}{4\left\vert Y\right\vert ^{2}}-\frac{1}{8}\left( \log\left\vert Y\right\vert \right)^{2}+\frac{1}{32}\cos\left( 2\theta\right) \label{Y3E7bis}
\end{equation}
and where $\theta=\theta\left( Y\right)$ is the angle between the $y_{1}$ axis and $Y.$ Moreover, two arbitrary solutions of \eqref{Y3E6} satisfying \eqref{Y3E7a}-\eqref{Y3E7c} differ by a constant.
We have the following asymptotics for $\mathcal{W}_{1}\left( y\right)$ as $y\rightarrow\mathbf{a}$ and $y\rightarrow-\mathbf{a}$:
\end{subequations}
\begin{subequations}
\begin{align}
& \mathcal{W}_{1}\left( y\right) =D_{1}G\left( y-\mathbf{a}\right) +D_{1}M_{1,\mathcal{W}_{1}}^{\left( \mathbf{a}\right) }\log\left\vert y-\mathbf{a}\right\vert -\frac{D_{1}}{2^{7}\cdot3}\left\vert y-\mathbf{a}\right\vert \cos\left( 3\theta_{\left( \mathbf{a}\right) }\right) +\nonumber\\
& \qquad\qquad\qquad+A_{1}^{\left( \mathbf{a}\right) }+D_{1}K_{1}^{\left( \mathbf{a}\right) }\cdot\left( y-\mathbf{a}\right) +O\left( \left\vert y-\mathbf{a}\right\vert ^{2}\right) ,\label{Y3E8}\\
& \mathcal{W}_{1}\left( y\right) =D_{2}G\left( y+\mathbf{a}\right) +D_{2}M_{1,\mathcal{W}_{1}}^{\left( -\mathbf{a}\right) }\log\left\vert y+\mathbf{a}\right\vert -\frac{D_{2}}{2^{7}\cdot3}\left\vert y+\mathbf{a}\right\vert \cos\left( 3\theta_{\left( -\mathbf{a}\right) }\right) +\nonumber\\
& \qquad\qquad\qquad+A_{1}^{\left( -\mathbf{a}\right) }+D_{2}K_{1}^{\left( -\mathbf{a}\right) }\cdot\left( y+\mathbf{a}\right) +O\left( \left\vert y+\mathbf{a}\right\vert ^{2}\right) , \label{Y3E9}
\end{align}
\end{subequations}
where $\theta_{\left( \mathbf{a}\right) },\theta_{\left( -\mathbf{a}\right) }$ are the angles between the horizontal axis and the vectors $y-\mathbf{a}$ and $y+\mathbf{a}$, respectively. The vectors $K_{1}^{(\mathbf{a})},\ K_{1}^{(-\mathbf{a})}\in\mathbb{R}^{2}$ and the constants $A_{1}^{(\mathbf{a})},\ A_{1}^{(-\mathbf{a})}\in\mathbb{R}$ depend in an affine manner on the values of $M_{1,\mathcal{W}_{1}}^{(\mathbf{a})},\ M_{1,\mathcal{W}_{1}}^{(-\mathbf{a})}.$
\end{lemma}

\begin{proof}
The proof is similar to the one of Lemma \ref{Prop1}. It reduces to computing explicitly the solutions of the Poisson equation having as sources the terms in the asymptotics \eqref{T2E5b}, \eqref{T2E5}. After removing the effect of these singular contributions, it only remains to obtain a solution of the Poisson equation with a source term bounded by $C/(1+\left\vert y\right\vert ^{2})$. This can be done using a supersolution behaving as $C\left( \log\left\vert y\right\vert \right)^{2}$ for large values of $\left\vert y\right\vert .$ The uniqueness result is a consequence of Liouville's theorem for the Laplace equation.
\end{proof}

\section{Matching of the different terms}

In this section we match the different terms in the inner and outer expansions and consequently derive evolution equations for the functions $\varepsilon_{\ell}\left( \tau\right)$ providing the width of the peaks. We will assume in the following that, due to symmetry considerations, all the functions $\varepsilon_{\ell}$ at the different peaks are the same. In general this need not be so. Moreover, there are non-symmetric singular self-similar solutions (cf.~Section \ref{Asympt}) for which the corresponding values of the functions $\varepsilon_{\ell}\left( \tau\right)$ cannot be expected to be the same. The question of determining the relative sizes of the functions $\varepsilon_{\ell}\left( \tau\right)$ is interesting, but it will not be considered in this paper. Due to (\ref{S4E7}) this question is equivalent to determining the relative sizes of the maximum value of the function $u$ at each of the different peaks (notice, however, that all of them have the same mass $8\pi$).
We now describe how to match the different terms in the asymptotics as $\left\vert \xi\right\vert \rightarrow\infty$ of the expansions (\ref{S5E7}), (\ref{S5E8}). We begin with the leading order terms. Since we restrict our analysis to the case of two peaks we assume in the following that $\varepsilon_{1}=\varepsilon_{2}=\varepsilon.$ We write for further reference the expansion of $\nabla_{y}\mathcal{W}_{0}$ near $y=\mathbf{a}$ (cf.\thinspace\eqref{Y2E4}):
\begin{align}
\nabla_{y}\mathcal{W}_{0} & =-\frac{4Y}{|Y|^{2}}-\frac{2\mathbf{a}}{\left\vert \mathbf{a}\right\vert ^{2}}-\frac{Y}{\left\vert \mathbf{a}\right\vert ^{2}}+\frac{2\mathbf{a}\left( \mathbf{a}\cdot Y\right) }{\left\vert \mathbf{a}\right\vert ^{4}}+\frac{|Y|^{2}\mathbf{a}}{2\left\vert \mathbf{a}\right\vert ^{4}}-\frac{2\left( \mathbf{a}\cdot Y\right) ^{2}\mathbf{a}}{\left\vert \mathbf{a}\right\vert ^{6}}+\frac{\left( \mathbf{a}\cdot Y\right) Y}{\left\vert \mathbf{a}\right\vert ^{4}}-\nonumber\\
& \quad-\frac{\left( \mathbf{a}\cdot Y\right) |Y|^{2}\mathbf{a}}{\left\vert \mathbf{a}\right\vert ^{6}}-\frac{|Y|^{4}\mathbf{a}}{16\left\vert \mathbf{a}\right\vert ^{6}}+\frac{2\left( \mathbf{a}\cdot Y\right) ^{3}\mathbf{a}}{\left\vert \mathbf{a}\right\vert ^{8}}+\frac{\left\vert Y\right\vert ^{2}Y}{4\left\vert \mathbf{a}\right\vert ^{4}}-\frac{\left( \mathbf{a}\cdot Y\right) ^{2}Y}{\left\vert \mathbf{a}\right\vert ^{6}}-\dots\label{WTaylo}
\end{align}
where $Y=y-\mathbf{a}$ and we have kept in this formula all the terms up to third order in $\left\vert Y\right\vert$.

\subsection{Leading terms.}

The leading order in (\ref{S5E7}), (\ref{S5E8}) is given by the functions $u_{s}\left( \xi\right) ,\ v_{s}\left( \xi\right)$, respectively. We will denote by $\Phi_{0,match},\ W_{0,match}$ the terms to be matched in the intermediate region $\left\vert \xi\right\vert \gg1,\ \left\vert y-\bar{y}_{\ell}\right\vert \ll1$ due to these terms in the expansion. Keeping just terms of order $\varepsilon_{\ell}^{2}$ $\left( w.l.a\right)$ in the region where $\left\vert y-\bar{y}_{\ell}\right\vert$ becomes of order one, we then obtain:
\begin{equation}
\Phi_{0,match}\left( y,\tau\right) =\frac{8\varepsilon_{\ell}^{2}}{\left\vert y-\bar{y}_{\ell}\right\vert ^{4}},\quad\nabla_{y}W_{0,match}\left( y,\tau\right) =-\frac{4\left( y-\bar{y}_{\ell}\right) }{\left\vert y-\bar{y}_{\ell}\right\vert ^{2}}. \label{Y2E5}
\end{equation}
The matching of the term $\nabla_{y}W_{0,match}$ has already been taken into account in the derivation of (\ref{Y2E4}), which gives the asymptotics of the chemical field for $\left\vert y\right\vert$ of order one up to corrections of order $\varepsilon_{\ell}^{2}.$ On the other hand, due to \eqref{T2E5b}, \eqref{T2E5} we obtain the matching of (\ref{Y2E5}) with (\ref{Y1E1}), assuming $D_{1}=8.$

\subsection{Terms coming from $U_{1},\ W_{1}.$}

We now match the terms $U_{1},\ W_{1}$ in (\ref{S5E7}), (\ref{S5E8}) with suitable terms in (\ref{Y1E1}), (\ref{Y1E1a}), respectively. We denote by $\Phi_{1,match},\ W_{1,match}$ the terms to be matched in the intermediate region $\left\vert \xi\right\vert \gg1,\ \left\vert y-\bar{y}_{\ell}\right\vert \ll1$ due to these terms in the expansion. Notice that (\ref{U2E2a}) shows:
\[
\Phi_{1,match}\left( y,\tau\right) =0,\quad\nabla_{y}W_{1,match}\left( y,\tau\right) =-\frac{\bar{y}_{\ell}}{2}.
\]
We only need to match the term $\nabla_{y}W_{1,match}$ with some of the terms in (\ref{Y2E4}).
Suppose that $\lim_{\tau\rightarrow\infty}\bar{y}_{\ell}=\mathbf{a},$ since the case $\lim_{\tau\rightarrow\infty}\bar{y}_{\ell}=-\mathbf{a}$ can be treated in a symmetric way. The most singular term of \eqref{Y2E4} has been matched with $\nabla_{y}W_{0,match}.$ The next order in the expansion of $\nabla_{y}\mathcal{W}_{0}$ is $-2\mathbf{a}/\left\vert \mathbf{a}\right\vert^{2}$ (cf.\thinspace(\ref{WTaylo})) and this matches with $-\bar{y}_{\ell}/2=-\mathbf{a}/2$ if we impose $\left\vert \mathbf{a}\right\vert =2$. Therefore matching of the terms of order $\varepsilon$ $\left( w.l.a\right)$ in the region where $\left\vert \xi\right\vert$ is of order one becomes possible if we impose that the drift terms due to the change to the self-similar variables and the chemotactic terms balance each other.

\subsection{Terms coming from $U_{2},\ W_{2}.$}

Let us denote by $\Phi_{2,match},\ W_{2,match}$ the terms appearing in the matching condition arising from the terms $U_{2},\ W_{2}$ in the inner expansion. Using (\ref{S6E10a1}), (\ref{S6E10b1}), (\ref{U2E1}), and (\ref{M2E2}) we obtain the following formulas in the intermediate region $\varepsilon_{\ell}\ll\left\vert y-\bar{y}_{\ell}\right\vert \ll1:$
\begin{subequations}
\begin{align}
\Phi_{2,match}\left( y,\tau\right) & \sim\left( 2\varepsilon_{\ell}\varepsilon_{\ell,\tau}-\varepsilon_{\ell}^{2}\right) \left[ -\frac{2}{\left\vert y-\bar{y}_{\ell}\right\vert ^{2}}+O\left( \frac{\varepsilon_{\ell}^{2}}{\left\vert y-\bar{y}_{\ell}\right\vert ^{4}}\right) \right] +\label{Y3E1}\\
& \quad+\frac{8B_{2,3}\cos\left( 2\theta\right) }{\left\vert y-\bar{y}_{\ell}\right\vert ^{2}}+\frac{8\bar{B}_{2,3}\sin\left( 2\theta\right) }{\left\vert y-\bar{y}_{\ell}\right\vert ^{2}}+\dots\ \left( w.l.a\right) \nonumber\\
W_{2,match}\left( y,\tau\right) & \sim\bar{y}_{\ell,\tau}\cdot\left( y-\bar{y}_{\ell}\right) +V_{2,1}+\frac{B_{2,3}}{\varepsilon_{\ell}^{2}}\left\vert y-\bar{y}_{\ell}\right\vert ^{2}\cos\left( 2\theta\right) +\frac{\bar{B}_{2,3}}{\varepsilon_{\ell}^{2}}\left\vert y-\bar{y}_{\ell}\right\vert ^{2}\sin\left( 2\theta\right) +\dots, \label{Y3E2}
\end{align}
\end{subequations}
where $V_{2,1}$ is a radial term. It is more convenient to rewrite (\ref{Y3E2}) in Cartesian coordinates:
\begin{multline}
W_{2,match}\left( y,\tau\right) \sim\bar{y}_{\ell,\tau}\cdot\left( y-\bar{y}_{\ell}\right) +V_{2,1}+\frac{B_{2,3}}{\varepsilon_{\ell}^{2}}\left[ \left( y_{1}-\bar{y}_{\ell,1}\right) ^{2}-\left( y_{2}-\bar{y}_{\ell,2}\right) ^{2}\right] +\\
+\frac{2\bar{B}_{2,3}}{\varepsilon_{\ell}^{2}}\left( y_{1}-\bar{y}_{\ell,1}\right) \left( y_{2}-\bar{y}_{\ell,2}\right) +\dots
\label{Y3E3}
\end{multline}
The last two terms of this formula can be matched with the quadratic terms of the expansion of $\nabla_{y}\mathcal{W}_{0}(y)$ near the points $\bar{y}_{\ell}.$ Using (\ref{WTaylo}) it follows that $\nabla W_{2,match}$ matches with $\nabla_{y}\mathcal{W}_{0}(y)$ if
\begin{equation}
B_{2,3}=\frac{\varepsilon_{\ell}^{2}}{8},\quad\bar{B}_{2,3}=0, \label{Y3E4}
\end{equation}
where we use that $\left\vert \mathbf{a}\right\vert =2.$ Using (\ref{Y3E4}) in (\ref{Y3E1}) and transforming the resulting formula to Cartesian coordinates we obtain:
\begin{equation}
\Phi_{2,match}\left( y,\tau\right) \sim\varepsilon_{\ell}^{2}\left[ \frac{3\left( y_{1}-2\right) ^{2}+\left( y_{2}\right) ^{2}}{\left\vert y-\bar{y}_{\ell}\right\vert ^{4}}\right] -\frac{4\varepsilon_{\ell}\varepsilon_{\ell,\tau}}{\left\vert y-\bar{y}_{\ell}\right\vert ^{2}}+O\left( \frac{\varepsilon_{\ell}^{4}}{\left\vert y-\bar{y}_{\ell}\right\vert ^{4}}\right) \dots\ \left( w.l.a\right) \label{Y3E5}
\end{equation}
and the first term in (\ref{Y3E5}) matches exactly with the term in the outer region multiplying $\Psi_{1}\left( y-\mathbf{a}\right)$ in (\ref{T2E5b}), due to the fact that $D_{1}=8.$

It is illuminating to compute $\bar{y}_{\ell,\tau}$ in the first term of (\ref{Y3E3}), matching the first term on the right-hand side of this formula with one of the terms in the outer expansion (\ref{Y1E1a}). We will examine the case in which $\lim_{\tau\rightarrow\infty}\bar{y}_{\ell}=\mathbf{a},$ since the case in which $\lim_{\tau\rightarrow\infty}\bar{y}_{\ell}=-\mathbf{a}$ is similar. Using (\ref{Y1E1a}), (\ref{Y3E8}) as well as the fact that $D_{1}=8$ we obtain the following terms in the outer expansion of $\nabla_{y}W$ which need to be matched with terms from the inner expansion:
\begin{multline}
\varepsilon_{\ell}^{2}\Bigg[ \frac{4Y}{\left\vert Y\right\vert ^{4}}-2\left( \log\left\vert Y\right\vert \right) \frac{Y}{\left\vert Y\right\vert ^{2}}-\frac{1}{2}\sin\left( 2\theta_{\left( \mathbf{a}\right) }\right) \frac{Y^{\perp}}{\left\vert Y\right\vert ^{2}}+\frac{8M_{1,\mathcal{W}_{1}}^{\left( \mathbf{a}\right) }Y}{\left\vert Y\right\vert ^{2}}+\\
+8K_{1}^{\left( \mathbf{a}\right) }-\frac{\cos\left( 3\theta_{\left( \mathbf{a}\right) }\right) }{2^{4}\cdot3}\frac{Y}{\left\vert Y\right\vert }+\frac{\sin\left( 3\theta_{\left( \mathbf{a}\right) }\right) }{2^{4}}\frac{Y^{\perp}}{\left\vert Y\right\vert }+O\left( \left\vert Y\right\vert \right) \Bigg] , \label{Y4E1a}
\end{multline}
where $Y=y-\mathbf{a}$ and we define $Y^{\perp}=\left( -Y_{2},Y_{1}\right)$ for $Y=\left( Y_{1},Y_{2}\right) \in\mathbb{R}^{2}$. The first term in (\ref{Y4E1a}) matches with a similar term coming from the function $v_{s}$ in (\ref{S4E2}) using Taylor series as $\left\vert \xi\right\vert \rightarrow\infty.$ The second term in (\ref{Y4E1a}) matches, to the leading order, with the first term on the right-hand side of (\ref{S6E10b1}). The third term in (\ref{Y4E1a}) matches with the first corrective term that results in the Taylor expansion of $V_{2,3}$ in (\ref{M2E2}) as $\left\vert \xi\right\vert \rightarrow\infty$. Notice that we also use (\ref{Y3E4}) in this matching. The matching of the term $8M_{1,\mathcal{W}_{1}}^{\left( \mathbf{a}\right) }Y/\left\vert Y\right\vert ^{2}$ plays a relevant role in determining $\bar{y}_{\ell,\tau}.$ Indeed, the contributions of similar order in the inner region are due to the terms $\log\left( r^{2}\right) /r$ and $-2/r$ in \eqref{S6E10b1}.
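To see where the logarithmic factor below comes from, note that under the change of variables $r=\left\vert Y\right\vert /\varepsilon_{\ell}$ these two terms transform, up to the multiplicative constants contained in \eqref{S6E10b1}, as
\[
\frac{\log\left( r^{2}\right) }{r}=\frac{\varepsilon_{\ell}}{\left\vert Y\right\vert }\left( 2\log\left\vert Y\right\vert -2\log\varepsilon_{\ell}\right) ,\qquad\frac{2}{r}=\frac{2\varepsilon_{\ell}}{\left\vert Y\right\vert },
\]
so that a factor $\log\varepsilon_{\ell}$ necessarily appears in the coefficients of the corresponding terms in the intermediate region.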
Due to the change of variables $r=\left\vert \xi\right\vert =\left\vert Y\right\vert /\varepsilon_{\ell}$ it follows that, to the leading order, $8M_{1,\mathcal{W}_{1}}^{\left( \mathbf{a}\right) }=\log\varepsilon_{\ell}$. Lemma \ref{Prop3} thus yields $K_{1}^{\left( \mathbf{a}\right) }=B_{1}^{\left( \mathbf{a}\right) }\log\varepsilon_{\ell}$ as $\tau\rightarrow\infty$ to the leading order. We can then match the term $8K_{1}^{\left( \mathbf{a}\right) }\varepsilon_{\ell}^{2}$ in (\ref{Y4E1a}) with the first term in (\ref{Y3E3}), whence:
\[
\bar{y}_{\ell,\tau}\sim B_{1}^{\left( \mathbf{a}\right) }\varepsilon_{\ell}^{2}\left( \tau\right) \log\left( \varepsilon_{\ell}\left( \tau\right) \right) ,\quad\bar{y}_{\ell}\sim\mathbf{a}-B_{1}^{\left( \mathbf{a}\right) }\int_{\tau}^{\infty}\varepsilon_{\ell}^{2}\left( s\right) \log\left( \varepsilon_{\ell}\left( s\right) \right) ds\quad\text{ as }\tau\rightarrow\infty.
\]
This gives the desired asymptotic formula for the peaks stated in (\ref{S4E5}). The terms with the angular dependence $3\theta$ in (\ref{Y4E1a}) are matched with some of the higher order corrections coming from (\ref{M2E1}). However, these terms give smaller contributions and we do not pursue this computation in detail.

\subsection{Terms coming from $U_{3},\ W_{3}.$}

We now match the terms coming from $U_{3},\ W_{3}$, which can be computed by means of (\ref{M1E7a}), (\ref{M2E1}). We notice that, to the leading order, $U_{3}$ must match with the term $\varepsilon_{\ell}^{2}\Psi_{2}\left( Y\right)$ in (\ref{T2E5b}), (\ref{T2E5}). Using that $D_{1}\Psi_{2}\left( Y\right) =-6^{-1}\cos\left( 3\theta\right) /\left\vert Y\right\vert$ (since $D_{1}=8$) we can match this term with the leading matching term coming from $U_{3}$, which can be written as (cf.~\eqref{M2E1}):
\[
\Phi_{3,match}\left( y\right) =\frac{16B_{3}}{\varepsilon_{\ell}}\frac{\cos\left( 3\theta_{\left( \mathbf{a}\right) }\right) }{\left\vert y-\mathbf{a}\right\vert },
\]
whence:
\begin{equation}
B_{3}=-\frac{\varepsilon_{\ell}^{3}}{2^{5}\cdot3},\quad\bar{B}_{3}=0. \label{Y4E3}
\end{equation}
We see that this also gives a matching for the terms in (\ref{WTaylo}) with angular dependence $\cos( 3\theta_{( \mathbf{a})}),$ $\sin( 3\theta_{(\mathbf{a})})$. Using that $\left\vert \mathbf{a}\right\vert =2$ we can write those terms as
\begin{equation}
\frac{|Y|^{2}\mathbf{a}}{2\left\vert \mathbf{a}\right\vert ^{4}}-\frac{2\left( \mathbf{a}\cdot Y\right) ^{2}\mathbf{a}}{\left\vert \mathbf{a}\right\vert ^{6}}+\frac{\left( \mathbf{a}\cdot Y\right) Y}{\left\vert \mathbf{a}\right\vert ^{4}}=\frac{|Y|^{2}}{2^{4}}\left( -\cos\left( 2\theta\right) ,\sin\left( 2\theta\right) \right) . \label{Y4E4}
\end{equation}
On the other hand, we can compute two terms of the asymptotics of $\nabla_{\xi}\left( V_{3}\left( r\right) \cos\left( 3\theta\right) \right)$ as $\left\vert \xi\right\vert \rightarrow\infty$ using Taylor series. Rewriting the resulting expansion using the $y$-variable we obtain the following terms to be matched from the inner expansion:
\begin{equation}
\frac{6B_{3}}{\varepsilon_{\ell}}r^{2}\left( \cos\left( 2\theta\right) ,-\sin\left( 2\theta\right) \right) +\frac{B_{3}}{\varepsilon_{\ell}}\left( 2\cos\left( 3\theta\right) \frac{Y}{|Y|}-6\sin\left( 3\theta\right) \frac{Y^{\perp}}{|Y|}\right) .
\label{Y4E5}
\end{equation}
Using (\ref{Y4E3}) we obtain that the first term in (\ref{Y4E5}) matches with the term in (\ref{Y4E4}) and the second one matches with the terms in (\ref{Y4E1a}) with angular dependence $3\theta.$

\subsection{\label{Match4}Terms coming from $U_{4},\ W_{4}$}

We now match the asymptotics as $\left\vert \xi\right\vert \rightarrow\infty$ of the terms $U\left( \xi,\tau\right) ,\ V\left( \xi,\tau\right)$ with the terms in the outer expansions (\ref{Y1E1}), (\ref{Y1E1a}) that are of order $\varepsilon_{\ell}^{2}$ $\left( w.l.a\right)$ as $y\rightarrow\pm\mathbf{a}.$ These are the terms in the outer expansion multiplying $\Psi_{3}\left( Y\right)$ and $A$ in (\ref{T2E5b}), (\ref{T2E5}), as well as the terms multiplying $16^{-1}\left( \mathbf{a}\cdot\left( y-\mathbf{a}\right) \right)^{2}/\left\vert y-\mathbf{a}\right\vert^{2}$ and $B$ in (\ref{Y4E6}), (\ref{Y4E7}). Therefore, using also that $D_{1}=8,$ we obtain that the outer expansion for $\Phi$ to be matched as $y\rightarrow\mathbf{a}$ is:
\begin{equation}
\frac{\varepsilon_{\ell}^{2}}{2^{5}}\frac{\left( \mathbf{a}\cdot Y\right) ^{4}}{\left\vert Y\right\vert ^{4}}+8A\varepsilon_{\ell}^{2}-\frac{\varepsilon_{\ell}\varepsilon_{\ell,\tau}}{2}\frac{\left( \mathbf{a}\cdot Y\right) ^{2}}{\left\vert Y\right\vert ^{2}}+8B\varepsilon_{\ell}\varepsilon_{\ell,\tau}+\varepsilon_{\ell}\varepsilon_{\ell,\tau}\log|Y|, \label{Y4E8}
\end{equation}
where $Y=y-\mathbf{a}.$ Concerning the inner expansion, we notice that the only radial terms giving contributions of order $\varepsilon_{\ell}^{2}$ $\left( w.l.a\right)$ in the matching region are the terms $U_{4,1}+U_{4,2,1}$. Using (\ref{Z1E1}), (\ref{Z1E2}), and (\ref{Z2E9}) as well as the change of variables we obtain the following radial terms for $\Phi$ to be matched:
\begin{equation}
-\left( 2\varepsilon_{\ell}\varepsilon_{\ell,\tau}-\varepsilon_{\ell}^{2}\right)_{\tau}\left( \frac{\log|Y|}{2}-\frac{\log\varepsilon_{\ell}}{2}-\frac{5}{8}\right) +\frac{\left( 2\varepsilon_{\ell}\varepsilon_{\ell,\tau}-\varepsilon_{\ell}^{2}\right) ^{2}}{4\varepsilon_{\ell}^{2}}+\frac{\varepsilon_{\ell}^{2}}{2^{5}}, \label{Y4E9}
\end{equation}
where we have used (\ref{Y3E4}). On the other hand, we can decompose the terms in (\ref{Y4E8}) into radial terms and terms with angular dependences $\cos\left( 2\theta\right)$ and $\cos\left( 4\theta\right)$. Using also that $\left\vert \mathbf{a}\right\vert =2$ we observe that the radial terms are:
\[
\frac{3\varepsilon_{\ell}^{2}}{16}+8A\varepsilon_{\ell}^{2}+\left( 8B-1\right) \varepsilon_{\ell}\varepsilon_{\ell,\tau}+\varepsilon_{\ell}\varepsilon_{\ell,\tau}\log|Y|.
\]
We notice that the term containing $\log|Y|$ can be matched, to the leading order, with a similar term in (\ref{Y4E9}). On the other hand, the matching of the remaining terms provides an equation for $\varepsilon_{\ell}$ in the same manner as in \cite{V1}:
\begin{multline}
\frac{\left( 2\varepsilon_{\ell}\varepsilon_{\ell,\tau}-\varepsilon_{\ell}^{2}\right) _{\tau}}{2}\log\varepsilon_{\ell}+\frac{5}{8}\left( 2\varepsilon_{\ell}\varepsilon_{\ell,\tau}-\varepsilon_{\ell}^{2}\right)_{\tau}+\frac{\left( 2\varepsilon_{\ell}\varepsilon_{\ell,\tau}-\varepsilon_{\ell}^{2}\right) ^{2}}{4\varepsilon_{\ell}^{2}}+\frac{\varepsilon_{\ell}^{2}}{2^{5}}\\
=\frac{3\varepsilon_{\ell}^{2}}{16}+8A\varepsilon_{\ell}^{2}+\left( 8B-1\right) \varepsilon_{\ell}\varepsilon_{\ell,\tau}.
\label{Y5E1}
\end{multline}
We now consider the matching of the terms with angular dependence $\cos\left( 2\theta\right) .$ The terms in the outer region (cf.\thinspace(\ref{Y4E8})) with such dependence are:
\begin{equation}
\left( \frac{\varepsilon_{\ell}^{2}}{4}-\varepsilon_{\ell}\varepsilon_{\ell,\tau}\right) \cos\left( 2\theta\right) . \label{Y5E2}
\end{equation}
This term must be matched with the contributions due to $U_{4,2,2}.$ Using (\ref{Q422}) we obtain that we need to match (\ref{Y5E2}) with:
\begin{equation}
\left[ \frac{16K_{2}B_{4,2}}{\left( \varepsilon_{\ell}\right)^{2\sqrt{2}}}\left\vert Y\right\vert ^{2\sqrt{2}-2}+\sqrt{2}C_{2}K_{2}\left( \frac{\varepsilon_{\ell}^{2}}{4}-\varepsilon_{\ell}\varepsilon_{\ell,\tau}\right) \right] \cos\left( 2\theta\right) . \label{Y5E3}
\end{equation}
The matching of (\ref{Y5E2}) and (\ref{Y5E3}) requires:
\begin{equation}
B_{4,2}=O\left( \left( \varepsilon_{\ell}\right) ^{2\sqrt{2}+2}\right) \text{ as }\tau\rightarrow\infty. \label{Y5E4}
\end{equation}
Computing higher order terms in the outer expansion it would be possible to derive more precise formulas for $B_{4,2}$. Basically this would require computing higher order asymptotics of the function $\Omega\left( y\right)$ as $y\rightarrow\pm\mathbf{a}.$ The next order correction to $\Omega$ in (\ref{T2E5b}) is of order $C\left\vert Y\right\vert ^{2\sqrt{2}-2}\cos\left( 2\theta\right)$ for some $C\in\mathbb{R}$. This would give exactly the behavior (\ref{Y5E4}). However, since the detailed form of these terms will not play any role in the following, we will not continue with this analysis. The matching of (\ref{Y5E2}) and (\ref{Y5E3}) also requires $\sqrt{2}C_{2}K_{2}=1$, and this is just a consequence of (\ref{KCrel}).

We now consider the matching of the terms with dependence $\cos\left( 4\theta\right) .$ The term with this angular dependence in (\ref{Y4E8}) is
\[
\frac{\varepsilon_{\ell}^{2}}{2^{4}}\cos\left( 4\theta\right) .
\]
This term must be matched with the contributions due to $U_{4,2,3}.$ Due to \eqref{F-Q423} the inner contribution to be matched is
\[
\left[ \frac{16K_{4}c_{3}\left( \infty\right) }{\varepsilon_{\ell}^{2\sqrt{5}}}\left\vert Y\right\vert ^{2\sqrt{5}-2}+\frac{24c_{1}\left( \infty\right) +\sqrt{5}C_{4}K_{4}\left( B_{2,3}\right)^{2}}{\varepsilon_{\ell}^{2}}\right] \cos\left( 4\theta\right) .
\]
Arguing as in the derivation of (\ref{Y5E4}) we observe that $c_{3}( \infty) =O( \varepsilon_{\ell}^{2\sqrt{5}+2})$, showing that these terms are very small in the inner region. Taking into account (\ref{KCrel}) we have:
\begin{equation}
c_{1}\left( \infty\right) =\frac{\varepsilon_{\ell}^{4}}{2^{8}\cdot3}, \label{Y6E8}
\end{equation}
which concludes the matching to this order of the functions $\Phi.$

We can also obtain matchings for the functions $V$. We are just interested in the first term on the right-hand side of \eqref{F-V423}, since it gives a term of order one for $\left\vert Y\right\vert$ of order one. The remaining terms give contributions of order $\varepsilon_{\ell}^{2}$ and we will ignore them. The term to be matched for $V$ is $3(c_{1}\left( \infty\right) /\varepsilon_{\ell}^{4})\left\vert Y\right\vert ^{4}\cos\left( 4\theta\right)$.
The gradient of this term with respect to $y$ yields:
\[
\frac{12c_{1}\left( \infty\right) }{\varepsilon_{\ell}^{4}}\left\vert Y\right\vert ^{3}\cos\left( 4\theta\right) \frac{Y}{\left\vert Y\right\vert }-\frac{12c_{1}\left( \infty\right) }{\varepsilon_{\ell}^{4}}\left\vert Y\right\vert ^{3}\sin\left( 4\theta\right) \frac{Y^{\perp}}{\left\vert Y\right\vert }
\]
and using polar coordinates, as well as (\ref{Y6E8}), this becomes
\begin{equation}
\frac{\left\vert Y\right\vert ^{3}}{2^{6}}\left( \cos\left( 3\theta\right) ,-\sin\left( 3\theta\right) \right) . \label{Y7E1}
\end{equation}
On the other hand, the term in (\ref{WTaylo}) containing cubic terms is
\[
-\frac{\left( \mathbf{a}\cdot Y\right) |Y|^{2}\mathbf{a}}{\left\vert \mathbf{a}\right\vert ^{6}}+\frac{2\left( \mathbf{a}\cdot Y\right) ^{3}\mathbf{a}}{\left\vert \mathbf{a}\right\vert ^{8}}+\frac{\left\vert Y\right\vert ^{2}Y}{4\left\vert \mathbf{a}\right\vert ^{4}}-\frac{\left( \mathbf{a}\cdot Y\right) ^{2}Y}{\left\vert \mathbf{a}\right\vert ^{6}},
\]
which can be transformed, using polar coordinates, into
\begin{equation}
\frac{\left\vert Y\right\vert ^{3}}{2^{6}}\left( 4\cos^{3}\theta-3\cos\theta,\ \sin\theta-4\cos^{2}\theta\sin\theta\right) . \label{Y7E2}
\end{equation}
Standard trigonometric formulas show that (\ref{Y7E1}) and (\ref{Y7E2}) are the same.

\subsection{\label{ODE}Analysis of the ODE (\ref{Y5E1}) and the derivation of the final profile.}

Neglecting terms of order $O(( \varepsilon_{\ell,\tau})^{2})$, which will be seen to have a size of order $O(\left( \varepsilon_{\ell}\right)^{2}/\tau)$ as $\tau\rightarrow\infty$, we obtain:
\[
\varepsilon_{\ell}\varepsilon_{\ell,\tau}\log\varepsilon_{\ell}+M\varepsilon_{\ell}\varepsilon_{\ell,\tau}=L\varepsilon_{\ell}^{2}
\]
with
\begin{equation}
M\equiv\frac{5}{4}+8B\quad\text{and}\quad L\equiv\frac{3}{32}-8A. \label{LM}
\end{equation}
Integrating this equation, we obtain:
\[
\frac{d}{d\tau}(\log\varepsilon_{\ell})^{2}+2M\frac{d}{d\tau}\left( \log\varepsilon_{\ell}\right) =2L+O\left( \left( \varepsilon_{\ell,\tau}\right)^{2}\right) ,
\]
whence:
\[
(\log\varepsilon_{\ell})^{2}+2M\left( \log\varepsilon_{\ell}\right) =2L\tau+O\left( \left( \varepsilon_{\ell,\tau}\right) ^{2}\right) \quad\text{ as }\tau\rightarrow\infty,
\]
where we have used that $\log\varepsilon_{\ell}$ is of order $\sqrt{\tau}$ to the leading order. Therefore:
\[
\log\varepsilon_{\ell}=-\sqrt{2L\tau}-M+o(1)\quad\text{ as }\tau\rightarrow\infty.
\]
Then:
\begin{equation}
\varepsilon_{\ell}=\beta e^{-\alpha\sqrt{\tau}}\cdot(1+o(1))\quad\text{ as }\tau\rightarrow\infty, \label{CHL}
\end{equation}
where
\[
\alpha\equiv\sqrt{2L}=\sqrt{\frac{3}{16}-16A},\qquad\beta\equiv e^{-M}=e^{-5/4-8B}.
\]
In the original variables, the leading order corresponding to \eqref{CHL} is:
\begin{equation}
\beta\sqrt{T-t}\ e^{-\alpha\sqrt{|\log(T-t)|}}. \label{Was}
\end{equation}
Notice that since $A<0,$ which we have checked numerically as was already mentioned in Remark \ref{RemonA}, the constant $L$ in (\ref{LM}) is positive and then $\alpha$ is a real positive number. The asymptotics (\ref{Was}) characterizes the width of the peaks where the mass of $u$ is concentrated. The characteristic distance between these peaks is of order
\begin{equation}
D=4\sqrt{T-t}.
\label{distance}
\end{equation}

\begin{remark}
It is interesting to notice that the formulas \eqref{Was} and \eqref{distance} provide information about the characteristic distance at which two peaks, with masses close to $8\pi$ and concentrated in a width of order $w,$ must be located in order to obtain blow-up with the two peaks aggregating together. Notice that, for $w$ small, we have the following approximation for the critical distance required to have simultaneous blow-up and aggregation of the two peaks:
\[
D=\frac{4e^{-\alpha^{2}}w}{\beta}\exp\left( \alpha\sqrt{2\left\vert \log\left( w\right) \right\vert }\right) \quad\text{ as }w\rightarrow0.
\]
By critical distance we understand the distance at which two peaks containing a mass close to $8\pi$ in an area with radius $w$ should be localized in order to obtain singularity formation with an aggregating mass $16\pi.$ The numerical factor $4e^{-\alpha^{2}}/\beta$ cannot be expected to be really accurate if the masses concentrated in the initial peaks are not distributed exactly according to the stationary solutions \eqref{S4E2}.
\end{remark}

\begin{remark}\label{Final}
Assuming that the asymptotics for $\Omega\left( y\right)$ stated in Remark \ref{RemAs} holds, we can obtain an asymptotic formula for $u\left( x,T\right)$ as $x\to x_{0}$ using the methods in \cite{V1}. Indeed, using Remark \ref{RemAs} as well as \eqref{S1E3}, \eqref{Y1E1} we can approximate $u\left( x,\bar{t}\right)$ for any $\bar{t}<T,$ $\bar{t}\rightarrow T$ and $\left\vert x-x_{0}\right\vert =L\sqrt{T-\bar{t}},$ $L$ large. In such regions $u$ is basically constant in domains with a ``parabolic size'' $\sqrt{T-\bar{t}}.$ Therefore the equation \eqref{S1E1} can be approximated by an ODE for times $\bar{t}\leq t<T$. This allows us to approximate $u\left( x,T\right)$ as:
\[
u\left( x,T\right) \sim\frac{\beta^{2}}{\left\vert x-x_{0}\right\vert ^{2}}\exp\left( -2\alpha\sqrt{\left\vert \log\left\vert x-x_{0}\right\vert ^{2}\right\vert }\right) \varphi\left( \theta\right) \quad\text{ as }x\to x_{0}.
\]
It is interesting to notice that the function $\varphi\left( \theta\right)$ mentioned in Remark \ref{RemAs} gives the angular dependence of $u$ at the blow-up point. Therefore, a more detailed study of the asymptotics of the solutions of \eqref{Y1E3} as $\left\vert y\right\vert \rightarrow\infty$ would be in order.
\end{remark}

\section{Geometric configurations of singular self-similar solutions.\label{selfSimSing}}

In most of the previous computations we have assumed that $\Phi\left( y,\tau\right)$ approaches one very specific singular solution of \eqref{S1E7}, \eqref{S1E8} with the form \eqref{U1E3a}. However, there exist many other solutions of the system \eqref{S1E7}, \eqref{S1E8} that could be taken as possible limits of $\Phi\left( y,\tau\right)$. The problem \eqref{S1E7}, \eqref{S1E8} is meaningless if we assume that $\Phi$ is just a measure, or even a sum of Dirac masses. However, having in mind the matching arguments in the previous sections, it is natural to assume that $\Phi$ has the form (\ref{U1E1}) (i.e.~all the masses of the peaks are $8\pi$) and also that the equation must be understood as (\ref{U1E2}) or, in an equivalent way, that a given peak does not interact with itself, something that can be justified ``a posteriori'' due to the local symmetry of the peaks during the process of aggregation. In this section we just obtain a few examples of solutions of (\ref{U1E2}).
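For the reader's convenience we recall that \eqref{U1E2} is the system satisfied by the peak locations $\left\{ y_{j}\right\} _{j=1}^{N}\subset\mathbb{R}^{2}$, which, in the notation used below (cf.~the one-dimensional reduction \eqref{Y8E1} and the complex form \eqref{Y8E3}), reads:
\[
\frac{y_{j}}{2}-4\sum_{\ell=1,\;\ell\neq j}^{N}\frac{y_{j}-y_{\ell}}{\left\vert y_{j}-y_{\ell}\right\vert ^{2}}=0,\quad j=1,2,\dots,N.
\]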
It is important to remark that the existence of these solutions does not guarantee the existence of solutions of the original problem \eqref{S1E1}-\eqref{S1E2}. Indeed, although the formal arguments described in the previous sections can be extended without much difficulty to more general self-similar solutions, a crucial condition that must be satisfied, in order to obtain a meaningful equation for the width of the peaks $\varepsilon_{\ell},$ is the inequality $16^{-1}+2^{-5}-8A_{\ell}>0$, where $A_{\ell}$ is a constant defined, for the corresponding elliptic problem, in a manner analogous to the constant $A$ in Lemma \ref{Prop1}.

We do not attempt to derive a complete classification of all the solutions of (\ref{U1E2}). However, we will describe some particular classes of these solutions in order to illustrate the type of geometries that can arise during the aggregation of multiple peaks. The cases under consideration will be the following ones: points in a line, regular polygons, several polygons with different sizes combined, a complete classification of solutions for $N=2,3,$ and particular results for $N=4,5.$ We remark that, summing \eqref{U1E2} over $j$, the interaction terms cancel by symmetrization for any $N\geq2$ and for any configuration of points $\left\{ y_{j}\right\}$, whence:
\begin{equation}
\sum_{j=1}^{N}y_{j}=0. \label{Y8E5}
\end{equation}

\subsection{Solutions where all the peaks are in a line.\label{line}}

We begin with solutions of (\ref{U1E2}) where all the points $\left\{ y_{j}\right\}$ are placed in a line. We can assume that this line is the horizontal coordinate axis. Then $y_{j}=\left( x_{j},0\right)$ for some real numbers $\{x_{j}\}_{j=1}^{N}$, and \eqref{U1E2} becomes:
\begin{equation}
\frac{x_{j}}{2}-4\sum_{\ell=1,\;\ell\neq j}^{N}\frac{x_{j}-x_{\ell}}{\left\vert x_{j}-x_{\ell}\right\vert ^{2}}=0,\quad j=1,2,\dots,N,\quad N\geq2. \label{Y8E1}
\end{equation}

\begin{proposition}
For every integer $N\geq2$ there exists a unique solution of \eqref{Y8E1}. The solution is invariant, up to rearrangement of indexes, under the transformation $x_{j}\mapsto-x_{j}$.
\end{proposition}

\begin{proof}
This problem can be reformulated in a variational form, because the solutions of (\ref{Y8E1}) can be obtained as the minimizers of
\begin{equation}
E\left( x_{1},x_{2},\dots,x_{N}\right) =\sum_{k=1}^{N}\frac{\left( x_{k}\right) ^{2}}{4}-2\sum_{\ell=1}^{N}\sum_{k=1,\,k\neq\ell}^{N}\log\left\vert x_{k}-x_{\ell}\right\vert . \label{Y8E2}
\end{equation}
The functional $E\left( x_{1},x_{2},\dots,x_{N}\right)$ is strictly convex and bounded below on the convex set $\{-\infty<x_{1}<x_{2}<\dots<x_{N}<\infty\}.$ Therefore there exists a unique minimizer, at which (\ref{Y8E1}) holds. Moreover, symmetry considerations prove the invariance mentioned in the statement.
\end{proof}

\begin{remark}
The solutions of \eqref{U1E2} can be characterized in general by means of the extremal points of a functional similar to the one in \eqref{Y8E2} if the points $\left\{ y_{j}\right\}$ are not aligned. However, in such general cases the convexity properties of the functional are lost and, therefore, the functional does not yield information about the solutions in an easy manner.
\end{remark}
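To make the variational characterization explicit, note that differentiating \eqref{Y8E2} one recovers precisely \eqref{Y8E1}: since each pair of indexes appears twice in the double sum,
\[
\frac{\partial E}{\partial x_{j}}=\frac{x_{j}}{2}-4\sum_{\ell=1,\;\ell\neq j}^{N}\frac{1}{x_{j}-x_{\ell}},\quad j=1,2,\dots,N,
\]
and $1/\left( x_{j}-x_{\ell}\right) =\left( x_{j}-x_{\ell}\right) /\left\vert x_{j}-x_{\ell}\right\vert ^{2}$ for real arguments.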
\subsection{Regular polygons.\label{polygon}}

\begin{proposition}
For every integer $N\geq2$ there exists a solution of \eqref{U1E2} with the points $\left\{ y_{j}\right\}$ placed at the vertices of a regular $N$-sided polygon centered at the origin. The solution is unique up to rotation of coordinates. Moreover, the points lie on the circle of radius $2\sqrt{N-1}$ centered at the origin.
\end{proposition}

\begin{proof}
It is convenient to reformulate (\ref{U1E2}) using complex variables. Let us write $y_{j}=\left( y_{j,R},y_{j,I}\right)$ and $z_{j}=y_{j,R}+iy_{j,I}\in\mathbb{C}.$ Then (\ref{U1E2}) becomes
\[
\frac{z_{j}}{2}=4\sum_{\ell=1,\;\ell\neq j}^{N}\frac{z_{j}-z_{\ell}}{\left\vert z_{j}-z_{\ell}\right\vert ^{2}},\quad j=1,\dots,N,
\]
or equivalently,
\begin{equation}
\bar{z}_{j}=8\sum_{\ell=1,\,\ell\neq j}^{N}\frac{1}{z_{j}-z_{\ell}},\quad j=1,\dots,N. \label{Y8E3}
\end{equation}
We now look for solutions of the form:
\begin{equation}
z_{j}=\rho e^{\frac{2\pi j}{N}i},\quad j=1,\dots,N,\quad\rho>0. \label{Y8E4}
\end{equation}
Plugging (\ref{Y8E4}) into (\ref{Y8E3}) we obtain:
\begin{equation}
\frac{\rho^{2}}{8}=\left[ \frac{N-1}{2}\right] +\frac{1+\left( -1\right)^{N}}{4}, \label{Y8E4a}
\end{equation}
where $[x]$ stands for the largest integer not greater than $x\in\mathbb{R}$. This equation determines $\rho$ for each value of $N$; in fact,
\begin{equation}
\rho=2\sqrt{N-1}.
\end{equation}
This shows that there exists a solution of \eqref{U1E2} whose points form a regular $N$-sided polygon. The center of the polygon is necessarily at the origin because of \eqref{Y8E5}.
\end{proof}
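The computation leading to \eqref{Y8E4a} and to the value of $\rho$ can be carried out by means of the classical root-of-unity identity $\sum_{k=1}^{N-1}\left( 1-\omega^{k}\right)^{-1}=\left( N-1\right) /2$, where $\omega=e^{2\pi i/N}$. Indeed, writing $z_{j}-z_{\ell}=\rho\omega^{j}\left( 1-\omega^{\ell-j}\right)$ we obtain
\[
\sum_{\ell=1,\,\ell\neq j}^{N}\frac{1}{z_{j}-z_{\ell}}=\frac{1}{\rho\omega^{j}}\sum_{k=1}^{N-1}\frac{1}{1-\omega^{k}}=\frac{\left( N-1\right) \bar{z}_{j}}{2\rho^{2}},
\]
so that \eqref{Y8E3} reduces to $\rho^{2}=4\left( N-1\right)$, in agreement with \eqref{Y8E4a} for both parities of $N$.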
\subsection{Classification of solutions for the cases $N=2$ and $N=3.$}

In these particular cases we can characterize uniquely all the solutions of (\ref{U1E2}). The problem becomes more complicated as the number $N$ increases because, as will be seen later, the number of geometrical configurations increases with $N$.

\subsubsection{The case $N=2.$}

\begin{proposition}
Suppose that $N=2.$ Then the solution of \eqref{U1E2} is uniquely given by $y_{1}=(-2,0),$ $y_{2}=(2,0)$ up to rotation of coordinates.
\end{proposition}

\begin{proof}
Due to (\ref{Y8E5}) we have $y_{2}=-y_{1}.$ We can assume, up to rotation, that $y_{1}=\left( x_{1},0\right)$ with $x_{1}>0.$ Then (\ref{U1E2}) reduces to:
\[
\frac{x_{1}}{2}=\frac{2}{x_{1}},
\]
whence $x_{1}=2.$ This simultaneously proves the uniqueness of the obtained solution in the class of solutions studied in Subsections \ref{line} and \ref{polygon} when $N=2$.
\end{proof}

\subsubsection{The case $N=3.$}

This case is still sufficiently simple to obtain a complete classification of the solutions. There are just two solutions of (\ref{U1E2}) up to rotation: either the three points are in a line as in Subsection \ref{line}, or at the vertices of an equilateral triangle as in Subsection \ref{polygon}.

\begin{proposition}
Suppose that $N=3$. Then for every solution of \eqref{U1E2} the points $\{y_{1},y_{2},y_{3}\}$ are placed, up to rotation, either at the ends and the midpoint of a segment of length $4\sqrt{3}$, or at the vertices of the equilateral triangle with side length $2\sqrt{6}$.
\end{proposition}

\begin{proof}
Suppose first that the three points are in a line, i.e., $y_{j}=(x_{j},0)$ for some $x_{j}\in\mathbb{R}$. Then the line crosses the origin due to \eqref{Y8E5} and, up to rotation, the resulting solution is the one described by means of the minimizers of the functional $E$ in (\ref{Y8E2}). In this case they can be computed explicitly. Indeed, the invariance of the solution under the transformation $x_{j}\rightarrow-x_{j}$ implies that, under the assumption $x_{1}<x_{2}<x_{3},$ we have $x_{2}=0,$ $x_{1}=-x_{3}.$ Then (\ref{Y8E1}) reduces to
\[
\frac{x_{3}}{2}=4\left[ \frac{1}{x_{3}}+\frac{2x_{3}}{\left( 2x_{3}\right) ^{2}}\right] =\frac{6}{x_{3}},
\]
whence $x_{3}=-x_{1}=2\sqrt{3}.$

Suppose now that the three points $\left\{ y_{j}\right\}$ are not in a line. We will prove that in this case the three points are placed at the vertices of an equilateral triangle. It is convenient to use the complex notation of Subsection \ref{polygon}. We may assume, without loss of generality, that $z_{3}=\bar{z}_{3}.$ On the other hand, using also (\ref{Y8E5}) we then observe that (\ref{U1E2}) becomes
\begin{equation}
\bar{z}_{1}=\frac{24z_{1}}{(z_{1}-z_{2})(z_{1}-z_{3})},\quad\bar{z}_{2}=\frac{24z_{2}}{(z_{2}-z_{1})(z_{2}-z_{3})}. \label{Y8E6}
\end{equation}
Due to (\ref{Y8E5}) we have $z_{1}\cdot z_{2}\neq0,$ since otherwise the three points would be aligned, against the assumption. Taking absolute values in (\ref{Y8E6}), we then have
\begin{equation}
\left\vert z_{1}-z_{2}\right\vert \left\vert z_{1}-z_{3}\right\vert =\left\vert z_{2}-z_{1}\right\vert \left\vert z_{2}-z_{3}\right\vert =24. \label{Y8E6a}
\end{equation}
Therefore $\left\vert z_{1}-z_{3}\right\vert =\left\vert z_{2}-z_{3}\right\vert =:\sigma>0.$ On the other hand, there is nothing special about the point $z_{3}$ and, using the rotational invariance of (\ref{U1E2}), we may replace $z_{3}$ by $z_{1}$ and prove in a similar way that $\left\vert z_{3}-z_{1}\right\vert =\left\vert z_{2}-z_{1}\right\vert =\sigma.$ Therefore the three points are at the vertices of an equilateral triangle and the obtained solution is the corresponding one considered in Subsection \ref{polygon}. The precise size of the triangle can be computed using (\ref{Y8E6a}) as $\sigma=2\sqrt{6}.$
\end{proof}

\subsection{The case $N=4.$}

We have not obtained a complete classification of the solutions of (\ref{U1E2}) if $N=4$, but we have some partial results suggesting that there exist at least three solutions (up to rotation). Notice first that we can obtain two solutions as in Subsections \ref{line} and \ref{polygon}; actually they can be computed explicitly. In the case of solutions with the four peaks in a line we write
\[
x_{1}=-R,\quad x_{2}=-\theta R,\quad x_{3}=\theta R,\quad x_{4}=R
\]
with $R>0$ and $0<\theta<1.$ The equation (\ref{Y8E1}) then becomes:
\begin{equation}
R^{2}=8\left[ \frac{1}{1-\theta}+\frac{1}{1+\theta}+\frac{1}{2}\right] ,\quad\theta R^{2}=8\left[ -\frac{1}{1-\theta}+\frac{1}{2\theta}+\frac{1}{1+\theta}\right] , \label{Y8E7}
\end{equation}
whence, eliminating $R$ (see the computation after this paragraph), we obtain $8\theta^{2}=\left( 1-\theta^{2}\right)^{2}$, and hence
\[
\theta=\sqrt{5-2\sqrt{6}},
\]
since $\theta^{2}\in\left( 0,1\right)$. Using then the first equation in (\ref{Y8E7}) we obtain:
\[
R=2\sqrt{\sqrt{6}+3}
\]
and this concludes the characterization of the solution with $N=4$ and all the peaks aligned.
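For completeness we detail the elimination of $R$ in \eqref{Y8E7}. The two equations there can be rewritten as
\[
R^{2}=\frac{16}{1-\theta^{2}}+4,\qquad\theta R^{2}=-\frac{16\theta}{1-\theta^{2}}+\frac{4}{\theta},
\]
and multiplying the first one by $\theta$ and equating we find
\[
\frac{32\theta}{1-\theta^{2}}=\frac{4}{\theta}-4\theta=\frac{4\left( 1-\theta^{2}\right) }{\theta},
\]
which is precisely $8\theta^{2}=\left( 1-\theta^{2}\right)^{2}$, i.e.~$\theta^{4}-10\theta^{2}+1=0$, whose only root with $\theta^{2}\in\left( 0,1\right)$ is $\theta^{2}=5-2\sqrt{6}$.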
If $N=4$ we can obtain a solution with all the peaks at the vertices of a square as indicated in Subsection \ref{polygon}. Using (\ref{Y8E4}) and (\ref{Y8E4a}) we obtain that the vertices are at the points:
\[
z_{j}=2\sqrt{3}e^{\frac{\pi j}{2}i},\quad j=0,1,2,3.
\]
We remark that it is possible to obtain another solution in the case $N=4$ which belongs to neither of the classes in Subsections \ref{line} and \ref{polygon}.

\begin{proposition}
Suppose that $N=4.$ Then there exists a solution of \eqref{U1E2} with one peak at the origin and the three remaining peaks at the vertices of an equilateral triangle.
\end{proposition}

\begin{proof}
We look for a solution of the form $y_{j}=\rho e^{(2j/3)\pi i},\ j=1,2,3$, $y_{4}=0$. Due to the symmetry of the solution under rotation by the angle $2\pi/3$, the equation \eqref{U1E2} becomes:
\[
\frac{1}{\rho^{2}}\left( y_{1}+y_{2}+y_{3}\right) =0,\qquad\frac{\rho^{2}}{8}=1+\frac{1-e^{\frac{2\pi}{3}i}}{\left\vert 1-e^{\frac{2\pi}{3}i}\right\vert ^{2}}+\frac{1-e^{-\frac{2\pi}{3}i}}{\left\vert 1-e^{-\frac{2\pi}{3}i}\right\vert ^{2}}.
\]
The first equation is automatically satisfied by \eqref{Y8E5}, whereas the second one gives $\rho=4$.
\end{proof}

\subsection{The case $N=5$.}

In this case we do not attempt to obtain a complete classification of the solutions, but indicate some examples to illustrate what type of solutions can arise. We can obtain solutions with all the peaks in a line as in Subsection \ref{line}. In this case we have, due to the symmetry of the problem:
\[
y_{1}=-R,\quad y_{2}=-\theta R,\quad y_{3}=0,\quad y_{4}=\theta R,\quad y_{5}=R,
\]
where $R>0$ and $\theta\in(0,1)$. The equations (\ref{U1E2}) are then reduced to
\[
\frac{R^{2}}{8}=\frac{3}{2}+\frac{2}{1-\theta^{2}},\qquad\frac{\theta R^{2}}{8}=\frac{3}{2\theta}-\frac{2\theta}{1-\theta^{2}}.
\]
Eliminating $R^{2}$ we obtain $3\theta^{4}-14\theta^{2}+3=0,$ whence
\[
\theta=\sqrt{\frac{7}{3}-\frac{2}{3}\sqrt{10}}.
\]
Therefore:
\[
R=\frac{1}{3}\sqrt{3}\sqrt[4]{10}\sqrt{\sqrt{10}-2}\left( \sqrt{10}+2\right) .
\]
We can also obtain a solution where the peaks are placed at the vertices of a regular pentagon. Using (\ref{Y8E4}) and (\ref{Y8E4a}) we obtain, in the complex notation:
\[
z_{j}=4e^{\frac{2\pi j}{5}i},\quad j=0,1,2,3,4.
\]
There is also one solution that consists of one peak at the origin and the other four peaks at the vertices of a square centered at the origin. Assuming that the outer peaks are at the points $z_{j}=\rho e^{\frac{\pi j}{2}i},\ j=0,1,2,3$, and solving the equations (\ref{U1E2}), we obtain:
\[
z_{j}=2\sqrt{5}e^{\frac{\pi j}{2}i},\quad j=0,1,2,3.
\]
We finally remark that in the case $N=5$ it is possible to obtain a distribution of peaks whose only symmetry is the reflection with respect to a line. More precisely, we have:

\begin{proposition}
There exists a solution of \eqref{U1E2} with the points $\{y_{k}\}$ placed, in terms of the complex notation of Subsection \ref{polygon}, at the following positions:
\begin{equation}
z_{k}=x_{k}\in\mathbb{R}\quad\text{ for }k=1,2,3,\quad z_{4}=\alpha+i\beta,\quad z_{5}=\alpha-i\beta \label{Y9E1a}
\end{equation}
with $\alpha<0,\ \beta>0.$
\end{proposition}

\begin{proof}
We prove the existence of a solution of (\ref{U1E2}) of the form (\ref{Y9E1a}) by means of a topological argument. Due to (\ref{Y8E5}) we have:
\begin{equation}
\alpha=-\frac{x_{1}+x_{2}+x_{3}}{2}. \label{Y9E2}
\end{equation}
We assume in the following that $\alpha$ is chosen as in \eqref{Y9E2}. On the other hand, we can obtain an equation for $\beta$ using the vertical component (or imaginary part in complex notation) of (\ref{U1E2}) with $j=4$:
\begin{equation}
\frac{1}{8}=\sum_{k=1}^{3}\frac{1}{\left( \alpha-x_{k}\right)^{2}+\beta^{2}}+\frac{1}{2\beta^{2}} \label{Y9E3}
\end{equation}
with $\alpha$ given by (\ref{Y9E2}).
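For the reader's convenience we note that \eqref{Y9E3} follows from taking imaginary parts in the complex form \eqref{Y8E3} at $z_{4}=\alpha+i\beta$:
\[
-\beta=\operatorname{Im}\bar{z}_{4}=8\operatorname{Im}\left[ \sum_{k=1}^{3}\frac{1}{\left( \alpha-x_{k}\right) +i\beta}+\frac{1}{2i\beta}\right] =-8\beta\sum_{k=1}^{3}\frac{1}{\left( \alpha-x_{k}\right)^{2}+\beta^{2}}-\frac{4}{\beta},
\]
and dividing by $-8\beta$ yields \eqref{Y9E3}.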
Since the right-hand side of (\ref{Y9E3}) is a decreasing function of $\beta,$ we see that there exists a unique solution of (\ref{Y9E3}) with $\beta>0$ for any $\left( x_{1},x_{2},x_{3}\right)$ in the set $-\infty<x_{1}<x_{2}<x_{3}<\infty$. We denote it by $\beta\left( x_{1},x_{2},x_{3}\right)$. Moreover, notice that (\ref{Y9E3}) implies
\begin{equation}
\beta\left( x_{1},x_{2},x_{3}\right) >2. \label{Y9E3a}
\end{equation}
The equations (\ref{U1E2}) with $j=1,2,3$ reduce, due to (\ref{Y9E1a}), to:
\begin{equation}
\frac{x_{k}}{8}=\sum_{j=1,\,j\neq k}^{3}\frac{x_{k}-x_{j}}{\left\vert x_{k}-x_{j}\right\vert^{2}}+\frac{2\left( x_{k}-\alpha\right) }{\left( \alpha-x_{k}\right)^{2}+\beta^{2}},\quad k=1,2,3, \label{Y9E4}
\end{equation}
with $\alpha$ as in (\ref{Y9E2}). In order to prove that there exist solutions of (\ref{Y9E4}) in the cone
\[
\mathcal{C}=\left\{ x=\left( x_{1},x_{2},x_{3}\right) :-\infty<x_{1}<x_{2}<x_{3}<\infty\right\} ,
\]
we treat (\ref{Y9E4}) as a perturbation of the equation:
\[
F_{k}\left( x\right) =\frac{x_{k}}{8}-\sum_{j=1,\,j\neq k}^{3}\frac{x_{k}-x_{j}}{\left\vert x_{k}-x_{j}\right\vert ^{2}}=0,\quad k=1,2,3,
\]
using topological degree. Since the function $F\left( x\right) =\left( F_{1},F_{2},F_{3}\right) \left( x\right)$ becomes singular at the boundary of the cone $\mathcal{C}$, we construct a subset $\mathcal{U}$ with the property that $\left\vert F\left( x\right) \right\vert \geq100$ on the boundary $\partial\mathcal{U}$. The functions $G_{k}\left( x\right) =2\left( x_{k}-\alpha\right) /(\left( \alpha-x_{k}\right)^{2}+\beta^{2})$, $k=1,2,3$, are bounded in $\mathcal{C}$ by $2$, as can be easily checked by considering separately the cases $\left\vert x_{k}-\alpha\right\vert \geq1$ and $\left\vert x_{k}-\alpha\right\vert \leq1$ and using (\ref{Y9E3a}). Therefore we would have $\left\vert G\left( x\right) \right\vert <\left\vert F\left( x\right) \right\vert$ on $\partial\mathcal{U}$. On the other hand, there is a unique nondegenerate solution of the equation $F\left( x\right) =0$ in $\mathcal{C}$ due to the results in Subsection \ref{line}. Classical degree theory then shows that there exists at least one solution of $\left( F+G\right) \left( x\right) =0$ in $\mathcal{U}$, whence the existence of the desired solution of (\ref{Y9E4}) follows.

We construct the subset $\mathcal{U}$ in the form:
\[
\mathcal{U}=\left\{ x\in\mathcal{C}:x_{1}+\varepsilon<x_{2},\ x_{2}+\varepsilon<x_{3},\ -R<x_{k}<R,\ k=1,2,3\right\} ,
\]
where $\varepsilon>0$ and $R>0$ are constants to be determined. Notice that the boundary $\partial\mathcal{U}$ is contained in the planes $\Pi_{1,2}=\left\{ x_{2}-x_{1}=\varepsilon\right\} ,\ \Pi_{2,3}=\left\{ x_{3}-x_{2}=\varepsilon\right\} ,\ \Pi_{-R}=\left\{ x_{1}=-R\right\} ,\ \Pi_{R}=\left\{ x_{3}=R\right\} .$ We will assume that $1/\varepsilon$ is much larger than $R.$ Along the part of the boundary $\partial\mathcal{U}$ contained in the plane $\Pi_{1,2}$ we then have
\[
F_{1}\left( x\right) \geq\frac{1}{\varepsilon}-\frac{R}{8}\geq\frac{1}{2\varepsilon},
\]
and an analogous estimate holds for $-F_{3}$ on $\Pi_{2,3}$. We then proceed to consider the part of $\partial\mathcal{U}$ contained in $\Pi_{R}.$ Suppose first that $x_{3}-x_{1}\leq1.$ Then $x_{1}\geq R-1$ and we obtain
\[
F_{1}\left( x\right) =\frac{x_{1}}{8}+\frac{1}{x_{2}-x_{1}}+\frac{1}{x_{3}-x_{1}}>\frac{R-1}{8},
\]
which can be made larger than $100$ by assuming that $R>801$. Suppose now that $x_{3}-x_{1}>1.$ We distinguish two cases.
Suppose first that $x_{3}-x_{2}>1.$ Then
\[
F_{3}\left( x\right) =\frac{x_{3}}{8}-\frac{1}{x_{3}-x_{1}}-\frac{1}{x_{3}-x_{2}}\geq\frac{R}{16}
\]
if $R$ is large, because the last two terms are bounded by one. Suppose now that $x_{3}-x_{2}\leq1.$ If $x_{2}-x_{1}\leq1/4$, then $x_{3}-x_{2}\geq3/4$ and we obtain again $F_{3}\left( x\right) \geq R/16.$ If instead $x_{2}-x_{1}>1/4$, then
\[
F_{2}\left( x\right) =\frac{x_{2}}{8}-\frac{1}{x_{2}-x_{1}}+\frac{1}{x_{3}-x_{2}}>\frac{x_{2}}{8}-\frac{1}{x_{2}-x_{1}}\geq\frac{x_{2}}{8}-4.
\]
Since $x_{3}-x_{2}\leq1$ we obtain $x_{2}\geq R-1$ and therefore $F_{2}\left( x\right) \geq R/16.$ We thus have $\left\vert F\left( x\right) \right\vert \geq100$ for $x\in\partial\mathcal{U}\cap\Pi_{R}.$ The case of $x\in\partial\mathcal{U}\cap\Pi_{-R}$ is similar. This establishes the existence of the desired solutions of (\ref{Y9E4}).

It only remains to prove that the equation \eqref{U1E2} with $j=4$ holds. This equation is just:
\[
\frac{\alpha}{8}=\sum_{k=1}^{3}\frac{\alpha-x_{k}}{\left( \alpha-x_{k}\right)^{2}+\beta^{2}}.
\]
In order to check that this equation holds, we just notice that, due to \eqref{Y9E2}, it is equivalent to:
\[
-\frac{x_{1}+x_{2}+x_{3}}{16}=\sum_{k=1}^{3}\frac{\alpha-x_{k}}{\left( \alpha-x_{k}\right) ^{2}+\beta^{2}},
\]
which, according to \eqref{Y9E4}, is in turn equivalent to:
\[
\sum_{k=1}^{3}\sum_{j=1,\,j\neq k}^{3}\frac{x_{k}-x_{j}}{\left\vert x_{k}-x_{j}\right\vert ^{2}}=0.
\]
The last identity is trivially satisfied by symmetrization.
\end{proof}

\begin{remark}
We have made some computations suggesting that in the case $N=4$ the only trapezoidal solution is the square; the only rhombic solution is also the square. Increasing the value of $N$ it becomes possible to show that there are also solutions with nested squares, triangles, etc. However, we will not continue this discussion here. It would be interesting to determine the smallest number $N$ yielding solutions without any symmetry group.
\end{remark}

\section{Bounded domains.}
\label{CNprob}

Solving the Keller-Segel model in the half circle, it is possible to obtain a wealth of shapes yielding aggregation at the boundary. The mass is, in all cases, $4\pi m$ with $m$ a positive integer. It is possible to obtain, for instance, $8\pi$ instead of $4\pi$ just by keeping one point in the interior of the domain. Notice that one must choose symmetric point configurations in order to ensure that the homogeneous Neumann boundary conditions are satisfied.

\bigskip

\noindent\textbf{Acknowledgements.} The authors thank C.\,Cuesta, M.\,Fontelos, and B.\,Gamboa for their help with the numerical computation of the constant $A$ in Lemma \ref{Prop1}.
\section{Introduction}

We study the zero dissipation limit of the solution to the Navier-Stokes equations of a compressible heat-conducting gas in Lagrangian coordinates:
\begin{equation}
\left\{
\begin{array}{l}
\displaystyle v_t-u_x=0,\\
\displaystyle u_t+p_x=\varepsilon(\frac{u_x}{v})_x,\\
\displaystyle (e+\frac{u^2}{2})_t+(pu)_x=\kappa(\frac{\theta_x}{v})_x+\varepsilon(\frac{uu_x}{v})_x
\end{array}
\right. \label{NS}
\end{equation}
with Riemann initial data
\begin{equation}
(v,u,\theta)(0,x)=\left\{
\begin{array}{ll}
(v_-,u_-,\theta_-),&x<0,\\
(v_+,u_+,\theta_+),&x>0,
\end{array}
\right. \label{DD}
\end{equation}
where the functions $v(x,t)>0$, $u(x,t)$, $\theta(x,t)>0$ represent the specific volume, velocity and absolute temperature of the gas, respectively. Here $p=p(v,\theta)$ is the pressure, $e=e(v,\theta)$ is the internal energy, $\varepsilon>0$ is the viscosity constant and $\kappa>0$ is the coefficient of heat conduction. We consider an ideal polytropic gas, that is,
\begin{equation*}
p=\frac{R\theta}{v},\qquad e=\frac{R\theta}{\gamma-1}, \label{state-equa}
\end{equation*}
with $\gamma>1$, $R>0$ being gas constants.

The study of the asymptotic behavior of viscous flows, as the viscosity tends to zero, is one of the important problems in the theory of compressible fluid flows. When the solution of the inviscid flow is smooth, the zero dissipation limit problem can be solved by the classical scaling method. However, the inviscid compressible flow contains discontinuities, such as shock waves, in general. In this case, it is conjectured that a general weak entropy solution to the inviscid flow should be the strong limit of the solutions to the corresponding viscous flows with the same initial data as the viscosity vanishes.

It is well known that the solution to the Riemann problem for the Euler equations consists of three basic wave patterns, that is, the shock, the rarefaction wave and the contact discontinuity. Moreover, the Riemann solution is essential in the theory of the Euler equations, as it captures both the local and the global behavior of general solutions.

For hyperbolic conservation laws with uniform viscosity
$$
u_t+f(u)_x=\varepsilon u_{xx},
$$
where $f(u)$ satisfies some assumptions to ensure the hyperbolic nature of the corresponding inviscid system, Goodman-Xin \cite{GX} verified the limit for piecewise smooth solutions separated by non-interacting shock waves using a matched asymptotic expansion method. Later, Yu \cite{Y} proved it for hyperbolic conservation laws with both shock and initial layers. In 2005, the important progress made by Bianchini-Bressan \cite{BB} justified the vanishing viscosity limit in the BV-space, even though the problem is still unsolved for physical systems such as the compressible Navier-Stokes equations.

For the compressible isentropic Navier-Stokes equations, where the conservation of energy in \eqref{NS} is neglected in the isentropic regime, Hoff-Liu \cite{HL} first proved the vanishing viscosity limit for a piecewise constant shock with an initial layer. Later, Xin \cite{X} justified the limit for rarefaction waves. Then, Wang \cite{WH} generalized the result of Goodman-Xin \cite{GX} to the isentropic Navier-Stokes equations. Recently, Chen-Perepelitsa \cite{CP} proved the convergence of the isentropic compressible Navier-Stokes equations to the compressible Euler equations as the viscosity vanishes in Eulerian coordinates for general initial data by using the compensated compactness method, provided the far field does not contain vacuum.
Note that this result allows the initial data to contain vacuum in the interior domain. However, the framework of compensated compactness is so far basically limited to $2\times 2$ systems, so that this result cannot be applied to the full compressible Navier-Stokes equations (\ref{NS}). For the full compressible Navier-Stokes equations, there are investigations in the literature on the limits to the Euler system for the basic wave patterns. We refer to Jiang-Ni-Sun \cite{JNS} and Xin-Zeng \cite{XZ} for the rarefaction wave, Wang \cite{W} for the shock wave, Ma \cite{M} for the contact discontinuity, and Huang-Wang-Yang \cite{HWY, HWY1} for the superposition of two rarefaction waves and a contact discontinuity and for the superposition of rarefaction and shock waves, respectively. We should point out that the limit shown in \cite{JNS} was for discontinuous initial data, while the other results mentioned were for (well-prepared) smooth data. In this paper, we shall investigate the zero dissipation limit of the full Navier-Stokes equations \eqref{NS} with Riemann initial data \eqref{DD} in the case of the superposition of two rarefaction waves and a contact discontinuity. The local and global well-posedness of the full system \eqref{NS}, or of the corresponding isentropic system, with discontinuous initial data has been systematically studied by Hoff et al., see \cite{H1, H2, H3, H4, H5, H6, CHT}. In order to get the zero dissipation limit to the Riemann solution of the Euler system, we shall combine the local existence of solutions with discontinuous data from \cite{H3} and the time-asymptotic stability analysis of the compressible Navier-Stokes equations \eqref{NS1}. Compared with the previous result \cite{HWY}, where the same limit process is studied for (well-prepared) smooth initial data, the main difficulty in the proof here lies in the discontinuity of the initial data. The discontinuity of the initial data for the volume $v(t,x)$ propagates for all time along the particle path due to the hyperbolic regime, while the parabolic structure exerts smoothing effects on the velocity $u(t,x)$ and the temperature $\theta(t,x)$; this interaction of discontinuity and smoothing brings technical difficulties. To circumvent these difficulties, we shall choose suitable weight functions to carry out the weighted energy estimates in terms of the superposition wave structure (see Remark 3.7), and use the energy method of Huang-Li-Matsumura \cite{HLM} for the stability of two rarefaction waves with a contact discontinuity in the middle, where the authors obtained a new estimate on the heat kernel which can be applied to the study of the stability of the viscous contact wave in the framework of the rarefaction wave (see Lemma \ref{lemma3}). Namely, the anti-derivative variable of the perturbation is not necessary, and the estimates on the perturbation itself are also available to get the stability of the viscous contact wave. Without loss of generality, we assume the following relation between the viscosity constant $\varepsilon$ and the heat-conducting coefficient $\kappa$ of system \eqref{NS}, as in \cite{JNS}: \begin{equation} \left\{ \begin{array}{l} \displaystyle \kappa=O(\varepsilon)\qquad {\rm as}\quad \varepsilon\rightarrow0;\\ \displaystyle \nu\doteq\frac{\kappa(\varepsilon)}{\varepsilon}\geq c>0\quad {\rm for~some~positive~constant}~c,\qquad {\rm as}\quad \varepsilon\rightarrow0. \end{array} \right. 
\label{viscosity} \end{equation} If $\kappa=\varepsilon=0$ in \eqref{NS}, then the corresponding Euler system reads \begin{equation} \left\{ \begin{array}{l} v_t-u_x=0,\\ u_t+p_x=0,\\[2mm] \displaystyle{\Big( e+\frac{u^2}{2}\Big)_t+(pu)_x=0.} \end{array} \right.\label{euler} \end{equation} It can be easily computed that the eigenvalues of the Jacobian matrix of the flux function of \eqref{euler} are \begin{equation} \lambda_1=-\sqrt{\frac{\gamma p}{v}},\quad \lambda_2=0,\quad \lambda_3=\sqrt{\frac{\gamma p}{v}}. \label{eigen} \end{equation} It is well known that the first and third characteristic fields of \eqref{euler} are genuinely nonlinear and the second one is linearly degenerate (see \cite{S}). For the Euler equations, there are three basic wave patterns: the shock wave, the rarefaction wave and the contact discontinuity. The Riemann solution to the Euler equations has a basic wave pattern consisting of the superposition of these three waves with the contact discontinuity in the middle. For later use, let us first recall the wave curves for the two types of basic waves studied in this paper. Given the right end state $(v_+,u_+,\theta_+)$ with $v_+, \theta_+>0$, the following wave curves in the phase space $\{(v,u,\theta)|v>0, \theta>0\}$ are defined for the Euler equations. $\bullet$ Contact discontinuity curve: \begin{equation} CD(v_+,u_+,\theta_+)= \{(v,u,\theta) | u=u_+, p=p_+, v \not\equiv v_+ \}. \label{CD} \end{equation} $\bullet$ $i$-Rarefaction wave curve $(i=1,3)$: \begin{equation} R_i (v_+, u_+, \theta_+):=\Bigg{ \{} (v, u, \theta)\Bigg{ |}u<u_+ ,~u=u_+-\int^v_{v_+} \lambda_i(\eta, s_+) \,d\eta,~ s(v, \theta)=s_+\Bigg{ \}},\label{Ri} \end{equation} where $s_+=s(v_+,\theta_+)$ and $\lambda_i=\lambda_i(v,s)$ defined in \eqref{eigen} is the $i$-th characteristic speed of the Euler system \eqref{euler}. Now, we define the solution profile that consists of the superposition of two rarefaction waves and a contact discontinuity. Let $ (v_-,u_-,\theta_-) \in$ $R_1$-$CD$-$R_3(v_+,u_+,\theta_+)$. Then there exist two uniquely determined intermediate states $(v_*, u_*,\theta_*)$ and $(v^*, u^*,\theta^*)$, such that $(v_*, u_*,\theta_*)\in R_1(v_-, u_-,\theta_-)$, $(v_*, u_*,\theta_*)\in CD(v^*, u^*,\theta^*)$ and $(v^*, u^*,\theta^*)\in R_3(v_+,u_+,\theta_+)$. Thus, the wave pattern $(\bar V,\bar U,\bar\Theta)(t,x)$ consisting of the 1-rarefaction wave, the 2-contact discontinuity and the 3-rarefaction wave that solves the corresponding Riemann problem of the Euler system \eqref{euler} can be defined by \begin{eqnarray} \left(\begin{array}{cc} \bar V\\ \bar U \\ \bar \Theta \end{array} \right)(t, x)= \left(\begin{array}{cc}v^{r_1}+ v^{cd}+ v^{r_3}\\ u^{r_1}+ u^{cd}+ u^{r_3} \\ \theta^{r_1}+ \theta^{cd}+ \theta^{r_3} \end{array} \right)(t, x) -\left(\begin{array}{cc} v_*+v^*\\ u_*+u^*\\ \theta_*+\theta^* \end{array} \right) ,\label{RS} \end{eqnarray} where $(v^{r_1}, u^{r_1}, \theta^{r_1} )(t,x)$ is the 1-rarefaction wave defined in \eqref{Ri} with the right state $(v_+, u_+, \theta_+)$ replaced by $(v_*, u_*, \theta_* )$, $(v^{cd}, u^{cd}, \theta^{cd} )(t,x)$ is the contact discontinuity defined in \eqref{CD} with the states $(v_-, u_-, \theta_-)$ and $(v_+, u_+, \theta_+)$ replaced by $(v_*, u_*, \theta_* )$ and $(v^*, u^*, \theta^* )$ respectively, and $(v^{r_3}, u^{r_3}, \theta^{r_3})(t,x)$ is the 3-rarefaction wave defined in \eqref{Ri} with the left state $(v_-, u_-, \theta_-)$ replaced by $(v^*, u^*, \theta^* )$. Now we state the main result as follows. 
\begin{theorem}\label{limit-th} Given a Riemann solution $(\bar V,\bar U,\bar\Theta)(t,x)$ defined in \eqref{RS}, which is the superposition of two rarefaction waves and a contact discontinuity for the Euler system \eqref{euler}, there exist small positive constants $\delta_0$ and $\varepsilon_0$, such that if $\varepsilon\leq\varepsilon_0$ and the wave strength $\delta\doteq|(v_+-v_-,u_+-u_-,\theta_+-\theta_-)|\leq \delta_0$, then the compressible Navier-Stokes equations \eqref{NS} with \eqref{DD} and \eqref{viscosity} admit a unique global piecewise smooth solution $(v^\varepsilon,u^\varepsilon,\theta^\varepsilon)(t,x)$ satisfying: \begin{itemize} \item The quantities $u^\varepsilon,\theta^\varepsilon$, $p(v^\varepsilon,\theta^\varepsilon)-\varepsilon\frac{u^\varepsilon_x}{v^\varepsilon}$ and $\frac{\theta^\varepsilon_x}{v^\varepsilon}$ are continuous for $t>0$, and the jumps in $v^\varepsilon,u^\varepsilon_x,\theta^\varepsilon_x$ at $x=0$ satisfy \begin{equation*} |([v^\varepsilon(t,0)],[u^\varepsilon_x(t,0)],[\theta^\varepsilon_x(t,0)])|\leq Ce^{-\frac{ct}{\varepsilon}}, \end{equation*} where the constants $C$ and $c$ are independent of $t$ and $\varepsilon$. \item Moreover, under the condition \eqref{viscosity}, it holds that \begin{equation} \lim_{\varepsilon\rightarrow 0}\sup_{(t,x)\in\Sigma_h}|(v^\varepsilon,u^\varepsilon,\theta^\varepsilon)(t,x)-(\bar V,\bar U,\bar\Theta)(t,x)|=0,\quad \forall h>0, \label{limit} \end{equation} where $\Sigma_h=\big\{(t,x)|t\geq h, \frac{|x|}{\sqrt{\varepsilon+t}}\geq h \varepsilon^{\alpha},0\leq\alpha<\f12\big\}$. \end{itemize} \end{theorem} \begin{remark} Theorem \ref{limit-th} shows that, away from the initial time $t=0$ and the contact discontinuity located at $x=0$, there exists a unique global solution $(v^\varepsilon,u^\varepsilon,\theta^\varepsilon)(t,x)$ of the compressible Navier-Stokes equations \eqref{NS} which converges to the Riemann solution $(\bar V,\bar U,\bar\Theta)(t,x)$ consisting of two rarefaction waves and a contact discontinuity when $\varepsilon$ and $\kappa$ satisfy the relation \eqref{viscosity} and $\varepsilon$ tends to zero. Moreover, the convergence is uniform on the set $\Sigma_h$ for any $h>0$. \end{remark} \noindent{\bf Notations.} In the paper, we always use the notation $\displaystyle \Xint-_{\mathbf R}=\int_{\mathbf{R}^+}+\int_{\mathbf{R}^-}$, $\|\cdot\|$ to denote the usual $L^2(\mathbf{R})$ norm, and $\|\cdot \Xparallel-$ to denote the piecewise $L^2$ norm, that is, $\displaystyle \|f \Xparallel-^2=\Xint-_{\mathbf R}f^2dy$. $\|\cdot\|_1$ and $\|\cdot\Xparallel-_1$ represent the $H^1(\mathbf{R})$ norm and the piecewise $H^1(\mathbf{R}^\pm)$ norm, respectively. The notation $[\cdot]$ represents the jump of a function at $x=0$ or $y=0$ when no confusion arises. \section{Approximate profiles} \setcounter{equation}{0} Introduce the following scaled variables \begin{equation} y=\frac{x}{\varepsilon},\quad \tau=\frac{t}{\varepsilon}, \label{scaling} \end{equation} and set \begin{equation*}\label{new-unknown} (v^\varepsilon,u^\varepsilon,\theta^\varepsilon)(t,x)=(v,u,\theta)(\tau,y). \end{equation*} Then the new unknown functions $(v,u,\theta)(\tau,y)$ satisfy the system \begin{equation}\label{NS1} \left\{ \begin{array}{l} \displaystyle v_\tau-u_y=0,\\ \displaystyle u_\tau+p_y=(\frac{u_y}{v})_y,\\ \displaystyle \frac{R}{\gamma-1}\theta_\tau+pu_y=\nu(\frac{\theta_y}{v})_y+\frac{u^2_y}{v}, \end{array} \right. 
\end{equation} with the scaled heat conductivity $\nu=\frac{\kappa}{\varepsilon}$ in \eqref{viscosity} satisfying $$ \nu_0\leq \nu\leq \nu_1 \quad {\rm uniformly~in}~\varepsilon~{\rm as}~ \varepsilon\rightarrow0+,~{\rm for~some~positive~constants}~ \nu_0~ {\rm and}~ \nu_1. $$ Note that the Riemann solution $(\bar V,\bar U,\bar\Theta)(t,x)$ in \eqref{RS} is invariant under the scaling transformation \eqref{scaling}; thus, to prove the limit \eqref{limit} in Theorem \ref{limit-th}, it is sufficient to show the following limit \begin{equation} \lim_{\varepsilon\rightarrow 0}\sup_{(\tau,y)\in\Sigma^1_h}|(v,u,\theta)(\tau,y)-(\bar V,\bar U,\bar\Theta)(\tau,y)|=0,\quad \forall h>0, \label{limit-1} \end{equation} where $\Sigma_h^1$ is the corresponding region of $\Sigma_h$ in the new coordinates $(\tau,y)$ defined by \begin{equation}\label{sigma-1} \Sigma^1_h=\Big\{(\tau,y)|\tau\geq \frac h\varepsilon, \frac{|y|}{\sqrt{1+\tau}}\geq \frac {h}{\varepsilon^{\f12-\alpha}}, 0\leq \alpha<\f12\Big\}. \end{equation} Now we study the Navier-Stokes equations \eqref{NS1}. The wave profiles corresponding to \eqref{CD} and \eqref{Ri} can be defined approximately as follows. We start with the viscous contact wave corresponding to \eqref{CD}. \subsection{Viscous contact wave} If $(v_-,u_-,\theta_-)\in CD(v_+,u_+,\theta_+)$, i.e., $$ u_-=u_+,~p_-=p_+,~v_-\neq v_+, $$ then the Riemann problem, that is, the Euler system \eqref{euler} with Riemann initial data $$ (v,u,\theta)(\tau=0,y)=\left\{ \begin{array}{ll} (v_-,u_-,\theta_-),\qquad & y<0,\\ (v_+,u_+,\theta_+),\qquad & y>0, \end{array} \right. $$ admits a single contact discontinuity solution \begin{equation*} (v^{cd},u^{cd},\theta^{cd})(\tau,y)=\left\{ \begin{array}{ll} (v_-,u_+,\theta_-),\qquad & y<u_+\tau,~ \tau>0,\\ (v_+,u_+,\theta_+),\qquad & y>u_+\tau,~ \tau>0. \end{array} \right. \end{equation*} As in \cite{HMX}, the viscous version of the above contact discontinuity, called the viscous contact wave $(V^{CD},U^{CD},\Theta^{CD})(\tau,y)$, can be defined as follows. Since it is expected that $$ P^{CD}\approx p_+=p_-, \quad {\rm and}\quad |U^{CD}-u_+|\ll1, $$ the leading order of the energy equation $\eqref{NS1}_3$ is $$ \frac{R}{\gamma-1}\Theta_\tau+p_+U_y=\nu(\frac{\Theta_y}{V})_y. $$ Then, similarly to \cite{HLM} or \cite{HWY}, one can derive the following nonlinear diffusion equation $$ \Theta_\tau=a \Big(\frac{\Theta_y}{\Theta}\Big)_y,\quad \Theta(\tau,\pm\infty)=\theta_\pm,\quad a=\frac{\nu p_+(\gamma-1)}{R^2\gamma}. $$ The above diffusion equation has a unique self-similar solution $\hat\Theta(\tau,y)=\hat\Theta(\frac{y}{\sqrt{1+\tau}})$. Thus, the viscous contact wave $(V^{CD},U^{CD},\Theta^{CD})(\tau,y)$ can be defined by \begin{equation} \begin{array}{ll} \displaystyle V^{CD}(\tau,y)=\frac{R\hat\Theta(\tau,y)}{p_+},\\[4mm] \displaystyle U^{CD}(\tau,y)=u_+ +\frac{\nu(\gamma-1)}{R\gamma}\frac{\hat\Theta_{y}(\tau,y)}{\hat\Theta(\tau,y)},\\[5mm] \displaystyle \Theta^{CD}(\tau,y)=\hat\Theta(\tau,y)+\frac{R\gamma-\nu(\gamma-1)}{\gamma p_+}\hat\Theta_\tau. \end{array} \label{Viscous-CD} \end{equation} Here, it is straightforward to check that the viscous contact wave defined in \eqref{Viscous-CD} satisfies \begin{equation} |\hat\Theta-\theta_\pm|+(1+\tau)^{\f12}|\hat\Theta_y|+(1+\tau)|\hat\Theta_{yy}| =O(1)\delta^{CD} e^{-\frac{c_0y^2}{1+\tau}}, \quad\mbox{as }|y|\rightarrow+\infty , \label{CD-P} \end{equation} where $\delta^{CD}=|\theta_+-\theta_-|$ represents the strength of the viscous contact wave and $c_0$ is a positive constant. 
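For the reader's convenience, we sketch the standard self-similarity reduction behind $\hat\Theta$; the computation below follows \cite{HLM} and is recorded here only as a consistency check. Substituting the ansatz $\hat\Theta(\tau,y)=\hat\Theta(\xi)$ with $\xi=\frac{y}{\sqrt{1+\tau}}$ into the nonlinear diffusion equation, one computes $\hat\Theta_\tau=-\frac{\xi}{2(1+\tau)}\hat\Theta'(\xi)$ and $a\big(\frac{\hat\Theta_y}{\hat\Theta}\big)_y=\frac{a}{1+\tau}\big(\frac{\hat\Theta'(\xi)}{\hat\Theta(\xi)}\big)'$, so that $\hat\Theta(\xi)$ solves the two-point boundary value problem $$ -\frac{\xi}{2}\,\hat\Theta'(\xi)=a\Big(\frac{\hat\Theta'(\xi)}{\hat\Theta(\xi)}\Big)',\qquad \hat\Theta(\pm\infty)=\theta_\pm, $$ whose unique monotone solution yields the decay properties stated in \eqref{CD-P}. 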
Note that in \eqref{Viscous-CD}, the higher order term $\frac{R\gamma-\nu(\gamma-1)}{\gamma p_+}\hat\Theta_\tau$ is introduced in $\Theta^{CD}(\tau,y)$ to make the viscous contact wave $(V^{CD},U^{CD},\Theta^{CD})(\tau,y)$ satisfy the momentum equation exactly. Correspondingly, $(V^{CD},U^{CD},\Theta^{CD})(\tau,y)$ satisfies the system \begin{equation} \left\{ \begin{array}{l} \displaystyle V^{CD}_{\tau}-U^{CD}_{y}=0,\\[2mm] \displaystyle U^{CD}_{\tau}+P^{CD}_{y}=\Big(\frac{U^{CD}_{y}}{V^{CD}}\Big)_y,\\[4mm] \displaystyle \frac{R}{\gamma-1}\Theta^{CD}_{\tau}+P^{CD}U^{CD}_{y}=\nu\Big(\frac{\Theta^{CD}_{y}}{V^{CD}}\Big)_y+\frac{(U^{CD}_{y})^2}{V^{CD}}+Q^{CD}, \end{array} \right. \label{CD-system} \end{equation} where $\displaystyle P^{CD}=\frac{R\Theta^{CD}}{V^{CD}}$ and the error term $Q^{CD}$ satisfies \begin{equation} \displaystyle Q^{CD} =O(1)\delta^{CD}(1+\tau)^{-2}e^{-\frac{c_0y^2}{1+\tau}},\qquad {\rm as}~~|y|\rightarrow +\infty, \label{Q-CD} \end{equation} for some positive constant $c_0$. \subsection{Approximate rarefaction waves} We now turn to the approximate rarefaction waves corresponding to \eqref{Ri}. Since there is no exact rarefaction wave profile for the Navier-Stokes equations, the following approximate rarefaction wave profile, which satisfies the Euler equations, is motivated by \cite{X}. For completeness of presentation, we include its definition and properties in this subsection. If $(v_-, u_-, \theta_-) \in R_i (v_+, u_+, \theta_+), (i=1,3)$, then there exists an $i$-rarefaction wave $(v^{r_i}, u^{r_i}, \theta^{r_i})(y/\tau)$ which is a global solution of the following Riemann problem: \begin{eqnarray} \left\{ \begin{array}{l} \displaystyle v_{\tau}- u_{y}= 0,\\[1mm] \displaystyle u_{\tau} + p_{y}(v, \theta) = 0 , \\[2mm] \displaystyle \frac{R}{\gamma-1}\theta_\tau + p(v, \theta) u_y =0,\\[1mm] \displaystyle (v, u, \theta)(0,y)=\left\{ \begin{array}{l} \displaystyle (v_-, u_-, \theta_-), ~~~~ y< 0 ,\\ \displaystyle (v_+, u_+, \theta_+),~~~~ y> 0 . \end{array} \right. \end{array} \right.\label{rarefaction} \end{eqnarray} Consider the following inviscid Burgers equation with Riemann data: \begin{equation} \left\{ \begin{array}{l} w_\tau+ww_y=0,\\[2mm] w(\tau=0,y)=\left\{ \begin{array}{ll} w_-,&y<0,\\ w_+,&y>0. \end{array} \right. \end{array} \right.\label{Burgers} \end{equation} If $w_-<w_+$, then the Riemann problem \eqref{Burgers} admits a rarefaction wave solution \begin{equation} w^r(\tau,y)=w^r(\frac y\tau)=\left\{ \begin{array}{ll} w_-,&\frac y\tau\leq w_-,\\[1mm] \frac y\tau,&w_-\leq \frac y\tau\leq w_+,\\[1mm] w_+,&\frac y\tau\geq w_+. \end{array} \right.\label{B-R} \end{equation} Thus, the Riemann solution in \eqref{rarefaction} can be expressed explicitly through the above rarefaction wave \eqref{B-R} to the Burgers equation, that is, \begin{eqnarray} \left\{ \begin{array}{l} \displaystyle s^{r_i}(\tau,y)=s(v^{r_i}(\tau,y),\theta^{r_i}(\tau,y))=s_+, \\[2mm] \displaystyle w_\pm=\lambda_{i\pm}:=\lambda_i(v_\pm,\theta_\pm), \\[1mm] \displaystyle w^r(\frac y\tau)= \lambda_i(v^{r_i}(\tau,y),s_+), \\[1mm] \displaystyle u^{r_i}(\tau,y)=u_+-\int^{v^{r_i}(\tau,y)}_{v_+} \lambda_i(v,s_+) dv. 
\end{array} \right.\label{AR-NS-1} \end{eqnarray} In order to construct the approximate rarefaction wave $(V^{R_i}, U^{R_i}, \Theta^{R_i})(\tau,y)$ corresponding to \eqref{Ri}, we first consider the following approximate rarefaction wave to the Burgers equation: \begin{eqnarray} \left\{ \begin{array}{l} \displaystyle w_{\tau}+ww_{y}=0,\\ \displaystyle w( 0,y )=w_0(y)=\frac{w_++w_-}{2}+\frac{w_+-w_-}{2}\tanh y. \end{array} \right.\label{AB} \end{eqnarray} Note that the solution $w^R(\tau,y)$ of the problem \eqref{AB} is given by $$ w^R(\tau,y)=w_0(x_0(\tau,y)),\qquad y=x_0(\tau,y)+w_0(x_0(\tau,y))\tau. $$ The solution $w^R(\tau,y)$ has the following properties, whose proofs can be found in \cite{MN1,X}: \vskip 2mm \begin{lemma}\label{lemma-R} Let $w_-<w_+$, then $\eqref{AB}$ has a unique smooth solution $w^R(\tau,y)$ satisfying \begin{enumerate} \item[(1)] $w_-< w^R(\tau,y)<w_+,~(w^R)_y(\tau,y)> 0 $; \item[(2)] For any $1\leq p\leq +\infty$, there exists a constant $C$ such that $$ \begin{array}{ll} \| \frac{\partial}{\partial y}w^R(\tau,\cdot)\|_{L^p(\mathbf{R})}\leq C\min\big{\{}(w_+-w_-),~ (w_+-w_-)^{1/p}\tau^{-1+1/p}\big{\}}, \\[2mm] \| \frac{\partial^2}{\partial y^2}w^R(\tau,\cdot)\|_{L^p(\mathbf{R})}\leq C\min\big{\{}(w_+-w_-),~ \tau^{-1}\big{\}}; \end{array} $$ \item[(3)] If $ y-w_-\tau<0$, then $$ \begin{array}{l} |w^R(\tau,y)-w_-|\leq (w_+-w_-)e^{-2|y-w_-\tau|},\\[2mm] |\frac{\partial}{\partial y}w^R(\tau,y)|\leq 2(w_+-w_-)e^{-2|y-w_-\tau|}; \end{array} $$ If $ y-w_+\tau> 0$, then $$ \begin{array}{l} |w^R(\tau,y)-w_+|\leq (w_+-w_-)e^{-2|y-w_+\tau|},\\[2mm] |\frac{\partial}{\partial y}w^R(\tau,y)|\leq 2(w_+-w_-)e^{-2|y-w_+\tau|}; \end{array} $$ \item[(4)]$\sup\limits_{y\in\mathbf{R}} |w^R(\tau,y)-w^r(\frac y\tau)|\leq \min\big\{w_+-w_-,\f1{\tau}\ln(1+\tau)\big\}$. \end{enumerate} \end{lemma} Then, corresponding to \eqref{AR-NS-1}, the approximate rarefaction wave profile denoted by $(V^{R_i}, U^{R_i}, \Theta^{R_i}) (\tau,y)~(i=1,3)$ to \eqref{Ri} can be defined by \begin{eqnarray} \left\{ \begin{array}{l} \displaystyle S^{R_i}(\tau,y)=s(V^{R_i}(\tau,y),\Theta^{R_i}(\tau,y))=s_+,\\[1mm] \displaystyle w_\pm=\lambda_{i\pm}:=\lambda_i(v_\pm,\theta_\pm), \\[2mm] \displaystyle w^R(1+\tau,y)= \lambda_i(V^{R_i}(\tau,y),s_+),\\[1mm] \displaystyle U^{R_i}(\tau,y)=u_+-\int^{V^{R_i}(\tau,y)}_{v_+} \lambda_i(v,s_+) dv. \end{array} \right.\label{AR-NS} \end{eqnarray} Note that $(V^{R_i}, U^{R_i}, \Theta^{R_i})(\tau,y)$ defined above satisfies \begin{equation} \left\{ \begin{array}{ll} \displaystyle V^{R_i}_\tau-U^{R_i} _{y} = 0, \\[1mm] \displaystyle U^{R_i}_\tau+P^{R_i}_y =0,\\[2mm] \displaystyle \frac{R}{\gamma-1} \Theta^{R_i}_\tau + P^{R_i} U^{R_i}_y =0, \end{array} \right.\label{R-system} \end{equation} where $ P^{R_i}=p( V^{R_i}, \Theta ^{R_i})$. By virtue of Lemma \ref{lemma-R}, the properties of the approximate rarefaction waves $(V^{R_i}, U^{R_i}, \Theta^{R_i})(\tau,y)$ can be summarized as follows. 
\begin{lemma}\label{lemma-R-NS} The approximate rarefaction waves $(V^{R_i}, U^{R_i}, \Theta^{R_i})(\tau,y)~(i=1,3)$ constructed in \eqref{AR-NS} have the following properties: \begin{enumerate} \item[(1)] $U^{R_i}_y(\tau,y)>0$ for $y\in \mathbf{R}$, $\tau>0$; \item[(2)] For any $1\leq p\leq +\infty,$ the following estimates hold: $$ \begin{array}{ll} \|(V^{R_i},U^{R_i}, \Theta^{R_i})_y\|_{L^p(dy)} \leq C\min\big{\{}\delta^{R_i},~ (\delta^{R_i})^{1/p}(1+\tau)^{-1+1/p}\big{\}},\\[2mm] \|(V^{R_i},U^{R_i}, \Theta^{R_i})_{yy}\|_{L^p(dy)} \leq C\min\big{\{}\delta^{R_i},~ (1+\tau)^{-1}\big{\}},\\ \end{array} $$ where $\delta^{R_i}=|(v_+-v_-, u_+-u_-, \theta_+-\theta_-)|$ is the $i$-rarefaction wave strength and the positive constant $C$ is independent of $\tau$, but may depend on $p$ and the wave strength; \item[(3)] If $y\geq \lambda_{1+}(1+\tau)$, then $$ \begin{array}{l} |(V^{R_1},U^{R_1},\Theta^{R_1})(\tau,y)-(v_+,u_+,\theta_+)|\leq C\delta^{R_1}e^{-2|y-\lambda_{1+}(1+\tau)|},\\[2mm] |(V^{R_1},U^{R_1},\Theta^{R_1})_y(\tau,y)|\leq C \delta^{R_1}e^{-2|y-\lambda_{1+}(1+\tau)|}; \end{array} $$ If $y\leq \lambda_{3-}(1+\tau)$, then $$ \begin{array}{l} |(V^{R_3},U^{R_3},\Theta^{R_3})(\tau,y)-(v_-,u_-,\theta_-)|\leq C\delta^{R_3}e^{-2|y-\lambda_{3-}(1+\tau)|},\\[2mm] |(V^{R_3},U^{R_3},\Theta^{R_3})_y(\tau,y)|\leq C\delta^{R_3} e^{-2|y-\lambda_{3-}(1+\tau)|}; \end{array} $$ \item[(4)] There exists a positive constant $C$, such that for all $\tau>0,$ $$ \sup_{y\in\mathbf{R}}|(V^{R_i},U^{R_i}, \Theta^{R_i})(\tau,y)-(v^{r_i},u^{r_i}, \theta^{r_i})(\frac y\tau)|\leq \frac{C}{1+\tau}\ln(1+\tau). $$ \end{enumerate} \end{lemma} \subsection{Superposition of rarefaction waves and contact discontinuity} Corresponding to \eqref{RS}, the approximate wave pattern $(V,U,\Theta)(\tau,y)$ of the compressible Navier-Stokes equations \eqref{NS1} can be defined by \begin{eqnarray} \left(\begin{array}{cc} V\\ U \\ \Theta \end{array} \right)(\tau,y)= \left(\begin{array}{cc} V^{R_1}+ V^{CD}+ V^{R_3}\\ U^{R_1}+ U^{CD}+ U^{R_3} \\ \Theta^{R_1}+ \Theta^{CD}+ \Theta^{R_3} \end{array} \right)(\tau,y) -\left(\begin{array}{cc} v_*+v^*\\ u_*+u^*\\ \theta_*+\theta^* \end{array} \right) ,\label{sup-wave} \end{eqnarray} where $(V^{R_1}, U^{R_1}, \Theta^{R_1} )(\tau,y)$ is the approximate 1-rarefaction wave defined in \eqref{AR-NS} with the right state $(v_+, u_+, \theta_+)$ replaced by $(v_*, u_*, \theta_* )$, $(V^{CD}, U^{CD}, \Theta^{CD} )(\tau,y)$ is the viscous contact wave defined in \eqref{Viscous-CD} with the states $(v_-, u_-, \theta_-)$ and $(v_+, u_+, \theta_+)$ replaced by $(v_*, u_*, \theta_* )$ and $(v^*, u^*, \theta^* )$ respectively, and $(V^{R_3}, U^{R_3}, \Theta^{R_3} )(\tau,y)$ is the approximate 3-rarefaction wave defined in \eqref{AR-NS} with the left state $(v_-, u_-, \theta_-)$ replaced by $(v^*, u^*, \theta^* )$. Thus, from the properties of the viscous contact wave in \eqref{CD-P} and of the approximate rarefaction waves in Lemma \ref{lemma-R-NS}, we have the following relation between the approximate wave pattern $(V,U,\Theta)(\tau,y)$ and the exact inviscid wave pattern $(\bar V,\bar U,\bar\Theta)(\tau,y)$ of the Euler equations \begin{equation} \displaystyle |(V,U,\Theta)(\tau,y)-(\bar V,\bar U,\bar\Theta)(\tau,y)| \displaystyle \leq \frac{C}{1+\tau}\ln(1+\tau)+C\delta^{CD}e^{-\frac{cy^2}{1+\tau}}. 
\label{profile-s} \end{equation} Hence, to prove the zero dissipation limit \eqref{limit-1} on the set $\Sigma_h^1$ defined in \eqref{sigma-1}, it is sufficient to show the following time-asymptotic behavior of the solution to \eqref{NS1} around the approximate wave profile \eqref{sup-wave}, i.e., \begin{equation}\label{con} \lim_{\tau\rightarrow+\infty}\|(v,u,\theta)(\tau,\cdot)-(V,U,\Theta)(\tau,\cdot)\|_{L^\infty}=0. \end{equation} First, by \eqref{CD-system} and \eqref{R-system}, the superposition wave profile $(V,U,\Theta)(\tau,y)$ defined in \eqref{sup-wave} satisfies the following system \begin{equation*} \left\{ \begin{array}{ll} \displaystyle V_\tau-U _{y} = 0, \\ \displaystyle U_\tau+P_y = (\frac{U_{y}}{V}) _{y}+Q_1,\\ \displaystyle \frac{R}{\gamma-1} \Theta _\tau+P U_y =\nu( \frac{\Theta_{y}}{ V})_y+ \frac{U_y^2}{ V} +Q_2, \end{array} \right. \end{equation*} where $ P =p( V , \Theta )$ and $$\begin{array}{ll} \displaystyle Q_1&\displaystyle=(P-P^{R_1}-P^{CD}-P^{R_3})_y-\left(\frac{U_y}{V}-\frac{U^{CD}_y}{V^{CD}}\right)_y,\\ \displaystyle Q_2&\displaystyle= (PU_y-P^{R_1}U^{R_1}_y-P^{CD}U^{CD}_y-P^{R_3}U^{R_3}_y) -\nu\left(\frac{\Theta_y}{V}-\frac{\Theta^{CD}_y}{V^{CD}}\right)_y\\ &\displaystyle-\left(\frac{U_y^2}{ V}-\frac{(U^{CD}_y)^2}{ V^{CD}}\right)-Q^{CD}. \end{array}$$ A direct calculation shows that \begin{equation} \begin{array}{lll} \displaystyle Q_1&=&\displaystyle O(1)\Big\{|(V^{R_1}_y,\Theta^{R_1}_y)||(V^{CD}-v_*,\Theta^{CD}-\theta_*,V^{R_3}-v^*,\Theta^{R_3}-\theta^*)|\\[2mm] &&\displaystyle +|(V^{R_3}_y,\Theta^{R_3}_y)||(V^{R_1}-v_*,\Theta^{R_1}-\theta_*,V^{CD}-v^*,\Theta^{CD}-\theta^*)|\\[2mm] &&\displaystyle+|(V^{CD}_y,\Theta^{CD}_y,U^{CD}_{y})||(V^{R_1}-v_*,\Theta^{R_1}-\theta_*,V^{R_3}-v^*,\Theta^{R_3}-\theta^*)|\\[2mm] && \displaystyle +|(U^{CD}_y,V^{CD}_y)||(U^{R_1}_y,V^{R_1}_y,U^{R_3}_y,V^{R_3}_y)| +|(U^{R_1}_y,V^{R_1}_y)||(U^{R_3}_y,V^{R_3}_y)|\Big\}\\[2mm] &&\displaystyle+O(1)\Big\{|U^{R_1}_{yy}|+|U^{R_3}_{yy}|+|U^{R_1}_y||V^{R_1}_y|+|U^{R_3}_y||V^{R_3}_y|\Big\}\\[2mm] & :=&\displaystyle Q_{11}+Q_{12}. \end{array} \label{Q1} \end{equation} Similarly, we have \begin{equation} \begin{array}{lll} \displaystyle Q_2&=&\displaystyle O(1)\Big\{|U^{R_1}_y||(V^{CD}-v_*,\Theta^{CD}-\theta_*,V^{R_3}-v^*,\Theta^{R_3}-\theta^*)|\\[2mm] &&\displaystyle +|U^{R_3}_y||(V^{R_1}-v_*,\Theta^{R_1}-\theta_*,V^{CD}-v^*,\Theta^{CD}-\theta^*)|\\[2mm] &&\displaystyle+|(U^{CD}_{y},V^{CD}_y,\Theta^{CD}_y)||(V^{R_1}-v_*,\Theta^{R_1}-\theta_*,V^{R_3}-v^*,\Theta^{R_3}-\theta^*)|\\[2mm] &&\displaystyle +|(U^{CD}_y,V^{CD}_y,\Theta^{CD}_y)||(U^{R_1}_y,V^{R_1}_y,\Theta^{R_1}_y,U^{R_3}_y,V^{R_3}_y,\Theta^{R_3}_y)| \\[2mm] &&\displaystyle +|(U^{R_1}_y,V^{R_1}_y,\Theta^{R_1}_y)||(U^{R_3}_y,V^{R_3}_y,\Theta^{R_3}_y)|\Big\}\\[2mm] &&\displaystyle +O(1)\Big\{|\Theta^{R_1}_{yy}|+|\Theta^{R_3}_{yy}|+|(U^{R_1}_y,V^{R_1}_y, \Theta^{R_1}_y,U^{R_3}_y,V^{R_3}_y,\Theta^{R_3}_y)|^2\Big\}+|Q^{CD}|\\[2mm] & :=&\displaystyle Q_{21}+Q_{22}+|Q^{CD}|. \end{array} \label{Q2} \end{equation} Here $Q_{11}$ and $Q_{21}$ represent the wave interaction terms coming from the wave patterns in different families, $Q_{12}$ and $Q_{22}$ stand for the error terms due to the inviscid approximate rarefaction wave profiles, and $Q^{CD}$ is the error term defined in \eqref{Q-CD} due to the viscous contact wave. 
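For later use, we also record a direct consequence of Lemma \ref{lemma-R-NS}(2) for the error terms $Q_{12}$ and $Q_{22}$; this is a standard interpolation, and the exponent $\f18$ below is chosen only for convenience (any interpolation of the two bounds in Lemma \ref{lemma-R-NS}(2) would do). Since $\|(U^{R_i}_{yy},\Theta^{R_i}_{yy})\|_{L^1}\leq C\min\{\delta^{R_i},(1+\tau)^{-1}\}$ and $\|(V^{R_i}_y,U^{R_i}_y,\Theta^{R_i}_y)\|^2\leq C\min\{(\delta^{R_i})^2,\delta^{R_i}(1+\tau)^{-1}\}$, the elementary inequality $\min\{a,b\}\leq a^{\f18}b^{\f78}$ yields $$ \|(Q_{12},Q_{22})(\tau,\cdot)\|_{L^1(\mathbf{R})}\leq C\min\big\{\delta^{R_1}+\delta^{R_3},(1+\tau)^{-1}\big\} \leq C(\delta^{R_1}+\delta^{R_3})^{\f18}(1+\tau)^{-\f78}, $$ which will be used repeatedly in the energy estimates of Section 3. 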
In fact, one can estimate the interaction terms $Q_{11}$ and $Q_{21}$ by dividing the whole domain $\Omega=\{(\tau,y)|(\tau,y)\in\mathbf{R}\times\mathbf{R}\}$ into three regions: \begin{eqnarray*} && \Omega_-=\{(\tau,y)\; |\; 2y\leq \lambda_{1*}(1+\tau)\}, \\ && \Omega_{CD}=\{(\tau,y)\; |\; \lambda_{1*}(1+\tau)<2y<\lambda_3^*(1+\tau)\}, \\ && \Omega_+=\{(\tau,y)\; |\; 2y\geq \lambda_3^*(1+\tau)\}, \end{eqnarray*} where $\lambda_{1*}=\lambda_1(v_*,\theta_*)$ and $\lambda_3^*=\lambda_3(v^*,\theta^*)$. Then, in each region the following estimates follow from \eqref{CD-P} and Lemma \ref{lemma-R-NS}. \begin{itemize} \item In $\Omega_-$, \begin{eqnarray*} && |(V^{R_3}-v^*, V^{R_3}_y)| = O(1)\delta^{R_3}e^{-2\{|y|+|\lambda_3^*|(1+\tau)\}}, \\ && |(V^{CD}-v_*,V^{CD}-v^*,V^{CD}_y)| = O(1)\delta^{CD}e^{-\frac{C\{|\lambda_{1*}|(1+\tau)\}^2}{1+\tau}} =O(1)\delta^{CD}e^{-C(1+\tau)}; \end{eqnarray*} \item In $\Omega_{CD}$, \begin{eqnarray*} && |(V^{R_1}-v_*, V^{R_1}_y) | =O(1)\delta^{R_1} e^{-2\{|y|+|\lambda_{1*}|(1+\tau)\}}, \\ && |(V^{R_3}-v^*, V^{R_3}_y)|=O(1)\delta^{R_3} e^{-2\{|y|+|\lambda_3^*|(1+\tau)\}}; \end{eqnarray*} \item In $\Omega_+$, \begin{eqnarray*} && |(V^{R_1}-v_*, V^{R_1}_y)|=O(1)\delta^{R_1} e^{-2\{|y|+|\lambda_{1*}|(1+\tau)\}}, \\ && |(V^{CD}-v_*, V^{CD}-v^*, V^{CD}_y)|=O(1)\delta^{CD} e^{-\frac{C\{|\lambda_3^*|(1+\tau)\}^2}{1+\tau}} =O(1)\delta^{CD} e^{-C(1+\tau)}. \end{eqnarray*} \end{itemize} Keep in mind that each individual wave strength is controlled by the total wave strength by \eqref{CD} and \eqref{Ri}, that is, $$ \delta^{R_1}+\delta^{R_3}+\delta^{CD}\leq C\delta. $$ Hence, in summary, it follows from \eqref{Q1}, \eqref{Q2} and the above arguments that \begin{equation*} |(Q_{11},Q_{21})|=O(1)\delta e^{-C\{|y|+(1+\tau)\}}, \end{equation*} for some positive constant $C$ independent of $\tau$ and $y$. \setcounter{equation}{0} \section{Proof of the main result} In this section, we shall prove the main result, Theorem \ref{limit-th}. By virtue of the arguments in Section 2.3, it is sufficient to show \eqref{con} besides the regularity of the solution. To this end, we first reformulate the problem. \subsection{Reformulation of the problem} Set the perturbation around the wave profile $(V,U,\Theta)(\tau,y)$ by \begin{equation*} (\phi,\psi,\zeta)(\tau,y)=(v,u,\theta)(\tau,y)-(V,U,\Theta)(\tau,y). \end{equation*} Then, after a straightforward calculation, the perturbation $(\phi,\psi,\zeta)(\tau,y)$ satisfies the system \begin{equation} \left\{ \begin{array}{ll} \displaystyle \phi_{\tau}-\psi_{y}=0, \\ \displaystyle \psi_{\tau}+(p-P)_{y}=(\frac {u_{y}}v-\frac{U_y}{V})_{y}- Q_1,\\[2mm] \displaystyle \frac{R}{\gamma-1}\zeta_{\tau}+ (pu_y-PU_y)=\nu(\frac{\theta_{y}}{v}-\frac{\Theta_y}{V})_y+(\frac{u_y^2}{v}-\frac{U^2_{y}}{V})-Q_2,\\[4mm] \displaystyle (\phi,\psi,\zeta)(\tau=0,y)=(\phi_0,\psi_0,\zeta_0)(y), \end{array} \right. \label{P} \end{equation} where the initial data $(\phi_0,\psi_0,\zeta_0)(y)$ and its derivatives are sufficiently smooth away from, and up to, $y=0$, and $$ (\phi_0,\psi_0,\zeta_0)(y)\in L^2(\mathbf{R}), \quad \phi_{0y}\in L^2({\bf R}^\pm). $$ For simplicity, denote \begin{equation*} \mathcal{N}_0:=\|(\phi_0,\psi_0,\zeta_0)\|^2+\|\phi_{0y}\Xparallel-^2. 
\end{equation*} In order to prove \eqref{con}, it is easy to see that it suffices to show the following: \begin{proposition}\label{prop1} There exists a positive constant $\delta_0$, such that if the wave strength $\delta$ and the initial data satisfy $$ \delta+\mathcal{N}_0\leq \delta_0, $$ then the problem \eqref{P} admits a unique global solution $(\phi,\psi,\zeta)(\tau,y)$ satisfying \begin{itemize} \item[(i)] There exists a positive constant $C$ independent of $\tau$, such that \begin{equation*} \sup_{\tau\geq0}\Big(\|(\phi,\psi,\zeta)(\tau,\cdot)\|^2+\|\phi_y(\tau,\cdot)\Xparallel-^2\Big) +\int_0^{+\infty}\|(\phi_y,\psi_y,\zeta_y)(\tau,\cdot)\Xparallel-^2d\tau \leq C(\mathcal{N}_0+\delta^{\f14}). \end{equation*} \item[(ii)] For any $\tau_0>0$, there exists a positive constant $C=C(\tau_0)$, such that \begin{equation*} \sup_{\tau\geq\tau_0}\|(\psi_y,\zeta_y,\psi_\tau,\zeta_\tau)(\tau,\cdot)\Xparallel-^2 +\int_{\tau_0}^{+\infty}\|(\psi_{yy},\zeta_{yy},\psi_{y\tau},\zeta_{y\tau})(\tau,\cdot)\Xparallel-^2d\tau \leq C(\tau_0)(\mathcal{N}_0+\delta^{\f14}). \end{equation*} \item[(iii)] The jump of $\phi(\tau,y)$ at $y=0$ admits the bound \begin{equation} |[\phi](\tau)|\leq Ce^{-c\tau},\label{JE} \end{equation} where the positive constants $C$ and $c$ are independent of $\tau\in(0,+\infty)$. \end{itemize} \end{proposition} Assume that Proposition \ref{prop1} holds. Then, for any $\tau_0>0$, one has \begin{equation*} \int_{\tau_0}^{+\infty}\Big(\|(\phi_y,\psi_y,\zeta_y)\Xparallel-^2+|\frac{d}{d\tau}\|(\phi_{y},\psi_{y},\zeta_{y})\Xparallel-^2|\Big)d\tau <+\infty , \end{equation*} whence, $$ \lim_{\tau\rightarrow\infty}\|(\phi_y,\psi_y,\zeta_y)\Xparallel-^2=0, $$ which, together with Proposition \ref{prop1} and Sobolev's inequality, implies that $$ \lim_{\tau\rightarrow\infty}\sup_{y\neq0}|(\phi,\psi,\zeta)(\tau,y)|^2\leq C\lim_{\tau\rightarrow\infty}\|(\phi,\psi,\zeta)\|\|(\phi_y,\psi_y,\zeta_y)\Xparallel-\ \leq C\lim_{\tau\rightarrow\infty}\|(\phi_y,\psi_y,\zeta_y)\Xparallel-\ =0. $$ The above inequality combined with \eqref{JE} gives \eqref{con}. Thus, the main result Theorem \ref{limit-th} follows from \eqref{con} and \eqref{profile-s}. Denote \begin{equation*} \begin{array}{l} \displaystyle N(\tau_*,\tau^*)=\sup_{\tau\in[\tau_*,\tau^*]}\Big\{\|(\phi,\psi,\zeta)(\tau,\cdot)\|^2 +\|(\phi_y,\psi_y,\zeta_y)(\tau,\cdot)\Xparallel-^2+\|(\psi_\tau,\zeta_\tau)(\tau,\cdot)\|^2\Big\},\\[4mm] \displaystyle N(\tau_*)=N(\tau_*,\tau_*), \end{array} \end{equation*} and define the solution space by \begin{equation*} X[\tau_*,\tau^*]=\left\{(\phi,\psi,\zeta)\left| \begin{array}{l} \displaystyle (\phi,\psi,\zeta)(\tau,y)\in C([\tau_*,\tau^*];H^1({\bf R}^\pm)),\\[1mm] \displaystyle (\psi_y,\zeta_y)\in L^2(\tau_*,\tau^*; H^1({\bf R}^\pm)),~ \phi_y\in L^2(\tau_*,\tau^*;L^2({\bf R}^\pm)),\\\displaystyle (\psi_\tau,\zeta_\tau)\in L^\infty(\tau_*,\tau^*; L^2({\bf R}^\pm))\cap L^2(\tau_*,\tau^*;H^1({\bf R}^\pm)). \end{array} \right.\right\} \end{equation*} Since the local existence of solutions to \eqref{P} is proved in \cite{H3}, we just state it and omit its proof for brevity. \begin{proposition}\label{local-e}(Local existence) Suppose that $\mathcal{N}_0$ and the wave strength $\delta$ are suitably small such that $\inf v_0$ and $\inf \theta_0$ are positive. 
Then there exists a positive time $\tau_0=\tau_0(N(0),\delta)>0$, such that the Cauchy problem \eqref{P} admits a unique solution $(\phi,\psi,\zeta)(\tau,y)\in X[0,\tau_0]$ satisfying \begin{equation*} A(\tau_0)+B(\tau_0)+F(\tau_0)\leq C(\mathcal{N}_0+\delta), \end{equation*} where \begin{eqnarray*} && A(\tau_0)=\displaystyle \sup_{0\leq \tau\leq \tau_0}\Big\{\|(\phi,\psi,\zeta)(\tau,\cdot)\|^2+\|\phi_y\Xparallel-^2\Big\}+\int_0^{\tau_0}\|(\psi_y,\zeta_y)\|^2 d\tau, \\ && B(\tau_0)=\displaystyle \sup_{0\leq \tau\leq \tau_0}\Big\{g(\tau)^\f12\|\psi_y\|^2+g(\tau)\|\phi_y\Xparallel-^2\Big\} +\int_0^{\tau_0}g(\tau)^{\f12+\vartheta}(\|\psi_\tau\|^2+\|(\frac{u_y}{v})_y\Xparallel-^2)d\tau \\ &&\displaystyle\qquad +\int_0^{\tau_0}g(\tau)(\|\psi_y^2\|^2+\|\theta_\tau\|^2+\|(\frac{\theta_y}{v})_y\Xparallel-^2)d\tau,\\[4mm] && F(\tau_0)=\displaystyle \sup_{0\leq \tau\leq \tau_0}\Big\{g(\tau)^{\f32+\vartheta}(\|\psi_\tau\|^2+\|(\frac{u_y}{v})_y\Xparallel-^2) +g(\tau)^3(\|\zeta_\tau\|^2+\|(\frac{\theta_y}{v})_y\Xparallel-^2)\Big\} \\ && \qquad \displaystyle +\int_0^{\tau_0}\big(g(\tau)^{\f32+\vartheta}\|\psi_{y\tau}\Xparallel-^2+g(\tau)^3\|\zeta_{y\tau}\Xparallel-^2\big)d\tau , \end{eqnarray*} with $g(\tau)=\tau\wedge1=\min\{\tau,1\}$ and $\vartheta\in (0,1)$. Moreover, $v,u,\theta$ have the same regularity as in Theorem \ref{limit-th}. Thus, $v,u_y,\theta_y$ have one-sided limits at $y=0$ and satisfy the jump conditions $$ \Big[p-\frac{u_y}{v}\Big]=\Big[\frac{\theta_y}{v}\Big]=0. $$ Finally, one has the following estimate on the jump at $y=0$, \begin{equation*} |[v](\tau)|\leq C\delta e^{-c\tau},\qquad \tau >0, \end{equation*} for some positive constants $C$ and $c$ independent of $\tau$. \end{proposition} Hence, in view of the local existence and the standard continuation process, we see that to prove Proposition \ref{prop1}, it suffices to show the following (uniform) a priori estimate. \begin{proposition}\label{priori} (A priori estimate) Suppose that the Cauchy problem \eqref{P} has a solution $(\phi,\psi,\zeta)(\tau,y)\in X[\tau_1,\tau_2]$. There exists a positive constant $\eta_1$, such that if \begin{equation}\label{priori-a} N(\tau_1,\tau_2)+\delta\leq \eta_1, \end{equation} then, \begin{equation} N(\tau_1,\tau_2)+\int_{\tau_1}^{\tau_2}\Big\{\|\phi_y(\tau,\cdot)\Xparallel-^2 +\|(\psi_y,\zeta_y)(\tau,\cdot)\Xparallel-_{1}^2+\|(\psi_{y\tau},\zeta_{y\tau})(\tau,\cdot)\Xparallel-^2\Big\}d\tau \leq C(N(\tau_1)+\delta^{\f14}), \label{p1} \end{equation} where the positive constant $C$ is independent of $\tau$. \end{proposition} \subsection{Energy estimates} In this subsection we will derive the a priori estimate given in Proposition \ref{priori}. Note that under the a priori assumption \eqref{priori-a}, if $\eta_1\ll1$, then it holds that $$ \inf_{[\tau_1,\tau_2]\times\mathbf{R}}\{(V+\phi, \Theta+\zeta)(\tau,y)\}\geq C_0 $$ for some positive constant $C_0$. 
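This lower bound follows from \eqref{priori-a} by a standard argument, which we sketch for the reader's convenience. By the Sobolev inequality on each half-line, $$ \|(\phi,\zeta)(\tau,\cdot)\|^2_{L^\infty}\leq C\|(\phi,\zeta)(\tau,\cdot)\|\,\|(\phi_y,\zeta_y)(\tau,\cdot)\Xparallel-\leq CN(\tau_1,\tau_2)\leq C\eta_1, $$ while the profiles $V$ and $\Theta$ are, by construction, uniformly bounded away from zero for small wave strength $\delta$; hence $v=V+\phi$ and $\theta=\Theta+\zeta$ admit a positive lower bound once $\eta_1$ is suitably small. 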
First, one has the following Lemma: \begin{lemma}\label{lemma1} Under the assumptions of Proposition \ref{priori}, there exists a constant $C>0$, such that for any $\tau\in [\tau_1,\tau_2]$, \begin{equation*} \begin{array}{ll} \displaystyle \|(\phi,\psi,\zeta,\phi_y)(\tau,\cdot)\Xparallel-^2+\int_{\tau_1}^\tau \Big\{\|\sqrt{(U^{R_1}_y,U^{R_3}_y)}(\phi,\zeta)\|^2+\|(\phi_y,\psi_y,\zeta_y)\Xparallel-^2\Big\} d\tau\\[3mm] \leq \displaystyle C\|(\phi,\psi,\zeta,\phi_{y})\Xparallel-^2(\tau_1) \displaystyle +C\int_{\tau_1}^\tau (1+\tau)^{-\f76}\|(\phi,\psi,\zeta)(\tau,\cdot)\|^2d\tau+C\delta^{\f14}\\[4mm] \displaystyle ~~+C\delta \int_{\tau_1}^\tau \Xint-_{\bf R} (1+\tau)^{-1}e^{-\frac{c_0y^2}{1+\tau}}|(\phi,\zeta)|^2dy d\tau. \end{array} \end{equation*} \end{lemma} \noindent{\bf Proof:} Let $$ \Phi(z)=z-1-\ln z. $$ Arguing similarly to \cite{HLM} or \cite{HWY}, one can get the following equality \begin{equation} \begin{array}{ll} & \displaystyle I_{1\tau}(\tau,y)+H _{1y}(\tau,y)+\frac{\Theta\psi_ y^2}{v\theta}+\nu\frac{\Theta\zeta _y^2}{v\theta^2} +P(U^{R_1}_y+U^{R_3}_y)\left(\Phi(\frac{\theta V}{v\Theta})+\gamma\Phi(\frac{v}{V})\right)\\[3mm] &\displaystyle =Q_3-Q_1\psi-Q_2\frac{\zeta}{\theta}, \end{array} \label{(3.13)} \end{equation} where \begin{equation*} I_1(\tau,y)=R\Theta\Phi(\frac{v}{V})+\frac{\psi^2}{2}+\frac{R\Theta}{\gamma-1}\Phi(\frac{\theta}{\Theta}), \label{I1} \end{equation*} \begin{equation} H_1(\tau,y)=(p-P)\psi-(\frac{u _y}{v}-\frac{U_ y}{V})\psi-\nu(\frac{\theta _y}{v}-\frac{\Theta _y}{V})\frac{\zeta}{\theta}, \label{H1} \end{equation} and \begin{equation} \begin{array}{ll} Q_3=&\displaystyle -PU^{CD}_{y}\left(\Phi(\frac{\theta V}{v\Theta})+\gamma\Phi(\frac{v}{V})\right)+\left(\nu(\frac{\Theta_y}{V})_y+\frac{U_y^2}{V} + Q_2\right)\Big\{(\gamma-1)\Phi(\frac{v}{V})\\[4mm] &\displaystyle+\Phi(\frac{\theta}{\Theta})-\frac{\zeta^2}{\theta\Theta}\Big\} - (\frac{1}{v}-\frac{1}{V})U_ y\psi_y +(\f1v-\f1V)U_y^2\frac{\zeta}{\theta}+2\frac{\zeta\psi_y U_y}{v\theta}+\nu \frac{\Theta_y\zeta_y\zeta}{v\theta^2}\\[4mm] &\displaystyle -\nu(\frac{1}{v}-\frac{1}{V})\frac{\Theta\Theta_y\zeta_y}{\theta^2}+\nu(\frac{1}{v}-\frac{1}{V})\frac{\zeta\Theta_y^2}{\theta^2} . \end{array} \label{Q3} \end{equation} Integration of the equality \eqref{(3.13)} with respect to $y$ and $\tau$ over ${\mathbf R}^\pm\times [\tau_1,\tau]$ yields that \begin{equation}\label{20} \begin{array}{ll} \displaystyle \int I_1(\tau,y)dy+\int_{\tau_1}^\tau\big[H_1\big](\tau)d\tau+\int_{\tau_1}^\tau\Xint-_{\bf R} \bigg(\frac{\Theta\psi_ y^2}{v\theta}+\nu\frac{\Theta\zeta _y^2}{v\theta^2}\bigg)dyd\tau\\[4mm] \qquad \displaystyle +\int_{\tau_1}^\tau\Xint-_{\bf R}P(U^{R_1}_y+U^{R_3}_y)\left(\Phi(\frac{\theta V}{v\Theta}) +\gamma\Phi(\frac{v}{V})\right)dyd\tau \\[4mm] \displaystyle=\int I_1(\tau_1,y)dy+\int_{\tau_1}^\tau\Xint-_{\bf R} \big(Q_3-Q_1\psi-Q_2\frac{\zeta}{\theta}\big)dyd\tau. \end{array} \end{equation} It is easy to observe that the jump of $H_1$ in \eqref{H1} across $y=0$ vanishes, i.e., \begin{equation*} \begin{array}{ll} \displaystyle \big[H_1\big](\tau)&\displaystyle=\big[(p-\frac{u_y}{v})\psi\big]-\big[(P-\frac{U_y}{V})\psi\big] -\nu\big[(\frac{\theta_y}{v}-\frac{\Theta_y}{V})\frac{\zeta}{\theta}\big]\\ &\displaystyle =\big[p-\frac{u_y}{v}\big]\psi(\tau,0)-\big[P-\frac{U_y}{V}\big]\psi(\tau,0) -\nu\Big(\big[\frac{\theta_y}{v}\big]-\big[\frac{\Theta_y}{V}\big]\Big)\frac{\zeta(\tau,0)}{\theta(\tau,0)}=0. 
\end{array} \end{equation*} Recalling that $$ \Phi(1)=\Phi^\prime(1)=0,\qquad \Phi^{\prime\prime}(z)=z^{-2}>0, $$ we see that there exists a positive constant $C$, such that if $z$ is near 1, then $$ C^{-1}(z-1)^2\leq \Phi(z)\leq C(z-1)^2. $$ Thus, under the a priori assumption \eqref{priori-a}, one gets \begin{equation} C^{-1}|\phi|^2\leq \Phi(\frac{v}{V})\leq C|\phi|^2,\qquad C^{-1}|\zeta|^2\leq \Phi(\frac{\theta}{\Theta})\leq C|\zeta|^2 \label{(3.17)} \end{equation} and \begin{equation} C^{-1}|(\phi,\zeta)|^2\leq\Phi(\frac{\theta V}{v\Theta})+\gamma\Phi(\frac{v}{V})\leq C|(\phi,\zeta)|^2. \label{(3.18)} \end{equation} Now it follows from \eqref{Q3}, \eqref{(3.17)}, \eqref{(3.18)} and Cauchy-Schwarz's inequality that \begin{equation} \begin{array}{ll} \displaystyle |Q_3|\leq&\displaystyle \frac{\Theta\psi_ y^2}{4v\theta}+\frac{\nu\Theta\zeta _y^2}{4v\theta^2}+C\Big\{(|\Theta^{CD}_y|^2,|\Theta^{CD}_{yy}|)+(|(V^{R_1}_{y},U^{R_1}_{y},\Theta^{R_1}_{y})|^2,|\Theta^{R_1}_{yy}|)\\[3mm] &\displaystyle +(|(V^{R_3}_{y},U^{R_3}_{y},\Theta^{R_3}_{y})|^2,|\Theta^{R_3}_{yy}|)+|Q_2|\Big\}(\phi^2+\zeta^2). \end{array} \label{(3.19)} \end{equation} By the properties of the viscous contact wave, one can obtain \begin{equation*} \int_{\tau_1}^\tau\Xint-_{\bf R} (|\Theta^{CD}_{y}|^2,|\Theta^{CD}_{yy}|)(\phi^2+\zeta^2)dyd\tau\leq C \delta \int_{\tau_1}^\tau\Xint-_{\bf R}(1+\tau)^{-1}e^{-\frac{c_0 y^2}{1+\tau}}|(\phi,\zeta)|^2dyd\tau , \end{equation*} while by the properties of the approximate rarefaction wave in Lemma \ref{lemma-R-NS}, we have that for $i=1,3,$ \begin{equation*} \begin{array}{ll} & \displaystyle \int_{\tau_1}^\tau\Xint-_{\bf R} (|(V^{R_i}_{y},U^{R_i}_{y},\Theta^{R_i}_{y})|^2,|\Theta^{R_i}_{yy}|)(\phi^2+\zeta^2)dyd\tau\\[4mm] & \displaystyle\leq \int_{\tau_1}^\tau (\|(V^{R_i}_{y},U^{R_i}_{y},\Theta^{R_i}_{y})\|^2 +\|\Theta^{R_i}_{yy}\|_{L^1} )\|(\phi,\zeta)\|^2_{L^\infty}d\tau\\[4mm] &\displaystyle \leq C\int_{\tau_1}^\tau (1+\tau)^ {-1}\|(\phi,\zeta)\|\|(\phi_y,\zeta_y)\|d\tau\\[4mm] &\displaystyle \leq\mu\int_{\tau_1}^\tau \|(\phi_y, \zeta_y)\|^2d\tau+C_\mu \int_{\tau_1}^\tau (1+\tau)^ {-2}\|(\phi,\zeta)\|^2d\tau , \end{array} \end{equation*} where and in the sequel $\mu$ is a small positive constant to be determined and $C_\mu$ is some positive constant depending on $\mu$. Now, it remains to estimate the terms $Q_1\psi$, $Q_2\frac{\zeta}{\theta}$ on the right-hand side of \eqref{20} and the term $|Q_2|(\phi^2+\zeta^2)$ on the right-hand side of \eqref{(3.19)}. For simplicity, we only estimate $Q_2\frac{\zeta}{\theta}$ in detail. By \eqref{Q2}, we find that \begin{equation*} \begin{array}{ll} \displaystyle \int_{\tau_1}^\tau \Xint-_{\mathbf{R}}|Q_{2}\frac{\zeta}{\theta}|dy d\tau \leq C\int_{\tau_1}^\tau \|\zeta\|_{L^\infty_y}\|Q_2\|_{L^1_y} d\tau\\[4mm] \quad\displaystyle \leq C\int_{\tau_1}^\tau \|\zeta\|^{\f12}\|\zeta_y\|^{\f12}\Big(\|Q_{21}\|_{L^1_y} +\|Q_{22}\|_{L^1_y}+\|Q^{CD}\|_{L^1_y}\Big)d\tau\\[4mm] \quad\displaystyle \leq C\int_{\tau_1}^\tau \|\zeta\|^{\f12}\|\zeta_y\|^{\f12}\Big(\delta e^{-C(1+\tau)} +(\delta^{R_1}+\delta^{R_3})^{\f18}(1+\tau)^{-\f78}+\delta (1+\tau)^{-\f32}\Big)d\tau\\[4mm] \quad\displaystyle \leq \mu\int_{\tau_1}^\tau \|\zeta_y\|^2 d\tau+C_\mu~ \delta^{\f16}\int_{\tau_1}^\tau \|\zeta\|^{\f23}(1+\tau)^{-\f76} d\tau\\[4mm] \quad\displaystyle \leq\mu \int_{\tau_1}^\tau \|\zeta_y\|^2 d\tau+C_\mu \int_{\tau_1}^\tau\|\zeta\|^2 (1+\tau)^{-\f76} d\tau+C_\mu~\delta^{\f14}. \end{array} \end{equation*} Similarly, one can control the terms $Q_1\psi$ and $|Q_2|(\phi^2+\zeta^2)$, as sketched below. 
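Indeed, by \eqref{Q1} the term $Q_1$ obeys the same $L^1_y$ bound as $Q_2$ (only without the $Q^{CD}$ contribution), so the same chain of inequalities gives, as a sketch, $$ \int_{\tau_1}^\tau \Xint-_{\mathbf{R}}|Q_{1}\psi|dy d\tau \leq C\int_{\tau_1}^\tau \|\psi\|^{\f12}\|\psi_y\|^{\f12}\Big(\delta e^{-C(1+\tau)}+(\delta^{R_1}+\delta^{R_3})^{\f18}(1+\tau)^{-\f78}\Big)d\tau \leq \mu\int_{\tau_1}^\tau\|\psi_y\|^2d\tau+C_\mu\int_{\tau_1}^\tau(1+\tau)^{-\f76}\|\psi\|^2d\tau+C_\mu~\delta^{\f14}, $$ where Young's inequality is applied exactly as above; the term $|Q_2|(\phi^2+\zeta^2)$ is handled in the same way, with $\|(\phi,\zeta)\|^2_{L^\infty}$ in place of $\|\zeta\|_{L^\infty}$. 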
Thus, substituting all the above estimates into \eqref{20} and choosing $\mu$ in front of the integral $\displaystyle \int_{\tau_1}^\tau\|(\psi_y,\zeta_y)\|^2d\tau$ small enough, so that the integral can be absorbed by the left-hand side of \eqref{20}, one concludes \begin{equation} \begin{array}{ll} \displaystyle \|(\phi,\psi,\zeta)(\tau,\cdot)\|^2+\int_{\tau_1}^\tau\big\{\|(\psi_y,\zeta_y)(\tau,\cdot)\|^2 +\|\sqrt{(U^{R_1}_y, U_y^{R_3})}(\phi,\zeta)(\tau,\cdot)\|^2\big\}d\tau\\ \displaystyle \leq C\|(\phi,\psi,\zeta)(\tau_1,\cdot)\|^2+C\int_{\tau_1}^\tau(1+\tau)^{-\f76}\|(\phi,\psi,\zeta)\|^2 d\tau+C\delta^{\f14}\\ \displaystyle +C\mu\int_{\tau_1}^\tau\|\phi_{y}(\tau,\cdot)\Xparallel-^2d\tau +C\delta \int_{\tau_1}^\tau\Xint-_{\bf R}(1+\tau)^{-1}e^{-\frac{c_0 y^2}{1+\tau}}|(\phi,\zeta)|^2dy d\tau. \end{array}\label{(3.25)} \end{equation} Next, we estimate $\|\phi_y\|^2$. Denote $\tilde v=\frac{v}{V}.$ From the system $\eqref{P}_2$, one has $$ (\frac{\tilde{v}_y}{\tilde v})_\tau-\psi_\tau-(p-P)_y-Q_1=0. $$ Multiplying the above equation by $\frac{\tilde{v}_y}{\tilde v}$ and noticing that $$ -(p-P)_y=\frac{R\theta}{v}\frac{\tilde{v}_y}{\tilde v}-\frac{R\zeta_y}{v}+(p-P)\frac{V_y}{V}-R\Theta_y(\f1V-\f1v), $$ one obtains \begin{eqnarray*} && \displaystyle \left(\frac{1}{2}(\frac{\tilde{v}_y}{\tilde v})^2-\psi\frac{\tilde{v}_y}{\tilde v}\right)_\tau +\left(\psi\frac{\tilde{v}_\tau}{\tilde v}\right)_y+\frac{R\theta}{v}(\frac{\tilde{v}_y}{\tilde v})^2\\ && \quad = \displaystyle\psi_y(\frac{u_y}{v}-\frac{U_y}{V})+\left(\frac{R\zeta_y}{v}-(p-P)\frac{V_y}{V} +R\Theta_y(\f1V-\f1v)-Q_1\right)\frac{\tilde{v}_y}{\tilde v}. \end{eqnarray*} Integrating the above equality with respect to $y$ and $\tau$ over ${\bf R}^\pm\times[\tau_1,\tau]$ and using Cauchy-Schwarz's inequality, we infer that \begin{equation} \begin{array}{ll} &\displaystyle \Xint-_{\bf R}\left(\frac{1}{2}(\frac{\tilde{v}_y}{\tilde v})^2-\psi\frac{\tilde{v}_y}{\tilde v}\right)(\tau,y)dy+\int_{\tau_1}^\tau\left[\psi\frac{\tilde{v}_\tau}{\tilde v}\right](\tau)d\tau+\int_{\tau_1}^\tau\Xint-_{\bf R}\frac{R\theta}{2v}(\frac{\tilde{v}_y}{\tilde v})^2dy d\tau\\[4mm] \leq&\displaystyle \Xint-_{\bf R}\left(\frac{1}{2}(\frac{\tilde{v}_y}{\tilde v})^2-\psi\frac{\tilde{v}_y}{\tilde v}\right)(\tau_1,y)dy+\int_{\tau_1}^\tau\Xint-_{\bf R} |\psi_y(\frac{u_y}{v}-\frac{U_y}{V})| dy d\tau\\ & \displaystyle+C\int_{\tau_1}^\tau\Xint-_{\bf R} \left|\frac{R\zeta_y}{v}-(p-P)\frac{V_y}{V} +R\Theta_y(\f1V-\f1v)-Q_1\right|^2 dy d\tau , \end{array}\label{(3.27)} \end{equation} where the jump across $y=0$ can be bounded as follows. \begin{equation*} \begin{array}{ll} \displaystyle \int_{\tau_1}^\tau\left[\psi\frac{\tilde{v}_\tau}{\tilde v}\right](\tau)d\tau =\int_{\tau_1}^\tau\psi(\tau,0)\left[\frac{u_y}{v}-\frac{U_y}{V}\right] (\tau)d\tau=\int_{\tau_1}^\tau\psi(\tau,0)\left[p\right](\tau)d\tau\\[4mm] \qquad\displaystyle =R\int_{\tau_1}^\tau\psi(\tau,0)\theta(\tau,0)\left[\f1v\right](\tau)d\tau=-R\int_{\tau_1}^\tau \frac{\psi(\tau,0)\theta(\tau,0)}{v(\tau,0+)v(\tau,0-)}\left[v\right](\tau)d\tau\\[4mm] \qquad\displaystyle \leq C\int_{\tau_1}^\tau\|\psi\|_{L^\infty}(\tau)|[v]|(\tau_1)e^{-C(\tau-\tau_1)}d\tau \leq C\delta\int_{\tau_1}^\tau\|\psi\|^{\f12}\|\psi_y\|^{\f12}e^{-C(\tau-\tau_1)}d\tau\\[4mm] \qquad\displaystyle \leq \delta\int_{\tau_1}^\tau\|\psi_y\|^2d\tau+\delta\sup_{\tau\in[\tau_1,\tau_2]}\|\psi\|^2(\tau)+C\delta. 
\end{array} \end{equation*} Using the equality $$ \frac{\tilde{v}_y}{\tilde v}=\frac{v_y}{v}-\frac{V_y}{V}=\frac{\phi_y}{v}-\frac{V_y\phi}{vV}, $$ we see that \begin{equation*} C^{-1}(|\phi_y|^2-|V_y\phi|^2)\leq (\frac{\tilde{v}_y}{\tilde v})^2\leq C(|\phi_y|^2+|V_y\phi|^2). \end{equation*} From the definition of $Q_1$ in \eqref{Q1} it follows that \begin{equation*} \begin{array}{ll} \displaystyle\int_{\tau_1}^\tau\|Q_1\|^2d\tau\leq C\int_{\tau_1}^\tau\Big(\|Q_{11}\|^2+\|Q_{12}\|^2\Big)d\tau\\ \displaystyle \qquad\leq C\int_{\tau_1}^\tau\Big(\|Q_{11}\|^2+\|(U^{R_1}_{yy},U^{R_3}_{yy},U^{R_1}_yV^{R_1}_y,U^{R_3}_yV^{R_3}_y)\|^2\Big)d\tau\leq C\delta^{\f14}. \end{array} \end{equation*} Therefore, substituting all the above estimates into \eqref{(3.27)}, we conclude that \begin{equation} \begin{array}{ll} \displaystyle \quad \|\phi_y(\tau,\cdot)\Xparallel-^2+\int_{\tau_1}^\tau\|\phi_y\Xparallel-^2 d\tau \leq C\|(\phi,\psi,\phi_{y})\Xparallel-^2(\tau_1)+C\|(\phi,\psi)(\tau,\cdot)\|^2\\ \displaystyle +C\delta \int_{\tau_1}^\tau\Xint-_{\bf R}(1+\tau)^{-1}e^{-\frac{c_0 y^2}{1+\tau}}|(\phi,\zeta)|^2dy d\tau+C\int_{\tau_1}^\tau\|(\psi_y,\zeta_y)\|^2d\tau \\[4mm] \displaystyle+C\int_{\tau_1}^\tau(1+\tau)^{-\f76}\|(\phi,\psi,\zeta)\|^2 d\tau+C\delta^{\f14}. \end{array} \label{(3.30)} \end{equation} Multiplying the inequality \eqref{(3.25)} by a large constant $C_1>0$, and summing the resulting inequality with \eqref{(3.30)}, we obtain Lemma \ref{lemma1}. This completes the proof. $\hfill\Box$ \vspace{2mm} Next, we derive the higher order estimates, which are summarized in the following Lemma: \begin{lemma}\label{lemma2} Under the assumptions of Proposition \ref{priori}, it holds that \begin{equation*} \begin{array}{ll} \displaystyle N(\tau_1,\tau_2)+\int_{\tau_1}^{\tau_2} \big\{\|\sqrt{(U^{R_1}_y,U^{R_3}_y)}(\phi,\zeta)\|^2+\|\phi_y\Xparallel-^2+\|(\psi_y,\zeta_y)\Xparallel-_1^2 +\|(\psi_{y\tau},\zeta_{y\tau})\Xparallel-^2\big\} d\tau\\ \displaystyle \leq CN(\tau_1)+C\int_{\tau_1}^{\tau_2}(1+\tau)^{-\f76}\|(\phi,\psi,\zeta)\|^2 d\tau+C\delta^{\f14}+C \delta \int_{\tau_1}^{\tau_2} \Xint-_{\bf R} (1+\tau)^{-1}e^{-\frac{c_0 y^2}{1+\tau}} |(\phi,\zeta)|^2dy d\tau. \end{array} \end{equation*} \end{lemma} \noindent{\rm\bf Proof:} Multiplying the equation $\eqref{P}_2$ by $\displaystyle-\psi_{yy}$, one gets \begin{equation*} \begin{array}{ll} \displaystyle \left(\frac{\psi_y^2}{2}\right)_\tau-\left(\psi_\tau\psi_y\right)_y+\frac{\psi_{yy}^2}{v} =\Big\{(p-P)_y+\frac{v_y}{v^2}\psi_y-\big(U_y (\frac{1}{v}-\frac{1}{V})\big)_y+Q_1\Big\}\psi_{yy}. \end{array} \end{equation*} Integration of the above equation with respect to $y$ and $\tau$ over ${\bf R}^\pm\times[\tau_1,\tau]$ gives \begin{equation} \begin{array}{ll} \displaystyle \Xint-_{\bf R}\frac{\psi_y^2}{2}(\tau,y)dy+\int_{\tau_1}^\tau\Xint-_{\bf R}\frac{\psi_{yy}^2}{v}dyd\tau =\Xint-_{\bf R}\frac{\psi_y^2}{2}(\tau_1,y)dy-\int_{\tau_1}^\tau\left[\psi_\tau\psi_y\right](\tau)d\tau\\ \displaystyle ~~+\int_{\tau_1}^\tau\Xint-_{\bf R}\Big\{(p-P)_y+\frac{v_y}{v^2}\psi_y -\big(U_y (\frac{1}{v}-\frac{1}{V})\big)_y+Q_1\Big\}\psi_{yy}dyd\tau =:\sum_{i=1}^3J_i. \end{array}\label{(3.31)} \end{equation} We have to estimate $J_i$. First, the jump $J_2$ can be bounded as follows. 
\begin{equation}\label{J2-1} \begin{array}{ll} J_2&\displaystyle =-\int_{\tau_1}^\tau\left[\psi_\tau\psi_y\right](\tau)d\tau=-\int_{\tau_1}^\tau\psi_\tau(\tau,0)\left[\psi_y\right](\tau)d\tau\\[3mm] &\displaystyle= -\int_{\tau_1}^\tau\psi_\tau(\tau,0)\left[u_y\right](\tau)d\tau=-\int_{\tau_1}^\tau\psi_\tau(\tau,0)\left[(\frac{u_y}{v}-p)v\right](\tau)d\tau\\[3mm] &\displaystyle=-\int_{\tau_1}^\tau\psi_\tau(\tau,0)(\frac{u_y}{v}-p)(\tau,0)\left[v\right](\tau)d\tau\\[3mm] &\displaystyle\leq C\int_{\tau_1}^\tau\|\psi_\tau\|_{L^\infty}\big(\|\psi_y\|_{L^\infty}+1\big)|[v](\tau_1)|e^{-C(\tau-\tau_1)}d\tau\\[3mm] &\displaystyle\leq C\delta\int_{\tau_1}^\tau\|\psi_\tau\|^{\f12}\|\psi_{y\tau}\Xparallel-^{\f12}\big(\|\psi_y\|^{\f12}\|\psi_{yy}\Xparallel-^{\f12}+1\big)e^{-C(\tau-\tau_1)}d\tau. \end{array} \end{equation} In view of $\eqref{P}_2$ and \eqref{priori-a}, one has \begin{equation}\label{psi-tau} \begin{array}{ll} \|\psi_\tau\|&\displaystyle \leq C\Big(\|\psi_{yy}\Xparallel-+\|(\phi_y,\psi_y,\zeta_y)\Xparallel-+\|(U_{yy},V_y,U_y,\Theta_y)\phi\|+\|Q_1\|\Big)\\[3mm] &\displaystyle \leq C\Big(\|\psi_{yy}\Xparallel-+\|(\phi_y,\psi_y,\zeta_y)\Xparallel-+\delta\Big). \end{array} \end{equation} Substituting \eqref{psi-tau} into \eqref{J2-1}, we obtain \begin{equation}\label{J2} \begin{array}{ll} \displaystyle |J_2| \leq C\delta\int_{\tau_1}^\tau\Big(\|\psi_{yy}\Xparallel-+\|(\phi_y,\psi_y,\zeta_y)\Xparallel-+\delta\Big)^{\f12} \|\psi_{y\tau}\Xparallel-^{\f12}\Big(\|\psi_y\|^{\f12}\|\psi_{yy}\Xparallel-^{\f12}+1\Big)e^{-C(\tau-\tau_1)}d\tau\\ \displaystyle \leq \mu\int_{\tau_1}^\tau\|(\psi_{yy},\psi_{y\tau})\Xparallel-^2d\tau +C_\mu~\delta\int_{\tau_1}^\tau \|(\phi_y,\psi_y,\zeta_y)\Xparallel-^2 d\tau+C_\mu\delta . \end{array} \end{equation} On the other hand, $J_3$ can be estimated as follows. \begin{equation}\label{J3} \begin{array}{ll} J_3&\displaystyle =\int_{\tau_1}^\tau\Xint-_{\bf R}\bigg\{(p-P)_y+\frac{v_y}{v^2}\psi_y -\big(U_y (\frac{1}{v}-\frac{1}{V})\big)_y+Q_1\bigg\}\psi_{yy}dyd\tau\\[4mm] &\displaystyle \leq C\int_{\tau_1}^\tau\Xint-_{\bf R}\Big\{|(\phi_y,\zeta_y)|+|(\phi,\zeta)||(\phi_y,V_y,\Theta_y,U_{yy})|\\ &\displaystyle\qquad\qquad\qquad~~+|(\phi_y,V_y)||(\psi_y,U_y,U_y\phi)|+|Q_1|\Big\}|\psi_{yy}|dyd\tau\\[3mm] &\displaystyle \leq \mu \int_{\tau_1}^\tau \|\psi_{yy}\Xparallel-^2d\tau+C_\mu \int_{\tau_1}^\tau \|(\phi_y,\psi_y,\zeta_y)\Xparallel-^2d\tau+C_\mu~\int_{\tau_1}^\tau(1+\tau)^{-\f76}\|(\phi,\psi,\zeta)\|^2d\tau\\[3mm] &\displaystyle ~~~+C_\mu~\delta+C_\mu ~\delta \int_{\tau_1}^\tau\Xint-_{\bf R}(1+\tau)^{-1}e^{-\frac{c_0 y^2}{1+\tau}}|(\phi,\zeta)|^2dy d\tau. \end{array} \end{equation} Substituting \eqref{J2} and \eqref{J3} into \eqref{(3.31)} and choosing $\mu$ suitably small in front of the integral $\int_{\tau_1}^\tau \|\psi_{yy}\Xparallel-^2d\tau$, we deduce that \begin{equation} \begin{array}{ll} \displaystyle \|\psi_y\|^2(\tau) +\int_{\tau_1}^\tau\|\psi_{yy}\Xparallel-^2d\tau \leq C\|\psi_{y}\|^2(\tau_1)+C\mu \int_{\tau_1}^\tau \|\psi_{y\tau}\Xparallel-^2d\tau\\[4mm] \displaystyle\quad +C_\mu\int_{\tau_1}^\tau(1+\tau)^{-\f76}\|(\phi,\zeta)\|^2 d\tau+C_\mu~\delta+C_\mu\int_{\tau_1}^\tau \|(\phi_y,\psi_y,\zeta_y)\Xparallel-^2d\tau \\[4mm] \displaystyle \quad +C_\mu~ \delta \int_{\tau_1}^\tau \Xint-_{\bf R} (1+\tau)^{-1}e^{-\frac{c_0 y^2}{1+\tau}} |(\phi,\zeta)|^2dy d\tau . 
\end{array}\label{(3.32)} \end{equation} Multiplication of the equation $\eqref{P}_3$ by $-\zeta_{yy}$ yields that \begin{equation*} \begin{array}{ll} \displaystyle \frac{R}{\gamma-1}\left(\frac{\zeta_y^2}{2}\right)_\tau-\frac{R}{\gamma-1}\left(\zeta_\tau\zeta_y\right)_y+\nu\frac{\zeta_{yy}^2}{v}\\ \displaystyle =\bigg\{(pu_y-PU_y) +\nu\frac{\zeta_yv_y}{v^2}-\nu\big(\Theta_y(\f1v-\f1V)\big)_y-(\frac{u_y^2}{v}-\frac{U_y^2}{V})+Q_2\bigg\}\zeta_{yy}. \end{array} \end{equation*} Integrating the above equality with respect to $y$ and $\tau$ over ${\bf R}^\pm\times[\tau_1,\tau]$, and employing almost the same arguments as those used for $\|\psi_y\|^2(\tau)$ in \eqref{(3.32)}, we obtain \begin{equation} \begin{array}{ll} &\displaystyle \|\zeta_y\|^2(\tau) +\int_{\tau_1}^\tau\|\zeta_{yy}\Xparallel-^2d\tau \leq C\|\zeta_{y}\|^2(\tau_1)+C\delta^{\f14}+C\int_{\tau_1}^\tau(1+\tau)^{-\f76}\|(\phi,\zeta)\|^2 d\tau \\ &\displaystyle \quad +C\int_{\tau_1}^\tau \|(\phi_y,\psi_y,\zeta_y)\Xparallel-^2d\tau+C\delta^2\int_{\tau_1}^\tau \Xint-_{\bf R} (1+\tau)^{-1}e^{-\frac{c_0 y^2}{1+\tau}} |(\phi,\zeta)|^2dy d\tau, \end{array}\label{(3.33)} \end{equation} where we have used the following jump estimate across $y=0$ \begin{equation*} \begin{array}{ll} \displaystyle -\frac{R}{\gamma-1}\int_{\tau_1}^\tau\left[\zeta_\tau\zeta_y\right](\tau) d\tau =-\frac{R}{\gamma-1}\int_{\tau_1}^\tau\zeta_\tau(\tau,0)\left[\zeta_y\right](\tau) d\tau\\[4mm] \displaystyle =-\frac{R}{\gamma-1}\int_{\tau_1}^\tau\zeta_\tau(\tau,0)\left[\theta_y\right](\tau) d\tau =-\frac{R}{\gamma-1}\int_{\tau_1}^\tau\zeta_\tau(\tau,0)\frac{\theta_y}{v}(\tau,0)\left[v\right](\tau) d\tau\\ \displaystyle \leq C\int_{\tau_1}^\tau\|\zeta_\tau\|_{L^\infty}\big(1+\|\zeta_y\|_{L^\infty}\big)|[v](\tau_1)|e^{-C(\tau-\tau_1)}d\tau\\ \displaystyle \leq C\delta\int_{\tau_1}^\tau\|\zeta_\tau\Xparallel-^{\f12}\|\zeta_{y\tau}\Xparallel-^{\f12} \big(1+\|\zeta_y\|^{\f12}\|\zeta_{yy}\Xparallel-^{\f12}\big)e^{-C(\tau-\tau_1)}d\tau \end{array} \end{equation*} and the estimate \begin{equation}\label{zeta-tau} \begin{array}{ll} \displaystyle \|\zeta_\tau\|\leq C\Big(\|\zeta_{yy}\Xparallel-+\|(\phi_y,\psi_y,\zeta_y)\Xparallel-+\|(U_y,\Theta_{yy},\Theta_yV_y,U_y^2)(\phi,\zeta)\|+\|Q_2\|\Big)\\ \displaystyle \qquad \leq C\Big(\|\zeta_{yy}\Xparallel-+\|(\phi_y,\psi_y,\zeta_y)\Xparallel-+\delta\Big). \end{array} \end{equation} It follows from \eqref{psi-tau} and \eqref{zeta-tau} that \begin{equation}\label{tau} \begin{array}{ll} \displaystyle \int_{\tau_1}^{\tau_2}\|(\psi_\tau,\zeta_\tau)(\tau,\cdot)\|^2d\tau\\ \displaystyle \leq C\Big(\int_{\tau_1}^{\tau_2}\|(\psi_{yy},\zeta_{yy})\Xparallel-^2d\tau+\int_{\tau_1}^{\tau_2}\|(\phi_y,\psi_y,\zeta_y)\Xparallel-^2d\tau\\[4mm] \displaystyle\qquad\qquad+\int_{\tau_1}^{\tau_2}\|(U_y,\Theta_{yy},\Theta_yV_y,U_y^2)(\phi,\zeta)\|^2d\tau+\int_{\tau_1}^{\tau_2}\|Q_2\|^2d\tau\Big)\\[4mm] \displaystyle \leq C\int_{\tau_1}^{\tau_2}\|(\psi_{yy},\zeta_{yy})\Xparallel-^2d\tau+C\int_{\tau_1}^{\tau_2} \|(\phi_y,\psi_y,\zeta_y)\Xparallel-^2d\tau+C\int_{\tau_1}^{\tau_2}(1+\tau)^{-2}\|(\phi,\zeta)\|^2d\tau+C\delta^{\f14}. \end{array} \end{equation} Now we turn to control $\displaystyle \sup_{\tau\in[\tau_1,\tau_2]} \|(\psi_{\tau},\zeta_{\tau})\Xparallel-^2$. First, applying the operator $\partial_\tau$ to the equation $(\ref{P})_2$, we get \begin{equation*} \psi_{\tau\tau}=\big(\frac{u_y}{v}-p\big)_{y\tau}-\big(\frac{U_y}{V}-P\big)_{y\tau}-Q_{1\tau}. 
\end{equation*} Multiplication of the above equation by $\psi_\tau$ gives \begin{equation*} \begin{array}{ll} \displaystyle \left(\frac{\psi_\tau^2}{2}\right)_\tau+\frac{\psi_{y\tau}^2}{v}=\Big\{\psi_\tau\big(\frac{u_y}{v} -p\big)_\tau-\psi_\tau\big(\frac{U_y}{V}-P\big)_\tau\Big\}_y\\[4mm] \displaystyle\qquad -\psi_{y\tau}\frac{U_{y\tau}}{v}+\psi_{y\tau}\frac{u_y}{v^2}v_\tau+\psi_{y\tau}(\frac{U_y}{V})_\tau +\psi_{y\tau}(p-P)_\tau-\psi_\tau Q_{1\tau}. \end{array} \end{equation*} If we integrate the above equality with respect to $y$ and $\tau$ over ${\bf R}^\pm\times[\tau_1,\tau]$, we find that \begin{equation}\label{f1} \begin{array}{ll} \displaystyle \Xint-_{\bf R}\frac{\psi_\tau^2}{2}(\tau,y)dy+\int_{\tau_1}^\tau\Xint-_{\bf R}\frac{\psi_{y\tau}^2}{v}dyd\tau\\[4mm] \displaystyle~~=\Xint-_{\bf R}\frac{\psi_\tau^2}{2}(\tau_1,y)dy-\int_{\tau_1}^\tau\Big[\psi_\tau \big(\frac{u_y}{v}-p\big)_\tau-\psi_\tau\big(\frac{U_y}{V}-P\big)_\tau\Big](\tau)d\tau\\[4mm] \displaystyle +\int_{\tau_1}^\tau\Xint-_{\bf R}\Big\{-\psi_{y\tau}\frac{U_{y\tau}}{v}+\psi_{y\tau} \frac{u_y}{v^2}v_\tau+\psi_{y\tau}(\frac{U_y}{V})_\tau+\psi_{y\tau}(p-P)_\tau-\psi_\tau Q_{1\tau}\Big\}dyd\tau , \end{array} \end{equation} where the jump across $y=0$ in fact vanishes, i.e., \begin{equation} \begin{array}{ll} \displaystyle \Big[\psi_\tau\big(\frac{u_y}{v}-p\big)_\tau-\psi_\tau\big(\frac{U_y}{V}-P\big)_\tau\Big](\tau)\\[3mm] \displaystyle=[\psi_\tau](\tau)\big(\frac{u_y}{v}-p\big)_\tau(\tau,0-) +\psi_\tau(\tau,0+) \Big[\big(\frac{u_y}{v}-p\big)_\tau\Big](\tau)-[\psi_\tau](\tau)\big(\frac{U_y}{V}-P\big)_\tau(\tau,0)\\[3mm] \displaystyle=[\psi]_\tau(\tau)\big(\frac{u_y}{v}-p\big)_\tau(\tau,0-)+\psi_\tau(\tau,0+) \Big[\frac{u_y}{v}-p\Big]_\tau(\tau)-[\psi]_\tau(\tau)\big(\frac{U_y}{V}-P\big)_\tau(\tau,0)\\[2mm] \displaystyle =0. \end{array} \label{js1} \end{equation} Now we apply $\partial_\tau$ to the equation $(\ref{P})_3$ to deduce that \begin{equation*} \frac{R}{\gamma-1}\zeta_{\tau\tau}=\nu\big(\frac{\theta_y}{v}\big)_{y\tau}-\nu\big(\frac{\Theta_y}{V}\big)_{y\tau} +\Big\{u_y\big(\frac{u_y}{v}-p\big)\Big\}_{\tau}-\Big\{U_y\big(\frac{U_y}{V}-P\big)\Big\}_{\tau}-Q_{2\tau}. \end{equation*} Multiplying the above equation by $\zeta_\tau$, one has \begin{equation*} \begin{array}{ll} \displaystyle \frac{R}{\gamma-1}(\frac{\zeta_\tau^2}{2})_\tau+\nu\frac{\zeta_{y\tau}^2}{v} =\Big\{\nu\zeta_\tau\big(\frac{\theta_y}{v}\big)_\tau-\nu\zeta_\tau\big(\frac{\Theta_y}{V}\big)_\tau\Big\}_y\\[3mm] \displaystyle \qquad+\nu\zeta_{y\tau}\frac{\Theta_{y\tau}}{v} +\nu\zeta_{y\tau}\frac{\theta_y}{v^2}v_\tau+\nu\zeta_{y\tau}(\frac{\Theta_y}{V})_\tau+\zeta_\tau u_{y\tau}(\frac{u_y}{v}-p)\\[3mm] \displaystyle\qquad+\zeta_\tau u_y(\frac{u_y}{v}-p)_{\tau} -\zeta_\tau U_{y\tau}(\frac{U_y}{V}-P)-\zeta_\tau U_y(\frac{U_y}{V}-P)_{\tau}-\zeta_\tau Q_{2\tau}. 
\end{array} \end{equation*} Integrating the above equality with respect to $y$ and $\tau$ over ${\bf R}^\pm\times[\tau_1,\tau]$, we deduce that \begin{equation}\label{f2} \begin{array}{ll} \displaystyle \Xint-_{\bf R}\frac{R\zeta_\tau^2}{2(\gamma-1)}(\tau,y)dy+\int_{\tau_1}^\tau\Xint-_{\bf R}\nu\frac{\zeta_{y\tau}^2}{v}dyd\tau =\Xint-_{\bf R}\frac{R\zeta_\tau^2}{2(\gamma-1)}(\tau_1,y)dy\\[5mm] \displaystyle -\int_{\tau_1}^\tau\Big[\nu\zeta_\tau\big(\frac{\theta_y}{v}\big)_\tau-\nu\zeta_\tau\big(\frac{\Theta_y}{V}\big)_\tau\Big](\tau)d\tau +\int_{\tau_1}^\tau\Xint-_{\bf R}\Big\{\nu\zeta_{y\tau}\frac{\Theta_{y\tau}}{v}+\nu\zeta_{y\tau}\frac{\theta_y}{v^2}v_\tau+\nu\zeta_{y\tau}(\frac{\Theta_y}{V})_\tau\\[5mm] \displaystyle +\zeta_\tau u_{y\tau}(\frac{u_y}{v}-p)+\zeta_\tau u_y(\frac{u_y}{v}-p)_{\tau} -\zeta_\tau U_{y\tau}(\frac{U_y}{V}-P) -\zeta_\tau U_y(\frac{U_y}{V}-P)_{\tau}-\zeta_\tau Q_{2\tau}\Big\}dyd\tau , \end{array} \end{equation} where the jump across $y=0$ in fact vanishes, i.e., \begin{equation} \begin{array}{ll} \displaystyle \Big[\nu\zeta_\tau\big(\frac{\theta_y}{v}\big)_\tau-\nu\zeta_\tau\big(\frac{\Theta_y}{V}\big)_\tau\Big](\tau)\\[2mm] \displaystyle=\nu[\zeta_\tau](\tau)\big(\frac{\theta_y}{v}\big)_\tau(\tau,0-)+\nu \zeta_\tau(\tau,0+)\Big[\big(\frac{\theta_y}{v}\big)_\tau\Big](\tau) -\nu[\zeta_\tau](\tau)\big(\frac{\Theta_y}{V}\big)_\tau(\tau,0)\\[3mm] \displaystyle=\nu[\zeta]_\tau(\tau)\big(\frac{\theta_y}{v}\big)_\tau(\tau,0-)+\nu\zeta_\tau(\tau,0+)\Big[\frac{\theta_y}{v}\Big]_\tau(\tau) -\nu[\zeta]_\tau(\tau)\big(\frac{\Theta_y}{V}\big)_\tau(\tau,0)\\[2mm] \displaystyle=0. \end{array} \label{js2} \end{equation} Hence, taking into account \eqref{js1} and \eqref{js2}, we get from \eqref{f1} and \eqref{f2} that \begin{equation}\label{f3} \begin{array}{ll} \displaystyle \|(\psi_\tau,\zeta_\tau)\|^2(\tau) +\int_{\tau_1}^\tau\|(\psi_{y\tau},\zeta_{y\tau})\|^2d\tau \leq C\|(\psi_\tau,\zeta_\tau)\|^2(\tau_1)+C \int_{\tau_1}^\tau \|(\psi_{\tau},\zeta_\tau)\|^2d\tau\\[4mm] \displaystyle\quad +C\int_{\tau_1}^\tau(1+\tau)^{-\f76}\|(\phi,\zeta)\|^2 d\tau+C\delta+C\int_{\tau_1}^\tau \|(\phi_y,\psi_y,\zeta_y)\|^2d\tau \\[4mm] \displaystyle \quad +C\delta \int_{\tau_1}^\tau \Xint-_{\bf R} (1+\tau)^{-1}e^{-\frac{c_0 y^2}{1+\tau}} |(\phi,\zeta)|^2dy d\tau. \end{array} \end{equation} Combining the estimates \eqref{(3.32)}, \eqref{(3.33)}, \eqref{tau}, \eqref{f3} and Lemma \ref{lemma1}, we obtain Lemma \ref{lemma2}, and the proof is completed. $\hfill\Box$ \vspace{2mm} It remains to control the term $$ \delta \int_{\tau_1}^\tau \Xint-_{\bf R} (1+\tau)^{-1}e^{-\frac{c_0 y^2}{1+\tau}} |(\phi,\zeta)|^2dy d\tau, $$ which comes from the viscous contact wave. We shall use the estimate on the heat kernel in \cite{HLM} to get the desired estimates.
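Before stating the key lemma, we record an elementary computation which explains why this term is delicate: a direct Gaussian integral gives \begin{equation*} \Xint-_{\bf R}\frac{e^{-\frac{c_0 y^2}{1+\tau}}}{1+\tau}\,dy=\sqrt{\frac{\pi}{c_0}}\,(1+\tau)^{-\f12}, \end{equation*} so that bounding $|(\phi,\zeta)|$ merely in $L^\infty$ would leave the divergent factor $\int_{\tau_1}^\tau(1+s)^{-\f12}ds$. Hence the spatial localization encoded in the weight has to be exploited, which is the content of the following lemma.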
\begin{lemma}\label{lemma3} Suppose that $Z(t,y)$ satisfies $$ Z\in L^\infty(0,T; L^2(\mathbf{R}^{\pm})),~~Z_y\in L^2(0,T; L^2(\mathbf{R}^{\pm})),~~Z_\tau\in L^2(0,T; H^{-1}(\mathbf{R}^{\pm})), $$ then \begin{equation} \begin{array}{ll} &\displaystyle \int_{\tau_1}^\tau \Xint-_{\mathbf{R}}(1+\tau)^{-1}Z^2e^{-\frac{2\beta y^2}{1+\tau}}dy d\tau\\[4mm] &\displaystyle \leq C_\beta\bigg\{ \|Z(\tau_1,y)\|^2+\int_{\tau_1}^\tau \|Z_y\|^2 d\tau+\int_{\tau_1}^\tau\langle Z_\tau,Zg_\beta^2\rangle_{H^1\times H^{-1}(\mathbf{R}^{\pm})}d\tau\bigg\} \end{array}\label{(3.35)} \end{equation} where \begin{equation}\label{g-b} g_\beta(\tau,y)= \displaystyle (1+\tau)^{-\f12}\int_{0}^{y} e^{-\frac{\beta \eta^2}{1+\tau}}d\eta \end{equation} and $\beta>0$ is a constant to be determined. \end{lemma} \begin{remark} Lemma \ref{lemma3} can be shown using arguments similar to those in \cite{HLM}, and hence its proof will be omitted here for simplicity. Note that the domain considered here consists of two half lines $\mathbf{R}^\pm$, and hence the jump across $y=0$ should be treated. In view of this, the function $g_\beta$ should be chosen as in \eqref{g-b}, so that $g_\beta$ is continuous at $y=0$. Furthermore, it holds that $g_\beta(\tau,0)\equiv0.$ \end{remark} \begin{lemma}\label{lemma4} Under the assumptions of Proposition \ref{priori}, it holds that \begin{equation*} \begin{array}{ll} &\displaystyle \int_{\tau_1}^\tau\Xint-_{\bf R} \frac{e^{-\frac{c_0 y^2}{1+\tau}}}{1+\tau} |(\phi,\psi,\zeta)|^2 dy d\tau\leq C\delta+C\|(\phi,\psi,\zeta)(\tau_1,\cdot)\|^2+C\|(\phi,\psi,\zeta)(\tau,\cdot)\|^2\\[2mm] &~~~~~~~~~~~~~~~~~~~~\displaystyle +C\int_{\tau_1}^\tau\|(\phi_y,\psi_y,\zeta_y)\|^2d\tau+C\int_{\tau_1}^\tau (1+\tau)^{-\f76}\|(\phi,\psi)\|^2 d\tau. \end{array}\label{(3.36)} \end{equation*}\end{lemma} \noindent{\bf Proof:} From the equation $\eqref{P}_2$ and the fact $p-P=\frac{R\zeta-P\phi}{v}$ one gets \begin{equation} \psi_\tau+(\frac{R\zeta-P\phi}{v})_y=(\frac{u_y}{v}-\frac{U_y}{V})_y-Q_1.\label{(3.37)} \end{equation} Let \begin{equation*}\label{G-a} G_\alpha(\tau,y)=(1+\tau)^{-1}\int_{0}^{y} e^{-\frac{\alpha \eta^2}{1+\tau}}d\eta , \end{equation*} where $\alpha$ is a positive constant to be determined. Multiplying the equation \eqref{(3.37)} by $G_\alpha(R\zeta-P\phi)$, we find that \begin{equation} \begin{array}{ll} &\displaystyle \left(\frac{G_\alpha(R\zeta-P\phi)^2}{2v}\right)_y-(G_\alpha)_y\frac{(R\zeta-P\phi)^2}{2v}+\frac{G_\alpha(R\zeta-P\phi)^2}{2v^2}(V_y+\phi_y)\\[2mm] &\displaystyle =-G_\alpha (R\zeta-P\phi)\psi_\tau+ G_\alpha (R\zeta-P\phi)(\frac{u_y}{v}-\frac{U_y}{V})_y-G_\alpha (R\zeta-P\phi)Q_1.
\end{array}\label{(3.38)} \end{equation} Noticing that \begin{equation} -G_\alpha (R\zeta-P\phi)\psi_\tau=-\big(G_\alpha (R\zeta-P\phi)\psi\big)_\tau +(G_\alpha)_\tau(R\zeta-P\phi)\psi+G_\alpha \psi(R\zeta-P\phi)_\tau \label{(3.39)} \end{equation} and \begin{equation} \begin{array}{ll} \displaystyle (R\zeta-P\phi)_\tau=R\zeta_\tau-P_\tau\phi-P\phi_\tau\\[2mm] \displaystyle =-\gamma P\psi_y+(\gamma-1)\Big\{-(p-P)(U_y+\psi_y) +(\frac{u_y^2}{v}-\frac{U_y^2}{V})+\nu(\frac{\theta_y}{v}-\frac{\Theta_y}{V})_y -Q_2\Big\} -P_\tau\phi , \end{array}\label{(3.40)} \end{equation} if we insert \eqref{(3.40)} into \eqref{(3.39)} and use the equality \begin{equation*} \displaystyle -G_\alpha \gamma P \psi_y \psi =-\Big(\gamma G_\alpha P\frac{\psi^2}{2}\Big)_y+\gamma P(G_\alpha)_y\frac{\psi^2}{2}+\gamma P_y\frac{\psi^2}{2}, \label{(3.41)} \end{equation*} we get from \eqref{(3.38)} that \begin{equation} \frac{e^{-\frac{\alpha y^2}{1+\tau}}}{2(1+\tau)}\Big\{(R\zeta-P\phi)^2+\gamma P\psi^2\Big\}=\big\{G_\alpha v(R\zeta-P\phi)\psi\big\}_\tau+H_{2y}+Q_4, \label{(3.42)} \end{equation} where \begin{equation*} \displaystyle H_2=\frac{G_\alpha(R\zeta-P\phi)^2}{2v}+\gamma G_\alpha P\frac{\psi^2}{2}-\nu(\gamma-1)G_\alpha \psi (\frac{\theta_y}{v}-\frac{\Theta_y}{V})-G_\alpha (R\zeta-P \phi)(\frac{u_y}{v}-\frac{U_y}{V}) \label{(3.43)} \end{equation*} and \begin{equation*} \begin{array}{ll} \displaystyle Q_4= \frac{G_\alpha(R\zeta-P\phi)^2}{2v^2}(V_y+\phi_y)-(G_\alpha)_\tau (R\zeta-P\phi)\psi +\big(G_\alpha (R\zeta-P\phi)\big)_y(\frac{u_y}{v}-\frac{U_y}{V}) \\[2mm] \displaystyle \qquad+(\gamma-1)G_\alpha \psi\left\{(p-P)(U_y+\psi_y)-(\frac{u_y^2}{v}-\frac{U_y^2}{V})+Q_2\right\}\\[2mm] \displaystyle\qquad +(\gamma-1)\nu(G_\alpha \psi)_y(\frac{\theta_y}{v}-\frac{\Theta_y}{V})+G_\alpha (R\zeta-P\phi)Q_1+G_\alpha \psi P_\tau\phi-\gamma P_y\frac{\psi^2}{2}. \end{array} \label{(3.44)} \end{equation*} Integrating \eqref{(3.42)} over $\mathbf{R}^\pm\times [\tau_1,\tau]$, one infers that \begin{equation} \begin{array}{ll} &\displaystyle \int_{\tau_1}^\tau\Xint-_{\bf R}\frac{e^{-\frac{\alpha y^2}{1+\tau}}}{1+\tau}\big\{(R\zeta-P\phi)^2+\psi^2\big\}dy d\tau= \Xint-_{\bf R}\big\{G_\alpha v(R\zeta-P\phi)\psi\big\}(\tau,y)dy\\[4mm] &\displaystyle\qquad -\Xint-_{\bf R}\big\{G_\alpha v(R\zeta-P\phi)\psi\big\}(\tau_1,y)dy+\int_{\tau_1}^\tau\left[H_{2}\right](\tau)d\tau +\int_{\tau_1}^\tau\Xint-_{\bf R}Q_4dyd\tau. \end{array}\label{(3.46)-1} \end{equation} Here we only analyze the jump term $[H_2]$ across $y=0$; the other terms in \eqref{(3.46)-1} can be estimated similarly to those in \cite{HLM} or \cite{HWY}. Recalling that $G_\alpha(\tau,y)$ is continuous at $y=0$ and $G_\alpha(\tau,0)\equiv 0$, we easily see that $$ [H_2](\tau)=0. $$ Thus, from \eqref{(3.46)-1} one gets \begin{equation} \begin{array}{ll} &\displaystyle \int_{\tau_1}^\tau\Xint-_{\bf R}\frac{e^{-\frac{\alpha y^2}{1+\tau}}}{1+\tau}\big\{(R\zeta-P\phi)^2+\psi^2\big\}dy d\tau\leq C\delta+C\|(\phi,\psi,\zeta)(\tau_1,\cdot)\|^2\\[3mm] &\displaystyle \qquad\qquad +C\|(\phi,\psi,\zeta)(\tau,\cdot)\|^2+C\int_{\tau_1}^\tau (1+\tau)^{-\f76}\|(\phi,\psi,\zeta)(\tau,\cdot)\|^2d\tau\\[2mm] &\displaystyle\qquad\qquad +C\int_{\tau_1}^\tau \|(\phi_y,\psi_y,\zeta_y)(\tau,\cdot)\|^2 d\tau+C\delta \int_{\tau_1}^\tau\Xint-_{\bf R}\frac{e^{-\frac{\alpha y^2}{1+\tau}}}{1+\tau}|(\phi,\zeta)|^2dy d\tau.
\end{array}\label{(3.46)} \end{equation} In order to get the desired estimate in Lemma \ref{lemma4}, we will use Lemma \ref{lemma3} to derive another similar estimate from the energy equation $\eqref{P}_3$. To this end, we set $$ Z=\frac{R}{\gamma-1}\zeta+P\phi $$ in Lemma \ref{lemma3}. Thus we only need to compute the last term in \eqref{(3.35)}. From the energy equation $\eqref{P}_3$, we have \begin{equation*} Z_\tau=P_\tau\phi-(p-P)u_y+\nu(\frac{\theta_y}{v}-\frac{\Theta_y}{V})_y+(\frac{u_y^2}{v}-\frac{U_y^2}{V})-Q_2,\label{(3.47)} \end{equation*} whence \begin{equation*} \begin{array}{ll} &\displaystyle \int_{\tau_1}^\tau\langle Z_\tau, Zg_\beta^2\rangle_{H^1\times H^{-1}({\bf R}^\pm)}d\tau\\[5mm] =&\displaystyle\int_{\tau_1}^\tau\Xint-_{\mathbf{R}}\big(P_\tau\phi-(p-P)U_y\big)Zg_\beta^2dy d\tau-\int_{\tau_1}^\tau\Xint-_{\mathbf{R}}(p-P)\psi_y Zg_\beta^2dy d\tau\\[4mm] &\displaystyle+\int_{\tau_1}^\tau\Big[\nu(\frac{\theta_y}{v}-\frac{\Theta_y}{V}) Zg_\beta^2\Big](\tau) d\tau- \int_{\tau_1}^\tau\Xint-_{\mathbf{R}}\nu(\frac{\theta_y}{v}-\frac{\Theta_y}{V})(Zg_\beta^2)_y dy d\tau\\[3mm] &\displaystyle +\int_{\tau_1}^\tau\Xint-_{\mathbf{R}}(\frac{u^2_y}{v}-\frac{U^2_y}{V})Zg_\beta^2 dy d\tau-\int_{\tau_1}^\tau\Xint-_{\mathbf{R}}Q_2 Zg_\beta^2 dy d\tau =:\displaystyle \sum_{i=1}^6 K_i. \end{array} \end{equation*} Here the jump term $K_3$ can be estimated as follows, recalling $g_\beta(\tau,0)\equiv0$: $$ K_3=\int_{\tau_1}^\tau\Big[\nu(\frac{\theta_y}{v}-\frac{\Theta_y}{V}) Zg_\beta^2\Big](\tau)d\tau =\nu\int_{\tau_1}^\tau g_\beta^2(\tau,0)(\frac{\theta_y}{v}-\frac{\Theta_y}{V})(\tau,0) \big[Z\big](\tau)d\tau\equiv 0, $$ while the terms $K_i$ ($i=1,4,5,6$) can be directly dealt with in the same manner as in \cite{HLM} or \cite{HWY}. To bound the term $K_2$, we make use of the mass equation $\eqref{P}_1$ to write $K_2$ in the form \begin{equation*} \begin{array}{lll} \displaystyle ~~~-(p-P)\psi_y Zg_\beta^2=\frac{\gamma P\phi-(\gamma-1)Z}{v}Zg_\beta^2\phi_\tau=\displaystyle\frac{\gamma PZg_\beta^2}{2v}(\phi^2)_\tau -\frac{(\gamma-1)Z^2 g_\beta^2}{v}\phi_\tau\\[4mm] =\displaystyle\Big( \frac{\gamma PZ \phi^2g_\beta^2-2(\gamma-1)\phi Z^2g_\beta^2}{2v}\Big)_\tau -\frac{\gamma PZ\phi^2-2(\gamma-1)Z^2\phi}{v}g_\beta(g_\beta)_\tau \\[3mm] ~~~\displaystyle +\frac{\gamma PZ\phi^2-2(\gamma-1)Z^2\phi}{2v^2}g_\beta^2v_\tau- \Big(\frac{2(\gamma-1)g_\beta^2\phi Z}{v}+\frac{\gamma Pg_\beta^2\phi^2}{2v}\Big)Z_\tau-\frac{\gamma g_\beta^2\phi^2Z}{2v}P_\tau , \end{array} \end{equation*} where all terms on the right-hand side of the above identity can be directly bounded in the same way as in \cite{HLM} or \cite{HWY}. Therefore, we have bounded $K_2$. Taking $\beta=\frac{c_0}{2}$, one can get from Lemma \ref{lemma3} that \begin{equation} \begin{array}{ll} &\displaystyle \int_{\tau_1}^\tau\Xint-_{\bf R} \frac{e^{-\frac{c_0 y^2}{1+\tau}}}{1+\tau} Z^2 dy d\tau \leq C\delta+C\|(\phi,\psi,\zeta)(\tau_1,\cdot)\|^2+C\|(\phi,\psi,\zeta)(\tau,\cdot)\|^2 \\[4mm] &\quad \displaystyle +C\int_{\tau_1}^\tau\|(\phi_y,\psi_y,\zeta_y)\|^2d\tau+C\int_{\tau_1}^\tau (1+\tau)^{-\f76}\|(\phi,\psi)\|^2 d\tau \\[4mm] &\quad \displaystyle +C(\delta +\eta_1)\int_{\tau_1}^\tau \Xint-_{\bf R}(1+\tau)^{-1}e^{-\frac{c_0 y^2}{1+\tau}}|(\phi,\zeta)|^2dy d\tau. \end{array}\label{(3.49)} \end{equation} Now, taking $\alpha=c_0$ in \eqref{(3.46)} and choosing $\delta$ and $\eta_1$ suitably small, we combine \eqref{(3.46)} with \eqref{(3.49)} to obtain the desired estimate in Lemma \ref{lemma4}.
$\hfill \Box$ \vspace{2mm} By Lemmas \ref{lemma2} and \ref{lemma4}, we conclude $$ \begin{array}{ll} \displaystyle N(\tau_1,\tau_2) +\int_{\tau_1}^{\tau_2}\Big\{ \|\phi_y\|^2+\|(\psi_y,\zeta_y)\|_1^2+\|(\psi_{y\tau},\zeta_{y\tau})\|^2\Big\} d\tau\\ \qquad \displaystyle \leq CN(\tau_1)+C\int_{\tau_1}^{\tau_2}(1+\tau)^{-\f76}\|(\phi,\psi,\zeta)\|^2 d\tau+C\delta^{\f14}. \end{array} $$ Since $(1+\tau)^{-\f76}$ is integrable in $\tau$ over $(0,\infty)$, an application of Gronwall's inequality to the above inequality gives the estimate \eqref{p1} in Proposition \ref{priori}. This completes the proof of Proposition \ref{priori}.
\section{Introduction} Chunked codes (CC), originally proposed in~\cite{MHL:2006}, generalize random linear network codes (dense codes), and operate by dividing the message of the source into non-overlapping or overlapping sub-messages of equal size, called \emph{chunks}~\cite{MHL:2006,SZK:2009,HBJ:2011}. Each node at each transmission time randomly chooses a chunk, and transmits it by using a dense code. In fact, a dense code is a CC with only one chunk of the size equal to the message size. Thus, CC require less complex coding operations due to applying coding on chunks smaller than the original message. This however comes at the cost of lower speed of convergence to the capacity compared to dense codes. The speed of convergence of CC to the capacity of line networks with arbitrary deterministic traffics was studied in~\cite{MHL:2006,HBJ:2011}. In particular, it has been shown that (i) a CC achieves the capacity, so long as the size of the chunks is lower bounded by a function super-logarithmic in the message size and super-log-cubic in the network length, and (ii) a CC, preceded by a capacity-achieving erasure code, approaches the capacity with an arbitrarily small but non-zero constant gap, so long as the size of the chunks is lower bounded by a function constant in the message size and log-cubic in the network length. There is however no result on the speed of convergence of CC to the capacity over the networks with probabilistic traffics. The speed of convergence of dense codes to the capacity of some probabilistic traffics was studied in~\cite{PFS:2005,DDHE:2009}. Very recently, in~\cite{HB1S:2012}, we studied the coding delay and the average coding delay of a dense code over the traffics with deterministic regular or Poisson transmissions and Bernoulli losses.\footnote{The \emph{coding delay} of a code over a network with a given traffic (schedule of transmissions and losses) is the minimum time that the code takes to transmit all the message vectors from the source to the sink. The coding delay is a random variable due to the randomness in both the code and the traffic. The \emph{average coding delay} of a code with respect to a class of traffics is the coding delay of the code averaged out over all the traffics (but not the codes), and hence is a random variable due to the randomness in the code.} The results were in some cases more general, and in some other cases tighter, than the existing bounds in~\cite{PFS:2005,DDHE:2009}. In this paper, we generalize our analysis in~\cite{HB1S:2012}, and for the first time, study the coding delay and the average coding delay of CC for different ranges of the chunk sizes.\footnote{In this paper, we focus on CC with non-overlapping chunks. The analysis of CC with overlapping chunks is the focus of an ongoing research project.} The main contributions of this work are: \begin{itemize} \item We derive upper bounds on the coding delay and the average coding delay of a CC alone, or a CC with precoding, over the traffics with deterministic regular transmissions or Poisson transmissions and Bernoulli losses with arbitrary parameters or unequal parameters. 
\item We show that: (i) a CC achieves the capacity, so long as the size of the chunks is bounded from below by a function super-logarithmic in the message size and super-log-linear in the network length, and (ii) the combination of a CC and a capacity-achieving erasure code approaches the capacity with an arbitrarily small non-zero constant gap, so long as the size of the chunks is bounded from below by a function constant in the message size and log-linear in the network length. The lower bounds in both cases are smaller than those over the networks with arbitrary deterministic traffics. Thus both coding schemes are less computationally complex (require smaller chunks), for the same speed of convergence, over such probabilistic traffics, compared to arbitrary deterministic traffics. \item In a capacity-achieving scenario, for such probabilistic traffics, we show that: (i) the upper bound on the overhead\footnote{The (\emph{average}) \emph{overhead} is the difference between the (average) coding delay and the ratio of the message size to the capacity.} grows sub-log-linearly with the message size and the network length, and decays sub-linearly with the size of the chunks, and (ii) the upper bound on the average overhead grows sub-log-linearly (or poly-log-linearly) with the message size, and sub-log-linearly (or log-linearly) with the network length, and decays sub-linearly (or linearly) with the size of the chunks, in the case with arbitrary (or unequal) parameters. For arbitrary deterministic traffics, the upper bound on the overhead was shown in~\cite{HBJ:2011} to be similar to (i), but with a larger (super-linear) growth rate with the network length. \end{itemize} \vspace{-.25 cm} \section{Network Model and Problem Setup} \subsection{Transmission and Loss Model} We consider a unicast problem (one-source one-sink) over a line network with $L$ links connecting $L+1$ nodes $\{v_i\}_{0\leq i\leq L}$ in tandem. The source node $v_0$ has a message of $k$ vectors (called \emph{message vectors}) from a vector space $\mathcal{F}$ over $\mathbb{F}_2$, and the sink node $v_L$ requires all the message vectors.\footnote{The analysis in this paper is generalizable to finite fields of larger size.} Each (non-sink) node at each transmission time transmits a (coded) packet, which is a vector in $\mathcal{F}$. The packet transmissions are assumed to occur in discrete-time, and the transmission times over different links are assumed to follow independent stochastic processes. The transmission times over the $i\textsuperscript{th}$ link are specified by (i) a deterministic process where there is a packet transmission at each time instant, or (ii) a Poisson process with parameter $\lambda_i: 0<\lambda_i\leq 1$, where $\lambda_i$ is the average number of transmissions per time unit over the $i\textsuperscript{th}$ link. The transmission schedules resulting from (i) and (ii) are referred to as \emph{deterministic regular} and \emph{Poisson}, respectively. Each transmitted packet either succeeds or fails to be received (\emph{successful} vs. \emph{lost}). The successful packets are assumed to arrive with zero delay, and the lost packets will never arrive. The packets are assumed to be successful independently over different links. The successful packets over the $i\textsuperscript{th}$ link are specified by a Bernoulli process with (success) parameter $p_i: 0<p_i\leq 1$, where $p_i$ is the average number of successes per transmission over the $i\textsuperscript{th}$ link. 
The loss model defined as above is referred to as \emph{Bernoulli}. The special case of Bernoulli loss with all $p_i$'s equal to $1$ is analogous to the lossless case. \vspace{-.35 cm} \subsection{Problem Setup} The goal in this paper is to derive upper bounds on the coding delay and the average coding delay of chunked codes over line networks with deterministic regular or Poisson transmissions and Bernoulli losses. In a chunked coding scheme, the set of $k$ message vectors at the source node is divided into $q$ disjoint subsets, called \emph{chunks}, each of size $\alpha=k/q$. The source node, at each transmission time, chooses a chunk independently at random, and transmits a packet by randomly linearly combining the message vectors belonging to the underlying chunk. Each non-source non-sink node, at the time of each transmission, chooses a chunk independently at random, and transmits a packet by randomly linearly combining its previously received packets pertaining to the underlying chunk. The global encoding vector\footnote{The \emph{global encoding vector} of a packet is the vector of the coefficients representing the mapping between the message vectors and the packet.} of each packet is assumed to be transmitted along with the packet. The sink node can decode a chunk, so long as it receives an innovative\footnote{A collection of packets is \emph{innovative} if the global encoding vectors of the packets belonging to the collection are linearly independent.} collection of packets pertaining to the underlying chunk of a size equal to the size of the chunk. \vspace{-.25 cm} \section{Deterministic Regular Transmissions and Bernoulli Losses}\label{sec:BernoulliLossRegularTrafficCC} We first review the analysis of dense codes, which are a special case of CC with one chunk, in two cases of arbitrary or unequal (success) parameters, presented in~\cite{HB1S:2012}.\footnote{The details of the proofs in the case of arbitrary parameters were given in~\cite{HB1S:2012} and hence omitted. However, neither the details, nor the sketches of the proofs in the case of unequal parameters were given in~\cite{HB1S:2012}. We present the sketches of the proofs in this paper for the purpose of completeness.} Next, we generalize the analysis to CC with more than one chunk. \vspace{-.35cm} \subsection{Dense Codes}\label{subsec:DC} The goal of the analysis is to lower bound (i) the size of a maximal dense collection of packets at the sink node until a certain time,\footnote{A collection of packets is \emph{dense} if the local encoding vectors of the packets are linearly independent, where the local encoding vector of a packet is the vector of the coefficients of the linear combination pertaining to the packet.} and then, (ii) the probability that a sufficient number of packets in the underlying collection are innovative. Let $Q_{i+1}$ and $Q_i$ be the decoding matrices\footnote{The global encoding vectors of the packets at a node form the rows of the \emph{decoding matrix} at the node.} at the $(i+1)\textsuperscript{th}$ and $i\textsuperscript{th}$ nodes, respectively, and $T_i$ be a matrix over $\mathbb{F}_2$ such that $Q_{i+1}=T_i Q_i$. The entries of $Q_{i+1}$ and $Q_i$ are in $\mathbb{F}_2$. Each row of $T_i$ is the local encoding vector of a successful packet sent by the $i\textsuperscript{th}$ node. Let $Q'_i$ be $Q_i$ restricted to its rows corresponding to the global encoding vectors of the dense packets at the $i\textsuperscript{th}$ node.
Let $T'_i$, the \emph{transfer matrix} at the $i\textsuperscript{th}$ node, be a matrix over $\mathbb{F}_2$ such that $Q_{i+1}=T'_i Q'_i$. Each row of $T'_i$ indicates the labels of the dense packets at the $i\textsuperscript{th}$ node which contribute to a successful packet sent by the $i\textsuperscript{th}$ node. For every matrix $Q$ over $\mathbb{F}_2$, the \emph{density} of $Q$, denoted by $\mathcal{D}(Q)$, is the size of a maximal dense collection of rows in $Q$, where a collection of rows is \emph{dense} if the rows have all independent and uniformly distributed Bernoulli entries. Further, $Q$ is called a \emph{dense matrix} if all its rows form a dense collection. For every matrix $T$ over $\mathbb{F}_2$, the \emph{rank} of $T$, denoted by $\text{rank}(T)$, is the size of a maximal collection of linearly independent rows in $T$. \begin{lemma}\label{lem:DensityTM} Let $Q$ be a dense matrix over $\mathbb{F}_2$, and $T$ be a matrix over $\mathbb{F}_2$, where the number of rows in $Q$ and the number of columns in $T$ are equal. If $\text{rank}(T)\geq \gamma$, then $\mathcal{D}(TQ)\geq \gamma$.\end{lemma} Since $Q_{i+1}=T'_i Q'_i$, and $Q'_i$ is dense, $\mathcal{D}(Q_{i+1})$ is lower bounded so long as $\text{rank}(T'_i)$ is lower bounded. As shown in~\cite{HB1S:2012}, the matrix $T'_i$ includes a sub-matrix with the structure of a random block lower-triangular matrix, and the rank of a matrix with such a structure is lower bounded as follows. Let $w$, $r$ and $\{r_j\}_{1\leq j\leq w}$ be arbitrary non-negative integers, and let $r_{\text{max}}=\max_j r_j$ and $r_{\text{min}}=\min_j r_j$. Let $T_{i,j}$ be an $r\times r_j$ dense matrix over $\mathbb{F}_2$, if $1\leq j\leq i\leq w$; or an arbitrary $r\times r_j$ matrix over $\mathbb{F}_2$, otherwise. Let $T=[T_{i,j}]_{1\leq i,j\leq w}$. The matrix $T$ is called \emph{random block lower-triangular} (RBLT). \begin{lemma}\label{lem:VerticalT} Let $T$ be an RBLT matrix with parameters $w$, $r$ and $\{r_j: 0\leq r_j\leq r\}_{1\leq j\leq w}$. Let $u=\left\lceil{(n-\gamma)}/{r_{\text{min}}}\right\rceil$, and $n= \sum_{1\leq j\leq w}r_j$. For every integer $0\leq \gamma\leq n-1$, $\Pr\{\text{rank}(T)<n-\gamma\}\leq u \left(1-2^{-r_{\text{max}}}\right) 2^{-\gamma+n-wr+(r-r_{\text{min}})(u-1)}$.\end{lemma} \begin{lemma}\label{lem:HorizontalT} Let $T$ be an RBLT matrix with parameters $w$, $r$ and $\{r_j: 0\leq r\leq r_j\}_{1\leq j\leq w}$. Let $u=\left\lceil{(n-\gamma)}/{r}\right\rceil$, and $n = w r$. For every integer $0\leq \gamma\leq n-1$, $\Pr\{\text{rank}(T)<n-\gamma\}\leq u \left(1-2^{-r}\right) 2^{-\gamma+n-wr_{\text{min}}+(r_{\text{min}}-r)(u-1)}$.\end{lemma} The application of the lemmas is subject to useful and tight choices of $w$, $r$, and $r_j$'s. Such parameters depend on the traffic over the $i\textsuperscript{th}$ and $(i+1)\textsuperscript{th}$ links, and hence are not straightforward to optimize. However, by using a probabilistic technique, tight bounds on such parameters can be derived. Let $(0,N_T]$ be the period of time over which the transmissions occur. Let $(0,N_T]$ be divided into $w$ disjoint partitions of length $N_T/w$. For every $1\leq j\leq w$, and $1\leq i\leq L$, let $I_{ij}$ be the $j\textsuperscript{th}$ partition pertaining to the $i\textsuperscript{th}$ link. For every $i,j$: $i\leq j\leq w-L+i$, $I_{ij}$ is called \emph{active}. Let $w_T\doteq L(w-L+1)$ be the total number of active partitions. Let $\varphi_{ij}$ be the number of successful packets in $I_{ij}$.
By the assumption, $\varphi_{ij}$ is a binomial random variable with the expected value $\varphi_i=p_i N_T/w$. Let $p\doteq\min_{1\leq i\leq L}p_i$, and $\varphi\doteq pN_T/w$. For any real number $x$, let $\dot{x}$ denote $x/2$. By applying the Chernoff bound, one can show that $\varphi_{ij}$ is not larger than or equal to $r\doteq (1-\gamma^{*})\varphi$ with probability (w.p.) bounded above by (b.a.b.) $\dot{\epsilon}/w_T$, so long as $0<\gamma^{*}<1$, where $\gamma^{*}\sim \sqrt{({1}/{\dot{\varphi}})\ln({w_T}/{\dot{\epsilon}})}$. (Indeed, the lower tail of the Chernoff bound gives $\Pr\{\varphi_{ij}<(1-\gamma^{*})\varphi\}\leq e^{-{\gamma^{*}}^2\dot{\varphi}}$, and equating the right-hand side with $\dot{\epsilon}/w_T$ yields the stated choice of $\gamma^{*}$.) For all $i,j$, suppose that $\varphi_{ij}$ is larger than or equal to $r$. Let $\mathcal{D}(Q_i^j)$ be the number of dense packets in the first $j$ active partitions over the $i\textsuperscript{th}$ link. The packets over the first link are all dense. Thus, for all $j$, $\mathcal{D}(Q_1^j)\geq rj$. For any other values of $i,j$, by applying Lemma~\ref{lem:VerticalT}, it can be shown that the inequality $\mathcal{D}(Q_i^j)\geq rj-j(1+o(1))\log(w_T/\epsilon)$ fails w.p. b.a.b. $ij\dot{\epsilon}/w_T$, so long as \begin{equation}\label{eq:Temp7}w\log\frac{w_T}{\epsilon}=o(p N_T).\end{equation} This result shows that the number of dense packets at the sink node, $\mathcal{D}(Q_L)$, fails to be larger than \begin{eqnarray}\label{eq:Temp8} \lefteqn{p N_T - O(p N_T L/w) - } \nonumber\\ && O(\sqrt{p N_T w\log(w L/\epsilon)})-O(w\log(w L/\epsilon)), \end{eqnarray} w.p. b.a.b. $\epsilon$. By condition~\eqref{eq:Temp7}, it follows that each $O(.)$ term in~\eqref{eq:Temp8} is $o(p N_T)$, which ensures that the code achieves the capacity. We specify $w$ by $\sqrt[3]{{p N_T L^2}/{\log(p N_T L/\epsilon)}}$ in order to maximize~\eqref{eq:Temp8} subject to condition~\eqref{eq:Temp7}. Let $n_T$ be equal to~\eqref{eq:Temp8}. Thus, $Q_L$ fails to include an $n_T\times k$ dense sub-matrix w.p. b.a.b. $\epsilon$. \begin{lemma}\label{lem:DenseRankProb}Let $Q$ be an $n\times k$ ($k\leq n$) dense matrix over $\mathbb{F}_2$. Then, $\Pr\{\text{rank}(Q)<k\}\leq 2^{-(n-k)}$.\end{lemma} By Lemma~\ref{lem:DenseRankProb}, $\Pr\{\text{rank}(Q_L)<k\}$ is b.a.b. $\epsilon$, so long as $k\leq n_T-\log(1/\epsilon)$. By replacing $\epsilon$ with $\dot{\epsilon}$, it follows that the sink node fails to recover all the message vectors w.p. b.a.b. $\epsilon$, so long as $k\leq n_T - \log(1/\epsilon)-1$. Let $k_{\text{max}}$ be the largest integer $k$ satisfying this inequality. Thus, $k_{\text{max}}\sim p N_T$, and by replacing $N_T$ with $k/p$, the following result is immediate. \begin{theorem}\label{thm:DenseCodesRegularBernoulliActualNon-IdenticalGeneral} The coding delay of a dense code over a line network of $L$ links with deterministic regular traffics and Bernoulli losses with parameters $\{p_i\}$ is larger than \[\frac{1}{p}\left(k+(1+o(1))\left(\frac{kL}{w}+\sqrt{k\left(w\log\frac{wL}{\epsilon}\right)}+w\log\frac{wL}{\epsilon}\right)\right)\] w.p.~b.a.b.~$\epsilon$,~where~$w\sim \left(k L^2/\log(k L/\epsilon)\right)^{\frac{1}{3}}$,~$p\doteq\min_{1\leq i\leq L}{p_i}$.\end{theorem} In the case of the average coding delay, the analysis proceeds by replacing $r$ with $\varphi$ in the preceding results, and re-specifying $w$ by $\sqrt{{p N_T L}/{\log(p N_T L/\epsilon)}}$ in order to maximize \begin{equation}\label{eq:Temp5} pN_T - O(p N_T L/w) - O(w\log(w L/\epsilon)),\end{equation} instead of~\eqref{eq:Temp8}, subject to condition~\eqref{eq:Temp7}.
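As a side remark, Lemma~\ref{lem:DenseRankProb} is straightforward to check numerically. The following Python sketch (illustrative only; the helper functions and Monte Carlo parameters are our own choices and are not part of the analysis) estimates $\Pr\{\text{rank}(Q)<k\}$ for dense binary matrices via Gaussian elimination over $\mathbb{F}_2$:
\begin{verbatim}
import numpy as np

def gf2_rank(M):
    """Rank over GF(2), by Gaussian elimination on a copy of M."""
    M = (M.copy() % 2).astype(np.uint8)
    rows, cols = M.shape
    rank = 0
    for c in range(cols):
        pivot = next((r for r in range(rank, rows) if M[r, c]), None)
        if pivot is None:
            continue                         # no pivot in this column
        M[[rank, pivot]] = M[[pivot, rank]]  # move pivot row into place
        for r in range(rows):
            if r != rank and M[r, c]:
                M[r] ^= M[rank]              # eliminate column c
        rank += 1
    return rank

def failure_rate(n, k, trials=5000, seed=0):
    """Empirical Pr{rank(Q) < k} for an n-by-k dense binary matrix."""
    rng = np.random.default_rng(seed)
    fails = sum(
        gf2_rank(rng.integers(0, 2, size=(n, k), dtype=np.uint8)) < k
        for _ in range(trials)
    )
    return fails / trials

# Lemma: Pr{rank(Q) < k} <= 2^{-(n-k)}; here n - k = 4.
print(failure_rate(n=36, k=32), "bound:", 2.0 ** -4)
\end{verbatim}
Since the exact failure probability of an $n\times k$ dense matrix is $1-\prod_{j=n-k+1}^{n}(1-2^{-j})$, which is just below $2^{-(n-k)}$, the empirical rate should lie close to the printed bound, i.e., the bound of the lemma is nearly tight.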
\begin{theorem}\label{thm:DenseCodesRegularBernoulliAverageNon-IdenticalGeneral} The average coding delay of a dense code over a network similar to Theorem~\ref{thm:DenseCodesRegularBernoulliActualNon-IdenticalGeneral} is larger than \[\frac{1}{p}\left(k +(1+o(1))\left(\frac{k L}{w}+w\log\frac{wL}{\epsilon}\right)\right)\] w.p. b.a.b. $\epsilon$, where $w\sim \left({kL/\log(kL/\epsilon)}\right)^{\frac{1}{2}}$.\end{theorem} In order to derive tighter bounds the actual values of the success parameters $\{p_i\}$ need to be taken into consideration. In particular, the coding delay and the average coding delay of dense codes for a special case, where no two links have equal success parameters, are upper bounded as follows. Let us assume $p_1>p_2>\cdots> p_L$, without loss of generality. Let $p\doteq \min_{1\leq i\leq L} p_i$, $\gamma_e\doteq \min_{1<i\leq L}\gamma_{e_i}$, and $\gamma_{e_i}\doteq |p_i-p_{i-1}|$. Let $r_i\doteq (1-\gamma^{*}_i)\varphi_i$, where $\varphi_i=p_iN_T/w$ and $\gamma^{*}_i\sim\sqrt{(1/\dot{\varphi_i})\log(w_T/\dot{\epsilon})}$. Let $\varphi_{ij}$ be defined as before. For all $i,j$, suppose that $\varphi_{ij}$ is larger than or equal to $r_i$. Similarly as before, for all $j$, $\mathcal{D}(Q_1^j)\geq r_1 j$. For any other values of $i,j$, by applying Lemma~\ref{lem:HorizontalT}, it can be shown that the inequality $\mathcal{D}(Q_i^j)\geq r_i j$ fails w.p. b.a.b. $ij\dot{\epsilon}/w_T$, so long as \begin{equation}\label{eq:UnequalParameters} {w}\log\frac{w_T}{\epsilon}=o(\gamma_e p N_T).\end{equation} Let $p$, $\varphi$, $\gamma^{*}$ and $r$ denote $p_L$, $\varphi_L$, $\gamma^{*}_L$ and $r_L$, respectively. Thus, the inequality $\mathcal{D}(Q_L)\geq (1-\gamma^{*})\varphi w_T/L$ fails w.p. b.a.b. ${\epsilon}$. By replacing $\varphi$ with $pN_T/w$, the right-hand side of the last inequality can be written as: \begin{equation}\label{eq:Temp4} p N_T - O(p N_T L/w) -O(\sqrt{p N_T w \log(w L/\epsilon)}).\end{equation} The rest of the analysis is similar to that of Theorem~\ref{thm:DenseCodesRegularBernoulliActualNon-IdenticalGeneral}, except that~\eqref{eq:Temp4} excludes the last term in~\eqref{eq:Temp8}, and the choice of $w$ needs to satisfy condition~\eqref{eq:UnequalParameters}, instead of condition~\eqref{eq:Temp7}. \begin{theorem}\label{thm:DenseCodesRegularBernoulliActualNon-Identical} Consider a sequence of unequal parameters $\{p_i\}_{1\leq i\leq L}$. The coding delay of a dense code over a line network of $L$ links with deterministic regular traffics and Bernoulli losses with parameters $\{p_i\}$ is larger than \[\frac{1}{p}\left(k+(1+o(1))\left(\frac{kL}{w}+\sqrt{k\left(w\log\frac{wL}{\epsilon}\right)}\right)\right)\] w.p. b.a.b. $\epsilon$, where $w\sim\gamma_e \left(k L^2/\log(k L/\epsilon)\right)^{\frac{1}{3}}$, $p \doteq \min_{1\leq i\leq L}p_i$, $\gamma_{e}\doteq \min_{1< i\leq L} \gamma_{e_i}$, and $\gamma_{e_i} \doteq |p_i-p_{i-1}|$.\end{theorem} In the case of the average coding delay, the analysis follows the same line as that of Theorem~\ref{thm:DenseCodesRegularBernoulliAverageNon-IdenticalGeneral}, except that the choice of $w$ needs to maximize \begin{equation}\label{eq:Temp9} pN_T - O(p N_T L/w)\end{equation} subject to condition~\eqref{eq:UnequalParameters}, instead of~\eqref{eq:Temp5} subject to condition~\eqref{eq:Temp7}. 
\begin{theorem}\label{thm:DenseCodesRegularBernoulliAverageNon-Identical} The average coding delay of a dense code over a network similar to Theorem~\ref{thm:DenseCodesRegularBernoulliActualNon-Identical} is larger than \[\frac{1}{p}\left(k +(1+o(1))\left(\frac{k L}{w}\right)\right)\] w.p. b.a.b. $\epsilon$, where $w\sim \gamma_e k/(f(k)\log(k L/\epsilon))$, and $f(k)$ goes to infinity, as $k$ goes to infinity, such that $f(k)=o(\gamma_e k/\log(kL/\epsilon))$.\end{theorem} \vspace{-.35 cm} \subsection{CC: Capacity-Achieving}\label{subsec:CCCACH} In a CC, at each transmission time, a chunk is chosen w.p. $1/q$, and a packet transmission over the $i\textsuperscript{th}$ link is successful w.p. $p_i$. Thus the probability that a given packet transmission over the $i\textsuperscript{th}$ link is successful and pertains to a given chunk is $p_i/q$. Thus by replacing $p_i$ with $p_i/q$ in the analysis of dense codes in Section~\ref{subsec:DC}, the coding delay and the average coding delay of CC in a capacity-achieving scenario will be upper bounded as follows. The results of dense codes are indeed a special case of those of CC with one chunk of size $k$. It is, however, worth noting that, due to the change in the parameters, the number of partitions $w$ needs to satisfy a new condition: $wq\log\frac{w_T q}{\epsilon}=o(p N_T)$ or $wq\log\frac{w_T q}{\epsilon}=o(\gamma_e p N_T)$, instead of condition~\eqref{eq:Temp7} or~\eqref{eq:UnequalParameters}, in the proofs of Theorems~\ref{thm:CapAchCodDelGeneral} and~\ref{thm:CapAchAveCodDelGeneral}, or those of Theorems~\ref{thm:CapAchCodDelSpecial} and~\ref{thm:CapAchAveCodDelSpecial}, respectively. Further by replacing $w$ with its optimal choice in the new version of~\eqref{eq:Temp8},~\eqref{eq:Temp5},~\eqref{eq:Temp4} and~\eqref{eq:Temp9}, each $O(.)$ term needs to be $o(pN_T/q)$ in order to ensure that CC are capacity-achieving in the underlying case. Such a condition lower bounds the size of the chunks $\alpha$ by a function super-logarithmic in the message size $k$. \begin{theorem}\label{thm:CapAchCodDelGeneral} The coding delay of a CC with $q$ chunks over a line network of $L$ links with deterministic regular traffics and Bernoulli losses with parameters $\{p_i\}$ is larger than \begin{dmath*}\frac{1}{p}\left(k+(1+o(1))\left(\frac{k L}{w}+\sqrt{k\left(wq\log\frac{w q L}{\epsilon}\right)}+wq\log\frac{w q L}{\epsilon}\right)\right)\end{dmath*} w.p. b.a.b. $\epsilon$, so long as $q=o({k}/({L\log(kL/\epsilon)}))$, where $w\sim\left(kL^2/(q\log(kL/\epsilon))\right)^{\frac{1}{3}}$, and $p\doteq \min_{1\leq i\leq L}p_i$.\end{theorem} \begin{proof}\renewcommand{\IEEEQED}{} The proof follows the same line as in that of Theorem~\ref{thm:DenseCodesRegularBernoulliActualNon-IdenticalGeneral} by implementing the following modifications. Let us replace $p$ and $\epsilon$ with $p/q$ and $\epsilon/q$, respectively. Then, $\varphi=pN_T/wq$, and $r=(1-\gamma^{*})\varphi$, where $\gamma^{*}\sim\sqrt{(1/\dot{\varphi})\ln(w_T q/\dot{\epsilon})}$. Fix a chunk $\omega$. For all $1\leq i\leq L$, and $1\leq j\leq w-L+1$, let $\mathcal{D}(Q_i^j)$, $\mathcal{D}_p(Q_i^j)$, and $r_{ij}$ be defined as before, but only restricted to the packets pertaining to the chunk $\omega$. Similarly as before, for all $i,j$, $\mathcal{D}(Q_i^j)$ can be lower bounded as follows: for all $1\leq j\leq w-L+1$, $\mathcal{D}(Q_1^j)\geq rj$, and for all other values of $i,j$, $\mathcal{D}(Q_i^j)$ fails to be larger than $rj-j(1+o(1))\log(w_T q/\epsilon)$, w.p. b.a.b. 
$ij\dot{\epsilon}/w_T q$, so long as \begin{equation}\label{eq:Temp15} wq\log\frac{w_T q}{\epsilon}=o(p N_T).\end{equation} Thus the number of dense packets pertaining to the chunk $\omega$ at the sink node fails to be larger than \begin{eqnarray}\label{eq:Temp16} \lefteqn{\frac{p N_T}{q} - O\left(\frac{p N_T L}{wq}\right) - } \nonumber\\ && O\left(\sqrt{\frac{p N_T w}{q} \log\frac{w q L}{\epsilon}}\right)-O\left(w\log\frac{w q L}{\epsilon}\right), \end{eqnarray} w.p. b.a.b. $\epsilon/q$. In order to maximize~\eqref{eq:Temp16} subject to condition~\eqref{eq:Temp15}, we specify $w$ by \[\sqrt[3]{\frac{p N_T L^2}{q\log(p N_T L/\epsilon)}}.\] Now let us assume that $N_T$ is $(1+o(1))k/p$. By replacing $\epsilon$ with $\dot{\epsilon}$, in the preceding results, and by replacing $k$ and $\epsilon$ with $k/q$ and $\dot{\epsilon}/q$, respectively, in Lemma~\ref{lem:DenseRankProb}, it follows that the sink node fails to decode the chunk $\omega$ w.p. b.a.b. $\epsilon/q$, so long as $N_T$ is larger than \begin{dmath}\label{eq:Temp17} \frac{1}{p}\left(k+(1+o(1))\left(\frac{k L}{w}+\sqrt{k\left(wq\log\frac{w q L}{\epsilon}\right)}+wq\log\frac{w q L}{\epsilon}\right)\right).\end{dmath} Taking a union bound over all the chunks, it follows that the sink node fails to decode all the chunks w.p. b.a.b. $\epsilon$, so long as $N_T$ is larger than~\eqref{eq:Temp17}. To ensure that the lower bound on $N_T$ is $(1+o(1))k/p$, all the terms in~\eqref{eq:Temp17}, excluding the first one, need to be $o(k/p)$. This condition is met so long as $q$ is \[\hspace{2.65 in}o\left(\frac{k}{L\log(kL/\epsilon)}\right).\hspace{2.65 in}\IEEEQEDopen\]\end{proof} \begin{theorem}\label{thm:CapAchAveCodDelGeneral} The average coding delay of a CC with $q$ chunks over a network similar to Theorem~\ref{thm:CapAchCodDelGeneral} is larger than \begin{dmath*}\frac{1}{p}\left(k+(1+o(1))\left(\frac{k L}{w}+wq\log\frac{w q L}{\epsilon}\right)\right)\end{dmath*} w.p. b.a.b. $\epsilon$, so long as $q=o({k}/({L\log(kL/\epsilon)}))$, where $w\sim\left(kL/(q\log(kL/\epsilon))\right)^{\frac{1}{2}}$.\end{theorem} \begin{proof}The proof is similar to that of Theorem~\ref{thm:CapAchCodDelGeneral}, except that $r$ needs to be replaced with $\varphi$. This implies that the third term in~\eqref{eq:Temp16} disappears. Thus by specifying $w$ with \[\sqrt{\frac{p N_T L}{q\log(p N_T L/\epsilon)}}\] in order to maximize~\eqref{eq:Temp16}, excluding the third term, subject to condition~\eqref{eq:Temp15}, it follows that the sink node fails to decode all the chunks w.p. b.a.b. $\epsilon$, so long as $N_T$ is larger than \begin{dmath}\label{eq:Temp18} \frac{1}{p}\left(k+(1+o(1))\left(\frac{k L}{w}+wq\log\frac{w q L}{\epsilon}\right)\right).\end{dmath} The rest of the proof follows that of Theorem~\ref{thm:CapAchCodDelGeneral}.\end{proof} In the case of unequal success parameters, the coding delay and the average coding delay are upper bounded as follows. \begin{theorem}\label{thm:CapAchCodDelSpecial} The coding delay of a CC with $q$ chunks over a line network of $L$ links with deterministic regular traffics and Bernoulli losses with unequal parameters $\{p_i\}$ is larger~than \begin{dmath*}\frac{1}{p}\left(k+(1+o(1))\left(\frac{k L}{w}+\sqrt{k\left(wq\log\frac{w q L}{\epsilon}\right)}\right)\right)\end{dmath*} w.p. b.a.b. 
$\epsilon$, so long as $q=o\left(\gamma^3_{e} {k}/({L\log(kL/\epsilon)})\right)$, where $w\sim\gamma_e\left(kL^2/(q\log(kL/\epsilon))\right)^{\frac{1}{3}}$, $p\doteq \min_{1\leq i\leq L}p_i$, $\gamma_e\doteq\min_{1<i\leq L} \gamma_{e_i}$, and $\gamma_{e_i}\doteq |p_i-p_{i-1}|$.\end{theorem} \begin{proof}\renewcommand{\IEEEQED}{} Fix a chunk $\omega$. By replacing $p$ and $\epsilon$ with $p/q$ and $\epsilon/q$, respectively, in the proof of Theorem~\ref{thm:DenseCodesRegularBernoulliActualNon-Identical}, it follows that the number of dense packets pertaining to the chunk $\omega$ at the sink node fails to be larger than \begin{eqnarray}\label{eq:Temp19} \lefteqn{\frac{p N_T}{q} - O\left(\frac{p N_T L}{wq}\right) - } \nonumber\\ && O\left(\sqrt{\frac{p N_T w}{q} \log\frac{w q L}{\epsilon}}\right), \end{eqnarray} w.p. b.a.b. $\epsilon/q$, so long as \begin{equation}\label{eq:Temp20} wq\log\frac{w_T q}{\epsilon}=o(\gamma_e p N_T).\end{equation} The rest of the proof is similar to that of Theorem~\ref{thm:CapAchCodDelGeneral}, except that~\eqref{eq:Temp19} excludes the last term in~\eqref{eq:Temp16}, and the choice of $w$ needs to satisfy condition~\eqref{eq:Temp20}, instead of condition~\eqref{eq:Temp15}. By specifying $w$ with \[\sqrt[3]{\frac{\gamma^3_e p N_T L^2}{q\log(p N_T L/\epsilon)}}\] in order to maximize~\eqref{eq:Temp19} subject to condition~\eqref{eq:Temp20}, it follows that the sink node fails to decode all the chunks w.p. b.a.b. $\epsilon$, so long as $N_T$ is larger than \begin{dmath}\label{eq:Temp21} \frac{1}{p}\left(k+(1+o(1))\left(\frac{k L}{w}+\sqrt{k\left(wq\log\frac{w q L}{\epsilon}\right)}\right)\right).\end{dmath} In~\eqref{eq:Temp21}, each term, except the largest one, needs to be $o(k/p)$, and this condition is met so long as $q$ is \[\hspace{2.65 in}o\left(\frac{\gamma^3_e k}{L\log(kL/\epsilon)}\right).\hspace{2.65 in}\IEEEQEDopen\]\end{proof} \begin{theorem}\label{thm:CapAchAveCodDelSpecial} The average coding delay of a CC with $q$ chunks over a network similar to Theorem~\ref{thm:CapAchCodDelSpecial} is larger than \begin{dmath*}\frac{1}{p}\left(k+(1+o(1))\left(\frac{k L}{w}\right)\right)\end{dmath*} w.p. b.a.b. $\epsilon$, so long as $q=o\left(\gamma_e {k}/(f(k){L\log(kL/\epsilon)})\right)$, where $w\sim\gamma_e k/(q f(k)\log(kL/\epsilon))$, and $f(k)$ goes to infinity, as $k$ goes to infinity, such that $f(k)=o(\gamma_e k/(\log(kL/\epsilon)))$.\end{theorem} \begin{proof}\renewcommand{\IEEEQED}{} The proof follows the same line as that of Theorem~\ref{thm:CapAchCodDelGeneral}, except that the choice of $w$ needs to maximize \begin{equation}\label{eq:Temp22} \frac{p N_T}{q} - O\left(\frac{p N_T L}{wq}\right)\end{equation} subject to condition~\eqref{eq:Temp20}. To do so, we specify $w$ by \[\frac{\gamma_e p N_T}{q f(p N_T)\log(p N_T L/\epsilon)},\] where $f(n)$ goes to infinity, as $n$ goes to infinity, such that $f(n)=o(\gamma_e n/(\log(n L/\epsilon)))$. The sink node fails to decode all the chunks w.p. b.a.b. 
$\epsilon$, so long as $N_T$ is larger than \begin{dmath}\label{eq:Temp23} \frac{1}{p}\left(k+(1+o(1))\left(\frac{k L}{w}\right)\right).\end{dmath} The second term in~\eqref{eq:Temp23} needs to be $o(k/p)$, and this condition is met so long as $q$ is \[\hspace{2.5in}o\left(\frac{\gamma_e k}{f(k)L\log(kL/\epsilon)}\right).\hspace{2.5in}\IEEEQEDopen\]\end{proof} \vspace{-.35 cm} \subsection{CC with Precoding: Capacity-Approaching with A Gap}\label{subsec:CCCAPP} By the results of Section~\ref{subsec:CCCACH}, one can conclude that CC are not capacity-achieving if the size of the chunks does not comply with condition $\alpha=\omega({L\log(kL/\epsilon)})$.\footnote{For non-negative functions $f(n)$ and $g(n)$, we write $f(n)=\omega(g(n))$, if and only if $\lim_{n\rightarrow\infty}f(n)/g(n)=\infty$.} The analysis of Section~\ref{subsec:DC} further does not apply to CC with chunks of small sizes violating the above condition. From a computational complexity perspective, CC with chunks of smaller sizes are, however, of more practical interest (e.g., linear-time CC with constant-size chunks). In the following,~we study CC with chunks of a size constant in the message~size. Let $\{p_i\}_{1\leq i\leq L}$ be an arbitrary sequence of success parameters, and let $p\doteq \min_{1\leq i\leq L}p_i$. Let the size of the chunks $\alpha$ ($=k/q$) be a constant in the message size $k$, i.e., $\alpha=O(1)$. Fix a chunk, and focus on the packets pertaining to that chunk. Let the time interval $(0,N_T]$ and its $w$ disjoint partitions be defined as before in~Section~\ref{subsec:DC}. Let $\varphi_{ij}$ be the number of packets (pertaining to the given chunk) in the partition $I_{ij}$, and $\varphi_i$ be the expected value of $\varphi_{ij}$. Let $\varphi\doteq \min_{1\leq i\leq L}\varphi_i$. Then, $\varphi_i=p_i N_T/wq$, and $\varphi=pN_T/wq$. Let $N_T=(1+\gamma_c)k/p$, where $0<\gamma_c<1$ is an arbitrarily small constant. By replacing $N_T$ with $(1+\gamma_c)k/p$, $\varphi=(1+\gamma_c)\alpha/w$, and $\varphi=O(1)$, as $w$ is a constant (otherwise, $\varphi$ goes to $0$, as $N_T$ goes to infinity). By applying the Chernoff bound, it can be shown that $\Pr\{\varphi_{ij}<(1-\gamma^{*})\varphi\}\leq e^{-{\gamma^{*}}^2\dot{\varphi}}$, for every $0<\gamma^{*}<1$. Taking $e^{-{\gamma^{*}}^2\dot{\varphi}}\leq \dot{\gamma_b}/w_T$, it follows that $\varphi_{ij}$ is not larger than or equal to $r\doteq(1-\gamma^{*})\varphi$ w.p. b.a.b. $\dot{\gamma_b}/w_T$, where $\gamma^{*}$ is the smallest real number satisfying $\gamma^{*}\geq \sqrt{(1/\dot{\varphi})\ln(w_T/\dot{\gamma_b})}$, such that $r$ is an integer ($\gamma^{*}=O(1)$). Taking a union bound over all the active partitions of all links, it follows that $\varphi_{ij}$ is not larger than or equal to $r$ w.p. b.a.b. $\dot{\gamma_b}$. Let $\mathcal{D}(Q_{i}^j)$ be the number of dense packets pertaining to the given chunk in the first $j$ active partitions over the $i\textsuperscript{th}$ link. By applying Lemma~\ref{lem:HorizontalT}, it can be shown that: (i) for all $1\leq j\leq w-L+1$, $\mathcal{D}(Q_1^j)\geq rj$, (ii) for all $1<i\leq L$, the inequality $\mathcal{D}(Q_i^1)\geq r-\log(w_T/\dot{\gamma_b})$ fails w.p. b.a.b. $i\dot{\gamma_b}/w_T$, and (iii) for all the other $i,j$, the inequality $\mathcal{D}(Q_i^j)\geq r-j\log(w_T/\dot{\gamma_b})-\log((j+1)w_T/\dot{\gamma_b})$ fails w.p. b.a.b. 
$ij\dot{\gamma_b}/w_T$, so long as \begin{equation}\label{eq:Temp10}\alpha = \Omega \left(w^2\log\frac{w_T}{{\gamma_b}}\right).\end{equation} By using the above results, it follows that the number of dense packets pertaining to the given chunk at the sink node fails to be lower bounded by \begin{dmath}\label{eq:Temp11} \frac{w_T\varphi}{L} - O\left(\frac{w_T}{L}\sqrt{\varphi\log\frac{w_T}{\gamma_b}}\right)-O\left(\frac{w_T}{L}\log\frac{w_T}{\gamma_b}\right)\end{dmath} w.p. b.a.b. $\gamma_b$. The lower bound is non-negative so long as $\alpha = \Omega\left(w\log({w_T}/{\gamma_b})\right)$, and this condition holds so long as condition~\eqref{eq:Temp10} holds. We specify $w$ by $\sqrt[3]{\alpha L^2/\log(\alpha L/\gamma_b)}$ to maximize~\eqref{eq:Temp11}. By replacing $w$ in~\eqref{eq:Temp10}, it can be rewritten as \begin{equation}\label{eq:Temp12} \alpha=\Omega\left(L^4\log\frac{L}{\gamma_b}\right).\end{equation} By replacing $\gamma_b$ with $\dot{\gamma_b}$, and by applying Lemma~\ref{lem:DenseRankProb}, it follows that the sink node fails to decode the given chunk w.p. b.a.b. $\gamma_b$, so long as~\eqref{eq:Temp11} is larger than $\alpha+\log({1}/{\dot{\gamma_b}})$. By replacing our choice of $w$ in~\eqref{eq:Temp11}, it can be seen that, excluding the first term, the second term dominates the rest. By replacing $\varphi$ with $(1+\gamma_c)\alpha/w$, and by using the properties of the notation $\Omega(.)$, the decoding condition becomes \begin{equation}\label{eq:Temp13}\alpha=\Omega\left(\frac{L}{\gamma^3_c}\log\frac{L}{\gamma_b\gamma_c}\right).\end{equation} Thus, the given chunk is undecodable w.p. b.a.b. $\gamma_b$, so long as both conditions~\eqref{eq:Temp12} and~\eqref{eq:Temp13} are met. In other words, the expected fraction of undecodable chunks is bounded from above by $\gamma_b$. By using a martingale argument similar to the one in~\cite{HBJ:2011}, the concentration of the fraction of undecodable chunks around the expectation can be shown as follows. \begin{lemma}\label{lem:Concentration}By applying a CC with chunks of size $\alpha$, satisfying both conditions~\eqref{eq:Temp12} and~\eqref{eq:Temp13}, the fraction of undecodable chunks at the sink node until time $N_T = (1+\gamma_c)k/p$ is larger than $(1+\gamma_a)\gamma_b$, w.p. b.a.b. $\epsilon$, so long as \begin{equation}\label{eq:Temp14}{\alpha^2}/{\gamma^2_a\gamma^2_b}=o({k}/{\log({1}/{\epsilon})}),\end{equation} where $0<\gamma_a,\gamma_b,\gamma_c<1$ are arbitrary constants.\end{lemma} By the result of Lemma~\ref{lem:Concentration}, the fraction of chunks which are not decodable until time $N_T$ becomes larger than $(1+\gamma_a)\gamma_b$, w.p. b.a.b. $\epsilon$. Since $\gamma_a,\gamma_b$ are non-zero constants, a CC, alone, does not decode all the chunks. However, the completion of decoding of all the chunks is guaranteed by devising a proper precoding scheme~\cite{HBJ:2011}. The precoding works as follows: The set of $k$ message vectors at the source node constitute the input of a capacity-achieving (c.-a.) erasure code, called \emph{precode}. The rate of the precode is $1-(1+\gamma_a)\gamma_b$ (i.e., the precode decoder can correct up to a fraction $(1+\gamma_a)\gamma_b$ of erasures), and the number of the coded packets at the output of the precode, called \emph{intermediate packets}, is $\left(1+(1+\gamma_a)\gamma_b+O(\gamma^2_b)\right)k$. 
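To see where this count comes from, note that the number of output packets of an erasure code is the number of input packets divided by the code rate, and \[\frac{k}{1-(1+\gamma_a)\gamma_b}=\left(1+(1+\gamma_a)\gamma_b+O(\gamma^2_b)\right)k\] by the expansion $(1-x)^{-1}=1+x+O(x^2)$ with $x=(1+\gamma_a)\gamma_b=O(\gamma_b)$.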
By applying a CC with chunks of size $\alpha$, satisfying conditions~\eqref{eq:Temp12},~\eqref{eq:Temp13} and~\eqref{eq:Temp14}, the fraction of the intermediate packets that are not recoverable at the output of the CC decoder until time $(1+\gamma_c)\left(1+(1+\gamma_a)\gamma_b+O(\gamma^2_b)\right)\frac{k}{p}$ is larger than $(1+\gamma_a)\gamma_b$, w.p. b.a.b. $\epsilon$. Then, the precode decoder can recover all the $k$ message vectors from the set of recovered intermediate packets. Therefore, the coding delay of a CC with precoding (CCP) is upper bounded as follows. \begin{theorem}\label{thm:CapAppCodDelGeneral} The coding delay of a CCP with chunks of size $\alpha$ and a c.-a. erasure code of rate $1-\gamma_a$, over a line network of $L$ links with deterministic regular traffics and Bernoulli losses with parameters $\{p_i\}$ is larger than $(1+\gamma_c)\left(1+(1+\gamma_a)\gamma_b+O(\gamma^2_b)\right)\frac{k}{p}$, w.p. b.a.b.~$\epsilon$, so long~as \[\alpha=\Omega\left(\left\{\left(\frac{L}{\gamma^3_{c}}\log\frac{L}{\gamma_b\gamma_c}\right),\left(L^4 \log \frac{L}{\gamma_b}\right)\right\}\right),\] and $\alpha^2/\gamma^2_a\gamma^2_b=o(k/\log(1/\epsilon))$, where $0<\gamma_a,\gamma_b,\gamma_c<1$ are arbitrary constants, and $p\doteq \min_{1\leq i\leq L}p_i$.\end{theorem} In the case of the average coding delay of a CC with precoding, the following can be shown similar to Theorem~\ref{thm:CapAppCodDelGeneral} by replacing $r$ with $\varphi$, and hence the proof is omitted. \begin{theorem}\label{thm:CapAppAveCodDelGeneral} The average coding delay of a CCP with chunks of size $\alpha$ and a c.-a. erasure code of rate $1-\gamma_a$, over a network similar to Theorem~\ref{thm:CapAppCodDelGeneral} is larger than $(1+\gamma_c)\left(1+(1+\gamma_a)\gamma_b+O(\gamma^2_b)\right)\frac{k}{p}$, w.p. b.a.b. 
$\epsilon$, so long as \[\alpha=\Omega\left(\frac{L}{\gamma_{c}}\log\frac{L}{\gamma_b \gamma_c}\right),\] and $\alpha^2/\gamma^2_a\gamma^2_b=o(k/\log(1/\epsilon))$, where $0<\gamma_a,\gamma_b,\gamma_{c}<1$ are arbitrary constants.\end{theorem} \begin{table*} \vspace{-.05cm} \caption{Comparison of Chunked Codes over Line Networks with Various Traffics} \vspace{-.25cm} \hspace{-0.25in} \begin{tabular}{|p{1.275cm}|p{1cm}|c|c|c|c|} \hline \multirow{2}{*}{\hspace{.25 cm}\vspace{-.1cm}Traffic} & \multirow{2}{*}{\hspace{-.275 cm}$\vspace{-.1cm}\begin{array}{c} \text{Success} \\ \text{Parameters} \end{array}$} & $\begin{array}{c} \text{Overhead } \text{(}\eta\text{)} \\ \text{and} \end{array}$ & \multirow{2}[4]{*}{\vspace{.1cm} $\begin{array}{c} \text{Size of Chunks} \\ \text{(}\alpha\text{)}\end{array}$} & \multirow{2}[4]{*}{\vspace{.1cm}$w$} & \multirow{2}[4]{*}{\vspace{.1cm}Comments} \\ & & \hspace{.1cm}Average Overhead ($\bar{\eta}$)& & & \\ \hline $\begin{array}{c}\hspace{-.3cm} \text{Arbitrary} \\ \hspace{-.3cm} \text{Deterministic}\end{array}$ & \hspace{.35cm} - & $\eta=\bar{\eta}=O\left(kL\left(\frac{1}{\alpha} \log\frac{kL}{\epsilon}\right)^{\frac{1}{3}}\right)$ & $\omega\left({L^3\log\frac{kL}{\epsilon}}\right)$ & - & \multirow{5}[10]{*}{\vspace{-0.75cm}$\begin{array}{c} m= \frac{kw}{\alpha}\log\left(\frac{kLw}{\alpha\epsilon}\right) \\ f(k)=o\left(\frac{\gamma_e k}{\log\frac{kL}{\epsilon}}\right) \\ \lim_{k\rightarrow\infty}f(k)=\infty \\ \gamma_{e_i}=|p_i-p_{i-1}|\\ \gamma_e=\min_{1<i\leq L}\gamma_{e_i}\\ p=\min_{1\leq i\leq L}p_i \end{array}$} \\ \cline{1-5} \multirow{4}{*}{\vspace{-0.75cm}$\begin{array}{c}\hspace{-.335cm}\text{Deterministic} \\ \hspace{-.335cm}\text{Regular} \\ \hspace{-.335cm}\text{Transmissions} \\ \hspace{-.335cm}\text{and} \\ \hspace{-.335cm}\text{Bernoulli} \\ \hspace{-.335cm}\text{Losses} \end{array}$} & \multirow{2}{*}{\vspace{-.25cm}Arbitrary} & $\eta = \frac{1}{p}\left((1+o(1))\left(\frac{kL}{w}+k^{\frac{1}{2}}m^{\frac{1}{2}}+m\right)\right)$ & \multirow{2}[4]{*}{\vspace{-.15cm}$\omega\left({L\log\frac{kL}{\epsilon}}\right)$} & $\left(\frac{\alpha L^2}{\log\frac{kL}{\epsilon}}\right)^{\frac{1}{3}}$ & \\ \cline{3-3}\cline{5-5} & & $\bar{\eta}=\frac{1}{p}\left((1+o(1))\left(\frac{kL}{w}+m\right)\right)$ & & $\left(\frac{\alpha L}{\log\frac{kL}{\epsilon}}\right)^{\frac{1}{2}}$ & \\ \cline{2-5} & \multirow{2}{*}{\vspace{-.25cm}Unequal} & $\eta = \frac{1}{p}\left((1+o(1))\left(\frac{kL}{w}+k^{\frac{1}{2}}m^{\frac{1}{2}}\right)\right)$ & $\omega\left(\frac{L}{\gamma^3_e} \log\frac{kL}{\epsilon}\right)$ & $\left(\frac{\gamma^3_e\alpha L^2}{\log\frac{kL}{\epsilon}}\right)^{\frac{1}{3}}$ & \\ \cline{3-5} & & $\bar{\eta}=\frac{1}{p}\left((1+o(1))\left(\frac{kL}{w}\right)\right)$ & $\omega\left(f(k)\left(\frac{L}{\gamma_e} \log\frac{kL}{\epsilon}\right)\right)$ & $\frac{1}{f(k)}\left(\frac{\gamma_e\alpha}{\log\frac{kL}{\epsilon}}\right)$ & \\ \hline \end{tabular} \label{tab:TableI} \end{table*} \begin{table*} \vspace{-.1cm} \caption{Comparison of Chunked Codes with Precoding (A Capacity-Achieving Erasure Code) over Line Networks with Various Traffics} \vspace{-.25cm} \hspace{-0.275in} \begin{tabular}{|p{1.275cm}|p{1cm}|c|c|c|c|} \hline \multirow{2}{*}{\hspace{.25 cm}\vspace{-.1cm}Traffic} & \multirow{2}{*}{\hspace{-.275 cm}$\vspace{-.1cm}\begin{array}{c} \text{Success} \\ \text{Parameters} \end{array}$} & $\begin{array}{c} \text{Overhead } \text{(}\eta\text{)} \\ \text{and} \end{array}$ & \multicolumn{2}{c|}{\multirow{2}{*}{\vspace{-.1cm}$\begin{array}{c} \text{Size of 
Chunks} \\ \text{(}\alpha\text{)}\end{array}$}} & \multirow{2}{*}{\vspace{-.1cm}Comments} \\ & & \hspace{.1cm}Average Overhead ($\bar{\eta}$) & \multicolumn{2}{c|}{} & \\ \hline $\begin{array}{c}\hspace{-.3cm} \text{Arbitrary} \\ \hspace{-.3cm} \text{Deterministic}\end{array}$ & \hspace{.35cm} - & \hspace{.1cm}$\eta=\bar{\eta}=\gamma_o k$ & $\Omega\left(\frac{L^3}{\gamma^3_c}\log\frac{L}{\gamma_b\gamma_c}\right)$ & \multicolumn{1}{r|}{\multirow{5}[10]{*}{\vspace{-.25cm}$o\left(\sqrt{\frac{\gamma^2_a\gamma^2_b k}{\log\frac{1}{\epsilon}}}\right)$}} & \multirow{5}[10]{*}{\vspace{-.1cm}$\begin{array}{c} 0<\gamma_a,\gamma_b,\gamma_c<1 \\ \{\gamma_a,\gamma_b,\gamma_c\}=O(1) \\ \gamma_o=\gamma_c+(1+\gamma_c)\gamma'_o \\ \gamma'_o = (1+\gamma_a)\gamma_b+O(\gamma^2_b) \\ \gamma_{e_i}=|p_i-p_{i-1}| \\ \gamma_e=\min_{1<i\leq L}\gamma_{e_i} \\ p=\min_{1\leq i\leq L}p_i \end{array}$} \\ \cline{1-2}\cline{3-3}\cline{4-4} \multirow{4}{*}{$\begin{array}{c}\hspace{-.335cm}\text{Deterministic} \\ \hspace{-.335cm}\text{Regular} \\ \hspace{-.335cm}\text{Transmissions} \\ \hspace{-.335cm}\text{and} \\ \hspace{-.335cm}\text{Bernoulli} \\ \hspace{-.335cm}\text{Losses} \end{array}$} & \multirow{2}[4]{*}{Arbitrary} & {$\eta=\gamma_o\frac{k}{p}$} & $\Omega\left(\left\{\left(\frac{L}{\gamma^3_c}\log\frac{L}{\gamma_b\gamma_c}\right),\left(L^4\log\frac{L}{\gamma_b}\right)\right\}\right)^{\textcolor[rgb]{1.00,1.00,1.00}{\frac{1}{2}}}_{\textcolor[rgb]{1.00,1.00,1.00}{\frac{1}{2}}}$ & & \\ \cline{3-3} \cline{4-4} & & $\bar{\eta}=\gamma_o\frac{k}{p}$ & $\Omega\left(\frac{L}{\gamma_c}\log\frac{L}{\gamma_b\gamma_c}\right)^{\textcolor[rgb]{1.00,1.00,1.00}{\frac{1}{2}}}_{\textcolor[rgb]{1.00,1.00,1.00}{\frac{1}{2}}}$ & & \\ \cline{3-3} \cline{2-2}\cline{4-4} & \multirow{2}[4]{*}{Unequal} & $\eta=\gamma_o\frac{k}{p}$ & $\Omega\left(\left\{\left(\frac{L}{\gamma^3_c}\log\frac{L}{\gamma_b\gamma_c}\right),\left(\frac{L}{\gamma^3_e}\log\frac{L}{\gamma_b\gamma_e}\right)\right\}\right)^{\textcolor[rgb]{1.00,1.00,1.00}{\frac{1}{2}}}_{\textcolor[rgb]{1.00,1.00,1.00}{\frac{1}{2}}}$ & & \\ \cline{3-3} \cline{4-4} & & $\bar{\eta}=\gamma_o\frac{k}{p}$ & $\Omega\left(\frac{L}{\gamma^2_e\gamma_c}\log\frac{L}{\gamma_b\gamma_c}\right)^{\textcolor[rgb]{1.00,1.00,1.00}{\frac{1}{2}}}_{\textcolor[rgb]{1.00,1.00,1.00}{\frac{1}{2}}}$ & & \\ \cline{3-3} \hline \end{tabular} \label{tab:TableII} \vspace{-.45cm} \end{table*} In the special case of unequal success parameters, the coding delay and the average coding delay of CC with precoding are upper bounded as follows. The proofs follow the same line as in the general case except that a new set of conditions needs to be satisfied based on the assumption that no two success parameters are equal. \begin{theorem}\label{thm:CapAppCodDelSpecial} The coding delay of a CCP with chunks of size $\alpha$ and a c.-a. erasure code of rate $1-\gamma_a$, over a line network of $L$ links with deterministic regular traffics and Bernoulli losses with unequal parameters $\{p_i\}$ is larger than $(1+\gamma_c)\left(1+(1+\gamma_a)\gamma_b+O(\gamma^2_b)\right)\frac{k}{p}$, w.p. b.a.b. 
$\epsilon$, so long as \[\alpha=\Omega\left(\left\{\left(\frac{L}{\gamma^3_{c}}\log\frac{L}{\gamma_b\gamma_c}\right),\left(\frac{L}{\gamma^3_{e}} \log \frac{L}{\gamma_e\gamma_b}\right)\right\}\right),\] and $\alpha^2/\gamma^2_a\gamma^2_b=o(k/\log(1/\epsilon))$, where $0<\gamma_a,\gamma_b,\gamma_c<1$ are arbitrary constants, $p\doteq \min_{1\leq i\leq L}p_i$, $\gamma_e\doteq\min_{1<i\leq L} \gamma_{e_i}$, and $\gamma_{e_i}\doteq |p_i-p_{i-1}|$.\end{theorem} \begin{proof}Let us assume $p_1>p_2>\cdots> p_L$, without loss of generality. Let $p\doteq \min_{1\leq i\leq L} p_i$, $\gamma_e\doteq \min_{1<i\leq L}\gamma_{e_i}$, and $\gamma_{e_i}\doteq |p_i-p_{i-1}|$. Fix a chunk. Let $r_i\doteq (1-\gamma^{*}_i)\varphi_i$, where $\varphi_i=p_iN_T/wq$ and $\gamma^{*}_i\sim\sqrt{(1/\dot{\varphi_i})\log(w_T/\dot{\gamma_b})}$, and $0<\gamma_b<1$ is an arbitrary constant. Let $\varphi_{ij}$ be the number of packets (pertaining to the given chunk) in the partition $I_{ij}$ (the $j\textsuperscript{th}$ partition pertaining to the $i\textsuperscript{th}$ link), where the time interval $(0,N_T]$ is split into $w$ partitions of length $N_T/w$, and let $\varphi_i$ be the expected value of $\varphi_{ij}$. For all $i,j$, suppose that $\varphi_{ij}$ is larger than or equal to $r_i$. Let $N_T=(1+\gamma_c)k/p$, where $0<\gamma_c<1$ is an arbitrarily small constant. Replacing $N_T$ with $(1+\gamma_c)k/p$ gives $\varphi_i=(1+\gamma_c)p_i\alpha/pw$ and $\varphi=O(1)$, similar to the proof of Theorem~\ref{thm:CapAppCodDelGeneral}. As before, for all $j$, $\mathcal{D}(Q_1^j)\geq r_1 j$. For any other values of $i,j$, by applying Lemma~\ref{lem:HorizontalT}, it can be shown that the inequality $\mathcal{D}(Q_i^j)\geq r_i j$ fails w.p. b.a.b. $ij\dot{\gamma_b}/w_T$, so long as \begin{equation}\label{eq:Temp24} \alpha=\Omega\left(\frac{w}{\gamma_e^2}\log\frac{w_T}{\gamma_b}\right).\end{equation} Let $\varphi$, $\gamma^{*}$ and $r$ denote $\varphi_L$, $\gamma^{*}_L$ and $r_L$, respectively. Thus, the number of dense packets pertaining to the given chunk at the sink node fails to be larger than \begin{dmath}\label{eq:Temp25}\alpha(1+\gamma_c)-O\left(\frac{\alpha L}{w}\right)-O\left(\sqrt{\alpha w\log\frac{w_T}{\gamma_b}}\right).\end{dmath} We specify $w$ by \[\left(\frac{\alpha L^2}{\log(w_T/\gamma_b)}\right)^{\frac{1}{3}}\] to maximize~\eqref{eq:Temp25} subject to condition~\eqref{eq:Temp24}. For this choice of $w$, condition~\eqref{eq:Temp24} is met so long as \begin{equation}\label{eq:Temp26}\alpha=\Omega\left(\frac{L}{\gamma^3_e}\log\frac{L}{\gamma_e\gamma_b}\right).\end{equation} By replacing $\gamma_b$ with $\dot{\gamma_b}$ in the preceding results, and substituting $w$ in~\eqref{eq:Temp25}, the result of Lemma~\ref{lem:DenseRankProb} shows that the sink node fails to decode the given chunk w.p. b.a.b. $\gamma_b$, so long as~\eqref{eq:Temp25} is larger than $\alpha+\log(1/\dot{\gamma_b})$. Based on the properties of the notation $\Omega(.)$, the latter condition is met so long as \begin{equation}\label{eq:Temp27}\alpha=\Omega\left(\frac{L}{\gamma^3_c}\log\frac{L}{\gamma_b\gamma_c}\right).\end{equation} The rest of the proof is similar to the proof of Theorem~\ref{thm:CapAppCodDelGeneral}, except that in this case conditions~\eqref{eq:Temp26} and~\eqref{eq:Temp27} need to be met, instead of conditions~\eqref{eq:Temp12} and~\eqref{eq:Temp13}.\end{proof} \begin{theorem}\label{thm:CapAppAveCodDelSpecial} The average coding delay of a CCP with chunks of size $\alpha$ and a c.-a.
erasure code of rate $1-\gamma_a$, over a network similar to that of Theorem~\ref{thm:CapAppCodDelSpecial} is larger than $(1+\gamma_c)\left(1+(1+\gamma_a)\gamma_b+O(\gamma^2_b)\right)\frac{k}{p}$, w.p. b.a.b. $\epsilon$, so long as \[\alpha=\Omega\left(\frac{L}{\gamma^2_e\gamma_{c}}\log\frac{L}{\gamma_b \gamma_c}\right),\] and $\alpha^2/\gamma^2_a\gamma^2_b=o(k/\log(1/\epsilon))$, where $0<\gamma_a,\gamma_b,\gamma_{c}<1$ are arbitrary constants.\end{theorem} \begin{proof}\renewcommand{\IEEEQED}{}The proof follows the same line as that of Theorem~\ref{thm:CapAppCodDelSpecial}, except that the choice of $w$ needs to maximize \begin{equation}\label{eq:Temp28} \alpha(1+\gamma_c)-O\left(\frac{\alpha L}{w}\right)\end{equation} subject to condition~\eqref{eq:Temp24}. To do so, the choice of $w$ needs to be $\Omega(L/\gamma_c)$, and hence, condition~\eqref{eq:Temp24} becomes \[\hspace{2.45in}\alpha=\Omega\left(\frac{L}{\gamma^2_e\gamma_c}\log\frac{L}{\gamma_b\gamma_c}\right).\hspace{2.45in}\IEEEQEDopen\] \end{proof} \vspace{-.25 cm} \section{Poisson Transmissions and Bernoulli Losses}\label{sec:PoissonTrafficCC} In the case of Bernoulli losses and Poisson transmissions with parameters $\{p_i\}_{1\leq i\leq L}$ and $\{\lambda_i\}_{1\leq i\leq L}$, the points in time at which the arrivals/departures occur over the $i\textsuperscript{th}$ link follow a Poisson process with parameter $\lambda_i p_i$. Thus, the number of packets pertaining to a given chunk, in each partition pertaining to the $i\textsuperscript{th}$ link, has a Poisson distribution with expected value $\lambda_i p_i N_T/wq$. Since the Chernoff bound also holds for Poisson random variables, the main results in Section~\ref{sec:BernoulliLossRegularTrafficCC} apply to this case by replacing $p$ with $\lambda p$, where $\{\lambda,p\}\doteq \{\lambda_{\mu},p_{\mu}\}$, and $\mu\doteq\arg\min_{1\leq i\leq L}\lambda_i p_i$. \vspace{-.25cm} \section{Discussion}\label{sec:Discussion} Table~\ref{tab:TableI} shows the upper bounds\footnote{With a slight abuse of language, we refer to the ``upper bound'' on the overhead or the average overhead as the ``overhead'' or the ``average overhead.''} (w.p. of failure b.a.b. $\epsilon$) on the overhead and the average overhead of CC over various traffics for different ranges of the size of the chunks, based on the results in Section~\ref{sec:BernoulliLossRegularTrafficCC} and those in~\cite{HBJ:2011}.\footnote{The results of Section~\ref{subsec:CCCACH} and those of Section~\ref{subsec:CCCAPP} were stated in terms of $q$ and $\alpha$, respectively. In this section, for ease of comparison, the former results are also restated in terms of $\alpha$ by replacing $q$~with~$k/\alpha$.} The traffics are either arbitrary deterministic traffics or traffics with deterministic regular transmissions and Bernoulli losses. To simplify the terminology, we refer to the latter as the \emph{probabilistic traffics}. The probabilistic traffics are categorized into two sub-categories: traffics with arbitrary success parameters and traffics with unequal success parameters. In the case of arbitrary deterministic traffics, the capacity is $1$, and in the case of probabilistic traffics with success parameters $\{p_i\}_{1\leq i\leq L}$, the capacity is $p$, where $p=\min_{1\leq i\leq L}p_i$. We say that a code is ``capacity-achieving'' (c.-a.) if the ratio of the overhead to $k/p$ goes to $0$, as $k$ goes to infinity. Similarly, a code is ``capacity-achieving on average'' (c.-a.a.)
if the ratio of the average overhead to $k/p$ goes to $0$, as $k$ goes to infinity. In Table~\ref{tab:TableI}, the upper (or lower) row in front of each case of success parameters corresponds to a c.-a. (or a c.-a.a.) scenario. In the table, one can see that, for each traffic, the size of the chunks ($\alpha$) has to be sufficiently large for CC to be c.-a. or c.-a.a. For arbitrary deterministic traffics, the lower bound on $\alpha$ is super-logarithmic in $k$, i.e., $\omega(\log k)$, and super-log-cubic in $L$, i.e., $\omega({L^3\log L})$. For the probabilistic traffics with arbitrary or unequal success parameters, the lower bound on $\alpha$ has a similar growth rate with $k$, but a smaller (super-log-linear) growth rate with $L$, i.e., $\omega({L\log L})$. The coding cost of CC (i.e., the ratio of the number of coding (packet) operations to $k$) is, on the other hand, linear in $\alpha$. Thus, CC can perform equally fast over the arbitrary deterministic traffics and the probabilistic traffics, but with a lower coding cost (smaller chunks) in the latter case. Moreover, as can be seen in Table~\ref{tab:TableI}, for both arbitrary deterministic and probabilistic traffics (in each case of arbitrary or unequal success parameters), the overhead grows sub-log-linearly with $k$, i.e., $O(k\log^{\frac{1}{3}} k)$, and decays sub-linearly with $\alpha$, i.e., $O(1/\alpha^{\frac{1}{3}})$. However, for arbitrary deterministic traffics, the overhead grows with $O(L\log^{\frac{1}{3}}L)$, whereas for the probabilistic traffics, it only grows with $O(L^{\frac{1}{3}}\log^{\frac{1}{3}} L)$. This implies a faster speed of convergence to the capacity in the latter case. A similar comparison holds for the average overhead, except that in the case of unequal success parameters, the average overhead decays linearly with $\alpha$, i.e., $O(1/\alpha)$, but grows poly-log-linearly with $k$, i.e., $O(k\log^2 k)$, for the choice of $f(k)=O(\gamma_e \log k)$, and log-linearly with $L$, i.e., $O(L\log L)$. Table~\ref{tab:TableII} shows the corresponding results for CC with precoding (CCP) in scenarios similar to those of Table~\ref{tab:TableI}, where the precode is a (capacity-achieving) erasure code of dimension $k$ and rate $1-\gamma_a$. In particular, one can see that CCP are ``capacity-approaching'' or ``capacity-approaching on average'' with an arbitrarily small ``non-zero constant'' gap $\gamma_o$ (i.e., the ratio of the overhead or the average overhead to $k/p$ goes to $\gamma_o$, as $k$ goes to infinity) if $\alpha$ is sufficiently large. To simplify the terminology, we drop the term ``with a non-zero constant gap.'' The upper row (or the lower row) in front of each case of success parameters corresponds to a capacity-approaching (or a capacity-approaching on average) scenario. For arbitrary deterministic traffics, the lower bound on $\alpha$ is constant in $k$ and log-cubic in $L$, i.e., $O(L^3\log L)$. For the probabilistic traffics with arbitrary or unequal success parameters, the lower bound on $\alpha$ is also constant in $k$, but has a smaller (log-linear) growth rate with $L$, i.e., $O(L\log L)$. Thus, in the case of CCP, one can draw a conclusion similar to that for stand-alone CC with respect to the arbitrary deterministic and the probabilistic traffics. \bibliographystyle{IEEEtran}
\section{Introduction} \label{intro} In recent decades, urbanization and industrialization have exposed many people living in urban and suburban areas of Tehran to dangerous air pollutants. In this regard, urban air quality monitoring is an essential matter of concern for municipal administrations and the responsible public health organizations. Among the different pollutants, particulate matter (PM) smaller than 2.5 $ \mu m $ is found to be a leading cause of cardiovascular diseases \citep{dominici2006fine}, respiratory diseases \citep{peng2009emergency}, and myocardial infarction \citep{peters2001increased}, and consequently of increased morbidity \citep{lippmann2000association}, mortality \citep{klemm2000daily}, and hospital admissions \citep{lippmann2000association}. For Tehran, Heger and Sarrat have reported that PM2.5 is responsible for 4000 deaths per year \citep{heger2018air}. Thus, accurate estimation of PM2.5 is a vital prerequisite for air quality studies and epidemiological investigations. For this purpose, air quality monitoring stations have been established that provide PM2.5 measurements at high temporal resolution. However, in Tehran, these stations are sparsely distributed in space (see Fig. \ref{fig.tehran}), so the variation of the PM2.5 concentration over the spatial domain cannot be modeled well enough for exposure assessment of PM2.5. As a solution, early studies proposed spatial interpolation techniques, such as kriging and nearest neighbor, to densify the PM2.5 measurements \citep{ijerph110909101,7576703,ijgi7090368}. However, since several factors drive the variation of PM2.5, interpolation alone cannot incorporate such auxiliary information into the modeling \citep{di2016assessing}. For instance, land use parameters such as road density and the degree of urbanization should be considered for modeling the variation of the PM concentration \citep{BECKERMAN2013172, VIENNEAU2010688}. However, land-use variables change only slowly over time, and alone they are not sufficient for high resolution PM2.5 modeling \citep{HOEK20087561}. \textcolor{red}{As a solution, satellite-based products can be applied for high resolution PM2.5 modeling \citep{SOREKHAMER2020106057}}. In this regard, aerosol optical depth (AOD) data are widely employed for PM2.5 concentration estimation \citep{https://doi.org/10.1029/2003GL018174, YOU20151156, YAO2018819}. Wang and Christopher illustrated the dependency between AOD and PM2.5 measurements \citep{https://doi.org/10.1029/2003GL018174}. Several studies have also reported applying AOD data along with meteorological measurements for PM2.5 estimation \citep{atmos9030105, https://doi.org/10.1029/2008JD011497, https://doi.org/10.1029/2008JD011496}. The Moderate Resolution Imaging Spectroradiometer (MODIS) instruments aboard the Terra and Aqua satellites provide daily AOD measurements with extensive area coverage. Two well-known AOD products derived from these sensors are the Deep Blue (DB) AOD and the Dark Target (DT) AOD, named after their retrieval algorithms. The DB algorithm is fundamentally used to retrieve AOD over bright surfaces, which are mainly found in urban areas \citep{sayeretal, sayer2015effect}. The DT algorithm is designed to retrieve AOD over dark vegetated surfaces; consequently, the performance of DT decreases over the bright surfaces primarily found in urban areas \citep{amt-6-2989-2013}. Both products are provided daily at either 10 km or 3 km resolution.
Recently, a high-resolution (1 km) AOD retrieval has become available from a new generic algorithm, the Multiangle Implementation of Atmospheric Correction (MAIAC), and has been extensively used for air quality and epidemiological studies \citep{di2016assessing, xiao2017full, liang2018maiac}. The algorithm is based on time series processing of Aqua and Terra datatakes, which separates dynamic features such as aerosols and clouds from surface properties that are relatively static over short periods \citep{amt-11-5741-2018}. The high spatial resolution of the MAIAC output makes the retrieved AODs a more suitable source for precise AOD mapping than the DT and DB products. Mhawish et al. compared the MAIAC AOD with the DB- and DT-derived AODs and demonstrated the superior ability of the MAIAC AOD to reveal air pollution sources in South Asia \citep{MHAWISH201912}. Given the correlation between AOD and PM2.5 demonstrated in different investigations \citep{https://doi.org/10.1029/2003GL018174, https://doi.org/10.1029/2008JD011496}, MAIAC data can be used for high resolution modeling and mapping of the PM2.5 variability in urban areas. However, PM2.5 estimation from MAIAC AOD is a challenging task in the study area (Tehran), and several parameters influence the accuracy of the PM2.5-AOD modeling. As will be illustrated in Section \ref{review}, no study in the literature has realized high resolution PM2.5 mapping over Tehran in practice. Thus, this paper proposes a framework for high resolution mapping of PM2.5 using MAIAC AOD and other relevant parameters. This framework consists of three main stages: data preprocessing, regression modeling based on machine learning techniques, and model deployment for daily, high resolution mapping of PM2.5. More details of the framework are presented in Section \ref{sec.frame}. The remainder of this paper is organized as follows. First, related investigations are reviewed in Section \ref{review}. The study area and the datasets employed in this research are introduced in Sections \ref{study_area} and \ref{data}, respectively. Next, details of the devised framework, including data analysis and preprocessing, statistical PM2.5 modeling using machine learning techniques, and high resolution PM2.5 map generation through model deployment, are explained in Section \ref{sec.frame}. Then, the results of the experiments are presented in Section \ref{result}, and the feasibility of PM2.5 mapping over the study area is discussed. Section \ref{sec.conclusion} presents the conclusions of this study. \section{Related Work} \label{review} \textcolor{red}{Three main types of models have been developed for PM2.5 concentration estimation from satellite AOD measurements: chemical simulation models \citep{https://doi.org/10.1029/2004JD005025, van2010global}, statistical models \citep{ma2016satellite, song2014satellite}, and semi-empirical models \citep{LIN2015117}. Among them, statistical models are the most popular for PM2.5 modeling, and machine learning techniques have been widely used for this type of modeling. In the literature, simple linear regression models (univariate or multivariate) have been employed for PM2.5 concentration estimation.
In addition to linear regression models, advanced machine learning algorithms have also been applied for PM2.5 concentration estimation \citep{atmos9030105, https://doi.org/10.1029/2008JD011496, https://doi.org/10.1029/2008JD011497, https://doi.org/10.1002/2017GL075710, AHMAD2019117050, chen2020estimating, SUN2021144502}. For example, Gupta and Christopher designed a multi-layer perceptron (MLP) to explore the relationship between AOD and PM2.5 using meteorological data \citep{https://doi.org/10.1029/2008JD011496}. Li et al. developed a geointelligent network using a deep belief Boltzmann structure for estimating PM2.5 \citep{https://doi.org/10.1002/2017GL075710}. Other machine learning algorithms, such as the support vector regressor (SVR) \citep{vapnik2013nature}, random forest \citep{james2013introduction}, and gradient boosting \citep{friedman2002stochastic}, have been used for estimating the PM2.5 concentration from meteorological data and AOD as input features. \citeauthor{Weizhen_2014} developed a successive over-relaxation SVR model with a Gaussian kernel function for predicting PM2.5 and PM10 from satellite AOD and meteorological parameters in Beijing \citep{Weizhen_2014}. Decision tree ensemble approaches have been broadly used for modeling the PM concentration from AOD retrievals \citep{SUN2021144502, LU2021100734, hu2017estimating, CHEN2021110735, YANG2020111061, JIANG2021105146}. \citeauthor{LU2021100734} trained a random forest to predict the PM2.5 level over several urban areas in China using high resolution AOD and meteorological data \citep{LU2021100734}. In another study, land use data and column water vapor, in addition to AOD and meteorological parameters, were used for high resolution mapping of PM2.5; the PM2.5 level was predicted by a linear mixed effects model and a random forest \citep{doi:10.1021/acs.est.0c01769}. XGBoost, a gradient boosting approach, has also been utilized for PM2.5 modeling from AOD data. For example, \citeauthor{rs12203368} designed a spatially local extreme gradient boosting (SL-XGB) model for PM2.5 prediction from SARA AOD at urban scales \citep{rs12203368}.} \textcolor{red}{In addition to classical methods, neural networks, another category of popular machine learning techniques, have been utilized for PM2.5 estimation from AOD data \citep{atmos9030105, https://doi.org/10.1029/2008JD011497}. In recent years, deep neural networks have proven their performance in various classification and regression tasks. Deep learning techniques have also been applied to PM2.5 estimation from satellite data and compared with classical machine learning approaches, and the efficiency of deep neural network structures has been illustrated in several investigations \citep{WANG2019128, rs12020264}. \citeauthor{rs12020264} used autoencoder-based residual networks for estimating PM2.5 and PM10 from AODs \citep{rs12020264}. \citeauthor{CHEN2021144724} used a self-adaptive deep neural network for finding the PM2.5-AOD relationship \citep{CHEN2021144724}.} \textcolor{red}{In the literature, successful PM2.5 modeling from AOD data in Tehran has mainly been based on the 3 km or 10 km (DB or DT) MODIS products. Earlier studies mainly focused on PM2.5 concentration estimation in Tehran using satellite-based AOD measurements at lower resolution (\texttildelow 10 km). An early study attempted to estimate PM2.5 using 10 km DT AODs over a short observation period.
The correlation between the predicted and observed PM2.5 was around 0.55 \citep{sotoudeheian2014estimating}. In another investigation, Ghotbi et al. estimated PM2.5 over Tehran with higher accuracy ($ R^{2} $ = 0.73) using the 3 km DT AOD and meteorological data derived from climate stations \citep{GHOTBI2016333}. However, they used only a few samples (332 data points) collected from a few stations over a very short period, from March to November 2009. Another study attempted to estimate PM2.5 from the 10 km MODIS AOD (combined DB and DT) product over Tehran; the results demonstrated that machine learning techniques gave accuracies of up to 80\% \citep{atmos10070373}. PM2.5 estimation from high resolution satellite imagery, such as Landsat imagery, has been investigated in several studies \citep{jafarian2020evaluation, IMANI2021111888}. However, these images have a low temporal resolution (e.g., one image acquisition every six days), which is not suitable for daily representation of the PM2.5 map. The only study performed over Tehran using MAIAC AOD data reported a correlation of less than 0.5 on average \citep{NABAVI2019889}, which is not sufficient for high resolution PM2.5 mapping based on AOD.} \textcolor{red}{A review of previous investigations of the Tehran urban area demonstrates that there is no practical implementation of daily, high resolution mapping of the PM2.5 concentration over the study area. Consequently, the main focus of this paper is to develop a framework based on machine learning to reach the goal of high resolution PM2.5 mapping over the study area.} \section{Study Area and Materials} \subsection{Study Area}\label{study_area} The study area is Tehran city, the capital of Iran (shown in Fig. \ref{fig.tehran}), with a population of 13.3 million residents and 10 million commuters. It spreads from latitude 35$ ^{\circ} $ 35$'$ N to 35$ ^{\circ} $48$'$ N and longitude 51$ ^{\circ} $17$'$ E to 51$ ^{\circ} $ 33$'$ E. The highest point of the city has an elevation of 1800 m, while the lowest point lies at more than 900 m above mean sea level. The primary sources of pollution are mobile sources, i.e., vehicles in a relatively old fleet, which produce around 85\% of the total pollutants and 70\% of the PM \citep{ARHAMI201770}. Also, human activities, such as changing the land use and land cover of the urban and suburban areas, have increased the intensity of air pollution. Due to the city's specific mountainous topography, with mountains surrounding it from the north to the southeast, winds carry the air pollution from the industries in the west of the city to its center and east \citep{ATASH2007399}. \begin{figure*}[t!] \begin{center} \includegraphics[width=0.7\textwidth]{Tehranr.png} \caption{Visualization of the study area, Tehran, the capital of Iran. Locations of air quality monitoring stations are marked by orange circles.} \label{fig.tehran} \end{center} \end{figure*} \subsection{Materials}\label{data} This study utilizes several datasets: AOD collected by satellite, meteorological data provided by a global weather model, and PM2.5 measured at ground air quality monitoring stations. The datasets were collected over seven years, from Jan. 2013 to Jan. 2020. In the following, each dataset is described in more detail. \subsubsection{PM2.5 Monitoring Data}\label{pm2.5} The mean daily PM2.5 level is collected by Tehran's Air Quality Control Company (AQCC). Fig. \ref{fig.tehran} shows the locations of the air quality monitoring stations. As shown in Fig.
\ref{fig.tehran}, the air quality of the city is monitored by 23 stations scattered across the city. PM levels are measured hourly by a Tapered Element Oscillating Microbalance (TEOM) instrument \citep{sotoudeheian2014estimating}. Despite the good spread of the monitoring stations, they are not sufficient for high resolution PM2.5 mapping of the city, and interpolation techniques ignore the variability of weather conditions and human-made factors such as local emissions. The situation becomes worse when continuous measurement of PM2.5 by the existing stations is not possible: over time, some stations go out of order, which reduces the number of operating air monitoring stations. For example, among those 23 stations, measurements of some stations are not available in certain periods due to technical issues. \subsubsection{MAIAC AOD}\label{AOD} AOD can be employed to model the variability of PM2.5 levels at locations between the monitoring stations. Spaceborne sensors can provide daily AOD measurements. AOD quantifies the columnar aerosol level in the atmosphere by measuring the light extinction induced by aerosols. \textcolor{red}{Two widely used satellite AOD products are those retrieved by the DB and DT algorithms from the MODIS Aqua and Terra observations. While the DT algorithm is mainly applicable to dark vegetated areas, which restricts its usage in urban areas \citep{amt-6-2989-2013}, the MODIS DB algorithm was originally developed to retrieve AOD over bright surfaces using 470 and/or 412/650 nm, depending on the surface \citep{sayeretal}. The second-generation C6 version of the DB product has been further updated by \citeauthor{https://doi.org/10.1002/jgrd.50712}, considering an improved assessment of the NDVI-dependent surface reflectance, improved cloud screening, and identification of dust. This helped to extend the applicability of the DB algorithm from arid/desert regions to the entire land surface except for snow/ice-covered areas \citep{https://doi.org/10.1002/jgrd.50712}.} Nevertheless, both algorithms lead to AOD data at 10 km or 3 km resolution, which limits high-resolution PM monitoring, particularly over urban areas. A more recent AOD product is the output of the MAIAC algorithm, which uses time series of measurements acquired by the MODIS instruments aboard the Terra and Aqua satellites. The algorithm gives an AOD product at a resolution of 1 km, which can be applied for high-resolution mapping of PM2.5, especially over urban areas \citep{amt-11-5741-2018}. \textcolor{red}{In this study, the MCD19A2 Version 6 MAIAC data product is employed for PM2.5 estimation.} \subsubsection{Meteorological Data}\label{met} In addition to AOD, several investigations have demonstrated the significance of meteorological data for PM2.5 concentration estimation \citep{https://doi.org/10.1029/2008JD011496, https://doi.org/10.1029/2008JD011497,atmos9030105}. The meteorological data can be collected either by weather stations or from weather models. A well-known global weather model that provides uniformly distributed meteorological data worldwide is the model developed by the European Centre for Medium-Range Weather Forecasts (ECMWF). The meteorological data can be gathered from the fifth-generation ECMWF reanalysis for the global climate and weather, namely ERA5 \citep{ER5}. ERA5 estimates the atmospheric, ocean-wave, and land-surface quantities hourly \citep{https://doi.org/10.1002/qj.3803}.
It combines model data (a previous forecast) with newly available observations to update the estimate of the atmosphere. Another version of ERA5 is the ERA5-Land hourly dataset, which provides land variables at enhanced resolution compared to ERA5 \citep{ER5-land}. All meteorological data required for the PM2.5 concentration estimation can be derived from the ERA5 and ERA5-Land hourly data. Tab. \ref{feature} lists the characteristics and sources of the meteorological data used in this study for PM2.5 estimation. As expressed in Tab. \ref{feature}, both versions of the ERA5 model provide the meteorological data in a grid format with a higher spatial resolution than synoptic station observations, which can potentially be employed for high resolution mapping of PM2.5. \begin{table*}[bt!] \centering \footnotesize \caption{Input features used for PM2.5 modeling in urban areas. Note that the column Notation represents the notations of the features used in this paper.} \label{feature} \begin{tabular}{llcc} Notation &Description & Data Source & Resolution \\\hline \toprule AODm & Mean aerosol optical depth & MAIAC & 1 km \\ nAODm & Normalized mean AOD & \thead{MAIAC \\ ECMWF} & 1 km \\ Prob\_bestm & Probability of mean AOD to have best quality & MAIAC & 1 km \\ Prob\_medm & Probability of mean AOD to have medium quality & MAIAC & 1 km \\ lat & Latitudinal position of the air quality monitoring station & \thead{MAIAC \\ AQCC} & 1 km \\ long & Longitudinal position of the air quality monitoring station & \thead{MAIAC \\ AQCC} & 1 km \\ d2m & 2m dewpoint temperature & ECMWF & \texttildelow 10 km \\ t2m & 2m temperature & ECMWF & \texttildelow 10 km \\ blh or PBLH& Planetary boundary layer height & ECMWF & \texttildelow 10 km \\ sp & Surface pressure & ECMWF & \texttildelow 10 km \\ lai\_hv & Leaf area index, high vegetation & ECMWF & \texttildelow 10 km \\ lai\_lv & Leaf area index, low vegetation & ECMWF & \texttildelow 10 km \\ ws10 & 10m wind speed & ECMWF &\texttildelow 10 km \\ wd10 & 10m wind direction & ECMWF & \texttildelow 10 km \\ cdir & Clear sky direct solar radiation at surface & ECMWF & \texttildelow 10 km \\ uvb & Downward UV radiation at the surface & ECMWF & \texttildelow 10 km \\ RH & Relative humidity & \textcolor{red}{ECMWF} & \texttildelow 10 km\\ month & Month & --- & --- \\ DOY & Day of year & --- & --- \\ \hline \end{tabular} \end{table*} \section{A Framework for High Resolution PM2.5 Mapping Using MAIAC AOD}\label{sec.frame} Fig. \ref{fig.frame} displays the devised framework, suited to Tehran city, for high resolution estimation and mapping of PM2.5 using AOD, meteorological data, and other features. The framework consists of three main stages: data preprocessing, regression modeling, and deployment. First, the data are prepared in the preprocessing phase; the objective of this stage is to prepare the features to be imported into the next module, i.e., regression modeling. In the regression modeling module, a machine learning technique is developed to explore the relationship between the input features (AOD, meteorological data, etc.) and the corresponding PM2.5 collected at the ground stations. The model obtained from the regression is employed in the deployment stage to produce daily, high resolution PM2.5 maps over the study area. More details of each step and its modules are described in the following sections. \begin{figure*}[t!]
\begin{center} \includegraphics[width=1\textwidth]{framework.png} \caption{The framework devised for high resolution estimation of PM2.5 over Tehran} \label{fig.frame} \end{center} \end{figure*} \subsection{PM2.5 Data Preprocessing} \subsubsection{PM2.5 Correction}\label{correctionPM} Earlier studies illustrated the relationship between the AOD retrieved by different algorithms from MODIS observations and the PM2.5 measured by air quality monitoring stations, mainly based on univariate linear regression. For example, Wang and Christopher showed correlations of 0.76 and 0.67 between PM2.5 values and the AOD products derived from Aqua and Terra, respectively \citep{https://doi.org/10.1029/2003GL018174}. Despite the good correlation between AOD and PM2.5 in the mentioned study, several studies have implied that this relationship can be significantly affected by the vertical distribution of aerosols and the ambient relative humidity \citep{tsai2011analysis, zhang2016semi, engel2006integrating, wang2010satellite}. In Tehran, the PM2.5 and PM10 collected at the monitoring stations are measured by TEOM after heating the ambient air to 50$ ^{\circ} $C \citep{sotoudeheian2014estimating, GHOTBI2016333}; consequently, the reported (dry) PM mass is less than the raw PM mass. The corresponding correction can be performed as follows \citep{tsai2011analysis}: \begin{equation} PM_{c}= PM (1-\dfrac{RH}{100})^{-1}, \label{pm} \end{equation} where $ PM_{c} $ is the corrected value of the $ PM $ measured at the monitoring station, and $RH$ is the relative humidity (in percent). \subsubsection{Outlier Removal}\label{sec.outlier} The PM2.5 values used in this study are daily averages of the 24-hour PM2.5 measurements at the air quality monitoring stations. A daily PM value is the average of the hourly measurements of a day, provided at least 80\% of them are valid; otherwise, the daily value is reported as missing. While averaging decreases the effect of noise or outliers possibly present in the hourly measurements, some hourly measurements may deviate dramatically from the actual values, and even averaging cannot suppress such deviations. These measurements are considered outliers. In this paper, two simple strategies are carried out for outlier detection and removal. First, the interquartile range (IQR) is used to separate inlier measurements from outliers \citep{yang2019outlier}. In this approach, the inliers are those satisfying the condition \begin{equation} Q_{1}-IQR < PM2.5<Q_{3}+IQR, \end{equation} where $ Q_{1} $ and $ Q_{3} $ are the first and third quartiles of the input PM2.5 and $ IQR $ is the interquartile range of the PM2.5 values. The second strategy is based on the standard deviation of the input PM2.5, called the $ 3\sigma $ strategy in this paper \citep{posio2008outlier, bagheri2018fusion}. Here, the inlier PM2.5 measurements are those for which \begin{equation} \mu-3\sigma<PM2.5<\mu+3\sigma, \end{equation} where $ \mu $ and $ \sigma $ are the mean and standard deviation of the PM2.5 measurements. A minimal sketch of the correction and of both outlier strategies is given below.
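To make these preprocessing steps concrete, the following minimal Python sketch implements the humidity correction of eq. \ref{pm} and the two inlier conditions above. It is only an illustration under stated assumptions: the daily PM2.5 and relative humidity series are NumPy arrays, and the function names and sample values are hypothetical.

\begin{verbatim}
import numpy as np

def correct_pm(pm, rh):
    # Eq. (1): PM_c = PM * (1 - RH/100)^(-1); RH in percent, assumed < 100,
    # since the denominator vanishes at saturation.
    return pm / (1.0 - rh / 100.0)

def iqr_inliers(pm):
    # Inliers satisfy Q1 - IQR < PM2.5 < Q3 + IQR.
    q1, q3 = np.percentile(pm, [25, 75])
    iqr = q3 - q1
    return (pm > q1 - iqr) & (pm < q3 + iqr)

def three_sigma_inliers(pm):
    # Inliers satisfy mu - 3*sigma < PM2.5 < mu + 3*sigma.
    mu, sigma = pm.mean(), pm.std()
    return (pm > mu - 3 * sigma) & (pm < mu + 3 * sigma)

# Hypothetical daily means (ug/m^3) and relative humidity (%).
pm25 = np.array([38.0, 42.5, 55.1, 140.0, 33.2])
rh = np.array([30.0, 45.0, 50.0, 40.0, 35.0])
pm25_c = correct_pm(pm25, rh)
pm25_clean = pm25_c[iqr_inliers(pm25_c)]
\end{verbatim}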
\subsection{MAIAC AOD Data Preparation} \subsubsection{AOD Normalization} Another required modification is the normalization of the MAIAC AOD data. Since AOD is a columnar parameter, while the PM values are measured at the surface near the station, a conversion from the columnar to a surface-related AOD measurement is necessary \citep{wang2010satellite}. In this regard, the original AOD values should be normalized before any further processing \citep{tsai2011analysis}. This can be achieved using the height of the mixing layer at each monitoring station. Thus, the normalized AOD is calculated as \citep{tsai2011analysis}: \begin{equation} nAOD = \dfrac{AOD}{L_{mix}}, \label{naod1} \end{equation} where $ nAOD $ and $ AOD $ are the normalized AOD and the original AOD retrieved by the MAIAC algorithm, respectively, and $ L_{mix} $ denotes the mixing layer height. In this study, it is assumed that the aerosols are homogeneously mixed, and the height of the haze layer is ignored in the normalization. In addition, a previous investigation over Tehran disclosed that the aerosol layer height (ALH), derived from CALIPSO profiles over the study area, and the planetary boundary layer height (PBLH) lie at approximately the same altitude above the aerosol-laden layers \citep{NABAVI2019889}. As a result, $ L_{mix} $ can be replaced with the PBLH, and eq. \ref{naod1} can be updated as below: \begin{equation} nAOD = \dfrac{AOD}{PBLH}, \label{naod2} \end{equation} where $ PBLH $ is the planetary boundary layer height obtained from the ECMWF model. \subsubsection{AOD Extraction from MAIAC Products}\label{aod_extraction} The AOD data provided by MODIS MAIAC are initially in a raster format, while for the statistical modeling of PM2.5, AOD has to be extracted at each air monitoring station. To obtain the MODIS pixels coincident with the PM2.5 measurement at a monitoring station, different window sizes (3$ \times $3, 5$ \times $5, 7$ \times $7, 9$ \times $9, 11$ \times $11, and 15$ \times $15) are applied to evaluate the relationship between the AOD and PM2.5 values. The final AOD is the average of the AOD values (AODm) inside the considered window. Several criteria, associated with the quality of the AOD values extracted from the MAIAC products in each window, are considered to ensure that the quality of the AODs is preserved after averaging. In more detail, after averaging the AODs inside the window, the standard deviation of the AODs is calculated. If this value is more than 0.5, the mean AOD is considered invalid, meaning that the AOD values in the neighborhood fluctuate severely. Furthermore, since some AODs within a window may be unavailable (filled with NaN), another criterion is that the number of pixels with valid AOD values must be more than three, which makes the averaging of the AODs more meaningful. These criteria are applied for all selected window sizes; a minimal sketch of the windowed extraction is given below.
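The windowed extraction and its validity criteria can be sketched in Python as follows. The raster indexing and variable names are hypothetical, and only the two criteria stated above (more than three valid pixels, standard deviation of at most 0.5) are enforced.

\begin{verbatim}
import numpy as np

def extract_mean_aod(aod_raster, row, col, half=1):
    # Mean AOD in a (2*half+1) x (2*half+1) window centered on a station
    # pixel; returns NaN when the validity criteria are violated.
    window = aod_raster[row - half:row + half + 1,
                        col - half:col + half + 1]
    valid = window[~np.isnan(window)]
    if valid.size <= 3:       # require more than three valid AOD pixels
        return np.nan
    if valid.std() > 0.5:     # reject windows with severe fluctuation
        return np.nan
    return valid.mean()

# Example: a hypothetical 1 km MAIAC tile and a station at pixel (120, 85).
aod = np.random.default_rng(0).uniform(0.1, 0.6, size=(400, 400))
aod[::7, ::5] = np.nan        # mimic missing retrievals
aod_m = extract_mean_aod(aod, 120, 85, half=1)   # 3x3 window
\end{verbatim}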
\subsubsection{AOD Quality Extraction}\label{quality_extraction} In addition to the criteria mentioned in the previous section, further conditions can be imposed using the information provided in the ``Quality Assessment'' (QA) file delivered along with the MAIAC AOD products \citep{lyapustin2018modis}. According to the manual of the MAIAC AOD product, and based on previous investigations \citep{just2015using, kloog2014new}, the recommendation for urban air quality applications is to use only those AODs satisfying the condition below: \textbf{Condition 1}: (Adjacency Mask == Normal condition/Clear) \textbf{and} (Cloud Mask == Clear or Possibly Cloudy), where the Adjacency Mask reports recognized neighboring clouds or snow (in the 2-pixel vicinity). This condition can be made stricter by keeping only those AODs that are flagged as ``Best quality'' in the QA file \citep{lyapustin2018modis}. Consequently, the second condition is considered as: \textbf{Condition 2}: (Adjacency Mask == Normal condition/Clear) \textbf{and} (Cloud Mask == Clear) \textbf{and} (QA for AOD == Best quality) Since each window may include AODs of different qualities, the final AOD could be calculated by averaging only the AODs satisfying Condition 1 or 2. However, this strategy can discard valuable AODs that do not meet the conditions. Therefore, instead of filtering the AODs by these conditions, which may eliminate the AOD information at an air monitoring station altogether, two probability maps are generated based on the defined conditions. The AODs inside a window are averaged, and, corresponding to the achieved mean AOD, a probability is calculated as the ratio of the number of pixels (AODs) satisfying the relevant condition to the total number of pixels inside the window. In other words, the assigned probability indicates the fraction of pixels with the highest quality (satisfying Condition 2) or with medium quality (satisfying Condition 1) involved in the calculation of the mean AOD. These probability values can be used to control the quality of the final AOD at each monitoring station. In this paper, the two generated weight maps are denoted ``Prob\_medm'' and ``Prob\_bestm'' for Conditions 1 and 2, respectively. \subsubsection{Merging AODs of Aqua and Terra}\label{merg-AT} Hu et al. \citep{hu2014estimating} and Lee et al. \citep{lee2011novel} have shown that averaging the AOD values retrieved by Terra and Aqua, which overpass at different local times (around 10:30 and 13:30, respectively), can serve as a daily AOD measurement. The correlation between the Aqua and Terra AODs also allows filling missing AOD values of one sensor using the AODs retrieved by the other sensor. At locations where either the Terra or the Aqua AOD ($ AOD_{T} $ or $ AOD_{A} $) is missing, the missing value can be estimated using the computed regression equations. The final AOD can then be calculated as \begin{equation} AOD = \dfrac{AOD_{A} + AOD_{T}}{2}, \label{eq.AT} \end{equation} in which a missing $ AOD_{A} $ or $ AOD_{T} $ is first estimated using the coefficients obtained by linear regression. \subsection{Meteorological Data Preparation} To employ the meteorological data, they must be estimated at the locations of the air quality monitoring stations. For this purpose, the meteorological values are interpolated at the target locations using an interpolation technique such as kriging \citep{ijgi7090368, olea2012geostatistics,bagheri2014}. Two popular types of kriging that can be used for the interpolation of meteorological data are ordinary and universal kriging. Besides the kind of kriging, another critical parameter that should be set correctly is the semivariogram type. Various semivariogram models, such as linear, spherical, Gaussian, and power, are available and are typically selected based on the study data \citep{bagheri2014, aretouyap2016lessening}. One strategy for setting these hyperparameters is a grid search with cross-validation, in which a subset of the data is used to estimate the parameters: the interpolation is performed on a training subset with different parameter combinations, and its performance is evaluated on a validation subset. The hyperparameters are then set to the combination that gives the highest performance; a minimal sketch of such a grid search is given below.
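The following sketch illustrates the grid search over the kriging type and the semivariogram model. It assumes the third-party pykrige package and uses leave-one-out validation as a simple stand-in for the train/validation split described above; the coordinates and temperature values are synthetic placeholders for the ERA5 grid points.

\begin{verbatim}
import numpy as np
from itertools import product
from pykrige.ok import OrdinaryKriging
from pykrige.uk import UniversalKriging

def krige_cv_rmse(x, y, z, kriging_type, variogram):
    # Leave-one-out RMSE of one kriging configuration.
    errors = []
    for i in range(len(z)):
        mask = np.arange(len(z)) != i
        if kriging_type == "ordinary":
            model = OrdinaryKriging(x[mask], y[mask], z[mask],
                                    variogram_model=variogram)
        else:
            model = UniversalKriging(x[mask], y[mask], z[mask],
                                     variogram_model=variogram,
                                     drift_terms=["regional_linear"])
        pred, _ = model.execute("points", x[i:i + 1], y[i:i + 1])
        errors.append(float(pred[0]) - z[i])
    return np.sqrt(np.mean(np.square(errors)))

# Grid search over kriging type and semivariogram model for one variable.
rng = np.random.default_rng(1)
lon = rng.uniform(51.1, 51.6, 40)
lat = rng.uniform(35.5, 35.9, 40)
t2m = 290 + 3 * (lat - 35.5) + rng.normal(0, 0.3, 40)  # synthetic 2m temp.
grid = product(["ordinary", "universal"],
               ["linear", "spherical", "gaussian", "power"])
best = min(grid, key=lambda cfg: krige_cv_rmse(lon, lat, t2m, *cfg))
\end{verbatim}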
\subsection{Feature Selection} As illustrated in Tab. \ref{feature}, several features, including those extracted from the MAIAC products and the meteorological data derived from the ECMWF models, are input into a predictive model for estimating the PM2.5 concentration. However, before establishing a regression model, it is beneficial to select the most important features. This procedure gives insight into the relationship between the input variables and the output target (PM2.5), which can lead to the removal of non-significant features and, in some cases, improved model accuracy. For this aim, different machine learning techniques, such as random forest and gradient boosting, can be applied. In this paper, as will be illustrated in Section \ref{sec.reg_result}, gradient boosting is used as the machine learning technique for the AOD-PM2.5 modeling; additionally, it provides a built-in means of determining feature importance. For an individual decision tree, the importance of a feature is estimated by the performance improvement attained at each split point on that feature, weighted by the number of observations the node handles. The average of these importances across all decision trees, called the gain, identifies the final importance \citep{xu2014gradient}. \subsection{Regression Modeling}\label{sec.model} Besides the data, another aspect of the designed framework is the type of model used for PM2.5 concentration estimation from AOD and meteorological data. This paper compares different machine learning algorithms, from basic to advanced, for estimating PM2.5 from MAIAC AOD and ECMWF meteorological data. In this regard, four categories of machine learning algorithms are implemented to explore the relationship between the input features and the PM2.5 values: linear methods (univariate, multivariate, ridge, lasso), kernel methods (SVR), decision tree ensembles (random forest, extra trees, XGBoost), and deep neural networks (deep autoencoder + SVR, deep belief network). \subsection{Model Deployment} The model obtained from the regression modeling phase can estimate PM2.5 at an arbitrary location from the input features, which leads to high resolution mapping of PM2.5. The main challenge for estimating PM2.5 is missing AOD values caused by cloud contamination or by failure of the applied retrieval algorithm. However, the available AODs, although few in number, can be employed to estimate PM2.5 in addition to the values measured by the air quality monitoring stations. In other words, the PM2.5 values estimated from the AODs and other features ultimately provide PM2.5 at locations that are not covered by any air quality monitoring station. These estimated PM2.5 values can be utilized as extra measurements, in addition to the ground station measurements, for producing a high resolution map of PM2.5. The produced PM2.5 data can be regarded as new PM2.5 measuring stations (quasi-stations) and thus be combined with the actual stations to better capture the PM2.5 variations for higher resolution mapping. Finally, a high resolution daily map of PM2.5 is produced by an interpolation technique using all estimated and observed PM2.5 values. Additionally, monthly and yearly high resolution maps of PM2.5 can be generated from the produced daily maps by median averaging. A minimal sketch of this deployment step is given below.
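A schematic of the deployment step is given below. The regressor, feature matrix, and coordinate arrays are hypothetical placeholders, and SciPy's griddata is used here as a generic stand-in for the unspecified interpolation technique mentioned above (a kriging interpolator could be substituted).

\begin{verbatim}
import numpy as np
from scipy.interpolate import griddata

def daily_pm25_map(model, features, pixel_xy, station_xy, station_pm,
                   grid_x, grid_y):
    # Predict PM2.5 at pixels with valid AOD (quasi-stations), merge with
    # the station observations, and interpolate a daily map.
    # `model` is any trained regressor with a scikit-learn style predict();
    # `features` holds the preprocessed predictors (Tab. 1) per valid pixel.
    quasi_pm = model.predict(features)           # PM2.5 at quasi-stations
    xy = np.vstack([pixel_xy, station_xy])       # quasi + real stations
    pm = np.concatenate([quasi_pm, station_pm])
    return griddata(xy, pm, (grid_x, grid_y), method="linear")

# Hypothetical usage over a 1 km lon/lat grid covering Tehran:
# grid_x, grid_y = np.meshgrid(np.linspace(51.17, 51.55, 380),
#                              np.linspace(35.58, 35.80, 220))
# pm_map = daily_pm25_map(xgb_model, X_valid_pixels, pixel_xy,
#                         station_xy, station_pm, grid_x, grid_y)
\end{verbatim}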
It should be noted that all preprocessing procedures mentioned earlier are performed on the AOD and meteorological data to make them ready for PM2.5 estimation using the developed regression model. \section{Results and Discussion}\label{result} In this section, the results of several experiments performed to investigate the efficiency of the different modules of the proposed framework for PM2.5 concentration estimation are presented and discussed. The standard metrics employed for evaluating the results were the root mean square error (RMSE), the mean absolute error (MAE), and the Pearson correlation coefficient ($ R^{2} $). \subsection{Data Preprocessing and Preparation} \subsubsection{Impact of AOD Normalization}\label{res.AOD_modif} The results of PM2.5 estimation using the univariate regression model with the original AODs and with their normalized versions are presented in Tab. \ref{Tab.aodnormal}. The results illustrate that the normalization of the AOD using the PBLH significantly improves the estimates. \begin{table}[bt!] \centering \footnotesize \caption{a) The Univariate column illustrates the results of using the original AOD and the version of the AOD normalized by the PBLH. b) The Multivariate column shows the effect of adding meteorological data in addition to the AOD values for predicting PM2.5.} \label{Tab.aodnormal} \begin{tabular}{l|ccc|ccc} & \multicolumn{3}{c|}{Univariate}& \multicolumn{3}{c}{Multivariate} \\ AOD type&\thead{RMSE \\ $\frac{ \mu g}{ m^{3}}$}& \thead{MAE \\ $\frac{ \mu g}{ m^{3}}$} & $ R^{2} $&\thead{RMSE \\ $\frac{ \mu g}{ m^{3}}$}& \thead{MAE \\ $\frac{ \mu g}{ m^{3}}$} & $ R^{2} $ \\\hline \toprule AOD & 18.53 & 15.26 & 0.01 & 11.76 & 9.35 & 0.56\\ nAOD (normalized) & 13.78 & 10.91 & 0.40 & 11.00 & 8.64 & 0.61 \\ \end{tabular} \end{table} \subsubsection{Results of Merging AODs of Aqua and Terra}\label{res.merg-AT} For the study area of this paper, it was illustrated that a combination of the AOD measurements from Aqua and Terra can be used to obtain the mean daily AOD. Fig. \ref{fig.TA} displays the linear correlation between the AODs retrieved from the Aqua and Terra sensors for the study years from Jan. 2013 to Jan. 2020. Tab. \ref{Tab.AT} also lists the correlation coefficient as well as the linear regression equation between the AOD measurements of the two sensors. As presented in Tab. \ref{Tab.AT}, the correlation between the measurements of the two sensors is 0.72 when all measurements from Jan. 2013 to Jan. 2020 are considered. In addition, the influence of seasonality on the correlation between the Aqua and Terra AODs was evaluated. For this purpose, the AOD values were divided into two categories based on the climate of the study area, warm season (Apr.--Sep.) and cold season (Oct.--Mar.), and the regression was performed for each seasonal category. The results revealed that in the cold season the correlation was slightly higher than in the case when all data were involved in the regression, whereas the correlation decreased for the warm season (Fig. \ref{fig.TA}). The highest correlation coefficient between the AODs of Aqua and Terra is thus obtained for the cold season (around 0.73), when pollution accumulates, missing AODs due to cloud coverage increase, and an accurate regression of the AODs is most desirable. A worked sketch of filling missing AODs with the regression equations of Tab. \ref{Tab.AT} is given below.
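As a worked illustration, the sketch below fills a missing Aqua or Terra value with the cold-season regression equations of Tab. \ref{Tab.AT} and then averages the two sensors according to eq. \ref{eq.AT}. The AOD values are assumed to be in the scaled units reported in that table, and the function name is hypothetical.

\begin{verbatim}
import numpy as np

# Cold-season regression coefficients as reported in Tab. 3.
TERRA_TO_AQUA = (0.83, 21.06)   # AOD_A = 0.83 * AOD_T + 21.06
AQUA_TO_TERRA = (0.88, 15.47)   # AOD_T = 0.88 * AOD_A + 15.47

def daily_aod(aod_a, aod_t):
    # Daily AOD as the Aqua/Terra mean (Eq. 7); a missing value is first
    # estimated from the other sensor via the seasonal regression.
    if np.isnan(aod_a) and np.isnan(aod_t):
        return np.nan
    if np.isnan(aod_a):
        slope, intercept = TERRA_TO_AQUA
        aod_a = slope * aod_t + intercept
    elif np.isnan(aod_t):
        slope, intercept = AQUA_TO_TERRA
        aod_t = slope * aod_a + intercept
    return 0.5 * (aod_a + aod_t)
\end{verbatim}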
\begin{figure*}[bt!] \centering \subfloat[]{% \includegraphics[width=1\columnwidth]{T_A_cold}} \label{TA2013} \subfloat[]{% \includegraphics[width=1\columnwidth]{T_A_warm}} \label{TAt} \caption{Correlation between Aqua and Terra AODs; a) Cold season, b) Warm season.} \label{fig.TA} \end{figure*} \begin{table*}[bt!] \centering \footnotesize \caption{Regression equations and coefficients for retrieving missing AOD values of one sensor from the other. Terra to Aqua applies when the Terra AOD is available and the Aqua AOD is missing; Aqua to Terra applies vice versa.} \label{Tab.AT} \begin{tabular}{llccc} AOD distinguishing &Time &Terra to Aqua & Aqua to Terra & $ R^{2} $ \\\hline \toprule \multirow{2}{*}{Seasonal} & Cold (Oct. - Mar.)& $ AOD_{A}= 0.83AOD_{T}+21.06 $ & $ AOD_{T}= 0.88AOD_{A}+15.47 $ & 0.73 \\ & Warm (Apr. - Sep.)& $ AOD_{A}= 0.81AOD_{T}+15.81 $ & $ AOD_{T}= 0.81AOD_{A}+49.94 $ & 0.65 \\ No separation &Total (2013-2019) & $ AOD_{A}= 0.79AOD_{T}+23.39 $ & $ AOD_{T}= 0.91AOD_{A}+21.89 $ & 0.72 \\ \hline \end{tabular} \end{table*} \subsubsection{Window Size for AOD Extraction}\label{res.ws} As explained in Section \ref{aod_extraction}, the AOD is first extracted from the MAIAC file at each monitoring station. For this purpose, different window sizes (3$ \times $3, 5$ \times $5, 7$ \times $7, 9$ \times $9, 11$ \times $11, and 15$ \times $15) were tested, and the effect of the window size on estimating PM2.5 was evaluated. A univariate linear regression model was applied to evaluate the correlation between the extracted MAIAC AODs and the corresponding PM2.5 values. Based on the results in Tab. \ref{Tab.ws}, increasing the size of the window degrades the accuracy of the univariate regression model; the best results were achieved using a 3$ \times $3 window. However, a smaller window size increases the chance of encountering missing values or poor-quality AODs according to the criteria explained in Sections \ref{aod_extraction} and \ref{quality_extraction}. Fig. \ref{ws} shows that increasing the window size reduces the regression performance (raises the RMSE values), whereas the percentage of available AODs (non-missing values) rises. From the slope of the RMSE plot, one can conclude that the performance degrades most markedly as the window size grows from 3$ \times $3 to 9$ \times $9 and beyond. Also, the percentage of data, shown by the red line-square plot, changes most when the window size varies from 3$ \times $3 (nearly 67\%) to 7$ \times $7 (almost 74\%). Nevertheless, increasing the window size for AOD extraction causes mixing of the AOD values belonging to nearby air quality monitoring stations located at a distance of less than half the window size. Another important aspect of exploring the optimal window size is the computational cost of the AOD extraction: increasing the window size requires a higher computational load, which can be problematic in big data processing. In conclusion, the 3$ \times $3 window size is selected as the optimal window size for AOD extraction from the MAIAC products in the study area. \begin{table}[bt!]
\centering \footnotesize \caption{The impact of changing window sizes on the correlation between the PM2.5 values and the MAIAC AODs} \label{Tab.ws} \begin{tabular}{lccc} Window Size & \thead{RMSE \\ $\frac{ \mu g}{ m^{3}}$}& \thead{MAE \\ $\frac{ \mu g}{ m^{3}}$} & $ R^{2} $ \\\hline \toprule 3$ \times $3 & 13.78& 10.91& 0.40 \\ 5$ \times $5 & 13.97& 11.01& 0.40 \\ 7$ \times $7 & 14.08& 11.08& 0.40 \\ 9$ \times $9 & 14.13& 11.11& 0.40 \\ 11$ \times $11 & 14.21& 11.17& 0.40 \\ 15$ \times $15 & 14.30& 11.21& 0.40 \\ \hline \end{tabular} \end{table} \begin{figure}[bt!] \begin{center} \includegraphics[width=1\columnwidth]{WS2.png} \caption{The influence of changing the window size on the RMSE of the regression as well as on the percentage of missing AOD data} \label{ws} \end{center} \end{figure} \subsubsection{Influence of Quality of AODs on PM2.5 Estimation}\label{res.quality} Fig. \ref{fig.quality} displays the influence of the quality of the AOD values on predicting PM2.5. Note that the simple linear regression model was again used to discover the effect of the AOD quality on the AOD-PM2.5 relationship. The quality probability of each extracted AOD was computed according to Conditions 1 and 2 described in Section \ref{quality_extraction}. A probability of zero means that no quality condition was imposed on the AODs input into the regression model, and a probability of 0.75 implies that at least 75\% of the AODs within the extraction window comply with either Condition 1 or 2. As shown in the figure, choosing the AODs with the highest probabilities, i.e., high-quality AOD values, leads to more accurate PM2.5 estimates. However, applying the conditions, in particular Condition 2, discards AODs that could be beneficial for the AOD-PM2.5 modeling, especially when sophisticated machine learning algorithms are applied. Thus, instead of filtering by the conditions, which are mostly helpful for simpler models such as the univariate model, this investigation suggests using the probabilities extracted from the AOD data as input features in the machine learning-based modeling. In Section \ref{res.feature_importance}, it will be shown that the probabilities can serve as informative features along with the meteorological data for PM2.5 estimation. \begin{figure}[bt!] \begin{center} \includegraphics[width=1\columnwidth]{quality2} \caption{The influence of the quality probabilities of the AODs, computed based on Condition 1 (medium) and Condition 2 (best), on the PM2.5 estimation} \label{fig.quality} \end{center} \end{figure} \subsubsection{PM2.5 Outlier Removal}\label{res.outlier} As explained in Section \ref{sec.outlier}, two strategies were applied for the detection and removal of PM2.5 outliers. Tab. \ref{Tab.outlier} presents the results of the univariate linear regression on data cleaned with each of the outlier removal strategies. The results illustrate that the IQR technique outperforms the 3$ \sigma $ strategy in terms of RMSE and MAE. Thus, the IQR strategy is chosen for outlier removal and data cleaning. \begin{table}[bt!] \centering \footnotesize \caption{The effect of using the IQR and 3$ \sigma $ strategies for outlier detection and removal from the PM2.5 values. The univariate regression was performed to evaluate each outlier removal strategy.
} \label{Tab.outlier} \begin{tabular}{lccc} Method & \thead{RMSE \\ $\frac{ \mu g}{ m^{3}}$}& \thead{MAE \\ $\frac{ \mu g}{ m^{3}}$} & $ R^{2} $ \\\hline \toprule IQR & 13.78 & 10.91 & 0.40 \\ 3$ \sigma $ & 14.70 & 11.49 & 0.43 \\ \hline \end{tabular} \end{table} \subsubsection{Impact of Meteorological Data on Estimating PM2.5}\label{res.met} Adding meteorological observations as input features is beneficial for PM2.5 estimation. For the meteorological data used in this study, Tab. \ref{Tab.krig} lists the best parameters tuned for the kriging interpolation of each meteorological parameter; these settings gave the best interpolation results for each category of meteorological data. After preparation, the meteorological data are used along with the AOD and the other features (listed in Tab. \ref{feature}) for the PM2.5 modeling. \begin{table}[bt!] \centering \footnotesize \caption{The hyperparameters obtained from the grid search with cross-validation for the kriging interpolation of the meteorological data in this study} \label{Tab.krig} \begin{tabular}{lcc} Meteorological Data &Type of Kriging & Semivariogram \\\hline \toprule d2m& universal& spherical \\ t2m& universal& spherical \\ blh& ordinary& spherical \\ lai\_hv& ordinary& spherical \\ lai\_lv& ordinary& spherical \\ sp& universal& power \\ ws10& ordinary& spherical \\ wd10& ordinary& spherical \\ uvb& ordinary& spherical \\ cdir& ordinary& spherical \\ RH& universal& spherical \\ \hline \end{tabular} \end{table} The impact of the meteorological data on predicting PM2.5 is presented in Tab. \ref{Tab.aodnormal}. As illustrated there, using the meteorological features improves the correlation coefficient of the PM2.5 estimation up to 0.61, while without this information the correlation coefficient is around 0.40. Note that the correlation of 0.40 is achieved by the univariate model using the AOD normalized by the PBLH, which is itself a meteorological parameter obtained from the ECMWF models. The other metrics, RMSE and MAE, also confirm the benefit of adding the aforementioned meteorological data. The importance of the meteorological variables for estimating the PM2.5 concentration is discussed in more detail in Section \ref{res.feature_importance}. \subsubsection{Results of Feature Selection} \label{res.feature_importance} Fig. \textcolor{blue}{S1} displays the importance of the features applied in this study for the PM2.5 modeling using XGBoost. As shown in this figure, the highest importance is assigned to the planetary boundary layer height (``blh''), followed by the normalized AOD (``nAODm'') as the most informative attribute for the PM2.5 regression. The plot also demonstrates that the relative humidity has an important impact on estimating PM2.5. The lowest significance belongs to ``lai\_lv'', ``month'', ``Prob\_medm'', and ``cdir'', respectively. To support the results of the XGBoost-based feature importance determination, the heatmap plot of the correlation matrix (based on absolute correlation values) of the input features and the target variable (``PM$ _{c} $'') is displayed in Fig. \textcolor{blue}{S2}. According to the heatmap plot, ``PM$ _{c} $'' has the highest correlation with ``nAODm'', ``RH'', ``blh'', ``lai\_hv'', ``t2m'', ``wd10'', ``uvb'', and ``ws10'', which have also been recognized as very important features by XGBoost. Some features, such as ``cdir'', are significantly correlated with ``wd10''; thus, ``wd10'' can substitute for ``cdir'' in practice. A minimal sketch of the gain-based importance ranking used here is given below.
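The gain-based ranking can be reproduced with the XGBoost scikit-learn wrapper as sketched below. The synthetic feature matrix and target are placeholders for the preprocessed predictors of Tab. \ref{feature} and the corrected PM2.5, and a smaller tree count than in Tab. \ref{Tab.hyp} is used for brevity.

\begin{verbatim}
import numpy as np
import pandas as pd
from xgboost import XGBRegressor

feature_names = ["nAODm", "blh", "RH", "t2m", "d2m", "sp", "ws10", "wd10",
                 "uvb", "cdir", "lai_hv", "lai_lv", "lat", "long",
                 "Prob_bestm", "Prob_medm", "month", "DOY"]
rng = np.random.default_rng(2)
X = pd.DataFrame(rng.normal(size=(500, len(feature_names))),
                 columns=feature_names)
y = 3 * X["nAODm"] - 2 * X["blh"] + rng.normal(size=500)  # toy target

model = XGBRegressor(n_estimators=500, max_depth=6, learning_rate=0.3,
                     importance_type="gain")
model.fit(X, y)

# Gain: average split improvement per feature across all trees.
ranking = sorted(zip(feature_names, model.feature_importances_),
                 key=lambda t: t[1], reverse=True)
\end{verbatim}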
Some features, such as the positional attributes (lat, long), have been identified as highly significant by XGBoost, whereas the correlation matrix marks them as less important. The main reason is that the correlation matrix captures only linear relations, and “PM$_{c}$” does not necessarily depend linearly on features such as the positional attributes. As a further experiment, Fig. \textcolor{blue}{S3} displays the RMSE of XGBoost under different settings for including or discarding input features in the regression; the settings are listed in Tab. \ref{Tab.setting}. The first setting removes the least important feature, i.e., “lai\_lv”, and the plot shows that the performance of the algorithm slightly improves. The removal of less important features then continues according to the settings in Tab. \ref{Tab.setting}. The red dashed line marks the performance of the algorithm when employing all defined input features. As the blue plot depicts, the performance when using all variables is the same as when removing the “lai\_lv”, “month”, and “Prob\_med” features, meaning that these features contribute nothing to the XGBoost regression. Indeed, removing “lai\_lv” and “month” (setting S2) even slightly improves the performance. However, according to the RMSE plot in Fig. \textcolor{blue}{S3}, removing further features, as in settings S4, S5, and beyond, degrades the accuracy of the algorithm.

\begin{table}[t!] \centering \footnotesize \caption{Different settings for embedding and discarding input features in the XGBoost regression} \label{Tab.setting}
\begin{tabularx}{\columnwidth}{l X}
Settings & Discarded features \\\hline \toprule
S1& lai\_lv \\
S2& lai\_lv + month \\
S3& lai\_lv + month + Prob\_med \\
S4& lai\_lv + month + Prob\_med + cdir \\
S5& lai\_lv + month + Prob\_med + cdir + sp \\
S6& lai\_lv + month + Prob\_med + cdir + sp + Prob\_best \\
S7& lai\_lv + month + Prob\_med + cdir + sp + Prob\_best + ws10 \\
S8& lai\_lv + month + Prob\_med + cdir + sp + Prob\_best + ws10 + wd10 \\ \hline
\end{tabularx} \end{table}

\subsection{Regression Modeling Results}\label{sec.reg_result}
After feature selection, different machine learning techniques were applied to model the relationship between PM2.5 and AOD together with the other input features. Each algorithm has unknown parameters, or hyperparameters, that should be tuned to reach the best performance. The proper hyperparameter values were determined during training (on 70\% of the entire data). After that, the independent remaining data (30\% of the whole data set) were employed as unseen test data to evaluate the efficiency of each algorithm. In this study, a 5-fold cross-validation strategy was used for training all algorithms except the deep learning methods. Since training a DAE or a DBN demands a lot of training data and also a great deal of computing time, the initial training data were instead split into a training set (80\%) and a validation set (20\%). The structures of the designed deep autoencoder (DAE+SVR) and deep belief network (DBN) employed for PM2.5 estimation from the MAIAC AOD values and the other input features are displayed in Figs. \ref{AE} and \ref{DBN}, respectively.
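To make the DAE+SVR pipeline concrete, the following is a minimal Keras/scikit-learn sketch. The training choices (Adam with a learning rate of 0.001, ReLU activations, MSE loss, an $l_{2}$ weight penalty of 0.001, 200 epochs, and an RBF-kernel SVR with regularization 100 and epsilon 0.1) follow Tab. \ref{Tab.hyp}, whereas the layer widths and the variable names are assumptions for illustration; the actual architecture is the one shown in Fig. \ref{AE}:

\begin{verbatim}
from tensorflow import keras
from tensorflow.keras import layers, regularizers
from sklearn.svm import SVR

# X_train, X_val, y_train are assumed to be given
n_features = X_train.shape[1]
l2 = regularizers.l2(0.001)

# Autoencoder; the layer widths (64/8/64) are illustrative assumptions
inp = keras.Input(shape=(n_features,))
h = layers.Dense(64, activation="relu", kernel_regularizer=l2)(inp)
code = layers.Dense(8, activation="relu", kernel_regularizer=l2)(h)
h = layers.Dense(64, activation="relu", kernel_regularizer=l2)(code)
out = layers.Dense(n_features)(h)

ae = keras.Model(inp, out)
ae.compile(optimizer=keras.optimizers.Adam(learning_rate=0.001), loss="mse")
ae.fit(X_train, X_train, epochs=200, validation_data=(X_val, X_val))

# SVR regression on the compressed (encoded) features
encoder = keras.Model(inp, code)
svr = SVR(kernel="rbf", C=100, epsilon=0.1)
svr.fit(encoder.predict(X_train), y_train)
\end{verbatim}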
\begin{figure*}[bt!] \begin{center} \includegraphics[width=1\textwidth]{DAE.png} \caption{The structure of the DAE used in this study for PM2.5 modeling} \label{AE} \end{center} \end{figure*}

\begin{figure*}[bt!] \begin{center} \includegraphics[width=0.6\textwidth]{DBN.png} \caption{The structure of the deep belief network (DBN), constructed as a stack of restricted Boltzmann machines (RBMs), used for PM2.5 regression} \label{DBN} \end{center} \end{figure*}

Other hyperparameters, such as the learning rate, optimization method, etc., were identified during training using the validation data. The hyperparameters tuned for the machine learning algorithms used in this study are listed in Tab. \ref{Tab.hyp}. After fine-tuning, the developed models were evaluated on the train and the test data to give each model's performance during training and testing, respectively. Tab. \ref{Tab.model} summarizes the performances of the applied machine learning techniques.

\begin{table}[bt!] \centering \footnotesize \caption{Hyperparameter settings of the machine learning algorithms used in this study} \label{Tab.hyp}
\begin{tabularx}{1\columnwidth}{l X}
Algorithm& Main hyperparameters \\\hline \toprule
Univariate& ---\\
Multivariate& ---\\
Ridge& regularization parameter: 0.1\\
Lasso& regularization parameter: 0.1\\
SVR& kernel type: RBF, regularization parameter: 100, epsilon: 0.1\\
Random Forest& No. of estimators: 500, max depth: 10, max features: 0.5, min samples in a leaf: 1, criterion: MSE\\
Extra Trees & No. of estimators: 500, max depth: 10, max features: 0.8, min samples in a leaf: 1, criterion: MSE\\
XGBoost& booster: decision tree, No. of trees: 2000, criterion: MSE, learning rate: 0.3, maximum depth: 6, max features: 1, min child weight: 1, gamma: 0\\
DBN & No. of hidden layers: 2 (64, 10 neurons), learning rate of RBM: 0.01, learning rate of the network: 0.001, optimizer: SGD, No. of epochs for training RBM: 50, No. of backpropagation iterations: 200, mini-batch: 256, activation function: ReLU, loss: MSE\\
DAE& structure: Fig. \ref{AE}, optimization: Adam, learning rate: 0.001, activation: ReLU, loss: MSE, regularization: $l_{2}$ weight penalty with factor of 0.001, epochs: 200; followed by SVR (kernel: RBF, regularization: 100, epsilon: 0.1)\\
\end{tabularx} \end{table}

\begin{table*}[tb] \centering \footnotesize \caption{The performances of the different machine learning techniques used in this study for PM2.5 estimation} \label{Tab.model}
\begin{tabularx}{\textwidth}{XX ccc ccc}
& & \multicolumn{3}{c}{Model training}& \multicolumn{3}{c}{Model testing} \\
Category&Method & RMSE $\frac{ \mu g}{ m^{3}}$& MAE $\frac{ \mu g}{ m^{3}}$ & $ R^{2} $ & RMSE $\frac{ \mu g}{ m^{3}}$ & MAE $\frac{ \mu g}{ m^{3}}$ & $ R^{2} $ \\ \toprule
\multirow{4}{*}{Linear Methods} & Univariate & 13.82& 10.93& 0.40& 15.45& 12.06& 0.41 \\
& Multivariate & 10.96& 8.64& 0.61& 12.38& 9.84& 0.59 \\
& Ridge & 10.92& 8.60& 0.61& 12.26& 9.74& 0.59 \\
& Lasso & 11.18& 8.78& 0.59& 11.74& 9.29& 0.58 \\ \hline
\multirow{1}{*}{Kernel Methods} &SVR & 10.27& 7.88& 0.63& 10.36& 7.98& 0.63 \\ \hline
\multirow{3}{*}{Ensemble Methods}& Random Forest& 7.85& 6.13& 0.85& 9.51& 7.50& 0.69 \\
&Extra Trees& 8.61& 6.80& 0.80& 9.63& 7.66& 0.68 \\
&XGBoost& \textbf{6.39}& \textbf{4.79}& \textbf{0.92}& \textbf{8.97}& \textbf{6.88}& \textbf{0.74} \\ \hline
\multirow{2}{*}{Deep learning} &DBN & 9.61& 7.46& 0.70& 9.99& 7.67& 0.66 \\
&DAE + SVR& 7.97 & 6.03& 0.83& 9.75& 7.32& 0.68 \\
\end{tabularx} \end{table*}

According to Tab.
\ref{Tab.model}, the best results on the test data are achieved by XGBoost: its RMSE and MAE are 6.39 and 4.79 on the train data and 8.97 and 6.88 on the test data, respectively. Random Forest fits the train data well (RMSE = 7.85, MAE = 6.13, and $R^{2}$ = 0.85); however, its accuracy decreases when applied to the test data. After the tree ensembles, DAE+SVR, with an RMSE of 9.75, MAE of 7.32, and $R^{2}$ of 0.68, gives the best results on the test data. The other deep neural network structure, the DBN, also outperforms SVR and the linear methods, although its accuracy remains below that of Extra Trees. Among the linear methods, as the most straightforward regression techniques, the ridge regressor has the highest accuracy, which shows the efficiency of regularization. The lowest accuracy overall belongs to the univariate model: since it does not employ the other valuable features, it performs worse than all other models.

\subsection{Deployment Results}
The model obtained from the regression stage was finally employed to estimate PM2.5 at locations not covered by ground sensors. Considering the overall performance of the developed models on both train and test data, the tuned XGBoost model was selected as the final model for PM2.5 map generation. In the regression stage, missing values are not an issue, since sufficient valid samples are available for all types of regression algorithms. They become critical, however, in the deployment stage, where the goal is to produce a raster of PM2.5 estimates. In this stage, instead of interpolating the missing AOD values, the model obtained from the regression phase is applied to estimate PM2.5 from the valid AODs; the locations with invalid AODs are then filled by interpolating the PM2.5 estimates output by the trained regression model.
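The per-day deployment step can be sketched as follows. This is a minimal illustration, assuming a daily 1 km grid held in a DataFrame \texttt{df\_day} with pixel coordinates and the input features, a fitted XGBoost regressor \texttt{model}, a feature list \texttt{features}, and PyKrige's ordinary kriging; all variable and column names are assumptions:

\begin{verbatim}
from pykrige.ok import OrdinaryKriging

# df_day: assumed daily grid with columns "x", "y", "AOD" and the features
valid = df_day["AOD"].notna()
df_day.loc[valid, "PM25"] = model.predict(df_day.loc[valid, features])

# Fill invalid-AOD pixels by kriging the model's PM2.5 estimates
ok = OrdinaryKriging(df_day.loc[valid, "x"].values,
                     df_day.loc[valid, "y"].values,
                     df_day.loc[valid, "PM25"].values,
                     variogram_model="spherical")
z, ss = ok.execute("points",
                   df_day.loc[~valid, "x"].values,
                   df_day.loc[~valid, "y"].values)
df_day.loc[~valid, "PM25"] = z
# The filled grid, combined with the ground stations (quasi-stations plus
# real stations), can then be kriged once more to render the final map.
\end{verbatim}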
Fig. \ref{fig.PMmap}.a depicts the ground stations (orange circles) as well as the locations where PM2.5 values were generated by the developed XGBoost model (black crosses). The produced PM2.5 data can be regarded as new PM2.5 measuring stations (quasi-stations) and thus combined with the ground stations to better capture PM2.5 variations for higher resolution mapping. Finally, a high resolution daily map of PM2.5 is produced by applying an interpolation technique such as kriging to all estimated and observed PM2.5 values. Fig. \ref{fig.PMmap} displays exemplary high resolution (1 km) maps produced by the developed machine learning model on four different dates, selected for the diverse levels of pollution over the city reported by Tehran's Air Quality Control Company (AQCC): Jan. 1, 2018, announced as “Unhealthy” based on the air quality index (AQI); Jan. 2, 2018, as “Unhealthy for sensitive groups”; Jan. 3, 2018, as “Moderate”; and Feb. 25, 2018, as a “Clean” day. The generated PM2.5 maps demonstrate the efficiency of the proposed framework for each of these AQI-based pollution levels. As shown in Fig. \ref{fig.PMmap}, the map of Jan. 1st illustrates that most areas have PM2.5 levels above 73 $\frac{ \mu g}{ m^{3}}$, which confirms the “Unhealthy” pollution level on that date. Jan. 2nd is “Unhealthy for sensitive groups”, which can also be inferred from the produced map: on this date, the southwest of the city still suffers from high PM2.5 levels, but the concentration in most areas is lower than on the previous day. On the third day, the map shows that the PM2.5 level in most of the city (except the west and northeast) is below 24 $\frac{ \mu g}{ m^{3}}$, consistent with the drop of the pollution level to “Moderate” as reported by AQCC. Finally, Fig. \ref{fig.PMmap}.e shows that most of the estimated PM2.5 values are small, representing a “Clean” day. From the maps presented in Fig. \ref{fig.PMmap}, together with the results provided in Tab. \ref{Tab.model}, it can be concluded that the developed multi-stage framework can be successfully employed for daily high resolution PM2.5 map generation over Tehran. Finally, the results of this investigation were compared to previous PM2.5 estimation studies conducted in Tehran. As Tab. \ref{Tab.literature} shows, while this investigation achieves daily 1 km mapping of PM2.5 over Tehran, previous studies at best produced a 3 km resolution daily map, with lower accuracy than achieved in this paper.

\begin{table*}[bt!] \centering \footnotesize \caption{Comparison of the results of this study with previous implementations reported in the literature} \label{Tab.literature}
\begin{tabular}{l ccccccc}
Study& Time &Data, Resolution & Model&RMSE $\frac{ \mu g}{ m^{3}}$& MAE $\frac{ \mu g}{ m^{3}}$ & $ R^{2} $ & Daily PM2.5 MAP \\\hline \toprule
This study & 2013-2019& MAIAC-MODIS, 1 km & XGBoost & 8.97& 6.88& 0.74& 1 km PM2.5 map\\
\citet{atmos10070373}&2015-2018 & DB-DT-MODIS, 3 km& XGBoost& 15.15 & 10.94& 0.67 & Not reported\\
\citet{NABAVI2019889}& 2011-2016& MAIAC-MODIS, 1 km& Random Forest& ---& ---& $ < $ 0.50 & Seasonal map (1 km)\\
\citet{GHOTBI2016333}& March to Nov. 2009& DT-MODIS, 3 km & WRF-Multivariate & 16.91 & ---& 0.73 & 3 km PM2.5 map\\
\end{tabular} \end{table*}

\section{Conclusion}\label{sec.conclusion}
This paper investigated the possibility of PM2.5 estimation from MAIAC AOD data and meteorological information over Tehran. To this end, a framework comprising three main stages was proposed: data preprocessing, regression modeling, and model deployment for generating a high resolution map of PM2.5. During data preprocessing, the effects of several factors on PM2.5 estimation were evaluated, such as the window size for AOD extraction, AOD normalization, the addition of meteorological data, and the role of AOD quality. Regression models from different categories of machine learning techniques were then trained to estimate PM2.5 from the input features. The model performance results illustrated that decision tree ensemble approaches such as Random Forest and XGBoost are the best choice for PM2.5 estimation from AOD and meteorological data. The developed regression model was finally employed to produce the 1 km resolution PM2.5 concentration maps, which could potentially be exploited for monitoring and predicting air quality conditions as well as for detecting the main air pollution sources. Inspection of the generated maps on exemplary days with different pollution levels, based on the air quality index officially reported by the AQCC of Tehran, confirmed the efficiency of the developed framework.
In future work, further efforts will be devoted to handling the remaining challenges of high resolution map generation, such as involving other effective features in PM2.5 modeling, imputing missing AOD values, and improving the modeling performance by developing more advanced machine learning techniques.

\section{Acknowledgments}
The author thanks all who provided the data required for this research: Tehran's Air Quality Control Company (AQCC) for the ground PM2.5 measurements; NASA EarthData for the MAIAC MODIS products; and ECMWF for the meteorological data.

\section{Appendix A. Supplementary data}
Supplementary data to this article can be found online at \textcolor{red}{xxx}

\bibliographystyle{model5-names} \biboptions{authoryear}
2024-02-18T23:41:07.706Z
2022-04-06T02:21:40.000Z
algebraic_stack_train_0000
4,229
9,661
proofpile-arXiv_066-4888
\section{Introduction} In the theory of {\em DG schemes} -- a simplified variant of {\em derived algebraic geometry} -- it is important to have a suitable way to resolve a sheaf of rings by a {\em flat sheaf of DG rings}. A typical problem is this: $X$ is a scheme, and $Y_1, Y_2 \subseteq X$ are closed subschemes. The {\em derived intersection} of $Y_1$ and $Y_2$ is a DG scheme \[ (Y, \mcal{O}_Y) = Y_1 \times_X^{\mrm{R}} Y_2 \] whose underlying topological space is $Y = Y_1 \cap Y_2$, and the structure sheaf \[ \mcal{O}_Y = \mcal{O}_{Y_1} \otimes_{\mcal{O}_X}^{\mrm{L}} \mcal{O}_{Y_2} \] is a suitable sheaf of commutative DG rings on this space. If there exist flat resolutions $\phi_i : \AA_{i} \to \mcal{O}_{Y_i}$, by which we mean that $\AA_{i}$ is a flat commutative DG $\mcal{O}_X$-ring, and $\phi_i$ is a DG ring quasi-isomorphism, then we can take \[ \mcal{O}_{Y} := (\AA_{1} \otimes_{\mcal{O}_X} \AA_{2})|_Y . \] (Of course, it is enough to resolve only one of the tensor factors.)

In case $X$ is an affine scheme, or it is quasi-projective over a nice base ring $\mathbb{K}$, then it is quite easy to produce flat quasi-coherent DG ring resolutions of $\mcal{O}_{Y_i}$, and in this way to construct the sheaf of DG rings $\mcal{O}_Y$. This was already done in the paper \cite{CK} of Ciocan-Fontanine and Kapranov. But in general (for an arbitrary scheme $X$) there does not seem to be an existing method to obtain flat DG ring resolutions as sheaves on $X$ itself. Thus the derived intersection $(Y, \mcal{O}_Y)$ has until now existed only as an object in a much more complicated homotopical setting. See the preprint \cite{Be} of Behrend for one approach, and a survey of the approaches of To\"en et al.\ and of Lurie under ``derived stack'' in \cite{nLab}. Being only an object of a complicated homotopy category, the derived intersection $(Y, \mcal{O}_Y)$ is usually quite difficult to manipulate geometrically, and to form associated structures, such as a derived module category over $\mcal{O}_{Y}$, etc.

The first main innovation in this paper is the use of {\em commutative pseudo-semi-free sheaves of DG rings}. These sheaves enable the formation of flat commutative DG $\mcal{O}_X$-ring resolutions in great generality. But before saying what these sheaves are, we must present a few background concepts. For an open set $U \subseteq X$, let us denote by $\mcal{O}_{U \subseteq X}$ the extension by zero to $X$ of the sheaf $\mcal{O}_{U}$. Suppose $I = \coprod_{n \leq 0} I^n$ is a graded set, and for each $i \in I$ we are given an open set $U_i \subseteq X$. Define \[ \mcal{E}^n := \bigoplus\nolimits_{i \in I^n} \, \mcal{O}_{U_i \subseteq X} \] and \[ \mcal{E} := \bigoplus\nolimits_{n \leq 0} \, \mcal{E}^n . \] We call $\mcal{E}$ the {\em pseudo-free graded $\mcal{O}_X$-module} indexed by $I$. The summand $\mcal{E}^n$ is in degree $n$. Warning: the $\mcal{O}_X$-module $\mcal{E}$ is usually not quasi-coherent! The commutative tensor ring of $\mcal{E}$ over $\mcal{O}_X$ is called a {\em commutative pseudo-free graded ring}, and we denote it by $\mcal{O}_X[I]$. See Section \ref{sec:pseudo-free} for details. It is useful to view $\mcal{O}_X[I]$ as a commutative pseudo-polynomial graded $\mcal{O}_X$-ring, in which the elements $t_i := 1 \in \Gamma(U_i, \mcal{O}_X)$ play the role of variables (and we call them pseudo-generators). For each point $x \in X$ the stalk $\mcal{O}_X[I]_x$ is a genuine commutative polynomial graded $\mcal{O}_{X, x}$-ring, in variables indexed by the graded set $\{ i \in I \mid x \in U_i \}$.
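To illustrate the local structure: if $I = \{ i_1, i_2 \}$, with a pseudo-generator $t_{i_1}$ of degree $0$ on $U_{i_1}$ and a pseudo-generator $t_{i_2}$ of degree $-1$ on $U_{i_2}$, then the stalks of $\mcal{O}_X[I]$ are
\[ \mcal{O}_X[I]_x \cong
\begin{cases}
\mcal{O}_{X, x}[t_{i_1}, t_{i_2}] & \text{if } x \in U_{i_1} \cap U_{i_2} , \\
\mcal{O}_{X, x}[t_{i_1}] & \text{if } x \in U_{i_1} \smallsetminus U_{i_2} , \\
\mcal{O}_{X, x}[t_{i_2}] & \text{if } x \in U_{i_2} \smallsetminus U_{i_1} , \\
\mcal{O}_{X, x} & \text{if } x \notin U_{i_1} \cup U_{i_2} ,
\end{cases} \]
where $t_{i_1}$ is an even polynomial variable, while the odd variable $t_{i_2}$ satisfies $t_{i_2}^2 = 0$.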
A sheaf of commutative DG $\mcal{O}_X$-rings $\til{\AA}$ is called pseudo-semi-free if the graded sheaf of rings $\til{\AA}^{\natural}$, that is gotten by forgetting the differential on $\til{\AA}$, is a commutative pseudo-free graded $\mcal{O}_X$-ring. As a DG $\mcal{O}_X$-module, such $\til{\AA}$ is K-flat. Any sheaf of commutative DG $\mcal{O}_X$-rings $\AA$ admits a commutative pseudo-semi-free DG $\mcal{O}_X$-ring resolution $\til{\AA} \to \AA$. This is Theorem \ref{thm:140}. Furthermore, these resolutions are unique up to suitable homotopies, that we explain below. This brings us to the second main innovation of this paper. Consider the category $\catt{DGR}^{\leq 0}_{\mrm{sc}} / \mcal{O}_X$ of commutative DG $\mcal{O}_X$-rings. We introduce a relation that we call {\em relative quasi-homotopy} on the set of morphisms in this category. See Definition \ref{dfn:233}. This is a congruence, and hence there is the {\em homotopy category}, in the genuine sense, that we denote by $\cat{K}(\catt{DGR}^{\leq 0}_{\mrm{sc}} / \mcal{O}_X)$. Its objects are the same as those of $\catt{DGR}^{\leq 0}_{\mrm{sc}} / \mcal{O}_X$, and its morphisms are the relative quasi-homotopy classes. We can also form the abstract localization of $\catt{DGR}^{\leq 0}_{\mrm{sc}} / \mcal{O}_X$ with respect to the quasi-isomorphisms, and the result is the {\em derived category of commutative DG $\mcal{O}_X$-rings}, that we denote by $\cat{D}(\catt{DGR}^{\leq 0}_{\mrm{sc}} / \mcal{O}_X)$. There is a commutative diagram of functors \begin{equation} \label{eqn:237} \UseTips \xymatrix @C=6ex @R=6ex { \catt{DGR}^{\leq 0}_{\mrm{sc}} / \mcal{O}_X \ar[d]_{\operatorname{P}} \ar[dr]^{\operatorname{Q}} \\ \cat{K}(\catt{DGR}^{\leq 0}_{\mrm{sc}} / \mcal{O}_X) \ar[r]^{\bar{\operatorname{Q}}} & \cat{D}(\catt{DGR}^{\leq 0}_{\mrm{sc}} / \mcal{O}_X) } \end{equation} The functor $\bar{\operatorname{Q}}$ is a {\em right Ore localization} with respect to the set of quasi-iso\-morph\-isms, and it is also {\em faithful}. This gives us very tight control on the morphisms in the derived category. The commutative pseudo-semi-free DG rings have a certain lifting property that makes everything work. See Theorem \ref{thm:141}. However, the commutative pseudo-semi-free DG rings have a built-in finiteness property (basically coming from the fact that only a finite intersection of open sets of $X$ is open), thus preventing them from being ``cofibrant objects''. This seems to indicate that there is no Quillen model structure on $\catt{DGR}^{\leq 0}_{\mrm{sc}} / \mcal{O}_X$. We can now say how we solve the problem of derived intersection. For $i = 1, 2$ we view $\mcal{O}_{Y_i}$ as living in $\catt{DGR}^{\leq 0}_{\mrm{sc}} / \mcal{O}_X$, and we choose pseudo-semi-free resolutions $\AA_i \to \mcal{O}_{Y_i}$. Define the topological space $Y := Y_1 \cap Y_2 \subseteq X$ and the commutative DG $\mcal{O}_X$-ring \[ \mcal{O}_Y := (\AA_1 \otimes_{\mcal{O}_X} \AA_2)|_Y . \] Then the derived intersection of $(Y_1, \mcal{O}_{Y_1})$ and $(Y_2, \mcal{O}_{Y_2})$ is the DG ringed space $(Y, \mcal{O}_Y)$. There is a canonical isomorphism \[ \mcal{O}_Y \cong \mcal{O}_{Y_1} \otimes_{\mcal{O}_X}^{\mrm{L}} \mcal{O}_{Y_2} \] in $\cat{D}(\catt{DGR}^{\leq 0}_{\mrm{sc}} / \mcal{O}_X)$. See Corollary \ref{cor:265} for details. Furthermore, on any affine open set $V \subseteq X$ there is a canonical isomorphism in $\cat{D}(\catt{DGR}^{\leq 0}_{\mrm{sc}} / \mcal{O}_V)$ between $\mcal{O}_Y|_V$ and any quasi-coherent DG sheaf presentation of the derived intersection. 
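To see concretely what such resolutions buy us, here is the simplest affine example, stated only for orientation. Take $X = \operatorname{Spec} \mathbb{K}[x, y]$, and let $Y_1 = Y_2$ be the line $\{ y = 0 \}$, so $Y = Y_1$. On the level of rings, the Koszul complex
\[ \til{B} := \bigl( 0 \to \mathbb{K}[x, y] \xrightarrow{\, \cdot \, y \,} \mathbb{K}[x, y] \to 0 \bigr) , \]
concentrated in degrees $-1$ and $0$, with the degree $-1$ generator squaring to zero, is a semi-free commutative DG ring resolution of $B := \mathbb{K}[x, y] / (y)$, because $y$ is a nonzerodivisor. Therefore
\[ B \otimes^{\mrm{L}}_{\mathbb{K}[x, y]} B \cong \til{B} \otimes_{\mathbb{K}[x, y]} B = \bigl( 0 \to B \xrightarrow{\, 0 \,} B \to 0 \bigr) , \]
whose cohomology is $\operatorname{H}^0 \cong B$ and $\operatorname{H}^{-1} \cong B \neq 0$. The derived self-intersection of the line thus carries nonzero cohomology in degree $-1$, which the ordinary scheme-theoretic intersection does not see.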
These constructions are explained in Sections \ref{sec:der-inters} and \ref{sec:alg-geom} of the paper. When the scheme $X$ is quasi-projective, it is not hard to see that our approach is compatible with that of \cite{CK}. For more general $X$ we did not attempt a comparison, but it is almost certain that our approach is compatible with those of Behrend, To\"en and Lurie.

It stands to reason that the method outlined in this paper should allow a clean construction of the {\em cotangent complex} of $X$; see Remark \ref{rem:255}. Our method should also permit a geometric version of Shaul's {\em derived completion of DG rings} from \cite{Sh}, but we do not have a formulation of it yet. Presumably our work can be extended without too much difficulty to sites that are more general than the Zariski topology of a scheme (e.g.\ to algebraic spaces, and maybe even to algebraic stacks). We leave this exploration aside for the time being.

In the body of the paper (before Section \ref{sec:alg-geom}) we do not work with schemes, but rather with {\em topological spaces} in general. Indeed, the natural geometric object to which our approach applies is a {\em commutative DG ringed space}, which is a pair $(X, \AA)$, consisting of a topological space $X$ and a sheaf of commutative DG rings $\AA$ on it. Moreover, there is no need for a special base ring, such as a field of characteristic $0$; our constructions are valid over $\mathbb{Z}$.

The present paper is only a preview, meant to convey our new ideas on this subject. Only a few proofs are given here (and some of them are just partial proofs). Our most important result is Theorem \ref{thm:140} on the existence of pseudo-semi-free resolutions, and for that we provide a sketch of a proof (the beginning of a full proof, and an indication of how to complete it). Understanding the geometric principle of the proof of Theorem \ref{thm:140}, and combining it with the proofs of several algebraic results in \cite{Ye2}, should presumably allow experts to write their own proofs of the rest of the theorems in this paper. At any rate, we intend to publish a complete account of our approach in the future. Until then, we welcome feedback from readers, with suggestions of proofs, of better results, and also of counterexamples and refutations, in case there should be any...

\medskip \noindent {\bf Acknowledgments.} I wish to thank Liran Shaul, Rishy Vyas, Vladimir Hinich and Donald Stanley for discussions.

\section{Pseudo-Free Sheaves of Commutative Graded Rings} \label{sec:pseudo-free} Let us fix a nonzero commutative base ring $\mathbb{K}$. For instance, $\mathbb{K}$ could be a field of characteristic $0$; or it could be the ring of integers $\mathbb{Z}$. Let $X$ be a topological space. The constant sheaf on $X$ with values in $\mathbb{K}$ is $\mathbb{K}_X$. The category of $\mathbb{K}_X$-modules (i.e.\ sheaves of $\mathbb{K}$-modules on $X$) is $\cat{M}(\mathbb{K}_X) = \cat{Mod} \mathbb{K}_X$.

\begin{dfn} \label{dfn:240} A {\em sheaf of commutative graded $\mathbb{K}_X$-rings} is a sheaf of graded rings $\AA = \bigoplus_{m \leq 0} \AA^m$, together with a homomorphism $\mathbb{K}_X \to \AA^0$, such that $\AA$ has the strong commutativity property: any local sections $a \in \AA^m$ and $b \in \AA^n$ satisfy $b \,{\cdot}\, a = (-1)^{m n} \,{\cdot}\, a \,{\cdot}\, b$, and $a \,{\cdot}\, a = 0$ if $m$ is odd. \end{dfn}

The category of sheaves of commutative graded $\mathbb{K}_X$-rings is denoted by \linebreak $\catt{GR}^{\leq 0}_{\mrm{sc}} / \mathbb{K}_X$.
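Let us note why the condition $a \,{\cdot}\, a = 0$ for odd $m$ is imposed separately: for a local section $a$ of odd degree $m$, graded commutativity alone gives
\[ a \,{\cdot}\, a = (-1)^{m \,{\cdot}\, m} \,{\cdot}\, a \,{\cdot}\, a = - a \,{\cdot}\, a , \]
i.e.\ $2 \,{\cdot}\, a \,{\cdot}\, a = 0$. This forces $a \,{\cdot}\, a = 0$ when $2$ is invertible in $\mathbb{K}$, but not over a general base ring such as $\mathbb{K} = \mathbb{Z}$.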
Suppose $U \subseteq X$ is an open set, with inclusion morphism $g : U \to X$. For any $\mathbb{K}_U$-module $\mcal{N}$ its extension by zero to $X$ is the $\mathbb{K}_X$-module $g_!(\mcal{N})$; and for any $\mathbb{K}_X$-module $\mcal{M}$ its restriction to $U$ is the $\mathbb{K}_U$-module $g^{-1}(\mcal{M}) = \mcal{M}|_U$. These operations are adjoint: there is a canonical isomorphism \begin{equation} \label{eqn:221} \operatorname{Hom}_{\mathbb{K}_X} \bigl( g_!(\mcal{N}), \mcal{M} \bigr) \cong \operatorname{Hom}_{\mathbb{K}_U} \bigl( \mcal{N}, g^{-1}(\mcal{M}) \bigr) \end{equation} in $\cat{M}(\mathbb{K})$. See \cite[Section II.1]{Ha}.

\begin{dfn} \label{dfn:220} Let $U \subseteq X$ be an open set, with inclusion morphism $g : U \to X$. The {\em pseudo-free $\mathbb{K}_X$-module of pseudo-rank $1$ and pseudo-support $U$} is the $\mathbb{K}_X$-module \[ \mathbb{K}_{U \subseteq X} := g_!(\mathbb{K}_U) . \] The element \[ t_U := 1 \in \Gamma(U, \mathbb{K}_{U \subseteq X}) \cong \Gamma(U, \mathbb{K}_X) \] is called the {\em pseudo-free generator} of the $\mathbb{K}_X$-module $\mathbb{K}_{U \subseteq X}$. \end{dfn}

Taking $\mcal{N} = \mathbb{K}_{U}$ in formula (\ref{eqn:221}), we have canonical isomorphisms \begin{equation} \label{eqn:223} \operatorname{Hom}_{\mathbb{K}_X} (\mathbb{K}_{U \subseteq X}, \mcal{M}) \cong \operatorname{Hom}_{\mathbb{K}_U} \bigl( \mathbb{K}_{U}, g^{-1}(\mcal{M}) \bigr) \cong \Gamma(U, \mcal{M}) \end{equation} in $\cat{M}(\mathbb{K})$. The pseudo-free generator $t_U$ can be interpreted as a homomorphism $t_U : \mathbb{K}_{U \subseteq X} \to \mathbb{K}_X$ in $\cat{M}(\mathbb{K}_X)$. This is actually an injective homomorphism, and thus we can view $\mathbb{K}_{U \subseteq X}$ as an {\em ideal sheaf} in $\mathbb{K}_X$. It is the ideal sheaf ``pseudo-generated'' by $t_U$. For a point $x \in X$ we denote by $\mathbb{K}_{U \subseteq X, x}$ the stalk of the sheaf $\mathbb{K}_{U \subseteq X}$ at $x$. If $x \in U$, then the stalk $\mathbb{K}_{U \subseteq X, x}$ is a free $\mathbb{K}$-module of rank $1$ with basis $t_U$. But if $x \notin U$ then $\mathbb{K}_{U \subseteq X, x} = 0$. If $U' \subseteq X$ is another open set, then \[ \mathbb{K}_{U \subseteq X} \otimes_{\mathbb{K}_X} \mathbb{K}_{U' \subseteq X} \cong \mathbb{K}_{U \cap U' \subseteq X} \] canonically as $\mathbb{K}_X$-modules. The pseudo-free generators multiply: \[ t_U \otimes t_{U'} \mapsto t_{U \cap U'} . \] The pseudo-free sheaves were used to great effect by Grothendieck in \cite[Section II.7]{RD}. See also our paper \cite[Section 3]{Ye1}, where the name ``pseudo-free'' was first used.

\begin{dfn} \label{dfn:210} A {\em generator specification} on $X$ is a triple \[ \bigl( I, \{ U_i \}_{i \in I}, \{ n_i \}_{i \in I} \bigr) \] consisting of a set $I$, a collection $\{ U_i \}_{i \in I}$ of open sets of $X$, and a collection $\{ n_i \}_{i \in I}$ of nonpositive integers $n_i$. \end{dfn}

We often refer to the generator specification just as $I$, leaving the rest of the ingredients implicit. The numbers $n_i$ are called the cohomological degrees. For every index $i$ there is the pseudo-free generator \begin{equation} \label{eqn:235} t_i := t_{U_i} \in \Gamma(U_i, \mathbb{K}_{U_i \subseteq X}) . \end{equation} Suppose a generator specification $I$ is given. For any $n$ let \begin{equation} \label{eqn:241} I^n := \{ i \in I \mid n_i = n \} . \end{equation} Thus $I = \coprod_n I^n$, so it is a graded set. For any $x \in X$ we let \begin{equation} \label{eqn:243} I_x := \{ i \in I \mid x \in U_i \} .
\end{equation} \begin{dfn} \label{dfn:241} Given a generator specification $I$, the {\em graded pseudo-free $\mathbb{K}_X$-mod\-ule pseudo-generated by $I$} is \[ \mcal{E} := \bigoplus_{n \leq 0} \, \mcal{E}^n , \] where for every $n$ the graded component of cohomological degree $n$ is \[ \mcal{E}^n := \bigoplus_{i \in I^n} \, \mathbb{K}_{U_i \subseteq X} . \] \end{dfn} Note that for any point $x \in X$ the stalk $\mcal{E}_x$ is a graded free $\mathbb{K}$-module, with basis indexed by the graded set $I_x$. \begin{dfn} \label{dfn:242} The {\em noncommutative pseudo-free graded $\mathbb{K}_X$-ring} pseudo-ge\-ne\-rated by $I$ is \[ \mathbb{K}_X \bra{I} := \bigoplus_{l \geq 0} \, \mcal{E} \otimes_{\mathbb{K}_X} \cdots \otimes_{\mathbb{K}_X} \mcal{E} , \] where in the $l$-th summand there are $l$ tensor factors. The multiplication is the tensor product. \end{dfn} Note that $\mathbb{K}_X \bra{I}$ is actually bigraded: it has the tensor grading $l$ and the cohomological grading $n$; but we are only interested in the cohomological grading. \begin{dfn} \label{dfn:211} Let $I$ be a generator specification. The {\em commutative pseudo-free graded $\mathbb{K}_X$-ring} pseudo-generated by $I$ is the quotient $\mathbb{K}_X[I]$ of the $\mathbb{K}_X$-ring $\mathbb{K}_X \bra{I}$, modulo the two-sided ideal sheaf pseudo-generated by the local sections \[ t_{i} \,{\cdot}\, t_{j} - (-1)^{n_i \,{\cdot}\, n_j} \,{\cdot}\, t_{j} \,{\cdot}\, t_{i} \] for all $i, j \in I$, and by the local sections $t_{i} \,{\cdot}\, t_{i}$ for all $i$ such that $n_i$ is odd. \end{dfn} The homogeneous component of $\mathbb{K}_X[I]$ of cohomological degree $n$ is denoted by $\mathbb{K}_X[I]^n$. Thus \begin{equation} \label{eqn:244} \mathbb{K}_X[I] = \bigoplus_{n \leq 0} \, \mathbb{K}_X[I]^n . \end{equation} \begin{prop} \label{prop:240} For any point $x \in X$ there is a canonical graded $\mathbb{K}$-ring isomorphism \[ \mathbb{K}_X[I]_x \cong \mathbb{K}[ I_x] , \] where $\mathbb{K}[ I_x]$ is the commutative graded polynomial ring on the collection of graded variables indexed by the graded set $I_x$. \end{prop} \begin{cor} \label{cor:240} For each $n$ the sheaf $\mathbb{K}_X[I]^n$ is flat over $\mathbb{K}_X$. \end{cor} \begin{prop} \label{prop:235} Let $I$ be a generator specification, let $\AA \in \catt{GR}^{\leq 0}_{\mrm{sc}} / \mathbb{K}_X$, and for every $i \in I$ let $a_i \in \Gamma(U_i, \AA^{n_i})$. Then there is a unique homomorphism $\phi : \mathbb{K}_X[I] \to \AA$ in $\catt{GR}^{\leq 0}_{\mrm{sc}} / \mathbb{K}_X$ such that $\phi(t_i) = a_i$. \end{prop} \begin{proof} Let $\mcal{E}$ be the pseudo-free $\mathbb{K}_X$-module pseudo-generated by $I$. The adjunction property (\ref{eqn:223}) says that there is a unique homomorphism $\phi : \mcal{E} \to \AA$ in $\cat{M}(\mathbb{K}_X)$ such that $\phi(t_i) = a_i$ on $U_i$. This extends uniquely to a homomorphism of $\mathbb{K}_X$-rings $\phi : \mathbb{K}_X \bra{I} \to \AA$ by the universal property of the tensor product. To be explicit, any pseudo-monomial \[ t_{i_1} \otimes \cdots \otimes t_{i_l} \in \Gamma \bigl( U_{i_1} \cap \cdots \cap U_{i_l}, \mathbb{K}_X \bra{I} \bigr) \] goes to the element \[ a_{i_1} \cdots a_{i_l} \in \Gamma \bigl( U_{i_1} \cap \cdots \cap U_{i_l}, \AA \bigr) . \] Because $\AA$ is commutative, the two-sided ideal of graded commutators goes to zero, and therefore there is an induced homomorphism $\phi : \mathbb{K}_X[I] \to \AA$. The uniqueness is clear. 
\end{proof}

\begin{prop} \label{prop:236} Let $I$ be a generator specification, let $\AA \in \catt{GR}^{\leq 0}_{\mrm{sc}} / \mathbb{K}_X$, and define $\mcal{B} := \AA \otimes_{\mathbb{K}_X} \mathbb{K}_X[I]$. Suppose for every $i \in I$ we are given an element $b_i \in \Gamma(U_i, \mcal{B}^{n_i + 1})$.
\begin{enumerate}
\item There is a unique derivation $\d : \mcal{B} \to \mcal{B}$ of degree $+1$ that extends the differential of $\AA$ and such that $\d(t_i) = b_i$.
\item If $\d(b_i) = 0$ for all $i$, then $\d \circ \d = 0$.
\end{enumerate}
\end{prop}

\begin{proof} Like the previous proof, combined with \cite[Lemma 3.20]{Ye2}. \end{proof}

\section{Pseudo-Semi-Free Sheaves of Commutative DG Rings} Again $X$ is a topological space.

\begin{dfn} \label{dfn:245} A {\em commutative DG $\mathbb{K}_X$-ring} is a sheaf of commutative graded $\mathbb{K}_X$-rings $\AA = \bigoplus_{i \leq 0} \AA^i$, as in Definition \ref{dfn:240}, with a $\mathbb{K}_X$-linear differential $\d$ of degree $+1$ that satisfies the graded Leibniz rule. A homomorphism of DG $\mathbb{K}_X$-rings $\phi : \AA \to \mcal{B}$ is a homomorphism of sheaves that respects the DG $\mathbb{K}_X$-ring structure. The category of commutative DG $\mathbb{K}_X$-rings is denoted by $\catt{DGR}^{\leq 0}_{\mrm{sc}} / \mathbb{K}_X$. \end{dfn}

In more conventional language, a commutative DG $\mathbb{K}_X$-ring $\AA$ would be called a sheaf of unital associative commutative nonpositive cochain differential graded $\mathbb{K}$-algebras on $X$. Commutative rings are viewed as DG rings concentrated in degree $0$.

\begin{dfn} \label{dfn:250} A {\em commutative DG ringed space} over $\mathbb{K}$ is a pair $(X, \AA)$, where $X$ is a topological space, and $\AA$ is a commutative DG $\mathbb{K}_X$-ring. \end{dfn}

\begin{dfn} \label{dfn:246} Let $(X, \AA)$ be a commutative DG ringed space over $\mathbb{K}$. A {\em commutative DG $\AA$-ring} is a pair $(\mcal{B}, \phi)$, where $\mcal{B} \in \catt{DGR}^{\leq 0}_{\mrm{sc}} / \mathbb{K}_X$, and $\phi : \AA \to \mcal{B}$ is a homomorphism in $\catt{DGR}^{\leq 0}_{\mrm{sc}} / \mathbb{K}_X$. The morphisms between commutative DG $\AA$-rings are the obvious ones. The resulting category is denoted by $\catt{DGR}^{\leq 0}_{\mrm{sc}} / \AA$. \end{dfn}

To any $\AA \in \catt{DGR}^{\leq 0}_{\mrm{sc}} / \mathbb{K}_X$ we can assign its cohomology $\operatorname{H}(\AA)$, which is a graded $\mathbb{K}_X$-ring. Note that $\operatorname{H}(\AA)$ is the sheaf associated to the presheaf $U \mapsto \operatorname{H}(\Gamma(U, \AA))$. Cohomology is a functor \[ \operatorname{H} : \catt{DGR}^{\leq 0}_{\mrm{sc}} / \mathbb{K}_X \to \catt{GR}^{\leq 0}_{\mrm{sc}} / \mathbb{K}_X . \] A homomorphism $\phi : \AA \to \mcal{B}$ is called a {\em quasi-isomorphism} if $\operatorname{H}(\phi)$ is an isomorphism. Let $\AA$ be a commutative DG $\mathbb{K}_X$-ring. We denote by $\AA^{\natural}$ the graded $\mathbb{K}_X$-ring gotten by forgetting the differentials. The commutative pseudo-free graded $\mathbb{K}_X$-ring pseudo-generated by a generator specification $I$ was introduced in Definition \ref{dfn:211}.

\begin{dfn} Let $\phi : \AA \to \mcal{B}$ be a homomorphism in $\catt{DGR}^{\leq 0}_{\mrm{sc}} / \mathbb{K}_X$.
We say that $\phi$ is a {\em pseudo-semi-free DG ring homomorphism} in $\catt{DGR}^{\leq 0}_{\mrm{sc}} / \mathbb{K}_X$, and that $\mcal{B}$ is a {\em commutative pseudo-semi-free DG $\AA$-ring}, if there is an isomorphism \[ \mcal{B}^{\natural} \cong \AA^{\natural} \otimes_{\mathbb{K}_X} \mathbb{K}_X[I] \] of graded $\AA^{\natural}$-rings, for some generator specification $I$. \end{dfn} \begin{prop} \label{prop:180} Let $\mcal{B}$ be a pseudo-semi-free commutative DG $\AA$-ring. Then $\mcal{B}$ is K-flat as a DG $\AA$-module. \end{prop} \begin{dfn} \label{dfn:181} Let $(X, \AA)$ be a commutative DG ringed space, and let $\mcal{B}$ be a commutative DG $\AA$-ring. A {\em pseudo-semi-free commutative DG ring resolution} of $\mcal{B}$ over $\AA$, or a {\em pseudo-semi-free resolution of $\mcal{B}$ in $\catt{DGR}^{\leq 0}_{\mrm{sc}} / \AA$}, is a pair $(\til{\mcal{B}}, \psi)$, where $\til{\mcal{B}}$ is a pseudo-semi-free commutative DG $\AA$-ring, and $\psi : \til{\mcal{B}} \to \mcal{B}$ is a surjective quasi-isomorphism of DG $\AA$-rings. \end{dfn} Here is the most important result of this paper. \begin{thm} \label{thm:140} Let $(X, \AA)$ be a commutative DG ringed space, and let $\mcal{B}$ be a commutative DG $\AA$-ring. There exists a commutative pseudo-semi-free DG ring resolution of $\mcal{B}$ over $\AA$. \end{thm} \begin{proof}[Sketch of Proof] This is a geometrization of the proof of \cite[Theorem 3.21(1)]{Ye2}, replacing variables by pseudo-generators. Instead of \cite[Lemmas 3.19 and 3.20]{Ye2}, here we use Propositions \ref{prop:235} and \ref{prop:236}. As in the proof of \cite[Theorem 3.21(1)]{Ye2}, we shall construct an ascending sequence $F_0(\til{\mcal{B}}) \subseteq F_1(\til{\mcal{B}}) \subseteq \cdots$ of pseudo-semi-free DG rings in $\catt{DGR}^{\leq 0}_{\mrm{sc}} / \AA$, together with a compatible sequence of homomorphisms $\phi_q : F_q(\til{\mcal{B}}) \to \mcal{B}$. Moreover, there will be an ascending sequence $F_0(I) \subseteq F_1(I) \subseteq \cdots$ of generator specifications, and compatible isomorphisms \[ F_q(\til{\mcal{B}})^{\natural} \cong \AA^{\natural} \otimes_{\mathbb{K}_X} \mathbb{K}_X[F_q(I)] \] of graded $\AA^{\natural}$-rings. The following conditions will be satisfied: \begin{enumerate} \rmitem{i} The graded sheaf homomorphisms $\phi_q : F_q(\til{\mcal{B}}) \to \mcal{B}$, $\operatorname{B}(\phi_q) : \operatorname{B}(F_q(\til{\mcal{B}})) \to \operatorname{B}(\mcal{B})$ and $\operatorname{H}(\phi_q) : \operatorname{H}(F_q(\til{\mcal{B}})) \to \operatorname{H}(\mcal{B})$ are surjective in degrees $\geq -q$. \rmitem{ii} The graded sheaf homomorphism $\operatorname{H}(\phi_q) : \operatorname{H}(F_q(\til{\mcal{B}})) \to \operatorname{H}(\mcal{B})$ is bijective in degrees $\geq -q + 1$. \end{enumerate} In condition (i), $\operatorname{B}(-)$ denotes coboundaries. The DG ring \[ \til{\mcal{B}} := \lim_{q \to} F_q(\til{\mcal{B}}) \] and the homomorphism \[ \phi := \lim_{q \to} \phi_q : \til{\mcal{B}} \to \mcal{B} \] will have the desired properties. We shall just start the actual proof, for $q = 0$, imitating the proof of \cite[Theorem 3.21(1)]{Ye2}, and emphasizing the geometric considerations that arise here. For any point $x \in X$ we have the ring homomorphism $\AA^0_x \to \mcal{B}^0_x$. Let $\operatorname{B}^0(\mcal{B}^0_x)$ be the module of $0$-coboundaries. 
We choose a collection $\{ c''_k \}_{k \in K''_{0, x}}$ of elements in $\mcal{B}^{-1}_x$, indexed by a set $K''_{0, x}$, such that the collection $\{ \d(c''_k) \}_{k \in K''_{0, x}}$ generates $\operatorname{B}^0(\mcal{B}^0_x)$ as an $\AA^0_x$-module. Let $J''_{0, x}$ be another set, with a bijection $\d : K''_{0, x} \to J''_{0, x}$. Define $b''_{j} := \d(c''_k) \in \mcal{B}^0_x$ for any $k \in K''_{0, x}$ and $j = \d(k) \in J''_{0, x}$, so we have a collection $\{ b''_j \}_{j \in J''_{0, x}}$ of elements of $\mcal{B}^{0}_x$. Next choose a collection $\{ b'_j \}_{j \in J'_{0, x}}$ of elements in $\mcal{B}^{0}_x$, indexed by a set $J'_{0, x}$, such that the collection $\{ b''_j \}_{j \in J''_{0, x}} \cup \{ b'_j \}_{j \in J'_{0, x}}$ generates $\mcal{B}^0_x$ as an $\AA^0_x$-ring. The indexing sets $J'_{0, x}$, $J''_{0, x}$ and $K''_{0, x}$ here correspond, respectively, to the indexing sets $Y'_0$, $Y''_{0}$ and $Z''_{0}$ in the proof of \cite[Theorem 3.21(1)]{Ye2}. Indeed, they would be the same if $X = \{ x \}$, a space with a single point in it.

For any index $k \in K''_{0, x}$ there is an open neighborhood $U''_{k}$ of $x$ such that the element $c''_k \in \mcal{B}^{-1}_x$ extends to an element $c''_k \in \Gamma(U''_k, \mcal{B}^{-1})$. This choice also gives us an element \[ b''_j := \d(c''_k) \in \Gamma(U''_k, \mcal{B}^0) \] for $j = \d(k) \in J''_{0, x}$. Likewise, for any index $j \in J'_{0, x}$ there is an open neighborhood $U'_{j}$ of $x$ such that the element $b'_j \in \mcal{B}^{0}_x$ extends to an element $b'_j \in \Gamma(U'_j, \mcal{B}^{0})$.

Define the set \[ F_0(I_x) := J'_{0, x} \sqcup J''_{0, x} \sqcup K''_{0, x} . \] For $i \in F_0(I_x)$ define the open set $U_i := U''_{k}$ if either $i = k \in K''_{0, x}$ or $i = \d(k) \in J''_{0, x}$; and define $U_i := U'_{j}$ if $i = j \in J'_{0, x}$. Define the integer $n_i := -1$ if $i = k \in K''_{0, x}$; and define $n_i := 0$ if $i \in J''_{0, x}$ or $i \in J'_{0, x}$. Thus we have a generator specification \begin{equation} \label{eqn:260} \bigl( F_0(I_x), \{ U_i \}_{i \in F_0(I_x)}, \{ n_i \}_{i \in F_0(I_x)} \bigr) \end{equation} ``around $x$''. Taking the union of (\ref{eqn:260}) over all points $x \in X$ we get a ``global'' generator specification \[ \bigl( F_0(I), \{ U_i \}_{i \in F_0(I)}, \{ n_i \}_{i \in F_0(I)} \bigr) . \]

Define the DG ring \[ F_0(\til{\mcal{B}}) := \AA \otimes_{\mathbb{K}_X} \mathbb{K}_X[F_0(I)] \] with differential $\d(t_k) := t_{\d(k)}$ for any index $k \in K''_{0, x} \subseteq F_0(I_x) \subseteq F_0(I)$. This is possible by Proposition \ref{prop:236}. According to Proposition \ref{prop:235} there is a homomorphism $\phi_0 : F_0(\til{\mcal{B}}) \to \mcal{B}$ of DG $\AA$-rings. Checking at stalks we see that $\phi_0$ satisfies condition (i) above for $q = 0$; condition (ii) is trivial for $q = 0$. At this stage we can shrink the indexing set $F_0(I)$, as long as condition (i) holds for $q = 0$. See Remark \ref{rem:260} regarding the possibility of making the set $F_0(I)$ finite.

From here on the construction of $F_q(\til{\mcal{B}})$ for $q \geq 1$ continues along the lines of the proof of \cite[Theorem 3.21(1)]{Ye2}, with geometric arguments very similar to those we used above: for every $q$ we go to stalks at points, choose elements, and extend them to open sets. \end{proof}

\begin{dfn} \label{dfn:230} Let $\eta : \AA \to \AA^+$ be a homomorphism in $\catt{DGR}^{\leq 0}_{\mrm{sc}} / \mathbb{K}_X$.
We say that $\eta$ is a {\em split acyclic pseudo-semi-free homomorphism} if $\eta$ is pseudo-semi-free and a quasi-isomorphism, and there is a homomorphism $\epsilon : \AA^+ \to \AA$ in $\catt{DGR}^{\leq 0}_{\mrm{sc}} / \mathbb{K}_X$ such that $\epsilon \circ \eta = \operatorname{id}_{\AA}$. \end{dfn}

\begin{thm} \label{thm:215} Suppose $\phi : \AA \to \mcal{B}$ is a quasi-isomorphism in $\catt{DGR}^{\leq 0}_{\mrm{sc}} / \mathbb{K}_X$. Then $\phi$ can be factored as $\phi = \phi^+ \circ \eta$, where $\phi^+ : \AA^{+} \to \mcal{B}$ is a surjective quasi-isomorphism, and $\eta : \AA \to \AA^{+}$ is a split acyclic pseudo-semi-free homomorphism. \end{thm}

In a commutative diagram: \[ \UseTips \xymatrix @C=8ex @R=8ex { & \AA^+ \ar@{->>}[dr]^{\phi^+} \ar@{->>}[dl]_{\epsilon} \\ \AA & \AA \ar@{>->}[u]_(0.4){\eta} \ar@{>->>}[l]_{\operatorname{id}} \ar[r]^{\phi} & \mcal{B} } \]

\begin{proof}[Sketch of Proof] We actually prove more: there is a split contractible commutative pseudo-semi-free DG ring $\mcal{C}$, and a homomorphism $\mcal{C} \to \mcal{B}$, such that $\AA^+ = \linebreak \AA \otimes_{\mathbb{K}_X} \mcal{C}$. \end{proof}

\begin{thm} \label{thm:141} Let $\AA \in \catt{DGR}^{\leq 0}_{\mrm{sc}} / \mathbb{K}_X$, let $\mcal{B} \in \catt{DGR}^{\leq 0}_{\mrm{sc}} / \AA$, and for $i = 0, 1$ let $\phi_i : \til{\mcal{B}}_i \to \mcal{B}$ be quasi-isomorphisms in $\catt{DGR}^{\leq 0}_{\mrm{sc}} / \AA$. Then there exists a pseudo-semi-free DG ring $\til{\mcal{B}}' \in \catt{DGR}^{\leq 0}_{\mrm{sc}} / \AA$, together with quasi-isomorphisms $\psi_i : \til{\mcal{B}}' \to \til{\mcal{B}}_i$ in $\catt{DGR}^{\leq 0}_{\mrm{sc}} / \AA$, such that $\phi_0 \circ \psi_0 = \phi_1 \circ \psi_1$. \end{thm}

The statement is shown in the next commutative diagram in the category \linebreak $\catt{DGR}^{\leq 0}_{\mrm{sc}} / \AA$. \[ \UseTips \xymatrix @C=4ex @R=4ex { & \til{\mcal{B}}' \ar@{-->}[dl]_{\psi_0} \ar@{-->}[dr]^{\psi_1} \\ \til{\mcal{B}}_0 \ar@{->}[dr]_{\phi_0} & & \til{\mcal{B}}_1 \ar@{->}[dl]^{\phi_1} \\ & \mcal{B} } \]

\begin{proof}[Sketch of Proof] This is similar to the proof of \cite[Theorem 3.22]{Ye2}. But the \linebreak pseudo-semi-free DG ring $\til{\mcal{B}}'$ has to be tailored, in terms of the open sets involved, to the DG rings $\til{\mcal{B}}_0$ and $\til{\mcal{B}}_1$. \end{proof}

Of course, by induction, this can be extended to any {\em finite} number of quasi-iso\-morph\-isms $\phi_i : \til{\mcal{B}}_i \to \mcal{B}$.

\begin{rem} \label{rem:180} The construction of the DG ring $\til{\mcal{B}}'$ in Theorem \ref{thm:141} involves refinement. There is finiteness built into it (since open sets allow only finite intersections). In general, a single $\til{\mcal{B}}'$ will not work for an infinite collection of quasi-isomorphisms $\phi_i : \til{\mcal{B}}_i \to \mcal{B}$. This seems to indicate that there are no cofibrant objects in $\catt{DGR}^{\leq 0}_{\mrm{sc}} / \mathbb{K}_X$, and thus there is no Quillen model structure! \end{rem}

\section{Relative Quasi-Homotopies and the Derived Category} \label{sec:quasi-hom} The next definition is a variant of the left homotopy from Quillen theory (see \cite{Ho}). The DG ring $\mcal{B}^+$ plays the role of a {\em cylinder object}.

\begin{dfn} \label{dfn:232} Let $\AA \in \catt{DGR}^{\leq 0}_{\mrm{sc}} / \mathbb{K}_X$, and let $\phi_0, \phi_1 : \mcal{B} \to \mcal{C}$ be homomorphisms in $\catt{DGR}^{\leq 0}_{\mrm{sc}} / \AA$.
A {\em homotopy between $\phi_0$ and $\phi_1$ relative to $\AA$} is a commutative diagram \[ \UseTips \xymatrix @C=8ex @R=6ex { \mcal{B} & \mcal{B} \otimes_{\AA} \mcal{B} \ar@{->>}[l]_(0.6){\mu} \ar[r]^(0.6){\phi_0 \, \otimes \, \phi_1} \ar[d]_{\eta} & \mcal{C} \\ & \mcal{B}^{+} \ar@{->>}[ul]^{\epsilon} \ar[ur]_{\phi} } \] in $\catt{DGR}^{\leq 0}_{\mrm{sc}} / \AA$, where $\mu$ is the multiplication homomorphism, and $\epsilon$ is a quasi-iso\-morph\-ism. If a homotopy exists, then we say that $\phi_0$ and $\phi_1$ are {\em homotopic relative to $\AA$}. \end{dfn}

\begin{dfn} \label{dfn:233} Let $\phi_0, \phi_1 : \mcal{B} \to \mcal{C}$ be homomorphisms in $\catt{DGR}^{\leq 0}_{\mrm{sc}} / \AA$. The homomorphisms $\phi_0$ and $\phi_1$ are said to be {\em quasi-homotopic relative to $\AA$} if there is a quasi-isomorphism $\psi : \til{\mcal{B}} \to \mcal{B}$ in $\catt{DGR}^{\leq 0}_{\mrm{sc}} / \AA$ such that $\phi_0 \circ \psi$ and $\phi_1 \circ \psi$ are homotopic relative to $\AA$, in the sense of Definition \ref{dfn:232}. This relation on morphisms in $\catt{DGR}^{\leq 0}_{\mrm{sc}} / \AA$ is called {\em relative quasi-homotopy}. \end{dfn}

\[ \UseTips \xymatrix @C=8ex @R=6ex { \til{\mcal{B}} \ar[r]^{\psi} \ar@(ur,ul)[rr]^{\phi_i \, \circ \, \psi} & \mcal{B} \ar[r]^{\phi_i} & \mcal{C} } \]

\begin{thm} \label{thm:245} Suppose $\til{\phi}_0, \til{\phi}_1 : \til{\mcal{B}} \to \til{\mcal{C}}$, $\phi_0, \phi_1 : \til{\mcal{B}} \to \mcal{C}$ and $\sigma : \til{\mcal{C}} \to \mcal{C}$ are homomorphisms in $\catt{DGR}^{\leq 0}_{\mrm{sc}} / \AA$, such that $\phi_i = \sigma \circ \til{\phi}_i$, $\sigma$ is a quasi-isomorphism, and the homomorphisms $\phi_0$ and $\phi_1$ are homotopic relative to $\AA$. Then there is a pseudo-semi-free resolution $\til{\psi} : \til{\mcal{B}}' \to \til{\mcal{B}}$ in $\catt{DGR}^{\leq 0}_{\mrm{sc}} / \AA$, such that $\til{\phi}_0 \circ \til{\psi}$ and $\til{\phi}_1 \circ \til{\psi}$ are homotopic relative to $\AA$. \end{thm}

Here are the commutative diagrams in $\catt{DGR}^{\leq 0}_{\mrm{sc}} / \AA$, for $i = 0, 1$~: \[ \UseTips \xymatrix @C=8ex @R=6ex { \til{\mcal{B}}' \ar@{-->}[r]^{\til{\psi}} \ar@{-->}[dr]_{\til{\phi_i} \, \circ \, \til{\psi}} & \til{\mcal{B}} \ar[d]^{\til{\phi}_i} \ar[dr]^{\phi_i} \\ & \til{\mcal{C}} \ar[r]^{\sigma} & \mcal{C} } \]

\begin{thm} \label{thm:232} Let $\AA \in \catt{DGR}^{\leq 0}_{\mrm{sc}} / \mathbb{K}_X$. The relation of relative quasi-homotopy is a congruence on the category $\catt{DGR}^{\leq 0}_{\mrm{sc}} / \AA$. \end{thm}

In order to have a visible distinction from the Quillen model category notation, below we choose notation that resembles the Grothendieck notation in \cite{RD}.

\begin{dfn} \label{dfn:234} Let $\AA \in \catt{DGR}^{\leq 0}_{\mrm{sc}} / \mathbb{K}_X$. The {\em homotopy category} of $\catt{DGR}^{\leq 0}_{\mrm{sc}} / \AA$ is its quotient category modulo the relative quasi-homotopy congruence, and we denote it by $\cat{K}(\catt{DGR}^{\leq 0}_{\mrm{sc}} / \AA)$. \end{dfn}

Thus for any pair of objects $\mcal{B}, \mcal{C}$ we have \[ \operatorname{Hom}_{\cat{K}(\catt{DGR}^{\leq 0}_{\mrm{sc}} / \AA)}(\mcal{B}, \mcal{C}) = \frac{\operatorname{Hom}_{\catt{DGR}^{\leq 0}_{\mrm{sc}} / \AA}(\mcal{B}, \mcal{C})} {\tup{relative quasi-homotopy}} . \] There is a functor \[ \operatorname{P} : \catt{DGR}^{\leq 0}_{\mrm{sc}} / \AA \to \cat{K}(\catt{DGR}^{\leq 0}_{\mrm{sc}} / \AA) \] that is the identity on objects and surjective on morphisms.
Within $\cat{K}(\catt{DGR}^{\leq 0}_{\mrm{sc}} / \AA)$ we have the set of quasi-isomorphisms, and they form a multiplicatively closed set of morphisms. \begin{thm} \label{thm:233} The quasi-isomorphisms in $\cat{K}(\catt{DGR}^{\leq 0}_{\mrm{sc}} / \AA)$ satisfy the right Ore condition and the right cancellation condition. \end{thm} \begin{dfn} \label{dfn:235} Let $\AA \in \catt{DGR}^{\leq 0}_{\mrm{sc}} / \mathbb{K}_X$. The {\em derived category} of $\catt{DGR}^{\leq 0}_{\mrm{sc}} / \AA$ is its localization with respect to the quasi-isomorphisms. We denote it by $\cat{D}(\catt{DGR}^{\leq 0}_{\mrm{sc}} / \AA)$. \end{dfn} By definition there is a functor \[ \operatorname{Q} : \catt{DGR}^{\leq 0}_{\mrm{sc}} / \AA \to \cat{D}(\catt{DGR}^{\leq 0}_{\mrm{sc}} / \AA) \] that is the identity on objects. Since relatively quasi-homotopic morphisms in $\catt{DGR}^{\leq 0}_{\mrm{sc}} / \AA$ become equal in $\cat{D}(\catt{DGR}^{\leq 0}_{\mrm{sc}} / \AA)$, we get a commutative diagram of functors \[ \UseTips \xymatrix @C=6ex @R=6ex { \catt{DGR}^{\leq 0}_{\mrm{sc}} / \AA \ar[d]_{\operatorname{P}} \ar[dr]^{\operatorname{Q}} \\ \cat{K}(\catt{DGR}^{\leq 0}_{\mrm{sc}} / \AA) \ar[r]^{\bar{\operatorname{Q}}} & \cat{D}(\catt{DGR}^{\leq 0}_{\mrm{sc}} / \AA) } \] \begin{cor} \label{cor:231} The functor \[ \bar{\operatorname{Q}} : \cat{K}(\catt{DGR}^{\leq 0}_{\mrm{sc}} / \AA) \to \cat{D}(\catt{DGR}^{\leq 0}_{\mrm{sc}} / \AA) \] is a right Ore localization, and it is also faithful. \end{cor} This tells us that any morphism in $\cat{D}(\catt{DGR}^{\leq 0}_{\mrm{sc}} / \AA)$ can be expressed as a simple right fraction: \[ \operatorname{Q}(\phi) \circ \operatorname{Q}(\psi)^{-1} \] where $\phi, \psi$ are morphisms in $\catt{DGR}^{\leq 0}_{\mrm{sc}} / \AA$, and $\psi$ is a quasi-isomorphism. Moreover, there is equality \[ \operatorname{Q}(\phi_1) = \operatorname{Q}(\phi_2) \] in $\cat{D}(\catt{DGR}^{\leq 0}_{\mrm{sc}} / \AA)$ iff $\phi_1$ and $\phi_2$ are relatively quasi-homotopic in $\catt{DGR}^{\leq 0}_{\mrm{sc}} / \AA$. We do not wish to perform a detailed study of maps between commutative DG ringed spaces in this paper. We only note that: \begin{prop} \label{prop:250} Let $V \subseteq X$ be an open set. The restriction functor \[ \catt{DGR}^{\leq 0}_{\mrm{sc}} / \AA \to \catt{DGR}^{\leq 0}_{\mrm{sc}} / \AA|_V , \quad \mcal{B} \mapsto \mcal{B}|_V \] induces functors \[ \cat{K}(\catt{DGR}^{\leq 0}_{\mrm{sc}} / \AA) \to \cat{K}(\catt{DGR}^{\leq 0}_{\mrm{sc}} / \AA|_V) \] and \[ \cat{D}(\catt{DGR}^{\leq 0}_{\mrm{sc}} / \AA) \to \cat{D}(\catt{DGR}^{\leq 0}_{\mrm{sc}} / \AA|_V) , \] that commute with the functors $\operatorname{Q}$, $\operatorname{P}$ and $\bar{\operatorname{Q}}$. \end{prop} \section{Left Derived Tensor Products of Sheaves of DG Rings} \label{sec:der-inters} As before, $(X, \AA)$ is a commutative DG ringed space over $\mathbb{K}$. \begin{thm} \label{thm:246} Consider the commutative DG ringed space $(X, \AA)$. 
There is a bifunctor \[ (- \otimes^{\mrm{L}}_{\AA} -) : \cat{D}(\catt{DGR}^{\leq 0}_{\mrm{sc}} / \AA) \times \cat{D}(\catt{DGR}^{\leq 0}_{\mrm{sc}} / \AA) \to \cat{D}(\catt{DGR}^{\leq 0}_{\mrm{sc}} / \AA) \, , \] together with a morphism \[ \xi : \operatorname{Q} \circ \, (- \otimes_{\AA} -) \to (- \otimes^{\mrm{L}}_{\AA} -) \] of bifunctors \[ \catt{DGR}^{\leq 0}_{\mrm{sc}} / \AA \, \times \, \catt{DGR}^{\leq 0}_{\mrm{sc}} / \AA \to \cat{D}(\catt{DGR}^{\leq 0}_{\mrm{sc}} / \AA) \, , \] with this property\tup{:} if $\mcal{B}, \mcal{C} \in \catt{DGR}^{\leq 0}_{\mrm{sc}} / \AA$ are such that at least one of them is K-flat over $\AA$, then the morphism \[ \xi_{\mcal{B}, \mcal{C}} : \mcal{B} \otimes_{\AA} \mcal{C} \to \mcal{B} \otimes^{\mrm{L}}_{\AA} \mcal{C} \] in $\cat{D}(\catt{DGR}^{\leq 0}_{\mrm{sc}} / \AA)$ is an isomorphism. \end{thm}

\begin{proof}[Sketch of Proof] Given $\mcal{B}, \mcal{C} \in \catt{DGR}^{\leq 0}_{\mrm{sc}} / \AA$ we choose pseudo-semi-free resolutions $\til{\mcal{B}} \to \mcal{B}$ and $\til{\mcal{C}} \to \mcal{C}$ in $\catt{DGR}^{\leq 0}_{\mrm{sc}} / \AA$, and define \[ \mcal{B} \otimes^{\mrm{L}}_{\AA} \mcal{C} := \til{\mcal{B}} \otimes_{\AA} \til{\mcal{C}} . \] The results of Section \ref{sec:quasi-hom} show that this is a derived functor. \end{proof}

The derived tensor product respects localizations, as in Proposition \ref{prop:250}.

\section{Resolutions in Algebraic Geometry} \label{sec:alg-geom} In this section $(X, \mcal{O}_X)$ is a scheme (over the base ring $\mathbb{K}$). Let $\AA$ be a quasi-coherent commutative DG $\mcal{O}_X$-ring; by this we mean that each $\mcal{O}_X$-module $\AA^p$ is quasi-coherent. Let $V \subseteq X$ be an affine open set, and write $C := \Gamma(V, \mcal{O}_X)$ and $A := \Gamma(V, \AA)$. So $A$ is a commutative DG $C$-ring. We can build a commutative semi-free DG $C$-ring resolution $g : \til{A} \to A$. Each $\til{A}^p$ is a $C$-module, and we can sheafify it to get a quasi-coherent sheaf $\til{\AA}^p$ on $V$. In this way we obtain a {\em commutative semi-free DG $\mcal{O}_V$-ring resolution} $g : \til{\AA} \to \AA|_V$.

\begin{thm} \label{thm:142} Let $(X, \mcal{O}_X)$ be a scheme, let $\AA$ be a quasi-coherent commutative DG $\mcal{O}_X$-ring, and let $V \subseteq X$ be an affine open set. Suppose we are given a commutative pseudo-semi-free DG $\mcal{O}_X$-ring resolution $g : \til{\AA} \to \AA$ on all of $X$, and also a commutative semi-free DG $\mcal{O}_V$-ring resolution $g' : \til{\AA}' \to \AA|_V$ on $V$. Then there exists a commutative pseudo-semi-free DG $\mcal{O}_V$-ring resolution $g'' : \til{\AA}'' \to \AA|_V$ on $V$, and DG $\mcal{O}_V$-ring quasi-isomorphisms $f : \til{\AA}'' \to \til{\AA}|_V$ and $f' : \til{\AA}'' \to \til{\AA}'$, such that $g|_V \circ f = g' \circ f' = g''$. \end{thm}

\begin{proof} The commutative semi-free DG $\mcal{O}_V$-ring resolution $g' : \til{\AA}' \to \AA|_V$ is just a special case of a commutative pseudo-semi-free DG $\mcal{O}_V$-ring resolution; so we can apply Theorem \ref{thm:141} to it and to $g|_V : \til{\AA}|_V \to \AA|_V$. \end{proof}

What Theorem \ref{thm:142} says is that locally our commutative pseudo-semi-free resolutions are the same as the quasi-coherent resolutions that were considered in \cite{CK}. In the next corollary we identify a sheaf on a closed subset $Y \subseteq X$ with its pushforward to $X$.

\begin{cor} \label{cor:265} Let $(Y_1, \mcal{O}_{Y_1})$ and $(Y_2, \mcal{O}_{Y_2})$ be closed subschemes of $(X, \mcal{O}_{X})$.
There is a commutative DG ringed space $(Y, \mcal{O}_{Y})$, such that \[ Y = Y_1 \cap Y_2 \subseteq X \] as topological spaces, and \[ \mcal{O}_{Y} = \mcal{O}_{Y_1} \otimes^{\mrm{L}}_{\mcal{O}_{X}} \mcal{O}_{Y_2} \] in $\cat{D}(\catt{DGR}^{\leq 0}_{\mrm{sc}} / \mcal{O}_X)$. Thus \[ (Y, \mcal{O}_{Y}) = (Y_1, \mcal{O}_{Y_1}) \times^{\mrm{R}}_{(X, \mcal{O}_{X})} (Y_2, \mcal{O}_{Y_2}) , \] the derived intersection of these subschemes. \end{cor} \begin{proof} For $i = 1, 2$ we choose pseudo-semi-free resolutions $\AA_i \to \mcal{O}_{Y_i}$ in \linebreak $\catt{DGR}^{\leq 0}_{\mrm{sc}} / \mcal{O}_X$. Let $Y := Y_1 \cap Y_2$, and define \[ \mcal{O}_Y := (\AA_1 \otimes_{\mcal{O}_X} \AA_2)|_Y , \] the restriction of the DG $\mcal{O}_X$-ring $\AA_1 \otimes_{\mcal{O}_X} \AA_2$ to the closed subset $Y$. A calculation in stalks shows that the canonical DG ring homomorphism \[ \AA_1 \otimes_{\mcal{O}_X} \AA_2 \to \mcal{O}_Y \] is a quasi-isomorphism. \end{proof} \begin{rem} \label{rem:255} Here is a speculation regarding the {\em cotangent complex} of the scheme $X$. For this we view $\mcal{O}_X$ as living in $\catt{DGR}^{\leq 0}_{\mrm{sc}} / \mathbb{K}_X$, where $\mathbb{K}$ is the base ring. Let $\AA \to \mcal{O}_X$ be a commutative pseudo-semi-free resolution in $\catt{DGR}^{\leq 0}_{\mrm{sc}} / \mathbb{K}_X$. There is a DG $\AA$-module $\Omega^1_{\AA / \mathbb{K}}$, defined as the sheafification of the presheaf \[ V \mapsto \Omega^1_{\Gamma(V, \AA) / \mathbb{K}} . \] Let \[ \operatorname{L}_X := \mcal{O}_X \otimes_{\AA} \Omega^1_{\AA / \mathbb{K}} \in \cat{D}(\mcal{O}_X) \] We believe that $\operatorname{L}_X$ is canonically isomorphic (in the derived category $\cat{D}(\mcal{O}_X)$) to the cotangent complex as constructed in \cite{Il}. \end{rem} \begin{rem} \label{rem:260} If the scheme $X$ is noetherian, and if $\AA$ is a coherent commutative $\mcal{O}_X$-ring (e.g.\ $\AA = \mcal{O}_Y$ for a closed subscheme $Y \subseteq X$), then it is possible to find a commutative pseudo-semi-free resolution $\til{\AA} \to \AA$ in $\catt{DGR}^{\leq 0}_{\mrm{sc}} / \mcal{O}_X$ such that $\til{\AA}^{\natural} \cong \mcal{O}_X \otimes_{\mathbb{K}_X} \mathbb{K}_X[I]$, and the indexing set $I$ is finite in each degree. This is by the results of \cite[Section II.7]{RD}. \end{rem}
\section{Introduction} \label{sec:introduction} With the first direct detection of gravitational waves (GWs) by the LIGO/Virgo collaboration \cite{Abbott:2016blz,Abbott:2016nmj,TheLIGOScientific:2016pea} the era of GW astronomy has begun. The information gathered from present and future GW observations will improve our understanding of the astrophysical objects emitting the GW signal, of the origin and evolution of the universe and its structure, and of the gravitational interaction. Earth-based detectors, such as the advanced LIGO \cite{ligo} and Virgo \cite{virgo} interferometers, target the GW frequency window $10 - 1000$ Hz, while pulsar timing arrays (PTA), such as the ones united under the International Pulsar Timing Array (IPTA) collaboration \cite{2010CQGra..27h4013H}, probe much lower frequencies around $10^{-9} - 10^{-8}$ Hz. In order to fill the gap in frequency between Earth-based interferometers and PTA, space-borne GW observatories have been proposed, which will be able to reach high sensitivity in the frequency band $10^{-4} - 10^{-1}$ Hz. This range of frequencies in the GW landscape is expected to be rich in astrophysical sources, and it is as yet completely unexplored. In particular, the strong GW signal emitted by merging massive black hole binaries (MBHBs) from $10^4$ to $10^7$ solar masses is expected to fall exactly within the frequency band targeted by space-borne detectors. Since such massive black holes are believed to reside at the centres of galaxies, observing the GW signal they emit will help to better understand the formation and evolution of galaxies and cosmic structures. In 2013, the European Space Agency approved a GW observatory in space as the L3 mission of its ``Cosmic Vision Program'' scheduled for launch around 2030-2034, for which the ``evolved LISA'' (eLISA) space-based interferometer is the main candidate \cite{elisaweb,Seoane:2013qna}. eLISA is designed to probe the GW landscape around the mHz region, where the signal produced by MBHBs is expected to be the loudest. The final design of the mission, which is composed of three satellites orbiting the Sun in an equilateral triangular formation, has not been decided yet, and some variables are still under consideration (see e.g. \cite{Klein:2015hvg}): the number of laser links between the satellites (four or six), corresponding to the number of active arms (two or three); the arm-length of the triangle (from one to five million km); and the duration of the mission (two or five years). The eLISA low-frequency noise level (another of the variables previously considered) has recently been tested by the LISA Pathfinder mission \cite{pathfinderweb}, and according to the first results \cite{Armano:2016bkm} the expected noise is almost one hundred times better than the original requirement for the instrument. In an earlier work, Ref.~\cite{Tamanini:2016zlh}, we have studied the capability of eLISA to probe the acceleration of the universe by means of MBHB mergers as {\it standard sirens}, i.e.~as sources of known distance \cite{schutz,Holz:2005df,Cutler:2009qv}. We have derived eLISA constraints on standard cosmological models: $\Lambda$CDM, dynamical dark energy, non-zero spatial curvature and so on. In the present paper, we specifically consider alternative scenarios to explain the acceleration of the universe: in particular, we study early and interacting dark energy models, see sections \ref{sec:early_DE} and \ref{sec:interacting_dark_energy}. The principle of standard sirens is the following.
The measured GW waveform depends directly on the luminosity distance of the source, and thus parameter estimation allows one to infer the distance to the source for every GW event detected. If subsequent electromagnetic (EM) observations are able to identify an EM counterpart, then one can obtain a measure of the source redshift and thus a point in the distance-redshift space. Once a sufficient number of standard sirens is observed, the theoretically predicted distance-redshift relation can be compared against the data and constraints on the cosmological parameters can be statistically inferred. We assume spatial flatness throughout the paper, so that \begin{equation} d_L(z) = c \left(1+z\right) \int_0^z \frac{1}{H(z')} dz' \,, \label{eq:dist_red_rel} \end{equation} where $c$ is the speed of light and $H(z)$ is the Hubble rate. The analysis presented here is meant to complete the one performed in \cite{Tamanini:2016zlh}. We concentrate on early (EDE) and interacting (IDE) dark energy, but use the same standard siren catalogues that have been obtained in \cite{Tamanini:2016zlh} starting from simulated rates of MBHB mergers detectable by different eLISA configurations, and considering realistic scenarios for the observation of the EM counterparts, based on the capabilities of future EM telescopes (LSST, SKA, ELT). Here we also consider the same three astrophysical models of MBHB formation and evolution appearing in \cite{Tamanini:2016zlh}, namely a light seeds model (popIII), a heavy seeds model with delay (Q3d) and a heavy seeds model without delay (Q3nod) (see also \cite{Klein:2015hvg} and references therein for more information). We present separate results for all these models. The number of standard sirens for each MBHB formation scenario has been selected under the hypothesis that the sky localisation of the event can be achieved using also the merger and ringdown phases of the signal. In this procedure the telescopes can be pointed only after the merger to look for a distinctive signature; one therefore implicitly assumes that there is a delay between the merger and the flare, or that the electromagnetic signal is persistent and peculiar enough that it can be confidently identified also minutes to hours after merger. This procedure was labelled the ``optimistic scenario'' in \cite{Tamanini:2016zlh}. Moreover, the statistical methods employed to handle the simulated data coincide with the ones adopted in \cite{Tamanini:2016zlh}: in particular we perform a Fisher matrix analysis and obtain constraints and contour plots following the procedures described in \cite{Tamanini:2016zlh}. Differently from \cite{Tamanini:2016zlh}, in what follows we consider only three eLISA configurations, letting the arm-length vary among one (A1), two (A2) and five (A5) million km, but fixing the number of laser links to six (L6), the mission duration to five years (M5) and the low-frequency noise to the LISA Pathfinder ``expected'' one (N2) (see \cite{Tamanini:2016zlh} and \cite{Klein:2015hvg} for details). The reasons for this choice are the following: \begin{itemize} \item The aim of the present paper is not to carefully analyse all possible eLISA configurations to understand the science return of each of them (as was done in \cite{Tamanini:2016zlh}), but rather to investigate simple extensions of the $\Lambda$CDM model in order to understand the pros and cons of the eLISA mission in probing alternative cosmological models.
\item According to the results of \cite{Tamanini:2016zlh}, four-link (two arms) configurations perform much worse than six-link (three arms) configurations in providing a sufficiently high number of MBHB standard sirens for cosmology. We therefore ignore four-link configurations since we expect that they will not be able to give meaningful constraints on the parameters of alternative cosmologies beyond $\Lambda$CDM. \item The number of detections, and thus the number of standard sirens, scales linearly with the mission duration: the longer the mission, the higher the number of datapoints. In analogy with the investigation of \cite{Tamanini:2016zlh} we thus only focus on a mission of five years\footnote{A method to estimate how the cosmological constraints change as the mission duration changes has been outlined in \cite{Tamanini:2016zlh}.}. \item Finally we only consider the ``expected'' low-frequency noise from LISA Pathfinder, called N2 \cite{Tamanini:2016zlh,Klein:2015hvg}. According to the first results of the mission \cite{Armano:2016bkm} this requirement has been met at frequencies $f>1$ mHz, while at lower frequencies the situation is still open: however, one can optimistically forecast that the N2 noise level, if not a better one, will finally be achieved over the whole frequency spectrum. \end{itemize} The configuration with two million km arms, denoted N2A2M5L6, will be taken as our reference design for eLISA upon which the majority of subsequent results are based. When we need to pick a specific MBHB formation model, we choose the popIII scenario, which is the one providing an intermediate number of standard sirens. In what follows we first investigate EDE in section \ref{sec:early_DE} and then analyse two different models of IDE in section \ref{sec:interacting_dark_energy}. We have chosen EDE and IDE as alternative cosmological models because they are simple one-parameter extensions of $\Lambda$CDM and they allow us to better expose the advantages of eLISA in probing the expansion of the Universe at high redshift. In the following, we set the fiducial values of the parameters to $\Omega_m^0=0.3$, $w_0=-1$, $h=0.67$, $\Omega_{de}^e=0$, $\epsilon_1=0$, $\epsilon_2=0$. Section \ref{sec:discussion_and_conclusion} contains discussions and conclusions. \section{Early dark energy} \label{sec:early_DE} In early dark energy models, first proposed in \cite{Wetterich:2004pv}, the dark energy component evolves with redshift in such a way that it gives a non-negligible contribution also at early times, contrary to $\Lambda$CDM or other dynamical dark energy models where typically dark energy plays no role at redshifts higher than about one. Early dark energy models can be probed by measuring the distance-redshift relation Eq.~\eqref{eq:dist_red_rel}. Ignoring the contribution of relativistic components and assuming spatial flatness, one can write \begin{equation} \frac{H^2(z)}{H_0^2} = \frac{\Omega_m^0 (z+1)^3}{1 - \Omega_{de}(z)} \,, \label{eq:H} \end{equation} where $\Omega_m^0$ and $H_0$ are the present matter fraction and Hubble parameter and $\Omega_{de}(z)$ is the relative energy density of DE evolving in time, or equivalently with redshift $z$: $\Omega_{de}(z)=\rho_{de}(z)/\rho_{tot}(z)$.
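To make the use of Eqs.~\eqref{eq:dist_red_rel} and \eqref{eq:H} concrete, a minimal numerical sketch is the following (this is our own illustration, not the analysis pipeline of \cite{Tamanini:2016zlh}; all function names are ours):
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

C_KM_S = 299792.458  # speed of light c in km/s

def hubble(z, Om0, h, omega_de):
    """Hubble rate H(z) in km/s/Mpc, from
    H^2/H0^2 = Om0 (1+z)^3 / (1 - Omega_de(z)), with H0 = 100 h."""
    return 100.0 * h * np.sqrt(Om0 * (1.0 + z)**3 / (1.0 - omega_de(z)))

def lum_dist(z, Om0, h, omega_de):
    """Luminosity distance in Mpc for a spatially flat universe:
    d_L = c (1+z) * integral_0^z dz' / H(z')."""
    integral, _ = quad(lambda zp: 1.0 / hubble(zp, Om0, h, omega_de), 0.0, z)
    return C_KM_S * (1.0 + z) * integral

def omega_de_lcdm(z, Om0=0.3):
    """LCDM reference: Omega_de(z) for a cosmological constant."""
    return (1.0 - Om0) / ((1.0 - Om0) + Om0 * (1.0 + z)**3)

# luminosity distance of a standard siren at z = 2, fiducial values
print(lum_dist(2.0, 0.3, 0.67, omega_de_lcdm))
\end{verbatim}
Any of the dark energy histories considered below can then be compared to a catalogue of $(z, d_L)$ measurements simply by swapping the \texttt{omega\_de} argument.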
The most widely used parameterization of $\Omega_{de}$ was proposed in \cite{Doran:2006kp}: \begin{equation} \Omega_{de}(z) = \frac{\Omega_{de}^0 - \Omega_{de}^e \left[ 1- (z+1)^{3 w_0} \right]}{\Omega_{de}^0 + \Omega_{m}^0 (z+1)^{-3 w_0}} + \Omega_{de}^e \left[ 1 - (z+1)^{3 w_0} \right] \,, \label{eq:early_DE} \end{equation} where $w_0$ and $\Omega_{de}^0$ are the present values of the equation of state (EoS) and of the relative energy density of dark energy, respectively, the latter being related to $\Omega_m^0$ through $\Omega_{de}^0 = 1 - \Omega_m^0$. The parameter $\Omega_{de}^e$ characterizes the amount of dark energy present at early times $z \gg 1$, which remains constant also in the very early universe, before recombination. \begin{figure} \begin{center} \includegraphics[width=.7\textwidth]{figures/EDE_Omega_z_plot.pdf} \end{center} \caption{Evolution of $\Omega_{de}(z)$ in early dark energy models with $\Omega_{de}^e = 0.03$ and $z_e = 6$ (solid blue line) or $z_e \rightarrow\infty$ (dashed blue line). The EDE model considered by Pettorino et al.\ \cite{Pettorino:2013ia}, with the cut of EDE set to $z=6$, is also shown for comparison (dotted-dashed red line). The dotted black line represents $\Lambda$CDM.} \label{fig:EDE_omega_z_plot} \end{figure} Here we analyse a parametrisation modified with respect to the one in Eq.~\eqref{eq:early_DE}, which allows us to investigate how the constraints change if $\Omega_{de}$ is non-negligible only for a limited amount of time in the past, instead of contributing beyond recombination as in the above model \eqref{eq:early_DE}. We will see that for eLISA the two models are equivalent as soon as the redshift at which early dark energy starts to contribute is sufficiently high, namely higher than about 6, as we will demonstrate. Therefore, even though the parametrisation in our analysis is different, we also derive eLISA constraints on the model \eqref{eq:early_DE}. The scenario in which EDE is non-negligible only for a limited amount of time in the past was first proposed in Ref.~\cite{Pettorino:2013ia}, which analyses how CMB constraints are affected by a variation of the epoch at which early dark energy starts to contribute. The model we consider here is somewhat different from those presented in~\cite{Pettorino:2013ia}, and shares some similarity with what is done in Ref.~\cite{Aubourg:2014yra} in the case of Baryon Acoustic Oscillations (BAO). We let the universe become $\Lambda$CDM at redshift $z>z_e$, such that: \begin{equation} \Omega_{de}(z) = \begin{cases} \frac{\Omega_{de}^0 - \Omega_{de}^e \left[ 1- (z+1)^{3 w_0} \right]}{\Omega_{de}^0 + \Omega_{m}^0 (z+1)^{-3 w_0}} + \Omega_{de}^e \left[ 1 - (z+1)^{3 w_0} \right] & \text{if } z < z_e \,, \\ \frac{\Omega_{de}^0}{\Omega_{de}^0 + \Omega_m^0 (z+1)^3} & \text{if } z \geq z_e \,, \end{cases} \label{eq:not_so_early_DE} \end{equation} where $z_e$ determines the redshift up to which dark energy causes a deviation from the usual $\Lambda$CDM expansion history, while for $z > z_e$ the standard $\Lambda$CDM evolution is recovered; see Fig.~\ref{fig:EDE_omega_z_plot}. This corresponds to model EDE3 of~\cite{Pettorino:2013ia} except that the universe does not go back to $\Lambda$CDM at late times $a>a_c$: in EDE3 early dark energy is present only in the redshift interval $z_c<z<z_e$, with $z_e$ a parameter and $z_c$ fixed by continuity; cf.~Fig.~\ref{fig:EDE_omega_z_plot}.
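A direct transcription of the piecewise parametrisation \eqref{eq:not_so_early_DE}, in the same illustrative spirit as the sketch above (the default values are ours and purely indicative), reads:
\begin{verbatim}
def omega_de_ede(z, Om0=0.3, w0=-1.0, Ode_e=0.03, z_e=6.0):
    """Omega_de(z) of the EDE model: modified early dark energy
    form below z_e, plain LCDM evolution for z >= z_e."""
    Ode0 = 1.0 - Om0
    if z >= z_e:
        return Ode0 / (Ode0 + Om0 * (1.0 + z)**3)
    early = Ode_e * (1.0 - (1.0 + z)**(3.0 * w0))
    return (Ode0 - early) / (Ode0 + Om0 * (1.0 + z)**(-3.0 * w0)) + early
\end{verbatim}
This function can be passed directly as the \texttt{omega\_de} argument of the distance sketch above.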
Ref.~\cite{Aubourg:2014yra} instead considers a model in which the sound horizon is kept fixed at the fiducial $\Lambda$CDM value, meaning that early dark energy is negligible in the pre-recombination era and approaches the evolution of Eq.~\eqref{eq:early_DE} later in the matter era: this would correspond to our model \eqref{eq:not_so_early_DE} with sufficiently high $z_e$. The reason why we have chosen to consider model \eqref{eq:not_so_early_DE} is the following. As pointed out in \cite{Pettorino:2013ia}, CMB observations are mainly sensitive to deviations from $\Lambda$CDM at very high redshift: the CMB therefore provides very good constraints on $\Omega_{de}^e$ for the model \eqref{eq:early_DE}, as demonstrated also by the Planck analysis \cite{Ade:2015rim}. Ref.~\cite{Ade:2015rim} furthermore analyses the model EDE3 of \cite{Pettorino:2013ia}, where early dark energy is relevant only in the redshift interval $z_c<z<z_e$: in this case, the constraint on $\Omega_{de}^e$ seriously degrades as $z_e$ decreases, because the CMB is less effective in constraining the late-time evolution of the universe (cf.~Fig.~11 of~\cite{Ade:2015rim}). Even though EDE3 is not exactly the same model as Eq.~\eqref{eq:not_so_early_DE}, one expects the CMB constraints obtained for EDE3 to equally apply to our parameterisation if $z_e$ is sufficiently small, say $z_e \lesssim 10$, precisely because CMB observations are mainly sensitive to deviations from $\Lambda$CDM only at very high redshift (and not at $z<z_e$). On the other hand, eLISA will be able to probe the redshift range $0 < z \lesssim 8$, because the redshift distribution of standard sirens extends in this interval: cf.~Fig.~\ref{fig:SS_z_distrib} of Appendix~\ref{sec:redshift_distribution_of_standard_sirens}, and the analysis of \cite{Tamanini:2016zlh}. We therefore expect any deviation from $\Lambda$CDM in the cosmic evolution happening at $z\lesssim 8$ to be best constrained by eLISA. Hence the EDE model \eqref{eq:not_so_early_DE}, where the energy density of DE gives a non-negligible contribution up to today, should be well tested by the eLISA mission. If instead the cosmic expansion history is not distinguishable from $\Lambda$CDM in the range $0 < z \lesssim 8$, as can happen in EDE3, no constraint can be put by eLISA on any parameter beyond $\Lambda$CDM (such as $\Omega_{de}^e$). This is our main motivation for considering the parametrisation \eqref{eq:not_so_early_DE} as opposed to EDE3: it can be well constrained by eLISA; the constraints can be compared with those obtained using the CMB both if $z_e\rightarrow \infty$ and if $z_e \lesssim 10$, but at the same time they can also be compared with late-time constraints such as those, for example, given by BAO~\cite{Aubourg:2014yra} and 21-cm \cite{Archidiacono:2014msa}. In the following analysis we choose different values of $z_e$, namely $z_e = 1, 2, 3, 4, 6$ and $z_e\gg 6$: as we will see, the latter is effectively equivalent to $z_e\rightarrow \infty$, i.e.~to model \eqref{eq:early_DE}, because eLISA will not be sensitive to transitions occurring after a redshift of about six.
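The constraints quoted below come from the Fisher-matrix procedure mentioned in the introduction; a schematic version of such a forecast (our own simplified sketch, reusing the \texttt{numpy} import of the previous snippets: Gaussian errors on $d_L$, derivatives by central finite differences, no priors) is:
\begin{verbatim}
def fisher_matrix(catalogue, dl_model, theta0, step=1e-4):
    """F_ij = sum_k (dd_L/dtheta_i)(dd_L/dtheta_j) / sigma_k^2 over a
    catalogue of (z_k, sigma_k) standard sirens, for dl_model(z, theta)."""
    n = len(theta0)
    F = np.zeros((n, n))
    for z, sigma in catalogue:
        grad = np.zeros(n)
        for i in range(n):
            up, dn = np.array(theta0, float), np.array(theta0, float)
            up[i] += step
            dn[i] -= step
            grad[i] = (dl_model(z, up) - dl_model(z, dn)) / (2.0 * step)
        F += np.outer(grad, grad) / sigma**2
    return F

# marginalised 1-sigma errors and covariance matrix:
# C = np.linalg.inv(F); errors = np.sqrt(np.diag(C))
\end{verbatim}
The marginalised 2$\sigma$ contours shown in the figures correspond to the $2 \times 2$ sub-blocks of the covariance matrix $C = F^{-1}$.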
We investigate the following cosmological models based on Eq.~\eqref{eq:not_so_early_DE}: \begin{enumerate} \item A four-parameter model where every independent parameter is free: $\Omega_m^0$, $h$, $w_0$, $\Omega_{de}^e$; \item Three three-parameter models where one parameter among $\Omega_m^0$, $h$ and $w_0$ is fixed to its fiducial value and the others are free, together with $\Omega_{de}^e$; \item Three two-parameter models where one pair of parameters among $\Omega_m^0, w_0, h, \Omega_{de}^e$ is fixed to their fiducial values and the other pair is free; \item A one-parameter model where all $\Lambda$CDM parameters $\Omega_m^0$, $h$ and $w_0$ are fixed and only $\Omega_{de}^e$ is free. \end{enumerate} Note that since we are studying extensions to $\Lambda$CDM, $\Omega_{de}^e$ is always considered as a free parameter: fixing $\Omega_{de}^e$ to zero would reduce the analysis to the one already performed in \cite{Tamanini:2016zlh}, where the resulting constraints on $\Omega_m^0$, $h$ and $w_0$ can be found. On the other hand, $z_e$ is not taken as a free parameter but as a variable of the model. Whenever $z_e\lesssim 10$, CMB observations cannot really constrain the model since they are sensitive only at high redshift (cf.~Fig.~11 of~\cite{Ade:2015rim}). Fixing the $\Lambda$CDM parameters to their fiducial values when $z_e\lesssim 10$ can therefore be considered equivalent to imposing a CMB prior. \begin{table} \begin{center} \begin{tabular}{|c|c|c|c||c|c|c|c|} \hline \multicolumn{8}{|c|}{EDE} \\ \hline \multicolumn{4}{|c||}{$z_e = 2$} & \multicolumn{4}{|c|}{$z_e = 6$} \\ \hline $\Delta\Omega_m^0$ & $\Delta h$ & $\Delta w_0$ & $\Delta \Omega_{de}^e$ & $\Delta\Omega_m^0$ & $\Delta h$ & $\Delta w_0$ & $\Delta \Omega_{de}^e$ \\ \hline 0.163 & 0.229 & 2.29 & 0.877 & 6.10 & 0.654 & 5.53 & 19.2 \\ 0.280 & 0.450 & 3.80 & 0.983 & 1.08 & 0.255 & 1.87 & 3.12 \\ 0.0656 & 0.0728 & 0.797 & 0.399 & 1.48 & 0.160 & 1.52 & 4.60 \\ \hline \text{} & 0.0815 & 0.751 & 0.223 & \text{} & 0.0815 & 0.716 & 0.130 \\ \text{} & 0.0649 & 0.587 & 0.299 & \text{} & 0.144 & 0.877 & 0.228 \\ \text{} & 0.0457 & 0.450 & 0.146 & \text{} & 0.0342 & 0.336 & 0.0770 \\ \hline 0.0583 & \text{} & 0.193 & 0.313 & 0.826 & \text{} & 0.181 & 2.65 \\ 0.0427 & \text{} & 0.302 & 0.342 & 0.578 & \text{} & 0.291 & 1.90 \\ 0.0392 & \text{} & 0.121 & 0.192 & 0.349 & \text{} & 0.0928 & 1.08 \\ \hline 0.0583 & 0.0190 & \text{} & 0.261 & 0.713 & 0.0175 & \text{} & 2.29 \\ 0.0455 & 0.0377 & \text{} & 0.263 & 0.480 & 0.0421 & \text{} & 1.57 \\ 0.0378 & 0.0102 & \text{} & 0.155 & 0.338 & 0.00932 & \text{} & 1.03 \\ \hline 0.0563 & \text{} & \text{} & 0.186 & 0.499 & \text{} & \text{} & 1.52 \\ 0.0404 & \text{} & \text{} & 0.146 & 0.330 & \text{} & \text{} & 1.00 \\ 0.0376 & \text{} & \text{} & 0.127 & 0.290 & \text{} & \text{} & 0.877 \\ \hline \text{} & 0.0188 & \text{} & 0.148 & \text{} & 0.0145 & \text{} & 0.102 \\ \text{} & 0.0370 & \text{} & 0.246 & \text{} & 0.0228 & \text{} & 0.129 \\ \text{} & 0.0102 & \text{} & 0.0806 & \text{} & 0.00876 & \text{} & 0.0631 \\ \hline \text{} & \text{} & 0.180 & 0.173 & \text{} & \text{} & 0.126 & 0.107 \\ \text{} & \text{} & 0.283 & 0.241 & \text{} & \text{} & 0.140 & 0.0988 \\ \text{} & \text{} & 0.117 & 0.109 & \text{} & \text{} & 0.0841 & 0.0719 \\ \hline \text{} & \text{} & \text{} & 0.0355 & \text{} & \text{} & \text{} & 0.0322 \\ \text{} & \text{} & \text{} & 0.0331 & \text{} & \text{} & \text{} & 0.0280 \\ \text{} & \text{} & \text{} & 0.0253 & \text{} & \text{} & \text{} & 0.0225 \\ \hline \end{tabular}
\end{center} \caption{Standard 1$\sigma$ errors on early DE for N2A2M5L6. In the left table early DE is present only up to $z = 2$, while in the right table it is present only up to $z = 6$. In each row of the table, the top sub-row shows the errors for light seeds (popIII), the central sub-row for heavy seeds with delays (Q3d) and the bottom sub-row for heavy seeds without delays (Q3nod). Empty entries mean that the corresponding parameter has been fixed to its fiducial value (exact prior).} \label{tab:err_EDE} \end{table} The forecast constraints that one can obtain with the eLISA configuration N2A2M5L6 are summarized in Table~\ref{tab:err_EDE} for $z_e = 2$ and $z_e = 6$. We stress that, since eLISA will probe only the redshift range up to $z \simeq 8$, any model with $z_e\gtrsim 8$ will have the same constraints. In practice, as we will see, our analysis shows that already after $z\simeq 6$ the constraints on $\Omega_{de}^e$ stabilize: for this reason $z_e \gtrsim 6$ or $z_e \rightarrow \infty$ are equivalent from the point of view of the constraints. \begin{figure} \begin{center} \includegraphics[width=\textwidth]{Ellipses_EDE_2sigma} \end{center} \caption{EDE: 2$\sigma$ contours for $z_e = 2$ (blue) and $z_e = 6$ (red) with N2A2M5L6 for the three MBHB formation scenarios in the two-parameter cosmological models where $\Omega_{de}^e$ is a free parameter together with $\Omega_m^0$, $h$ and $w_0$, respectively.} \label{fig:early_DE_ellipses} \end{figure} Notice in Table~\ref{tab:err_EDE} the difference between the cases $z_e = 2$ and $z_e = 6$ when all four parameters are free to vary (first row): the case $z_e = 2$ is at least weakly constrained, while the case $z_e = 6$ is not constrained at all (errors bigger than 100\% of the parameter fiducial values). This is due to the fact that the parameters $\Omega_m^0$ and $\Omega_{de}^e$ are degenerate (as shown by Eqs.~\eqref{eq:H} and \eqref{eq:early_DE}), combined with the fact that the redshift distribution of standard sirens extends up to $z\simeq 8$ and there are in general few events at low redshift (this depends somewhat on the MBHB formation model: see Appendix~\ref{sec:redshift_distribution_of_standard_sirens} for the redshift distribution of standard sirens). For $z>z_e$, the standard sirens data are effectively constraining a three-parameter model because $\Omega_{de}^e$ is zero and the expansion is the same as $\Lambda$CDM. Fixing $z_e=2$ therefore allows for a better measurement of the three $\Lambda$CDM parameters from the majority of the standard sirens data at high redshift, and in turn also of $\Omega_{de}^e$ due to the degeneracy with $\Omega_m^0$. On the other hand, when $z_e = 6$ the majority of the standard sirens data are effectively constraining the full four-parameter cosmological model, and thus the accuracy with which the parameters can be determined is worse than in the $z_e = 2$ case. This effect can be understood also by comparing, for example, the 5th to 7th rows of Table~\ref{tab:err_EDE} (two-parameter models). Fixing both $h$ and $w_0$ to their fiducial values (5th row of Table~\ref{tab:err_EDE}) improves the constraints with respect to the four-parameter model but does not break the degeneracy between $\Omega_m^0$ and $\Omega_{de}^e$: for this reason, the case $z_e = 6$ still has larger errors than the case $z_e = 2$.
On the other hand, when $\Omega_m^0$ is fixed together with another parameter (rows six and seven of Table~\ref{tab:err_EDE}), the case $z_e = 6$ becomes better constrained than the case $z_e = 2$: the degeneracy has been broken by fixing $\Omega_m^0$ and more standard sirens are available if $z_e = 6$ to help constrain the EDE model. The same behaviour is observed in rows two to four of Table~\ref{tab:err_EDE} (three-parameter models), although it is less evident and can depend on the MBHB formation scenario. In Fig.~\ref{fig:early_DE_ellipses} we show 2$\sigma$ contour plots for the two-parameter models $(\Omega_{de}^e,~\Omega_m^0)$, $(\Omega_{de}^e,~h)$ and $(\Omega_{de}^e,~w_0)$. The figure represents all three MBHB models for the configuration N2A2M5L6, and both cases $z_e = 2$ and $z_e = 6$. The contour plots in the $(\Omega_{de}^e, \Omega_{m}^0)$ plane clearly show the degeneracy between these two parameters and how it is improved by choosing $z_e = 2$ instead of $z_e = 6$. On the other hand, when $\Omega_{m}^0$ is fixed to its fiducial value (which at these redshifts can be considered equivalent to setting a CMB prior), the degeneracy with $\Omega_{de}^e$ is broken and the constraints improve; however, there remains some level of degeneracy between $\Omega_{de}^e$ and, respectively, $h$ and $w_0$, as can be appreciated from the second and third rows of figure~\ref{fig:early_DE_ellipses}. \begin{figure} \begin{center} \includegraphics[width=\textwidth]{Early_DE_z} \end{center} \caption{1$\sigma$ errors in the one-parameter cosmological model with only $\Omega_{de}^e$ (left panels) and in the two-parameter cosmological models with both $\Omega_{de}^e$ and $w_0$ (right panels) for three 6-link eLISA configurations. In the right panels empty and filled markers denote the uncertainties on $w_0$ and $\Omega_{de}^e$, respectively.} \label{fig:early_DE} \end{figure} Fig.~\ref{fig:early_DE} shows how the accuracy in determining the parameters $\Omega_{de}^e$ and $w_0$ changes with different values of $z_e$. We consider the three eLISA configurations with six links, noise N2 and varying arm-length, and the three MBHB formation models. The left column shows the one-parameter model with $\Omega_{de}^e$ only and the right column the two-parameter model with both $\Omega_{de}^e$ and $w_0$. Since $\Omega_{m}^0$ is fixed, the errors always decrease with increasing $z_e$ because of the higher number of standard sirens available for the measurement\footnote{Note that the error on $w_0$ in the model heavy seeds with delay (Q3d) actually increases going from $z_e = 1$ to $z_e = 2$. This is due to the fact that for Q3d there is a very low number of standard sirens with $z < 1$ (see Appendix~\ref{sec:redshift_distribution_of_standard_sirens}). Thus, when $z_e=1$, almost all the data are effectively constraining a one-parameter model where $\Omega_{de}^e=0$ and only $w_0$ is left free to vary, providing in this manner a good constraint on $w_0$. However, for $z_e \geq 2$ the number of standard sirens constraining the full two-parameter model is sufficiently high so that both $\Delta\Omega_{de}^e$ and $\Delta w_0$ decrease as $z_e$ increases. This suggests that there exists a redshift between 1 and 2 where the error on $w_0$ is the highest.
The fact that this effect does not appear in the other BH models is due to the higher number of standard sirens at low redshift: it shows that in those cases the redshift at which $\Delta w_0$ is largest happens to be below $z = 1$.} (cf.~Appendix~\ref{sec:redshift_distribution_of_standard_sirens}). However, they stabilise around $z_e=6$ and do not change appreciably if $z_e\gg 6$: the constraining power of eLISA cannot improve further due to the reduced number of standard sirens after $z \simeq 6$ (cf.~redshift distributions in Appendix~\ref{sec:redshift_distribution_of_standard_sirens}). \subsection*{Comparison with present constraints} The errors on $\Omega_{de}^e$ for $z_e \geq 6$ when all other parameters are held fixed are more than one order of magnitude worse than the ones presently available from CMB observations \cite{Hojjati:2013oya}, in particular if compared with the latest Planck results \cite{Ade:2015rim}, which provide a 2$\sigma$ uncertainty on $\Omega_{de}^e$ of 0.0036. In the same regime, i.e.~when early dark energy is relevant back to the pre-recombination era and the sound horizon is therefore rescaled, BAO measurements do not improve on CMB constraints since they are plagued, as we are, by the strong degeneracy between $\Omega_m^0$ and $\Omega_{de}^e$. Forecasts for 21-cm probes instead give a constraint better than 10\% on $\Omega_{de}^e$, according to \cite{Archidiacono:2014msa}. In summary, if $z_e \gg 10$, present CMB experiments already perform far better than what eLISA will be able to provide. On the other hand, CMB observations are unable to give significant constraints whenever $z_e \lesssim 10$ \cite{Pettorino:2013ia,Ade:2015rim}, while the constraints on $\Omega_{de}^e$ by eLISA outlined in Table~\ref{tab:err_EDE} become competitive when $z_e$ is sufficiently low. This highlights the main strength of eLISA in testing alternative cosmological models: if deviations from $\Lambda$CDM occur only in the range $z \lesssim 6$, eLISA has higher constraining ability than CMB probes and can therefore be considered complementary to them. If instead $z_e\gg 6$, eLISA cannot compete with CMB probes. Concerning BAO measurements, they provide stronger constraints than eLISA only when the sound horizon is held fixed at the fiducial $\Lambda$CDM value (i.e.~early dark energy becomes relevant at some point in the matter-dominated era): Ref.~\cite{Aubourg:2014yra} finds $\Delta \Omega_{de}^e= 0.031$ at 2$\sigma$. If this is not the case, present BAO observations perform worse than future eLISA. Finally, SNIa observations are not expected to significantly improve constraints on early dark energy because they rely on measurements at low redshift. \section{Interacting dark energy} \label{sec:interacting_dark_energy} Interacting dark energy (IDE) models were first proposed to help alleviate the coincidence problem \cite{Wetterich:1994bg,Amendola:1999er}, and have been widely studied in the literature because they seem to be favoured by present cosmological data, especially when redshift space distortions (RSD) are included in the datasets used to establish the constraints (cf.~discussions in Secs.~\ref{sec:IDE1} and \ref{sec:IDE2}).
In IDE one introduces a coupling between DE and DM that at the background level modifies the conservation equations as \begin{align} \dot\rho_{dm} + 3 H \rho_{dm} &= Q \,, \label{eq:IDE_cons_dm} \\ \dot\rho_{de} + 3 H (1+w_0) \rho_{de} &= -Q \,, \label{eq:IDE_cons_de} \end{align} where an over-dot stands for differentiation with respect to cosmic time and $Q$ defines the amount of energy exchanged between the two dark fluids. In what follows we neglect the baryonic and radiation contributions and consider two terms for the energy exchange, which we denote IDE1 and IDE2, respectively: \begin{equation} Q = \epsilon_1 H \rho_{dm}~~ \text{(IDE1)} \quad\quad\text{and}\quad\quad Q = \epsilon_2 H \rho_{de}~~ \text{(IDE2)} \,. \label{eq:IDE_Qs} \end{equation} These are the simplest phenomenological models of IDE and have been extensively investigated in the literature (see e.g. \cite{Wang:2016lxa} for a recent review). The purpose of the present analysis is to test the ability of eLISA to constrain a possible interaction in the dark sector, not to distinguish between different interacting models: we therefore do not consider more elaborate IDE models. Note that standard sirens, similarly to SNIa, only probe the expansion of the universe at the background level and thus in our analysis there is no need to specify the fully covariant form of the interactions defined by Eq.~\eqref{eq:IDE_Qs}. We therefore do not need to worry about possible covariantization issues \cite{Li:2014eha,Faraoni:2014vra,Tamanini:2015iia,Skordis:2015yra} or instabilities at the perturbation level \cite{Valiviita:2008iv,He:2008si,Jackson:2009mz}. Consequently, we leave the parameters $\epsilon_1$ and $\epsilon_2$ free to take both positive and negative values, and their sign does not have to be connected to the value of $w_0$: these restrictions are necessary for analyses that need to perturb the dark fluids because they take into account observational probes of cosmological perturbations (CMB, BAO, RSD, ...) \cite{He:2008si,Valiviita:2008iv,Gavela:2009cy,Marcondes:2016reb}. It is clear that any realistic model of IDE needs to be stable under perturbations, but here we prefer to show the plain result of our analysis without setting any stability prior on the parameters. Since standard sirens only probe the background, their constraining power is not optimal compared to other combined probes, as we will see; however, we have the advantage of being general, in the sense that any IDE model which reduces to \eqref{eq:IDE_Qs} at the background level, independently of the full form of its energy-exchange four-vector, can be constrained by the results that follow. \begin{figure} \begin{center} \includegraphics[width=.7\textwidth]{figures/IDE_Omega_z_plot.pdf} \end{center} \caption{Evolution of $\Omega_{de}(z)$ in interacting dark energy models. From top to bottom of the curves: the red line denotes IDE1 with $\epsilon_1 = 0.1$ and $z_i = 6$ (solid line) or $z_i \rightarrow\infty$ (dashed line). The green line denotes IDE2 with $\epsilon_2 = 0.1$ and $z_i = 6$ (solid line) or $z_i \rightarrow\infty$ (dotted-dashed line).
The black, dotted line denotes $\Lambda$CDM.} \label{fig:IDE_omega_z_plot} \end{figure} In both cases of Eq.~\eqref{eq:IDE_Qs}, Eqs.~\eqref{eq:IDE_cons_dm}--\eqref{eq:IDE_cons_de} can be solved analytically, yielding \begin{align} \rho_{dm} &= \rho_{dm}^0 (1+z)^{3- \epsilon_1} \,,\\ \rho_{de} &= \rho_{de}^0 (1+z)^{3(1+w_0)} + \frac{\epsilon_1}{\epsilon_1+3 w_0} \rho_{dm}^0 \left[ (1+z)^{3(1+w_0)} - (1+z)^{3- \epsilon_1} \right] \,, \end{align} for IDE1 and \begin{align} \rho_{dm} &= \rho_{dm}^0 (1+z)^3 + \rho_{de}^0 (1+z)^3 \left[ \frac{\epsilon_2}{\epsilon_2 + 3 w_0} \left( 1 - (1+z)^{3 w_0 + \epsilon_2} \right) \right] \,,\\ \rho_{de} &= \rho_{de}^0 (1+z)^{3(1+w_0) + \epsilon_2} \,, \end{align} for IDE2, where $\rho_{dm}^0$ and $\rho_{de}^0$ are the values of the energy densities at $z=0$. For negative values of $\epsilon_1$ the energy flows from dark matter to dark energy, and for positive values of $\epsilon_2$ the energy flows from dark energy to dark matter. Inserting the above equations into the Friedmann equation one obtains \begin{equation} \frac{H^2}{H_0^2} = \Omega_{m}^0 (1+z)^{3- \epsilon_1} + \Omega_{de}^0 (1+z)^{3 (1+w_0)} + \Omega_{m}^0 \frac{\epsilon_1}{\epsilon_1 + 3 w_0} \left[ (1+z)^{3 (1+w_0)} - (1+z)^{3- \epsilon_1} \right] \,, \label{eq:H_IDE1} \end{equation} for IDE1 and \begin{equation} \frac{H^2}{H_0^2} = \Omega_{m}^0 (1+z)^3 + \Omega_{de}^0 (1+z)^3 \left[ \frac{\epsilon_2}{\epsilon_2 + 3 w_0} \left(1 - (1+z)^{3 w_0 + \epsilon_2} \right) \right] + \Omega_{de}^0 (1+z)^{3(1+w_0) + \epsilon_2} \,, \label{eq:H_IDE2} \end{equation} for IDE2, which have to be inserted in the distance-redshift relation \eqref{eq:dist_red_rel} in order to fit the standard sirens data. Note that in both IDE models the interaction is between dark energy and dark matter only, without baryons. Therefore in Eq.~\eqref{eq:H_IDE1} $\Omega_{m}^0$ should in reality be the parameter $\Omega_{dm}^0=\rho_{dm}^0/\rho_{\rm tot}^0$. Here we neglect this fact and write Eq.~\eqref{eq:H_IDE1} in terms of the total $\Omega_{m}^0$: this introduces an error, but since standard sirens only probe the background evolution and not the growth of structure we expect this error to be small.
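Purely as an illustration, and in the same spirit as the sketches of Section~\ref{sec:early_DE}, the expansion histories \eqref{eq:H_IDE1} and \eqref{eq:H_IDE2} translate into code as follows (our own sketch; it assumes the interaction is active at all redshifts, i.e.\ the limiting case discussed below):
\begin{verbatim}
def E2_ide1(z, Om0=0.3, w0=-1.0, eps1=0.05):
    """H^2/H0^2 for IDE1 with Q = eps1 * H * rho_dm."""
    Ode0 = 1.0 - Om0
    f = eps1 / (eps1 + 3.0 * w0)
    return (Om0 * (1.0 + z)**(3.0 - eps1)
            + Ode0 * (1.0 + z)**(3.0 * (1.0 + w0))
            + Om0 * f * ((1.0 + z)**(3.0 * (1.0 + w0))
                         - (1.0 + z)**(3.0 - eps1)))

def E2_ide2(z, Om0=0.3, w0=-1.0, eps2=0.05):
    """H^2/H0^2 for IDE2 with Q = eps2 * H * rho_de."""
    Ode0 = 1.0 - Om0
    f = eps2 / (eps2 + 3.0 * w0)
    return (Om0 * (1.0 + z)**3
            + Ode0 * (1.0 + z)**3 * f * (1.0 - (1.0 + z)**(3.0*w0 + eps2))
            + Ode0 * (1.0 + z)**(3.0 * (1.0 + w0) + eps2))
\end{verbatim}
With $H(z) = 100\,h\,\sqrt{E^2(z)}$, these feed into the same luminosity-distance integral as in Section~\ref{sec:early_DE}.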
As for EDE, we consider IDE models where the interaction between dark energy and dark matter is negligible for redshifts higher than some reference redshift $z_i$, see e.g.~\cite{Cai:2009ht,Salvatelli:2014zta}. For such models the Hubble rate is given by Eq.~\eqref{eq:H_IDE1} (or Eq.~\eqref{eq:H_IDE2}) only up to $z_i$, while for higher redshifts it is set to its $\Lambda$CDM behaviour. In other words the evolution of $\Omega_{de}(z) = \rho_{de}(z)/\rho_{\rm tot}(z)$ (and consequently of $\Omega_{dm}$) is modified only up to $z_i$, and goes to $\Lambda$CDM for higher redshifts; see Fig.~\ref{fig:IDE_omega_z_plot}. Similarly to EDE, a late-time interaction in the dark sector cannot be efficiently constrained by CMB data (see e.g.~\cite{Salvatelli:2014zta}). This is again the main motivation for considering IDE models with a given $z_i$: in this case standard sirens data, probing the expansion in the redshift range $0< z\lesssim 8$, can prove useful in strengthening the constraints derived from the CMB. For example, Ref.~\cite{Salvatelli:2014zta} analyses the IDE2 model fixing $w_0=-1$ and letting the interaction parameter $\epsilon_2$ take different values in different redshift bins: for a single bin (corresponding to our model) with $z_i=0.9$, they find that a null interaction is excluded at 99\% confidence level. This result is obtained combining Planck data with RSD, and shows that an IDE model where the interaction switches on at late times is a possible solution to the tension that arises in $\Lambda$CDM between CMB and RSD data. In the following analysis we consider again different values of $z_i$, namely $z_i = 1, 2, 3, 4, 6$ and $z_i\gg 6$: with the latter effectively equivalent to $z_i\rightarrow \infty$, for the same reasons discussed in Sec.~\ref{sec:early_DE} for EDE. For both IDE models, we investigate the following scenarios: \begin{enumerate} \item A four-parameter model where every parameter is free: $\Omega_m^0$, $h$, $w_0$ and $\epsilon_1$ (or $\epsilon_2$); \item Three three-parameter models where one parameter among $\Omega_m^0$, $h$ and $w_0$ is fixed to its fiducial value and the others are free, together with $\epsilon_1$ (or $\epsilon_2$); \item Three two-parameter models where one pair of parameters among $\Omega_m^0, w_0, h, \epsilon_1 (\text{or}~\epsilon_2)$ is fixed to their fiducial values and the other pair is free; \item A one-parameter model where all $\Lambda$CDM parameters $\Omega_m^0$, $h$ and $w_0$ are fixed and only $\epsilon_1$ (or $\epsilon_2$) is free. \end{enumerate} We stress again that since we are studying extensions to $\Lambda$CDM, $\epsilon_1$ and $\epsilon_2$ are always considered as free parameters: fixing $\epsilon_1$ (or $\epsilon_2$) to zero would reduce the analysis to the standard-cosmology one performed in \cite{Tamanini:2016zlh}. \subsection{IDE1} \label{sec:IDE1} \begin{table} \begin{center} \begin{tabular}{|c|c|c|c||c|c|c|c|} \hline \multicolumn{8}{|c|}{IDE1: $Q = \epsilon_1 H \rho_{dm}$} \\ \hline \multicolumn{4}{|c||}{$z_i = 2$} & \multicolumn{4}{|c|}{$z_i = 6$} \\ \hline $\Delta\Omega_m^0$ & $\Delta h$ & $\Delta w_0$ & $\Delta \epsilon_1$ & $\Delta\Omega_m^0$ & $\Delta h$ & $\Delta w_0$ & $\Delta \epsilon_1$ \\ \hline 0.148 & 0.168 & 1.13 & 1.04 & 0.695 & 0.251 & 3.25 & 2.52 \\ 0.287 & 0.395 & 2.73 & 1.21 & 0.492 & 0.302 & 3.09 & 1.77 \\ 0.0660 & 0.0511 & 0.423 & 0.485 & 0.333 & 0.0760 & 1.32 & 1.13 \\ \hline \text{} & 0.0840 & 0.649 & 0.274 & \text{} & 0.0811 & 0.663 & 0.139 \\ \text{} & 0.0681 & 0.514 & 0.351 & \text{} & 0.196 & 1.40 & 0.291 \\ \text{} & 0.0393 & 0.373 & 0.177 & \text{} & 0.0339 & 0.277 & 0.0838 \\ \hline 0.0740 & \text{} & 0.147 & 0.530 & 0.258 & \text{} & 0.526 & 0.866 \\ 0.0518 & \text{} & 0.142 & 0.452 & 0.283 & \text{} & 0.641 & 0.880 \\ 0.0503 & \text{} & 0.0954 & 0.344 & 0.152 & \text{} & 0.313 & 0.535 \\ \hline 0.0841 & 0.0195 & \text{} & 0.578 & 0.145 & 0.0367 & \text{} & 0.490 \\ 0.0622 & 0.0204 & \text{} & 0.458 & 0.201 & 0.0621 & \text{} & 0.551 \\ 0.0561 & 0.0118 & \text{} & 0.377 & 0.0758 & 0.0180 & \text{} & 0.282 \\ \hline 0.0431 & \text{} & \text{} & 0.440 & 0.0297 & \text{} & \text{} & 0.204 \\ 0.0428 & \text{} & \text{} & 0.407 & 0.0298 & \text{} & \text{} & 0.178 \\ 0.0296 & \text{} & \text{} & 0.295 & 0.0200 & \text{} & \text{} & 0.137 \\ \hline \text{} & 0.00889 & \text{} & 0.217 & \text{} & 0.00736 & \text{} & 0.117 \\ \text{} & 0.0157 & \text{} & 0.283 & \text{} & 0.00931 & \text{} & 0.103 \\ \text{} & 0.00571 & \text{} & 0.136 & \text{} & 0.00479 & \text{} & 0.0786 \\ \hline \text{} & \text{} & 0.0794 & 0.228 & \text{} & \text{} & 0.0608 & 0.116 \\ \text{} & \text{} & 0.118 & 0.262 & \text{} & \text{} & 0.0672 & 0.0979 \\ \text{} & \text{} & 0.0553 & 0.155 & \text{} & \text{} & 0.0415 & 0.0795 \\ \hline \text{} & \text{} & \text{} & 0.104 & \text{} & \text{} & \text{} & 0.0722 \\
\text{} & \text{} & \text{} & 0.0886 & \text{} & \text{} & \text{} & 0.0564 \\ \text{} & \text{} & \text{} & 0.0738 & \text{} & \text{} & \text{} & 0.0513 \\ \hline \end{tabular} \end{center} \caption{Standard 1$\sigma$ errors on the parameters of IDE1 for the eLISA configuration N2A2M5L6. In the left table the interaction is present only up to $z = 2$, while in the right table it is present only up to $z = 6$. In each row of the table, the top sub-row shows the errors for light seeds (popIII), the central sub-row for heavy seeds with delays (Q3d) and the bottom sub-row for heavy seeds without delays (Q3nod). Blank entries mean that the corresponding parameter has been fixed to its fiducial value (exact prior).} \label{tab:err_IDE1} \end{table} \begin{figure} \begin{center} \includegraphics[width=\textwidth]{Ellipses_IDE1_2sigma} \end{center} \caption{IDE1: 2$\sigma$ contours for $z_i = 2$ (blue) and $z_i = 6$ (red) with N2A2M5L6 for three MBHB scenarios in the two-parameter cosmological models where $\epsilon_1$ is a free parameter together with $\Omega_{m}^0$, $h$ and $w_0$, respectively.} \label{fig:ellipses_IDE1} \end{figure} Let us start with IDE1. Standard 1$\sigma$ errors for the eLISA configuration N2A2M5L6 are shown in Table~\ref{tab:err_IDE1}. The situation is quite similar to the EDE case of Sec.~\ref{sec:early_DE}: because of the strong degeneracy between $\Omega_{m}^0$ and $\epsilon_1$, the errors in the four-parameter model increase if IDE is present up to higher redshift, as shown in the first row of Table~\ref{tab:err_IDE1}. The $z_i = 2$ errors are smaller than the $z_i = 6$ ones because in the first case the data are effectively constraining a three-parameter model (with parameters $\Omega_m^0$, $h$ and $w_0$) for $z > 2$, where the majority of standard sirens lies (see Appendix~\ref{sec:redshift_distribution_of_standard_sirens}), partly bypassing degeneracies between $\epsilon_1$ and the other parameters. Moreover, Table~\ref{tab:err_IDE1} shows that whenever $\Omega_{m}^0$ is fixed to its fiducial value, the errors do decrease going from $z_i = 2$ to $z_i = 6$, as expected: this is clear in rows six and seven of Table~\ref{tab:err_IDE1}, while for the three-parameter model (second row) it is less evident, as was the case in the EDE model. On the other hand, if either $h$ or $w_0$ (but not $\Omega_{m}^0$) is set to its fiducial value in the three-parameter models, the degeneracy among the parameters remains, as can be appreciated in the 3rd and 4th rows of Table~\ref{tab:err_IDE1}, where the errors are still bigger in the $z_i = 6$ case. In Appendix~\ref{sec:degeneracies} we present the marginalised contour plots for the three-parameter models when $z_i \gtrsim 6$ (see Fig.~\ref{fig:degeneracies_1}) and discuss further the degeneracies of these models. Fig.~\ref{fig:ellipses_IDE1} shows contour plots in the two-parameter models $(\epsilon_1,\Omega_{m}^0)$, $(\epsilon_1,h)$ and $(\epsilon_1,w_0)$, for the three MBHB formation scenarios, configuration N2A2M5L6, and both $z_i = 2$ and $z_i = 6$. A relevant difference with the EDE case appears in the two-parameter model $(\epsilon_1,\Omega_{m}^0)$: even though $\Omega_{m}^0$ is free to vary together with $\epsilon_1$, the errors are smaller in the $z_i = 6$ case, contrary to what happened for EDE. This can be appreciated both by comparing the ellipses in the first row of Fig.~\ref{fig:ellipses_IDE1} with those in the first row of Fig.~\ref{fig:early_DE_ellipses}, and from the values of the errors in row five of Table~\ref{tab:err_IDE1}.
The degeneracy between $\Omega_{m}^0$ and $\epsilon_1$ is still evident but less serious than in the EDE case. \begin{figure} \begin{center} \includegraphics[width=\textwidth]{IDE_1} \end{center} \caption{IDE1: 1$\sigma$ errors on $\epsilon_1$ and $w_0$ in the one-parameter cosmological model with only $\epsilon_1$ (left panels) and in the two-parameter cosmological models with both $\epsilon_1$ and $w_0$ (right panels) for three different 6-link eLISA configurations. In the right panels empty and filled markers denote the uncertainties on $w_0$ and $\epsilon_1$, respectively.} \label{fig:IDE_1} \end{figure} In Fig.~\ref{fig:IDE_1} we compare models with different $z_i$ by means of 1$\sigma$ errors on $\epsilon_1$ and $w_0$. As for the EDE case, we consider the three eLISA configurations with six links, noise N2 and varying arm-length, and the three MBHB formation scenarios. The left column represents the one-parameter model with $\epsilon_1$ and the right column the two-parameter model with both $\epsilon_1$ and $w_0$. As expected when fixing $\Omega_{m}^0$, the constraints on both parameters improve as $z_i$ grows\footnote{Differently from what is observed for EDE, in this case the errors on $w_0$ decrease going from $z_i = 1$ to $z_i = 2$ also in the heavy seeds with delay scenario (Q3d). However, the gain in $\Delta w_0$ going from $z_i = 1$ to $z_i = 2$ is less pronounced than in the Q3nod and PopIII models. As for EDE, this is again due to the lower number of standard sirens present in the Q3d model at low redshift (see Appendix~\ref{sec:redshift_distribution_of_standard_sirens}), but here the effect is not strong enough to make $\Delta w_0$ at $z_i = 1$ smaller than at $z_i = 2$, as happens for EDE (cf.~Sec.~\ref{sec:early_DE}).}. They stabilise around $z_i=6$ and do not change appreciably if $z_i\gg 6$: the constraining power of eLISA cannot improve further due to the reduced number of standard sirens after $z \simeq 6$. \subsubsection*{Comparison with present constraints} The present status of constraints on the IDE1 model is that the cosmological data show a tendency to prefer a small positive interaction $\epsilon_1>0$ with $w<-1$, although the significance is often low and depends on the combination of datasets chosen for the analysis. We give some examples of the latest analyses, which only consider the case where the interaction is present also far in the past, $z_i\rightarrow \infty$. Ref.~\cite{Costa:2016tpb} analyses the IDE1 model with $\lambda_1=\epsilon_1/3$ and finds $\lambda_1=0.0006628^{+ 0.000241}_{-0.000592}$ and $w_0=-1.069^{+ 0.0268}_{-0.0152}$ at 1$\sigma$, when all probes are combined (Planck+SNIa+BAO+$H_0$+RSD). Also Ref.~\cite{Nunes:2016dlj} finds a mild preference for a positive $\epsilon_1$ and $w<-1$, but only when considering exclusively $H_0$ probes supplemented by cosmic chronometer techniques. Refs. \cite{Sola:2016ecz,Li:2015vla} analyse models of interacting vacuum energy, thus setting $w_0=-1$. The first work, Ref.~\cite{Sola:2016ecz}, finds that the SNIa+BAO+$H(z)$+LSS+Planck data favour a mild dynamical vacuum evolution\footnote{Here LSS means the measurement of $\sigma_8$.}, while Ref.~\cite{Li:2015vla} uses again Planck+SNIa+BAO+$H_0$ data to find a strong constraint on $\beta=\epsilon_1/3$ with no evidence of a positive interaction: $\beta = -0.00045 \pm 0.00069$. Therefore, constraints on IDE1 still depend widely on the chosen combination of datasets and on the data analysis technique.
However, we remark that in general the claimed sensitivity on $\epsilon_1$ of present cosmological probes is higher by at least two orders of magnitude than the forecast for eLISA we give in the present work with all other parameters fixed. Furthermore, this sensitivity may be greatly improved by the time eLISA will fly, in particular with the help of the Euclid survey. The current analyses only consider models for which the interaction is present also far in the past, and we cannot therefore compare with them the case in which eLISA provides its best constraints. However, given the large discrepancy in the precision of the measurement for $z_i\rightarrow \infty$, it is likely that for IDE1 eLISA will only serve as an independent but less sensitive means of testing the model, and possibly break degeneracies with other cosmological probes when one considers the scenario in which the interaction between dark energy and dark matter is negligible for redshifts higher than some reference redshift $z_i$. \subsection{IDE2} \label{sec:IDE2} \begin{table} \begin{center} \begin{tabular}{|c|c|c|c||c|c|c|c|} \hline \multicolumn{8}{|c|}{IDE2: $Q = \epsilon_2 H \rho_{de}$} \\ \hline \multicolumn{4}{|c||}{$z_i = 2$} & \multicolumn{4}{|c|}{$z_i = 6$} \\ \hline $\Delta\Omega_m^0$ & $\Delta h$ & $\Delta w_0$ & $\Delta \epsilon_2$ & $\Delta\Omega_m^0$ & $\Delta h$ & $\Delta w_0$ & $\Delta \epsilon_2$ \\ \hline 0.168 & 0.199 & 1.56 & 1.23 & 2610 & 0.0977 & 3730 & 11200 \\ 0.308 & 0.448 & 3.22 & 1.49 & 2520 & 0.165 & 3590 & 10800 \\ 0.0692 & 0.0622 & 0.534 & 0.545 & 1540 & 0.0384 & 2200 & 6600 \\ \hline \text{} & 0.0819 & 0.662 & 0.295 & \text{} & 0.0803 & 0.686 & 0.178 \\ \text{} & 0.0668 & 0.522 & 0.398 & \text{} & 0.148 & 1.02 & 0.308 \\ \text{} & 0.0421 & 0.398 & 0.192 & \text{} & 0.0338 & 0.307 & 0.105 \\ \hline 0.0682 & \text{} & 0.128 & 0.495 & 5.93 & \text{} & 8.45 & 25.4 \\ 0.0481 & \text{} & 0.157 & 0.486 & 0.844 & \text{} & 1.20 & 3.67 \\ 0.0455 & \text{} & 0.0851 & 0.322 & 1.36 & \text{} & 1.94 & 5.87 \\ \hline 0.0751 & 0.0156 & \text{} & 0.524 & 0.487 & 0.0843 & \text{} & 2.16 \\ 0.0558 & 0.0229 & \text{} & 0.441 & 0.567 & 0.126 & \text{} & 2.32 \\ 0.0502 & 0.00937 & \text{} & 0.340 & 0.217 & 0.0343 & \text{} & 0.993 \\ \hline 0.0569 & \text{} & \text{} & 0.466 & 0.0541 & \text{} & \text{} & 0.366 \\ 0.0455 & \text{} & \text{} & 0.374 & 0.0632 & \text{} & \text{} & 0.396 \\ 0.0386 & \text{} & \text{} & 0.315 & 0.0374 & \text{} & \text{} & 0.248 \\ \hline \text{} & 0.0112 & \text{} & 0.217 & \text{} & 0.00916 & \text{} & 0.144 \\ \text{} & 0.0208 & \text{} & 0.332 & \text{} & 0.0128 & \text{} & 0.148 \\ \text{} & 0.00688 & \text{} & 0.132 & \text{} & 0.00580 & \text{} & 0.0932 \\ \hline \text{} & \text{} & 0.101 & 0.239 & \text{} & \text{} & 0.0772 & 0.146 \\ \text{} & \text{} & 0.154 & 0.296 & \text{} & \text{} & 0.0903 & 0.130 \\ \text{} & \text{} & 0.0697 & 0.157 & \text{} & \text{} & 0.0534 & 0.0994 \\ \hline \text{} & \text{} & \text{} & 0.0866 & \text{} & \text{} & \text{} & 0.0703 \\ \text{} & \text{} & \text{} & 0.0765 & \text{} & \text{} & \text{} & 0.0577 \\ \text{} & \text{} & \text{} & 0.0609 & \text{} & \text{} & \text{} & 0.0499 \\ \hline \end{tabular} \end{center} \caption{Standard 1$\sigma$ errors on IDE2 parameters for N2A2M5L6. In the left table the interaction is present only up to $z = 2$, while in the right table it is present only up to $z = 6$.
In each row of the table, the top sub-row shows the errors for light seeds (popIII), the central sub-row for heavy seeds with delays (Q3d) and the bottom sub-row for heavy seeds without delays (Q3nod). Blank entries mean that the corresponding parameter has been fixed to its fiducial value (exact prior).} \label{tab:err_IDE2} \end{table} \begin{figure} \begin{center} \includegraphics[width=\textwidth]{Ellipses_IDE2_2sigma} \end{center} \caption{IDE2: 2$\sigma$ contours for $z_i = 2$ (blue) and $z_i = 6$ (red) with N2A2M5L6 for three MBHB models in the two-parameter cosmological models where $\epsilon_2$ is a free parameter together with $\Omega_{m}^0$, $h$ and $w_0$, respectively.} \label{fig:ellipses_IDE2} \end{figure} Standard 1$\sigma$ errors for the IDE2 model are shown in Table~\ref{tab:err_IDE2} where, as usual, we have chosen the eLISA configuration N2A2M5L6. In this model we also expect a degeneracy between $\Omega_{m}^0$ and $\epsilon_2$ due to the energy exchange between dark energy and dark matter and vice-versa. Table~\ref{tab:err_IDE2} shows that the degeneracy is more severe than in the IDE1 case: while the errors on all parameters are comparable with those of IDE1 when $z_i=2$, they are much larger when $z_i=6$. This is especially evident in the four-parameter model, but one can see from the table that in general IDE2 with any number of free parameters is more loosely constrained than IDE1 if $z_i=6$. Note in particular the two-parameter model $(\epsilon_2,\Omega_{m}^0)$, fifth row of Table~\ref{tab:err_IDE2}: in the heavy seeds with delay scenario (the one with the lowest number of standard sirens), the errors are higher when $z_i=6$ than when $z_i=2$, meaning that the degeneracy is not broken, contrary to what happened in the IDE1 case. This can be appreciated also from the first line of Fig.~\ref{fig:ellipses_IDE2}, where we show the 2$\sigma$ contour plots for all the two-parameter models in the three different MBHB formation scenarios. Degeneracies in the IDE2 model are further discussed in Appendix~\ref{sec:degeneracies}, and in Figs.~\ref{fig:degeneracies_1} and \ref{fig:degeneracies_2} of the Appendix we report marginalised contour plots for three-parameter cosmological models. Like for EDE and IDE1, a comparison between IDE2 models with different $z_i$ is given in Fig.~\ref{fig:IDE_2} in terms of errors on $\epsilon_2$ and $w_0$: the analysis of this figure is analogous to what was discussed for IDE1. \begin{figure} \begin{center} \includegraphics[width=\textwidth]{IDE_2} \end{center} \caption{IDE2: 1$\sigma$ constraints on $\epsilon_2$ and $w_0$ in the one-parameter cosmological model with only $\epsilon_2$ (left panels) and in the two-parameter cosmological models with both $\epsilon_2$ and $w_0$ (right panels) for three different 6-link eLISA configurations. In the right panels empty and filled markers denote the uncertainties on $w_0$ and $\epsilon_2$, respectively.} \label{fig:IDE_2} \end{figure} \subsubsection*{Comparison with present constraints} The first evidence that IDE2 could be preferred by cosmological observations was presented in Ref.~\cite{Salvatelli:2013wra}. Subsequently, Ref.~\cite{Salvatelli:2014zta} analysed the model of a late-time interaction between dark matter and vacuum energy, i.e.~IDE2 with $w_0=-1$ and $\epsilon_2$ varying in several redshift bins. They found moderate evidence for a negative interaction ($\epsilon_2<0$) starting at $z_i=0.9$ from the combination of Planck, SNIa and RSD data, excluding the null interaction ($\Lambda$CDM) at 99\%.
On the other hand, Ref.~\cite{Murgia:2016ccp} considers the case of an interaction extending back in the past, $z_i\rightarrow\infty$, excludes $w_0=-1$ to avoid instabilities, and finds that a positive coupling with $w_0<-1$ is favoured by the combination of Planck, SNIa and BAO/RSD data: they get $\epsilon_2=0.159^{+0.146}_{-0.154}$ at 2$\sigma$. This result is qualitatively consistent with Ref.~\cite{Costa:2016tpb}, which also excludes a null interaction and finds a positive $\lambda_2=\epsilon_2/3=0.02047^{+0.00565}_{-0.00667}$ at 1$\sigma$ (using also $H_0$ measurements), and with Ref.~\cite{Feng:2016djj} where Planck, SNIa, BAO and $H_0$ data are used to get $\lambda_2=\epsilon_2/3=0.0782^{+0.0377}_{-0.0347}$ at 2$\sigma$. Ref.~\cite{Sola:2016ecz} also finds evidence for a non-zero coupling (although here $w_0$ is fixed to $-1$). Conversely, Ref.~\cite{Li:2015vla} again finds no evidence for a non-zero interaction, as was the case for IDE1: with Planck+SNIa+BAO+$H_0$ data, the authors constrain $\beta=\epsilon_2/3$ to $\beta = -0.026^{+0.036}_{-0.053}$ when $z_i \rightarrow\infty$ and fixing $w_0 = -1$. It is important to point out that, overall, the constraints and/or error bars on the IDE2 model from present observational datasets are less stringent than for the IDE1 model, because of degeneracies: for example, the same analysis applied to both models in Ref.~\cite{Li:2015vla} led to constraints on IDE2 that are weaker by two orders of magnitude. As we have stressed, degeneracies also degrade the eLISA forecasts, but only by a factor of a few: we expect therefore that eLISA, in combination with other cosmological observables, will be able to improve the constraints on IDE2, in particular for those models assuming low values of $z_i$. \section{Discussion and conclusion} \label{sec:discussion_and_conclusion} We have presented a forecast analysis of the capabilities of the eLISA mission to constrain two alternative models of dark energy, namely early and interacting dark energy. These models have been widely studied in the literature, and previous analyses have shown the advantages of testing them using data at various redshifts, combining different observational probes such as CMB, BAO, SNIa, LSS, weak lensing, RSD. The motivation of this work resides in the fact that standard sirens with eLISA can provide access to an intermediate range of redshift $1\lesssim z \lesssim 8$, higher than what can be reached with SNIa and matter structure data. Furthermore, since the luminosity distance of standard sirens is measured through gravitational waves, it provides independent access to the distance-redshift relation, partly complementary to electromagnetic observations. In the present analysis we have used the same procedure developed in \cite{Tamanini:2016zlh}: we have started from simulations of the event rates of MBHBs in three different models for the BH seeds, and we have used realistic scenarios for the occurrence and detection of the EM counterparts. Equipped with catalogues of standard sirens, we have selected those which are visible to different eLISA configurations, setting the thresholds of SNR$>8$ and sky localisation better than 10 ${\rm deg}^2$. These have been achieved including both the inspiral and merger and ringdown phases of the GW event (the ``optimistic scenario'' in \cite{Tamanini:2016zlh}).
Since in Ref.~\cite{Tamanini:2016zlh} we demonstrated that eLISA configurations with four links are not very powerful in probing the expansion of the universe, here we concentrated only on six-link configurations. We have also fixed the duration of the mission to five years (as done in \cite{Tamanini:2016zlh}) and the noise level to the Pathfinder expected one (N2), which is justified after the success of the LISA Pathfinder mission \cite{Armano:2016bkm}. We have therefore considered three eLISA configurations: N2A1M5L6, N2A2M5L6, N2A5M5L6. The main result of the present analysis is that standard sirens with eLISA can be competitive in constraining EDE and IDE models if the onset of the deviation from $\Lambda$CDM (i.e.~the epoch when EDE starts to be non-negligible, or when the interaction with DM begins) occurs relatively late, at $z \lesssim 6$. Models for which the deviation from $\Lambda$CDM starts far in the past, typically before recombination, are well constrained by current cosmological probes, in particular by the CMB; the present constraints are far better than those that can be achieved with even the best configurations of eLISA (except perhaps for the IDE2 case, depending on the analysis one compares with: cf.~the discussion at the end of Sec.~\ref{sec:IDE2}). On the other hand, if the deviation starts relatively late, the present observational constraints on both EDE and IDE models are highly degraded, and eLISA becomes therefore competitive in testing these scenarios. This happens because the redshift distribution of standard sirens peaks in the interval $2\leq z \leq 4$, and very few standard sirens are available at redshift larger than six. The errors on the EDE and IDE parameters beyond $\Lambda$CDM (namely $w_0$, $\Omega_{de}^e$, $\epsilon_{1}$, $\epsilon_2$), when all the other cosmological parameters are held fixed, decrease with the increase of the redshift at which the deviation from $\Lambda$CDM starts. However, this occurs only up to an onset redshift of about six; for higher deviation redshifts, the eLISA errors stabilize and do not change appreciably even for an onset far in the radiation era. This behaviour of the errors follows the redshift distribution of the standard sirens available for the measurement (see appendix~\ref{sec:redshift_distribution_of_standard_sirens}): beyond a redshift of about six, the number of detected standard sirens no longer increases sufficiently to provide a better measurement of the cosmological parameters. We have also demonstrated, however, that this behaviour can be affected by degeneracies among the parameters, in particular between $\Omega_m^0$ and $\Omega_{de}^e$ or $\epsilon_{1}$, $\epsilon_2$ (a degeneracy which is expected in both EDE and IDE models due to the way these parameters enter the distance-redshift relation). If $\Omega_m^0$ is not set to its fiducial value, the errors on $\Omega_{de}^e$ or $\epsilon_{1}$, $\epsilon_2$ increase when the redshift of the onset of the deviation from $\Lambda$CDM increases. Once again this reflects the fact that the peak of the standard siren distribution resides in the interval $2\leq z \leq 4$ (see appendix~\ref{sec:redshift_distribution_of_standard_sirens}): for low deviation redshift, the bulk of the MBHB standard sirens detected by eLISA effectively probes $\Lambda$CDM or dynamical dark energy (i.e.~with $w_0\neq -1$), and this in turn leads to a reduction of the errors on $\Omega_{de}^e$ or $\epsilon_{1}$, $\epsilon_2$. 
This happens especially for the models where either three or all four parameters are free to vary. Therefore, without setting an exact prior on $\Omega_m^0$, eLISA can only constrain models where the redshift of the onset of the deviation from $\Lambda$CDM is sufficiently low. Note, however, that whenever the deviation from $\Lambda$CDM starts well after recombination, from the point of view of the eLISA analysis one can confidently use the very precise CMB measurement of $\Omega_m^0$ as an exact prior. We can therefore conclude that eLISA with six-link configurations will serve as an independent means to test alternative models for the acceleration of the universe such as EDE and IDE, and will be able to improve the present constraints, in particular for the EDE and IDE2 models, if one considers low values of the redshift at which the deviation from $\Lambda$CDM starts. \acknowledgments We thank the {\it Institut d'Astrophysique de Paris} and the institute {\it AstroParticule et Cosmologie} at {\it Universit\'{e} Paris Diderot} for hospitality. We also thank Enrico Barausse for useful comments on the draft. NT acknowledges support from the Labex P2IO and the Enhanced Eurotalents Programme.
\section{Introduction} Nowadays society is becoming increasingly dependent on the use of computer systems in various fields such as finance, security and many aspects of everyday life. At the same time, threats and attacks are constantly growing. The cyber-security research area looks at the ability to act proactively in order to mitigate or prevent attacks. In that sense, Network Intrusion Detection (NID) is one of the possible solutions. This task is basically carried out under two approaches: (i) Misuse Detection and (ii) Anomaly Detection. These approaches have advantages and disadvantages associated with their suitability to various scenarios \cite{rivero2014}. There are machine learning based solutions for both approaches \cite{sangkatsanee,tsai}. Despite the extensive academic research, their deployment in operational NID environments has been very limited \cite{sommer}. On the other hand, new attacks are constantly occurring, often as variants of known attacks. Traditional machine learning approaches are unable to tackle challenging scenarios in which new classes may appear after the learning stage. This scenario is present in many real-world situations \cite{eszsl}, and specifically in NID-related tasks, where one of the main problems is the emergence of new attacks, which correspond to new classes. In that case the detection algorithm should identify the new classes, which is important in real environments. Recently, there has been an increasing interest in the study of ZSL approaches, which might be a possible solution to this problem \cite{akata,dap,eszsl}. In this paper we propose an Attribute Learning for Network Intrusion Detection (ALNID) algorithm. We present the preliminary results as an initial step prior to the detection of new attacks on networks. A proposal of the experimental data setup for the application of ZSL in NID is also given. The proposed attribute learning algorithm can be generalized to many problems. It is simple, based on criteria such as attribute frequency and entropy, and learns new values for the original attributes. The results show a significant improvement in the representation of the attributes for the classes. This is encouraging since we expect to achieve higher accuracy in the inference stage of ZSL. The rest of the paper is organized as follows. In Section II we briefly review the background and related work on ZSL, as well as the prior attribute learning stage. In Section III we describe the ALNID algorithm. In Section IV we present the data preprocessing and propose an experimental data setup for the application of ZSL in NID. In Section V we discuss the results on the KDD Cup 99 dataset. Finally, we conclude and address some lines of future work. \section{Zero-Shot Learning} ZSL is inspired by the way human beings are capable of identifying new classes when a high-level description is provided. It consists of identifying new classes without training examples, from a high-level description of the new classes (unseen classes) that relates them to classes previously learned during training (seen classes). This is done by learning the attributes as an intermediate layer that provides semantic information about the classes to classify. ZSL has two stages: the training or attribute learning stage, in which knowledge about the attributes is captured, and the inference stage, where this knowledge is used to classify instances among a new set of classes. This approach has been widely applied to image classification tasks \cite{akata,dap,eszsl,sun}. 
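To make the two-stage structure concrete, the following minimal Python sketch illustrates a generic attribute-based ZSL pipeline: one probabilistic classifier per attribute is trained on seen-class data, and a new instance is assigned to the unseen class whose attribute signature best matches the predicted attributes. All data, signatures and sizes are toy assumptions introduced only for illustration; the sketch is not a faithful implementation of any of the cited methods.

\begin{verbatim}
import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy seen-class data: 100 instances, 5 features, 3 binary attributes.
rng = np.random.default_rng(0)
X_train = rng.normal(size=(100, 5))
attr_train = rng.integers(0, 2, size=(100, 3))

# Stage 1 (attribute learning): one classifier per attribute.
attr_clfs = [LogisticRegression().fit(X_train, attr_train[:, j])
             for j in range(attr_train.shape[1])]

# Hypothetical attribute signatures of two unseen classes.
signatures = np.array([[1, 0, 1],   # unseen class A
                       [0, 1, 1]])  # unseen class B

# Stage 2 (inference): predict attributes, pick the closest signature.
x_new = rng.normal(size=(1, 5))
attr_probs = np.array([c.predict_proba(x_new)[0, 1] for c in attr_clfs])
prediction = np.argmin(np.abs(signatures - attr_probs).sum(axis=1))
print("predicted unseen class:", "AB"[prediction])
\end{verbatim}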
There are different solutions for both the attribute learning and the inference stages, which model the relationships among features, attributes, and classes in images. Such a strategy makes it difficult to apply ZSL to other problems. This, coupled with the need to identify new attacks on computer networks, motivated us to develop this research. \subsection{Attribute Learning Stage} The attribute learning stage focuses on learning to recognize several properties of objects, which allows learning new classes based only on their description \cite{eszsl}. Recently there have been applications of automatic recognition of attributes on images \cite{ferrari,lampert,sun}. In \cite{sun} the authors propose the extensive Scene UNderstanding (SUN) database that contains $899$ categories and $130,519$ images. The database is built by human scene classification using Amazon Mechanical Turk (AMT). The authors first defined the attributes and then collected votes from AMT evaluating the presence or absence of the previously defined attributes in each image. The procedure then selected or designed several state-of-the-art features that are potentially useful for scene classification. To the best of our knowledge this approach has never been applied to network intrusion detection. \subsection{Inference Stage} In the inference stage the predicted attributes are combined to infer the classes. There are basically three approaches \cite{eszsl} for this second stage: k--nearest neighbour (k--NN), probabilistic frameworks \cite{dap} and energy functions \cite{akata,eszsl}. One interesting technique is the cascaded probabilistic framework proposed in \cite{dap,lampert}, where the attributes predicted in the first stage are combined to determine the most likely target class. It has two variants: Direct Attribute Prediction (DAP), in which, during the training stage, a probabilistic classifier is learned for each attribute; at the inference stage the classifiers are used to infer new classes from their attribute signatures. The other variant is Indirect Attribute Prediction (IAP), which learns a classifier for each training class and then combines the predictions of training and test classes. The third ZSL approach uses an energy function \cite{akata,eszsl}, which plays the role of a penalty function: given data $x$ and a category vector $v$, the energy function $ E_w (x,v) = x'Wv $ is trained, the metric $ W $ being such that $ E_w (x,v) $ is positive if $x$ belongs to the category represented by $v$ and negative otherwise. In \cite{crossmodal} a framework is proposed to predict both seen and unseen classes. Unlike other approaches, this proposal not only classifies unknown classes but also classifies known classes. In the attribute learning stage they consider the set of all classes $Y$ during training and testing. Some classes $y$ are available as seen classes in training data ($Y_s$) and the others are the Zero-Shot classes, without any training data, as unseen classes ($Y_u$). Then they define $W = W_s \cup W_u$ as the word vectors for both seen and unseen classes. All training images $x^{(i)} \in X_y $ of a seen class $y \in Y_s$ are mapped to the word vector $w_y$. 
Then, to learn this mapping, a two-layer neural network is trained to minimize the following objective function (Equation \ref{eq:neuralnetwork}) \cite{crossmodal}: \begin{equation} J(\Theta ) = \sum\limits_{y \in Y_s }^{} {\sum\limits_{x^{(i)} \in X_y }^{} {\left\| {w_y - \theta ^{(2)} f(\theta ^{(1)} x^{(i)} )} \right\|^2 } } \label{eq:neuralnetwork} \end{equation} Another recent and simple energy-function-based approach is the ESZSL proposal \cite{eszsl}. It is based on \cite{akata}, which models the relationships among features, attributes, and classes. The authors assume that at the training stage there are $z$ classes, each with a signature composed of $a$ attributes. These signatures are represented in a matrix $ S \in [0,1]^{axz} $. This matrix contains, for each attribute, a value in $[0, 1]$ representing the relationship between the attribute and each class. However, how the matrix $S$ is computed is not addressed. The instances available at the training stage are denoted by $X \in \mathbb{R}^{dxm} $, where $d$ is the dimensionality of the data, and $m$ is the number of instances. The authors also compute the matrix $ Y \in \{ - 1,1\} ^{mxz} $ to denote the class to which each instance belongs. During the training they compute the matrix $V \in \mathbb{R}^{dxa} $ as in Equation \ref{eq:eszsl}: \begin{equation} V = (XX^T + \gamma I)^{ - 1} XYS^T(SS^T + \lambda I)^{ - 1} \label{eq:eszsl} \end{equation} In the inference stage they distinguish between a new set of $z'$ classes by their attribute signatures, $ S' \in [0,1]^{axz'} $. Then, given a new instance $x$, the prediction is given by Equation \ref{eq:3}: \begin{equation} \mathop {\arg \max }\limits_i (x^T VS_i ') \label{eq:3} \end{equation} \begin{figure}[t!] \centering \includegraphics[keepaspectratio, width= 8.0cm]{zsl2.jpg} \caption{Zero-Shot Learning for Network Intrusion Detection} \label{fig:zero-shot} \end{figure} \section{Proposed Attribute Learning Algorithm} Most attribute learning algorithms have been implemented for automatic recognition of attributes on images based on feature extraction \cite{ferrari,lampert,sun}, and they are hardly applicable to other kinds of problems like NID. First, we evaluated different machine learning algorithms on the preprocessed dataset (described in the next section), and the C4.5 decision tree algorithm showed the highest classification accuracy of 99.54\%. This result, together with the lack of attribute learning algorithms for tasks not related to computer vision, led us to consider the extracted rules as a solution to build or relearn the attributes. This method could be extended not only to cover a wider area of Information Technology security but also to different kinds of problems with labeled and structured data in which the C4.5 algorithm can be applied with good results. Our proposal is for the Attribute Learning Stage of the ZSL approach depicted in Figure \ref{fig:zero-shot}. ALNID -- Attribute Learning for Network Intrusion Detection Algorithm -- is a rule-based algorithm which weights the attributes according to their entropy and frequency in the rules. We begin with a set of $X$ instances composed of $ A = \{ a_1 ,\cdots,a_n \} $ attributes. For each $a_i \in A$ we compute the quantity of information ($I$), the entropy ($E$) and the information gain ($G$). With these values we run the C4.5 decision tree algorithm. Furthermore, during each iteration we record how many times each attribute $a_i \in A$ is evaluated by each rule of the set $ R = \{ r_1 ,\cdots,r_m \} $. 
This is what we call frequency. Then, a new set of attributes $A' = \{ a_1 ',\cdots,a_n '\} $ is created. The values of $A'$ are the frequency counts, increased each time an attribute is evaluated by a rule $r_m$. As output the algorithm returns the set of newly valued instances $X' = \{ A_i '\} _{i = 1}^N$ composed of the learned attributes, where $N$ is the number of instances. The pseudo-code of the proposed method is listed below: \begin{verbatim}
ALNID(X: examples, A: attributes) returns X': examples with
                                            learned attributes
{attributes are weighted w.r.t. their frequency in each rule match}
var
  I: quantity of information
  E: entropy
  G: information gain
begin
  if all examples in X are from the same class then
    for each attribute in A do
      Weight the attribute;
      Add the examples to X'
    return X'
  else
    I := quantity of information of the examples;
    for each attribute in A do
      E := entropy of the attribute;
      G := information gain of the attribute
    a := attribute that maximizes G;
    v := value of a;
    Delete a from A;
    Generate a root node for attribute a;
    Weight attribute a;
    Add the examples to X';
    for each partition of X generated by the values v of a do
      Generate a new branch with a == v;
      ALNID(examples from X with a == v, A)
    return X'
end.
\end{verbatim} \begin{table}[htb] \centering \small \caption{Data Setup for Zero-Shot Learning on Network Intrusion Detection} \begin{tabular}{lccr} \hline\noalign{\smallskip} \textbf{Class} & \textbf{Nr. of Examples} & \textbf{Zero-Shot Class} & \textbf{Category}\\ \noalign{\smallskip} \hline \noalign{\smallskip} smurf & 280,790 & no & DOS (D) \\ neptune & 107,201 & no & DOS (D) \\ back & 2203 & no & DOS (D)\\ teardrop & 979 & yes & DOS (D)\\ pod & 264 & no & DOS (D)\\ land & 21 & yes & DOS (D)\\ normal & 97,277 & no & NORMAL (N) \\ satan & 1589 & no & PROBE (P) \\ ipsweep & 1247 & yes & PROBE (P) \\ portsweep & 1040 & no & PROBE (P) \\ nmap & 231 & yes & PROBE (P) \\ warezclient & 1020 & no & R2L (R) \\ guess\_passwd & 53 & yes & R2L (R) \\ warezmaster & 20 & no & R2L (R) \\ imap & 12 & yes & R2L (R) \\ ftp\_write & 8 & no & R2L (R) \\ multihop & 7 & no & R2L (R) \\ phf & 4 & no & R2L (R) \\ spy & 2 & no & R2L (R) \\ buffer\_overflow & 30 & no & U2R (U) \\ rootkit & 10 & yes & U2R (U) \\ loadmodule & 9 & no & U2R (U) \\ perl & 3 & yes & U2R (U) \\ \hline \end{tabular} \label{table:Table2} \end{table} \section{Data and Experimental Setup} KDD Cup 99\footnote{KDD Cup is the annual Data Mining and Knowledge Discovery competition organized by ACM Special Interest Group on Knowledge Discovery and Data Mining.} is the dataset most used in research to test and compare intrusion detection algorithms. It contains about 5 million instances representing 39 types of attacks, which are grouped into 4 categories, plus one category for normal traffic. Some preprocessing variants applied to the dataset are based on the reduction of the classes, replacing the attack labels by their corresponding categories and thus reducing the number of classes to 5 \cite{rivero2014}. Each instance represents a TCP/IP connection described by 41 attributes, both quantitative and qualitative\footnote{The current research uses a small portion that represents $10\%$ of the original dataset, containing $494,021$ instances.}. Although our proposal holds for the attribute learning stage, we propose herein a scheme to apply the full ZSL approach to NID tasks (see Figure \ref{fig:zero-shot}). 
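As an illustration of how the learned attributes could feed the inference stage of such a scheme, the following Python sketch implements the closed-form ESZSL training and prediction steps (Equations \ref{eq:eszsl} and \ref{eq:3}) with numpy. The sizes, the random toy data and the signature matrices are assumptions made only for the example; how the signatures can actually be obtained from the learned attributes is discussed in Section V.

\begin{verbatim}
import numpy as np

def eszsl_train(X, Y, S, gamma=1.0, lam=1.0):
    # Closed-form solution of Equation (2):
    # V = (X X^T + gamma I)^-1  X Y S^T  (S S^T + lambda I)^-1
    d, a = X.shape[0], S.shape[0]
    left = np.linalg.inv(X @ X.T + gamma * np.eye(d))
    right = np.linalg.inv(S @ S.T + lam * np.eye(a))
    return left @ X @ Y @ S.T @ right      # V has shape d x a

def eszsl_predict(x, V, S_new):
    # Equation (3): argmax_i x^T V S'_i over the unseen classes.
    return int(np.argmax(x @ V @ S_new))

# Toy sizes (assumptions): d=12 features, m=200 instances,
# z=3 seen classes, a=5 attributes, z'=2 unseen classes.
rng = np.random.default_rng(1)
X = rng.normal(size=(12, 200))
labels = rng.integers(0, 3, size=200)
Y = np.where(labels[:, None] == np.arange(3), 1, -1)
S = rng.random(size=(5, 3))       # seen-class signatures (assumed)
S_new = rng.random(size=(5, 2))   # unseen-class signatures (assumed)

V = eszsl_train(X, Y, S)
print("predicted unseen class:", eszsl_predict(X[:, 0], V, S_new))
\end{verbatim}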
Table \ref{table:Table2} shows that classes such as \emph{spy} and \emph{perl} have 2 and 3 instances, respectively, while classes such as \emph{smurf} and \emph{normal} have 280,790 and 97,277, respectively. Then, considering the study in \cite{rivero2014}, we selected 12 of the original 41 attributes. Their statistical description is shown in Table \ref{table:Table3}. We modified the dataset using the five categories as classes. Later, for each category we selected two attacks as Zero-Shot classes. Table \ref{table:Table2} illustrates these values and the Zero-Shot classes. This setup is very practical for the application at hand because we can classify the Zero-Shot classes. In this case, new attacks can be classified in the categories to which they belong. \section{Results and Discussion} The ALNID algorithm was evaluated on the preprocessed dataset, producing a decision tree (DT) with 132 leaves and 263 rules ($ R = \{ r_1 ,...,r_{263} \} $). The classification accuracy for the seen classes was 99.94\%. \begin{table}[htb] \centering \small \caption{Statistical description of the attributes} \begin{tabular}{lcccr} \hline\noalign{\smallskip} \textbf{\small{Attributes}} & \textbf{\small{Minimum}} & \textbf{\small{Maximum}} & \textbf{\small{Mean}} & \textbf{\small{StdDev}} \\ \hline \noalign{\smallskip} duration & 0 & 58,329 & 47.979 & 707.747 \\ \emph{duration'} & 0 & 3 & 0.013 & 0.117 \\ \hline protocol\_type & 1 & 3 & 2.189 & 0.961 \\ \emph{protocol\_type'} & 0 & 1 & 0.845 & 0.362 \\ \hline src\_bytes & 0 & 693,375,640 & 3025.616 & 988,219.101 \\ \emph{src\_bytes'} & 0 & 7 & 0.762 & 1.526 \\ \hline dst\_bytes & 0 & 5,155,468 & 868.531 & 33,040.035 \\ \emph{dst\_bytes'} & 0 & 3 & 0.057 & 0.297 \\ \hline urgent & 0 & 3 & 0 & 0.006 \\ \emph{urgent'} & 0 & 1 & 0 & 0.016 \\ \hline count & 0 & 511 & 332.286 & 213.147 \\ \emph{count'} & 1 & 4 & 1.821 & 0.425 \\ \hline srv\_count & 0 & 511 & 292.907 & 246.323 \\ \emph{srv\_count'} & 0 & 2 & 0 & 0.017 \\ \hline same\_srv\_rate & 0 & 1 & 0.792 & 0.388 \\ \emph{same\_srv\_rate'} & 0 & 1 & 0.784 & 0.411 \\ \hline dst\_host\_count & 0 & 255 & 232.471 & 64.745 \\ \emph{dst\_host\_count'} & 0 & 2 & 0.033 & 0.238\\ \hline dst\_host\_srv\_count & 0 & 255 & 188.666 & 106.04 \\ \emph{dst\_host\_srv\_count'} & 0 & 3 & 0.21 & 0.469 \\ \hline dst\_host\_same\_srv\_rate & 0 & 1 & 0.754 & 0.411\\ \emph{dst\_host\_same\_srv\_rate'} & 0 & 1 & 0.024 & 0.154 \\ \hline dst\_host\_same\_src\_port\_rate & 0 & 1 & 0.602 & 0.481 \\ \emph{dst\_host\_same\_src\_port\_rate'} & 0 & 3 & 0.585 & 0.779 \\ \hline \end{tabular} \label{table:Table3} \end{table} Table \ref{table:Table3} summarizes the minimum, maximum, mean and standard deviation values for each of the original attributes and for those learned by our proposal. Figure~\ref{fig:two} depicts the distribution per class of the original attribute values ($A = \{ a_1,...,a_{12}\} $) and the learned ones ($A' = \{ a_1 ',...,a_{12} '\} $). They are listed in decreasing order of the entropy ($E$) of the learned attributes. In general the plots show better separated distributions per class for the attributes learned with our proposal than for the original ones. The learned attribute \emph{duration'} in Figure \ref{fig:two}(\emph{a}) and (\emph{b}) improves the distribution with respect to at least one class, e.g., the NORMAL (N) one. The rest of the attributes learned by ALNID achieve a higher separability than the original ones in their distribution with respect to the classes (see Figure \ref{fig:two}(\emph{c})--(\emph{x})). 
This new representation of the learned attributes is expected to improve the k--NN classification during the inference stage. \begin{figure} \centering \subfigure[duration]{\includegraphics[width=0.35\textwidth]{2duration.png} } \subfigure[\emph{duration'}]{\includegraphics[width=0.35\textwidth]{1duration.png} } \subfigure[protocol\_type]{\includegraphics[width=0.35\textwidth]{2protocoltype.png} } \subfigure[\emph{protocol\_type'}]{\includegraphics[width=0.35\textwidth]{1protocoltype.png} } \subfigure[src\_bytes]{\includegraphics[width=0.35\textwidth]{2srcbytes.png} } \subfigure[\emph{src\_bytes'}]{\includegraphics[width=0.35\textwidth]{1srcbytes.png} } \subfigure[dst\_bytes]{\includegraphics[width=0.35\textwidth]{2dstbytes.png} } \subfigure[\emph{dst\_bytes'}]{\includegraphics[width=0.35\textwidth]{1dstbytes.png} } \subfigure[urgent]{\includegraphics[width=0.35\textwidth]{2urgent.png} } \subfigure[\emph{urgent'}]{\includegraphics[width=0.35\textwidth]{1urgent.png} } \subfigure[count]{\includegraphics[width=0.35\textwidth]{2count.png} } \subfigure[\emph{count'}]{\includegraphics[width=0.35\textwidth]{1count.png} } \label{fig:one} \end{figure} \begin{figure} \centering \subfigure[srv\_count]{\includegraphics[width=0.35\textwidth]{2srvcount.png} } \subfigure[\emph{srv\_count'}]{\includegraphics[width=0.35\textwidth]{1srvcount.png} } \subfigure[sam\_srv\_rate]{\includegraphics[width=0.35\textwidth]{2samesrvrate.png} } \subfigure[\emph{sam\_srv\_rate'}]{\includegraphics[width=0.35\textwidth]{1samesrvrate.png} } \subfigure[dst\_host\_count]{\includegraphics[width=0.35\textwidth]{2dsthostcount.png} } \subfigure[\emph{dst\_host\_count'}]{\includegraphics[width=0.35\textwidth]{1dsthostcount.png} } \subfigure[dst\_host\_srv\_count]{\includegraphics[width=0.35\textwidth]{2dsthostsrvcount.png} } \subfigure[\emph{dst\_host\_srv\_count'}]{\includegraphics[width=0.35\textwidth]{1dsthostsrvcount.png} } \subfigure[dst\_host\_same\_srv\_rate]{\includegraphics[width=0.35\textwidth]{2dsthostsamesrvrate.png} } \subfigure[\emph{dst\_host\_same\_srv\_rate'}]{\includegraphics[width=0.35\textwidth]{1dsthostsamesrvrate.png} } \subfigure[dst\_host\_same\_port\_rate]{\includegraphics[width=0.35\textwidth]{2dsthostsamesrcportrate.png} } \subfigure[\emph{dst\_host\_same\_port\_rate'}]{\includegraphics[width=0.35\textwidth]{1dsthostsamesrcportrate.png} } \caption{Original (left) and learned (right) attributes. } \label{fig:two} \end{figure} We also found a way to compute the signature matrix $ S \in [0,1]^{axz} $ required by the energy function used in the inference approach of \cite{eszsl}. The learned attributes take discrete values, which can easily be integrated in the inference stage. \section{Conclusions} ZSL is a two-stage approach that addresses the classification of new classes without any examples during training. This, together with the need to detect new attacks in network traffic, motivated us to research the application of ZSL to NID. In this paper we proposed ALNID, a new algorithm for the attribute learning stage of ZSL. The algorithm builds a DT and combines entropy and frequency of the original attributes in a weighted function. Our evaluation on the KDD Cup 99 dataset showed a better distribution of the attribute values per class. The class separability is convenient for the inference stage based on k--NN. Future work will extend this proposal to the inference stage based on the learned attributes, and will validate ZSL on other NID big data sets. 
\bibliographystyle{splncs}
\section{Introduction} An $\ell$-facial coloring of a plane graph is a coloring of its vertices such that vertices at distance at most $\ell$ on a boundary walk of some face receive distinct colors. This type of coloring was introduced by Kr\'{a}\v{l}, Madaras, and \v{S}krekovski~\cite{KraMadSkr05,KraMadSkr07} as an extension of cyclic colorings in order to obtain some results on diagonal colorings. They showed that $\tfrac{18}{5}\ell$ colors suffice for an $\ell$-facial vertex coloring of any plane graph and any $\ell \ge 5$. Moreover, they proved that every plane graph admits a $2$-facial, $3$-facial, and $4$-facial coloring with at most $8$, $12$, and $15$ colors, respectively. The obtained bounds are not tight; in fact, the following conjecture was proposed. \begin{conjecture}[Kr\'{a}\v{l}, Madaras, and \v{S}krekovski] \label{conj:3l} Every plane graph admits an $\ell$-facial coloring with at most $3\,\ell + 1$ colors for every $\ell \ge 0$. \end{conjecture} Graphs that achieve the conjectured bound are plane embeddings of $K_4$, where the three edges incident to the same vertex are subdivided $\ell - 1$ times. Conjecture~\ref{conj:3l}, if true, has several interesting implications. In the case $\ell = 1$, it implies the Four Color Theorem. If $\ell = 2$, it implies Wegner's conjecture restricted to subcubic plane graphs~\cite{Weg77}, which states that the square of every subcubic plane graph admits a proper vertex coloring with at most $7$ colors. Currently the best known bound for an $\ell$-facial coloring is due to Havet et al.~\cite{HavKraSerSkr10}. \begin{theorem}[Havet et al.] \label{thm:vertbound} Every plane graph admits an $\ell$-facial coloring with at most $\big \lfloor \frac{7}{2} \,\ell \big \rfloor + 6$ colors\,. \end{theorem} There are also several results regarding small values of $\ell$. In 2006, Montassier and Raspaud~\cite{MonRas06} considered $2$-facial colorings of plane graphs with large girth and $K_4$-minor free graphs. In 2008, Havet et al.~\cite{HavSerSkr08} proved that every plane graph admits a $3$-facial coloring with at most $11$ colors, which is just one color more than Conjecture~\ref{conj:3l} claims. In this paper we consider the edge version of facial colorings. An \textit{$\ell$-facial edge coloring}, $\ell$-FEC, of a plane graph $G$ with $k$ colors is a mapping $\varphi \, : \, E(G) \rightarrow \set{1,2,\dots,k}$ such that for any pair of edges $e$, $f$ of $G$ at distance at most $\ell$ on a boundary of some face $\varphi(e) \neq \varphi(f)$. The minimum number of colors for which $G$ admits an $\ell$-facial edge coloring is the \textit{$\ell$-facial chromatic index}, $\chis{G}$. Notice that all the upper bounds established for $\ell$-facial vertex colorings hold also for the edge version: consider the medial graph $M(G)$ of a plane graph $G$, which is also a plane graph; an $\ell$-facial vertex coloring of $M(G)$ corresponds to an $\ell$-facial edge coloring of $G$. Thus, the problem of $\ell$-facial edge coloring is just a restricted case of the problem of $\ell$-facial coloring. However, there exist graphs whose $\ell$-facial chromatic index achieves the $3\,\ell + 1$ bound (see Fig.~\ref{fig:3lconj}). \begin{figure}[htb] $$ \includegraphics{fig_3lconj} $$ \caption{Graphs with the $\ell$-facial chromatic index equal to $3\,\ell + 1$.} \label{fig:3lconj} \end{figure} Therefore, a weaker version of Conjecture~\ref{conj:3l} may be proposed. 
\begin{conjecture} \label{conj:3le} Every plane graph admits an $\ell$-facial edge coloring with at most $3\,\ell + 1$ colors for every $\ell \ge 1$. \end{conjecture} As mentioned above, the case with $\ell =1$ is already confirmed. Our aim in this paper is to confirm that the case with $\ell = 2$ holds. \begin{theorem} \label{thm:main} Every plane graph admits a $2$-facial edge coloring with at most $7$ colors. \end{theorem} \subsection{Preliminaries} In the paper the following definitions and notation are used. A vertex of degree $k$, at most $k$, and at least $k$ is called a \textit{$k$-vertex}, a \textit{$k^-$-vertex}, and a \textit{$k^+$-vertex}, respectively. Similarly, a \textit{$k$-face}, a \textit{$k^-$-face}, and a \textit{$k^+$-face} is a face of length $k$, at most $k$, and at least $k$, respectively. By $(v_1,v_2,\dots,v_k)$ we denote a $k$-face on which the vertices $v_1,v_2,\dots,v_k$ appear on the boundary in the given order. We say that two faces are adjacent if they share an edge. Let $V$ be some subset of vertices of a graph $G$. As usual, $G[V]$ is a subgraph of $G$ induced by the vertices of $V$. For a given cycle $C$ in a plane embedding of a graph $G$ we define $\int(C)$ to be the graph induced by the vertices lying strictly in the interior of $C$. Similarly, $\ext(C)$ is the graph induced by the vertices lying strictly in the exterior of $C$. A \textit{separating} cycle is a cycle $C$ such that both $\int(C)$ and $\ext(C)$ contain at least one vertex. Two edges are \textit{facially adjacent} or \textit{facial neighbors} if they are consecutive on the boundary of some face. An \textit{$\ell$-facial neighbor} of an edge is any edge at distance at most $\ell$ on the boundary of some face; hence facially adjacent edges are $1$-facial neighbors. In a partial coloring, we say that a color $c$ is \textit{available} for an edge $e$ if there is no $2$-facial neighbor of $e$ colored by $c$. Let $H$ be a subset of edges of a graph $G$. A graph $\ngph{H}$ is a graph with the vertex set $H$, and two vertices $x$ and $y$ are adjacent in $\ngph{H}$ if they are $2$-facial neighbors in $G$; we call it the \textit{$2$-medial graph of $H$ in $G$}. Obviously, a proper vertex coloring of $\ngph{H}$ corresponds to a $2$-FEC of the edges in $H$. We say that $L$ is a \textit{list-assignment} for the graph $G$ if it assigns a list $L(v)$ of available colors to each vertex $v$ of $G$. If $G$ has a proper vertex coloring $c_l$ such that $c_l(v) \in L(v)$ for all vertices in $V(G)$, then $G$ is \textit{$L$-colorable} or $c_l$ is an \textit{$L$-coloring} of $G$. The graph $G$ is \textit{$k$-choosable} if it is $L$-colorable for every assignment $L$, where $|L(v)| \ge k$, for every $v \in V(G)$. In the sequel, we make use of the following result. \begin{theorem}[\cite{Bor77, ErdRubTay80}] \label{thm:lstbrooks} Let $G$ be a connected graph. Suppose that $L$ is a list-assignment where $|L(v)| \ge d(v)$ for each $v \in V(G)$. If \begin{enumerate} \item $|L(v)| > d(v)$ for some vertex $v$, or \item $G$ contains a block which is neither a complete graph nor an induced odd cycle (i.e.,\ $G$ is not a Gallai tree), \end{enumerate} then $G$ admits an $L$-coloring. \end{theorem} Notice that a vertex $v$ of a graph $G$ with $|L(v)| > d(v)$ (we call such a vertex \textit{free}) retains at least one available color after all its neighbors are colored; therefore we may ignore it, i.e., consider the coloring of $G-v$. 
After a recursive removal of all free vertices from $G$, we obtain a graph, which we call the \textit{core of $G$}, and denote by $\core{G}$. Observe that $G$ is $L$-colorable if and only if $\core{G}$ is $L$-colorable. The \textit{null graph} is a graph on zero vertices. Every graph $G$ whose core is the null graph is $L$-colorable. \section{Proof of Theorem~\ref{thm:main}} In the proof of Theorem~\ref{thm:main}, we assume that there exists a minimal counterexample $G$ to the claim and show that it cannot exist by studying its structure. First, we show that certain configurations cannot occur in the minimal counterexample. Then we assign charges to the vertices and faces of $G$. Using Euler's formula we compute that the total charge is negative. However, by redistributing the charge among vertices and faces, we show that it is nonnegative, obtaining a contradiction to the existence of $G$. This approach is the well-known discharging method, which remains the only technique for proving the Four Color Theorem. \subsection{Structure of the minimal counterexample} In this part, we list several configurations that do not appear in the minimal counterexample $G$. All the proofs proceed in a similar way: first we modify $G$ to obtain a smaller graph which, by the minimality of $G$, admits a $2$-FEC $\varphi$ with at most $7$ colors, and then show that $\varphi$ can be extended to $G$. \begin{lemma} \label{lem:2conn} $G$ is 2-connected. \end{lemma} \begin{proof} Suppose, for a contradiction, that $x$ is a cutvertex of $G$. Let $G_1$ be a component of $G - x$ and $H_2 = G \setminus G_1$. Let $H_1 = G[V(G_1) \cup \set{x}]$, and let $\varphi_1$ and $\varphi_2$ be $2$-FECs of $H_1$ and $H_2$ with at most $7$ colors, respectively. There are at most $4$ edges $e_i$, $i \le 4$, in $H_1$ that are $2$-facially adjacent to some edges in $H_2$, and similarly at most $4$ edges $f_j$, $j \le 4$, in $H_2$ that are $2$-facially adjacent to some edges in $H_1$. In case when the sum $i + j = k$ is at most $7$, it is easy to see that there exists a permutation of colors in $\varphi_2$ such that all $k$ edges receive different colors, and so the colorings $\varphi_1$ and $\varphi_2$ induce a $2$-FEC of $G$ with at most $7$ colors. Otherwise $k = 8$ and there exist $i$ and $j$ such that the two edges $e_i$ and $f_j$ are not $2$-facially adjacent. Thus, we can permute the colors in $\varphi_2$ such that $e_i$ and $f_j$ are colored with the same color and color the other $6$ edges with different colors. Again, $\varphi_1$ and $\varphi_2$ induce a $2$-FEC of $G$ with at most $7$ colors. \end{proof} \begin{lemma} \label{lem:basic} For $G$ it holds that: \begin{enumerate} \item[$(i)$] $\delta(G) \ge 2$; \item[$(ii)$] every $2$-vertex has two $3^+$-neighbors; \item[$(iii)$] every face in $G$ has size at least $4$; \item[$(iv)$] there is no separating cycle of length at most $5$. \end{enumerate} \end{lemma} \begin{proof} We consider each case separately. \begin{enumerate} \item[$(i)$] This is a simple corollary of Lemma~\ref{lem:2conn}. \item[$(ii)$] Suppose that $uv$ is an edge of $G$ with $d(u) = d(v) = 2$. By the minimality, there is a $2$-FEC with at most $7$ colors of $G-v$. Let $w$ be the second neighbor of $v$ in $G$. The edges $vw$ and $uv$ have at most $5$ colored $2$-facial neighbors, therefore we can color both, a contradiction. \item[$(iii)$] Suppose that $\alpha$ is a face of $G$ of size $i \le 3$. If $i \in \set{1,2}$, $\alpha$ contains a loop $uu$ or two parallel edges $uv^1$, $uv^2$. 
By the minimality, there is a $2$-FEC with at most $7$ colors of $G-xy$, $xy\in\{uu,uv^2\}$. The edge $uu$ has at most $4$ and $uv^2$ at most $5$ $2$-facial neighbors, so the coloring can be extended to $G$ again. For $i=3$, let $\alpha = (u,v,w)$. Let $G'$ be a graph obtained from $G$ by removing the edges on the boundary of $\alpha$ and identifying its vertices. Let $\varphi$ be a $2$-FEC of $G'$ with at most $7$ colors. In order to extend $\varphi$ to $G$, it remains to color the edges $uv$, $vw$, and $uw$. Each of them has at most $4$ colored $2$-facial neighbors and thus at least three available colors, so we can color them. \item[$(iv)$] Suppose that $C$ is a separating cycle of length at most $5$. By the minimality, there exist a $2$-FEC $\varphi_1$ of $\int(C)$ together with the edges of $C$, and a $2$-FEC $\varphi_2$ of $\ext(C)$ together with the edges of $C$. Since the length of $C$ is at most $5$, all the edges of $C$ are colored differently. Notice that, by a permutation of colors in $\varphi_2$ such that the colors of the edges of $C$ coincide in $\varphi_1$ and $\varphi_2$, we obtain a $2$-FEC of $G$. \end{enumerate} \end{proof} \begin{lemma} \label{lem:no6} There are no $6$-faces in $G$. \end{lemma} \begin{proof} Suppose, for a contradiction, that $\alpha = (v_1,v_2,v_3,v_4,v_5,v_6)$ is a $6$-face of $G$. By Lemma~\ref{lem:2conn}, we have that all the six vertices are distinct. Let $G'$ be the graph obtained from $G$ by identifying the two edges $v_1v_6$ and $v_3v_4$ such that the vertex $v_1$ goes to $v_3$, and $v_4$ goes to $v_6$ (see Fig.~\ref{fig:lem6face}). \begin{figure}[htb] $$ \includegraphics{fig_lem6face} $$ \caption{The $6$-face $\alpha$ in $G$ (left) and in $G'$ (right).} \label{fig:lem6face} \end{figure} By the minimality, there exists a $2$-FEC $\varphi$ of $G'$ with at most $7$ colors. Let $\psi$ be the partial $2$-FEC of $G$ induced by $\varphi$, where the edges $v_1v_2$, $v_2v_3$, $v_4v_5$, and $v_5v_6$ remain noncolored, since their $2$-facial neighborhoods in $G'$ differ from those in $G$. Notice that $v_1v_6$ and $v_3v_4$ are not at facial distance $2$, for otherwise we have either a separating cycle of length at most $5$ or adjacent $2$-vertices in $G$, which are reducible by Lemma~\ref{lem:basic}. In order to complete $\psi$, we color these edges as follows. Since the edges $v_1v_6$ and $v_3v_4$ receive the same color, each of the four noncolored edges has at most $5$ forbidden colors. Hence, there are at least $2$ available colors for each of them. Let $H$ be the set of the edges $v_1v_2$, $v_2v_3$, $v_4v_5$, and $v_5v_6$. The graph $\ngph{H}$ is isomorphic to a $4$-cycle, which is $2$-choosable by Theorem~\ref{thm:lstbrooks}. Therefore, we can color the four edges to obtain a $2$-FEC of $G$, a contradiction. \end{proof} In the following lemmas we consider the appearance of $2$-vertices in $G$. \begin{lemma} \label{lem:72v} A $7^-$-face in $G$ is not incident to a $2$-vertex. \end{lemma} \begin{proof} Let $\alpha$ be a $7^-$-face of $G$. By Lemmas~\ref{lem:basic} and~\ref{lem:no6}, the length of $\alpha$ is either $4$, $5$, or $7$. Let $v$ be a $2$-vertex incident to $\alpha$ and $u$ and $w$ the two neighbors of $v$. Let $\beta$ be the second face incident to $v$. We consider the cases regarding the length of $\alpha$. \medskip \textit{Case 1: $\alpha$ is a $4$-face.\quad} Let $G' = G - v$ and $\varphi$ be a $2$-FEC of $G'$ with at most $7$ colors. For the edges $uv$ and $vw$ there are at least $2$ available colors. 
Thus, $\varphi$ can easily be extended to $uv$ and $vw$, a contradiction. \medskip \textit{Case 2: $\alpha = (u, v, w, x_1, x_2)$ is a $5$-face.\quad} By Case 1, we have that $\beta$ is either a $5$-face or a $7^+$-face. In the former case, consider the graph $G' = G - v$ and notice that $\alpha$ and $\beta$ form a $6$-face $\gamma$ in $G'$. By the proof of Lemma~\ref{lem:no6}, we have that there is a $2$-FEC of $G'$ such that two edges of $\gamma$ are assigned the same color. Hence, the edges $uv$ and $vw$ have at most $5$ distinct colors in the $2$-facial neighborhood, which means that they have at least two available colors and we can color them. Therefore, we may assume that $\beta$ is a $7^+$-face. Let $y_2$, $y_1$, $u$, $v$, $w$, $z_1$, and $z_2$ be the vertices appearing on the boundary of $\beta$ in the given order (see Fig.~\ref{fig:lem72proof}). \begin{figure}[ht] $$ \includegraphics{fig_lem72proof} $$ \caption{The faces $\alpha$ and $\beta$ of Case 2.} \label{fig:lem72proof} \end{figure} Let $G'$ be the graph obtained from $G$ by removing the vertex $v$ and identifying the vertices $y_1$ and $z_1$. Let $\varphi$ be a $2$-FEC of $G'$. Notice that the edges $y_1y_2$ and $z_1z_2$ are assigned distinct colors, since they are facially adjacent in $G'$. In order to extend $\varphi$ to $G$, we need to recolor the edges $uy_1$ and $wz_1$, which have at least one available color each, and color the edges $uv$ and $vw$. After $uy_1$ and $wz_1$ are recolored, there is at least one available color for $uv$ and $vw$. Moreover, notice that the union of the sets of available colors of $uv$ and $vw$ has at least two elements, since $y_1y_2$ and $z_1z_2$ are assigned distinct colors, so we can color them. Hence, $\varphi$ is extended to $G$, a contradiction. \medskip \textit{Case 3: $\alpha=(u, v, w, x_1, x_2, x_3, x_4)$ is a $7$-face.\quad} Let $G'$ be the graph obtained from $G$ by identifying the edges $ux_4$ and $x_1x_2$ such that the vertex $u$ goes to $x_1$ and $x_4$ goes to $x_2$. Again, notice that $ux_4$ and $x_1x_2$ are not at facial distance $2$, otherwise a separating cycle of length at most $5$ or adjacent $2$-vertices appear in $G$, which contradicts Lemma~\ref{lem:basic}. Let $\varphi$ be a $2$-FEC of $G'$. Uncolor the edges $uv$, $vw$, $wx_1$, $x_2x_3$ and $x_3x_4$. Since the edges $ux_4$ and $x_1x_2$ are assigned the same color in $G$, the edges $uv$ and $vw$ have at least $3$ available colors and each of the remaining three edges has at least $2$. Therefore, the core $\zeta(\ngph{H})$, where $H$ is the set of noncolored edges in $G$, is the null graph. It follows that $\varphi$ can be extended to $G$, which establishes the lemma. \end{proof} \begin{lemma} \label{lem:2verts} The facial distance between any two $2$-vertices is at least $4$ in $G$. \end{lemma} \begin{proof} By Lemma~\ref{lem:basic}, we have that $2$-vertices are not adjacent in $G$. In order to prove this lemma, we consider the cases when the facial distance between two $2$-vertices $u$ and $v$ incident to some face $\alpha$ is $2$ and $3$, respectively. Note that $\alpha$ is of length at least $8$ by Lemma~\ref{lem:72v}. In the former case, let $w$ be a common neighbor of $u$ and $v$ such that they are consecutive on the boundary of $\alpha$ and let $x$ and $y$ be the second neighbors of $u$ and $v$, respectively. Let $G' = G - u - v$ and let $\varphi$ be a $2$-FEC of $G'$ with at most $7$ colors. The coloring $\varphi$ induces a $2$-FEC of $G$ where the edges $ux$, $uw$, $vw$, and $vy$ remain noncolored. 
Observe that $ux$ and $vy$ have at least $2$ available colors, while $uw$ and $vw$ have at least $3$ each. Hence, the graph $\ngph{\set{ux,uw,vw,vy}}$, which is isomorphic to two triangles sharing an edge, is colorable by Theorem~\ref{thm:lstbrooks}. In the latter case, we assume that the facial distance between $u$ and $v$ is $3$. Let $x,u,w,z,v,y$ be the vertices appearing on the boundary of some face $\alpha$ of $G$. Let $G' = G - u - v$ and $\varphi$ be a $2$-FEC of $G'$ with at most $7$ colors. As in the previous case, there are four noncolored edges, $ux$, $uw$, $vy$, and $vz$, in $G$ whose $2$-medial graph is isomorphic to a $4$-path $p$. The two endvertices of $p$ have at least one available color, while the two middle vertices have at least two. In case when the core graph of $p$ is not the null graph (otherwise we can extend $\varphi$ to $G$), we have that all $2$-facial neighbors of every noncolored edge are assigned distinct colors. Thus, we may uncolor the edge $wz$ and use its color for the edges $ux$ and $vz$, color $uw$ and $vy$ with an available color, and finally color $wz$, which has at least one available color. So, $\varphi$ can be extended to $G$, a contradiction. \end{proof} It follows that there are at most two $2$-vertices incident to an $8$-face. In the following lemma we show that an $8$-face is incident to at most one $2$-vertex. \begin{lemma} \label{lem:82v} An $8$-face in $G$ is incident to at most one $2$-vertex. \end{lemma} \begin{proof} Let $\alpha = (v_1,v_2,\dots,v_8)$ be an $8$-face of $G$ incident to two $2$-vertices. By Lemma~\ref{lem:2verts}, they are at facial distance $4$, so we may assume that $d(v_1) = d(v_5) = 2$. Let $G'$ be a graph obtained by identifying the edges $v_2v_3$ and $v_6v_7$, where the vertex $v_2$ goes to $v_7$ and $v_3$ goes to $v_6$. The edges $v_2v_3$ and $v_6v_7$ are not $2$-facial neighbors in $G$, otherwise a separating cycle of length at most $5$ or adjacent $2$-vertices appear in $G$, contradicting Lemma~\ref{lem:basic}. Let $\varphi$ be a $2$-FEC of $G'$ with at most $7$ colors which induces a partial $2$-FEC of $G$ where the edges $v_1v_2$, $v_3v_4$, $v_4v_5$, $v_5v_6$, $v_7v_8$, and $v_1v_8$ remain noncolored. Notice that there are at least $2$ available colors for the edges $v_3v_4$ and $v_7v_8$ and at least $3$ available colors for the remaining edges, since the edges $v_2v_3$ and $v_6v_7$ are assigned the same color. Consider the graph $\ngph{H}$, where $H$ is the set of noncolored edges. It is easy to deduce that $\zeta(\ngph{H})$ is the null graph; hence $\ngph{H}$ is colorable and $\varphi$ can be extended to $G$, a contradiction. \end{proof} \begin{lemma} \label{lem:44} There are no adjacent $4$-faces in $G$. \end{lemma} \begin{proof} Let $\alpha$ and $\beta$ be two adjacent $4$-faces sharing an edge $e$. By the minimality, there is a $2$-FEC of $G-e$ with at most $7$ colors. The edge $e$ has at most $6$ colored facial neighbors in the $2$-neighborhood, so we can color it with an available color, a contradiction. \end{proof} \begin{lemma} \label{lem:45} Let a $4$-face and a $5$-face be adjacent in $G$ by an edge $uv$. Then, both vertices, $u$ and $v$, are of degree at least $4$. 
\end{lemma} \begin{figure}[ht] $$ \includegraphics{fig_45face} $$ \caption{Adjacent $4$- and $5$-faces in $G$ do not have common $3^-$-vertices.} \label{fig:45face} \end{figure} \begin{proof} Suppose, to the contrary, that $\alpha=(u,v,v_1,u_1)$ is a $4$-face and $\beta=(u,v,v_2,w,u_2)$ is a $5$-face of $G$ where $d(u)=3$ (see Fig.~\ref{fig:45face}). The edges $vv_1$ and $u_2w$ are not at facial distance $2$, for otherwise there are adjacent $2$-vertices or a separating cycle of length at most $5$ in $G$. Therefore, let $G'$ be the graph obtained from $G$ by removing the edge $uv$ and identifying the edges $vv_1$ and $u_2w$, where $v$ goes to $w$ and $v_1$ goes to $u_2$. Let $\varphi$ be a $2$-FEC of $G'$ with at most $7$ colors. To obtain a $2$-FEC of $G$ from $\varphi$, we uncolor and assign new colors to the edges $uu_1$, $u_1v_1$, $vv_2$, $v_2w$, $uu_2$, and color $uv$. Observe that the edges $u_1v_1$, $vv_2$, and $v_2w$ have at most $5$ colored $2$-facial neighbors in $G$, and the edges $uu_1$ and $uu_2$ have at most $4$ such neighbors. The only colored $2$-facial neighbors of $uv$ are the edges $vv_1$ and $u_2w$ colored by the same color, hence $uv$ has $6$ available colors. Again, notice that the core of the $2$-medial graph of the noncolored edges is the null graph, hence $\varphi$ can be extended to $G$, a contradiction. \end{proof} \begin{lemma} \label{lem:55} Let two $5$-faces of $G$ be adjacent by an edge $uv$. Then, at least one of the vertices $u$ and $v$ is of degree at least $4$. 
First, notice that \textit{the lists of any two vertices that are not adjacent in $\ngph{H}$ are disjoint}. Otherwise, we may assume that two nonadjacent vertices $x$ and $y$ may receive the same color, say $a$, and we color them by $a$. Observe that regardless of choice of $x$ and $y$, the sizes of lists of the remaining vertices may decrease by at most $1$, so the vertex $uv$ retains at least $5$ available colors and has four noncolored neighbors, which means that it does not appear in the core $\zeta(\ngph{H})$. Therefore, $\zeta(\ngph{H})$ is either the null graph or a $4$-cycle where every vertex has at least two available colors. Thus $\ngph{H}$ is colorable. Hence, we may assume that the lists of any two vertices that are not adjacent in $\ngph{H}$ are disjoint. Without loss of generality, let $\set{a,b,c} \subseteq L(uu_1)$, $\set{d,e,f} \subseteq L(vv_2)$, and $\set{d,e} \subseteq L(v_2w_2)$. Consider the lists $L(uu_1)$ and $L(uu_2)$. Both edges, $uu_1$ and $uu_2$, are $2$-facially adjacent to two common edges and to the edges of color $g$. It means that $|L(uu_1) \cap L(uu_2)| \ge 2$. Therefore, $|L(vv_2) \cup L(uu_2)| \ge 5$ and so $(L(vv_2) \cup L(uu_2)) \cap L(v_1w_1) \neq \emptyset$, a contradiction. \end{proof} \begin{lemma} \label{lem:43v} A $4$-face in $G$ is incident to at least one $4^+$-vertex. \end{lemma} \begin{proof} Let $\alpha=(v_1,v_2,v_3,v_4)$ be a $4$-face such that $d(v_i)=3$, for $i\in \set{1,2,3,4}$ and let $u_i$ be the third neighbor of $v_i$. By Lemmas~\ref{lem:44} and~\ref{lem:45}, it follows that $\alpha$ is adjacent only to $7^+$-faces. Let $G' = G - \set{v_1,v_2,v_3,v_4}$ and $\varphi$ be a $2$-FEC coloring of $G'$ with at most $7$ colors. In order to extend $\varphi$ to $G$, we need to color the $8$ edges incident to the vertices $v_i$. Let $H$ be the set of these edges and consider the graph $\ngph{H}$ (see Fig.~\ref{fig:lem43proof}). The $4$-vertices have at least $3$ available colors, while the $5$-vertices have at least $5$. \begin{figure}[ht] $$ \includegraphics{fig_lem43proof} $$ \caption{The $2$-medial subgraph induced by the noncolored edges. In the brackets the minimal numbers of available colors are given.} \label{fig:lem43proof} \end{figure} Consider the properties of lists of available colors of the $4$-vertices. Suppose first that there is some color, say $a$, available for the vertices $u_1v_1$ and $u_3v_3$. Then, we color both vertices by $a$. Notice that the sizes of lists of available colors of the remaining vertices decrease by at most $1$. Thus, the remaining vertices form a graph that is colorable by Theorem~\ref{thm:lstbrooks}. Hence, we may assume that \textit{the lists $L(u_1v_1)$ and $L(u_3v_3)$ are disjoint}. By the symmetry, we also have that \textit{$L(u_2v_2)$ and $L(u_4v_4)$ are disjoint}. Hence, there is a color $b$ in $L(u_2v_2)$ which is not available for $u_1v_1$ or $u_3v_3$, say $u_1v_1$, and there is a color $c$ available for $u_4v_4$, which is not available for $u_3v_3$. Therefore, after coloring $u_2v_2$ by $b$ and $u_4v_4$ by $c$, the lists of available colors of the remaining $6$ vertices decrease by at most one, and the vertices comprise a graph isomorphic to the graph from the previous paragraph, which is colorable. Hence, the coloring can be extended to $G$, a contradiction. \end{proof} \subsection{Discharging} In this part we show that the graph $G$ with the structural properties described in the previous part cannot exist. 
By $n_k(\alpha)$ we denote the number of $k$-vertices incident to the face $\alpha$, and by $l(\alpha)$ the length of a face $\alpha$. Now, we assign charges to the vertices and faces of $G$ as follows: \begin{itemize} \item{} $\ch{v} = 5 d(v) - 14$, for every vertex $v$ of $G$; \item{} $\ch{\alpha} = 2 l(\alpha) - 14$, for every face $\alpha$ of $G$. \end{itemize} Since $\sum_{v \in V(G)} d(v) = 2\,|E(G)|$ and $\sum_{\alpha \in F(G)} l(\alpha) = 2\,|E(G)|$, Euler's formula $|V(G)| - |E(G)| + |F(G)| = 2$ yields that the total sum of all charges is $$ \sum_{v \in V(G)} \ch{v} + \sum_{\alpha \in F(G)} \ch{\alpha} = 14\,|E(G)| - 14\,|V(G)| - 14\,|F(G)| = -28\,. $$ In order to show that a minimal counterexample $G$ does not exist, we redistribute the charges among the vertices and faces using the following rules: \begin{itemize} \item[\textbf{R1}]\quad Every $4^+$-vertex $v$ sends $\frac{\ch{v}}{d(v)}$ to every incident $5^-$-face. \item[\textbf{R2}]\quad Every $4^+$-vertex $v$ sends an additional $\frac{\ch{v}}{2 \, d(v)}$ to an incident $5^-$-face $\alpha$ along each edge incident to $v$, $\alpha$ and a $7^+$-face. \item[\textbf{R3}] \begin{itemize} \item[(i)]\quad Every $3$-vertex incident to two $7^+$-faces sends $1$ to an incident $5^-$-face. \item[(ii)]\quad Every $3$-vertex incident to one $7^+$-face sends $\frac{1}{2}$ to every incident $5$-face. \item[(iii)]\quad Every $3$-vertex incident only to $5$-faces sends $\frac{1}{3}$ to every incident $5$-face. \end{itemize} \item[\textbf{R4}]\quad Every $8^+$-face $\alpha$ sends $\frac{\ch{\alpha}}{n_2(\alpha)}\ge2$ to every incident $2$-vertex. \end{itemize} Now, we are ready to prove Theorem~\ref{thm:main}. \begin{proof}[Proof of Theorem~\ref{thm:main}.] We prove that after applying the discharging rules the final charge $\chfin{x}$ of every $x \in V (G) \cup F(G)$ is nonnegative. First, we compute the final charges of the faces. By Lemma~\ref{lem:basic}, there are only faces of size at least $4$ in $G$. Moreover, there are no $6$-faces by Lemma~\ref{lem:no6}. Notice also that only $8^+$-faces may send charge by R4; however, they send only the positive portions and thus retain nonnegative charges. Hence, only $4$- and $5$-faces have negative initial charges. We consider them separately. \begin{itemize} \item{} \textit{Let $\alpha=(v_1,v_2,v_3,v_4)$ be a $4$-face.} By Lemma~\ref{lem:72v}, $\alpha$ is incident only to $3^+$-vertices. Moreover, by Lemma~\ref{lem:43v}, at least one of its neighbors is of degree at least $4$. If $n_3(\alpha) = 0$, $\alpha$ receives at least $\frac{3}{2}$ from each neighbor by the rule R1, so its final charge is nonnegative. In case when $n_3(\alpha) = 1$, let $d(v_1)=3$. By Lemmas~\ref{lem:44} and~\ref{lem:45}, the other two faces incident to the vertex $v_1$ are $7^+$-faces. Hence, the vertices $v_2$ and $v_4$ send at least $ \frac{3}{2}+\frac{3}{4}$ by R1 and R2, $v_3$ sends at least $\frac{3}{2}$ by R1, and $v_1$ sends $1$ to $\alpha$ by R3. Thus, $\chfin{\alpha} \ge -6 + 2(\tfrac{3}{2} + \tfrac{3}{4}) + \tfrac{3}{2} + 1 = 1$. Suppose now that $n_3(\alpha) = 2$. The $3$-vertices incident to $\alpha$ may share an edge of $\alpha$ or be at facial distance $2$ on the boundary of $\alpha$. In both cases the $3$-vertices are incident to $7^+$-faces by Lemma~\ref{lem:45}. First, suppose $d(v_1) = d(v_2) = 3$. Hence, the vertices $v_3$ and $v_4$ send at least $\frac{3}{2}+\frac{3}{4}$ by R1 and R2, $v_1$ and $v_2$ send $1$ by R3, so the final charge is at least $\frac{1}{2}$. Second, suppose that $v_1$, $v_3$ are $3$-vertices. 
Then $v_2$ and $v_4$ send at least $\frac{3}{2}+ 2\cdot\frac{3}{4}$ by R1 and R2, and $v_1$ and $v_3$ send $1$ by R3 to $\alpha$, so $\chfin{\alpha} \ge -6 + 2(\tfrac{3}{2} + 2\cdot\tfrac{3}{4}) + 2 \cdot 1 = 2$. Finally, suppose that $n_3(\alpha) = 3$. Then, $\alpha$ is adjacent only to $7^+$-faces. Let $v_1$, $v_2$, and $v_3$ be the $3$-vertices. Each of them sends $1$ by R3 to $\alpha$ and $v_4$ sends at least $\frac{3}{2}+2\cdot\frac{3}{4}$ by R1 and R2. Hence, the final charge of $\alpha$ is positive. \item{} \textit{Let $\alpha=(v_1,v_2,v_3,v_4,v_5)$ be a $5$-face.} By Lemma~\ref{lem:72v}, $\alpha$ is incident only to $3^+$-vertices. If $n_3(\alpha) \le 2$, $\alpha$ receives at least $3\cdot\frac{3}{2}$ from incident $4^+$-vertices, hence $\chfin{\alpha} > 0$. Suppose now that $n_3(\alpha) = 3$. In case when all three $3$-vertices are consecutive on $\alpha$, say $d(v_1) = d(v_2) = d(v_3) = 3$, $v_2$ is incident to two $7^+$-faces by Lemma~\ref{lem:55}. Then, $v_1$ and $v_3$ send at least $\frac{1}{2}$ by R3, $v_2$ sends $1$ by R3 and $v_4$, $v_5$ send at least $\frac{3}{2}$ by R1 and R2 to $\alpha$. Hence, $\chfin{\alpha} \ge -4 + 2 \cdot\frac{1}{2} + 1 + 2 \cdot \frac{3}{2} = 1$. In the second case, one of the $3$-vertices has two $4^+$-neighbors on the boundary of $\alpha$, so we may assume that $d(v_1) = d(v_2) = d(v_4) = 3$. Then, $v_1$ and $v_2$ send at least $\frac{1}{2}$ by R3, $v_3$ and $v_5$ send at least $\frac{3}{2}$ by R1 and R2, and $v_4$ sends at least $\frac{1}{3}$ by R3 to $\alpha$. So, the final charge of $\alpha$ is at least $\frac{1}{3}$. Next, let $n_3(\alpha) = 4$ and, say, $d(v_5) \ge 4$. By Lemmas~\ref{lem:45} and~\ref{lem:55}, two faces incident to $v_2$ and $v_3$ are of size at least $7$. Then, $v_1$ and $v_4$ send at least $\frac{1}{2}$ by R3, $v_2$ and $v_3$ send $1$ by R3, and $v_5$ sends at least $\frac{3}{2}$ to $\alpha$ by R1 and R2. The final charge of $\alpha$ is at least $\frac{1}{2}$. In case when $n_3(\alpha) = 5$, $\alpha$ is adjacent only to $7^+$-faces, by Lemmas~\ref{lem:45} and~\ref{lem:55}. Each vertex incident to $\alpha$ sends $1$ by R3, therefore the final charge of $\alpha$ is $1$. \end{itemize} Hence, all the faces have nonnegative final charge. It remains to consider the vertices. After applying the rules, the charge of the $3^+$-vertices remains nonnegative, since they redistribute only the positive portions of their charges. So, we consider only the $2$-vertices. Let $v$ be a $2$-vertex incident to faces $\alpha$ and $\beta$. By Lemmas~\ref{lem:72v} and~\ref{lem:2verts}, $\alpha$ and $\beta$ are $8^+$-faces. By R4, each of them sends at least $2$ units of charge to $v$, so $v$ has nonnegative final charge. It follows that all the vertices and faces of $G$ have nonnegative final charge, a contradiction. This establishes Theorem~\ref{thm:main}. \end{proof} \paragraph{Acknowledgement.} The authors would like to thank S. Jendro\v{l} who introduced the problem to P. \v{S}ugerek. \bibliographystyle{acm}
\section{Introduction}\label{sec:introduction} The standard model (SM)~\cite{Glashow:1961tr,Weinberg:1967tq,sm_salam} of particle physics accurately describes many experimental results that probe elementary particles and their interactions up to an energy scale of a few hundred \GeVns~\cite{EWKlimits}. In the SM, the building blocks of matter, the fermions, comprise the quarks and leptons. The interactions are mediated through the exchange of force carriers: the photon for electromagnetic interactions, the $\PW$ and $\cPZ$ bosons for weak interactions, and the gluons for strong interactions. All the elementary particles acquire mass through their interaction with the Higgs field~\cite{Englert:1964et,Higgs:1964ia,Higgs:1964pj,Guralnik:1964eu,Higgs:1966ev,Kibble:1967sv,Nambu:1961tp,NambuNobel,GellMann:1960np}. This mechanism, called the ``Higgs'' or ``BEH'' mechanism~\cite{Englert:1964et,Higgs:1964ia,Higgs:1964pj,Guralnik:1964eu,Higgs:1966ev,Kibble:1967sv}, was the first, and remains the simplest, consistent solution for giving mass to the \PW\ and \cPZ\ bosons while preserving the symmetry of the Lagrangian. It is realized by introducing a new complex scalar field into the model. By construction, this field allows the $\PW$ and $\cPZ$ bosons to acquire mass whilst the photon remains massless, and it adds to the model one new scalar particle, the SM Higgs boson (\PH). The Higgs scalar field and its conjugate can also give mass to the fermions, through Yukawa interactions \cite{Nambu:1961tp,NambuNobel,GellMann:1960np}. The SM does not directly predict the values of the masses of the elementary particles, and likewise it offers no prediction for the Higgs boson mass. The particle masses are considered parameters to be determined experimentally. Nevertheless, a number of very general arguments~\cite{Cornwall:1973tb,Cornwall:1974km,LlewellynSmith:1973ey,Lee:1977eg} have been used to narrow the range of possible values for the Higgs boson mass to below approximately 1\TeV. The wealth of electroweak precision data from the LEP and SLC colliders, the Tevatron, and other experiments predicted the Higgs boson mass to be approximately 90\GeV, with an upper limit of $152\GeV$ at the 95\% confidence level (CL) \cite{EWKlimits}. Direct searches at LEP excluded values lower than $114.4\GeV$ at 95\% CL~\cite{LEPlimits}, and early Tevatron measurements excluded the mass range 162--166\GeV at 95\% CL~\cite{TEVHIGGS_2010}. The discovery or exclusion of the SM Higgs boson is one of the primary scientific goals of the LHC. Previous direct searches at the LHC were based on data from proton-proton collisions corresponding to an integrated luminosity of 5.1\fbinv collected at a centre-of-mass energy of 7\TeV. The CMS experiment excluded at 95\% CL masses from 127 to 600\GeV~\cite{Chatrchyan:2012tx}. The ATLAS experiment excluded at 95\% CL the ranges 111.4--116.4, 119.4--122.1, and 129.2--541\unit{GeV}~\cite{ATLAScombJul2012_7TeV}. Within the remaining allowed mass region, an excess of events between 2 and 3 standard deviations ($\sigma$) near 125\GeV was reported by both experiments. In 2012, the proton-proton centre-of-mass energy was increased to 8\TeV, and by the end of June an additional integrated luminosity of more than 5.3\fbinv had been recorded by each of the two experiments, thereby significantly enhancing the sensitivity of the search for the Higgs boson. The result was the observation by the ATLAS and CMS Collaborations of a new heavy boson with a mass of approximately $125\GeV$.
The two experiments simultaneously published the observation in concise papers~\cite{ATLASobservation125,CMSobservation125}. The CMS publication~\cite{CMSobservation125} focused on the observation in the five main decay channels in the low-mass range from $110$ to $145\GeV$: $\PH \to \Pgg\Pgg$, $\PH \to \cPZ \cPZ \to 4\ell$, $\PH \to \PW\PW \to \ell\cPgn\ell\cPgn$, $\PH \to \Pgt\Pgt$, and $\PH \to \cPqb\cPqb$, where $\ell$ stands for electron or muon, and for simplicity our notation does not distinguish between particles and antiparticles. In the summer of 2012, the analysis of the full data set by the CDF and D0 Collaborations resulted in an excess of events of about 3$\sigma$ in the mass range $120 \le \ensuremath{m_{\PH}} \le 135\GeV$, while searching for a SM Higgs boson decaying into \cPqb\ quarks~\cite{PhysRevLett.109.071804}. The channels with the highest sensitivity for discovering the SM Higgs boson with a mass near $125\GeV$ are $\PH \to \Pgg\Pgg$ and $\PH \to \cPZ \cPZ \to 4\ell$. The other three final states have poorer mass resolution and, therefore, necessitate more data to achieve a similar sensitivity. Among them, the $\PH \to \PW\PW \to \ell\cPgn\ell\cPgn$ channel has the largest signal-to-background ratio. These five channels are complementary both in the way they are measured in the detector and in the information they can provide about the SM Higgs boson. A light Higgs boson has a natural width of a few \MeV~\cite{LHCHiggsCrossSectionWorkingGroup:2011ti}, and therefore the precision of the mass measurement from fully reconstructed decays would be limited by the detector resolution. The first two channels, $\PH \to \Pgg\Pgg$ and $\PH \to \cPZ\cPZ \to 4\ell$, produce a narrow mass peak. These two high-resolution channels were used to measure the mass of the newly observed particle~\cite{CMSobservation125,ATLASobservation125}. In the SM, the properties of the Higgs boson are fully determined once its mass is known. All cross sections and branching fractions are predicted~\cite{LHCHiggsCrossSectionWorkingGroup:2011ti,Dittmaier:2012vm}, and thus the measured rates in each channel provide a test of the SM. The individual measurements can be combined, and from them the coupling constants of the Higgs boson to fermions and bosons can be extracted. The measured values can shed light on the nature of the newly observed particle because the Higgs boson couplings to fermions are qualitatively different from those to bosons. The data described in this paper are identical to those reported in the observation publication~\cite{CMSobservation125}. The main focus of this paper is an in-depth description of the five main analyses and a more detailed comparison of the various channels with the SM predictions, evaluating the couplings to fermions and vector bosons as well as various coupling ratios. The paper is organized as follows. Sections 2 and 3 contain a short description of the CMS detector and the event reconstruction of physics objects relevant for the Higgs boson search. Section 4 describes the data sample, the Monte Carlo (MC) event generators used for the signal and background simulation, and the evaluation of the signal sensitivity. Then the analyses of the five decay channels are described in detail in Sections 5 to 9. In the last section, the statistical method used to combine the five channels and the statistical treatment of the systematic uncertainties are explained.
Finally, the results are combined and the first measurements of the couplings of the new particle to bosons and fermions are presented. \section{The CMS experiment}\label{sec:experiment} The discovery capability for the SM Higgs boson is one of the main benchmarks that went into optimizing the design of the CMS experiment~\cite{Pimia:1990zy,DellaNegra:1992hp,Ellis:1994sq,Chatrchyan:2008aa}. The central feature of the detector~\cite{Chatrchyan:2008aa} is a superconducting solenoid 13\unit{m} long, with an internal diameter of 6\unit{m}. The solenoid generates a uniform 3.8\unit{T} magnetic field along the axis of the LHC beams. Within the field volume are a silicon pixel and strip tracker, a lead tungstate crystal electromagnetic calorimeter (ECAL), and a brass/scintillator hadron calorimeter (HCAL). Muons are identified and measured in gas-ionization detectors embedded in the outer steel magnetic flux return yoke of the solenoid. The detector is subdivided into a cylindrical barrel and endcap disks on each side of the interaction point. Forward calorimeters complement the coverage provided by the barrel and endcap detectors. The CMS experiment uses a right-handed coordinate system, with the origin at the nominal interaction point, the $x$ axis pointing to the centre of the LHC, the $y$ axis pointing up (perpendicular to the LHC plane), and the $z$ axis along the anticlockwise-beam direction. The azimuthal angle $\phi$ is measured in the $x$-$y$ plane. The pseudorapidity is defined as $\eta = -\ln[\tan{(\theta/2)}]$ where the polar angle $\theta$ is measured from the positive $z$ axis. The centre-of-mass momentum of the colliding partons in a proton-proton collision is subject to Lorentz boosts along the beam direction relative to the laboratory frame. Because of this effect, the pseudorapidity, rather than the polar angle, is a more natural measure of the angular separation of particles in the rest frame of the detector. Charged particles are tracked within the pseudorapidity range $|\eta|<2.5$. The silicon pixel tracker is composed of 66~million pixels of area $100\times150\mum^2$, arranged in three barrel layers and two endcap disks at each end. The silicon strip tracker, organized in ten barrel layers and twelve endcap disks at each end, is composed of 9.3 million strips with pitch between 80 and 205$\mum$, with a total silicon surface area of $198\unit{m}^2$. The performance of the tracker is essential to most analyses in CMS and has reached the design performance in transverse-momentum ($\pt$) resolution, efficiency, and primary- and secondary-vertex resolutions. The tracker has an efficiency larger than 99\% for muons with $\pt >1\GeV$, a $\pt$ resolution between 2 and 3\% for charged tracks of $\pt \approx 100$\GeV in the central region ($|\eta| <$ 1.5), and unprecedented capabilities for b-jet identification. Measurements of the impact parameters of charged tracks and secondary vertices are used to identify jets that are likely to contain the hadronization and decay products of $\cPqb$ quarks (``$\cPqb$ jets''). A b-jet tagging efficiency of more than 50\% is achieved with a rejection factor for light-quark jets of ${\approx}200$, as measured with $\ttbar$ events in data~\cite{CMS-PAS-BTV-12-001}. The dimuon mass resolution at the $\Upsilon$ mass, dominated by instrumental effects, is measured to be 0.6\% in the barrel region~\cite{PhysRevD.83.112004}, consistent with the design goal. 
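For concreteness, the mapping between the polar angle and the pseudorapidity defined above can be illustrated with a minimal Python sketch; this is an illustrative addition, not part of the CMS software.
\begin{verbatim}
import math

def pseudorapidity(theta):
    # eta = -ln(tan(theta/2)), with the polar angle theta in radians.
    return -math.log(math.tan(theta / 2.0))

def polar_angle(eta):
    # Inverse mapping: theta = 2*atan(exp(-eta)).
    return 2.0 * math.atan(math.exp(-eta))

# A particle emitted at 90 degrees to the beam has eta = 0, while the
# tracker acceptance |eta| < 2.5 corresponds to polar angles down to
# about 9.4 degrees from the beam axis.
print(pseudorapidity(math.pi / 2.0))   # 0.0
print(math.degrees(polar_angle(2.5)))  # ~9.4
\end{verbatim}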
Due to the high spatial granularity of the pixel detector, the channel occupancy is less than $10^{-3}$, allowing charged-particle trajectories to be measured in the high-rate environment of the LHC without loss of performance. The ECAL is a fine-grained, homogeneous calorimeter consisting of more than 75\,000 lead tungstate crystals, arranged in a quasi-projective geometry and distributed in a barrel region ($|\eta| < 1.48$) and two endcaps that extend up to $|\eta| = 3.0$. The front-face cross section of the crystals is approximately $22\times 22\unit{mm}^2$ in the barrel region and $28.6\times 28.6\unit{mm}^2$ in the endcaps. Preshower detectors consisting of two planes of silicon sensors interleaved with a total of three radiation lengths of lead absorber are located in front of the endcaps. Electromagnetic (EM) showers are narrowly distributed in the lead tungstate crystals (Moli\`ere radius of 21\unit{mm}), which have a transverse size comparable to the shower radius. The precise measurement of the transverse shower shape is the primary method used for EM particle identification, and measurements in the surrounding crystals are used for isolation criteria. The energy resolution of the ECAL is the single most important performance benchmark for the measurement of the Higgs boson decay into two photons and to a lesser extent for the decay to \cPZ\cPZ\ that subsequently decay to electrons. In the central barrel region, the energy resolution of electrons that interact minimally with the tracker material indicates that the resolution of unconverted photons is consistent with design goals. The energy resolution for photons with transverse energy of ${\approx}60\GeV$ varies between 1.1\% and 2.5\% over the solid angle of the ECAL barrel, and from 2.2\% to 5\% in the endcaps. For ECAL barrel unconverted photons the diphoton mass resolution is estimated to be 1.1\GeV at a mass of 125\unit{GeV}. The HCAL barrel and endcaps are sampling calorimeters composed of brass and plastic scintillator tiles, covering $|\eta| < 3.0$. The hadron calorimeter thickness varies from 7 to 11 interaction lengths within the solenoid, depending on $|\eta|$; a scintillator ``tail catcher'' placed outside the coil of the solenoid, just in front of the innermost muon detector, extends the instrumented thickness to more than 10 interaction lengths. Iron forward calorimeters with quartz fibres, read out by photomultipliers, extend the calorimeter coverage up to $|\eta| = 5.0$. Muons are measured in the range $|\eta| < 2.4$, with detection planes based on three technologies: drift tubes ($|\eta| <$ 1.2), cathode strip chambers ($0.9 < |\eta| < 2.4$), and resistive-plate chambers ($|\eta| < 1.6$). The first two technologies provide a precise position measurement and trigger, whilst the third one provides precise timing information, as well as a second independent trigger. The muon system consists of four stations in the barrel and endcaps, designed to ensure robust triggering and detection of muons over a large angular range. In the barrel region, each muon station consists of twelve drift-tube layers, except for the outermost station, which has eight layers. In the endcaps, each muon station consists of six detection planes. The precision of the $r$-$\phi$ measurement is 100\mum in the drift tubes and varies from 60 to 140\mum in the cathode strip chambers, where $r$ is the radial distance from the beamline and $\phi$ is the azimuthal angle. 
The CMS trigger and data acquisition systems ensure that data samples with potentially interesting events are recorded with high efficiency. The first-level (L1) trigger, composed of the calorimeter, muon, and global-trigger processors, uses coarse-granularity information to select the most interesting events in less than 4\mus. The detector data are pipelined to ensure negligible deadtime up to an L1 rate of 100\unit{kHz}. After L1 triggering, data are transferred from the readout electronics of all subdetectors through the readout network to the high-level-trigger (HLT) processor farm, which assembles the full event and executes global reconstruction algorithms. The HLT filters the data, resulting in an event rate of $\approx$500\unit{Hz} stored for offline processing. All data recorded by the CMS experiment are accessible for offline analysis through the Worldwide LHC Computing Grid. The CMS experiment employs a highly distributed computing infrastructure, with a primary Tier-0 centre at CERN, supplemented by seven Tier-1, more than 50 Tier-2, and over 100 Tier-3 centres at national laboratories and universities throughout the world. The CMS software running on this high-perfor\-mance computing system executes a multitude of crucial tasks, including the reconstruction and analysis of the collected data, as well as the generation of MC samples and the modelling of the detector response. \section{Event reconstruction}\label{sec:reconstruction} Figure~\ref{fig:reconstruction_vertices} shows the distribution of the number of vertices reconstructed per event in the 2011 and 2012 data, and the display of a four-lepton event recorded in 2012. The large number of proton-proton interactions occurring per LHC bunch crossing (``pileup''), on average 9 in 2011 and 19 in 2012, makes the identification of the vertex corresponding to the hard-scattering process nontrivial, and affects most of the physics objects: jets, lepton isolation, etc. The tracking system is able to separate collision vertices as close as 0.5~\mm along the beam direction~\cite{IEEE_DetAnnealing}. For each vertex, the sum of the $\pt^2$ of all tracks associated with the vertex is computed. The vertex for which this quantity is the largest is assumed to correspond to the hard-scattering process, and is referred to as the primary vertex in the event reconstruction. In the $\PH\to\Pgg\Pgg$ final state, a large fraction of the transverse momentum produced in the collision is carried by the photons, and a dedicated algorithm, described in Section~\ref{sec:hgg_vertex}, is therefore used to assign the photons to a vertex. \begin{figure}[htb] \begin{center} \includegraphics[width=0.47\textwidth]{figures/reconstruction/vertices.pdf} \hspace{0.5cm} \includegraphics[width=0.4\textwidth]{figures/reconstruction/vertex_display.png} \end{center} \caption{Left: probability distribution for the number of vertices $N_\text{vertices}$ reconstructed per event in the 2011 and 2012 data. The $\sqrt{s}=7$ and $8\TeV$ probability distributions are weighted by their equivalent integrated luminosity, and by the corresponding total cross section $\sigma(\Pp\Pp \to \PH+X)$ for a SM Higgs boson of mass 125\GeV. Right: display of a four-lepton event recorded in 2012, with 24 reconstructed vertices.
The four leptons are shown as thick lines and originate from the vertex chosen for the hard-scattering process.} \label{fig:reconstruction_vertices} \end{figure} A particle-flow (PF) algorithm~\cite{CMS-PAS-PFT-09-001, CMS-PAS-PFT-10-002} combines the information from all CMS subdetectors to identify and reconstruct the individual particles emerging from all vertices: charged hadrons, neutral hadrons, photons, muons, and electrons. These particles are then used to reconstruct the missing transverse energy, jets, and hadronic $\tau$-lepton decays, and to quantify the isolation of leptons and photons. Electrons and photons can interact with the tracker material before reaching the ECAL to create additional electrons and photons through pair production and bremsstrahlung radiation. A calorimeter superclustering algorithm is therefore used to combine the ECAL energy deposits that could correspond to a photon or electron. In the barrel region, superclusters are formed from five-crystal-wide areas in $\eta$, centred on the locally most-energetic crystal and having a variable extension in $\phi$. In the endcaps, where the crystals are arranged according to an $x$-$y$ rather than $\eta$-$\phi$ geometry, matrices of $5\times5$ crystals around the most-energetic crystals are merged if they lie within a narrow road in $\eta$. The stability and uniformity of the ECAL response must be calibrated at a fraction of a percent to maintain the excellent intrinsic energy resolution of the ECAL \cite{ECAL-EnergyResol}. A dedicated monitoring system, based on the injection of laser light into each crystal, is used to track and correct for channel response changes caused by radiation damage and subsequent recovery of the crystals \cite{ECAL-LaserMonit}. Response variations are a few percent in the barrel region, and increase up to a few tens of percent in the most-forward endcap regions. The channel-to-channel response is equalized using several techniques that exploit reference signatures from collision events (mainly $\pi^0, \eta \to \gamma\gamma$) \cite{ECAL-Calibrations}. The residual miscalibration of the channel response varies between 0.5\% in the central barrel to a few percent in the endcaps \cite{ECAL-Role}. At the reconstruction level, additional correction factors to the photon energy are applied. These corrections are sizeable for photons that convert before entering the ECAL, for which the resolution is mainly limited by shower-loss fluctuations. Given the distribution of the tracker material in front of the ECAL, these effects are sizeable for $|\eta|>1$ \cite{ECAL-Role}. Candidate photons for the $\PH\to\Pgg\Pgg$ search are reconstructed from the superclusters, and their identification is discussed in Section~\ref{sec:hgg_selection}. The photon energy is computed starting from the raw supercluster energy. In the region covered by the preshower detector ($|\eta| > 1.65$), the energy recorded in that detector is added. In order to obtain the best resolution, the raw energy is corrected for the containment of the shower in the clustered crystals and for the shower losses of photons that convert in the tracker material before reaching the calorimeter. These corrections are computed using a multivariate regression technique based on the boosted decision tree (BDT) implementation in \textsc{tmva}~\cite{Hocker:2007ht}. The regression is trained on photons from a sample of simulated events using the ratio of the true photon energy to the raw energy as the target variable. 
The input variables are the $\eta$ and $\phi$ coordinates of the supercluster, a collection of shower-shape variables, and a set of energy-deposit coordinates defined with respect to the supercluster. A second BDT, using the same input variables, is trained on a separate sample of simulated photons to provide an estimate of the uncertainty in the energy value provided by the first BDT. The width of the reconstructed $\cPZ$ resonance is used to quantify the ECAL performance, using decays to two electrons whose energies are measured using the ECAL alone, with their direction determined from the tracks. In the 7\TeV data set, the dielectron mass resolution at the $\cPZ$ boson mass, fitting for the measurement contribution separately from the natural width, is 1.56\GeV in the barrel and 2.57\GeV in the endcaps, while in the 8\TeV data sample, reconstructed with preliminary calibration constants, the corresponding values are 1.61\GeV and 3.75\GeV. Electron reconstruction is based on two methods: the first where an ECAL supercluster is used to seed the reconstruction of a charged-particle trajectory in the tracker~\cite{Baffioni:2006cd,CMS-PAS-EGM-10-004}, and the second where a candidate track is used to reconstruct an ECAL supercluster~\cite{CMS-PAS-PFT-10-003}. In the latter, the electron energy deposit is found by extrapolating the electron track to the ECAL, and the deposits from possible bremsstrahlung photons are collected by extrapolating a straight line tangent to the electron track from each tracker layer, around which most of the tracker material is concentrated. In both cases, the trajectory is fitted with a Gaussian sum filter~\cite{Adam2005} using a dedicated modelling of the electron energy loss in the tracker material. Merging the output of these two methods provides high electron reconstruction efficiency within $|\eta| < 2.5$ and $\PT>2$\GeV. The electron identification relies on a \textsc{tmva} BDT that combines observables sensitive to the amount of bremsstrahlung along the electron trajectory, the geometrical and momentum matching between the electron trajectory and the associated supercluster, as well as the shower-shape observables. Muons are reconstructed within $|\eta| < 2.4$ and down to a \PT of 3\GeV. The reconstruction combines the information from both the silicon tracker and the muon spectrometer. The matching between the tracker and the muon system is initiated either ``outside-in'', starting from a track in the muon system, or ``inside-out'', starting from a track in the silicon tracker. Loosely identified muons, characterized by minimal requirements on the track components in the muon system and taking into account small energy deposits in the calorimeters that match to the muon track, are identified with an efficiency close to 100\% by the PF algorithm. In some analyses, additional tight muon identification criteria are applied: a good global muon-track fit based on the tracker and muon chamber hits, muon track-segment reconstruction in at least two muon stations, and a transverse impact parameter with respect to the primary vertex smaller than 2\,mm. Jets are reconstructed from all the PF particles using the anti-\kt jet algorithm~\cite{Cacciari:2008gp} implemented in \textsc{fastjet}~\cite{Cacciari:fastjet}, with a distance parameter of 0.5. The jet energy is corrected for the contribution of particles created in pileup interactions and in the underlying event. 
This contribution is calculated as the product of the jet area and an event-by-event \PT density $\rho$, also obtained with \textsc{fastjet} using all particles in the event. Charged hadrons, photons, electrons, and muons reconstructed by the PF algorithm have a calibrated momentum or energy scale. A residual calibration factor is applied to the jet energy to account for imperfections in the neutral-hadron calibration, the jet energy containment, and the estimation of the contributions from pileup and underlying-event particles. This factor, obtained from simulation, depends on the jet \PT and $\eta$, and is of the order of 5\% across the whole detector acceptance. Finally, a percent-level correction factor is applied to match the jet energy response in the simulation to the one observed in data. This correction factor and the jet energy scale uncertainty are extracted from a comparison between the data and simulation of $\gamma$+jets, \cPZ+jets, and dijet events~\cite{CMS-JME-10-011}. Particles from different pileup vertices can be clustered into a pileup jet, or can significantly overlap a jet from the primary vertex below the \PT threshold applied in the analysis. Such jets are identified and removed using a \textsc{tmva} BDT with the following input variables: momentum and spatial distribution of the jet particles, charged- and neutral-particle multiplicities, and consistency of charged hadrons within the jet with the primary vertex. The missing transverse energy (\MET) vector is calculated as the negative of the vectorial sum of the transverse momenta of all particles reconstructed by the PF algorithm. The resolution $\sigma(E_{x,y}^{\text{miss}})$ on either the $x$ or $y$ component of the \MET vector is measured in $\cPZ\to\mu\mu$ events and parametrized by $\sigma(E_{x,y}^{\text{miss}})=0.5\times\sqrt{\Sigma {\ET}}$, where $\Sigma {\ET}$ is the scalar sum of the transverse momenta of all particles, with $\sigma$ and $\Sigma {\ET}$ expressed in \GeVns{}. In 2012, with an average of 19 pileup interactions, $\Sigma {\ET}\approx 600\GeV$ for the analyses considered here. Jets originating from b-quark hadronization are identified using different algorithms that exploit particular properties of such objects~\cite{CMS-PAS-BTV-12-001}. These properties, which result from the relatively large mass and long lifetime of b quarks, include the presence of tracks with large impact parameters, the presence of secondary decay vertices displaced from the primary vertex, and the presence of low-\PT leptons from semileptonic b-hadron decays embedded in the jets~\cite{CMS-PAS-BTV-12-001}. A combined secondary-vertex (CSV) b-tagging algorithm, used in the $\PH\to\cPqb\cPqb$ and $\PH\to\Pgt\Pgt$ searches, makes use of the information about track impact parameters and secondary vertices within jets in a likelihood discriminant to provide separation of b jets from jets originating from gluons, light quarks, and charm quarks. The efficiency to tag b jets and the rate of misidentification of non-b jets depend on the algorithm used and the operating point chosen. These are typically parametrized as a function of the transverse momentum and rapidity of the jets. The performance measurements are obtained directly from data in samples that can be enriched in b jets, such as $\ttbar$ and multijet events.
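The area-based pileup subtraction and the \MET resolution parametrization quoted above amount to simple arithmetic; the following Python sketch illustrates both, with all numerical inputs (raw jet \pt, jet area, and $\rho$) chosen as placeholder values rather than taken from data.
\begin{verbatim}
import math

def pileup_subtracted_pt(raw_pt, jet_area, rho):
    # Area-based subtraction of pileup/underlying-event energy:
    # corrected pt = raw pt - rho * A (inputs here are placeholders).
    return max(raw_pt - rho * jet_area, 0.0)

def met_component_resolution(sum_et):
    # sigma(Ex,y^miss) = 0.5 * sqrt(Sum ET), all quantities in GeV,
    # as measured in Z -> mu mu events.
    return 0.5 * math.sqrt(sum_et)

print(pileup_subtracted_pt(raw_pt=45.0, jet_area=0.78, rho=12.0))  # ~35.6
print(met_component_resolution(600.0))  # ~12.2 GeV at 2012 pileup levels
\end{verbatim}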
Hadronically decaying $\tau$ leptons ($\Pgt_h$) are reconstructed and identified using an algorithm~\cite{CMS-PAS-TAU-11-001} that targets the main decay modes by selecting candidates with one charged hadron and up to two neutral pions, or with three charged hadrons. A photon from a neutral-pion decay can convert in the tracker material into an electron and a positron, which can then radiate bremsstrahlung photons. These particles give rise to several ECAL energy deposits at the same $\eta$ value but separated in azimuthal angle, and are reconstructed as several photons by the PF algorithm. To increase the acceptance for such converted photons, the neutral pions are identified by clustering the reconstructed photons in narrow strips along the $\phi$ direction. The $\Pgt_h$ from \PW, \cPZ, and Higgs boson decays are typically isolated from the other particles in the event, in contrast to misidentified $\Pgt_h$ from jets, which are surrounded by the jet particles not used in the $\Pgt_h$ reconstruction. The $\Pgt_h$ isolation parameter $R_\text{Iso}^{\tau}$ is obtained from a multivariate discriminator, taking as input a set of transverse momentum sums $S_{j} = \sum_i p_{\mathrm{T}, i, j}$, where $p_{\mathrm{T}, i, j}$ is the transverse momentum of a particle $i$ in a ring $j$ centred on the $\Pgt_h$ candidate direction and defined in $(\eta, \phi)$ space. Five equal-width rings are used up to a distance $\Delta R = \sqrt{(\Delta \eta)^2 + (\Delta \phi)^2}=0.5$ from the $\Pgt_h$ candidate, where $\Delta \eta$ and $\Delta \phi$ are the pseudorapidity and azimuthal angle differences (in radians), respectively, between the particle and the $\Pgt_h$ candidate direction. The effect of pileup on the isolation parameter is reduced mainly by discarding from the $S_{j}$ calculation charged hadrons whose tracks originate from a pileup vertex. The contribution of pileup photons and neutral hadrons is handled by the discriminator, which also takes as input the $\PT$ density $\rho$. The isolation parameter of electrons and muons is defined relative to their transverse momentum $\PT^{\ell}$ as \begin{equation} R_\text{Iso}^{\ell} \equiv \left( \sum_\text{charged} \PT + \mathrm{MAX}\left[ 0, \sum_\text{neut. had.} \PT + \sum_{\gamma} {\PT} - \rho_\text{neutral} \times A_\text{eff} \right] \right) / \PT^{\ell}, \label{eq:reconstruction_isolation} \end{equation} where $\sum_\text{charged} \PT$, $\sum_\text{neut. had.} \PT$, and $\sum_{\gamma} \PT$ are, respectively, the scalar sums of the transverse momenta of charged hadrons, neutral hadrons, and photons located in a cone centred on the lepton direction in $(\eta, \phi)$ space. The cone size $\Delta R$ is taken to be 0.3 or 0.4, depending on the analysis. Charged hadrons associated with pileup vertices are not considered, and the contribution of pileup photons and neutral hadrons is estimated as the product of the neutral-particle \PT density $\rho_\text{neutral}$ and an effective cone area $A_\text{eff}$. The neutral-particle \PT density is obtained with \textsc{fastjet} using all PF photons and neutral hadrons in the event; the effective cone area differs slightly from the actual cone area, being computed in such a way as to absorb the residual dependence of the isolation efficiency on the number of pileup collisions.
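Equation~(\ref{eq:reconstruction_isolation}) translates directly into code; the sketch below is an illustrative Python rendering with invented candidate lists and placeholder values for $\rho_\text{neutral}$ and $A_\text{eff}$.
\begin{verbatim}
def relative_isolation(pt_lep, charged_pts, neutral_pts, photon_pts,
                       rho_neutral, eff_area):
    # Eq. (1): charged-hadron sum plus the pileup-corrected neutral
    # sums, divided by the lepton pT. The lists hold the pT values of
    # PF candidates inside the isolation cone (placeholder inputs).
    neutral = sum(neutral_pts) + sum(photon_pts) - rho_neutral * eff_area
    return (sum(charged_pts) + max(0.0, neutral)) / pt_lep

# Example: a well-isolated 40 GeV lepton.
print(relative_isolation(pt_lep=40.0,
                         charged_pts=[1.2, 0.8],
                         neutral_pts=[0.9],
                         photon_pts=[1.5, 0.6],
                         rho_neutral=8.0, eff_area=0.15))  # 0.095
\end{verbatim}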
\section{Data sample and analysis performance}\label{sec:searches} The data were collected by the CMS experiment at a centre-of-mass energy of 7\TeV in 2011, corresponding to an integrated luminosity of about $5.1\fbinv$, and at a centre-of-mass energy of 8\TeV in 2012, corresponding to an integrated luminosity of about 5.3\fbinv. A summary of all analyses described in this paper is presented in Table~\ref{tab:channels}, where we list their main characteristics, namely: exclusive final states, Higgs boson mass range of the search, integrated luminosity used, and the approximate experimental mass resolution. The presence of a signal in one of the channels at a certain value of the Higgs boson mass, $\ensuremath{m_{\PH}}$, should manifest itself as an excess in the corresponding invariant-mass distribution, extending around that value over a range corresponding to the $\ensuremath{m_{\PH}}$ resolution.
\begin{table}
\begin{center}
\topcaption{Summary information on the analyses included in this paper. The column ``\PH prod.'' indicates the production mechanism targeted by an analysis; it does not imply 100\% purity (e.g.\ analyses targeting vector-boson fusion (VBF) production are expected to have 30\%--50\% of their signal events coming from gluon-gluon fusion production). The main contribution in the untagged and inclusive categories is always gluon-gluon fusion. A final state can be further subdivided into multiple categories based on additional jet multiplicity, reconstructed mass, transverse momentum, or multivariate discriminators. Notations used are: $(\mathrm{jj})_{\mathrm{VBF}}$ stands for a dijet pair consistent with the VBF topology (VBF-tag); $V$ = $\PW$ and $\cPZ$ bosons; same flavour (SF) dileptons = $\Pe\Pe$ or $\mu\mu$ pairs; different flavour (DF) dileptons = $\Pe\mu$ pairs; $\tau_h = \tau$ leptons decaying hadronically. $V\PH$ stands for associated production with a vector boson. }
\footnotesize
\label{tab:channels}
\begin{tabular}{l|c|l|ccc cc}
\hline
\PH\ decay & \PH\ prod. & Exclusive final states & No.\ of & $m_{\PH}$ range & $m_{\PH}$ & \multicolumn{2}{c}{$\mathcal{L}$ (fb$^{-1}$)} \\
 & & analysed & channels & (\GeVns) & resolution & 7\TeV & 8\TeV \\
\hline\hline
\multirow{3}{*}{$\gamma\gamma$} & untagged & 4 diphoton categories & 4 & 110--150 & 1--2\% & 5.1 & 5.3 \\
 & \multirow{2}{*}{VBF-tag} & $\gamma\gamma + (\rm{jj})_{\rm{VBF}}$ & \multirow{2}{*}{1 or 2} & \multirow{2}{*}{110--150} & \multirow{2}{*}{1--2\%} & \multirow{2}{*}{5.1} & \multirow{2}{*}{5.3} \\
 & & 2 $m_{\rm{jj}}$ categories for 8\TeV & & & & & \\
\hline
$\cPZ\cPZ \to 4\ell$ & inclusive & $4\Pe, \, 4\mu, \, 2 \Pe 2\mu$ & 3 & 110--180 & 1--2\% & 5.0 & 5.3 \\
\hline
\multirow{3}{*}{$\PW\PW \to \ell\nu\ell\nu$} & 0 or 1 jet & DF or SF dileptons & 4 & 110--160 & 20\% & 4.9 & 5.1 \\
 & \multirow{2}{*}{VBF-tag} & $\ell\nu\ell\nu + (\mathrm{jj})_{\mathrm{VBF}}$ & \multirow{2}{*}{1 or 2} & \multirow{2}{*}{110--160} & \multirow{2}{*}{20\%} & \multirow{2}{*}{4.9} & \multirow{2}{*}{5.1} \\
 & & DF or SF dileptons for 8\TeV & & & & & \\
\hline
\multirow{3}{*}{$\tau\tau$} & \multirow{2}{*}{0 or 1 jet} & $(\Pe\tau_h, \, \mu\tau_h, \, \Pe\mu, \, \mu\mu)$ & \multirow{2}{*}{16} & \multirow{2}{*}{110--145} & \multirow{2}{*}{20\%} & \multirow{2}{*}{4.9} & \multirow{2}{*}{5.1} \\
 & & 2 $p_T^{\tau\tau}$ categories and 0 or 1 jet & & & & & \\
 & VBF-tag & $(\Pe\tau_h, \, \mu\tau_h, \, \Pe\mu, \, \mu\mu) + (\rm{jj})_{\rm{VBF}}$ & 4 & 110--145 & 20\% & 4.9 & 5.1 \\
\hline
\multirow{2}{*}{$bb$} & \multirow{2}{*}{$V\PH$-tag} & $(\nu\nu, \, \Pe\Pe, \, \mu\mu, \, \Pe\nu, \, \mu\nu$ + 2 \cPqb\ jets) & \multirow{2}{*}{10} & \multirow{2}{*}{110--135} & \multirow{2}{*}{10\%} & \multirow{2}{*}{5.0} & \multirow{2}{*}{5.1} \\
 & & 2 $\pt^{V}$ categories & & & & & \\
\hline
\end{tabular}
\end{center}
\end{table}
\subsection{Simulated samples} MC simulation samples for the SM Higgs boson signal and background processes are used to optimize the event selection, evaluate the acceptance and systematic uncertainties, and predict the expected yields. They are processed through a detailed simulation of the CMS detector based on \GEANTfour~\cite{Agostinelli:2002hh} and are reconstructed with the same algorithms used for the data. The simulations include pileup interactions, reweighted to match the distribution of the number of such interactions observed in data. For leading-order generators, the default set of parton distribution functions (PDF) used to produce these samples is CTEQ6L~\cite{CTEQ6L1}, while CT10~\cite{Guzzi:2011sv} is employed for next-to-leading-order (NLO) generators. For all generated samples, the hadronization is handled by {\PYTHIA 6.4}~\cite{Sjostrand:2006za} or \HERWIG{++}~\cite{Gieseke:2006ga}, and the \TAUOLA~\cite{TAUOLA} package is used for $\tau$ decays. The {\PYTHIA} parameters for the underlying event and pileup interactions are set to the {Z2} tune \cite{1107.0330} for the 7\TeV data sample and to the {Z2*} tune \cite{1107.0330} for the 8\TeV data sample. \subsection{Signal simulation} The Higgs boson can be produced in pp collisions via four different processes: gluon-gluon fusion, vector-boson fusion, associated production with a vector boson, and associated production with a $\ttbar$ pair. Simulated Higgs boson signals from gluon-gluon fusion ($\Pg\Pg \rightarrow \PH$) and vector-boson fusion (VBF) ($\Pq\Pq \rightarrow \Pq\Pq \PH$) are generated with {\POWHEG}~\cite{powheg,powheg1,powheg2} at NLO.
The simulation of associated-production samples uses {\PYTHIA}, with the exception of the $\PH\to\cPqb\cPqb$ analysis that uses {\POWHEG} interfaced to \HERWIG{++}. Events at the generator level are reweighted according to the total cross section $\sigma(\Pp\Pp\rightarrow \PH)$, which contains contributions from gluon-gluon fusion up to next-to-next-to-leading order (NNLO) and next-to-next-to-leading-log (NNLL) terms~\cite{LHCHiggsCrossSectionWorkingGroup:2011ti,deFlorian:2012yg, Anastasiou:2012hx,Anastasiou:2008tj,deFlorian:2009hc,Baglio:2010ae,Djouadi:1991tka,Dawson:1990zj,Spira:1995rr, Harlander:2002wh,Anastasiou:2002yz,Ravindran:2003um,Catani:2003zt,Actis:2008ug,Aglietti:2004nj,Degrassi:2004mx}, vector-boson fusion including NNLO quantum chromodynamic (QCD) and NLO electroweak (EW) terms~\cite{LHCHiggsCrossSectionWorkingGroup:2011ti,Ciccolini:2007jr,Ciccolini:2007ec,Figy:2003nv,Arnold:2008rz,Bolzoni:2010xr}, associated production V$\PH$ (where V $= \cPZ,\PW$) at NNLO QCD and NLO EW ~\cite{Han:1991ia,Brein:2003wg,Ciccolini:2003jy,Hamberg:1990np,Denner:2011id,Ferrera:2011bk}, and the production in association with $\ttbar$ at NLO QCD~\cite{Beenakker:2001rj,Beenakker:2002nc,Dawson:2002tg,Dawson:2003zu}. For the four-fermion final states the total cross section is scaled by the branching fraction $\mathcal{B}(\PH\rightarrow 4\ell)$ calculated with the \textsc{prophecy4f} program \cite{Bredenstein:2006ha,Bredenstein:2006rh}. The calculations include NLO QCD and EW corrections, and all interference effects up to NLO~\cite{LHCHiggsCrossSectionWorkingGroup:2011ti,Bredenstein:2006rh,Bredenstein:2006ha,Djouadi:1997yw,hdecay2,Actis:2008ts,Denner:2011mq,Dittmaier:2012vm}. For all the other final states {\HDECAY} \cite{Djouadi:1997yw,hdecay2} is used, which includes NLO QCD and NLO EW corrections. The predicted signal cross sections at 8\TeV and branching fraction for a low-mass Higgs boson are shown in the left and right plots of Fig.~\ref{fig:xs_and_br_lm}, respectively~\cite{LHCHiggsCrossSectionWorkingGroup:2011ti,Dittmaier:2012vm}. The uncertainty in the signal cross section related to the choice of PDFs is determined with the PDF4LHC prescription~\cite{Alekhin:2011sk,Botje:2011sn,Lai:2010vv,Martin:2009iq,Ball:2011mu}. The uncertainty due to the higher-order terms is calculated by varying the renormalization and factorization scales in each process, as explained in Ref.~\cite{LHCHiggsCrossSectionWorkingGroup:2011ti}. For the dominant gluon-gluon fusion process, the transverse momentum spectrum of the Higgs boson in the 7\TeV MC simulation samples is reweighted to match the NNLL + NLO distribution computed with \textsc{h}q\textsc{t}~\cite{Bozzi:2005wk,deFlorian:2011xf} (and \textsc{fehipro}~\cite{FeHiPro1,FeHiPro2} for the high-$\pt$ range in the $\tau\tau$ analysis), except in the $\PH\to\cPZ\cPZ$ analysis, where the reweighting is not necessary. At 8\TeV, \POWHEG was tuned to reach a good agreement of the $\pt$ spectrum with the NNLL + NLO prediction in order to make reweighting unnecessary~\cite{Dittmaier:2012vm}. \begin{figure}[tbp] \begin{center} \includegraphics[width=0.48\textwidth]{figures/Higgs_XS_8TeV_LM200.pdf} \includegraphics[width=0.48\textwidth]{figures/BR_lm_quadrato.pdf} \caption{Higgs boson production cross sections at $\sqrt{s}$ = 8\TeV (left) and branching fractions (right) as a function of the Higgs boson mass from Refs.~\cite{LHCHiggsCrossSectionWorkingGroup:2011ti,Dittmaier:2012vm}. 
The width of the lines represents the total theoretical uncertainty in the cross section and in the branching fractions.} \label{fig:xs_and_br_lm} \end{center} \end{figure} \subsection{Background simulation} The background contribution from \cPZ\cPZ\ production via $\Pq\Paq$ is generated at NLO with {\POWHEG}, while other diboson processes (\PW\PW, \PW\cPZ) are generated with {\MADGRAPH}~\cite{Alwall:2011uj,Alwall:2007st} with cross sections rescaled to NLO predictions. The {\PYTHIA} generator is also used to simulate all diboson processes. The $\Pg\Pg \rightarrow$VV contributions are generated with {\sc gg2vv}~\cite{Binoth:2008pr}. The V$+\text{jets}$ and V$\Pgg$ samples are generated with {\MADGRAPH}, as are contributions to inclusive $\cPZ$ and $\PW$ production, with cross sections rescaled to NNLO predictions. Single-top-quark and $\ttbar$ events are generated at NLO with {\POWHEG}. The {\PYTHIA} generator takes into account the initial-state and final-state radiation effects that can lead to the presence of additional hard photons in an event. The {\MADGRAPH} generator is also used to generate samples of $\ttbar$ events. QCD events are generated with {\PYTHIA}. Table~\ref{tab:mc} summarizes the generators used for the different analyses.
\begin{table}
\begin{center}
\small
\topcaption{Summary of the generators used for the simulation of the main backgrounds for the analyses presented in this paper. }
\label{tab:mc}
\begin{tabular}{l|c|c}
\hline
Analysis & Physics Process & Generator used \\
\hline\hline
$ \PH \to \gamma\gamma$ & QCD & \PYTHIA \\
 & \cPZ+jet & \MADGRAPH \\
\hline
$ \PH \to \cPZ\cPZ$ & qq $\to 4\ell$ & \POWHEG \\
 & gg $\to 4\ell$ & \textsc{gg2zz} \\
 & \cPZ+jet & \MADGRAPH \\
 & $ \cPZ+\gamma$ & \MADGRAPH \\
 & $ \ttbar $ & \POWHEG \\
 & qq $\to \PW\PW,\PW\cPZ$ & \MADGRAPH \\
\hline
$ \PH \to \PW\PW$ & qq $\to \PW\PW$ & \MADGRAPH \\
 & gg $\to \PW\PW$ & \textsc{gg2ww} \\
 & V+jet & \MADGRAPH \\
 & $ \ttbar$ & \POWHEG \\
 & $ \cPqt\PW$ & \POWHEG \\
 & QCD & \PYTHIA \\
\hline
$ \PH \to \tau\tau$ & \cPZ+jet & \MADGRAPH \\
 & $ \ttbar$ & \MADGRAPH \\
 & qq $\to \cPZ\cPZ,\cPZ\PW,\PW\PW$ & \PYTHIA \\
 & QCD & \PYTHIA \\
\hline
$ \PH \to \cPqb\cPqb$ & qq $\to \cPZ\cPZ,\cPZ\PW,\PW\PW$ & \PYTHIA \\
 & \cPZ+jet & \MADGRAPH \\
 & \PW+jet & \MADGRAPH \\
 & $ \ttbar$ & \MADGRAPH \\
 & $ \cPqt\PW$ & \POWHEG \\
 & QCD & \PYTHIA \\
\hline
\end{tabular}
\end{center}
\end{table}
\subsection{Search sensitivities} The search sensitivities of the different channels, for the recorded luminosity used in the analyses, expressed in terms of the median expected 95\% CL upper limit on the ratio of the measured signal cross section, $\sigma$, to the predicted SM Higgs boson cross section, $\sigma_{\mathrm{SM}}$, are shown in Fig.~\ref{fig:sensitivity} (left) as a function of the Higgs boson mass. A channel showing values below unity (dashed horizontal line) for a given mass hypothesis would be expected, in the absence of a Higgs boson signal, to exclude the standard model Higgs boson at 95\% CL or more at that mass. Figure~\ref{fig:sensitivity} (right) shows the expected sensitivities for the observation of the Higgs boson in terms of local $p$-values and significances as a function of the Higgs boson mass. The local $p$-value is defined as the probability for the background alone to produce a fluctuation at least as large as the observed excess; it measures the consistency of the data with the background-only hypothesis.
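The relation between a local $p$-value and the equivalent number of standard deviations is the standard one-sided Gaussian conversion. As a hedged illustration (using SciPy, not the actual LHC statistics tools), it reads:
\begin{verbatim}
from scipy.stats import norm

def significance(p_value):
    # One-sided Gaussian significance Z corresponding to a local p-value.
    return norm.isf(p_value)

def local_p_value(z):
    # Local p-value corresponding to a significance of Z standard deviations.
    return norm.sf(z)

print(local_p_value(5.0))     # ~2.9e-7, the conventional discovery threshold
print(significance(2.87e-7))  # ~5.0
\end{verbatim}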
The overall statistical methodology used in this paper was developed by the ATLAS and CMS Collaborations in the context of the LHC Higgs Combination Group~\cite{LHC-HCG-Report}. A summary of our usage of this methodology in the search for the Higgs boson is given in Section~\ref{sec:results}. \begin{figure*} \centering \includegraphics[width=0.49\textwidth]{figures/comb/sqr_acls_allexp_bydecay_smallGGScale} \hfill \includegraphics[width=0.49\textwidth]{figures/comb/sqr_pvala_allexp_bydecay_smallGGScale_wideX} \caption{ The median expected 95\% CL upper limits on the cross section ratio $ \sigma / \sigma_{\mathrm{SM}}$ in the absence of a Higgs boson (left) and the median expected local $p$-value for observing an excess, assuming that a Higgs boson with that mass exists (right), as a function of the Higgs boson mass for the five Higgs boson decay channels and their combination. } \label{fig:sensitivity} \end{figure*} \section{\texorpdfstring{$\PH\to\Pgg\Pgg$}{H to gamma gamma}\label{sec:hgg}} In the $\PH\to\Pgg\Pgg$ analysis, a search is made for a narrow peak, of width determined by the experimental resolution of ${\sim}1\%$, in the diphoton invariant-mass distribution in the range 110--150\GeV, on top of a large irreducible background from the production of two photons originating directly from the hard-scattering process. In addition, there is a sizeable amount of reducible background in which one or both of the reconstructed photons originate from the misidentification of particles in jets that deposit substantial energy in the ECAL, typically photons from the decay of $\pi^0$ or $\eta$ mesons. Early studies indicated this to be one of the most promising channels in the search for a SM Higgs boson in the low-mass range~\cite{Seez1990a}. To enhance the sensitivity of the analysis, candidate diphoton events are separated into mutually exclusive classes with different expected signal-to-background ratios, based on the properties of the reconstructed photons and the presence or absence of two jets satisfying criteria aimed at selecting events in which a Higgs boson is produced through the VBF process. The analysis uses multivariate techniques for the selection and classification of the events. As independent cross-checks, two additional analyses are performed. The first is almost identical to the CMS analysis described in Ref.~\cite{Chatrchyan:2012tw}, but uses simpler criteria based on the properties of the reconstructed photons to select and classify events. The second analysis incorporates the same multivariate techniques described here; however, it relies on a completely independent modelling of the background. These two analyses are described in more detail in Section~\ref{sec:hgg_crosscheck}. \subsection{Diphoton trigger} \label{sec:hgg_dataReco} All the data under consideration passed at least one of a set of diphoton triggers, each applying transverse energy thresholds and additional photon selection criteria, including requirements on the isolation and the shapes of the reconstructed energy clusters. The transverse energy thresholds were chosen to be at least 10\% lower than the envisaged final-selection thresholds. This set of triggers enabled events passing the later offline $\PH \to \gamma\gamma$ selection criteria to be collected with a trigger efficiency greater than $99.5\%$.
\subsection{Interaction vertex location} \label{sec:hgg_vertex} In order to construct a photon four-momentum from the measured ECAL energies and the impact position determined during the supercluster reconstruction, the photon production vertex, \ie the origin of the photon trajectory, must be determined. Without incorporating any additional information, any of the reconstructed pp event vertices is potentially the origin of the photon. If the distance in the longitudinal direction between the assigned and the true interaction point is larger than 10\mm, the resulting contribution to the diphoton mass resolution becomes comparable to the contribution from the ECAL energy resolution. It is, therefore, desirable to use additional information to assign the correct interaction vertex for the photon with high probability. This can be achieved by using the kinematic properties of the tracks associated with the vertices and exploiting their correlation with the diphoton kinematic properties, including the transverse momentum of the diphoton (\ensuremath{p_{\mathrm{T}}^{\gamma\gamma}}\xspace). In addition, if either of the photons converts into an $\Pep\Pem$ pair and the tracks from the conversion are reconstructed and identified, the direction of the converted photon, determined by combining the conversion vertex position and the position of the ECAL supercluster, can be extrapolated to identify the diphoton interaction vertex. For each reconstructed interaction vertex, the following set of variables is calculated: the sum of the squared transverse momenta of all tracks associated with the vertex and two variables that quantify the $\pt$ balance with respect to the diphoton system. In the case of a reconstructed photon conversion, an additional ``pull'' variable is used, defined as the distance between the vertex $z$ position and the beam-line extrapolated $z$ position coming from the conversion reconstruction, divided by the uncertainty in this extrapolated $z$ position. These variables are used as input to a BDT algorithm trained on simulated Higgs boson signal events, and the vertex with the highest classifier score is chosen as the origin of the photons. \begin{figure}[htbp] \begin{center} \includegraphics[width=0.49\linewidth]{figures/hgg_vertex_efficiencyVsPt22June.pdf} \caption{ Comparison of the vertex-identification efficiency between data (circles) and MC simulated $\Z\to\mu\mu$ events (squares), as a function of the $\Z$ boson \pt. } \label{fig:hgg_vtxeff} \end{center} \end{figure} The vertex-finding efficiency, defined as the efficiency to locate the vertex within 10\mm of its true position, is studied using $\Z\to\mu\mu$ events in which the muon tracks are removed from the tracks considered and the muon momenta are taken to play the role of the photon momenta. The result is shown in Fig.~\ref{fig:hgg_vtxeff}. The overall efficiency in signal events with a Higgs boson mass of 120\GeV, integrated over its \pt spectrum, is $(83.0\pm0.4)\%$ in the 7\TeV data set, and $(79.0\pm0.2)\%$ in the 8\TeV data set. The statistical uncertainties in these numbers are propagated to the uncertainties in the final result. A second vertex-related multivariate discriminant is employed to estimate, event by event, the probability for the vertex assignment to be within 10\mm of the diphoton interaction point. This BDT is trained using simulated \HGG\ events.
The input variables are the classifier values of the vertex BDT described above for the three vertices with the highest BDT scores, the number of vertices, the diphoton transverse momentum, the distances between the chosen vertex and the second and third choices, and the number of photons with an associated conversion track. These variables allow for a reliable quantification of the probability that the selected vertex is close to the diphoton interaction point. The resulting vertex-assignment probability from simulated events is used when constructing the Higgs boson signal models. The signal modelling is described in Section~\ref{sec:hgg_smodeling}. \subsection{Photon selection} \label{sec:hgg_selection} The event selection requires two photon candidates with transverse momenta satisfying $\pt^{\gamma}(1) > \ensuremath{m_{\gamma\gamma}}\xspace/3$ and $\pt^{\gamma}(2) > \ensuremath{m_{\gamma\gamma}}\xspace/4$, where $\ensuremath{m_{\gamma\gamma}}\xspace$ is the diphoton invariant mass, within the ECAL fiducial region $|\eta| < 2.5$, and excluding the barrel-endcap transition region $1.44 < |\eta| < 1.57$. The fiducial region requirement is applied to the supercluster position in the ECAL, and the $\pt$ threshold is applied after the vertex assignment. The requirements on the mass-scaled transverse momenta are motivated mainly by the fact that dividing the transverse momenta by the diphoton mass strongly reduces turn-on effects in the background shape in the low-mass region. In the rare cases where the event contains more than two photons passing all the selection requirements, the pair with the highest summed (scalar) \pt is chosen. The relevant backgrounds in the \HGG\ channel consist of the irreducible background from prompt diphoton production, \ie processes in which both photons originate directly from the hard-scattering process, and the reducible backgrounds from $\GAMJET$ and dijet events, where the objects misidentified as photons correspond to particles in jets that deposit substantial energy in the ECAL, typically photons from the decay of isolated $\pi^0$ or $\eta$ mesons. These misidentified objects are referred to as \emph{fake} or \emph{nonprompt} photons. In order to optimize the photon identification to exclude such nonprompt photons, a BDT classifier is trained using simulated $\Pp\Pp\to\GAMJET$ event samples, where prompt photons are used as the signal and nonprompt photons as the background. The variables used in the training are divided into two groups. The first contains information on the detailed electromagnetic shower topology; the second comprises variables describing the photon isolation, \ie kinematic information on the particles in the geometric neighbourhood of the photon. Examples of variables in the first group are the energy-weighted shower width of the cluster of ECAL crystals assigned to the photon and the ratio of the energy of the most energetic $3\times3$ crystal cluster to the total cluster energy. The isolation variables include the magnitude of the sum of the transverse momenta of all other reconstructed particles inside a cone of size $\DR=0.3$ around the candidate photon direction. In addition, the geometric position of the ECAL crystal cluster, as well as the event energy density $\rho$, are used. The photon ID classifier is based on the measured properties of a single photon and makes no use of any properties that are specific to the production mechanism. Any small residual dependence on the production mechanism, e.g.
through the isolation distribution, arises from the different event environments in Higgs boson decays and in $\gamma$+jets events. Instead of imposing a requirement on the trained multivariate classifier value to select photons with a high probability of being prompt, the classifier value itself is used as input to subsequent steps of the analysis. To reduce the number of events, a loose requirement is imposed on the classifier value (${>}-0.2$) for candidate photons to be considered further. This requirement retains more than $99\%$ of signal photons. The efficiency of this requirement, as well as the differential shape of the classifier variable for prompt photons, have been studied by comparing $\cPZ\to \Pe\Pe$ data to simulated events, given the similar response of the detector to photons and electrons. The comparisons between the differential shape in data and MC simulation for the 8\TeV analysis are shown in Fig.~\ref{fig:hgg_idmva}, for electrons in the barrel (left) and endcap (right) regions. \begin{figure}[htbp] \begin{center} \includegraphics[width=0.98\linewidth]{figures/hgg_idmva_noratio.pdf} \caption{ Comparison of the photon identification (ID) classifier variable distribution between 8\TeV data (points) and MC simulated events (histogram), separated into barrel (left) and endcap (right) electrons originating from $\cPZ\to \Pe\Pe$ events. The uncertainties in the distributions from simulation are shown by the cross-hatched histogram. } \label{fig:hgg_idmva} \end{center} \end{figure} \subsection{Event classification} \label{sec:hgg_diphotonBDT} The strategy of the analysis is to look for a narrow peak over the continuum in the diphoton invariant-mass spectrum. To increase the sensitivity of the search, events are categorized according to their expected diphoton mass resolution and signal-to-background ratio. Categories with good resolution and a large signal-to-background ratio dominate the sensitivity of the search. To accomplish this, an event classifier variable is constructed using multivariate techniques; it assigns a high value to events with signal-like kinematic characteristics and good diphoton mass resolution, as well as prompt-photon-like values of the photon identification classifier. However, the classifier should not be sensitive to the value of the diphoton invariant mass, in order to avoid biasing the mass distribution that is used to extract a possible signal. To achieve this, the input variables to the classifier are made dimensionless. Those that have units of energy (transverse momenta and resolutions) are divided by the diphoton invariant-mass value. The variables used to train this diphoton event classifier are the scaled photon transverse momenta ($\pt^{\gamma}(1)/\ensuremath{m_{\gamma\gamma}}\xspace$ and $\pt^{\gamma}(2)/\ensuremath{m_{\gamma\gamma}}\xspace$), the photon pseudorapidities ($\eta(1)$ and $\eta(2)$), the cosine of the angle between the two photons in the transverse plane ($\cos\left(\phi(1)-\phi(2)\right)$), the expected relative diphoton invariant-mass resolutions under the hypotheses of selecting a correct/incorrect interaction vertex ($\sigma_{m}^{\text{correct (incorrect)}}/\ensuremath{m_{\gamma\gamma}}\xspace$), the probability of selecting a correct vertex ($p_\text{vtx}$), and the photon identification classifier values for both photons.
The $\sigma_{m}^{\text{correct (incorrect)}}/\ensuremath{m_{\gamma\gamma}}\xspace$ is computed using the single-photon resolution estimated by the dedicated BDT described in Section~3. A vertex is labeled as correct if its distance from the true interaction point is smaller than 10\unit{mm}. To ensure the classifier assigns a high value to events with good mass resolution, the events are weighted by a factor inversely proportional to the mass resolution,
\begin{equation}
w_\text{sig} = \frac{p_\text{vtx}}{\sigma_{m}^{\text{correct}}/\ensuremath{m_{\gamma\gamma}}\xspace} + \frac{1-p_\text{vtx}}{\sigma_{m}^{\text{incorrect}}/\ensuremath{m_{\gamma\gamma}}\xspace}.
\end{equation}
This factor incorporates the resolutions under both correct- and incorrect-interaction-vertex hypotheses, properly weighted by the probabilities of having assigned the vertex correctly. The training is performed on simulated background and Higgs boson signal events. The training procedure makes full use of the signal kinematic properties, which are assumed to be those of the SM Higgs boson. The classifier, though still valid, would not be fully optimal for a particle produced with significantly different kinematic properties. The uncertainties in the diphoton event classifier output come from potential mismodelling of the input variables. The dominant sources are the uncertainties in the shapes of the photon identification (ID) classifier and the individual photon energy resolutions, which are used to compute the relative diphoton invariant-mass resolutions.

\begin{figure}
\begin{center}
\includegraphics[width=0.49\textwidth]{figures/hgg_phidmva1eb}
\includegraphics[width=0.49\textwidth]{figures/hgg_phidmva1ee}
\caption{ Distribution of the photon ID classifier value for the larger transverse momentum photon in the ECAL barrel (left) and endcaps (right) from candidate diphoton data events (points) with $m_{\gamma\gamma}>160\GeV$. The predicted distributions for the various diphoton backgrounds as determined from simulation are shown by the histograms. The variations of the classifier value due to the systematic uncertainties are shown by the cross-hatched histogram.}
\label{fig:hgg_phoidshift}
\end{center}
\end{figure}

The first of these amounts to a potential shift in the photon ID classifier value of at most ${\pm}0.01$ in the 8\TeV and ${\pm}0.025$ in the 7\TeV analysis. These values are set according to the observed differences between the photon ID classifier value distributions from data and simulation. This comparison for the 7\TeV analysis is shown in Fig.~\ref{fig:hgg_phoidshift}, where the distributions for the leading (highest $\pt$) candidate photons in the ECAL barrel (left) and endcaps (right) are compared between data and MC simulation for $\ensuremath{m_{\gamma\gamma}}\xspace>160\GeV$, where most photons are prompt ones. In addition to the three background components described in Section~\ref{sec:hgg_selection} (prompt-prompt, prompt-nonprompt, and nonprompt-nonprompt), an additional component composed of Drell--Yan events, in which both final-state electrons are misidentified as photons, has been studied and found to be negligible. As discussed previously, a variation of the classifier value by ${\pm}0.025$, represented by the cross-hatched histogram, covers the differences.
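The event weight $w_\text{sig}$ defined above translates directly into code; the following sketch is illustrative only, with assumed argument names.
\begin{verbatim}
def signal_training_weight(p_vtx, sigma_m_right, sigma_m_wrong, m_gg):
    # Weight events by the inverse relative mass resolution, combining
    # the correct- and incorrect-vertex hypotheses according to the
    # vertex-assignment probability p_vtx.
    return (p_vtx / (sigma_m_right / m_gg)
            + (1.0 - p_vtx) / (sigma_m_wrong / m_gg))
\end{verbatim}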
\begin{figure} \begin{center} \includegraphics[width=0.49\textwidth]{figures/hgg_phsige1eb} \includegraphics[width=0.49\textwidth]{figures/hgg_phsige1ee} \caption{ Distribution of the photon resolution estimate $\sigma_E/E$ for the leading photon in the ECAL barrel (left) and endcaps (right) from candidate diphoton data events (points) with $m_{\gamma\gamma}>160\GeV$. The predicted distributions for the various diphoton backgrounds, as determined from simulation, are shown by the histograms. The variations of the resolution due to the systematic uncertainties are shown by the cross-hatched histogram.} \label{fig:hgg_sigescale} \end{center} \end{figure} For the second important variable, the photon energy resolution estimate (calculated by a BDT, as discussed in Section~3), a similar comparison is shown in Fig.~\ref{fig:hgg_sigescale}. Again, the 7\TeV data distributions for candidate photons in the ECAL barrel (left) and endcap (right) are compared to MC simulation for $\ensuremath{m_{\gamma\gamma}}\xspace>160\GeV$. The systematic uncertainty of ${\pm}10\%$ is again shown as the cross-hatched histogram. The effect of both these uncertainties propagated to the diphoton event classifier distribution can be seen in Fig.~\ref{fig:hgg_dpmvavalidation}, where the 7\TeV data diphoton classifier variable is compared to the MC simulation predictions. The data and MC simulation distributions in both the left and right plots of Fig.~\ref{fig:hgg_dpmvavalidation} are the same. In the left plot, the uncertainty band arises from propagating the photon ID classifier uncertainty, while in the right plot, it is from propagating the energy resolution uncertainty. From these plots one can see that the uncertainty in the photon ID classifier dominates the overall uncertainty, and by itself almost covers the full difference between the data and MC simulation distributions. Both uncertainties are propagated into the final result. \begin{figure}[htbp] \begin{center} \includegraphics[width=0.49\linewidth]{figures/hgg_phmvanomid_7TeV} \includegraphics[width=0.49\linewidth]{figures/hgg_phmvanomsige_7TeV} \caption{ The effect of the systematic uncertainty assigned to the photon identification classifier output (left) and the photon resolution estimate (right) on the diphoton BDT output for background MC simulation ($100\GeV<\ensuremath{m_{\gamma\gamma}}\xspace<180\GeV$) and for data. The nominal BDT output is shown as a stacked histogram and the variation due to the uncertainty is shown as a cross-hatched band. These plots show only the systematic uncertainties that are common to both signal and background. There are additional significant uncertainties that are not shown here. } \label{fig:hgg_dpmvavalidation} \end{center} \end{figure} The diphoton event classifier output is then used to divide events into different classes, prior to fitting the diphoton invariant-mass spectrum. The procedure successively splits events into classes by introducing a boundary value for the diphoton classifier output. The first boundary results in two classes, and then these classes are further split. Each split is introduced using the boundary value that gives rise to the best expected exclusion limit. The procedure is terminated once additional splitting results in a negligible (${<}1$\%) gain in sensitivity. Additionally, the lowest score class is dropped since it does not contribute significantly to the sensitivity. This procedure results in four event classes for both the 7 and 8\TeV data sets. 
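The recursive determination of the class boundaries described above can be sketched as a greedy optimization; the function \texttt{expected\_limit}, assumed here to return the expected exclusion limit for a given sorted list of boundary values (smaller being better), is a hypothetical input, and this is not the analysis implementation.
\begin{verbatim}
import numpy as np

def optimise_boundaries(expected_limit, max_classes=10, rel_gain=0.01):
    boundaries = []                       # start from a single inclusive class
    best = expected_limit(boundaries)
    while len(boundaries) < max_classes:
        # scan candidate boundary values for the best additional split
        trials = [(expected_limit(sorted(boundaries + [b])), b)
                  for b in np.linspace(-1.0, 1.0, 201)]
        new_best, b_star = min(trials)
        if (best - new_best) / best < rel_gain:
            break                         # <1% sensitivity gain: stop splitting
        boundaries = sorted(boundaries + [b_star])
        best = new_best
    return boundaries
\end{verbatim}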
The systematic uncertainties in the diphoton identification classifier and photon energy resolution discussed above can cause events to migrate between classes. In the 8\TeV analysis, these class migrations are up to $4.3\%$ and $8.1\%$, respectively. They are defined as the relative change of the expected signal yield in each category under the variation of the photon ID BDT classifier and the per-photon energy resolution estimate, within their uncertainties as explained above.

The sensitivity of the analysis is enhanced by exploiting the distinctive kinematics of Higgs bosons produced by the VBF process~\cite{Ballestrero:2008gf}. Dedicated classes of events are selected using dijet-tagging criteria. The 7\TeV data set has one class of dijet-tagged events, while the 8\TeV data set has two. In the 7\TeV analysis, dijet-tagged events are required to contain two jets with transverse energies exceeding 20 and 30\GeV, respectively. The dijet invariant mass is required to be greater than 350\GeV, and the absolute value of the difference of the pseudorapidities of the two jets has to be larger than 3.5. In the 8\TeV analysis, dijet-tagged events are required to contain two jets and are categorized as ``Dijet tight'' or ``Dijet loose''. The jets in Dijet tight events must have transverse energies above 30\GeV and a dijet invariant mass greater than 500\GeV. For the jets in the Dijet loose events, the leading (subleading) jet transverse energy must exceed 30 (20)\GeV, and the dijet invariant mass must be greater than 250\GeV, where leading and subleading refer to the jets with the highest and next-to-highest transverse momentum, respectively. The pseudorapidity separation between the two jets is also required to be greater than 3.0. Additionally, in both analyses the difference between the average pseudorapidity of the two jets and the pseudorapidity of the diphoton system must be less than 2.5~\cite{Rainwater:1996ud}, and the difference in azimuthal angle between the diphoton system and the dijet system is required to be greater than 2.6\unit{radians}. To further reduce the background in the dijet classes, the $\pt$ threshold on the leading photon is increased to $\pt^{\gamma}(1) > \ensuremath{m_{\gamma\gamma}}\xspace/2$.

Systematic uncertainties in the efficiency of dijet tagging for signal events arise from the uncertainty in the MC simulation modelling of the jet energy corrections and resolution, and from uncertainties in simulating the number of jets and their kinematic properties. These uncertainties are estimated by using different underlying-event tunes, PDFs, and renormalization and factorization scales, as suggested in Refs.~\cite{LHCHiggsCrossSectionWorkingGroup:2011ti,Dittmaier:2012vm}. A total systematic uncertainty of 10\% is assigned to the efficiency for VBF signal events to pass the dijet-tag criteria, and an uncertainty of 50\%, dominated by the uncertainty in the underlying-event tune, to the efficiency for signal events produced by gluon-gluon fusion.
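The 8\TeV dijet-tag criteria can be summarized in a simple selection function; the argument names are assumptions made for illustration, and the sketch ignores details such as jet identification, serving only to restate the cuts listed above.
\begin{verbatim}
def dijet_tag_8tev(pt_j1, pt_j2, m_jj, eta_j1, eta_j2,
                   eta_gg, dphi_gg_jj, pt_gamma1, m_gg):
    # Requirements common to both dijet classes
    if pt_gamma1 <= m_gg / 2.0:               # raised leading-photon threshold
        return None
    if abs(eta_j1 - eta_j2) <= 3.0:           # jet pseudorapidity separation
        return None
    if abs(0.5 * (eta_j1 + eta_j2) - eta_gg) >= 2.5:
        return None
    if dphi_gg_jj <= 2.6:                     # diphoton-dijet azimuthal angle
        return None
    if pt_j1 > 30.0 and pt_j2 > 30.0 and m_jj > 500.0:
        return "Dijet tight"
    if pt_j1 > 30.0 and pt_j2 > 20.0 and m_jj > 250.0:
        return "Dijet loose"
    return None
\end{verbatim}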
Table~\ref{tab:ClassFracs} shows the predicted number of signal events for a SM Higgs boson with $\ensuremath{m_{\PH}}= 125\GeV$, as well as the estimated number of background events per \GeVns of invariant mass at $\ensuremath{m_{\gamma\gamma}}\xspace = 125\GeV$, for each of the eleven event classes in the 7 and 8\TeV data sets. The table also gives the fraction of each Higgs boson production process in each class (as predicted by MC simulation) and the mass resolution, represented both as $\sigma_\text{eff}$, half the width of the narrowest interval containing 68.3\% of the distribution, and as the full-width-at-half-maximum (FWHM) of the invariant-mass distribution divided by 2.35.

\begin{table}[htbp]
\begin{center}
\topcaption{Expected number of SM Higgs boson events ($\ensuremath{m_{\PH}}=125\GeV$) and estimated background (at $\ensuremath{m_{\gamma\gamma}}\xspace= 125\GeV$) for the event classes in the 7 (5.1\fbinv) and 8\TeV (5.3\fbinv) data sets. The composition of the SM Higgs boson signal in terms of the production processes and its mass resolution are also given.}
\begin{tabular}{>{\small}c<{\small}|>{\small}r<{\small}||r|>{\small}r<{\small}>{\small}r<{\small}>{\small}r<{\small}>{\small}r<{\small}|>{\centering}b{1.3cm}<{\centering}|>{\centering}b{2.0cm}<{\centering}||r@{\,$\pm$\,}l}
\hline
\multicolumn{2}{c||}{\multirow{2}{*}{Event classes}} & \multicolumn{7}{c||}{SM Higgs boson expected signal ($\ensuremath{m_{\PH}}=125\GeV$)} & \multicolumn{2}{c}{\multirow{2}{*}{\begin{minipage}[t]{2.5cm}\begin{center}Background \footnotesize{$\ensuremath{m_{\gamma\gamma}}\xspace=125\GeV$}\\\small{(events/\GeV)}\end{center}\end{minipage}}}\tabularnewline
\cline{3-9}
\multicolumn{2}{c||}{} & Events & ggH & VBF & VH & ttH & $\sigma_\text{eff}$ \small{(\GeVns{})} & \small{FWHM/2.35} \small{(\GeVns{})} & \multicolumn{2}{c}{} \tabularnewline
\hline\hline
\multirow{5}{*}{7\TeV} & BDT 0 & 3.2 & 61\% & 17\% & 19\% & 3\% & 1.21 & 1.14 & \rule{6mm}{0mm} 3.3 & 0.4 \tabularnewline
 & BDT 1 & 16.3 & 88\% & 6\% & 6\% & -- & 1.26 & 1.08 & 37.5 & 1.3 \tabularnewline
 & BDT 2 & 21.5 & 92\% & 4\% & 4\% & -- & 1.59 & 1.32 & 74.8 & 1.9 \tabularnewline
 & BDT 3 & 32.8 & 92\% & 4\% & 4\% & -- & 2.47 & 2.07 & 193.6 & 3.0 \tabularnewline
 & Dijet tag & 2.9 & 27\% & 72\% & 1\% & -- & 1.73 & 1.37 & 1.7 & 0.2 \tabularnewline
\hline
\multirow{6}{*}{8\TeV} & BDT 0 & 6.1 & 68\% & 12\% & 16\% & 4\% & 1.38 & 1.23 & 7.4 & 0.6 \tabularnewline
 & BDT 1 & 21.0 & 87\% & 6\% & 6\% & 1\% & 1.53 & 1.31 & 54.7 & 1.5 \tabularnewline
 & BDT 2 & 30.2 & 92\% & 4\% & 4\% & -- & 1.94 & 1.55 & 115.2 & 2.3 \tabularnewline
 & BDT 3 & 40.0 & 92\% & 4\% & 4\% & -- & 2.86 & 2.35 & 256.5 & 3.4 \tabularnewline
 & Dijet tight & 2.6 & 23\% & 77\% & -- & -- & 2.06 & 1.57 & 1.3 & 0.2 \tabularnewline
 & Dijet loose & 3.0 & 53\% & 45\% & 2\% & -- & 1.95 & 1.48 & 3.7 & 0.4 \tabularnewline
\hline
\end{tabular}
\label{tab:ClassFracs}
\end{center}
\end{table}

\subsection{Signal and background modelling}
\label{sec:hgg_smodeling}

The modelling of the Higgs boson signal used in the estimation of the sensitivity has two aspects. First, the normalization, \ie the expected number of signal events for each of the considered Higgs boson production processes; second, the diphoton invariant-mass shape. To model both aspects, including their respective uncertainties, the MC simulation events and theoretical considerations described in Section~\ref{sec:searches} are used. To account for the interference between the signal and background diphoton final states~\cite{interference}, the expected gluon-gluon fusion process cross section is reduced by 2.5\% for all values of \ensuremath{m_{\PH}}. Additional systematic uncertainties in the normalization of each event class arise from potential class-to-class migration of signal events caused by uncertainties in the diphoton event classifier value.
The instrumental uncertainties in the classifier value and their effect have been discussed previously. The theoretical ones, arising from the uncertainty in the theoretical predictions for the photon kinematics, are estimated by measuring the amount of class migration under variation of the renormalization and factorization scales within the range $\ensuremath{m_{\PH}}/2 < \mu < 2\ensuremath{m_{\PH}}$ (class migrations up to 12.5\%) and the PDFs (class migrations up to 1.3\%). These uncertainties are propagated to the final statistical analysis.

\begin{figure}[htbp]
\begin{center}
\includegraphics[width=0.55\linewidth]{figures/hgg_resolution.pdf}
\caption{ Comparison of the dielectron invariant-mass spectrum from $\cPZ\to \Pe\Pe$ events between 8\TeV data (points) and the simulated events (histogram), where the selected electrons are reconstructed as photons. The simulated distribution after applying smearing and scaling corrections of the electron energies is shown by the solid line. }
\label{fig:hgg_resolution}
\end{center}
\end{figure}

To model the diphoton invariant-mass spectrum properly, it is essential that the diphoton mass resolution and scale in the simulation are accurately modelled. This is done by comparing the dielectron invariant-mass distribution in $\cPZ\to \Pe\Pe$ events between data and MC simulation, where the electrons have been reconstructed as photons. This comparison is shown for the 8\TeV data in Fig.~\ref{fig:hgg_resolution}, where the points represent data, and the histogram MC simulation. Before correction, the dielectron invariant-mass distribution from simulation is narrower than the one from data, caused by an inadequate modelling of the photon energy resolution in the simulation. To correct this effect, the photon energies in the Higgs boson signal MC simulation events are smeared and the data events scaled, so that the dielectron invariant-mass scale and resolution as measured in $\cPZ\to \Pe\Pe$ events agree between data and MC simulation. These scaling and smearing factors are determined in a total of eight photon categories, \ie separately for photons in four pseudorapidity regions ($|\eta|<1$, $1\leq|\eta|<1.5$, $1.5\leq|\eta|<2$, and $|\eta| \geq 2$), and separately for high $R9$ (${>}0.94$) and low $R9$ (${\leq}0.94$) photons, where $R9$ is the ratio of the energy of the most energetic $3\times3$ crystal cluster to the total cluster energy. Additionally, the factors are computed separately for different running periods in order to account for changes in the running conditions, for example the change in the average beam intensity. These modifications remove the discrepancy between data and simulation, as seen in the comparison of the points and solid curve of Fig.~\ref{fig:hgg_resolution}. The uncertainties in the scaling and smearing factors, which range from 0.2\% to 0.9\% depending on the photon properties, are taken as systematic uncertainties in the signal evaluation and mass measurement.

The final signal model is then constructed separately for each event class and each of the four production processes as the weighted sum of two submodels that assume either the correct or incorrect primary vertex selection (as described in Section~\ref{sec:hgg_vertex}). The two submodels are weighted by the corresponding probability of picking the right ($p_\mathrm{vtx}$) or wrong ($1-p_\mathrm{vtx}$) vertex. The uncertainty in the parameter $p_\mathrm{vtx}$ is taken as a systematic uncertainty.
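A minimal sketch of this composition, assuming the two submodel shapes are available as callables, is:
\begin{verbatim}
def signal_model(m_gg, p_vtx, pdf_correct_vtx, pdf_incorrect_vtx):
    # Weighted sum of the correct- and incorrect-vertex submodels;
    # p_vtx is the probability of having chosen the right vertex.
    return (p_vtx * pdf_correct_vtx(m_gg)
            + (1.0 - p_vtx) * pdf_incorrect_vtx(m_gg))
\end{verbatim}
Varying $p_\mathrm{vtx}$ within its uncertainty then directly changes the relative weight of the two components.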
To describe the signal invariant-mass shape in each submodel, two different approaches are used. In the first, referred to as the parametric model, the MC simulated diphoton invariant-mass distribution is fitted to a sum of Gaussian distributions. The number of Gaussian functions ranges from one to three, depending on the event class and on whether the submodel corresponds to the correct- or incorrect-vertex hypothesis. The systematic uncertainties in the signal shape are estimated from the variations in the parameters of the Gaussian functions. In the second approach, referred to as the binned model, the signal mass shape for each event class is taken directly from the binned histogram of the corresponding simulated Higgs boson events. The systematic uncertainties are included by parametrizing the change in each bin of the histogram as a linear function under variation of the corresponding nuisance parameter, \ie the variable that parametrizes this uncertainty in the statistical interpretation of the data. The two approaches yield consistent final results and serve as an additional verification of the signal modelling. The presented results are derived using the parametric-model approach.

\begin{figure}[htbp]
\begin{center}
\includegraphics[width=0.49\linewidth]{figures/hgg_effsigmamvacat0_8TeV.pdf}
\includegraphics[width=0.49\linewidth]{figures/hgg_effsigmamvacat3_8TeV.pdf}
\caption{ Comparison of the diphoton invariant-mass distribution from the parametric signal model (blue line) and simulated MC events (open squares) for a Higgs boson mass hypothesis of $\ensuremath{m_{\PH}}=120\GeV$ for two of the four 8\TeV BDT event classes (BDT 0, left; BDT 3, right). }
\label{fig:hgg_sigmodel}
\end{center}
\end{figure}

The parametric signal models for a Higgs boson mass of $120\GeV$ in two of the 8\TeV BDT event classes are shown in Fig.~\ref{fig:hgg_sigmodel}. The signal models are summed over the four production processes, each weighted by their respective expected yield as computed from MC simulation. The two plots in Fig.~\ref{fig:hgg_sigmodel} illustrate how the diphoton invariant-mass resolution improves with increasing diphoton classifier value. The left distribution is for classifier values greater than 0.88 and has a mass resolution $\sigma_{\text{eff}} = 1.34\GeV$, while the right distribution is for classifier values between $-0.05$ and 0.50 and has $\sigma_{\text{eff}} = 2.77\GeV$. This is the intended behaviour of the event classification.

The uncertainties in the weighting factors for each of the production processes arise from variations in the renormalization and factorization scales, and uncertainties in the PDFs. They range from several percent for associated production with W/Z to almost 20\% for the gluon-gluon fusion process. The detailed values for the 8\TeV analysis, together with all the other systematic uncertainties discussed above, are summarized in Table~\ref{tab:hgg_systematics}. The corresponding uncertainties in the 7\TeV analysis are very similar, with the exception of the already mentioned uncertainty in the photon ID classifier, which was significantly larger in the 7\TeV analysis owing to a poorer agreement between data and MC simulation. In addition to the per-photon energy scale uncertainties, which are derived in the eight $\eta$--$R9$ categories, additional fully correlated energy scale uncertainties are assigned in order to account for possible non-linearity as a function of energy and for additional electron-photon differences.
The uncertainty associated with possible non-linearities in the energy measurement as a function of the cluster energy is evaluated by measuring the energy scale of $\cPZ\to \Pe\Pe$ events as a function of the scalar sum of the transverse momenta of the two electrons. The change in energy scale due to possible non-linearities in the energy measurement is estimated to be around $0.2\%$; since this correction is not applied, a systematic uncertainty of $0.4\%$ is assigned. An additional fully correlated uncertainty of $0.25\%$, related to the difference between electrons and photons, is assigned, amounting to half of the absolute energy scale difference between electrons and photons for non-showering electrons/photons in the barrel. Adding these two numbers in quadrature results in the additional energy scale uncertainty of $0.47\%$, which is treated as fully correlated among all event classes.

\begin{table}[htbp]
\topcaption{Largest sources of systematic uncertainty in the analysis of the 8\TeV data set. Eight photon categories are defined, depending on their $\eta$ and $R9$, where $R9$ is the ratio of the energy of the most energetic $3\times3$ crystal cluster to the total cluster energy. The four pseudorapidity regions are: $|\eta|<1$ (low $\eta$), $1\leq|\eta|<1.5$ (high $\eta$) for the barrel, and $1.5\leq|\eta|<2$ (low $\eta$), $|\eta| \geq 2$ (high $\eta$) for the endcaps; the two $R9$ regions are: high $R9$ ($>0.94$) and low $R9$ (${\leq}0.94$).}
\centering\small{
\begin{tabular}{ l r|c|c}
\hline
\multicolumn{2}{ l |}{\textbf{Sources of systematic uncertainty}} & \multicolumn{2}{ c }{\textbf{Uncertainty}}\\
\hline
\hline
\multicolumn{2}{ l |}{\textbf{Per photon}} & Barrel & Endcap \\
\hline
\multicolumn{2}{ l |}{Photon selection efficiency} & 0.8\% & 2.2\%\\
Energy resolution ($\Delta\sigma/E_{\mathrm{MC}}$) & $R9 > 0.94$ (low $\eta$, high $\eta$) & 0.22\%, 0.60\% & 0.90\%, 0.34\% \\
 & $R9 \leq 0.94$ (low $\eta$, high $\eta$) & 0.24\%, 0.59\% & 0.30\%, 0.52\% \\
Energy scale ($(E_{\text{data}}-E_{\mathrm{MC}})/E_{\mathrm{MC}}$) & $R9 > 0.94$ (low $\eta$, high $\eta$) & 0.19\%, 0.71\% & 0.88\%, 0.19\% \\
 & $R9 \leq 0.94$ (low $\eta$, high $\eta$) & 0.13\%, 0.51\% & 0.18\%, 0.28\% \\
\hline
\multicolumn{2}{l |}{Energy scale (fully correlated)} & \multicolumn{2}{ c }{$0.47\,\%$}\\
\multicolumn{2}{r|}{} & \multicolumn{2}{ c }{} \\
\hline
\multicolumn{2}{l |}{Photon identification classifier} & \multicolumn{2}{ c }{$0.01$}\\
\multicolumn{2}{r|}{} & \multicolumn{2}{ c }{} \\
\hline
\multicolumn{2}{ l |}{Photon energy resolution BDT} & \multicolumn{2}{ c }{$10\%$}\\
\multicolumn{2}{r|}{} & \multicolumn{2}{ c }{} \\
\hline
\hline
\multicolumn{4}{ l }{\textbf{Per event}}\\
\hline
\multicolumn{2}{l|}{Integrated luminosity} & \multicolumn{2}{ c }{4.4\%} \\
\multicolumn{2}{l|}{Vertex finding efficiency} & \multicolumn{2}{ c }{0.2\%}\\
\multicolumn{2}{l|}{Trigger efficiency --- One or both photons $R9 \leq 0.94$ in endcap} & \multicolumn{2}{ c }{0.4\%} \\
\multicolumn{2}{r|}{Other events} & \multicolumn{2}{ c }{0.1\%} \\
\hline
\hline
\multicolumn{4}{ l }{\textbf{Dijet selection}}\\
\hline
Dijet tagging efficiency & VBF & \multicolumn{2}{ c }{10\%}\\
\multicolumn{2}{r|}{Gluon-gluon fusion } & \multicolumn{2}{ c }{50\%}\\
\multicolumn{2}{r|}{} & \multicolumn{2}{ c }{} \\
\hline
\hline
\multicolumn{2}{ l |}{\textbf{Production cross sections}} & Scale & PDF \\
\hline
\multicolumn{2}{l|}{Gluon-gluon fusion} & +12.5\% -8.2\% & +7.9\% -7.7\% \\
\multicolumn{2}{l|}{VBF} & +0.5\% -0.3\% & +2.7\% -2.1\% \\
\multicolumn{2}{l|}{Associated production with W/Z} & 1.8\% & 4.2\% \\
\multicolumn{2}{l|}{Associated production with $\ttbar$} & +3.6\% -9.5\% & 8.5\% \\
\hline
\end{tabular}
}
\label{tab:hgg_systematics}
\end{table}

The modelling of the background relies entirely on the data. The observed diphoton invariant-mass distributions for the eleven event classes (five in the 7 and six in the 8\TeV analysis) are fitted separately over the range $100 < \ensuremath{m_{\gamma\gamma}}\xspace < 180\GeV$. This approach has the advantage that there are no systematic uncertainties due to potential mismodelling of the background processes by the MC simulation. The procedure is to fit the diphoton invariant-mass distribution to the sum of a signal mass peak and a background distribution. Since the exact functional form of the background in each event class is not known, the parametric model has to be flexible enough to describe an entire set of potential underlying functions. Using an incorrect background model can lead to biases in the measured signal strength. Such a bias can, depending on the Higgs boson mass and the event class, reach or even exceed the size of the expected signal, and therefore dramatically reduce the sensitivity of the analysis to any potential signal. In what follows, a procedure for selecting the background function is described that results in a potential bias small enough to be neglected.

If the true underlying background model could be used in the extraction of the signal strength, and no signal were present in the fitted data, the median fitted signal strength would be zero in the entire mass region of interest. The deviation of the median fitted signal strength from zero in background-only pseudo-experiments can thus be used to quantify the potential bias. These pseudodata sets are generated from a set of hypothetical truth models, with each model using a different analytical function that adequately describes the observed diphoton invariant-mass distribution. The set of truth models contains exponential and power-law functions, as well as polynomials (Bernstein polynomials) and Laurent series of different orders. None of these functions is required to describe the actual (unknown) underlying background distribution. Instead, we argue that they span the phase-space of potential underlying models in such a way that a fit model resulting in a negligible bias against all of them would also result in a negligible bias against the (unknown) true underlying distribution.

The first step in generating such pseudodata sets consists of constructing a truth model, from which the pseudodata set is drawn. This is done by fitting the data in each of the eleven event classes separately, and for each of the four general types of background functions, resulting in four truth models for each event class. The order of the background function required to adequately describe the data for each of the models is determined by increasing the order until an additional increase does not result in a significant improvement of the fit to the observed data. A $\chi^2$ goodness-of-fit test is used to quantify the fit quality, and an F-test is used to determine the termination criterion. ``Increasing the order'' here means adding additional terms of higher order in the case of the polynomial and the Laurent series, and adding additional exponential or power-law terms with different parameters in the case of the exponential and power-law truth models.
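The order-selection step described above can be illustrated with an F-test on nested fits; the dictionaries of $\chi^2$ values and degrees of freedom per order are assumed inputs, and the sketch is not the analysis code itself.
\begin{verbatim}
from scipy.stats import f as f_dist

def choose_order(chi2, ndf, alpha=0.05):
    # chi2[n] and ndf[n] are the chi-square and degrees of freedom of
    # the fit with an n-term background function.
    order = min(chi2)
    while order + 1 in chi2:
        d_ndf = ndf[order] - ndf[order + 1]
        f_stat = ((chi2[order] - chi2[order + 1]) / d_ndf
                  / (chi2[order + 1] / ndf[order + 1]))
        p_value = 1.0 - f_dist.cdf(f_stat, d_ndf, ndf[order + 1])
        if p_value > alpha:       # no significant improvement: stop
            break
        order += 1
    return order
\end{verbatim}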
Once the four truth models are determined for a given event class, ${\sim}40\,000$ pseudodata sets are generated for each by randomly drawing diphoton mass values from them. The next step is then to find a function (in what follows referred to as the \emph{fit model}) that results in a negligible bias against all four sets of pseudodata in the entire mass region of interest, \ie an analytical function that, when used to extract the signal strength in all the 40\,000 pseudodata sets, gives a mean value for the fitted strength consistent with zero. The criterion for the bias to be negligible is that it must be five times smaller than the statistical uncertainty in the number of fitted events in a mass window corresponding to the FWHM of the corresponding signal model. With this procedure, any potential bias from the background fit function can be neglected in comparison with the statistical uncertainty from the finite data sample. We find that only the polynomial background function produces a sufficiently small bias for all four truth models. Therefore, we only use this background function to fit the data. The required order of the polynomial function needed to reach the sufficiently small bias is determined separately for each of the 11 event classes, and ranges from 3 to 5. The entire procedure results in a background model for each of the event classes as a polynomial function of a given, class-dependent order. The parameters of this polynomial, \ie the coefficients for each term, are left free in the fit, and their variations are therefore the only source of uncertainty from the modelling of the background.

The \ensuremath{m_{\gamma\gamma}}\xspace distributions of the data, together with the results of the simultaneous fit of the signal-plus-background models derived as explained above, are shown for the eleven event classes in Figs.~\ref{fig:hgg_BckSig7TeV} and \ref{fig:hgg_BckSig8TeV} for the 7 and 8\TeV data samples, respectively. The uncertainty bands shown in the background component of the fit arise from the variation of the background fit parameters, and correspond to the uncertainties in the expected background yield. The fit is performed on the data from all event class distributions simultaneously, with an overall floating signal strength. In these fits, the mass hypothesis is scanned in steps of 0.5\GeV between 110 and 150\GeV. At the point with the most significant excess over the background-only hypothesis ($\ensuremath{m_{\PH}}=125$\GeV), the best fit value is $\sigma/\sigma_\mathrm{SM}=1.56\pm0.43$.

\begin{figure}[htbp]
\begin{center}
\includegraphics[width=0.45\linewidth]{figures/hgg_mvacat0_7TeV}
\includegraphics[width=0.45\linewidth]{figures/hgg_mvacat1_7TeV}\\
\includegraphics[width=0.45\linewidth]{figures/hgg_mvacat2_7TeV}
\includegraphics[width=0.45\linewidth]{figures/hgg_mvacat3_7TeV}\\
\includegraphics[width=0.45\linewidth]{figures/hgg_mvacat4_7TeV}
\caption{The diphoton invariant-mass distributions for the five classes of the 7\TeV data set (points) and the results of the signal-plus-background fits for $\ensuremath{m_{\PH}} = 125\GeV$ (lines). The background fit components are shown by the dotted lines. The light and dark bands represent the ${\pm}$1 and ${\pm}$2 standard deviation uncertainties, respectively, on the background estimate.
}
\label{fig:hgg_BckSig7TeV}
\end{center}
\end{figure}

\begin{figure}[htbp]
\begin{center}
\includegraphics[width=0.45\linewidth]{figures/hgg_mvacat0_8TeV}
\includegraphics[width=0.45\linewidth]{figures/hgg_mvacat1_8TeV}\\
\includegraphics[width=0.45\linewidth]{figures/hgg_mvacat2_8TeV}
\includegraphics[width=0.45\linewidth]{figures/hgg_mvacat3_8TeV}\\
\includegraphics[width=0.45\linewidth]{figures/hgg_mvacat4_8TeV}
\includegraphics[width=0.45\linewidth]{figures/hgg_mvacat5_8TeV}
\caption{The diphoton invariant-mass distributions for the six classes of the 8\TeV data set (points) and the results of the signal-plus-background fits for $\ensuremath{m_{\PH}} = 125\GeV$ (lines). The background fit components are shown by the dotted lines. The light and dark bands represent the ${\pm}$1 and ${\pm}$2 standard deviation uncertainties, respectively, on the background estimate. }
\label{fig:hgg_BckSig8TeV}
\end{center}
\end{figure}

In order to better visualize any overall excess in the data, each event is weighted by a class-dependent factor, and its corresponding diphoton invariant mass is plotted with that weight in a single distribution. The weight depends on the event class and is proportional to $S/(S+B)$, where $S$ and $B$ are the numbers of expected signal and background events in a mass window corresponding to $2\sigma_\text{eff}$, centred on $m_{\gamma\gamma}$ = 125\GeV and calculated from the signal-plus-background fit to all data event classes simultaneously. The particular choice of the weights is motivated in Ref.~\cite{Barlow:1986ek}. The resulting distribution is shown in Fig.~\ref{fig:hgg_MassFactSoB}, where for reference the distribution for the unweighted sum of events is shown as an inset. The binning for the distributions is chosen to optimize the visibility of the excess at 125\GeV, which is evident in both the weighted and unweighted distributions. It should be emphasized that this figure is for visualization purposes only, and no results are extracted from it.

\begin{figure}
\begin{center}
\includegraphics[width=0.63\linewidth]{figures/hgg_MassFactSoBWeightedMass}
\caption{The diphoton invariant-mass distribution for the 7 and 8\TeV data sets (points), with each event weighted by the predicted $S/(S+B)$ ratio of its event class. The solid and dotted lines give the results of the signal-plus-background and background-only fit, respectively. The light and dark bands represent the $\pm$1 and $\pm$2 standard deviation uncertainties, respectively, on the background estimate. The inset shows the corresponding unweighted invariant-mass distribution around $m_{\gamma\gamma}$ = 125\GeV. }
\label{fig:hgg_MassFactSoB}
\end{center}
\end{figure}

\subsection{Alternative analyses}
\label{sec:hgg_crosscheck}

In order to verify the results described above, two alternative analyses are performed. The first (referred to as the {\it cut-based} analysis) refrains from relying on multivariate techniques, except for the photon energy corrections described in Section~\ref{sec:reconstruction}. Instead, the photon identification is performed with an optimized set of requirements on the discriminating variables explained in Section~\ref{sec:hgg_selection}. Additionally, instead of using a BDT event-classifier variable to separate events into classes, the event classes are built using requirements on the photons directly.
Four mutually exclusive classes are constructed by splitting the events according to whether both candidate photons are reconstructed in the ECAL barrel or endcaps, and whether the $R9$ variable exceeds 0.94. This categorization is motivated by the fact that photons in the barrel with high $R9$ values are typically measured with better energy resolution than ones in the endcaps with low $R9$. Thus, the classification serves a similar purpose to the one using the BDT event classifier: events with good diphoton mass resolution are grouped together into one class. The four event classes used in this analysis are then:
\begin{itemize}
\item both photons are in the barrel, with $R9>0.94$,
\item both photons are in the barrel, with at least one of them having $R9\leq0.94$,
\item at least one photon is in the endcap, with both photons having $R9>0.94$,
\item at least one photon is in the endcap, with at least one photon having $R9\leq0.94$.
\end{itemize}

The second alternative analysis (referred to as the {\it sideband} analysis) uses the same multivariate technique as the baseline analysis, as well as an identical event sample, but relies on different procedures to model the signal and background contributions. This approach uses data in the sidebands of the invariant-mass distribution to model the background. Consequently, this analysis is much less sensitive to the parametric form used to describe the diphoton mass spectrum and allows the explicit inclusion of a systematic uncertainty for the possible bias in the background mass fit. For any given mass hypothesis \ensuremath{m_{\PH}}, a signal region is defined covering ${\pm}2\%$ around \ensuremath{m_{\PH}}. A contiguous set of sidebands is defined in the mass distribution on either side of the signal region, from which the background is extracted. Each sideband is defined to have an equivalent width of ${\pm}2\%$ relative to the mass hypothesis that corresponds to the centre of the sideband. A total of six sidebands are used in the analysis (three on either side of the signal region), with the two sidebands adjacent to the signal region omitted in order to avoid signal contamination, as illustrated in Fig.~\ref{fig:hgg_sidebands}. The construction of these windows is sketched below.

\begin{figure}[htbp]
\begin{center}
\includegraphics[width=0.60\linewidth]{figures/fit_m125_0-1GeV}
\caption{The six sidebands (dashed lines) around the signal region (solid line) in the sideband analysis. }
\label{fig:hgg_sidebands}
\end{center}
\end{figure}

The result is extracted by counting events in the signal region, in classes that are defined by the output distribution of a BDT. This mass-window BDT takes two dimensionless inputs: the diphoton BDT output (as described in Section~\ref{sec:hgg_diphotonBDT}), and the mass, in the form $\Delta m/\ensuremath{m_{\PH}}$, where $\Delta m =\ensuremath{m_{\gamma\gamma}}\xspace - \ensuremath{m_{\PH}}$ and $\ensuremath{m_{\PH}}$ is the Higgs boson mass hypothesis. The output of the BDT is binned to define the event classes. The bin boundaries are optimized to give the maximum expected significance in the presence of a Standard Model Higgs boson signal, and the number of bins is chosen such that any additional increase in the number of bins results in an improvement in the expected significance of less than 0.1\%. The same bin boundaries are used for the signal region and for the six sidebands. The dijet-tagged events constitute an additional bin (two bins for the 8\TeV data set) appended to the bins of the mass-window BDT output value.
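The window construction referred to above can be sketched as follows; contiguity of the windows fixes the ratio between adjacent window centres, and the exact bookkeeping conventions here are illustrative assumptions.
\begin{verbatim}
def mass_windows(m_h, width=0.02, n_use=3, n_skip=1):
    # Each window spans +-2% around its own centre; adjacent windows
    # share an edge, so neighbouring centres differ by the factor r.
    r = (1.0 + width) / (1.0 - width)
    signal = (m_h * (1.0 - width), m_h * (1.0 + width))
    lower, upper = [], []
    c_lo = c_hi = m_h
    for k in range(n_skip + n_use):
        c_lo /= r
        c_hi *= r
        if k >= n_skip:   # skip the sidebands adjacent to the signal region
            lower.append((c_lo * (1.0 - width), c_lo * (1.0 + width)))
            upper.append((c_hi * (1.0 - width), c_hi * (1.0 + width)))
    return signal, lower[::-1] + upper
\end{verbatim}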
The background model (\ie the BDT output distribution for background events in the signal region) is constructed from the BDT output distributions of the data in each of the six sidebands. The only assumptions made concerning the background model shape, both verified within the assigned systematic uncertainties, are that the fraction of events in each BDT output bin varies linearly as a function of invariant mass (and thus with sideband position), and that there is negligible signal contamination in the sidebands (a short illustrative sketch of the linear extrapolation is given at the end of this subsection). Only the overall normalization of the background model (the total number of background events in the signal region) is obtained from a parametric fit to the mass spectrum. The signal region is excluded from this fit. The bias incurred by the choice of the functional form used in the fit has been studied in a similar fashion to that described in Section~\ref{sec:hgg_smodeling}, and is covered by a systematic uncertainty of 1\%.

The mass-window BDT is trained using simulated Higgs boson events with $\ensuremath{m_{\PH}}=123\GeV$ and simulated background events, including prompt-prompt, prompt-fake, and fake-fake processes. The training samples are not used in any other part of the analysis, except as input to the binning algorithm, thus avoiding any biases from overtraining. The signal region for the mass hypothesis $\ensuremath{m_{\PH}}=125\GeV$ is estimated from simulation to contain 93\% of the signal. The number of expected signal events in each bin is determined using MC simulation, as in the baseline analysis. Systematic uncertainties in the signal modelling lead to event migrations between the BDT bins, which are accounted for as additional nuisance parameters in the limit-setting procedure.

Examples of distributions in this analysis are shown in Fig.~\ref{fig:hgg_MassWindowModel}, for the 7 (left) and 8\TeV (right) data sets. The different event classes are listed along the $x$ axis. The first seven classes are the mass-window BDT classes. They are ordered by increasing expected signal-to-background ratio. The class labeled as ``Dijet'' contains the dijet-tagged events. The number of data events, displayed as points, is compared to the expected background events determined from the sideband population, shown by the histogram. The expected signal yield for a Higgs boson mass of $\ensuremath{m_{\PH}}=125\GeV$ is shown with the dotted line.

\begin{figure}[htbp]
\begin{center}
\includegraphics[width=0.49\linewidth]{figures/hgg_MassWindow_model_m125}
\includegraphics[width=0.49\linewidth]{figures/hgg_MassWindow_model_m125_8TeV}
\caption{The number of observed events (points) for each of the mass-window BDT classes in the sideband analysis of $\PH \to \gamma\gamma$ for the 7 (left) and 8\TeV (right) data sets. The expected number of background events in each class, determined from the sidebands of the diphoton invariant-mass distribution, is shown by the solid line. The dark and light bands display the ${\pm}1$ and ${\pm}2$ standard deviation uncertainties in the background predictions, respectively. The expected number of signal events in each class for a 125\GeV Higgs boson, as determined from MC simulation, is shown by the dotted line. }
\label{fig:hgg_MassWindowModel}
\end{center}
\end{figure}

The statistical interpretation of the results is given in Section 10.
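As anticipated above, the linear-variation assumption of the sideband background model can be made concrete with a short sketch; the array layout and function name are assumptions made for illustration only.
\begin{verbatim}
import numpy as np

def signal_region_fractions(sideband_centres, fractions, m_h):
    # fractions[k][i]: fraction of sideband-k events in BDT bin i.
    F = np.asarray(fractions, dtype=float)
    out = []
    for i in range(F.shape[1]):
        # straight-line fit of the bin fraction versus sideband mass
        slope, intercept = np.polyfit(sideband_centres, F[:, i], 1)
        out.append(slope * m_h + intercept)
    out = np.clip(out, 0.0, None)
    return out / out.sum()   # renormalize; the overall yield comes from
                             # the separate parametric mass fit
\end{verbatim}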
\section{\texorpdfstring{$\PH\to\cPZ\cPZ$}{H to ZZ}\label{sec:hzz4l}}

\subsection{Event selection and kinematics}

The search for the decay $\PH \rightarrow \cPZ\cPZ \rightarrow 4\ell$ with $\ell = \Pe, \mu$ is performed by looking for a narrow four-lepton invariant-mass peak in the presence of a small continuum background. The background sources include an irreducible four-lepton contribution from direct \cPZ\cPZ\ ($\cPZ\gamma^*$) production via the $\Pq\Paq$ annihilation and $\Pg\Pg$ fusion processes. Reducible contributions arise from $\cPZ + \cPqb\cPaqb$ and $\ttbar$ production, where the final state contains two isolated leptons and two $\cPqb$-quark jets that produce two nonprompt leptons. Additional background arises from $\cPZ+$jets and $\PW\cPZ+$jets events, where jets are misidentified as leptons. Since there are differences in the reducible background rates and mass resolutions between the subchannels $4\Pe$, $4\mu$, and $2\Pe2\mu$, they are analyzed separately and the results are then combined statistically.

Compared to the first CMS $\cPZ\cPZ \to 4\ell$ analysis reported in Ref.~\cite{Chatrchyan:2012dg}, this analysis employs improved muon reconstruction, improved lepton identification and isolation, recovery of final-state-radiation (FSR) photons, and a kinematic discriminant that exploits the expected decay kinematics of the signal events. New mass and spin-parity results obtained from a $\PH \rightarrow \cPZ\cPZ \rightarrow 4\ell$ analysis using additional integrated luminosity at the centre-of-mass energy of 8\TeV are described in a recent CMS publication~\cite{:2012br}, and are not discussed further here.

Candidate events are first selected by triggers that require the presence of a pair of electrons or muons. An additional trigger requiring an electron and a muon in the event is also used for the 8\TeV data. The requirements on the minimum $\PT$ of the two leptons are 17 and 8\GeV. The trigger efficiency is determined by first adjusting the simulation to reproduce the efficiencies obtained for single lepton legs in dedicated tag-and-probe measurements, and then using the simulation to combine the lepton legs within the acceptance of the analysis. The efficiency for a Higgs boson of mass $> 120\GeV$ is greater than 99\% (98\%, 95\%) in the $4\mu$ ($2\Pe 2\mu$, $4\Pe$) channel.

The candidate events are selected using identified and isolated leptons. The electrons are required to have transverse momentum $\PT^{\Pe} > 7\GeV$ and pseudorapidity within the tracker geometrical acceptance of $|\eta^{\Pe}| < 2.5$. The corresponding requirements for muons are $\PT^{\Pgm} > 5\GeV$ and $|\eta^{\Pgm}| < 2.4$. No gain in expected significance for a Higgs boson signal is obtained by lowering the $\PT$ thresholds for the leptons, since the improvement in signal detection efficiency is accompanied by a large increase in the $\cPZ+$jets background. The lepton-identification techniques have been described in Section~\ref{sec:reconstruction}. The multivariate electron identification is trained using a Higgs boson MC simulation sample for the $\PH\rightarrow\cPZ\cPZ$ signal and a sample of \PW+1-jet events from data for the background. The working point is optimized using a \cPZ+1-jet data sample. For each lepton, $\ell = \Pe$, $\mu$, an isolation requirement of $R_\text{Iso}^{\ell} < 0.4$ is applied to suppress the \cPZ+jet, \cPZ+$\cPqb\cPaqb$, and $\ttbar$ backgrounds.
In addition, the lepton impact parameter significance with respect to the primary vertex, defined as ${\rm SIP_{3D}}= \frac{\rm IP}{\sigma_{\rm IP}}$, with ${\rm IP}$ the impact parameter in three dimensions and $\sigma_{\rm IP}$ its uncertainty, is used to further reduce the background. The criterion $|{\rm SIP_{3D}}| < 4$ suppresses the $\cPZ + \cPqb\cPaqb$ and $\ttbar$ backgrounds with a negligible effect on the signal efficiency.

The efficiencies for reconstruction, identification, and isolation of electrons and muons are measured in data, using a tag-and-probe technique~\cite{CMS:2011aa} based on an inclusive sample of $\cPZ \to \ell\ell$ events. The measurements are performed in bins of $\PT^{\ell}$ and $|\eta|$. Additional samples of dileptons with $\PT^{\ell} < 15\GeV$ from $\cPJgy$ decays are used for the efficiency measurements (in the case of muons) or for consistency checks (in the case of electrons). Examples of tag-and-probe results for the lepton identification efficiencies obtained with data and MC simulation are shown for electrons (top) and muons (bottom) in Fig.~\ref{fig:leptonTP}. The efficiencies measured with data are in agreement with those obtained using MC simulation.

\begin{figure}[htbp]
\begin{center}
\includegraphics[width=0.49\linewidth]{figures/HZZ_Eff_ElectronBarrel.pdf}
\includegraphics[width=0.49\linewidth]{figures/HZZ_Eff_ElectronEndcap.pdf}
\includegraphics[width=0.49\linewidth]{figures/HZZ_Eff_MuonBarrel.pdf}
\includegraphics[width=0.49\linewidth]{figures/HZZ_Eff_MuonEndcap.pdf}
\caption{ Measurements of the lepton identification efficiency using a tag-and-probe technique based on samples of \cPZ\ and $\cPJgy$ dilepton events. The measurements are shown for electrons (top) at 7\TeV and muons (bottom) at 8\TeV as a function of $\PT^{\ell}$ for the $|\eta|$ regions of the barrel (left) and endcaps (right). For muons, the efficiencies at $\PT^{\Pgm} < 15\GeV$ (dashed line in the bottom plots) are obtained using $\cPJgy$ decays. The results obtained from data (points with error bars) are compared to results obtained from MC simulation (histograms), with the shaded region representing the combined statistical and systematic uncertainties. }
\label{fig:leptonTP}
\end{center}
\end{figure}

The mean differences (at the percent level) are used to correct the MC simulation predictions, and the uncertainty in the difference is propagated as a systematic uncertainty per lepton. The overall lepton selection efficiencies are obtained as the product of the reconstruction, identification, and isolation efficiencies. The overall efficiency for selecting electrons in the ECAL barrel (endcaps) varies from about 71\% (65\%) for $7 < \PT^{\Pe} < 10\GeV$ to 82\% (73\%) at $\PT^{\Pe} \simeq 10\GeV$, and reaches 90\% (89\%) for $\PT^{\Pe} \simeq 20\GeV$. The efficiency for electrons drops to about 85\% in the transition region, $1.44 < |\eta^{\Pe}| < 1.57$, between the ECAL barrel and endcaps. The muons are selected with an efficiency above $98\%$ in the full $|\eta^{\Pgm}| < 2.4$ range for $\PT^{\Pgm} > 5\GeV$.

Photons reconstructed with pseudorapidity $\vert \eta^{\gamma} \vert < 2.4$ are possible FSR candidates. The photon selection criteria are optimized as a function of the angular distance between the photon and the closest lepton in $(\eta, \phi)$ space. In an inner cone $\Delta R = 0.07$, photons are accepted if $\pt > 2\GeV$, with no further requirements.
In an outer annulus $0.07< \Delta R<0.5$, where the rate of photons from the underlying event and pileup is much larger, a tighter threshold of 4\GeV is used, and the photons are also required to be isolated: the sum of the $\pt$ of all charged hadrons, neutral hadrons, and photons in a cone of radius $\Delta R = 0.3$ centred on the photon must not exceed the $\pt$ of the photon itself. In contrast to the lepton isolation, the photon isolation also includes the charged hadrons associated with other primary vertices, in order to take into account the possibility that the photon comes from a pileup interaction. The selection criteria have been tuned to achieve approximately the same purity in the two angular regions. When reconstructing the $\cPZ \to \ell\ell$ candidates, only FSR photons associated with the closest lepton, and that make the dilepton-plus-photon invariant mass closer to the nominal \cPZ\ mass than the dilepton invariant mass, are kept. The dilepton-plus-photon invariant mass must also be less than 100\GeV. The performance of the FSR selection algorithm is measured using MC $\PH \to \cPZ\cPZ$ simulation samples, and the rate is verified with inclusive $\cPZ$-boson events in data. Photons within the acceptance for the FSR selection are identified with an efficiency of ${\simeq}50\%$ and a mean purity of $80\%$. FSR photons are selected in 5\% of inclusive $\cPZ$-boson events in the muon channel and 0.5\% in the electron channel. In the case of electrons, the FSR photons are often implicitly combined into the electron superclusters, resulting in a lower FSR recovery efficiency.

The \cPZ\ boson candidates are reconstructed from pairs of leptons of the same flavour and opposite charge ($\ell^+\ell^-$). The lepton pair with an invariant mass closest to the nominal \cPZ\ mass is denoted as $\cPZ_1$, with mass $m_{\cPZ_1}$, and is retained if it satisfies $40 < m_{\cPZ_1} < 120\GeV$. The invariant mass of the second \cPZ\ candidate, denoted $\cPZ_2$, must satisfy $12 < m_{\cPZ_2} < 120\GeV$. The minimum value of $12\GeV$ is found from simulation to provide the optimal sensitivity for a Higgs boson mass in the range $110 < \ensuremath{m_{\PH}} < 160\GeV$. If more than one ${\cPZ_2}$ candidate satisfies all the criteria, we choose the candidate reconstructed from the two leptons with the highest scalar sum of their \PT. Among the four selected leptons forming $\cPZ_1$ and $\cPZ_2$, at least one is required to have $\PT > 20\GeV$ and another $\PT > 10\GeV$. These \PT thresholds ensure that the selected leptons are on the high-efficiency plateau for the trigger. To further reject leptons originating from weak semileptonic hadron decays or decays of low-mass hadronic resonances, we require that all opposite-charge pairs of leptons chosen from among the four selected leptons (irrespective of flavour) have an invariant mass greater than 4\GeV. The phase space for the Higgs boson search is defined by restricting the four-lepton mass range to $m_{4\ell} > 100\GeV$.

The predicted lepton $\PT$ distributions from the MC simulation for a Higgs boson with $\ensuremath{m_{\PH}} = 125$\GeV are shown in Fig.~\ref{fig:leptonPT} for the $4\Pe$, $4\mu$, and $2\Pe 2\mu$ channels. Also given in Fig.~\ref{fig:leptonPT} (bottom right) are the event selection efficiencies for each of the three lepton channels, as a function of the Higgs boson mass. These distributions clearly emphasize the importance of low lepton-$\PT$ thresholds and high lepton efficiencies.
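The pairing logic for $\cPZ_1$ and $\cPZ_2$ can be summarized in a short sketch; the lepton records (with \texttt{flavour}, \texttt{charge}, and \texttt{pt} attributes) and the helper \texttt{inv\_mass} are assumed inputs, and further requirements (the lepton \PT thresholds and the 4\GeV cut on all opposite-charge pairs) are omitted for brevity.
\begin{verbatim}
import itertools

Z_MASS = 91.188  # GeV

def build_z_candidates(leptons, inv_mass):
    # Same-flavour, opposite-charge lepton pairs
    pairs = [p for p in itertools.combinations(leptons, 2)
             if p[0].flavour == p[1].flavour and p[0].charge != p[1].charge]
    if not pairs:
        return None
    # Z1: the pair closest to the nominal Z mass, within 40-120 GeV
    z1 = min(pairs, key=lambda p: abs(inv_mass(*p) - Z_MASS))
    if not 40.0 < inv_mass(*z1) < 120.0:
        return None
    # Z2: highest scalar-pT sum among remaining pairs with 12 < m < 120 GeV
    rest = [p for p in pairs
            if not set(p) & set(z1) and 12.0 < inv_mass(*p) < 120.0]
    if not rest:
        return None
    z2 = max(rest, key=lambda p: p[0].pt + p[1].pt)
    return z1, z2
\end{verbatim}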
\begin{figure}[htbp]
\begin{center}
\includegraphics[width=0.49\linewidth]{figures/HZZ_pteff_4e_MH125}
\includegraphics[width=0.49\linewidth]{figures/HZZ_pteff_4mu_MH125}
\includegraphics[width=0.49\linewidth]{figures/HZZ_pteff_2e2mu_MH125}
\includegraphics[width=0.49\linewidth]{figures/HZZ_eff_2012_zoom}
\caption{ The MC simulation distributions of the lepton transverse momentum $\PT^{\ell}$ for each of the four leptons, ordered by $\PT^{\ell}$, from the process $\PH \to \cPZ\cPZ \to 4\ell$ for a Higgs boson mass of 125\GeV in the $4\Pe$ (top left), $4\mu$ (top right), and $2\Pe 2\mu$ (bottom left) channels. The distributions are shown for events where all four leptons are within the geometrical acceptance of the analysis (open histograms), and for events passing the final selection criteria (solid histograms). The bottom-right plot displays the event selection efficiencies for $\PH \to \cPZ\cPZ \to 4\ell$ determined from MC simulation, as a function of the Higgs boson mass, for the $4\Pe$, $4\mu$, and $2\Pe 2\mu$ channels. The efficiencies are relative to events where all four leptons are within the geometrical acceptance. Divergent contributions from $\cPZ\gamma^*$ with $\gamma^* \rightarrow \ell \ell$ at generator level are avoided by requiring that all dilepton invariant masses are greater than 1\GeV.}
\label{fig:leptonPT}
\end{center}
\end{figure}

The selection efficiencies shown in Fig.~\ref{fig:leptonPT} are relative to events where all four leptons are within the geometrical acceptance and all dilepton invariant masses satisfy $m_{\ell\ell} > 1\GeV$. The combined signal reconstruction and selection efficiency, for a Higgs boson with $\ensuremath{m_{\PH}} = 125\GeV$, is 18\% for the $4\Pe$ channel, 40\% for the $4\mu$ channel, and 27\% for the $2\Pe2\mu$ channel. The expected resolution of the per-event mass measurement is on average 2.2\% for the $4\Pe$ channel, 1.1\% for the $4\mu$ channel, and 1.6\% for the $2\Pe2\mu$ channel.

The kinematics of the $\PH\rightarrow\cPZ\cPZ\rightarrow 4\ell$ process, as well as of any boson decaying to $\cPZ\cPZ$, have been extensively studied in the literature~\cite{Soni:1993jc,Barger:1993wt,Choi:2002jk,Allanach:2002gn,Choi:2002jk, Buszello:2002uu,Godbole:2007cn,Keung:2008ve,Antipin:2008hj,Hagiwara:2009wt, Gao:2010qx,DeRujula:2010ys,Gainer:2011xz,Bolognesi:2012mm}. Since the Higgs boson is spinless, the angular distribution of its decay products is independent of the production mechanism. In the Higgs boson rest frame, for a given invariant mass of the $4\ell$ system, the kinematics are fully described by five angles, denoted $\vec{\Omega}$, and the invariant masses of the two lepton pairs, $\cPZ_1$ and $\cPZ_2$. These seven variables provide significant discriminating power between signal and background. A kinematic discriminant ($K_{D}$) is introduced using the full probability density in the dilepton masses and angular variables, ${\cal P}(m_{\cPZ_1},m_{\cPZ_2},\vec{\Omega}|m_{4\ell})$. The $K_{D}$ is constructed for each candidate event based on the probability ratio of the signal and background hypotheses, $K_{D}={\cal P_\mathrm{sig}}/({\cal P_\mathrm{sig}}+{\cal P_\mathrm{bkg}})$, as described in Refs.~\cite{Chatrchyan:2012sn,CMSobservation125}. For the signal, the phase-space and \cPZ-propagator terms~\cite{Choi:2002jk} are included in a fully analytic parametrization of the Higgs boson signal~\cite{Gao:2010qx}.
An analytic parametrization is also used for the background probability distribution for the mass range above the $\cPZ\cPZ$ threshold, while it is tabulated using an MC simulation of the $\cPq\cPaq\to\cPZ\cPZ(\cPZ\gamma^*)$ process below this threshold.

\subsection{Background estimation and systematic uncertainties}

The small number of observed candidate events precludes a precise direct determination of the background by extrapolating from the mass sidebands of the signal region. Instead, we rely on MC simulation to evaluate the local density ($\Delta N / \Delta m_{4\ell}$) of $\cPZ\cPZ$ background events expected as a function of $m_{4\ell}$. The cross section for \cPZ\cPZ\ production at NLO is calculated with \textsc{mcfm}~\cite{MCFM,Campbell:1999ah,Campbell:2011bn}. This includes the dominant process from $\Pq\Paq$ annihilation, as well as the contribution from gluon-gluon fusion. The uncertainties in the predicted number of background events owing to the variation of the QCD renormalization and factorization scales and of the PDF set are on average 8\% for each final state~\cite{Dittmaier:2012vm}. The numbers of predicted $\cPZ\cPZ\rightarrow 4\ell$ events and their systematic uncertainties after the signal selection are given in Table~\ref{tab:SelectYieldsLowMass}.

The reducible $\cPZ+\bbbar$, $\ttbar$, $\cPZ+\text{jets}$, $\cPZ+\gamma+\text{jets}$, and $\PW\cPZ+\text{jets}$ backgrounds contain at least one nonprompt lepton in the four-lepton final state. The main sources of nonprompt leptons are electrons and muons from decays of heavy-flavour quarks, misidentified jets (usually originating from light-flavour quarks), and electrons from photon conversions. The lepton misidentification probabilities are measured in data samples of $\cPZ+\text{jet}$ events with one additional reconstructed lepton, which are dominated by final states that include a $\cPZ$ boson and a fake lepton. The contamination from \PW\cPZ\ production in these events is suppressed by requiring $\ETmiss <25$\GeV. The lepton misidentification probabilities measured from these events are consistent with those derived from MC simulation.

These misidentification probabilities are applied to dedicated $\cPZ_1+X$ control samples, where $X$ contains two reconstructed leptons with relaxed isolation and identification criteria. Starting from these samples, two complementary approaches are used to extract the corresponding reducible $\cPZ+X$ background yield expected in the $4\ell$ signal region. The first approach avoids signal contamination in the background sample by reversing the opposite-sign requirement on the $\cPZ_2$ lepton candidates, and then applies the lepton misidentification probabilities to the additional leptons to calculate the expected number of background events in the signal sample. The second approach uses a control region defined by two opposite-sign leptons failing the isolation and identification criteria, and uses the misidentification probability to extrapolate to the signal region. In addition, a control region with three passing leptons and one failing lepton is also used to estimate the background with three prompt leptons and one misidentified lepton. Both methods give comparable background predictions in the signal region within their uncertainties. The average of the two predictions is used for the background estimate, with an uncertainty that includes the difference between them (see Table~\ref{tab:SelectYieldsLowMass}).
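A schematic version of this extrapolation, assuming per-event access to the leptons failing the tight criteria and to a measured misidentification probability \texttt{fake\_rate}, is shown below; this is an illustration of the method, not the analysis implementation.
\begin{verbatim}
def zx_background_yield(control_events, fake_rate):
    # Weight each Z1+X control event by the product of the measured
    # misidentification probabilities of its non-tight leptons.
    total = 0.0
    for event in control_events:
        weight = 1.0
        for lepton in event.loose_leptons:
            weight *= fake_rate(lepton)
        total += weight
    return total
\end{verbatim}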
Systematic uncertainties are evaluated from the data for the trigger (1.5\%), and for the combined four-lepton reconstruction, identification, and isolation efficiencies, which vary from 1.2\% in the $4\mu$ channel at $\ensuremath{m_{\PH}}=150\GeV$ to about 11\% in the $4\Pe$ channel at $\ensuremath{m_{\PH}}=120\GeV$. The effects of the systematic uncertainties in the lepton energy-momentum calibration (0.4\%) and energy resolution on the four-lepton invariant-mass distribution are taken into account. The accuracy of the absolute mass scale and resolution is validated using $\cPZ \to \ell\ell$, $\Upsilon \to \ell\ell$, and $\cPJgy \to \ell\ell$ events. The effect of the energy resolution uncertainty is taken into account by introducing a 20\% variation of the simulated width of the signal mass peak.

An uncertainty of 50\% is assigned to the reducible background rate. This arises from the finite statistical precision in the reducible background control regions, differences in the background composition between the various control regions, and differences between the data samples used to measure the lepton misidentification probabilities. Since all the reducible and instrumental backgrounds are estimated using control regions in the data, they are independent of the uncertainty in the integrated luminosity. However, this uncertainty (2.2\% at 7\TeV~\cite{CMS-PAS-SMP-12-008} and 4.4\% at 8\TeV~\cite{CMS:2012jza}) does affect the prediction of the \cPZ\cPZ\ background and the normalization of the signal in determining the Higgs boson cross section. Finally, the systematic uncertainties in the theoretical Higgs boson cross section (17--20\%) and in the $4\ell$ branching fraction (2\%) are taken from Ref.~\cite{LHCHiggsCrossSectionWorkingGroup:2011ti}.

\subsection{Results}

The number of selected $\cPZ\cPZ\rightarrow 4\ell$ candidate events in the mass range $110 < m_{4\ell} < 160$\GeV for each of the three final states is given in Table~\ref{tab:SelectYieldsLowMass}. The number of predicted background events in each of the three final states and their uncertainties are also given, together with the number of signal events expected from a SM Higgs boson of $\ensuremath{m_{\PH}} = 125$\GeV.

\begin{table}[htbp]
\begin{center}
\topcaption{The number of observed selected events, compared to the expected background yields and the expected number of signal events ($\ensuremath{m_{\PH}} = 125$\GeV) for each lepton final state in the $\PH\to\cPZ\cPZ \to 4\ell$ analysis. The estimates of the $\cPZ\cPZ$ background are from MC simulation, and the $\cPZ+X$ background estimates are based on data. These results are given for the four-lepton invariant-mass range from 110 to 160\GeV. The total expected background and the observed numbers of events are also given integrated over the three bins (``signal region'' defined as $121.5 < m_{4\ell} < 130.5$\GeV) of Fig.~\ref{fig:ZZmass}, centred on the bin where the most significant excess is seen. The uncertainties shown include both statistical and systematic components.
}
\label{tab:SelectYieldsLowMass}
\begin{tabular}{l|c|c|c||c}
\hline
Channel & $4\Pe$ & $4\Pgm$ & $2\Pe2\Pgm$ & Total \\
\hline \hline
$\cPZ\cPZ$ background & 2.7 $\pm$ 0.3 & 5.7 $\pm$ 0.6 & 7.2 $\pm$ 0.8 & 15.6 $\pm$ 1.4 \\
$\cPZ+X$ & $1.2 ^{ + 1.1}_{ - 0.8 }$ & $0.9 ^{ + 0.7 }_{ - 0.6 }$ & $2.3 ^{ + 1.8 }_{ - 1.4 }$ & $4.4 ^{ + 2.2 \phantom{^0}}_{ - 1.7\phantom{_0} }$ \\ %
\hline
All backgrounds \small{($110 < m_{4\ell} < 160$\GeV)} & $3.9 ^{ + 1.1 }_{ - 0.8 }$ & $6.6^{ + 0.9 }_{ - 0.8 }$ & $9.5 ^{ + 2.0 }_{ - 1.6 }$ & $20.0 ^{ + 3.2}_{ - 2.6}$ \\ %
\hline
Observed \small{($110 < m_{4\ell} < 160$\GeV)} & 6 & 6 & 9 & 21\\
\hline \hline
Expected signal \small{($\ensuremath{m_{\PH}} = 125$\GeV)} & 1.37 $\pm$ 0.44 & 2.75 $\pm$ 0.56 & 3.44 $\pm$ 0.81 & 7.6 $\pm$ 1.1 \\
\hline \hline
All backgrounds \small{(signal region)} & $0.71^{+0.20}_{-0.15}$ & $1.25^{+0.15}_{-0.13}$ & $1.83^{+0.36}_{-0.28}$ & $3.79^{+0.47}_{-0.45}$\\
\hline
Observed \small{(signal region)} & 1 & 3 & 5 & 9\\
\hline
\end{tabular}
\end{center}
\end{table}

The observed $m_{4\ell}$ distribution from data is shown in Fig.~\ref{fig:ZZmass}. There is a clear peak at the \cPZ\ boson mass from the decay $\cPZ \to 4\ell$~\cite{CMS:2012bw}. The size and shape of the peak are consistent with the background prediction. Over the full Higgs boson search region from 110 to 160\GeV, the reducible background from \cPZ+X events is much smaller than the irreducible $\cPZ\cPZ (\cPZ\gamma^*)$ background. There is an excess of events above the expected background near 125\GeV. The total number of observed events and the expected number of background events in the three bins centred on the excess ($121.5 < m_{4\ell} < 130.5$\GeV), referred to as the ``signal'' region, are given in Table~\ref{tab:SelectYieldsLowMass}. The expected four-lepton invariant-mass distribution for a Higgs boson with a mass of 125\GeV is shown by the open histogram in Fig.~\ref{fig:ZZmass}. The distributions of the reconstructed $\cPZ_1$ and $\cPZ_2$ dilepton invariant masses for the events in the signal region are shown in the left and right plots of Fig.~\ref{fig:Z1Z2masses}, respectively. The $\cPZ_1$ distribution has a tail towards low invariant mass, indicating that the $\cPZ$ boson with the higher dilepton mass is also often off-shell.

\begin{figure}[htbp]
\begin{center}
\includegraphics[width=0.7\linewidth]{figures/HZZ_m4l_70_180_Higgs125_3GeV.pdf}
\caption{ Distribution of the observed four-lepton invariant mass from the combined 7 and 8\TeV data for the $\PH \to \cPZ\cPZ\to 4\ell$ analysis (points). The predictions for the expected $\cPZ$+X and $\cPZ\cPZ(\cPZ\gamma^*)$ backgrounds are shown by the dark and light histograms, respectively. The open histogram gives the expected distribution for a Higgs boson of mass 125\GeV. }
\label{fig:ZZmass}
\end{center}
\end{figure}

\begin{figure}[htbp]
\begin{center}
\includegraphics[width=0.45\linewidth]{figures/HZZ_MZ1_MainThreeBins_MH125}
\includegraphics[width=0.45\linewidth]{figures/HZZ_MZ2_MainThreeBins_MH125}
\caption{ Distributions of the observed $\cPZ_1$ (left) and $\cPZ_2$ (right) dilepton invariant masses for four-lepton events in the mass range $121.5 < m_{4\ell} < 130.5$\GeV for the combined 7 and 8\TeV data (points). The shaded histograms show the predictions for the background distributions, and the open histogram for a Higgs boson with a mass of 125\GeV.
}
\label{fig:Z1Z2masses}
\end{center}
\end{figure}

The two-dimensional distribution of the kinematic discriminant $K_{D}$ versus the four-lepton reconstructed mass $m_{4\ell}$ is shown in Fig.~\ref{fig:Mass4lKD} for the individual selected events. Superimposed on this figure are the contours of the expected event density for the background (upper) and for a SM Higgs boson at $\ensuremath{m_{\PH}} = 125\GeV$ (lower).

\begin{figure}[htbp]
\begin{center}
\includegraphics[width=0.7\linewidth]{figures/HZZ_candidates_mela_background}
\includegraphics[width=0.7\linewidth]{figures/HZZ_candidates_mela_signal}
\caption{ The two-dimensional distribution of the kinematic discriminant $K_{D}$ versus $m_{4\ell}$ for selected $4\ell$ events in the combined 7 and 8\TeV data. Events in the three different final states are designated by the symbols shown in the legend. The horizontal error bars indicate the estimated per-event mass resolution deduced from the combination of the per-lepton momentum uncertainties. The contours in the upper plot show the event density for the background expectation, and in the lower plot the contours for a SM Higgs boson with $\ensuremath{m_{\PH}} = 125\GeV$ (both in arbitrary units). }
\label{fig:Mass4lKD}
\end{center}
\end{figure}

A clustering of events is observed in the region around $m_{4\ell} = 125\GeV$ with $K_{D} \ge 0.7$. The background expectation is low in this region and the signal expectation is high, corresponding to the excess of events above background seen in the one-dimensional $m_{4\ell}$ distribution. The observed distribution of the $K_{D}$ discriminant values for invariant masses in the signal range $121.5 < m_{4\ell} < 130.5\GeV$ is shown in Fig.~\ref{fig:Mass4lKD05} (left). The $m_{4\ell}$ distribution of events satisfying $K_{D} > 0.5$ is shown in Fig.~\ref{fig:Mass4lKD05} (right). The clustering of events is clearly visible near $m_{4\ell} \approx 125\GeV$.

\begin{figure}[!htb]
\begin{center}
\includegraphics[width=0.49\linewidth]{figures/HZZ_LD_lowmass_7Plus8TeV}
\includegraphics[width=0.5\linewidth]{figures/HZZ_Mass_7Plus8TeV_100-180_Mela05}
\caption{ Left: Distribution of the kinematic discriminant $K_D$ for $\PH \to \cPZ\cPZ \to 4\ell$ candidate events from the combined 7 and 8\TeV data (vertical lines) in the signal mass region $121.5 < m_{4\ell} < 130.5$\GeV. The predicted distributions for the \cPZ+X and $\cPZ\cPZ(\cPZ\gamma^*)$ backgrounds and for a Higgs boson with a mass of 125\GeV are shown by the histograms. Right: The $m_{4\ell}$ distribution for data events with $K_D > 0.5$ (points) and the predicted distributions for the backgrounds and a Higgs boson with a mass of 125\GeV (histograms). }
\label{fig:Mass4lKD05}
\end{center}
\end{figure}

\section{\texorpdfstring{$\PH\to\PW\PW$}{H to WW}\label{sec:hww2l2nu}}

The decay mode $\PH\to\PW\PW$ provides high sensitivity to a SM Higgs boson with a mass around the $\PW\PW$ threshold of 160\GeV. With the lepton identification and $\ETmiss$ reconstruction optimized for LHC pileup conditions, it is possible to extend the sensitivity down to 120\GeV. The search strategy for $\PH\to\PW\PW$ is based on the final state in which both $\PW$ bosons decay leptonically, resulting in a signature with two isolated, oppositely charged, high-$\pt$ leptons (electrons or muons) and large $\MET$ caused by the undetected neutrinos.
Although the Higgs boson mass cannot be reconstructed in this final state, there is some mass sensitivity through kinematic distributions, such as the dilepton invariant mass or the transverse mass built from the leptons and $\ETmiss$. The analysis of the 7\TeV data is described in Ref.~\cite{Chatrchyan:2012ty} and remains unchanged, while the 8\TeV analysis is modified to cope with the more difficult conditions induced by the higher pileup in the 2012 data taking, and is explained below.

\subsection{\texorpdfstring{$\PW\PW$}{WW} event selection}
\label{sec:ww_evtsel}

To improve the signal sensitivity, events are separated by jet multiplicity into three mutually exclusive categories, which are characterized by different expected signal yields and signal-to-background ratios. We call these the 0-jet, 1-jet, and 2-jet categories. Jets are reconstructed using the selection described in Section~\ref{sec:reconstruction}, and events are classified according to the number of selected jets with $\ET>30\GeV$ and $|\eta| < 4.7$. To exclude electrons and muons from the jet sample, these jets are required to be separated from the selected leptons by $\Delta R^{\mathrm{jet-lepton}}>0.3$. Events with more than two jets are considered only if there are no additional jets above this threshold in the pseudorapidity region between the two highest-$\ET$ jets. Furthermore, the search splits candidate signal events into three final states, denoted $\Pep\Pem$, $\Pgmp\Pgmm$, and $\ensuremath{\Pe^{\pm}}\ensuremath{\Pgm^{\mp}}$. The bulk of the signal arises through direct $\PW\PW$ decays to dileptons of opposite charge, where the small contribution proceeding through intermediate leptonic $\Pgt$ decays is implicitly included.

The events are selected by triggers that require the presence of one or two high-$\pt$ electrons or muons. The trigger efficiency for signal events that pass the full event selection is measured to be above 97\% in the $\Pgmp\Pgmm$ final state, and above 98\% in the $\Pep\Pem$ and $\ensuremath{\Pe^{\pm}}\ensuremath{\Pgm^{\mp}}$ final states for a Higgs boson mass of about $125\GeV$. The trigger efficiencies increase with the Higgs boson mass. These efficiencies are measured using $\ensuremath{\cPZ/\GAMMA^*{\to \ell^+\ell^-}}$ events~\cite{CMS:2011aa}, with associated uncertainties of about 1\%.

Two oppositely charged lepton candidates are required, with $\pt > 20\GeV$ for the higher-$\pt$ lepton ($\ensuremath{p_{\mathrm{T}}^{\Lep,\mathrm{max}}}$) and $\pt > 10\GeV$ for the lower-$\pt$ lepton ($\ensuremath{p_{\mathrm{T}}^{\Lep,\mathrm{min}}}$). Only electrons (muons) with $|\eta| < 2.5 \, (2.4)$ are considered in the analysis. A tight muon selection is applied, as described in Section~\ref{sec:reconstruction}. Muons are required to be isolated to distinguish between muon candidates from \PW\ boson decays and those from QCD background processes, which are usually in or near jets. For each muon candidate, the scalar sum of the transverse energies of all particles consistent with originating from the primary vertex is computed in cones of several widths around the muon direction, excluding the contribution from the muon itself. This information is combined using a multivariate algorithm that exploits the differences in the energy deposition between prompt muons and muons from hadron decays inside a jet. Electron candidates are identified using the multivariate approach described in Section~\ref{sec:reconstruction}.
Electrons are required to be isolated by applying a threshold on the sum of the transverse energies of the particles reconstructed in a cone around them, excluding the contribution from the electron itself. For both electrons and muons, a correction is applied to account for the contribution to the energy in the isolation cone from pileup, as explained in Section~\ref{sec:reconstruction}.

In addition to high-momentum, isolated leptons and minimal jet activity, missing transverse momentum is present in signal events, but generally not in the background. In this analysis, a projected~$\MET$ variable is employed. It is equal to the component of the $\MET$ vector transverse to the nearest lepton direction if the difference in azimuthal angle between this lepton and the $\MET$ vector is less than $90^\circ$; if there is no lepton within $90^\circ$ of the $\MET$ direction in azimuth, the value of $\MET$ itself is used. Since the projected~$\MET$ resolution is degraded by pileup, the minimum of two $\MET$ observables is used in the determination of the projected~$\MET$ value: the first is the standard $\MET$, while the second uses only charged particles associated with the primary vertex to measure the missing transverse energy. Events with projected~$\MET$ above 20\GeV are selected for the analysis.

To suppress the top-quark background, a \textit{top-quark tagging} technique, based on low-momentum muon identification and \cPqb-jet tagging~\cite{CMS-PAS-BTV-12-001}, is applied. The first selection is designed to veto events containing muons from \cPqb\ hadrons coming from top-quark decays. The second selection uses a \cPqb-jet tagging algorithm that looks for tracks with large impact parameter within jets. Combining the two selections, the rejection of the top-quark background is about 50\% in the 0-jet category and above 80\% for events with at least one jet passing the selection criteria.

Various selection criteria are used to reduce the other background contributions. To suppress the $\PW$+jets background, a minimum dilepton transverse momentum ($\pt^{\ell\ell}$) of 45\GeV is required. To reduce the background from $\ensuremath{\W\Z}$ production, any event that has a third lepton passing the identification and isolation requirements is rejected. This requirement rejects less than 1\% of the $\PW\PW \to 2\ell2\nu$ events, while rejecting around 35\% of the remaining $\ensuremath{\W\Z}$ events. The contribution from $\ensuremath{\PW\GAMMA}$ production, where the photon converts into an electron pair, is reduced by about 90\% in the dielectron final state by requirements that reject $\gamma$ conversions. These requirements identify tracks that, in combination with the electron, form good conversion candidates. The background from low-mass resonances is rejected by requiring a dilepton mass ($\ensuremath{m_{\Lep\Lep}}$) greater than 12\GeV.

The Drell--Yan process produces same-flavour lepton pairs ($\Pep\Pem$ and $\Pgmp\Pgmm$). In order to suppress this background, a few additional requirements are applied in the same-flavour final states. First, the resonant $\cPZ$ component of the Drell--Yan production is rejected by requiring a dilepton mass outside a 30\GeV window centred on the $\Z$ mass. Then, the remaining off-peak contribution is suppressed by exploiting different \MET-based approaches, depending on the number of jets and the Higgs boson mass hypothesis.
At large Higgs boson masses ($\ensuremath{m_{\PH}} > 140\GeV$), signal events are associated with large \MET\ and, thus, to suppress the Drell--Yan background it is sufficient to require the minimum of the two projected~$\MET$ variables to be greater than 45\GeV. In contrast, for low-mass Higgs boson events ($\ensuremath{m_{\PH}} \leq 140\GeV$) it is more difficult to separate the signal from the Drell--Yan background; therefore, in this case, a dedicated multivariate selection, combining the missing transverse momentum with kinematic and topological variables, is used to reject Drell--Yan events and maximize the signal yield. A third approach is employed in events with two jets. Here, the dominant source of \MET\ is the mismeasurement of the hadronic jet energy, and the optimal performance is obtained by requiring $\MET > 45\GeV$. Finally, the angle in the transverse plane between the dilepton system and the most energetic jet must be smaller than $165^\circ$. These selections reduce the Drell--Yan background by three orders of magnitude, while rejecting less than 50\% of the signal, as determined from simulation.

After applying the full set of selection criteria, referred to as the $\PW\PW$ selection, the observed yields in the combined 7 and 8\TeV data set are 1594, 1186, and 1295 events in the 0-jet, 1-jet, and 2-jet categories, respectively. This sample is dominated by nonresonant $\PW\PW$ events in the 0-jet category, and by similar fractions of $\PW\PW$ and top-quark events in the other two categories. The main efficiency loss is due to the lepton selection and the stringent $\MET$ requirements. Figures~\ref{fig:wwpresel_nj_mh125_deltaphill} and~\ref{fig:wwpresel_nj_mh125_massll} show the observed distributions of the azimuthal angle difference ($\ensuremath{\Delta\phi_{\Lep\Lep}}$) and the dilepton mass ($\ensuremath{m_{\Lep\Lep}}$) after the $\PW\PW$ selection, respectively, and the expected distributions for a SM Higgs boson with $\ensuremath{m_{\PH}}=125\GeV$ and for the backgrounds in the 0- and 1-jet categories. The clear difference in shape between the $\PH \to \PW\PW$ and the nonresonant $\PW\PW$ processes arises from the spin-0 nature of the Higgs boson.

\begin{figure}[h!t]
\begin{center}
\includegraphics[width=0.49\textwidth]{figures/hww_wwpresel_0j_mh125_deltaphill.pdf}
\includegraphics[width=0.49\textwidth]{figures/hww_wwpresel_1j_mh125_deltaphill.pdf}
\caption{Distributions of the azimuthal angle difference $\ensuremath{\Delta\phi_{\Lep\Lep}}$ between selected leptons in the 0-jet (\cmsLeft) and 1-jet (\cmsRight) categories, for data (points), the main backgrounds (solid histograms), and a SM Higgs boson signal with $\ensuremath{m_{\PH}}= 125\GeV$ (hatched histogram) at 8\TeV. The standard $\PW\PW$ selection is applied.}
\label{fig:wwpresel_nj_mh125_deltaphill}
\end{center}
\end{figure}

\begin{figure}[h!t]
\begin{center}
\includegraphics[width=0.49\textwidth]{figures/hww_wwpresel_0j_mh125_massll.pdf}
\includegraphics[width=0.49\textwidth]{figures/hww_wwpresel_1j_mh125_massll.pdf}
\caption{Distributions of the dilepton invariant mass $\ensuremath{m_{\Lep\Lep}}$ of selected dileptons in the 0-jet (\cmsLeft) and 1-jet (\cmsRight) categories, for data (points), the main backgrounds (solid histograms), and a SM Higgs boson with $\ensuremath{m_{\PH}}= 125\GeV$ (hatched histogram) at 8\TeV. The standard $\PW\PW$ selection is applied.
The last bin contains overflows.}
\label{fig:wwpresel_nj_mh125_massll}
\end{center}
\end{figure}

\subsection{\texorpdfstring{$\PH \to \PW\PW$}{Higgs to WW} search strategy}
\label{sec:hww}

To enhance the sensitivity for a Higgs boson signal, a cut-based approach is chosen for the final event selection. Because the kinematics of signal events change as a function of the Higgs boson mass, separate optimizations are performed for different $\ensuremath{m_{\PH}}$ hypotheses. The extra requirements, designed to optimize the sensitivity for a SM Higgs boson, are placed on $\ensuremath{p_{\mathrm{T}}^{\Lep,\mathrm{max}}}$, $\ensuremath{p_{\mathrm{T}}^{\Lep,\mathrm{min}}}$, $\ensuremath{m_{\Lep\Lep}}$, $\ensuremath{\Delta\phi_{\Lep\Lep}}$, and the transverse mass $m_\mathrm{T}$, defined as $\sqrt{2 \pt^{\ell\ell} \MET (1-\cos\ensuremath{\Delta\phi_{\met\Lep\Lep}})}$, where $\ensuremath{\Delta\phi_{\met\Lep\Lep}}$ is the difference in azimuthal angle between the $\MET$ direction and the transverse momentum of the dilepton system. The requirements, which are the same for both the 0- and 1-jet categories, are summarized in Table~\ref{tab:cuts_analysis}. The $\ensuremath{m_{\Lep\Lep}}$ distributions in the 0-jet (left) and 1-jet (right) categories for the $\Pe\mu$ candidate events are shown in Fig.~\ref{fig:hwwsel_nj_mh125_massem}, along with the predictions for the background and a SM Higgs boson with $\ensuremath{m_{\PH}}=125\GeV$.

\begin{table*}[h!t]
\begin{center}
\topcaption{Final event selection requirements for the cut-based analysis of the 0- and 1-jet event samples. The requirements for other Higgs boson mass hypotheses vary smoothly with respect to the reported values.}
{\small
\setlength{\extrarowheight}{1pt}
\begin{tabular} {l|c|c|c|c|c}
\hline
$\ensuremath{m_{\PH}}$ ($\GeVns{}$) & $\ensuremath{p_{\mathrm{T}}^{\Lep,\mathrm{max}}}$ ($\GeVns{}$) & $\ensuremath{p_{\mathrm{T}}^{\Lep,\mathrm{min}}}$ ($\GeVns{}$) & $\ensuremath{m_{\Lep\Lep}}$ ($\GeVns{}$) & $\ensuremath{\Delta\phi_{\Lep\Lep}}$ (\de) & $m_\mathrm{T}$ ($\GeVns{}$) \\
\hline \hline
125 & $>$23 & $>$10 & $<$43 & $<$100 & 80--123 \\
130 & $>$25 & $>$10 & $<$45 & $<$90 & 80--125 \\
\hline
\end{tabular}
}
\label{tab:cuts_analysis}
\end{center}
\end{table*}

\begin{figure}[h!t]
\begin{center}
\includegraphics[width=0.49\textwidth]{figures/hww_hwwsel_0j_mh125_massem.pdf}
\includegraphics[width=0.49\textwidth]{figures/hww_hwwsel_1j_mh125_massem.pdf}
\caption{Dilepton invariant mass distributions for the 0-jet (\cmsLeft) and 1-jet (\cmsRight) $\Pe\mu$ events from the 8\TeV data (points with error bars), the predictions for the various backgrounds (solid histograms), and for a SM Higgs boson with $\ensuremath{m_{\PH}}=125\GeV$ (hatched histogram). The cut-based $\PH \to \PW\PW$ selection, except for the requirement on the dilepton mass itself, is applied.}
\label{fig:hwwsel_nj_mh125_massem}
\end{center}
\end{figure}

The 2-jet category is mainly sensitive to VBF production~\cite{Ciccolini:2007jr,Ciccolini:2007ec,Arnold:2008rz,Cahn:1987}, whose cross section is roughly ten times smaller than that of gluon-gluon fusion. The VBF channel offers a different production mechanism to test the consistency of a signal with the SM Higgs boson hypothesis. The VBF signal can be extracted using simple selection criteria, especially in the relatively low-background environment of the fully leptonic $\PW\PW$ decay mode, providing additional search sensitivity.
The $\PH \to \PW\PW$ events from VBF production are characterized by two energetic forward-backward jets and very little hadronic activity in the rest of the event. Events passing the $\PW\PW$ criteria are further required to satisfy $\pt>30\GeV$ for the two highest-$\ET$ jets, with no jets above this threshold present in the pseudorapidity region between these two jets. Both leptons are required to be within the pseudorapidity region between the two jets. To reject the main background from top-quark decays, the two jets must have a pseudorapidity difference larger than 3.5 and a dijet invariant mass greater than 450\GeV. In addition, $m_\mathrm{T}$ is required to be between $30\GeV$ and the Higgs boson mass hypothesis. Finally, an $\ensuremath{m_{\PH}}$-dependent upper limit on the dilepton mass is applied.

\subsection{Background predictions}
\label{sec:backgrounds}

A combination of techniques is used to determine the contributions from the background processes that remain after the final selection. The largest background contributions are estimated directly from data, avoiding uncertainties related to the simulation of these sources. The remaining contributions estimated from simulation are small.

The $\PW$+jets and QCD multijet backgrounds arise from semileptonic decays of heavy quarks, hadrons misidentified as leptons, and electrons from photon conversions. Estimates of these contributions are derived directly from data, using a control sample of events in which one lepton passes the standard criteria and the other does not, but instead satisfies a relaxed set of requirements (``loose'' selection), resulting in a ``tight-loose'' sample. Then the efficiency, $\epsilon_\text{loose}$, for a lepton candidate that satisfies the loose selection to also pass the tight selection is determined, using data from an independent multijet event sample dominated by nonprompt leptons, and is parametrized as a function of the $\pt$ and $\eta$ of the lepton. Finally, the background contamination is estimated using the events in the ``tight-loose'' sample, weighted by \mbox{$\epsilon_\text{loose}$/$(1-\epsilon_\text{loose})$}. The systematic uncertainty in the determination of $\epsilon_\text{loose}$ dominates the overall uncertainty of this method, which is estimated to be about 36\%. The uncertainty is obtained by varying the requirements used to obtain $\epsilon_\text{loose}$, and from a closure test, where the tight-loose rate derived from simulated QCD events is applied to a simulated $\ensuremath{\PW+\text{jets}}$ sample to predict the rate of events with one real and one misidentified lepton.

The normalization of the top-quark background is estimated from data by counting the number ($N_\text{tagged}$) of top-quark-tagged events and applying the corresponding top-quark-tagging efficiency, $\epsilon_\text{top}$. This efficiency is measured with a control sample dominated by $\ttbar$ and $\PW\cPqt$ events, which is selected by requiring a \cPqb-tagged jet in the event. The number of top-quark background events in the signal region is then given by $N_\text{tagged} \times (1-\epsilon_\text{top})/\epsilon_\text{top}$. Background sources from non-top-quark events are subtracted by estimating the misidentification probability from data control samples. The main uncertainty comes from the statistical uncertainty in the \cPqb-tagged control sample and from the systematic uncertainties related to the measurement of $\epsilon_\text{top}$.
The uncertainty is about 20\% in the 0-jet category and about 5\% in the 1-jet category.

For the low-mass $\PH \to \PW\PW$ signal region, $\ensuremath{m_{\PH}} \leq 200\GeV$, the nonresonant $\PW\PW$ background prediction is estimated from data. This contribution is measured using events with a dilepton mass larger than 100\GeV, where the Higgs boson signal contamination is negligible, and the MC simulation is then used to extrapolate into the signal region. The total uncertainty is about 10\%, where the statistical uncertainty of the data control region is the largest component. For larger Higgs boson masses there is a significant overlap between the nonresonant $\PW\PW$ and the Higgs boson signal, and the simulation is used for the estimation of this background.

The $\ensuremath{\cPZ/\GAMMA^*{\to \ell^+\ell^-}}$ contribution to the $\Pep\Pem$ and $\Pgmp\Pgmm$ final states is estimated by extrapolating the observed number of events with a dilepton mass within $\pm7.5\GeV$ of the $\Z$ mass, with the residual background in that region subtracted using $\ensuremath{\Pe^{\pm}}\ensuremath{\Pgm^{\mp}}$ events. The extrapolation to the signal region is then performed using the simulation. The results are cross-checked with data, using the same algorithm and subtracting the background in the $\Z$-mass region, estimated from the number of $\ensuremath{\Pe^{\pm}}\ensuremath{\Pgm^{\mp}}$ events. The largest uncertainty in the estimate is the statistical uncertainty in the control sample, which ranges from about 20\% to 50\%.

The $\ensuremath{\mathrm{\Z}/\GAMMA^* \to\Pgt^+\Pgt^-}$ contamination is estimated using $\ensuremath{\mathrm{\Z}/\GAMMA^*\mathrm{\to \Pep\Pem}}$ and $\Pgmp\Pgmm$ events selected in data, where the leptons are replaced with simulated $\Pgt$ decays, thus providing a better description of the $\ensuremath{\mathrm{\Z}/\GAMMA^* \to\Pgt^+\Pgt^-}$ process. The \TAUOLA~\cite{TAUOLA} program is used in the simulation of the $\Pgt$ decays to account for $\tau$-polarization effects. Finally, to estimate the $\ensuremath{\PW\GAMMA}^{*}$ background contribution from asymmetric virtual-photon decays to dileptons~\cite{wgammastart}, where one lepton escapes detection, the \MADGRAPH generator~\cite{Alwall:2011uj} with dedicated cuts is used. In particular, all the leptons are required to have $\pt$ larger than 5\GeV, and the mass of each lepton is taken into account in the generation of the samples. To normalize the simulated events, a control sample of high-purity $\ensuremath{\PW\GAMMA}^{*}$ events with three reconstructed leptons, selected from data, is compared to the simulation prediction. A normalization factor of $1.6\pm0.5$ with respect to the theoretical leading-order $\ensuremath{\PW\GAMMA}^{*}$ cross section is found.

Other minor backgrounds from $\ensuremath{\W\Z}$, $\cPZ\cPZ$ (when the two selected leptons come from different boson decays), and $\ensuremath{\PW\GAMMA}$ are estimated from simulation. The $\ensuremath{\PW\GAMMA}$ background estimate is cross-checked in data using events passing all the selection requirements, except that the two leptons must have the same charge; this sample is dominated by $\PW$+jets and $\ensuremath{\PW\GAMMA}$ events. The agreement between data and the background prediction in this test is at the 20\% level.

The number of observed events and the expected number of events from all background processes after the $\PW\PW$ selection are summarized in Table~\ref{tab:wwselection_all}.
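As a purely illustrative numerical sketch of the two main data-driven extrapolations described above (the values below are assumed for illustration and are not measurements): a loose-to-tight efficiency of $\epsilon_\text{loose}=0.2$ gives a per-event weight of
\begin{equation*}
\frac{\epsilon_\text{loose}}{1-\epsilon_\text{loose}} = \frac{0.2}{0.8} = 0.25
\end{equation*}
for each ``tight-loose'' event, while a top-quark-tagged yield of $N_\text{tagged}=100$ with a tagging efficiency $\epsilon_\text{top}=0.8$ would correspond to
\begin{equation*}
N_\text{tagged}\,\frac{1-\epsilon_\text{top}}{\epsilon_\text{top}} = 100 \times \frac{0.2}{0.8} = 25
\end{equation*}
expected untagged top-quark background events in the signal region.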
The number of events observed in data and the signal and background predictions after the final selection are listed in Table~\ref{tab:hwwselection} for two Higgs boson mass hypotheses.

\begin{table*}[h!t]
\begin{center}
\topcaption{Observed number of events and background estimates for the 8\TeV data sample, after applying the $\PW\PW$ selection requirements. The uncertainties are statistical only.}
\label{tab:wwselection_all}
\footnotesize {
\begin{tabular}{l|c|c|c|c|c|c|c|c}
\hline
& $\PW\PW$ & \ttbar+$\cPqt\PW$ & $\PW$+jets & $\PW\cPZ+\cPZ\cPZ$ & $\cPZ/\gamma^*$ & $\ensuremath{\PW\GAMMA}^{(*)}$ & tot. bkg. & data \\
\hline \hline
0-jet & 1046.1 $\pm$ 7.2 & 164.2 $\pm$ 5.4 & 158.2 $\pm$ 7.1 & 32.6 $\pm$ 0.6 & 73 $\pm$ 17 & 27.1 $\pm$ 3.9 & 1501 $\pm$ 21 & 1594 \\
1-jet & 381.0 $\pm$ 4.0 & 527.3 $\pm$ 8.4 & 122.6 $\pm$ 6.7 & 30.3 $\pm$ 0.6 & 77 $\pm$ 24 & 23.7 $\pm$ 5.2 & 1162 $\pm$ 27 & 1186 \\
2-jet & 177.0 $\pm$ 2.8 & 886.5 $\pm$ 11.1 & 94.9 $\pm$ 6.4 & 20.8 $\pm$ 0.5 & 227 $\pm$ 20 & 5.6 $\pm$ 2.1 & 1412 $\pm$ 24 & 1295 \\
\hline
\end{tabular}}
\end{center}
\end{table*}

\begin{table*}[h!t]
\begin{center}
\topcaption{The signal predictions, background estimates, and numbers of events in data for two different Higgs boson mass hypotheses with the 8\TeV data set, after applying the final $\PH \to \PW\PW$ cut-based requirements, which depend on the Higgs boson mass hypothesis. The different jet categories and dilepton final states are shown separately. The combined statistical, experimental, and theoretical systematic uncertainties are given. }
\label{tab:hwwselection}
{ \footnotesize
\setlength{\extrarowheight}{1pt}
\begin{tabular} {l|c|c|c|c|c|c|c|c}
\hline
$\ensuremath{m_{\PH}}$ & $\PH \to \PW \PW$ & $\PW \PW$ & $\PW\cPZ+\cPZ\cPZ+\cPZ/\gamma^*$ & \ttbar+$\cPqt\PW$ & $\PW$+jets & $\ensuremath{\PW\GAMMA}^{(*)}$ & all bkg.
& data\\ \hline \hline \multicolumn{9}{c}{0-jet category $\Pe\mu$ final state } \\ \hline $125$ & $23.9\pm5.2$ & $87.6\pm9.5$ & $2.2\pm0.2$ & $9.3\pm2.7$ & $19.1\pm7.2$ & $6.0\pm2.3$ & $124.2\pm12.4$ & $158$ \\ $130$ & $35.3\pm7.6$ & $96.8\pm10.5$ & $2.5\pm0.3$ & $10.1\pm2.8$ & $20.7\pm7.8$ & $6.3\pm2.4$ & $136.3\pm13.6$ & $169$ \\ \hline \multicolumn{9}{c}{0-jet category $\Pe\Pe$/$\mu\mu$ final state} \\ \hline $125$ & $14.9\pm3.3$ & $60.4\pm6.7$ & $37.7\pm12.5$ & $1.9\pm0.5$ & $10.8\pm4.3$ & $4.6\pm2.5$ & $115.5\pm15.0$ & $123$ \\ $130$ & $23.5\pm5.1$ & $67.4\pm7.5$ & $41.3\pm15.9$ & $2.3\pm0.6$ & $11.0\pm4.3$ & $4.8\pm2.5$ & $126.8\pm18.3$ & $134$ \\ \hline \multicolumn{9}{c}{1-jet category $\Pe\mu$ final state} \\ \hline $125$ & $10.3\pm3.0$ & $19.5\pm3.7$ & $2.4\pm0.3$ & $22.3\pm2.0$ & $11.7\pm4.6$ & $5.9\pm3.2$ & $61.7\pm7.0$ & $54$ \\ $130$ & $15.7\pm4.7$ & $22.0\pm4.1$ & $2.6\pm0.3$ & $25.1\pm2.2$ & $12.8\pm5.1$ & $6.0\pm3.2$ & $68.5\pm7.6$ & $64$ \\ \hline \multicolumn{9}{c}{1-jet category $\Pe\Pe$/$\mu\mu$ final state} \\ \hline $125$ & $4.4\pm1.3$ & $9.7\pm1.9$ & $8.7\pm4.9$ & $9.5\pm1.1$ & $3.9\pm1.7$ & $1.3\pm1.2$ & $33.1\pm5.7$ & $43$ \\ $130$ & $7.1\pm2.2$ & $11.2\pm2.2$ & $9.1\pm5.4$ & $10.7\pm1.2$ & $3.7\pm1.7$ & $1.3\pm1.2$ & $36.0\pm6.3$ & $53$ \\ \hline \multicolumn{9}{c}{2-jet category $\Pe\mu$ final state} \\ \hline $125$ & $1.5\pm0.2$ & $0.4\pm0.1$ & $0.1\pm0.0$ & $3.4\pm1.9$ & $0.3\pm0.3$ & $0.0\pm0.0$ & $4.1\pm1.9$ & $6$ \\ $130$ & $2.5\pm0.4$ & $0.5\pm0.2$ & $0.1\pm0.0$ & $3.0\pm1.8$ & $0.3\pm0.3$ & $0.0\pm0.0$ & $3.9\pm1.9$ & $6$ \\ \hline \multicolumn{9}{c}{2-jet category $\Pe\Pe$/$\mu\mu$ final state} \\ \hline $125$ & $0.8\pm0.1$ & $0.3\pm0.1$ & $3.1\pm1.8$ & $2.0\pm1.2$ & $0.0\pm0.0$ & $0.0\pm0.0$ & $5.4\pm2.2$ & $7$ \\ $130$ & $1.3\pm0.2$ & $0.4\pm0.2$ & $3.8\pm2.2$ & $2.0\pm1.2$ & $0.0\pm0.0$ & $0.0\pm0.0$ & $6.2\pm2.5$ & $7$ \\ \hline \end{tabular} } \end{center} \end{table*} \subsection{Efficiencies and systematic uncertainties} \label{sec:systematics} The signal efficiency is estimated using simulations. All Higgs boson production mechanisms are considered: gluon-gluon fusion, associated production with a $\PW$ or $\Z$ boson (VH), and VBF processes. Residual discrepancies in the lepton reconstruction and identification efficiencies between data and simulation are corrected for by data-to-simulation scale factors measured using $\ensuremath{\cPZ/\GAMMA^*{\to \ell^+\ell^-}}$ events in the $\Z$-peak region~\cite{CMS:2011aa}, recorded with dedicated unbiased triggers. These factors depend on the lepton $\pt$ and $|\eta|$, and are typically in the range 0.9--1.0. The uncertainties on the lepton and trigger efficiencies are about 2\% per lepton leg. Experimental effects, theoretical predictions, and the choice of MC event generators are considered as sources of systematic uncertainty, and their impact on the signal efficiency is assessed. The experimental uncertainties in lepton efficiency, momentum scale and resolution, $\MET$ modelling, and jet energy scale are applied to the reconstructed objects in simulated events by smearing and scaling the relevant observables, and propagating the effects to the kinematic variables used in the analysis. The 36\% normalization uncertainty in the $\ensuremath{\PW+\text{jets}}$ background is included by varying the efficiency for misidentified leptons to pass the tight lepton selection and by comparing the results of a closure test using simulated samples. 
The relative systematic uncertainty in the signal efficiency from pileup is evaluated to be $1\%$. It is obtained by shifting the mean of the expected distribution of the number of pp collisions per beam crossing, which is used to reweight the simulation, up and down by one pp interaction. The systematic uncertainty in the integrated luminosity measurement is $4.4\%$~\cite{CMS:2012jza}.

The systematic uncertainties from theoretical input are separated into two components, which are assumed to be independent. The first component is the uncertainty in the fraction of events classified into the different jet categories and the effect of migration between categories. The second component is the uncertainty in the lepton acceptance and the selection efficiency of the other requirements. The effects of variations in the PDF, the value of $\alpha_{s}$, and the higher-order corrections are considered for both components, using the PDF4LHC prescription~\cite{Botje:2011sn,Alekhin:2011sk,Lai:2010vv,Martin:2009iq,Ball:2011mu} and the recommendations of Ref.~\cite{LHCHiggsCrossSectionWorkingGroup:2011ti}. For the jet categorization, the effects of higher-order logarithmic terms, via the uncertainty in the parton shower model and the underlying event, are also considered by comparing different generators. These uncertainties range between 10\% and 30\%, depending on the jet category. The uncertainties related to the diboson cross sections are calculated using the {\sc MCFM} program~\cite{MCFM}. The systematic uncertainty in the overall signal efficiency is estimated to be about 20\% and is dominated by the theoretical uncertainty in the missing higher-order corrections and the PDF uncertainties. The total uncertainty in the background estimates in the $\PH\to\PW\PW$ signal region is about 15\%, dominated by the statistical uncertainty in the observed number of events in the background-control regions. The interpretation of the results in terms of upper limits on the Higgs boson production cross section will be given in Section 10.

\section{\texorpdfstring{$\PH\to\Pgt\Pgt$}{H to tau tau}\label{sec:htt}}

The $\PH\to\tau\tau$ decay mode is sensitive to a SM Higgs boson with a mass below about $145\GeV$, for which the branching fraction is large. The search uses final states where the two $\tau$ leptons are identified either by their leptonic decays to an electron or a muon, or by their hadronic decays, designated as $\Pgt_h$. Four independent channels are studied: $\Pe\Pgt_h$, $\Pgm\Pgt_h$, $\Pe\Pgm$, and $\Pgm\Pgm$. In each channel, the signal is separated from the background, and in particular from the irreducible $\cPZ\to\tau\tau$ process, using the $\tau$-lepton pair invariant mass $m_{\tau\tau}$, reconstructed from the four-momenta of the visible decay products of the two $\tau$ leptons and the \MET vector, as explained in Section~\ref{sec:htt_mtautau}.

Events are classified by the number of additional jets in the final state, in order to enhance the contributions of the different Higgs boson production mechanisms. The 0- and 1-jet categories select primarily signal events with a Higgs boson produced by gluon-gluon fusion, or in association with a $\PW$ or $\cPZ$ boson that decays hadronically. These two categories are further classified according to the $\PT$ of the $\tau$-lepton decay products, because high-$\PT$ events benefit from a higher signal-to-background ratio.
Events in the VBF category are required to have two jets separated by a large rapidity interval, which preferentially selects signal events from the vector-boson fusion production mechanism and strongly enhances the signal purity.

\subsection{Trigger and inclusive event selection}

The high-level trigger requires a combination of electron, muon, and $\Pgt_h$ trigger objects~\cite{CMS-PAS-EGM-10-004,CMS-PAS-MUO-10-002,CMS-EWK-TAU}. The electron and muon HLT reconstruction is seeded by electron and muon level-1 trigger objects, respectively, while the $\Pgt_h$ trigger object reconstruction is performed entirely at the HLT stage. A specific version of the particle-flow algorithm is used in the HLT to reconstruct these objects and quantify their isolation, as done in the offline reconstruction. The identification and isolation criteria and the transverse momentum thresholds for these objects were progressively tightened as the LHC instantaneous luminosity increased over the data-taking period. In the $\Pe\Pgt_h$ and $\Pgm\Pgt_h$ channels, the trigger requires the presence of a lepton and a $\Pgt_h$, both loosely isolated with respect to the offline isolation criteria described below. In the $\Pe\Pgm$ and $\Pgm\Pgm$ channels, the lepton trigger objects are not required to be isolated.

For the $\Pe\Pgt_h$, $\Pgm\Pgt_h$, and $\Pgm\Pgm$ channels, the muon and electron trigger efficiencies are measured with respect to the offline selection in the data and the simulation using $\cPZ\to \ell\ell (\ell=\Pe,\Pgm)$ events passing a single-lepton trigger. For the $\Pe\Pgm$ channel, they are determined using $\cPZ \to \tau\tau \to \Pe \Pgm$ events passing a single-lepton trigger. The $\Pgt_h$ trigger efficiency is obtained using $\cPZ \to \tau\tau \to \Pgm \Pgt_h$ events passing a single-muon trigger. In the analysis, simulated events are weighted by the ratio between the efficiencies measured in the data and in the simulation, which are parametrized as a function of the lepton or $\Pgt_h$ transverse momentum and pseudorapidity.

To be considered in the offline event selection, electrons and muons must fulfill tight isolation criteria. The electron and muon isolation parameter $R_\text{Iso}^{\ell}$ is calculated as in Eq.~(\ref{eq:reconstruction_isolation}) using a cone size $\Delta R=0.4$, but with the following differences. The sum $\sum_\text{charged} \PT$ is computed considering all charged particles associated with the primary vertex, including other electrons and muons. The contribution of neutral pileup particles is estimated as $0.5 \sum_{\rm charged, PU} \pt$, where the sum runs over all charged hadrons from pileup interactions in the isolation cone, and where the factor 0.5 corresponds approximately to the ratio of neutral-to-charged hadron energy in the hadronization process, as estimated from simulation. Electrons and muons are required to have $R_\text{Iso}^{\ell}<0.1$. This criterion is relaxed to 0.15 in the $\Pe\Pgm$ channel for leptons in the barrel, and in the $\Pgm\Pgm$ channel for muons with $\PT<20\GeV$.

The $\tau$-isolation discriminator $R_\text{Iso}^{\tau}$ defined in Section~\ref{sec:reconstruction} is used to select loosely isolated $\Pgt_h$ candidates, so that the overall $\Pgt_h$ identification efficiency is 60--65\%, for a jet misidentification probability of 2--3\%. Finally, electrons and muons misidentified as $\Pgt_h$ are suppressed using dedicated criteria based on the consistency between the tracker, calorimeter, and muon-chamber measurements.
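To make the role of the factor 0.5 in the neutral-pileup correction concrete (the numbers here are assumed purely for illustration): if the charged hadrons from pileup vertices in the isolation cone carry $\sum_{\text{charged, PU}} \pt = 4\GeV$, the neutral pileup contribution removed from the isolation sum is estimated as
\begin{equation*}
0.5 \sum_{\text{charged, PU}} \pt = 0.5 \times 4\GeV = 2\GeV ,
\end{equation*}
reflecting the approximate neutral-to-charged energy ratio in hadronization quoted above.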
In the $\Pe\Pgt_h$ and $\Pgm\Pgt_h$ channels, we select events containing either an electron with $\pt > 20\GeV$ or a muon with $\pt > 17\GeV$, and $|\eta| < 2.1$, accompanied by an oppositely charged $\Pgt_h$ with $\pt > 20\GeV$ and $|\eta| < 2.3$. In the analysis of the 8\TeV data set, the electron and muon \pt thresholds are increased to 24 and 20\GeV, respectively, to account for the higher trigger thresholds. In these channels, events with more than one loosely identified electron or muon with $\pt > 15\GeV$ are rejected to reduce the Drell--Yan background. In the $\Pe\Pgm$ channel, we demand an electron within $|\eta|<2.3$ and an oppositely charged muon within $|\eta|<2.1$. The higher-\pt lepton must have $\pt > 20\GeV$ and the other lepton $\pt > 10\GeV$. In the $\Pgm\Pgm$ channel, the higher-\pt muon is required to have $\pt>20\GeV$ and the other muon $\pt>10\GeV$. Both muons must be within $|\eta|<2.1$.

After the event selection, the $\tau$-lepton energy is much larger than its mass, so the neutrinos produced in the $\tau$-lepton decay are nearly collinear with the visible decay products. Conversely, in $\PW+$jets events where a jet is misidentified as $\Pgt_h$, one of the main backgrounds in the $\ell\Pgt_h$ channels, the high mass of the $\PW$ results in a neutrino direction approximately opposite to the lepton in the transverse plane. In the $\Pe\Pgt_h$ and $\Pgm\Pgt_h$ channels, we therefore require the transverse mass
\begin{equation}
m_\mathrm{T} = \sqrt{2 \pt \MET (1-\cos(\Delta\phi))}
\end{equation}
to be less than 40\GeV, where \pt is the lepton transverse momentum and $\Delta\phi$ is the azimuthal angle difference between the lepton momentum and the \MET vector. In the $\Pe\Pgm$ channel, instead of an $m_\mathrm{T}$ requirement, we demand $D_{\zeta} \equiv \not\!{p_\zeta}- 0.85 \cdot p_\zeta^{\text{vis}} > -25\GeV$, where
\begin{align}
\not\!{p_{\zeta}} &=\vec p_\mathrm{T,1} \cdot \hat \zeta + \vec p_\mathrm{T,2} \cdot \hat \zeta+ \VEtmiss \cdot \hat \zeta, \\
p_{\zeta}^{\text{vis}} &= \vec{p}_\mathrm{T,1} \cdot \hat \zeta + \vec{p}_\mathrm{T,2} \cdot \hat \zeta.
\end{align}
Here, as illustrated in Fig.~\ref{fig:htt_zeta_drawing}, $\hat \zeta$ is a unit vector along the $\zeta$ axis, defined as the bisector of the lepton directions in the transverse plane~\cite{CRISTOBAL}, $\vec p_\mathrm{T,i}$ are the lepton transverse momenta, and $\VEtmiss$ is the missing transverse energy vector.

\begin{figure}[htbp]
\begin{center}
\includegraphics[width=0.35\textwidth]{figures/htt/zeta_drawing.pdf}
\end{center}
\caption{The $\zeta$ axis and the projections onto this axis of $\VEtmiss$ and the transverse momenta $\vec{p}_\mathrm{T,1}$ and $\vec{p}_\mathrm{T,2}$ of the two leptons.}
\label{fig:htt_zeta_drawing}
\end{figure}

The $D_{\zeta}$ distribution is shown in Fig.~\ref{fig:htt_control} (upper right). Requiring a large $D_{\zeta}$ rejects $\PW+$jets and \ttbar\ events, for which the \MET vector is typically oriented opposite to the direction of the two-lepton system, resulting in a small $D_{\zeta}$. Conversely, in $\PH \to \tau \tau$ or $\cPZ \to \tau \tau$ events, the neutrinos are emitted along the directions of the two $\tau$ leptons, resulting in a large $D_{\zeta}$. The factor 0.85 is introduced to bring the mean of the $D_{\zeta}$ distribution to zero for $\cPZ \to \tau \tau$ events.
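A worked example, with assumed illustrative kinematics, shows the separation provided by $D_{\zeta}$. Consider two leptons with $\pt = 30\GeV$ each, at $30^\circ$ on either side of $\hat\zeta$, so that $p_{\zeta}^{\text{vis}} = 2 \times 30\GeV \times \cos 30^\circ \approx 52\GeV$. For a $\cPZ\to\tau\tau$-like event with $\ETmiss = 30\GeV$ pointing along $+\hat\zeta$,
\begin{equation*}
\not\!{p_{\zeta}} = 52 + 30 = 82\GeV , \qquad D_{\zeta} = 82 - 0.85 \times 52 \approx +38\GeV ,
\end{equation*}
whereas for a $\PW$+jets-like event with the same $\ETmiss$ pointing along $-\hat\zeta$,
\begin{equation*}
\not\!{p_{\zeta}} = 52 - 30 = 22\GeV , \qquad D_{\zeta} = 22 - 0.85 \times 52 \approx -22\GeV ,
\end{equation*}
close to the $-25\GeV$ rejection threshold.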
In the $\Pgm\Pgm$ channel, the sample of dimuon events is largely dominated by the $\cPZ\to\Pgm\Pgm$ background, which is suppressed using a BDT discriminant combining a set of variables related to the kinematics of the dimuon system and the distance of closest approach between the two muons.

\subsection{The \texorpdfstring{$\tau\tau$}{tau tau} invariant-mass reconstruction}
\label{sec:htt_mtautau}

The invariant mass $m_\text{vis}$ of the visible decay products of the two $\tau$ leptons can be used as an estimator of the mass of a possible parent boson, in order to separate the $\PH \to \tau \tau$ signal from the irreducible $\cPZ \to \tau \tau$ background. However, the neutrinos from $\tau$-lepton decays can carry substantial energy, limiting the separation power of this estimator. An alternative approach is to reconstruct the neutrino energy using a collinear approximation~\cite{massRecoCollinearApprox}, which has the disadvantage of providing an unphysical solution for about 20\% of the events, in particular when the \MET and the parent boson \PT are small. The SVFit algorithm described below reconstructs the $\tau\tau$ invariant mass $m_{\tau\tau}$ with improved resolution and gives a physical solution for every event.

Six parameters are needed to specify a $\tau$-lepton decay to hadrons: the polar and azimuthal angles of the visible decay product system in the $\tau$-lepton rest frame, the three boost parameters from the $\tau$-lepton rest frame to the laboratory frame, and the invariant mass $m_\text{vis}$ of the visible decay products. In the case of a leptonic $\tau$-lepton decay, two neutrinos are produced, and the invariant mass of the two-neutrino system constitutes a seventh parameter. The unknown parameters are constrained by four observables, the components of the four-momentum of the system formed by the visible $\tau$-lepton decay products, measured in the laboratory frame. For each hadronic (leptonic) $\tau$-lepton decay, 2 (3) parameters are thus left unconstrained. We choose these parameters to be:
\begin{itemize}
\item $x$, the fraction of the $\tau$-lepton energy in the laboratory frame carried by the visible decay products.
\item $\phi$, the azimuthal angle of the $\tau$-lepton direction in the laboratory frame.
\item $m_{\nu\nu}$, the invariant mass of the two-neutrino system. For hadronic $\tau$-lepton decays, $m_{\nu\nu} \equiv 0$.
\end{itemize}
The two components $E_{x}^\text{miss}$ and $E_{y}^\text{miss}$ of the missing transverse energy vector provide two further constraints, albeit with an experimental resolution of 10--15\GeV on each~\cite{PFMEtSignAlgo}.

The fact that the reconstruction of the $\tau$-lepton pair decay kinematics is underconstrained by the measured observables is addressed by a maximum-likelihood fit method. The mass $m_{\tau\tau}$ is reconstructed by combining the measured observables $E_{x}^\text{miss}$ and $E_{y}^\text{miss}$ with a likelihood model that includes terms for the $\tau$-lepton decay kinematics and the \MET resolution. The model gives the probability density $f(\vec{z} \vert \vec{y}, \vec{a_1}, \vec{a_2})$ to observe the values $\vec{z} = (E_{x}^\text{miss}, E_{y}^\text{miss})$ in an event, given that the unknown parameters specifying the kinematics of the two $\tau$-lepton decays have values $\vec{a_1} = (x_{1}, \phi_{1}, m_{\nu\nu,1})$ and $\vec{a_2} = (x_{2}, \phi_{2}, m_{\nu\nu,2})$, and that the four-momenta of the visible decay products have the measured values $\vec{y} = (p^\text{vis}_{1}, p^\text{vis}_{2})$.
The likelihood model is used to compute the probability
\begin{equation}
P(m_{\tau\tau}^{i}) = \int \delta \left( m_{\tau\tau}^{i} - m_{\tau\tau}(\vec{y}, \vec{a_1}, \vec{a_2}) \right) f(\vec{z} \vert \vec{y}, \vec{a_1}, \vec{a_2})\, \rd\vec{a_1}\,\rd\vec{a_2},
\label{eq:mtautau}
\end{equation}
as a function of the mass hypothesis $m_{\tau\tau}^{i}$. The best estimate $\hat{m}_{\tau\tau}$ for $m_{\tau\tau}$ is taken to be the value of $m_{\tau\tau}^{i}$ that maximizes $P(m_{\tau\tau}^{i})$.

The probability density $f(\vec{z} \vert \vec{y}, \vec{a_1}, \vec{a_2})$ is the product of three likelihood functions. The first two model the decay parameters $\vec{a_1}$ and $\vec{a_2}$ of the two $\tau$ leptons, and the last one quantifies the consistency of a $\tau$-lepton decay hypothesis with the measured $\MET$. The likelihood functions modelling the $\tau$-lepton decay kinematics are different for leptonic and hadronic $\tau$-lepton decays. Matrix elements from Ref.~\cite{TauPol} are used to model the differential distributions in the leptonic decays,
\begin{equation}
L_{\tau,l} = \frac{\rd\Gamma}{\rd x\, \rd m_{\nu\nu}\,\rd\phi} \propto \frac{m_{\nu\nu}}{4m_{\tau}^2} \big[(m_{\tau}^2 +2m_{\nu\nu}^2 )(m_{\tau}^2 - m_{\nu\nu}^2)\big],
\label{eq:likelihoodLepTauDecay}
\end{equation}
within the physically allowed region $0 \leq x \leq 1 \mbox{ and } 0 \leq m_{\nu\nu} \leq m_{\tau}\sqrt{1-x}$. For hadronic $\tau$-lepton decays, a model based on two-body phase space~\cite{PDG} is used, treating all the $\tau$-lepton visible decay products as a single system,
\begin{equation}
L_{\tau,h} = \frac{\rd\Gamma}{\rd x\,\rd\phi} \propto \frac{1}{1- \frac{m^2_\text{vis}}{m^2_{\tau}}},
\label{eq:likelihoodHadTauDecay}
\end{equation}
within the physically allowed region $\frac{m_\text{vis}^{2}}{m_{\tau}^{2}} \leq x \leq 1$. We have verified that the two-body phase-space model is adequate for representing hadronic $\tau$-lepton decays by comparing distributions generated by a parametrized MC simulation based on this model with the detailed simulation implemented in \TAUOLA. The likelihood functions for leptonic (hadronic) $\tau$-lepton decays do not depend on the parameters $x$ and $\phi$ ($x$, $\phi$, and $m_{\nu\nu}$). The dependence on $x$ enters via the integration boundaries, and the dependence on $\phi$ comes from the \MET likelihood function.

The \MET likelihood function $L_{\mathrm{MET}}$ quantifies the compatibility of a $\tau$-lepton decay hypothesis with the reconstructed missing transverse momentum in an event, assuming the neutrinos from the $\tau$-lepton decays are the only source of \MET, and is defined as
\begin{equation}
L_{\rm MET} (E_{x}^\text{miss}, E_{y}^\text{miss}) = \frac{1}{2 \pi \sqrt{\vert V \vert}} \cdot \exp \left( -\frac{1}{2} \left( \begin{array}{c} E_{x}^\text{miss} - \sum p_{x}^{\nu} \\ E_{y}^\text{miss} - \sum p_{y}^{\nu} \end{array} \right)^{T} \cdot V^{-1} \cdot \left( \begin{array}{c} E_{x}^\text{miss} - \sum p_{x}^{\nu} \\ E_{y}^\text{miss} - \sum p_{y}^{\nu} \end{array} \right) \right).
\end{equation}
In this expression, the expected \MET resolution is represented by the covariance matrix $V$, estimated on an event-by-event basis using a \MET-significance algorithm~\cite{PFMEtSignAlgo}, and $\vert V \vert$ is the determinant of this matrix. The $m_{\tau\tau}$ resolution achieved by the SVFit algorithm is estimated from simulation to be about $20\%$.
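For comparison, the collinear approximation mentioned at the beginning of this section can be written in the same notation (the expression is quoted here only to illustrate the origin of the unphysical solutions; it is not part of the SVFit likelihood). Neglecting the visible masses, the ditau mass is estimated as
\begin{equation*}
m_{\tau\tau} \approx \frac{m_\text{vis}}{\sqrt{x_{1} x_{2}}} ,
\end{equation*}
where the momentum fractions $x_{1}$ and $x_{2}$ are obtained by solving the two \MET constraints. A mismeasured \MET can then push an $x_{i}$ outside the physical range $0 < x_{i} \leq 1$, whereas the SVFit integration in Eq.~(\ref{eq:mtautau}) is restricted to the physically allowed regions by construction.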
Figure~\ref{fig:htt_svfitperf} shows the normalized distributions of $m_\text{vis}$ and $m_{\tau\tau}$ in the $\Pgm\Pgt_h$ channel from simulated $\cPZ \to \tau \tau$ events and simulated SM Higgs boson events with $\ensuremath{m_{\PH}}=125\GeV$. The SVFit mass reconstruction provides a better separation between signal and background than $m_\text{vis}$.

\begin{figure}[htbp]
\begin{center}
\includegraphics[width=0.45\textwidth]{figures/htt/svFitPerformance_forColin_visMass.pdf}
\includegraphics[width=0.45\textwidth]{figures/htt/svFitPerformance_forColin_svFitMass.pdf} \\
\end{center}
\caption{Normalized distributions of the visible invariant mass $m_\text{vis}$ (left) and the SVFit mass $m_{\tau\tau}$ (right) obtained from MC simulation in the $\Pgm\Pgt_h$ channel for the $\cPZ\to \tau \tau$ background (solid histogram) and a SM Higgs boson signal of mass $\ensuremath{m_{\PH}}=125\GeV$ (open histogram).}
\label{fig:htt_svfitperf}
\end{figure}

\subsection{Event categories}

To further enhance the sensitivity of the search for the SM Higgs boson, the selected events are split into mutually exclusive categories based on the jet multiplicity and the transverse momentum of the visible $\tau$-lepton decay products. The jet multiplicity categories are defined using jets within $|\eta|<5$. In some cases, events are rejected if they contain a \cPqb-tagged jet, identified using the CSV algorithm described in Section~\ref{sec:reconstruction}. From simulation, the efficiency for \cPqb-jet tagging is $75$\%, with a misidentification rate of $1$\%. The event categories are:
\begin{itemize}
\item \textbf{VBF:} In this category, two jets with $\pt>30\GeV$ are required in the event. A rapidity gap is demanded by requiring that there be no third jet with $\pt>30\GeV$ between these two jets. A BDT discriminator is used to discriminate between VBF Higgs boson production and the background processes. This discriminator takes as input the invariant mass of the two jets, the differences in $\eta$ and $\phi$ between the directions of the two jets, the \pt of the $\tau\tau$ system, the \pt of the $\tau\tau$-\MET system, the \pt of the dijet system, and the difference in $\eta$ between the $\tau\tau$ system direction and the closest jet. In the $\Pe\Pgm$ channel, the large \ttbar background is suppressed by rejecting events with a \cPqb-tagged jet with $\pt>20\GeV$.
\item \textbf{1-jet:} Events in this category are required to have $\ge$1 jet with $\pt>30\GeV$, not fulfill the VBF criteria, and not contain any \cPqb-tagged jet with $\pt>20\GeV$. This category addresses the production of a high-\pt Higgs boson recoiling against a jet. Events with high-\pt Higgs bosons typically have much larger \MET and thus benefit from a more precise measurement of $m_{\tau\tau}$, owing to the improved \MET resolution. In the $\Pe\Pgt_h$ channel, the large background from $\cPZ \to \Pe\Pe$ + jets events with one electron misidentified as $\Pgt_h$ is reduced by requiring $\MET>30\GeV$.
\item \textbf{0-jet:} This category requires events to have no jet with $\pt>30\GeV$ and no \cPqb-tagged jet with $\pt>20\GeV$. In the $\Pe\Pgt_h$ channel, $\MET$ is required to be larger than 30\GeV, as in the 1-jet category.
\end{itemize}
The 0- and 1-jet categories are each further divided into two subsets, using the \PT of the visible $\tau$-lepton decay products, either hadronic or leptonic. We label these subsets ``low-\pt'' and ``high-\pt''. In the $\Pe\Pgt_h$ and $\Pgm\Pgt_h$ channels, the boundary between the two subsets is defined as $\PT(\Pgt_h)=40\GeV$.
In the $\Pe\Pgm$ and $\Pgm\Pgm$ channels, the threshold is at 35\GeV on the muon \PT and 30\GeV on the leading muon \PT, respectively. Thus, five independent categories of events are used in the SM Higgs boson search: VBF, 1-jet/high-\PT, 1-jet/low-\PT, 0-jet/high-\PT, and 0-jet/low-\PT. \subsection{Background estimation and systematic uncertainties} For each channel and each category, Table~\ref{tab:htt_numevents} shows the overall number of events observed in the 7 and 8\TeV data, as well as the corresponding number of expected events from the various background contributions, in the full $m_{\tau\tau}$ range. The expected number of events from a SM Higgs boson signal of mass $m_H=125\GeV$ is also shown. The numbers in Table~\ref{tab:htt_numevents} cannot be used to estimate the global significance of a possible signal since the expected significance varies considerably with $m_{\tau\tau}$, and the sensitive 1-jet/high-\PT category is merged with the 1-jet/low-\PT category. \begin{table}[!hp] \begin{center} \topcaption{ Observed and expected numbers of events in the four $\PH \rightarrow \tau\tau$ decay channels and the 3 event categories, for the combined 7 and 8\TeV data. The uncertainties include the statistical and systematic uncertainties added in quadrature. In the 0- and 1-jet categories, the low- and high-$\pt$ subcategories have been combined. The expected number of signal events for a SM Higgs boson of mass $m_H=125\GeV$ is also given. } \begin{tabular}{c|c|c|c} \hline Process & 0-jet & 1-jet & VBF \\ \hline \hline \multicolumn{4}{c}{$\Pe\tau_h$} \\ \hline Z$\to\tau\tau$ & $2550 \pm 200$ & $2130 \pm 170$ & $53 \pm 6$ \\ QCD & $910 \pm 70$ & $410 \pm 30$ & $35 \pm 8$ \\ W+jets & $1210 \pm 70$ & $1111 \pm 75$ & $46 \pm 10$ \\ Z+jets & $560 \pm 99$ & $194 \pm 24$ & $13 \pm 2$ \\ $\ttbar$ & $162 \pm 21$ & $108 \pm 13$ & $7 \pm 2$ \\ Dibosons & $20 \pm 5$ & $60 \pm 14$ & $1.1 \pm 0.9$ \\ \hline Total Background & $5410 \pm 270$ & $4020 \pm 220$ & $155 \pm 15$ \\ H$\to\tau\tau$ (125\GeVns{}) & $15 \pm 2$ & $26 \pm 4$ & $4.4 \pm 0.7$ \\ Data & 5273 & 3972 & 142 \\ \hline \hline \multicolumn{4}{c}{$\mu\tau_h$} \\ \hline $\cPZ\to\tau\tau$ & $50\,500 \pm 3800$ & $10\,570 \pm 830$ & $100 \pm 11$ \\ QCD & $14\,100 \pm 1600$ & $3980 \pm 510$ & $41 \pm 9$ \\ W+jets & $13\,300 \pm 1300$ & $5600 \pm 480$ & $72 \pm 15$ \\ Z+jets & $1620 \pm 230$ & $658 \pm 97$ & $2.5 \pm 0.6$ \\ $\ttbar$ & $651 \pm 82$ & $479 \pm 61$ & $15 \pm 3$ \\ Dibosons & $298 \pm 70$ & $256 \pm 58$ & $3 \pm 2$ \\ \hline Total Background & $80\,400 \pm 4500$ & $21\,500 \pm 1200$ & $234 \pm 22$ \\ H$\to\tau\tau$ (125\GeV) & $141 \pm 21$ & $86 \pm 12$ & $8 \pm 1$ \\ Data & 80\,229 & 22\,009 & 263 \\ \hline \hline \multicolumn{4}{c}{$\Pe\mu$} \\ \hline $\cPZ\to\tau\tau$ & $22\,030 \pm 850$ & $5030 \pm 230$ & $56 \pm 5$ \\ QCD & $940 \pm 200$ & $550 \pm 120$ & $7 \pm 2$ \\ $\ttbar$ & $39 \pm 3$ & $831 \pm 86$ & $24 \pm 6$ \\ Dibosons & $796 \pm 96$ & $550 \pm 120$ & $11 \pm 2$ \\ \hline Total Background & $23\,800 \pm 930$ & $6960 \pm 350$ & $99 \pm 9$ \\ H$\to\tau\tau$ (125\GeV)& $53 \pm 7$ & $35 \pm 4$ & $3.5 \pm 0.5$ \\ Data & 23\,274 & 6847 & 110 \\ \hline \multicolumn{4}{c}{$\mu\mu$} \\ \hline $\cPZ\to\tau\tau$ & $9120 \pm 490$ & $1980 \pm 120$ & $5.3 \pm 0.4$ \\ QCD & $759 \pm 53$ & $341 \pm 27$ & ${<}1$ \\ W+jets & $145 \pm 10$ & $19 \pm 1$ & ${<}1$ \\ Z$\to\mu\mu$ & $(1263 \pm 73)\times10^3$ & $(380 \pm 24)\times10^3$ & $71 \pm 10$ \\ $\ttbar$ & $2440 \pm 200$ & $1330 \pm 130$ & $7 \pm 2$ \\ Dibosons & $1500 \pm 1100$ 
& $2210 \pm 790$ & $2.4 \pm 0.9$ \\ \hline Total Background & $(1277 \pm 73)\times10^3$ & $(386 \pm 24)\times10^3$ & $85 \pm 11$ \\ $\PH\to\tau\tau$ (125\GeV) & $26 \pm 4$ & $16 \pm 2$ & $0.8 \pm 0.1$ \\ Data & 1\,291\,874 & 385\,494 & 83 \\ \hline \end{tabular} \label{tab:htt_numevents} \end{center} \end{table} The largest source of background is the Drell--Yan production of $\cPZ\to\Pgt\Pgt$. This contribution is greatly reduced by the 1-jet and VBF selection criteria, and is modelled using a data sample of $\cPZ\to\Pgm\Pgm$ events, in which the reconstructed muons are replaced by the reconstructed particles from simulated $\tau$-lepton decays, a technique called ``embedding''. The background yield is rescaled to the $\cPZ\to\Pgm\Pgm$ yield in the data before any jet selection; thus, for this dominant background, the systematic uncertainties in the efficiency of the jet-category selections and in the luminosity measurement are negligible. In the $\Pe\Pgt_h$ and $\Pgm\Pgt_h$ channels, the largest remaining systematic uncertainty affecting this background yield is in the $\Pgt_h$ selection efficiency. This uncertainty, which includes the uncertainty in the $\Pgt_h$ triggering efficiency, is estimated to be 7\% from an independent study based on a tag-and-probe technique~\cite{CMS:2011aa}. The Drell--Yan production of $\cPZ\to \ell\ell$, labelled as $\cPZ$+jets in Table~\ref{tab:htt_numevents}, is an important source of background in the $\Pe\Pgt_h$ channel, owing to the 2--3\% probability for electrons to be misidentified as $\Pgt_h$~\cite{CMS-PAS-TAU-11-001}, and the fact that the reconstructed $\tau\tau$ invariant-mass distribution peaks in the Higgs boson mass search range. The contribution of this background in the $\Pe\Pgt_h$ and $\Pgm\Pgt_h$ channels is estimated from simulation. The simulated Drell--Yan yield is rescaled to the data using $\cPZ\to\Pgm\Pgm$ events, and the efficiencies of the jet-category selections are measured in a $\cPZ\to\Pgm\Pgm$ data sample. The dominant systematic uncertainty in the background yield is from the $\ell \to \Pgt_h$ misidentification rate, which is obtained by comparing tag-and-probe measurements from $\cPZ\to \ell\ell$ events in the data and the simulation, and is 30\% for electrons and 100\% for muons. The very small probability for a muon to be misidentified as $\Pgt_h$ makes it difficult to estimate the systematic uncertainty in this probability, but also makes this background very small in the $\Pgm\Pgt_h$ channel. The background from $\PW$+jets production contributes significantly to the $\Pe\Pgt_h$ and $\Pgm\Pgt_h$ channels when the $\PW$ boson decays leptonically and one jet is misidentified as a $\Pgt_h$. The background is modelled for these channels using the simulation. The $\PW$+jets background yield is normalized to the data in a high-$m_\mathrm{T}$ control region dominated by this background, in each of the five categories. The factor for extrapolating to the low-$m_\mathrm{T}$ signal region is obtained from the simulation, and has a 30\% systematic uncertainty. In the 1-jet/high-\pt and VBF categories, where the number of simulated events is small, mass-shape templates are obtained by relaxing the $\Pgt_h$ isolation requirement, ensuring that the bias introduced in the shape is negligible.
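As a minimal illustration of this data-driven normalization, the following Python sketch (not part of the analysis software; the function name and all yields are hypothetical) scales the $\PW$+jets prediction to the data in the high-$m_\mathrm{T}$ control region and extrapolates it to the low-$m_\mathrm{T}$ signal region with a simulation-derived factor:
\begin{verbatim}
# Hypothetical sketch of the W+jets estimate described above.
def wjets_low_mt_estimate(n_data_high, n_other_high,
                          n_sim_low, n_sim_high):
    """W+jets yield expected in the low-m_T signal region."""
    n_w_high = n_data_high - n_other_high    # W+jets in data, high m_T
    extrapolation = n_sim_low / n_sim_high   # from simulation (30% syst.)
    return n_w_high * extrapolation

# Invented yields for one event category:
print(wjets_low_mt_estimate(n_data_high=5200.0, n_other_high=700.0,
                            n_sim_low=4100.0, n_sim_high=3600.0))
\end{verbatim}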
Figure~\ref{fig:htt_control}~(upper left) shows the $m_\mathrm{T}$ distribution in data and simulation, obtained in the $\Pgm\Pgt_h$ channel after the inclusive selection. In the high-$m_\mathrm{T}$ region, the agreement between the observed and expected yields follows by construction from the normalization of the $\PW$+jets prediction to the data; the agreement in shape indicates good modelling of \MET in the simulation. \begin{figure}[t!] \begin{center} \includegraphics[width=0.4\textwidth]{figures/htt/2011_muTau_mt_fix.pdf} \includegraphics[width=0.4\textwidth]{figures/htt/pzetavar_fix.pdf} \\ \includegraphics[width=0.4\textwidth]{figures/htt/nalljets_log_fix.pdf} \includegraphics[width=0.4\textwidth]{figures/htt/nbjets_log_fix.pdf} \\ \end{center} \caption{The observed distributions (points with error bars) for the (upper left) transverse mass $m_\mathrm{T}$ in the $\Pgm\Pgt_h$ channel at $\sqrt{s}=7\TeV$; (upper right) $D_\zeta = \not\! p_\zeta- 0.85 \cdot p_\zeta^{\mathrm{vis}}$, (lower left) number of jets, and (lower right) number of b-tagged jets in the $\Pe\Pgm$ channel at $\sqrt{s}=8\TeV$. The expected distributions from the various background sources are shown by the shaded histograms. In particular, the electroweak background combines the expected contributions from $\PW$+jets, $\cPZ$+jets, and diboson processes. The predictions for a SM Higgs boson with $\ensuremath{m_{\PH}}=125$\GeV are given by the dotted histograms, multiplied by a factor of 5 for clarity.} \label{fig:htt_control} \end{figure} The \ttbar production process is the main remaining background in the $\Pe\Pgm$ channel. The predicted yield for all channels is obtained from simulation, with the yield rescaled to the one observed in the data in a \ttbar-enriched control sample, selected by requiring b-tagged jets. The systematic uncertainty in the yield includes a 10\% systematic uncertainty in the b-tagging efficiency. Figure~\ref{fig:htt_control} (upper right), (lower left), and (lower right) shows the distributions of $D_\zeta$, the number of jets, and the number of b-tagged jets, respectively, in the $\Pe\Pgm$ channel. There is good agreement between the data and the background predictions at low $D_\zeta$ values in Fig.~\ref{fig:htt_control} (upper right), and at high numbers of jets in Fig.~\ref{fig:htt_control} (lower left) and (lower right), where the \ttbar process dominates. QCD multijet events, in which one jet is misidentified as $\Pgt_h$ and another as a lepton, constitute another important source of background in the $\Pe\Pgt_h$ and $\Pgm\Pgt_h$ channels. In the 0- and 1-jet categories, the QCD multijet background prediction is obtained using a control sample where the lepton and the $\Pgt_h$ are required to have the same charge. In this control sample, the QCD multijet distribution and yield are obtained by subtracting from the data the contribution of the Drell--Yan, \ttbar, and $\PW$+jets processes, estimated as explained above. The expected contribution of the QCD multijet background in the opposite-charge signal sample is then derived by rescaling the yield obtained in the same-charge control sample by a factor of 1.1, which is measured in the data using a pure QCD multijet sample obtained by inverting the lepton isolation and relaxing the $\Pgt_h$ isolation. The 10\% systematic uncertainty in this factor covers its small dependence on $\PT(\Pgt_h)$ and the statistical uncertainty in its measurement, and dominates the uncertainty in this background contribution.
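This same-charge extrapolation amounts to the simple calculation sketched below in Python (illustration only; the yields are invented, and the factor of 1.1 carries the 10\% uncertainty quoted above):
\begin{verbatim}
# Hypothetical sketch of the QCD multijet estimate in the 0- and 1-jet
# categories: subtract the non-QCD backgrounds in the same-charge (SS)
# control sample, then rescale to the opposite-charge (OS) signal sample.
OS_SS_FACTOR = 1.1  # measured in data; 10% systematic uncertainty

def qcd_os_estimate(n_data_ss, n_dy_ss, n_ttbar_ss, n_wjets_ss):
    n_qcd_ss = n_data_ss - (n_dy_ss + n_ttbar_ss + n_wjets_ss)
    return OS_SS_FACTOR * n_qcd_ss

print(qcd_os_estimate(2600.0, 300.0, 40.0, 900.0))  # ~1496 OS events
\end{verbatim}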
In the VBF category, the number of events in the same-charge control sample is too small to use this procedure. Instead, the QCD multijet yield is obtained by multiplying the inclusive QCD multijet yield by the VBF selection efficiency measured in data using a QCD-dominated sample in which the lepton and the $\Pgt_h$ are not isolated. The mass-shape template is obtained from data by relaxing the muon and $\Pgt_h$ isolation criteria. The small background from $\PW+$jets and QCD multijet events in the $\Pe\Pgm$ channel is estimated from the number of events with one identified lepton and a second lepton that passes relaxed selection criteria but fails the nominal lepton selection. This number is converted to the expected background yield using the efficiencies for such loosely identified lepton candidates to pass the nominal lepton selection. These efficiencies are measured in data using QCD multijet events. Finally, the small background contribution in each channel from diboson and single top-quark production is estimated using the simulation. The main experimental systematic uncertainties affecting the expected signal yield are from the $\Pgt_h$ identification efficiency (7\%), the \MET scale (5\%, owing to the $m_\mathrm{T}$ requirement and the \MET selection applied to the 0- and 1-jet categories of the $\Pe\Pgt_h$ channel), the integrated luminosity (5\%), and the jet energy scale ($<$4\%). The uncertainties in the muon and electron selection efficiencies, including trigger, identification, and isolation, are each 2\%. The theoretical uncertainty in the signal yield comes from the uncertainties in the PDFs, the renormalization and factorization scales, and the modelling of the underlying event and parton showers. The magnitude of the theoretical uncertainty depends on the production process (gluon-gluon fusion, VBF, or associated production) and on the event category. In particular, the scale uncertainty in the VBF production yield is 10\%. The scale uncertainty in the gluon-gluon fusion production yield is 10\% in the 1-jet/high-\pt category and 30\% in the VBF category. The $\Pgt_h$ (3\%) and electron (1\%) energy scale uncertainties affect the shape of the $m_{\tau\tau}$ spectrum, and are discussed in the next section. The muon energy scale uncertainty is negligible. \subsection{Results} The statistical methodology described in Section~\ref{sec:method} is used to search for the presence of a SM Higgs boson signal, combining the five categories of the four final states in the 7 and 8\TeV data sets as forty independent channels in a binned likelihood based on the $m_{\tau\tau}$ distributions obtained for each channel. Systematic uncertainties are represented by nuisance parameters in the likelihood. A log-normal prior is assumed for the systematic uncertainties affecting the background normalization, discussed in the previous section. The $\Pgt_h$ and electron energy scale uncertainties, which affect the shape of the $m_{\tau\tau}$ spectrum, are represented by nuisance parameters whose variation results in a continuous change of this shape~\cite{Conway-PhyStat}, as illustrated in the sketch below. Figures~\ref{fig:htt_mtt_leptau} and~\ref{fig:htt_mtt_leplep} show the observed $m_{\tau\tau}$ distributions in the $\Pe\Pgt_h$, $\Pgm\Pgt_h$, $\Pe\Pgm$, and $\Pgm\Pgm$ channels, for each event category, compared with the background predictions. The 7 and 8\TeV data sets are merged, as are the low- and high-\PT subcategories of the 0- and 1-jet categories. The binning shown in the figures corresponds to the binning used in the likelihood.
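A minimal sketch of such a shape nuisance parameter is given below (hypothetical templates; the piecewise-linear vertical interpolation shown is one common choice and not necessarily the exact morphing used in the fit):
\begin{verbatim}
import numpy as np

# Hypothetical sketch of template morphing for a shape nuisance
# parameter theta (0 = nominal, +-1 = one-standard-deviation shifts).
def morph(nominal, up, down, theta):
    if theta >= 0.0:
        shifted = nominal + theta * (up - nominal)
    else:
        shifted = nominal - theta * (down - nominal)
    return np.clip(shifted, 0.0, None)  # bin contents stay non-negative

nominal = np.array([30.0, 120.0, 80.0, 25.0])  # invented m_tautau template
up      = np.array([25.0, 110.0, 95.0, 30.0])  # energy scale +1 sigma
down    = np.array([35.0, 130.0, 65.0, 20.0])  # energy scale -1 sigma
print(morph(nominal, up, down, 0.5))           # halfway to the "up" shape
\end{verbatim}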
The background mass distributions are the result of the global maximum-likelihood fit under the background-only hypothesis. This fit finds the best set of values for the nuisance parameters to match the data, assuming no signal is present. The variation of the nuisance parameters is limited by the systematic uncertainties estimated for each of the background contributions and used as input to the fit. For example, in the VBF category of the $\Pe\Pgt_h$ channel, the most important nuisance parameters related to the background normalization are the ones affecting the $\cPZ\to\Pgt\Pgt$ yield (the $\Pgt_h$ selection efficiency), the $\cPZ\to\Pe\Pe$ yield (the $\Pe\to\Pgt_h$ misidentification rate), the $\PW$+jets yield (the extrapolation from the high-$m_\mathrm{T}$ to the low-$m_\mathrm{T}$ region), and the QCD multijet yield (the ratio between the yields in the opposite-charge and same-charge regions). The fit makes use of the high-$m_{\tau\tau}$ region of the VBF category to constrain the nuisance parameters affecting the $\PW$+jets yield. The nuisance parameter related to the $\Pgt_h$ identification efficiency is mostly constrained by the 0- and 1-jet categories, where the number of events in the $\cPZ\to\Pgt\Pgt$ peak is much larger. The same is true for the nuisance parameter related to the $\Pgt_h$ energy scale, which affects the shape of the $\cPZ\to\Pgt\Pgt$ distribution. The interpretation of the results in terms of upper limits on the Higgs boson production cross section is given in Section~\ref{sec:results}. \begin{figure}[htbp] \begin{center} \includegraphics[width=0.42\textwidth]{figures/htt/eleTau_0jet_rescaled_7and8TeV.pdf} \includegraphics[width=0.42\textwidth]{figures/htt/muTau_0jet_rescaled_7and8TeV.pdf} \\ \includegraphics[width=0.42\textwidth]{figures/htt/eleTau_boost_rescaled_7and8TeV.pdf} \includegraphics[width=0.42\textwidth]{figures/htt/muTau_boost_rescaled_7and8TeV.pdf} \\ \includegraphics[width=0.42\textwidth]{figures/htt/eleTau_vbf_rescaled_7and8TeV.pdf} \includegraphics[width=0.42\textwidth]{figures/htt/muTau_vbf_rescaled_7and8TeV.pdf} \end{center} \caption{Observed (points with error bars) and expected (histograms) $m_{\tau\tau}$ distributions for the $\Pe\Pgt_h$ (left) and $\Pgm\Pgt_h$ (right) channels, and, from top to bottom, the 0-jet, 1-jet, and VBF categories for the combined 7 and 8\TeV data sets. In the 0- and 1-jet categories, the low- and high-$\pt$ subcategories have been summed. The electroweak background combines the expected contributions from $\PW$+jets, $\cPZ$+jets, and diboson processes. In the case of $\Pe\Pgt_h$, the $\cPZ\to\Pe\Pe$ background is shown separately. The dotted histogram shows the expected distribution for a SM Higgs boson with $\ensuremath{m_{\PH}}=125$\GeV (multiplied by a factor of 5 for clarity).
} \label{fig:htt_mtt_leptau} \end{figure} \begin{figure}[htbp] \begin{center} \includegraphics[width=0.42\textwidth]{figures/htt/emu_0jet_rescaled_7and8TeV.pdf} \includegraphics[width=0.42\textwidth]{figures/htt/mumu_rescaled_0jet_LOG.pdf} \\ \includegraphics[width=0.42\textwidth]{figures/htt/emu_boost_rescaled_7and8TeV.pdf} \includegraphics[width=0.42\textwidth]{figures/htt/mumu_rescaled_1jet_LOG.pdf} \\ \includegraphics[width=0.42\textwidth]{figures/htt/emu_vbf_rescaled_7and8TeV.pdf} \includegraphics[width=0.42\textwidth]{figures/htt/mumu_rescaled_vbf.pdf} \end{center} \caption{Observed (points with error bars) and expected (histograms) $m_{\tau\tau}$ distributions for the $\Pe\mu$ (left) and $\mu\mu$ (right) channels, and, from top to bottom, the 0-jet, 1-jet, and VBF categories for the combined 7 and 8\TeV data sets. In the 0- and 1-jet categories, the low- and high-$\pt$ subcategories have been summed. The electroweak background combines the contributions from $\PW$+jets, $\cPZ$+jets, and diboson processes. In the case of $\mu\mu$, the $\cPZ\to\mu\mu$ background is shown separately. The dotted histogram shows the expected distribution for a SM Higgs boson with $\ensuremath{m_{\PH}}=125$\GeV (multiplied by a factor of 5 for clarity). } \label{fig:htt_mtt_leplep} \end{figure} \section{\texorpdfstring{$\PH\to\cPqb\cPqb$}{H to bb}\label{sec:hbb}} The decay $\PH\to\cPqb\cPqb$ has the largest branching fraction of the five search modes for $\ensuremath{m_{\PH}}\leq135$\GeV, but the signal is overwhelmed by the QCD multijet production of \cPqb\ quarks. The analysis is therefore designed to search for a dijet resonance in events where a Higgs boson is produced at high $\pt$, in association with a $\PW$ or $\cPZ$ boson that decays leptonically, which largely suppresses the QCD multijet background. The following final states are included in the search: $\PW(\mu\nu)\PH$, $\PW(\Pe\nu)\PH$, $\cPZ(\mu\mu)\PH$, $\cPZ(\Pe\Pe)\PH$, and $\cPZ(\nu\nu)\PH$, all with the Higgs boson decaying to \cPqb\cPqb. Backgrounds arise from the production of vector bosons in association with jets (from all quark flavours), singly- and pair-produced top quarks, dibosons, and QCD multijet processes. Simulated samples of signal and background events are used to optimize the analysis. Control regions in data are selected to adjust the predicted event yields from simulation for the main background processes and to estimate their contribution in the signal region. Several different high-level triggers are used to collect events consistent with the signal hypothesis in all five channels. For the WH channels, the trigger paths consist of several single-lepton triggers with tight lepton identification. Leptons are also required to be isolated from other tracks and calorimeter energy depositions to maintain an acceptable trigger rate. For the \ensuremath{\PW(\Pgm\cPgn)\PH}\ channel, in the 7\TeV data set, the trigger thresholds for the muon transverse momentum, \PT, vary from 17 to 40\GeV. The higher thresholds are implemented for periods of higher instantaneous luminosity. For the 8\TeV data set, the muon \PT threshold is 24\GeV for the isolated-muon trigger, and 40\GeV for muons without any isolation requirements. The combined single-muon trigger efficiency is $\approx$$90\%$ for signal events that pass all offline requirements, described in Section~\ref{sssec:hbb_Event_Selection}. For the \ensuremath{\PW(\Pe\cPgn)\PH}\ channel, in the 7\TeV data set, the electron \PT\ threshold ranges from 17 to 30\GeV. 
In addition, two jets and a minimum amount of missing transverse energy are required. These additional requirements help to maintain acceptable trigger rates during the periods of high instantaneous luminosity. For the 8\TeV data set, a single-isolated-electron trigger is used with a 27\GeV \PT\ threshold. The combined efficiency for these triggers for signal events that pass the final offline selection criteria is larger than 95\%. The \ensuremath{\cPZ(\Pgm\Pgm)\PH}\ channel uses the same single-muon triggers as the \ensuremath{\PW(\Pgm\cPgn)\PH}\ channel. For the \ensuremath{\cPZ(\Pe\Pe)\PH}\ channel, dielectron triggers with \PT\ thresholds of 17 and 8\GeV and tight isolation requirements are used. These triggers are $\approx 99\%$ efficient for \ensuremath{\Z\Hi}\ signal events that pass the final offline selection criteria. For the \ensuremath{\cPZ(\cPgn\cPgn)\PH}\ channel, combinations of several triggers are used, all with the requirement that the missing transverse energy be above a certain threshold. Additional jet requirements are made to keep the trigger rates acceptable as the luminosity increases and to reduce the $\ETmiss$ thresholds, in order to increase the signal acceptance. A trigger with an $\ETmiss >150$\GeV requirement is implemented for both the 7 and 8\TeV data sets. For the 7\TeV data, triggers that require the presence of two jets with $|\eta|<2.6$, \PT $>20$\GeV, and \MET thresholds of 80 and 100\GeV, depending on the instantaneous luminosity, are also used. For the 8\TeV data set, a trigger that requires two jets, each with $|\eta| < 2.6$ and \PT $> 30$\GeV, and $\MET > 80$\GeV is also implemented. As the instantaneous luminosity increased further, this trigger was replaced by one requiring $\MET > 100$\GeV, two jets with $|\eta| < 2.6$, one with \PT $>60$\GeV and the other with \PT $> 25$\GeV, the dijet \PT $> 100$\GeV, and no jet with \PT $> 40$\GeV within 0.5 radians in azimuthal angle of the \ETmiss vector. For \ensuremath{\cPZ(\cPgn\cPgn)\PH}\ signal events with missing transverse energy $>160$\GeV, the overall trigger efficiency is $\approx 98\%$ with respect to the offline event reconstruction and selection described below. The corresponding efficiency for $120< \ETmiss < 160\GeV$ is about 66\%. \subsection{Event selection}\label{sssec:hbb_Event_Selection} The final-state objects used in the $\PH\to\cPqb\cPqb$ event reconstruction are described in Section~\ref{sec:reconstruction}. Electron candidates are considered in the pseudorapidity range $\left | \eta \right | < 2.5$, excluding the $1.44 <\left | \eta \right | < 1.57$ transition region between the ECAL barrel and endcaps. Tight muon candidates are considered in the $\left | \eta \right | < 2.4$ range. An isolation requirement of approximately 10\% on $R_\text{Iso}^{\ell}$, calculated as in Eq.~(\ref{eq:reconstruction_isolation}) and consistent with the expectation for leptons originating from \PW\ and \cPZ\ boson decays, is applied to electron and muon candidates. The exact requirement depends on the lepton $\eta$, $\pt$, and flavour. To identify \cPqb\ jets, different values of the CSV output discriminant, which can range between 0 and 1, are used, with correspondingly different efficiencies and misidentification rates. For example, with a CSV $>0.90$ requirement, the efficiencies to tag b quarks, c quarks, and light quarks are 50\%, 6\%, and 0.15\%, respectively~\cite{CMS-PAS-BTV-12-001}. The corresponding efficiencies for CSV $>0.50$ are 72\%, 23\%, and 3\%.
All events from data and simulation are required to pass the same trigger and event reconstruction algorithms. Scale factors that account for differences in the performance of these algorithms between data and simulation are computed and used in the analysis. The background processes to VH production are V+jets, \ttbar, single-top-quark, diboson (VV), and QCD multijet production. These overwhelm the signal by several orders of magnitude. The event selection is based on the kinematic reconstruction of the vector boson and the Higgs boson decay into two \cPqb-tagged jets. Backgrounds are then substantially reduced by requiring a significant boost of the \pt of the vector boson and the Higgs boson~\cite{PhysRevLett.100.242001}, which tend to recoil from each other with a large azimuthal opening angle, \dphiVH, between them. For each channel, two ranges of \ensuremath{\PT(\mathrm{V})}\ are considered. These are referred to as ``low'' and ``high''. Owing to different signal and background compositions, each \ensuremath{\PT(\mathrm{V})}\ range has a different sensitivity, and the analysis is performed separately for each range. The results from all the ranges are then combined for each channel. The ranges for the WH channels are $120<\ensuremath{\PT(\mathrm{V})}<170$\GeV and $\ensuremath{\PT(\mathrm{V})}>170$\GeV, for the \ensuremath{\cPZ(\cPgn\cPgn)\PH}\ channel $120<\ensuremath{\PT(\mathrm{V})}<160$\GeV and $\ensuremath{\PT(\mathrm{V})}>160$\GeV, and for the \ZllH\ channel $50<\ensuremath{\PT(\mathrm{V})}<100$\GeV and $\ensuremath{\PT(\mathrm{V})}>100$\GeV. Candidate \WtoLN\ decays are identified by requiring the presence of a single isolated lepton and missing transverse energy. Muons (electrons) are required to have \pt\ above 20 (30)\GeV. For the \ensuremath{\PW(\Pe\cPgn)\PH}\ channel only, to reduce contamination from QCD multijet processes, \MET is required to be greater than 35\GeV. Candidate \ZtoLL\ decays are reconstructed by combining isolated, oppositely charged pairs of electrons or muons with \pt\ $> 20$\GeV and a dilepton invariant mass satisfying $75<m_{\ell\ell}<105\GeV$. The identification of \ZtoNN\ decays requires the \MET in the event to be within the \ensuremath{\PT(\mathrm{V})}\ ranges described above. Two requirements suppress events from QCD multijet processes with an $\ETmiss$ arising from mismeasured jets. First, the $\ETmiss$ vector must be isolated from jet activity, through the requirement that the azimuthal angle difference $\dphiMJ$ between the $\ETmiss$ direction and any jet with $|\eta|<2.5$ and $\pt>$20 (30)\GeV be greater than 0.5 radians for the 7 (8)\TeV data sample. Second, the azimuthal angle between the $\ETmiss$ vector calculated using only charged particles with \pt$>0.5$\GeV and $\left | \eta \right |<2.5$ and the direction of the standard $\ETmiss$ vector (calculated using all particles, charged and neutral) must be greater than 0.5 radians. With these two requirements, the background from QCD multijet processes is reduced to a negligible level in the \ensuremath{\cPZ(\cPgn\cPgn)\PH}\ channel. To reduce the \ttbar and \ensuremath{\W\Z}\ background in the \ensuremath{\W\Hi}\ and \ensuremath{\cPZ(\cPgn\cPgn)\PH}\ channels, events containing any additional isolated lepton with \pt$>20$\GeV are rejected. Reconstruction of the \HBB\ decay is done by requiring two jets above the minimum \pt thresholds listed in Table~\ref{tab:BDTsel}, having $|\eta|<2.5$, and tagged by the CSV algorithm.
If more than two such jets are found in the event, the pair with the highest total dijet transverse momentum, \ptjj, is selected. The background from V+jets and dibosons is reduced significantly through \cPqb\ tagging, and subprocesses where the two jets originate from genuine \cPqb\ quarks dominate the final selected data sample. After all the event selection criteria are applied, the invariant-mass resolution for the Higgs boson decay to \cPqb\cPqb\ is approximately 10\%, as found in a previous CMS analysis~\cite{VHbb_PLB}. The mass resolution is improved here by applying regression techniques similar to those used by the CDF experiment~\cite{1107.3026}. Through this procedure, a further correction, beyond the standard jet energy corrections, is computed for individual \cPqb\ jets in order to better measure the true parton energy. A BDT algorithm is trained on simulated \HBB\ signal events, with inputs that include detailed information about each jet that helps to differentiate \cPqb-quark jets from light-flavour jets. The resulting improvement in the \cPqb\cPqb\ invariant-mass resolution is approximately 15\%, which increases the sensitivity of the analysis by 10--20\%, depending on the specific channel. The BDT regression is implemented in the TMVA framework~\cite{Hocker:2007ht}. The complete set of input variables is (though not all variables are used for every channel): \begin{itemize} \item transverse momentum of the jet before and after energy corrections; \item transverse energy and mass of the jet after energy correction; \item uncertainty in the jet energy correction; \item transverse momentum of the highest-\PT constituent in the jet; \item pseudorapidity of the jet; \item total number of jet constituents; \item length and uncertainty of the displacement of the jet's secondary vertex; \item mass and transverse momentum of the jet's secondary vertex; \item number and fraction of jet constituents that are charged; \item event energy density, $\rho$, calculated using constituents with $\left | \eta \right | < 2.5$; \item missing transverse energy in the event; \item azimuthal angle between the missing transverse energy vector and the direction of the nearest jet in pseudorapidity. \end{itemize} To better discriminate the signal from background for different Higgs boson mass hypotheses, an event classification BDT algorithm is trained separately for each mass value using simulated samples of signal and background events that pass the selection criteria described above, together with the requirements listed in Table~\ref{tab:BDTsel}. The set of input variables used in training this BDT is chosen by iterative optimization from a larger number of potentially discriminating variables. Table~\ref{tab:BDTvars} lists these variables. The number \Naj\ of additional jets in an event counts jets that satisfy $\pt>20\GeV$ and $\left | \eta \right | < 4.5$ for \WlnH, $\pt>20\GeV$ and $\left | \eta \right | < 2.5$ for \ZllH, or $\pt>30\GeV$ and $\left | \eta \right | < 4.5$ for \ensuremath{\cPZ(\cPgn\cPgn)\PH}. The output distribution of this BDT algorithm is fitted to search for events from Higgs boson production. Fitting this distribution, rather than simply counting events in a range of the distribution with a good signal-to-background ratio, as in Ref.~\cite{VHbb_PLB}, improves the sensitivity of the analysis by approximately 20\%. \begin{table}[tbp] \topcaption{Selection criteria for the simulated event samples used in the training of the signal and background BDT algorithm.
Variables marked ``--'' are not used in the given channel. Entries in parentheses indicate the selection for the high-\ensuremath{\PT(\mathrm{V})}\ range. The second and third rows refer to the \pt\ threshold for the highest- and second-highest-\PT jet, respectively, for the pair with the highest total dijet transverse momentum, \ptjj. The parameter \Nal\ is the number of additional isolated leptons in the event. Kinematic variables are given in \GeVns and angles in radians. } \label{tab:BDTsel} \begin{center} \begin{tabular}{lccc} \hline Variable & \WlnH & \ZllH & \ensuremath{\cPZ(\cPgn\cPgn)\PH} \\ \hline \hline $m_{\ell\ell}$ & -- & $\in[75-105]$ & -- \\ $\pt(\rm{j}_1)$ & $>30$ & $>20$ & $>80$ \\ $\pt(\rm{j}_2)$ & $>30$ & $>20$ & $>30$ \\ \ptjj & $>120$ & -- & $\in[120-160]\, (>160)$ \\ \Mjj & $<250$ & $\in[80-150]$ (--) & $<250$ \\ \ensuremath{\PT(\mathrm{V})} & $\in[120-170]\, (>170)$ & $\in[50-100]\, (>100)$ & -- \\ CSV$_{\mathrm{max}}$ & $>0.40$ & $>0.50\, (>0.244)$ & $>0.679$ \\ CSV$_{\mathrm{min}}$ & $>0.40$ & $>0.244$ & $>0.244$ \\ \Nal & $=0$ & -- & $=0$ \\ \MET & $>35 (\Pe)$ & -- & $\in[120-160]\, (>160)$ \\ \dphiMJ & -- & -- & $>0.5$ \\ \dphiVH & -- & -- & $>2.0$ \\ \hline \end{tabular} \end{center} \end{table} \begin{table}[tbp] \topcaption{Variables used for training the signal and background BDT algorithm.} \label{tab:BDTvars} \begin{center} \begin{tabular}{ll} \hline Variable & Definition \\\hline\hline $p_{\mathrm{T_\mathrm{j}}}$ & transverse momentum of each \cPqb\ jet from the Higgs boson decay \\ \Mjj & dijet invariant mass \\ \ptjj & dijet transverse momentum \\ \ensuremath{\PT(\mathrm{V})} & vector boson transverse momentum \\ CSV$_{\text{max}}$ & value of CSV for the \cPqb-tagged jet with the largest CSV value \\ CSV$_{\text{min}}$ & value of CSV for the \cPqb-tagged jet with the second largest CSV value \\ \dphiVH & azimuthal angle between the vector boson (or \MET vector) and the dijet direction \\ \dEtaJJ & difference in $\eta$ between \cPqb\ jets from Higgs boson decay \\ \dRJJ & distance in $\eta$--$\phi$ between \cPqb\ jets from Higgs boson decay (not for \ZllH) \\ \Naj & number of additional jets \\ \dphiMJ & azimuthal angle between \MET and the closest jet (only for \ensuremath{\cPZ(\cPgn\cPgn)\PH}) \\ \hline \end{tabular} \end{center} \end{table} \subsection{Background control regions}\label{sssec:hbb_Background_Control_Regions} Control regions are identified in the data and used to correct the estimated yields from the MC simulation for two of the important background processes: \ttbar\ production and V+jets, originating either from light-flavour partons (u, d, s, or c quarks and gluons) or from heavy-flavour partons (b quarks). Simultaneous fits are then performed to the distributions of the discriminating variables in the control regions to obtain scale factors by which the simulation yields are adjusted. This procedure is performed separately for each channel. For the \ZllH\ and \ensuremath{\W\Hi}\ modes, the scale factors derived for the electron and muon decay channels are combined. These scale factors account not only for possible discrepancies between the simulated and measured cross sections, but also for potential differences in the selection efficiencies for the various physics objects. Therefore, separate scale factors are used for each background process in the different channels. The uncertainties in the scale factor determination include a statistical uncertainty from the fits (owing to the finite size of the samples) and an associated systematic uncertainty. The latter is estimated by refitting the distributions in the control regions after applying estimates for sources of potential systematic shifts, such as the \cPqb-jet-tagging efficiency, the jet energy scale, and the jet energy resolution.
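The structure of such a simultaneous fit is illustrated by the hypothetical Python sketch below, which extracts two scale factors from the observed counts in two control regions with different background compositions (all yields are invented, and a simple Poisson counting likelihood stands in for the full fit to the discriminating-variable distributions):
\begin{verbatim}
import numpy as np
from scipy.optimize import minimize

# Hypothetical sketch: fit ttbar and W+HF scale factors simultaneously
# to the event counts observed in two control regions.
mc = np.array([[420.0,  80.0],    # CR1: [ttbar, W+HF] expected from MC
               [ 60.0, 310.0]])   # CR2
fixed = np.array([25.0, 40.0])    # other backgrounds, kept constant
observed = np.array([530.0, 590.0])

def nll(sf):
    expected = mc @ sf + fixed
    # Poisson negative log-likelihood, with constant terms dropped
    return np.sum(expected - observed * np.log(expected))

result = minimize(nll, x0=np.ones(2), method="Nelder-Mead")
print(result.x)   # fitted scale factors, here about [0.90, 1.60]
\end{verbatim}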
Tables~\ref{tab:ZllControl}--\ref{tab:WlnControl} list the selection criteria used for the control regions in the \ZllH, \ensuremath{\cPZ(\cPgn\cPgn)\PH}, and \ensuremath{\W\Hi}\ channels, respectively. Table~\ref{tab:SFs} summarizes the fit results for all channels, separately for the 7\TeV and 8\TeV data sets. The fit results are found to be robust, and the fitted scale factors are consistent with the values from the previous analysis~\cite{VHbb_PLB}. \begin{table}[tbp] \topcaption{Definitions of the control regions for the simulated sample of Z+jets and \ttbar backgrounds in the \ZllH\ channel. The same selection is used for the low- and high-\ensuremath{\PT(\mathrm{V})}\ ranges. The values of kinematic variables are in \GeVns{}.} \label{tab:ZllControl} \begin{center} \begin{tabular}{lcc} \hline Variable & Z+jets & \ttbar \\ \hline\hline $m_{\ell\ell}$ & $\in[75-105]$ & $\notin [75-105]$ \\ $\pt(\rm{j}_1)$ & $>20$ & $>20$ \\ $\pt(\rm{j}_2)$ & $>20$ & $>20$ \\ \ensuremath{\PT(\mathrm{V})} & $\in[50-100]$ & $\in[50-100]$ \\ CSV$_{\mathrm{max}}$ & $>0.244$ & $>0.244$ \\ CSV$_{\mathrm{min}}$ & $>0.244$ & $>0.244$ \\ \Mjj & $\notin [80-150]$, $<250$ & $\notin [80-150]$, $<250$ \\ \hline \end{tabular} \end{center} \end{table} \begin{table}[tbp] \topcaption{Definitions of the control regions for the simulated samples of V+jets and \ttbar background processes in the \ensuremath{\cPZ(\cPgn\cPgn)\PH}\ channel for the low- and high-\ensuremath{\PT(\mathrm{V})}\ regions. The values in parentheses are for the high-\ensuremath{\PT(\mathrm{V})}\ region. The labels LF and HF refer to light- and heavy-flavour jets. The parameter \Nal\ is the number of additional isolated leptons in the event. The values of kinematic variables are in \GeVns{}.} \label{tab:ZnnControl} \begin{center} \scalebox{0.75}{ \begin{tabular}{lccccc} \hline Variable & Z+jets (LF) & Z+jets (HF) & \ttbar & W+jets (LF) & W+jets (HF) \\ \hline\hline $\pt(\rm{j}_1)$ & $>60 (>80)$ & $>60 (>80)$ & $>60 (>80)$ & $>60 (>80)$ & $>60 (>80)$ \\ $\pt(\rm{j}_2)$ & $>30$ & $>30$ & $>30$ & $>30$ & $>30$ \\ \ptjj & $>120 (>160)$ & $>120 (>160)$ & $>120 (>160)$ & $>120 (>160)$ & $>120 (>160)$ \\ CSV$_{\mathrm{max}}$ & -- & $>$0.898 & $>$0.898 & -- & $>$0.898 \\ \Naj & -- & -- & 1 & 0 & 0 \\ \Nal & 0 & 0 & 1 & 1 & 1 \\ \MET & $\in$[120--160] ($>160$) & $\in$[120--160] ($>160$) & $\in$[120--160] ($>160$) & $\in$[120--160] ($>160$) & $\in$[120--160] ($>160$) \\ $\Mjj$ & -- & $\notin$[90--150] & $\notin$[90--150] & -- & $\notin$[90--150] \\ \hline \end{tabular} } \end{center} \end{table} \begin{table}[tbp] \topcaption{Definitions of the control regions for the simulated samples of three background processes in the \WlnH\ channel for the low- and high-\ensuremath{\PT(\mathrm{V})}\ regions. The values in parentheses are used for the high-\ensuremath{\PT(\mathrm{V})}\ region. The labels LF and HF refer to light- and heavy-flavour jets. The parameter \Nal\ is the number of additional isolated leptons in the event, and METsig is the ratio of the $\ETmiss$ value to its uncertainty~\cite{PFMEtSignAlgo}. The values of kinematic variables are in \GeVns{}.
The symbols $\Pe$ and $\mu$ mean that the selection is used only for the \ensuremath{\PW(\Pe\cPgn)\PH}\ mode or \ensuremath{\PW(\Pgm\cPgn)\PH}\ mode, respectively.} \label{tab:WlnControl} \begin{center} \begin{tabular}{lccc} \hline Variable & W+jets (LF) & \ttbar & W+jets (HF) \\ \hline \hline $\pt(\rm{j}_1)$ &$>$30 & $>$30 & $>$30 \\ $\pt(\rm{j}_2)$ & $>$30 & $>$30 & $>$30 \\ \ptjj & $>$120 & $>$120 & $>$120 \\ \ensuremath{\PT(\mathrm{V})} & $\in[120-170]$ ($>$170) & $\in[120-170]$ ($>$170) & $\in[120-170]$ ($>$170) \\ CSV$_{\mathrm{max}}$ & -- & $>$0.898 & $>$0.898 \\ \Naj & $<$2 & $>$1 & $=$0 \\ \Nal & $=$0 & $=$0 & $=$0 \\ \MET & $>$35 ($\Pe$) & $>$35 ($\Pe$) & $>$35 ($\Pe$) \\ METsig & $>$2.0($\mu$), $>$3.0($\Pe$) & -- & -- \\ $\Mjj$ & $<$250 & $<$250 & $\notin[90-150]$ \\ \hline \end{tabular} \end{center} \end{table} \begin{table}[tbp] \topcaption{Data/MC scale factors for the control region in each Higgs boson production process with the 7\TeV and 8\TeV data sets in the low- and high-\ensuremath{\PT(\mathrm{V})}\ ranges. The uncertainties shown are statistical and systematic, respectively. The labels LF and HF refer to light- and heavy-flavour jets. } \label{tab:SFs} \begin{center} \scalebox{0.68}{ \begin{tabular}{lcccccccc} \hline Process & WH &WH & \ZllH\ &\ZllH & \ensuremath{\cPZ(\cPgn\cPgn)\PH} & \ensuremath{\cPZ(\cPgn\cPgn)\PH} \\ \hline\hline Low-\ensuremath{\PT(\mathrm{V})} & 7\TeV & 8\TeV & 7\TeV & 8\TeV & 7\TeV & 8\TeV \\ \hline W+jets (LF) & $0.88\pm 0.01\pm 0.03$ & $0.97\pm 0.01 \pm 0.03$ & -- & -- & $ 0.89 \pm 0.01 \pm 0.03 $ & $0.91 \pm 0.03 \pm 0.03 $ \\ W+jets (HF) & $1.91\pm 0.14\pm 0.31$ & $2.05\pm 0.21\pm 0.33$ & -- & -- & $ 1.36 \pm 0.10 \pm 0.15 $ & $1.63 \pm 0.29 \pm 0.14 $\\ Z+jets (LF) & -- & -- & $1.11 \pm 0.03\pm 0.11$ & $1.41\pm 0.03\pm 0.16$ & $ 0.87 \pm 0.01 \pm 0.03 $ & $1.01 \pm 0.05 \pm 0.03 $ \\ Z+jets (HF) & -- & -- & $0.98 \pm 0.05\pm 0.12$ & $1.04\pm 0.05\pm 0.20$ & $ 0.96 \pm 0.02 \pm 0.03 $ & $ 1.00 \pm 0.10 \pm 0.04 $ \\ \ttbar & $0.93\pm 0.02\pm 0.05$ & $1.12\pm 0.01\pm 0.06$ & $1.03 \pm 0.04\pm 0.11$ & $1.06\pm 0.03\pm 0.11$ & $ 0.97 \pm 0.02 \pm 0.04 $ & $ 1.02 \pm 0.03 \pm 0.03 $ \\ \hline\hline High-\ensuremath{\PT(\mathrm{V})} & 7\TeV & 8\TeV & 7\TeV & 8\TeV & 7\TeV & 8\TeV \\ \hline W+jets (LF) & $0.79\pm 0.01\pm 0.02$ & $0.88\pm 0.01\pm 0.02$ & -- & -- & $ 0.78 \pm 0.02 \pm 0.03 $ & $ 0.86 \pm 0.03 \pm 0.03 $\\ W+jets (HF) & $1.49\pm 0.14\pm 0.19$ & $1.30\pm 0.20\pm 0.17$ & -- & -- & $ 1.48 \pm 0.15 \pm 0.20 $ & $ 1.43 \pm 0.28 \pm 0.18 $ \\ Z+jets (LF) & -- & -- & $1.11 \pm 0.03\pm 0.11$ & $1.41\pm 0.03\pm 0.16$ & $ 0.97 \pm 0.02 \pm 0.04 $ & $ 1.01 \pm 0.04 \pm 0.04 $ \\ Z+jets (HF) & -- & -- & $0.98 \pm 0.05\pm 0.12$ & $1.03\pm 0.05\pm 0.20$ & $ 1.08 \pm 0.09 \pm 0.06 $ & $ 1.06 \pm 0.06 \pm 0.07 $ \\ \ttbar & $0.84\pm 0.02\pm 0.03$ & $0.97\pm 0.02\pm 0.03$ & $1.03 \pm 0.04\pm 0.11$ & $1.06\pm 0.03\pm 0.11$ & $ 0.97 \pm 0.02 \pm 0.04 $ & $ 1.03 \pm 0.04 \pm 0.04 $ \\ \hline \end{tabular} } \end{center} \end{table} \subsection{Systematic uncertainties}\label{sssec:hbb_Uncertainties} Sources of systematic uncertainty in the expected signal and background yields and distribution shapes are listed in Table~\ref{tab:syst}. The uncertainty in the integrated luminosity measurement is $2.2\%$ for the 7\TeV data~\cite{CMS-PAS-EWK-11-001} and $4.4\%$ for the 8\TeV data~\cite{CMS:2012jza}. Muon and electron trigger, reconstruction, and identification efficiencies are determined in data from samples of leptonic Z boson decays. 
The uncertainty in the yields due to the trigger efficiency is 2\% per charged lepton, and the uncertainty in the identification efficiency is also 2\% per lepton. The parameters describing the \ensuremath{\cPZ(\cPgn\cPgn)\PH}\ trigger efficiency turn-on curve are varied within their statistical uncertainties and for different assumptions on the methodology; a 2\% systematic uncertainty in the yield is estimated. The jet energy scale is varied by $\pm$1 standard deviation as a function of the jet $\pt$ and $\eta$, and the efficiency of the analysis selection is recomputed. A 2--3\% yield variation is found, depending on the particular decay channel and production process. The effect of the uncertainty in the jet energy resolution is evaluated by smearing the jet energies by the measured uncertainty, giving a 3--6\% variation in yields. The uncertainties in the jet energy scale and resolution also affect the shape of the BDT output distribution. The impact of the jet energy scale uncertainty is determined by recomputing the BDT distribution after shifting the energy scale up and down by its uncertainty. Similarly, the impact of the jet energy resolution is determined by recomputing the BDT distribution after increasing or reducing the jet energy resolution. Data-to-simulation \cPqb-tagging-efficiency scale factors, measured in \ttbar events and multijet events, are applied to the jets in signal and background events. The estimated systematic uncertainties in the \cPqb-tagging scale factors are: $6\%$ per \cPqb\ tag, $12\%$ per c tag, and $15\%$ per mistagged jet (originating from gluons and light quarks)~\cite{CMS-PAS-BTV-12-001}. These translate into yield uncertainties in the 3--15\% range, depending on the channel and the production process. The shape of the BDT output distribution is also affected by the shape of the CSV distribution, and is therefore recomputed according to the range of variations of the CSV distribution. The theoretical VH signal cross section is calculated to NNLO, and the systematic uncertainty is $4\%$~\cite{LHCHiggsCrossSectionWorkingGroup:2011ti}, including the effects of scale and PDF variations~\cite{Botje:2011sn,Alekhin:2011sk,Lai:2010vv,Martin:2009iq,Ball:2011mu}. The analysis described in this paper is performed in the regime where the V and H have a significant boost in $\pt$, and thus potential differences in the \pt\ spectrum of the V and H between the data and the MC generators could introduce systematic effects in the estimates of the signal acceptance and efficiency. Theoretical calculations are available that estimate the NLO electroweak (EW)~\cite{HAWK1,HAWK2,Denner:2011id} and NNLO QCD~\cite{Ferrera:2011bk} corrections to VH production in the boosted regime. The estimated effects of the electroweak corrections for a boost of $\approx$$150\GeV$ are $5\%$ for ZH and $10\%$ for WH. For the QCD correction, a $10\%$ uncertainty is estimated for both ZH and WH, which includes effects due to additional jet activity from initial- and final-state radiation. The finite size of the signal MC simulation samples, after all selection criteria are applied, contributes an uncertainty of 1--5\% in the various channels. The total uncertainty in the prediction of the background yields from estimates using data is approximately 10\%. For the V+jets background, the differences in the BDT output distribution between events generated with \MADGRAPH and with \HERWIG{++} are considered.
For the single-top-quark and diboson yield predictions, which are obtained solely from simulation, a $30\%$ systematic uncertainty in the cross sections is used. \begin{table}[tbp] \topcaption{Systematic uncertainties in the predicted signal and background yields from the sources listed. The ranges give the variations over the 7 and 8\TeV data sets, different search channels, specific processes, and Higgs boson mass hypotheses. The acronym EWK stands for electroweak.} \label{tab:syst} \begin{center} \scalebox{0.90}{ \begin{tabular}{lc} \hline Source & Range (\%) \\ \hline\hline Integrated luminosity & 2.2--4.4 \\ Lepton identification and trigger efficiency (per lepton) & 3 \\ \ensuremath{\cPZ(\cPgn\cPgn)\PH}\ triggers & 2 \\ Jet energy scale & 2--3 \\ Jet energy resolution & 3--6 \\ Missing transverse energy & 3 \\ \cPqb-tagging efficiency & 3--15 \\ Signal cross section (scale and PDF) & 4 \\ Signal cross section (\pt boost, EWK/QCD) & 5--10/10 \\ Statistical precision of signal simulation & 1--5 \\ Backgrounds estimated from data & 10 \\ Backgrounds estimated from simulation & 30 \\ \hline \end{tabular} } \end{center} \end{table} \subsection{Results}\label{sssec:Results} Maximum-likelihood fits are performed to the output distributions of the BDT algorithms, trained separately for each channel and each Higgs boson mass hypothesis in the 110--135\GeV range. In the fit, the BDT shapes and normalizations, for the signal and each background component, are allowed to vary within the systematic and statistical uncertainties described in Section~\ref{sssec:hbb_Uncertainties}. These uncertainties are treated as nuisance parameters, with appropriate correlations taken into account. Tables~\ref{tab:LoPtBDTYields}--\ref{tab:HiPtBDTYields8TeV} summarize the expected signal and background yields for both \ensuremath{\PT(\mathrm{V})}\ ranges in each channel from the 7\TeV and 8\TeV data. All the data/MC scale factors determined in Section~\ref{sssec:hbb_Background_Control_Regions} have been applied to the corresponding background yields. Examples of output BDT distributions, for the $\ensuremath{m_{\PH}}=125$\GeV training and for the high-\ensuremath{\PT(\mathrm{V})}\ range, are shown in Fig.~\ref{fig:Hbb_figs}. The signal and background shapes and normalizations are those returned by the fits. Figure~\ref{fig:Hbb_figs} also shows the dijet invariant-mass distribution for the combination of all five channels in the combined 7 and 8\TeV data sets, using an event selection that is more restrictive than the one used in the BDT analysis and that is more suitable for a counting experiment in just this observable. The events considered are those in the high-\ensuremath{\PT(\mathrm{V})}\ range with tighter b-tagging requirements on both jets, and with the requirements that there be no additional jets in the events and that the azimuthal opening angle between the dijet system and the reconstructed vector boson be large. The $\PH\to\cPqb\cPqb$ search with such a selection is significantly less sensitive than the search using the BDT discriminant, and is therefore not elaborated on further in this article. The interpretation of the results from the BDT discriminant analysis, in terms of upper limits on the Higgs boson production cross section, is given in Section~\ref{sec:results}. \begin{table}[tbp] \topcaption{Predicted signal and background yields (statistical uncertainty only) in the BDT output distribution for the low-\ensuremath{\PT(\mathrm{V})}\ range with the 7\TeV data for each of the five channels.
The labels LF and HF refer to light- and heavy-flavour jets. The numbers in parentheses refer to the Higgs boson mass hypothesis in \GeVns{}.} \label{tab:LoPtBDTYields} \begin{center} \scalebox{0.80}{ \begin{tabular}{lcccccc} \hline Process & $\ensuremath{\cPZ(\Pgm\Pgm)\PH}$ & $\ensuremath{\cPZ(\Pe\Pe)\PH}$ & $\ensuremath{\cPZ(\cPgn\cPgn)\PH}$ & $\ensuremath{\PW(\Pgm\cPgn)\PH}$ & $\ensuremath{\PW(\Pe\cPgn)\PH}$ \\\hline\hline Z+jets (LF) & $176 \pm 14$ & $255 \pm 18$ & $ 158.3 \pm 6.1 $ & $11.0 \pm 1.5$ & $1.87 \pm 0.56$ \\ Z+jets (HF) & $235 \pm 16$ & $225 \pm 16$ & $ 254.9 \pm 5.5 $ & $23.2 \pm 2.1$ & $2.71 \pm 0.68$ \\ W+jets (LF) & -- & -- & $ 133.1 \pm 8.1 $ & $124.6 \pm 4.6$ & $58.5 \pm 3.1$ \\ W+jets (HF) & -- & -- & $ 171.85 \pm 7.1 $ & $248.3 \pm 9.5$ & $135.3 \pm 7.0$ \\ \ttbar & $74.2 \pm 1.9$ & $64.3 \pm 1.7$ & $ 898.5 \pm 5.2 $ & $894.6 \pm 4.1$ & $575.5 \pm 3.3$ \\ Single Top & $3.73 \pm 0.72$ & $2.67 \pm 0.64$ & $ 98.5 \pm 5.9 $ & $123.1 \pm 3.0$ & $67.7 \pm 2.2$ \\ VV & $10.77 \pm 0.53$ & $10.07 \pm 0.55$ & $ 33.5 \pm 1.5 $ & $15.10 \pm 0.72$ & $7.89 \pm 0.54$ \\\hline ZH(110) & $2.72 \pm 0.03$ & $2.19 \pm 0.03$ & $6.19\pm 0.05$ & $0.28\pm 0.02$ & $0.08\pm 0.01$ \\ WH(110) & -- & -- & $3.19\pm 0.04$ & $4.98\pm 0.08$ & $2.96\pm 0.06$ \\ ZH(115) & $2.34 \pm 0.03$ & $1.88 \pm 0.03 $ & $4.52\pm 0.05$ & $0.21\pm 0.01$ & $0.07\pm 0.01$ \\ WH(115) & -- & -- & $2.36\pm 0.03$ & $4.57\pm 0.07$ & $2.58\pm 0.05$ \\ ZH(120) & $1.93 \pm 0.02$ & $1.56 \pm 0.02$ & $4.10\pm 0.04$ & $0.19\pm 0.01$ & $0.07\pm 0.01$ \\ WH(120) & -- & -- & $2.15\pm 0.04$ & $3.90\pm 0.05$ & $2.17\pm 0.04$ \\ ZH(125) & $1.52 \pm 0.02$ & $1.23 \pm 0.02$ & $3.67\pm 0.04$ & $0.18\pm 0.01$ & $0.06\pm 0.01$ \\ WH(125) & -- & -- & $1.94\pm 0.04$ & $3.19\pm 0.04$ & $1.90\pm 0.03$ \\ ZH(130) & $1.19 \pm 0.01$ & $0.95 \pm 0.01$ & $2.81\pm 0.04$ & $0.15\pm 0.01$ & $0.05\pm 0.01$ \\ WH(130) & -- & -- & $1.25\pm 0.03$ & $2.56\pm 0.04$ & $1.50\pm 0.03$ \\ ZH(135) & $0.83 \pm 0.01$ & $0.67 \pm 0.01$ & $2.10\pm 0.02$ & $0.11\pm 0.01$ & $0.03\pm 0.01$ \\ WH(135) & -- & -- & $0.87\pm 0.02$ & $1.92\pm 0.02$ & $1.13\pm 0.02$ \\ \hline Sum &$500 \pm 22$ &$558 \pm 24$ & $1749 \pm 16$ & $1440\pm 12$ & $850\pm 9$ \\\hline Data & $ 493 $ & $512$ & $1793$ & $1411$ & $925$ \\\hline \end{tabular} } \end{center} \end{table} \begin{table}[tbp] \topcaption{Predicted signal and background yields (statistical uncertainty only) in the BDT output distribution for the high-\ensuremath{\PT(\mathrm{V})}\ range with the 7\TeV data for each of the five channels. The labels LF and HF refer to light- and heavy-flavour jets. 
The numbers in parentheses refer to the Higgs boson mass hypothesis in \GeVns{}.} \label{tab:HiPtBDTYields} \begin{center} \scalebox{0.80}{ \begin{tabular}{lcccccc} \hline Process & $\ensuremath{\cPZ(\Pgm\Pgm)\PH}$ & $\ensuremath{\cPZ(\Pe\Pe)\PH}$ & $\ensuremath{\cPZ(\cPgn\cPgn)\PH}$ & $\ensuremath{\PW(\Pgm\cPgn)\PH}$ & $\ensuremath{\PW(\Pe\cPgn)\PH}$ \\\hline\hline Z+jets (LF) & $291 \pm 15$ & $275 \pm 15$ & $ 107.7 \pm 3.1 $ & $3.47 \pm 0.79$ & $1.63 \pm 0.52$ \\ Z+jets (HF) & $180 \pm 11$ & $160 \pm 10$ & $ 117.0 \pm 4.6 $ & $6.7 \pm 1.2$ & $2.13 \pm 0.59$ \\ W+jets (LF) & -- & -- & $ 81.4 \pm 3.8$ & $61.9 \pm 3.0$ & $41.4 \pm 2.5$ \\ W+jets (HF) & -- & -- & $ 171.7 \pm 5.9 $ & $129.5 \pm 6.1$ & $67.8 \pm 4.4$ \\ \ttbar & $41.7 \pm 1.4$ & $39.4 \pm 1.3$ & $ 275.7 \pm 3.0 $ & $302.4 \pm 2.3$ & $225.0 \pm 2.0$ \\ Single Top & $1.49 \pm 0.45$ & $3.44 \pm 0.71 $ & $ 37.9 \pm 3.4 $ & $60.8 \pm 2.1$ & $41.6 \pm 1.7$ \\ VV & $14.02 \pm 0.67$ & $11.68 \pm 0.60$ & $ 24.6 \pm 2.8 $ & $9.71 \pm 0.58$ & $6.28 \pm 0.47$ \\\hline ZH(110) & $3.19 \pm 0.04 $ & $2.69 \pm 0.03$ & $5.75\pm 0.04$ & $0.14\pm 0.01$ & $0.07\pm 0.01$ \\ WH(110) & -- & -- & $1.88\pm 0.06$ & $4.39\pm 0.07$ & $3.18\pm 0.06$ \\ ZH(115) & $2.78 \pm 0.03$ & $2.37 \pm 0.027$ & $5.87\pm 0.05$ & $0.08\pm 0.01$ & $0.04\pm 0.01$ \\ WH(115) & -- & -- & $1.71\pm 0.05$ & $3.93\pm 0.06$ & $2.82\pm 0.05$ \\ ZH(120) & $2.41 \pm 0.02$ & $2.09 \pm 0.023$ & $5.15\pm 0.04$ & $0.10\pm 0.01$ & $0.06\pm 0.01$ \\ WH(120) & -- & -- & $1.42\pm 0.04$ & $3.57\pm 0.05$ & $2.51\pm 0.04$ \\ ZH(125) & $1.99 \pm 0.02$ & $1.67 \pm 0.02$ & $4.46\pm 0.04$ & $0.08\pm 0.01$ & $0.04\pm 0.01$ \\ WH(125) & -- & -- & $1.15\pm 0.03$ & $3.04\pm 0.04$ & $2.14\pm 0.04$ \\ ZH(130) & $1.58 \pm 0.02$ & $1.37 \pm 0.01$ & $3.54\pm 0.03$ & $0.06\pm 0.01$ & $0.04\pm 0.01$ \\ WH(130) & -- & -- & $0.70\pm 0.02$ & $2.51\pm 0.04$ & $1.83\pm 0.03$ \\ ZH(135) & $1.24 \pm 0.01$ & $1.03 \pm 0.01$ & $2.76\pm 0.02$ & $0.05\pm 0.01$ & $0.03\pm 0.01$ \\ WH(135) & -- & -- & $0.77\pm 0.02$ & $1.94\pm 0.03$ & $1.39\pm 0.02$ \\ \hline Sum & $529 \pm 19$ & $490 \pm 18$ & $816 \pm 10$ & $575\pm 6$ & $386\pm 6$ \\\hline Data & $565$ & $491$ & $783$ & $551$& $383$ \\\hline \end{tabular} } \end{center} \end{table} \begin{table}[tbp] \topcaption{Predicted signal and background yields (statistical uncertainty only) in the BDT output distribution for the low-\ensuremath{\PT(\mathrm{V})}\ range with the 8\TeV data for each of the five channels. The labels LF and HF refer to light- and heavy-flavour jets. 
The numbers in parentheses refer to the Higgs boson mass hypothesis in \GeVns{}.} \begin{center} \scalebox{0.80}{ \begin{tabular}{lcccccc} \hline Process & $\ensuremath{\cPZ(\Pgm\Pgm)\PH}$ & $\ensuremath{\cPZ(\Pe\Pe)\PH}$ & $\ensuremath{\cPZ(\cPgn\cPgn)\PH}$ & $\ensuremath{\PW(\Pgm\cPgn)\PH}$ & $\ensuremath{\PW(\Pe\cPgn)\PH}$ \\\hline\hline Z+jets (LF) & $296 \pm 20$ & $254 \pm 23$ & $156.3 \pm 2.6$ & $13.7 \pm 2.5$ & $6.7 \pm 1.9$ \\ Z+jets (HF) & $250 \pm 15$ & $228 \pm 17$ & $355.1 \pm 4.7$ & $21.7 \pm 2.9$ & $8.5 \pm 2.0$ \\ W+jets (LF) & -- & -- & $202.6\pm 3.1$ & $92.8 \pm 6.9$ & $58.2\pm 5.7$ \\ W+jets (HF) & -- & -- & $384.6 \pm 5.0$ & $177.7 \pm 14.1$ & $102.4 \pm 10.7$ \\ \ttbar & $86.3 \pm 3.7$ & $75.7 \pm 3.6$ & $1573 \pm 29$ & $1308 \pm 15$ & $970 \pm 13$ \\ Single Top & $5.4 \pm 1.9$ & $2.45 \pm 0.82$ & $102.2 \pm 2.2$ & $64.3 \pm 5.2$ & $49.6 \pm 4.8$ \\ VV & $13.7 \pm 1.1$ & $12.4 \pm 1.0$ & $48.3 \pm 2.5$ & $19.0 \pm 1.8$ & $13.0 \pm 1.8$ \\\hline ZH(110) & $2.83 \pm 0.06$ & $2.21 \pm 0.05$ & $7.78\pm 0.02$ & $0.31 \pm 0.02$ & $0.14 \pm 0.01$ \\ WH(110) & -- & -- & $1.14\pm 0.02$ & $4.87 \pm 0.17$ & $3.39\pm 0.14$ \\ ZH(115) & $2.37 \pm 0.05$ & $1.89 \pm 0.04$ & $6.64\pm 0.02$ & $0.25 \pm 0.01$ & $0.11 \pm 0.01$ \\ WH(115) & -- & -- & $1.11\pm 0.02$ & $4.73 \pm 0.15$ & $3.28 \pm 0.13$ \\ ZH(120) & $1.92 \pm 0.04$ & $1.54 \pm 0.03$ & $5.78\pm 0.04$ & $0.23 \pm 0.01$ & $0.10 \pm 0.01$ \\ WH(120) & -- & -- & $1.07\pm 0.02$ & $3.79 \pm 0.12$ & $2.59 \pm 0.10$ \\ ZH(125) & $1.52 \pm 0.03$ & $1.24 \pm 0.03$ & $4.39\pm 0.02$ & $0.18 \pm 0.01$ & $0.08 \pm 0.01$ \\ WH(125) & -- & -- & $0.95\pm 0.03$ & $3.19 \pm 0.10$ & $2.41 \pm 0.09$ \\ ZH(130) & $1.15 \pm 0.02$ & $0.92 \pm 0.02$ & $3.37\pm 0.04$ & $0.15 \pm 0.01$ & $0.05 \pm 0.01$ \\ WH(130) & -- & -- & $0.79\pm 0.03$ & $2.61 \pm 0.09$ & $1.85 \pm 0.08$ \\ ZH(135) & $0.83 \pm 0.02$ & $0.65 \pm 0.02$ & $2.31\pm 0.03$ & $0.11 \pm 0.01$ & $0.04 \pm 0.01$ \\ WH(135) & -- & -- & $0.61\pm 0.02$ & $1.85 \pm 0.06$ & $1.40 \pm 0.05$ \\ \hline Sum & $651 \pm 26$ & $572 \pm 29$ & $2822 \pm 30$ & $1697 \pm 22$ & $1208 \pm 19$ \\\hline Data & $707$ & $547$ & $2804$ & $1727$ & $1289$ \\ \hline \end{tabular} } \end{center} \end{table} \begin{table}[tbp] \topcaption{Predicted signal and background yields (statistical uncertainty only) in the BDT output distribution for the high-\ensuremath{\PT(\mathrm{V})}\ range with the 8\TeV data for each of the five channels. The labels LF and HF refer to light- and heavy-flavour jets.
The numbers in parentheses refer to the Higgs boson mass hypothesis in \GeVns{}.} \label{tab:HiPtBDTYields8TeV} \begin{center} \scalebox{0.80}{ \begin{tabular}{lcccccc} \hline Process & $\ensuremath{\cPZ(\Pgm\Pgm)\PH}$ & $\ensuremath{\cPZ(\Pe\Pe)\PH}$ & $\ensuremath{\cPZ(\cPgn\cPgn)\PH}$ & $\ensuremath{\PW(\Pgm\cPgn)\PH}$ & $\ensuremath{\PW(\Pe\cPgn)\PH}$ \\\hline\hline Z+jets (LF) & $426 \pm 17$ & $353 \pm 16$ & $ 109.6 \pm 3.0 $ & $4.1 \pm 1.2$ & $1.33 \pm 0.41$\\ Z+jets (HF) & $238 \pm 11$ & $199 \pm 10$ & $ 182.0 \pm 3.6 $ & $6.3 \pm 1.4$ & $3.17 \pm 0.99$ \\ W+jets (LF) & -- & -- & $ 79.0 \pm 2.8$ & $42.8 \pm 4.8$ & $32.2 \pm 4.4$ \\ W+jets (HF) & -- & -- & $ 97.4 \pm 4.9 $ & $64.4 \pm 7.1$ & $45.7 \pm 5.9$ \\ \ttbar & $55.0 \pm 3.0$ & $48.0 \pm 2.8$ & $ 488 \pm 16 $ & $458.8 \pm 8.1$ & $361.8 \pm 7.4$ \\ Single Top & $4.5 \pm 1.5$ & $5.9 \pm 2.2$ & $ 43.2 \pm 2.4 $ & $35.6 \pm 4.4$ & $28.6 \pm 4.1$ \\ VV & $16.5 \pm 1.3$ & $13.4 \pm 1.2$ & $ 34.8 \pm 1.8 $ & $16.1 \pm 1.7$ & $9.0 \pm 1.2$ \\\hline ZH(110) & $3.66 \pm 0.06$ & $2.95 \pm 0.06$ & $8.05\pm 0.07$ & $0.14 \pm 0.01$ & $0.08 \pm 0.01$ \\ WH(110) & -- & -- & $2.63\pm 0.06$ & $4.49 \pm 0.16$ & $3.92 \pm 0.16$ \\ ZH(115) & $3.17 \pm 0.05$ & $2.64 \pm 0.05$ & $6.81\pm 0.05$ & $0.12 \pm 0.01$ & $0.06 \pm 0.01$ \\ WH(115) & -- & -- & $1.52\pm 0.05$ & $4.30 \pm 0.14$ & $3.52 \pm 0.13$ \\ ZH(120) & $2.77 \pm 0.04$ & $2.26 \pm 0.04$ & $5.81\pm 0.04$ & $0.10 \pm 0.01$ & $0.06 \pm 0.01$ \\ WH(120) & -- & -- & $1.00\pm 0.04$ & $3.86 \pm 0.12$ & $3.09 \pm 0.11$ \\ ZH(125) & $2.31 \pm 0.04$ & $1.84 \pm 0.03$ & $5.40\pm 0.04$ & $0.09 \pm 0.01$ & $0.06 \pm 0.01$ \\ WH(125) & -- & -- & $0.74\pm 0.03$ & $3.29 \pm 0.10$ & $2.67 \pm 0.09$ \\ ZH(130) & $1.84 \pm 0.03$ & $1.53 \pm 0.03$ & $3.99\pm 0.03$ & $0.07 \pm 0.01$ & $0.04 \pm 0.01$ \\ WH(130) & -- & -- & $0.70\pm 0.02$ & $2.56 \pm 0.09$ & $2.07 \pm 0.08$ \\ ZH(135) & $1.39 \pm 0.02$ & $1.16 \pm 0.02$ & $2.80\pm 0.02$ & $0.06 \pm 0.01$ & $0.03 \pm 0.01$ \\ WH(135) & -- & -- & $0.67\pm 0.02$ & $2.00 \pm 0.06$ & $1.76 \pm 0.06$ \\ \hline Sum & $740 \pm 20$ & $620 \pm 19$ & $1034 \pm 18$ & $628\pm 13$ & $482\pm 11$\\\hline Data & $776$ & $635$ & $1045$ & $689$ & $544$\\ \hline \end{tabular} } \end{center} \end{table} \begin{figure}[tbp] \begin{center} \includegraphics[width=0.49\textwidth]{figures/hbb_Zmm_HighPt_PostFit_s_7TeV.pdf} \includegraphics[width=0.49\textwidth]{figures/hbb_Znn_HighPt_PostFit_s_7TeV.pdf} \includegraphics[width=0.49\textwidth]{figures/hbb_Wen_HighPt_PostFit_s_8TeV.pdf} \includegraphics[width=0.49\textwidth]{figures/hbb_Mbb.pdf} \caption{Examples of BDT output distributions in the high-\ensuremath{\PT(\mathrm{V})}\ range, after all the selection criteria have been applied, for \ensuremath{\cPZ(\Pgm\Pgm)\PH}\ (top left), \ensuremath{\cPZ(\cPgn\cPgn)\PH}\ (top right), and \ensuremath{\PW(\Pe\cPgn)\PH}\ (bottom left). Bottom right: the \cPqb-tagged dijet invariant-mass distribution from the combination of all VH channels for the combined 7 and 8\TeV data sets. Only events that pass a more restrictive selection are included (see text). For all figures, the solid histograms show the signal and the various backgrounds, with the hatched region denoting the statistical uncertainties in the MC simulation. The data are represented by points with error bars. The VH signal is represented by a red line histogram.
The ratio of the data to the sum of the expected background distributions is shown at the bottom of each figure.} \label{fig:Hbb_figs} \end{center} \end{figure} \section{Combined results}\label{sec:results} In this section, we present the results obtained by combining the measurements from all five search channels described above. We begin with a short summary of the statistical method used to combine the analyses. \subsection{Combination methodology} \label{sec:method} Combining the Higgs boson search results requires a simultaneous analysis of the data selected by the individual decay modes, accounting for their correlations and for all the statistical and systematic uncertainties. The statistical methodology used in this combination was developed by the ATLAS and CMS Collaborations in the context of the LHC Higgs Combination Group. A description of the general methodology can be found in Refs.~\cite{LHC-HCG-Report, Chatrchyan:2012tx}. Results presented in this paper are obtained using asymptotic formulae from Ref.~\cite{Cowan:2010st} and recent updates available in the \textsc{RooStats} package~\cite{RooStats}. The Higgs boson mass is tested in steps commensurate with the expected Higgs boson width and the experimental mass resolution~\cite{LHC-HCG-Report}. \subsubsection{Characterizing the absence of a signal: limits} For the calculation of exclusion limits, we adopt the modified frequentist criterion $\ensuremath{\mathrm{CL_s}\xspace}$~\cite{Junk:1999kv,Read1}. The chosen test statistic $q$, used to determine how signal- or background-like the data are, is based on a profile likelihood ratio. Systematic uncertainties are incorporated via nuisance parameters and are treated according to the frequentist paradigm, as described in Ref.~\cite{LHC-HCG-Report}. The profile likelihood ratio is defined as \begin{equation} q_{\mu} = - 2 \, \ln \frac {\mathcal{L}(\mathrm{obs} \, | \, \mu \cdot s + b, \, \hat \theta_{\mu} ) } {\mathcal{L}(\mathrm{obs} \, | \, \hat \mu \cdot s + b, \, \hat \theta ) } , \end{equation} where ``obs'' stands for the observed data; $s$ stands for the number and distribution of signal events expected under the SM Higgs boson hypothesis; $\mu$ is a signal-strength modifier, introduced to accommodate deviations from the SM Higgs boson predictions; $b$ is the number and distribution of background events; $\mu \cdot s + b$ is the signal-plus-background hypothesis, with the expected SM signal event yields $s$ multiplied by the signal-strength modifier $\mu$; and $\theta$ are the nuisance parameters describing the systematic uncertainties. The value $\hat \theta_{\mu}$ maximizes the likelihood in the numerator for a given $\mu$, while $\hat \mu$ and $\hat \theta$ define the point at which the likelihood reaches its global maximum. The ratio of the probabilities to observe a value of the test statistic at least as large as the one observed in data, $q_{\mu}^{\mathrm{obs}}$, under the signal-plus-background ($\mu \cdot s + b$) and background-only ($b$) hypotheses, \begin{equation} \ensuremath{\mathrm{CL_s}\xspace} (\mu) = \frac {\mathrm{P}(q_{\mu} \geq q_{\mu}^{\mathrm{obs}} \, | \, \mu \cdot s+b)} {\mathrm{P}(q_{\mu} \geq q_{\mu}^{\mathrm{obs}} \, | \, b )} \, \leq \alpha , \end{equation} is used as the criterion for excluding the presence of a signal at the $1 - \alpha$ confidence level. A signal with a cross section $\sigma = \mu \cdot \sigma_{\mathrm{SM}}$, where $\sigma_{\mathrm{SM}}$ is the SM Higgs boson cross section, is defined to be excluded at 95\% CL if $\ensuremath{\mathrm{CL_s}\xspace} (\mu)\, \leq \, 0.05$.
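The CLs construction can be illustrated schematically in Python (a hypothetical sketch only: the analysis itself uses the asymptotic formulae of Ref.~\cite{Cowan:2010st} rather than pseudo-experiments, and the test-statistic distributions below are invented):
\begin{verbatim}
import numpy as np

# Hypothetical CLs sketch: compare the observed test statistic with its
# distributions under the mu*s+b and background-only hypotheses.
rng = np.random.default_rng(1)
q_sb = rng.normal(4.0, 2.0, 100000)  # q_mu under the mu*s+b hypothesis
q_b = rng.normal(0.0, 2.0, 100000)   # q_mu under the b-only hypothesis
q_obs = 3.0

p_sb = np.mean(q_sb >= q_obs)        # P(q_mu >= q_obs | mu*s+b)
p_b = np.mean(q_b >= q_obs)          # P(q_mu >= q_obs | b)
cls = p_sb / p_b
print(cls, "excluded at 95% CL" if cls <= 0.05 else "not excluded")
\end{verbatim}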
Here, $\sigma_{\mathrm{SM}}$ stands for the SM Higgs boson cross section.

\subsubsection{Characterizing an excess of events: p-values and significance}
To quantify the presence of an excess of events beyond what is expected for the background, we use the test statistic
\begin{equation} \label{eq:method_q0} q_{0} = - 2 \, \ln \frac {\mathcal{L}(\mathrm{obs} \, | \, b, \, \hat \theta_{0} ) } {\mathcal{L}(\mathrm{obs} \, | \, \hat \mu \cdot s + b, \, \hat \theta ) } , \end{equation}
where the likelihood in the numerator is for the background-only hypothesis. The local statistical significance $Z_\text{local}$ of a signal-like excess is computed from the probability $p_0$
\begin{equation} p_0 = \mathrm{P}(q_0 \geq q_0^\text{obs} \, | \, b), \end{equation}
henceforth referred to as the local $p$-value, using the one-sided Gaussian-tail convention:
\begin{equation} \label{eq:Z} p_0 = \int_{Z_{\text{local}}}^{+\infty} \frac{1}{\sqrt{2\pi}} \exp(-x^2/2) \, \rd x. \end{equation}
In the Higgs boson search, we scan over the Higgs boson mass hypotheses and find the value giving the minimum local $p$-value $p_{\text{local}}^{\text{min}}$, which describes the probability of a background fluctuation for that particular Higgs boson mass hypothesis. The probability to find a fluctuation with a local $p$-value lower than or equal to the observed $p_{\text{local}}^{\text{min}}$ anywhere in the explored mass range is referred to as the global $p$-value, $p_{\text{global}}$:
\begin{equation} p_{\mathrm{global}}= \mathrm{P}(p_0 \leq p_{\text{local}}^{\text{min}} \, | \, b). \end{equation}
The fact that the global $p$-value can be significantly larger than $p_{\text{local}}^{\text{min}}$ is often referred to as the ``look-elsewhere effect'' (LEE). The global significance (and global $p$-value) of an observed excess can be evaluated following the method described in Ref.~\cite{LEE}, using:
\begin{equation} \label{eq:LEE1} p_{\text{global}}= p_{\text{local}}^{\text{min}} \, + \, C \cdot \ensuremath{\cmsSymbolFace{e}}^{ - Z^2_{\text{local}} / 2 }. \end{equation}
The constant $C$ is found by generating a set of pseudo-experiments and using it to evaluate the global $p$-value corresponding to the $p_{\mathrm{local}}^{\mathrm{min}}$ value observed in the data. A pseudo-experiment is a simulated outcome of the experiment, obtained by randomly varying the average expected event yields and their distributions according to a specified model of statistical and systematic uncertainties. For example, a Poisson distribution is used to model statistical variations, while a Gaussian distribution is used to describe the systematic uncertainties.

\subsubsection{Extracting signal-model parameters}
The values of a set of signal-model parameters $a$ (the signal-strength modifier $\mu$ is one of them) are evaluated from a scan of the profile likelihood ratio $q(a)$:
\begin{equation} q(a) = - 2 \, \ln \frac {\mathcal{L}(\text{obs} \, | \, s(a) + b, \, \hat \theta_{a} ) } {\mathcal{L}(\text{obs} \, | \, s(\hat a) + b, \, \hat \theta ) } . \end{equation}
The values of the parameters $\hat a$ and $\hat \theta$ that maximize the likelihood $\mathcal{L}(\text{obs} \, | \, s(\hat a) + b, \, \hat \theta )$ are called the best-fit set. The 68\%~(95\%)~CL interval for a given signal-model parameter $a_i$ is evaluated from $q(a_i)=1$~(3.84), with all other unconstrained model parameters treated as nuisance parameters.
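For illustration only, this last step can be made concrete with a short numerical sketch (it is not part of the analysis machinery, which implements the full procedure within \textsc{RooStats}; the function and array names below are hypothetical):
\begin{verbatim}
import numpy as np

def cl_interval(a, q, threshold=1.0):
    """Interval where a tabulated 1D scan of the test statistic q(a)
    lies below `threshold` (1.0 -> 68% CL, 3.84 -> 95% CL).  Assumes a
    single minimum inside the scanned range; the interval edges are
    found by linear interpolation between neighbouring scan points."""
    a, q = np.asarray(a), np.asarray(q)
    inside = np.where(q <= threshold)[0]
    i, j = inside[0], inside[-1]
    lo = a[0] if i == 0 else np.interp(
        threshold, [q[i], q[i - 1]], [a[i], a[i - 1]])
    hi = a[-1] if j == len(a) - 1 else np.interp(
        threshold, [q[j], q[j + 1]], [a[j], a[j + 1]])
    return lo, hi
\end{verbatim}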
The two-dimensional (2D) 68\%~(95\%)~CL contours for pairs of signal-model parameters $a_i,\, a_j$ are derived from $q(a_i, a_j) = 2.3$~(6.0). Note that the boundaries of the 2D confidence-level region projected onto either parameter axis are not identical to the one-dimensional (1D) confidence intervals for this parameter. \subsection{Exclusion limits on the SM Higgs boson} \subsubsection{Results of searches in the five decay modes} \label{sec:SubchannelLimits} Figures~\ref{fig:LimitGaGa} and \ref{fig:Limit} show the 95\% CL upper limits on the signal-strength modifier, $\mu = \sigma / \sigma_{\mathrm{SM}}$, as a function of $\ensuremath{m_{\PH}}$ for the five decay modes: $\Pgg\Pgg$, $\cPZ\cPZ$, $\PW\PW$, $\Pgt\Pgt$, and $\cPqb\cPqb$. The observed values are shown by the solid lines. The SM Higgs boson mass regions where the line is below $\sigma / \sigma_{\mathrm{SM}}= 1$ are excluded at 95\% CL. The dashed lines indicate the median of the expected results for the background-only hypothesis. The dark and light bands indicate the ranges in which the observed results are expected to reside in 68\% and 95\% of the experiments, should multiple experiments be performed under the background-only hypothesis. The probabilities for an observation to lie above and below the 68\% (95\%) bands are each 16\% (2.5\%). \begin{figure}[htbp] \begin{center} \includegraphics[width=0.49\linewidth]{figures/hgg_MassFactLimit} \\ \includegraphics[width=0.49\linewidth]{figures/hgg_BaselineLimit} \includegraphics[width=0.49\linewidth]{figures/hgg_MassWindowLimit} \caption{The 95\% CL upper limits on the production cross section of a Higgs boson expressed in units of the SM Higgs boson production cross section, $\sigma / \sigma_\text{SM}$, as obtained in the $\PH \to \Pgg\Pgg$ search channel for (top) the baseline analysis, (lower left) the cut-based analysis, and (lower right) the sideband analysis. The solid lines represent the observed limits; the background-only hypotheses are represented by their median (dashed lines) and by their 68\% (dark) and 95\% (light) CL bands. } \label{fig:LimitGaGa} \end{center} \end{figure} \begin{figure}[htbp] \begin{center} \includegraphics[width=0.49\linewidth]{figures/comb/sqr_acls_hzz_zoom} \includegraphics[width=0.49\textwidth]{figures/comb/sqr_acls_hww_nominal_inject2_ylin} \\ \includegraphics[width=0.49\textwidth]{figures/comb/sqr_acls_htt_nominal_inject2_ylin} \includegraphics[width=0.49\textwidth]{figures/comb/sqr_acls_hbb_nominal_inject2_ylin} \caption{The 95\% CL upper limits on the production cross section of a Higgs boson expressed in units of the SM Higgs boson production cross section, $\sigma / \sigma_\text{SM}$, for the following search modes: (upper left) $\PH \to \cPZ\cPZ \to 4\ell$, (upper right) $\PH \to \PW\PW$, (lower left) $\PH \to \Pgt\Pgt$, and (lower right) $\PH \to \cPqb\cPqb$. The solid lines represent the observed limits; the background-only hypotheses are represented by their median (dashed lines) and by the 68\% and 95\% CL bands. The signal-plus-background expectation (dotted lines) from a Higgs boson with mass $\ensuremath{m_{\PH}} = 125$\GeV is also shown for the final states with a poor mass resolution, $\PW\PW$, $\tau\tau$, and $\cPqb\cPqb$. 
} \label{fig:Limit} \end{center} \end{figure}
In the $\PH \to \Pgg\Pgg$ analysis, the SM Higgs boson signal is searched for in a simultaneous statistical analysis of the diphoton invariant-mass distributions for the eleven exclusive event classes: five classes (four untagged and one VBF-tagged) for the 7\TeV data and six classes (four untagged and two VBF-tagged) for the 8\TeV data, as described in Section~\ref{sec:hgg}. Figure~\ref{fig:LimitGaGa} shows the 95\% CL upper limits on the Higgs boson production cross section obtained in the \emph{baseline} analysis (top) and in the two alternative analyses: the \emph{cut-based} (lower left) and the \emph{sideband} (lower right) analyses. The observed limits in the sideband analysis [Fig.~\ref{fig:LimitGaGa} (lower right)] are not smooth because the event class boundaries, both of the ${\pm}2\%$ signal window and of each sideband window, move with the mass hypothesis, so that events migrate in and out of the classes in a discrete manner. Figure~\ref{fig:LimitGaGa} (top) shows that the $\PH \to \Pgg\Pgg$ search has reached the sensitivity for excluding the SM Higgs boson at 95\% CL in the mass range 110--144\GeV, while the observed data exclude it in the following three mass ranges: 113--122\GeV, 128--133\GeV, and 138--149\GeV. All three diphoton analyses give observed exclusion limits near $\ensuremath{m_{\PH}}=125\GeV$ that are much weaker than those expected for the background-only hypothesis, which implies a significant excess of events with diphoton masses around 125\GeV. The consistency of the results obtained with the three approaches confirms the robustness of the measurement.

In the $\PH \rightarrow \cPZ\cPZ \rightarrow 4\ell$ analysis, the SM Higgs boson signal is searched for in a simultaneous statistical analysis of six 2D distributions of the four-lepton invariant mass $m_{4\ell}$ and the matrix-element-based kinematic discriminant $K_D$, as described in Section~\ref{sec:hzz4l}. The six distributions correspond to the three lepton final states ($4\Pe$, $4\mu$, $2\Pe2\mu$) and the 7 and 8\TeV data sets. Figure~\ref{fig:Limit} (upper left) shows the 95\% CL upper limits on the Higgs boson production cross section. The $\PH \to \cPZ\cPZ \to 4\ell$ search has reached the sensitivity for excluding the SM Higgs boson at 95\% CL in the mass range 120--180\GeV, while the observed data exclude it in the following two mass ranges: 130--164\GeV and 170--180\GeV. The observed exclusion limits for $\ensuremath{m_{\PH}}=120$--$130\GeV$ are much weaker than the expected limits for the background-only hypothesis, suggesting a significant excess of four-lepton events in this mass range. As a cross-check, the statistical analysis using only the $m_{4\ell}$ distributions has been performed. The results are found to be consistent with the 2D analysis, although with less sensitivity.

In the $\PH \to \PW\PW \to \ell\nu\ell\nu$ analysis, the SM Higgs boson signal is searched for in a simultaneous statistical analysis of eleven exclusive final states: same-flavour ($\Pep\Pem$ and $\mu^+\mu^-$) dilepton events with 0 and 1 jet for the 7 and~8\TeV data sets, different-flavour $\Pe^{\pm}\mu^{\mp}$ dilepton events with 0 and 1 jet for the 7 and~8\TeV data sets, dilepton events in the VBF-tag category for the 7\TeV data set, and same-flavour and different-flavour dilepton events in the VBF-tag category for the 8\TeV data set. All analysis details can be found in Section~\ref{sec:hww2l2nu}.
Figure~\ref{fig:Limit} (upper right) shows the 95\% CL upper limits on the Higgs boson production cross section. The $\PH \to \PW\PW \to \ell\nu\ell\nu$ search has reached the sensitivity for excluding the SM Higgs boson at 95\% CL in the mass range 122--160\GeV (the higher-mass range is not discussed in this paper), while the observed data exclude it in the mass range 129--160\GeV. The observed exclusion limits are weaker than the expected ones for the background-only hypothesis in the entire mass range, suggesting an excess of events in data. However, given the mass resolution of about 20\% in this channel, a consequence of the two undetected neutrinos, a broad excess is observed across the mass range from 110 to about 130\GeV. The dotted line in Fig.~\ref{fig:Limit} (upper right) indicates the median expected exclusion limits in the presence of a SM Higgs boson with a mass near 125\GeV. The observed limits in this channel are consistent with the expectation for a SM Higgs boson of 125\GeV.

In the $\PH \to \Pgt\Pgt$ channel, the 0-jet, 1-jet, and VBF categories are used to set 95\% CL upper limits on the Higgs boson production. The ditau system is reconstructed in four final states: $\Pe\tau_{\mathrm{h}}$, $\mu\tau_{\mathrm{h}}$, $\Pe\mu$, $\mu\mu$, where the leptons come from $\tau \to \Pe\nu\nu$ or $\tau \to \mu\nu\nu$ decays. The 0- and 1-jet categories are further split into two categories of low or high ditau transverse momentum. The 7 and 8\TeV data are treated independently, giving a total of 40 ditau mass distributions. All analysis details can be found in Section~\ref{sec:htt}. Figure~\ref{fig:Limit} (lower left) shows the 95\% CL upper limits on the Higgs boson production cross section in this channel. The $\PH \to \Pgt\Pgt$ search has not yet reached the SM Higgs boson exclusion sensitivity; the expected limits on the signal event rates are 1.3--2.4 times larger than the event rates expected for the SM Higgs boson in this channel.

In the $\PH \to \cPqb\cPqb$ analysis, five final states are considered: two $\cPqb$-tagged jets with \ETm\ ($\cPZ \to \nu\nu$), $\Pep\Pem$, $\mu^+\mu^-$ ($\cPZ \to \ell^+\ell^-$), $\Pe + \ETm$, and $\mu + \ETm$ ($\PW \to \ell\nu$). Each of these categories is further split into two categories of low or high $\cPqb\cPqb$ transverse momentum. The 7 and~8\TeV data are treated independently, giving a total of 20 BDT-output distributions. All analysis details can be found in Section~\ref{sec:hbb}. Figure~\ref{fig:Limit} (lower right) shows the 95\% CL upper limits on the Higgs boson production cross section in this channel. The $\PH \to \cPqb\cPqb$ search has not yet reached the SM Higgs boson exclusion sensitivity; the expected limits on the signal event rates are 1.2--2.8 times larger than the event rates expected for the SM Higgs boson in this channel.

\subsubsection{Combined results}
The five individual search channels described above are combined into a single search for the SM Higgs boson. Figure~\ref{fig:CLsMu95} (left) shows the 95\% CL upper limits on the signal-strength modifier, $\mu = \sigma / \sigma_{\mathrm{SM}}$, as a function of $\ensuremath{m_{\PH}}$. We exclude a SM Higgs boson at 95\% CL in two mass ranges: 110--\ObsNFL\GeV and \ObsNFH--\MHmax\GeV.
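As an aside, the exclusion criterion of Section~\ref{sec:method} is easy to state algorithmically. The following sketch computes $\ensuremath{\mathrm{CL_s}\xspace}$ from ensembles of pseudo-experiments (a toy illustration only; the limits quoted in this paper are instead obtained with the asymptotic formulae of Ref.~\cite{Cowan:2010st}, and the function and variable names are hypothetical):
\begin{verbatim}
import numpy as np

def cls_value(q_obs, q_sb, q_b):
    """Toy-MC CLs.  q_sb and q_b are arrays of the test statistic q_mu
    evaluated on pseudo-experiments generated under the signal-plus-
    background and background-only hypotheses, respectively."""
    p_sb = np.mean(q_sb >= q_obs)  # P(q_mu >= q_mu_obs | mu*s + b)
    p_b = np.mean(q_b >= q_obs)    # P(q_mu >= q_mu_obs | b)
    return p_sb / p_b

# A given mu is excluded at 95% CL when cls_value(...) <= 0.05.
\end{verbatim}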
\begin{figure*} [b] \centering \includegraphics[width=0.49\textwidth]{figures/comb/sqr_acls_comb_HPA_smallGGScale} \hfill \includegraphics[width=0.49\textwidth]{figures/comb/sqr_smacls_comb_HPA_smallGGScale} \caption{ The 95\% CL upper limits on the production cross section of a Higgs boson expressed in units of the SM Higgs boson production cross section, $\sigma / \sigma_\text{SM}$, (left) and the $\ensuremath{\mathrm{CL_s}\xspace}$ values (right) for the SM Higgs boson hypothesis, as a function of the Higgs boson mass for the five decay modes and the 7 and 8\TeV data samples combined. The solid lines represent the observed limits; the background-only hypotheses are represented by their median (dashed lines) and by the 68\% and 95\% CL bands. The three horizontal lines on the right plot show the $\ensuremath{\mathrm{CL_s}\xspace}$ values 0.05, 0.01, and 0.001, corresponding to 95\%, 99\%, and 99.9\% confidence levels, defined as $(1-\ensuremath{\mathrm{CL_s}\xspace})$. } \label{fig:CLsMu95} \end{figure*}

The $\ensuremath{\mathrm{CL_s}\xspace}$ value for the SM Higgs boson hypothesis as a function of its mass is shown in Fig.~\ref{fig:CLsMu95} (right). The horizontal lines indicate $\ensuremath{\mathrm{CL_s}\xspace}$ values of 0.05, 0.01, and 0.001. The mass regions where the observed $\ensuremath{\mathrm{CL_s}\xspace}$ values are below these lines are excluded with the corresponding ($1-\ensuremath{\mathrm{CL_s}\xspace}$) confidence levels of 95\%, 99\%, and 99.9\%, respectively. The 95\% CL exclusion range for the SM Higgs boson is identical to that shown in Fig.~\ref{fig:CLsMu95} (left), as both results are simply different representations of the same underlying information. At 99\% CL, we exclude the SM Higgs boson in three mass ranges: \ObsOneNNL --\ObsOneNNH\GeV, \ObsTwoNNL --\ObsTwoNNH\GeV, and \ObsThreeNNL --\ObsThreeNNH\GeV. Figure~\ref{fig:CLsMu95} (right) shows that, in the absence of a signal, we would expect to exclude the entire $m_{\PH}$ range of 110--145\GeV at the 99.9\% CL or higher. In most of the Higgs boson mass range, the observed and expected limits are consistent, since the observed limits generally lie within the 68\% or 95\% bands of the expected limit values. However, in the range $\ObsNFL < m_{\PH}< \ObsNFH$\GeV, we observe an excess of events, making the observed limits considerably weaker than expected in the absence of the SM Higgs boson and, hence, not allowing the exclusion of the SM Higgs boson in this range.

\subsection{Significance of the observed excess}
\subsubsection{Results of searches in the $\PH \to \Pgg\Pgg$ and $\PH \to \cPZ\cPZ \to 4\ell$ decay modes} \label{sec:SubchannelSignificance}
As presented in Section~\ref{sec:SubchannelLimits}, the searches for the SM Higgs boson in the $\Pgg\Pgg$ and $\cPZ\cPZ \to 4\ell$ modes reveal a substantial excess of events with diphoton and four-lepton invariant masses near 125\GeV. Figure~\ref{fig:PValueGaGa} shows the local $p$-value as a function of the SM Higgs boson mass in the $\Pgg\Pgg$ channel. The results are presented for the \emph{baseline} analysis (top) and for the two alternative analyses: the \emph{cut-based} (lower left) and the \emph{sideband} (lower right) analyses. Figure~\ref{fig:PValueGaGa} (top) shows about a $3\sigma$ excess near 125\GeV in both the 7 and 8\TeV data. The minimum local $p$-value $p_0 = 1.8\ten{-5}$, corresponding to a maximum local significance of 4.1$\sigma$, occurs at a mass of 125.0\GeV for the combined 7 and 8\TeV data sets.
The median expected significance for a SM Higgs boson of this mass is 2.7$\sigma$. In the asymptotic approximation, 68\% (95\%) of repeated experiments would give results within ${\pm}1 \sigma$ (${\pm}2 \sigma$) around the median expected significance. Therefore, the excess seen in the data, although larger than the median expected for a Higgs boson signal, is compatible with a SM Higgs boson signal at the level of about 16\% probability. The consistency of the results from the three analyses is a good check on the robustness of the measurement.

The local $p$-value as a function of the Higgs boson mass $\ensuremath{m_{\PH}}$ for the $\cPZ\cPZ \rightarrow 4\ell$ channel is shown in Fig.~\ref{fig:PValue}. The minimum of the local $p$-value is at $\ensuremath{m_{\PH}}=125.5\GeV$ and corresponds to a local significance of $3.2\sigma$. A local significance of $2.2\sigma$ is found for a 1D fit of the invariant mass without using the $K_{D}$ discriminant. The median expected significance for a SM Higgs boson of this mass is $3.8\sigma$ and 3.2$\sigma$ for the 2D and 1D fits, respectively.

\begin{figure}[htbp] \begin{center} \includegraphics[width=0.49\linewidth]{figures/hgg_MassFactPValue} \\ \includegraphics[width=0.49\linewidth]{figures/hgg_BaselinePValue} \includegraphics[width=0.49\linewidth]{figures/hgg_MassWindowPValue} \caption{The local $p$-value as a function of $\ensuremath{m_{\PH}}$ for the 7 and 8\TeV data sets and their combination for the $\Pgg\Pgg$ mode from (top) the baseline analysis, (lower left) the cut-based analysis, and (lower right) the sideband analysis. The observed $p$-values for the combined 7 and 8\TeV data sets are shown by the solid lines; the median expected $p$-values for a SM Higgs boson with mass $\ensuremath{m_{\PH}}$ are shown by the dashed lines. The horizontal lines show the relationship between the $p$-value (left $y$ axis) and the significance in standard deviations (right $y$ axis). } \label{fig:PValueGaGa} \end{center} \end{figure}

\begin{figure}[htbp] \begin{center} \includegraphics[width=0.49\linewidth]{figures/HZZ_pValue_small} \caption{The local $p$-value as a function of $\ensuremath{m_{\PH}}$ for the 7 and 8\TeV data sets and their combination for the $\cPZ\cPZ \to 4\ell$ channel. The observed $p$-values for the combined 7 and 8\TeV data sets are shown by the solid line; the median expected $p$-values for a SM Higgs boson with mass $\ensuremath{m_{\PH}}$ are shown by the dashed line. The observed $p$-values for the 7 and 8\TeV data sets are shown by the dotted lines. The horizontal lines show the relationship between the $p$-value (left $y$ axis) and the significance in standard deviations (right $y$ axis). } \label{fig:PValue} \end{center} \end{figure}

\subsubsection{Combined results}
To quantify the inconsistency of the observed excesses with the background-only hypothesis, we show in Fig.~\ref{fig:pvalue} (left) the local $p$-value $p_0$ for the five decay modes combined for the 7 and 8\TeV data sets. The 7 and 8\TeV data sets exhibit excesses of $\MaxLocalZseven \sigma$ and $\MaxLocalZeight \sigma$, respectively, for a SM Higgs boson with a mass near 125\GeV. In the combination, the minimum local $p$-value of $p_{\min} =$ \MinLocalP, corresponding to a local significance of $\MaxLocalZ \sigma$, occurs at $\ensuremath{m_{\PH}} = 125.5$\GeV. Figure~\ref{fig:pvalue} (right) gives the $p$-value distribution for each of the decay channels. The largest contributions to the overall excess are from the $\Pgg\Pgg$ and $\cPZ\cPZ \to 4\ell$ channels.
Both channels have good mass resolution and allow a precise measurement of the mass of the resonance corresponding to the excess. Their combined significance is 5.0$\sigma$, as displayed in Fig.~\ref{fig:pvalue_subcomb} (left). Figure~\ref{fig:pvalue_subcomb} (right) shows the combined $p$-value distribution for the channels with poorer mass resolution: $\PW\PW$, $\Pgt\Pgt$, and $\cPqb\cPqb$.

Table~\ref{tab:Signif} summarizes the median expected and observed local significances for a SM Higgs boson mass hypothesis of 125.5\GeV from the individual decay modes and their combinations. In the $\Pgt\Pgt$ channel, we do not observe an excess of events at this mass. The expected significance is evaluated assuming the expected background and signal rates. The observed significance is expected to be within $\pm 1\sigma$ of the expected significance with a 68\% probability.

\begin{table}[htbp] \begin{center} \topcaption{ The median expected and observed significances of the excesses in the individual decay modes and their various combinations for a SM Higgs boson mass hypothesis of 125.5\GeV. There is no observed excess in the $\Pgt\Pgt$ channel. } \label{tab:Signif}
\begin{tabular}{l|c|c} \hline
Decay mode or combination & Expected ($\sigma$) & Observed ($\sigma$) \\ \hline\hline
$\cPZ\cPZ$ & 3.8 & 3.2 \\
$\Pgg\Pgg$ & 2.8 & 4.1 \\
$\PW\PW$ & 2.5 & 1.6 \\
$\cPqb\cPqb$ & 1.9 & 0.7 \\
$\Pgt\Pgt$ & 1.4 & -- \\ \hline
$\Pgg\Pgg$ + $\cPZ\cPZ$ & 4.7 & 5.0 \\
$\PW\PW$ + $\Pgt\Pgt$ + $\cPqb\cPqb$ & 3.4 & 1.6 \\ \hline
$\Pgg\Pgg$ + $\cPZ\cPZ$ + $\PW\PW$ + $\Pgt\Pgt$ + $\cPqb\cPqb$ & 5.8 & 5.0 \\ \hline
\end{tabular} \end{center} \end{table}

\begin{figure*} \centering \includegraphics[width=0.49\textwidth]{figures/comb/sqr_pvala_all_energy_smallGGScale_wideX} \hfill \includegraphics[width=0.49\textwidth]{figures/comb/sqr_pvala_all_bydecay_smallGGScale_wideX} \caption{ (Left) The observed local $p$-value for the combination of all five decay modes, shown for the 7 and 8\TeV data sets separately and for their combination, as a function of the Higgs boson mass. (Right) The observed local $p$-value for each separate decay mode and their combination, as a function of the Higgs boson mass. The dashed lines show the median expected local $p$-values for a SM Higgs boson with mass $\ensuremath{m_{\PH}}$. } \label{fig:pvalue} \end{figure*}

\begin{figure*} \centering \includegraphics[width=0.49\textwidth]{figures/comb/sqr_pvala_all_energy_hires} \hfill \includegraphics[width=0.49\textwidth]{figures/comb/sqr_pvala_all_energy_lowres} \caption{ The observed local $p$-value for the $\Pgg\Pgg$ and $\cPZ\cPZ \to 4\ell$ decay channels with good mass resolution (left) and the $\PW\PW$, $\cPqb\cPqb$, and $\Pgt\Pgt$ modes with poorer mass resolution (right), as a function of the Higgs boson mass for the 7 and 8\TeV data sets and their combination. The dashed lines show the expected local $p$-values for a SM Higgs boson with mass $\ensuremath{m_{\PH}}$. } \label{fig:pvalue_subcomb} \end{figure*}

The LEE-corrected significance is evaluated by generating $10\,000$ pseudo-experiments. After fitting for the constant $C$ in Eq.~(\ref{eq:LEE1}), we find that the global significance of the signal at $\ensuremath{m_{\PH}}=125.5$\GeV is $\GlobalZsmall \sigma$ ($\GlobalZmedium \sigma$) for the mass search range 115--130\GeV (110--145\GeV).
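For reference, the conversion between local $p$-values and significances in Eq.~(\ref{eq:Z}), together with the trial-factor correction of Eq.~(\ref{eq:LEE1}), amounts to the following short sketch (illustrative only; the constant \texttt{C} must first be calibrated on pseudo-experiments as described above, and the function names are hypothetical):
\begin{verbatim}
import numpy as np
from scipy.stats import norm

def z_local(p_local):
    """Significance in the one-sided Gaussian-tail convention:
    the inverse of p = P(x >= Z) for a standard normal distribution."""
    return norm.isf(p_local)

def p_global(p_local, C):
    """Global p-value from the trial-factor formula."""
    return p_local + C * np.exp(-z_local(p_local) ** 2 / 2.0)
\end{verbatim}
For example, \texttt{z\_local(1.8e-5)} returns approximately 4.1, reproducing the maximum local significance of the diphoton channel quoted above.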
The low probability for an excess at least as large as the observed one to arise from a statistical fluctuation of the background leads to the conclusion that we observe a new particle with a mass near 125\GeV. The $\Pgg\Pgg$ and $\cPZ\cPZ \to 4\ell$ decay modes indicate that the new particle is a boson, and the diphoton decay implies that its spin is different from one~\cite{Landau,Yang}.

\subsection{Mass of the observed state}
To measure the mass of the observed state, we use the $\gamma\gamma$ and $\cPZ\cPZ \to 4\ell$ decay modes. Figure~\ref{fig:fit_mass} (left) shows the 2D~68\%~CL regions for the signal cross section (normalized to the SM Higgs boson cross section) versus the new boson's mass $m_{\mathrm{X}}$, separately for untagged $\gamma\gamma$, VBF-tagged $\gamma\gamma$, and $\cPZ\cPZ \to 4\ell$ events, and their combination. The combined 68\%~CL contour shown with a solid line in Fig.~\ref{fig:fit_mass} (left) assumes that the relative event yields between the three channels are fixed to the SM expectations, while the overall signal strength is a free parameter.

The energy scale uncertainties for photons, electrons, and muons are treated as independent. The $\cPZ \to \Pe\Pe$ peak is used to correct both the photon and electron energy scales. However, we find that the two scales are only weakly correlated, since the photons in $\PH \to \gamma\gamma$ decays and the electrons in $\PH \to \cPZ\cPZ \to 4\ell$ decays populate substantially different energy ranges. Moreover, the photons carry an additional systematic uncertainty associated with extrapolating the energy scale corrections derived for electrons to those used for photons.

\begin{figure*} [thbp] \centering \includegraphics[width=0.49\textwidth]{figures/comb/sqr_mass_scan_2d_all_white} \hfill \includegraphics[width=0.49\textwidth]{figures/comb/sqr_mass_scan_1d_all} \caption{ (Left) The 2D 68\% CL contours for a hypothesized boson mass $m_{X}$ versus $\mu= \sigma / \sigma_{\mathrm{SM}}$ for the untagged $\gamma \gamma$, VBF-tagged $\gamma \gamma$, and $\cPZ\cPZ \to 4\ell$ decay channels, and their combination from the combined 7 and 8\TeV data. In the combination, the relative signal strengths for the three final states are fixed to those for the SM Higgs boson. (Right) The maximum-likelihood test statistic $q$ versus $m_{\mathrm{X}}$ for the untagged $\gamma \gamma$, VBF-tagged $\gamma \gamma$, and $\cPZ\cPZ \to 4\ell$ final states, and their combination from the combined 7 and 8\TeV data. Neither the absolute nor the relative signal strengths for the three final states are constrained to the SM Higgs boson expectations. The crossings with the thick (thin) horizontal line $q=1\,(3.8)$ define the 68\% (95\%) CL interval for the measured mass, shown by the vertical lines.} \label{fig:fit_mass} \end{figure*}

To measure the value of $m_{\mathrm{X}}$ in a model-independent way, the untagged $\gamma\gamma$, VBF-tagged $\gamma\gamma$, and $\cPZ\cPZ \to 4\ell$ channels are assumed to have independent signal cross sections. This is achieved by scaling the expected SM Higgs boson event yields in these channels by independent factors $\mu_i$, where $i$ denotes the individual channel. The signal is assumed to be a particle with a unique mass $m_{\mathrm{X}}$. The mass and its uncertainty are extracted from a scan of the combined test statistic $q$, frequently referred to as $-2 \, \Delta \ln \mathcal{L}$, versus $m_{\mathrm{X}}$.
The signal strengths $\mu_i$ in such a scan are treated in the same way as the other nuisance parameters. Figure~\ref{fig:fit_mass} (right) shows the test statistic as a function of $m_{\mathrm{X}}$ for the three final states separately and their combination. The crossing of the $q(m_{\mathrm{X}})$ curves with the horizontal thick (thin) lines at $q=1$ (3.8) defines the 68\% (95\%) CL interval for the mass of the observed particle. These intervals include both the statistical and systematic uncertainties. The resulting mass measurement, with its 68\% CL interval, is $m_{\mathrm{X}} = 125.3 \pm 0.6$\GeV.

To determine the statistical component in the overall uncertainty, we evaluate the test statistic $q(m_{\mathrm{X}})$ with all the nuisance parameters fixed to their best-fit values. The result is shown by the dashed line in Fig.~\ref{fig:fit_mass_statsyst}. The crossing of the dashed line with the thick horizontal line $q=1$ gives the statistical uncertainty (68\% CL interval) in the mass measurement: $\pm 0.4$\GeV. The quadrature difference between the overall and statistical-only uncertainties determines the systematic uncertainty component in the mass measurement: ${\pm}0.5$\GeV. Therefore, the final result for the mass measurement is $m_{\mathrm{X}}= \ensuremath{125.3 \pm 0.4\stat\pm 0.5\syst}\GeV$.

\begin{figure*}[thbp] \centering \includegraphics[width=0.49\textwidth]{figures/comb/sqr_mass_scan_1d_hires_comp} \caption{ The maximum-likelihood test statistic $q$ versus the hypothesized boson mass $m_{\mathrm{X}}$ for the combination of the $\gamma \gamma$ and $\cPZ\cPZ \to 4\ell$ modes from the combined 7 and 8\TeV data. The solid line is obtained including all the nuisance parameters and, hence, includes both the statistical and systematic uncertainties. The dashed line is found with all nuisance parameters fixed to their best-fit values and, hence, represents the statistical uncertainties only. The crossings with the thick (thin) horizontal line $q=1$ (3.8) define the 68\% (95\%) CL interval for the measured mass, shown by the vertical lines. } \label{fig:fit_mass_statsyst} \end{figure*}

\subsection{Consistency of the observed state with the SM Higgs boson hypothesis}
The $p$-value characterizes the probability that the background produces an excess of events at least as large as the one observed, but it gives no information about the consistency of the observed excess with the expected signal. The current data sample allows for only a limited number of such consistency tests, which we present in this section. These consistency tests do not constitute measurements of any physics parameters per se, but rather show the consistency of the various observations with the expectations for the SM Higgs boson. Unless stated otherwise, all consistency tests presented in this section are for the hypothesis of the SM Higgs boson with mass 125.5\GeV and all quoted uncertainties include both the statistical and systematic ones.

\subsubsection{Measurement of the signal strength}
The value for the signal-strength modifier $\hat \mu = \sigma / \sigma_{\mathrm{SM}}$, obtained by combining all the search channels, provides the first consistency test. Note that $\hat \mu$ becomes negative if the observed number of events is smaller than the expected rate for the background-only hypothesis. Figure~\ref{fig:muhat} shows the $\hat \mu$ value versus the hypothesized Higgs boson mass \ensuremath{m_{\PH}}.
The band corresponds to the 68\% CL region when including the statistical and systematic uncertainties. The value of $\hat \mu$ is evaluated in 0.5\GeV steps of \ensuremath{m_{\PH}}. The measured $\hat \mu$ value for a Higgs boson mass of 125.5\GeV is \MUHAT, consistent with the value $\mu=1$ expected for the SM Higgs boson.

\begin{figure*} [htbp] \centering \includegraphics[width=0.49\textwidth]{figures/comb/sqr_mlz_comb_HPA_smallGGScale_1} \caption{ The signal-strength $\hat \mu = \sigma / \sigma_\mathrm{SM}$ as a function of the hypothesized SM Higgs boson mass $\ensuremath{m_{\PH}}$ using all the decay modes and the combined 7 and 8\TeV data sets. The bands correspond to ${\pm}1$ standard deviation including both statistical and systematic uncertainties. } \label{fig:muhat} \end{figure*}

Figure~\ref{fig:muhat_compatibility} shows a consistency test of the $\hat \mu$ values obtained in different combinations of search channels. The combinations are organized by decay mode and by additional features that allow the selection of events with an enriched purity of a particular production mechanism. The expected purities of different combinations are discussed in the sections describing the individual analyses. For example, assuming the SM Higgs boson cross sections, the channels with the VBF dijet requirements have a substantial fraction (20--50\%) of gluon-gluon fusion events. There is consistency among all the channels contributing to the overall measurement and their various combinations.

\begin{figure*} [htbp] \centering \includegraphics[width=0.49\textwidth]{figures/comb/sqr_mlzs_ccc_mH125p5_all} \\ \includegraphics[width=0.49\textwidth]{figures/comb/sqr_mlzs_ccc_mH125p5_decay}\hfill \includegraphics[width=0.49\textwidth]{figures/comb/sqr_mlzs_ccc_mH125p5_prod} \caption{ Signal-strength values $\hat \mu = \sigma / \sigma_\mathrm{SM}$ for various combinations of the search channels with $\ensuremath{m_{\PH}}=125.5\GeV$. The horizontal bars indicate the $\pm 1 \sigma$ statistical-plus-systematic uncertainties. The vertical line with the band shows the combined $\hat \mu$ value with its uncertainty. (Top) Combinations by decay mode and by additional requirements that select events with an enriched purity of a particular production mechanism. (Bottom left) Combinations by decay mode. (Bottom right) Combinations by additional requirements that select events with an enriched purity of a particular production mechanism. } \label{fig:muhat_compatibility} \end{figure*}

The four main Higgs boson production mechanisms can be associated with either top-quark couplings (gluon-gluon fusion and \cPqt\cPqt\PH) or vector-boson couplings (VBF and VH). Therefore, combinations of channels associated with a particular decay mode and explicitly targeting different production mechanisms can be used to test the relative strengths of the couplings of the new state to the vector bosons and top quark. Figure~\ref{fig:rvrf} shows the 68\% and 95\% CL contours for the signal-strength modifiers $\mu_{\cPg\cPg\PH+\cPqt\cPqt\PH}$ of the gluon-gluon fusion plus \cPqt\cPqt\PH, and $\mu_{\mathrm{VBF+VH}}$ of the VBF plus VH production mechanisms. The three sets of contours correspond to the channels associated with the $\Pgg\Pgg$, $\tau\tau$, and $\PW\PW$ decay modes; searches in these decay modes have subchannels with VBF dijet tags. The SM Higgs boson point shown by the diamond at $(\mu_{\cPg\cPg\PH+\cPqt\cPqt\PH},\,\mu_{\mathrm{VBF+VH}}) = (1, 1)$ is within the 95\% CL intervals for each of the three decay modes.
\begin{figure*}[bhtp] \centering \includegraphics[width=0.49\textwidth]{figures/comb/sqr_rvrf_scan_2d_all_68_legSM} \caption{ The 68\% (solid lines) and 95\% (dashed lines) CL contours for the signal strength of the gluon-gluon fusion plus $\cPqt\cPqt\PH$ production mechanisms ($\mu_{\cPg\cPg\PH+\cPqt\cPqt\PH}$), versus VBF plus VH ($\mu_{\mathrm{VBF+VH}}$). The three different lines show the results for the decay modes: $\gamma\gamma$, $\PW\PW$, and $\tau\tau$. The markers indicate the best-fit values for each mode. The diamond at (1,1) indicates the expected values for the SM Higgs boson. } \label{fig:rvrf} \end{figure*}

\subsubsection{Consistency of the data with the SM Higgs boson couplings}
The event yield $N$ of Higgs bosons produced in collisions of partons $x$ ($xx \to \PH$) and decaying to particles $y$ ($\PH \to yy$) is proportional to the partial and total Higgs boson decay widths as follows:
\begin{equation} N \propto \sigma(xx \to \PH) \cdot \mathcal{B}(\PH \to yy) \propto \frac {\Gamma_{xx} \, \Gamma_{yy} } { \Gamma_{\mathrm{tot}} }, \end{equation}
where $\sigma(xx \to \PH)$ is the Higgs boson production cross section, $\mathcal{B}(\PH \to yy)$ is the branching fraction for the decay mode, $\Gamma_{xx}$ and $\Gamma_{yy}$ are the partial widths associated with the $\PH \to xx$ and $\PH \to yy$ processes, and $\Gamma_{\mathrm{tot}}$ is the total width. Seven partial widths ($\Gamma_{\PW\PW}$, $\Gamma_{\cPZ\cPZ}$, $\Gamma_{\cPqt\cPqt}$, $\Gamma_{\cPqb\cPqb}$, $\Gamma_{\tau\tau}$, $\Gamma_{\cPg\cPg}$, $\Gamma_{\gamma\gamma}$) and the total width $\Gamma_{\text{tot}}$ are relevant for the current analysis, where $\Gamma_{\cPg\cPg}$ is the partial width for the Higgs boson decay to two gluons. The partial widths $\Gamma_{\cPg\cPg}$ and $\Gamma_{\gamma\gamma}$ are generated by loop diagrams and thus are directly sensitive to the presence of new physics. The possibility of Higgs boson decays to beyond-the-standard-model (BSM) particles, with a partial width $\Gamma_{\mathrm{BSM}}$, is accommodated by making $\Gamma_{\text{tot}}$ equal to the sum of all partial widths of allowed decays to the SM particles plus $\Gamma_{\mathrm{BSM}}$. The partial widths are proportional to the square of the effective Higgs boson couplings to the corresponding particles. To test for possible deviations of the measurements from the rates expected in different channels for the SM Higgs boson, we introduce different sets of coupling scale factors $\kappa$ and fit the data to these new parameters. One can introduce up to eight independent parameters relevant for the current analysis. Significant deviations of the scale factors from unity would imply new physics beyond the SM Higgs boson hypothesis. The current data set is insufficient to measure all eight independent parameters. Therefore, we measure different subsets, with the remaining unmeasured parameters either constrained to equal the SM Higgs boson expectations or included in the likelihood fit as unconstrained nuisance parameters.

\textit{A. Test of custodial symmetry}

In the SM, the Higgs boson sector possesses a global $\mathrm{SU(2)_L \times SU(2)_R}$ symmetry, which is broken by the Higgs boson vacuum expectation value down to the diagonal subgroup $\mathrm{SU(2)_{L+R}}$.
As a result, the tree-level relation between the ratio of the \PW\ and \cPZ\ boson masses, $m_{\PW} / m_{\cPZ}$, and the ratio of their couplings to the Higgs boson, $g_{\PW} / g_{\cPZ}$, is protected against large radiative corrections, a phenomenon known as ``custodial symmetry''~\cite{Veltman:1977kh,Sikivie:1980hm}. However, large violations of custodial symmetry are possible in BSM theories. To test custodial symmetry, we introduce two scale factors $\kappa_{\PW}$ and $\kappa_{\cPZ}$ that modify the SM Higgs boson couplings to the $\PW$ and $\cPZ$ bosons, and use two different procedures to determine the consistency of the ratio $\lambda_{\PW\cPZ} = \kappa_{\PW} / \kappa_{\cPZ}$ with unity.

The dominant Higgs boson production mechanism for the inclusive $\PH \to \cPZ\cPZ$ and untagged $\PH \to \PW\PW$ channels is $\cPg\cPg \to \PH$. Therefore, the ratio of the event yields for these channels provides a test of custodial symmetry. To quantify the test, we introduce two event-rate modifiers $\mu_{\cPZ\cPZ}$ and $R_{\PW\cPZ}$. The expected $\PH \to \cPZ\cPZ \to 4\ell$ event yield is scaled by $\mu_{\cPZ\cPZ}$, while the expected untagged $\PH \to \PW\PW \to \ell\nu\ell\nu$ event yield is scaled by $R_{\PW\cPZ} \cdot \mu_{\cPZ\cPZ}$. The mass of the observed state is fixed to 125.5\GeV. The test statistic $q(R_{\PW\cPZ})$ as a function of $R_{\PW\cPZ}$, with $\mu_{\cPZ\cPZ}$ included with the other nuisance parameters, is shown in Fig.~\ref{fig:fit_rwz_scan} (left) and yields $R_{\PW\cPZ} =$ \Rwz, where the uncertainty is the combined statistical and systematic. The contributions from VBF and VH production to the fit give a small bias of 0.02 when relating the observed event-yield ratio $R_{\PW\cPZ}$ to the square of the ratio of the couplings $\lambda^2_{\PW\cPZ}$. Hence, the current measurements are consistent, within the uncertainties, with the expectation from custodial symmetry.

\begin{figure*}[bhtp] \centering \includegraphics[width=0.49\textwidth]{figures/comb/sqr_rwz_scan_1d_ggH_HPA} \hfill \includegraphics[width=0.49\textwidth]{figures/comb/sqr_lwz_scan_1d_all} \caption{ (Left) The likelihood test statistic $q(R_{\PW\cPZ})$ as a function of the event-rate modifier $R_{\PW\cPZ}$ from the combined untagged $\PH \to \PW\PW \to \ell\cPgn\ell\cPgn$ and inclusive $\PH \to \cPZ\cPZ \to 4\ell$ searches. (Right) The test statistic $q(\lambda_{\PW\cPZ})$ as a function of the ratio of the couplings to $\PW$ and $\cPZ$ bosons, $\lambda_{\PW\cPZ}$, from the combination of all channels. The intersections of the curves with the horizontal lines $q=1$ and $q=3.8$ give the 68\% and 95\% CL intervals, respectively. } \label{fig:fit_rwz_scan} \end{figure*}

In the second method, we extract $\lambda_{\PW\cPZ}$ directly from the combination of all search channels. In this approach, we use three parameters: $\lambda_{\PW\cPZ}$, $\kappa_{\cPZ}$, and $\kappa_F$. The latter is a single scale factor for all Higgs boson couplings to fermions. The partial width for decays to BSM particles, $\Gamma_{\mathrm{BSM}}$, is set to zero. The partial width $\Gamma_{\cPg\cPg}$, induced by quark loops, scales as $\kappa_F^2$.
The partial width $\Gamma_{\Pgg\Pgg}$ is also induced via loop diagrams, with the \PW\ boson and top quark being the dominant contributors; hence, it scales as $| \alpha \, \kappa_{\PW} + \beta \, \kappa_F |^2$, where $\kappa_{\PW} = \lambda_{\PW\cPZ} \cdot \kappa_{\cPZ}$ and the ratio of the factors $\alpha$ and $\beta$, $\beta / \alpha \approx -0.22$, is taken from the prediction for the SM Higgs boson with $\ensuremath{m_{\PH}}=125.5\GeV$~\cite{Spira:1995rr}. In the evaluation of $q (\lambda_{\PW\cPZ})$, both $\kappa_{\cPZ}$ and $\kappa_F$ are included with the other nuisance parameters. Assuming a common scaling factor for all fermions makes this measurement model dependent, but using all the channels gives it greater sensitivity. The results are shown in Fig.~\ref{fig:fit_rwz_scan} (right) by the solid line. The dashed line indicates the median expected result for the SM Higgs boson, given the integrated luminosity. The measured value is $\lambda_{\PW\cPZ} = 1.1^{+0.5}_{-0.3}$, where the uncertainty is the combined statistical and systematic. The result is consistent with the expectation of $\lambda_{\PW\cPZ} =1$ from custodial symmetry. In all further combinations presented below, we assume $\lambda_{\PW\cPZ}=1$ and use a common factor $\kappa_V$ to modify the Higgs boson couplings to $\PW$ and $\cPZ$ bosons. \textit{B. Test of the couplings to vector bosons and fermions} We further test the consistency of the measurements with the SM Higgs boson hypothesis by fitting for the two free parameters $\kappa_V$ and $\kappa_F$ introduced above. We assume $\Gamma_{\mathrm{BSM}}=0$, \ie no BSM Higgs boson decay modes. At lowest order, all partial widths, except for $\Gamma_{\Pgg\Pgg}$, scale either as $\kappa^2_V$ or $\kappa^2_F$. As discussed above, the partial width $\Gamma_{\Pgg\Pgg}$ scales as $| \alpha \, \kappa_V + \beta \, \kappa_F |^2$. Hence, $\Pgg\Pgg$ is the only channel sensitive to the relative sign of $\kappa_V$ and $\kappa_F$. Figure~\ref{fig:cVcF_2D} shows the 2D likelihood test statistic over the $(\kappa_V,\,\kappa_F)$ plane. The left plot allows for different signs of $\kappa_V$ and $\kappa_F$, while the right plot constrains both of them to be positive. The 68\%, 95\%, and 99.7\% CL contours are shown by the solid, dashed, and dotted lines, respectively. The global minimum in the left plot occurs in the $(+,-)$ quadrant, which is due to the observed excess in the $\Pgg\Pgg$ channel. If the relative sign between $\kappa_V$ and $\kappa_F$ is negative, the interference term between the $\PW$ and top-quark loops responsible for the $\PH \to \Pgg\Pgg$ decays becomes positive and helps boost the $\Pgg\Pgg$ branching fraction. However, the difference between the global minimum in the $(+,-)$ quadrant and the local minimum in the $(+,+)$ quadrant is not statistically significant since the 95\% CL contours encompass both of them. The data are consistent with the expectation for the SM Higgs boson: the point at ($\kappa_V,\kappa_F$) = (1, 1), shown by the diamond, is within the 95\% CL contour. Any significant deviation from ($\kappa_V,\kappa_F$) = (1, 1) would imply BSM physics, with the magnitude and sign of the $\kappa_V$ and $\kappa_F$ measurements providing a clue to the most plausible BSM scenarios. Figure~\ref{fig:cVcF_subchannels} displays the corresponding 68\% and 95\% contours of $\kappa_V$ versus $\kappa_F$ from each of the individual decay modes, restricting the parameters to the $(+,+)$ and $(+,-)$ quadrants (left), and the $(+,+)$ quadrant (right). 
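As a rough numerical illustration of the sign sensitivity of the $\Pgg\Pgg$ channel discussed above (a back-of-the-envelope check based only on the ratio $\beta/\alpha \approx -0.22$ quoted earlier), flipping the sign of $\kappa_F$ at $\kappa_V = 1$ rescales the diphoton partial width by
\[
\frac{\Gamma_{\Pgg\Pgg}(\kappa_V=1,\,\kappa_F=-1)}{\Gamma_{\Pgg\Pgg}(\kappa_V=1,\,\kappa_F=+1)}
= \left| \frac{1 + 0.22}{1 - 0.22} \right|^{2} \approx 2.4,
\]
which is why the observed diphoton excess can pull the best-fit point into the $(+,-)$ quadrant.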
The hypothesis of a ``fermiophobic'' Higgs boson that couples only to bosons is represented by the point at (1, 0). The point is just outside the 95\% CL contour, which implies that a fermiophobic Higgs boson with \ensuremath{m_{\PH}}\ = 125.5\GeV is excluded at 95\% CL.

The 1D likelihood scans versus $\kappa_V$ and $\kappa_F$, obtained by fixing the other parameter to its SM value of 1, are given in the left and right plots of Fig.~\ref{fig:cVcF_1D}, respectively. The resulting fit values are $\kappa_V = 1.00 \pm 0.13$ and $\kappa_F = 0.5 \pm 0.2$, where the uncertainties are combined statistical and systematic, with corresponding 95\% CL intervals of \CVNF\ and \CFNF, respectively.

\begin{figure*} \centering \includegraphics[width=0.49\textwidth]{figures/comb/sqr_cvcf_scan_2d_comb_HPA_smallGGScale} \hfill \includegraphics[width=0.49\textwidth]{figures/comb/sqr_cvcf_cut_scan_2d_comb_HPA_smallGGScale} \caption{ The likelihood test statistic in the $\kappa_V$ versus $\kappa_F$ plane. The cross indicates the best-fit values. The solid, dashed, and dotted lines show the 68\%, 95\%, and 99.7\% CL contours, respectively. The diamond shows the SM point $(\kappa_V, \kappa_F)$ = (1, 1). The left plot allows for different signs of $\kappa_V$ and $\kappa_F$, while the right plot constrains them both to be positive. } \label{fig:cVcF_2D} \end{figure*}

\begin{figure*} \centering \resizebox{!}{0.44\textwidth}{\includegraphics{figures/comb/cVcF_all_channels_2quadrant}} \hfill \resizebox{!}{0.44\textwidth}{\includegraphics{figures/comb/cVcF_all_channels_1quadrant}} \caption{ The 68\% CL contours for the test statistic in the $\kappa_V$ versus $\kappa_F$ plane for individual channels (coloured regions) and the overall combination (solid thick lines). The thin dashed lines show the 95\% CL range for the overall combination. The black cross indicates the global best-fit values. The diamond shows the SM Higgs boson point $(\kappa_V, \kappa_F)$ = (1, 1). The point $(\kappa_V, \kappa_F)$ = (1, 0), indicated by the circle, corresponds to the fermiophobic Higgs boson scenario. The left plot allows for different signs of $\kappa_V$ and $\kappa_F$, while the right plot constrains them both to be positive. } \label{fig:cVcF_subchannels} \end{figure*}

\begin{figure*} \centering \includegraphics[width=0.49\textwidth]{figures/comb/sqr_cvcf_cut_scan_2d_comb_HPA_smallGGScale_slice_CF} \hfill \includegraphics[width=0.49\textwidth]{figures/comb/sqr_cvcf_cut_scan_2d_comb_HPA_smallGGScale_slice_CV} \caption{ The likelihood test statistic $q(\kappa_V;\kappa_F=1)$ (left) and $q(\kappa_F;\kappa_V=1)$ (right). The intersections with the horizontal lines $q=1$ and $q=3.84$ mark the 68\% and 95\% CL intervals, respectively, as shown by the vertical lines. } \label{fig:cVcF_1D} \end{figure*}

\textit{C. Test for the presence of BSM particles}

The presence of BSM particles can considerably modify the Higgs boson phenomenology, even if the underlying Higgs boson sector in the model remains unaltered. Processes induced by loop diagrams ($\PH \to \Pgg\Pgg$ and $\cPg\cPg \to \PH$) can be particularly sensitive to the presence of new particles. Therefore, we fit the combined data for the scale factors $\kappa_{\Pgg}$ and $\kappa_\cPg$ associated with these two processes. The partial widths associated with the tree-level production processes and decay modes are assumed to be unaltered. Figure~\ref{fig:BSM1} displays the likelihood test statistic in the $\kappa_\cPg$ versus $\kappa_{\Pgg}$ plane, under the assumption that $\Gamma_{\mathrm{BSM}}=0$.
The results are consistent with the expectation for the SM Higgs boson, $(\kappa_{\Pgg}, \kappa_\cPg)=(1,\,1)$. The best-fit value is $(\kappa_{\Pgg}, \kappa_\cPg)=(1.5,\,0.75)$. Figure~\ref{fig:BSM2} gives the likelihood test statistic versus $\mathrm{BR}_{\mathrm{BSM}}=\Gamma_{\mathrm{BSM}}/\Gamma_{\text{tot}}$, with $\kappa_\cPg$ and $\kappa_{\Pgg}$ included as unconstrained nuisance parameters. The resulting 95\% CL upper limit is $\mathrm{BR}_{\mathrm{BSM}} < 0.89$.

\begin{figure*}[bhtp] \centering \includegraphics[width=0.49\textwidth]{figures/comb/sqr_kgkglu_scan_2d_comb_HPA} \caption{ The likelihood test statistic $q(\kappa_{\Pgg}, \kappa_\cPg)$ assuming $\Gamma_{\mathrm{BSM}}=0$. The cross indicates the best-fit values. The solid, dashed, and dotted lines show the 68\%, 95\%, and 99.7\% CL contours, respectively. The diamond shows the SM point $(\kappa_{\Pgg}, \kappa_\cPg)$ = (1, 1). The partial widths associated with the tree-level production processes and decay modes are assumed to be unaltered ($\kappa = 1$). } \label{fig:BSM1} \end{figure*}

\begin{figure*}[bhtp] \centering \includegraphics[width=0.49\textwidth]{figures/comb/sqr_brinv_scan_1d_all} \caption{ The likelihood test statistic $q$ versus $\mathrm{BR}_{\mathrm{BSM}}=\Gamma_{\mathrm{BSM}}/\Gamma_{\mathrm{tot}}$, with the parameters $\kappa_\cPg$ and $\kappa_{\Pgg}$ included as nuisance parameters. The solid curve is the data; the dashed curve indicates the expected median results in the presence of the SM Higgs boson. The intersections with the horizontal lines $q=1$ and $q=3.8$ give the 68\% and 95\% CL intervals, respectively. The partial widths associated with the tree-level production processes and decay modes are assumed to be unaltered ($\kappa = 1$). } \label{fig:BSM2} \end{figure*}

\textit{D. Test for differences in the couplings to fermions}

In two-Higgs-doublet models (2HDM)~\cite{Branco:2011iw}, the couplings of the neutral Higgs bosons to fermions can be substantially modified with respect to the Yukawa couplings of the SM Higgs boson. For example, in the minimal supersymmetric standard model (MSSM), the couplings of the neutral Higgs bosons to up-type and down-type fermions are modified, with the modification being the same for all three generations and for quarks and leptons. In more general 2HDMs, the leptons can be nearly decoupled from a Higgs boson that otherwise behaves like the SM Higgs boson with respect to the $\PW$ and $\cPZ$ bosons and the quarks. To test for such modifications of the fermion couplings, we evaluate two different combinations of the corresponding parameters: one in which the ratio of the couplings to down- and up-type fermions ($\lambda_{\mathrm{du}} = \kappa_{\mathrm{d}} / \kappa_{\mathrm{u}}$) is allowed to vary, and another in which the ratio of the couplings to leptons and quarks ($\lambda_{\ell\mathrm{q}} = \kappa_{\ell} / \kappa_{\mathrm{q}}$) is allowed to vary. We assume that $\Gamma_{\mathrm{BSM}}=0$.

Figure~\ref{fig:fit_ldu_llq_scan} (left) shows the resulting test statistic versus $\lambda_{\mathrm{du}}$, with the other free coupling modifiers, $\kappa_V$ and $\kappa_{\mathrm{u}}$, included as unconstrained nuisance parameters. The two choices of relative sign between the couplings to up- and down-type fermions are nearly degenerate, which manifests itself in the left-right symmetry observed in the plot.
The symmetry is not perfect since there is some sensitivity to the sign of $\lambda_{\mathrm{du}}$ because of the nonvanishing role of the $\cPqb$ quark (in comparison to the top quark) in generating the Higgs boson coupling to gluons. Figure~\ref{fig:fit_ldu_llq_scan} (right) displays the corresponding results versus $\lambda_{\ell \mathrm{q}}$, with the two coupling modifiers, $\kappa_V$ and $\kappa_{\mathrm{q}}$, treated as unconstrained nuisance parameters. There are no loop-induced processes measurably sensitive to the relative sign of the couplings to leptons and quarks; hence, the plot exhibits a perfect left-right symmetry. Both $| \lambda_{\mathrm{du}} |$ and $| \lambda_{\ell \mathrm{q}} |$ are consistent with 0 and 1, with a 95\% CL upper limit of 1.5 for both. The main reason for both parameters having their best-fit values close to 0 is the lack of any event excess in the $\PH \to \Pgt\Pgt$ channel. However, neither the $\PH \to \Pgt\Pgt$ nor the $\PH \to \cPqb\cPqb$ channel has reached sufficient sensitivity to place strong constraints on the parameters associated with the corresponding Higgs boson couplings.

\begin{figure*}[bhtp] \centering \includegraphics[width=0.49\textwidth]{figures/comb/sqr_ldu_scan_1d_all} \hfill \includegraphics[width=0.49\textwidth]{figures/comb/sqr_llq_scan_1d_all} \caption{ (Left) Likelihood test statistic $q$ as a function of the ratio $\lambda_{\mathrm{du}}$ of the couplings to down- and up-type fermions, with the coupling modifiers $\kappa_V$ and $\kappa_{\mathrm{u}}$ treated as nuisance parameters. (Right) The likelihood test statistic as a function of the ratio $\lambda_{\ell \mathrm{q}}$ of the couplings to leptons and quarks, with the coupling modifiers $\kappa_V$ and $\kappa_{\mathrm{q}}$ treated as nuisance parameters. The solid curves are the results from the data. The dashed curves show the expected distributions for the SM Higgs boson. The intersections of the curves with the horizontal lines $q=1$ and $q=3.8$ give the 68\% and 95\% CL intervals, respectively. } \label{fig:fit_ldu_llq_scan} \end{figure*}

\section{Summary}\label{sec:Conclusion}
In this paper, the analyses that were the basis for the discovery of a new boson at a mass of approximately 125\GeV have been described in detail. The data were collected by the CMS experiment at the LHC in proton-proton collisions at $\sqrt{s}=7$~and~8\TeV, corresponding to integrated luminosities of up to 5.1\fbinv and 5.3\fbinv, respectively. The particle is observed in the search for the SM Higgs boson using five decay modes: $\Pgg\Pgg$, $\cPZ\cPZ$, $\PW\PW$, $\Pgt\Pgt$, and $\cPqb\cPqb$. An excess of events is found above the expected background, with a local significance of 5.0$\sigma$, signaling the production of a new particle. The expected significance for a SM Higgs boson of that mass is 5.8$\sigma$. The excess is most significant in the two decay modes with the best mass resolution, $\Pgg\Pgg$ and $\cPZ\cPZ \to 4\ell$, and a fit to these invariant-mass peaks gives a mass of $125.3\pm 0.4\stat\pm 0.5\syst\GeV$. The decay to two photons indicates that the new particle is a boson with spin different from one. Within the SM hypothesis, the couplings of the new particle to vector bosons, fermions, gluons, and photons have been measured. All the results are consistent, within their uncertainties, with expectations for a SM Higgs boson. More data are needed to ascertain whether the properties of this new state imply physics beyond the SM.
\section*{Acknowledgements} \hyphenation{Bundes-ministerium Forschungs-gemeinschaft Forschungs-zentren} We congratulate our colleagues in the CERN accelerator departments for the excellent performance of the LHC and thank the technical and administrative staffs at CERN and at other CMS institutes for their contributions to the success of the CMS effort. In addition, we gratefully acknowledge the computing centres and personnel of the Worldwide LHC Computing Grid for delivering so effectively the computing infrastructure essential to our analyses. Finally, we acknowledge the enduring support for the construction and operation of the LHC and the CMS detector provided by the following funding agencies: the Austrian Federal Ministry of Science and Research and the Austrian Science Fund; the Belgian Fonds de la Recherche Scientifique, and Fonds voor Wetenschappelijk Onderzoek; the Brazilian Funding Agencies (CNPq, CAPES, FAPERJ, and FAPESP); the Bulgarian Ministry of Education, Youth and Science; CERN; the Chinese Academy of Sciences, Ministry of Science and Technology, and National Natural Science Foundation of China; the Colombian Funding Agency (COLCIENCIAS); the Croatian Ministry of Science, Education and Sport; the Research Promotion Foundation, Cyprus; the Ministry of Education and Research, Recurrent financing contract SF0690030s09 and European Regional Development Fund, Estonia; the Academy of Finland, Finnish Ministry of Education and Culture, and Helsinki Institute of Physics; the Institut National de Physique Nucl\'eaire et de Physique des Particules~/~CNRS, and Commissariat \`a l'\'Energie Atomique et aux \'Energies Alternatives~/~CEA, France; the Bundesministerium f\"ur Bildung und Forschung, Deutsche Forschungsgemeinschaft, and Helmholtz-Gemeinschaft Deutscher Forschungszentren, Germany; the General Secretariat for Research and Technology, Greece; the National Scientific Research Foundation, and National Office for Research and Technology, Hungary; the Department of Atomic Energy and the Department of Science and Technology, India; the Institute for Studies in Theoretical Physics and Mathematics, Iran; the Science Foundation, Ireland; the Istituto Nazionale di Fisica Nucleare, Italy; the Korean Ministry of Education, Science and Technology and the World Class University program of NRF, Republic of Korea; the Lithuanian Academy of Sciences; the Mexican Funding Agencies (CINVESTAV, CONACYT, SEP, and UASLP-FAI); the Ministry of Science and Innovation, New Zealand; the Pakistan Atomic Energy Commission; the Ministry of Science and Higher Education and the National Science Centre, Poland; the Funda\c{c}\~ao para a Ci\^encia e a Tecnologia, Portugal; JINR (Armenia, Belarus, Georgia, Ukraine, Uzbekistan); the Ministry of Education and Science of the Russian Federation, the Federal Agency of Atomic Energy of the Russian Federation, Russian Academy of Sciences, and the Russian Foundation for Basic Research; the Ministry of Science and Technological Development of Serbia; the Secretar\'{\i}a de Estado de Investigaci\'on, Desarrollo e Innovaci\'on and Programa Consolider-Ingenio 2010, Spain; the Swiss Funding Agencies (ETH Board, ETH Zurich, PSI, SNF, UniZH, Canton Zurich, and SER); the National Science Council, Taipei; the Thailand Center of Excellence in Physics, the Institute for the Promotion of Teaching Science and Technology of Thailand and the National Science and Technology Development Agency of Thailand; the Scientific and Technical Research Council of Turkey, and Turkish Atomic 
Energy Authority; the Science and Technology Facilities Council, UK; the US Department of Energy, and the US National Science Foundation. Individuals have received support from the Marie-Curie programme and the European Research Council and EPLANET (European Union); the Leventis Foundation; the A. P. Sloan Foundation; the Alexander von Humboldt Foundation; the Belgian Federal Science Policy Office; the Fonds pour la Formation \`a la Recherche dans l'Industrie et dans l'Agriculture (FRIA-Belgium); the Agentschap voor Innovatie door Wetenschap en Technologie (IWT-Belgium); the Ministry of Education, Youth and Sports (MEYS) of Czech Republic; the Council of Science and Industrial Research, India; the Compagnia di San Paolo (Torino); and the HOMING PLUS programme of Foundation for Polish Science, cofinanced from European Union, Regional Development Fund.
\section{Introduction} The discovery that string theory admits an enormous number of flux vacua \cite{Strominger:1986uh,Polchinski:1995,Giddings:2001yu} has played a formative role in the theory's development for well over a decade. Aspects of these vacua, from their phenomenological and cosmological properties to their distribution and statistical features, have been extensively studied \cite{Grana:2005jc,Douglas:2006es}. Yet, due to the complexity of this landscape of vacua, many basic questions remain. In this paper we undertake a study, pursued from a variety of perspectives in a number of works \cite{Kachru:2003sx,Ceresole:2006iq,Dine:2007er, Sarangi:2007jb, Tye:2007ja, Podolsky:2008du, Brown:2010bc, Brown:2007zzh}, that is of potential relevance to one such vital question: Do we expect these vacua to be long-lived? As a direct analysis would present formidable challenges, we instead consider generic, field theoretic models of the landscape and study how the stability of vacua varies as the dimension of the moduli space (the number of fields) increases. Our results suggest that tunneling rates, and hence vacuum instability, grow so rapidly with the number of moduli that the probability of a given local minimum being metastable is exponentially small. In field theory, vacuum decay by quantum tunneling was studied by Coleman~\cite{Coleman:1977py,Callan:1977pt}. He showed that the decay proceeded by the nucleation of bubbles of a lower vacuum inside the original false vacuum. In the semiclassical approximation the nucleation rate per unit volume is governed by a bounce solution of the Euclidean field equations. It can be written in the form $\Gamma= A e^{-B}$, where $A$ depends on the determinant of fluctuations around the bounce solution and $B$ is the Euclidean action of the bounce. The analysis was extended to include gravitational effects by Coleman and De Luccia~\cite{Coleman:1980aw}. For decay from a de Sitter vacuum the resulting corrections to the nucleation rate are typically small unless the potentials are Planckian in scale or the bubbles nucleate with a size comparable to the horizon length. With unusually flat potential barriers it can happen that there is no Coleman-De Luccia bounce, but in such cases there is always a Hawking-Moss~\cite{Hawking:1981fz} solution corresponding to a process in which an entire horizon volume fluctuates to the top of the potential barrier. In string theory the large number of vacua and moduli fields complicates the situation, but at the same time opens up lines of attack based on statistical analysis. For example, Denef and Douglas \cite{Denef:2004ze} proposed a method of calculating the density of flux vacua in the string landscape in terms of the K\"ahler potential on the moduli space of a given Calabi-Yau compactification. Their work showed that a sharp accumulation of vacua generally occurs near the conifold locus in moduli space. Dine et al. \cite{Dine:2007er} used scaling arguments to conclude that vacua in the string landscape with small cosmological constant become unstable when fluxes are large relative to their compactification volume. Chen et al.~\cite{Chen:2011ac} and Marsh et al.~\cite{Marsh:2011aa} showed that increasing the number of moduli fields exponentially suppresses the chance that a randomly chosen critical point will be a minimum, with the suppression growing for vacua with high energy.
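The suppression found in Refs.~\cite{Chen:2011ac,Marsh:2011aa} can be illustrated in a toy setting: if the Hessian at a random critical point is modeled by a random symmetric matrix, the fraction of draws with all eigenvalues positive falls rapidly with the number of fields. The Gaussian Hessian ensemble in the following sketch is an illustrative assumption, not the measure used in those works.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

# Fraction of random symmetric (GOE-like) Hessians that are positive
# definite, i.e. for which a random critical point would be a minimum.
# For the larger N the fraction underflows to zero at this sample size.
for N in (2, 3, 4, 5, 6, 8, 10):
    trials = 20000
    hits = 0
    for _ in range(trials):
        A = rng.standard_normal((N, N))
        H = (A + A.T) / 2.0
        if np.all(np.linalg.eigvalsh(H) > 0.0):
            hits += 1
    print(N, hits / trials)
\end{verbatim}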
In this paper we also pursue a statistical approach, focusing our analysis on effective field theory models of many-dimensional moduli spaces. We provide numerical evidence that the rate for tunneling out of a typical false vacuum grows rapidly as a function of the number of moduli fields. Specifically, the fraction of vacua with tunneling rates low enough to maintain metastability appears to fall as an exponential of a power of the moduli space dimension. In Sec.~\ref{many-fields} we describe our approach for estimating tunneling rates in field theoretical models of high-dimensional moduli spaces. Our numerical methods and results are described in Sec.~\ref{numerics}. These results reveal a general feature of high-dimensional field theories and are independent of applications to the string landscape. In Sec.~\ref{implications} we discuss the efficacy of using random potentials to model the string landscape, and emphasize various considerations that would need to be resolved before a direct application could be justified. In Sec.~\ref{Multiverse} we estimate the maximum dimension of moduli spaces whose associated flux landscape would be expected to generically contain metastable vacua, assuming that our numerics and extrapolations are applicable. The result is a drastic reduction in the number of metastable vacua. Finally, in Sec.~\ref{Conclusions} we suggest future directions for studying these models and summarize our conclusions. \section{Vacuum Decay} \label{many-fields} We consider the dynamics of a moduli space spanned by $N$ scalar fields $\phi_j$ with a Lagrangian \begin{equation} {\cal L} = \frac12 \sum_{j=1}^N \, \partial_\mu \phi_j \partial^\mu \phi_j - V(\phi_1,\phi_2,\dots, \phi_N) \, . \end{equation} The potential $V$ will in general have many local minima that correspond to metastable false vacua. Let us consider one of these which, by a shifting of the field variables, can be taken to lie at the origin of field space, $\phi=0$. Assuming the potential to be smooth at this point, we can expand it in a power series \begin{equation} V = \lambda \left( \sum_i A_{ii}^{(2)} \phi_i^2 v^2 + \sum_{ijk} A_{ijk}^{(3)} \phi_i \phi_j \phi_k v + \sum_{ijkl} A_{ijkl}^{(4)} \phi_i \phi_j \phi_k \phi_l + ... \right) \, . \label{Vexpansion} \end{equation} Here $v$, with dimensions of mass, is a characteristic distance in field space that corresponds to a typical distance between stationary points of $V$. We have also extracted a dimensionless constant $\lambda$, to be chosen so that the dimensionless coefficients of the power series in brackets are of order unity. Finally, we have used the freedom to make an O($N$) transformation on the fields to eliminate off-diagonal terms in the quadratic term of the power series. The exponent $B$ in the bubble nucleation rate is the action of the Euclidean bounce solution. We ignore gravitational effects and assume O(4) symmetry, with the fields being functions only of $s=\sqrt{{\bf x}^2 + x_4^2}$. The bounce then satisfies \begin{equation} {d^2\phi_j \over ds^2} + \frac{3}{s}\, {d\phi_j \over ds} = {\partial V \over \partial \phi_j} \, . \end{equation} The boundary conditions are that $\phi(\infty)=0$, its false vacuum value, and that $\phi'(0)=0$. The actual value of the field at the origin is not determined in advance, but must be a point on the opposite side of the potential barrier from the false vacuum. Note that, except in the thin-wall limit, $\phi(0)$ is never equal to the true vacuum value. 
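For a single field the bounce can be found by the standard overshoot/undershoot method: one bisects on $\phi(0)$ until the solution asymptotes to the false vacuum. A minimal sketch for an illustrative potential follows; the coefficients and the bracketing guesses are hypothetical, chosen only to produce a metastable minimum at the origin.
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative single-field potential with a false vacuum at phi = 0 and a
# deep true vacuum near phi ~ 3.85; all coefficients are hypothetical.
def dV(phi):
    return phi - 1.8 * phi**2 + 0.4 * phi**3

def rhs(s, y):
    # O(4)-symmetric bounce equation: phi'' + (3/s) phi' = V'(phi)
    return [y[1], dV(y[0]) - 3.0 * y[1] / s]

def crossed(s, y):       # terminal event: phi crossed past the false vacuum
    return y[0] + 1e-3
crossed.terminal = True

def overshoots(phi0):
    sol = solve_ivp(rhs, (1e-6, 60.0), [phi0, 0.0],
                    events=crossed, rtol=1e-10, atol=1e-12)
    return sol.t_events[0].size > 0

# Bisect phi(0) between an undershooting and an overshooting guess
# (brackets checked by inspection for this particular potential).
lo, hi = 1.0, 3.8
for _ in range(60):
    mid = 0.5 * (lo + hi)
    lo, hi = (lo, mid) if overshoots(mid) else (mid, hi)
print("phi(0) of the bounce ~", 0.5 * (lo + hi))
\end{verbatim}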
We will consider large ensembles of potentials, with the coefficients in the power series chosen randomly, as described in more detail in the next section. Ideally, we would calculate the nucleation rate for each potential by solving the bounce equations. However, finding bounce solutions in a theory with more than one scalar field is a daunting numerical problem. Doing so for a large sample of potentials is clearly infeasible. Instead, we invoke a more easily calculable proxy that can provide an indication of how the decay rate varies with the number of fields. A bounce solution may be viewed as containing a wall region, in which the fields pass through the barrier in $V(\phi)$, and an interior region where the fields are close to their values in the new vacuum. The distinction between the two regions becomes exact in the thin-wall approximation, which is valid in the limit where $\epsilon$, the difference between the energy densities of the true and false vacua, tends to zero. In this approximation the fields in the interior region take on exactly their true vacuum values. The radius $R$ of the bounce is determined by a balance between the negative action in the bounce interior and the positive action in the wall. With a single scalar field, this wall action is the product of the three-dimensional area of the bounce and a surface tension that is well approximated by \begin{equation} \sigma = \left|\int_{\phi_{\rm fv}}^{\phi_*} d\phi \, \sqrt{2[V(\phi) - V_{\rm fv}]} \right| \, , \label{sigma-def} \end{equation} where $\phi_*$ denotes the point on the true vacuum side of the barrier such that $V(\phi_*) = V_{\rm fv}$. With additional scalar fields, $\sigma$ is obtained by integrating along a path in field space running from the false vacuum to a point near the true vacuum, with the path and endpoint chosen to minimize the integral. The net result is that $R = 3\sigma/\epsilon$, while the tunneling exponent is \begin{equation} B = {\pi^2\over 2} \sigma R^3 = {27\pi^2 \sigma^4 \over 2 \epsilon^3} \, . \label{twaB} \end{equation} Thus, in the rare cases in which we have two almost degenerate vacua, the thin-wall approximation is applicable, $B$ is large, bubble nucleation is greatly suppressed, and the false vacuum is long-lived. Since our focus is on effects that enhance bubble nucleation, this approximation is not of direct interest to us. Nevertheless, we can draw some useful insight from it. Equation~(\ref{twaB}) shows a strong dependence on $\epsilon$. This cannot continue outside the thin-wall approximation, because then the field in the bounce solution never reaches the true vacuum, and so cannot be directly sensitive to the value of $\epsilon$. On the other hand, the surface tension is closely related to the form of the potential barrier and should continue to be relevant. Outside the thin-wall limit, the boundary between the wall and the bounce interior is not well defined. A reasonable prescription would be to take it to be the hypersurface in Euclidean space on which $V(\phi)$ is equal to its false vacuum value. We could then define a quantity $\sigma$ as before, with the integration path running from the false vacuum to a point on the hypersurface $\Sigma$ in field space, lying on the other side of the potential barrier, on which $V(\phi)=V_{\rm fv}$. The path and the specific endpoint on $\Sigma$ would be chosen to minimize the integral. Unfortunately, performing the required minimization for a large ensemble of potentials is still calculationally infeasible.
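For a single field, Eqs.~(\ref{sigma-def}) and (\ref{twaB}) reduce the thin-wall estimate to a quadrature. A short sketch for a nearly degenerate double well (the parameters are hypothetical; the tilt $\epsilon$ plays the role of the small vacuum-energy splitting):
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

# Thin-wall estimate for a single-field double well; parameters hypothetical.
lam, a, eps = 1.0, 1.0, 0.01
def V(phi):
    # degenerate wells at phi = 0 and phi = a, tilted by the small eps term
    return 0.125 * lam * phi**2 * (phi - a)**2 - eps * phi / a

V_fv = V(0.0)

# sigma: integral of sqrt(2 [V - V_fv]) across the barrier; the small
# negative stretch near phi = a caused by the tilt is clipped to zero.
sigma, _ = quad(lambda p: np.sqrt(max(2.0 * (V(p) - V_fv), 0.0)), 0.0, a)

R = 3.0 * sigma / eps                              # critical bubble radius
B = 27.0 * np.pi**2 * sigma**4 / (2.0 * eps**3)    # Eq. (twaB)
print(f"sigma ~ {sigma:.4f}, R ~ {R:.1f}, B ~ {B:.3g}")
\end{verbatim}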
However, a plausible approximation is at hand. We might expect the minimizing path to pass through the region in field space where the barrier in $V(\phi)$ is lowest. This suggests considering a straight-line path running from $\phi=0$, through each saddle point, $\phi_{\rm sp}$, on the surrounding barrier, and ending on $\Sigma$. In fact, since on average the contributions from the segments before and after the saddle point will be equal, we follow a slightly simpler approach. We integrate over only the first part of the path, extract factors of $\lambda$ and $v$, and define the line integral \begin{eqnarray} \tilde\sigma &=& 2 \int_P d\phi\, \sqrt{2[V(\phi) - V(0)]} \cr & \equiv& \sqrt{\lambda} \, v^3\, \tilde s \, , \end{eqnarray} where $P$ is a straight-line path in field space running from the false vacuum at $\phi=0$ to the saddle point. There may be many saddle points, and thus many such paths. We expect the tunneling rate to be controlled by the path with the lowest tunneling integral, and define the corresponding value of $\tilde s$ to be \begin{equation} \tilde s_{\rm min} = s \, . \end{equation} Standard scaling arguments show that $B$ is inversely proportional to $\lambda$, but independent of $v$, although $v$ does affect the prefactor $A$ in $\Gamma$. Outside the thin-wall limit, the typical bounce radius is $R\sim k (\sqrt{\lambda}\,v )^{-1}$, where $k$ is a numerical factor of order unity. This suggests that \begin{equation} B \sim \pi^2 R^3 \tilde \sigma \sim {\pi^2 \over \lambda} \,k^3 \, s \, , \label{B-estimate} \end{equation} where we have included a factor of $\pi^2$ because of the four-dimensional spherical symmetry. A simple test of this estimate can be obtained by numerically evaluating $B$ and $s$ for a single-field potential with cubic and quartic terms. Inserting the results into Eq.~(\ref{B-estimate}) and taking $v$ to be half the difference between the true and false vacuum values of $\phi$, one obtains values of $k$ that vary between 5.2 and 6.0 over a wide range of parameters away from the thin-wall limit. This suggests \begin{equation} B \sim 10^3 \, {s \over \lambda} \, . \label{estimateB} \end{equation} The dilute-gas approximation that underlies the semiclassical approach to bubble nucleation breaks down when $B$ is comparable to or less than unity. In this regime the metastability of the false vacuum has essentially disappeared. The numerical studies that we describe in the next section show that when many scalar fields are present, the overwhelming majority of potentials lead to a value of $B$ too small to maintain metastability with any plausible value of $\lambda$. \section{Numerical Studies} \label{numerics} We numerically studied ensembles of theories with potentials of the form of Eq.~(\ref{Vexpansion}), using $s$ as an indicator of the vacuum stability. To make the calculations manageable we truncated the power series at the quartic terms. An ensemble was defined by taking the $A^{(n)}$ to be random numbers uniformly distributed over ranges defined by \begin{eqnarray} A_{ii}^{(2)} & \in & [0,a_2] \, ,\cr A_{ijk}^{(3)} & \in & [-a_3,a_3] \, ,\cr A_{ijkl}^{(4)} & \in & [-a_4,a_4] \, . \end{eqnarray} Allowing the $A_{ijkl}^{(4)}$ to be negative means that the truncated potential is not necessarily bounded from below. This is not a problem, since we are only concerned with the behavior near the minimum; at larger distances higher-order terms can provide a lower bound on $V$.
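Putting the two ingredients together, the computation per potential is modest: draw the coefficients from the ranges above and evaluate the straight-line integral toward a saddle point. A minimal sketch with $\lambda = v = 1$ and $a_2=a_3=a_4=1$ (the values adopted below) follows; the saddle location fed in at the end is a hypothetical placeholder, since in practice it comes from the stationary-point search described shortly.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
N = 4

# Random quartic potential, Eq. (Vexpansion), with lambda = v = 1 and
# coefficient ranges a2 = a3 = a4 = 1.
A2 = rng.uniform(0.0, 1.0, N)
A3 = rng.uniform(-1.0, 1.0, (N, N, N))
A4 = rng.uniform(-1.0, 1.0, (N, N, N, N))

def V(phi):
    return (np.dot(A2, phi**2)
            + np.einsum('ijk,i,j,k->', A3, phi, phi, phi)
            + np.einsum('ijkl,i,j,k,l->', A4, phi, phi, phi, phi))

def straight_line_s(phi_sp, n=2001):
    # tilde-s: twice the line integral of sqrt(2 [V - V(0)]) along the
    # straight path from the origin to the saddle point phi_sp.
    ts = np.linspace(0.0, 1.0, n)
    f = np.array([V(t * phi_sp) for t in ts]) - V(np.zeros(N))
    g = np.sqrt(np.maximum(2.0 * f, 0.0))
    ds = np.linalg.norm(phi_sp) / (n - 1)
    return 2.0 * ds * 0.5 * (g[:-1] + g[1:]).sum()

phi_sp = np.full(N, 0.3)   # placeholder saddle location, for illustration
print(straight_line_s(phi_sp))
\end{verbatim}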
Because we will be comparing theories with different numbers of fields, an important issue is how to vary the $a_n$ as the number of fields is varied. To determine this, let us require that the typical variation of the potential in a ball of radius $\phi_R$ in field space be independent of $N$, the number of fields. This ensures that, for any number $N_0 \le N$, we can recover results for $N_0$ fields by considering an $N_0$-dimensional cross section of the analysis for $N$ fields. In turn, this ensures that dependencies we find on $N$ are not due to peculiar $N$-dependent normalizations in $V$. With this assumption, the typical value for each of the $\phi_j$ in such a ball of radius $\phi_R$ is of order $\phi_R/\sqrt{N}$. The quadratic term is a sum of positive contributions and will be independent of $N$ if $a_2$ is. There are $N^3$ cubic terms, each of magnitude $\phi_R^3/N^{3/2}$. Because these can be of either sign, they will tend to cancel, so that the effective number of terms is of order $N^{3/2}$ and we are led to take $a_3$ to also be $N$-independent. A similar argument shows that $a_4$ should also be independent of $N$. As for the actual values of the $a_n$, note that a change of any two of these can be absorbed by a redefinition of $\lambda$ and $v$. Hence, there is no loss of generality in taking $a_2=a_3=1$. We did so, and also set $a_4=1$; the effect of other choices for $a_4$ is described below. \begin{figure} \centering \includegraphics[width=3.2in]{quartic-tension.eps} \includegraphics[width=3.2in]{quartic-V.eps} \caption{Left, the median value of $s$ for quartic potentials. Right, the median value of the lowest saddle point, in units of $\lambda v^4$, for quartic potentials. In both cases the bars indicate the range from the 25th to the 75th percentile.} \label{QuarticTensionAndHeight} \end{figure} For a given value of $N$ we chose an ensemble of 10,000 potentials. For each potential we found all of the stationary points\footnote{For a description of the numerical method for finding all such points, see \cite{Mehta:2011xs}. Further details will be published elsewhere~\cite{mehta-unpub}.} and picked out the saddle points with a single negative mode. From among the saddle points for each potential we found the one that gave the smallest value of $\tilde s$ (i.e., $\tilde s_{\rm min} \equiv s$) and the one that gave the lowest barrier height. Figure~\ref{QuarticTensionAndHeight} shows the median values of these quantities within each ensemble as functions of $N$ for the quartic potential. For both quantities a sharp decrease with increasing $N$ is clearly evident. Not only do the height and surface tension of these saddle points decrease, but they also move closer to the false vacuum minimum at the origin. This can be seen in Fig.~\ref{QuarticDistance}, where we have plotted the median distance to the saddle point with lowest $\tilde s$. (A plot of median distance to the lowest saddle point is virtually indistinguishable.) All of these plots show a power law dependence on $N$ with, e.g., \begin{equation} s_{\rm median} \approx C_{\rm tension} \, N^{-\alpha_{\rm tension}} \, . \label{s-fit} \end{equation} The best fit values for the various $\alpha$ and $C$ are shown in Table~\ref{expTable}. \begin{figure} \centering \includegraphics[width=3.2in]{quartic-phi.eps} \caption{The median distance, in units of $v$, to the saddle point with $\tilde s_{\rm min}=s$ for quartic potentials.
Again, the bars indicate the range from the 25th to the 75th percentile.} \label{QuarticDistance} \end{figure} \begin{table}[htbp] \begin{center} \begin{tabular}{|l|c|c|c|} \hline & ~$\alpha_{\rm tension}$~ & ~$\alpha_{\rm height}$~& ~$\alpha_{\rm distance}$~ \\ \hline Cubic potentials & 2.73 & 3.16 & 1.15 \\ \hline Quartic potentials & 2.66 & 3.12 & 1.10 \\ \hline SUSY & 3.16 & 3.99 & 1.19 \\ \hline \end{tabular} \end{center} \begin{center} \begin{tabular}{|l|c|c|c|} \hline & ~$C_{\rm tension}$~ & ~$C_{\rm height}$~ & ~ $C_{\rm distance}$~ \\ \hline Cubic potentials & 0.26 & 0.090 & 0.67 \\ \hline Quartic potentials & 0.22 & 0.083 & 0.60 \\ \hline SUSY & 0.25 & 0.11 & 0.60 \\ \hline \end{tabular} \end{center} \caption{Best fit parameters, defined as in Eq.~(\ref{s-fit}), for power law fits to the data in Figs.~\ref{QuarticTensionAndHeight}, \ref{QuarticDistance}, \ref{CubicTensionAndHeight}, and \ref{CubicDistance}, as well as to the data for the supersymmetric potentials discussed in Sec.~\ref{implications}.} \label{expTable} \end{table} We have also plotted the number of stationary points around the false vacuum, in Fig.~\ref{QuarticExtremaMedians}. These do not follow a power law, but instead show approximately exponential behavior. \begin{figure} \centering \includegraphics[width=3.2in]{quartic-saddles.eps} \caption{The median number of saddle points for quartic potentials, with the bars indicating the range from the 25th to the 75th percentile.} \label{QuarticExtremaMedians} \end{figure} Our decision to arbitrarily terminate the expansion of the potential with the quartic terms was motivated by considerations of calculational practicality. As a test of this choice, we also carried out the calculations without the quartic terms in the potential, retaining only the quadratic and cubic terms. As can be seen from the plots in Figs.~\ref{CubicTensionAndHeight}-\ref{CubicExtremaMedians} and the data in Table~\ref{expTable}, the results are quite similar to those with the quartic term included. This leads us to conclude that our omission of quintic and higher terms has little effect on our results. \begin{figure} \centering \includegraphics[width=3.2in]{cubic-tension.eps} \includegraphics[width=3.2in]{cubic-V.eps} \caption{The same as in Fig.~\ref{QuarticTensionAndHeight}, but for cubic potentials.} \label{CubicTensionAndHeight} \end{figure} \begin{figure} \centering \includegraphics[width=3.2in]{cubic-phi.eps} \caption{The same as in Fig.~\ref{QuarticDistance}, but for cubic potentials.} \label{CubicDistance} \end{figure} \begin{figure} \centering \includegraphics[width=3.2in]{cubic-saddles.eps} \caption{The same as in Fig.~\ref{QuarticExtremaMedians}, but for cubic potentials.} \label{CubicExtremaMedians} \end{figure} We now return to the question of the dependence of the results on the ranges chosen for the coefficients in the potential. As noted previously, any change in the constants defining the ranges of the quadratic and cubic terms can be compensated by a rescaling of $\lambda$ and $v$. The dependence on the quartic coefficient range is illustrated in Fig.~\ref{vary-a4}, where we plot the median value of $s$ as a function of $a_4$ with $N=2$. We see that increasing $a_4$ produces a roughly exponential decrease in the median value of $s$; decreasing $a_4$ to zero simply recovers the purely cubic potential. Hence the net effect of increasing the range for $a_4$ would be to lead to a lower tunneling exponent, thus strengthening the effects that we find.
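The power-law fits in Table~\ref{expTable} amount to linear regression in log-log variables. A minimal sketch follows; the data points are placeholders generated from the quoted quartic best-fit values, purely to illustrate the procedure.
\begin{verbatim}
import numpy as np

# Fit s_median = C * N**(-alpha) by least squares in log-log space.
# Placeholder "data", generated from the quartic best-fit values of
# Table (expTable), so that the fit simply recovers them.
Ns = np.arange(2, 11)
s_median = 0.22 * Ns ** (-2.66)

slope, intercept = np.polyfit(np.log(Ns), np.log(s_median), 1)
alpha, C = -slope, np.exp(intercept)
print(f"alpha = {alpha:.2f}, C = {C:.2f}")
\end{verbatim}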
The other quantities, and other values of $N$, show a similar dependence on $a_4$. \begin{figure} \centering \includegraphics[width=3.2in]{quartic-vary-a4.eps} \caption{The dependence of the median value of $s$ on the range parameter $a_4$. The data shown are for $N=2$.} \label{vary-a4} \end{figure} Our data indicate that the median value of $s$ falls rapidly as the number of fields is increased. From this we can conclude that at large $N$ the typical local minimum is less likely to have a low nucleation rate and a long lifetime. This by itself does not tell us about the number of outliers, vacua with high values of $s$. To study this we examined the distributions of the quantities that we have plotted within a given ensemble. For example, in Fig.~\ref{QuarticDistributions} we show the distributions of values of $s$ for several choices of $N$ with a quartic potential. These roughly coincide when they are plotted as functions of $s/s_{\rm median}$. The data suggest that the frequency of large $s$ has an approximately exponential falloff that can be described by \begin{equation} n(s) \approx n_0 \exp\left(-\gamma s/s_{\rm median}\right) \, . \end{equation} The values of $\gamma$ for various values of $N$ are shown in Table~\ref{distribTable}. Similar results are found for the data on the height of and distance to the lowest saddle point. \begin{table}[htbp] \begin{center} \begin{tabular}{|c|c|c|c|} \hline ~$N$~ & ~ Cubic~ & ~Quartic~ & ~SUSY~ \\ \hline 1 & 0.58 & 0.46 & 0.45 \\ \hline 2 & 0.38 & 0.39 & 0.33\\ \hline 3 & 0.40 & 0.35 & 0.33 \\ \hline 4 & 0.34 & 0.33 & 0.32 \\ \hline 5 & 0.37 & 0.35 & 0.32 \\ \hline 6 & 0.37 & 0.34 & \\ \hline 7 & 0.38 & 0.34 & \\ \hline 8 & 0.38 & 0.35 & \\ \hline 9 & 0.38 & 0.37 & \\ \hline 10 & 0.35 & 0.34 & \\ \hline \end{tabular} \end{center} \caption{Best fit values for the parameter $\gamma$ in the distributions of surface tension $s$ for cubic and quartic non-supersymmetric and quartic supersymmetric potentials with various $N$.} \label{distribTable} \end{table} Inserting the fit of Eq.~(\ref{s-fit}) for $s_{\rm median}$ into this expression gives \begin{equation} n(s) \approx n_0 \exp\left(- {\gamma\over C_{\rm tension}}N^\alpha s \right) \, , \end{equation} where for convenience we have defined $\alpha \equiv \alpha_{\rm tension}$. Extrapolating to larger $N$ and using the estimate in Eq.~(\ref{estimateB}) suggests that the fraction of potentials with a tunneling exponent greater than some value $\hat B$ is roughly \begin{equation} f(\hat B) \sim \exp(- \beta \,N^\alpha \, \hat B) \, , \label{fractionEst} \end{equation} where \begin{equation} \beta = 10^{-3} \, {\gamma \lambda \over C_{\rm tension}} \, . \end{equation} Our numerical results suggest that $\gamma/C_{\rm tension}$ is close to unity. Because $\lambda$ is extracted from the shape of the landscape, there is no reason to expect it to be a small coupling constant. It could well be of order unity, in which case $\beta \sim 10^{-3}$. Equation~(\ref{fractionEst}) then represents a tremendous suppression with increasing $N$ of vacua with nucleation rates low enough to maintain metastability. \begin{figure} \centering \includegraphics[width=5.5in]{quartic-spread.eps} \caption{The distribution of values of $s$ in the ensembles for a quartic potential with various values of $N$. The vertical axis represents the natural logarithm of the number of values in each bin. For the sake of clarity, the data for different values of $N$ have been offset by constants, so only the slopes are meaningful.
Purple diamonds correspond to $N=2$; green down-pointing triangles, $N=4$; red up-pointing triangles, $N=6$; blue squares, $N=8$; and black circles, $N=10$.} \label{QuarticDistributions} \end{figure} Note that we have ignored gravitational effects in our analysis of tunneling rates. With low barriers and rapid bubble nucleation this is generally a good approximation. It should be noted, though, that in a de Sitter vacuum with a relatively flat barrier the Hawking-Moss bounce provides an alternative mode of vacuum decay. The corresponding decay exponent is \begin{equation} B_{\rm HM} = {3\over 8 G_N^2} \,{V_{\rm barr} - V_{\rm fv} \over V_{\rm barr} V_{\rm fv}} \, , \end{equation} where $V_{\rm fv}$ and $V_{\rm barr}$ are the values of the potential in the false vacuum and at the top of the barrier, while $G_N$ is Newton's constant. Our numerical studies indicate that the barrier height relative to the false vacuum (i.e., $V_{\rm barr} - V_{\rm fv}$) has a power law falloff with $N$ quite similar to that found in the effective wall tension $s$, thus implying a similar enhancement of the Hawking-Moss transition rate. \section{Possible Implications for the String Landscape} \label{implications} The results we have described so far are a general feature of quantum field theories. Our motivation for studying random multi-field potentials, however, is the landscape of string theory. In this section, therefore, we assess the applicability and implications of our findings for the string landscape. Not only does the enormous number of local minima in the high dimensional string theory landscape (which can serve as the endpoints of tunneling events) offer the possibility of significant enhancement of tunneling probabilities,\footnote{The density of vacua near the conifold point, $\rho_{\rm conifold}$, in a one-dimensional moduli space is described by $1/[r^2(C+\log r)^2]$, where $r$ is the distance from the conifold point \cite{Denef:2004ze}; applying this result near a generic point along the conifold locus in an $n$-dimensional moduli space shows the rapid growth in the number of vacua, $\int d^nr \,\rho_{\rm conifold}\,$, with $n$.} but we have also seen that such large numbers of fields result in the exponential suppression of tunneling barriers. The question we now briefly consider is the extent to which the random potentials we have analyzed in the previous sections provide accurate insight into properties of the string landscape. For definiteness, focus on a standard flux compactification of type IIB string theory on a Calabi-Yau manifold $M$. With $G = F - \tau H$, $F$ and $H$ the RR and NS-NS three-form fluxes, and $\tau$ the axio-dilaton, the Gukov-Vafa-Witten superpotential $W$ is given by~\cite{Gukov:1999ya} \begin{equation} W = \int G \wedge \Omega \, , \label{superpot} \end{equation} where $\Omega$ is the holomorphic 3-form on $M$. The associated flux potential $V$ is given by \begin{equation} V_{M} = e^{K}(D_{\rho} W D^{\rho} {\overline W} - 3 |W|^2) \, . \label{susyV} \end{equation} Assume that $G$, together with additional contributions (e.g.~D3-brane instantons or wrapped D7-branes), stabilizes all moduli \cite{Denef:2004dm}. Then consider $\{V_M\}$ as $M$ varies among all Calabi-Yau's, $G$ varies over flux values, the K\"ahler stabilizing contributions vary over all possibilities, and supersymmetry-breaking effects similarly scan a broad range. With all such variations in play, the local minima of $\{V_{M}\}$ will sweep through a large class of vacua. 
In the vicinity of any such vacuum state, we can expand the effective potential, yielding a field theory model of the form (\ref{Vexpansion}). Now, considering all such expansions that arise from all local minima of the collection $\{V_{M}\}$, we expect the expansion coefficients to randomly vary. Indeed, such random variation inspired the ansatz we chose in our numerical studies. But in applying the results of the previous sections to the string landscape, there are a number of details and complications that deserve further attention. First, we have assumed that the action for tunneling trajectories is well approximated by our proxy: twice that of a straight line in field space connecting the minimum being studied and the optimal saddle point on the surrounding barrier. Yet, tunneling trajectories in multidimensional field spaces are notoriously subtle and can exhibit unexpected features; explicit examples in the landscape are the conifunneling trajectories between monodromy-related flux vacua found in \cite{Ahlqvist:2010ki} (see also \cite{ Danielsson:2006xw, Johnson:2008kc}). An additional, and potentially pivotal, complication is that flux vacua that are not monodromy related are generally minima of distinct potentials. Physically, such transitions involve features not captured by our local field theoretic model, including for example the nucleation of branes to absorb changes in flux \cite{Brown:1988kg,de Alwis:2006cb}. These effects might significantly affect the tunneling action, and possibly mitigate the field theory instabilities we have identified. Second, we have assumed that as the stabilizing contributions to a given model are varied the effective potentials around local minima will have expansions that are well modeled by random polynomials. Is this correct? To be concrete, consider the part of the potential arising from the flux, $G$. On a given Calabi-Yau, $M$, there are dim $H^{3}(M)$ fluxes that enter $G$. As those fluxes vary, we have far fewer free parameters than the coefficients in Eq.~(\ref{Vexpansion}). However, the associated minima of $V$ will occur at different locations, $p$, in the moduli space of $M$. As the holomorphic form $\Omega$ depends on $p$, the coefficients in a local expansion of the superpotential, Eq.~(\ref{superpot}), and the corresponding potential, Eq.~(\ref{susyV}), will also vary with $p$. So, the local potential will be randomized both by the varying fluxes and by the varying values of the period vector over the moduli space. It seems reasonable to us that this will result in local expansions well modeled by the random potentials invoked in Sec.~\ref{numerics}, but we do not have a firm argument. We will return to this issue in a forthcoming work, where we will explicitly check this for specific Calabi-Yau's with low-dimensional moduli spaces. Third, we have taken a canonical form for the kinetic terms in our field theoretic models. It is well known, however, that string vacua are densest in the vicinity of the conifold locus, where the classical moduli space metric suffers from a curvature singularity. In particular, near a generic point on the conifold locus we can choose local coordinates $(Z^1, Z^2, \dots ,Z^P)$ on the moduli space such that $Z^1 = 0$ labels the conifold. Near $Z^1 = 0$, the moduli space metric $G$ behaves as: \begin{equation} G^{11}(Z) \sim {\rm ln} (|Z^1|^2) \, .
\end{equation} The action then takes the local form \begin{equation} \int \sqrt {-g} [g^{\mu \nu} G_{i j} \,\partial_\mu Z^i \,\partial_{\nu} Z^j - V(Z)], \end{equation} where $g$ is the space-time metric. In this expression the coordinates $Z$ are the moduli space representation of the scalar fields $\phi$ and $V$ is their flux potential. At any non-singular point we can, of course, use a local change of field variables to absorb $G$ into the $Z$, yielding a canonical kinetic term. But as we approach a conifold point, this change of variables corresponds to reducing the barrier heights in $V$ (assuming $V$ is continuous and is being expanded about a local minimum) and thus increases tunneling rates. In the regions of moduli space that are most densely populated with string vacua, we therefore expect the non-canonical kinetic terms to augment the destabilization we have found. Fourth, since the flux potential $V$ is derived from the superpotential $W$, and it is $W$ that directly incorporates flux values, a more accurate representation of the landscape arises from randomly varying coefficients in a local expansion of $W$, and then using the result to calculate the random potentials. In principle, the relationships between the coefficients in $V$, which reflect its origin in $W$, could alter our findings. We have undertaken such an analysis. Specifically, we considered a theory with $N$ chiral superfields $\Phi_j$. The superpotential $W$ was taken to be a polynomial \begin{equation} W = \frac12\sum_i C_{ii}^{(2)} \Phi_i^2 + \frac13 \sum_{ijk} C_{ijk}^{(3)} \Phi_i \Phi_j \Phi_k + \frac14\sum_{ijkl} C_{ijkl}^{(4)} \Phi_i \Phi_j \Phi_k \Phi_l + ... \label{Wexpansion} \end{equation} The $C_{ii}^{(2)}$ were randomly chosen real numbers in the range $[0,1]$, while $C_{ijk}^{(3)}$ and $C_{ijkl}^{(4)}$ were complex numbers whose real and imaginary parts were taken randomly from the interval $[-1,1]$. The scalar field potential derived from $W$ was then truncated at quartic order and analyzed in a fashion similar to our non-supersymmetric potentials. With up to $N=5$ superfields we again find a power law falloff with $N$. Because the $N$ chiral superfields correspond to $2N$ real scalar fields, instead of writing the fits to the data as in Eq.~(\ref{s-fit}) we write, e.g., \begin{equation} s_{\rm median} = C_{\rm tension} (2N)^{-\alpha} \, . \end{equation} As can be seen from the data in our tables, the various fit parameters are rather similar to those for the nonsupersymmetric case, with the most notable difference being that $s_{\rm median}$ varies as $N^{-3.16}$, compared to $N^{-2.66}$ in the non-supersymmetric case\footnote{Because we have not included any supersymmetry-breaking terms in the potential, the vacua here are actually stable, and our data correspond to domain walls rather than bubble walls. However, we do not expect the picture to be changed materially when the vacua are lifted.}. This example also illustrates the challenges of investigating random potentials for large numbers of fields. In all of our studies, supersymmetric or not, computational considerations have forced us to probe only a limited range of values of $N$, the number of fields, and work with truncated potentials. We are assuming that the pattern we have found, as evidenced in Figs.~\ref{QuarticTensionAndHeight}-\ref{QuarticDistributions}, will continue to hold as these constraints are relaxed.
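For concreteness, the sampling used for the supersymmetric ensembles can be sketched as follows. The coefficient ranges are those stated above; the rigid-supersymmetry form $V=\sum_i |\partial W/\partial \Phi_i|^2$ of the scalar potential is an assumption made here for the sketch, consistent with the absence of supersymmetry-breaking terms.
\begin{verbatim}
import numpy as np
from itertools import permutations

rng = np.random.default_rng(1)
N = 3   # chiral superfields

def symmetrize(T):
    perms = list(permutations(range(T.ndim)))
    return sum(np.transpose(T, p) for p in perms) / len(perms)

# Coefficients of Eq. (Wexpansion): real C2 in [0,1]; complex C3, C4 with
# real and imaginary parts drawn from [-1,1]; symmetrized for the gradient.
C2 = rng.uniform(0.0, 1.0, N)
C3 = symmetrize(rng.uniform(-1, 1, (N,) * 3) + 1j * rng.uniform(-1, 1, (N,) * 3))
C4 = symmetrize(rng.uniform(-1, 1, (N,) * 4) + 1j * rng.uniform(-1, 1, (N,) * 4))

def dW(Phi):
    # dW_i for symmetric tensors: C2_i Phi_i + C3_ijk Phi_j Phi_k + ...
    return (C2 * Phi
            + np.einsum('ijk,j,k->i', C3, Phi, Phi)
            + np.einsum('ijkl,j,k,l->i', C4, Phi, Phi, Phi))

def V(Phi):
    # rigid-SUSY scalar potential (assumed form): sum_i |dW/dPhi_i|^2
    return float(np.sum(np.abs(dW(Phi)) ** 2))

Phi = 0.1 * (rng.standard_normal(N) + 1j * rng.standard_normal(N))
print(V(Phi))
\end{verbatim}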
\section{A multiverse explanation of the cosmological constant?} \label{Multiverse} Our results have immediate implications for attempts to find a multiverse solution to the cosmological constant problem in such field theory models. The argument for such a solution is that even if the natural scale for the cosmological constant in a typical vacuum is Planckian, one might expect to find vacua with $\Lambda \sim 10^{-120}$ in Planck units if the number of vacua is much greater than $10^{120}$. Because we want a vacuum that has not only a small $\Lambda$ but also a long lifetime, what we actually need is that the number of truly metastable vacua be much greater than $10^{120}$. Let us suppose that the number of vacua is ${\cal N}_{\rm vac} \sim {\cal F}^N$, with perhaps ${\cal F}=10$. For metastability, we require that the tunneling exponent $B$ be no smaller than a value $B_{\rm min}$ of order unity. The number of metastable vacua is then \begin{equation} {\cal N}_{\rm vac} \, f(B_{\rm min}) \sim {\cal F}^N \, e^{-\beta B_{\rm min}N^\alpha} \end{equation} with $f(B)$ given by Eq.~(\ref{fractionEst}). The requirement that this be greater than $10^{120}$ can be written as \begin{equation} N - {b \over \ln {\cal F}} N^\alpha > 120 \left({\ln 10 \over \ln {\cal F}} \right) \end{equation} with \begin{equation} b = \beta B_{\rm min} = \left( 10^{-3} \gamma \over C_{\rm tension} \right) \lambda B_{\rm min} \sim 10^{-3} \lambda \, . \end{equation} This clearly needs large $N$, at least more than 120 if ${\cal F}=10$ as in the usual analysis. The new feature here is that, because of the enhancement of the tunneling rate at large $N$, taking $N$ to be too large makes matters worse rather than better. In fact, there may be no value of $N$ for which this condition is satisfied. This is illustrated in Fig.~\ref{QuarticRange}, where we have taken $\alpha=2.66$, the power we obtained from the analysis of quartic non-supersymmetric potentials. For ${\cal F}=10$ the allowed values of $N$ and $b$ correspond to the region to the left of the solid curve. We see that there are no acceptable values of $N$ for $b > 1.4 \times 10^{-3}$, and that the range of allowed $N$ is relatively restricted until $b$ falls well below this value. If instead we take ${\cal F}=100$, the allowed region extends to the dashed line, with the maximum allowed $b$ increased by roughly a factor of four. The effect of varying the power $\alpha$ can be seen in Fig.~\ref{SUSYRange}, where we have set $\alpha = 3.16$, the value from our supersymmetric data. Although the curve is similar to that in the previous case, the values of $b$ have fallen by more than an order of magnitude. \begin{figure} \centering \includegraphics[width=5.5in]{quartic-range.eps} \caption{Parameter ranges allowing multiverse explanations of the cosmological constant. With $\alpha=2.66$ and ${\cal F}=10$, a sufficient number of metastable vacua is only possible for parameters in the region to the left of the solid line. This region is extended to the dashed line if ${\cal F}=100$. } \label{QuarticRange} \end{figure} \begin{figure} \centering \includegraphics[width=5.5in]{susy-range.eps} \caption{Like Fig.~\ref{QuarticRange}, but for $\alpha=3.16$ and ${\cal F} =10$.} \label{SUSYRange} \end{figure} \section{Conclusions} \label{Conclusions} Motivated by the string landscape, we have undertaken a study of the stability of vacua in multi-field quantum theories. 
Our study has focused on random potentials with the number of fields $N \le 10$ and on polynomial expansions that include no more than quartic field contributions. Even with these constraints, coming from computational feasibility, the data we have accumulated provide evidence that transition rates are so rapidly enhanced as a function of $N$ that all but an exponentially small fraction of generic local minima in such quantum field theories are unstable to rapid decay. One consequence is that the range of parameters that give a sufficient number of metastable vacua to provide a natural solution to the cosmological constant problem is severely restricted. For example, if the moduli space dimension $N=500$ and there are ${\cal F}^N=10^{500}$ vacua, the ``coupling'' $\lambda$ characterizing the landscape must be less than 1/20 or so; with $N = 2000$, $\lambda$ must be an order of magnitude smaller. Alternatively, if $\lambda=1/2$ and $N=500$, then ${\cal F}$ must be greater than $6\times 10^6$. These considerations are potentially relevant to any model invoking anthropic explanations for the cosmological constant in which the required diversity of vacua is due to the model containing a high dimensional moduli space. However, since our results have been obtained in the context of a field theory with multiple scalar fields, the impact of similar considerations on the ability of the string landscape to offer a natural solution to the cosmological constant puzzle will require further study. \begin{acknowledgments} We thank Pontus Ahlqvist, Robert Brandenberger, Adam Brown, Alex Chen, Frederik Denef, Zhihua Dong, Kurt Hinterbichler, Vojkan Jaksic, Dmitry Jakobson, Luchang Jin, Dan Kabat, Zhongjie Lin, Bob Mawhinney, Massimo Porrati, I-Sheng Yang, Hantao Yin, Jianglei Yu, and Daiqian Zhang for helpful discussions and comments. This work was supported in part by the U.S. Department of Energy under grants DE-FG02-85ER40237 and DE-FG02-92ER40699. Parts of the computation were carried out on Fermilab LQCD clusters. \end{acknowledgments}
\section{Introduction} Spontaneous symmetry breaking plays an important role in high energy physics or, more generally, in quantum field theory; as an example, one can mention the mass generation by the Higgs mechanism. However, in (0+1) dimensions, as a consequence of the equivalence between quantum field theory and quantum mechanics, a symmetry cannot be broken spontaneously, due to tunneling \cite{zj,d1_anharmonic}. Thus, for one-dimensional quantum field theoretic models, the spontaneously broken phase should vanish when their phase structures are determined without approximations. Renormalization has relevance in quantum field theory, too, since this procedure is required to obtain measurable physical quantities. It can be performed nonperturbatively by means of the functional renormalization group (RG) method \cite{WP,We1993,Mo1994,internal}, which has been applied successfully in many cases; let us mention quantum Einstein gravity \cite{qeg} as a recent example. The functional RG equation for scalar fields \cite{We1993} \begin{equation} \label{erg} k \partial_k \Gamma_k [\varphi] = \hf {\rm Tr} \left[ (k\partial_k R_k) / (\Gamma_k^{(2)}[\varphi] + R_k) \right] \end{equation} is derived for the blocked effective action $\Gamma_k$, which interpolates between the bare $\Gamma_{k\to \Lambda} = S$ and the full quantum effective action $\Gamma_{k\to 0}=\Gamma$, where $k$ is the running momentum scale. The second functional derivative of the blocked action is represented by $\Gamma_k^{(2)}$, and the trace Tr stands for the momentum integration. $R_k$ is the regulator function, where $R_k(p\to 0)>0$, $R_{k\to 0}( p)=0$ and $R_{k\to \Lambda}( p)=\infty$. To solve the RG equation \eq{erg}, one of the commonly used systematic approximations is the truncated derivative (i.e., gradient) expansion, where $\Gamma_k$ is expanded in powers of the derivative of the field, \begin{equation} \label{deriv} \Gamma_k [\varphi] = \int d^d x \left[V_k(\varphi) + Z_k(\varphi) \hf (\partial_{\mu} \varphi)^2 + ... \right]. \end{equation} Further approximations such as the Taylor or Fourier expansion of $V_k(\varphi)$, or $Z_k(\varphi)$, are usually also applied. However, the usage of approximations generates two problems: (i) for the $d=1$ dimension the spontaneously broken phase does not vanish in the approximated RG flow, and (ii) physical results obtained by the approximated RG flow become regulator dependent (i.e. renormalization scheme dependent). Therefore, it is of great importance to consider how the approximations used influence the phase structure of one-dimensional models, and the comparison of results obtained by various types of regulator functions \cite{opt_rg,litim_o(n),opt_func,Ro2010,Mo2005,qed2,scheme, scheme_sg,minimal_sens,css,css_pms} is also a general issue. To optimize the scheme dependence, the Litim-Pawlowski optimization method has been worked out \cite{opt_rg,opt_func} based on the convergence of the truncated flow that is expanded in powers of the field variable. Its advantage is that in the leading order of the gradient expansion, i.e., in the local potential approximation (LPA), it is possible to find the optimal choice for the parameters of all the regulator functions. Furthermore, Litim's optimized regulator was constructed, which is expected to provide us with findings closest to "the best known" results in LPA, e.g. critical exponents of the $O(N)$ scalar theory in $d=3$ dimensions \cite{litim_o(n),minimal_sens,scheme,IR}.
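In LPA with Litim's regulator, the momentum integration in \eq{erg} can be carried out in closed form, giving $k \partial_k V_k(\varphi) = c_d\, k^{d+2}/[k^2 + V_k''(\varphi)]$ with $c_d = \Omega_d/[d\,(2\pi)^d]$, which is part of what makes this choice convenient. A schematic Euler integration on a field grid is sketched below; all parameters are toy values of our own choosing, not taken from the literature, and the bare potential is put in the symmetric phase so that $k^2+V_k''$ stays positive along the flow.
\begin{verbatim}
import numpy as np

# Schematic LPA flow with the Litim regulator in d = 3, integrated by
# explicit Euler in t = ln k on a phi grid. Toy parameters only.
d = 3
c_d = 1.0 / (6.0 * np.pi**2)          # Omega_d / (d * (2 pi)^d) at d = 3
phi = np.linspace(0.0, 2.0, 201)
dphi = phi[1] - phi[0]
V = 0.1 * phi**2 + 0.05 * phi**4      # bare potential at k = Lambda

k, dt = 10.0, -1e-4
for _ in range(80000):                # flow k: 10 -> ~10 exp(-8)
    Vpp = np.gradient(np.gradient(V, dphi), dphi)
    V = V + dt * c_d * k**(d + 2) / (k**2 + Vpp)
    k = k * np.exp(dt)
\end{verbatim}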
Its disadvantage is that Litim's regulator is in conflict with the derivative expansion since it is not a differentiable function. Another scenario for optimization through the principle of minimal sensitivity (PMS) was also considered \cite{minimal_sens}. Its advantage is that it can be applied at any order of the derivative expansion in any dimension; i.e., it is possible to find the optimal choice for the parameters of a particular regulator. Its disadvantage is that one cannot determine the best regulator function among the usual exponential \cite{We1993}, power-law \cite{Mo1994} and Litim \cite{opt_rg} regulators through the PMS. However, the combination of the PMS method and the so-called compactly supported smooth (CSS) regulator \cite{css} provides a tool for optimization in which various types of regulators can be directly compared to each other, because the CSS recovers the major types of regulators in appropriate limits. This strategy has been successfully applied in LPA \cite{css_pms} in the framework of the $O(N)$ scalar theory in $d=3$ dimensions and the two-dimensional bosonized quantum electrodynamics (QED$_2$). In LPA the Litim regulator was found to be the most favorable regulator. Our goal in this work is to introduce a new platform for optimizing RG equations that provides a suitable optimization scenario beyond LPA. In this new strategy, the requirement of the absence of the broken phase in the case of the nonapproximated RG flow in the $d=1$ dimension is used to optimize the RG scheme dependence of the approximated one. Its advantage is that regulators can be compared to each other at any order of the derivative expansion for $d=1$. It is performed in the framework of the sine-Gordon (SG) model \cite{cg_sg}, which does not require field-dependent wave-function renormalization; thus, the determination of RG equations beyond LPA is simpler than in the case of other models. By contrast, for the $O(N)$ scalar theory the field-dependent wave function renormalization cannot be avoided. Nevertheless, the new optimization method proposed here can be applied to the $O(N)$ scalar theory in $d=1$, too. After testing the optimization for the power-law regulator \cite{Mo1994}, we optimize the CSS regulator \cite{css}. A similar compactly supported smooth function has been used in nuclear physics \cite{sv}, and its connection to the CSS regulator was shown in \cite{css}. \section{Regulator functions} Regulator functions have already been discussed in the literature by introducing their dimensionless form \begin{equation} R_k( p) = p^2 r(y), \hskip 0.5cm y=p^2/k^2, \end{equation} where $r(y)$ is dimensionless. For example, the CSS regulator introduced recently in Ref. \cite{css} is defined as \begin{equation} \label{css_gen} r_{\mr{css}}^{\mr{gen}}(y) = \frac{\exp[c y_0^{b}/(f-h y_0^{b})] -1}{\exp[c y^{b}/(f -h y^{b})] -1} \theta(f-h y^b), \end{equation} with the Heaviside step function $\theta(y)$. Let us note that the number of free parameters in \eq{css_gen} can be reduced by setting $f=1$ without loss of generality. The CSS regulator has the property \cite{css} of recovering all major types of regulators: the Litim \cite{opt_rg}, the power-law \cite{Mo1994} and the exponential \cite{We1993} ones. By choosing a particular normalization (i.e.
fixing $y_0$) the CSS regulator reads \begin{align} \label{css_norm} r_{\mr{css}}^{\mr{norm}}(y) =& \; \frac{\exp[\ln(2) c]-1}{\exp\left[\frac{\ln(2) c y^{b}}{1 -h y^{b}}\right] -1} \theta(1-h y^b), \end{align} where the limits are \begin{subequations} \label{css_norm_limits} \begin{align} \label{opt_lim} \lim_{c\to 0,h\to 1} r_{\mr{css}}^{\mr{norm}} = & \; \left(\frac{1}{y^b} -1\right) \theta(1-y), \\ \label{pow_lim} \lim_{c\to 0, h \to 0} r_{\mr{css}}^{\mr{norm}} = & \; \frac{1}{y^b}, \\ \label{exp_lim} \lim_{c \to 1, h \to 0} r_{\mr{css}}^{\mr{norm}} = & \; \frac{1}{\exp[\ln(2) y^b]-1}. \end{align} \end{subequations} The advantage of this type of normalization is that the form~\eq{css_norm} reproduces all the major types of regulators with optimal parameters, i.e. the Litim \eq{opt_lim} with $b=1$, the power-law \eq{pow_lim} with $b=2$, and the exponential \eq{exp_lim} with $b=1.44$. The optimal choices for the parameter $b$ are based on the Litim-Pawlowski optimization scenario. \section{SG model for dimensions $1\leq d \leq 2$} To perform the RG study of the SG model \cite{sg} beyond LPA it is convenient to introduce a dimensionless variable $\tilde\varphi = k^{(2-d)/2} \varphi$, and then the effective action reads \begin{eqnarray} \label{eaans_dimless} \Gamma_{k} = \int d^d x \left[\hf z_k (\partial_\mu{\tilde\varphi})^2 + u_k \cos(\tilde\varphi) \right], \end{eqnarray} where $u_k$ is the dimensionful coupling of the periodic self-interaction, $z_k$ stands for the field-independent wave-function renormalization that has a dimension of $k^{d-2}$ \cite{cg_sg}. Although RG transformations generate higher harmonics, we use the ansatz \eq{eaans_dimless} that contains a single Fourier mode since in the case of the SG model it was found to be an appropriate approximation \cite{cg_sg}. The RG flow equations for the couplings of \eq{eaans_dimless} can be derived from \eq{erg} \begin{eqnarray} \label{exact_u} k\partial_k u_k = \int _p \frac{k\partial_k R_k}{k^{2-d} u_k} \left(\frac{P-\sqrt{P^2-(k^{2-d} u_k)^2}}{\sqrt{P^2-(k^{2-d} u_k)^2}}\right),\\ \label{exact_z} k\partial_k z_k = \int_p \frac{k\partial_k R_k}{2} \biggl[ \frac{-(k^{2-d}u_k)^2P(\partial_{p^2}P+\frac{2}{d}p^2\partial_{p^2}^2P)} {[P^2-(k^{2-d}u_k)^2]^{5/2}}\nonumber\\ +\frac{(k^{2-d}u_k)^2 p^2 (\partial_{p^2}P)^2(4P^2+(k^{2-d}u_k)^2)} {d \, [P^2-(k^{2-d} u_k)^2]^{7/2}} \biggr] , \end{eqnarray} where $P = z_k k^{2-d} p^2+R_k$ and the momentum integral $\int_p = \int dp \, p^{d-1} \Omega_d/(2\pi)^d$ is usually performed numerically with the $d$-dimensional solid angle $\Omega_d$. The RG study of SG type models \cite{sg,qed2,qed_qcd} does not require field-dependent wave-function renormalization. We use normalized dimensionless parameters $\bar z_k \equiv (8\pi) \tilde z_k $ and $\bar u_k \equiv \tilde u_k k^2/ {\bar k}$ where $\tilde z_k = k^{2-d} z_k$ and $\tilde u_k = k^{-d} u_k$ are the conventional dimensionless couplings and $\bar{k} = \min_{p^2} P$. In $d=2$ dimensions the SG model undergoes a topological phase transition \cite{sg} where the critical value that separates the phases of the model, $1/\bar z_{\star} = 1$, was found to be independent of the choice of the regulator function \cite{scheme_sg}. For the $d=1$ dimension, based on the approximated RG flow, a saddle point $\bar u_{\star}$, $1/\bar z_{\star}$ appears in the RG flow \cite{cg_sg}; see, for example the results \fig{fig1} obtained by the power-law regulator \eq{pow_lim}, and thus the SG model has two phases. 
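The normalized CSS regulator \eq{css_norm} and the limits \eq{css_norm_limits} are straightforward to implement and verify numerically, and the same routine can be used to evaluate the normalization scale $\bar k = \min_{p^2} P$. In the sketch below the values of $\tilde z_k$ and of the CSS parameters are chosen for illustration only.
\begin{verbatim}
import numpy as np
from scipy.optimize import minimize_scalar

def r_css(y, b, c, h):
    # normalized CSS regulator, Eq. (css_norm), on its support h y^b < 1
    y = np.asarray(y, dtype=float)
    arg = np.log(2.0) * c * y**b / (1.0 - h * y**b)
    return np.where(h * y**b < 1.0,
                    (np.exp(np.log(2.0) * c) - 1.0) / (np.exp(arg) - 1.0),
                    0.0)

y = np.array([0.2, 0.5, 0.8])
print(r_css(y, 1.0, 1e-6, 1.0 - 1e-6), 1.0 / y - 1.0)      # Litim limit
print(r_css(y, 2.0, 1e-6, 1e-6), 1.0 / y**2)               # power-law limit
print(r_css(y, 1.44, 1.0, 1e-6),
      1.0 / (np.exp(np.log(2.0) * y**1.44) - 1.0))         # exponential limit

# min over y of P / k^2 = z y + y r(y), with dimensionless z assumed 1
P = lambda yy: 1.0 * yy + yy * r_css(yy, 1.25, 0.1, 0.3)
res = minimize_scalar(P, bounds=(1e-4, 2.5), method="bounded")
print("min P / k^2 =", res.fun)
\end{verbatim}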
\begin{figure}[ht] \begin{center} \epsfig{file=opt_d1_d2_fig1.eps,width=8.0 cm} \caption{ \label{fig1} Phase diagram of the SG model for the $d=1$ dimension obtained by the numerical solution of Eqs.~\eq{exact_u} and \eq{exact_z} using the power-law regulator \eq{pow_lim} with $b=3$. Arrows indicate the direction of the flow. The distance $D$ is defined by \eq{dist}. } \end{center} \end{figure} In fractal dimensions $1<d<2$, the nontrivial saddle point appears in the RG flow, too. However, there is an important difference between the cases of fractal dimensions and of the $d=1$ dimension; namely, the spontaneously broken phase should vanish for $d=1$, which indicates that the saddle point and the nontrivial IR fixed point ($1/\bar z_{\mr{IR}} \equiv 0$, $\bar u_{\mr {IR}} \equiv 1$) should coincide. Thus, the distance between the nontrivial IR fixed point and the saddle point (see \fig{fig1}), \begin{eqnarray} \label{dist} D &\equiv& \sqrt{(\bar u_{\mr {IR}} - \bar u_{\star})^2 + (1/\bar z_{\mr{IR}} - 1/\bar z_{\star})^2} \nonumber \\ &=& \sqrt{(1 - \bar u_{\star})^2 + 1/\bar z_{\star}^2} \end{eqnarray} can be used to optimize the scheme dependence of RG equations; i.e., the better the RG scheme the smaller the distance $D$ is. The other attractive IR fixed point ($\bar u_{k\to 0} = 0$, $1/\bar z_{k\to 0} =\infty$) corresponds to the symmetric phase \cite{cg_sg,css}. \section{Optimization of the power-law regulator} Let us consider first the optimization of RG equations using the power-law type regulator \eq{pow_lim} in the framework of the SG model. According to the numerical solution of Eqs. \eq{exact_u} and \eq{exact_z}, the saddle point appears in the RG flow for dimensions $1\leq d \leq 2$ and its position is plotted in \fig{fig2} for various values of the parameter $b$. \begin{figure}[ht] \begin{center} \epsfig{file=opt_d1_d2_fig2.eps,width=8.0 cm} \caption{ \label{fig2} Positions of the saddle point of the SG model for dimensions $1\leq d \leq 2$ obtained by the power-law regulator \eq{pow_lim}. The inset shows the dependence of the distance $D$ defined by Eq.~\eq{dist} on the parameter $b$ for dimension $d=1$. } \end{center} \end{figure} For $d=2$ dimensions the curves coincide since the fixed point at which the two-dimensional SG model undergoes a topological phase transition is at $\bar u_\star = 0$, $1/\bar z_{\star} =1$ scheme independently. For fractal dimensions and also for $d=1$ the position of the saddle point becomes scheme dependent, i.e., it depends on the parameter $b$ of the regulator function \eq{pow_lim}. Since the (spontaneously) broken phase should vanish for the $d=1$ dimension, the distance \eq{dist} between the saddle point and the nontrivial IR fixed point ($1/\bar z_{\mr{IR}} \equiv 0$, $\bar u_{\mr {IR}} \equiv 1$) can be used to optimize the RG equation. The inset shows the dependence of the distance $D$ on the parameter $b$ and indicates that $b=2$ is the optimal choice. Thus, it recovers known results obtained by optimization \cite{opt_func,opt_rg} based on the optimal convergence of the flow, which validates the strategy proposed here. \section{Optimization of the CSS regulator} Let us perform the optimization of the normalized CSS regulator \eq{css_norm} in the framework of the one-dimensional SG model using the optimization scenario based on the minimization of the distance $D$ defined by Eq. \eq{dist}. First, one has to determine the position of the saddle point.
This can be done by using the linearized form of the RG equations \eq{exact_u} and \eq{exact_z} with dimensionless variables (linearized in terms of $\tilde u$), since $\tilde u_\star$ is usually found to be much smaller than one; after that, $\bar u_\star$, $1/\bar z_\star$ and the distance $D$ can be calculated. Let us first perform consistency checks. In \fig{fig3} we plot the dependence of $D$ on the parameter $b$ of the CSS regulator \eq{css_norm} in various limits. For example, findings determined by the power-law limit \eq{pow_lim} of the CSS regulator (with $c=0.0001$ and $h=0.0001$) are represented by the dashed line, which can be compared to the inset of \fig{fig2} where results obtained by the exact flow equations \eq{exact_u} and \eq{exact_z} are shown. The two curves are qualitatively the same, and both have a minimum at $b\approx 2$ (in the case of \fig{fig3} it is at $b = 2.3$). Let us remind the reader that $b=2$ is the optimal choice according to the Litim-Pawlowski method. Another consistency check is based on the results obtained by the exponential limit \eq{exp_lim} of the CSS regulator (with $c=1$, $h=0.0001$) which is shown by the dotted line in \fig{fig3}. The minimum (i.e. the optimal choice) is at $b\approx 1.4$ whereas the Litim-Pawlowski optimization indicates $b=1.44$. \begin{figure}[ht] \begin{center} \epsfig{file=opt_d1_d2_fig3.eps,width=8.0 cm} \caption{ \label{fig3} Dependence of $D$ \eq{dist} on the parameter $b$ of the CSS regulator \eq{css_norm} represented in various limits. Best values are obtained in terms of the parameters $c$ and $h$, for "fixed" $b$.} \end{center} \end{figure} The full line of \fig{fig3} shows the minimum (i.e. the best) values of $D$ obtained in terms of the parameters $c$ and $h$, for "fixed" $b$. For large $b$ it coincides with the power-law limit. It has an inflection point at $b \approx 2.1$ where the power-law and exponential limits cross each other. For small $b$ the best values are obtained for small but nonzero $c$. It also clearly indicates that the most favorable choice is $b\approx 1$. Thus, the Litim limit ($b\approx 1$, $c\approx 0$) of the CSS regulator is found to be the optimal choice beyond LPA (here we use the Litim limit to refer to a small but nonzero value for $c$ and do not take $c\to 0$ exactly). The computation of the best value of $D$ for $b\to 1$ is costly since the derivatives of the CSS regulator for $c\to 0$ have an oscillatory behavior \cite{css}. Nevertheless, for small but finite $c$ the derivatives always exist; hence the Litim limit of the CSS can always be used at any order of the gradient expansion. This does not hold for the Litim regulator itself, which is in conflict with the gradient expansion. Another important observation is that the usage of the PMS method (the global extremum of the CSS) produces exactly the same optimal parameters. Similarly, in LPA \cite{css_pms} the Litim limit of the CSS with $h=1$ was found to be the most favorable regulator. Beyond LPA the optimal choice for $h$ could depend on the model and also on the approximations used. For example, here we found $c = 0.1$ and $h\approx 0.3$ as the optimal parameters for $b=1.25$. \section{Summary} A new optimization procedure for the functional RG method has been discussed that is based on the requirement for the absence of spontaneous symmetry breaking in the $d=1$ dimension. It has been applied to the SG model, where no field-dependent wave-function renormalization is required; hence the method is suitable for optimization beyond LPA.
The procedure is validated by recovering known results for the power-law and exponential regulators. The CSS regulator has been optimized, which singles out the best choice among this class of regulator functions. Results were obtained beyond LPA here, and the Litim limit of the CSS was found to be the optimal choice. This is also supported by the PMS method, which was tested in LPA \cite{css_pms} for the $O(N)$ model and for QED$_2$. Therefore, three different models have been considered, in three different dimensions, at various orders of the derivative expansion, with various optimization methods, and all these results indicate that the Litim limit of the CSS (small but nonzero $c$) is the most favorable choice.
\section{Introduction} Cooperative effects occur when the behavior of a many-body system is determined by the collective interactions of its constituents; they manifest themselves in a large variety of physical systems. In this paper, we focus on the specific case of a collection of atoms illuminated by a laser. In this situation, the electro-magnetic field mediates resonant dipole-dipole interactions between the atoms, leading to a cooperative response of the system, which quantitatively differs from the single-atom response. Such effects are imprinted on physical observables that can be experimentally measured, such as the emission diagram or the radiation pressure force acting on the cloud. When a single atom is illuminated by a laser, the scattering process results in a force proportional to the number of scattered photons. Indeed, as an atom absorbs a photon from the laser of wave vector $\mathbf{k}_0$, it acquires a momentum $\hbar \mathbf{k}_0$, but the average momentum change during the emission process is zero. For a collection of atoms, the picture changes drastically, as first noticed in a pioneering work by Dicke~\cite{Dicke}, where he showed enhanced spontaneous emission decay rates in small and large samples due to constructive interference of the collective emission. When an incident laser scatters on a cloud of atoms, the atoms cooperate to scatter the light, leading to directional emission. This phenomenon is due to the synchronization of the atomic dipoles with the laser. The collective effects become even stronger as the atomic medium becomes optically dense and the radiation of the atoms starts to significantly alter the wave propagation. Among the other collective effects that arise, one can mention the collective Lamb shift~\cite{FHM,Keaveney2012}, Mie resonances~\cite{Bachelard2012a}, subradiance~\cite{Bienaime2012}, the refractive index of a dilute Bose gas \cite{Morice95}, as well as a reduction of the radiation pressure force~\cite{Courteille2010,Bienaime2010}. Since the radiated light results from the interference of the waves emitted by each dipole, the simple relation between emitted photons and atomic recoil is lost. For example, a striking feature of cooperativity is the modification of the atomic recoil due to the presence of the neighboring atoms~\cite{Campbell2005,Bachelard2012b}, an effect that cannot be deduced from single-atom physics. We here discuss the particular relation between the directional superradiant emission and the reduction of the radiation pressure force. The atomic cloud is described as a microscopic ensemble of coupled atomic dipoles, and both the radiated field and the force are expressed as a function of these dipoles. The optical theorem is derived in this framework, and is shown to lead to a direct relation between the scattered intensity and the radiation pressure force on the cloud center-of-mass. \section{Cooperative scattering model} The atomic cloud is described as a system of two-level ($g$ and $e$) atoms, with resonant frequency $\omega_a$ and positions $\mathbf{r}_j$, that are driven by a uniform laser beam with electric field amplitude $E_0$, frequency $\omega_0$ and wave vector $\mathbf{k}_0=(\omega_0/c)\mathbf{\hat e}_z$.
The laser-atom interaction is described by the following Hamiltonian: \begin{eqnarray}\label{H} H&=&\frac{\hbar\Omega_0}{2}\sum_{j=1}^N\left[\hat\sigma_j e^{i(\Delta_0 t- \mathbf{k}_0\cdot \mathbf{r}_j)}+\textrm{h.c.}\right]\nonumber\\ &+& \hbar\sum_{j=1}^N\sum_{\mathbf{k}}g_k\left(\hat\sigma_j e^{-i\omega_a t} +\hat\sigma_j^\dagger e^{i\omega_a t}\right) \left[\hat a_{\mathbf{k}}^\dagger e^{i(\omega_k t- \mathbf{k}\cdot \mathbf{r}_j)}+\hat a_{\mathbf{k}} e^{-i(\omega_k t- \mathbf{k}\cdot \mathbf{r}_j)}\right] \end{eqnarray} where $\Omega_0=d E_0/\hbar$ is the Rabi frequency of the incident laser field and $\Delta_0=\omega_0-\omega_a$ is the detuning between the laser and the atomic transition. In Eq. (\ref{H}), $\hat\sigma_j=|g_j\rangle\langle e_j|$ is the lowering operator for the $j$th atom, $\hat a_{\mathbf{k}}$ is the photon annihilation operator and $g_k=(d^2\omega_k/2\hbar\epsilon_0 V)^{1/2}$ is the single-photon Rabi frequency, where $d$ is the electric-dipole transition matrix element and $V$ is the photon mode volume. The special case where a single photon (mode $\mathbf{k}$) can be assumed to be present in the system was extensively investigated in Refs.~\cite{FHM,Scully2006,Svi08}, and later extended to include a low-intensity laser in Refs.~\cite{Courteille2010,Bachelard2011,Bienaime2011}. The combined atoms+photons system is then described by a state of the form~\cite{Svi10}: \begin{eqnarray}\label{state} |\Psi\rangle&=&\alpha(t)|g_1\dots g_N\rangle |0\rangle_{\mathbf{k}}+e^{-i\Delta_0 t}\sum_{j=1}^N \beta_j(t)|g_1\ldots e_j\ldots g_N\rangle|0\rangle_{\mathbf{k}}+ \sum_{\mathbf{k}}\gamma_{\mathbf{k}}(t)|g_1\dots g_N\rangle |1\rangle_{\mathbf{k}}\nonumber\\ &+&\sum_{\mathbf{k}}\sum_{m,n=1}^N \epsilon_{m<n,\mathbf{k}}(t)|g_1\ldots e_m\ldots e_n\ldots g_N\rangle|1\rangle_{\mathbf{k}}. \end{eqnarray} The first term in Eq. \eqref{state} corresponds to the initial ground state without photons, and the second term is the sum over the states where a single atom has been excited by the classical field. The third term corresponds to the atoms that have returned to the ground state after emitting a photon in the mode $\mathbf{k}$, whereas the last one corresponds to the presence of two excited atoms and one virtual photon with `negative' energy. It is due to the counter-rotating terms in the Hamiltonian (\ref{H}) and disappears when the rotating wave approximation is made. In the linear regime ($\alpha\approx 1$) and in the Markov approximation, valid if the decay time is larger than the photon time-of-flight through the atomic cloud, the scattering problem reduces to the following differential equation~\cite{Scully09,Bachelard2011,Bienaime2011} \begin{equation}\label{eqbetaj} \dot\beta_j=\left(i\Delta_0-\frac{\Gamma}{2}\right)\beta_j- i\frac{\Omega_0}{2}e^{i \mathbf{k}_0\cdot \mathbf{r}_j}-\frac{\Gamma}{2}\sum_{m\neq j} \frac{\exp(ik_0|\mathbf{r}_j-\mathbf{r}_m|)}{ik_0|\mathbf{r}_j-\mathbf{r}_m|}\beta_m \end{equation} with initial conditions $\beta_j(0)=0$, for $j=1,\dots,N$. Here, $\Gamma=V g_k^2 k_0^2/\pi c=d^2k_0^3/2\pi\epsilon_0\hbar$ is the single-atom {\it spontaneous} decay rate. The kernel in the last term of Eq. (\ref{eqbetaj}) has a real component, $-(\Gamma/2)\sum_{m\neq j}[\sin(x_{jm})/x_{jm}]$ (where $x_{jm}=k_0|\mathbf{r}_j-\mathbf{r}_m|$), describing the {\it collective} atomic decay, and an imaginary component, $i(\Gamma/2)\sum_{m\neq j}[\cos(x_{jm})/x_{jm}]$, describing the collective Lamb shift~\cite{Scully09,Scully10,Ralfie}. Notice that while Eq.
(\ref{eqbetaj}) is here deduced from a quantum mechanical model, it can also be obtained classically, treating the two-level atoms as weakly excited classical harmonic oscillators~\cite{Svi10,Prasad}. \section{Radiated field} The radiation field operator $\hat a_{\mathbf{k}}$ evolves according to the following Heisenberg equation \begin{equation}\label{aH} \frac{d\hat a_{\mathbf{k}}}{dt}=\frac{1}{i\hbar}[\hat a_{\mathbf{k}},\hat H]=-ig_k e^{i(\omega_k-\omega_a) t}\sum_{m=1}^N \hat\sigma_m e^{-i\mathbf{k}\cdot \mathbf{r}_m}, \end{equation} where the fast oscillating term proportional to $\exp[i(\omega_k+\omega_a)t]$ has been neglected. The scattered field is obtained by performing the sum over all the modes, considering only the positive-frequency part of the electric field operator \begin{equation}\label{Es:a} \hat E_s(\mathbf{r},t)=\sum_{\mathbf{k}}{\cal E}_{k}\hat a_{\mathbf{k}}(t) e^{i\mathbf{k}\cdot \mathbf{r}-i\omega_k t} \end{equation} where ${\cal E}_k=(\hbar\omega_k/2\epsilon_0 V)^{1/2}$. Integrating Eq. (\ref{aH}) with respect to time, with $a_{\mathbf{k}}(0)=0$, inserting it in Eq. (\ref{Es:a}), and assuming the usual Markov approximation, one obtains~\cite{Bienaime2011} \begin{equation}\label{Es:3} \hat E_s(\mathbf{r},t)\approx -\frac{dk_0^3}{4\pi\epsilon_0}e^{-i\omega_at}\sum_{m=1}^N \frac{e^{ik_0 |\mathbf{r}-\mathbf{r}_m|}}{k_0|\mathbf{r}-\mathbf{r}_m|} \hat\sigma_m(t). \end{equation} When applied to the state~\eqref{state}, neglecting virtual transitions, it yields $\hat E_s|\Psi\rangle=E_s\exp(-i\omega_0t)|g_1\dots g_N\rangle$, with \begin{equation}\label{Es} E_s(\mathbf{r},t)= -\frac{\hbar \Gamma}{2d}\sum_{m=1}^N \beta_m(t) \frac{e^{ik_0 |\mathbf{r}-\mathbf{r}_m|}}{k_0|\mathbf{r}-\mathbf{r}_m|}. \end{equation} Hence, the radiated field appears as a sum of spherical waves radiated by the atomic dipoles. In the far-field limit, one has $k_0|\mathbf{r}-\mathbf{r}_m|\approx k_0r-\mathbf{k}\cdot\mathbf{r}_m$, with $\mathbf{k}=k_0(\mathbf{r}/r)$, so the field~\eqref{Es} radiated in a direction $\mathbf{k}$ reads \begin{equation}\label{Es:far} E_s^{\mathrm{(far)}}(\mathbf{k},t)\approx -\frac{\hbar \Gamma}{2d}\frac{e^{ik_0r}}{k_0 r}\sum_{m=1}^N \beta_m(t)e^{-i\mathbf{k}\cdot\mathbf{r}_m}. \end{equation} The scattered intensity in a direction $\mathbf{k}$ is then derived as \begin{eqnarray}\label{Es:far2} I_s(\mathbf{k})&=& \frac{\epsilon_0 c\hbar^2 \Gamma^2}{2(dk_0 r)^2}\left|\sum_{m=1}^N \beta_m(t)e^{-i\mathbf{k}\cdot\mathbf{r}_m}\right|^2 \\ &=&\frac{\epsilon_0 c\hbar^2 \Gamma^2}{2(dk_0 r)^2}\left(\sum_{m=1}^N |\beta_m|^2+\sum_{j \neq m}^N \beta_{j}\beta_{m}^* e^{-i\mathbf{k}\cdot(\mathbf{r}_j-\mathbf{r}_m)}\right). \end{eqnarray} Integrating this intensity over all directions leads to the total scattered power \begin{equation}\label{eq:P} P_{r}=\frac{d^2k_0^4c}{2\pi\epsilon_0} \left(\sum_{m=1}^N |\beta_m|^2+\sum_{m \neq j}^N \beta_{j}\beta_{m}^* \frac{\sin(k_0|\mathbf{r}_j-\mathbf{r}_m|)}{k_0|\mathbf{r}_j-\mathbf{r}_m|}\right), \end{equation} where we have used the equality \begin{equation} \int \mbox{d}\hat{\mathbf{k}} e^{ik_0 \hat{\mathbf{k}}\cdot\mathbf{d}}=4\pi\frac{\sin(k_0|\mathbf{d}|)}{k_0|\mathbf{d}|}. \end{equation} In Eq. \eqref{eq:P}, the first term corresponds to the {\it incoherent} sum of the single-atom radiated powers. The second term is an interference term; in the limit of a cloud small compared to the wavelength, the dipole moments have the same phase, and this latter term is responsible for a superradiant build-up of the radiated power $\propto N^2$~(see, e.g., Ref.~\cite{Dicke}).
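To make these expressions concrete, the steady state of Eq. \eqref{eqbetaj} can be obtained by solving a single $N\times N$ linear system, after which the total scattered power \eqref{eq:P} follows directly. The Python sketch below is a minimal illustration of this procedure, not the code used for the simulations reported here; the atom number, cloud shape and Rabi frequency are arbitrary illustrative choices.

\begin{verbatim}
import numpy as np

# --- illustrative parameters (arbitrary choices) ---
N, Gamma, Delta0, k0 = 200, 1.0, 0.0, 1.0
Omega0 = 0.01 * Gamma                          # weak drive: linear regime

rng = np.random.default_rng(0)
r = rng.normal(scale=10.0 / k0, size=(N, 3))   # Gaussian cloud, rms size 10/k0

# pairwise distances x_jm = k0 |r_j - r_m|
x = k0 * np.linalg.norm(r[:, None, :] - r[None, :, :], axis=-1)
off = ~np.eye(N, dtype=bool)

# steady state of Eq. (eqbetaj):  A beta = i (Omega0/2) exp(i k0 z_j)
A = np.zeros((N, N), dtype=complex)
A[off] = -(Gamma / 2) * np.exp(1j * x[off]) / (1j * x[off])
np.fill_diagonal(A, 1j * Delta0 - Gamma / 2)
beta = np.linalg.solve(A, 1j * (Omega0 / 2) * np.exp(1j * k0 * r[:, 2]))

# total scattered power of Eq. (eq:P), up to its overall prefactor
S = np.where(off, np.sin(x) / np.where(off, x, 1.0), 1.0)  # sin(x)/x; 1 on diagonal
P = np.real(beta.conj() @ S @ beta)
print(P / np.sum(np.abs(beta) ** 2))  # ratio > 1 signals cooperative emission
\end{verbatim}

For a cloud much smaller than the wavelength the printed ratio approaches $N$, reflecting the superradiant build-up discussed above, whereas for a dilute extended cloud it remains of order one.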
\section{Radiation pressure force} The radiation force operator acting on the $j$th atom is derived from Eq. (\ref{H}) as \begin{equation} \hat{\mathbf{F}}_j=-\nabla_{\mathbf{r}_j}\hat H=\hat{\mathbf{F}}_{aj}+\hat{\mathbf{F}}_{ej}. \end{equation} A first contribution, associated with the absorption of photons from the pump, appears~\cite{Courteille2010,Bachelard2011}: \begin{equation} \hat{\mathbf{F}}_{aj}= i\hbar \mathbf{k}_0\frac{\Omega_0}{2} \left\{\hat\sigma_{j} e^{i(\Delta_0 t-\mathbf{k}_0\cdot \mathbf{r}_j)}- \textrm{h.c.}\right\},\label{Force-abs} \end{equation} whereas the second contribution comes from the emission of photons in any direction $\mathbf{k}$: \begin{equation} \hat{\mathbf{F}}_{ej}= i\hbar\sum_{\mathbf{k}} \mathbf{k}g_{k} \left\{\hat a_{\mathbf{k}}^\dagger \hat\sigma_{j} e^{i(\omega_k-\omega_a)t-i\mathbf{k}\cdot\mathbf{r}_j}- \hat\sigma_{j}^\dagger \hat a_{\mathbf{k}} e^{-i(\omega_k-\omega_a)t+i\mathbf{k}\cdot\mathbf{r}_j}\right\}. \label{Force-emi} \end{equation} In Eq. \eqref{Force-emi}, the counter-rotating terms proportional to $\exp[\pm i(\omega_k+\omega_a)t]$ were neglected. As we are interested in comparing the radiation pressure force to the single-atom case, we define the average radiation force $\hat{\mathbf{F}}=(1/N)\sum_j\hat{ \mathbf{F}}_j=(F_{tot}/N) \mathbf{\hat e}_z$, which measures the acceleration of the cloud center-of-mass, given by $\mathbf{a}_{CM}=\hat{\mathbf{F}}/m$, with $m$ the single-atom mass. Note that this average force is $N$ times smaller than the total force $F_{tot}$ acting on the whole cloud of atoms. Since we consider clouds with rotational symmetry around the laser axis, this force is in the same direction as the incident field wave vector $\mathbf{k}_0=k_0 \mathbf{\hat e}_z$. This average force is measured by time-of-flight techniques in cold atomic clouds released, for instance, from magneto-optical traps (MOTs), and has recently revealed cooperative effects in the scattering by extended atomic samples~\cite{Bienaime2010,Bender2010}. Like the scattered radiation, this force is an observable that contains signatures of the cooperative scattering by the atoms~\cite{Courteille2010,Bienaime2010}. The average absorption force along the $z$-axis, resulting from the recoil received upon absorption of a photon from the incident laser, reads \begin{eqnarray} \hat F_a &=& \frac{i}{2N}\hbar k_0\Omega_0\sum_{j=1}^N\left[\hat\sigma_j e^{i\Delta_0 t-i \mathbf{k}_0\cdot \mathbf{r}_j}-\textrm{h.c.}\right]. \label{Fa} \end{eqnarray} Similarly, the average emission force reads $\hat{\mathbf{F}}_e=(1/N)\sum_j\hat {\mathbf{F}}_{ej}$. Inserting the expression for $\hat a_{\mathbf{k}}$ from Eq. \eqref{aH} into Eq. \eqref{Force-emi}, and approximating the discrete sum over the modes $\mathbf{k}$ by an integral, it is possible to obtain, as was done for the radiation field operator $\hat E_s$ of Eq. (\ref{Es}), the following expression for the average emission force along the $z$-axis~\cite{Courteille2010}: \begin{eqnarray} \hat F_e &=& -\frac{\hbar k_0\Gamma}{8\pi N}\int_0^{2\pi}\mbox{d}\phi\int_0^\pi \mbox{d}\theta\sin\theta\cos\theta\sum_{j,m=1}^N\left[ e^{-i\mathbf{k}\cdot(\mathbf{r}_j-\mathbf{r}_m)} \hat\sigma_{m}^\dagger\hat\sigma_{j}+\textrm{h.c.}\right].
\label{Fez} \end{eqnarray} Neglecting virtual photon contributions, the expectation values of the absorption and emission forces for the state \eqref{state} are \begin{eqnarray} \langle\hat F_a\rangle &=& -\frac{\hbar k_0\Omega_0}{N} \sum_{j=1}^N\textrm{Im}\left[\beta_j e^{-i \mathbf{k}_0\cdot \mathbf{r}_j}\right] \label{Fazj} \\ \langle\hat F_{e}\rangle&=& -\frac{\hbar k_0\Gamma}{4\pi N}\int_0^{2\pi}\mbox{d}\phi\int_0^\pi \mbox{d}\theta\sin\theta\cos\theta\sum_{j,m=1}^N\left[\beta_{j}\beta_{m}^* e^{-i\mathbf{k}\cdot(\mathbf{r}_j-\mathbf{r}_m)}\right]\nonumber\\ &=& -\frac{\hbar k_0\Gamma}{N} \sum_{j,m=1}^N \frac{(z_j-z_m)}{|\mathbf{r}_j-\mathbf{r}_m|}j_1(k_0|\mathbf{r}_j-\mathbf{r}_m|)\mathrm{Im}\left(\beta_j\beta_m^*\right), \label{Fezj} \end{eqnarray} where we used the identity \begin{equation} \int_0^{2\pi}\mbox{d}\phi\int_0^\pi \mbox{d}\theta\sin\theta\cos\theta e^{-i\mathbf{k}\cdot(\mathbf{r}-\mathbf{r}')}=4\pi i\frac{z-z'}{|\mathbf{r}-\mathbf{r}'|}j_1(k_0|\mathbf{r}-\mathbf{r}'|).\label{id:j1} \end{equation} Here, $j_1(z)$ denotes the first-order spherical Bessel function. Note that the decomposition into absorption \eqref{Fazj} and emission \eqref{Fezj} forces is fully compatible with classical expressions of the optical force~\cite{Piovella2013}, where the force arises as the product between the atomic dipole and the {\it total} field~\cite{Gordon1980} (i.e., including the radiation from the other atoms). \section{Optical Theorem}\label{OT} \begin{figure}[t] \centering{\includegraphics[height=8cm]{SlabEmission}} \caption{ Squared modulus of the scattering amplitude, $|f(\mathbf{\hat{k}})|^2$, as given by Eq. (\ref{f}), for a cylindrical cloud of thickness $30/k_0$ and radius $90/k_0$, illuminated by a plane wave. The direction of the incoming wave is indicated by an arrow. The number of scatterers is $N=20000$, and the detuning $\Delta_0=0$. The color-coded intensity is represented in log-scale. One can clearly see in red the strong forward emission of the sample, reminiscent of Mie scattering by clouds large compared to the wavelength. In the other directions, the scattered field is speckle-like due to the randomly positioned two-level scatterers, and describes the spontaneous emission by the cloud. Performing configuration averages would smooth out these fluctuations, except in the backward direction where, in the multiple scattering regime, the well-known coherent backscattering cone is recovered~\cite{Lagendijk85,Maret85}. Finally, the emission in the transverse directions is reduced due to the quasi-one-dimensional geometry.} \label{Emission_Diagram} \end{figure} Let us now discuss the formulation of the optical theorem in the framework of collective scattering. For that purpose, we consider an infinite slab illuminated by a plane wave. In the far-field limit, the field in a direction $\mathbf{\hat{k}}$ is \begin{equation}\label{Etot} E(\mathbf{r})=\left[\frac{E_0}{2}e^{i k_0 z}+ E_s^{\mathrm{(far)}}(r,\mathbf{\hat{k}})\right]e^{-i\omega_0 t}=\frac{E_0}{2}\left[e^{ik_0z}-\frac{e^{ik_0r}}{k_0r}f(\mathbf{\hat{k}})\right]e^{-i\omega_0 t} \end{equation} where the scattering amplitude $f$ of the scattered field is given by \begin{equation}\label{f} f(\mathbf{\hat{k}})=\frac{\Gamma}{\Omega_0}\sum_j\beta_j e^{-ik_0\mathbf{\hat{k}}\cdot \mathbf{r}_j}.
\end{equation} As a consequence, the scattered intensity at a large distance $r$ from the cloud is \begin{equation}\label{Is} I_s=I_0\frac{|f(\mathbf{\hat{k}})|^2}{k_0^2r^2}, \end{equation} while the total scattering cross section is obtained by integrating over the full solid angle \begin{equation}\label{Cs} \sigma_{sca}=\frac{1}{k_0^2}\int\mbox{d}\mathbf{\hat{k}} |f(\mathbf{\hat{k}})|^2. \end{equation} To simulate numerically the slab illuminated by a plane wave, we consider a cylinder of transverse size large compared to its thickness and to the wavelength, with a random homogeneous distribution of atoms. Figure~\ref{Emission_Diagram} shows the emission diagram of the scattered field for resonant excitation and a cylindrical cloud of atoms. Energy conservation imposes that \begin{equation}\label{energy} \sigma_{ext}=\sigma_{sca}+\sigma_{abs} \end{equation} where $\sigma_{ext}$ and $\sigma_{abs}$ are the cross sections for extinction and absorption, respectively. The extinction cross section is then obtained from the optical theorem. In the forward direction the total field is \begin{equation}\label{Efor} E_{fwd}(\theta=0)=\frac{E_0}{2}\left[e^{ik_0z}-\frac{e^{ik_0r}}{k_0r}f(0)\right]e^{-i\omega_0 t}. \end{equation} In the slab configuration, the cloud radiates mainly in a narrow forward cone, whose angular width is given by the inverse of the cloud transverse size. Hence, observing the field in a plane far from the atoms and within the forward cone of emission, the radius can be expanded as $r\approx z+(x^2+y^2)/2z$, and one obtains \begin{equation}\label{Efor2} E_{fwd}(\mathbf{r})\approx\frac{E_0}{2}\left[1-\frac{f(0)}{k_0z}e^{ik_0(x^2+y^2)/2z}\right]e^{i(k_0z-\omega_0 t)}. \end{equation} The intensity then reads \begin{equation}\label{Efor3} |E_{fwd}(\mathbf{r})|^2\approx \frac{|E_0|^2}{4}\left\{1-\frac{2}{k_0z}\textrm{Re}\left[f(0)e^{ik(x^2+y^2)/2z}\right]\right\}, \end{equation} where we have neglected the quadratic term $|E_s|^2$. The measured intensity is the incident intensity minus the extinction intensity. In Eq. \eqref{Efor3}, the integration over $x,\ y$ yields a factor $2i\pi z/k_0$, and one gets \begin{equation}\label{sext} \sigma_{ext}=-\frac{4\pi}{k_0^2}\textrm{Im}[f(0)]. \end{equation} Hence, from Eq. \eqref{Cs} one obtains the relation \begin{equation}\label{sext2} -\textrm{Im}[f(0)]=\frac{1}{4\pi}\int \mbox{d}\mathbf{\hat{k}} |f(\mathbf{\hat{k}})|^2+\frac{k_0^2}{4\pi}\sigma_{abs}. \end{equation} In our microscopic description of the light-atom interaction there is no absorption, so that $\sigma_{abs}=0$. An illustration of the validity of the optical theorem is given in Figure \ref{Optical_Theorem} for resonant light scattering by a slab containing two-level scatterers with a uniform density distribution. From Eqs. \eqref{f} and \eqref{sext}, and introducing the wavevector $\mathbf{k}=k_0\mathbf{\hat{k}}(\theta,\phi)$, we obtain the relation \begin{equation}\label{Csbeta} -\frac{\Omega_0}{\Gamma}\sum_j\textrm{Im}\left[\beta_j e^{-i\mathbf{k}_0\cdot \mathbf{r}_j}\right]= \frac{1}{4\pi}\int_0^{2\pi}\mbox{d}\phi\int_0^\pi \mbox{d}\theta\sin\theta \sum_{j,m}\left[\beta_{j}\beta_{m}^* e^{-ik_0\mathbf{\hat{k}}\cdot(\mathbf{r}_j-\mathbf{r}_m)}\right]. \end{equation} Consequently, using Eqs.
\eqref{Fazj} and \eqref{Fezj}, the average force along the $z$-axis reads: \begin{eqnarray}\label{force} F_z &=& \frac{\hbar k_0\Gamma}{4\pi N}\int_0^{2\pi}\mbox{d}\phi\int_0^\pi \mbox{d}\theta\sin\theta(1-\cos\theta)\sum_{j,m=1}^N\left[\beta_{j}\beta_{m}^* e^{-i\mathbf{k}\cdot(\mathbf{r}_j-\mathbf{r}_m)}\right]. \end{eqnarray} We observe from Eq. \eqref{force} that the average radiation pressure force is not merely proportional to the excitation probability, i.e., $\sum_j|\beta_j|^2$, but is the result of interference between the different atomic dipoles $\beta_j$. For this reason, a measurement of the force captures the coherence properties of the scattering process just as a detection of the light intensity does. To make this point more explicit, using Eq. \eqref{Es:far2}, it is possible to write the force as \begin{eqnarray}\label{force2} F_z &=& \frac{r^2}{Nc}\int_0^{2\pi}\mbox{d}\phi\int_0^\pi \mbox{d}\theta\sin\theta(1-\cos\theta)I_s(\theta,\phi), \end{eqnarray} where the scattered far-field intensity is $I_s(\theta,\phi)=2c\epsilon_0|E_s(\theta,\phi)|^2$. This highlights the fact that the radiation pressure force, which pushes the atoms along the direction of the incident beam, is proportional to the net radiation flux of the scattered intensity. In the case of isotropic emission (e.g., the single-atom case, or a cloud much smaller than the wavelength), the scattered intensity $I_s$ is independent of the angle and we get $F_z=(4\pi r^2/(Nc))I_s$: the direct proportionality between scattered power and radiation pressure force is recovered. The cooperative effect of light scattering in such small samples is then encoded in the total scattered intensity $I_s$. In the case of superradiant scattering for larger samples, a pronounced emission into the forward direction decreases the radiation force, as observed for example in Ref.~\cite{Bienaime2010}.
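As a cross-check that can be run alongside the coupled-dipole sketch given earlier (it reuses the variables beta, r, k0, Gamma and Omega0 defined there), the optical theorem \eqref{sext2} with $\sigma_{abs}=0$ can be verified numerically by comparing $-\textrm{Im}[f(0)]$ with the angular average of $|f(\mathbf{\hat{k}})|^2$. The $\theta$-$\phi$ midpoint quadrature below is an arbitrary choice of discretization.

\begin{verbatim}
import numpy as np

def f_amp(khat, beta, r, k0, Gamma, Omega0):
    """Scattering amplitude of Eq. (f) for unit direction(s) khat."""
    phase = np.exp(-1j * k0 * (khat @ r.T))    # shape (..., N)
    return (Gamma / Omega0) * phase @ beta

# forward amplitude f(0): khat along the laser axis e_z
f0 = f_amp(np.array([0.0, 0.0, 1.0]), beta, r, k0, Gamma, Omega0)

# midpoint quadrature of (1/4pi) * integral dk |f|^2 over the sphere
nth, nph = 120, 120
th = (np.arange(nth) + 0.5) * np.pi / nth
ph = (np.arange(nph) + 0.5) * 2.0 * np.pi / nph
T, P = np.meshgrid(th, ph, indexing="ij")
khat = np.stack([np.sin(T) * np.cos(P),
                 np.sin(T) * np.sin(P),
                 np.cos(T)], axis=-1).reshape(-1, 3)
f2 = np.abs(f_amp(khat, beta, r, k0, Gamma, Omega0)) ** 2
w = (np.sin(T) * (np.pi / nth) * (2.0 * np.pi / nph)).reshape(-1)

print(-f0.imag, (w * f2).sum() / (4.0 * np.pi))  # should agree, cf. Eq. (sext2)
\end{verbatim}

The same quadrature, with an extra factor $(1-\cos\theta)$ in the weights, directly yields the average radiation pressure force of Eq. \eqref{force}.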
\begin{figure}[t] \centering{\includegraphics[height=6cm]{Optical_Theorem}} \caption{Illustration of the optical theorem. Left: the scattered intensity integrated along $\phi$, i.e., $g (\theta) = \int_0^{2 \pi} d \phi \, |f(\theta,\phi)|^2$, is shown for resonant light $\Delta_0=0$ and a slab geometry with a uniform density distribution. The number of atomic scatterers is varied between $1$ and $5000$ (from inside to outside curves). The transverse size of the slab is $L_{x,y}=80/k_0$ and the longitudinal size is varied such that $L_z = (20/k_0)N/5000$. This procedure allows us to vary the optical thickness $b_0 = 4 \pi N / (k_0^2 L_x L_y)$ between $3\times 10^{-3}$ and $10$ while maintaining the atomic density constant. We would like to insist on the fact that the optical thickness is computed for the scattering of a scalar field, which leads to an unusual resonant cross section for light, $\sigma_0 = \lambda^2/\pi$ (different from the well-known resonant cross section $\sigma_0 = 3 \lambda^2 / (2 \pi)$ for vectorial light). The incident field is coming from the left and the intensity is plotted in log-scale. In addition to the forward Mie-like lobe, a lobe is also observed in the backward direction, which we attribute to light reflection due to the sharp variation of the optical index when the light hits the slab. Right: the blue circles represent the total scattering cross section obtained by integrating the emission diagram over $\theta$ and $\phi$, i.e., $\sigma_{sca} = 1/k_0^2 \times \int_0^\pi d \theta \, \sin(\theta) g(\theta)$. In our microscopic model, there is no absorption so that $\sigma_{abs} = 0$, leading to $\sigma_{ext} = \sigma_{sca}$. The optical theorem Eq. (\ref{sext}) can thus be written as $\sigma_{sca} = - (4 \pi / k_0^2) \textrm{Im}[f(0)]$, which is plotted in magenta. The good agreement between the two curves illustrates the validity of the optical theorem.} \label{Optical_Theorem} \end{figure} \section{Scaling of the scattering cross section} In this section we are interested in understanding how the scattering cross section scales with the parameters of the system. We consider the case of a slab with a uniform density distribution. The slab contains $N$ atoms and its size along the $x$, $y$, $z$ axes is denoted by $L_x$, $L_y$, $L_z$, respectively. The numerical simulations presented in figure \ref{Cross_Section_Scaling} show how the scattering cross section depends on the optical thickness of the cloud $b_0 = 4 \pi N / (k_0^2 L_x L_y)$. For dilute clouds of atoms we find: \begin{equation} \sigma_{sca} = 2.15\times L_x L_y \left[ 1 - \exp \left( -\frac{b_0}{2.15} \right) \right].\label{fit} \end{equation} When the slab is optically thick, i.e., $b_0 \gg 1$, we observe that the cross section appears to approach $2\times L_x L_y$. This factor of two corresponds to the well-known ``extinction paradox" \cite{Hulst81,Bohren83}, for which the extinction cross section is twice as large as the one predicted by geometrical optics, due to the diffraction contribution. The residual deviations from the factor of 2 between the scattering and geometrical cross sections might be associated with a still moderate size of our sample \cite{Chomaz11}, or with dipole blockade effects~\cite{Ott2013,Bienaime2013}. For dielectric spheres, $\sigma_{ext}$ shows an oscillatory behavior around $2 \sigma_{geo}$ ($\sigma_{geo}=L_xL_y$ for our square geometry), which is damped for increasing sizes of the sphere \cite{Kargl90, Berg11}. When $b_0 \ll 1$ the scattering cross section can be written as $\sigma_{sca} = (L_x L_y)b_0 = N \sigma_0$, where $\sigma_0= \lambda^2/ \pi$ is the resonant scattering cross section for a single atom in the scalar wave description (it differs from the well-known cross section $\sigma_0 = 3 \lambda^2 / (2 \pi)$ for vectorial light). In this limit, the interpretation is clear: at low optical thickness the cooperative effects are negligible and the scattering of light is given by the response of $N$ independent atoms. We refer the reader to \cite{Sokolov13} for a study of the areal scaling of the light scattering by varying the size of a dense, cold atomic cloud. \begin{figure}[t] \centering{\includegraphics[height=6cm]{Cross_Section_Scaling}} \caption{Scaling of the scattering cross section. Left plot: following the same procedure as the one described in figure \ref{Optical_Theorem}, we compute the scattering cross sections for different slab geometries. The results are shown as scatter plots in different colors. The parameters of the simulations are reported in the legend of the figure. By fitting the data, constraining the slope in the limit $b_0\rightarrow 0$ (right plot), we obtain a scattering cross section that scales with the optical thickness $b_0$ of the slab according to Eq. (\ref{fit}) (magenta full line).} \label{Cross_Section_Scaling} \end{figure}
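The two limits of the empirical fit \eqref{fit} can be checked in a few lines. In the sketch below the slab dimensions match those of figure \ref{Optical_Theorem}, while the atom numbers are arbitrary illustrative choices.

\begin{verbatim}
import numpy as np

def sigma_sca(b0, area):
    """Empirical fit of Eq. (fit) for the scattering cross section."""
    return 2.15 * area * (1.0 - np.exp(-b0 / 2.15))

k0 = 1.0
Lx = Ly = 80.0 / k0                  # transverse slab size, as in the figure
sigma0 = 4.0 * np.pi / k0**2         # = lambda^2/pi (scalar light)

for N in (10, 500, 5000):
    b0 = 4.0 * np.pi * N / (k0**2 * Lx * Ly)
    print(N, b0,
          sigma_sca(b0, Lx * Ly) / (N * sigma0),  # -> 1 for b0 << 1
          sigma_sca(b0, Lx * Ly) / (Lx * Ly))     # -> 2.15 for b0 >> 1
\end{verbatim}

In the dilute limit the fit reduces to $N\sigma_0$, i.e., $N$ independent scatterers, while in the optically thick limit it saturates slightly above twice the geometrical cross section.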
Before concluding, we would like to underline the importance of the role of diffraction. Since we are using a microscopic description of the system, diffraction effects for the scattered field are already included in our model. However, free propagation of the incident field needs to be added for a fully consistent description. In this respect, the incident plane wave considered so far in the paper is a peculiar case. We will focus on these aspects in forthcoming studies to precisely understand the role of diffraction. This will naturally lead us to compare our coherent microscopic model of coupled dipoles to stochastic incoherent models commonly used to describe photon propagation in random media. Understanding coherent light propagation in disordered resonant scatterers is of prime importance for both the atomic physics and the waves in complex media communities. \section{Conclusion} We have discussed the superradiant emission of a cloud of cold atoms, in which the interference of the waves radiated by the atomic dipoles builds up a coherent emission. Although the simple relation between absorbed photons and radiation pressure force existing in the single-atom case is lost, the optical theorem allowed us to recover a simple relation between the total scattered intensity and the displacement of the cloud center-of-mass. The measurement of the force on the center of mass of the atomic cloud thus contains (partial) information on the scattered intensity, even for large values of the optical thickness of the cloud. We have computed the total scattering cross section, which approaches a value close to twice the geometrical cross section of the sample, in line with the well-known extinction paradox. Finally, understanding the role of diffraction paves the way for further studies comparing our coherent microscopic model to well-established stochastic incoherent models describing photon propagation in random media. \section{Acknowledgements} We acknowledge financial support from the IRSES project COSCALI and from USP/COFECUB (projet Uc Ph 123/11). R. B. and Ph. W. C. acknowledge support from the Funda\c{c}\~ao de Amparo à Pesquisa do Estado de S\~ao Paulo (FAPESP). M. T. R. is supported by an Averro\`es exchange program.
\section{Introduction} \label{sec:intro} Human stress research is significant since it plays a vital role in social, physiological, and psychological health. Stress research has a wide range of applications, which include stress monitoring during the daily routine, stress assessment for improving health and work productivity, and preventing the onset of serious diseases. This research domain is beneficial for both individuals and society. Stressful conditions manifest in the form of adverse effects on an individual's working abilities and health. They make individuals vulnerable to various kinds of diseases and weaken the recovery of the human body from various clinical conditions~\citep{subhani2017machine}. Long-term exposure to heightened stress could cause symptoms of depression. Besides this, the strong connection between depression and stress could boost anxiety and mood disorders~\citep{dempsey2018stress}. Depression affects almost 350 million people worldwide, and the situation seems to be worse in developing countries~\citep{world2015depression}. There are multiple causes of stress in human life that could also lead to mental disorders. These factors include, but are not limited to, internal conflicts, political unrest, economic instability, rising poverty, crime rates, and natural disasters. Hence, stressful conditions can severely affect human life in day-to-day activities and can trigger mental and clinical conditions. Therefore, efficient stress detection methods are the need of the time. Recently, the global change in lifestyle owing to COVID-19 is also believed to give rise to various mental conditions. We have been forced to change our daily lives and minimize social interaction, which is bound to affect our mental health. Experts believe that this could result in a mental pandemic if not handled properly \citep{spoorthy2020mental,ransing2020can}. Hence, monitoring and detecting stress at the right time has become the need of the hour. A large segment of the human population is living in a constant state of stress without even knowing its serious consequences. Most individuals are unaware of their current stress level, and ironically, a high level of stress is one of the causes of heart attacks and strokes. There are varied causes of stress, such as poor income, joblessness, higher crime rates, natural disasters, and many others. According to a survey of the American Psychological Association in 2017, 62\% of individuals have stress due to financial problems, 62\% due to problems at the workplace, 57\% due to political unrest in the country, and 51\% due to violence and crime in society~\citep{american2017stress}. Despite such serious consequences, the definition of human stress in medicine is sometimes vague. Hence, priority should be given to stress quantification with precise numerical indexes. In the near future, it will not be enough to tell patients dealing with stress that it is all in their heads, since advancements in diagnostic tools will aid in quantifying stress levels more precisely. Human stress is a difficult phenomenon to explain, because every individual perceives it differently. There is a common perception in society that stress is a bad thing, but this is not always true, as good stress also exists. For instance, an individual might have increased productivity under a stressful or challenging situation.
In recent years, there has been an effort to develop strategies for stress assessment in a variety of daily life activities. For instance, in \citep{can2020personal}, the authors have proposed a stress detection scheme based on heart activity, skin conductance, accelerometer, and skin temperature data of the subject. The stress measurement protocol is composed of baseline, lecture, exam, and recovery sessions. A maximum accuracy of 94.52\% is achieved for three-class stress classification using a random forest classifier. A study in \citep{delmastro2020cognitive} focused on cognitive training and stress detection in older adults suffering from mild cognitive impairment (MCI) while they were participating in a cognitive and motor rehabilitation session. An acute multi-level stress classification framework using photoplethysmography signals in response to a mental arithmetic task was presented in \citep{zubair2020multilevel}. A stress and anxiety detection mechanism using physiological signals for the academic environment was proposed in \citep{rodriguez2020towards}. A study to evaluate physical and psychological stress in firefighters using heart rate variability parameters was presented in \citep{pluntke2019evaluation}. Detection of human stress using ECG data, with car driving and mental arithmetic tasks as stimuli and a deep neural network classifier, was performed in \citep{cho2019ambulatory}, achieving a stress classification accuracy of 90.19\%. Regression analysis for the measurement of perceived stress using rest-state EEG signals was presented in \citep{gillani2021prediction}. A review study about human stress assessment using physiological signals was presented in \citep{gedam2021review}. One of the major shortcomings of this review is that it only focuses on physiological measures of stress and does not cover other common methods of stress measurement, like physical and psychological measures. Moreover, it does not discuss publicly available human stress datasets. Generally, stress is defined as a response of the human body to different kinds of situations, like a threat or a challenge \citep{muthukumar2010cadmium}. Two main systems of the human body respond to stress, i.e., the autonomic nervous system (ANS) and the hypothalamic-pituitary-adrenal (HPA) axis~\citep{ulrich2009neural}. When a stressor is encountered by a person, it activates neurons present in the hypothalamus. The hypothalamus releases a hormone called corticotropin-releasing hormone (CRH), which consequently causes the release of adrenocorticotropic hormone (ACTH) from the pituitary gland. ACTH travels in the blood and affects the adrenal glands, which in turn triggers the release of stress hormones including cortisol, epinephrine, and norepinephrine~\citep{kajantie2006effects}. Cortisol is released in response to stress and helps an individual cope with an immediate threat. In terms of treatment and cure, stress is categorized into three main types, i.e., acute stress, episodic stress, and chronic stress, based on the symptoms and duration~\citep{werner1993risk}. Acute stress (also termed instantaneous stress) originates from a specific event that is novel or unpredictable for an individual, for instance a public speaking task, a nearly missed road accident, or an interview. These stressors do not tend to harm an individual's health; rather, they are good for human health,
since such events provide a chance for the human body to practice and develop its fight response to stressful situations in the future~\citep{segerstrom2004psychological}. Severe stressors that persist for a longer duration, on the other hand, can lead to serious health disorders. Episodic stress occurs when a person faces multiple acute stressors over a shorter period. Episodic stress is commonly faced by individuals who take on more responsibilities than they can easily manage in a given time. Individuals facing episodic stress are often in a hurry and have disorganized personalities. Individuals who have a pessimistic approach toward daily life routine tasks tend to have episodic stress. Unlike acute stress, episodic stress has negative effects on individual health. People facing episodic stress have low confidence in their abilities, and they assume that they will never be able to come out of a stressful situation~\citep{sincero2012three}. Chronic stress (also called long-term stress) originates from a variety of reasons like an unsatisfactory job, a tense family life, and financial crises~\citep{hiriyappa2013stress}. Unlike acute stressors, which can be negative as well as positive, chronic stress is always negative. Chronic stress affects the personality of an individual and can be the cause of many serious diseases, which include heart attacks, cancer, and lung diseases~\citep{salleh2008life}. For the assessment of human stress, subjective and objective measures have been used~\citep{gross2016standard,fohr2015subjective}. For subjective stress assessment, two different approaches are used, namely standard stress measurement questionnaires designed by field experts and sessions conducted with psychologists~\citep{gurung2013health}. Objective measures of stress, in turn, include physiological and physical measures~\citep{onorati2013reconstruction,arsalan2019classification}. In physical measures, visible changes in the human body are observed, such as facial expressions~\citep{deschenes2015facial}, eye blinking rate~\citep{gowrisankaran2012asthenopia}, and dilation of the pupil~\citep{schulte2011handbook}. For physiological measures, on the other hand, sensors are placed on the human body to measure internal changes. Towards this, various biomarkers have been employed, including heart rate variability (HRV), heart rate (HR)~\citep{subahni2012association}, electrodermal activity~\citep{liapis2015recognizing}, respiration~\citep{wielgosz2016long}, and cortisol~\citep{liew2015classifying}. Further, in recent years the application of machine learning for developing artificially intelligent systems has gained pace. One of the driving factors for this scientific advancement has been the success of deep learning algorithms. Machine learning is poised to significantly change and improve how healthcare systems work. The improvement in computational power will allow the development of AI-enabled embedded systems in healthcare. Human stress detection could also benefit from these advancements. Machine learning can be deployed for both offline and online stress assessment. Some challenges that need to be handled include dealing with unpaired data, assigning reliable labels, and developing algorithms that work reliably with limited data and are explainable. Some studies focus on reviewing the current state of affairs related to human stress detection. For instance, a review on human stress detection using bio-signals is presented in~\citep{giannakakis2019review}.
However, a discussion about the psychological, physical, and behavioral measures of human stress is found lacking. Further, publicly available databases for human stress measurement were also not explored. In another study, objective, subjective, physical, and behavioral measures for stress detection, as well as publicly available data used for human stress, are discussed. Another application-specific human stress measurement survey, focusing on driver stress levels, is presented in~\citep{rastgoo2018critical}. Physical and physiological measures of human stress for driver stress detection are explored in detail. The limitation of this survey is that it only discusses a specific application, i.e., driver stress level, and is not generic. Similarly, a review of methods developed for human stress measurement at the workplace is discussed in~\citep{carneiro2017new}. The limitation of this survey is that it only discusses stress measurement methods for a specific application, i.e., the workplace environment, and it also does not discuss the publicly available databases for human stress assessment. A human stress measurement survey using smartphones and wearable sensors is presented in~\citep{can2019stress}. The paper presents in-lab as well as out-of-laboratory stress measurement studies. The major limitations of this survey were the lack of discussion of existing publicly available datasets for human stress detection and the presentation of a limited amount of literature as compared to other available stress assessment surveys. A survey of devices available in the market was presented in~\citep{thapliyal2017stress}, without any information on the studies using those devices. In summary, there is a need for a comprehensive presentation of the available human stress measurement methods. To the best of our knowledge, the current review addresses most of the shortcomings of existing human stress assessment survey papers by thoroughly investigating all the subjective and objective measures of human stress. In particular, our major contributions include, \begin{enumerate} \item Subjective measures, which include psychological questionnaires, are explored in detail for completeness. \item Objective measures of stress, comprising data acquired from wearable and non-wearable sensors, are elaborated. \item Publicly available human stress measurement databases and commonly used stimuli for inducing stress are also discussed in detail. \item Future research directions in the domain of automated human stress detection using artificial intelligence are identified. \end{enumerate} The organization of this review paper is as follows. \Sec{intro} presents an introduction to the categorization of available stress measurement techniques and a discussion of the existing stress measurement reviews and their limitations. \Sec{sads} presents a review of the stressors commonly adopted in stress measurement studies for inducing stress in the participants, followed by a brief discussion of publicly available stress detection databases. Subjective stress measurement techniques commonly used in the literature are explored in \Sec{ssa}, followed by objective stress measurement techniques, their general framework, and the associated literature in \Sec{osa}. \Sec{msa} presents the multimodal stress detection schemes available in the literature, followed by a discussion of the limitations of the existing schemes and future directions in \Sec{fd}, and the conclusion in \Sec{conc}.
\section{Stress Detection Datasets and Stressors} \label{sec:sads} This section is subdivided into two parts: first, we discuss some commonly used stressors for inducing stress in humans; second, we summarize publicly available datasets for human stress detection. \subsection{Stress Inducers: Stressors} Human stress measurement methods presented in the literature use a wide variety of stressors, which include a public speaking task, an interview, an arithmetic task stressor, and many others. Stress is measured in response to these stressors by using different physiological and psychological techniques. Herein, we review the most commonly used stressors for inducing stress in the participants and their related literature. \subsubsection{Stroop Color Word Test (SWT)} SWT is a neuropsychological test developed by J.R. Stroop in 1935, and it has been widely adopted for experimental as well as clinical purposes. SWT is composed of three different tasks~\citep{stroop1935stroop}: in the first task, the names of colors are written in black; in the second task, the names of the colors and the color of the written text differ; whereas in the third task there are squares of different colors. During the test, a participant should name the color of the word and not the word itself. In another version of SWT, the three tasks are named the neutral (introductory) session, the congruent or non-conflict task, and the non-congruent or conflict task. In the introductory session, all color names are written in black. In the congruent session, all color names are written in the same color as the color name, whereas in the non-congruent session, the name of the color is written in a color different from the color name. SWT has undergone a wide range of changes since its inception in 1935. The alterations include an increase or decrease in the task duration, the addition of more colors to the experimental tasks, and the selection of one or more non-congruent colors among the number of congruent colors. The Stroop color-word test has been widely used in brain imaging and human attention measurement studies~\citep{pujol2001effect} and for the measurement and identification of human stress~\citep{pehlivanouglu2005computer,tulen1989characterization,svetlak2010electrodermal,renaud1997stress,zhai2006stress,lundberg1994psychophysiological,alonso2015stress,ren2012affective,kurniawan2013stress,giannakakis2017stress,giakoumis2012using,karthikeyan2012descriptive,krantz2004consistency}. \subsubsection{Mental Arithmetic Task (MAT)} MAT is one of the most commonly used stimuli for inducing stress~\citep{lundberg1994psychophysiological,ushiyama1991physiologic,tomaka1994effects,seraganian1997effect,ring2002secretory,hassellund2010long,linden1991arithmetic}. The mental arithmetic task is a mechanism to increase the mental workload by performing a series of arithmetic operations with a varying range of difficulty. This stimulus is easy to implement and does not require any special instrument. Another variant of the mental arithmetic task is the Montreal Imaging Stress Task (MIST)~\citep{dedovic2005montreal}, which is a computer-based stress-inducing protocol mainly consisting of mental arithmetic problems and has been used as a stressor in several studies~\citep{setz2009discriminating,minguillon2016stress,al2015mental,al2016mental}. \subsubsection{Cold Pressor Test (CPT)} The CPT is another stimulus that is commonly used for inducing stress in stress measurement experiments.
CPT was first introduced by Hines and Brown in 1932~\citep{hines1932standard}. In particular, CPT involves immersion of the human hand or limb in cold water for a duration of 2 to 3 minutes. During this experiment, the subject feels uncomfortable, and it is painful to adapt to the low temperature for quite some time. The CPT protocol is widely used in laboratory experiments because of its ease of use. CPT triggers the activation of the sympathetic nervous system, which increases the blood pressure, heart rate, and skin conductance of the human body~\citep{lovallo1975cold}. A rise in cortisol level is also observed during CPT~\citep{al2002adrenocortical,bullinger1984endocrine}. Various versions of CPT have been used in different experiments, which include immersion of both hands~\citep{suter2007cold} or both feet in hot or cold water~\citep{previnaire2012severity}. In~\citep{frings2013stress}, bilateral foot immersion was used to elicit a stress response, increasing salivary cortisol concentration and heart rate. In~\citep{hassellund2010long}, the authors conducted a study in which the right hand of the subject was immersed completely in cold water for a duration of one minute. In another study~\citep{shi2010personalized}, the participants were asked to keep their hand in ice water until they started to feel discomfort. \subsubsection{Social Evaluative Tasks} Psycho-social stress is a type of human stress which occurs when an individual has to face people or a group of people, as in a public speaking task. When a socially threatening situation occurs, two mechanisms of the human body are affected, namely the autonomic nervous system and the neuroendocrine system. The hypothalamus activates both these systems to monitor the environmental demand (i.e., stress) as well as the internal state of the subject~\citep{bitsika2014hpa}. Based on these two mechanisms, physiological as well as behavioral responses are activated to generate a fight-or-flight response~\citep{taylor2000biobehavioral}. The physiological system of a human being is affected immediately upon exposure to a social stressor~\citep{dickerson2004acute}. Exposure to social stressors has been the cause of many diseases, including depression~\citep{mcewen2005glucocorticoids}, cardiovascular diseases~\citep{kemp2012depression}, and immune dysfunction~\citep{glaser2005stress}. Obesity, anxiety, and psychosocial stress have also been found to be interlinked~\citep{pittig2013heart}. Hence, treating social stress is important, towards which exposure therapies have been developed to treat anxiety. Real-life social evaluative situations generate psychosocial stress~\citep{wolpe2013systematic}. Instead of exposure to real-life events, virtual reality has also been used as a stressor~\citep{parsons2008affective}. Virtual reality exposure therapy (VRET) is an intermediate phase between thoughts and real-life events. Virtual reality is useful for a person who has difficulty imagining fearful tasks. VRET also has the advantage that if the stimuli become too threatening for the patient, the therapist has the control to stop them. VRET is a very effective method of treating social anxiety, and based on it, patients learn methods to face such threatening situations in real life~\citep{bordnick2012feasibility}. The public speaking task as a social stressor has been the focus of very few studies.
Existing literature either focuses on a real audience~\citep{kudielka2009we} or a virtual audience~\citep{slater2006experimental,felnhofer2014afraid}. A complete study based on different physiological measures to determine the impact of social stress in the real world as well as in a controlled environment is still pending. Existing literature has shown that a virtual audience is able to induce stress in a virtual public speaking task, based on heart rate and self-reported anxiety measures~\citep{slater2006experimental,felnhofer2014afraid,pertaub2002experiment}. Moreover, literature exists on the comparison of stress based on gender. It is shown in~\citep{kudielka2009we} that when men and women are both subjected to real-life stressors, no significant difference based on gender is found. The HPA response has also been found to show no changes between male and female participants~\citep{kelly2008sex}. It has been established in the literature that women have a decreased happiness level after facing social stressors. A study presented in~\citep{hemmeter2005modification} shows that men have a higher cortisol concentration than women when facing a virtual stressor. Higher cortisol levels in women as compared to men in response to social rejection were reported in a study given in~\citep{stroud2002sex}. In~\citep{kothgassner2016salivary}, the authors examined the stress response to a public speaking task in front of a real audience, a virtual audience, and in an empty lecture hall. Gender differences in the stress response were also evaluated, and heart rate, heart rate variability, and saliva cortisol were used as measurement parameters. \subsubsection{Music} The effect of music on human stress has also been the subject of various studies. In~\citep{escher1993music}, the authors experimented with cortisol changes and found that positive cortisol changes occur when the subject is asked to listen to music before and during a stressful medical treatment. In~\citep{suda2008emotional}, the authors demonstrated the effect of music on suppressing stress with an increase of cortisol level, whereas, on the other hand, in another study the authors demonstrated that for a non-music condition the cortisol levels decreased after the stressor period~\citep{khalfa2003effects}. Another parameter of research is the effect of music on the sympathetic nervous system (SNS) of an individual. Different experiments have been conducted in this regard to establish the effect of music on the SNS. In~\citep{bartlett1996physiological}, a decrease in SNS activity was observed in response to music, but a few other studies contradict these findings. An investigation into whether human stress is relieved by music is reported in~\citep{allen2001normalization}. It was concluded that the level of relaxation and the ability to cope with challenges increase with a decrease in the perceived level of stress of an individual. A decrease in anxiety in response to listening to music is a consistent finding of many studies~\citep{knight2001relaxing}. A few studies exist that have reported no reduction in anxiety in response to music~\citep{evans2002effectiveness}. \subsubsection{International Affective Picture System (IAPS)} IAPS is a collection of photos that has been widely used to elicit an emotional response, either positive or negative, in viewers~\citep{lang1997international}. The photos have been evaluated on a 9-point rating scale of valence and arousal.
IAPS has been used as a very effective tool to induce stress in stress recognition experiments~\citep{baltaci2016stress,liao2005real,giannakakis2017stress,nhan2009classifying,khalilzadeh2010qualitative}. The database was developed by the National Institute of Mental Health Center for Emotion and Attention at the University of Florida and is composed of 956 images that have been categorized into pleasant (to elicit positive feelings), unpleasant (to elicit negative feelings), and neutral images. The database includes normative ratings, developed on three dimensions, i.e., valence, arousal, and dominance, which represent the average rating of the emotion induced by each picture. These ratings help researchers using IAPS to select an appropriate set of images for inducing the relevant emotions. The establishment of this type of average rating is termed standardization by psychologists. The standard rating of IAPS was obtained from 100 students (50 males and 50 females) of US-American origin. Normative ratings of IAPS have also been obtained from non-US participants of other origins, i.e., Hungarian~\citep{deak2010hungarian}, German~\citep{gruhn2008age}, Portuguese~\citep{lasaitis2008brazilian}, Indian~\citep{lohani2013cross}, and Spanish~\citep{dufey2011adding}. Various kinds of physiological modalities, which include fMRI~\citep{caria2010volitional}, EEG~\citep{hajcak2009brain}, magnetoencephalography~\citep{styliadis2015distinct}, skin conductance~\citep{d2010early}, heart rate~\citep{bradley2001emotion}, and electromyography~\citep{baglioni2010psychophysiological}, have been used along with the IAPS stimulus. \subsubsection{Trier Social Stress Test (TSST)} TSST is a psychological stress-inducing protocol for the laboratory environment and was developed by Clemens Kirschbaum in 1993~\citep{kirschbaum1993trier}. TSST consists of two parts, i.e., an anticipation period of 10 minutes and a test period of 10 minutes, in which the subject has to deliver a speech and perform a mental arithmetic task in front of an audience. TSST has been used in a variety of stress measurement studies for inducing stress~\citep{kurniawan2013stress,engert2014exploring,vinkers2013effect,nater2005human}. \subsection{Publicly Available Datasets for Human Stress Detection} Only a few human stress assessment datasets have been curated by the research community and are publicly available for further research. In this section, we present details of publicly available data for this task using physiological signals. A human stress measurement dataset (\url{https://physionet.org/content/drivedb/1.0.0/}) to measure driver stress using physiological signals of electrocardiogram, electromyogram, skin conductance, and respiration is presented in~\citep{healey2005detecting}. The physiological signals from 24 drivers were acquired during three different phases, i.e., the rest condition, highway driving, and city driving. The three conditions (rest, highway, city) under which the data are acquired were mapped onto three stress levels, i.e., low stress, medium stress, and high stress, respectively. One of the major limitations of this database is that the sampling rate of all the acquired physiological sensors is low, e.g., the electromyogram signal is recorded at a sampling rate of 15.5 Hz.
Another dataset to measure driver workload (\url{http://www.hcilab.org/automotive/}) using physiological signals of heart rate, skin conductance, and body temperature, together with GPS, acceleration, and brightness level data obtained from a smartphone, is presented in~\citep{schneegass2013data}. The data from 10 drivers (7 males and 3 females) were acquired while driving on a pre-defined route of 23.6 km covering five different road types, i.e., 30 km/h zone, 50 km/h zone, highway, freeway, and tunnel. Moreover, labels depicting different levels of workload, i.e., from no workload to maximum workload, are also provided in the database. The dataset can be used for the assessment of different levels of workload based on physiological signals. Another publicly available human stress measurement dataset (\url{https://physionet.org/content/noneeg/1.0.0/}) using bio-signals of electrodermal activity, temperature, acceleration, heart rate, and arterial oxygen level is presented in~\citep{birjandtalab2016non}. It consists of data from 20 participants (16 males and 4 females). Data acquisition was performed under four different conditions, i.e., a relaxed state, physical stress, cognitive stress, and emotional stress. The relaxation condition was achieved by asking the participants to listen to a soothing music track. Physical stress was induced by making the participants jog on a treadmill at 3 miles/hour. Cognitive stress was elicited by asking the participants to count backward from 2485 in steps of seven. Lastly, emotional stress was evoked by watching a video clip from a movie. Another dataset (\url{https://osf.io/c42cn/wiki/home/}) to measure driver behavior under different kinds of emotional, cognitive, and startling stressors, which are major causes of accidents, is presented in~\citep{taamneh2017multimodal}. The dataset was acquired from 68 drivers, who drove under four different conditions, i.e., no distraction, emotional distraction, cognitive distraction, and sensorimotor distraction, in a controlled environment in a driving simulator. Modalities used for acquiring the driver response include heart rate, respiration rate, facial expressions, gaze, and electrodermal activity from the palm of the subject. Different types of subjective questionnaires were used to measure the cognitive state, personality type, and task load of the subject. The \textbf{WE}arable \textbf{S}tress and \textbf{A}ffect \textbf{D}ataset (WESAD) (\url{https://ubicomp.eti.uni-siegen.de/home/datasets/}) is a publicly available dataset consisting of physiological and motion data of subjects for both emotion and stress stimuli~\citep{schmidt2018introducing}. Fifteen participants (12 males and 3 females) were involved in the experiment, and the data were acquired in a laboratory setting. Data for each subject were recorded in three different conditions, i.e., a baseline condition recorded while performing a reading task, an amusement condition achieved by watching a set of funny videos, and a stressed condition achieved by exposure to the Trier Social Stress Test. Sensor modalities used in the data acquisition include electrocardiography, electrodermal activity, electromyography, blood volume pulse, respiration, body temperature, and three-axis acceleration.
A multimodal \textbf{S}mart reasoning system for \textbf{WELL}-being at work and at home \textbf{K}nowledge \textbf{W}ork (SWELL-KW) dataset (\url{http://cs.ru.nl/~skoldijk/SWELL-KW/Dataset.html}) for research on human stress and user modeling was developed in~\citep{koldijk2014swell}. Data were curated from 25 participants performing a variety of knowledge work tasks, which included report writing, preparing presentations, checking emails, and information search. Stress was induced by telling the participants that they had to present one of the prepared presentations to get the full experiment participation fee. Data for each participant were recorded for a duration of three hours, which was sub-divided into three one-hour blocks. Each block started with an eight-minute relaxation period, after which the participant was assigned the tasks on which he/she had to work. The participants had to write two reports and prepare one presentation in each block of the experiment. Additional stress was induced by showing a countdown timer flashing the remaining time for task completion. A dataset (\url{https://catalog.ldc.upenn.edu/LDC99S78}) to measure the effect of human stress on speech signals was presented in~\citep{steeneken1999speech}. Three different databases, named Speech Under Stress Conditions (SUSC), Speech Under Simulated and Actual Stress (SUSAS), and DERA License Plate (DLP), were developed in this research work to support robust speech processing algorithms for the identification of human stress in speech signals. Towards this, 32 speakers, comprising 19 male and 13 female participants with ages ranging from 22 to 76 years, participated in the experiment to record 16,000 voice samples. The speech signals were sampled using a 16-bit analog-to-digital converter at a sampling rate of 8 kHz. Another dataset (\url{https://www.sensornetworkslab.com/clas}) for \textbf{C}ognitive \textbf{L}oad, \textbf{A}ffect and \textbf{S}tress Recognition (CLAS) was presented in~\citep{markova2019clas}. The database consists of physiological recordings of ECG, PPG, and EDA along with accelerometer motion data from 62 healthy participants (45 men and 17 women) with ages ranging from 20 to 50 years. The data were acquired while the participants performed three interactive and two perceptive tasks. The interactive tasks include mathematical problems, logic tasks, and the Stroop color-word test, whereas the perceptive tasks are composed of images and audio-video stimuli. All the physiological signals were acquired at a sampling rate of 256 Hz with a resolution of 16 bits per sample. Another publicly available dataset for the assessment of social stress in humans using physiological signals of blood volume pulse and electrodermal activity is presented in~\citep{meziatisabour2021ubfc}. Moreover, video recording was performed to measure remote photoplethysmography and facial features. A total of 68 undergraduate students from the psychology department participated in the experiment. The Competitive State Anxiety Inventory (CSAI) questionnaire was used to measure three dimensions of self-reported anxiety, which include cognitive anxiety, somatic anxiety, and self-confidence. The Trier Social Stress Test was used as the stimulus, during which the physiological signals were acquired and the video recording was performed. A perceived human stress measurement dataset (\url{https://sites.google.com/site/simplrgp/resources}) using EEG signals was presented in~\citep{arsalan2019classification}.
The database consists of EEG recordings from 28 participants (13 males and 15 females) with ages ranging from 18 to 40 years, in three different phases of the experiment, i.e., pre-activity, during activity, and post-activity. EEG recording was performed while the participant was delivering a presentation on an unknown topic for a time duration of five minutes. Subjective scores from the perceived stress scale questionnaire were also recorded. \section{Subjective Stress Assessment} \label{sec:ssa} Subjective measures for human stress assessment have traditionally been used for many decades. Although our objective here is to review methods that use data from wearable and non-wearable sensors for automated stress detection using artificial intelligence, subjective measures are explored herein for completeness. Further, such assessments have been used to benchmark machine learning-based methods. Towards this, there exists a wide range of questionnaires developed by psychologists for measuring different types of stress. These measures are based on questionnaires filled in by the subject. Psychological questionnaires are used by researchers to validate the objective measures obtained from sensors. The perceived stress scale (PSS) questionnaire~\citep{cohen1983global} is commonly used by psychologists to measure chronic stress. The acute stress disorder (ASD) scale questionnaire~\citep{bryant2000acute} was developed by psychologists to measure acute stress. Some of the other questionnaires used by psychologists are the relative stress scale (RSS)~\citep{ulstein2007relative}, the daily stress inventory (DSI)~\citep{brantley1987daily}, the brief symptom inventory~\citep{derogatis1993brief}, and the trier inventory for the assessment of chronic stress (TICS)~\citep{schulz1999trier}. A brief review of the commonly used questionnaires for stress assessment is given below. \subsection{Acute Stress Disorder (ASD)} ASD is a subjective self-reporting questionnaire inventory that is used to quantify acute stress disorder and post-traumatic stress disorder. ASD is a self-reporting version of the Acute Stress Disorder Interview (ASDI) questionnaire. ASD was developed with three aims, i.e., (a) identification of ASD, (b) a self-report version of the ASDI, and (c) a measure of post-traumatic stress disorder (PTSD). It is a 19-item questionnaire and is compliant with the Diagnostic and Statistical Manual of Mental Disorders criteria. The scale has been successfully used to measure acute stress disorder among a wide range of subjects~\citep{bryant2000acute}. The 19 questions of the ASD questionnaire are composed of 5 dissociative, 4 reexperiencing, 4 avoidance, and 6 arousal items. The questions of the ASD questionnaire are rated on a five-point Likert scale, where 1 means that a condition did not occur at all and 5 means that the particular situation occurred very strongly. The minimum score of the questionnaire is 19 and the maximum score is 85. A study to analyze the factor structure of acute stress disorder in the earthquake victims of the Chinese population is conducted in~\citep{wang2010factor}. The study was conducted on a sample of 353 victims, consisting of 180 men and 173 women with a mean age of 29.36 years and a standard deviation of 11.45. The study concluded that a four-factor model consisting of dissociation, reexperiencing, avoidance, and arousal is consistent with the conceptualization of ASD. A wide range of studies has been conducted to establish a correlation between PTSD and ASD.
The studies report that around three-quarters of trauma survivors who show symptoms of ASD ultimately develop PTSD~\citep{harvey1998relationship,harvey1999two,harvey2000two,brewin1999acute}. A study conducted for motor vehicle/industrial accidents in~\citep{harvey1998relationship} found a 3-factor model consisting of acute post-traumatic stress reactions, dissociative symptoms, and dissociative amnesia. The study was conducted on 99 participants, consisting of 65 men and 34 women with a mean age of 31.59 years and a standard deviation of 11.28. \subsection{Brief Symptom Inventory (BSI)} BSI is a questionnaire developed by psychologists to measure psychological distress and psychiatric disorders in people~\citep{derogatis1993brief}. The data collected from the questionnaire can be used to diagnose and treat patients. BSI is a 53-item questionnaire with each item answered on a five-point scale. The 53 items of BSI consist of questions covering nine symptom dimensions, i.e., Somatization, Obsession-Compulsion, Interpersonal Sensitivity, Depression, Anxiety, Hostility, Phobic Anxiety, Paranoid Ideation, and Psychoticism, and three indices of distress, i.e., the Global Severity Index, the Positive Symptom Distress Index, and the Positive Symptom Total. The time required by the subject to complete the questionnaire is approximately 8 to 12 minutes. The respondent answers each item on a scale from 0 (condition never occurs) to 4 (condition occurs very frequently); the minimum possible total score is therefore 0 and the maximum is 212. The somatization dimension is calculated from items 2, 7, 23, 29, 30, 33, and 37; the obsession-compulsion dimension from items 5, 15, 26, 27, 32, and 36; interpersonal sensitivity from items 20, 21, 22, and 42; depression from items 9, 16, 17, 18, 35, and 50; anxiety from items 1, 12, 19, 38, 45, and 49; hostility from items 6, 13, 40, 41, and 46; phobic anxiety from items 8, 28, 31, 43, and 47; paranoid ideation from items 4, 10, 24, 48, and 51; and psychoticism from items 3, 14, 34, 44, and 53 of the questionnaire. Items 11, 25, 39, and 52 do not contribute to the calculation of any dimension but are recorded because of their clinical importance. The Global Severity Index is calculated by summing the items of all nine dimensions as well as the four items not included in any dimension, and dividing this sum by the total number of items the person answered. The Positive Symptom Total is calculated by counting the number of items whose responses are non-zero. The Positive Symptom Distress Index is obtained by dividing the sum of the non-zero response items by the Positive Symptom Total. A minimal scoring sketch is given below. BSI has been used for examining the relationship among psycho-social family risk factors, parental psychological distress, and quality of life in pediatric cancer survivors in a study conducted in~\citep{racine2018quality}. The study reports that families having a low level of distress have a lesser impact on the quality of life of pediatric cancer survivors.
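To make the scoring procedure concrete, the sketch below implements the three BSI indices described above in Python; the item-to-dimension mapping follows the listing in this subsection, and the assumed 0--4 response coding should be checked against the official scoring manual.
\begin{verbatim}
# A minimal sketch of BSI scoring; `responses` maps item numbers
# (1..53) to ratings assumed to range from 0 ("never") to 4.
DIMENSIONS = {
    "somatization": [2, 7, 23, 29, 30, 33, 37],
    "obsession_compulsion": [5, 15, 26, 27, 32, 36],
    "interpersonal_sensitivity": [20, 21, 22, 42],
    "depression": [9, 16, 17, 18, 35, 50],
    "anxiety": [1, 12, 19, 38, 45, 49],
    "hostility": [6, 13, 40, 41, 46],
    "phobic_anxiety": [8, 28, 31, 43, 47],
    "paranoid_ideation": [4, 10, 24, 48, 51],
    "psychoticism": [3, 14, 34, 44, 53],
}

def bsi_indices(responses):
    answered = list(responses.values())
    total = sum(answered)
    gsi = total / len(answered)              # Global Severity Index
    pst = sum(1 for v in answered if v > 0)  # Positive Symptom Total
    # Zero responses add nothing to the sum, so total / pst equals
    # the sum of non-zero items divided by their count (PSDI).
    psdi = total / pst if pst else 0.0
    dims = {name: sum(responses[i] for i in items)
            for name, items in DIMENSIONS.items()}
    return gsi, pst, psdi, dims

# Example: a respondent who rates every item as 1.
scores = bsi_indices({item: 1 for item in range(1, 54)})
\end{verbatim}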
The relationship between psycho-pathological symptoms and technological addictions has also been studied. In a study conducted among 126 university students, the nine dimensions obtained from the BSI questionnaire were found to be significantly correlated with internet addiction~\citep{adalier2012relationship}. A significant association was found between the anxiety level and internet addiction in adolescents in a study conducted in~\citep{stavropoulos2017longitudinal}. \subsection{Relative Stress Scale (RSS)} RSS is a commonly used subjective marker to measure psychiatric disorders among individuals who act as caretakers of dementia patients. It is a 15-item questionnaire and is a reliable measure of stress disorders among the carers of dementia patients. The items of the questionnaire are scored on a scale of 0 to 4, where 0 means a particular event never occurred and 4 means a particular event occurs very frequently. The minimum score of the questionnaire is 0, whereas the maximum score is 60. Age, gender, education, occupation, and the relationship of the carer with the patient are also recorded in RSS. The carers of the patients are also asked to specify their routine and the estimated time per week spent caring for and assisting the patient. The RSS questionnaire covers many different aspects of carer burden, such as the subjective emotional response (emotional distress), negative feelings associated with the behavior of patients (negative feelings), and restrictions in the carer's social life (social distress). Items 1, 2, 3, 4, 5, and 6 of the RSS questionnaire measure emotional distress; items 7, 8, 9, 10, 11, and 13 measure social distress; and items 12, 14, and 15 measure the negative feelings of the patient carers. Emotional distress in carers is directly proportional to the amount of time spent per week with the patient; more specifically, emotional distress is higher in female carers, supporting the observation that female carers are more emotional in their approach~\citep{fitting1986caregivers}, whereas their male counterparts are more task or goal oriented~\citep{corcoran1992gender}. Social distress is also higher in carers who spend more time with the patients, and carers with high social distress need help and a break from caring for dementia patients. The negative feelings in the carer are associated with the patient's age, i.e., the younger the patient, the more negative feelings occur in the patient carers. RSS has been widely adopted in Norway for clinical purposes and research to measure the carer burden~\citep{braekhus1999social,thommessen2002psychosocial}. RSS has been used in the literature for the validation of the distress scale of the Neuropsychiatric Inventory~\citep{kaufer1998assessing}. In~\citep{greene1982measuring}, RSS has been used to provide a useful basis for discussion with carers of dementia patients. \subsection{Daily Stress Inventory (DSI)} DSI is another measure developed to provide research scientists and clinicians with information about the psychiatric issues of patients after they have gone through stressful events. DSI is specifically designed for the measurement of small stressful events that need to be measured on a daily basis, and it possesses useful and unique qualities for this purpose. DSI is a 58-item questionnaire that allows the participant to indicate the events which occurred in the last 24 hours. After indicating the events that occurred, the subject rates the stressfulness of those events on a Likert-type scale from 1 to 7.
A score of 1 refers to events which occurred but were not stressful, whereas a score of 7 means that a particular event caused the subject to panic. At the end of the DSI inventory, two blank items are provided to let the subject report events which were not covered by the 58 items. However, the scores of the blank items are not counted toward the calculation of stress scores. The minimum score of the DSI inventory is 58, whereas a maximum score of 406 can be obtained. Three different scores are computed for each individual: (i) the number of events reported by the subject to have occurred, (ii) the sum of the scores of all these events, and (iii) the average of the scores of these events. The DSI inventory has been used frequently in research studies and has shown good validity and reliability~\citep{maslach1997evaluating}. DSI was aimed at daily monitoring over a course of seven to ten days to measure the changes in daily faced stressors and to observe the relationship of these stressors to physical and psychological symptoms~\citep{brantley1993daily}. This daily monitoring leads to a better association between these small stressors and the illness of the subject. A study conducted in~\citep{goreczny1988daily} monitored 24 patients with asthma and chronic obstructive pulmonary disease. Daily stress scores were recorded via the DSI inventory and respiratory symptoms were recorded for a period of 21 days. The study showed that on highly stressful days, asthma symptoms in patients worsened. DSI has been correlated with other medical conditions like headache~\citep{mosley1991time,waggoner1986investigation}, Crohn’s disease~\citep{garrett1991relation}, and diabetes~\citep{goetsch1990stress}. \subsection{Perceived Stress Scale (PSS)} PSS is a questionnaire developed to measure the chronic stress of an individual. The questionnaire assesses the extent to which an individual has been stressed in the last thirty days. PSS is a 10-item questionnaire scored on a scale of 0 to 4, where 0 means that a particular situation never occurred and 4 means that a situation occurred very frequently. The scores from all the items of the questionnaire are summed up to get the PSS questionnaire score. The minimum and maximum scores that can be obtained from the PSS questionnaire are 0 and 40, respectively. The final PSS score is obtained by reverse-scoring four items of the questionnaire (items 4, 5, 7, and 8) and using the remaining items as they are. PSS has been used in a wide range of studies for the assessment of chronic stress among individuals. \subsection{Trier Inventory for the Assessment of Chronic Stress (TICS)} The TICS is a standardized questionnaire for assessing nine interrelated factors of chronic psychosocial stress and is a very reliable and effective tool. The nine factors which are addressed by TICS include Work Overload (e.g., "I have too many tasks to perform."), Social Overload (e.g., "I must frequently care for the well-being of others."), Pressure to Perform (e.g., "I have tasks to fulfill that pressure me to prove myself."), Work Discontent (e.g., "Times when none of my tasks seem meaningful to me."), Excessive Demands at Work (e.g., "Although I try, I do not fulfill my duties as I should."), Lack of Social Recognition (e.g., "Although I do my best, my work is not appreciated."), Social Tensions (e.g.,
"I have unnecessary conflicts with others."), Social Isolation (e.g., "Times when I have too little contact with other people."), and Chronic Worrying (e.g., "Times when I worry a lot and cannot stop). TICS is a 57-item questionnaire that is rated on a 5-point scale from 0 to 4 based on whether the participant experienced a particular situation in the last 3 months or not. On the 5-point scale, 0 means a situation never occurred, 1 means a situation very rarely occurs, 2 means a situation sometimes occurs, 3 means a particular situation often occurs, and 4 means a particular situation occurs very frequently. The total score of the TICS questionnaire can range from 0 to 228. In a study conducted in~\citep{sturmbauer2019stress}, a correlation between the Stress and Adversity Inventory (STRAIN) with TICS and PSS was examined. It was found that STRAIN is more correlated to TICS as compared to PSS. A correlation between the TICS score and central serous chorioretinopathy (CSC) named syndrome in young and middle-aged adults was established in~\citep{buehl2012trier}. The study found that people with CSC syndrome have higher TICS score as compared to individuals with no CSC syndrome. Subjective measures of human stress have been widely used in the literature but there exist some limitations and shortcomings of these methods. One of the major shortcomings of these subject measures is that these questionnaires are being responded to by the subject himself and if the subject answers the items of the questionnaire in a biased manner then the score obtained for stress measurement is unreliable and incorrect. Secondly, to answer the questionnaires, the subject has to be literate and able to properly read the items of the questionnaire. Thirdly, the questionnaires for stress measurement are not available in all the languages thus creating a bottleneck and hence cannot be used by individuals whose first language is not the one in which the questionnaire has been developed. Keeping in view these limitations, using only subjective measures is not a reliable indicator of stress, thus objective measures of stress are essential for the development of better stress measurement protocols. \section{Objective Stress Detection} \label{sec:osa} Objective measures of stress include physiological and physical measures. Physiological measures of stress need sensors to be connected to the human body at some specified location e.g., EEG, ECG, and EDA whereas, in the case of physical sensors the measurement can be done at a distance from the subject without the need of any physical contact. Objective measures of stress are free from human intervention and hence cannot be biased like the subjective questionnaire and it is the major benefit of objective measures over the subjective assessment of stress. Moreover, the studies which use objective measures of stress also validate their finding using subjective questionnaires~\citep{healey2005detecting}. The data acquisition protocols in case of objective measures of stress are time-consuming and complicated and hence to record data for a large population sample is difficult. The limited capacity of the existing stress modeling protocol and lack of a large data sample make it necessary to include the conventional subjective stress measurement methods for the validation of objective measures. It is because of these factors, subjective measures are still regarded as an efficient measure of stress~\citep{ulstein2007high,weidner1989hostility}. 
In this section, we discuss a general framework for human stress assessment and review the literature available for human stress measurement using objective methods. The general machine learning framework for human stress detection includes data acquisition and annotation, pre-processing, feature extraction and selection, and classification steps, as shown in \Fig{fig1a}. Each of these steps plays a vital role in accurate human stress detection and is discussed below. \begin{figure*} \begin{center} \includegraphics[width=80mm]{Figure1a.png} \end{center} \caption { \label{fig:fig1a} { General machine learning framework for the objective stress detection.}} \end{figure*} \noindent\textbf{Data Acquisition and Annotation} is one of the most important steps in the human stress detection framework. The quality of the acquired data is of utmost importance for the robust analysis of human stress and to draw reliable conclusions. Before the start of data acquisition, a good experimental design following the standard protocols is needed. In stress measurement studies, there exist two types of experimental protocols, i.e., (i) measuring stress induced by an external stimulus, also called acute or instantaneous stress, and (ii) measuring perceived or chronic stress without using any external stimulus. Before data acquisition, the stress-inducing protocol to be used needs to be defined. Another important factor which needs to be considered for data acquisition is whether the data should be acquired in a laboratory setting or in an out-of-laboratory environment. The number of participants in the experiment is also important: if the number of participants is small, the findings of the study cannot be generalized, and there is also a chance that data acquired from some participants might get corrupted; on the other hand, acquiring data from a large number of participants is a time-consuming and cumbersome process. The physical and physiological sensors for which data are to be recorded should be selected before the start of data acquisition. In addition to the above-mentioned parameters, another important factor that needs to be considered in data acquisition is data annotation. Data annotation is the process of assigning each training example of the data to a particular class depending on some criteria. Commonly used criteria for data annotation in stress measurement studies include the use of subjective questionnaires~\citep{asif2019human,arsalan2019classification} and evaluation by psychologists~\citep{saeed2020eeg}. This variation in the labeling technique also poses challenges for the comparison of the available techniques with each other. \noindent\textbf{Pre-processing} is the second step in the stress detection pipeline and plays an important role in the whole process. Signals acquired by using wearable and non-wearable sensors during the data acquisition phase are affected by different kinds of noise, which include power line interference~\citep{lin2016removal}, eye blinking artifacts~\citep{shoker2005artifact}, muscular artifacts~\citep{chen2014preliminary}, and posture or physical activity~\citep{alamudun2012removal}. Noise removal techniques for pre-processing different kinds of modalities have been developed in the literature. The accelerometer sensor has been used in stress detection studies; the noise which affects accelerometer data is composed of high-frequency components and can be removed by low-pass filtering, as illustrated in the sketch below.
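The following sketch shows a zero-phase Butterworth low-pass filter applied to a synthetic accelerometer channel; the sampling rate and the 5 Hz cutoff are illustrative assumptions rather than values prescribed by the cited studies.
\begin{verbatim}
# A minimal sketch of low-pass filtering an accelerometer signal.
import numpy as np
from scipy.signal import butter, filtfilt

def lowpass(signal, fs, cutoff=5.0, order=4):
    # Normalized cutoff frequency relative to the Nyquist rate.
    b, a = butter(order, cutoff / (fs / 2.0), btype="low")
    return filtfilt(b, a, signal)  # zero-phase (no lag) filtering

fs = 50.0                               # assumed sampling rate (Hz)
t = np.arange(0, 10, 1 / fs)
acc = np.sin(2 * np.pi * t) + 0.3 * np.random.randn(t.size)
acc_clean = lowpass(acc, fs)
\end{verbatim}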
Authors have applied low-pass filtering to remove high-frequency artifacts from the accelerometer signal in stress assessment studies conducted in~\citep{mozos2017stress,gjoreski2017monitoring}. Two of the main steps in pre-processing an ECG signal are to identify the R-peaks and the RR-intervals. Algorithms like Pan and Tompkins' algorithm~\citep{pan1985real} have been developed to identify the R-peaks of the ECG signal. Moreover, algorithms to identify valid RR-intervals have also been proposed~\citep{hovsepian2015cstress}. Pre-processing of PPG signals has also been explored in the literature. PPG signals are affected by low-frequency noise, which can be mitigated by the use of high-pass filtering~\citep{elgendi2012analysis}. Meaningful information in an EDA signal is normally contained in the low-frequency components, and the noise is the high-frequency component of the signal, which can be removed by passing the EDA signal through a low-pass filter. Another important pre-processing task performed on EDA signals is their decomposition into a slowly varying baseline conductivity, known as the skin conductance level (SCL), and a faster-varying component called the skin conductance response (SCR). Authors in~\citep{choi2011development} have proposed a technique to separate the SCL and SCR components of the EDA signal. Techniques for pre-processing EMG signals have also been proposed in the literature. A two-step noise removal technique for EMG signals is proposed in~\citep{wijsman2010trapezius}. In the first step, band-pass filtering is applied to the EMG signal to limit the signal to 20 to 450 Hz. In the second step, power line interference is negated by applying notch filters at frequencies of 50, 100, 150, 200, 250, and 350 Hz. Another important contamination source for EMG signals is the ECG signal, i.e., cardiac artifacts. Different algorithms to remove cardiac noise from EMG signals have been compared in a study conducted in~\citep{willigenburg2012removing}. \noindent\textbf{Feature Extraction and Selection} are critical for an efficient machine learning model. Feature extraction corresponds to the process of extracting meaningful features from the acquired data. Meaningful features are extracted features that are descriptive, i.e., features that have discriminating values for instances from different classes. The extracted features constitute a feature vector which is fed as input to the classification stage. The extracted features can be categorized in different ways, e.g., time-, frequency-, or wavelet-domain features, linear vs non-linear features, and unimodal vs multimodal features. The computational complexity of the extracted features can range from simple statistical features, e.g., mean, median, minimum, and maximum, to complex features based on certain modalities. A different set of features is extracted from each sensor for human stress recognition. Some of the commonly used features for accelerometer sensors in stress recognition studies include mean, standard deviation, variance, maximum, absolute value, signal magnitude area, root mean squared, energy, differential entropy, discrete Fourier transform, peak magnitude frequency, peak power, and zero crossing~\citep{garcia2015automatic,can2019continuous,sano2013stress}; a sketch of computing a few such window-level features is given below.
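The sketch below computes a handful of the statistical features just listed from a single accelerometer window; the feature subset and the window length are illustrative choices.
\begin{verbatim}
# A minimal sketch of window-level statistical feature extraction.
import numpy as np

def window_features(x):
    # A zero crossing is a sign change between adjacent samples.
    sign_change = np.signbit(x[:-1]) != np.signbit(x[1:])
    return {
        "mean": np.mean(x),
        "std": np.std(x),
        "var": np.var(x),
        "max_abs": np.max(np.abs(x)),
        "rms": np.sqrt(np.mean(x ** 2)),
        "energy": np.sum(x ** 2),
        "zero_crossings": int(np.sum(sign_change)),
    }

fs = 50                            # assumed sampling rate (Hz)
window = np.random.randn(5 * fs)   # one 5-second segment
features = window_features(window)
\end{verbatim}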
Some of the features extracted from ECG and PPG signals in stress measurement studies include the mean and standard deviation of the R-R interval, the root mean square difference of consecutive R-R intervals, heart rate, heart rate variability, mean R-peak amplitude, mean standard deviation, skewness, kurtosis, percentiles, geometric and harmonic mean, low-frequency power, high-frequency power, power ratio, crest time, and instantaneous pulse ratio~\citep{bong2012analysis,ahn2019novel,mohino2015assessment,cho2019instant,charlton2018assessing}. Common features extracted from the EEG signals in human stress measurement studies include divisional asymmetry, rational asymmetry, mean power, power spectral density, alpha asymmetry index, normalized band power, relative power, coherence, and amplitude asymmetry~\citep{arsalan2019classification,ahn2019novel,asif2019human}. In EDA-based human stress measurement studies, statistical features such as the mean, standard deviation, mean of the absolute values, root mean square, proportion of negative samples, slope of the EDA level, mean EDA peak rate and height, minimum, and maximum have been commonly used~\citep{giakoumis2012using,setz2009discriminating}. Feature selection is defined as the process of selecting, from among the extracted features, the subset of features that have the highest discriminative power and yield the highest classification accuracy. Different feature selection algorithms have been used in stress classification studies, which include the genetic algorithm~\citep{shon2018emotional}, t-test~\citep{saeed2020eeg}, minimum redundancy maximum relevance (mRMR)~\citep{subhani2017mrmr}, principal component analysis (PCA)~\citep{deng2012evaluating}, particle swarm optimization (PSO)~\citep{yerigeri2019meta}, wrapper-based feature selection~\citep{hasan2019hybrid}, Bhattacharya distance~\citep{subhani2017machine}, and independent component analysis (ICA)~\citep{palacios2019ica}. \noindent\textbf{Classification} is the last step in the human stress detection framework and is an important part of the whole process. The classification process can be performed either by using statistical measures (t-test or ANOVA) or by using machine learning techniques. For both types of techniques, the selected or extracted set of features is fed as input to the classification stage. The t-test is a type of inferential statistical test which aims at finding whether there is a significant difference between the means of two groups. The t-test is based on the assumption that the dependent variable of the data follows a normal distribution and that we can identify the probability of a particular instance. The t-test produces a p-value, and values less than 0.05 are commonly considered significant. A p-value of 0.01 means that the likelihood of obtaining the observed difference between the two groups by chance is 1 in 100. The t-test is applied in cases where we need to find the difference between two groups, whereas the ANOVA test is applied to find differences between more than two groups. The second type of method used for the classification of human stress in the literature comprises machine learning techniques. A wide variety of algorithms, depending upon the situation, have been employed in human stress recognition studies; a minimal sketch of a typical classification pipeline is given below.
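The sketch below illustrates such a pipeline with scikit-learn; the synthetic feature matrix stands in for features extracted from physiological signals, and the SVM settings are illustrative rather than tuned values.
\begin{verbatim}
# A minimal sketch of a stress classification pipeline.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 12))    # 100 windows x 12 features
y = rng.integers(0, 2, size=100)  # 0 = relaxed, 1 = stressed

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
scores = cross_val_score(clf, X, y, cv=5)  # 5-fold cross-validation
print(scores.mean())
\end{verbatim}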
Multilayer perceptron (MLP) is a type of feed-forward neural network which is composed of at least three layers, i.e., an input layer, a hidden layer, and an output layer. MLP has been used for binary as well as multi-class stress classification tasks in a wide range of human stress recognition studies~\citep{arsalan2019classification,arsalan2019classification_EMBC}. Another commonly used classification technique for human stress recognition is the Naive Bayes algorithm, which is based on the Bayes probability theorem and the conditional probability rule. Some of the stress recognition studies which have used the Naive Bayes algorithm include~\citep{ahuja2019mental,saeed2017quantification}. The support vector machine (SVM) has also been used in a sizable number of human stress recognition studies. SVM is a supervised machine learning classifier which works by defining a separating hyperplane with the help of support vectors. Some of the human stress recognition studies involving the SVM classifier include~\citep{saeed2018selection,saeed2020eeg,vanitha2013hybrid,attallah2020effective}. k-nearest neighbors (kNN) is a supervised, non-linear machine learning algorithm used for classification tasks. kNN classifies a new data point by computing its distance to the training points and assigning the point to the class that is most common among its k nearest neighbors. The value of k is typically an odd number, e.g., 1, 3, 5, etc.; larger values of k generally produce more stable results. Some of the stress classification studies which have used kNN as a classifier include~\citep{rahman2015mental,karthikeyan2012study,shon2018emotional}. Some of the other classifiers used in stress recognition studies include logistic regression~\citep{asif2019human,vasavi2018regression}, deep belief networks~\citep{song2017development}, deep neural networks~\citep{sardeshpande2019psychological,masood2019modeling}, and random forest~\citep{uddin2019synthesizing}. The objective measures of stress can be categorized into methods based on wearable sensors and non-wearable sensors, as shown in \Fig{fig2a} and \Fig{fig3a}, respectively. The literature corresponding to each of these categories is reviewed in the following subsections. \begin{figure*} \begin{center} \begin{tabular}{c} \includegraphics[width=\linewidth]{Figure2a.png} \end{tabular} \end{center} \caption { \label{fig:fig2a} { Categorization of objective measures of stress using wearable sensors.}} \end{figure*} \begin{figure*} \begin{center} \begin{tabular}{c} \includegraphics[width=\linewidth]{Figure3a.png} \end{tabular} \end{center} \caption { \label{fig:fig3a} { Categorization of objective measures of stress using non-wearable sensors.}} \end{figure*} \subsection{Wearable Sensors based Human Stress Detection} Wearable sensors need physical devices to be connected to the body of an individual to measure the stress response of the body. The autonomic nervous system (ANS) of a human being has two parts, i.e., the sympathetic nervous system and the parasympathetic nervous system. When a person is under stress, changes in the human ANS occur: sympathetic nervous system (SNS) activity increases, whereas activity in the parasympathetic nervous system (PNS) decreases in stressful situations. Wearable sensor-based stress detection is important because it can overcome the limitation of wrong self-reporting by an individual~\citep{garcia1997science,northrup1997problem}.
Wearable sensors used for human stress monitoring include electroencephalography (EEG), electromyography (EMG), galvanic skin response (GSR) or electrodermal activity (EDA), electrocardiography (ECG), heart rate (HR), skin temperature (ST), respiratory rate (RR), heart rate variability (HRV), blood volume pressure (BVP), photoplethysmography (PPG), and salivary cortisol (SC). \subsubsection{Electroencephalography based Stress Detection} Brain activity has a strong relationship with stress~\citep{dharmawan2007analysis}. For the analysis of brain activity, functional magnetic resonance imaging (fMRI), positron emission tomography (PET), and EEG are commonly used. Out of all these methods, EEG is the most commonly used due to its low cost and non-invasive nature. The field of EEG originated back in 1924, when the first EEG recording was performed~\citep{berger1929elektroenkephalogramm}. EEG is a physiological measure used by the research community as well as physicians to record brain activity for the analysis and diagnosis of brain diseases and disorders~\citep{chandra2017role}. EEG signal acquisition can be performed using commercially available consumer-grade as well as medical-grade EEG devices. Medical-grade systems are quite expensive as well as sophisticated and are commonly used for patient monitoring in hospitals, whereas consumer-grade EEG headsets are less expensive but not as accurate as medical-grade devices. Both types of systems can use dry as well as wet electrodes. Each type of electrode has its pros and cons, and thus many factors contribute to the choice of device selected for data acquisition. Consumer-grade EEG headsets have a number of electrodes ranging from $1$ to $16$~\citep{sawangjai2019consumer}, whereas medical-grade EEG caps can have a number of electrodes ranging from $8$ to $256$ or even more~\citep{troy2012many}. EEG electrodes are small metal plates made of steel with a silver coating to record brain activity. These electrodes are placed on the human scalp to record brain activity. The international 10-20 electrode positioning system specifies the electrode positions in the EEG acquisition system~\citep{trans201210}. Each electrode has a standardized name, consisting of a letter and a number, and a standardized location on the human head. The letter in the name shows the area of the brain where the electrode is placed, e.g., F for the frontal lobe and T for the temporal lobe. The right side of the head has even-numbered electrodes, whereas the left side has odd-numbered electrodes~\citep{oostenveld2001five}. EEG electrodes are connected to the data acquisition system in a wired or wireless manner. When there is a change in brain activity, the voltage level at different electrodes varies, which corresponds to different diseases and disorders. Amplitude values of EEG signals are approximately $100~\mu V$. The EEG signal is composed of five different frequency bands, and the behavior of each EEG frequency band differs in different situations. The frequency range of the EEG signal is from 0.5 Hz to 50 Hz. In descending order of frequency, the brain waves are gamma, beta, alpha, theta, and delta. \begin{enumerate} \item \textbf{Gamma Band:} The brain activity that lies in the range of 30 - 45 Hz is usually regarded as a gamma wave or fast beta wave. The occurrence of this wave is rare and associated with brain diseases.
The gamma wave is considered a good indicator of event-related synchronization (ERS) of the brain. Tongue movement, right and left index finger movement, and right toe movement have been demonstrated to be related to gamma waves. The association between the gamma band and human stress has been established in the literature~\citep{minguillon2016stress}.\\ \item \textbf{Beta Band:} The electrical activity of the brain that lies in the range of 14 - 26 Hz is considered a beta wave. This rhythm is found in waking normal individuals and is associated with thinking, attention, focus, and a panic state. Beta activity mainly originates in the frontal and central regions of the brain. It occurs around the tumor regions of the brain. Among different neural oscillations, a higher level of beta waves acts as a marker denoting that a person is not in a calm state~\citep{sanei2013eeg}. The presence of stress has been shown to increase the spectral power in the EEG beta band~\citep{saeed2015psychological,hamid2015brainwaves}.\\ \item \textbf{Alpha Band:} Alpha waves (8 - 13 Hz) can be detected in all parts of the posterior lobes of the brain and commonly appear like a sine wave or a round-shaped signal. Relaxed alertness without attention is considered to be associated with alpha waves. The alpha wave is the most observable brain activity due to its prominence. The alpha wave is claimed to be a waiting pattern of the visual regions of the brain, as it is produced in the closed-eye state. Activities like opening the eyes, listening to unfamiliar sounds, anxiety, or mental attention can reduce or even eliminate the alpha waves. It has an amplitude that is normally less than 50 $\mu$V and is found over occipital regions. Its origin and significance are not known physiologically and require more research and experimentation. Stress has been shown to be associated with a fall in alpha waves~\citep{hoffmann2005brain}.\\ \item \textbf{Theta Band:} Theta waves (4 - 7.5 Hz) originate due to drowsiness and have been associated with creative inspiration and deep meditation. The arousal of an individual is determined by the theta wave. Pathological problems show larger groups of abnormal theta activity in waking adults. Variations in theta activity are also used in human stress recognition studies~\citep{arsalan2019classification}.\\ \item \textbf{Delta Band:} Delta waves (0.5 - 4 Hz) are the slowest brain waves and are considered to reflect deep sleep. Newborn babies and very young children have strong delta wave activities. As the age of the individual increases, the amplitude and occurrence of delta waves are reduced. Delta waves are associated with a deep level of relaxation. These waves can be confused with muscular artifacts produced by the neck and jaw; therefore, these artifacts need to be removed by applying simple signal processing methods to the EEG signals. \end{enumerate} Asymmetry analysis of the EEG signal is an established feature for the classification of different psychological states~\citep{gatzke2014role,giannakakis2015detection}. The asymmetry index of the EEG signal is the difference between the natural logarithm of the power of the right hemisphere and that of the left hemisphere of the brain. Commonly used locations for the estimation of alpha asymmetry in stress-related studies are F3-F4~\citep{seo2008relation,lewis2007effect}, because these locations are directly affected by stressful events~\citep{qin2009acute}; a sketch of computing band powers and this asymmetry index is given below.
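The sketch below estimates the band powers defined in the enumeration above with Welch's method and derives the alpha asymmetry index as the difference of log powers (right minus left); the channel stand-ins, sampling rate, and sign convention are assumptions to be adapted to the actual montage.
\begin{verbatim}
# A minimal sketch of EEG band power and alpha asymmetry.
import numpy as np
from scipy.signal import welch

BANDS = {"delta": (0.5, 4), "theta": (4, 7.5), "alpha": (8, 13),
         "beta": (14, 26), "gamma": (30, 45)}

def band_power(x, fs, lo, hi):
    f, pxx = welch(x, fs=fs, nperseg=2 * int(fs))
    mask = (f >= lo) & (f <= hi)
    return np.trapz(pxx[mask], f[mask])  # integrate PSD over band

fs = 128.0                           # assumed sampling rate (Hz)
f3 = np.random.randn(int(60 * fs))   # stand-in for channel F3 (left)
f4 = np.random.randn(int(60 * fs))   # stand-in for channel F4 (right)

alpha_f3 = band_power(f3, fs, *BANDS["alpha"])
alpha_f4 = band_power(f4, fs, *BANDS["alpha"])
asymmetry = np.log(alpha_f4) - np.log(alpha_f3)  # right minus left
\end{verbatim}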
However, apart from the frontal part of the brain, stress-related studies involving the lateral region, i.e., F7-F8~\citep{lopez2012frontal}, the anterior region, i.e., Fp1-Fp2~\citep{peng2013method}, and the posterior region, i.e., T5-T6~\citep{minguillon2016stress}, of the brain have been reported in the literature. A large number of studies agree that alpha band activity in the right frontal part of the brain is dominant as compared to the left half of the brain under stressful conditions~\citep{acharya2012application}. This phenomenon is present in a variety of stressful situations, such as students feeling stressed during examinations~\citep{giannakakis2015detection}, when presented with a sad/happy/horror movie stimulus~\citep{lopez2012frontal,tomarken1990resting}, and even in the case of chronic stress~\citep{peng2013method}. The human stress response has been explored using the power spectrum and the relative power index in a significant number of studies~\citep{hosseini2010higher,khosrowabadi2011brain,sharma2014modeling,ko2009emotion,giannakaki2017emotional}. Alpha band activity is dominant in the relaxation phase, when the cognitive demands are minimal, whereas in situations involving high strain or alertness, beta-band activity has been found to be significant~\citep{hayashi2009beta}. Even though the findings of stress-related studies are sometimes contradictory, stressful conditions have been found to decrease alpha band activity~\citep{alonso2015stress,demerdzieva2011eeg,al2015mental,tran2007detecting,seo2010stress} and increase beta band activity~\citep{katsis2011integrated}. Stress has been found to be correlated with the beta wave in the temporal part of the brain~\citep{choi2015measurement}. Coherence among different brain regions is another important factor that varies with stress: beta and theta band coherence is increased, whereas alpha band coherence is decreased, in the anterior regions of the brain hemispheres~\citep{alonso2015stress}. When a person is in a negative mood or depression, the alpha and beta band activity is dominant~\citep{huiku2007assessment}. Alpha band activity of the prefrontal electrodes is reduced when a person is facing a stressful event~\citep{marshall2015effects}. The temporal lobe electrodes show dominant alpha-band activity when a stressful event occurs~\citep{choi2015measurement}. A method to classify mental stress from resting-state EEG is proposed in~\citep{sharma2012objective}. Different levels of mental stress are measured in participants using a single-channel EEG headset with a mental workload and public speaking task as stimuli in a study conducted in~\citep{secerbegovic2017mental}. The alpha and beta bands of the EEG signal were found to be statistically significant, and a classification accuracy of 83.33\% for binary stress classification was reported. An EEG-based multi-level human stress classification framework using a 128-channel EEG cap and the Montreal Imaging Stress Task (MIST) as a stimulus was presented in~\citep{subhani2017machine}. The study reported average classification accuracies of 94.6\% and 83.4\% for binary and multi-level stress classification, respectively. Another study for acute stress classification, using the Stroop color-word test (SWT) as a stressor and EEG as a modality, is presented in~\citep{hou2015eeg}. Two, three, and four levels of stress were measured in this study with average classification accuracies of 85.71\%, 75.22\%, and 67.06\%, respectively.
Another human stress classification study for two- and three-class problems, with a mental arithmetic task as a stimulus and the Emotiv EPOC as the EEG data acquisition device, is presented in~\citep{jun2016eeg}. Classification accuracies of 96\% and 75\% are achieved for two- and three-level stress classes, respectively. A study in~\citep{duru2013assessment} focuses on stress level detection of surgeons during the most stressful phases of an operation via EEG. In~\citep{calibo2013cognitive}, the authors used the Stroop color-word test to elicit stress in the subjects. After applying pre-processing techniques to the EEG data, features were extracted for further analysis and classified using k-nearest neighbor and logistic regression classifiers with an accuracy of 73.96\%. In~\citep{pomer2014methodology}, the authors proposed a methodology for analyzing the stress of military firefighters based on asymmetry levels of alpha waves. In~\citep{vijayaragavan2015eeg}, the authors developed an Android application that reduces stress using music and yoga. EEG signals were recorded using the Neurosky headset and were pre-processed using a high-pass filter and proprietary algorithms of Neurosky. A survey of 100 users was conducted, in which 67\% of the members observed better relaxation waveforms in the case of yoga, 29\% of the readings showed better results in the case of music, and 4\% reported no results. Hand movement, heart rate variation, and EEG were used to analyze stress and detect office syndrome via a smart watch in~\citep{reanaree2016stress}. In another study, the mind ball game was used to establish a correlation between human stress and the winning percentage of the game; the study concludes that the person with a lower stress level wins the game more often~\citep{lin2006quantifying}. An EEG and Hilbert-Huang Transform-based stress measurement method with a support vector machine (SVM) as a classifier has been proposed in~\citep{vanitha2016real}. Another stress measurement scheme using EEG signals is proposed in~\citep{kalas2016stress}. A method of measuring mental stress is proposed based on MIST as a stimulus, power spectral density and energy as features, and SVM as a classifier~\citep{al2015mental}. Another EEG-based mental stress classification scheme using a mental arithmetic task as a stimulus to classify stress into three levels is proposed in~\citep{pandiyan2013mental}. A stress classification framework in response to Urdu and English music tracks is presented in~\citep{asif2019human}; classification accuracies of 98.76\% and 95.06\% for two and three classes are achieved using a logistic regression classifier. A driver stress measurement framework using EEG signals has been presented in~\citep{halim2020identification}. In the proposed scheme, data from 86 automobile drivers were used. EEG data were recorded to log the ongoing brain activity during driving and to find a correlation of the brain activity with the emotional response of the driver. Three different classification algorithms, i.e., SVM, neural network, and random forest (RF), were used to classify the driver's emotional state based on the labeling obtained by a self-reporting questionnaire. The SVM classifier was found to be the best among them, with an achieved classification accuracy of 97.95\% for the stressed vs relaxed state. All of the EEG stress detection methods discussed above are for acute stress. EEG signals have also been used to assess and classify perceived stress.
An EEG-based study to identify the appropriate phase of EEG recording for the classification of perceived stress is presented in~\citep{arsalan2019classification}. EEG data of the participants were recorded for a duration of three minutes in an open-eye condition before and after performing a public speaking activity. The study concluded that the pre-activity phase was better for perceived stress classification. Two-level and three-level stress classification was performed, and classification accuracies of 92.85\% and 64.28\% were achieved, respectively. A perceived stress classification study using closed-eye resting-state EEG data is presented in~\citep{saeed2015psychological}. The authors reported that there exists a relationship between the PSS questionnaire score and the EEG data of the subject. The beta band of the EEG signal was found to be directly proportional to the level of perceived stress, i.e., individuals having high perceived stress have increased beta-band activity and vice versa. Another single-channel EEG headset-based stress quantification study using resting-state, closed-eye EEG data is presented in~\citep{saeed2017quantification}. Multiple linear regression analysis was performed, and the beta waves of the EEG signals were found to be the optimum frequency band for the prediction of the PSS questionnaire score of the subject with a confidence interval of 94\%. The correlation-based feature selection (CFS) method has been used in an EEG-based perceived human stress classification scheme proposed in~\citep{saeed2018selection}. The CFS method showed that the beta and low gamma frequency bands have the highest correlation with the PSS score of an individual. A significant difference in the energy spectral density of the alpha and beta bands of the EEG signals in the right and left hemispheres of the brain for stressed and non-stressed individuals is observed in a study conducted in~\citep{hamid2015brainwaves}. Alpha asymmetry has been found to be a useful marker of the relationship between human stress and EEG signals in a study conducted in~\citep{sulaiman2011intelligent}. The left hemisphere of the brain shows strong activity among individuals who have low chronic stress, whereas the right hemisphere shows strong activation in subjects having moderate and high chronic stress. A study conducted in~\citep{hamid2010evaluation} found that the PSS questionnaire score and the ratio of the alpha and beta bands of the EEG signal are negatively correlated; individuals with high PSS scores have a negative ratio, whereas individuals with a low PSS score have been found to have a positive ratio. The correlation of the EEG temporal characteristics with the recorded EEG signals and the PSS questionnaire score is presented in~\citep{luijcks2015influence}. The study concluded that the theta and delta waves of the EEG signal of participants having a high PSS questionnaire score show increased activation in the post-stimulus phase when compared to the pre-stimulus phase. Moreover, theta band activity in the frontal part of the brain was higher in the post-stimulus phase. Another perceived stress classification study using resting-state EEG data with labels based on the PSS questionnaire as well as psychologist interviews is presented in~\citep{saeed2020eeg}. The study concluded that by using the psychologist interview labeling, a classification accuracy of 85.20\% was achieved.
\Tab{tab1} presents a summary of human stress classification schemes using EEG signals. \begin{table} \caption{Summary of Human Stress Detection Studies using EEG signals.} \label{tab:tab1} \scalebox{0.9}{ \begin{tabular}{cccccccc} \hline\noalign{\smallskip} \thead{Method} & \thead{Type of\\Stress} & \thead{Number of\\Subjects (M/F)} & Age & \thead{Stimulus} & \thead{Features\\Domain} & Classifier & \thead{Accuracy (Classes)}\\ \noalign{\smallskip}\hline\noalign{\smallskip} ~\citep{halim2020identification} & Acute & 86 & -- & Driving & \thead{Time and \\Frequency} & \thead{SVM, RF, NN} & 97.95\% (2) \\ ~\citep{subhani2017machine} & Acute & 22 & 19-25 & \thead{Montreal Imaging \\Stress Task (MIST)} & Frequency & \thead{LR, \\SVM, NB} & \thead{94.60\% (2)\\ 83.40\% (multilevel)} \\ ~\citep{dharmawan2007analysis} & Acute & 20 (18/2) & 20-35 & Game & Frequency & \thead{DT} & 79.08\% \\ ~\citep{secerbegovic2017mental} & Acute & 9 (6/3) & 19.3-22.7 & \thead{Mental Arithmetic \\Task (MAT), \\Computer Games} & \thead{Time and \\Frequency} & SVM & 86.66\% (3) \\ ~\citep{hou2015eeg} & Acute & 9 & 21-28 & \thead{Stroop colour\\word test} & Frequency & SVM & \thead{85.71\% (2)\\75.22\% (3)\\ 67.06\% (4)} \\ ~\citep{jun2016eeg} & Acute & 10 (9/1) & 20-35 & \thead{Mental Arithmetic \\Task (MAT),\\Stroop colour\\word test} & Frequency & SVM & \thead{75.00\% (3)\\96.00\% (2)\\ 88.00\% (2)} \\ ~\citep{calibo2013cognitive} & Acute & 18 & -- & \thead{Stroop colour\\word test} & Frequency & LR, kNN & 73.96\% (2) \\ ~\citep{hosseini2010higher} & Acute & 15 (15/0) & 20-24 & \thead{International \\Affective Picture \\System (IAPS)} & Frequency & SVM & 82.00\% (2) \\ ~\citep{giannakaki2017emotional} & Acute & 5 (5/0) & 22-38 & \thead{International \\Affective Picture \\System (IAPS)} & Frequency & RF & 75.12\% (2) \\ ~\citep{al2015mental} & Acute & 12 (12/0) & 20-24 & \thead{Montreal Imaging \\Stress Task (MIST)} & Wavelet & SVM & \thead{94.00\% (L1),\\ 85.00\% (L2),\\ 80.00\% (L3)} \\ ~\citep{vanitha2016real} & Acute & 6 & -- & \thead{Mathematical\\questions} & Frequency & \thead{hierarchical\\SVM} & 89.07\% (2) \\ ~\citep{asif2019human} & Acute & 27 (13/14) & 20-35 & Music Tracks & Frequency & SMO, LR & \thead{98.76\% (2) \\95.06\% (3)} \\ ~\citep{khosrowabadi2011brain} & Chronic & 26 (20/6) & 18-30 & \thead{University \\Exam} & Frequency & kNN, SVM & 90.00\% (2) \\ ~\citep{saeed2015psychological} & Chronic & 28 (18/10) & 22-33 & Baseline & Frequency & SVM & 71.42\% (2) \\ ~\citep{saeed2017quantification} & Chronic & 28 (18/10) & 22-33 & Baseline & Frequency & NB & 71.42\% (2) \\ ~\citep{saeed2018selection} & Chronic & 28 (18/10) & 22-33 & Baseline & Frequency & SVM & 78.57\% (2) \\ ~\citep{saeed2020eeg} & Chronic & 33 (20/13) & 18-40 & Baseline & Frequency & SVM & 85.20\% (2) \\ ~\citep{arsalan2019classification} & Chronic & 28 (13/15) & 18-40 & Baseline & Frequency & MLP & \thead{92.85\% (2) \\ 64.28\% (3)} \\ \noalign{\smallskip}\hline \end{tabular} } \begin{tablenotes} \item[*] LR: Logistic Regression, SVM: Support Vector Machine, kNN: k-Nearest Neighbors, NB: Naive Bayes, SMO: Sequential Minimal Optimization, RF: Random Forest, MLP: Multilayer Perceptron, DT: Decision Tree, NN: Neural Networks \end{tablenotes} \end{table} \subsubsection{Electromyography based Stress Detection} EMG is a biomedical signal that captures the electric currents generated in the muscles of the human body during their contraction, representing neuromuscular activity. The EMG signal is recorded via a device called an electromyograph.
EMG signals are recorded by placing sensors near the muscle whose movement needs to be measured. The amplitude of the signal lies in the range of 1-10 mV, and the frequency range of the EMG signal is 0-500 Hz, with the dominant frequencies between 50-150 Hz~\citep{de2002surface}. EMG is a complicated signal: it is controlled by the human nervous system and strongly depends on the physiological and anatomical characteristics of the skeletal muscles. EMG signals become noisy while travelling through the different tissues beneath the skin. Moreover, the EMG data acquisition device picks up signals from various motor units, so that the movement of other muscles overlaps with that of the muscle of interest. Recently, the measurement of EMG signals using sophisticated equipment has gained a lot of interest in the field of biomedical engineering~\citep{farfan2010evaluation}. EMG has also been a focus of biomedical experts because of its clinical and diagnostic applications. Robotic prostheses and patient rehabilitation have been identified as key application areas for EMG signal recording. The shapes and firing rates of Motor Unit Action Potentials (MUAPs) are useful for the treatment of a variety of neuromuscular disorders. Advances in the available signal processing techniques have made the design and development of state-of-the-art EMG detection and diagnosis techniques a practical possibility. A wide range of mathematical and artificial intelligence (AI) based techniques have gained attention~\citep{reaz2006techniques}. Mathematical models used for EMG signal analysis include the wavelet transform, the Wigner-Ville Distribution (WVD), the Fourier transform, and higher-order statistics. AI-based techniques include artificial neural networks, dynamic recurrent neural networks, and fuzzy logic. A recorded EMG signal must meet two important requirements: (i) a high signal-to-noise ratio (i.e., the ratio of the energy of the EMG signal to the energy of the noise) and (ii) minimal distortion (i.e., no alteration in the contribution of any frequency component of the signal). A typical EMG measurement is done in two phases, a baseline recording and an EMG recording in response to some stimulus, and the response is then expressed as the ratio of the stimulus response to the baseline. The baseline recording is necessary because this level differs for every individual depending on a variety of factors~\citep{weyers2006electromyographic}. Facial EMG has been extensively used in the literature to record facial expressions in response to some kind of stimulus; the technique was reported by Ekman and Friesen in~\citep{ekman1978technique}. The relationship between human stress and the EMG signal has been discussed in a wide range of studies in the literature. A study to investigate the relationship between changes in human stress level and muscular tension via the EMG signal is presented in~\citep{karthikeyan2012emg}. The Stroop color-word test was used as a stimulus, and EMG data were acquired from the left trapezius muscle of the participants. Pre-processing of the acquired EMG data was performed using a wavelet de-noising technique, and time-domain features were extracted from the data. A kNN classifier was used, and a classification accuracy of 90.70\% was achieved. A study to validate the stress-EMG paradigm using an unpredictable and uncontrollable stimulus is presented in~\citep{luijcks2014experimentally}.
The stimulus given to the participants was an electro-shocker delivering electric shocks of 10 milliseconds duration. The experiment comprised a 3-minute baseline recording, a 3-minute pre-stimulus recording, and a 2-minute post-stimulus recording. EMG activity of the trapezius muscles was significantly higher in the post-stimulus phase than in the other two phases of the experiment. The study concluded that the presented stimulus is a reliable and valid test to identify the difference between stressed and non-stressed individuals. The activity of muscles such as the trapezius is associated with stress~\citep{lundberg1994psychophysiological,wijsman2010trapezius,larsson1995effects}. In~\citep{lundberg1994psychophysiological}, the authors designed an experimental study to investigate the effects of mental stress and physical workload, separately and combined, on perceived human stress, physiological signals, and the muscular tension faced by an individual, measured using the EMG signal. The stressors given to the subjects include a mental arithmetic task, the Stroop color-word test (SWT), a cold pressor test, standardized test contractions (TC), and a combination of the SWT with TC. The results indicate that, compared to the baseline recording, the stressors induced an increase in blood pressure, heart rate, salivary cortisol, urinary catecholamines, and self-reported questionnaire scores. The mental arithmetic task caused a significant increase in EMG activity, and the SWT used alongside TC produced more pronounced changes in the EMG signal than the SWT alone. The study concluded that muscular tension increases under mental stress alone as well as under mental stress combined with physical workload. Human stress measurement using EMG signals recorded from the upper trapezius muscle is presented in a study conducted in~\citep{wijsman2010trapezius}. The authors designed two new stress measurement tests for the experiment. Three different stressful situations, a memory task, a logical puzzle, and a calculation task, were presented to the subjects, and EMG signals of the upper trapezius muscle were recorded. The study revealed that the EMG activity of the upper trapezius muscle was higher when facing a stressor than in the rest condition, making the EMG signal a good indicator of mental stress. Another study to correlate mental stress, blood flow, and the EMG signals recorded from the upper trapezius muscles using the Stroop color-word test is presented in~\citep{larsson1995effects}. The study concluded that there was a decrease in muscle blood flow and an increase in heart rate during the stressor phase. The EMG activity of the trapezius muscle increases in response to stress during the cold pressor and Stroop color-word tests~\citep{krantz2004consistency}. Moreover, an increase in blood pressure, heart rate, and urinary epinephrine and norepinephrine was observed when facing the stressor, but no correlation could be found with the salivary cortisol measure. Another important finding of the study was that men show higher blood pressure and a larger increase in epinephrine than women, whereas women show a higher heart rate than men. A positive correlation between negative stress ratings and the EMG signal during work has been found in a study conducted in~\citep{rissen2000surface}.
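Most of these studies reduce the de-noised EMG signal to a handful of time-domain descriptors before classification. The following Python sketch, an illustration rather than the exact feature set of any cited study, computes four descriptors commonly used for EMG analysis windows:
\begin{verbatim}
import numpy as np

def emg_time_features(x):
    """Common time-domain EMG descriptors for one analysis window."""
    mav = np.mean(np.abs(x))                  # mean absolute value
    rms = np.sqrt(np.mean(x ** 2))            # root mean square amplitude
    wl  = np.sum(np.abs(np.diff(x)))          # waveform length
    zc  = np.sum(np.diff(np.sign(x)) != 0)    # zero-crossing count
    return np.array([mav, rms, wl, zc])
\end{verbatim}
Feature vectors of this kind, computed per window, are what classifiers such as kNN operate on in the studies above.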
A study reported that the EMG of the trapezius muscle increases under both low and high mental workload during computer data entry work~\citep{schleifer2008mental}. Moreover, the decrease in the EMG gaps of the left as well as the right trapezius muscle was greater during high mental workload than during low mental workload. Another study, measuring the influence of EMG-based methods and human stress-based methods on the estimated shoulder muscle forces, is presented in~\citep{engelhardt2015comparison}. Another study to analyze the stress in the lower back of a subject at work, for different posture positions, using the EMG signal is presented in~\citep{tyagi2017stress}. The study finds application in the area of chair design for a comfortable sitting posture at work. \Tab{tab2} presents a summary of human stress classification schemes using EMG signals.
\begin{table}
\caption{Summary of Human Stress Detection Studies using EMG signals.}
\label{tab:tab2}
\scalebox{0.9}{
\begin{tabular}{cccccccc}
\hline\noalign{\smallskip}
\thead{Method} & \thead{Type of\\Stress} & \thead{Number of\\Subjects (M/F)} & Age & \thead{Stimulus} & \thead{Features\\Domain} & Classifier & \thead{Accuracy (Classes)}\\
\noalign{\smallskip}\hline\noalign{\smallskip}
~\citep{karthikeyan2012emg} & Acute & 10 (0/10) & -- & \thead{Stroop colour \\word test} & Wavelet & kNN & 90.70\% (4) \\
\noalign{\smallskip}\hline
\end{tabular}
}
\begin{tablenotes}
\item[*] kNN: k-Nearest Neighbors
\end{tablenotes}
\end{table}
\subsubsection{GSR based Stress Detection}
The skin response of an individual is affected whenever we confront an emotional stimulus, such as listening to audio, watching a video, or an emotional real-life event. It is pertinent to mention that whatever the reason for the emotional arousal, whether happiness and excitement or fear, anger, depression, and stress, the skin response of the person changes~\citep{farnsworthgsr}. The response of the human skin is not under human conscious control~\citep{udovivcic2017wearable}; it depends on changes in the sweating pattern of a subject and thus reflects the behavior of the sympathetic nervous system~\citep{wu2010analysis}. Another study supports the fact that signals are generated from the sympathetic nervous system when a change in the skin conductance of a person occurs~\citep{lidberg1981sympathetic}. Any emotional change triggers a sweat reaction, which can be seen most clearly in the fingers and palm. The amount of salt on the human skin varies as a result of the sweat reaction, causing a change in the electrical conductance of the skin~\citep{ayata2017emotion}. As the sweat glands of a person become more active, an imbalance of positive and negative ions arises, affecting the flow of current through the skin~\citep{critchley2002electrodermal}. GSR is measured at parts of the body with a large number of sweat glands. A variety of locations on the human body are possible for the measurement of GSR; common locations include the fingers, shoulders, foot, and wrist. According to the studies, the palm and fingers have the highest density of sweat glands and are therefore used as recording locations in GSR experiments. GSR activity is typically measured in micro-siemens ($\mu$S) or micro-mho ($\mu$mho).
Sweat secretion increases when a person faces an emotional stimulus, whether positive or negative, and measurable changes in skin conductance occur as a result. One of the most widely used measures of skin activity is GSR, which is also called Electrodermal Activity (EDA) or Skin Conductance (SC). EDA is a physiological measure of the flow of electricity through human skin. Even a small amount of sweating, not visible to the naked eye on the surface of the human skin, causes a change in its electrical conductivity. EDA can be divided into (i) the Skin Conductance Level (SCL), which is the slowly changing part of the EDA, (ii) the Skin Conductance Response (SCR), which corresponds to the peaks in the EDA due to some kind of stimulus, and (iii) the Non-specific Skin Conductance Response (NS.SCR), which exists even without any external stimulus. The pattern of the skin response data is distinct according to the state of the person and is considered a reliable stress measurement method~\citep{kurniawan2013stress}. The SCR part of the EDA increases when a person encounters an emotionally arousing situation~\citep{dawson2007electrodermal}. The NS.SCR part of the EDA corresponds to cognitive processes and psycho-physiological states~\citep{nikula1991psychological}. The skin conductance of a person increases when the person is stressed, whereas it is reduced when the person is relaxed~\citep{liao2005real}. GSR has also been used for cognitive load measurement in the literature~\citep{shi2007galvanic}. The index and middle fingers of the hand of the subject are commonly used as a location for the placement of GSR electrodes because they have a sufficient number of sweat glands to measure skin response changes. The use of a GSR sensor for stress measurement has been the focus of the study in~\citep{healey2000wearable}. Whenever a person is under stress, the moisture in the skin increases, resulting in an increase in the SCL~\citep{giakoumis2012using,blechert2006identifying,ritz2000emotions,reinhardt2012salivary,hoehn1989somatic} and SCR~\citep{setz2009discriminating,ren2012affective,lee2004development,blechert2006identifying,hoehn1989somatic,nomikos1968surprise,lanzetta1976effects} parts of the electrodermal activity. SCR peaks commonly appear between 1.5 and 6.5 seconds after the start of the stimulus. In another study, SCL was found to be the most effective marker for measuring stress in comparison to HRV and EMG. Some of the commonly extracted features from the electrodermal activity for human stress detection include SCR frequency, SCR amplitude, SCR latency, SCR rise time, SCR recovery time, SCR response onset, and SCR half recovery time. Another interesting fact, observed in a study conducted in~\citep{nomikos1968surprise}, is that even the expectation of a stressful situation that has not yet occurred can cause an increase in the EDA similar to that caused by the event itself. Other factors that can affect GSR measurement include the temperature and humidity of the environment. In summary, it can be concluded that the SCR and SCL parts of the EDA consistently increase under stress conditions. A chronic stress measurement mechanism using GSR signals is presented in~\citep{panigrahy2017study}. Data of the participants are recorded in three different states, i.e., sitting, standing, and sleeping. Stressed and relaxed conditions are discriminated with a classification accuracy of 76.5\%.
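The SCL/SCR decomposition described above is often approximated in practice by simple low-pass filtering: the slowly varying component is taken as the SCL, and the residual peak train as the SCRs. The following minimal Python sketch illustrates the idea; the window length and peak threshold are illustrative assumptions, not values taken from the cited studies:
\begin{verbatim}
import numpy as np

def decompose_eda(eda, fs, win_s=4.0):
    """Split raw EDA into a tonic (SCL) and a phasic (SCR) part using
    a moving-average baseline; a crude stand-in for dedicated methods."""
    win = max(1, int(win_s * fs))
    kernel = np.ones(win) / win
    scl = np.convolve(eda, kernel, mode="same")  # slowly varying level
    scr = eda - scl                              # stimulus-driven residual
    return scl, scr

def scr_count(scr, threshold=0.05):
    """Count SCRs as upward crossings of a threshold (in micro-siemens)."""
    above = scr > threshold
    return int(np.sum(above[1:] & ~above[:-1]))
\end{verbatim}
Counting upward threshold crossings of the phasic residual yields a crude SCR frequency of the kind listed among the features above.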
Another human stress measurement scheme for the office environment using the physiological signal of GSR is proposed in~\citep{hernandez2011call}. Self-reported measures obtained from the employees were analyzed, and call logs were checked to determine the number of stressed and non-stressed calls. The SVM classifier was used to classify the stressed and non-stressed individuals, and a classification accuracy of 73\% was achieved. Another framework for the measurement of stress at the workplace using the EDA signal is proposed in~\citep{kocielnik2013smart}. The Self-Assessment Manikin questionnaire was used as a subjective measure, and pre-processing of the data was performed by removing the first 15 seconds and the last 10 seconds of the recorded signal. The authors did not report any classification accuracy, but they stated that the results obtained from the analysis are meaningful and provide useful information. \Tab{tab3} presents a summary of human stress classification schemes using GSR signals.
\begin{table}
\caption{Summary of Human Stress Detection Studies using GSR signals.}
\label{tab:tab3}
\scalebox{0.9}{
\begin{tabular}{cccccccc}
\hline\noalign{\smallskip}
\thead{Method} & \thead{Type of\\Stress} & \thead{Number of\\Subjects (M/F)} & Age & \thead{Stimulus} & \thead{Features\\Domain} & Classifier & \thead{Accuracy (Classes)}\\
\noalign{\smallskip}\hline\noalign{\smallskip}
~\citep{healey2000wearable} & Acute & 9 & -- & \thead{Car\\driving} & Time & LDA & 96.00\% (4) \\
~\citep{blechert2006identifying} & Acute & 42 (14/28) & 42.2$\pm$9.9 & Pictures & Time & DFA & 83.30\% (2) \\
~\citep{setz2009discriminating} & Acute & 33 (33/0) & 24.06 & \thead{Montreal Imaging \\Stress Task (MIST)} & Time & LDA & 82.80\% (2) \\
~\citep{ren2012affective} & Acute & 30 (14/16) & 26.8$\pm$2.56 & \thead{Stroop color\\ word test} & Time & NB & 85.5\% (2) \\
~\citep{lee2004development} & Acute & 80 & -- & \thead{Stroop color\\ word test} & Time & \thead{MLP, GRNN,\\ ANFIS} & 96.67\% (2) \\
~\citep{panigrahy2017study} & Acute & 10 & -- & \thead{Computer \\game} & Time & J48 & 76.50\% (2) \\
~\citep{hernandez2011call} & Acute & 9 (4/5) & -- & \thead{Call center} & Time & SVM & 73.41\% (2) \\
\noalign{\smallskip}\hline
\end{tabular}
}
\begin{tablenotes}
\item[*] NB: Naive Bayes, MLP: Multilayer Perceptron, J48: Decision Tree, LDA: Linear Discriminant Analysis, DFA: Discriminant Function Analysis, GRNN: Generalized Regression Neural Network, ANFIS: Adaptive Network-based Fuzzy Inference System
\end{tablenotes}
\end{table}
\subsubsection{Electrocardiography based Stress Detection}
ECG is one of the most commonly used techniques for monitoring the functionality of the heart. ECG is a non-invasive modality used for the assessment of the electrical activity of the heart in real-time. The activity of the heart is correlated with the human central nervous system. Apart from the monitoring of heart functionality, ECG is also useful for human stress measurement~\citep{ahn2019novel}. The most commonly used method for the measurement of ECG is the 12-lead ECG technique. In this technique, nine sensors are placed on the human body at specified locations. Three main sensors are placed on the right arm, the left arm, and the left leg of the person. The sensor placed on the right leg acts as a reference electrode for the ECG acquisition system. Even though a complete picture of the heart cannot be obtained using only these three sensors, a physician can use this reduced scheme for quick analysis in case of emergency treatment.
For higher resolution results, six sensors are placed on the chest of the individual. These nine sensors, together with the limb leads (Lead I, Lead II, and Lead III) formed between them, result in a total of twelve leads. One of the most important advantages of this twelve-lead system is that it gives detailed information about the heart activity of the subject, leading to better diagnosis and treatment; on the other hand, its largest disadvantage is that it produces a huge amount of data, especially when recording is done for many hours. ECG signals are characterized by peaks that include P, Q, R, S, T, and U. Each of these peaks has its own characteristics and gives specific information about the heart activity of the individual~\citep{al2007hardware}. Commonly used parameters for the assessment of ECG signals include the P wave, the PR interval, the QRS complex, and the QT interval. For medical purposes, all four of these parameters are evaluated; for other applications, some peaks may be more important than others. A wide range of studies for human stress measurement using ECG signals has been presented in the literature~\citep{karthikeyan2012study,karthikeyan2011ecg}. A human stress classification scheme using the ECG signal and a mental arithmetic task as a stimulus is presented in~\citep{karthikeyan2012study}. Statistical features are extracted using the discrete wavelet transform, and the low- and high-frequency bands of the ECG signals are analyzed separately. Three-level human stress classification, i.e., low stress, medium stress, and high stress, is performed using a kNN classifier. Maximum classification accuracies of 96.3\% and 75.9\% are achieved for the low-frequency and high-frequency bands, respectively, using the covariance feature. Another human stress recognition framework using ECG and the discrete wavelet transform is presented in~\citep{karthikeyan2011ecg}. The Stroop color-word test is used as a stimulus, and heart rate variability is extracted as a feature from the recorded ECG signal. An accuracy of 96.41\% is achieved for stressed vs relaxed state classification using the kNN classifier. Another human stress assessment scheme, based on a mono-fuzzy index extracted from ECG or GSR signals, is presented in~\citep{charbonnier2018multi}. Four different stress tasks are used in the experiment, which include a mental arithmetic stress task, a mental arithmetic control task, the Trier social stress test, and a Trier social control test, and a classification accuracy of 72\% was achieved for the stress and no-stress classes. In~\citep{liu2014listen}, the authors presented a stress classification scheme using ECG signals. The dataset used in the experiment was adopted from PhysioNet~\citep{goldberger2000physiobank}; it consists of physiological signals of GSR and ECG recorded while drivers were in a rest state or experiencing stressful events. Time- and frequency-domain features of the heart rate variability and the spectral power of the ECG signal are used. An F-measure of 0.85 is achieved for stress classification using an SVM classifier. Acute stress classification using ECG signals is presented in~\citep{tanev2014classification}. Four different stimuli, which include images, audio, mental tasks, and a rest state, are used in the experiment. Linear as well as non-linear features are extracted from the HRV data obtained from the ECG signals, and a classification accuracy of 80\% is achieved for acute stress classification. The ECG signal has been analyzed for human stress classification in~\citep{bong2012analysis}.
Time-domain features, which include heart rate, mean R-peak amplitude, and mean R-R intervals, are extracted from the ECG signal. kNN and SVM were used for classification, and mean classification accuracies of 77.69\% and 66.49\% are achieved for two and three classes, respectively. Short-term ECG and heart rate variability signals are used for human stress classification in a study conducted in~\citep{karthikeyan2013detection}. The Stroop color-word test is used as a stimulus, and pre-processing of the acquired data is performed using a wavelet de-noising algorithm. Frequency-domain features are extracted from the HRV signal, which is obtained from the recorded ECG signals. Classification is performed using probabilistic neural network and kNN classifiers, with an average achieved accuracy of 91.66\%. A driver stress recognition framework using ECG is proposed in~\citep{keshan2015machine}. The study aimed at the classification of stress at three different levels, i.e., low, medium, and high stress. Seven different classifiers, namely the Naive Bayes, Logistic Regression, Multilayer Perceptron, SVM, J48, kNN, and random forest classifiers, are used for classification. The decision tree algorithm gave the best classification results, with an achieved accuracy of 88\% for three classes. A stress measurement mechanism among students during an oral exam using ECG signals is presented in~\citep{castaldo2016detection}. ECG data of the students are recorded during the oral exam as well as after the vacation, which served as the baseline recording. Time- and frequency-domain features are extracted from the recorded data and subjected to classification using Naive Bayes, Decision Tree, SVM, and Multilayer Perceptron classifiers. The best classification accuracy of 80\% is achieved using a decision tree classifier. \Tab{tab4} presents a summary of human stress classification schemes using ECG signals.
\begin{table}
\caption{Summary of Human Stress Detection Studies using ECG signals.}
\label{tab:tab4}
\scalebox{0.9}{
\begin{tabular}{cccccccc}
\hline\noalign{\smallskip}
\thead{Method} & \thead{Type of\\Stress} & \thead{Number of\\Subjects (M/F)} & Age & \thead{Stimulus} & \thead{Features\\Domain} & Classifier & \thead{Accuracy (Classes)}\\
\noalign{\smallskip}\hline\noalign{\smallskip}
~\citep{karthikeyan2012study} & Acute & 10 (0/10) & 20-25 & \thead{Mental arithmetic \\ task} & Wavelet & kNN & 96.30\% (4) \\
~\citep{karthikeyan2011ecg} & Acute & 10 (0/10) & 20-25 & \thead{Stroop color\\word test} & Wavelet & kNN & 96.41\% (2) \\
~\citep{charbonnier2018multi} & Acute & 20 & 19-30 & \thead{Stroop color\\ word test} & Frequency & \thead{mono-feature \\fuzzy index} & 72.00\% (4) \\
~\citep{tanev2014classification} & Acute & 10 (8/2) & 22-26 & \thead{IAPS, IADS} & \thead{Time and \\Frequency} & NB & 90.00\% (2) \\
~\citep{bong2012analysis} & Acute & 5 & -- & Audio-visual & Time & SVM, kNN & \thead{77.69\% (2) \\ 66.49\% (3)} \\
~\citep{karthikeyan2013detection} & Acute & 60 (30/30) & 21-25 & \thead{Stroop color\\word test} & \thead{Time and \\Frequency} & kNN, PNN & 91.66\% (2) \\
~\citep{keshan2015machine} & Acute & 17 & -- & Driving & Time & \thead{NB, LR, MLP, \\SVM, DT, kNN, RF} & 88.00\% (3) \\
~\citep{castaldo2016detection} & Acute & 42 & -- & \thead{Oral \\Examination} & \thead{Time and \\Frequency} & \thead{NB, DT,\\SVM, MLP} & 80.00\% (2) \\
\noalign{\smallskip}\hline
\end{tabular}
}
\begin{tablenotes}
\item[*] NB: Naive Bayes, MLP: Multilayer Perceptron, LR: Logistic Regression, DT: Decision Tree, SVM: Support Vector Machine, kNN: k-Nearest Neighbors, RF: Random Forest, IAPS: International Affective Picture System, IADS: International Affective Digital Sounds, PASAT: Paced Auditory Serial Addition Task, FDA: Fisher discriminant algorithm, PNN: Probabilistic Neural Network
\end{tablenotes}
\end{table}
\subsubsection{Heart Rate based Stress Detection}
HR is one of the most widely used measures of human stress available in the literature. Heart rate is defined as the number of heartbeats in one minute, measured in beats per minute (bpm). The RR interval of the ECG signal, which is defined as the interval between consecutive heartbeats, has an inverse relationship with the heart rate of a person. In the literature, a large number of studies report a significant increase in heart rate when facing a stressful situation~\citep{giannakakis2017stress,engert2014exploring,vinkers2013effect,lundberg1994psychophysiological,krantz2004consistency,finsen2001muscle,reinhardt2012salivary,acerbi2016wearable,moriguchi1992spectral,steptoe2001acute,ring2002shifting,tugade2004resilient,vuksanovic2007heart,schubert2009effects,clays2011perception,lackner2011phase,van2015ambulatory}. A human stress detection framework based on facial cues, using features of eye movement, mouth activity, head movement, and heart rate acquired via the PPG signal, is presented in~\citep{giannakakis2017stress}. Four different stressors, which include social exposure, emotion recall, stressful images, and stressful videos, are used in the experiment. Feature selection is applied to select the optimum set of features for the discrimination of the stress state from the neutral state. Five different classifiers, namely the kNN, Generalized Likelihood Ratio, SVM, Naïve Bayes, and AdaBoost classifiers, are employed for classification purposes.
Maximum classification accuracy of 91.68\% is achieved by the AdaBoost classifier using the social exposure stressor. A study comparing thermal infrared imaging for the measurement of human stress with other stress biomarkers, namely heart rate, heart rate variability, alpha-amylase, cortisol, and finger temperature, is presented in~\citep{engert2014exploring}. Two different stressors, the cold pressor test and the Trier social stressor test, are used in the experiment. The study reported that under stressful situations the heart rate of the subjects increased, whereas it decreased in the recovery phase. In another study conducted in~\citep{lundberg1994psychophysiological}, the authors reported an increase in the heart rate of individuals under stress conditions as compared to the baseline rest condition. Women were found to have an increased heart rate compared to men when facing stressors, whereas men had higher blood pressure than women under stress conditions in a study presented in~\citep{krantz2004consistency}. The cardiovascular response of individuals to computer mouse work with and without memory demands is discussed in a study presented in~\citep{finsen2001muscle}. The study found that with increasing memory demands, the heart rate of the individuals increased. A new stress induction protocol named the Mannheim Multi-component Stress Test (MMST) is designed in a study conducted in~\citep{reinhardt2012salivary}. The MMST protocol included mental arithmetic tasks, affective images, sounds, and motivational stressors. The heart rate of the subjects was found to increase when facing the stressor. Another human stress measurement study based on wearable sensors is presented in~\citep{acerbi2016wearable}. The mean heart rate feature is extracted from the recorded heart rate variability signal. The study concluded that stressed and non-stressed individuals show significant differences in heart rate. The influence of acute mental stress on the cardiovascular response and the concentrations of inflammatory cytokines is examined in a study conducted in~\citep{steptoe2001acute}. The study reported that participants had an increased heart rate and blood pressure when facing stressors as compared to the baseline condition. Another human stress measurement study, conducted in~\citep{vuksanovic2007heart}, showed that the increase in the heart rate of the subject in the mental stress aloud condition is due to changes in the autonomic modulation of the spectral power of the high-frequency bands of the ECG signal. A study to examine the effects of chronic and short-term stress on heart rate and heart rate variability is presented in~\citep{schubert2009effects}. A speech task has been used as a stressor for the experiment, and time, frequency, and phase domain measures were examined. The study reported that the heart rate of the subjects increased significantly when performing the public speaking task as compared to the rest state. A study to correlate the perception of work stressors with measures of heart rate variability is presented in~\citep{clays2011perception}. Correlation analysis, multiple linear regression, and ANOVA are used to analyze the HRV signal. The mean heart rate is extracted as a feature from the HRV signal, and it is found that the mean HR was raised in the high work stressor group as compared to the low stressor group.
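Across all of these studies, the reported heart rate is ultimately derived from beat detection: R peaks are located in the ECG (or pulse peaks in the PPG), and HR follows from the mean RR interval as $\mathrm{HR}~(\mathrm{bpm}) = 60/\overline{RR}~(\mathrm{s})$. The following Python sketch, assuming SciPy and with illustrative filter settings rather than those of any cited study, outlines this derivation:
\begin{verbatim}
import numpy as np
from scipy.signal import butter, filtfilt, find_peaks

def heart_rate_from_ecg(ecg, fs):
    """Estimate HR in bpm from R peaks; a simplified stand-in for
    full QRS detectors such as Pan-Tompkins."""
    # Band-pass 5-15 Hz to emphasise the QRS complex.
    b, a = butter(3, [5 / (fs / 2), 15 / (fs / 2)], btype="band")
    filtered = filtfilt(b, a, ecg)
    # Peaks must be prominent and at least ~250 ms apart (refractory period).
    peaks, _ = find_peaks(filtered, height=np.std(filtered),
                          distance=int(0.25 * fs))
    rr = np.diff(peaks) / fs        # RR intervals in seconds
    return 60.0 / np.mean(rr)       # HR (bpm) = 60 / mean RR (s)
\end{verbatim}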
In contrast to the studies above, a few studies in the literature report no change in the heart rate under stress~\citep{mcduff2014remote,blechert2006identifying,hynynen2011incidence,cinaz2013monitoring,mcduff2016cogcam}. Remote measurement of cognitive stress using heart rate, heart rate variability, and breathing rate has been performed in a study conducted in~\citep{mcduff2014remote}. Physiological data are acquired from the participants in the rest state and in the stress condition, i.e., while performing a mental arithmetic task. The study concluded that there is a significant difference in the breathing rate and the heart rate variability of the subjects in the stressed vs rest condition, whereas the heart rate of the subjects did not show any significant difference between the stressed and relaxed states. A method for identifying the difference between anxiety and the rest state using a variety of physiological signals, including EDA, breathing rate, and the cardiovascular measures of heart rate variability and heart rate, is presented in~\citep{blechert2006identifying}. Physiological data were acquired in the rest state as well as while facing electric shocks. The study concludes that EDA showed a significant difference between the rest and stressed states, whereas the cardiovascular measures showed very little difference between the two groups. A study to correlate a self-reported questionnaire with cardiac autonomic modulation in real-life scenarios is discussed in~\citep{hynynen2011incidence}. The PSS questionnaire was filled in by the participants, and the participants were grouped into low and high-stress groups based on their PSS scores. R-R interval data were recorded while participants were sleeping at night and during an orthostatic test after awakening in the morning. The R-R interval data are used to extract HRV and HR features in the time as well as the frequency domain. The study concluded that a high score on the stress questionnaire is correlated with lower HRV in the orthostatic test. Moreover, no difference was observed in the heart rate and heart rate variability of the low and high-stress participants. A new stress recognition framework using a contact-free camera as the apparatus and computer-based tasks as the stimulus is proposed in~\citep{mcduff2016cogcam}. PPG signals are recorded, and heart rate, heart rate variability, and breathing rate features are extracted and used for the identification of stress during the tasks. The study identified that heart rate variability shows significant changes during the stressor, whereas there is no difference in the heart rate and breathing rate between the two groups. It can be observed from the above studies that heart rate has been widely used as a marker for human stress measurement because it is a reliable indicator of arousal due to stress. \Tab{tab5} presents a summary of human stress classification schemes using heart rate.
\begin{table}
\caption{Summary of Human Stress Detection Studies using Heart Rate Measure.}
\label{tab:tab5}
\scalebox{0.9}{
\begin{tabular}{cccccccc}
\hline\noalign{\smallskip}
\thead{Method} & \thead{Type of\\Stress} & \thead{Number of\\Subjects (M/F)} & Age & \thead{Stimulus} & \thead{Features\\Domain} & Classifier & \thead{Accuracy (Classes)}\\
\noalign{\smallskip}\hline\noalign{\smallskip}
~\citep{blechert2006identifying} & Acute & 42 (14/28) & 42.2$\pm$9.9 & Pictures & Time & DFA & 83.30\% (2) \\
~\citep{giannakakis2017stress} & Acute & 23 (16/7) & 45.1$\pm$10.6 & \thead{Stroop color\\word test,\\IAPS, videos} & Time & \thead{AdaBoost} & 91.68\% (2) \\
~\citep{mcduff2014remote} & Acute & 10 (3/7) & 18-30 & \thead{Mental arithmetic \\task (MAT)} & Frequency & SVM & 85.00\% (2) \\
~\citep{mcduff2016cogcam} & Acute & 10 (5/5) & 18-28 & \thead{Berg Card \\Sorting Task (BCST)} & Frequency & NB & 86.00\% (2) \\
\noalign{\smallskip}\hline
\end{tabular}
}
\begin{tablenotes}
\item[*] NB: Naive Bayes, SVM: Support Vector Machine, IAPS: International Affective Picture System, DFA: Discriminant Function Analysis
\end{tablenotes}
\end{table}
\subsubsection{Skin Temperature based Stress Detection}
Skin temperature is the temperature of the outermost surface of the body. The normal temperature of the outer surface of the skin lies between 33.5 and 36.9$^{\circ}$C. Our sense of hot and cold depends on the amount of energy flowing to and from the skin. Skin temperature depends on the temperature of the air and the time spent in that environment. Human skin temperature is strongly correlated with the heart activity and sweat reaction of an individual. Changes in skin temperature are connected to stressful and anxious conditions~\citep{mcfarland1985relationship}. Skin temperature has been measured at a variety of locations on the human body, such as the finger, arm, face, and armpits. Measurements at different positions give different results under stress because the temperature increases on some parts of the body and decreases on others. In~\citep{zhai2006stress}, a skin temperature-based human stress measurement method has been developed. Skin temperature has a negative correlation with human stress, i.e., a decrease in stress level corresponds to an increase in ST and vice versa~\citep{reisman1997measurement}. A patch-based human stress monitoring system using skin temperature and skin conductance is proposed in~\citep{yoon2016flexible}. Skin temperature has shown a negative correlation with the level of chronic stress~\citep{lee2010wearable,torii1992fall}. Changes in skin temperature to identify different levels of stress are studied in~\citep{karthikeyan2012descriptive}. The Stroop color-word test is used as a stimulus and a probabilistic neural network as a classifier to achieve an accuracy of 88\% for four levels of stress. The effect of core and peripheral body temperature on human stress is discussed in~\citep{vinkers2013effect}. The study reported a decrease in the temperature of the fingertips and palm, whereas the temperature of the upper arm increased. An acute human stress measurement system using skin temperature is presented in~\citep{herborn2015skin}. Skin temperature, when measured with an axillary thermometer, tends to increase under stressful situations~\citep{marazziti1992psychological}.
Some other studies analyzing the skin temperature of the surface of the finger under human stress report a decrease in temperature~\citep{lee2004development,rimm1996psychological,vinkers2013effect,karthikeyan2012descriptive,engert2014exploring}. The slope of the skin temperature has been used in some stress studies instead of the mean temperature value~\citep{barreto2007significance}. A study in~\citep{rimm1996psychological} reports different temperature changes in different parts of the body under the stressful stimulus of an interview. The temperature on the hands of the person decreased, whereas the cheeks and eyes tended to show an increase in temperature. Moreover, there also exists a temperature difference between the left and right cheeks of the participants. \Tab{tab6} presents a summary of human stress detection schemes using skin temperature.
\begin{table}
\caption{Summary of Human Stress Detection Studies using Skin Temperature Measure.}
\label{tab:tab6}
\scalebox{0.9}{
\begin{tabular}{cccccccc}
\hline\noalign{\smallskip}
\thead{Method} & \thead{Type of\\Stress} & \thead{Number of\\Subjects (M/F)} & Age & \thead{Stimulus} & \thead{Features\\Domain} & Classifier & \thead{Accuracy (Classes)}\\
\noalign{\smallskip}\hline\noalign{\smallskip}
~\citep{lee2004development} & Acute & 80 & -- & \thead{Stroop color\\ word test} & Time & \thead{MLP, GRNN,\\ ANFIS} & 96.67\% (2) \\
~\citep{zhai2006stress} & Acute & 32 & 21-42 & \thead{Stroop color\\ word test,\\Emotional\\pictures} & \thead{Time and \\Frequency} & SVM & 90.10\% (2) \\
~\citep{karthikeyan2012descriptive} & Acute & 60 (30/30) & 22.5$\pm$2.5 & \thead{Stroop color\\ word test} & Time & PNN & 88.00\% (4) \\
\noalign{\smallskip}\hline
\end{tabular}
}
\begin{tablenotes}
\item[*] MLP: Multilayer Perceptron, PNN: Probabilistic Neural Network, GRNN: Generalized Regression Neural Network, ANFIS: Adaptive Network-based Fuzzy Inference System
\end{tablenotes}
\end{table}
\subsubsection{Respiratory Rate based Stress Detection}
The respiration rate of a person can be defined as the number of breaths a person takes in a duration of one minute. Two of the most common measures of respiration are breath rate and breath amplitude or depth~\citep{simoes1991respiratory}. The breath rate of a person increases under stressful conditions, whereas it decreases in calm situations~\citep{vinkers2013effect,mcduff2014remote,grossman1983respiration}. Stress is associated with irregularities of the respiratory rate~\citep{singh2013stress}, a shift from abdominal to thoracic breathing~\citep{ahmed2015rebreathe}, and faster and shallower breathing~\citep{kreibig2010autonomic}. Breath rate sensors have been reported to give an accurate estimate of the respiratory rate. Breathing activity monitoring using chest cavity expansion has been reported in~\citep{stern2001psychophysiological}. For the detection of human stress, respiratory signals have been acquired using an elastic Hall-effect sensor placed at the lower part of the chest~\citep{healey2005detecting} and by use of thermistors placed in the nasal passage of the subject~\citep{shin1998estimation}. The respiratory rate signal has also been used in combination with other biomedical sensors for the assessment of human stress~\citep{hosseini2011classification}. The oxygen consumption rate has been derived from the respiratory rate of a person and is considered a fairly reliable measure of human stress because oxygen demand increases under stress~\citep{seematter2002metabolic}.
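The breath rate itself can be estimated directly from a chest-expansion or nasal-thermistor signal as the dominant oscillation frequency within the physiological breathing band. A minimal Python sketch follows; the band limits and the assumption of a recording of at least roughly ten seconds are illustrative choices, not values from the cited studies:
\begin{verbatim}
import numpy as np

def breaths_per_minute(resp, fs):
    """Estimate breath rate as the dominant spectral peak of the
    respiration signal within a plausible breathing band."""
    resp = resp - np.mean(resp)              # remove the DC offset
    spectrum = np.abs(np.fft.rfft(resp))
    freqs = np.fft.rfftfreq(len(resp), d=1.0 / fs)
    band = (freqs >= 0.1) & (freqs <= 0.7)   # ~6 to 42 breaths per minute
    dominant = freqs[band][np.argmax(spectrum[band])]
    return dominant * 60.0                   # Hz -> breaths per minute
\end{verbatim}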
In~\citep{fernandez2018mental}, the authors proposed another stress recognition scheme using respiratory signals. Another study for human stress assessment using a respiration sensor is presented in~\citep{shan2020respiratory}. \Tab{tab7} presents a summary of human stress classification schemes using respiration rate.
\begin{table}
\caption{Summary of Human Stress Detection Studies using Respiratory Rate Measure.}
\label{tab:tab7}
\scalebox{0.9}{
\begin{tabular}{cccccccc}
\hline\noalign{\smallskip}
\thead{Method} & \thead{Type of\\Stress} & \thead{Number of\\Subjects (M/F)} & Age & \thead{Stimulus} & \thead{Features\\Domain} & Classifier & \thead{Accuracy (Classes)}\\
\noalign{\smallskip}\hline\noalign{\smallskip}
~\citep{mcduff2014remote} & Acute & 10 (3/7) & 18-30 & \thead{Mental arithmetic \\task (MAT)} & Frequency & SVM & 85.00\% (2) \\
~\citep{ahmed2015rebreathe} & Acute & 25 (15/10) & 18-35 & \thead{Stroop color\\word test,\\ Public \\speaking} & \thead{Time and \\Frequency} & GEE & 88.00\% (2) \\
~\citep{hosseini2011classification} & Acute & 15 (15/0) & 20-24 & \thead{Picture \\presentation test} & \thead{Time and \\Frequency} & SVM & 76.95\% (2) \\
~\citep{wijsman2013wearable} & Acute & 30 (25/5) & 19-53 & \thead{Calculation and \\memory task} & \thead{Time and \\Frequency} & GEE & 74.50\% (2) \\
~\citep{rigas2011real} & Acute & 13 (10/3) & 22-41 & \thead{Car \\driving} & \thead{Time and \\Frequency} & BN & 96.00\% (2) \\
~\citep{singh2013novel} & Acute & 10 & -- & \thead{Car \\driving} & Time & ANN & 80.00\% (2) \\
~\citep{wijsman2011towards} & Acute & 30 (25/5) & 19-53 & \thead{Calculation, \\puzzle and \\memory task} & \thead{Time and \\Frequency} & \thead{ANN, LBN,\\ QBN, FLS} & 80.00\% (3) \\
~\citep{shan2020respiratory} & Acute & 89 (47/42) & 18-23 & \thead{Stroop color\\word test} & Time & SVM & \thead{93.90\% (2) \\ 93.40\% (2) \\ 89.05\% (2)} \\
~\citep{fernandez2018mental} & Acute & 43 (26/17) & 18-22 & \thead{Mathematical \\problem} & \thead{Time and \\Frequency} & MLP & 94.44\% (2) \\
\noalign{\smallskip}\hline
\end{tabular}
}
\begin{tablenotes}
\item[*] FDA: Fisher Discriminant Analysis, GEE: Generalized Estimating Equation, SVM: Support Vector Machine, BN: Bayesian Network, ANN: Artificial Neural Network, MLP: Multilayer Perceptron
\end{tablenotes}
\end{table}
\subsubsection{Heart Rate Variability based Stress Detection}
HRV is a measure of the variation in the time interval between consecutive heartbeats of an individual. ANS activity can be reliably measured using the HRV parameter, making it a strong tool for human stress assessment~\citep{pflanzer2013galvanic}. HRV shows distinct changes in response to changes in an individual's state~\citep{acharya2006heart}. HRV can be obtained using ECG as well as PPG sensor data. HRV measurement methods based on ECG signals have been developed in~\citep{clifford2002signal}. An ECG-recording-based study is conducted in~\citep{sloan1994effect} to analyze the relationship between RR intervals (the time between consecutive heartbeats), HRV, and human stress. The study concludes that an increase in heart rate is correlated with a decrease in the RR interval. Another study for human stress assessment using the physiological signals of ECG and HRV is introduced in~\citep{karthikeyan2013detection}. A classification accuracy of 94.66\% for the normal vs stressed classes using a fusion of ECG and HRV signals is achieved. In~\citep{jobbagy2017hrv}, the authors proposed a stress recognition system to characterize human stress using HRV signals. The influence of HR and HRV on human stress is discussed in~\citep{taelman2009influence}.
The HR and HRV of the subjects are recorded in the rest state as well as while performing a mental stressor. The study concluded that the HR and HRV of the subject change when facing a mental stressor and hence can be used as potential biomarkers for the assessment of human stress. Another study about the association of mental stress with HRV is presented in~\citep{salahuddin2007dependence}. The correlation between the perceived stress faced by college students and HRV is explored in~\citep{lombardo2019relationship}. Another cognitive stress measurement model using HRV, with an accuracy of 85\%, is proposed in~\citep{mcduff2014remote}. A deep learning model for the identification of mental stress in firefighters using HRV data is presented in~\citep{oskooei2019destress}. A study about the correlation of mental stress and the HRV of students during the university final examination is presented in~\citep{hammoud2019stress}. The study reports that HRV in female students is significantly lower as compared to their male counterparts before and after taking the exam. A study to find the correlation between perceived mental stress and the HRV parameters of the subjects is discussed in~\citep{orsila2008perceived}. A strong correlation is found in that study between the perceived stress and the values of the triangular interpolation of the R-to-R (RR) interval histogram (TINN) and the root mean square of differences of successive RR intervals (RMSSD) of the HRV data obtained in the morning and during the workday. Some studies claim that around five minutes of HRV data are needed for a reasonable analysis~\citep{malik1996heart}, whereas other studies negate this conclusion, claiming that an even smaller amount of data can be used as a reliable marker of human stress~\citep{hall2004acute,salahuddin2007ultra}. The standard deviation of the NN interval (SDNN) is found to be reduced under stressful conditions~\citep{blechert2006identifying,acerbi2016wearable,schubert2009effects,clays2011perception,hynynen2011incidence,cinaz2013monitoring,bernardi2000effects,taelman2011instantaneous,tharion2009short,visnovcova2014complexity,madden1995effects}. A study to monitor human stress using the physiological signals of electrodermal activity and heart rate variability, recorded via wearable sensors, is presented in~\citep{acerbi2016wearable}. A new stress-inducing protocol named TransSafe (The Ambient Response to Avoid Negative Stress and enhance SAFEty) is developed in this study. The subjective questionnaires of the State-Trait Anxiety Inventory (STAI) and the Shortened State Stress Questionnaire (SSSQ) were filled in by the participants before and after facing the stressor. Time- and frequency-domain features are extracted from the HRV signal. A statistical test is applied to the extracted features, and a significant difference is found in some of the features of the EDA and HRV signals. A study to examine the effect of short-term and chronic stress on the heart rate variability of the subject is conducted in~\citep{schubert2009effects}. A speech task is used as the stressor in the experiment, and it is found that the standard deviation of the R-R interval is reduced when facing the stressor. The relationship between the perception of work stressors and heart rate variability is examined in a study conducted in~\citep{clays2011perception}, where the perception of the work stressors is measured using a 27-item job stress questionnaire.
An association is found between the workers' stress and a lower percentage of differences between adjacent normal RR intervals exceeding 50 ms (pNN50), lower high-frequency power, and a higher ratio of low-frequency over high-frequency power. Moreover, no significant correlation between low-frequency power and worker stress is found. An investigation of the relationship of self-reported measures with heart rate variability in real-life situations is presented in~\citep{hynynen2011incidence}. The SDNN feature extracted from the HRV signal was reduced in the high-stress condition when compared to the low-stress condition. Monitoring of mental workload in an office work scenario using heart rate variability features is proposed in~\citep{cinaz2013monitoring}. The NASA Task Load Index questionnaire is used to obtain the subjective mental workload of the participants. Time- and frequency-domain features are extracted from the HRV signal; the pNN50 feature was found to decrease significantly, and the SDNN feature showed a consistent decrease under stress conditions. Another study, to assess whether talking or reading aloud or silently affects heart rate variability, is presented in~\citep{bernardi2000effects}. An increase in the speed of breathing and a decrease in the mean and variance of the RR interval, as compared to normal breathing, are observed when reading silently as compared to reading aloud. A study to monitor instantaneous changes in heart rate activity due to mental workload in an office environment is proposed in~\citep{taelman2011instantaneous}. The participants are asked to perform a low mental workload task and a high mental workload task twice, where each of these tasks is followed by a rest state condition. A significant difference in heart rate and heart rate variability is observed under mental workload conditions as compared to the baseline rest condition. A study to explore heart rate variability in students during examination time is presented in~\citep{tharion2009short}. The mean of the RR interval is reported to be significantly lower, whereas the mean arterial pressure and SDNN are found to be higher during the examination time. A study to understand the relationship between acute mental stress and the complexity and time asymmetry of the HRV signal is proposed in~\citep{visnovcova2014complexity}. Two different stimuli are used, the SWT and a mental arithmetic task. The study reveals that SDNN was significantly lower when facing the stressful situation as compared to the recovery period. The effect of mental state on heart rate and blood pressure variability in both males and females is examined in a study conducted in~\citep{madden1995effects}. The mental arithmetic task is used as a stimulus for the experiment. As compared to the control condition, the stressor condition causes a decrease in SDNN, the log standard deviation of systolic blood pressure, log total power, and log fractal powers. The Root Mean Square of the Successive Differences (RMSSD) is another HRV feature that has been explored in the literature, and it has been established to decrease under stress~\citep{acerbi2016wearable,ring2002shifting,hynynen2011incidence,cinaz2013monitoring,taelman2011instantaneous,tharion2009short}. The authors in~\citep{li2009longitudinal} reported that RMSSD, a time-domain feature of HRV, and high-frequency (HF) power, a frequency-domain feature of HRV, decrease under stress conditions.
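For reference, the time-domain HRV features recurring throughout these studies are conventionally defined over a series of successive RR intervals $RR_1,\dots,RR_N$ with mean $\overline{RR}$ as
\[
\mathrm{SDNN}=\sqrt{\frac{1}{N-1}\sum_{i=1}^{N}\left(RR_i-\overline{RR}\right)^{2}},
\qquad
\mathrm{RMSSD}=\sqrt{\frac{1}{N-1}\sum_{i=1}^{N-1}\left(RR_{i+1}-RR_i\right)^{2}},
\]
\[
\mathrm{pNN50}=\frac{\left|\left\{\,i:\left|RR_{i+1}-RR_i\right|>50~\mathrm{ms}\right\}\right|}{N-1}\times 100\%,
\]
i.e., SDNN captures overall variability, while RMSSD and pNN50 capture short-term, beat-to-beat variability.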
Another important frequency-domain feature of HRV discussed in the literature is the ratio of low-frequency (LF) power to HF power, which increases under stressful situations~\citep{mcduff2014remote,blechert2006identifying,acerbi2016wearable,moriguchi1992spectral,vuksanovic2007heart,schubert2009effects,clays2011perception,cinaz2013monitoring,mcduff2016cogcam,lucini2002hemodynamic,taelman2011instantaneous,tharion2009short,taelman2009influence,hjortskov2004effect,hall2004acute}. The very low-frequency (VLF) band of the HRV signal is found to increase in some studies~\citep{acerbi2016wearable,moriguchi1992spectral}. Another HRV feature, named the correlation dimension D2, is reduced under stress in a study conducted on university students during their examination time in~\citep{melillo2011nonlinear}. A new framework for the remote measurement of heart rate variability using a webcam for the detection of stress is proposed in~\citep{bousefsaf2013remote}. HRV is investigated for different stress factors, which include stressed, tense, concentrated, and stimulated conditions. The study concluded that remote measurement of HRV can be used as a reliable marker for human stress assessment. Another stress recognition study using HRV signals is discussed in~\citep{kim2008detection}. A self-reporting questionnaire was used to label the participants into low and high-stress groups. HRV data were recorded for three different periods during the day, and it was concluded that the highly stressed participants showed a decrease in the HRV patterns as compared to the low-stress group. Moreover, using logistic regression as a classifier, an accuracy of 63.2\% is achieved for the low vs high-stress groups. Another HRV-based stress measurement scheme using time- and frequency-domain features is presented in~\citep{boonnithi2011comparison}. Stress detection using ECG and HRV features is presented in~\citep{melillo2013classification}. The variation in the heart rate variability of students in the rest condition and during the examination phase was examined using a non-parametric classifier called the Classification and Regression Tree (CART). A sensitivity and specificity of 83.33\% and 90.48\%, respectively, are achieved for stress vs rest state classification. Another study for four-level stress classification, i.e., no stress, low stress, medium stress, and high stress, using HRV features is discussed in~\citep{vanitha2014hierarchical}. The database used for the experiment in this study was the MIT-BIH multi-parameter database, where different driving tasks are used as a stimulus. A hierarchical SVM is used as a classifier to classify the four stress states, with a classification accuracy of 92\%. Driver stress level recognition using HRV features along with a support vector machine classifier is performed in~\citep{munla2015driver}. The database used in this experiment is the Stress Recognition in Automobile Drivers database, and a classification accuracy of 83\% is reported. A stress measurement scheme for driver stress monitoring using HRV data is proposed in~\citep{wang2013k}. The DriveDB database is used in this study for measuring driver stress, and features are extracted using parameter-based methods. The kernel-based class separability (KBCS) method is used to select the optimum feature set, and the LDA and PCA algorithms are used for dimensionality reduction. Next, the classification of driver stress is performed using a kNN classifier, with an achieved accuracy of 97\%.
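The LF/HF ratio that recurs in the frequency-domain analyses above is conventionally computed over the bands 0.04-0.15 Hz (LF) and 0.15-0.4 Hz (HF) of a uniformly resampled RR series. The following Python sketch, assuming SciPy, outlines the computation; the resampling rate and Welch settings are illustrative assumptions:
\begin{verbatim}
import numpy as np
from scipy.signal import welch

def lf_hf_ratio(rr_s, fs_resample=4.0):
    """LF/HF ratio from RR intervals given in seconds."""
    t = np.cumsum(rr_s)                        # beat occurrence times
    grid = np.arange(t[0], t[-1], 1.0 / fs_resample)
    rr_uniform = np.interp(grid, t, rr_s)      # linear resampling
    freqs, psd = welch(rr_uniform, fs=fs_resample,
                       nperseg=min(256, len(rr_uniform)))

    def band_power(lo, hi):
        m = (freqs >= lo) & (freqs < hi)
        return np.trapz(psd[m], freqs[m])      # integrate PSD over the band

    return band_power(0.04, 0.15) / band_power(0.15, 0.40)
\end{verbatim}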
Taking into consideration the studies presented in the literature, it is evident that the relationship of HRV to stress is not very straightforward. However, many of the studies report consistent trends for certain HRV features and hence help draw some useful conclusions. \Tab{tab8} presents a summary of human stress detection schemes using HRV.
\begin{table}
\caption{Summary of Human Stress Detection Studies using Heart Rate Variability Measure.}
\label{tab:tab8}
\scalebox{0.9}{
\begin{tabular}{cccccccc}
\hline\noalign{\smallskip}
\thead{Method} & \thead{Type of\\Stress} & \thead{Number of\\Subjects (M/F)} & Age & \thead{Stimulus} & \thead{Features\\Domain} & Classifier & \thead{Accuracy (Classes)}\\
\noalign{\smallskip}\hline\noalign{\smallskip}
~\citep{blechert2006identifying} & Acute & 42 (14/28) & 42.2$\pm$9.9 & Pictures & Time & DFA & 83.30\% (2) \\
~\citep{karthikeyan2013detection} & Acute & 60 (30/30) & 21-25 & \thead{Stroop color\\word test} & \thead{Time and \\Frequency} & kNN, PNN & 91.66\% (2) \\
~\citep{mcduff2014remote} & Acute & 10 (3/7) & 18-30 & \thead{Mental arithmetic \\task (MAT)} & Frequency & SVM & 85.00\% (2) \\
~\citep{mcduff2016cogcam} & Acute & 10 (5/5) & 18-28 & \thead{Berg Card \\Sorting Task (BCST)} & Frequency & NB & 86.00\% (2) \\
~\citep{melillo2011nonlinear} & Acute & 42 & -- & \thead{University \\Examination} & Time & LDA & 90.00\% (2) \\
~\citep{melillo2013classification} & Acute & 42 (19/23) & 20-28 & \thead{University \\Examination} & \thead{Time and \\Frequency} & CART & 87.00\% (2) \\
~\citep{vanitha2014hierarchical} & Acute & 16 & -- & \thead{Car \\driving} & \thead{Time and \\Frequency} & \thead{hierarchical\\SVM} & 92.00\% (4) \\
~\citep{munla2015driver} & Acute & 16 & -- & \thead{Car \\driving} & \thead{Time and \\Frequency} & \thead{SVM-RBF} & 83.00\% (2) \\
~\citep{wang2013k} & Acute & 27 & -- & \thead{Car \\driving} & \thead{Time and \\Frequency} & \thead{kNN} & 97.00\% (2) \\
~\citep{kim2008detection} & Chronic & 68 & 10-30 & Baseline & \thead{Time and \\Frequency} & LR & 66.1\% (2) \\
\noalign{\smallskip}\hline
\end{tabular}
}
\begin{tablenotes}
\item[*] DFA: Discriminant Function Analysis, kNN: k-Nearest Neighbors, PNN: Probabilistic Neural Network, SVM: Support Vector Machine, NB: Naive Bayes, LDA: Linear Discriminant Analysis, CART: Classification and Regression Tree, RBF: Radial Basis Function, LR: Logistic Regression
\end{tablenotes}
\end{table}
\subsubsection{Blood Volume Pressure based Stress Detection}
BVP is a method to measure the amount of pressure exerted on the blood vessels. When measuring blood pressure, two values are obtained: the first is the systolic blood pressure (SBP), and the second is the diastolic blood pressure (DBP). The human body releases a large number of stress hormones under stressful conditions, which increase blood pressure~\citep{gasperin2009effect}, thus making blood pressure a good indicator for stress measurement~\citep{pickering1996environmental}. A 3-year study revealed that individuals who face stress at the workplace tend to have higher SBP and DBP, and increased SBP during sleep~\citep{schnall1998longitudinal}. SBP and DBP have been reported to be higher during a mental stressor task in~\citep{ring2002shifting,lundberg1994psychophysiological,carroll2003blood,carroll2011blood}.
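Several of the studies discussed below additionally report the mean arterial pressure (MAP), which is conventionally approximated from the systolic and diastolic readings as
\[
\mathrm{MAP}\approx \mathrm{DBP}+\frac{1}{3}\left(\mathrm{SBP}-\mathrm{DBP}\right),
\]
reflecting the fact that the heart spends roughly two-thirds of the cardiac cycle in diastole at resting heart rates.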
A study presented in~\citep{ring2002shifting} discussed the hemodynamics behind the increase in blood pressure during prolonged exposure to stress. Mean arterial pressure (MAP), cardiac output (CO), and total peripheral resistance (TPR) were measured during three phases of the experiment, i.e., a rest phase, a mental arithmetic task phase, and a recovery phase. MAP increased at a constant rate during the stressor; CO increased during the first half of the stressor and returned to the resting level toward the end of the task, whereas TPR kept increasing as the mental arithmetic task progressed. In a study conducted in~\citep{lundberg1994psychophysiological}, both the systolic and diastolic blood pressure of the subjects increased during the stress session compared to the baseline condition. Authors in~\citep{carroll2003blood} present a study examining the relationship of human stress to future blood pressure values and how it is affected by the gender, age, and socioeconomic condition of the subject. The blood pressure of the subjects was recorded at rest and under mental stressors, and five years of follow-up resting-state blood pressure data of the participants were also available. The findings of the study are that the systolic blood pressure reaction to human stress is positively correlated with the follow-up systolic blood pressure, whereas no correlation could be found for diastolic blood pressure. Another conclusion of the study is that the magnitude of the predicted blood pressure is associated with the gender and socioeconomic position of a person. Another study correlating the reaction of blood pressure to acute stress with future blood pressure is proposed in~\citep{carroll2011blood}. Blood pressure readings at rest and while facing stressors were recorded and, after twelve years, resting-state blood pressure readings were taken again. The study concluded that systolic blood pressure reactivity positively predicted future systolic pressure and that blood pressure followed an increasing pattern over the span of 12 years; however, these findings were not observed for diastolic blood pressure. In another study, however, the author claims that the mental stress of a person does not affect the recorded BVP~\citep{hjortskov2004effect}. There exists a wide range of studies in which SBP and DBP are reported to increase under stressful conditions~\citep{finsen2001muscle,vinkers2013effect,lundberg1994psychophysiological,krantz2004consistency,moriguchi1992spectral,steptoe2001acute,ring2002shifting,bernardi2000effects,hjortskov2004effect,schnall1998longitudinal,carroll2003blood,carroll2011blood}. Men have been found to have higher blood pressure under stress as compared to women in a study conducted in~\citep{krantz2004consistency}. In a study conducted in~\citep{vinkers2013effect}, the authors reported that when participants face a standard stressor, both their systolic and diastolic blood pressure increase compared to a baseline recording. Authors in~\citep{finsen2001muscle} reported an increase in the blood pressure of the participants with increasing memory demands. An increase in the blood pressure of participants facing a stressor is also observed in an experimental study conducted in~\citep{moriguchi1992spectral}.
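Since several of the studies above report results in terms of mean arterial pressure, a short illustrative sketch of the standard one-third pulse-pressure approximation is given below; this textbook formula is included only for reference and is not the estimation method of any particular cited study.
\begin{verbatim}
# Illustrative only: the common one-third pulse-pressure approximation of MAP.
def mean_arterial_pressure(sbp, dbp):
    """MAP ~= DBP + (SBP - DBP) / 3, with pressures in mmHg."""
    return dbp + (sbp - dbp) / 3.0

# Example: a resting reading of 120/80 mmHg gives MAP ~= 93.3 mmHg.
print(mean_arterial_pressure(120, 80))
\end{verbatim}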
The correlation of mental stress with the cardiovascular response is examined in~\citep{steptoe2001acute}. The blood pressure of the participants is recorded during the rest state as well as while performing stressful tasks, and the stressed group shows significantly higher blood pressure than the control group. The effect of reading aloud or silently on the blood pressure of the participants is examined in a stress measurement study performed in~\citep{bernardi2000effects}, which concludes that reading silently causes an increase in the blood pressure of the participant. The effect of mental stress on the HRV and blood pressure of participants performing computer work is examined in~\citep{hjortskov2004effect}. Hence, looking at these trends, BVP can be considered a reliable marker of stress. \subsubsection{Photoplethysmography based Stress Detection} PPG is a technique to measure volumetric changes of blood in the vessels~\citep{challoner1979photoelectric}. PPG is a widely accepted technique used in clinical applications as well as commercial devices, e.g., the pulse oximeter. A PPG sensor is quite simple: a light source illuminates the skin tissue and a photodetector measures the changes in the light due to blood flow. Many commercial devices that measure blood pressure, oxygen saturation, and cardiac output are based on a PPG sensor, and a large variety of devices for the acquisition of PPG signals are currently available in the market. The components of a PPG data acquisition system include an LED light source for illuminating the tissue and a photodetector to receive the reflected light and measure its variations. Light sources commonly used in PPG sensors are red, light green, or infra-red. Green light has a shorter wavelength and thus produces a larger variation of light intensity in response to cardiac changes~\citep{maeda2011advantages}. Heart rate calculation has also been performed using PPG sensors~\citep{kageyama2007wavelet}. Many algorithms have been developed to measure heart rate and heart rate variability parameters, which can be used to measure different physiological responses including human stress~\citep{vstula2003evaluation}. The PPG signal can also be used to calculate pulse rate, pulse rate variability, blood volume pressure, blood oxygen saturation level, and blood pressure~\citep{giannakakis2019review}. In~\citep{lyu2015measuring}, the author proposed a PPG-based stress-induced vascular response index (sVRI) to measure the stress level of a person. A classical mental arithmetic task with three levels of difficulty is used as a stimulus, and the physiological signals of the participants are recorded in the baseline condition as well as while performing the task. The findings reveal that the proposed sVRI-based stress measurement index produces results comparable to the BVP, HR, and HRV measures recorded simultaneously. A PPG-based human stress measurement method is presented in~\citep{chauhan2018real}. This experimental study induced stress using the paced auditory serial addition test (PASAT); discrete wavelet transform (DWT) coefficients are extracted from the observed data, and an AdaBoost ensemble classifier is used for stress classification, achieving an accuracy of 93\% under 4-fold cross-validation.
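As a concrete illustration of the pulse-rate extraction described above, the following Python sketch estimates beats per minute from a raw PPG trace via band-pass filtering and simple peak detection. The sampling rate, filter band, and peak-detection parameters are illustrative assumptions that would need tuning for any real sensor.
\begin{verbatim}
# A minimal sketch (assumed parameters): pulse rate from a raw PPG trace.
import numpy as np
from scipy.signal import butter, filtfilt, find_peaks

def ppg_pulse_rate(ppg, fs=100.0):
    """Estimate pulse rate (beats per minute) from a PPG signal at fs Hz."""
    # Keep the cardiac band (~0.5-8 Hz), discarding baseline drift and noise.
    b, a = butter(3, [0.5 / (fs / 2), 8.0 / (fs / 2)], btype='band')
    filtered = filtfilt(b, a, ppg)
    # Systolic peaks are assumed at least ~0.33 s apart (below ~180 bpm).
    peaks, _ = find_peaks(filtered, distance=int(0.33 * fs),
                          prominence=0.5 * np.std(filtered))
    ibi = np.diff(peaks) / fs          # inter-beat intervals in seconds
    return 60.0 / np.mean(ibi)
\end{verbatim}
The inter-beat intervals computed on the way to the pulse rate are also the raw material for the PPG-derived pulse rate variability features used in several of the studies below.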
A decrease in the pulse wave amplitude (PWA) feature obtained from PPG signals is observed under stressful conditions in~\citep{henelius2016short}. The PPG signal is based on the principle of reflected light, so it also allows camera-based approaches, known as remote photoplethysmography (rPPG)~\citep{poh2010non}. rPPG has been effectively used for the measurement of stress~\citep{mcduff2014remote,mcduff2016cogcam}. Another study aimed at finding the features of PPG signals that are beneficial for the assessment of mental stress is presented in~\citep{charlton2018assessing}; seventeen different features are identified that could be used to distinguish the stressed and relaxed states. Another study on stress measurement using heart rate variability features extracted from PPG signals is presented in~\citep{mohan2016stress}. A PPG-based stress recognition study using pulse rate variability and elastic-net regression is discussed in~\citep{li2018photoplethysmography}. The mental arithmetic task is used as a stimulus for the experiment, and a significant correlation between the self-reported measure and the result obtained from the stress prediction model was observed. Another study differentiating between the distress and calm states, using IAPS as a stimulus and extracting temporal, morphological, and frequency domain features from the PPG signal, is introduced in~\citep{zangroniz2018estimation}, with an achieved classification accuracy of 82.35\%. \Tab{tab9} presents a summary of human stress classification schemes using PPG signals. \begin{table} \caption{Summary of Human Stress Detection Studies using PPG Signal.} \label{tab:tab9} \scalebox{0.9}{ \begin{tabular}{cccccccc} \hline\noalign{\smallskip} \thead{Method} & \thead{Type of\\Stress} & \thead{Number of\\Subjects (M/F)} & Age & \thead{Stimulus} & \thead{Features\\Domain} & Classifier & \thead{Accuracy (Classes)}\\ \noalign{\smallskip}\hline\noalign{\smallskip}
~\citep{mcduff2014remote} & Acute & 10 (3/7) & 18-30 & \thead{mental arithmetic \\task (MAT)} & Frequency & SVM & 85.00\% (2) \\
~\citep{mcduff2016cogcam} & Acute & 10 (5/5) & 18-28 & \thead{Berg Card \\Sorting Task (BCST)} & Frequency & NB & 86.00\% (2) \\
~\citep{chauhan2018real} & Acute & 10 & 30-58 & PASAT & \thead{Frequency and \\Wavelet} & Adaboost & 93.00\% (2) \\
~\citep{cho2019instant} & Acute & 17 (8/9) & 29.82$\pm$12.02 & \thead{Mental \\Arithmetic Task} & Frequency & ANN & 78.33\% (2) \\
~\citep{li2018photoplethysmography} & Acute & 178 (85/93) & 16-36 & \thead{Mental \\Arithmetic Task} & Frequency & elastic net & 86-91\% (2) \\
~\citep{zangroniz2018estimation} & Acute & 50 (28/22) & 20-28 & IAPS & \thead{Time and \\Frequency} & DT & 82.35\% (2) \\
\noalign{\smallskip}\hline \end{tabular} } \begin{tablenotes} \item[*] ANN: Artificial Neural Network, SVM: Support Vector Machine, NB: Naive Bayes, DT: Decision Tree \end{tablenotes} \end{table} \subsubsection{Salivary Cortisol based Stress Detection} Cortisol is a well-known biomarker of psychological stress. Salivary cortisol has been used by physicians as a diagnostic tool for the measurement of psychological stress and many other diseases for over two decades and has been reported to be a very good measure of human stress~\citep{kirschbaum1989salivary}. Since stress is a phenomenon affected by a wide range of factors, a reliable measure is required for its accurate estimation.
It has been reported that when a person is under an acute stressor, cortisol release increases~\citep{fink2000encyclopedia}. In~\citep{hellhammer2009salivary}, the author presented a correlation between stress and cortisol level: hypothalamic-pituitary-adrenal (HPA) axis activity is affected by stress and is reflected in cortisol, thus making it a very practical tool for stress detection. An acute stress measurement scheme using cortisol secretion is presented in~\citep{boucher2019acute}. A study measuring the response of sweat cortisol to human stress is presented in~\citep{tu2019sweat}. The study deduced that there exists a strong association between the diet and the cortisol of a person, and therefore adjusting the diet may help lower the cortisol level and thus prevent stress-related health issues. The relationship between changes in the cortisol level of a subject and mental stress is discussed in~\citep{luo2012relationship}, which concluded that the cortisol level of a person increases due to mental stress. Another stress detection study using cortisol as a biomarker is presented in~\citep{nath2020validating}; the model achieved a classification accuracy of 92\%. Acute stress measurement using cortisol as a biomarker is discussed in~\citep{selvaraj2015psychological}. Emotional stress measurement using cortisol is introduced in~\citep{rey2014towards}, which reveals that the cortisol level of men increases under stress. Salivary cortisol has also been used as a measure for assessing mental workload in~\citep{nomura2009salivary}. \subsection{Non-Wearable Sensor based Human Stress Detection} Non-wearable sensors are sensors in which no physical device needs to be attached to the human body; rather, the data can be acquired at a considerable distance from the subject. Non-wearable sensors for human stress measurement can be sub-divided into physical measures, behavioral measures, and computer vision-based measures. Physical measures are those in which observable parameters of the human body, like pupil dilation, speech, eye activity, and body postures, are recorded, whereas behavioral measures are those in which human stress is measured based on the interaction of the subject with some device like a keyboard, mouse, or smartphone. The third type of non-wearable sensor used for human stress measurement is computer vision-based sensors, like video cameras and thermal imaging. \subsubsection{Physical Measures} A physical property is defined as a property that is observable by humans with the naked eye. To acquire physical measures, sophisticated equipment and sensors are required. Physical measures of stress can be subdivided into four main categories: pupil dilation, speech-based measures, eye movement, and body postures. Literature corresponding to each of these categories is given below. \begin{enumerate} \item \textit{Pupil Dilation:} The eye pupil is a hole located at the center of the iris, which allows light to enter the retina. The color of the pupil is black because the light entering it is either absorbed directly by the eye tissues or absorbed after diffusion. The pupil may appear to open (dilate) or close (constrict), but it is the iris that governs its movement. Under bright lighting conditions the pupil constricts to allow less light to enter the eye, whereas under dark conditions the pupil dilates to allow more light to enter.
The size of the pupil is controlled by two muscles, the constrictor and dilator pupillae, which are in turn controlled by the sympathetic (SNS) and parasympathetic (PNS) parts of the ANS~\citep{beatty2000pupillary}. Just like the physiological responses of an individual, pupil dilation is not under the subject's control and is strongly associated with cognitive and emotional arousal~\citep{bradley2008pupil}. The relationship between affective states and pupil dilation has been discussed in a number of studies~\citep{onorati2013reconstruction,partala2003pupil,al2013using,bradley2008pupil,ren2012affective,pedrotti2014automatic}. Moreover, pupil dilation has been used as a marker for stress and anxiety assessment~\citep{honma2013hyper,simpson1971effects,baltaci2016stress,zhai2006stress}. A human stress measurement study based on pupil diameter along with the physiological signals of Galvanic Skin Response (GSR), Blood Volume Pulse (BVP), and Skin Temperature (ST) is proposed in~\citep{zhai2006stress}. SVM is used for the classification of the stressed and relaxed states with an achieved accuracy of 90.10\%, and the study concluded that, in comparison to the physiological signals, pupil diameter was a more effective indicator of stress. In a laboratory environment, when the subject is presented with a stressful stimulus, the pupil diameter increases~\citep{de2016acute}. Stress measurement based on pupil dilation has also been the subject of study in~\citep{barreto2007non}; an increase in pupil diameter indicates that the person is in a stressed state. Experimental studies have shown that both negatively and positively arousing sounds increase the diameter of the human pupil quite significantly~\citep{partala2003pupil}. The mean value of pupil diameter has also been used as a parameter for stress detection, with an increasing mean value over time indicating an increasing level of stress. Images with negative valence tend to have a stronger effect on the eye pupil of a subject who feels more stressed~\citep{kimble2010eye}. Public speaking anxiety also affects pupil size~\citep{simpson1971effects}. The size of the pupil is directly proportional to the anxiety level of a person, i.e., the higher the anxiety level, the larger the pupil diameter, and vice versa~\citep{wang2011attention}. Another study on the widening of the pupil under stress is presented in~\citep{liao2005real}. In~\citep{torres2015pupil}, the authors presented a human stress classification scheme using pupil diameter. Mental arithmetic questions were asked of the participants in front of a camera and the changes in pupil diameter were observed. The authors concluded that pupil diameter can be a good indicator of mental stress, but for better classification results it needs to be combined with physiological signals. The use of pupil dilation for stress analysis has some limitations, which need to be addressed. The size of the human pupil is not constant throughout life; it decreases with increasing age~\citep{winn1994factors}. Ambiguity exists on the effect of gender on pupil size: some studies show no correlation~\citep{winn1994factors}, whereas others show gender to affect the pupil size when faced with a painful~\citep{ellermeier1995gender} or audio stimulus~\citep{partala2003pupil}.
Lighting conditions affect the pupil size due to the in-built light reflexes of the human eye~\citep{pedrotti2014automatic,reeves1920response}. Thus, to use pupil dilation for human stress assessment, these limitations need to be taken into consideration and precautionary measures taken beforehand. \Tab{tab10} presents a summary of human stress classification schemes using pupil dilation. \begin{table} \caption{Summary of Human Stress Detection Studies using Pupil Dilation.} \label{tab:tab10} \scalebox{0.9}{ \begin{tabular}{cccccccc} \hline\noalign{\smallskip} \thead{Method} & \thead{Type of\\Stress} & \thead{Number of\\Subjects (M/F)} & Age & \thead{Stimulus} & \thead{Features\\Domain} & Classifier & \thead{Accuracy (Classes)}\\ \noalign{\smallskip}\hline\noalign{\smallskip}
~\citep{ren2012affective} & Acute & 30 (14/16) & 26.8$\pm$2.56 & \thead{Stroop color\\ word test} & Time & NB & 85.5\% (2) \\
~\citep{zhai2006stress} & Acute & 32 & 21-42 & \thead{Stroop color\\ word test,\\Emotional\\pictures} & \thead{Time and \\Frequency} & SVM & 90.10\% (2) \\
~\citep{pedrotti2014automatic} & Acute & 33 (16/17) & 23-54 & \thead{Driving \\task} & Frequency & ANN & 79.20\% (2) \\
~\citep{baltaci2016stress} & Acute & 11 (9/2) & 29-40 & \thead{IAPS} & Time & ABRF & 65-83.8\% (2) \\
\noalign{\smallskip}\hline \end{tabular} } \begin{tablenotes} \item[*] ANN: Artificial Neural Network, SVM: Support Vector Machine, NB: Naive Bayes, ABRF: Adaboost with Random Forest \end{tablenotes} \end{table} \item \textit{Speech Based Measures:} Stress measurement using speech features has been one of the focus areas of the research community. Stress in the voice is defined as ``observable variability in certain speech features due to a response to stressors''~\citep{murray1996towards}. Stress measurement from the human voice is a dynamic process: stress is measured from the nonverbal component of the voice, and speech components vary when a person faces a stressful stimulus~\citep{womack1999n}. Vocal stress analysis has been performed to identify the voice features that discriminate between stressed and neutral conditions~\citep{lefter2015recognizing}. The pitch of the speech signal is found to be a distinctive feature under emotional stress and increases when an individual is feeling stressed~\citep{williams1972emotions,hansen1988analysis,cairns1994nonlinear,junqua1996influence,protopapas1997fundamental,hansen2007speech,gharavian2012statistical,lu2012stresssense,kurniawan2013stress,sondhi2015vocal,hansen2011robust}. In~\citep{nwe2003speech}, the author suggests that the variation of the fundamental frequency of the voice is the most important feature for stress measurement. Speech-signal-based human stress measurement has also been discussed in~\citep{fernandez2003modeling,healey2005detecting,lefter2011automatic}. Articulatory, excitation, and cepstral-based features have been used in~\citep{womak1996improved} to identify stress in speech signals, and a classification accuracy of 91\% is achieved. A study conducted in~\citep{cairns1994nonlinear} distinguished between loud, angry, neutral, and clear speech and the Lombard effect by using the speech features of intensity, spectral tilt, and energy. Another speech-based stress classification scheme using spatial and spectral domain features is presented in~\citep{devillers2006real}. The frequency content of the speech signal differs between the angry, Lombard-effect, and loud states~\citep{hollien2002forensic}.
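Since pitch is the speech feature most consistently reported to rise under stress, the sketch below shows one common way to estimate the fundamental frequency of a single voiced frame via autocorrelation; the 75--400~Hz search range and the sampling rate are assumed values, and none of the cited studies is claimed to use exactly this procedure.
\begin{verbatim}
# A minimal sketch (assumed search range): frame-level F0 via autocorrelation.
import numpy as np

def estimate_f0(frame, fs=16000, fmin=75.0, fmax=400.0):
    """Return an F0 estimate (Hz) for one voiced speech frame at fs Hz."""
    frame = frame - np.mean(frame)
    # Autocorrelation for non-negative lags only.
    ac = np.correlate(frame, frame, mode='full')[len(frame) - 1:]
    lo, hi = int(fs / fmax), int(fs / fmin)  # lag range of plausible pitch
    lag = lo + np.argmax(ac[lo:hi])          # strongest periodicity in range
    return fs / lag
\end{verbatim}
Tracking this estimate over successive frames and comparing its mean and range against a neutral baseline is the kind of pitch feature the studies above exploit.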
In~\citep{simantiraki2016stress}, the author extracted the spectral tilt feature from the speech signal and found it to be less negative under stressful conditions. The energy of the equivalent rectangular bandwidth bands obtained from the spectrogram and Gabor filters is used in~\citep{he2009stress} for stress recognition with an accuracy of 79\%. Stress classification in a noisy environment is performed using Teager Energy Operator (TEO) features in~\citep{hanson1993finding}. The speech signal has been analyzed for stress using multi-resolution wavelet analysis (MWA) in~\citep{sarikaya1998subband} and a fusion of TEO and MWA in~\citep{fernandez2003modeling}. Hidden Markov Models (HMM) and TEO have been used for the analysis of stress in speech signals~\citep{hansen2011robust}. The physical features of the vocal cords of a person are examined in~\citep{yao2012physical} to identify stress in speech. The performance of automatic speech recognition could be improved if speaker stress were accurately identified~\citep{kadambe2007study}. Another speech-based human stress measurement scheme is proposed in~\citep{soury2013stress}; TSST was used as a stimulus and an SVM classifier was used to classify the stress with an achieved recall rate of 72\%. \Tab{tab11} presents a summary of human stress classification schemes using speech-based measures. \begin{table} \caption{Summary of Human Stress Detection Studies using Speech based Measures.} \label{tab:tab11} \scalebox{0.9}{ \begin{tabular}{cccccccc} \hline\noalign{\smallskip} \thead{Method} & \thead{Type of\\Stress} & \thead{Number of\\Subjects (M/F)} & Age & \thead{Stimulus} & \thead{Features\\Domain} & Classifier & \thead{Accuracy (Classes)}\\ \noalign{\smallskip}\hline\noalign{\smallskip}
~\citep{kurniawan2013stress} & Acute & 10 & -- & \thead{Stroop color\\word test,\\mental arithmetic\\ task} & \thead{Time and \\Frequency} & SVM & 92.00\% (2) \\
~\citep{womack1999n} & Acute & 44 (30/14) & 22-76 & Speech & \thead{Time and \\Frequency} & HMM & 92.41\% (2) \\
~\citep{lefter2015recognizing} & Acute & 16 & -- & Speech & \thead{Time and \\Frequency} & BN & 73.00\% (2) \\
~\citep{cairns1994nonlinear} & Acute & 32 (19/13) & 22-76 & SUSAS & Frequency & HMM & 99.10\% (2) \\
~\citep{hansen2007speech} & Acute & 32 (19/13) & 22-76 & SUSAS & Frequency & HMM & 73.80\% (2) \\
~\citep{lu2012stresssense} & Acute & 14 (4/10) & 22.86 & \thead{Job \\Interview} & Frequency & GMM & 81.00\% (2) \\
~\citep{fernandez2003modeling} & Acute & 4 & -- & \thead{driver \\speech} & Frequency & SVM & 51.20\% (4) \\
~\citep{womak1996improved} & Acute & 32 (19/13) & 22-76 & SUSAS & Frequency & HMM & 80.64\% (3) \\
~\citep{simantiraki2016stress} & Acute & 32 (19/13) & 22-76 & SUSAS & Frequency & RF & 92.06\% (2) \\
~\citep{he2009stress} & Acute & 32 (19/13) & 22-76 & SUSAS & Frequency & GMM & 81.00\% (2) \\
~\citep{sarikaya1998subband} & Acute & 32 (19/13) & 22-76 & SUSAS & Frequency & MLP & 70.00\% (2) \\
~\citep{soury2013stress} & Acute & 29 (12/17) & -- & TSST & \thead{Time and \\Frequency} & SVM & 72.00\% (2)\\
\noalign{\smallskip}\hline \end{tabular} } \begin{tablenotes} \item[*] SVM: Support Vector Machine, HMM: Hidden Markov Model, BN: Bayesian Network, GMM: Gaussian Mixture Model, RF: Random Forest, MLP: Multilayer Perceptron \end{tablenotes} \end{table} \item \textit{Eye Activity:} The functioning and behavior of the eye are affected by stress, a fact supported by several studies.
A study on human stress detection using the eye blink rate and brain activity of the subject was proposed in~\citep{haak2009detecting}. The stimulus used in the experiment was driving a car in a simulator on a road that included steep and sharp curves and featured many attention-seeking advertising boards. While driving on this road, stressful emotions were elicited in the drivers, resulting in changes in eye blink rate and brain activity. A correlation between the eye blinks of the participants and the experienced stress level was established, with a higher frequency of eye blinks observed as an indication of the individual experiencing a stressful condition. Stressful stimuli causing an increase in a person's eye blink rate have likewise been reported in the studies conducted in~\citep{haak2009detecting,giannakakis2017stress}. A biometric identification application of human stress detection is proposed in~\citep{pavlidis2000thermal}. Images are used to detect the facial expressions and eye movements corresponding to anxiety, alertness, and fearfulness, and rapid eye movement under the stress state was reported. The biometric application was based on the idea of ``what you are doing'' instead of the traditional approach of ``who you are'', and the study reported encouraging results for the proposed scheme. A human stress detection framework based on facial cues, using features of eye movement, mouth activity, head movement, and heart rate acquired via the PPG signal, is presented in~\citep{giannakakis2017stress}. Four different stressors, which include social exposure, emotion recall, stressful images, and stressful videos, were used in the experiment; the social exposure stressor included a text reading task and a self-description speech. The eye blink rate was reported to increase in response to stressful images and the Stroop color-word test, whereas reading a difficult text reduced the eye blink rate. Moreover, an increase in the eye aperture was observed in stressful situations. Stress also affects the eye gaze behavior of a person. In~\citep{laretzaki2011threat}, the authors presented a study to determine if and how threat and trait anxiety interact to affect the stability of gaze fixation. Video oculography was used to estimate the gaze position with and without a gaze fixation stimulus in safe and verbal-threat conditions in subjects characterized by their trait anxiety; participants with trait anxiety showed significant gaze fixation instability under threat conditions. Some stress detection studies have employed different gaze features like gaze direction, congruence, and size of the gaze cue for the assessment of stress~\citep{fox2007anxiety,staab2014influence}. A study investigating the role of neutral, angry, happy, and fearful facial expressions in enhancing orienting to the direction of eye gaze is presented in~\citep{fox2007anxiety}. Photographs of faces with either direct or averted gaze are used as a stimulus, and a target letter appeared unpredictably to the left or right side of the face, approximately 300 ms or 700 ms after the eye gaze direction changed. The results show that the response time of the participants was shorter when the eyes in the image gazed toward the subject as compared to the conditions when the eyes gazed away. An enhanced orientation to the eye gaze of faces with fearful expressions is reported in participants with high trait anxiety scores as compared to participants with low trait anxiety scores.
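As an illustration of how an eye-blink-rate feature of the kind used in these studies can be computed, the following sketch counts blinks from a per-frame eye aspect ratio (EAR) series, such as one derived from facial landmarks; the 0.2 threshold and minimum closure length are commonly used but assumed values, not parameters taken from the cited works.
\begin{verbatim}
# A minimal sketch (assumed threshold): blink rate from an eye-aspect-ratio
# series. A blink is a run of at least min_frames frames with EAR < thresh.
import numpy as np

def blink_rate(ear, fps=30.0, thresh=0.2, min_frames=2):
    """Return blinks per minute from per-frame EAR values."""
    closed = np.asarray(ear) < thresh
    blinks, run = 0, 0
    for c in closed:
        run = run + 1 if c else 0
        if run == min_frames:       # count each closure run exactly once
            blinks += 1
    return blinks * 60.0 * fps / len(ear)
\end{verbatim}
Comparing this rate between baseline and stressor segments gives the kind of blink-frequency feature reported above.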
A review analyzing the effect of anxiety on ocular motor control and the gaze of the subject is presented in~\citep{staab2014influence}. Another human stress measurement scheme using eye-tracking features is proposed in~\citep{mokhayeri2011mental}. SWT is used to induce stress in the participants of the experiment, a genetic algorithm is employed to detect the human eye, and noise is removed using a fuzzy filter. A Fuzzy-SVM classifier is used to classify human stress into two classes with an accuracy of 70\%. \Tab{tab12} presents a summary of human stress classification schemes using eye activity measures. \begin{table} \caption{Summary of Human Stress Detection Studies using Eye Activity Measure.} \label{tab:tab12} \scalebox{0.9}{ \begin{tabular}{cccccccc} \hline\noalign{\smallskip} \thead{Method} & \thead{Type of\\Stress} & \thead{Number of\\Subjects (M/F)} & Age & \thead{Stimulus} & \thead{Features\\Domain} & Classifier & \thead{Accuracy (Classes)}\\ \noalign{\smallskip}\hline\noalign{\smallskip}
~\citep{giannakakis2017stress} & Acute & 23 (16/7) & 45.1$\pm$10.6 & \thead{Stroop color\\word test,\\IAPS, videos} & Time & \thead{Adaboost} & 91.68\% (2) \\
~\citep{mokhayeri2011mental} & Acute & 60 & 20-28 & \thead{Stroop color\\ word test} & \thead{Time} & Fuzzy-SVM & 70.00\% (2) \\
\noalign{\smallskip}\hline \end{tabular} } \begin{tablenotes} \item[*] SVM: Support Vector Machine, IAPS: International Affective Picture System \end{tablenotes} \end{table} \item \textit{Body Postures:} Body language is a non-verbal type of communication in which physical behavior, instead of words, is used to convey information. Visual cue-based behavioral features for human stress measurement are presented in~\citep{aigrain2015person}. The behavioral body language features used in the study are visual cues extracted from data acquired by a Kinect and an HD camera. The stimulus used for eliciting stress in the participants was a mental arithmetic task, and a classification accuracy of 77\% is achieved using a support vector machine classifier. Another human stress measurement scheme based on activity-related behavioral features is proposed in~\citep{giakoumis2012using}. Accelerometer, video camera, ECG, and GSR sensors are used to record the behavioral features of the subject, and the Stroop color-word test is used as the stress-inducing stimulus. The study concluded that the extracted behavioral features correlated with the self-reported response and proved better than the other physiological signals for the measurement of stress. Classification is performed using linear discriminant analysis and a maximum classification accuracy of 96.30\% is achieved. A human stress recognition framework using behavioral features extracted from interaction with technological devices is proposed in~\citep{carneiro2012multimodal}. Eight different behavioral, cognitive, and physical features are examined to analyze the effect of different levels of acute stress, and a statistical test is applied to measure the difference between the stress levels. The study revealed that the mean and maximum intensity of the touch are the features that correlate most strongly with human stress. It is also observed that if the stress level of an individual is high, there is less movement in the upper part of the body.
Emotional states, including anxiety, are examined using facial cues and gestures from the upper part of the body in~\citep{gunes2007bi}. Stress monitoring through the movement of head and mouth muscles has been performed in~\citep{liao2005real,bevilacqua2018automated}. A study analyzing facial cues to estimate the difference between the boredom and stress of a computer game player is presented in~\citep{bevilacqua2018automated}. Seven different facial features are extracted from the players, and it is concluded that 5 out of these 7 features show a significant difference between the boredom and stress states. Head movements under stressful conditions have been reported to be more frequent~\citep{liao2005real}, quicker~\citep{dinges2005optical,giannakakis2018evaluation}, and larger overall~\citep{hadar1983head,giannakakis2018head}. In a study conducted in~\citep{dinges2005optical}, the authors proposed a scheme using optical computer recognition (OCR) to detect facial changes during a performance in response to low and high stress. Workload and social feedback are used as stress-inducing stimuli in the experiment. The study concluded that the OCR algorithm, when applied using mouth and eyebrow region features, was able to identify around 75-88\% of the stressed and non-stressed individuals. A study to find the association of head pose with different kinds of stressors is proposed in~\citep{giannakakis2018evaluation}. Four different stressors, which included social exposure, emotional recall, stressful images or mental tasks, and stressful videos, are used to induce stress in the participants. The subject is video recorded while facing each stressor, and head movement and pose features are extracted from the recorded videos. The study reports that quicker head movement is observed in participants facing stressful situations, and the pitch feature shows a significant difference between the stress and neutral states. The highest classification accuracy of 98.6\% for neutral vs stress classification is achieved using a kNN classifier with K=3. A study analyzing head movement in the context of speech during neutral and stress conditions is presented in~\citep{giannakakis2018head}. The stimulus presented to the participants included neutral tasks, an interview, text reading, anxious and stressful event recall, and stressful images and videos. Translational and rotational head movements are used as features to distinguish the stress and neutral states, and the study reveals that facing a stressful situation makes the head movement pattern swift. Emotional situations have been identified using head shakes and nods in~\citep{adams2015decoupling}. Another study on the relationship of body postures with human stress level is presented in~\citep{arnrich2009does}. The study aimed to find whether stress-related information can be obtained from posture data in an office work scenario. MIST is used as a stimulus, features are extracted from the pressure distribution on the chair, and a self-organizing map classifier is used to classify the stress response, achieving a classification accuracy of 73.75\%. \Tab{tab13} presents a summary of human stress classification schemes using body postures.
\begin{table} \caption{Summary of Human Stress Detection Studies using Body Postures.} \label{tab:tab13} \scalebox{0.9}{ \begin{tabular}{cccccccc} \hline\noalign{\smallskip} \thead{Method} & \thead{Type of\\Stress} & \thead{Number of\\Subjects (M/F)} & Age & \thead{Stimulus} & \thead{Features\\Domain} & Classifier & \thead{Accuracy (Classes)}\\ \noalign{\smallskip}\hline\noalign{\smallskip}
~\citep{aigrain2015person} & Acute & 14 (11/3) & 24.8$\pm$2.8 & \thead{Mental\\arithmetic task} & \thead{Time and \\Frequency} & SVM & 77\% (2) \\
~\citep{carneiro2012multimodal} & Acute & 19 & -- & \thead{Computer \\game} & Time & DT & 78\% (2) \\
~\citep{giannakakis2018evaluation} & Acute & 24 (17/7) & 47.3$\pm$9.3 & \thead{Mental arithmetic\\task, stressful\\ images} & Time & GLR & 97.90\% (2) \\
~\citep{dinges2005optical} & Acute & 60 (29/31) & 30 & \thead{Workload and \\social feedback} & Time & OCR & 75-88\% (2) \\
~\citep{arnrich2009does} & Acute & 33 (33/0) & 24.06 & \thead{Montreal Imaging \\Stress Task (MIST)} & Frequency & SOM & 73.75\% (2) \\
\noalign{\smallskip}\hline \end{tabular} } \begin{tablenotes} \item[*] SVM: Support Vector Machine, DT: Decision Tree, GLR: Generalized Likelihood Ratio, OCR: Optical Computer Recognition, SOM: Self-Organizing Map \end{tablenotes} \end{table} \end{enumerate} \subsubsection{Behavioral Measures} Behavioral measures correspond to the type of behavior a person adopts when interacting with a device or system. Behavioral measures have been used in the literature for the detection of human stress and can be sub-divided into the following types: interaction with a computer mouse, interaction with a computer keyboard, and interaction with smartphones. Literature corresponding to each of these types is presented as follows. \begin{enumerate} \item \textit{Interaction with Computer Mouse:} Different mouse interaction-based stress measurement studies have been developed in the literature. An approach to measuring human stress by embedding sensors in the computer mouse to measure physiological signals while the user is using the mouse is presented in~\citep{liao2005real}. A camera and pressure, temperature, and GSR sensors are integrated within the mouse to measure the physiological signals and correlate them with the stress of an individual. A capacitive mouse that measures the amount of interaction and the pressure exerted on the computer mouse is developed in~\citep{hernandez2014under}; the authors concluded that stressed individuals have a significantly higher contact time with the mouse than non-stressed individuals. A model of how the user moves the mouse under stress is developed in~\citep{sun2014moustress}, where the authors propose an arm-hand dynamics model to measure the muscle stiffness of the subject while moving the mouse. Mouse speed, click rate, and mouse inactivity have been correlated with stress in~\citep{lim2014detecting}. Different features extracted from the movement of the mouse are associated with stress during examinations in~\citep{carneiro2015using}. \Tab{tab14} presents a summary of human stress classification schemes using computer mouse interaction.
\begin{table} \caption{Summary of Human Stress Detection Studies using Computer Mouse Interaction.} \label{tab:tab14} \scalebox{0.9}{ \begin{tabular}{cccccccc} \hline\noalign{\smallskip} \thead{Method} & \thead{Type of\\Stress} & \thead{Number of\\Subjects (M/F)} & Age & \thead{Stimulus} & \thead{Features\\Domain} & Classifier & \thead{Accuracy (Classes)}\\ \noalign{\smallskip}\hline\noalign{\smallskip}
~\citep{sun2014moustress} & Acute & 49 (23/26) & 20 & \thead{Computer\\task} & Frequency & SVM & 70.00\% (2) \\
~\citep{carneiro2015using} & Acute & 53 & -- & \thead{Online\\Exam} & Time & NB, DT & 86.40\% (2) \\
\noalign{\smallskip}\hline \end{tabular} } \begin{tablenotes} \item[*] SVM: Support Vector Machine, DT: Decision Tree, NB: Naive Bayes \end{tablenotes} \end{table} \item \textit{Interaction with Computer Keyboard:} Interaction with a computer keyboard has also been used as a measure of human stress in the literature. A study presented in~\citep{rodrigues2013keystrokes} developed a keyboard-dynamics-based approach to measure stress in university students by recording their key latency and typing speed on the keyboard. Another study considered the average key latency, average typing speed, and error rate as features to measure human stress~\citep{lim2014detecting}. Case-based reasoning systems, along with multiple features extracted from keyboard interaction, have been used for human stress classification~\citep{andren2005case}. A pressure-sensitive keyboard has been developed in~\citep{hernandez2014under} that gives a pressure value between 0 and 255 for each keystroke; using the pressure values obtained from the keyboard, the stress of the user is measured. Keystroke dynamics is an important measure of human stress because stress affects different muscles of the human body, such as the arm, hand, and shoulder muscles. The author in~\citep{gunawardhane2013non} developed a stress measurement scheme using three keystroke dynamics features: the durations between key presses of specific digraphs, trigraphs, and the error rate of backspace and delete. A statistical test applied to the data showed that there exists a significant difference in the keystroke dynamics of stressed and non-stressed individuals. \item \textit{Interaction with Smartphones:} Smartphones have been completely revolutionized in the last decade, evolving into mini-computers with far more functionality and power than traditional mobile phones. Due to this technological revolution, smartphones allow physicians to obtain real-time patient data in almost no time. Moreover, smartphone applications have been built that enable users to monitor their health and get advice or alerts in particular situations~\citep{kailas2010mobile}. Mobile phones are being embedded with sensors like heart rate and body temperature to analyze the health state in a very cost-effective manner, and mobile applications make use of these built-in sensors for health monitoring. Even though these applications are often built without any kind of scientific validation, their low or even zero cost lets them reach hundreds of thousands of users in no time. Azumio's Stress Check and StressViewer~\citep{carneiro2017new} apps utilize the light and the camera of the smartphone to monitor the heart rate of the user.
Other apps are also available that not only measure stress but also provide breathing and other exercises for relieving it. DeStressify~\citep{lee2018evaluation} is an app developed to relieve stress based on music. An EDA sensor is used by the stress-relieving apps PIP Relax and Race~\citep{dillon2016smartphone}; in these two apps, the user has to participate in a race and the participant who is more relaxed wins. DroidJacket~\citep{colunas2011droid} is an app integrated with VitalJacket, a shirt with an integrated ECG sensor, to continuously monitor the health of a person. A specific sensor platform named the Personal Biomonitoring System is used along with smartphones to measure stress~\citep{gaggioli2012system}. Another smartphone-based approach to measuring stress, using features of the user's speech signal, is discussed in~\citep{lu2012stresssense}; a stress classification accuracy of 82.9\% is achieved for indoor scenarios and 77.9\% for outdoor scenarios. Stress-related changes in human behavior using the GPS, WiFi, Bluetooth, phone call, and SMS log features of a smartphone are explored in~\citep{bauer2012can}. Measuring human stress at the workplace is very important for the mental and physical health of employees. A scheme to measure the stress of workers during the workday and during sleep at night is presented in~\citep{muaremi2013towards}. Features extracted from the audio, physical activity, and communication data recorded during the daytime, together with the heart rate variability data recorded during night sleep, are used to build logistic regression models. A leave-one-out cross-validation scheme is used for classification, and an accuracy of 61\% is achieved for three-level stress classification. A stress recognition framework named AMMON (\textbf{A}ffective and \textbf{M}ental health \textbf{MON}itor), a speech analysis library for analyzing affect and stress directly on mobile phones, is proposed in~\citep{chang2011s}; a classification accuracy of 84\% is achieved for two-class stress classification. A stress recognition framework using the accelerometer sensor of the smartphone is proposed in~\citep{garcia2015automatic}. The Oldenburg Burnout Inventory (OLBI) questionnaire is used to acquire the subjective response from the users, and features extracted in the time and frequency domains are fed to Naive Bayes and decision tree classifiers; a classification accuracy of 71\% is achieved for user-specific models. A driver stress monitoring system using inertial sensors is proposed in~\citep{lee2016wearable}. For comparison of the results, GSR, a self-reporting questionnaire, and facial expressions are employed. Forty-six features are extracted from the data and subjected to feature selection, resulting in 22 features; an SVM classifier is used to discriminate the low-stressed from the high-stressed participants with an achieved accuracy of 94\%. A student stress measurement mechanism using smartphone sensors is proposed in~\citep{gjoreski2015automatic}. The authors used accelerometer, GPS, WiFi, call time and duration, and light sensor data together with a self-reported questionnaire for the classification of stress. Forty-seven features are extracted from the acquired data and a classification accuracy of 60\% is achieved for three classes.
Classification in this study is performed in such a manner that the data of each student is divided into two parts, i.e., some features of each student are used for training and some for testing. An automatic stress measurement system for graduate students using smartphone data is proposed in~\citep{bogomolov2014pervasive,bogomolov2014daily}. Smartphone data, which included mobile phone activity from call and SMS logs and Bluetooth proximity hits, weather conditions, and personality traits, are recorded from 227 students over a duration of one year. The weather conditions are divided into mean temperature, pressure, total precipitation, humidity, visibility, and wind speed, whereas the personality traits are obtained using the Big Five personality traits questionnaire and are labeled into extraversion, neuroticism, agreeableness, conscientiousness, and openness to experience. Subjective stress labels for the recorded data are obtained using a self-reported questionnaire. A wide range of classification algorithms is applied, but the random forest algorithm proved the best, with an achieved classification accuracy of 72\% and a feature reduction from 500 features to 32. Another human stress measurement mechanism using mobile phone activity, both in the laboratory environment and outside the lab, is proposed in~\citep{ciman2016individuals}. For the controlled lab part of the experiment, an Android application containing search and write tasks is developed. The user activities monitored during these tasks include tap, scroll, swipe, and text input gestures. Stressors are used to induce stress in the participants, and the Experience Sampling Method is used to obtain the subjective stress scores. kNN, SVM, decision tree, and neural network classifiers are used, with an achieved accuracy of 80\%. For the out-of-lab environment, the activities of the subjects, including the type of applications used, physical activity, and screen light values, are recorded during daily smartphone usage; the achieved classification accuracy for this part of the experiment is 70\%. Another smart-sensor-based and context-aware stress measurement scheme for daily life stress monitoring is proposed in~\citep{gjoreski2017monitoring}. Real-life data of 55 days is recorded and a precision and recall of 95\% and 70\% are achieved, respectively. Smartphone data has also been used in the stress recognition framework proposed in~\citep{gimpel2015mystress}, where the authors developed an Android application and used 36 software and hardware sensors. The authors did not report any classification accuracy but found that high smartphone usage, average battery temperature, the maximum number of running applications, and the frequency of switching the display on are the features most strongly correlated with stress. Another daily life stress monitoring system using smartphone sensors is proposed in~\citep{sysoev2015noninvasive}. Audio, gyroscope, accelerometer, and ambient light sensor data, screen mode changing frequency, self-assessment, and activity type are used as features, and the NASA-TLX is used as the subjective stress questionnaire; an activity recognizer is used along with the stress recognition system to achieve a classification accuracy of 77\%. Another stress recognition framework, called StayActive, is developed for Android phones in~\citep{kostopoulos2017stress}.
Social interaction, physical activity, and sleeping patterns are used for the measurement of stress. A fusion of offline mathematical modeling and online machine learning models is used to identify stress and deliver some relaxation therapy when stress is identified. The Circumplex Model of Affect, with some modifications for stress measurement, is used as a questionnaire to obtain the subjective score. The number of sleeping hours is used as a feature from the sleep patterns, while the number of touches on the screen and the numbers of calls and SMSs are used as features from social interaction. The authors did not report any accuracy but intend to use physiological signals along with the smartphone data in the future to further improve the results of the proposed stress measurement scheme. Another, unsupervised, stress classification scheme using smartphone data is proposed in~\citep{vildjiounaite2018unobtrusive}; a hidden Markov model is used for stress classification and an accuracy of 68\% is achieved. \Tab{tab15} presents a summary of human stress classification schemes using smartphone interaction. \begin{table} \caption{Summary of Human Stress Detection Studies using Smartphone Interaction.} \label{tab:tab15} \scalebox{0.9}{ \begin{tabular}{cccccccc} \hline\noalign{\smallskip} \thead{Method} & \thead{Type of\\Stress} & \thead{Number of\\Subjects (M/F)} & Age & \thead{Stimulus} & \thead{Features\\Domain} & Classifier & \thead{Accuracy (Classes)}\\ \noalign{\smallskip}\hline\noalign{\smallskip}
~\citep{lu2012stresssense} & Acute & 14 (4/10) & 22.86 & \thead{Job\\interviews,\\Marketing\\jobs} & \thead{Time and\\Frequency} & GMM & 81.00\% (2) \\
~\citep{muaremi2013towards} & Acute & 35 (24/11) & 25-62 & \thead{Day long\\recording} & \thead{Time and\\Frequency} & LR & 61.00\% (3)\\
~\citep{garcia2015automatic} & Chronic & 30 (18/12) & 37.46$\pm$7.26 & \thead{Day long\\recording} & \thead{Time and\\Frequency} & \thead{DT, NB,\\ONB} & 71.00\% (2)\\
~\citep{lee2016wearable} & Acute & 8 (6/2) & 30$\pm$5 & \thead{Car\\Driving} & \thead{Time and\\Frequency} & SVM & 94.78\% (2) \\
~\citep{gjoreski2015automatic} & Chronic & 48 & -- & \thead{Day long\\recording} & \thead{Time} & \thead{SVM, DT\\RF} & 60.00\% (3)\\
~\citep{ciman2016individuals} & Acute & 13 (7/6) & 22-32 & \thead{Mental\\arithmetic task} & Time & \thead{kNN, SVM,\\DT, NN} & 80.00\% (5) \\
~\citep{gjoreski2017monitoring} & Chronic & 21 & 28$\pm$4 & \thead{Day long\\recording} & \thead{Time and\\Frequency} & \thead{DT, NB,\\kNN, SVM,\\RF, ES} & 73.00\% (3) \\
~\citep{sysoev2015noninvasive} & Chronic & -- & -- & \thead{Real life\\activities} & Time & RF, SL & 77.00\% (2) \\
~\citep{vildjiounaite2018unobtrusive} & Chronic & 30 & -- & \thead{Real life\\activities} & \thead{Time and\\Frequency} & HMM & 68.00\% (2)\\
\noalign{\smallskip}\hline \end{tabular} } \begin{tablenotes} \item[*] GMM: Gaussian Mixture Model, LR: Logistic Regression, DT: Decision Tree, NB: Naive Bayes, ONB: Ordinal Naive Bayes, SVM: Support Vector Machine, RF: Random Forest, kNN: k-Nearest Neighbors, NN: Neural Network, ES: Ensemble Selection, SL: Simple Logic, HMM: Hidden Markov Model \end{tablenotes} \end{table} \end{enumerate} \subsubsection{Vision based Measures} Vision-based measures have also been used for human stress detection; they rely on some kind of imaging modality to measure the response of the user. Vision-based techniques can be sub-divided into thermal infrared (IR) and computer vision-based techniques. Literature for each of these sub-divisions is given below.
\begin{enumerate} \item \textit{Thermal Infrared Imaging:} Thermal IR imaging is a non-invasive and contactless technique used to measure the temperature of the human skin. In this technique, a thermal infrared camera is used to record the temperature distribution of the skin. The benefit of this technique is that it is not affected by skin color or lighting conditions. When a person is feeling stressed, the blood flow in the vessels increases, and hence the temperature in the adjacent regions rises. Human affective states like fear~\citep{levine2001face}, arousal~\citep{nozawa2009correlation}, and stress~\citep{ioannou2014thermal} have been recognized using thermal imaging~\citep{nhan2009classifying}. In most affect recognition studies, facial skin temperature is measured using thermal IR imaging to extract useful information~\citep{nhan2009classifying,shastri2008imaging}. The skin temperature of the nose, chin, and corrugator is affected by stressors~\citep{hong2016real}. Although no conclusive remarks can be made about the effect of stress on a specific region of the face, in the studies conducted in~\citep{puri2005stresscam,chen2014detection} the forehead temperature rises under stressful conditions. Periorbital areas also show signs of increasing temperature under anxious states~\citep{pavlidis2002thermal,pavlidis2000thermal}, and skin temperature is reported to increase in the supraorbital and periorbital areas under stress~\citep{shastri2008imaging}. Studies conducted in~\citep{engert2014exploring,vinkers2013effect,kang2006determining} showed an increase in nose temperature when an unknown task is presented to the participants, whereas a decrease in the temperature of the perinasal area is found under stressful conditions~\citep{pavlidis2012fast,shastri2012perinasal}. Another thermal camera-based stress recognition framework is proposed in the studies conducted in~\citep{cho2017deepbreath,cho2017thermsense}. A thermal camera is used to detect breathing, and a respiratory spectrogram is used to extract features. SWT and the mental arithmetic task are used as stimuli to induce stress in the lab environment, and a Convolutional Neural Network (CNN) classifier is used for two- and three-level stress classification; a two-level stress classification accuracy of 84.59\% and a three-level accuracy of 56.52\% are achieved. \Tab{tab16} presents a summary of human stress classification schemes using thermal imaging.
\begin{table} \caption{Summary of Human Stress Detection Studies using Thermal Imaging.} \label{tab:tab16} \scalebox{0.9}{ \begin{tabular}{cccccccc} \hline\noalign{\smallskip} \thead{Method} & \thead{Type of\\Stress} & \thead{Number of\\Subjects (M/F)} & Age & \thead{Stimulus} & \thead{Features\\Domain} & Classifier & \thead{Accuracy (Classes)}\\ \noalign{\smallskip}\hline\noalign{\smallskip}
~\citep{nhan2009classifying} & Acute & 12 (3/9) & 24$\pm$2.9 & \thead{Visual \\stimulus} & \thead{Time and \\Frequency} & LDA & 70-80\% (6) \\
~\citep{hong2016real} & Acute & 41 & 20-65 & \thead{Trier Social\\ Stress Test} & Frequency & DEFP & 90.00\% (2) \\
~\citep{chen2014detection} & Acute & 21 (19/2) & 25 & \thead{Trier Social\\ Stress Test} & Frequency & \thead{Binary\\ classifier} & 88.10\% (2) \\
~\citep{cho2017deepbreath} & Acute & 8 (5/3) & 18-53 & \thead{Stroop color\\ word test\\Mental \\Arithmetic Task} & Time & CNN & \thead{84.59\% (2)\\56.52\% (3)} \\
\noalign{\smallskip}\hline \end{tabular} } \begin{tablenotes} \item[*] LDA: Linear Discriminant Analysis, DEFP: Differential Energy between Philtrum and Forehead, CNN: Convolutional Neural Network \end{tablenotes} \end{table} \item \textit{Computer Vision:} In computer vision-based human stress assessment, many different organs and locations of the human body have been used, but the face is the most commonly used location for monitoring stress. The authors of~\citep{dinges2005optical} proposed a computer recognition algorithm for use by astronauts to identify the facial changes occurring in response to low and high stressors. Another study identifying driver stress using facial expressions is presented in~\citep{gao2014detecting}. Thermal imaging can be used to measure blood perfusion, which is correlated with human stress~\citep{derakhshan2014preliminary}. Another study analyzed recorded video using both temporal thermal spectrum and visible spectrum video features to measure stress~\citep{sharma2014thermal}. Another human stress measurement scheme using Kinect sensors is proposed in~\citep{aigrain2015person}. The participants of the experiment are asked to answer time-constrained arithmetic questions in front of a video camera; facial features along with body postures are extracted and fed to an SVM classifier, and an accuracy of 77\% is achieved for stressed vs non-stressed classification. Changes in facial blood flow captured using thermal and visible cameras are used for stress detection in the study conducted in~\citep{mohd2015mental}. It was reported that facial thermal features are difficult to attain due to low contrast, so a nostril mask was applied to focus on the nostril area. Noise smoothing is performed using graph cut algorithms and feature extraction using the Scale Invariant Feature Transform, resulting in a classification accuracy of 88.6\% for two classes. The facial hyperspectral imaging (HSI) technique and tissue oxygen saturation (StO2) data are used to identify stress in a study conducted in~\citep{chen2014detection}, where the results obtained from thermal imaging and HSI are compared. TSST is applied to induce stress in the participants of the experiment. It is reported that the StO2 in the eye and forehead of the subject provides discriminative features for the identification of stress; an accuracy of 76\% with automatic thresholding and 88\% with manual thresholding is achieved for two-level stress classification. \Tab{tab17} presents a summary of human stress detection studies using computer vision-based techniques.
\begin{table} \caption{Summary of Human Stress Detection Studies using Computer Vision Based Techniques.} \label{tab:tab17} \scalebox{0.9}{ \begin{tabular}{cccccccc} \hline\noalign{\smallskip} \thead{Method} & \thead{Type of\\Stress} & \thead{Number of\\Subjects (M/F)} & Age & \thead{Stimulus} & \thead{Features\\Domain} & Classifier & \thead{Accuracy (Classes)}\\ \noalign{\smallskip}\hline\noalign{\smallskip} ~\citep{gao2014detecting} & Acute & 21 & -- & \thead{Car \\driving} & Time & SVM & 90.50\% (2) \\ ~\citep{derakhshan2014preliminary} & Acute & 12 & -- & \thead{peak of \\tension (POT) test} & Time & SVM & 96.00\% (2) \\ ~\citep{sharma2014thermal} & Acute & 35 (22/13) & -- & \thead{Video \\clips} & \thead{Time and \\Frequency} & SVM & 86.00\% (2) \\ ~\citep{aigrain2015person} & Acute & 14 (11/3) & 24.8$\pm$2.8 & \thead{Public speaking\\ mental\\arithmetic task} & \thead{Time and \\Frequency} & SVM & 77.00\% (2) \\ \noalign{\smallskip}\hline \end{tabular} } \begin{tablenotes} \item[*] SVM: Support Vector Machine \end{tablenotes} \end{table} \end{enumerate}

\section{Multimodal Stress Detection} \label{sec:msa} Multimodal human stress detection has been the focus of a wide range of studies in the literature. The primary aim of a multimodal stress detection framework is to increase system accuracy compared to single-modality stress measurement systems. Multimodal stress detection schemes available in the literature can be sub-divided into (i) fusion of data recorded from different physiological modalities, (ii) fusion of data obtained from motion and physiological sensors, (iii) fusion of data obtained from imaging modalities and physiological sensors, and (iv) fusion of data obtained from smartphones and physical, behavioral, and physiological sensors. In this section, we discuss the available literature for stress measurement using all kinds of multimodal fusion approaches. A multimodal human stress measurement is proposed in~\citep{al2016mental}, where EEG and functional near-infrared spectroscopy (fNIRS) are used to classify acute stress. Classification accuracy of 96.6\% is achieved with EEG and fNIRS data. Another human stress measurement system using GSR and skin temperature is presented in~\citep{kyriakou2019detecting}, with an achieved accuracy of 84\%. Another real-time stress detection scheme using heart rate, skin conductance, and accelerometer sensors is proposed in~\citep{can2019continuous}. Using a fusion of features from these sensors, an accuracy of 92.15\% is achieved for three-level stress classification. A multimodal stress classification framework for drivers, using the physiological signal of PPG and the inertial sensors of accelerometer, gyroscope, and magnetometer, is proposed in~\citep{lee2016stress}. Driving was performed in a simulator environment, and a driving behavior survey questionnaire was used to obtain the subjective response from the subjects. Time and frequency domain features are extracted from the acquired data, and stress classification is performed using an SVM classifier with a radial basis function (RBF) kernel, achieving an accuracy of 95\% for two classes. A stress recognition framework for driver stress monitoring using the physiological signals of GSR, ECG, and respiration is proposed in~\citep{chen2017detecting}.
Features are extracted in the time, frequency, and wavelet domains; PCA and Sparse Bayesian Learning (SBL) algorithms are applied for feature selection and SVM for classification, resulting in an accuracy of 99\% for three classes. Another multimodal driver stress recognition framework, using DRIVE DB from the PHYSIONET database and the physiological signals of GSR, EMG, and ECG, is proposed in~\citep{ghaderi2015machine}. Three-level stress classification is performed using kNN and SVM classifiers, and a classification accuracy of 98\% is achieved. Another multimodal stress classification scheme using the physiological signals of BVP, HR, ST, GSR, and RR is proposed in~\citep{gjoreski2016continuous}. A mental arithmetic task is used for the induction of stress, and 63 features are extracted from the recorded data. For the in-lab experiment, two-level (non-stressed, stressed) and three-level (no stress, low stress, and high stress) stress classification is performed, with accuracies of 83\% and 72\%, respectively. For the out-of-lab environment, the experimental activities of walking, sitting, running, and cycling are recorded. These activities are numbered according to their intensities, i.e., 1 for lying and 5 for running, and the average intensity of an interval is calculated to determine the stress-inducing interval. Recording the activities along with their intensities provides context to the situation, creating a distinction between strenuous physical activity and a stressful situation. The day-long activity is subdivided into one-hour episodes, and accuracies of 76\% and 92\% are achieved for the no-context and with-context scenarios, respectively. Another multimodal stress recognition framework using ECG signals along with activity and contextual data is proposed in a study conducted in~\citep{maier2014mobile}. The application designed in this study measures stress, and when the stress exceeds a certain threshold the user is prompted to leave the stressful situation or is given some relaxation therapy to reduce the level of stress. Accelerometer and GPS data are added to the HRV data obtained from the ECG to improve the accuracy of the system; however, the authors did not report any classification accuracy for the proposed scheme. Another stress measurement scheme is proposed in~\citep{sano2013stress}, using accelerometer and EDA data obtained from wrist sensors, call and SMS data, location and screen on/off features obtained from mobile phones, and subjective stress scores obtained using questionnaires. Sequential Forward Floating Selection (SFFS) is used to select the optimum set of features, which comprised screen on/off, mobility, call, acceleration, and EDA features, and an accuracy of 75\% is achieved for two-class stress classification using SVM and kNN classifiers. Moreover, high-stress behavior is found to be associated with acceleration during sleep in the second half of the day, a small number of SMS messages, and screen time. Another multimodal stress classification scheme using respiratory, ECG, and accelerometer sensors is proposed in~\citep{hovsepian2015cstress}. The experimental setup included baseline recording, a public speaking task, a mental arithmetic task, and a cold pressor test. This data is used to train the model in laboratory settings, and data from 23 participants is recorded in the out-of-lab environment for testing purposes.
SVM is used as a classifier; a recall of 89\% is achieved on the test data from the lab environment, and a classification accuracy of 72\% is achieved on testing data from the out-of-lab environment. Another multimodal stress measurement study using accelerometer, EDA, and Bluetooth sensors is presented in~\citep{zubair2015smart}. The experiment is performed in a controlled environment, and logistic regression is used as a classifier to achieve an accuracy of 91\% for two-class stress classification. A real-time stress recognition methodology using HR and EDA signals is proposed in~\citep{de2010two}. A public speaking task is used as a stimulus, and a kNN classifier is used to detect stressed and relaxed states with an accuracy of 95\%. PPG and EDA signals are used to identify stress in subjects in a study conducted in~\citep{sandulescu2015stress}. TSST is used to induce stress, and an SVM classifier is used to classify stress into two levels with an accuracy of 80\%. Another study using a combination of EDA and HRV signals for the measurement of stress is proposed in~\citep{martinez2017real}. Puzzles are used as a stress-inducing stimulus, and F-measure values of 0.984, 0.970, and 0.943 are achieved for the high, medium, and low-stress groups, respectively. Another stress detection scheme using physiological and sociometric sensors is proposed in~\citep{mozos2017stress}. The physiological sensors include EDA and PPG, whereas the sociometric sensors include microphone and accelerometer sensors. Public speaking is used as a stressor to induce stress in the participants, and kNN, SVM, and AdaBoost are used as classifiers. The AdaBoost classifier produces the highest accuracy of 94\% for discriminating stressed and neutral conditions. In another study, presented in~\citep{kurniawan2013stress}, the authors made use of GSR and speech signals for the measurement of stress. SWT, TMST (Trier Mental Stress Test), and TSST are used to induce stress in the participants. Time and frequency domain features are extracted from the speech signals. K-means, GMM, SVM, and decision tree classifiers are used for classification, and SVM produced the best results for each type of stressor. Speech signals resulted in better classification accuracy than the EDA data, and the fusion of speech data with the GSR signal did not increase the classification accuracy for stress classification. Another multimodal stress classification scheme using cardiac features along with EMG, GSR, and respiratory sensors and a Kinect-based video camera is proposed in~\citep{aigrain2016multimodal}. Moreover, in addition to the self-reported questionnaire, feedback from a psychology expert is also added to the proposed system. SVM is used as a classifier, and a classification accuracy of 85\% for two classes is achieved. Pupil dilation and periorbital temperature data are used for stress classification in a study proposed in~\citep{baltaci2014role}. IAPS is used as a stress-inducing stimulus for the experiment, and a decision tree classifier results in a classification accuracy of 90\% for two classes. The study proposed in~\citep{baltaci2014role} is improved by the authors in~\citep{baltaci2016stress} through the addition of an entropy feature for the physiological signals. The later study achieved better classification accuracy than the earlier work by using an AdaBoost classifier with Random Forest instead of a decision tree. Human gaze and mouse click behavior are used for stress classification in~\citep{huang2016stressclick}.
The stress stimulus used in the experiment is a mental arithmetic task. A random forest classifier achieved the highest accuracy of 60\% for two classes for the generalized stress model. Another multimodal stress recognition framework for computer users, using the physiological signals of EEG, ECG, EMG, and EOG, is proposed in~\citep{akhonda2014stress}. EEG data and the subjective questionnaire score are obtained only once, at the start of the experimental procedure, whereas ECG, EOG, and EMG data are acquired continuously. A neural network is used as a classifier, and three-class stress recognition is performed with an achieved accuracy of 80\%. A stress detection system using the physiological signals of ECG, respiration rate, skin temperature, and GSR is presented in~\citep{shi2010personalized}, with a recall of 80\%. Another human stress detection system using wearable EEG and ECG sensors is presented in~\citep{ahn2019novel}. Stroop color-word and mental arithmetic tests are used as stimuli, and an accuracy of 87.5\% is achieved for stress classification. The fusion of ECG, EMG, GSR, and respiration signals is used to measure driver stress in~\citep{healey2005detecting}, where a classification accuracy of 97\% is achieved for three levels of stress. Another study, analyzing the impact of human stress on sleep patterns using wearable sensors of ECG, GSR, body temperature, and respiration, is presented in~\citep{muaremi2014monitoring}. SVM, kNN, NN, RF (Random Forest), and logistic regression classifiers are used to classify stress, and SVM produced the best classification accuracy of 73\% for three classes of stress. A cluster-based technique for the detection of perceived stress using EEG, ECG, GSR, and EMG signals is presented in~\citep{xu2014cluster}. Real-time human stress classification using ECG and thoracic electrical bioimpedance (TEB) signals is presented in~\citep{mohino2015assessment}, with an error rate of 21\%. Another stress detection study, focusing on working people and using ECG and GSR signals, is presented in~\citep{sriramprakash2017stress}. The database used in the study was SWELL-KW, and a classification accuracy of 72.82\% is achieved for stressed vs. non-stressed classes. Another stress recognition system using respiratory rate along with GSR, EEG, and blood volume pressure is presented in~\citep{hosseini2010emotional}, with a classification accuracy of 82.7\%. A stress assessment study for the office environment using HR, HRV, GSR, EMG, and respiratory rate is discussed in~\citep{wijsman2013wearable}, with a reported accuracy of 74.5\%. Driver stress has been monitored using ECG, GSR, and respiration rate in~\citep{rigas2011real}, and the combination of GSR, EMG, respiration, and HR has also been explored for driver stress levels~\citep{singh2013novel}. A study to classify the emotional states of stress and anger using GSR, EMG, BVP, and respiratory rate signals is discussed in~\citep{picard2001toward}. ECG, GSR, EMG, and respiration rate have also been explored for human stress assessment in~\citep{wijsman2011towards}. HRV, respiration, GSR, EMG, and geographical location have been used for the detection of mental stress in~\citep{choi2011minimally}. Stress level estimation using GSR, EMG, HR, and respiration sensors is presented in~\citep{gjoreski2016continuous}. A human perceived stress measurement system using smartphone PPG and thermal imaging is presented in~\citep{cho2019instant}, with a reported average classification accuracy of 78.3\%.
Another study, correlating the results of a human stress detection system based on ECG, EDA, and EEG sensors with changes in the cortisol level of the subject, is discussed in~\citep{betti2017evaluation}. The study reveals that the changes in cortisol levels were strongly in line with the physiological signal response, and a classification accuracy of 86\% is achieved for stress classification. Another stress classification scheme using HRV features with cortisol as a reference is proposed in~\citep{liew2015classifying}; a classification accuracy of 80\% is achieved using the cortisol biomarker. ECG and salivary cortisol are used for the measurement of different levels of psycho-social stress in~\citep{nater2005human}. Adverse childhood experiences have been reported to alter physiological processes and can determine the magnitude of the stress response. HRV, BVP, ECG, and salivary cortisol are used in~\citep{aimie2018stress} to propose a stress response index that could serve as a future biomarker for such individuals. A fusion of keyboard strokes along with the linguistic features of the written text has been used in~\citep{vizer2009automated} to measure the cognitive and physical stress of the user. The fusion of features from pupil diameter and physiological signals is used in~\citep{barreto2007non,zhai2008stress} to measure stress. The stress of a computer user is measured by a fusion of features from physical appearance, physiological signals, and behavioral data in a study conducted in~\citep{liao2005real}. In another study, accelerometer data and recorded videos are analyzed to monitor the stress response~\citep{giakoumis2012using}. Many of these schemes fuse information at the feature level, concatenating per-modality feature vectors before classification; a minimal sketch of this pattern is given below. \Tab{tab18} presents a summary of multimodal human stress classification schemes available in the literature.
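The following is a hedged sketch of feature-level (early) fusion on synthetic data; the modality names, feature counts, and the choice of a random forest classifier are illustrative assumptions, not the configuration of any single cited study.

\begin{verbatim}
# Sketch of feature-level fusion: per-modality feature vectors are
# concatenated before classification. Data are synthetic placeholders.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
ecg_feats = rng.normal(size=(200, 10))   # e.g., HRV time/frequency features
eda_feats = rng.normal(size=(200, 6))    # e.g., SCR rate, tonic level
acc_feats = rng.normal(size=(200, 3))    # e.g., mean activity intensity
y = rng.integers(0, 2, size=200)         # stressed vs. non-stressed labels

X = np.hstack([ecg_feats, eda_feats, acc_feats])  # early (feature) fusion
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3,
                                          random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_tr, y_tr)
print("test accuracy:", clf.score(X_te, y_te))
\end{verbatim}

An alternative, also found in the literature, is decision-level fusion, where one classifier is trained per modality and their outputs are combined, e.g., by voting.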
\begin{table} \caption{Summary of Multimodal Human Stress Detection Studies.} \label{tab:tab18} \scalebox{0.7}{ \begin{tabular}{ccccccccc} \hline\noalign{\smallskip}
\thead{Method} & \thead{Type of\\Stress} & Modalities & \thead{Number of\\Subjects (M/F)} & Age & \thead{Stimulus} & \thead{Features\\Domain} & Classifier & \thead{Accuracy (Classes)}\\ \noalign{\smallskip}\hline\noalign{\smallskip}
~\citep{al2016mental} & Acute & EEG, fNIRS & 22 (22/0) & 22-30 & \thead{Montreal \\Imaging \\Stress \\ Task (MIST)} & Frequency & SVM & 95.10\% (2) \\
~\citep{kyriakou2019detecting} & Acute & GSR, ST & 19 (8/11) & 25-45 & \thead{Audio} & Time & -- & 84.00\% (2) \\
~\citep{can2019continuous} & Acute & \thead{HR, GSR,\\Acc} & 21 (18/3) & 20 & \thead{Computer \\programming} & Frequency & MLP & 92.15\% (3) \\
~\citep{lee2016stress} & Acute & \thead{Acc, Gyro,\\Mag} & 28 (18/10) & 35$\pm$16 & \thead{Car\\driving} & \thead{Time and\\Frequency} & SVM & 95.00\% (2) \\
~\citep{chen2017detecting} & Acute & \thead{ECG, GSR,\\RR} & 9 & -- & \thead{Car\\driving} & \thead{Time and\\Frequency} & SVM & 99.00\% (3) \\
~\citep{ghaderi2015machine} & Acute & \thead{RR, GSR,\\HR, EMG} & 17 & -- & \thead{Car\\driving} & \thead{Time and\\Frequency} & \thead{SVM, kNN} & 98.00\% (3) \\
~\citep{gjoreski2016continuous} & Acute & \thead{BVP, HR, ST,\\GSR and RR} & 21 & 28$\pm$4.1 & \thead{Mental\\ arithmetic\\task} & \thead{Time and \\Frequency} & RF & 92.00\% (2) \\
~\citep{sano2013stress} & Acute & SC, Acc & 18 (15/3) & 28$\pm$7.8 & \thead{Daily \\Life Activity} & \thead{Time and \\Frequency} & SVM, kNN & 75.00\% (2) \\
~\citep{hovsepian2015cstress} & Chronic & \thead{RR, ECG,\\Acc} & 23 & -- & \thead{Day long\\ recording} & Time & SVM & 95.30\% (2) \\
~\citep{zubair2015smart} & Acute & \thead{EDA, Acc,\\Bluetooth} & 12 & -- & \thead{Mental \\arithmetic\\task,\\ Emotional\\pictures} & Time & LR & 91.00\% (2)\\
~\citep{de2010two} & Acute & HR, EDA & 80 (0/80) & 19-32 & \thead{Public\\Speaking} & Time & kNN & 95.00\% (2) \\
~\citep{sandulescu2015stress} & Acute & PPG, EDA & 5 & 18-39 & TSST & \thead{Time and\\Frequency} & SVM & 80.00\% (2) \\
~\citep{mozos2017stress} & Acute & \thead{EDA, PPG\\Acc, microphone} & 18 & -- & \thead{Public\\Speaking} & \thead{Time and\\Frequency} & \thead{kNN, SVM,\\AdaBoost} & 94.00\% (2) \\
~\citep{kurniawan2013stress} & Acute & GSR, Speech & 10 & -- & \thead{SWT, TMST,\\TSST} & \thead{Time and\\Frequency} & \thead{k-mean,\\SVM, GMM} & 92.00\% (2) \\
~\citep{aigrain2016multimodal} & Acute & \thead{EMG, GSR\\RR, Kinect} & 21 (6/15) & 26.3$\pm$4.6 & \thead{Mental\\ arithmetic\\task} & \thead{Time and\\Frequency} & SVM & 85.00\% (2) \\
~\citep{baltaci2016stress} & Acute & \thead{pupil\\dilation,\\periorbital\\temperature} & 11 (9/2) & 29-40 & IAPS & Time & ABRF & 65\%-84\% (2) \\
~\citep{huang2016stressclick} & Acute & \thead{Eye gaze,\\mouse click \\behaviour} & 20 (13/7) & 20-33 & \thead{Mental \\arithmetic\\task} & Time & RF & 60.00\% (2) \\
~\citep{akhonda2014stress} & Acute & \thead{EEG, ECG,\\EMG, EOG} & 12 & -- & \thead{Computer\\Work} & \thead{Time and\\Frequency} & NN & 80.00\% (3) \\
~\citep{ahn2019novel} & Acute & EEG, ECG & 7 & 29.3$\pm$2.4 & \thead{Mental\\ arithmetic\\task, SWT} & \thead{Time and\\Frequency} & SVM & 87.50\% (2) \\
~\citep{healey2005detecting} & Acute & \thead{ECG, EMG, \\GSR} & 24 & -- & \thead{Car driving} & \thead{Time and \\Frequency} & \thead{FDA} & 97.00\% (3) \\
~\citep{muaremi2014monitoring} & Acute & \thead{ECG, GSR,\\ST, RR} & 10 (7/3) & 41 & \thead{Sleep\\data} & \thead{Time and \\Frequency} & \thead{SVM, kNN, \\NN, RF, LR} & 73.00\% (3) \\
~\citep{xu2014cluster} & Chronic & \thead{EEG, ECG, \\GSR, EMG} & 44 (44/0) & 28.6$\pm$7.2 & PASAT & \thead{Time and \\Frequency} & k-Mean & 85.20\% (3) \\
~\citep{sriramprakash2017stress} & Chronic & ECG, GSR & 25 & -- & \thead{Office\\work} & \thead{Time and \\Frequency} & SVM, kNN & 72.82\% (2) \\
~\citep{hosseini2010emotional} & Acute & \thead{GSR, EEG,\\BVP} & 15 (15/0) & 20-24 & IAPS & \thead{Time and \\Frequency} & SVM & 84.10\% (2) \\
~\citep{wijsman2013wearable} & Acute & \thead{HR, HRV, \\GSR, EMG, RR} & 30 (25/5) & 19-53 & \thead{calculation, puzzle\\ and memory task} & \thead{Time and \\Frequency} & GEE & 74.50\% (2) \\
~\citep{rigas2011real} & Acute & \thead{ECG, \\GSR, RR} & 13 (10/3) & 22-41 & \thead{Car\\driving} & \thead{Time and \\Frequency} & BN & 96.00\% (2) \\
~\citep{wijsman2011towards} & Acute & \thead{ECG, RR,\\GSR, EMG} & 30 (25/5) & 19-53 & \thead{calculation, puzzle\\ and memory task} & \thead{Time and \\Frequency} & LBN & 80.00\% (2) \\
~\citep{gjoreski2016continuous} & Acute & \thead{GSR, EMG, \\HR, RR} & 26 & -- & \thead{Daily life\\activity} & \thead{Time and \\Frequency} & RF & 92.00\% (2) \\
~\citep{cho2019instant} & Acute & \thead{PPG, Thermal\\imaging} & 17 (8/9) & 29.82 & \thead{Mental\\workload} & \thead{Time and \\Frequency} & NN & 78.33\% (2) \\
~\citep{betti2017evaluation} & Acute & \thead{ECG, EDA,\\EEG} & 26 (8/7) & 60.8$\pm$9.5 & MAST & \thead{Time and \\Frequency} & SVM & 86.00\% (2) \\
~\citep{liew2015classifying} & Acute & \thead{HRV,\\Cortisol} & 22 (17/5) & 21 & TSST & \thead{Time and \\Frequency} & FAM & 80.00\% (2) \\
~\citep{vizer2009automated} & Acute & \thead{keystroke,\\linguistic\\features} & 24 (10/14) & 18-56 & \thead{Cognitive \\task} & Time & \thead{DT, kNN\\ SVM, ANN} & 75.00\% (2) \\
~\citep{barreto2007non} & Acute & \thead{BVP, GSR,\\ST, PD} & 32 & 21-42 & \thead{Stroop color\\ word test} & \thead{Time and \\Frequency} & SVM & 90.10\% (2) \\
~\citep{zhai2008stress} & Acute & \thead{BVP, GSR,\\ST, PD} & 32 & 21-42 & \thead{Stroop color\\ word test} & Time & SVM & 90.10\% (2) \\
~\citep{giakoumis2012using} & Acute & \thead{ECG, GSR,\\Acc, Video} & 21 (17/4) & 30.4$\pm$3.7 & \thead{Stroop color\\ word test} & Time & LDA & 100\% (2) \\
\noalign{\smallskip}\hline \end{tabular} } \begin{tablenotes} \item[*] LDA: Linear Discriminant Analysis, SVM: Support Vector Machine, kNN: k-Nearest Neighbors, RF: Random Forest, LR: Logistic Regression, MLP: Multilayer Perceptron, NN: Neural Network, ANN: Artificial Neural Network, DT: Decision Tree, BN: Bayesian Network, LBN: Linear Bayes Normal, ABRF: AdaBoost with Random Forest, MAST: Maastricht Acute Stress Test, FAM: Fuzzy ARTMAP \end{tablenotes} \end{table}

\section{Future Directions} \label{sec:fd} In this section, we discuss future directions that could be adopted to improve existing human stress measurement methods, together with the open challenges that still need to be addressed in the literature. One of the most important limitations of most existing stress measurement schemes is that they are performed in a well-controlled lab environment, whereas the measurement of stress in real life poses a different set of challenges. The challenges posed by real-life stress measurement scenarios include the occurrence of multiple stressors at the same time (unlike the lab environment, where a particular stressor is given to the participant at a time) and incomplete contextual information, such as a lack of information about changing room temperature and the effect of outdoor environmental conditions on the acquired physiological signals.
The addition of contextual information in real-life as well as laboratory settings has been found useful for the efficient measurement of human stress in a study conducted in~\citep{gjoreski2016continuous}, supporting the fact that contextual information is important and has not been widely considered in the available literature. Future stress measurement studies should explore the challenges of the out-of-lab environment to make the available stress measurement methods more practical in real-life scenarios. Availability of ground truth for labeling the data used to train stress detection models is another open research issue. In both the lab environment and real-life scenarios, no objective ground truth is available, and the majority of studies in the literature have to rely on subjective scoring, which can vary from person to person. It has been observed that physiological data from two participants may show the same pattern, yet in subjective labeling one participant labels it as ``stressed'' whereas the other labels it as ``non-stressed''~\citep{liapis2015stress}. This type of labeling ambiguity degrades the performance of the system and needs to be rectified by developing a more robust mechanism for labeling the data. Power consumption of data acquisition devices is another important factor that needs to be considered when recording data in an out-of-lab environment. In-lab stress measurement systems can use medical-grade and other data acquisition devices with a constant power supply and no power outages, whereas in real-life scenarios all devices, whether EEG headsets, skin conductance and heart rate measurement modules, or smartwatches, have a limited battery life of approximately 4-5 hours. To record the activities of a user for a complete day, the power consumption of the devices needs to be kept at a minimum so that the battery lasts longer. Existing literature has not examined this power consumption factor, which needs to be addressed to enable efficient and long-lasting data acquisition devices for monitoring stress, especially in real-life environments. Noise affects the data acquisition part of stress recognition in the lab as well as the out-of-lab environment. Physiological signals, including EEG, EMG, ECG, PPG, and RR, are commonly affected by movements of body parts, which severely degrade signal quality. Artifact removal techniques include low-pass, high-pass, band-pass, and notch filtering, least mean squares and recursive least squares adaptive filtering, principal component analysis (PCA), independent component analysis (ICA), and wavelet denoising; a minimal filtering example is given after this paragraph. Even though these techniques have been found quite useful for the removal of noise from bio-signals in the lab environment, data acquired out of the lab presents a wider range of challenges, as environmental conditions such as temperature, humidity, and lighting affect the recorded physiological signals and other body functions. Examples of the effect of these environmental conditions include changes in pupil diameter in response to light stimulation and changes in the skin conductance of the human body due to temperature and physical activity. These conditions, which can be kept relatively constant in the lab, are very difficult to hold constant in an outdoor environment, posing a greater challenge and affecting stress measurement results.
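As a concrete instance of the first family of techniques listed above, the following is a minimal example of zero-phase band-pass filtering of a raw signal; the sampling rate, cut-off frequencies, and synthetic input are illustrative assumptions.

\begin{verbatim}
# Minimal example of zero-phase band-pass filtering of a raw
# physiological signal (sampling rate and cut-offs are illustrative).
import numpy as np
from scipy.signal import butter, filtfilt

fs = 256.0                                   # sampling rate in Hz (assumed)
t = np.arange(0, 10, 1 / fs)
raw = np.sin(2 * np.pi * 1.0 * t) + 0.5 * np.random.randn(t.size)

low, high = 0.5, 40.0                        # e.g., a typical EEG band
b, a = butter(4, [low / (fs / 2), high / (fs / 2)], btype="bandpass")
clean = filtfilt(b, a, raw)                  # forward-backward filtering
\end{verbatim}

Forward-backward filtering (\texttt{filtfilt}) is preferred over one-pass filtering here because it avoids the phase distortion that would shift temporal features of the signal.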
Most of the human stress measurement studies available in the literature have focused on acute stress, and very little attention has been given to chronic stress detection techniques. This is despite the fact that chronic stress can be more damaging than acute stress, has cost many companies billions of dollars, and severely affects their workers' health~\citep{gottlieb2013coping}. Chronic stress, if poorly managed and prolonged for a longer duration, turns into traumatic disorders that can permanently damage an individual's life. Thus, in future work, chronic stress measurement should be given its due importance so that it can be properly diagnosed and treated before it becomes a permanent part of one's life. Most stress measurement studies in the literature have used their own datasets for the detection of stress, so it is difficult to compare the results of different schemes directly because of the lack of standard, publicly available stress measurement datasets. Hence, the curation of data for the measurement of stress using globally accepted practices, analogous to the DEAP database for emotions~\citep{koelstra2011deap}, is the need of the hour. Deep learning has proved to be a powerful mechanism for classification in a variety of applications, including handwriting and image recognition, but its use for human stress detection has not yet been explored to a great extent in the literature. This is because deep learning algorithms require a large dataset to train the model, and acquiring a large stress dataset is a cumbersome task; collecting datasets large enough for deep learning models is still an open research problem. Only a few studies focusing on deep learning-based stress classification have been presented in the literature~\citep{song2017development}, so this area needs further exploration. Keeping in view all the above-mentioned limitations of existing methods, new work on human stress detection should address these challenges so that a more robust system, practical for real-life stress monitoring solutions, can be realized in the future.

\section{Conclusion} \label{sec:conc} Human stress is one of the most significant challenges faced by modern civilization, with a long-lasting impact on both society and the social life of the individual. It also impacts the economic condition of those individuals facing stress-related situations. Therefore, automatic recognition of stress is of utmost importance to bring a positive change in society. Some of the important benefits of automated stress monitoring include a better attitude towards the workplace, an increase in productivity, and a decrease in the number of road accidents, to name a few. In general, better stress management for individuals would have far-reaching benefits for both individuals and society. Towards this, we have surveyed the available literature for human stress measurement, including both subjective and objective measures. Commonly used stressors for inducing stress, followed by publicly available stress measurement databases, are also examined. Multimodal approaches for human stress detection, along with a discussion of the limitations of existing work and the open research issues that need to be addressed in future stress measurement studies, are also explored.
In particular, we have focused on methods that use, or are suitable for employing, artificial intelligence techniques for automated stress detection. The information presented here will provide a platform for designing future studies for stress detection, particularly for employing AI with domain knowledge. With this, we also provide a better context for methods that can be used for daily life stress detection (acute) and for situations that are more demanding (chronic). Towards this, we have comprehensively highlighted the current challenges and future directions.

\bibliographystyle{spbasic}

\section{Introduction} \label{sec:intro} Human stress research is significant since it plays a vital role in social, physiological, and psychological health. Stress research has a wide range of applications, including stress monitoring during the daily routine, stress assessment for improving health and work productivity, and preventing the onset of serious diseases. This research domain is beneficial for both individuals and society. Stressful conditions manifest in the form of adverse effects on an individual's working abilities and health. This makes individuals vulnerable to various kinds of diseases and weakens the recovery of the human body from various clinical conditions~\citep{subhani2017machine}. Long-term exposure to heightened stress can cause symptoms of depression. Besides this, the strong connection between depression and stress can boost anxiety and mood disorders~\citep{dempsey2018stress}. Depression affects almost 350 million people worldwide, and the situation seems to be worse in developing countries~\citep{world2015depression}. There are multiple causes of stress in human life that can also lead to mental disorders. These factors include, but are not limited to, internal conflicts, political unrest, economic instability, rising poverty, crime rates, and natural disasters. Hence, stressful conditions can severely affect day-to-day human life and can trigger mental and clinical conditions. Therefore, efficient stress detection methods are the need of the time. Recently, the global change in lifestyle owing to COVID-19 is also believed to give rise to various mental conditions. We have been forced to change our daily life and minimize social interaction, which is bound to affect our mental health. Experts believe that this could result in a mental pandemic if not handled properly~\citep{spoorthy2020mental,ransing2020can}. Hence, monitoring and detecting stress at the right time has become a necessity. A large segment of the human population is living in a constant state of stress without even knowing its serious consequences. Most individuals are unaware of their current stress level, and ironically one of the causes of heart attacks and strokes is a high level of human stress. There are varied causes of stress, such as poor income, joblessness, higher crime rates, and natural disasters, among many others. According to a survey of the American Psychological Association in 2017, 62\% of individuals have stress due to financial problems, 62\% due to problems at the workplace, 57\% due to political unrest in the country, and 51\% due to violence and crime in society~\citep{american2017stress}. Despite such serious consequences, the definition of human stress in medicine is sometimes vague. Hence, priority should be given to stress quantification with precise numerical indexes.
In the near future, it will not be enough to tell patients (dealing with stress) that it is all in their heads, since advancements in diagnostic tools will aid in quantifying stress levels more precisely. Human stress is a difficult phenomenon to explain because every individual perceives it differently. There is a common perception in society that stress is a bad thing, but this is not always true, as good stress also exists. For instance, an individual might have increased productivity under a stressful or challenging situation. In recent years, there has been an effort to develop strategies for stress assessment in a variety of daily life activities. For instance, in~\citep{can2020personal}, the authors proposed a stress detection scheme based on the heart activity, skin conductance, accelerometer, and skin temperature data of the subject. The stress measurement protocol is composed of baseline, lecture, exam, and recovery sessions. A maximum accuracy of 94.52\% is achieved for three-class stress classification using a random forest classifier. A study focused on cognitive training and stress detection in older adults suffering from mild cognitive impairment (MCI) while they were participating in a cognitive and motor rehabilitation session~\citep{delmastro2020cognitive}. An acute multi-level stress classification framework using photoplethysmography signals in response to a mental arithmetic task was presented in~\citep{zubair2020multilevel}. A stress and anxiety detection mechanism using physiological signals for the academic environment was proposed in~\citep{rodriguez2020towards}. A study to evaluate physical and psychological stress in firefighters using heart rate variability parameters was presented in~\citep{pluntke2019evaluation}. Detection of human stress using ECG data, with car driving and a mental arithmetic task as stimuli and a deep neural network classifier, was performed in a study conducted in~\citep{cho2019ambulatory}, with an achieved stress classification accuracy of 90.19\%. Regression analysis for the measurement of perceived stress using resting-state EEG signals was presented in~\citep{gillani2021prediction}. A review study of human stress assessment using physiological signals was presented in~\citep{gedam2021review}. One of the major shortcomings of this review is that it focuses only on physiological measures of stress and does not cover other common methods of stress measurement, such as physical and psychological measures; moreover, it does not discuss publicly available human stress datasets. Generally, stress is defined as a response of the human body to different kinds of situations, like a threat or a challenge~\citep{muthukumar2010cadmium}. Two main systems of the human body, i.e., the autonomic nervous system (ANS) and the hypothalamic-pituitary-adrenal (HPA) axis, respond to stress~\citep{ulrich2009neural}. When a stressor is encountered by a person, it activates neurons present in the hypothalamus, which release a hormone called corticotropin-releasing hormone (CRH); this consequently causes the release of adrenocorticotropic hormone (ACTH) from the pituitary gland. ACTH travels in the blood and acts on the adrenal glands, which in turn trigger the release of stress hormones including cortisol, epinephrine, and norepinephrine~\citep{kajantie2006effects}. Cortisol is released in response to stress, and it helps an individual to cope with an immediate threat.
In terms of treatment and cure, stress is categorized into three main types, i.e., acute stress, episodic stress, and chronic stress, based on the symptoms and duration~\citep{werner1993risk}. Acute stress (also termed instantaneous stress) originates from a specific event that is novel or unpredictable for an individual, for instance a public speaking task, a nearly missed road accident, or an interview. These stressors are not prone to affect an individual's health; rather, they can be good for it, since such events give the human body a chance to practice and develop its fight response to stressful situations in the future~\citep{segerstrom2004psychological}. In contrast, severe stressors that persist for a longer duration can lead to serious health disorders. Episodic stress occurs when a person faces multiple acute stressors over a short period. Episodic stress is commonly faced by individuals who take on more responsibilities than they can easily manage in a given time. Individuals facing episodic stress are often in a hurry and have disorganized personalities. Individuals who have a pessimistic approach toward daily routine tasks tend to have episodic stress. Unlike acute stress, episodic stress has negative effects on individual health. People facing episodic stress have low confidence in their abilities, and they assume that they will never be able to come out of a stressful situation~\citep{sincero2012three}. Chronic stress (also called long-term stress) originates from a variety of causes, like an unsatisfactory job, a tense family life, and financial crises~\citep{hiriyappa2013stress}. Unlike acute stressors, which can be negative as well as positive, chronic stress is always negative. Chronic stress affects the personality of an individual and can be the cause of many serious diseases, including heart attacks, cancer, and lung diseases~\citep{salleh2008life}. For the assessment of human stress, subjective and objective measures have been used~\citep{gross2016standard,fohr2015subjective}. For subjective stress assessment, two different approaches are used: standard stress measurement questionnaires designed by field experts, and sessions conducted with psychologists~\citep{gurung2013health}. Objective measures of stress include physiological and physical measures~\citep{onorati2013reconstruction,arsalan2019classification}. In physical measures, visible changes in the human body are observed, such as facial expressions~\citep{deschenes2015facial}, eye blinking rate~\citep{gowrisankaran2012asthenopia}, and dilation of the pupil~\citep{schulte2011handbook}. For physiological measures, sensors are placed on the human body to measure internal changes. Towards this, various biomarkers have been employed, including heart rate variability (HRV), heart rate (HR)~\citep{subahni2012association}, electrodermal activity~\citep{liapis2015recognizing}, respiration~\citep{wielgosz2016long}, and cortisol~\citep{liew2015classifying}. Further, in recent years, the application of machine learning for developing artificially intelligent systems has gained pace. One of the driving factors for this scientific advancement has been the success of deep learning algorithms. Machine learning is poised to significantly change and improve how healthcare systems work. The improvement in computational power will allow the development of AI-enabled embedded systems in healthcare. Human stress detection could also benefit from these advancements.
Machine learning can be deployed for both offline and online stress assessment. Some challenges that need to be handled include dealing with unpaired data, assigning reliable labels, and developing algorithms that work reliably with limited data and are explainable. Some studies focus on reviewing the current state of affairs related to human stress detection. For instance, a review of human stress detection using bio-signals is presented in~\citep{giannakakis2019review}; however, a discussion of the psychological, physical, and behavioral measures of human stress is found lacking, and publicly available databases for human stress measurement are also not explored. In another study, objective, subjective, physical, and behavioral measures for stress detection, as well as publicly available data used for human stress, are discussed. Another application-specific human stress measurement survey, focusing on driver stress levels, is presented in~\citep{rastgoo2018critical}, in which physical and physiological measures for driver stress detection are explored in detail. The limitation of this survey is that it discusses only a specific application, i.e., driver stress, and is not generic. Similarly, a review of methods developed for human stress measurement at the workplace is discussed in~\citep{carneiro2017new}. The limitation of this survey is that it discusses stress measurement methods only for a specific application, i.e., the workplace environment, and there is also no discussion of publicly available databases for human stress assessment. A human stress measurement survey using smartphones and wearable sensors is presented in~\citep{can2019stress}, covering both in-lab and out-of-laboratory stress measurement studies. The major limitations of that survey are the lack of discussion of publicly available datasets for human stress detection and the presentation of a limited amount of literature compared to other available stress assessment surveys. A survey of devices available in the market was presented in~\citep{thapliyal2017stress}, without any information on the studies using those devices. In summary, there is a need for a comprehensive presentation of the available human stress measurement methods. To the best of our knowledge, the current review addresses most of the shortcomings of existing human stress assessment survey papers by thoroughly investigating all the subjective and objective measures of human stress. In particular, our major contributions include: \begin{enumerate} \item Subjective measures, which include psychological questionnaires, are explored in detail for completeness. \item Objective measures of stress, comprising data acquired from wearable and non-wearable sensors, are elaborated. \item Publicly available human stress measurement databases and commonly used stimuli for inducing stress are discussed in detail. \item Future research directions in the domain of automated human stress detection using artificial intelligence are identified. \end{enumerate} The organization of this review paper is as follows. \Sec{intro} presents an introduction to the categorization of available stress measurement techniques and a discussion of existing stress measurement reviews and their limitations.
\Sec{sads} reviews the stressors commonly adopted in stress measurement studies to induce stress in participants, followed by a brief discussion of publicly available stress detection databases. Subjective stress measurement techniques commonly used in the literature are explored in \Sec{ssa}, followed by objective stress measurement techniques, their general framework, and the associated literature in \Sec{osa}. \Sec{msa} presents the multimodal stress detection schemes available in the literature, followed by a discussion of the limitations of existing schemes and future directions in \Sec{fd}, and the conclusion in \Sec{conc}.

\section{Stress Detection Datasets and Stressors} \label{sec:sads} This section is subdivided into two parts: first, we discuss some commonly used stressors for inducing stress in humans; second, we summarize publicly available datasets for human stress detection. \subsection{Stress Inducers: Stressors} Human stress measurement methods presented in the literature use a wide variety of stressors, including a public speaking task, an interview, an arithmetic task, and many others. Stress is measured in response to these stressors using different physiological and psychological techniques. Herein, we review the most commonly used stressors for inducing stress in participants and their related literature. \subsubsection{Stroop Color Word Test (SWT)} SWT is a neuropsychological test developed by J.R. Stroop in 1935, and it has been widely adopted for experimental as well as clinical purposes. SWT is composed of three different tasks~\citep{stroop1935stroop}: in the first task, the names of all colors are written in black; in the second task, the names of the colors and the color of the written text differ; and in the third task, there are squares of different colors. During the test, a participant should name the color of the word and not the word itself. In another version of SWT, the three tasks are named neutral (introductory session), congruent or non-conflict task, and non-congruent or conflict task. In the introductory session, all color names are written in black. In the congruent session, all color names are written in the same color as the color name. In the non-congruent session, the name of the color is written in a color different from the color name. SWT has undergone a wide range of changes since its inception in 1935. The alterations include an increase or decrease in the task duration, the addition of more colors to the experimental tasks, and the selection of one or more non-congruent colors among a number of congruent colors. The Stroop color-word test has been widely used in brain imaging and human attention measurement studies~\citep{pujol2001effect} and for the measurement and identification of human stress~\citep{pehlivanouglu2005computer,tulen1989characterization,svetlak2010electrodermal,renaud1997stress,zhai2006stress,lundberg1994psychophysiological,alonso2015stress,ren2012affective,kurniawan2013stress,giannakakis2017stress,giakoumis2012using,karthikeyan2012descriptive,krantz2004consistency}. \subsubsection{Mental Arithmetic Task (MAT)} MAT is one of the most commonly used stimuli for inducing stress~\citep{lundberg1994psychophysiological,ushiyama1991physiologic,tomaka1994effects,seraganian1997effect,ring2002secretory,hassellund2010long,linden1991arithmetic}.
The mental arithmetic task is a mechanism to increase mental workload by performing a series of arithmetic operations with a varying range of difficulty. This stimulus is easy to implement and does not require any special instrument. Another variant of the mental arithmetic task is the Montreal Imaging Stress Task (MIST)~\citep{dedovic2005montreal}, a computer-based stress-inducing protocol mainly consisting of mental arithmetic problems, which has been used as a stressor in several studies~\citep{setz2009discriminating,minguillon2016stress,al2015mental,al2016mental}. \subsubsection{Cold Pressor Test (CPT)} The CPT is another stimulus commonly used for inducing stress in stress measurement experiments. CPT was first introduced by Hines and Brown in 1932~\citep{hines1932standard}. In particular, CPT involves immersion of the human hand or limb in cold water for a duration of 2 to 3 minutes. During this experiment, the subject feels uncomfortable, and it is painful to adapt to the temperature for quite some time. The CPT protocol is widely used in laboratory experiments because of its ease of use. CPT triggers the activation of the sympathetic nervous system, which increases the blood pressure, heart rate, and skin conductance of the human body~\citep{lovallo1975cold}. A rise in cortisol level is also observed during CPT~\citep{al2002adrenocortical,bullinger1984endocrine}. Various versions of CPT have been used in different experiments, including immersion of both hands~\citep{suter2007cold} or both feet in hot or cold water~\citep{previnaire2012severity}. In~\citep{frings2013stress}, bilateral foot immersion was used to elicit a stress response, increasing salivary cortisol concentration and heart rate. In~\citep{hassellund2010long}, the authors conducted a study in which the right hand of the subject was immersed completely in cold water for a duration of one minute. In another study~\citep{shi2010personalized}, the participants were asked to keep their hand in ice water until they started to feel discomfort. \subsubsection{Social Evaluative Tasks} Psycho-social stress is a type of human stress that occurs when an individual has to face people or a group of people, as in a public speaking task. When a socially threatening situation occurs, two mechanisms of the human body are affected: the autonomic nervous system and the neuroendocrine system. The hypothalamus activates both of these systems to monitor the environmental demand (i.e., stress) as well as the internal state of the subject~\citep{bitsika2014hpa}. Based on these two mechanisms, physiological as well as behavioral responses are activated to generate a fight-or-flight response~\citep{taylor2000biobehavioral}. Exposure to a social stressor has an immediate impact on the physiological system of a human being~\citep{dickerson2004acute}. Exposure to social stressors has been the cause of many diseases, including depression~\citep{mcewen2005glucocorticoids}, cardiovascular diseases~\citep{kemp2012depression}, and immune dysfunction~\citep{glaser2005stress}. Obesity, anxiety, and psychosocial stress have also been found to be interlinked~\citep{pittig2013heart}. Hence, treating social stress is important, towards which exposure therapies have been developed to treat anxiety. Real-life social evaluative situations generate psychosocial stress~\citep{wolpe2013systematic}. Instead of exposure to real-life events, virtual reality has also been used as a stressor~\citep{parsons2008affective}.
Virtual reality exposure therapy (VRET) is an intermediate phase between thoughts and real-life events. Virtual reality is useful for a person who has difficulty imagining fearful tasks. VRET also has the advantage that if the stimuli become too threatening for the patient, the therapist has control to stop the stimuli. VRET is a very effective method of treating social anxiety, and through VRET, patients learn methods to face such threatening situations in real life~\citep{bordnick2012feasibility}. The public speaking task as a social stressor has been the focus of very few studies. Existing literature focuses either on a real audience~\citep{kudielka2009we} or a virtual audience~\citep{slater2006experimental,felnhofer2014afraid}. A complete study based on different physiological measures to find the impact of social stress in the real world as well as a controlled environment is still pending. Existing literature has shown that a virtual audience is able to induce stress in a virtual public speaking task, as reflected in heart rate and self-reported anxiety measures~\citep{slater2006experimental,felnhofer2014afraid,pertaub2002experiment}. Moreover, literature exists on the comparison of stress based on gender. It is shown in~\citep{kudielka2009we} that when men and women are both subjected to real-life stressors, no significant difference based on gender is found. HPA activity has also been found to show no differences between male and female participants~\citep{kelly2008sex}. It has been established in the literature that women have a decreased happiness level after facing social stressors. A study presented in~\citep{hemmeter2005modification} shows that men have a higher cortisol concentration than women when facing a virtual stressor, whereas social rejection produces higher cortisol levels in women than in men in a study given in~\citep{stroud2002sex}. In~\citep{kothgassner2016salivary}, the authors examined the stress response to a public speaking task in front of a real audience, a virtual audience, and in an empty lecture hall. Gender differences in the stress response were also evaluated, with heart rate, heart rate variability, and salivary cortisol used as measurement parameters. \subsubsection{Music} The effect of music on human stress has also been the subject of various studies. In~\citep{escher1993music}, the authors experimented with cortisol changes and found that positive cortisol changes occur when the subject is asked to listen to music before and during stressful medical treatment. In~\citep{suda2008emotional}, the authors demonstrated a stress-suppressing effect of music accompanied by an increase in cortisol level, whereas, on the other hand, another study demonstrated that for a non-music condition the cortisol levels decreased after the stressor period~\citep{khalfa2003effects}. Another parameter of research is the effect of music on the sympathetic nervous system (SNS) of an individual, and different experiments have been conducted to establish this effect. In~\citep{bartlett1996physiological}, a decrease in SNS activity was observed in response to music, but a few other studies contradict these findings. An investigation into whether human stress is relieved by music is reported in~\citep{allen2001normalization}. It was concluded that the level of relaxation and the ability to cope with challenges increase with a decrease in the perceived stress level of an individual.
A decrease in anxiety in response to listening to music is a consistent finding of many studies~\citep{knight2001relaxing}, although a few studies have reported no reduction in anxiety in response to music~\citep{evans2002effectiveness}. \subsubsection{International Affective Picture System (IAPS)} IAPS is a collection of photos that has been widely used to elicit an emotional response, either positive or negative, in viewers~\citep{lang1997international}. The photos have been evaluated on a 9-point rating scale of valence and arousal. IAPS has been used as a very effective tool to induce stress in stress recognition experiments~\citep{baltaci2016stress,liao2005real,giannakakis2017stress,nhan2009classifying,khalilzadeh2010qualitative}. The database was developed by the National Institute of Mental Health Center for Emotion and Attention at the University of Florida and is composed of 956 images categorized into pleasant (to elicit positive feelings), non-pleasant (to elicit negative feelings), and neutral images. The database includes a normative rating, developed on three dimensions, i.e., valence, arousal, and dominance, which represents the average rating of the emotion induced by each picture. This rating helps researchers using IAPS to select an appropriate set of images for inducing the relevant emotions. The establishment of this type of average rating is termed standardization by psychologists. The standard rating of IAPS was obtained from 100 students, composed of 50 males and 50 females, of US-American origin. Normative ratings of IAPS have also been obtained from non-US participants of other origins, i.e., Hungarian~\citep{deak2010hungarian}, German~\citep{gruhn2008age}, Portuguese~\citep{lasaitis2008brazilian}, Indian~\citep{lohani2013cross}, and Spanish~\citep{dufey2011adding}. Various kinds of physiological modalities, including fMRI~\citep{caria2010volitional}, EEG~\citep{hajcak2009brain}, magnetoencephalography~\citep{styliadis2015distinct}, skin conductance~\citep{d2010early}, heart rate~\citep{bradley2001emotion}, and electromyography~\citep{baglioni2010psychophysiological}, have been used along with the IAPS stimulus. \subsubsection{Trier Social Stress Test (TSST)} TSST is a psychological stress-inducing protocol for the laboratory environment, developed by Clemens Kirschbaum in 1993~\citep{kirschbaum1993trier}. TSST consists of two parts: an anticipation period of 10 minutes and a test period of 10 minutes, in which the subject has to deliver a speech and perform a mental arithmetic task in front of an audience. TSST has been used for inducing stress in a variety of stress measurement studies~\citep{kurniawan2013stress,engert2014exploring,vinkers2013effect,nater2005human}. \subsection{Publicly Available Datasets for Human Stress Detection} Only a few human stress assessment datasets have been curated by the research community and made publicly available for further research. In this section, we present details of publicly available datasets for this task using physiological signals. A human stress measurement dataset (\url{https://physionet.org/content/drivedb/1.0.0/}) for measuring driver stress using the physiological signals of electrocardiogram, electromyogram, skin conductance, and respiration is presented in~\citep{healey2005detecting}. The physiological signals from 24 drivers were acquired during three different phases, i.e., the rest condition, highway driving, and city driving.
The three conditions (rest, highway, city) under which the data were acquired were mapped onto three stress levels, i.e., low stress, medium stress, and high stress, respectively. One of the major limitations of this database is the low sampling rate of the acquired physiological sensors, e.g., the electromyogram signal is recorded at a sampling rate of 15.5 Hz. Another dataset to measure driver workload (\url{http://www.hcilab.org/automotive/}) using physiological signals of heart rate, skin conductance, and body temperature, together with GPS, acceleration, and brightness-level data obtained from a smartphone, is presented in~\citep{schneegass2013data}. The data from 10 drivers (7 males and 3 females) were acquired while driving on a pre-defined route of 23.6 km over five different road types, i.e., 30 km/h zone, 50 km/h zone, highway, freeway, and tunnel. Moreover, labels depicting different levels of workload, from no workload to maximum workload, are also provided in the database. The dataset can be used for the assessment of different levels of workload based on physiological signals. Another publicly available human stress measurement dataset (\url{https://physionet.org/content/noneeg/1.0.0/}) using bio-signals of electrodermal activity, temperature, acceleration, heart rate, and arterial oxygen level is presented in~\citep{birjandtalab2016non}. It consists of data from 20 participants, 16 males and 4 females. Data acquisition was performed under four different conditions, i.e., relaxed state, physical stress, cognitive stress, and emotional stress. The relaxation condition was achieved by asking the participants to listen to a soothing music track. Physical stress was induced by making the participants jog on a treadmill at 3 miles/hour. Cognitive stress was elicited by asking the participants to count backward from 2485 in steps of seven. Lastly, emotional stress was evoked by watching a video clip from a movie. Another dataset (\url{https://osf.io/c42cn/wiki/home/}) to measure driver behavior under different kinds of emotional, cognitive, and startling stressors, which are a major cause of accidents, is presented in~\citep{taamneh2017multimodal}. The dataset was acquired from 68 drivers, who drove under four different conditions, i.e., no distraction, emotional distraction, cognitive distraction, and sensorimotor distraction, in a controlled environment in a driving simulator. Modalities used for acquiring the driver response include heart rate, respiration rate, facial expressions, gaze, and electrodermal activity from the palm of the subject. Different types of subjective questionnaires were used to measure the cognitive state, personality type, and task load of the subject. \textbf{WE}arable \textbf{S}tress and \textbf{A}ffect \textbf{D}ataset (WESAD) (\url{https://ubicomp.eti.uni-siegen.de/home/datasets/}) is a publicly available dataset consisting of physiological and motion data of subjects for both emotion and stress stimuli~\citep{schmidt2018introducing}. Fifteen participants (12 males and 3 females) were involved in the experiment, and the data were acquired in a laboratory setting. Data for each subject were recorded in three different conditions, i.e., a baseline recording obtained during a reading task, an amusement condition achieved by watching a set of funny videos, and a stressed condition achieved by exposure to the Trier Social Stress Test.
Sensor modalities used in the data acquisition include electrocardiography, electrodermal activity, electromyography, blood volume pulse, respiration, body temperature, and three-axis acceleration. A multimodal \textbf{S}mart reasoning system for \textbf{WELL}-being at work and at home \textbf{K}nowledge \textbf{W}ork (SWELL-KW) dataset (\url{http://cs.ru.nl/~skoldijk/SWELL-KW/Dataset.html}) for research on human stress and user modeling is developed in~\citep{koldijk2014swell}. Data were curated from 25 participants performing a variety of knowledge-work tasks, which included report writing, preparing presentations, checking emails, and information search. Stress was induced by telling the participants that they had to present one of the prepared presentations to receive the full experiment participation fee. Data for each participant were recorded for a duration of three hours, sub-divided into three one-hour blocks. Each block started with an eight-minute relaxation period, after which the participant was assigned the tasks to work on; the participants had to write two reports and prepare one presentation in each block of the experiment. Stress was further induced by showing a countdown timer flashing the remaining time for task completion. A dataset (\url{https://catalog.ldc.upenn.edu/LDC99S78}) to measure the effect of human stress on speech signals was presented in~\citep{steeneken1999speech}. Three different databases, named Speech Under Stress Conditions (SUSC), Speech Under Simulated and Actual Stress (SUSAS), and DERA License Plate (DLP), were developed in this research work to support robust speech processing algorithms for the identification of human stress in speech signals. Towards this, 32 speakers (19 males and 13 females) with ages ranging from 22 to 76 years participated in the experiment to record 16,000 voice samples. The speech signals were sampled using a 16-bit analog-to-digital converter at a sampling rate of 8 kHz. Another dataset (\url{https://www.sensornetworkslab.com/clas}) for \textbf{C}ognitive \textbf{L}oad, \textbf{A}ffect and \textbf{S}tress Recognition (CLAS) was presented in~\citep{markova2019clas}. The database consists of physiological recordings of ECG, PPG, and EDA, along with accelerometer motion data, from 62 healthy participants (45 men and 17 women) with ages ranging from 20 to 50 years. The data were acquired while performing three interactive and two perspective tasks: the interactive tasks include mathematical problems, logic tasks, and the Stroop color-word test, whereas the perspective tasks are composed of images and audio-video stimuli. All the physiological signals were acquired at a sampling rate of 256 Hz with a resolution of 16 bits per sample. Another publicly available dataset for the assessment of social stress in humans using physiological signals of blood volume pulse and electrodermal activity is presented in~\citep{meziatisabour2021ubfc}. Moreover, video recording was performed to measure remote photoplethysmography and facial features. A total of 68 undergraduate students from the psychology department participated in the experiment. The Competitive State Anxiety Inventory (CSAI) questionnaire was used to measure three dimensions of self-reported anxiety: cognitive anxiety, somatic anxiety, and self-confidence. The Trier Social Stress Test was used as the stimulus, during which the physiological signals and the video recording were acquired.
A perceived human stress measurement dataset (\url{https://sites.google.com/site/simplrgp/resources}) using EEG signals was presented in~\citep{arsalan2019classification}. The database consists of EEG recordings from 28 participants (13 males and 15 females) with ages ranging from 18 to 40 years, in three different phases of the experiment, i.e., pre-activity, during activity, and post-activity. EEG recording was performed while the participant delivered a presentation on an unknown topic for a duration of five minutes. Subjective scores from the perceived stress scale questionnaire were also recorded. \section{Subjective Stress Assessment} \label{sec:ssa} Subjective measures for human stress assessment have traditionally been used for many decades. Our main objective is to review methods that use data from wearable and non-wearable sensors for automated stress detection using artificial intelligence; however, subjective measures are explored herein for completeness, and such assessments have been used to benchmark machine learning-based methods. Towards this, there exists a wide range of questionnaires developed by psychologists for measuring different types of stress. These measures are based on a questionnaire filled in by the subject, and researchers use the resulting scores to validate the objective measures obtained from sensors. The perceived stress scale (PSS) questionnaire~\citep{cohen1983global} is commonly used by psychologists to measure chronic stress, while the acute stress disorder (ASD) scale questionnaire~\citep{bryant2000acute} was developed by psychologists to measure acute stress. Some of the other questionnaires used by psychologists are the relative stress scale (RSS)~\citep{ulstein2007relative}, the daily stress inventory (DSI)~\citep{brantley1987daily}, the brief symptom inventory~\citep{derogatis1993brief}, and the trier inventory for the assessment of chronic stress (TICS)~\citep{schulz1999trier}. A brief review of the commonly used questionnaires for stress assessment is given below. \subsection{Acute Stress Disorder (ASD)} ASD is a subjective self-report questionnaire used to quantify acute stress disorder and post-traumatic stress disorder. It is the self-report version of the Acute Stress Disorder Interview (ASDI) questionnaire and was developed with three aims: (a) identification of ASD, (b) provision of a self-report version of the ASDI, and (c) measurement of post-traumatic stress disorder (PTSD). It is a 19-item questionnaire compliant with the Diagnostic and Statistical Manual of Mental Disorders criteria, and the scale has been successfully used to measure acute stress disorder among a wide range of subjects~\citep{bryant2000acute}. The 19 questions of the ASD questionnaire comprise 5 dissociative, 4 reexperiencing, 4 avoidance, and 6 arousal items. The questions are rated on a five-point Likert scale, where 1 means that a condition did not occur at all and 5 means that the particular situation occurred very strongly; the total score thus ranges from a minimum of 19 to a maximum of 85. A study analyzing the factor structure of acute stress disorder in earthquake victims of the Chinese population is presented in~\citep{wang2010factor}. The study was conducted on a sample of 353 participants, 180 men and 173 women, with a mean age of 29.36 years and a standard deviation of 11.45.
The study concluded that a four-factor model consisting of dissociation, reexperiencing, avoidance, and arousal is consistent with the conceptualization of ASD. A wide range of studies has been conducted to establish a correlation between PTSD and ASD; they report that around three-quarters of trauma survivors who show symptoms of ASD ultimately develop PTSD~\citep{harvey1998relationship,harvey1999two,harvey2000two,brewin1999acute}. A study conducted on motor vehicle/industrial accidents in~\citep{harvey1998relationship} found a 3-factor model consisting of acute post-traumatic stress reactions, dissociative symptoms, and dissociative amnesia. The study was conducted on 99 participants, 65 men and 34 women, with a mean age of 31.59 years and a standard deviation of 11.28. \subsection{Brief Symptom Inventory (BSI)} BSI is a questionnaire developed by psychologists to measure psychological distress and psychiatric disorders in people~\citep{derogatis1993brief}. The data collected from the questionnaire can be used to diagnose and treat patients. BSI is a 53-item questionnaire with each item answered on a five-point scale. The 53 items of BSI cover nine symptom dimensions, namely Somatization, Obsession-Compulsion, Interpersonal Sensitivity, Depression, Anxiety, Hostility, Phobic Anxiety, Paranoid Ideation, and Psychoticism, and three indices of distress, i.e., the Global Severity Index, the Positive Symptom Distress Index, and the Positive Symptom Total. The time required by the subject to complete the questionnaire is approximately 8 to 12 minutes. The respondent answers each item on a scale from 0 (condition never occurs) to 4 (condition occurs very frequently), so the total score ranges from 0 to 212. The somatization dimension is calculated from items 2, 7, 23, 29, 30, 33, and 37; the obsession-compulsion dimension from items 5, 15, 26, 27, 32, and 36; interpersonal sensitivity from items 20, 21, 22, and 42; the depression dimension from items 9, 16, 17, 18, 35, and 50; anxiety from items 1, 12, 19, 38, 45, and 49; the hostility dimension from items 6, 13, 40, 41, and 46; phobic anxiety from items 8, 28, 31, 43, and 47; paranoid ideation from items 4, 10, 24, 48, and 51; and psychoticism from items 3, 14, 34, 44, and 53 of the questionnaire. Items 11, 25, 39, and 52 do not contribute to the calculation of any dimension, but they are recorded because of their clinical importance. The Global Severity Index is calculated by summing the items of all nine dimensions, together with the four items not included in any dimension, and dividing the sum by the total number of items the respondent answered. The Positive Symptom Total is the number of items with non-zero responses, and the Positive Symptom Distress Index is obtained by dividing the sum of the non-zero response items by the Positive Symptom Total. BSI has been used for examining the relationship among psycho-social family risk factors, parental psychological distress, and quality of life in pediatric cancer survivors in a study conducted in~\citep{racine2018quality}. The study reports that families with a low level of distress have a lesser impact on the quality of life of pediatric cancer survivors.
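To make the BSI scoring rules above concrete, the following minimal Python sketch computes the three global indices and one example dimension score; the function names, the NumPy-based implementation, and the handling of unanswered items are our own illustrative assumptions rather than part of the published inventory materials.
\begin{verbatim}
import numpy as np

# Illustrative subset of the item-to-dimension mapping listed above
# (1-based item numbers); the full mapping covers all nine dimensions.
DIMENSIONS = {
    "somatization": [2, 7, 23, 29, 30, 33, 37],
    "depression": [9, 16, 17, 18, 35, 50],
}

def bsi_indices(responses):
    """Compute the three BSI global indices.

    responses: length-53 sequence of item scores on the 0-4 scale;
    np.nan marks items the respondent left unanswered.
    """
    r = np.asarray(responses, dtype=float)
    answered = ~np.isnan(r)
    # Global Severity Index: total score divided by the number of
    # items the respondent actually answered.
    gsi = np.nansum(r) / answered.sum()
    # Positive Symptom Total: count of endorsed (non-zero) items.
    pst = int(np.sum(r[answered] > 0))
    # Positive Symptom Distress Index: mean intensity of the
    # endorsed items.
    psdi = np.nansum(r) / pst if pst > 0 else 0.0
    return gsi, pst, psdi

def dimension_score(responses, dimension):
    """Mean score of the items belonging to one symptom dimension."""
    idx = np.array(DIMENSIONS[dimension]) - 1  # convert to 0-based
    return float(np.nanmean(np.asarray(responses, dtype=float)[idx]))
\end{verbatim}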
The relationship between psycho-pathological symptoms and technological addictions has also been studied. In a study conducted among 126 university students, the nine dimensions obtained from the BSI questionnaire were found to be significantly correlated with internet addiction~\citep{adalier2012relationship}. A significant association was found between anxiety level and internet addiction in adolescents in a study conducted in~\citep{stavropoulos2017longitudinal}. \subsection{Relative Stress Scale (RSS)} RSS is a commonly used subjective marker to measure psychiatric disorders among individuals who act as caretakers of dementia patients. It is a 15-item questionnaire and a reliable measure of stress disorders among the carers of dementia patients. The items of the questionnaire are scored on a scale of 0 to 4, where 0 means a particular event never occurred and 4 means a particular event occurs very frequently; the total score therefore ranges from 0 to 60. Age, gender, education, occupation, and the relationship of the carer with the patient are also recorded in RSS. The carers of the patients are also asked to specify their routine and the estimated time per week they spend taking care of and assisting the patient. The RSS questionnaire covers many different aspects of carer burden, such as the subjective emotional response (emotional distress), negative feelings associated with the behavior of patients (negative feelings), and restrictions in the carer's social life (social distress). Items 1, 2, 3, 4, 5, and 6 of the RSS questionnaire measure emotional distress; items 7, 8, 9, 10, 11, and 13 measure social distress; and items 12, 14, and 15 measure the negative feelings of the patient carers. Emotional distress in carers is directly proportional to the amount of time spent per week with the patient; more specifically, emotional distress is higher in female carers, supporting the observation that female carers are more emotional in their approach~\citep{fitting1986caregivers}, whereas their male counterparts are more task- or goal-oriented~\citep{corcoran1992gender}. Social distress is also higher in carers who spend more time with the patients, and carers with high social distress need help and a break from caring for dementia patients. Negative feelings in the carer are associated with the patient's age, i.e., the younger the patient, the more negative feelings occur in the patient carers. RSS has been widely adopted in Norway for clinical purposes and research to measure carer burden~\citep{braekhus1999social,thommessen2002psychosocial}. RSS has been used in the literature for the validation of the distress scale of the Neuropsychiatric Inventory~\citep{kaufer1998assessing}. In~\citep{greene1982measuring}, RSS has been used to provide a useful basis for discussion with carers of dementia patients. \subsection{Daily Stress Inventory (DSI)} DSI is another measure developed to provide research scientists and doctors with information about the psychiatric issues of patients after they have gone through stressful events. DSI is specifically designed for the measurement of small stressful events that need to be measured on a daily basis, and it possesses useful and unique qualities for this purpose. DSI is a 58-item questionnaire that allows the participant to indicate the events which occurred in the last 24 hours.
After indicating the events that occurred, the subject rates the stressfulness of those events on a Likert-type scale from 1 to 7, where 1 refers to events which occurred but were not stressful and 7 means that a particular event caused the subject to panic. At the end of the DSI inventory, two blank items are provided to let the subject report events which were not included in the 58 items; however, the scores of the blank items are not counted toward the calculation of stress scores. The minimum score of the DSI inventory is 58, whereas a maximum score of 406 can be obtained. Three different scores are computed for each individual: (i) the number of events that the subject reports to have occurred, (ii) the sum of the scores of all these events, and (iii) the average of the scores of these events. The DSI inventory has been used frequently in research studies and has shown good validity and reliability~\citep{maslach1997evaluating}. DSI is aimed at daily monitoring over a course of seven to ten days to measure changes in the daily stressors faced and to observe the relationship of these stressors to physical and psychological symptoms~\citep{brantley1993daily}. This daily monitoring leads to a better association between these small stressors and the illness of the subject. A study conducted in~\citep{goreczny1988daily} monitored 24 patients with asthma and chronic obstructive pulmonary disease: the daily stress score was recorded via the DSI inventory and respiratory symptoms were recorded for a period of 21 days, and the study showed that on highly stressful days, asthma symptoms in patients worsened. DSI has been correlated with other medical conditions like headache~\citep{mosley1991time,waggoner1986investigation}, Crohn's disease~\citep{garrett1991relation}, and diabetes~\citep{goetsch1990stress}. \subsection{Perceived Stress Scale (PSS)} PSS is a questionnaire developed to measure the chronic stress of an individual. The questionnaire assesses the extent to which an individual has been stressed in the last thirty days. PSS is a 10-item questionnaire scored on a scale of 0 to 4, where 0 means that a particular situation never occurred and 4 means a situation occurred very frequently. The scores from all the items of the questionnaire are summed up to obtain the PSS score; the minimum and maximum obtainable scores are thus 0 and 40, respectively. The final PSS score is obtained by reverse-scoring four items of the questionnaire, namely items 4, 5, 7, and 8, and using the other items as they are. PSS has been used in a wide range of studies for the assessment of chronic stress among individuals. \subsection{Trier Inventory for the Assessment of Chronic Stress (TICS)} TICS is a standardized questionnaire for assessing nine interrelated factors of chronic psychosocial stress and is a very reliable and effective tool. The nine factors addressed by TICS include Work Overload (e.g., "I have too many tasks to perform."), Social Overload (e.g., "I must frequently care for the well-being of others."), Pressure to Perform (e.g., "I have tasks to fulfill that pressure me to prove myself."), Work Discontent (e.g., "Times when none of my tasks seem meaningful to me."), Excessive Demands at Work (e.g., "Although I try, I do not fulfill my duties as I should."), Lack of Social Recognition (e.g., "Although I do my best, my work is not appreciated."), Social Tensions (e.g.,
"I have unnecessary conflicts with others."), Social Isolation (e.g., "Times when I have too little contact with other people."), and Chronic Worrying (e.g., "Times when I worry a lot and cannot stop). TICS is a 57-item questionnaire that is rated on a 5-point scale from 0 to 4 based on whether the participant experienced a particular situation in the last 3 months or not. On the 5-point scale, 0 means a situation never occurred, 1 means a situation very rarely occurs, 2 means a situation sometimes occurs, 3 means a particular situation often occurs, and 4 means a particular situation occurs very frequently. The total score of the TICS questionnaire can range from 0 to 228. In a study conducted in~\citep{sturmbauer2019stress}, a correlation between the Stress and Adversity Inventory (STRAIN) with TICS and PSS was examined. It was found that STRAIN is more correlated to TICS as compared to PSS. A correlation between the TICS score and central serous chorioretinopathy (CSC) named syndrome in young and middle-aged adults was established in~\citep{buehl2012trier}. The study found that people with CSC syndrome have higher TICS score as compared to individuals with no CSC syndrome. Subjective measures of human stress have been widely used in the literature but there exist some limitations and shortcomings of these methods. One of the major shortcomings of these subject measures is that these questionnaires are being responded to by the subject himself and if the subject answers the items of the questionnaire in a biased manner then the score obtained for stress measurement is unreliable and incorrect. Secondly, to answer the questionnaires, the subject has to be literate and able to properly read the items of the questionnaire. Thirdly, the questionnaires for stress measurement are not available in all the languages thus creating a bottleneck and hence cannot be used by individuals whose first language is not the one in which the questionnaire has been developed. Keeping in view these limitations, using only subjective measures is not a reliable indicator of stress, thus objective measures of stress are essential for the development of better stress measurement protocols. \section{Objective Stress Detection} \label{sec:osa} Objective measures of stress include physiological and physical measures. Physiological measures of stress need sensors to be connected to the human body at some specified location e.g., EEG, ECG, and EDA whereas, in the case of physical sensors the measurement can be done at a distance from the subject without the need of any physical contact. Objective measures of stress are free from human intervention and hence cannot be biased like the subjective questionnaire and it is the major benefit of objective measures over the subjective assessment of stress. Moreover, the studies which use objective measures of stress also validate their finding using subjective questionnaires~\citep{healey2005detecting}. The data acquisition protocols in case of objective measures of stress are time-consuming and complicated and hence to record data for a large population sample is difficult. The limited capacity of the existing stress modeling protocol and lack of a large data sample make it necessary to include the conventional subjective stress measurement methods for the validation of objective measures. It is because of these factors, subjective measures are still regarded as an efficient measure of stress~\citep{ulstein2007high,weidner1989hostility}. 
In this section, we discuss a general framework for human stress assessment and review the available literature on human stress measurement using objective methods. The general machine learning framework for human stress detection includes the data acquisition and annotation, pre-processing, feature extraction and selection, and classification steps shown in \Fig{fig1a}. Each of these steps plays a vital role in accurate human stress detection and is discussed below. \begin{figure*} \begin{center} \includegraphics[width=80mm]{Figure1a.png} \end{center} \caption { \label{fig:fig1a} { General machine learning framework for objective stress detection.}} \end{figure*} \noindent\textbf{Data Acquisition and Annotation} is one of the most important steps in the human stress detection framework. The quality of the acquired data is of utmost importance for robust analysis of human stress and for drawing reliable conclusions. Before the start of data acquisition, a good experimental design following standard protocols is needed. In stress measurement studies, two types of experimental protocols exist: (i) measuring stress induced by an external stimulus, also called acute or instantaneous stress, and (ii) measuring perceived or chronic stress without using any external stimulus. Before data acquisition, the stress-inducing protocol to be used needs to be defined. Another important decision is whether the data are to be acquired in a laboratory setting or in an out-of-laboratory environment. The number of participants in the experiment is also important: if the number of participants is small, the findings of the study cannot be generalized, and there is also a chance that data acquired from some participants may be corrupted; on the other hand, acquiring data from a large number of participants is a time-consuming and cumbersome process. The physical and physiological sensors from which data are to be recorded should be selected before the start of data acquisition. In addition to the above-mentioned parameters, another important factor in data acquisition is data annotation. Data annotation is the process of assigning each training example of the data to a particular class depending on some criterion. Commonly used criteria for data annotation in stress measurement studies include the use of subjective questionnaires~\citep{asif2019human,arsalan2019classification} and evaluation by psychologists~\citep{saeed2020eeg}. This variation in labeling techniques also poses challenges for the comparison of the available techniques with each other. \noindent\textbf{Pre-processing} is the second step in the stress detection pipeline and plays an important role in the whole process. Signals acquired using wearable and non-wearable sensors during the data acquisition phase are affected by different kinds of noise, which include power line interference~\citep{lin2016removal}, eye blinking artifacts~\citep{shoker2005artifact}, muscular artifacts~\citep{chen2014preliminary}, and posture or physical activity~\citep{alamudun2012removal}. Noise removal techniques for pre-processing different kinds of modalities have been developed in the literature. The accelerometer sensor has been used in stress detection studies; the noise affecting accelerometer data is composed of high-frequency components and can be removed by low-pass filtering.
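As a hedged illustration of this pre-processing step, the sketch below applies a zero-phase low-pass Butterworth filter to a raw accelerometer axis using SciPy; the 32 Hz sampling rate, 5 Hz cut-off, and filter order are assumed values chosen for illustration and are not prescribed by the cited studies.
\begin{verbatim}
import numpy as np
from scipy.signal import butter, filtfilt

def lowpass(x, fs, cutoff=5.0, order=4):
    """Zero-phase low-pass filtering of one accelerometer axis.

    fs: sampling rate in Hz; cutoff: cut-off frequency in Hz.
    filtfilt runs the filter forward and backward, so the output
    is not phase-shifted with respect to the input.
    """
    b, a = butter(order, cutoff / (fs / 2.0), btype="low")
    return filtfilt(b, a, x)

# Example: a noisy 10-second accelerometer axis sampled at 32 Hz.
fs = 32.0
t = np.arange(0, 10, 1 / fs)
raw = np.sin(2 * np.pi * t) + 0.3 * np.random.randn(t.size)
clean = lowpass(raw, fs)
\end{verbatim}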
Authors have applied low-pass filtering to remove high-frequency artifacts from the accelerometer signal in stress assessment studies conducted in~\citep{mozos2017stress,gjoreski2017monitoring}. Two of the main steps in pre-processing an ECG signal are identifying the R-peaks and the RR intervals. Algorithms like Pan and Tompkins' algorithm~\citep{pan1985real} have been developed to identify the R-peaks of the ECG signal, and algorithms to identify valid RR intervals have also been proposed~\citep{hovsepian2015cstress}. Pre-processing of PPG signals has likewise been explored in the literature: PPG signals are affected by low-frequency noise, which can be mitigated by high-pass filtering~\citep{elgendi2012analysis}. Meaningful information in an EDA signal is normally contained in the low-frequency components, while the noise is the high-frequency component of the signal, which can be removed by passing the EDA signal through a low-pass filter. Another important pre-processing task performed with EDA signals is their segmentation into a slowly varying baseline conductivity, known as the skin conductance level (SCL), and a high-frequency component called the skin conductance response (SCR); the authors in~\citep{choi2011development} have proposed a technique to separate the SCL and SCR components of the EDA signal. Techniques for pre-processing EMG signals have also been proposed in the literature. A two-step noise removal technique for EMG signals is proposed in~\citep{wijsman2010trapezius}: in the first step, band-pass filtering is applied to limit the signal to 20-450 Hz; in the second step, power line interference is removed by applying notch filters at frequencies of 50, 100, 150, 200, 250, and 350 Hz. Another important contamination source for EMG signals is the ECG signal, i.e., cardiac artifacts; different algorithms to remove cardiac noise from EMG signals have been compared in a study conducted in~\citep{willigenburg2012removing}. \noindent\textbf{Feature Extraction and Selection} are critical for an efficient machine learning model. Feature extraction is the process of extracting meaningful features from the acquired data. Meaningful features are those that are descriptive, i.e., they have discriminating values for instances from different classes. The extracted features constitute a feature vector, which is fed as input to the classification stage. The extracted features can be categorized in different ways, e.g., time-, frequency-, or wavelet-domain features, linear versus non-linear features, and unimodal versus multimodal features. The computational complexity of the extracted features can range from simple statistical features, e.g., mean, median, minimum, and maximum, to complex features based on particular modalities. A different set of features is extracted from each sensor for human stress recognition. Some of the commonly used features for accelerometer sensors in stress recognition studies include mean, standard deviation, variance, maximum, absolute value, signal magnitude area, root mean square, energy, differential entropy, discrete Fourier transform, peak magnitude frequency, peak power, and zero crossings~\citep{garcia2015automatic,can2019continuous,sano2013stress}.
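A minimal sketch of how such windowed statistical features might be computed from a pre-processed signal is given below; the non-overlapping 5-second windows and the particular subset of features are illustrative assumptions, not a fixed recipe from the cited studies.
\begin{verbatim}
import numpy as np

def window_features(x, fs, win_sec=5.0):
    """Statistical features over non-overlapping windows of a signal.

    Returns one row per window: mean, standard deviation, minimum,
    maximum, root mean square, energy, and zero-crossing count.
    """
    n = int(win_sec * fs)
    rows = []
    for start in range(0, len(x) - n + 1, n):
        w = x[start:start + n]
        rows.append([
            w.mean(),
            w.std(),
            w.min(),
            w.max(),
            np.sqrt(np.mean(w ** 2)),          # root mean square
            np.sum(w ** 2),                    # signal energy
            np.sum(np.diff(np.sign(w)) != 0),  # zero crossings
        ])
    return np.array(rows)
\end{verbatim}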
Features extracted from ECG and PPG signals in stress measurement studies include the mean and standard deviation of the R-R interval, the root mean square of successive differences of R-R intervals, heart rate, heart rate variability, mean R-peak amplitude, standard deviation, skewness, kurtosis, percentiles, geometric and harmonic means, low-frequency power, high-frequency power, power ratio, crest time, and instantaneous pulse ratio~\citep{bong2012analysis,ahn2019novel,mohino2015assessment,cho2019instant,charlton2018assessing}. Common features extracted from EEG signals in human stress measurement studies include divisional asymmetry, rational asymmetry, mean power, power spectral density, alpha asymmetry index, normalized band power, relative power, coherence, and amplitude asymmetry~\citep{arsalan2019classification,ahn2019novel,asif2019human}. In EDA-based human stress measurement studies, the statistical features of mean, standard deviation, mean of absolute values, root mean square, proportion of negative samples, slope of the EDA level, mean EDA peak rate and height, and minimum and maximum have been commonly used~\citep{giakoumis2012using,setz2009discriminating}. Feature selection is the process of selecting, from the extracted set of features, the subset that has the highest discriminative power and yields the highest classification accuracy. Different feature selection algorithms have been used in stress classification studies, including the genetic algorithm~\citep{shon2018emotional}, the t-test~\citep{saeed2020eeg}, minimum redundancy maximum relevance (mRMR)~\citep{subhani2017mrmr}, principal component analysis (PCA)~\citep{deng2012evaluating}, particle swarm optimization (PSO)~\citep{yerigeri2019meta}, wrapper-based feature selection~\citep{hasan2019hybrid}, the Bhattacharyya distance~\citep{subhani2017machine}, and independent component analysis (ICA)~\citep{palacios2019ica}. \noindent\textbf{Classification} is the last step in the human stress detection framework and an important part of the whole process. Classification can be performed either with statistical measures (t-test or ANOVA) or with machine learning techniques; in both cases, the selected or extracted set of features is fed as input to the classification stage. The t-test is an inferential statistical test aimed at finding whether there is a significant difference between the means of two groups. It assumes that the dependent variable of the data follows a normal distribution, so that the probability of a particular instance can be identified. The t-test produces a p-value, for which the commonly accepted significance threshold is 0.05; a p-value of 0.01 means that the likelihood of obtaining the observed difference between the two groups by chance is 1 in 100. The t-test is applied when the difference between two groups is sought, whereas the ANOVA test is applied to find differences among more than two groups. The second type of method used for human stress classification in the literature comprises machine learning techniques, and a wide variety of algorithms have been employed in human stress recognition studies depending on the situation. The multilayer perceptron (MLP) is a type of feed-forward neural network composed of at least three layers: an input layer, a hidden layer, and an output layer.
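Before turning to the individual classifiers, the following sketch ties the feature selection and classification steps together: features whose class-conditional means differ significantly under an independent-samples t-test are retained and fed to an SVM. The significance threshold, train/test split, and classifier settings are illustrative assumptions, not a prescription from the cited studies.
\begin{verbatim}
import numpy as np
from scipy.stats import ttest_ind
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score
from sklearn.svm import SVC

def select_and_classify(X, y, alpha=0.05):
    """X: (samples, features) matrix; y: binary stress labels (0/1)."""
    # Keep only features whose means differ significantly between
    # the two classes (independent-samples t-test, p < alpha).
    _, p = ttest_ind(X[y == 0], X[y == 1], axis=0)
    keep = p < alpha
    X_tr, X_te, y_tr, y_te = train_test_split(
        X[:, keep], y, test_size=0.3, stratify=y, random_state=0)
    clf = SVC(kernel="rbf").fit(X_tr, y_tr)
    return keep, accuracy_score(y_te, clf.predict(X_te))
\end{verbatim}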
MLP has been used for binary as well as multi-class stress classification tasks in a wide range of human stress recognition studies~\citep{arsalan2019classification,arsalan2019classification_EMBC}. Another commonly used classification technique for human stress recognition is the Naive Bayes algorithm, which is based on the Bayes probability theorem and the conditional probability rule. Some of the stress recognition studies which have used the Naive Bayes algorithm include~\citep{ahuja2019mental,saeed2017quantification}. The support vector machine (SVM) has also been used in a sizable number of human stress recognition studies. SVM is a supervised machine learning classifier that works by defining a separating hyperplane with the help of support vectors. Some of the human stress recognition studies involving the SVM classifier include~\citep{saeed2018selection,saeed2020eeg,vanitha2013hybrid,attallah2020effective}. k-nearest neighbors (kNN) is a supervised, non-linear machine learning algorithm used for classification tasks. kNN classifies a new data point by computing its distance to the k nearest training points and assigning it to the class most common among those neighbors. The value of k is usually an odd number, i.e., 1, 3, 5, etc.; a larger k averages over more neighbors, which makes the prediction less sensitive to noise but can oversmooth the class boundaries. Some of the stress classification studies which have used kNN as a classifier include~\citep{rahman2015mental,karthikeyan2012study,shon2018emotional}. Other classifiers used in stress recognition studies include logistic regression~\citep{asif2019human,vasavi2018regression}, deep belief networks~\citep{song2017development}, deep neural networks~\citep{sardeshpande2019psychological,masood2019modeling}, and random forests~\citep{uddin2019synthesizing}. The objective measures of stress can be categorized into methods based on wearable sensors and non-wearable sensors, as shown in \Fig{fig2a} and \Fig{fig3a}, respectively. The literature corresponding to each of these categories is reviewed in the following subsections. \begin{figure*} \begin{center} \begin{tabular}{c} \includegraphics[width=\linewidth]{Figure2a.png} \end{tabular} \end{center} \caption { \label{fig:fig2a} { Categorization of objective measures of stress using wearable sensors.}} \end{figure*} \begin{figure*} \begin{center} \begin{tabular}{c} \includegraphics[width=\linewidth]{Figure3a.png} \end{tabular} \end{center} \caption { \label{fig:fig3a} { Categorization of objective measures of stress using non-wearable sensors.}} \end{figure*} \subsection{Wearable Sensors based Human Stress Detection} Wearable sensors require physical devices to be connected to the body of an individual to measure the stress response of the body. The autonomic nervous system (ANS) of a human being has two branches, i.e., the sympathetic nervous system (SNS) and the parasympathetic nervous system (PNS). When a person is under stress, changes in the human ANS occur: SNS activity increases, whereas PNS activity decreases. Wearable sensor-based stress detection is important because it can overcome the limitation of incorrect self-reporting by an individual~\citep{garcia1997science,northrup1997problem}.
Wearable sensors used for human stress monitoring include electroencephalography (EEG), electromyography (EMG), galvanic skin response (GSR), also called electrodermal activity (EDA), electrocardiography (ECG), heart rate (HR), skin temperature (ST), respiratory rate (RR), heart rate variability (HRV), blood volume pulse (BVP), photoplethysmography (PPG), and salivary cortisol (SC). \subsubsection{Electroencephalography based Stress Detection} Brain activity has a strong relationship with stress~\citep{dharmawan2007analysis}. For the analysis of brain activity, functional magnetic resonance imaging (fMRI), positron emission tomography (PET), and EEG are commonly used. Among these methods, EEG is the most commonly used owing to its low cost and non-invasive nature. The field of EEG originated back in 1924, when the first EEG recording was performed~\citep{berger1929elektroenkephalogramm}. EEG is a physiological measure used by the research community as well as physicians to record brain activity for the analysis and diagnosis of brain diseases and disorders~\citep{chandra2017role}. EEG signal acquisition can be performed using commercially available consumer-grade as well as medical-grade EEG devices. Medical-grade systems are quite expensive and sophisticated and are commonly used for patient monitoring in hospitals, whereas consumer-grade EEG headsets are less expensive but not as accurate as medical-grade devices. Both types of systems can use dry as well as wet electrodes. Each type of electrode has its pros and cons, and many factors thus contribute to the choice of device for data acquisition. Consumer-grade EEG headsets have a number of electrodes ranging from $1$ to $16$~\citep{sawangjai2019consumer}, whereas medical-grade EEG caps can have from $8$ to $256$ electrodes or even more~\citep{troy2012many}. EEG electrodes are small metal plates made of steel with a silver coating, which are placed on the human scalp to record brain activity. The international 10-20 electrode positioning system specifies the electrode positions in an EEG acquisition system~\citep{trans201210}. Each electrode has a standardized name, consisting of a letter and a number, and a standardized location on the human head. The letter in the name indicates the area of the brain where the electrode is placed, e.g., F for the frontal lobe and T for the temporal lobe. The right side of the head carries even-numbered electrodes, whereas the left side carries odd-numbered electrodes~\citep{oostenveld2001five}. EEG electrodes are connected to the data acquisition system in a wired or wireless manner. When brain activity changes, the voltage level at the different electrodes varies, and these variations correspond to different diseases and disorders. Amplitude values of EEG signals are around $100 \mu V$. The EEG signal is composed of five different frequency bands, and the behavior of each frequency band differs across situations. The frequency range of the EEG signal is from about 0.5 Hz to 50 Hz. In descending order of frequency, the brain waves are gamma, beta, alpha, theta, and delta. \begin{enumerate} \item \textbf{Gamma Band:} The brain activity that lies in the range of 30 - 45 Hz is usually regarded as the gamma wave or fast beta wave. The occurrence of this wave is rare and associated with brain diseases.
The gamma wave is considered a good indicator of event-related synchronization (ERS) of the brain. Tongue movement, right and left index finger movement, and right toe movement have been demonstrated to be related to gamma waves. An association between the gamma band and human stress has been established in the literature~\citep{minguillon2016stress}.\\ \item \textbf{Beta Band:} The electrical activity of the brain that lies in the range of 14 - 26 Hz is considered a beta wave. This rhythm is found in waking normal individuals and is associated with thinking, attention, focus, and a panic state. Beta activity mainly originates in the frontal and central regions of the brain and occurs around tumor regions of the brain. Among different neural oscillations, a higher level of beta waves acts as a marker denoting that a person is not in a calm state~\citep{sanei2013eeg}. The presence of stress has been shown to increase the spectral power in the EEG beta band~\citep{saeed2015psychological,hamid2015brainwaves}.\\ \item \textbf{Alpha Band:} Alpha waves (8 - 13 Hz) can be detected in all parts of the posterior lobes of the brain and commonly appear as a sine-like or rounded signal. Relaxed alertness without attention is considered to be associated with alpha waves. The alpha wave is the most observable brain activity due to its prominence. The alpha wave is claimed to be a waiting pattern of the visual regions of the brain, as it is produced in the closed-eye state. Activities like opening the eyes, listening to unfamiliar sounds, anxiety, or mental attention can reduce or even eliminate alpha waves. It has an amplitude that is normally less than 50 $\mu$V and is found over occipital regions. Its origin and physiological significance are not fully known and require more research and experimentation. Stress has been shown to be associated with a fall in alpha waves~\citep{hoffmann2005brain}.\\ \item \textbf{Theta Band:} Theta waves (4 - 7.5 Hz) originate during drowsiness and have been associated with creative inspiration and deep meditation. The arousal of an individual is reflected in the theta wave. Pathological conditions show larger groups of abnormal theta activity in waking adults. Variations in theta activity are also used in human stress recognition studies~\citep{arsalan2019classification}.\\ \item \textbf{Delta Band:} Delta waves (0.5 - 4 Hz) are the slowest brain waves and are considered to reflect deep sleep. Newborn babies and very young children have strong delta wave activity; as the age of the individual increases, the amplitude and occurrence of delta waves are reduced. Delta waves are associated with a deep level of relaxation. These waves can be confused with muscular artifacts produced by the neck and jaw, so such artifacts need to be removed by applying simple signal processing methods to the EEG signals. \end{enumerate} Asymmetry analysis of the EEG signal is an established feature for the classification of different psychological states~\citep{gatzke2014role,giannakakis2015detection}. The asymmetry index of the EEG signal is the difference between the natural logarithm of the power in the right hemisphere and that in the left hemisphere of the brain; a minimal computation of the band powers and of this index is sketched below. Commonly used locations for the estimation of alpha asymmetry in stress-related studies are F3-F4~\citep{seo2008relation,lewis2007effect} because these locations are directly affected by stressful events~\citep{qin2009acute}.
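The following sketch estimates the five conventional band powers with Welch's method and derives an alpha asymmetry index from a homologous electrode pair such as F3-F4; the band edges follow the ranges listed above, while the sign convention (right minus left) and the Welch segment length are assumptions, since conventions differ across studies.
\begin{verbatim}
import numpy as np
from scipy.signal import welch

# Band edges in Hz, following the ranges listed above.
BANDS = {"delta": (0.5, 4), "theta": (4, 7.5), "alpha": (8, 13),
         "beta": (14, 26), "gamma": (30, 45)}

def band_powers(x, fs):
    """Absolute power per EEG band from a Welch PSD estimate."""
    f, pxx = welch(x, fs=fs, nperseg=int(2 * fs))
    return {name: np.trapz(pxx[(f >= lo) & (f <= hi)],
                           f[(f >= lo) & (f <= hi)])
            for name, (lo, hi) in BANDS.items()}

def alpha_asymmetry(left, right, fs):
    """ln(alpha power, right) - ln(alpha power, left), e.g., F4 vs F3.

    The sign convention is an assumption; studies differ on it.
    """
    return (np.log(band_powers(right, fs)["alpha"])
            - np.log(band_powers(left, fs)["alpha"]))
\end{verbatim}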
However, apart from the frontal part of the brain, stress-related studies involving the lateral region, i.e., F7-F8~\citep{lopez2012frontal}, the anterior region, i.e., Fp1-Fp2~\citep{peng2013method}, and the posterior region, i.e., T5-T6~\citep{minguillon2016stress}, have also been reported in the literature. A large number of studies agree that alpha band activity in the right frontal part of the brain dominates that in the left half under stressful conditions~\citep{acharya2012application}. This phenomenon is present in a variety of stressful situations, e.g., students feeling stressed during examinations~\citep{giannakakis2015detection}, subjects presented with a sad/happy/horror movie stimulus~\citep{lopez2012frontal,tomarken1990resting}, and even chronic stress~\citep{peng2013method}. The human stress response has been explored using the power spectrum and the relative power index in a significant number of studies~\citep{hosseini2010higher,khosrowabadi2011brain,sharma2014modeling,ko2009emotion,giannakaki2017emotional}. Alpha band activity is dominant in the relaxation phase, when cognitive demands are minimal, whereas in situations involving high strain or alertness, beta-band activity has been found to be significant~\citep{hayashi2009beta}. Even though the findings of stress-related studies are partly contradictory, stressful conditions have generally been found to decrease alpha band activity~\citep{alonso2015stress,demerdzieva2011eeg,al2015mental,tran2007detecting,seo2010stress} and increase beta band activity~\citep{katsis2011integrated}. Stress has been found to be correlated with the beta wave in the temporal part of the brain~\citep{choi2015measurement}. Coherence among different brain regions is another important factor that varies with stress: beta and theta band coherence increases, whereas alpha band coherence decreases, in the anterior locations of the brain hemispheres~\citep{alonso2015stress}. When a person is in a negative mood or depressed, alpha and beta band activity is dominant~\citep{huiku2007assessment}. Alpha band activity of the prefrontal electrodes is reduced when a person faces a stressful event~\citep{marshall2015effects}, whereas the temporal lobe electrodes show dominant alpha-band activity when the stressful event occurs~\citep{choi2015measurement}. A method to classify mental stress from resting-state EEG is proposed in~\citep{sharma2012objective}. Different levels of mental stress were measured in participants using a single-channel EEG headset, with mental workload and a public speaking task as stimuli, in a study conducted in~\citep{secerbegovic2017mental}; the alpha and beta bands of the EEG signal were found to be statistically significant, and a classification accuracy of 83.33\% for binary stress classification was reported. An EEG-based multi-level human stress classification framework using a 128-channel EEG cap and MIST as a stimulus was presented in~\citep{subhani2017machine}; the study reported average classification accuracies of 94.6\% and 83.4\% for binary and multi-level stress classification, respectively. Another study for acute stress classification, using the Stroop color-word test as a stressor and EEG as the modality, is presented in~\citep{hou2015eeg}. Two, three, and four levels of stress were measured in this study, with average classification accuracies of 85.71\%, 75.22\%, and 67.06\%, respectively.
Another human stress classification study for two- and three-class problems, with a mental arithmetic task as the stimulus and the Emotiv EPOC as the EEG data acquisition device, is presented in~\citep{jun2016eeg}; classification accuracies of 96\% and 75\% were achieved for the two- and three-level stress classes, respectively. A study in~\citep{duru2013assessment} focuses on stress level detection of surgeons via EEG during the most stressful phases of an operation. In~\citep{calibo2013cognitive}, the authors used the Stroop color-word test to elicit stress in the subjects; after applying pre-processing techniques to the EEG data, features were extracted and then classified using k-nearest neighbor and logistic regression classifiers with an accuracy of 73.96\%. In~\citep{pomer2014methodology}, the authors proposed a methodology for analyzing the stress of military firefighters based on asymmetry levels of alpha waves. In~\citep{vijayaragavan2015eeg}, the authors developed an Android application that reduces stress using music and yoga. EEG signals were recorded using the Neurosky headset and pre-processed using a high-pass filter and proprietary algorithms of Neurosky. A survey of 100 users was conducted, in which 67\% of the participants showed better relaxation waveforms in the case of yoga, 29\% of the readings showed better results in the case of music, and 4\% reported no effect. Hand movement, heart rate variation, and EEG were used to analyze stress, and office syndrome was detected with a smart watch in~\citep{reanaree2016stress}. In another study, the mind-ball game was used to establish a correlation between human stress and the winning percentage of the game; the study concludes that the person with the lower stress level wins the game more often~\citep{lin2006quantifying}. An EEG- and Hilbert-Huang transform-based stress measurement method with a support vector machine (SVM) as the classifier has been proposed in~\citep{vanitha2016real}, and another stress measurement scheme using EEG signals is proposed in~\citep{kalas2016stress}. A method for measuring mental stress based on the Montreal Imaging Stress Task (MIST) as a stimulus, power spectral density and energy as features, and SVM as a classifier is proposed in~\citep{al2015mental}. Another EEG-based mental stress classification scheme, using a mental arithmetic task as a stimulus to classify stress into three levels, is proposed in~\citep{pandiyan2013mental}. A stress classification framework in response to Urdu and English music tracks is presented in~\citep{asif2019human}; classification accuracies of 98.76\% and 95.06\% for two and three classes were achieved using a logistic regression classifier. A driver stress measurement framework using EEG signals has been presented in~\citep{halim2020identification}. In the proposed scheme, data from 86 automobile drivers were used; EEG data were recorded to log the ongoing brain activity during driving and to find a correlation between brain activity and the emotional response of the driver. Three different classification algorithms, namely SVM, neural network, and RF, were used to classify the driver's emotional state based on the labels obtained from a self-reported questionnaire. The SVM classifier performed best among them, with a classification accuracy of 97.95\% for the stressed vs relaxed state. All of the EEG stress detection methods discussed above address acute stress; EEG signals have also been used to assess and classify perceived stress.
An EEG-based study to identify the appropriate phase of EEG recording for the classification of perceived stress is presented in~\citep{arsalan2019classification}. EEG data of the participants were recorded for a duration of three minutes in the open-eye condition before and after performing a public speaking activity. The study concluded that the pre-activity phase was better suited for perceived stress classification. Two-level and three-level stress classification was performed, and classification accuracies of 92.85\% and 64.28\% were achieved, respectively. A perceived stress classification study using closed-eye resting-state EEG data is presented in~\citep{saeed2015psychological}. The authors reported that a relationship exists between the PSS questionnaire score and the EEG data of the subject: the beta band of the EEG signal was found to be directly proportional to the level of perceived stress, i.e., individuals with high perceived stress have increased beta-band activity and vice versa. Another stress quantification study based on a single-channel EEG headset, using resting-state, closed-eye EEG data, is presented in~\citep{saeed2017quantification}. Multiple linear regression analysis was performed, and the beta waves of the EEG signals were found to be the optimum frequency band for the prediction of the PSS questionnaire score of the subject at a 94\% confidence level. The correlation-based feature selection (CFS) method has been used in an EEG-based perceived human stress classification scheme proposed in~\citep{saeed2018selection}; CFS indicated that the beta and low gamma frequency bands have the highest correlation with the PSS score of an individual. A significant difference in the energy spectral density of the alpha and beta bands of the EEG signals in the right and left hemispheres of the brain between stressed and non-stressed individuals was observed in a study conducted in~\citep{hamid2015brainwaves}. Alpha asymmetry has been found to be a useful marker of the relationship between human stress and EEG signals in a study conducted in~\citep{sulaiman2011intelligent}: the left hemisphere of the brain shows strong activity in individuals with low chronic stress, whereas the right hemisphere shows strong activation in subjects with moderate and high chronic stress. A study conducted in~\citep{hamid2010evaluation} found that the PSS questionnaire score and the ratio of the alpha and beta bands of the EEG signal are negatively correlated; individuals with a high PSS score were found to have a negative ratio, whereas individuals with a low PSS score had a positive ratio. The correlation of EEG temporal characteristics with the PSS questionnaire score is presented in~\citep{luijcks2015influence}. The study concluded that the theta and delta waves of the EEG signals of participants with a high PSS questionnaire score show increased activation in the post-stimulus phase compared to the pre-stimulus phase; moreover, theta band activity in the frontal part of the brain was higher in the post-stimulus phase. Another perceived stress classification study using resting-state EEG data, based on both PSS questionnaire and psychologist interview labeling, is presented in~\citep{saeed2020eeg}; using the psychologist interview labeling, a classification accuracy of 85.20\% was achieved.
\Tab{tab1} presents a summary of human stress classification schemes using EEG signals. \begin{table} \caption{Summary of Human Stress Detection Studies using EEG signals.} \label{tab:tab1} \scalebox{0.9}{ \begin{tabular}{cccccccc} \hline\noalign{\smallskip} \thead{Method} & \thead{Type of\\Stress} & \thead{Number of\\Subjects (M/F)} & Age & \thead{Stimulus} & \thead{Features\\Domain} & Classifier & \thead{Accuracy (Classes)}\\ \noalign{\smallskip}\hline\noalign{\smallskip} ~\citep{halim2020identification} & Acute & 86 & -- & Driving & \thead{Time and \\Frequency} & \thead{SVM, RF, NN} & 97.95\% (2) \\ ~\citep{subhani2017machine} & Acute & 22 & 19-25 & \thead{Montreal Imaging \\Stress Task (MIST)} & Frequency & \thead{LR, \\SVM, NB} & \thead{94.60\% (2)\\ 83.40\% (multilevel)} \\ ~\citep{dharmawan2007analysis} & Acute & 20 (18/2) & 20-35 & Game & Frequency & \thead{DT} & 79.08\% \\ ~\citep{secerbegovic2017mental} & Acute & 9 (6/3) & 19.3-22.7 & \thead{Mental Arithmetic \\Task (MAT), \\Computer Games} & \thead{Time and \\Frequency} & SVM & 86.66\% (3) \\ ~\citep{hou2015eeg} & Acute & 9 & 21-28 & \thead{Stroop colour\\word test} & Frequency & SVM & \thead{85.71\% (2)\\75.22\% (3)\\ 67.06\% (4)} \\ ~\citep{jun2016eeg} & Acute & 10 (9/1) & 20-35 & \thead{Mental Arithmetic \\Task (MAT),\\Stroop colour\\word test} & Frequency & SVM & \thead{75.00\% (3)\\96.00\% (2)\\ 88.00\% (2)} \\ ~\citep{calibo2013cognitive} & Acute & 18 & -- & \thead{Stroop colour\\word test} & Frequency & LR, kNN & 73.96\% (2) \\ ~\citep{hosseini2010higher} & Acute & 15 (15/0) & 20-24 & \thead{International \\Affective Picture \\System (IAPS)} & Frequency & SVM & 82.00\% (2) \\ ~\citep{giannakaki2017emotional} & Acute & 5 (5/0) & 22-38 & \thead{International \\Affective Picture \\System (IAPS)} & Frequency & RF & 75.12\% (2) \\ ~\citep{al2015mental} & Acute & 12 (12/0) & 20-24 & \thead{Montreal Imaging \\Stress Task (MIST)} & Wavelet & SVM & \thead{94.00\% (L1),\\ 85.00\% (L2),\\ 80.00\% (L3)} \\ ~\citep{vanitha2016real} & Acute & 6 & -- & \thead{Mathematical\\questions} & Frequency & \thead{hierarchical\\SVM} & 89.07\% (2) \\ ~\citep{asif2019human} & Acute & 27 (13/14) & 20-35 & Music Tracks & Frequency & SMO, LR & \thead{98.76\% (2) \\95.06\% (3)} \\ ~\citep{khosrowabadi2011brain} & Chronic & 26 (20/6) & 18-30 & \thead{University \\Exam} & Frequency & kNN, SVM & 90.00\% (2) \\ ~\citep{saeed2015psychological} & Chronic & 28 (18/10) & 22-33 & Baseline & Frequency & SVM & 71.42\% (2) \\ ~\citep{saeed2017quantification} & Chronic & 28 (18/10) & 22-33 & Baseline & Frequency & NB & 71.42\% (2) \\ ~\citep{saeed2018selection} & Chronic & 28 (18/10) & 22-33 & Baseline & Frequency & SVM & 78.57\% (2) \\ ~\citep{saeed2020eeg} & Chronic & 33 (20/13) & 18-40 & Baseline & Frequency & SVM & 85.20\% (2) \\ ~\citep{arsalan2019classification} & Chronic & 28 (13/15) & 18-40 & Baseline & Frequency & MLP & \thead{92.85\% (2) \\ 64.28\% (3)} \\ \noalign{\smallskip}\hline \end{tabular} } \begin{tablenotes} \item[*] LR: Logistic Regression, SVM: Support Vector Machine, kNN: k- Nearest Neighbors, NB: Naive Bayes, SMO: Sequential minimal optimization, RF: Random Forest, MLP: Multilayer Perceptron, DT: Decision Tree, NN: Neural Networks \end{tablenotes} \end{table} \subsubsection{Electromyography based Stress Detection} EMG is a biomedical signal that deals with the electric current generated in the muscles of the human body during its contraction representing neuromuscular activities. EMG signal is recorded via a device called an electromyograph. 
EMG signals are recorded by placing sensors near the muscle whose movement needs to be measured. The amplitude of the signal lies in the range of 1-10 mV, and the frequency range of the EMG signal is 0-500 Hz, with the dominant frequencies lying between 50-150 Hz~\citep{de2002surface}. EMG is a complex signal, controlled by the human nervous system and strongly dependent on the physiological and anatomical characteristics of the skeletal muscles. EMG signals become noisy while traveling through different tissues of the human body. Moreover, the EMG data acquisition device acquires signals from various motor units, resulting in an overlap of other muscles' movements with the desired muscle movement. Recently, the measurement of EMG signals using sophisticated equipment has gained a lot of interest in the field of biomedical engineering~\citep{farfan2010evaluation}. EMG has also been a focus of biomedical experts because of its clinical and diagnostic applications. Robotic arm control and the rehabilitation of patients have been identified as key areas of application for EMG signal recording. Motor Unit Action Potentials (MUAPs), in particular their shapes and firing rates, are useful for the treatment of a variety of neuromuscular disorders. Advancements in the available signal processing techniques have made the design and development of state-of-the-art EMG detection and diagnosis techniques a practical possibility. A wide range of mathematical and artificial intelligence (AI) based techniques has gained attention~\citep{reaz2006techniques}. Mathematical models used for EMG signal analysis include the wavelet transform, the Wigner-Ville distribution (WVD), the Fourier transform, and higher-order statistics. On the other hand, the AI-based techniques used include artificial neural networks, dynamic recurrent neural networks, and fuzzy logic. The recorded EMG signal faces two important challenges: (i) the signal-to-noise ratio (i.e., the ratio of the energy of the EMG signal to the energy of the noise) and (ii) distortion of the recorded EMG signal (i.e., there should be no alteration in the contribution of each frequency component of the signal). A typical EMG recording is done in two phases, i.e., a baseline recording and an EMG recording in response to some stimulus, and the response is then quantified as the ratio of the stimulus-response recording to the baseline recording. The baseline recording is necessary because this level is different for every individual, depending on a variety of factors~\citep{weyers2006electromyographic}. Facial EMG has been extensively used in the literature to record facial expressions in response to some kind of stimulus, a technique reported by Ekman and Friesen in their study in~\citep{ekman1978technique}. The relationship between human stress and the EMG signal has been discussed in a wide range of studies in the literature. A study to investigate the relationship between changes in human stress level and muscular tension via the EMG signal is presented in~\citep{karthikeyan2012emg}. The Stroop color-word test was used as a stimulus, and EMG data were acquired from the left trapezius muscle of the participants. Pre-processing of the acquired EMG data was performed using a wavelet de-noising technique, and time-domain features were extracted from the data. A kNN classifier was used, and a classification accuracy of 90.70\% was achieved.
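To make this kind of EMG processing pipeline concrete, a minimal Python sketch is given below; it band-pass filters a raw surface EMG trace around the dominant frequency band noted above and extracts two widely used time-domain features. The sampling rate, filter order, band edges, and the synthetic placeholder signal are assumptions made purely for illustration and are not the settings of any study cited here.
\begin{verbatim}
import numpy as np
from scipy.signal import butter, filtfilt

fs = 1000.0                      # assumed sampling rate in Hz
t = np.arange(0, 5, 1 / fs)      # five seconds of data
emg = np.random.randn(t.size)    # placeholder for a recorded EMG trace

# 4th-order Butterworth band-pass around the dominant 50-150 Hz EMG band
b, a = butter(4, [50 / (fs / 2), 150 / (fs / 2)], btype="band")
emg_filtered = filtfilt(b, a, emg)

# Simple time-domain features of the kind fed to classifiers such as kNN
rms = np.sqrt(np.mean(emg_filtered ** 2))   # root mean square amplitude
mav = np.mean(np.abs(emg_filtered))         # mean absolute value
print(f"RMS = {rms:.4f}, MAV = {mav:.4f}")
\end{verbatim}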
A study to validate the stress-EMG paradigm using an unpredictable and uncontrollable stimulus is presented in~\citep{luijcks2014experimentally}. The stimulus given to the participants was an electro-shocker delivering electric shocks of 10 milliseconds duration. The experiment comprised a 3-minute baseline recording, a 3-minute recording before the stimulus, and a 2-minute post-stimulus recording. EMG activity of the trapezius muscles was significantly higher in the pre-stimulus phase when compared to the other two phases of the experiment. The study concluded that the presented stimulus is a reliable and valid test to identify the difference between stressed and non-stressed individuals. The activities of human muscles like the trapezius are associated with stress~\citep{lundberg1994psychophysiological,wijsman2010trapezius,larsson1995effects}. In~\citep{lundberg1994psychophysiological}, the authors designed an experimental study to investigate the effects of mental stress and physical workload, separately and in combination, on perceived human stress, physiological signals, and the muscular tension faced by an individual using the EMG signal. The stressors given to the subjects include a mental arithmetic task, the Stroop color-word test (SWT), a cold pressor test, standardized test contractions (TC), and a combination of the SWT with TC. The results indicate that, when compared to the baseline recording, the stressors induced an increase in blood pressure, heart rate, salivary cortisol, urinary catecholamines, and self-reported questionnaire scores. The mental arithmetic task caused a significant increase in EMG activity, and the SWT, when used alongside TC, produced more pronounced changes in the EMG signal than the SWT alone. The study concluded that an increase in muscular tension was observed when facing mental stress alone as well as when facing mental stress combined with a physical workload. Human stress measurement using EMG signals recorded from the upper trapezius muscle is presented in a study conducted in~\citep{wijsman2010trapezius}. The authors designed two new stress measurement tests for the experiment. Three different stressful situations, which include a memory task, a logical puzzle, and a calculation task, were presented to the subjects, and EMG signals of the upper trapezius muscle were recorded. The study revealed that the EMG activity of the upper trapezius muscle was higher when facing a stressor as compared to the rest condition, thus making the EMG signal a good indicator of mental stress. Another study to correlate mental stress, blood flow, and the EMG signals recorded from the upper trapezius muscles using the Stroop color-word test is presented in~\citep{larsson1995effects}. The study concluded that there was a decrease in muscle blood flow and an increase in heart rate during the stressor phase. EMG activity of the trapezius muscle is increased in response to stress during the cold pressor and Stroop color-word tests~\citep{krantz2004consistency}. Moreover, an increase in blood pressure, heart rate, and urinary epinephrine and norepinephrine was observed when facing the stressor, but no correlation could be found with the salivary cortisol measure. Another important finding of the study was that men have higher blood pressure and a larger increase in epinephrine as compared to women, whereas women have an increased heart rate as compared to men. A positive correlation between negative stress ratings and the EMG signal during work has been found in a study conducted in~\citep{rissen2000surface}.
A study reported that the EMG of the trapezius muscle is increased under both low and high mental workload during computer data entry work~\citep{schleifer2008mental}. Moreover, the decrease in the EMG gaps of both the left and right trapezius muscles was greater during high mental workload as compared to low mental workload. Another study to measure the influence of EMG-based methods and human stress-based methods on the shoulder muscle forces is presented in~\citep{engelhardt2015comparison}. Another study, analyzing the stress in the lower back of a subject while at work for different posture positions using the EMG signal, is presented in~\citep{tyagi2017stress}. The study finds application in the area of chair design for a comfortable sitting posture of an individual at work. \Tab{tab2} presents a summary of human stress classification schemes using EMG signals.
\begin{table}
\caption{Summary of Human Stress Detection Studies using EMG signals.}
\label{tab:tab2}
\scalebox{0.9}{
\begin{tabular}{cccccccc}
\hline\noalign{\smallskip}
\thead{Method} & \thead{Type of\\Stress} & \thead{Number of\\Subjects (M/F)} & Age & \thead{Stimulus} & \thead{Features\\Domain} & Classifier & \thead{Accuracy (Classes)}\\
\noalign{\smallskip}\hline\noalign{\smallskip}
~\citep{karthikeyan2012emg} & Acute & 10 (0/10) & -- & \thead{Stroop colour \\word test} & Wavelet & kNN & 90.70\% (4) \\
\noalign{\smallskip}\hline
\end{tabular}
}
\begin{tablenotes}
\item[*] kNN: k-Nearest Neighbors
\end{tablenotes}
\end{table}
\subsubsection{GSR based Stress Detection}
The skin response of an individual is affected whenever we confront an emotional stimulus like listening to audio, watching a video, or an emotional real-life event. It is pertinent to mention that whatever the reason for the emotional arousal, i.e., whether it is due to happiness and excitement or due to fear, anger, depression, or stress, the skin response of the person changes~\citep{farnsworthgsr}. The response of the human skin is not under human conscious control~\citep{udovivcic2017wearable}; it depends on the changes in the sweating pattern of a subject and thus reflects the behavior of the sympathetic nervous system~\citep{wu2010analysis}. Another study supports the fact that signals are generated by the sympathetic nervous system when a change in the skin conductance of a person occurs~\citep{lidberg1981sympathetic}. A sweat reaction occurs due to any emotional change and can be observed at the fingers and palm. The amount of salt on the human skin varies as a result of a sweat reaction, thus causing a change in the electrical conductance of the skin~\citep{ayata2017emotion}. As time goes on, the sweat glands of a person become more active, resulting in an imbalance of positive and negative ions and, as a result, affecting the flow of current through the skin~\citep{critchley2002electrodermal}. GSR measurement locations are parts of the body with a large number of sweat glands. There exists a variety of possible locations on the human body for the measurement of GSR. Common locations for measuring the skin response include the fingers, shoulders, feet, and wrist of a person. According to studies, the palm and fingers have the highest density of sweat glands and are therefore used as locations for GSR recording in experiments. GSR activity is typically measured in “micro-Siemens ($\mu S$)” or the equivalent “micro-Mho”.
Sweat secretion increases when a person faces an emotional stimulus, whether positive or negative, and due to this, measurable changes in skin conductance occur. One of the most widely used measures of skin activity is GSR, which is also called Electrodermal Activity (EDA) or Skin Conductance (SC). EDA is a physiological measure of the flow of electricity through human skin. Even a small amount of sweating, not visible to the naked eye on the surface of the human skin, causes a change in its electrical conductivity. EDA can be divided into (i) the Skin Conductance Level (SCL), which is the slowly changing part of the EDA, (ii) the Skin Conductance Response (SCR), which corresponds to the peaks in the EDA due to some kind of stimulus, and (iii) the Non-specific Skin Conductance Response (NS.SCR), which exists even in the absence of any external stimulus. The pattern of the skin response data is distinct according to the state of the person and is considered one of the more reliable stress measurement methods~\citep{kurniawan2013stress}. The SCR part of the EDA increases when a person encounters an emotionally arousing situation~\citep{dawson2007electrodermal}. The NS.SCR part of the EDA corresponds to cognitive processes and psycho-physiological states~\citep{nikula1991psychological}. The skin conductance of a person increases when the person is stressed, whereas skin conductance is reduced when the person is relaxed~\citep{liao2005real}. GSR has also been used for cognitive load measurement in the literature~\citep{shi2007galvanic}. The index and middle fingers of the hand of the subject are commonly used as locations for the placement of GSR electrodes because they contain a sufficient number of sweat glands to measure the skin response changes. The use of the GSR sensor for stress measurement has been the focus of the study in~\citep{healey2000wearable}. Whenever a person is under stress, the moisture of the human skin increases, resulting in an increase in the SCL~\citep{giakoumis2012using,blechert2006identifying,ritz2000emotions,reinhardt2012salivary,hoehn1989somatic} and SCR~\citep{setz2009discriminating,ren2012affective,lee2004development,blechert2006identifying,hoehn1989somatic,nomikos1968surprise,lanzetta1976effects} parts of the electrodermal activity. SCR peaks commonly appear between 1.5 and 6.5 seconds after the start of the stimulus. In another study, SCL was found to be the most effective marker for measuring stress in comparison to HRV and EMG. Some of the features commonly extracted from the electrodermal activity for human stress detection include the SCR frequency, SCR amplitude, SCR latency, SCR rise time, SCR half-recovery time, SCR recovery time, and SCR response onset. Another interesting fact, observed in a study conducted in~\citep{nomikos1968surprise}, is that even the expectation of a stressful situation that has not yet occurred can cause an increase in the EDA similar to that observed when the event has actually occurred. Other factors that can affect the GSR measurement include the temperature and humidity of the environment. In summary, it can be concluded that the SCR and SCL parts of the EDA consistently increase under stress conditions. A chronic stress measurement mechanism using GSR signals is presented in~\citep{panigrahy2017study}. Data of the participants are recorded in three different states, i.e., sitting, standing, and sleeping. Stressed and relaxed conditions are discriminated with a classification accuracy of 76.5\%.
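As an illustration of the SCL/SCR decomposition described above, the following minimal Python sketch separates an EDA trace into a slowly varying tonic level and a phasic residual and then counts SCR peaks. The sampling rate, low-pass cut-off, peak threshold, and the synthetic input are illustrative assumptions only; dedicated EDA toolboxes use more elaborate decomposition methods.
\begin{verbatim}
import numpy as np
from scipy.signal import butter, filtfilt, find_peaks

fs = 32.0                                    # assumed EDA sampling rate (Hz)
n = int(fs * 120)                            # two minutes of data
eda = np.random.randn(n).cumsum() / 100 + 5  # placeholder conductance trace

# Tonic level (SCL): low-pass filter well below typical SCR frequencies
b, a = butter(2, 0.05 / (fs / 2), btype="low")
scl = filtfilt(b, a, eda)

# Phasic component (SCR): what remains after removing the tonic level
scr = eda - scl

# SCR peaks above an assumed amplitude threshold, at least 1 s apart
peaks, _ = find_peaks(scr, height=0.05, distance=int(fs))
print(f"SCR frequency: {len(peaks)} peaks in {n / fs:.0f} s")
\end{verbatim}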
Another human stress measurement scheme for the office environment using the physiological GSR signal is proposed in~\citep{hernandez2011call}. The self-reported measures obtained from the employees were analyzed, and call logs were checked to determine the number of stressed and non-stressed calls. The SVM classifier was used to classify the stressed and non-stressed individuals, and a classification accuracy of 73\% was achieved. Another framework for the measurement of stress at the workplace using the EDA signal is proposed in~\citep{kocielnik2013smart}. The Self-Assessment Manikin questionnaire was used as a subjective measure, and pre-processing of the data was performed by removing the first 15 seconds and the last 10 seconds of the recorded signal. The authors did not report any classification accuracy, but they stated that the results obtained from the analysis are meaningful and provide useful information. \Tab{tab3} presents a summary of human stress classification schemes using GSR signals.
\begin{table}
\caption{Summary of Human Stress Detection Studies using GSR signals.}
\label{tab:tab3}
\scalebox{0.9}{
\begin{tabular}{cccccccc}
\hline\noalign{\smallskip}
\thead{Method} & \thead{Type of\\Stress} & \thead{Number of\\Subjects (M/F)} & Age & \thead{Stimulus} & \thead{Features\\Domain} & Classifier & \thead{Accuracy (Classes)}\\
\noalign{\smallskip}\hline\noalign{\smallskip}
~\citep{healey2000wearable} & Acute & 9 & -- & \thead{Car\\driving} & Time & LDA & 96.00\% (4) \\
~\citep{blechert2006identifying} & Acute & 42 (14/28) & 42.2$\pm$9.9 & Pictures & Time & DFA & 83.30\% (2) \\
~\citep{setz2009discriminating} & Acute & 33 (33/0) & 24.06 & \thead{Montreal Imaging \\Stress Task (MIST)} & Time & LDA & 82.80\% (2) \\
~\citep{ren2012affective} & Acute & 30 (14/16) & 26.8$\pm$2.56 & \thead{Stroop color\\ word test} & Time & NB & 85.5\% (2) \\
~\citep{lee2004development} & Acute & 80 & -- & \thead{Stroop color\\ word test} & Time & \thead{MLP, GRNN,\\ ANFIS} & 96.67\% (2) \\
~\citep{panigrahy2017study} & Acute & 10 & -- & \thead{Computer \\game} & Time & J48 & 76.50\% (2) \\
~\citep{hernandez2011call} & Acute & 9 (4/5) & -- & \thead{Call center} & Time & SVM & 73.41\% (2) \\
\noalign{\smallskip}\hline
\end{tabular}
}
\begin{tablenotes}
\item[*] NB: Naive Bayes, MLP: Multilayer Perceptron, J48: Decision Tree, LDA: Linear Discriminant Analysis, DFA: Discriminant Function Analysis, GRNN: Generalized Regression Neural Network, ANFIS: Adaptive Network-based Fuzzy Inference System, SVM: Support Vector Machine
\end{tablenotes}
\end{table}
\subsubsection{Electrocardiography based Stress Detection}
ECG is one of the most commonly used techniques for monitoring the functionality of the heart. ECG is a non-invasive modality used for the assessment of the electrical activity of the heart in real-time. The activity of the heart is correlated with the human central nervous system. Apart from the monitoring of heart functionality, ECG is also useful for human stress measurement~\citep{ahn2019novel}. The most commonly used method for the measurement of ECG is the 12-lead ECG technique. In this technique, nine measurement sensors are placed on the human body at specified locations. Three main sensors are placed on the right arm, the left arm, and the left leg of the person. An additional sensor placed on the right leg acts as a reference electrode for the ECG acquisition system. Even though the complete picture of the heart cannot be obtained using only these three main sensors, a physician can use such a reduced scheme for quick analysis in case of emergency treatment.
For higher-resolution results, six additional sensors are placed on the chest of the individual. Using these nine sensors, along with the limb leads (Lead I, Lead II, and Lead III) interconnecting them, results in a total of twelve leads. One of the most important advantages of this twelve-lead system is that it gives detailed information about the heart activity of the subject, thus leading to a better diagnosis and cure, whereas its largest disadvantage is that it produces a huge amount of data, especially when the recording is done over many hours. ECG signals are characterized by peaks that include P, Q, R, S, T, and U. Each of these peaks has its own characteristics and gives specific information about the heart activity of the individual~\citep{al2007hardware}. Commonly used parameters for the assessment of ECG signals include the P wave, the PR interval, the QRS complex, and the QT interval. For medical purposes, all four of these parameters are evaluated. For other applications, some peaks may be more important than others. A wide range of studies for human stress measurement using ECG signals has been presented in the literature~\citep{karthikeyan2012study,karthikeyan2011ecg}. A human stress classification scheme using the ECG signal and a mental arithmetic task as the stimulus is presented in~\citep{karthikeyan2012study}. Statistical features are extracted using the discrete wavelet transform, and the low- and high-frequency bands of the ECG signal are analyzed separately. Three-level human stress classification, i.e., low stress, medium stress, and high stress, is performed using a kNN classifier. Maximum classification accuracies of 96.3\% and 75.9\% are achieved for the low-frequency and high-frequency bands, respectively, using the covariance feature. Another human stress recognition framework using ECG and the discrete wavelet transform is presented in~\citep{karthikeyan2011ecg}. The Stroop color-word test is used as the stimulus, and heart rate variability is extracted as a feature from the recorded ECG signal. An accuracy of 96.41\% is achieved for stressed vs relaxed state classification using the kNN classifier. Another human stress assessment scheme, based on a mono-fuzzy index extracted from ECG or GSR signals, is presented in~\citep{charbonnier2018multi}. Four different stress tasks are used in the experiment, which include a mental arithmetic stress task, a mental arithmetic control task, the Trier social stress test, and a Trier social control test, and a classification accuracy of 72\% was achieved for the stress and no-stress classes. In~\citep{liu2014listen}, the authors presented a stress classification scheme using ECG signals. The dataset used in the experiment was adopted from PhysioNet~\citep{goldberger2000physiobank}, and it consists of physiological signals of GSR and ECG recorded while drivers were in a rest state or experiencing stressful events. Time- and frequency-domain features of the heart rate variability and the spectral power of the ECG signal are used. An F-measure of 0.85 is achieved for stress classification using an SVM classifier. Acute stress classification using ECG signals is presented in~\citep{tanev2014classification}. Four different stimuli, which include images, audio, mental tasks, and a rest state, are used in the experiment. Linear as well as non-linear features are extracted from the HRV data obtained from the ECG signals, and a classification accuracy of 80\% is achieved for acute stress classification.
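Since most of the ECG-based schemes above start from the R peaks of the signal, the following minimal Python sketch locates R peaks in a single-lead trace and derives the RR-interval series from which heart rate and HRV features are computed. The sampling rate, the percentile-based amplitude threshold, the refractory period, and the synthetic input are assumptions made for illustration only.
\begin{verbatim}
import numpy as np
from scipy.signal import find_peaks

fs = 250.0                                  # assumed ECG sampling rate (Hz)
ecg = np.random.randn(int(fs * 60))         # placeholder for a recorded lead

# R peaks are the dominant positive deflections; a 0.3 s refractory
# period keeps neighbouring detections on separate heartbeats.
r_peaks, _ = find_peaks(ecg,
                        height=np.percentile(ecg, 99),
                        distance=int(0.3 * fs))

rr_intervals = np.diff(r_peaks) / fs        # RR intervals in seconds
heart_rate = 60.0 / np.mean(rr_intervals)   # mean heart rate in bpm
print(f"{len(r_peaks)} beats, mean HR = {heart_rate:.1f} bpm")
\end{verbatim}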
ECG signals have also been analyzed for human stress classification in~\citep{bong2012analysis}. Time-domain features, which include the heart rate, the mean R-peak amplitude, and the mean R-R interval, are extracted from the ECG signal. kNN and SVM were used for classification, and mean classification accuracies of 77.69\% and 66.49\% were achieved for two and three classes, respectively. Short-term ECG and heart rate variability signals are used for human stress classification in a study conducted in~\citep{karthikeyan2013detection}. The Stroop color-word test is used as the stimulus, and pre-processing of the acquired data is performed using a wavelet de-noising algorithm. Frequency-domain features are extracted from the HRV signal, which is obtained from the recorded ECG signals. Classification is performed using probabilistic neural network and kNN classifiers, with an average achieved accuracy of 91.66\%. A driver stress recognition framework using ECG is proposed in~\citep{keshan2015machine}. The study aimed at the classification of stress at three different levels, i.e., low, medium, and highly stressed. Seven different classifiers, which include the Naive Bayes, logistic regression, multilayer perceptron, SVM, J48, kNN, and random forest classifiers, are used for classification purposes. The decision tree algorithm gave the best classification results, with an achieved accuracy of 88\% for three classes. A stress measurement mechanism for students during an oral exam using ECG signals is presented in~\citep{castaldo2016detection}. ECG data of the students are recorded during the oral exam as well as after the vacations, with the latter acting as the baseline recording. Time- and frequency-domain features are extracted from the recorded data and subjected to classification using Naive Bayes, decision tree, SVM, and multilayer perceptron classifiers. The best classification accuracy of 80\% is achieved using the decision tree classifier. \Tab{tab4} presents a summary of human stress classification schemes using ECG signals.
\begin{table}
\caption{Summary of Human Stress Detection Studies using ECG signals.}
\label{tab:tab4}
\scalebox{0.9}{
\begin{tabular}{cccccccc}
\hline\noalign{\smallskip}
\thead{Method} & \thead{Type of\\Stress} & \thead{Number of\\Subjects (M/F)} & Age & \thead{Stimulus} & \thead{Features\\Domain} & Classifier & \thead{Accuracy (Classes)}\\
\noalign{\smallskip}\hline\noalign{\smallskip}
~\citep{karthikeyan2012study} & Acute & 10 (0/10) & 20-25 & \thead{Mental arithmetic \\ task} & Wavelet & kNN & 96.30\% (4) \\
~\citep{karthikeyan2011ecg} & Acute & 10 (0/10) & 20-25 & \thead{Stroop color\\word test} & Wavelet & kNN & 96.41\% (2) \\
~\citep{charbonnier2018multi} & Acute & 20 & 19-30 & \thead{Stroop color\\ word test} & Frequency & \thead{mono-feature \\fuzzy index} & 72.00\% (4) \\
~\citep{tanev2014classification} & Acute & 10 (8/2) & 22-26 & \thead{IAPS, IADS} & \thead{Time and \\Frequency} & NB & 90.00\% (2) \\
~\citep{bong2012analysis} & Acute & 5 & -- & Audio-visual & Time & SVM, kNN & \thead{77.69\% (2) \\ 66.49\% (3)} \\
~\citep{karthikeyan2013detection} & Acute & 60 (30/30) & 21-25 & \thead{Stroop color\\word test} & \thead{Time and \\Frequency} & kNN, PNN & 91.66\% (2) \\
~\citep{keshan2015machine} & Acute & 17 & -- & Driving & Time & \thead{NB, LR, MLP, \\SVM, DT, kNN, RF} & 88.00\% (3) \\
~\citep{castaldo2016detection} & Acute & 42 & -- & \thead{Oral \\Examination} & \thead{Time and \\Frequency} & \thead{NB, DT,\\SVM, MLP} & 80.00\% (2) \\
\noalign{\smallskip}\hline
\end{tabular}
}
\begin{tablenotes}
\item[*] NB: Naive Bayes, MLP: Multilayer Perceptron, LR: Logistic Regression, DT: Decision Tree, SVM: Support Vector Machine, kNN: k-Nearest Neighbors, RF: Random Forest, PNN: Probabilistic Neural Network, IAPS: International Affective Picture System, IADS: International Affective Digital Sounds
\end{tablenotes}
\end{table}
\subsubsection{Heart Rate based Stress Detection}
HR is one of the most widely used measures of human stress available in the literature. Heart rate is defined as the number of heartbeats in one minute and is measured in beats per minute (bpm). The RR interval of the ECG signal, which is defined as the interval between consecutive heartbeats, has an inverse relationship with the heart rate of a person: $HR = 60/RR$, with $RR$ expressed in seconds, so that, for example, an average RR interval of 0.8 s corresponds to a heart rate of 75 bpm. In the literature, there exists a large number of studies which report a significant increase in heart rate when facing a stressful situation~\citep{giannakakis2017stress,engert2014exploring,vinkers2013effect,lundberg1994psychophysiological,krantz2004consistency,finsen2001muscle,reinhardt2012salivary,acerbi2016wearable,moriguchi1992spectral,steptoe2001acute,ring2002shifting,tugade2004resilient,vuksanovic2007heart,schubert2009effects,clays2011perception,lackner2011phase,van2015ambulatory}. A human stress detection framework based on facial cues, using features of eye movement, mouth activity, head movement, and heart rate acquired via the PPG signal, is presented in~\citep{giannakakis2017stress}. Four different stressors, which include social exposure, emotion recall, stressful images, and stressful videos, are used in the experiment. Feature selection is applied to select the optimum set of features for the discrimination of the stress state from the neutral state. Five different classifiers, which include kNN, the generalized likelihood ratio, SVM, Naive Bayes, and the AdaBoost classifier, are employed for classification purposes.
A maximum classification accuracy of 91.68\% is achieved by the AdaBoost classifier using the social exposure stressor. A comparison of thermal infrared imaging for the measurement of human stress with the other stress biomarkers of heart rate, heart rate variability, alpha-amylase, cortisol, and finger temperature is presented in~\citep{engert2014exploring}. Two different stressors, which include the cold pressor test and the Trier social stress test, are used in the experiment. The study reported that under stressful situations the heart rate of the subjects increased, whereas it decreased in the recovery phase. In another study, conducted in~\citep{lundberg1994psychophysiological}, the authors reported an increase in the heart rate of the individuals under stress conditions as compared to the baseline rest-state condition. Women are found to have an increased heart rate as compared to men when facing stressors, whereas men have higher blood pressure than women under stress conditions, in a study presented in~\citep{krantz2004consistency}. The cardiovascular response of individuals to computer mouse work with and without memory demands is discussed in a study presented in~\citep{finsen2001muscle}. The study found that with increasing memory demands, the heart rate of the individuals increased. A new stress induction protocol, named the Mannheim Multi-component Stress Test (MMST), is designed in a study conducted in~\citep{reinhardt2012salivary}. The MMST protocol includes mental arithmetic tasks, affective images, sounds, and motivational stressors. The heart rate of the subjects is found to show an increasing pattern when facing the stressor. Another human stress measurement study based on wearable sensors is presented in~\citep{acerbi2016wearable}. The mean heart rate feature is extracted from the recorded heart rate variability signal. The study concluded that stressed and non-stressed individuals show significant differences in heart rate. The influence of acute mental stress on the cardiovascular response and the concentrations of inflammatory cytokines is examined in a study conducted in~\citep{steptoe2001acute}. The study reported that participants have an increased heart rate and blood pressure when facing stressors as compared to the baseline condition. Another human stress measurement study, conducted in~\citep{vuksanovic2007heart}, showed that the increase in the heart rate of the subject in the mental stress aloud condition is due to changes in the autonomic modulation of the spectral power of the high-frequency bands of the ECG signal. A study to examine the effects of chronic and short-term stress on heart rate and heart rate variability is presented in~\citep{schubert2009effects}. A speech task has been used as the stressor for the experiment, and time-, frequency-, and phase-domain measures were examined. The study reported the finding that the heart rate of the subjects increased significantly when performing the public speaking task as compared to the rest state. A study to correlate the perception of work stressors with measures of heart rate variability is presented in~\citep{clays2011perception}. Correlation analysis, multiple linear regression, and ANOVA are used to analyze the HRV signal. The mean heart rate is extracted as a feature from the HRV signal, and it is found that the mean HR was raised in the high work stressor group as compared to the low stressor group.
On the contrary, a few studies exist in the literature which report no change in heart rate under stress~\citep{mcduff2014remote,blechert2006identifying,hynynen2011incidence,cinaz2013monitoring,mcduff2016cogcam}. Remote measurement of cognitive stress using heart rate, heart rate variability, and breathing rate has been performed in a study conducted in~\citep{mcduff2014remote}. Physiological data are acquired from the participants in the rest state and in a stress condition, i.e., while performing a mental arithmetic task. The study concluded that there is a significant difference in the breathing rate and the heart rate variability of the subjects in the stress vs rest condition, whereas the heart rate of the subjects did not show any significant difference in the stressed vs relaxed state. Identifying the difference between the anxiety and rest states using a variety of physiological signals, which include EDA, breathing rate, and the cardiovascular measures of heart rate variability and heart rate, is presented in~\citep{blechert2006identifying}. Physiological data were acquired in the rest state as well as while facing electric shocks. The study concludes that EDA showed a significant difference between the rest and stressed states, whereas the cardiovascular measures showed very little difference between the two groups. A study to correlate a self-reported questionnaire with cardiac autonomic modulation in real-life scenarios is discussed in~\citep{hynynen2011incidence}. The PSS questionnaire was filled in by the participants, and the participants were grouped into low and high-stress groups based on their PSS scores. R-R interval data were recorded while the participants were sleeping at night and during an orthostatic test after awakening in the morning. The R-R interval data were used to extract HRV and HR features in the time as well as the frequency domain. The study concluded that a high score on the stress questionnaire is correlated with lower HRV in the orthostatic test. Moreover, no difference was observed in the heart rate and heart rate variability of the low and high-stress participants. A new stress recognition framework using a contact-free camera as the apparatus and computer-based tasks as the stimulus is proposed in~\citep{mcduff2016cogcam}. PPG signals are recorded, and heart rate, heart rate variability, and breathing rate features are extracted and used for the identification of stress during the tasks. The study identified that heart rate variability showed significant changes during the stressor, whereas there was no difference in the heart rate and breathing rate of the two groups. It can be observed from the above studies that heart rate has been widely used as a marker for human stress measurement because it generally provides a reliable indication of arousal due to stress. \Tab{tab5} presents a summary of human stress classification schemes using heart rate.
\begin{table}
\caption{Summary of Human Stress Detection Studies using Heart Rate Measure.}
\label{tab:tab5}
\scalebox{0.9}{
\begin{tabular}{cccccccc}
\hline\noalign{\smallskip}
\thead{Method} & \thead{Type of\\Stress} & \thead{Number of\\Subjects (M/F)} & Age & \thead{Stimulus} & \thead{Features\\Domain} & Classifier & \thead{Accuracy (Classes)}\\
\noalign{\smallskip}\hline\noalign{\smallskip}
~\citep{blechert2006identifying} & Acute & 42 (14/28) & 42.2$\pm$9.9 & Pictures & Time & DFA & 83.30\% (2) \\
~\citep{giannakakis2017stress} & Acute & 23 (16/7) & 45.1$\pm$10.6 & \thead{Stroop color\\word test,\\IAPS, videos} & Time & \thead{AdaBoost} & 91.68\% (2) \\
~\citep{mcduff2014remote} & Acute & 10 (3/7) & 18-30 & \thead{Mental arithmetic \\task (MAT)} & Frequency & SVM & 85.00\% (2) \\
~\citep{mcduff2016cogcam} & Acute & 10 (5/5) & 18-28 & \thead{Berg Card \\Sorting Task (BCST)} & Frequency & NB & 86.00\% (2) \\
\noalign{\smallskip}\hline
\end{tabular}
}
\begin{tablenotes}
\item[*] NB: Naive Bayes, SVM: Support Vector Machine, IAPS: International Affective Picture System, DFA: Discriminant Function Analysis
\end{tablenotes}
\end{table}
\subsubsection{Skin Temperature based Stress Detection}
Skin temperature is the temperature of the outermost surface of the body. The normal skin temperature of the outer surface of the skin lies between $33.5^{\circ}$C and $36.9^{\circ}$C. Our sense of hot and cold depends on the amount of energy transferred to and from the skin. Skin temperature depends on the temperature of the surrounding air and the time spent in that environment. Human skin temperature is strongly correlated with the heart activity and sweat reaction of an individual. Changes in skin temperature are connected to stressful and anxious conditions~\citep{mcfarland1985relationship}. Skin temperature has been measured at a variety of locations on the human body, such as the finger, arm, face, and armpits. Measurements at different positions give different results under stress because the temperature of some parts of the body increases whereas that of other parts decreases. In~\citep{zhai2006stress}, a skin temperature-based human stress measurement method has been developed. Skin temperature has a negative correlation with human stress, i.e., a decrease in stress level corresponds to an increase in ST and vice versa~\citep{reisman1997measurement}. Patch-based human stress monitoring using skin temperature and skin conductance is proposed in~\citep{yoon2016flexible}. Skin temperature has shown a negative correlation with the level of chronic stress~\citep{lee2010wearable,torii1992fall}. Changes in skin temperature to identify different levels of stress are studied in~\citep{karthikeyan2012descriptive}. The Stroop color-word test is used as the stimulus and a probabilistic neural network as the classifier to achieve an accuracy of 88\% for four levels of stress. The effect of core and peripheral body temperature on human stress is discussed in~\citep{vinkers2013effect}. The study reported a decrease in the temperature of the fingertips and palm, whereas, on the contrary, the temperature of the upper arm increased. An acute human stress measurement system using skin temperature is presented in~\citep{herborn2015skin}. Skin temperature, when measured with an axillary thermometer, tends to increase under stressful situations~\citep{marazziti1992psychological}.
Some other studies analyzing the skin temperature of the surface of the finger under human stress report a decrease in temperature~\citep{lee2004development,rimm1996psychological,vinkers2013effect,karthikeyan2012descriptive,engert2014exploring}. The slope of the skin temperature has been used in some stress studies instead of the temperature mean value~\citep{barreto2007significance}. A study reporting different temperature changes in different parts of the body under the stressful stimulus of an interview is presented in~\citep{rimm1996psychological}. The temperature of the hands of the person decreased, whereas the cheeks and eyes of the person tended to show an increase in temperature. Moreover, there also exists a temperature difference between the left and right cheeks of the participants. \Tab{tab6} presents a summary of human stress detection schemes using skin temperature.
\begin{table}
\caption{Summary of Human Stress Detection Studies using Skin Temperature Measure.}
\label{tab:tab6}
\scalebox{0.9}{
\begin{tabular}{cccccccc}
\hline\noalign{\smallskip}
\thead{Method} & \thead{Type of\\Stress} & \thead{Number of\\Subjects (M/F)} & Age & \thead{Stimulus} & \thead{Features\\Domain} & Classifier & \thead{Accuracy (Classes)}\\
\noalign{\smallskip}\hline\noalign{\smallskip}
~\citep{lee2004development} & Acute & 80 & -- & \thead{Stroop color\\ word test} & Time & \thead{MLP, GRNN,\\ ANFIS} & 96.67\% (2) \\
~\citep{zhai2006stress} & Acute & 32 & 21-42 & \thead{Stroop color\\ word test,\\Emotional\\pictures} & \thead{Time and \\Frequency} & SVM & 90.10\% (2) \\
~\citep{karthikeyan2012descriptive} & Acute & 60 (30/30) & 22.5$\pm$2.5 & \thead{Stroop color\\ word test} & Time & PNN & 88.00\% (4) \\
\noalign{\smallskip}\hline
\end{tabular}
}
\begin{tablenotes}
\item[*] MLP: Multilayer Perceptron, PNN: Probabilistic Neural Network, GRNN: Generalized Regression Neural Network, ANFIS: Adaptive Network-based Fuzzy Inference System, SVM: Support Vector Machine
\end{tablenotes}
\end{table}
\subsubsection{Respiratory Rate based Stress Detection}
The respiration rate of a person can be defined as the number of breaths a person takes in a duration of one minute. Two of the most common measures of respiration are the breath rate and the breath amplitude or depth~\citep{simoes1991respiratory}. The breath rate of a person increases under stressful conditions, whereas, on the contrary, it decreases in calm situations~\citep{vinkers2013effect,mcduff2014remote,grossman1983respiration}. Stress is associated with irregularities of the respiratory rate~\citep{singh2013stress}, a shift from abdominal to thoracic breathing~\citep{ahmed2015rebreathe}, and faster and shallower breathing~\citep{kreibig2010autonomic}. The breath rate sensor has been reported to give an accurate estimation of the respiratory rate. Breathing activity monitoring using chest cavity expansion has been reported in~\citep{stern2001psychophysiological}. For the detection of human stress, respiratory signals have been acquired using an elastic Hall-effect sensor placed at the lower part of the chest~\citep{healey2005detecting} and by the use of thermistors placed in the nasal passage of the subject~\citep{shin1998estimation}. The respiratory rate signal has also been used in combination with other biomedical sensors for the assessment of human stress~\citep{hosseini2011classification}. The oxygen consumption rate has been derived from the respiratory rate of a person and is considered a reasonably reliable measure of human stress because the oxygen demand increases under stress~\citep{seematter2002metabolic}.
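To illustrate the basic computation behind these measures, the following minimal Python sketch estimates the breath rate by counting inhalation peaks in a chest-expansion respiration signal. The sampling rate, the minimum breath spacing, and the synthetic sinusoidal input are assumptions for illustration only.
\begin{verbatim}
import numpy as np
from scipy.signal import find_peaks

fs = 25.0                              # assumed sampling rate (Hz)
t = np.arange(0, 60, 1 / fs)           # one minute of data
resp = np.sin(2 * np.pi * 0.25 * t)    # placeholder: ~15 breaths per minute

# One inhalation peak per breath; breaths assumed at least 2 s apart
peaks, _ = find_peaks(resp, distance=int(2 * fs))
breath_rate = len(peaks) * 60.0 / (t.size / fs)   # breaths per minute
print(f"Estimated respiration rate: {breath_rate:.1f} breaths/min")
\end{verbatim}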
In~\citep{fernandez2018mental}, the authors proposed another stress recognition study using respiratory signals. Another study for human stress assessment using a respiration sensor is presented in~\citep{shan2020respiratory}. \Tab{tab7} presents a summary of human stress classification schemes using the respiration rate.
\begin{table}
\caption{Summary of Human Stress Detection Studies using Respiratory Rate Measure.}
\label{tab:tab7}
\scalebox{0.9}{
\begin{tabular}{cccccccc}
\hline\noalign{\smallskip}
\thead{Method} & \thead{Type of\\Stress} & \thead{Number of\\Subjects (M/F)} & Age & \thead{Stimulus} & \thead{Features\\Domain} & Classifier & \thead{Accuracy (Classes)}\\
\noalign{\smallskip}\hline\noalign{\smallskip}
~\citep{mcduff2014remote} & Acute & 10 (3/7) & 18-30 & \thead{Mental arithmetic \\task (MAT)} & Frequency & SVM & 85.00\% (2) \\
~\citep{ahmed2015rebreathe} & Acute & 25 (15/10) & 18-35 & \thead{Stroop color\\word test,\\ Public \\speaking} & \thead{Time and \\Frequency} & GEE & 88.00\% (2) \\
~\citep{hosseini2011classification} & Acute & 15 (15/0) & 20-24 & \thead{Picture \\presentation test} & \thead{Time and \\Frequency} & SVM & 76.95\% (2) \\
~\citep{wijsman2013wearable} & Acute & 30 (25/5) & 19-53 & \thead{Calculation and \\memory task} & \thead{Time and \\Frequency} & GEE & 74.50\% (2) \\
~\citep{rigas2011real} & Acute & 13 (10/3) & 22-41 & \thead{Car \\driving} & \thead{Time and \\Frequency} & BN & 96.00\% (2) \\
~\citep{singh2013novel} & Acute & 10 & -- & \thead{Car \\driving} & Time & ANN & 80.00\% (2) \\
~\citep{wijsman2011towards} & Acute & 30 (25/5) & 19-53 & \thead{Calculation, \\puzzle and \\memory task} & \thead{Time and \\Frequency} & \thead{ANN, LBN,\\ QBN, FLS} & 80.00\% (3) \\
~\citep{shan2020respiratory} & Acute & 89 (47/42) & 18-23 & \thead{Stroop color\\word test} & Time & SVM & \thead{93.90\% (2) \\ 93.40\% (2) \\ 89.05\% (2)} \\
~\citep{fernandez2018mental} & Acute & 43 (26/17) & 18-22 & \thead{Mathematical \\problem} & \thead{Time and \\Frequency} & MLP & 94.44\% (2) \\
\noalign{\smallskip}\hline
\end{tabular}
}
\begin{tablenotes}
\item[*] GEE: Generalized Estimating Equation, SVM: Support Vector Machine, BN: Bayesian Network, ANN: Artificial Neural Network, MLP: Multilayer Perceptron
\end{tablenotes}
\end{table}
\subsubsection{Heart Rate Variability based Stress Detection}
HRV is the measure of the variation in the time interval between consecutive heartbeats of an individual. Autonomic nervous system (ANS) activities can be reliably measured using the HRV parameter, which makes it a strong tool for human stress assessment~\citep{pflanzer2013galvanic}. HRV shows distinct changes in response to changes in an individual's state~\citep{acharya2006heart}. HRV can be obtained using ECG as well as PPG sensor data. HRV measurement methods based on ECG signals have been developed in~\citep{clifford2002signal}. An ECG-recordings-based study is conducted in~\citep{sloan1994effect} to analyze the relationship between RR intervals (the time between consecutive heartbeats), HRV, and human stress. The study concludes that an increase in heart rate is correlated with a decrease in the RR interval. Another study for human stress assessment using the physiological signals of ECG and HRV is introduced in~\citep{karthikeyan2013detection}. A classification accuracy of 94.66\% for the normal vs stressed classes is achieved using a fusion of the ECG and HRV signals. In~\citep{jobbagy2017hrv}, the authors have proposed a stress recognition system to characterize human stress using HRV signals. The influence of HR and HRV on human stress is discussed in~\citep{taelman2009influence}.
The HR and HRV of the subjects are recorded in the rest state as well as while performing a mental stressor task. The study concluded that the HR and HRV of the subjects change when facing a mental stressor and hence can be used as potential biomarkers for the assessment of human stress. Another study about the association of mental stress with HRV is presented in~\citep{salahuddin2007dependence}. The correlation between the perceived stress faced by college students and HRV is explored in~\citep{lombardo2019relationship}. Another cognitive stress measurement model using HRV, with an accuracy of 85\%, is proposed in~\citep{mcduff2014remote}. A deep learning model for the identification of mental stress in firefighters using HRV data is presented in~\citep{oskooei2019destress}. A study about the correlation between mental stress and the HRV of students during university final examinations is presented in~\citep{hammoud2019stress}. The study reports that the HRV of female students is significantly lower as compared to that of their male counterparts before and after taking the exam. A study to find the correlation between perceived mental stress and the HRV parameters of the subjects is discussed in~\citep{orsila2008perceived}. A strong correlation is found in the study between perceived stress and the values of the triangular interpolation of the rhythm-to-rhythm (RR) interval histogram (TINN) and the root mean square of the differences of successive RR intervals (RMSSD) of the HRV data obtained in the morning and during the workday. Some studies claim that the HRV data need to cover around five minutes for a reasonable analysis~\citep{malik1996heart}, whereas other studies negate this conclusion by claiming that an even smaller amount of data can be used as a reliable marker of human stress~\citep{hall2004acute,salahuddin2007ultra}. The standard deviation of the NN intervals (SDNN) is found to be reduced under stressful conditions~\citep{blechert2006identifying,acerbi2016wearable,schubert2009effects,clays2011perception,hynynen2011incidence,cinaz2013monitoring,bernardi2000effects,taelman2011instantaneous,tharion2009short,visnovcova2014complexity,madden1995effects}. A study to monitor human stress using the physiological signals of electrodermal activity and heart rate variability recorded via wearable sensors is presented in~\citep{acerbi2016wearable}. A new stress-inducing protocol, named TransSafe (The Ambient Response to Avoid Negative Stress and enhance SAFEty), is developed in this study. The subjective questionnaires of the State-Trait Anxiety Inventory (STAI) and the Shortened State Stress Questionnaire (SSSQ) were filled in by the participants before and after facing the stressor. Time- and frequency-domain features are extracted from the HRV signal. A statistical test is applied to the extracted features, and a significant difference is found in some of the features of the EDA and HRV signals. A study to examine the effect of short-term and chronic stress on the heart rate variability of the subject is conducted in~\citep{schubert2009effects}. A speech task is used as the stressor in the experiment, and it is found that the standard deviation of the R-R intervals is reduced when facing the stressor. The perception of work stressors in relation to heart rate variability is examined in a study conducted in~\citep{clays2011perception}. The perception of the work stressors is measured using a 27-item job stress questionnaire.
An association is found between the workers' stress and the percentage of differences between adjacent normal RR intervals (pNN50), lower high-frequency power, and a higher ratio of low-frequency to high-frequency power. Moreover, no significant correlation between low-frequency power and worker stress is found. An investigation of the relationship between self-reported measures and heart rate variability in real-life situations is explored in~\citep{hynynen2011incidence}. The SDNN feature extracted from the HRV signal was reduced in the high-stress condition when compared to the low-stress condition. Monitoring of mental workload in an office work scenario using heart rate variability features is proposed in~\citep{cinaz2013monitoring}. The NASA Task Load Index questionnaire is used to obtain the subjective mental workload of the participants. Time- and frequency-domain features are extracted from the HRV signal; the pNN50 feature was found to decrease significantly, and the SDNN feature was found to show a consistent decrease under stress conditions. Another study, to assess whether talking or reading, aloud or silently, affects heart rate variability, is presented in~\citep{bernardi2000effects}. An increase in the speed of breathing and a decrease in the mean and variance of the RR intervals, as compared to normal breathing, were observed when reading silently as compared to reading aloud. A study to monitor instantaneous changes in heart rate activity due to mental workload in an office environment is proposed in~\citep{taelman2011instantaneous}. The participants were asked to perform a low mental workload task and a high mental workload task twice, where each of these tasks was followed by a rest-state condition. A significant difference in heart rate and heart rate variability is observed under the mental workload conditions as compared to the baseline rest condition. A study to explore the heart rate variability of students during examination time is presented in~\citep{tharion2009short}. The mean of the RR intervals is reported to be significantly lower, whereas the mean arterial pressure and SDNN were found to be higher, during the examination period. A study to understand the relationship between acute mental stress and the complexity and time asymmetry of the HRV signal is proposed in~\citep{visnovcova2014complexity}. Two different stimuli are used, which include the SWT and a mental arithmetic task. The study reveals that SDNN was significantly lower when facing a stressful situation as compared to the recovery period. The effect of mental state on heart rate and blood pressure variability in both males and females is examined in a study conducted in~\citep{madden1995effects}. A mental arithmetic task is used as the stimulus for the experiment. As compared to the control condition, the stressor condition causes a decrease in SDNN, the log standard deviation of systolic blood pressure, log total power, and log fractal powers. The root mean square of the successive differences (RMSSD) is another HRV feature that has been explored in the literature, and it has been established to decrease under stress~\citep{acerbi2016wearable,ring2002shifting,hynynen2011incidence,cinaz2013monitoring,taelman2011instantaneous,tharion2009short}. The authors in~\citep{li2009longitudinal} reported that RMSSD, a time-domain feature of HRV, and high-frequency (HF) power, a frequency-domain feature of HRV, decrease under stress conditions.
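The time-domain HRV features named above follow directly from the RR-interval series, as the following minimal Python sketch shows; the RR values used here are synthetic and purely illustrative.
\begin{verbatim}
import numpy as np

rr_ms = np.array([812, 798, 840, 805, 790, 865, 820, 795, 810, 850],
                 dtype=float)                # RR intervals in milliseconds

diffs = np.diff(rr_ms)                       # successive differences
sdnn = np.std(rr_ms, ddof=1)                 # SDNN: std. of NN intervals
rmssd = np.sqrt(np.mean(diffs ** 2))         # RMSSD: RMS of differences
pnn50 = 100.0 * np.mean(np.abs(diffs) > 50)  # pNN50: % of diffs > 50 ms

print(f"SDNN = {sdnn:.1f} ms, RMSSD = {rmssd:.1f} ms, pNN50 = {pnn50:.1f}%")
\end{verbatim}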
Another important frequency-domain feature of HRV discussed in the literature is the ratio of low-frequency (LF) power to HF power, which increases under stressful situations~\citep{mcduff2014remote,blechert2006identifying,acerbi2016wearable,moriguchi1992spectral,vuksanovic2007heart,schubert2009effects,clays2011perception,cinaz2013monitoring,mcduff2016cogcam,lucini2002hemodynamic,taelman2011instantaneous,tharion2009short,taelman2009influence,hjortskov2004effect,hall2004acute}. The very low-frequency (VLF) band of the HRV signal is found to be increased in some studies~\citep{acerbi2016wearable,moriguchi1992spectral}. Another HRV feature, named the correlation dimension D2, is reduced under stress in a study conducted on university students during their examination period in~\citep{melillo2011nonlinear}. A new framework for the remote measurement of heart rate variability using a webcam for the detection of stress is proposed in~\citep{bousefsaf2013remote}. HRV is investigated for different stress factors, which include stressed, tense, concentrated, and stimulated conditions. The study concluded that the remote measurement of HRV can be used as a reliable marker for human stress assessment. Another stress recognition study using HRV signals is discussed in~\citep{kim2008detection}. A self-reporting questionnaire was used to label the participants into low and high-stress groups. HRV data were recorded for three different periods during the day, and it was concluded that the highly stressed participants showed a decrease in the HRV patterns as compared to the low-stress group. Moreover, using logistic regression as a classifier, an accuracy of 63.2\% is achieved for the low vs high-stress group classification. Another HRV-based stress measurement scheme using time- and frequency-domain features is presented in~\citep{boonnithi2011comparison}. Stress detection using ECG and HRV features is presented in~\citep{melillo2013classification}. The variation in the heart rate variability of students in the rest condition and during the examination phase was examined using a non-parametric classifier called the Classification and Regression Tree (CART). A sensitivity and specificity of 83.33\% and 90.48\%, respectively, are achieved for stress vs rest-state classification. Another study for four-level stress classification, i.e., no stress, low stress, medium stress, and high stress, using HRV features is discussed in~\citep{vanitha2014hierarchical}. The database used for the experiment in this study was the MIT-BIH multi-parameter database, where different driving tasks are used as the stimulus. A hierarchical SVM is used as the classifier to classify the four stress states, with a classification accuracy of 92\%. Driver stress level recognition using HRV features along with a support vector machine classifier is performed in~\citep{munla2015driver}. The database used in this experiment is the stress recognition in automobile drivers database, and a classification accuracy of 83\% is reported. A stress measurement scheme for driver stress monitoring using HRV data is proposed in~\citep{wang2013k}. The DriveDB database is used in this study for measuring driver stress, and features are extracted using parameter-based methods. The kernel-based class separability (KBCS) method is used for feature selection to obtain the optimum feature set. LDA and PCA algorithms are used for dimensionality reduction. Next, the classification of driver stress is performed using a kNN classifier, with an achieved accuracy of 97\%.
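For the frequency-domain measures discussed above, the LF/HF ratio is obtained by resampling the RR-interval series onto a uniform time grid and integrating its power spectrum over the standard LF (0.04-0.15 Hz) and HF (0.15-0.40 Hz) bands. The following minimal Python sketch, using synthetic RR data and a Welch spectrum, illustrates this computation; the resampling rate and segment length are assumptions for the example.
\begin{verbatim}
import numpy as np
from scipy.signal import welch

rng = np.random.default_rng(0)
rr = 0.8 + 0.05 * rng.standard_normal(300)   # RR intervals in seconds
beat_times = np.cumsum(rr)                   # occurrence time of each beat

# Interpolate the tachogram onto a uniform 4 Hz grid
fs = 4.0
t_uniform = np.arange(beat_times[0], beat_times[-1], 1 / fs)
rr_uniform = np.interp(t_uniform, beat_times, rr)

freqs, psd = welch(rr_uniform - rr_uniform.mean(), fs=fs, nperseg=256)
lf_band = (freqs >= 0.04) & (freqs < 0.15)
hf_band = (freqs >= 0.15) & (freqs < 0.40)
lf = np.trapz(psd[lf_band], freqs[lf_band])  # low-frequency power
hf = np.trapz(psd[hf_band], freqs[hf_band])  # high-frequency power
print(f"LF/HF ratio = {lf / hf:.2f}")
\end{verbatim}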
Taking into consideration the studies presented in the literature, it is evident that the relationship of HRV to stress is not very straightforward. However, many of the studies report a consistent relationship between stress and certain HRV features and hence can help draw some useful conclusions. \Tab{tab8} presents a summary of human stress detection schemes using HRV.
\begin{table}
\caption{Summary of Human Stress Detection Studies using Heart Rate Variability Measure.}
\label{tab:tab8}
\scalebox{0.9}{
\begin{tabular}{cccccccc}
\hline\noalign{\smallskip}
\thead{Method} & \thead{Type of\\Stress} & \thead{Number of\\Subjects (M/F)} & Age & \thead{Stimulus} & \thead{Features\\Domain} & Classifier & \thead{Accuracy (Classes)}\\
\noalign{\smallskip}\hline\noalign{\smallskip}
~\citep{blechert2006identifying} & Acute & 42 (14/28) & 42.2$\pm$9.9 & Pictures & Time & DFA & 83.30\% (2) \\
~\citep{karthikeyan2013detection} & Acute & 60 (30/30) & 21-25 & \thead{Stroop color\\word test} & \thead{Time and \\Frequency} & kNN, PNN & 91.66\% (2) \\
~\citep{mcduff2014remote} & Acute & 10 (3/7) & 18-30 & \thead{Mental arithmetic \\task (MAT)} & Frequency & SVM & 85.00\% (2) \\
~\citep{mcduff2016cogcam} & Acute & 10 (5/5) & 18-28 & \thead{Berg Card \\Sorting Task (BCST)} & Frequency & NB & 86.00\% (2) \\
~\citep{melillo2011nonlinear} & Acute & 42 & -- & \thead{University \\Examination} & Time & LDA & 90.00\% (2) \\
~\citep{melillo2013classification} & Acute & 42 (19/23) & 20-28 & \thead{University \\Examination} & \thead{Time and \\Frequency} & CART & 87.00\% (2) \\
~\citep{vanitha2014hierarchical} & Acute & 16 & -- & \thead{Car \\driving} & \thead{Time and \\Frequency} & \thead{hierarchical\\SVM} & 92.00\% (4) \\
~\citep{munla2015driver} & Acute & 16 & -- & \thead{Car \\driving} & \thead{Time and \\Frequency} & \thead{SVM-RBF} & 83.00\% (2) \\
~\citep{wang2013k} & Acute & 27 & -- & \thead{Car \\driving} & \thead{Time and \\Frequency} & \thead{kNN} & 97.00\% (2) \\
~\citep{kim2008detection} & Chronic & 68 & 10-30 & Baseline & \thead{Time and \\Frequency} & LR & 66.1\% (2) \\
\noalign{\smallskip}\hline
\end{tabular}
}
\begin{tablenotes}
\item[*] DFA: Discriminant Function Analysis, kNN: k-Nearest Neighbors, PNN: Probabilistic Neural Network, SVM: Support Vector Machine, NB: Naive Bayes, LDA: Linear Discriminant Analysis, CART: Classification and Regression Tree, RBF: Radial Basis Function, LR: Logistic Regression
\end{tablenotes}
\end{table}
\subsubsection{Blood Volume Pressure based Stress Detection}
BVP is a method to measure the amount of pressure exerted on the blood vessels. When measuring blood pressure, we get two values: the first value is the systolic blood pressure (SBP), and the second is called the diastolic blood pressure (DBP). The human body releases a large number of stress hormones under stressful conditions, which increase blood pressure~\citep{gasperin2009effect}, thus making blood pressure a good indicator for stress measurement~\citep{pickering1996environmental}. A study conducted over a duration of 3 years revealed that individuals who face stress at the workplace tend to have higher SBP and DBP, whereas during sleep they have increased SBP~\citep{schnall1998longitudinal}. SBP and DBP have been reported to be higher during mental stressor tasks in~\citep{ring2002shifting,lundberg1994psychophysiological,carroll2003blood,carroll2011blood}.
A study presented in~\citep{ring2002shifting} discussed the hemodynamics underlying the increase in blood pressure during prolonged exposure to stress. Mean arterial pressure (MAP), cardiac output (CO), and total peripheral resistance (TPR) parameters were measured during three phases of the experiment, i.e., a rest phase, a mental arithmetic task phase, and a recovery phase. MAP increased at a constant rate during the stressor; CO increased during the first half of the stressor and decreased back to the rest-state level toward the end of the task, whereas TPR kept increasing as the mental arithmetic task progressed. In a study conducted in~\citep{lundberg1994psychophysiological}, it was found that both the systolic and diastolic blood pressure of the subjects increased during the stress session as compared to the baseline condition. The authors in~\citep{carroll2003blood} present a study examining the relationship of human stress to future blood pressure values and how it is affected by the gender, age, and socioeconomic condition of the subject. The blood pressure of the subjects was recorded in the rest condition and under mental stressors, and five years of follow-up resting-state blood pressure data of the participants were also available. The findings of the study are that the systolic blood pressure reaction to human stress is positively correlated with the follow-up systolic blood pressure, whereas no correlation could be found for diastolic blood pressure. Another conclusion of the study is that the magnitude of the predicted blood pressure is associated with the gender and socioeconomic position of a person. Another study correlating the reaction of blood pressure to acute stress with future blood pressure is presented in~\citep{carroll2011blood}. Blood pressure readings in a rest state and while facing stressors were recorded, and after twelve years, resting-state blood pressure readings were taken again. The study concluded that systolic blood pressure reactivity positively predicted future systolic pressure, with an increasing pattern of blood pressure over the span of 12 years; however, these findings were not observed for diastolic blood pressure. In another study, however, the author claims that the mental stress of a person does not affect the recorded BVP~\citep{hjortskov2004effect}. There exists a wide range of studies in which SBP and DBP are reported to increase under stressful conditions~\citep{finsen2001muscle,vinkers2013effect,lundberg1994psychophysiological,krantz2004consistency,moriguchi1992spectral,steptoe2001acute,ring2002shifting,bernardi2000effects,hjortskov2004effect,schnall1998longitudinal,carroll2003blood,carroll2011blood}. Men have been found to have higher blood pressure under stress as compared to women in a study conducted in~\citep{krantz2004consistency}. In a study conducted in~\citep{vinkers2013effect}, the authors reported that when participants face a standard stressor, there is an increase in their systolic and diastolic blood pressure as compared to a baseline recording. The authors in~\citep{finsen2001muscle} reported an increase in the blood pressure of the participants with increasing memory demands. An increase in the blood pressure of the participants facing a stressor is also observed in an experimental study conducted in~\citep{moriguchi1992spectral}.
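For reference, the MAP parameter measured in~\citep{ring2002shifting} can be related to the SBP and DBP readings discussed throughout this section by the standard clinical approximation (a textbook formula, not one specific to any of the cited studies):
\[ \mathrm{MAP} \approx \mathrm{DBP} + \tfrac{1}{3}\left(\mathrm{SBP} - \mathrm{DBP}\right), \]
so that a reading of 120/80 mmHg, for instance, corresponds to a MAP of roughly 93 mmHg.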
The correlation of mental stress with the cardiovascular response is examined in~\citep{steptoe2001acute}. The blood pressure of the participants is recorded during the rest state as well as while performing stressful tasks. The stressed group has significantly higher blood pressure as compared to the control group. The effect of reading aloud or silently on the blood pressure of the participants is examined in a stress measurement study performed in~\citep{bernardi2000effects}; the study concludes that reading silently causes an increase in the blood pressure of the participants. The effect of mental stress on the HRV and blood pressure of participants performing computer work is examined in~\citep{hjortskov2004effect}. Hence, looking at these trends, BVP can be considered a reliable marker of stress.
\subsubsection{Photoplethysmography based Stress Detection} PPG is a technique to measure the volumetric changes of blood in the vessels~\citep{challoner1979photoelectric}. PPG is a widely accepted technique used in clinical applications as well as in commercial devices, e.g., the pulse oximeter. A PPG sensor is quite simple, consisting of a light source that illuminates the skin tissue and a photodetector that measures the changes in light intensity caused by blood flow. Many commercial devices that measure blood pressure, oxygen saturation, and cardiac output are based on a PPG sensor. A large variety of devices for the acquisition of PPG signals is currently available on the market. The components of a PPG data acquisition system include an LED light source for illuminating the tissue and a photodetector to receive the light and measure its variations. Light sources commonly used in PPG sensors emit red, green, or infra-red light; green light has a shorter wavelength and thus produces a larger variation of light intensity in response to cardiac changes~\citep{maeda2011advantages}. Heart rate calculation has also been performed using PPG sensors~\citep{kageyama2007wavelet}. Many algorithms have been developed to measure heart rate and heart rate variability parameters, which can be used to measure different physiological responses, including human stress~\citep{vstula2003evaluation}. The PPG signal can also be used to calculate the pulse rate, pulse rate variability, blood volume pressure, blood oxygen saturation level, and blood pressure~\citep{giannakakis2019review}. In~\citep{lyu2015measuring}, the author proposed a PPG-based stress-induced vascular response index (sVRI) to measure the stress level of a person. A classical mental arithmetic task with three levels of difficulty is used as a stimulus. The physiological signals of the participants are recorded in the baseline condition as well as while performing the mental arithmetic task. The findings reveal that the proposed sVRI-based stress measurement index produces results comparable to the BVP, HR, and HRV measures recorded simultaneously. A PPG-based human stress measurement method is presented in~\citep{chauhan2018real}. The experimental study induced stress using the paced auditory serial addition test (PASAT). Discrete wavelet transform (DWT) coefficients are extracted from the observed data, an AdaBoost ensemble classifier is used for stress classification, and an accuracy of 93\% is achieved using 4-fold cross-validation.
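As an illustration of how a pulse rate can be derived from a raw PPG trace, the following minimal Python sketch band-pass filters the signal to the cardiac frequency range and counts systolic peaks. The filter band, the refractory period between peaks, and the synthetic test signal are illustrative assumptions rather than settings taken from the cited studies.
\begin{verbatim}
import numpy as np
from scipy.signal import butter, filtfilt, find_peaks

def pulse_rate_bpm(ppg, fs):
    """Estimate the pulse rate (beats/min) of a PPG trace sampled at fs Hz.

    A band-pass filter isolates the cardiac component (roughly 0.5-3 Hz,
    i.e., 30-180 bpm) before the systolic peaks are located.
    """
    b, a = butter(3, [0.5 / (fs / 2), 3.0 / (fs / 2)], btype="band")
    filtered = filtfilt(b, a, ppg)
    # Enforce a refractory period of 0.33 s (max ~180 bpm) between peaks.
    peaks, _ = find_peaks(filtered, distance=int(0.33 * fs))
    duration_min = len(ppg) / fs / 60.0
    return len(peaks) / duration_min

# Toy example: 30 s of a synthetic 72-bpm pulse wave with additive noise.
fs = 100
t = np.arange(0, 30, 1 / fs)
ppg = np.sin(2 * np.pi * 1.2 * t) + 0.1 * np.random.randn(len(t))
print("Estimated pulse rate:", pulse_rate_bpm(ppg, fs), "bpm")
\end{verbatim}
The inter-peak intervals produced by such a detector also yield the pulse rate variability series from which the HRV-style features used by several of the studies below can be computed.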
A decrease in the pulse wave amplitude (PWA) feature obtained from the PPG signal is observed under stressful conditions in~\citep{henelius2016short}. Since the PPG signal is based on the principle of reflected light, it also allows camera-based approaches, known as remote photoplethysmography (rPPG)~\citep{poh2010non}. rPPG has been effectively used for the measurement of stress~\citep{mcduff2014remote,mcduff2016cogcam}. Another study aimed at finding the features of PPG signals that are beneficial for the assessment of mental stress is presented in~\citep{charlton2018assessing}. Seventeen different features are identified which could be used to distinguish the stressed and relaxed states. Another study about stress measurement using heart rate variability features extracted from PPG signals is presented in~\citep{mohan2016stress}. A PPG signal-based stress recognition study using pulse rate variability and elastic-net regression is discussed in~\citep{li2018photoplethysmography}. The mental arithmetic task is used as a stimulus for the experiment. A significant correlation between the self-reported measure and the result obtained from the stress prediction model was observed. Another study differentiating between the distress and calm states, using the International Affective Picture System (IAPS) as a stimulus and extracting temporal, morphological, and frequency domain features from the PPG signal, is introduced in~\citep{zangroniz2018estimation}, with an achieved classification accuracy of 82.35\%. \Tab{tab9} presents a summary of human stress classification schemes using PPG signals.
\begin{table} \caption{Summary of Human Stress Detection Studies using PPG Signal.} \label{tab:tab9} \scalebox{0.9}{ \begin{tabular}{cccccccc} \hline\noalign{\smallskip} \thead{Method} & \thead{Type of\\Stress} & \thead{Number of\\Subjects (M/F)} & Age & \thead{Stimulus} & \thead{Features\\Domain} & Classifier & \thead{Accuracy (Classes)}\\ \noalign{\smallskip}\hline\noalign{\smallskip} ~\citep{mcduff2014remote} & Acute & 10 (3/7) & 18-30 & \thead{mental arithmetic \\task (MAT)} & Frequency & SVM & 85.00\% (2) \\ ~\citep{mcduff2016cogcam} & Acute & 10 (5/5) & 18-28 & \thead{Berg Card \\Sorting Task (BCST)} & Frequency & NB & 86.00\% (2) \\ ~\citep{chauhan2018real} & Acute & 10 & 30-58 & PASAT & \thead{Frequency and \\Wavelet} & Adaboost & 93.00\% (2) \\ ~\citep{cho2019instant} & Acute & 17 (8/9) & 29.82$\pm$12.02 & \thead{Mental \\Arithmetic Task} & Frequency & ANN & 78.33\% (2) \\ ~\citep{li2018photoplethysmography} & Acute & 178 (85/93) & 16-36 & \thead{Mental \\Arithmetic Task} & Frequency & elastic net & 86-91\% (2) \\ ~\citep{zangroniz2018estimation} & Acute & 50 (28/22) & 20-28 & IAPS & \thead{Time and \\Frequency} & DT & 82.35\% (2) \\ \noalign{\smallskip}\hline \end{tabular} } \begin{tablenotes} \item[*] ANN: Artificial Neural Network, SVM: Support Vector Machine, NB: Naive Bayes, DT: Decision Tree \end{tablenotes} \end{table}
\subsubsection{Salivary Cortisol based Stress Detection} Cortisol is a well-known biomarker of psychological stress. Salivary cortisol has been used by physicians as a diagnostic tool for the measurement of psychological stress and many other diseases for over two decades and has been reported to be a very good measure of human stress~\citep{kirschbaum1989salivary}. Since stress is a phenomenon that is affected by a wide range of factors, a reliable measure is required for its accurate estimation.
It has been reported that when a person is under an acute stressor, the level of cortisol release increases~\citep{fink2000encyclopedia}. In~\citep{hellhammer2009salivary}, the author presented a correlation between stress and cortisol level. The activity of the human hypothalamic-pituitary-adrenal (HPA) axis is affected by stress and is reflected in cortisol, thus making it a very practical tool for stress detection. An acute stress measurement scheme using cortisol secretion is presented in~\citep{boucher2019acute}. A study measuring the response of sweat cortisol to human stress is presented in~\citep{tu2019sweat}. The study deduced that there exists a strong association between the diet and cortisol of a person, and therefore adjusting the diet may help in lowering the cortisol level and thus preventing stress-related health issues. The relationship between changes in the cortisol level of a subject and mental stress is discussed in~\citep{luo2012relationship}. The study concluded that the cortisol level of a person increases due to mental stress. Another stress detection study using cortisol as a biomarker is presented in~\citep{nath2020validating}. The model achieved a classification accuracy of 92\%. Acute stress measurement using cortisol as a biomarker is discussed in~\citep{selvaraj2015psychological}. Emotional stress measurement using cortisol is introduced in~\citep{rey2014towards}. The study reveals that the cortisol level of men increases under stress. Salivary cortisol has also been used as a measure for assessing mental workload in~\citep{nomura2009salivary}. \subsection{Non-Wearable Sensor based Human Stress Detection} Non-wearable sensors are sensors in which no physical device needs to be attached to the human body; rather, the data can be acquired at a considerable distance from the subject. Non-wearable sensors for human stress measurement can be sub-divided into physical measures, behavioral measures, and computer vision-based measures. Physical measures are those in which some observable parameters of the human body, like pupil dilation, speech, eye activity, and body postures, are recorded, whereas behavioral measures are those in which human stress is measured based on the interaction of the subject with some device, like a keyboard, mouse, or smartphone. The third type of non-wearable sensor used for human stress measurement comprises computer vision-based sensors like video cameras and thermal imaging. \subsubsection{Physical Measures} A physical property is one that is observable by humans with the naked eye. To acquire physical measures, sophisticated equipment and sensors are required. Physical measures of stress can be subdivided into four main categories: pupil dilation, speech-based measures, eye movement, and body postures. Literature corresponding to each of these categories is given below. \begin{enumerate} \item \textit{Pupil Dilation:} The eye pupil is a hole located at the center of the iris, which allows light to enter the retina. The pupil appears black because the light entering it is either absorbed directly by the eye tissues or absorbed after diffusion. The pupil may appear to open, i.e., dilate, or close, i.e., constrict, but it is the iris that governs its movement. Under bright lighting conditions, the pupil constricts to allow less light to enter the eye, whereas under dark conditions the pupil dilates to allow more light to enter the eye.
The size of the pupil is controlled by two muscles, the constrictor and dilator pupillae, which are in turn controlled by the sympathetic (SNS) and parasympathetic (PNS) parts of the ANS~\citep{beatty2000pupillary}. Just like the physiological responses of an individual, pupil dilation is not under the subject's control and is strongly associated with cognitive and emotional arousal~\citep{bradley2008pupil}. The relationship between affective states and pupil dilation has been discussed in a number of studies~\citep{onorati2013reconstruction,partala2003pupil,al2013using,bradley2008pupil,ren2012affective,pedrotti2014automatic}. Moreover, pupil dilation has been used as a marker for stress and anxiety assessment~\citep{honma2013hyper,simpson1971effects,baltaci2016stress,zhai2006stress}. A human stress measurement study based on pupil diameter along with the physiological signals of Galvanic Skin Response (GSR), Blood Volume Pulse (BVP), and Skin Temperature (ST) is proposed in~\citep{zhai2006stress}. SVM is used for the classification of stressed and relaxed states with an achieved accuracy of 90.10\%. The study concluded that, in comparison to the physiological signals, pupil diameter was a more effective indicator of stress. In a laboratory environment, when the subject is presented with a stressful stimulus, the pupil diameter increases~\citep{de2016acute}. Stress measurement based on pupil dilation has been the subject of study in~\citep{barreto2007non}. An increase in pupil diameter, and a pupil that dilates at a higher frequency, indicate that the person is in a stressed state. Experimental studies have shown that due to negative and positive arousing sounds, the diameter of the human pupil increases quite significantly~\citep{partala2003pupil}. The mean value of pupil diameter has also been used as a parameter for stress detection; increasing mean values over a time period indicate an increasing level of stress. Images having negative valence tend to have a stronger effect on the eye pupil of subjects who feel more stressed~\citep{kimble2010eye}. Moreover, positive as well as negative sounds cause an increase in the pupil diameter of a person~\citep{partala2003pupil}. Public speaking anxiety also affects pupil size~\citep{simpson1971effects}. The size of the pupil is directly proportional to the anxiety level of a person, i.e., the higher the anxiety level, the larger the pupil diameter, and vice versa~\citep{wang2011attention}. Another study about the widening of the pupil under stress is presented in~\citep{liao2005real}. In~\citep{torres2015pupil}, the authors presented a human stress classification scheme using pupil diameter. Mental arithmetic questions were asked of the participants in front of a camera, and the changes in pupil diameter were observed. The authors concluded that pupil diameter can be a good indicator of mental stress, but for better classification results it needs to be combined with physiological signals. The use of pupil dilation for stress analysis has some limitations, which need to be addressed. The size of the human pupil is not constant throughout life, i.e., it decreases with increasing age~\citep{winn1994factors}. Ambiguity exists about the effect of gender on pupil size: some studies show no correlation~\citep{winn1994factors}, while other studies show gender to affect pupil size when faced with a painful~\citep{ellermeier1995gender} or audio stimulus~\citep{partala2003pupil}.
Lighting conditions affect the pupil size due to the built-in light reflexes of human beings~\citep{pedrotti2014automatic,reeves1920response}. Thus, to use pupil dilation for human stress assessment, it is necessary to take these limitations into consideration, and precautionary measures need to be taken beforehand. \Tab{tab10} presents a summary of human stress classification schemes using pupil dilation.
\begin{table} \caption{Summary of Human Stress Detection Studies using Pupil Dilation.} \label{tab:tab10} \scalebox{0.9}{ \begin{tabular}{cccccccc} \hline\noalign{\smallskip} \thead{Method} & \thead{Type of\\Stress} & \thead{Number of\\Subjects (M/F)} & Age & \thead{Stimulus} & \thead{Features\\Domain} & Classifier & \thead{Accuracy (Classes)}\\ \noalign{\smallskip}\hline\noalign{\smallskip} ~\citep{ren2012affective} & Acute & 30 (14/16) & 26.8$\pm$2.56 & \thead{Stroop color\\ word test} & Time & NB & 85.5\% (2) \\ ~\citep{zhai2006stress} & Acute & 32 & 21-42 & \thead{Stroop color\\ word test,\\Emotional\\pictures} & \thead{Time and \\Frequency} & SVM & 90.10\% (2) \\ ~\citep{pedrotti2014automatic} & Acute & 33 (16/17) & 23-54 & \thead{Driving \\task} & Frequency & ANN & 79.20\% (2) \\ ~\citep{baltaci2016stress} & Acute & 11 (9/2) & 29-40 & \thead{IAPS} & Time & ABRF & 65-83.8\% (2) \\ \noalign{\smallskip}\hline \end{tabular} } \begin{tablenotes} \item[*] ANN: Artificial Neural Network, SVM: Support Vector Machine, NB: Naive Bayes, ABRF: Adaboost with Random Forest \end{tablenotes} \end{table}
\item \textit{Speech Based Measures:} Stress measurement using speech features has been one of the focuses of the research community. Stress in the voice is defined as “observable variability in certain speech features due to a response to stressors”~\citep{murray1996towards}. Stress measurement from the human voice is a dynamic process; stress is measured from the nonverbal components of the voice. Speech components vary when the speaker faces a stressful stimulus~\citep{womack1999n}. Vocal stress analysis has been performed to identify the voice features that discriminate stressed from neutral conditions~\citep{lefter2015recognizing}. The pitch of the speech signal has been found to be a distinctive feature under emotional stress and increases when an individual is feeling stressed~\citep{williams1972emotions,hansen1988analysis,cairns1994nonlinear,junqua1996influence,protopapas1997fundamental,hansen2007speech,gharavian2012statistical,lu2012stresssense,kurniawan2013stress,sondhi2015vocal,hansen2011robust}. In~\citep{nwe2003speech}, the author suggests that the change in the fundamental frequency of the voice is the most important feature for stress measurement. Speech signal-based human stress measurement has also been discussed in~\citep{fernandez2003modeling,healey2005detecting,lefter2011automatic}. Articulatory, excitation, and cepstral-based features have been used in~\citep{womak1996improved} to identify stress in speech signals, and a classification accuracy of 91\% is achieved. A study conducted in~\citep{cairns1994nonlinear} distinguished between loud, angry, neutral, and clear speech and the Lombard effect by using the speech features of intensity, spectral tilt, and energy. Another speech-based stress classification scheme using spatial and spectral domain features is presented in~\citep{devillers2006real}. The frequency content of the speech signal in the angry, Lombard effect, and loud states differs between these states~\citep{hollien2002forensic}.
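Since pitch (fundamental frequency, F0) is the most frequently cited vocal stress feature above, the following minimal Python sketch illustrates one classical way to estimate it, via the strongest autocorrelation peak within a plausible pitch range. The frame length, search range, and test tone are illustrative assumptions, not settings taken from the cited studies.
\begin{verbatim}
import numpy as np

def estimate_f0(frame, fs, fmin=50.0, fmax=400.0):
    """Estimate the fundamental frequency of a speech frame by locating
    the strongest autocorrelation peak within a plausible pitch range."""
    frame = frame - frame.mean()
    # Autocorrelation for non-negative lags only.
    ac = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
    lag_min = int(fs / fmax)            # shortest period considered
    lag_max = int(fs / fmin)            # longest period considered
    lag = lag_min + np.argmax(ac[lag_min:lag_max])
    return fs / lag

# Toy example: a 40 ms frame of a 180 Hz tone should yield roughly 180 Hz.
fs = 16000
t = np.arange(0, 0.04, 1 / fs)
frame = np.sin(2 * np.pi * 180 * t)
print("Estimated F0: %.1f Hz" % estimate_f0(frame, fs))
\end{verbatim}
Tracking such per-frame F0 estimates over an utterance yields the pitch contour whose elevation under stress is reported by the studies cited above.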
In~\citep{simantiraki2016stress}, the author extracted the spectral tilt feature from the speech signal and found it to be less negative under stressful conditions. The energies of the equivalent rectangular bandwidth bands obtained from the spectrogram and Gabor filters are used in~\citep{he2009stress} to perform stress recognition with an accuracy of 79\%. Stress classification in a noisy environment is performed using Teager Energy Operator (TEO) features in~\citep{hanson1993finding}. The speech signal has been analyzed for stress using multi-resolution wavelet analysis (MWA) in~\citep{sarikaya1998subband} and a fusion of TEO and MWA in~\citep{fernandez2003modeling}. The Hidden Markov Model (HMM) and TEO have been used for the analysis of stress in speech signals~\citep{hansen2011robust}. The physical features of the vocal cords of a person are examined in~\citep{yao2012physical} to identify stress in speech. The performance of automatic speech recognition could be improved if speaker stress were accurately identified~\citep{kadambe2007study}. Another speech-based human stress measurement scheme is proposed in~\citep{soury2013stress}. The Trier Social Stress Test (TSST) was used as a stimulus and an SVM classifier was used to classify stress with an achieved recall rate of 72\%. \Tab{tab11} presents a summary of human stress classification schemes using speech-based measures.
\begin{table} \caption{Summary of Human Stress Detection Studies using Speech based Measures.} \label{tab:tab11} \scalebox{0.9}{ \begin{tabular}{cccccccc} \hline\noalign{\smallskip} \thead{Method} & \thead{Type of\\Stress} & \thead{Number of\\Subjects (M/F)} & Age & \thead{Stimulus} & \thead{Features\\Domain} & Classifier & \thead{Accuracy (Classes)}\\ \noalign{\smallskip}\hline\noalign{\smallskip} ~\citep{kurniawan2013stress} & Acute & 10 & -- & \thead{Stroop color\\word test,\\mental arithmetic\\ task} & \thead{Time and Frequency} & SVM & 92.00\% (2) \\ ~\citep{womack1999n} & Acute & 44 (30/14) & 22-76 & Speech & \thead{Time and \\Frequency} & HMM & 92.41\% (2) \\ ~\citep{lefter2015recognizing} & Acute & 16 & -- & Speech & \thead{Time and \\Frequency} & BN & 73.00\% (2) \\ ~\citep{cairns1994nonlinear} & Acute & 32 (19/13) & 22-76 & SUSAS & Frequency & HMM & 99.10\% (2) \\ ~\citep{hansen2007speech}& Acute & 32 (19/13) & 22-76 & SUSAS & Frequency & HMM & 73.80\% (2) \\ ~\citep{lu2012stresssense}& Acute & 14 (4/10) & 22.86 & \thead{Job \\Interview} & Frequency & GMM & 81.00\% (2) \\ ~\citep{fernandez2003modeling} & Acute & 4 & -- & \thead{driver \\speech} & Frequency & SVM & 51.20\% (4) \\ ~\citep{womak1996improved} & Acute & 32 (19/13) & 22-76 & SUSAS & Frequency & HMM & 80.64\% (3) \\ ~\citep{simantiraki2016stress} & Acute & 32 (19/13) & 22-76 & SUSAS & Frequency & RF & 92.06\% (2) \\ ~\citep{he2009stress} & Acute & 32 (19/13) & 22-76 & SUSAS & Frequency & GMM & 81.00\% (2) \\ ~\citep{sarikaya1998subband} & Acute & 32 (19/13) & 22-76 & SUSAS & Frequency & MLP & 70.00\% (2) \\ ~\citep{soury2013stress} & Acute & 29 (12/17) & -- & TSST & \thead{Time and \\Frequency} & SVM & 72.00\% (2)\\ \noalign{\smallskip}\hline \end{tabular} } \begin{tablenotes} \item[*] SVM: Support Vector Machine, FDA: Fisher Discriminant Algorithm, HMM: Hidden Markov Model, BN: Bayesian Network, GMM: Gaussian Mixture Model, RF: Random Forest, MLP: Multilayer Perceptron \end{tablenotes} \end{table}
\item \textit{Eye Activity:} The functioning and behavior of the eye are affected by stress, a fact supported by several studies.
A study for human stress detection using the eye blink rate and brain activity of the subject was proposed in~\citep{haak2009detecting}. The stimulus used in the experiment was driving a car in a simulator on a road that included steep and sharp curves and had many attention-seeking advertising boards. While driving on this road, stressful emotions were elicited in the drivers, resulting in changes in eye blink rate and brain activity. A correlation between the eye blinks of the participants and the experienced stress level was established, and a higher frequency of eye blinks was observed as an indication of the individual experiencing a stressful condition. A stressful stimulus causes an increase in the eye blink rate of a person, as reported in the studies conducted in~\citep{haak2009detecting,giannakakis2017stress}. A biometric identification application of human stress detection is proposed in~\citep{pavlidis2000thermal}. Images are used to detect the facial expressions and eye movements corresponding to anxiety, alertness, and fearfulness, and rapid eye movement under the stress state was reported. The biometric application was based on the idea of ``what you are doing'' instead of the traditional approach of ``who you are''. The study reported encouraging results for the proposed scheme. A human stress detection framework based on facial cues, using features of eye movement, mouth activity, head movement, and heart rate acquired via the PPG signal, is presented in~\citep{giannakakis2017stress}. Four different stressors, which include social exposure, emotion recall, stressful images, and stressful videos, were used in the experiment. The social exposure stressor included a text reading task and a self-description speech. The eye blink rate was reported to increase in response to stressful images and the Stroop color-word test, whereas reading a difficult text caused the eye blink rate to be reduced. Moreover, an increase in the eye aperture was observed in stressful situations. Stress also affects the eye gaze behavior of a person. In~\citep{laretzaki2011threat}, the authors presented a study to determine if and how threat and trait anxiety interact to affect the stability of gaze fixation. Video oculography was used to estimate the gaze position with and without a gaze fixation stimulus, in safe and verbal threat conditions, in subjects characterized by their trait anxiety. Subjects with high trait anxiety showed significant gaze fixation instability under threat conditions. Some stress detection studies have employed different gaze features, like gaze direction, congruence, and size of the gaze cue, for the assessment of stress~\citep{fox2007anxiety,staab2014influence}. A study to investigate the role of neutral, angry, happy, and fearful facial expressions in enhancing orienting to the direction of eye gaze is presented in~\citep{fox2007anxiety}. Photographs of faces with either direct or averted gaze were used as a stimulus in the experiment. A target letter appeared unpredictably to the left or right side of the face, approximately 300 ms or 700 ms after the eye gaze direction changed. The results show that the response time of the participants was shorter when the gaze in the image was directed toward the location of the target letter as compared to the condition when the gaze was directed away from it. An enhanced orienting to the eye gaze of faces with fearful expressions is reported in participants with high trait anxiety scores as compared to participants with low trait anxiety scores.
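Since several of the studies above rely on the eye blink rate as a stress marker, the following minimal Python sketch shows one common way to detect blinks from per-frame eye landmarks, using the eye aspect ratio (EAR). It assumes that six 2-D landmarks per eye per frame are already available from a facial landmark detector; the threshold and the synthetic data are illustrative assumptions.
\begin{verbatim}
import numpy as np

def eye_aspect_ratio(eye):
    """Eye aspect ratio (EAR) from six 2-D eye landmarks ordered
    p1..p6 around the eye; EAR drops sharply when the eye closes."""
    p1, p2, p3, p4, p5, p6 = eye
    vertical = np.linalg.norm(p2 - p6) + np.linalg.norm(p3 - p5)
    horizontal = np.linalg.norm(p1 - p4)
    return vertical / (2.0 * horizontal)

def blink_rate(ear_series, fps, threshold=0.2):
    """Blinks per minute: count downward crossings of the EAR threshold."""
    below = ear_series < threshold
    blinks = np.count_nonzero(below[1:] & ~below[:-1])  # blink onsets
    return blinks * 60.0 * fps / len(ear_series)

# Toy example: 10 s at 30 fps with three simulated blinks.
fps, ear = 30, np.full(300, 0.3)
for start in (50, 150, 250):
    ear[start:start + 4] = 0.1  # a blink closes the eye for a few frames
print("Blink rate: %.1f blinks/min" % blink_rate(ear, fps))
\end{verbatim}
An elevated blink rate computed this way over successive windows would be the kind of feature the cited eye-activity studies feed into their classifiers.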
A review analyzing the effect of anxiety on ocular motor control and the gaze of the subject is presented in~\citep{staab2014influence}. Another human stress measurement scheme using eye-tracking features is proposed in~\citep{mokhayeri2011mental}. The Stroop color-word test (SWT) is used to induce stress in the participants of the experiment. A genetic algorithm is employed to detect the human eye, and noise is removed using a fuzzy filter. A fuzzy-SVM classifier is used to classify human stress into two classes with an accuracy of 70\%. \Tab{tab12} presents a summary of human stress classification schemes using eye activity measures.
\begin{table} \caption{Summary of Human Stress Detection Studies using Eye Activity Measure.} \label{tab:tab12} \scalebox{0.9}{ \begin{tabular}{cccccccc} \hline\noalign{\smallskip} \thead{Method} & \thead{Type of\\Stress} & \thead{Number of\\Subjects (M/F)} & Age & \thead{Stimulus} & \thead{Features\\Domain} & Classifier & \thead{Accuracy (Classes)}\\ \noalign{\smallskip}\hline\noalign{\smallskip} ~\citep{giannakakis2017stress} & Acute & 23 (16/7) & 45.1$\pm$10.6 & \thead{Stroop color\\word test,\\IAPS, videos} & Time & \thead{Adaboost} & 91.68\% (2) \\ ~\citep{mokhayeri2011mental} & Acute & 60 & 20-28 & \thead{Stroop color\\ word test} & \thead{Time} & Fuzzy-SVM & 70.00\% (2) \\ \noalign{\smallskip}\hline \end{tabular} } \begin{tablenotes} \item[*] SVM: Support Vector Machine, IAPS: International Affective Picture System \end{tablenotes} \end{table}
\item \textit{Body Postures:} Body language is a non-verbal type of communication in which physical behavior, instead of words, is used to convey information. Visual cues-based behavioral features for human stress measurement are presented in~\citep{aigrain2015person}. The behavioral body language features used in the study are visual cues extracted from data acquired by a Kinect and an HD camera. The stimulus used for eliciting stress in the participants was a mental arithmetic task. A classification accuracy of 77\% is achieved using a support vector machine classifier. Another human stress measurement scheme based on activity-related behavioral features is proposed in~\citep{giakoumis2012using}. Accelerometer, video camera, ECG, and GSR sensors are used to record the behavioral features of the subject. The Stroop color-word test is used as the stress-inducing stimulus for the experiment. The study concluded that the extracted behavioral features correlated with the self-reported responses, and behavioral features proved to be better than the other physiological signals for the measurement of stress. Classification is performed using linear discriminant analysis, and a maximum classification accuracy of 96.30\% is achieved. A human stress recognition framework using behavioral features extracted from the interaction with technological devices is proposed in~\citep{carneiro2012multimodal}. Eight different behavioral, cognitive, and physical features are examined to analyze the effect of different levels of acute stress. A statistical test is applied to measure the difference between different levels of stress. The study revealed that the mean and maximum intensity of touch are the features that correlate most strongly with human stress. It is also observed that if the stress level of an individual is high, there is less movement in the upper part of the human body.
Emotional states, including the anxiety of an individual, are examined using facial cues and gestures from the upper part of the body in~\citep{gunes2007bi}. Stress monitoring based on the movement of the head and mouth muscles has been performed in~\citep{liao2005real,bevilacqua2018automated}. A study analyzing facial cues to estimate the difference between the boredom and stress of a computer game player is presented in~\citep{bevilacqua2018automated}. Seven different facial features are extracted from the players playing the game, and it is concluded that 5 out of these 7 features show a significant difference between the boredom and stress states. Head movements under stressful conditions have been reported to be more frequent~\citep{liao2005real} and quicker~\citep{dinges2005optical,giannakakis2018evaluation}, and the overall head movement is also larger~\citep{hadar1983head,giannakakis2018head}. In a study conducted in~\citep{dinges2005optical}, the authors proposed a scheme to detect facial changes during performance demands by using optical computer recognition (OCR) in response to low and high stress. Workload and social feedback are used as stress-inducing stimuli in the experiment. The study concluded that the OCR algorithm, when applied using mouth and eyebrow region features, was able to identify around 75-88\% of the stressed and non-stressed individuals. A study to find the association of head pose with different kinds of stressors is proposed in~\citep{giannakakis2018evaluation}. Four different stressors, which included social exposure, emotional recall, stressful images or mental tasks, and stressful videos, are used to induce stress in the participants. A video recording of the subject is made while facing each stressor, and head movement and pose features are extracted from the recorded videos. The study reports that quicker head movement is observed in participants facing stressful situations. The pitch feature of the head pose showed a significant difference between the stress and neutral states. The highest classification accuracy of 98.6\% for neutral vs stress state classification is achieved using a kNN classifier with K=3. A study analyzing head movement in the context of speech during neutral and stress conditions is presented in~\citep{giannakakis2018head}. The tasks in the stimulus presented to the participants included neutral tasks, an interview, text reading, anxious and stressful event recall, and stressful images and videos. Translational and rotational head movements are used as features to assess the stress and neutral states. The study reveals that facing a stressful situation makes the head movement pattern swift. Emotional situations have been identified using head shakes and nods in~\citep{adams2015decoupling}. Another study about the relationship of body postures with human stress level is presented in~\citep{arnrich2009does}. The study is aimed at finding whether stress-related information can be obtained from posture data in an office work scenario. The Montreal Imaging Stress Task (MIST) is used as a stimulus, and features extracted from the pressure distribution on the chair are given to a self-organizing map classifier to classify the stress response. A classification accuracy of 73.75\% is achieved for the proposed scheme. \Tab{tab13} presents a summary of human stress classification schemes using body postures.
\begin{table} \caption{Summary of Human Stress Detection Studies using Body Postures.} \label{tab:tab13} \scalebox{0.9}{ \begin{tabular}{cccccccc} \hline\noalign{\smallskip} \thead{Method} & \thead{Type of\\Stress} & \thead{Number of\\Subjects (M/F)} & Age & \thead{Stimulus} & \thead{Features\\Domain} & Classifier & \thead{Accuracy (Classes)}\\ \noalign{\smallskip}\hline\noalign{\smallskip} ~\citep{aigrain2015person} & Acute & 14 (11/3) & 24.8$\pm$2.8 & \thead{Mental\\arithmetic task} & \thead{Time and \\Frequency} & SVM & 77\% (2) \\ ~\citep{carneiro2012multimodal} & Acute & 19 & -- & \thead{Computer \\game} & Time & DT & 78\% (2) \\ ~\citep{giannakakis2018evaluation} & Acute & 24 (17/7) & 47.3$\pm$9.3 & \thead{Mental arithmetic\\task, stressful\\ images} & Time & GLR & 97.90\% (2) \\ ~\citep{dinges2005optical} & Acute & 60 (29/31) & 30 & \thead{Workload and \\social feedback} & Time & OCR & 75-88\% (2) \\ ~\citep{arnrich2009does} & Acute & 33 (33/0) & 24.06 & \thead{Montreal Imaging \\Stress Task (MIST)} & Frequency & SOM & 73.75\% (2) \\ \noalign{\smallskip}\hline \end{tabular} } \begin{tablenotes} \item[*] SVM: Support Vector Machine, IAPS: International Affective Picture System, OCR: Optical Computer Recognition, SOM: Self-Organizing Map, LDA: Linear Discriminant Analysis, DT: Decision Tree, GLR: Generalized Likelihood Ratio \end{tablenotes} \end{table} \end{enumerate} \subsubsection{Behavioral Measures} Behavioral measures correspond to the type of behavior a person adopts when interacting with a certain device. Behavioral measures have been used in the literature for the detection of human stress and can be sub-divided into the following types: interaction with a computer mouse, interaction with a computer keyboard, and interaction with smartphones. Literature corresponding to each of these types is presented as follows. \begin{enumerate} \item \textit{Interaction with Computer Mouse:} Different mouse interaction-based stress measurement studies have been developed in the literature. An approach to measuring human stress by embedding sensors in the computer mouse to record physiological signals while the user is using the mouse is presented in~\citep{liao2005real}. A camera, pressure sensors, temperature sensors, and GSR sensors are integrated within the mouse to measure the physiological signals and correlate them with the stress of an individual. A capacitive mouse that measures the amount of interaction and the pressure exerted on the computer mouse is developed in~\citep{hernandez2014under}. The authors concluded that stressed individuals have a significantly higher contact time with the mouse as compared to non-stressed individuals. A model of how the user moves the mouse under stress is developed in~\citep{sun2014moustress}. The authors proposed an arm-hand dynamics model to measure the muscle stiffness of the subject while moving the mouse. Mouse speed, click rate, and mouse inactivity have been correlated with stress in~\citep{lim2014detecting}. Different features extracted from the movement of the mouse have been associated with stress during examinations~\citep{carneiro2015using}. \Tab{tab14} presents a summary of human stress classification schemes using computer mouse interaction.
\begin{table} \caption{Summary of Human Stress Detection Studies using Computer Mouse Interaction.} \label{tab:tab14} \scalebox{0.9}{ \begin{tabular}{cccccccc} \hline\noalign{\smallskip} \thead{Method} & \thead{Type of\\Stress} & \thead{Number of\\Subjects (M/F)} & Age & \thead{Stimulus} & \thead{Features\\Domain} & Classifier & \thead{Accuracy (Classes)}\\ \noalign{\smallskip}\hline\noalign{\smallskip} ~\citep{sun2014moustress} & Acute & 49 (23/26) & 20 & \thead{Computer\\task} & Frequency & SVM & 70.00\% (2) \\ ~\citep{carneiro2015using} & Acute & 53 & -- & \thead{Online\\Exam} & Time & NB, DT & 86.40\% (2) \\ \noalign{\smallskip}\hline \end{tabular} } \begin{tablenotes} \item[*] SVM: Support Vector Machine, DT: Decision Tree, NB: Naive Bayes \end{tablenotes} \end{table} \item \textit{Interaction with Computer Keyboard:} Interaction with a computer keyboard has also been used as a measure of human stress in the literature. A study presented in~\citep{rodrigues2013keystrokes} developed a keyboard dynamics-based approach to measure stress in university students by recording their key latency and typing speed on the keyboard. Another study considered the average key latency, average typing speed on the keyboard, and the error rate as features to measure human stress~\citep{lim2014detecting}. Case-based reasoning systems, along with multiple features extracted from the interaction with the keyboard, have been used for human stress classification~\citep{andren2005case}. A pressure-sensitive keyboard has been developed in~\citep{hernandez2014under} that gives a pressure value between 0 and 255 for each keystroke on the keyboard; using the pressure values obtained from the keyboard, the stress of the user is measured. Keystroke dynamics is an important measure of human stress because stress affects different muscles of the human body, such as the arm, hand, and shoulder muscles. The authors in~\citep{gunawardhane2013non} developed a stress measurement scheme using three features of keystroke dynamics, which include the durations between key presses of specific digraphs and trigraphs and the error rate of the backspace and delete keys. A statistical test applied to the data showed that there exists a significant difference in the keystroke dynamics of stressed and non-stressed individuals. \item \textit{Interaction with Smartphones:} Smartphones have been completely revolutionized in the last decade and have evolved into mini-computers with far more functionality and power compared to traditional mobile phones. Due to this technological revolution, smartphones enable physicians to obtain real-time data from patients almost instantly. Moreover, smartphone applications have been built which enable users to monitor their health and get advice or alerts in particular situations~\citep{kailas2010mobile}. Mobile phones are being embedded with sensors, like heart rate and body temperature sensors, to analyze the health state in a very cost-effective manner, and mobile applications make use of these built-in sensors for health monitoring. Even though these applications are often built without any kind of scientific validation, due to their low or even zero cost they can reach hundreds of thousands of users in no time. Azumio’s Stress Check and StressViewer~\citep{carneiro2017new} apps utilize the light and the camera of the smartphone to monitor the heart rate of the user.
Different apps are also available which not only measure stress but also provide breathing and other exercises for relieving stress. DeStressify~\citep{lee2018evaluation} is another app, developed to relieve stress based on music. An EDA sensor is used by the stress-relieving apps named PIP Relax and Race~\citep{dillon2016smartphone}. In these two apps, the user has to participate in a race, and the participant who is more relaxed wins the race. DroidJacket~\citep{colunas2011droid} is another app, integrated with VitalJacket, a shirt with an embedded ECG sensor, to continuously monitor the health of a person. A specific sensor platform named the Personal Biomonitoring System is used along with smartphones to measure stress~\citep{gaggioli2012system}. Another smartphone-based approach for the measurement of stress using features of the speech signal of the user is discussed in~\citep{lu2012stresssense}. A stress classification accuracy of 82.9\% is achieved for indoor scenarios and 77.9\% for outdoor scenarios. Stress-related changes in human behavior using the GPS, WiFi, Bluetooth, phone call, and SMS log features of a smartphone are explored in~\citep{bauer2012can}. Measuring human stress at the workplace is very important for the better mental and physical health of employees. A human stress measurement scheme to measure the stress of workers during the workday and during sleep at night is presented in~\citep{muaremi2013towards}. Features extracted from the audio, physical activity, and communication data recorded during the daytime and heart rate variability data recorded during night sleep are used to build logistic regression models. A leave-one-out cross-validation scheme is used for classification, and a classification accuracy of 61\% is achieved for three-level stress classification. A new stress recognition framework named AMMON (\textbf{A}ffective and \textbf{M}ental health \textbf{MON}itor), which is a speech analysis library for analyzing affect and stress directly on mobile phones, is proposed in~\citep{chang2011s}. A classification accuracy of 84\% is achieved for two-class stress classification. A stress recognition framework using the accelerometer sensor of the smartphone is proposed in~\citep{garcia2015automatic}. The Oldenburg Burnout Inventory (OLBI) questionnaire is used to acquire the subjective response from the users, and features extracted in the time and frequency domains are fed to Naive Bayes and decision tree classifiers. A classification accuracy of 71\% is achieved for user-specific models. A driver stress monitoring system using inertial sensors is proposed in~\citep{lee2016wearable}. For comparison of the results, GSR, a self-reporting questionnaire, and facial expressions are employed. Forty-six features are extracted from the data and subjected to feature selection, resulting in 22 features. An SVM classifier is used to discriminate the low-stressed participants from the high-stressed participants with an achieved accuracy of 94\%. A student stress measurement mechanism using smartphone sensors is proposed in~\citep{gjoreski2015automatic}. The authors used accelerometer, GPS, WiFi, call time and duration, and light sensor data, along with a self-reported questionnaire, for the classification of stress. Forty-seven features are extracted from the acquired data, and a classification accuracy of 60\% is achieved for three classes.
Classification in this study is performed in such a manner that the data of each student is divided into two parts, i.e., some features of each student are used for training and some are used for testing. An automatic stress measurement system for graduate students using smartphone data is proposed in~\citep{bogomolov2014pervasive,bogomolov2014daily}. Smartphone data, which included mobile phone activity from call and SMS logs and Bluetooth proximity hits, weather conditions, and personality traits, are recorded from 227 students for a duration of one year. Weather conditions are divided into mean temperature, pressure, total precipitation, humidity, visibility, and wind speed, whereas personality traits are obtained using the Big Five personality trait questionnaire and are labeled as extraversion, neuroticism, agreeableness, conscientiousness, and openness to experience. Subjective labeling of stress for the recorded data is obtained using a self-reported questionnaire. A wide range of classification algorithms is applied, but the random forest algorithm proved to be the best, with an achieved classification accuracy of 72\% and a feature reduction from 500 features to 32 features. Another human stress measurement mechanism using mobile phone activity, both in the laboratory environment and out of the lab, is proposed in~\citep{ciman2016individuals}. For the controlled lab environment part of the experiment, an Android application that contained search and write tasks is developed. The user activities monitored during the performance of these tasks include the users' tap, scroll, swipe, and text input gestures. Stressors are used to induce stress in the participants, and the Experience Sampling Method is used to obtain the subjective stress scores. kNN, SVM, decision tree, and neural network classifiers are used, with an achieved accuracy of 80\%. For the out-of-lab environment, the activities of the subjects, including the type of applications used, user physical activity, and the light values of the screen, are recorded during the daily life usage of the mobile phone. The achieved classification accuracy for this part of the experiment is 70\%. Another smart sensor-based and context-aware stress measurement scheme for daily life stress monitoring is proposed in~\citep{gjoreski2017monitoring}. Real-life data covering 55 days is recorded, and a precision and recall of 95\% and 70\% are achieved, respectively. Smartphone data has been used in the stress recognition framework proposed in~\citep{gimpel2015mystress}. The authors developed an Android application and used 36 software and hardware sensors in their study. The authors did not report any classification accuracy, but they found that high smartphone usage, average battery temperature, the maximum number of running applications, and the frequency of switching the display on are the features most strongly correlated with stress. Another daily life stress monitoring system using smartphone sensors is proposed in~\citep{sysoev2015noninvasive}. Audio, gyroscope, accelerometer, and ambient light sensor data, screen mode switching frequency, self-assessment, and activity type are used as features, and the NASA Task Load Index (NASA-TLX) is used as the subjective stress questionnaire. An activity recognizer is used along with the stress recognition system to achieve a classification accuracy of 77\%. Another stress recognition framework, called StayActive, is developed for Android phones in~\citep{kostopoulos2017stress}.
Social interaction, physical activity, and sleeping patterns are used for the measurement of stress. A fusion of offline mathematical modeling and online machine learning models is used to identify stress and give some relaxation therapy when stress is identified. The Circumplex Model of Affect, with some modifications for stress measurement, is used as a questionnaire to obtain the subjective scores. The number of sleeping hours is used as a feature from the sleep patterns, while the number of touches on the screen and the numbers of calls and SMSs are used as features from social interaction. The authors did not report any accuracy, but they intend to use physiological signals along with the smartphone in the future to further improve the results of the proposed stress measurement scheme. Another unsupervised stress classification scheme using smartphone data is proposed in~\citep{vildjiounaite2018unobtrusive}. A hidden Markov model is used for stress classification, and an accuracy of 68\% is achieved. \Tab{tab15} presents a summary of human stress classification schemes using smartphone interaction.
\begin{table} \caption{Summary of Human Stress Detection Studies using Smartphone Interaction.} \label{tab:tab15} \scalebox{0.9}{ \begin{tabular}{cccccccc} \hline\noalign{\smallskip} \thead{Method} & \thead{Type of\\Stress} & \thead{Number of\\Subjects (M/F)} & Age & \thead{Stimulus} & \thead{Features\\Domain} & Classifier & \thead{Accuracy (Classes)}\\ \noalign{\smallskip}\hline\noalign{\smallskip} ~\citep{lu2012stresssense} & Acute & 14 (4/10) & 22.86 & \thead{Job\\interviews,\\Marketing\\jobs} & \thead{Time and\\Frequency} & GMM & 81.00\% (2) \\ ~\citep{muaremi2013towards} & Acute & 35 (24/11) & 25-62 & \thead{Day long\\recording} & \thead{Time and\\Frequency} & LR & 61.00\% (3)\\ ~\citep{garcia2015automatic} & Chronic & 30 (18/12) & 37.46$\pm$7.26 & \thead{Day long\\recording} & \thead{Time and\\Frequency} & \thead{DT, NB,\\ONB} & 71.00\% (2)\\ ~\citep{lee2016wearable} & Acute & 8 (6/2) & 30$\pm$5 & \thead{Car\\Driving} & \thead{Time and\\Frequency} & SVM & 94.78\% (2) \\ ~\citep{gjoreski2015automatic} & Chronic & 48 & -- & \thead{Day long\\recording} & \thead{Time} & \thead{SVM, DT\\RF} & 60.00\% (3)\\ ~\citep{ciman2016individuals} & Acute & 13 (7/6) & 22-32 & \thead{Mental\\arithmetic task} & Time & \thead{kNN, SVM,\\DT, NN} & 80.00\% (5) \\ ~\citep{gjoreski2017monitoring} & Chronic & 21 & 28$\pm$4 & \thead{Day long\\recording} & \thead{Time and\\Frequency} & \thead{DT, NB,\\kNN, SVM,\\RF, ES} & 73.00\% (3) \\ ~\citep{sysoev2015noninvasive} & Chronic & -- & -- & \thead{Real life\\activities} & Time & RF, SL & 77.00\% (2) \\ ~\citep{vildjiounaite2018unobtrusive} & Chronic & 30 & -- & \thead{Real life\\activities} & \thead{Time and\\Frequency} & HMM & 68.00\% (2)\\ \noalign{\smallskip}\hline \end{tabular} } \begin{tablenotes} \item[*] SVM: Support Vector Machine, DT: Decision Tree, kNN: k-Nearest Neighbors, NN: Neural Network, GMM: Gaussian Mixture Model, LR: Logistic Regression, NB: Naive Bayes, ONB: Ordinal Naive Bayes, RF: Random Forest, ES: Ensemble Selection, SL: Simple Logic, HMM: Hidden Markov Model \end{tablenotes} \end{table} \end{enumerate} \subsubsection{Vision based Measures} Vision-based measures have also been used for human stress detection; they use some kind of imaging modality to measure the response of the user. Vision-based techniques can be sub-divided into thermal infrared (IR) and computer vision-based techniques. Literature for each of these sub-divisions is given below.
\begin{enumerate} \item \textit{Thermal Infrared Imaging:} Thermal IR imaging is a non-invasive and contactless technique used to measure the temperature of the human skin. In this technique, a thermal infrared camera is used to record the heat radiated from the skin. The benefit of this technique is that it is not affected by skin color or lighting conditions. When a person is feeling stressed, the flow of blood in the vessels increases, and hence the temperature of the adjacent regions rises. Human affective states like fear~\citep{levine2001face}, arousal~\citep{nozawa2009correlation}, and stress~\citep{ioannou2014thermal} have been recognized using thermal imaging~\citep{nhan2009classifying}. In most affect recognition studies, facial skin temperature is measured using thermal IR imaging to extract useful information~\citep{nhan2009classifying,shastri2008imaging}. The skin temperature of the nose, chin, and corrugator is affected by stressors~\citep{hong2016real}. Even though no conclusive remarks can be made about the effect of stress on a specific region of the face, in the studies conducted in~\citep{puri2005stresscam,chen2014detection} the forehead temperature rises under stressful conditions. Periorbital areas also show signs of increasing temperature under anxious states~\citep{pavlidis2002thermal,pavlidis2000thermal}. Skin temperature is also reported to increase in the supraorbital and periorbital areas under stress~\citep{shastri2008imaging}. Studies conducted in~\citep{engert2014exploring,vinkers2013effect,kang2006determining} showed an increase in nose temperature when an unknown task is presented to the participants. A decrease in the temperature of the perinasal area is found under stressful conditions~\citep{pavlidis2012fast,shastri2012perinasal}. Another thermal camera-based stress recognition framework is proposed in the studies conducted in~\citep{cho2017deepbreath,cho2017thermsense}. A thermal camera is used to detect breathing, and a respiratory spectrogram is used to extract features. The SWT and the mental arithmetic task are used as stimuli to induce stress in the lab environment. A Convolutional Neural Network (CNN) classifier is used for two- and three-level stress classification, and accuracies of 84.59\% for two-level and 56.52\% for three-level stress classification are achieved. \Tab{tab16} presents a summary of human stress classification schemes using thermal imaging.
\begin{table} \caption{Summary of Human Stress Detection Studies using Thermal Imaging.} \label{tab:tab16} \scalebox{0.9}{ \begin{tabular}{cccccccc} \hline\noalign{\smallskip} \thead{Method} & \thead{Type of\\Stress} & \thead{Number of\\Subjects (M/F)} & Age & \thead{Stimulus} & \thead{Features\\Domain} & Classifier & \thead{Accuracy (Classes)}\\ \noalign{\smallskip}\hline\noalign{\smallskip} ~\citep{nhan2009classifying} & Acute & 12 (3/9) & 24$\pm$2.9 & \thead{Visual \\stimulus} & \thead{Time and \\Frequency} & LDA & 70-80\% (6) \\ ~\citep{hong2016real} & Acute & 41 & 20-65 & \thead{Trier Social\\ Stress Test} & Frequency & DEFP & 90.00\% (2) \\ ~\citep{chen2014detection} & Acute & 21 (19/2) & 25 & \thead{Trier Social\\ Stress Test} & Frequency & \thead{Binary\\ classifier} & 88.10\% (2) \\ ~\citep{cho2017deepbreath} & Acute & 8 (5/3) & 18-53 & \thead{Stroop color\\ word test\\Mental \\Arithmetic Task} & Time & CNN & \thead{84.59\% (2)\\56.52\% (3)} \\ \noalign{\smallskip}\hline \end{tabular} } \begin{tablenotes} \item[*] LDA: Linear Discriminant Analysis, DEFP: Differential Energy between Philtrum and Forehead, CNN: Convolutional Neural Network \end{tablenotes} \end{table} \item \textit{Computer Vision:} In computer vision-based human stress assessment, many different organs and locations of the human body have been used, but the face is the most commonly used location for monitoring stress. A computer recognition algorithm for use by astronauts to identify the facial changes occurring in response to low and high stressors is proposed in~\citep{dinges2005optical}. Another study to identify the stress of drivers using facial expressions is presented in~\citep{gao2014detecting}. Thermal imaging can be used to measure blood perfusion, which is correlated with human stress~\citep{derakhshan2014preliminary}. Another study analyzed recorded video using both temporal thermal spectrum and visible spectrum video features to measure stress~\citep{sharma2014thermal}. Another human stress measurement scheme using Kinect sensors is proposed in~\citep{aigrain2015person}. The participants of the experiment are asked to answer time-constrained arithmetic questions in front of a video camera. Facial features along with body postures are extracted and fed to an SVM classifier for stress classification, and an accuracy of 77\% is achieved for stressed vs non-stressed classification. Changes in facial blood flow, observed using thermal and visible cameras, are used for stress detection in the study conducted in~\citep{mohd2015mental}. It was reported in the study that facial thermal features are difficult to attain due to low contrast, so a nostril mask was applied to focus on the nostril area. Noise smoothing is performed using graph cut algorithms and feature extraction using the Scale-Invariant Feature Transform, resulting in a classification accuracy of 88.6\% for two classes. The facial hyperspectral imaging (HSI) technique and tissue oxygen saturation (StO2) data are used to identify stress in a study conducted in~\citep{chen2014detection}. Results obtained from thermal imaging and HSI are compared in the proposed scheme. The TSST is applied to induce stress in the participants of the experiment. It is reported that the StO2 in the eye and forehead of the subject provides discriminative features for the identification of stress. An accuracy of 76\% with automatic thresholding and 88\% with manual thresholding is achieved for two-level stress classification. \Tab{tab17} presents a summary of human stress classification schemes using computer vision-based techniques.
\begin{table} \caption{Summary of Human Stress Detection Studies using Computer Vision Based Techniques.} \label{tab:tab17} \scalebox{0.9}{ \begin{tabular}{cccccccc} \hline\noalign{\smallskip} \thead{Method} & \thead{Type of\\Stress} & \thead{Number of\\Subjects (M/F)} & Age & \thead{Stimulus} & \thead{Features\\Domain} & Classifier & \thead{Accuracy (Classes)}\\ \noalign{\smallskip}\hline\noalign{\smallskip} ~\citep{gao2014detecting} & Acute & 21 & -- & \thead{Car \\driving} & Time & SVM & 90.50\% (2) \\ ~\citep{derakhshan2014preliminary} & Acute & 12 & -- & \thead{peak of \\tension (POT) test} & Time & SVM & 96.00\% (2) \\ ~\citep{sharma2014thermal} & Acute & 35 (22/13) & -- & \thead{Video \\clips} & \thead{Time and \\Frequency} & SVM & 86.00\% (2) \\ ~\citep{aigrain2015person} & Acute & 14 (11/3) & 24.8$\pm$2.8 & \thead{Public speaking\\ mental\\arithmetic task} & \thead{Time and \\Frequency} & SVM & 77.00\% (2) \\ \noalign{\smallskip}\hline \end{tabular} } \begin{tablenotes} \item[*] SVM: Support Vector Machine \end{tablenotes} \end{table} \end{enumerate}
\section{Multimodal Stress Detection} \label{sec:msa} Multimodal human stress detection has been the focus of a wide range of studies in the literature. The primary aim of a multimodal stress detection framework is to increase the system accuracy as compared to single-modality stress measurement systems. Multimodal stress detection schemes available in the literature can be subdivided into (i) fusion of data recorded from different physiological modalities, (ii) fusion of data obtained from motion and physiological sensors, (iii) fusion of data obtained from imaging modalities and physiological sensors, and (iv) fusion of data obtained from smartphones and physical, behavioral, and physiological sensors. In this section, we discuss the available literature for stress measurement using all kinds of multimodal fusion approaches. A multimodal human stress measurement is proposed in~\citep{al2016mental}, where EEG and functional near-infrared spectroscopy (fNIRS) are used to classify acute stress with a classification accuracy of 96.6\%. Another human stress measurement system using GSR and skin temperature is presented in~\citep{kyriakou2019detecting}, with an achieved accuracy of 84\%. A real-time stress detection scheme using heart rate, skin conductance, and accelerometer sensors is proposed in~\citep{can2019continuous}; using a fusion of features from these sensors, an accuracy of 92.15\% is achieved for three-level stress classification. A multimodal stress classification framework for drivers using the physiological signal of PPG and the inertial sensors of accelerometer, gyroscope, and magnetometer is proposed in~\citep{lee2016stress}. Driving is performed in a simulator environment, and a driving behavior survey questionnaire is used to obtain the subjective response from the subjects. Time and frequency domain features are extracted from the acquired data, and stress classification is performed using an SVM classifier with a radial basis function (RBF) kernel, achieving an accuracy of 95\% for two classes. A stress recognition framework for driver stress monitoring using the physiological signals of GSR, ECG, and respiration is proposed in~\citep{chen2017detecting}.
Features are extracted in the time, frequency, and wavelet domains; principal component analysis (PCA) and Sparse Bayesian Learning (SBL) are applied for feature selection and an SVM for classification, resulting in an accuracy of 99\% for three classes. Another multimodal driver stress recognition framework using the DRIVEDB dataset from the PhysioNet database and the physiological signals of GSR, EMG, and ECG is proposed in~\citep{ghaderi2015machine}. Three-level stress classification is performed using kNN and SVM classifiers, and a classification accuracy of 98\% is achieved. Another multimodal stress classification scheme using the physiological signals of BVP, HR, ST, GSR, and RR is proposed in~\citep{gjoreski2016continuous}. The mental arithmetic task is used for the induction of stress, and 63 features are extracted from the recorded data. For the in-lab experiment, two-level (non-stressed, stressed) and three-level (no stress, low stress, and high stress) classification is performed, with accuracies of 83\% and 72\%, respectively. For the out-of-lab environment, the activities of lying, sitting, walking, running, and cycling are recorded. All these activities are numbered according to their intensity, i.e., 1 for lying and 5 for running, and the average intensity of an interval is calculated to determine the stress-inducing intervals. Recording the activities along with their intensities provides context, allowing a distinction between strenuous physical activity and a genuinely stressful situation. The day-long activity is subdivided into one-hour episodes, and accuracies of 76\% and 92\% are achieved for the no-context and with-context scenarios, respectively. Another multimodal stress recognition framework using ECG signals along with activity and contextual data is proposed in a study conducted in~\citep{maier2014mobile}. The designed application measures stress, and when the stress exceeds a certain threshold, the user is prompted to leave the stressful situation or is given a relaxation therapy to reduce the stress level. Accelerometer and GPS data are added to the HRV data obtained from the ECG to improve the accuracy of the system; however, the authors did not report any classification accuracy for the proposed scheme. Another stress measurement scheme is proposed in~\citep{sano2013stress}, using accelerometer and EDA data obtained from wrist sensors, call and SMS data, location and screen on/off features obtained from mobile phones, and a subjective stress score from questionnaires. Sequential Forward Floating Selection (SFFS) is used to select the optimum feature set, which comprises screen on/off, mobility, call, acceleration, and EDA features, and an accuracy of 75\% is achieved for two-class stress classification using SVM and kNN classifiers. Moreover, high-stress behavior is found to be associated with acceleration during sleep in the second half of the day, a small number of SMS messages, and screen-on time. Another multimodal stress classification scheme using respiratory, ECG, and accelerometer sensors is proposed in~\citep{hovsepian2015cstress}. The experimental setup includes a baseline recording, a public speaking task, a mental arithmetic task, and a cold pressor test. This data is used to train the model in laboratory settings, and data from 23 participants is recorded in the out-of-lab environment for testing purposes.
An SVM is used as a classifier; a recall of 89\% is achieved on the test data from the lab environment, and a classification accuracy of 72\% is achieved on the test data from the out-of-lab environment. Another multimodal stress measurement study using accelerometer, EDA, and Bluetooth sensors is presented in~\citep{zubair2015smart}. The experiment is performed in a controlled environment, and logistic regression is used as a classifier to achieve an accuracy of 91\% for two-class stress classification. A real-time stress recognition methodology using HR and EDA signals is proposed in~\citep{de2010two}. The public speaking task is used as a stimulus, and a kNN classifier is used to detect the stressed and relaxed states with an accuracy of 95\%. PPG and EDA signals are used to identify stress in subjects in a study conducted in~\citep{sandulescu2015stress}; TSST is used to induce stress, and an SVM classifier is used to classify stress into two levels with an accuracy of 80\%. Another study using a combination of EDA and HRV signals for the measurement of stress is proposed in~\citep{martinez2017real}. Puzzles are used as a stress-inducing stimulus, and F-measure values of 0.984, 0.970, and 0.943 are achieved for the high-, medium-, and low-stress groups, respectively. Another stress detection scheme using physiological and sociometric sensors is proposed in~\citep{mozos2017stress}. The physiological sensors include EDA and PPG, whereas the sociometric sensors include microphone and accelerometer sensors. Public speaking is used as a stressor to induce stress in the participants, and kNN, SVM, and AdaBoost are used as classifiers, with AdaBoost producing the highest accuracy of 94\% for discriminating the stressed and neutral conditions. In another study, presented in~\citep{kurniawan2013stress}, the authors made use of GSR and speech signals for the measurement of stress. SWT, the Trier Mental Stress Test (TMST), and TSST are used to induce stress in the participants, and time and frequency domain features are extracted from the speech signals. K-means, GMM, SVM, and decision tree classifiers are used for classification, with SVM producing the best results for each type of stressor. Speech signals result in better classification accuracy than the GSR data, and the fusion of speech data with the GSR signal does not increase the classification accuracy for stress classification. Another multimodal stress classification scheme using cardiac features along with EMG, GSR, and respiratory sensors and a Kinect-based video camera is proposed in~\citep{aigrain2016multimodal}. Moreover, in addition to the self-reported questionnaire, feedback from a psychology expert is added to the proposed system. SVM is used as a classifier, and a classification accuracy of 85\% for two classes is achieved. Pupil dilation and periorbital temperature data are used for stress classification in a study proposed in~\citep{baltaci2014role}, with the International Affective Picture System (IAPS) used as the stress-inducing stimulus; a decision tree classifier results in a classification accuracy of 90\% for two classes. This study is improved by the authors in~\citep{baltaci2016stress} by adding entropy features for the physiological signals, achieving better classification accuracy than the earlier study by using an AdaBoost classifier with Random Forest as the base learner instead of a decision tree. Human gaze and mouse click behavior are used for stress classification in~\citep{huang2016stressclick}.
The stress stimulus used in the experiment is a mental arithmetic task, and a random forest classifier achieves the highest accuracy of 60\% for two classes with the generalized stress model. Another multimodal stress recognition framework for computer users using the physiological signals of EEG, ECG, EMG, and EOG is proposed in~\citep{akhonda2014stress}. EEG data and the subjective questionnaire score are obtained only once at the start of the experimental procedure, whereas ECG, EOG, and EMG data are acquired continuously. A neural network is used as a classifier, and three-class stress recognition is performed with an achieved accuracy of 80\%. A stress detection system using the physiological signals of ECG, respiration rate, skin temperature, and GSR is presented in~\citep{shi2010personalized} with a recall of 80\%. Another human stress detection system using wearable EEG and ECG sensors is presented in~\citep{ahn2019novel}; the Stroop color-word and mental arithmetic tests are used as stimuli, and an accuracy of 87.5\% is achieved for stress classification. The fusion of ECG, EMG, GSR, and respiration signals is used to measure driver stress in~\citep{healey2005detecting}, where a classification accuracy of 97\% is achieved for three levels of stress. Another study to analyze the impact of human stress on sleep patterns using the wearable sensors of ECG, GSR, body temperature, and respiration is presented in~\citep{muaremi2014monitoring}. SVM, kNN, NN, Random Forest (RF), and logistic regression classifiers are used to classify stress, and SVM produces the best classification accuracy of 73\% for three classes of stress. A cluster-based technique for the detection of perceived stress using EEG, ECG, GSR, and EMG signals is presented in~\citep{xu2014cluster}. Real-time human stress classification using ECG and thoracic electrical bioimpedance (TEB) signals is presented in~\citep{mohino2015assessment} with an error rate of 21\%. Another stress detection study focusing on working people using ECG and GSR signals is presented in~\citep{sriramprakash2017stress}; the SWELL-KW database is used, and a classification accuracy of 72.82\% is achieved for the stressed vs. non-stressed classes. Another stress recognition system using respiratory rate along with GSR, EEG, and blood volume pulse is presented in~\citep{hosseini2010emotional} with a classification accuracy of 82.7\%. A stress assessment study for the office environment using HR, HRV, GSR, EMG, and respiratory rate is discussed in~\citep{wijsman2013wearable}, with a reported accuracy of 74.5\%. Driver stress has been monitored using ECG, GSR, and respiration rate in~\citep{rigas2011real}, and a combination of GSR, EMG, respiration, and HR has also been explored for driver stress levels~\citep{singh2013novel}. A study to classify the emotional states of stress and anger using GSR, EMG, BVP, and respiratory rate signals is discussed in~\citep{picard2001toward}. ECG, GSR, EMG, and respiration rate have also been explored for human stress assessment in~\citep{wijsman2011towards}. HRV, respiration, GSR, EMG, and geographical location have been used for the detection of mental stress in~\citep{choi2011minimally}, and stress level estimation using GSR, EMG, HR, and respiration sensors is presented in~\citep{gjoreski2016continuous}. A human perceived stress measurement system using smartphone PPG and thermal imaging is presented in~\citep{cho2019instant}, with a reported average classification accuracy of 78.3\%.
Another study correlating the results of a human stress detection system based on ECG, EDA, and EEG sensors with changes in the cortisol level of the subject is discussed in~\citep{betti2017evaluation}. The study reveals that the changes in cortisol levels are strongly in line with the physiological signal response, and a classification accuracy of 86\% is achieved for stress classification. Another stress classification scheme using HRV features with cortisol as a reference is proposed in~\citep{liew2015classifying}, where a classification accuracy of 80\% is achieved using the cortisol biomarker. ECG and salivary cortisol are used for the measurement of different levels of psycho-social stress in~\citep{nater2005human}. Adverse childhood experiences have been reported to cause changes in physiological processes and can determine the magnitude of the stress response; HRV, BVP, ECG, and salivary cortisol are used in~\citep{aimie2018stress} to propose a stress response index that could serve as a future biomarker for such individuals. A fusion of keyboard strokes along with the linguistic features of the written text has been used in~\citep{vizer2009automated} to measure the cognitive and physical stress of the user. The fusion of features from pupil diameter and physiological signals is used in~\citep{barreto2007non,zhai2008stress} to measure stress. The stress of a computer user is measured by a fusion of features from physical appearance, physiological signals, and behavioral data in a study conducted in~\citep{liao2005real}. In another study, accelerometer data and recorded videos are analyzed to monitor the stress response~\citep{giakoumis2012using}. \Tab{tab18} presents a summary of multimodal human stress classification schemes available in the literature.
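A pattern recurring throughout the studies above is feature-level fusion: features extracted separately from each sensor stream are concatenated and passed to a single classifier. The following minimal sketch illustrates this pattern on synthetic data; the per-modality feature dimensions and all identifiers are our own assumptions, not taken from any cited study.
\begin{verbatim}
# Minimal sketch (assumptions): feature-level fusion of three modalities
# followed by an RBF-kernel SVM, evaluated with cross-validation.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n = 200                                 # number of labelled windows
ecg_feats = rng.normal(size=(n, 12))    # e.g. HRV statistics
gsr_feats = rng.normal(size=(n, 6))     # e.g. SCR amplitudes/counts
acc_feats = rng.normal(size=(n, 9))     # e.g. movement energy
y = rng.integers(0, 2, size=n)          # stressed vs. non-stressed labels

# Feature-level fusion: horizontal concatenation of the feature blocks.
X = np.hstack([ecg_feats, gsr_feats, acc_feats])

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
print(cross_val_score(clf, X, y, cv=5).mean())  # ~0.5 on random data
\end{verbatim}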
\begin{table} \caption{Summary of Multimodal Human Stress Detection Studies.} \label{tab:tab18} \scalebox{0.7}{ \begin{tabular}{ccccccccc} \hline\noalign{\smallskip} \thead{Method} & \thead{Type of\\Stress} & Modalities & \thead{Number of\\Subjects (M/F)} & Age & \thead{Stimulus} & \thead{Features\\Domain} & Classifier & \thead{Accuracy (Classes)}\\ \noalign{\smallskip}\hline\noalign{\smallskip} ~\citep{al2016mental} & Acute & EEG, fNIRS & 22 (22/0) & 22-30 & \thead{Montreal \\Imaging \\Stress \\ Task (MIST)} & Frequency & SVM & 95.10\% (2) \\ ~\citep{kyriakou2019detecting} & Acute & GSR, ST & 19 (8/11) & 25-45 & \thead{Audio} & Time & -- & 84.00\% (2) \\ ~\citep{can2019continuous} & Acute & \thead{HR, GSR,\\Acc} & 21 (18/3) & 20 & \thead{Computer \\programming} & Frequency & MLP & 92.15\% (3) \\ ~\citep{lee2016stress} & Acute & \thead{Acc, Gyro,\\Mag} & 28 (18/10) & 35$\pm$16 & \thead{Car\\driving} & \thead{Time and\\Frequency} & SVM & 95.00\% (2) \\ ~\citep{chen2017detecting} & Acute & \thead{ECG, GSR,\\RR} & 9 & -- & \thead{Car\\driving} & \thead{Time and\\Frequency} & SVM & 99.00\% (3) \\ ~\citep{ghaderi2015machine} & Acute & \thead{RR, GSR,\\HR, EMG} & 17 & -- & \thead{Car\\driving} & \thead{Time and\\Frequency} & \thead{SVM, kNN} & 98.00\% (3) \\ ~\citep{gjoreski2016continuous} & Acute & \thead{BVP, HR, ST,\\GSR and RR} & 21 & 28$\pm$4.1 & \thead{Mental\\ arithmetic\\task} & \thead{Time and \\Frequency} & RF & 92.00\% (2) \\ ~\citep{sano2013stress} & Acute & SC, Acc & 18 (15/3) & 28$\pm$7.8 & \thead{Daily \\Life Activity} & \thead{Time and \\Frequency} & SVM, kNN & 75.00\% (2) \\ ~\citep{hovsepian2015cstress} & Chronic & \thead{RR, ECG,\\Acc} & 23 & -- & \thead{Day long\\ recording} & Time & SVM & 95.30\% (2) \\ ~\citep{zubair2015smart} & Acute & \thead{EDA, Acc,\\Bluetooth} & 12 & -- & \thead{Mental \\arithmetic\\task,\\ Emotional\\pictures} & Time & LR & 91.00\% (2)\\ ~\citep{de2010two} & Acute & HR, EDA & 80 (0/80) & 19-32 & \thead{Public\\Speaking} & Time & kNN & 95.00\% (2) \\ ~\citep{sandulescu2015stress} & Acute & PPG, EDA & 5 & 18-39 & TSST & \thead{Time and\\Frequency} & SVM & 80.00\% (2) \\ ~\citep{mozos2017stress} & Acute & \thead{EDA, PPG\\Acc, microphone} & 18 & -- & \thead{Public\\Speaking} & \thead{Time and\\Frequency} & \thead{kNN, SVM,\\AdaBoost} & 94.00\% (2) \\ ~\citep{kurniawan2013stress} & Acute & GSR, Speech & 10 & -- & \thead{SWT, TMST,\\TSST} & \thead{Time and\\Frequency} & \thead{k-means,\\SVM, GMM} & 92.00\% (2) \\ ~\citep{aigrain2016multimodal} & Acute & \thead{EMG, GSR\\RR, Kinect} & 21 (6/15) & 26.3$\pm$4.6 & \thead{Mental\\ arithmetic\\task} & \thead{Time and\\Frequency} & SVM & 85.00\% (2) \\ ~\citep{baltaci2016stress} & Acute & \thead{pupil\\dilation,\\periorbital\\temperature} & 11 (9/2) & 29-40 & IAPS & Time & ABRF & 65\%-84\% (2) \\ ~\citep{huang2016stressclick} & Acute & \thead{Eye gaze,\\mouse click \\behaviour} & 20 (13/7) & 20-33 & \thead{Mental \\arithmetic\\task} & Time & RF & 60.00\% (2) \\ ~\citep{akhonda2014stress} & Acute & \thead{EEG, ECG,\\EMG, EOG} & 12 & -- & \thead{Computer\\Work} & \thead{Time and\\Frequency} & NN & 80.00\% (3) \\ ~\citep{ahn2019novel} & Acute & EEG, ECG & 7 & 29.3$\pm$2.4 & \thead{Mental\\ arithmetic\\task, SWT} & \thead{Time and\\Frequency} & SVM & 87.50\% (2) \\ ~\citep{healey2005detecting} & Acute & \thead{ECG, EMG, \\GSR} & 24 & -- & \thead{Car driving} & \thead{Time and \\Frequency} & \thead{FDA} & 97.00\% (3) \\ ~\citep{muaremi2014monitoring} & Acute & \thead{ECG, GSR,\\ST, RR} & 10 (7/3) & 41 & 
\thead{Sleep\\data} & \thead{Time and \\Frequency} & \thead{SVM, kNN, \\NN, RF, LR} & 73.00\% (3) \\ ~\citep{xu2014cluster} & Chronic & \thead{EEG, ECG, \\GSR, EMG} & 44 (44/0) & 28.6$\pm$7.2 & PASAT & \thead{Time and \\Frequency} & k-Mean & 85.20\% (3) \\ ~\citep{sriramprakash2017stress} & Chronic & ECG, GSR & 25 & -- & \thead{Office\\work} & \thead{Time and \\Frequency} & SVM, kNN & 72.82\% (2) \\ ~\citep{hosseini2010emotional} & Acute & \thead{GSR, EEG,\\BVP} & 15 (15/0) & 20-24 & IAPS & \thead{Time and \\Frequency} & SVM & 84.10\% (2) \\ ~\citep{wijsman2013wearable} & Acute & \thead{HR, HRV, \\GSR, EMG, RR} & 30 (25/5) & 19-53 & \thead{calculation, puzzle\\ and memory task} & \thead{Time and \\Frequency} & GEE & 74.50\% (2) \\ ~\citep{rigas2011real} & Acute & \thead{ECG, \\GSR, RR} & 13 (10/3) & 22-41 & \thead{Car\\driving} & \thead{Time and \\Frequency} & BN & 96.00\% (2) \\ ~\citep{wijsman2011towards} & Acute & \thead{ECG, RR,\\GSR, EMG} & 30 (25/5) & 19-53 & \thead{calculation, puzzle\\ and memory task} & \thead{Time and \\Frequency} & LBN & 80.00\% (2) \\ ~\citep{gjoreski2016continuous} & Acute & \thead{GSR, EMG, \\HR, RR} & 26 & -- & \thead{Daily life\\activity} & \thead{Time and \\Frequency} & RF & 92.00\% (2) \\ ~\citep{cho2019instant} & Acute & \thead{PPG, Thermal\\imaging} & 17 (8/9) & 29.82 & \thead{Mental\\workload} & \thead{Time and \\Frequency} & NN & 78.33\% (2) \\ ~\citep{betti2017evaluation} & Acute & \thead{ECG, EDA,\\EEG} & 26 (8/7) & 60.8$\pm$9.5 & MAST & \thead{Time and \\Frequency} & SVM & 86.00\% (2) \\ ~\citep{liew2015classifying} & Acute & \thead{HRV,\\Cortisol} & 22 (17/5) & 21 & TSST & \thead{Time and \\Frequency} & FAM & 80.00\% (2) \\ ~\citep{vizer2009automated} & Acute & \thead{keystroke,\\linguistic\\features} & 24 (10/14) & 18-56 & \thead{Cognitive \\task} & Time & \thead{DT, kNN\\ SVM, ANN} & 75.00\% (2) \\ ~\citep{barreto2007non} & Acute & \thead{BVP, GSR,\\ST, PD} & 32 & 21-42 & \thead{Stroop color\\ word test} & \thead{Time and \\Frequency} & SVM & 90.10\% (2) \\ ~\citep{zhai2008stress} & Acute & \thead{BVP, GSR,\\ST, PD} & 32 & 21-42 & \thead{Stroop color\\ word test} & Time & SVM & 90.10\% (2) \\ ~\citep{giakoumis2012using} & Acute & \thead{ECG, GSR,\\Acc, Video} & 21 (17/4) & 30.4$\pm$3.7 & \thead{Stroop color\\ word test} & Time & LDA & 100\% (2) \\ \noalign{\smallskip}\hline \end{tabular} } \begin{tablenotes} \item[*] LDA: Linear Discriminant Analysis, SVM: Support Vector Machine, MLP: Multilayer Perceptron, RF: Random Forest, LR: Logistic Regression, DT: Decision Tree, NN: Neural Network, BN: Bayesian Network, LBN: Linear Bayes Normal, GEE: Generalized Estimating Equations, FDA: Fisher Discriminant Analysis, ABRF: AdaBoost with Random Forest, FAM: Fuzzy ARTMAP, MAST: Maastricht Acute Stress Test, Acc: Accelerometer, ST: Skin Temperature, SC: Skin Conductance, PD: Pupil Diameter \end{tablenotes} \end{table}
\section{Future Directions} \label{sec:fd} In this section, we discuss future directions that could be adopted to improve existing human stress measurement methods, as well as the open challenges that still need to be addressed in the literature. One of the most important limitations of most existing stress measurement schemes is that they are performed in a well-controlled lab environment, whereas the measurement of stress in real life poses a different set of challenges. These include the occurrence of multiple stressors at the same time (unlike the lab environment, where a particular stressor is given to the participant at one time) and incomplete contextual information, such as missing information about changing room temperature or the effect of outdoor environmental conditions on the acquired physiological signals.
The addition of contextual information in real-life as well as laboratory settings has been found useful for the efficient measurement of human stress in a study conducted in~\citep{gjoreski2016continuous}, supporting the view that contextual information is important, even though it has not been widely considered in the available literature. In future stress measurement studies, the challenges arising in the out-of-lab environment should be explored to make the available stress measurement methods more practical in real-life scenarios. The availability of ground truth for labeling the data used to train stress detection models is another open research issue. Neither in the lab environment nor in real-life scenarios is true ground truth available, so the majority of studies in the literature have to rely on subjective scoring, which can vary from person to person. It has been observed that physiological data from two participants may show the same pattern, yet one participant labels it as ``stressed'' whereas the other labels it as ``non-stressed''~\citep{liapis2015stress}. This type of labeling ambiguity degrades the performance of the system and needs to be rectified by developing a more robust mechanism for labeling the data. The power consumption of data acquisition is another important factor that needs to be considered when recording data in an out-of-lab environment. In-lab stress measurement systems can use medical-grade data acquisition devices with a constant power supply, whereas in real-life scenarios all devices, whether EEG headsets, skin conductance and heart rate measurement modules, or smartwatches, have a limited battery life of approximately 4-5 hours. To record the activities of a user for a complete day, the power consumption of the devices needs to be kept at a minimum so that the battery lasts longer. The existing literature has not examined this power consumption factor, which needs to be addressed in order to manufacture efficient and long-lasting data acquisition devices for monitoring stress, especially in real-life environments. Noise affects the data acquisition part of stress recognition in the lab as well as in the out-of-lab environment. Physiological signals, including EEG, EMG, ECG, PPG, and RR, are most commonly affected by movements of body parts, which severely degrade the signal quality. Artifact removal techniques include low-, high-, and band-pass filtering, notch filtering, least mean squares and recursive least squares adaptive filtering, principal component analysis (PCA), independent component analysis (ICA), and wavelet denoising. Even though these techniques have been found quite useful for removing noise from bio-signals in the lab environment, the data acquired out of the lab presents a wider range of challenges, as environmental conditions such as temperature, humidity, and lighting affect the recorded physiological signals and other body functions. Examples of such effects include pupil diameter changes in response to light stimulation and changes in the skin conductance of the human body due to temperature and physical activity. In an outdoor environment these conditions are very difficult to keep constant, whereas they can largely be controlled in the lab, posing a greater challenge for stress measurement studies.
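As an illustration of one of the artifact removal techniques listed above, the following minimal sketch applies a zero-phase Butterworth band-pass filter to a synthetic, noisy ECG-like signal. The sampling rate and the band edges are assumptions chosen for illustration only.
\begin{verbatim}
# Minimal sketch (assumptions): zero-phase band-pass filtering of a noisy
# physiological signal with scipy.
import numpy as np
from scipy.signal import butter, filtfilt

fs = 250.0                                     # assumed sampling rate (Hz)
t = np.arange(0, 10, 1 / fs)
clean = np.sin(2 * np.pi * 1.2 * t)            # ~72 bpm cardiac component
noisy = clean + 0.4 * np.random.randn(t.size)  # additive measurement noise

# 0.5-40 Hz band-pass, a range commonly used in ECG preprocessing.
b, a = butter(N=4, Wn=[0.5, 40.0], btype="bandpass", fs=fs)
filtered = filtfilt(b, a, noisy)               # filtfilt: no phase shift
print(np.corrcoef(clean, filtered)[0, 1])      # similarity to clean signal
\end{verbatim}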
Most of the human stress measurement studies available in the literature have focused on acute stress, and very little attention has been given to chronic stress detection techniques, even though chronic stress can be more harmful than acute stress, has cost many companies billions of dollars, and severely affects workers' health~\citep{gottlieb2013coping}. Chronic stress, if poorly managed and prolonged over a long time, can turn into traumatic disorders that permanently damage an individual's life. In future stress measurement work, chronic stress should therefore be given its due importance so that it can be properly diagnosed and treated before it becomes a permanent part of one's life. Most stress measurement studies in the literature have used their own datasets, so it is difficult to compare the results of different schemes directly; given the lack of standard, publicly available stress measurement datasets, the curation of stress data using globally accepted practices, analogous to the DEAP database for emotions~\citep{koelstra2011deap}, is the need of the hour. Deep learning has proven to be a powerful mechanism for classification in a variety of applications, including handwriting and image recognition, but its use for human stress detection has not yet been explored to a great extent in the literature. This is because deep learning-based algorithms require a large dataset to train the model, and acquiring a large stress dataset is a cumbersome task; collecting datasets large enough for deep learning models remains an open research problem. Only a few studies focusing on deep learning-based stress classification have been presented in the literature~\citep{song2017development}, and this area therefore needs further exploration. Keeping in view all the above-mentioned limitations of the existing methods, new work on human stress detection should address these challenges so that more robust systems can be developed that are practical for real-life stress monitoring solutions in the future.
\section{Conclusion} \label{sec:conc} Human stress is one of the most significant challenges faced by modern civilization, with a long-lasting impact on both society and the social life of the individual. It also affects the economic condition of individuals facing stress-related situations. Therefore, the automatic recognition of stress is of utmost importance to bring a positive change in society. Some of the important benefits of automated stress monitoring include a better attitude towards the workplace, an increase in productivity, and a decrease in the number of road accidents, to name a few. In general, better stress management for individuals would have far-reaching benefits for both individuals and society. Towards this, we have surveyed the available literature on human stress measurement, including both subjective and objective measures. Commonly used stressors for inducing stress, followed by publicly available stress measurement databases, are also examined. Multimodal approaches for human stress detection, along with a discussion of the limitations of existing work and the open research issues that need to be addressed in future stress measurement studies, are also explored.
In particular, we have focused on methods that use, or are suitable for employing, artificial intelligence techniques for automated stress detection. The information presented here will provide a platform for designing future studies for stress detection, particularly for employing AI with domain knowledge. With this, we also provide a better context for methods that can be used for daily-life stress detection (acute) and for situations that are more demanding (chronic). Towards this, we have comprehensively highlighted the current challenges and future directions. \bibliographystyle{spbasic}
\section{Introduction} Given a binary word $w=(w_i)_{i=1}^n\in\{0,1\}^n$ of length~$n$, denote by $w[j,k]$ the subword of length $k-j+1$ starting at position $j$ and ending at position~$k$, that is, $w[j,k]=w_jw_{j+1}\dots w_k$. Let $|w|_1$ be the number of 1s in the word~$w$. We define the \emph{profile\/} $f_w\colon\{0,\dots,n\}\to\{0,\dots,n\}$ of $w$ by \[ f_w(k)=\max_{0\le j\le n-k}|w[j+1,j+k]|_1, \] so that $f_w(k)$ is the maximum number of 1s in any subword of $w$ of length~$k$. The word $w$ is called \emph{prefix normal\/} if for all $0\le k\le n$ this number is maximized at $j=0$, so that \[ |w[1,k]|_1\ge|w[j+1,j+k]|_1\qquad\text{for }0\le j\le n-k. \] In other words, a word $w$ is called prefix normal if the number of $1$s in any subword is at most the number of $1$s in the prefix of the same length. If $j<k$ then we can remove the common subword $w[j+1,k]$ of $w[1,k]$ and $w[j+1,j+k]$, so that $|w[1,k]|_1\ge|w[j+1,k+j]|_1$ iff $|w[1,j]|_1\ge|w[k+1,k+j]|_1$. Thus to show that $w$ is prefix normal it is enough to check that \begin{equation}\label{e1} |w[1,k]|_1\ge |w[j+1,j+k]|_1\qquad\text{for }k\le j\le n-k. \end{equation} Prefix normal words were introduced by G.~Fici and Z.~Lipt\'ak in \cite{pnf2conf} because of their connection to binary jumbled pattern matching. Recently, prefix normal words have been used because of their connection to trees with a prescribed number of vertices and leaves in caterpillar graphs~\cite{caterpillar}. The number of prefix normal words of length $n$ is listed as sequence A194850 in The On-Line Encyclopedia of Integer Sequences (OEIS)~\cite{OEIS}. We prove the following result, conjectured in~\cite{pnf1} (Conjecture 2) where also weaker upper and lower bounds were shown, see also~\cite{pnf1x}. \begin{theorem}\label{t:1} The number of prefix normal words of length $n$ is $2^{n-\Theta((\log n)^2)}$. \end{theorem} Given an arbitrary binary word $w$ of length $n$, the \emph{prefix normal form} $\tilde w$ of $w$ is the unique binary word of length $n$ that satisfies \[ |\tilde w[1,k]|_1=f_w(k). \] Note that for any $w$, $f_w(k)\le f_w(k+1)\le f_w(k)+1$, so $\tilde w$ is well-defined. Moreover, we can define an equivalence relation $\sim$ on binary words of length $n$ by \[ w\sim v \qquad\Longleftrightarrow\qquad f_w=f_v \qquad\Longleftrightarrow\qquad \tilde w=\tilde v. \] Indeed, $\tilde w$ is just the lexicographically maximal element of the equivalence class $[w]$ of $w$ under this equivalence relation. In~\cite{pnf2conf} it is asked how large can an equivalence class $[w]$ be. In other words, what is the maximum number of words of length $n$ that have the same fixed prefix normal form. This maximum number is listed in the OEIS as sequence A238110~\cite{OEIS}. From Theorem~\ref{t:1} it is clear that it must be at least $2^{\Theta((\log n)^2)}$. However, we show that it is much larger. \begin{theorem}\label{t:2} For each $n$ there exists a prefix normal word $w$ such that the number of binary words of length $n$ with prefix normal form $w$ is $2^{n-O(\sqrt{n\log n})}$. \end{theorem} \section{Proofs} \begin{proof}[Proof of the lower bound of Theorem~\ref{t:1}.] To prove the lower bound we will need to construct $2^{n-\Theta((\log n)^2)}$ prefix normal words of length~$n$. We will do so by giving a random construction and showing that this construction almost always produces a prefix normal word. Fix a constant $c> \sqrt{2}$ and define \[ p_k=\begin{cases} \frac{1}{2}+c\sqrt{\frac{\log n}{k}},&\text{for }k>16c^2\log n;\\ 1,&\text{for }k\le 16c^2\log n. 
\end{cases} \] Write $k_0:=\lfloor 16c^2\log n\rfloor$ so $p_k=1$ if $k\le k_0$, and $p_k\in[\frac12,\frac34]$ for $k>k_0$. Let $w$ be a random word with each letter $w_k$ chosen to be 1 with probability~$p_k$, independently for each $k=1,\dots,n$. Clearly \eqref{e1} holds for all $k\le k_0$, so assume $k>k_0$. By comparing the integral $\int c\sqrt{\frac{\log n}{k}}\,dk=2c\sqrt{k\log n}+C$ with the corresponding Riemann sum, we note that \[ \sum_{i=1}^k p_i = \tfrac{k}{2}+2c\sqrt{k\log n} + O(1) \] uniformly for $k>k_0$ (and uniformly in~$c$). Indeed, the approximation of the integral by the Riemann sum has error at most the maximum term, due to the monotonicity of the integrand, and the additive constant is also $O(1)$ by considering the case $k=k_0$. From this we estimate the expected difference \begin{equation}\label{difference} |w[1,k]|_1-|w[j+1,j+k]|_1=\sum_{i=1}^kw_i+\sum_{i=j+1}^{k+j}(1-w_i)-k \end{equation} as \[ \mu:=\E\big(|w[1,k]|_1-|w[j+1,j+k]|_1\big) =2c\sqrt{k\log n}-2c\sqrt{(j+k)\log n}+2c\sqrt{j\log n}+O(1). \] This expression is minimized when $j$ is as small as possible, i.e., $j=k$. Thus \[ \mu\ge 2(2-\sqrt2)c\sqrt{k\log n}+O(1)> c\sqrt{k\log n} \] for sufficiently large $n$. By~\eqref{difference}, $|w[1,k]|_1-|w[j+1,j+k]|_1$ can be considered as the sum of $2k$ independent Bernoulli random variables (with an offset of $-k$). We recall the \emph{Hoeffding bound\/}~\cite{hoeffding} that states that if $X$ is the sum of $n$ independent random variables in the interval $[0,1]$ then for all $x\ge 0$, \begin{equation}\label{eq:hoeffding} \Prb\big(X-\E(X)\ge x\big)\le \exp\{-2x^2/n\}\quad\text{and}\quad \Prb\big(X-\E(X)\le -x\big)\le \exp\{-2x^2/n\}. \end{equation} (Note that these two bounds are essentially the same bound as the second can be easily derived from the first by exchanging the roles of the $0$s and $1$s but we state them both here for convenience.) Let $\mu^*= \E\big(\sum_{i=1}^k w_i +\sum_{i=j+1}^{k+j}(1-w_i)\big)$. Note that $\mu^*=\mu+k$. We have \begin{align*} \Prb\big(|w[1,k]|_1<|w[j+1,j+k]|_1\big) &\stackrel{\eqref{difference}}{=} \Prb\left(\sum_{i=1}^k w_i +\sum_{i=j+1}^{k+j}(1-w_i) <k \right)\\ &\le \Prb\left(\sum_{i=1}^k w_i +\sum_{i=j+1}^{k+j}(1-w_i) -\mu^* <k- \mu^* \right) \\ &\le \Prb\left(\sum_{i=1}^k w_i +\sum_{i=j+1}^{k+j}(1-w_i) -\mu^* < - \mu \right) \\ &\stackrel{\eqref{eq:hoeffding}}{\le} \exp\big\{-2\mu^2/(2k)\big\}\\ &\le\exp\big\{-c^2\log n \big\} \end{align*} Hence if $c$ is large enough ($c>\sqrt{2}$) then $\Prb(|w[1,k]|_1<|w[j+1,j+k]|_1)=o(n^{-2})$. Taking a union bound over all possible values of $k$ and~$j$, we deduce that $w$ is prefix normal with probability $1-o(1)$. It remains to count the number of such~$w$. For any discrete random variable $X$, define the \emph{entropy\/} of the distribution of $X$ as \[ H(X):=\sum_x -\Prb(X=x)\log_2\Prb(X=x), \] where the sum is over all possible values $x$ of $X$ and the logarithm is to base~2. If the random variable is a Bernoulli random variable, we call $H(\mathrm{Be}(p))$ the {\it binary entropy function}~$H_b(p)$. We use the following well-known (and easily verified) facts about the entropy. \begin{enumerate}[label=H\arabic*)] \item If $X_1,\dots,X_n$ are independent discrete random variables and $X=(X_1,\dots,X_n)$, then $H(X)=\sum_{i=1}^n H(X_i)$. \item If $X$ takes on at most $N$ possible values with positive probability then $H(X)\le \log_2 N$. 
\item \label{H3} The Taylor series of the binary entropy function in a neighbourhood of $1/2$ is \[ H_b(p)=1 - \frac{1}{2\ln 2}\sum_{n=1}^\infty \frac{(1-2p)^{2n}}{n(2n-1)}.\] In particular, for a Bernoulli random variable with $\Prb(X=1)=\frac12+x$, $H(X)=1-\Theta(x^2)$. \item If $\cB$ is subset of possible values of $X$ we have \[ H(X)=H(X\mid X\in\cB)\Prb(X\in\cB)+H(X\mid X\notin\cB)\Prb(X\notin\cB)+H(1_{X\in\cB}), \] where $X\mid \cE$ denotes the distribution of $X$ conditioned on the event $\cE$ and $1_{\cE}$ denotes the indicator function of~$\cE$. \end{enumerate} Applying these results to our random word $w$ we have \[ H(w)=\sum_{k>k_0}^n H(w_k)=n-k_0-\Theta\left(\sum_{k=k_0}^n c^2\tfrac{\log n}{k}\right) =n-\Theta((\log n)^2). \] On the other hand, if $\cB$ is the set of prefix normal words, then \begin{align*} H(w)&=H(w\mid w\in\cB)\Prb(w\in\cB)+H(w\mid w\notin\cB)\Prb(w\notin\cB)+H(1_{w\in\cB})\\ &\le \log_2(|\cB|)\Prb(w\in\cB)+n\,\Prb(w\notin\cB)+1\\ &= n+1 - (n-\log_2|\cB|)(1-o(1)). \end{align*} We deduce that $n-\log_2|\cB|\le\Theta((\log n)^2)$ and hence $|\cB|\ge 2^{n-\Theta((\log n)^2)}$. \end{proof} \begin{proof}[Proof of the upper bound in Theorem~\ref{t:1}.] We will prove the upper bound in two parts. Firstly we will show that most prefix normal words have to contain a good number of $1$s in any prefix of reasonable size as we cannot extend a prefix with too few 1s to a prefix normal word in many ways. Secondly, we will show that there are at most $2^{n-\Theta (\log^2n)}$ ways to construct a word which has sufficiently many $1$s in all reasonably sized prefixes. Assume $\log n\le k\le \sqrt n$ and consider the first $\lfloor\sqrt n\rfloor$ blocks of size $k$ of~$w$. If $|w[1,k]|_1=d$ then the number of choices for the second and subsequent blocks is at most $2^k(1-\Prb(\Bin(k,\tfrac12)>d))$, and hence the number of choices for $w$ is at most \[ 2^n\big(1-\Prb\big(\Bin(k,\tfrac12)>d\big)\big)^{\lfloor\sqrt n\rfloor-1} \le 2^{n-\Omega(\sqrt n\,\Prb(\Bin(k,1/2)>d))}. \] If $\Prb(\Bin(k,\tfrac12)>d)>n^{-1/3}$, say, then there are far fewer than $2^{n-\Theta((\log n)^2)}$ choices of such prefix normal words, even allowing for summation over all such $k$ and~$d$. Using Stirling's formula one can show that for $1/2<\lambda<1$ and $\lambda k$ integral, \[ \Prb\big(\Bin(k,\tfrac12)\geq \lambda k) = \sum_{i=\lambda k}^k\binom{k}{i}2^{-k}\geq \frac{2^{k H_b(\lambda)-k}}{\sqrt{8k\lambda(1-\lambda)}} \geq \frac{2^{k H_b(\lambda)-k}}{\sqrt{2k}} , \] see for example \cite{ash} for a detailed proof. Thus, by \ref{H3}, we have \[ \Prb\big(\Bin(k,\tfrac12)>\tfrac{k}{2}+x\big)\ge \frac{1}{\sqrt{2k}}2^{-\Theta(x^2/k)}, \] provided $x<k/2$. Thus if $\log n\le k\le \sqrt n$ and $\Prb(\Bin(k,\tfrac12)>d)>n^{-1/3}$ we can deduce that $d\ge \frac{k}{2}+c\sqrt{k\log n}$ for some small universal constant $c>0$. Thus, without loss of generality, we can restrict to prefix normal words with the property that \begin{equation}\label{e2} |w[1,k]|_1\ge \tfrac{k}{2}+c\sqrt{k\log n}\qquad\text{for all $k$ with}\qquad \log n\le k\le\sqrt n. \end{equation} Define $d_0=c\sqrt{\log n}$, which for simplicity we shall assume is an integer. (One can reduce $c$ slightly to ensure this is the case.) Define $\cE_t$ to be the event that \eqref{e2} holds with $k=4^t$, i.e., that $|w[1,4^t]|_1\ge 2^{2t-1}+2^td_0$. Let $t_0$ be the smallest $t$ such that $4^t\ge\log n$ and let $t_1$ be the largest $t$ such that $4^t\le\sqrt{n}$. 
We bound the probability that a uniformly chosen $w\in\{0,1\}^n$ satisfies $\cE_{t_0}\cap\cE_{t_0+1}\cap\dots\cap\cE_{t_1}$. Write $\cE_{t,j}$ for the event that $|w[1,4^t]|_1=2^{2t-1}+2^td_0+j$ and $\cE_{t,\ge j}$ for the event that $|w[1,4^t]|_1\ge 2^{2t-1}+2^td_0+j$. Thus $\cE_t$ is just $\cE_{t,\ge0}$. Write $\cE_{\le t}$ for the intersection $\cE_{t_0}\cap\cE_{t_0+1}\cap\dots\cap \cE_t$. {\bf Claim:} For $t\in[t_0,t_1]$ and $j\ge0$, \[ \Prb\big(\cE_{\le t-1}\cap\cE_{t,\ge j}\big)\le n^{-2c^2(t-t_0+1)/3}\beta_t^j/(1-\beta_t), \] where $\beta_t:=\exp\{-2^{3-t}d_0/3\}$. Note that $\beta_t<1$ for all $t\in[t_0,t_1]$. For the case $t=t_0$ we simply use the Hoeffding bound \eqref{eq:hoeffding} to obtain \begin{align*} \Prb(\cE_{t_0,\ge j})&=\Prb\big(\Bin(4^{t_0},\tfrac12)\ge 2^{2t_0-1}+2^{t_0}d_0+j\big) \le \exp\big\{-2(2^{t_0}d_0+j)^2/4^{t_0}\big\}\\ &\le \exp\big\{-2d_0^2-4jd_0/2^{t_0}\big\} = n^{-2c^2}\beta_{t_0}^{3j/2}<n^{-2c^2/3}\beta_{t_0}^j/(1-\beta_{t_0}) \end{align*} as required. Now assume the claim is true for~$t$. We first want to give a bound on $\Prb(\cE_{\le t}\cap\cE_{t+1,\ge j})$. Note that if $\cE_{\le t-1} \cap \cE_{t,i}$ holds then in particular $\cE_{t,i}$ holds and thus for $\cE_{t+1,\ge j}$ to hold we still need at least \[ 2^{2(t+1)-1} + 2^{t+1}d_0 +j - 2^{2t-1}-2^td_0-i = 3\cdot 2^{2t-1} + 2^{t}d_0 +j -i \] $1$s in the interval $[4^t +1, 4^{t+1}]$. Thus we get \[ \Prb\big(\cE_{\le t}\cap\cE_{t+1,\ge j}\big) \le\sum_{i\ge0}\Prb(\cE_{\le t-1}\cap\cE_{t,i}) \Prb\big(|w[4^t+1,4^{t+1}]|_1\ge 3\cdot 2^{2t-1}+2^t d_0+j-i\big). \] Note that there are $4^{t+1}-4^t=3\cdot 4^t$ elements in the interval $[4^t +1, 4^{t+1}]$ and that we expect \[ \frac{3\cdot 4^t}{2} = 3\cdot 2^{2t-1} \] $1$s in this interval. Hence by Hoeffding \begin{align*} \Prb\big(|w[4^t+1,4^{t+1}]|_1\ge 3\cdot 2^{2t-1}+2^t d_0+j\big) &\le \exp\big\{ -2(2^td_0+j)^2/(3\cdot 4^t)\big\}\\ &\le \exp\big\{-2d_0^2/3-4jd_0/(3\cdot 2^t)\big\}\\ &= n^{-2c^2/3}\beta_{t+1}^j. \end{align*} Note that the final inequality is even true for negative $j$: for $j\ge - 2^td_0$ Hoeffding's bound holds, and for $j\le -2^t d_0$ the bound on the probability is larger than~$1$. If we let $p_i=\Prb(\cE_{\le t-1}\cap\cE_{t,\ge i})$ then we have \begin{align*} \Prb(\cE_{\le t}\cap\cE_{t+1,\ge j}) &\le \sum_{i\ge 0}(p_i-p_{i+1}) n^{-2c^2/3}\beta_{t+1}^{j-i}\\ &\le n^{-2c^2/3}\beta_{t+1}^j\big(p_0+(1-\beta_{t+1})(\beta_{t+1}^{-1}p_1+\beta_{t+1}^{-2}p_2+\dots)\big). \end{align*} Now by induction, $p_i\le n^{-2c^2(t-t_0+1)/3}\beta_t^i/(1-\beta_t)$. As $\beta_t=\beta_{t+1}^2$ we have \begin{align*} \Prb(\cE_{\le t}\cap\cE_{t+1,\ge j}) &\le n^{-2c^2(t-t_0+2)/3}\beta_{t+1}^j (1+(1-\beta_{t+1})(\beta_{t+1}+\beta_{t+1}^2+\dots))/(1-\beta_{t+1}^2)\\ &=n^{-2c^2(t-t_0+2)/3}\beta_{t+1}^j(1+\beta_{t+1})/(1-\beta_{t+1}^2)\\ &= n^{-2c^2(t-t_0+2)/3}\beta_{t+1}^j/(1-\beta_{t+1}), \end{align*} as required. Thus the claim is proved. Now we take $t=t_1$ and $j=0$ to deduce that $\Prb(\cE_{\le t_1})\le n^{-2c^2(t_1-t_0+1)/3}/(1-\beta_{t_1})$. Recall $\beta_{t_1} = \exp(-2^{3-t_1}d_0/3)$, $d_0=c\sqrt{\log n}$, and that $t_1$ was chosen so $\sqrt{n}/4 < 4^{t_1} \le \sqrt{n}$. Thus, for large~$n$, $n^{-1/4} < 2^{3-t_1}d_0/3 < 1$. Using the inequality $e^{-x}\le 1-x/2$, which holds for $0\le x\le 1$, we deduce that $1-\beta_{t_1}\ge n^{-1/4}/2$, and so $1/(1-\beta_{t_1}) = O(n^{1/4})$. Also, we have $t_1-t_0+1=\Theta(\log n)$ as $n\to\infty$ and thus $\Prb(\cE_{\le t_1})\le 2^{-\Omega((\log n)^2)}$. 
As the probability that a uniformly chosen word $w$ satisfies $\cE_{\le t_1}$ is at most $2^{-\Omega((\log n)^2)}$, we deduce that the number of prefix normal words is at most $2^{n-\Theta((\log n)^2)}$. \end{proof} \begin{proof}[Proof of Theorem~\ref{t:2}.] Fix an integer $t\approx\sqrt{n\log n}$ and assume for simplicity that $n$ is a multiple of $2t$. Define $w=(10)^t1^{2t}c_1c_2\dots c_{(n-4t)/2t}$, where $c_i$ are arbitrary Catalan sequences of length $2t$. Here a \emph{Catalan sequence} is a binary sequence $c$ of length $2t$ such that $|c[1,i]|_1\le i/2$ for all $i=1,\dots,2t$ and $|c|_1=t$. It is well-known that the number of choices for $c_i$ is the \emph{Catalan number} \[ C_t=\frac{1}{t+1}\binom{2t}{t}\sim \frac{2^{2t}}{\sqrt\pi t^{3/2}}. \] It is easy to see that the prefix normal form of any $w$ of this form is \begin{equation}\label{e3} \tilde w=1^{2t}(01)^{(n-2t)/2}. \end{equation} Indeed, there is a subword $1^k$ of $w$ for all $k\le 2t$. For $k>2t$, if we write $k=2tq+r$ with $0\le r<2t$ then we have a subword $(10)^{r/2}1^{2t}c_1\dots c_{q-1}$ or $0(10)^{(r-1)/2}1^{2t}c_1\dots c_{q-1}$ which is of length $k$ and has the requisite number $t+\lfloor k/2\rfloor$ of 1s. On the other hand, the definition of a Catalan sequence implies no other subword of length $k$ containing the $1^{2t}$ subword can possibly have more 1s. Any substring intersecting the $1^{2t}$ and of length greater than $2t$ can be replaced by one containing the $1^{2t}$ with at least as many ones. And finally, any subword of $w$ of length $k>2t$ not intersecting the $1^{2t}$ subword (so contained within the $c_1\dots c_{(n-4t)/2t}$ subword) can have at most $t+\lfloor k/2\rfloor$ 1s as an end-word of $c_i$ contains at most $t$ 1s and there are at most $\lfloor k/2\rfloor$ 1s in the initial subword of $c_{i+1}c_{i+2}\dots$ of length~$k$. It remains to count the number of possible $w$'s. This is just \[ C_t^{(n-4t)/(2t)}=2^{n-4t-(\log t)3n/4t+O(n/t)}. \] Taking $t\sim\sqrt{n\log n}$ gives $2^{n-O(\sqrt{n\log n})}$ words $w$ satisfying~\eqref{e3}. \end{proof} {\bf Acknowledgement:} We would like to thank the anonymous referees for their helpful comments and their quick response.
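{\bf Remark:} As an illustrative aside, the defining inequality \eqref{e1} can be checked by brute force, and the resulting counts compared with sequence A194850 of the OEIS~\cite{OEIS}. The following short Python sketch is only an illustration and is, of course, feasible only for small~$n$.
\begin{verbatim}
# Brute-force check of prefix normality (Eq. (1)) and count of prefix
# normal words of length n; illustration only, exponential in n.
from itertools import product

def is_prefix_normal(w):
    """True iff every subword has at most as many 1s as the prefix
    of the same length."""
    n = len(w)
    pre = [0]                      # pre[k] = number of 1s in w[1,k]
    for bit in w:
        pre.append(pre[-1] + bit)
    for k in range(1, n + 1):
        best = max(pre[j + k] - pre[j] for j in range(n - k + 1))
        if best > pre[k]:
            return False
    return True

for n in range(1, 8):
    count = sum(is_prefix_normal(w) for w in product((0, 1), repeat=n))
    print(n, count)                # 2, 3, 5, 8, 14, 23, 41 (A194850)
\end{verbatim}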
\section{Introduction} Describing the relationship between rainfall and runoff is one of the central tasks in the field of hydrology \cite{Klemes1982}. This involves the prediction of the river discharge from meteorological observations from a river basin. The basin or catchment of a river is defined as the area from which all (surface) runoff drains to a common outlet \cite{WMO2012}. Predicting the discharge of the river is necessary for e.g. flood forecasting, the design of flood protection measures, or the efficient management of hydropower plants. Within the basin of a river, various hydrological processes take place that influence and lead to the river discharge, including, for example, evapotranspiration, where water is lost to the atmosphere, snow accumulation and snow melt, water movement in the soil or groundwater recharge and discharge (see Fig. \ref{fig_hyd_mod}). \begin{figure} \centering \includegraphics[width=11cm]{figures/hyd_model.png} \caption{Simplified visualization of processes and fluxes that influence the river discharge, such as precipitation, snow melt, surface runoff or subsurface flows.} \label{fig_hyd_mod} \end{figure} The hydrological processes have highly non-linear interactions and depend, to a large degree, on the states of the system, which represent the memory of, e.g. a river basin. Consequently, hydrological models are formulated in a state-space approach where the states at a specific time depend on the input $\boldsymbol{I}_t$, the system states at the previous time step $\boldsymbol{S}_{t-1}$, and a set of parameters $\Theta_i$ \cite{Herrnegger2015}: \begin{equation}\label{eq_1} \boldsymbol{S}_t = f(\boldsymbol{I}_t, \boldsymbol{S}_{t-1}; \Theta_i) \end{equation} The discharge at a given time step $t$ is driven by the system states and in consequence by the meteorological events of the preceding time steps. More generally, any output $\boldsymbol{O}_t$ of a hydrological system (e.g. the runoff) can be described as: \begin{equation}\label{eq_2} \boldsymbol{O}_t = g(\boldsymbol{I}_t, \boldsymbol{S}_{t}; \Theta_j), \end{equation} where $g(\cdot)$ is a mapping function that connects the states of the system and the inputs to the system output, and $\Theta_j$ is the corresponding subset of model parameters. To make proficient predictions, these non-linearities make it necessary (at least in classical process-based hydrological models) to explicitly implement the hydrological processes \cite{Herrnegger2012,Lindstrom2010,Perrin2003,Thielen2008}. However, defining the mathematical representations of the processes, including the model structures, and determining their effective parameters so that the resulting system exhibits good performance and generalizable properties (e.g. in the form of seamless parameter fields) still remains a challenge in the field of hydrology \cite{Gupta1999,Klotz2017,Samaniego2017}. A significant problem and limiting factor in this context is the frequently missing information regarding the physical properties of the system \cite{Beven2001,Freeze1969}. These tend to be highly heterogeneous in space (e.g. soil characteristics) and can additionally change over time, e.g. vegetation cover. Our knowledge of the properties on or near the surface has increased significantly in the last decades. This is mainly due to advances in high-resolution air- as well as spaceborne remote sensing \cite{Brenner2017,Hengl2017,Myneni2002}.
However, hydrology, to a significant part, takes place underground, for which detailed information is rarely available. In essence, the process-based models try to describe a system determined by spatially and temporally distributed system states and physical parameters, which are most of the time unknown. In contrast, data-driven methods, such as Neural Networks, are solely trained to predict the discharge, given meteorological observations, and do not necessitate an explicit definition of the underlying processes. But these models do not have the best reputation among many hydrologists because of the prevailing opinion that models ``must work well for the right reasons'' \cite{Klemes1986}. However, due to their predictive power, the first studies using Neural Networks to predict river discharge date back to the early 90s \cite{Daniell1991,Halff1993}. Recently, Kratzert et al. \cite{hess-2018-247} used Long Short-Term Memory networks (LSTMs) \cite{Hochreiter1997} for daily rainfall-runoff modelling and could show that LSTMs achieve competitive results, compared to the well established Sacramento Soil Moisture Accounting Model \cite{Burnash1973} coupled with the Snow-17 snow model \cite{Anderson1973}. The LSTM is an especially well-suited network architecture for hydrological applications, since the evolution of states can be modelled explicitly through time and mapped to a given output. The approach is very similar to rainfall-runoff models defined by Equations \ref{eq_1}-\ref{eq_2} (in the case of the LSTM the system states are the memory cell states and the parameters are the learnable network weights, \cite{hess-2018-247}). The aim of this chapter is to show different possibilities that enable the interpretation of the LSTM and its internals in the context of rainfall-runoff simulations. Concretely, we explore and investigate the following questions: How many days of the past influence the output of the network at a given day of the year? Do some of the memory cells correlate with hydrological states? If yes, which input variables influence these cells and how? Answering these questions is important to a) gain confidence in data-driven models, e.g. in case of the necessity for extrapolation, b) have tools to understand possible mistakes and difficulties in the learning process, and c) potentially learn from findings for future applications. \subsection{Related work} In the field of water resources and hydrology, much effort has been put into interpreting neural networks and analyzing the importance of input variables (see \cite{Bowden2005} for an overview). However, so far only feed-forward neural networks have been applied in these studies. Only recently, Kratzert et al. \cite{hess-2018-247} have demonstrated the potential use of LSTMs for the task of rainfall-runoff modelling. In their work they have also shown that memory cells with interpretable functions exist, which were found by visual inspection. Outside of hydrology, LSTMs have found a wide range of applications, which attracted researchers to analyze and interpret the network internals. For example Hochreiter et al. \cite{Hochreiter2007} found new protein motifs through analyzing LSTM memory cells. Karpathy et al. \cite{Karpathy2015} inspected memory cells in character-level language modelling and identified some interpretable memory cells, e.g. cells that track the line length or cells that check if the current text is inside brackets or quotation marks. Li et al.
\cite{Li2015} inspected trained LSTMs in the application of sentence- and phrase-based sentiment classification and showed through saliency analysis which parts of the inputs influence the network prediction most. Arras et al. \cite{Arras2017} used Layer-wise Relevance Propagation to calculate the impact of single words on sentiment analysis from text sequences. Also for sentiment analysis, Murdoch et al. \cite{Murdoch2018} present a decomposition strategy for the hidden and cell state of the LSTM to extract the contributions of single words on the overall prediction. Poerner et al. \cite{Poerner2018} summarize various interpretability efforts in the natural language processing domain and present an extension of the LIME framework, introduced originally by Ribeiro et al. \cite{Ribeiro2016}. Strobelt et al. \cite{Strobelt2018} developed LSTMVis, a general purpose visual analysis tool for inspecting hidden state values in recurrent neural networks. Inspired by these studies, we investigate the internals of LSTMs in the domain of environmental science and compare our findings to hydrological domain knowledge. \section{Methods} \subsection{Model architecture} In this study, we will use a network consisting of a single LSTM layer with 10 hidden units and a dense layer that connects the output of the LSTM at the last time step to a single output neuron with linear activation. To predict the discharge of a single time step (day) we provide the last 365 time steps of meteorological observations as inputs. Compared to Eq. \ref{eq_1} we can formulate the LSTM as: \begin{equation}\label{eq_3} \{\boldsymbol{c}_t, \boldsymbol{h}_t\} = f_{\mathrm{LSTM}}(\boldsymbol{i}_t, \boldsymbol{c}_{t-1}, \boldsymbol{h}_{t-1}; \Theta_k), \end{equation} where $f_{\mathrm{LSTM}}(\cdot)$ symbolizes the LSTM cell that is a function of the meteorological input $\boldsymbol{i}_t$ at time $t$, and the previous cell state $\boldsymbol{c}_{t-1}$ as well as the previous hidden state $\boldsymbol{h}_{t-1}$, parametrized by the network weights $\Theta_k$. The output of the system, formally described in Eq. \ref{eq_2}, would in this specific case be given by: \begin{equation}\label{eq_4} y = f_{\mathrm{Dense}}(\boldsymbol{h}_{365}; \Theta_l), \end{equation} where $y$ is the output of a dense layer $f_{\mathrm{Dense}}(\cdot)$ parametrized by the weights $\Theta_l$, which predicts the river discharge from the hidden state at the end of the input sequence $\boldsymbol{h}_{365}$. The difference between the LSTM and conventional rainfall-runoff models is that the former has the ability to infer the needed structure/parametrization from data without preconceived assumptions about the nature of the processes. This makes it extremely attractive for hydrological applications. The network is trained for 50 epochs to minimize the mean squared error using RMSprop \cite{Tieleman2012} with an initial learning rate of 1e-2. The final model is selected based on the score of an independent validation set. \subsection{Data} In this work, we concentrate on two different basins from the publicly available CAMELS data set \cite{Addor2017,Newman2014}: basin A, which is influenced by snow, and basin B, which is not. Some key attributes of both basins can be found in Table~\ref{tab_basin}.
\begin{table} \caption{Basin overview.}\label{tab_basin} \setlength{\tabcolsep}{10pt} \begin{tabular}{ c c c c c c } \multirow{2}{*}{Basin} & \multirow{2}{*}{ID\footnote{USGS stream gauge ID}} & Snow & \multirow{2}{*}{Area (km\textsuperscript{2})} & NSE & NSE \\ & & fraction\footnote{Fraction of precipitation falling with temperatures below 0$^{\circ}$C} & & validation & test\\ \toprule A & 13340600\footnote{Clearwater river, CA} & 56 \% & 3357 & 0.79 & 0.76 \\ B & 11481200\footnote{Little river, CA} & 0 \% & 105 & 0.72 & 0.72 \\ \end{tabular} \end{table} For meteorological forcings, the data set contains basin-averaged daily records of precipitation (mm/d), solar radiation (W/m\textsuperscript{2}), minimum and maximum temperature ($^{\circ}$C) and vapor pressure (Pa). The streamflow is reported as daily average (m\textsuperscript{3}/s) and is normalized by the basin area (m\textsuperscript{2}) to (mm/d). Approximately 33~years of data is available, of which we use the first 15 for training the LSTMs. Of the remaining years the first 25~\% is used as validation data by which we select the final model. The remaining data points (approx. 13 years) are used for the final evaluation and for all experiments in this study. The meteorological input features, as well as the target variable, the discharge, are normalized by the mean and standard deviation of the training period. One LSTM is trained for each basin separately and the trained model is evaluated using the Nash-Sutcliffe-Efficiency \cite{Nash1970}, an established measure used to evaluate hydrological time series, given by the following equation: \begin{equation}\label{eq_5} \text{NSE} = 1 - \frac{\sum_{t=1}^{T}(Q_m^t - Q_o^t)^2}{\sum_{t=1}^{T}(Q_o^t - \bar{Q}_{o})^2}, \end{equation} where $T$ is the total number of time steps, $Q_m^t$ is the simulated discharge at time $t$ $(1 \leq t \leq T)$, $ Q_o^t$ is the observed discharge at time $t$ and $\bar{Q}_{o}$ is the mean observed discharge. The range of the NSE is $(-\infty, 1]$, where a value of 1 means a perfect simulation, an NSE of 0 means the simulation is as good as the mean of the observation, and everything below zero means the simulation is worse than using the observed mean as a prediction. In the test period the LSTM achieves an NSE of above 0.7 (see Table \ref{tab_basin}), which can be considered a reasonably good result \cite{Moriasi2015}. \begin{figure} \includegraphics[width=\textwidth]{figures/discharge_example.pdf} \caption{Example of predicted (dashed line) and observed discharge (solid line) of two years of the test period in the snow influenced basin. Corresponding daily precipitation sums are plotted upside down, where snow (temperature below 0$^{\circ}$C) is plotted darker. \label{fig_hydrograph}} \end{figure} Figure \ref{fig_hydrograph} shows the observed and simulated discharge of two years of the test period in the snow influenced basin A, as well as the input variable precipitation. We can see that the discharge has its peak in spring/early summer and that the model underestimates the discharge in the year 2012, while in the second year it fits the observed discharge well. The time lag between precipitation and discharge can be explained by snow accumulation in the winter months and the subsequent melt of this snow layer in spring. \subsection{Integrated gradients} Different methods have been presented recently to analyze the attribution of input variables on the network output (or any in-between neuron) (e.g.
\cite{Arras2017,Bach2015,Kindermans2017,Montavon2017,Sundararajan2017}). In this study we focus on integrated gradients by Sundararajan et al. \cite{Sundararajan2017}. Here, the attribution of each input to e.g. the output neuron is calculated by looking at the change of this neuron when the input shifts from a baseline input to the target input of interest. Formally, let $\boldsymbol{x}$ be the input of interest (in our case a sequence of 365 time steps with 5 meteorological observations each), ${\boldsymbol{x}}'$ the baseline input and $F(\cdot)$ the neuron of interest. Then the integrated gradients, for the \textit{i}-th input variable $x_i$, can be approximated by: \begin{equation}\label{eq_6} \text{IntegratedGrads}_i^{\mathrm{approx}}(\boldsymbol{x}) := \frac{x_i - x'_i}{m} \sum_{k=1}^{m}\frac{\partial F\left(\tilde{\boldsymbol{x}}\right)}{\partial \tilde{x}_i}\bigg|_{\tilde{\boldsymbol{x}} = \boldsymbol{x}' + \frac{k}{m}(\boldsymbol{x} - \boldsymbol{x}')}, \end{equation} where $m$ is the number of steps used to approximate the integral (here $m=1000$). As baseline ${\boldsymbol{x}}'$, we use an input sequence of zeros. \subsection{Experiments} \subsubsection{Question 1: How many days of the past influence the network output?\newline} The discharge of a river in a seasonally influenced region varies strongly throughout the year. For snow influenced basins, for example, the discharge usually peaks in spring or early summer, when not only precipitation and groundwater but also snow melt contributes to the discharge generation. Therefore, at least from a hydrological point of view, the precipitation of the entire winter might be influential for the correct prediction of the discharge. In contrast, in drier periods (here at the end of summer) the discharge likely depends on far fewer time steps of the meteorological past. Since we provide a constant number of time steps (365 days) of meteorological data as input, it is interesting to see how many of the previous days are actually used by the LSTM for the final prediction. To answer this question, we calculate the integrated gradients for one sample of the test period w.r.t. the input variables and sum the integrated gradients across the features for each time step. We then calculate the difference from time step to time step and determine the first time step $n$ $(1 \leq n \leq T)$ at which the difference surpasses a threshold of $2\times10^{-3}$, with $T$ being the total length of the sequence. We have chosen the threshold value empirically so that noise in the integrated gradient signal is ignored. For each sample, the number of Time Steps Of Influence (TSOI) on the final prediction can then be calculated by: \begin{equation}\label{eq_7} \text{TSOI} = T - n \end{equation} This is repeated for each day of each year in the test period. \subsubsection{Question 2: Do memory cells correlate with hydrological states?\newline} The discharge of a river basin is frequently modelled by decomposing the (hypothetical) components of the system into a set of interacting reservoirs or storages (see Fig. \ref{fig_hyd_mod}). Take snow as an example, which is precipitation that falls if temperatures are below 0$^{\circ}$C. It can be represented in a storage $\boldsymbol{S}$ (see Eq.~\ref{eq_1}), which generally accumulates during the winter period and depletes from spring to summer when the temperatures rise above the melting point. Similarly, other components of the system can be modelled as reservoirs of lower or higher complexity.
The soil layer, for example, can also be represented by a bucket, which is filled, up to a certain point, by incoming water (e.g. rainfall) and depleted by evapotranspiration, horizontal subsurface flow and water movement into deeper layers, e.g. the groundwater body. Theoretically, memory cells of LSTMs could learn to mimic these storage processes. This is a crucial property, at least from a hydrological point of view, for being able to correctly predict the river discharge. Therefore, the aim of this experiment is to see if certain memory cells $\boldsymbol{c}_t$ (Eq.~\ref{eq_3}) correlate with these hydrological states $\boldsymbol{S}_t$ (Eq.~\ref{eq_1}). Because the CAMELS data does not include observations for these states, we take the system states of the included SAC-SMA + Snow-17 model as a proxy. This is far from optimal, but since this is a well-established and studied hydrological model, we can assume that at least the trends and tendencies of these simulations are correct. Furthermore, in this experiment we only want to test if memory cells correlate with these system states, not if they match these states quantitatively. Of the calibrated SAC-SMA + Snow-17 model, we use the following states as a reference in this experiment: \begin{itemize} \item \textbf{SWE} (snow water equivalent): This is the amount of water stored in the snow layer. This water would become available if the entire snow in the system melted. \item \textbf{UZS} (upper zone storage): This state is calculated as the sum of the UZTWC (upper zone tension water storage content) and the UZFWC (upper zone free water storage content) of the SAC-SMA + Snow-17 model. This storage represents the upper layer soil moisture and controls the fast response of the soil, e.g. direct surface runoff and interflow. \item \textbf{LZS} (lower zone storage): This state is calculated as the sum of the LZTWC (lower zone tension water content), LZFSC (lower zone free supplemental water storage content) and LZFPC (lower zone free primary water storage content). This storage represents the groundwater storage and is relevant for the baseflow\footnote{Following \cite{WMO2012} the baseflow is defined as: ``Discharge which enters a stream channel mainly from groundwater, but also from lakes and glaciers, during long periods when no precipitation or snowmelt occurs.''}. \end{itemize} For each sample in the test period, we calculate the correlation of the cell states with the corresponding time series of these three states. \subsubsection{Question 3: Which inputs influence a specific memory cell?\newline} Suppose that we find memory cells that correlate with the time series of hydrological system states; then a natural question would be whether the inputs influencing these memory cells agree with our understanding of the hydrological system. For example, a storage that represents the snow layer in a catchment should be influenced by precipitation and solar radiation in opposing ways. Solid precipitation, or snow, would increase the amount of snow available in the system during winter. At the same time, solar radiation, providing energy for sublimation, would effectively reduce the amount of snow stored in the snow layer. Therefore, in this experiment we look in more detail at specific cells that emerged from the previous experiment and analyse the variables influencing these cells. We do this by calculating the integrated gradients from a single memory cell at the last time step of the input sequence w.r.t. the input variables.
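The quantities used in these experiments can be sketched compactly. Below is a minimal NumPy/PyTorch version of the integrated gradient approximation of Eq. \ref{eq_6} and of the TSOI of Eq. \ref{eq_7}; the function names are ours, and \texttt{f} stands for any scalar-valued function of the input, e.g. the network output (Question 1) or a single memory cell at the last time step (Question 3):
\begin{verbatim}
import numpy as np
import torch

def integrated_gradients(f, x, m=1000):
    """Eq. (6) with a zero baseline: average the gradients of f
    along the straight path from the baseline to x and scale
    the result by (x - baseline)."""
    baseline = torch.zeros_like(x)
    grads = torch.zeros_like(x)
    for k in range(1, m + 1):
        x_k = (baseline + (k / m) * (x - baseline)).requires_grad_(True)
        f(x_k).backward()
        grads += x_k.grad
    return ((x - baseline) * grads / m).detach().numpy()

def tsoi(attributions, threshold=2e-3):
    """Eq. (7): attributions has shape (T, 5); sum over features,
    difference in time, find the first step n above the threshold."""
    per_step = attributions.sum(axis=1)
    exceeds = np.flatnonzero(np.abs(np.diff(per_step)) > threshold)
    n = exceeds[0] + 1 if exceeds.size > 0 else per_step.size
    return per_step.size - n
\end{verbatim}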
\section{Results and Discussion} \subsection{Time steps influencing the network output} Figure \ref{fig_tsoi} shows how many time steps of past meteorological inputs have an influence on the LSTM output at the time step of prediction (TSOI). The TSOI does not differentiate between single inputs; rather, it is the integrated signal of all inputs. Instead of using specific dates, we here show the temporal dimension in the unit day of year (DOY). Because all years of the test period are integrated in the plot, we show the 25~\%, 50~\% and 75~\% quantiles. For the sake of interpretation and the seasonal context, Fig. \ref{fig_tsoi} also includes the temporal dynamics of the median precipitation, temperature and discharge. \begin{figure} \centering \includegraphics[width=12cm]{figures/tsoi.pdf} \caption{Time steps of influence (TSOI) on the network output over the unit day of year (DOY) for the snow influenced basin A (left column) and basin B without snow influence (right column). The corresponding median precipitation, discharge and minimum temperature are shown for reference. For the snow-influenced basin A, we can see for example that the TSOI increases during the winter period and is largest during the snow melting period ($\sim$DOY 100-160), which matches our understanding of the hydrological processes. } \label{fig_tsoi} \end{figure} The left column of Fig. \ref{fig_tsoi} shows the results for the snow influenced basin A. Here, three different periods can be distinguished in the TSOI time series: (1) Between DOY 200 and 300 the TSOI shows very low values of less than 10-20~days. This period corresponds to the summer season, characterised by high temperatures and low flows, with fairly little precipitation. In this period the time span of influence of the inputs on the output is short. From a hydrological perspective this makes sense, since the discharge in this period is driven by short-term rainfall events, which lead to a very limited response in the discharge. The short time span of influence can be explained by higher evapotranspiration rates in this season and lower precipitation amounts. Higher evapotranspiration rates lead to the loss of precipitation to the atmosphere, which is then missing in the discharge. The behaviour of the hydrograph is fairly easy to predict and does not require much information about the past. (2) In the winter period, starting at DOY 300, the TSOI increases over time, reaching a plateau of 140-150 days around DOY 100. In this period the daily minimum temperature is below 0$^{\circ}$C, leading to solid precipitation (snow) and therefore to water being stored as snow in the catchment without leading to runoff. This is underlined by the low discharge values despite the high precipitation input. Thus, the LSTM has to understand that this input does not lead to an immediate output, and therefore the TSOI has to increase. The plateau is reached as soon as the minimum temperature values are higher than the freezing point. From a hydrological perspective, it is interesting to observe that the TSOI at the end of the winter season has a value which corresponds to the beginning of the winter season ($\sim$DOY 300), when the snow accumulation begins. It should be noted that the transition between the winter period and the following spring season is not sharp, at least when taking the hydrograph as a reference: although the TSOI is still increasing, the discharge is also increasing.
From a hydrological point of view, this can be explained by a mixed signal in the discharge: apart from melting snow (the daily maximum temperature is larger than 0$^{\circ}$C), we still have negative minimum temperatures, which lead to snowfall. (3) In spring, during the melting season, the TSOI stays constant (DOY 100-160), followed by a sharp decrease until the summer low flows. During the main melting period, the TSOI of approximately 140-150 days highlights that the LSTM uses the entire information of the winter period to predict the discharge. The precipitation in this period now falls as rain, directly leading to runoff, without increasing the TSOI. At the same time, all the inputs from the winter still influence the river discharge, explaining the stable plateau. The sharp decrease of the TSOI around DOY 160 represents a transition period in which the snow of the winter continuously loses its influence until all snow has melted. Although it has the same precipitation seasonality, basin B (Fig. \ref{fig_tsoi}, right column) has different characteristics compared to basin A, since it is not influenced by snow. Here, only two different periods can be distinguished, although the transition periods are more pronounced: (1) Between DOY 180 and 280, the warm and dry summer season, the catchment is characterised by very low flows. In this period the TSOI is also constantly low, with values of around 10-15 days. The discharge can be predicted with a very short input time series, since rainfall as an input is missing and the discharge hardly depends on any inputs. (2) Following summer, the wet period between DOY 280 and 180 is characterised by a steady increase in the TSOI. The temporal influence of rainfall on runoff becomes longer the longer the season lasts. The general level of runoff is now significantly higher compared to the summer. It does not solely depend on single rainfall events, but is driven by the integrated signal of inputs from the past, explaining the increasing TSOI. The TSOI reaches a maximum median value of around 120 days. This peak is followed by a decrease towards the end of the wet period, which is however not as rapid as in basin A. This distinct transition period in the TSOI between the wet and dry seasons ($\sim$DOY 100-180) corresponds very well to the observed falling limb in the hydrograph. As the runoff declines, the influence of past meteorological inputs also declines. Compared to basin A, a higher variability in the TSOI is evident, which can be explained by a higher variability of the precipitation inputs in the individual years. In basin A, a high interannual variability in the rainfall is also observable. However, the temperatures below freezing level lead to the precipitation falling as snow and therefore act as a filter. This leads to a lower variability in the discharge and, in consequence, in the TSOI. Overall, the TSOI results of the two contrasting basins match well with our hydrological understanding of how preceding days influence the runoff signal on a specific day. It is interesting to see that the LSTM shows the capability to learn these differing, basin-specific properties of long-term dependencies. \subsection{Correlation of memory cells with hydrological states} Figure \ref{fig_corr} shows the average correlation of every memory cell with the hydrological states considered in this experiment. The correlation is averaged over all samples in the test period and only correlations with $|\rho| > 0.5$ are shown.
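How such per-sample correlations could be computed is sketched below, assuming the memory cell trajectories have been extracted from the trained LSTM and the proxy state trajectories taken from SAC-SMA + Snow-17 (NumPy assumed, names ours):
\begin{verbatim}
import numpy as np

def cell_state_correlations(cell_states, proxy_states):
    """cell_states: (T, 10) array of c_t trajectories;
    proxy_states: (T, 3) array with SWE, UZS and LZS.
    Returns the (10, 3) matrix of Pearson correlations."""
    n_cells = cell_states.shape[1]
    n_states = proxy_states.shape[1]
    corr = np.empty((n_cells, n_states))
    for i in range(n_cells):
        for j in range(n_states):
            corr[i, j] = np.corrcoef(cell_states[:, i],
                                     proxy_states[:, j])[0, 1]
    return corr  # average over samples, mask |rho| <= 0.5 before plotting
\end{verbatim}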
We can see that in both basins some of the memory cells have a particularly high correlation with the provided hydrological states. For both basins, several cells exhibit a high correlation with both the upper (UZS) and the lower (LZS) soil states. Although the majority of the cells show a positive correlation, negative correlations are also visible, which are however of lower absolute magnitude. \begin{figure} \centering \includegraphics[width=\textwidth]{figures/correlations.pdf} \caption{Average correlations between memory cells and hydrological states for basin A (upper plot) and basin B (lower plot). Only correlations with $|\rho| > 0.5$ are shown. Ellipses are scaled by the absolute correlation and rotated according to the sign of the correlation (left-inclined for negative and right-inclined for positive correlations).} \label{fig_corr} \end{figure} The correlations between the LSTM cells and the baseflow-influencing state LZS are significantly higher for the drier basin B. However, the baseflow index, a measure defining the importance of the contribution of the baseflow to the total runoff, is lower for this basin. In basins A and B, the ratios of mean daily baseflow to mean daily discharge are about 66~\% and 44~\%, respectively. Currently, we cannot explain this discrepancy. In the snow influenced basin, the trained LSTM also has some cells with a high correlation with the snow water equivalent (SWE). The occurrence of multiple cells with high correlation to different system states can be seen as an indicator that the LSTM is not yet defined in a parsimonious way. Hydrologists can use this information to restrict the neural network even further. In general, the correlation analysis is difficult to interpret in detail. However, high correlations frequently exist, indicating a strong relationship between LSTM memory cells and the system states of a well-established hydrological model. \subsection{Inspection of memory cells} In the first experiment, we used the integrated gradient method to calculate the attribution of the input variables on the output of the model. In this experiment, we apply it to analyse interactions between arbitrary neurons within the neural network (here, a memory cell at the last time step). The previous experiment showed that memory cells with a high correlation to some of the hydrological system states exist. The aim of this experiment is therefore to analyse the influences and functionality of a given cell. Here, we can explore (i) which (meteorological) inputs are important and (ii) at what time in the past this influence was high. We chose a ``\textit{snow-cell}'' from the previous experiment to demonstrate this analysis, since the accumulation and depletion of snow is a particularly illustrative example. To be more precise, we depict a single sample of the test period for the cell with the highest correlation to the SWE, which is memory cell 4 of basin A (see Fig. \ref{fig_corr}). Figure \ref{fig_snowcell} shows the integrated gradients of the meteorological inputs in the top row, the evolution of the memory cell value in the second row and the corresponding minimum and maximum temperature in the third row. \begin{figure} \centering \includegraphics[width=10cm]{figures/snowcell.pdf} \caption{Integrated gradient analysis of the snow-cell (cell 4) of the LSTM trained for basin A. The upper plot shows the integrated gradient signal on each input variable at each time step.
The plot in the center shows the memory cell state over time for reference, and the bottom plot shows the minimum and maximum daily temperature.} \label{fig_snowcell} \end{figure} One can see that the snow-cell is mainly influenced by the meteorological inputs of precipitation, minimum temperature, maximum temperature and solar radiation. Precipitation shows the largest magnitude of positive influence. Solar radiation, in contrast, has a negative influence, possibly reproducing the sublimation from the snow layer, which leads to a reduction of the snow state. All influencing factors only play a role at temperatures around or below the freezing level. This matches the expected behaviour of a snow storage from a hydrological point of view: lower temperatures and concurrent precipitation are associated with snow accumulation. Consequently, this leads to an increase in the magnitude of the memory cell value (especially for the first temperatures below the freezing point). This can be observed, e.g., in October 1999, where the temperature values decrease, the influences of the meteorological parameters appear and the snow-cell begins to accumulate. In contrast, as the temperature rises, the value of the cell decreases again, especially when the daily minimum temperature also rises above 0$^{\circ}$C. This suggests that the LSTM realistically represents short- as well as long-term dynamics in the snow-cell storage and their connection to the meteorological inputs. \section{Conclusion} LSTM networks are a versatile tool for time series predictions, with many potential applications in hydrology and environmental sciences in general. However, they currently do not enjoy widespread application. We argue that one reason is the difficulty of interpreting the LSTMs. The methods presented in this book provide solutions regarding interpretability and allow a deeper analysis of these models. In this chapter, we demonstrate this for the task of rainfall-runoff modelling (where the aim is to predict the river discharge from meteorological observations). In particular, we were able to show that the processes learned by the LSTM match our comprehension of a real-world environmental system. For this study, we focused on a qualitative analysis of the correspondence between the hydrological system and the learned behaviour of the LSTM. In a first experiment, we looked at the number of time steps of influence on the network output (TSOI) and how this number varies throughout the year. We saw that the TSOI pattern matches our hydrological understanding of the yearly pattern. In the next experiment, we looked at the correlation of the memory cells of the network with selected states of the hydrological system (such as snow or soil moisture). We found some cells that exhibit a relatively high correlation with the chosen states, which strengthens the hypothesis that the LSTM obtained some general understanding of the runoff-generation processes. In the last experiment, we inspected a single memory cell that exhibited a high correlation with the snow state. We analyzed the influencing inputs over time through the integrated gradient method and could see that the behavioural patterns manifested in the cell closely resemble the ones suggested by hydrological theory. We view this as a further underpinning of the observation that the internally modelled processes of the network follow a physically plausible pattern.
We hypothesize that this relation can be seen as a legitimization of the use of LSTMs within environmental science applications, and thus believe that the presented methods will pave the way for their future use in environmental modelling. The correspondence of the memory cells and the physical states can be especially useful in novel situations, which often arise in this context. Environmental scientists and practitioners can exploit it (together with the proposed techniques) to ``peek into the LSTM'' and reason about potential behaviours. Our demonstration was certainly not exhaustive and should rather be seen as an indicative application study. The most important message is that the combination of domain knowledge (in this case hydrology) and the insights provided by the proposed interpretation techniques provides the foundation for designing environmental forecasting systems with neural networks. Consequently, we expect that the combination of powerful data-driven models (such as LSTMs) with the possibility of interpretation by experts will lead to new insights in the field of application. \bibliographystyle{splncs04}
\section{Introduction} Black hole thermodynamics is a subject that helps a theoretical physicist to unveil the deep connection between gravitation, quantum theory and statistical physics. Even after 50 years of Bekenstein and Hawking's work, the problem of black hole entropy and temperature is not well understood \cite{Hawking:1974sw, Bekenstein1972, Bekenstein1973, Bardeen1973}. During the past three decades, in the quest to find a quantum gravity theory, the focus has shifted towards black holes in asymptotically anti-de Sitter (AdS) spacetimes. This is mainly due to the pioneering work of Hawking and Page, which explored the phase transition between radiation and a large black hole \cite{Hawking:1982dh}. Black holes in an AdS cavity provided the necessary thermal stability to this thermodynamic system. But the Smarr relation for AdS black holes was found to be inconsistent with the first law \cite{Kastor:2009wy}. To rectify this problem, the cosmological constant $\Lambda$ was considered as a thermodynamical variable. In the first law, it was interpreted as the thermodynamic pressure, and its conjugate quantity was found to be the geometrical volume. This extended the first law of black hole thermodynamics with the necessary $VdP$ term \cite{Kastor:2009wy, Dolan:2011xt}. In this extended phase space, the thermodynamics of a charged AdS black hole was found to be analogous to that of a van der Waals fluid system \cite{Kubiznak2012, Gunasekaran2012, Kubiznak:2016qmn}. This has led to a new arena in black hole physics called \emph{black hole chemistry}. This macroscopic picture of the black hole is used to propose a phenomenological model for the black hole microstructure \cite{Ruppeinerb2008}. Even though microscopic information is not a requirement for thermodynamics, it may be useful for quantum gravity studies. A prominent model for drawing microscopic information from thermodynamics is Ruppeiner's thermodynamic geometry \cite{Ruppeiner95}. It is constructed on the equilibrium state space in the context of thermodynamic fluctuation theory, but can be useful in studying black holes too \cite{Ruppeinerb2008}. Through Gaussian fluctuation moments, a Riemannian geometry is constructed in the thermodynamic equilibrium space, whose metric tells us about the fluctuations between the states. This method has been applied to van der Waals fluids and to a variety of other statistical systems \cite{Ruppeiner95, Janyszek_1990, Oshima_1999x, Mirza2008, PhysRevE.88.032123}. These studies show that the thermodynamic geometry encodes information about the microscopic interactions. The thermodynamic scalar curvature $R$ is proportional to the correlation volume of the underlying system, and the sign of $R$ indicates the type of interaction in the microstructure, positive for repulsive and negative for attractive interactions. In recent times, there has been a lot of interest in thermodynamic geometry as a tool to investigate critical phenomena and the microstructure of various black holes in AdS spacetime \cite{Wei2015, Sahay:2016kex, Guo2019, Miao2017, Zangeneh2017, Wei:2019ctz, Kumara:2019xgt, Kumara:2020mvo, Xu:2019nnp, Chabab2018, Deng2017, Miao2019a, Chen2019, Du2019, Dehyadegari2017, Ghosh:2019pwy, Ghosh:2020kba}. Recently, a novel approach to Ruppeiner geometry was developed to recover the information missed due to the singularity in the scalar curvature \cite{Wei2019a}, which is mainly due to the vanishing of the heat capacity at constant volume. The new normalised scalar curvature takes care of this problem.
A metric can be defined by Taylor expanding the Boltzmann entropy around the equilibrium value. The thermodynamic coordinates are chosen to be the temperature and volume, and the Helmholtz free energy is chosen as the thermodynamic potential. Applying this method to the van der Waals (vdW) fluid, it was found that the dominant interaction in the microstructure is attractive throughout the parameter space. Utilizing the analogy with the vdW fluid, the thermodynamic geometry of a charged AdS black hole was analysed. In contrast to the vdW fluid, the interaction is not attractive over the entire parameter space. Even though the interaction is attractive for the large black hole (LBH) everywhere, and for the small black hole (SBH) in most of the parameter space, there exists a weak repulsive interaction in the SBH phase at very low temperatures \cite{Wei2019a, Wei2019b}. Interestingly, this behaviour is not universal for all asymptotically AdS black holes. In the case of the five-dimensional neutral Gauss-Bonnet black hole, an interaction similar to that of the vdW fluid is observed, with a dominant attractive interaction throughout the SBH and LBH phases \cite{Wei:2019ctz}. Soon after, the work was extended to four-dimensional Gauss-Bonnet black holes \cite{Wei2020}. Subsequently, the microscopic interactions for four-dimensional AdS topological black holes in dRGT massive gravity were studied \cite{Yerra:2020oph, Wu:2020fij}. The microstructure was found to be distinct, with the presence of both repulsive and attractive interactions in both the SBH and LBH phases. In our recent papers, we have investigated the microstructure of regular Hayward and Born-Infeld AdS black holes \cite{Kumara:2020ucr, NaveenaKumara:2020biu}. The microscopic interactions observed in the regular Hayward case are similar to those of charged AdS black holes, whereas Born-Infeld AdS black holes show a reentrant phase transition, which has a distinct microstructure. Apart from these works, the study of the microstructure using this novel method is limited to a few black holes. Motivated by the recent progress, here we explore the phase structure and microstructure of a regular Bardeen AdS black hole. Regular black holes are the ones which do not possess a singularity at the centre. Even though obtaining a singularity-free solution lies in the domain of quantum gravity theory, a phenomenological model can be constructed in classical gravity. The first such regular solution was derived by Bardeen \cite{Bardeen1973}. Later, it was found that regular black holes can be exact solutions of gravity coupled to a non-linear electromagnetic source \cite{AyonBeato:1998ub, AyonBeato:2000zs, Hayward:2005gi}. We have studied the phase transitions and thermodynamic geometry of regular black holes in our recent papers \cite{Rizwan2019, Rajaniheat, Naveen2019photon}. It is noticed that the presence of a magnetic monopole charge imparts a phase structure to the regular black holes similar to that of an electric charge. We therefore find it interesting to probe the microstructure of magnetically charged Bardeen black holes in asymptotically AdS spacetimes. The paper is organised as follows. In section \ref{metric}, we review the action and derivation of the regular Bardeen black hole in AdS spacetime. In section \ref{Phase structure}, we mainly focus on the thermodynamics and phase structure of the black hole. Then the Ruppeiner geometry and analysis of critical features are discussed in section \ref{TG}. The final section \ref{summary} is dedicated to the summary and conclusions.
\section{Regular Bardeen AdS Black hole} \label{metric} The Bardeen black hole emerges as a solution of Einstein's gravity coupled to a non-linear electrodynamic source with a negative cosmological constant $\Lambda$. We consider the action, \begin{equation} \mathcal{S}=\frac{1}{16\pi}\int d^4x \sqrt{-\tilde{g}}\left(R-2\Lambda-\mathcal{L}(\mathcal{F})\right),\label{Action} \end{equation} where $R$ denotes the Ricci scalar, $\tilde{g}$ the determinant of the metric tensor $\tilde{g}_{\mu\nu}$, and $\Lambda$ the cosmological constant. $\mathcal{L}(\mathcal{F})$ is the Lagrangian density of the non-linear electrodynamics, which is a function of the field strength $\mathcal{F}=F_{\mu\nu}F^{\mu\nu}$ with $F_{\mu\nu}=\partial_\mu A_\nu- \partial_\nu A_\mu$. Variation of the action (\ref{Action}) leads to the Einstein and Maxwell equations of motion, given by \begin{equation} G_{\mu\nu}+\Lambda g_{\mu\nu}= T_{\mu\nu},\quad\quad \nabla_\mu \left(\frac{\partial\mathcal{L}(\mathcal{F})}{\partial\mathcal{F}}F^{\mu\nu}\right)=0 \quad\text{and}\quad \nabla_\mu\left(*F^{\nu\mu}\right)=0,\label{eqns} \end{equation} where $G_{\mu\nu}$ is the Einstein tensor and $T_{\mu\nu} = 2\left(\frac{\partial\mathcal{L}(\mathcal{F})}{\partial\mathcal{F}}F_{\mu\lambda}F^{\lambda}_\nu-\frac{1}{4}g_{\mu\nu}\mathcal{L}(\mathcal{F})\right)$ is the energy-momentum tensor. The Lagrangian density in the case of Bardeen black holes is, \begin{equation} \mathcal{L}(\mathcal{F})= \frac{12}{\alpha}\left(\frac{\sqrt{\alpha\mathcal{F}}}{1+\sqrt{\alpha \mathcal{F}}}\right)^{5/2}, \end{equation} where $\alpha$ is a positive quantity with dimension $[\text{Length}]^2$. We take the following ansatz for the Maxwell field tensor, \begin{equation} F_{\mu\nu}=2 \delta^\theta_{[\mu}\delta^\phi_{\nu]}Q(r)\sin \theta. \end{equation} From Maxwell's equations (\ref{eqns}), $dF=\frac{dQ(r)}{dr}dr\wedge d\theta\wedge d\phi=0$, which requires $Q(r)$ to be a constant, $Q_m$. For a spherically symmetric solution, the non-vanishing components of the Maxwell field tensor are $F_{tr}$ and $F_{\theta\phi}$. Since we are interested in a magnetically charged regular solution, we choose the gauge potential and Maxwell field tensor to be, \begin{equation} A_\mu= Q_m \cos\theta \delta^\phi_\mu,\quad F_{\theta\phi}=-F_{\phi\theta} = Q_m \sin\theta, \end{equation} where $Q_m$ is the magnetic monopole charge. The scalar $\mathcal{F}$ is obtained from $F_{\theta\phi}$ as, \begin{equation} \mathcal{F}=\frac{2Q_m^2}{r^4}. \end{equation} We can rewrite the Lagrangian density $\mathcal{L}(\mathcal{F})$ as a function of the radial distance, \begin{equation} \mathcal{L}(r)=\frac{12}{\alpha}\left(\frac{2\alpha Q_m^2}{r^2+2\alpha Q_m^2}\right)^{5/2}. \end{equation} A static spherically symmetric solution of the Einstein equations can be put in the form, \begin{equation} ds^2=-f(r)dt^2+\frac{dr^2}{f(r)}+r^2\left(d\theta^2+\sin^2 \theta d\phi^2\right), \end{equation} with the metric function $f(r)=1-\frac{2 m(r)}{r}$. Making use of this line element, the Einstein equations are solved to fix the functional form of $m(r)$. The $G_{tt}$ and $G_{rr}$ components of the Einstein equations read, \begin{align} \frac{1}{r^2}\partial_r m(r)-\Lambda &=\frac{1}{4}\mathcal{L}(r),\\ \frac{1}{r}\partial_r^2 m(r)-\Lambda&=\left(\frac{1}{4}\mathcal{L}(r)- \frac{\partial\mathcal{L}}{\partial\mathcal{F}} F_{\theta\phi}F^{\theta\phi}\right).
\end{align} Integrating the above differential equations, we obtain the mass function $m(r)$ for the regular Bardeen AdS black hole as, \begin{equation} m(r)=\frac{\Lambda r^3}{6}+ \frac{Mr^3}{\left(g^2+r^2\right)^{3/2}}, \end{equation} where $M$ is the mass of the black hole and $g$ is the charge parameter related to the total charge $Q_m$, \begin{equation} Q_m=\frac{g^2}{\sqrt{2\alpha}}. \end{equation} Thus, the line element of the Bardeen-AdS black hole is written with the metric function, \begin{equation} f(r)=1-\frac{2 M r^2}{\left(g^2+r^2\right)^{3/2}}-\frac{\Lambda r^2}{3}. \end{equation} \section{Thermodynamics and phase structure} \label{Phase structure} In this section, we review the thermodynamics of the black hole in an extended phase space, where the cosmological constant $\Lambda$ is given the status of a dynamical variable, the pressure $P$. This can be justified from the Smarr relation and the first law of black hole thermodynamics in asymptotically AdS spacetimes. The thermodynamic pressure $P$ is related to $\Lambda$ as, \begin{equation} P=-\frac{\Lambda}{8\pi}.\label{pressure} \end{equation} Firstly, we write the first law of black hole thermodynamics and the Smarr relation for the magnetically charged Bardeen AdS black hole \cite{Fan:2016hvf, Fan:2016rih}, \begin{align} dM=&TdS+\Psi dQ_m+VdP+\Pi d\alpha,\\ M=&2(TS-VP+\Pi \alpha)+\Psi Q_m. \end{align} These can be obtained either from the Komar integral or from the scaling argument presented in the paper by Kastor \emph{et al.} \cite{Kastor:2009wy}. Notice that there exist additional terms: $\alpha$ is the parameter related to the non-linear electromagnetic field and $\Pi$ is its conjugate potential. We can write the black hole mass $M$ using the condition $f(r_h)=0$ at the event horizon $r=r_h$, \begin{equation} M=\frac{\left(g^2+r_h^2\right)^{3/2} \left(8 \pi P r_h^2+3\right)}{6 r_h^2}.\label{Mass} \end{equation} The Hawking temperature of the black hole is obtained as, \begin{equation} T=\left.\frac{f'(r)}{4\pi}\right|_{r=r_h}=-\frac{g^2}{2 \pi r_h \left(g^2 + r_h^2\right)} + \frac{r_h}{4 \pi \left(g^2 + r_h^2\right)} + \frac{2 P r_h^3}{g^2 + r_h^2}, \label{temperature} \end{equation} where we have used equations (\ref{Mass}) and (\ref{pressure}) for the mass $M$ and pressure $P$. The thermodynamic quantities required for our analysis, the volume $V$ and entropy $S$, can be obtained from the first law, \begin{align} V=&\left(\frac{\partial M}{\partial P}\right)_{S,Q_m}=\frac{4}{3} \pi \left(g^2+r_h^2\right)^{3/2},\\ S=&\int \frac{dM}{T}= \pi r_h^2 \left[\left(1-\frac{2g^2}{r_h^2}\right) \sqrt{1+\frac{g^2}{r_h^2}}+\frac{3 g^2}{r_h^2} \log \left(\sqrt{g^2+r_h^2}+r_h\right)\right].\label{Volume} \end{align} The thermodynamic stability of the black hole is specified by the heat capacities at constant pressure ($C_P$) and constant volume ($C_V$), which are determined as, \begin{align} C_P=&T\left(\frac{\partial S}{\partial T}\right)_P=\frac{2 S \left(\pi \beta^2+S\right) \left(-2 \pi \beta^2+8 P S^2+S\right)}{2 \pi ^2 \beta^4+\pi \beta^2 S (24 P S+7)+S^2 (8 P S-1)},\\ C_V=&T\left( \frac{\partial S}{\partial T}\right)_V=0, \label{cv} \end{align} where $\beta$ denotes the monopole charge parameter. One can obtain the equation of state, $P=P(V,T)$, utilising the expressions for the Hawking temperature (\ref{temperature}) and the thermodynamic volume (\ref{Volume}), \begin{equation} P=\frac{\left(\frac{6 V}{\pi }\right)^{2/3} \left(-1+2 \pi T \sqrt{\left(\frac{6 V}{\pi }\right)^{2/3}-4 g^2}\right)+12 g^2}{2 \pi \left(\left(\frac{6 V}{\pi }\right)^{2/3}-4 g^2\right)^2}.
\end{equation} We study the phase structure of the black hole in the canonical ensemble with a fixed monopole charge $g$. The $P-V$ isotherms show a van der Waals like first-order phase transition between two phases, namely, the \emph{small black hole} (SBH) and the \emph{large black hole} (LBH) phases. The critical point is obtained from the inflection point of the $P-V$ isotherm, \begin{equation} \left.\frac{\partial P}{\partial V}\right|_{r=r_h} = \left.\frac{\partial^2 P}{\partial V^2}\right|_{r=r_h}=0. \end{equation} The critical quantities, the temperature $T_c$, pressure $P_c$ and volume $V_c$, thus obtained are given below, \begin{align} T_c = &\frac{\left(17-\sqrt{273}\right) \sqrt{\frac{1}{2} \left(\sqrt{273}+15\right)}}{24 \pi g},\\ P_c = &\frac{\sqrt{273}+27}{12 \left(\sqrt{273}+15\right)^2 \pi g^2},\\ V_c = &\frac{4}{3} \pi g^3 \left(1+\frac{1}{2} \left(\sqrt{273}+15\right)\right)^{3/2}. \end{align} Using the critical quantities, we define the following reduced coordinates, \begin{equation} T_r=\frac{T}{T_c},\quad P_r=\frac{P}{P_c},\quad V_r=\frac{V}{V_c}. \end{equation} We can rewrite the equation of state in the reduced parameter space as, \begin{equation} P_r=\frac{\left(\sqrt{273}+15\right)^2 {V_r}^{2/3} \left(3 \left(\sqrt{273}+17\right)-4 T_r \sqrt{\sqrt{273}+15} \sqrt{-2+\left(\sqrt{273}+17\right) {V_r}^{2/3}}-\frac{18}{{V_r}^{2/3}}\right)}{\left(\sqrt{273}+27\right) \left(2-\left(\sqrt{273}+17\right) {V_r}^{2/3}\right)^2}. \end{equation} The phase transition in the Bardeen AdS black hole was extensively studied by A.~G. Tzikas \cite{Tzikas:2018cvs}. However, a couple of things were left out in that analysis due to the difficulty of inverting the equation of state and solving for $r_h$ as a function of pressure and temperature, $r_h=r_h(P,T)$. We address this problem numerically and obtain the coexistence equation from the swallow tail behaviour of the Gibbs free energy. We begin our analysis with the Gibbs free energy, which is defined as the Legendre transform of the enthalpy; recall that the addition of the $VdP$ term leads to the identification of the mass as the enthalpy $H$. The Gibbs free energy reads, \begin{equation} G(P,r_h,g)= H-TS, \label{Gibbseqn} \end{equation} and its change is, \begin{equation} dG =-SdT+VdP+\Phi dg. \end{equation} \begin{figure}[H] \centering \subfigure[ref1][\, $ G_r \quad \text{vs} \quad T_r$]{\includegraphics[scale=0.8]{BGibbsplot} \label{BGibbs}} \qquad \subfigure[ref2][\, $P_r \quad \text{vs} \quad V_r$]{\includegraphics[scale=0.8]{PVdiagram} \label{BPV}} \caption{In fig. \ref{BGibbs}, the reduced Gibbs free energy $G_r$ is plotted as a function of the reduced temperature $T_r$ for different reduced pressures $P_r$. The swallow tail behaviour is exhibited when $P_r<1$. In the insets, a magnified view of the swallow tail at a pressure $P_r=P^*<1$ is shown. In fig. \ref{BPV}, an isotherm with reduced temperature $T_r=T^*<1$ under Maxwell's construction is shown. The SBH, superheated SBH, unstable, supercooled LBH and stable LBH regions are labelled in fig. \ref{BPV}. } \end{figure} The Gibbs free energy and its change are important in determining the thermodynamic stability of a system. In an equilibrium state, when the pressure, temperature and charge are fixed, $G$ takes a minimal value. However, writing $G$ explicitly as a function of temperature and pressure, $G(P,T)$, is often difficult. We can obtain the Gibbs free energy plots parametrically using equations (\ref{Gibbseqn}) and (\ref{temperature}).
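The critical quantities quoted above can be checked directly from the equation of state; a small symbolic sketch (SymPy assumed, variable names ours) substitutes $T_c$ and $V_c$ into the inflection-point conditions:
\begin{verbatim}
import sympy as sp

V, T, g = sp.symbols('V T g', positive=True)
u = (6*V/sp.pi)**sp.Rational(2, 3)      # u = 4*(g**2 + r_h**2)
P = (u*(-1 + 2*sp.pi*T*sp.sqrt(u - 4*g**2)) + 12*g**2) \
    / (2*sp.pi*(u - 4*g**2)**2)

s = sp.sqrt(273)
Tc = (17 - s)*sp.sqrt((s + 15)/2)/(24*sp.pi*g)
Vc = sp.Rational(4, 3)*sp.pi*g**3*(1 + (s + 15)/2)**sp.Rational(3, 2)

crit = {T: Tc, V: Vc, g: 1}
print(sp.diff(P, V).subs(crit).evalf())     # ~ 0
print(sp.diff(P, V, 2).subs(crit).evalf())  # ~ 0
print(P.subs(crit).evalf())                 # reproduces P_c for g = 1
\end{verbatim}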
In fig. \ref{BGibbs}, we plot the reduced Gibbs free energy $G_r$ as a function of the reduced temperature $T_r$ for different pressures. When the reduced pressure $P_r<1$, we can see a ``swallow tail'' behaviour, which is a typical signature of a first-order phase transition. A closer observation reveals that there are three regions in the swallow tail: two branches corresponding to the stable SBH and LBH phases, and a tail connecting these two. As the difference in the Gibbs free energy between the two branches becomes zero, the transition takes place between the SBH and LBH phases. At or below the critical pressure, these two branches become distinct, but they intersect at a certain temperature $T^*$, where the two phases coexist. We use these data to plot the coexistence curve and fit a coexistence equation numerically. Using this fitting method, we obtain, \begin{align} P_r=&-0.022 + 0.625 T_r -6.726 T_r^2 + 48.312 {T_r}^3 - 194.636 {T_r}^4 + 511.332 {T_r}^5\\ & - 887.151 {T_r}^6 + 1009 {T_r}^7 - 722.605 {T_r}^8 + 295.313 {T_r}^9 - 52.4384 {T_r}^{10},\\ T_r \in & (0,1). \end{align} \begin{figure}[H] \centering \subfigure[ref1][]{\includegraphics[scale=0.8]{BPTCoexSpin} \label{BPT}} \qquad \subfigure[ref2][]{\includegraphics[scale=0.8]{BTVCoexSpin} \label{BVT}} \caption{Coexistence and spinodal curves in the $P_r-T_r$ and $T_r-V_r$ planes. The coexistence curve is shown as a solid line and the spinodal curves as dashed lines. } \label{fig1} \end{figure} In fig. \ref{BPT}, we have obtained the $P_r-T_r$ coexistence diagram using the fitting formula. The red line is the locus of the coexistence pressure and temperature $(P^*,T^*)$. The light magenta shaded region below the curve is the LBH phase and the light green region depicts the SBH phase. The black point at the coordinate $(1,1)$ denotes the critical point; above it (unshaded region) the two phases cannot be distinguished, which is known as the supercritical region. In fig. \ref{BPT}, along with the coexistence curve, we have also shown the spinodal curve, marked by the blue dashed line. It is plotted using the condition, \begin{equation} \left( \partial _{V_r} T_r \right)_{P_r}=0,\quad \text{or}\quad \left( \partial _{V_r} P_r \right)_{T_r}=0. \end{equation} The spinodal curve equations obtained from the above condition are of the form, \begin{align} T_{rsp}=&\frac{3 \left(17+\sqrt{273}\right) \left(-10+\left(17+\sqrt{273}\right) {V_r}^{2/3}\right) \sqrt{-2+\left(17+\sqrt{273}\right){V_r}^{2/3}}}{4 \sqrt{15+\sqrt{273}} \left(-4+\left(17+\sqrt{273}\right) {V_r}^{2/3}+\left(17 \sqrt{273}+281\right) {V_r}^{4/3}\right)}, \\ P_{rsp}=&\frac{3 \left(15+\sqrt{273}\right)^2 \left(-12 {V_r}^{2/3}+9 \left(17+\sqrt{273}\right) {V_r}^{4/3}-\left(281+17 \sqrt{273}\right) {V_r}^2\right)}{2 \left(27+\sqrt{273}\right) {V_r}^{2/3} \left(3 \left(17+\sqrt{273}\right) {V_r}^{2/3}-\left(\left(285 \sqrt{273}+4709\right) {V_r}^2+4\right)\right)}. \end{align} Using parametric plots, we have obtained the spinodal curves, which are the loci of extremal points separating the metastable SBH and LBH phases from the unstable region. As evident from figures \ref{BPT} and \ref{BVT}, the spinodal curve is the envelope of the saturated mixture of the SBH and LBH phases. The spinodal curve also has a maximum at the critical point. The coexistence phase structure is shown in the $T_r-V_r$ plane along with the spinodal curve in fig. \ref{BVT}.
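The numerical fitting step itself is a plain least-squares polynomial fit of the $(T^*, P^*)$ pairs extracted from the swallow tail intersections; a sketch (NumPy assumed, names ours):
\begin{verbatim}
import numpy as np

def fit_coexistence(t_star, p_star, deg=10):
    """Fit P_r(T_r) to the (T*, P*) data read off the
    swallow-tail intersections of the Gibbs free energy."""
    coeffs = np.polynomial.polynomial.polyfit(t_star, p_star, deg)
    return np.polynomial.Polynomial(coeffs)

# the fitted coexistence polynomial quoted in the text:
coex = np.polynomial.Polynomial(
    [-0.022, 0.625, -6.726, 48.312, -194.636, 511.332,
     -887.151, 1009.0, -722.605, 295.313, -52.4384])
print(coex(1.0))   # ~ 1: the curve terminates at the critical point
\end{verbatim}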
A careful analysis of the curves reveals five regions, namely the SBH phase, the LBH phase, the supercritical phase, and the metastable superheated SBH and supercooled LBH phases. In both the $P_r-T_r$ and $T_r-V_r$ planes, the critical point coincides with the extremum of the spinodal curve. To have more clarity, we can turn to the $P_r-V_r$ isotherms. When the reduced temperature $T_r$ of an isotherm is below 1, we see an oscillating behaviour, with an inflection point at the critical point. At any temperature $T^*$ corresponding to the coexistence $T_r-V_r$ curve, which is bounded below 1, the isotherm consists of the aforementioned five regions. In fig. \ref{BPV}, we have presented a labelled $P_r-V_r$ plot indicating the different regions. For Maxwell's construction, a vertical line is drawn at the pressure $P^*$. The line divides the isotherm into two equally occupied regions, satisfying Maxwell's equal-area law. At the temperature $T^*$, the volumes of the SBH and LBH phases are $V_s$ and $V_l$, respectively. $V_s$ and $V_l$ are obtained from the $T_r-V_r$ coexistence curve. The terminology used here is defined in parallel to the analogous van der Waals fluid system. From the $P_r-V_r$ isotherm, we see that the SBH phase (thick blue) can exist up to the pressure $P^*$, where it has the volume $V_s$. When the pressure is reduced below $P^*$, the system moves to a superheated phase without undergoing a transition. This phase, denoted by the pink dashed portion, is the superheated SBH phase. This state is metastable in the sense that it can undergo a phase transition with even a small fluctuation. The end of this metastable phase is marked by a black dot, which lies on the spinodal curve. Further, there exists a small unstable region with a positive slope, denoted by the black dotted line in fig. \ref{BPV}. This unstable region terminates at the extremum, from where the system moves to another metastable state known as the supercooled LBH phase. The unstable region is separated from the metastable region by the spinodal curve. The supercooled LBH phase is marked as the magenta dashed line in the plot. The system continues in this state until $P^*$, after which it undergoes a rapid expansion with a slight change in pressure. The volume acts as an order parameter during this transition. At the critical point, the difference between the volumes of the SBH and LBH phases vanishes and they form a single supercritical phase. These regions are also portrayed in the coexistence curve in the $T_r-V_r$ plane in fig. \ref{BVT}. Using the numerical method, we plot the volume change $\Delta V_r=V_l-V_s$ as a function of the reduced temperature $T_r$ in fig. \ref{BVTorder}. It shows that $\Delta V_r$ approaches zero at the critical point and monotonically increases as the temperature is reduced. The series expansion of $\Delta V_r$ around the critical point reads, \begin{equation} \Delta V_r = 4.03691 (1-T_r)^{0.537073}. \end{equation} The critical exponent is $0.537$, which is close to the universal value $1/2$. This result is similar to those obtained earlier for different black holes \cite{Wei2019a,Wei2019b,Wei:2019ctz,Kumara:2020ucr,Wu:2020fij}. \begin{figure}[H] \centering \subfigure[ref3][\, $\log\left(\Delta V_r\right) \quad \text{vs} \quad T_r$]{\includegraphics[scale=0.8]{BOrderparameterVT} \label{BVTorder}} \caption{The volume change $\Delta V_r=V_{l}-V_{s}$ as a function of the reduced temperature $T_r$. A magnified view near the critical point is shown in the inset.
} \end{figure} \section{Microstructure of the Bardeen AdS Black Hole} \label{TG} It is known from the early works of George Ruppeiner that information about thermodynamic phase transitions is captured by the thermodynamic geometry constructed in the thermodynamic parameter space $(P,T,V,S)$. In this section, we study the critical behaviour and the microstructure of the Bardeen black hole using the novel Ruppeiner geometry put forward by Wei \emph{et al.} \cite{Wei2019a}, where $T$ and $V$ are chosen as the fluctuation coordinates. The line element is written in $(T, V)$ coordinates as, \begin{equation} dl^2=\frac{C_V}{T^2}dT^2-\frac{\left( \partial _V P\right)_T }{T}dV^2. \label{line} \end{equation} The normalised scalar curvature $R_N$ is obtained from the above line element as, \begin{equation} R_N= R C_V= \frac{\left(\partial_V P\right)^2-T^2 \left(\partial_{V,T}P\right)^2+2T^2\left(\partial_V P\right) \left(\partial_{V,T,T}P\right)}{2\left(\partial_V P\right)^2}.\label{RN} \end{equation} We have seen from equation (\ref{cv}) that the heat capacity $C_V$ at constant volume vanishes for the black hole. This results in a singular curvature scalar $R$, and the singular behaviour is rectified by multiplying $R$ by $C_V$. The normalised scalar $R_N$ gives information about the microscopic interactions present in the black hole. The metric tensor is calculated using the line element (\ref{line}) and $R_N$ is obtained from equation (\ref{RN}). The $R_N$ hence obtained is a complicated expression, $R_N(T,V,g)$. After converting it into the reduced coordinates, $R_N(T_r,V_r)$, it is plotted against the reduced volume $V_r$ at fixed temperature in fig. \ref{RNplots}. In reduced coordinates, $R_N$ is independent of the monopole charge $g$. From figures \ref{BRNV1} and \ref{BRNV2}, we can see that $R_N$ has two divergent points below the critical temperature ($T_r<1$). When the temperature equals the critical temperature ($T_r=1$), these divergences merge and shoot up at the point $V_r=1$, as shown in fig. \ref{BRNV3}. As expected, the divergence vanishes for all temperatures above the critical temperature ($T_r>1$), fig. \ref{BRNV4}. This shows that the information about the phase transition and critical phenomena is well expressed by the normalised curvature scalar $R_N$. Interestingly, these two divergent points correspond to the extremal points on the spinodal curves, which bound the metastable phases. We can notice that even though $R_N$ is negative for most of the parameter space, there exists a small range where it is positive, which is shown in the insets of fig. \ref{RNplots}. The sign-changing curve is plotted in the $T_r-V_r$ plane utilising the condition $R_N=0$. \begin{figure}[H] \centering \subfigure[ref1][]{\includegraphics[scale=0.8]{BRV05}\label{BRNV1}} \qquad \subfigure[ref2][]{\includegraphics[scale=0.8]{BRV09}\label{BRNV2}} \subfigure[ref1][]{\includegraphics[scale=0.8]{BRV1}\label{BRNV3}} \qquad \subfigure[ref1][]{\includegraphics[scale=0.8]{BRV12}\label{BRNV4}} \caption{The normalised curvature scalar $R_N$ is plotted against the reduced volume $V_r$ at different reduced temperatures $T_r$.
} \label{RNplots} \end{figure} The scalar curvature $R_N$ vanishes and changes its sign at the point $T_0$, given by \begin{equation} T_0=\frac{T_{rsp}}{2}=\frac{3 \left(\sqrt{273}+17\right) \left(\left(\sqrt{273}+17\right) {V_r}^{2/3}-10\right) \sqrt{\left(\sqrt{273}+17\right) {V_r}^{2/3}-2}}{8 \sqrt{\sqrt{273}+15} \left(\left(\sqrt{273}+17\right) {V_r}^{2/3}+\left(17 \sqrt{273}+281\right) {V_r}^{4/3}-4\right)}. \end{equation} A sign change also happens at the point, \begin{equation} V_r=V_0=\frac{5}{8} \sqrt{\frac{5}{2} \left(4709-285 \sqrt{273}\right)}. \end{equation} The sign-changing curve distinguishes the regions of negative $R_N$ from the positive ones. As noted, the scalar curvature $R_N$ carries information about the microscopic interactions: a positive $R_N$ means repulsive and a negative $R_N$ signifies attractive interactions in the microstructure. To have more clarity, we have placed all three curves, the coexistence, spinodal and sign-changing curves, in a single plot, fig. \ref{BSignCoexSpin}. Different regions in fig. \ref{BSignCoexSpin} correspond to the stable and metastable phases. The light magenta shaded region under the sign-changing curve has positive $R_N$, and the unshaded region has negative $R_N$. The region \textcircled{1}, the area common between the spinodal curve and the sign-changing curve, is a saturated SBH + LBH phase. This phase always has a repulsive interaction in the microstructure, with a positive $R_N$. Next, the left-most region, bounded by the line $V_r=V_0$, also has a repulsive interaction with a positive $R_N$. In that region, \textcircled{3} is an SBH phase and \textcircled{2} is a metastable superheated SBH phase. Nevertheless, there is a stable SBH portion, marked with \textcircled{4}, which lies outside this positive-$R_N$ region and has an attractive interaction in the microstructure. All other phases, the supercooled LBH as well as the stable LBH phase, have negative $R_N$, with a dominant attractive interaction. This affirms that both attractive and repulsive interactions exist in the black hole microstructure, a hint of which was already observed in the $R_N-V_r$ plots (fig. \ref{RNplots}). \begin{figure}[H] \centering \subfigure[ref1][]{\includegraphics[scale=0.8]{BSignSpinCoex2}\label{BSignCoexSpin}} \qquad \subfigure[ref2][]{\includegraphics[scale=0.8]{BRT}\label{BRT}} \caption{ \ref{BSignCoexSpin}: The sign-changing curve of $R_N$ along with the coexistence and spinodal curves. \ref{BRT}: The behaviour of the normalised curvature scalar $R_N$ along the coexistence line. The red (solid) line and blue (dashed) line correspond to the large black hole and the small black hole, respectively. The inset shows the region where the SBH branch takes a positive $R_N$ value. } \end{figure} In fig. \ref{BRT}, the normalised curvature scalar $R_N$ is plotted as a function of temperature along the coexistence line. This is obtained numerically from the $P_r-T_r$ coexistence fitting equation. Both branches, SBH and LBH, diverge to negative infinity at the critical temperature $T_r=1$. Besides this, we can see that the sign of $R_N$ is always negative for the LBH phase, but the same is not true for the SBH phase. As shown in the inset of fig. \ref{BRT}, in the low temperature range there exists a region with positive $R_N$. Even though the scalar curvature decreases with temperature for both branches, the LBH branch never attains a positive $R_N$. As in the previous plots (fig. \ref{RNplots}), this leads to the conclusion that both attractive and repulsive interactions exist in the microstructure of the SBH phase.
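The normalised curvature itself can be generated symbolically from the equation of state; a sketch (SymPy assumed, names ours). Note that the last term of Eq. (\ref{RN}) drops out automatically here, since the equation of state is linear in $T$:
\begin{verbatim}
import sympy as sp

V, T, g = sp.symbols('V T g', positive=True)
u = (6*V/sp.pi)**sp.Rational(2, 3)
P = (u*(-1 + 2*sp.pi*T*sp.sqrt(u - 4*g**2)) + 12*g**2) \
    / (2*sp.pi*(u - 4*g**2)**2)

P_V = sp.diff(P, V)
P_VT = sp.diff(P, V, 1, T, 1)
P_VTT = sp.diff(P, V, 1, T, 2)      # zero: P is linear in T
R_N = (P_V**2 - T**2*P_VT**2 + 2*T**2*P_V*P_VTT) / (2*P_V**2)

R_fun = sp.lambdify((V, T, g), R_N)  # evaluate R_N(V) at fixed T, g
\end{verbatim}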
We also notice that the attractive interaction in the SBH phase is stronger than that in the LBH phase. This can be attributed to the stronger correlation between the black hole molecules in the SBH phase compared to the loosely correlated LBH phase. This behaviour is similar to the van der Waals liquid-gas system, where the attractive interaction in the liquid phase is more intense than in the gaseous phase. \begin{figure}[H] \centering \subfigure[ref1][]{\includegraphics[scale=0.8]{RTfitSBH}\label{RTfitSBH}\label{SBH}} \qquad \subfigure[ref2][]{\includegraphics[scale=0.8]{RTfitLBH}\label{RTfitLBH}\label{LBH}} \caption{ The behaviour of the scalar curvature $\ln|R_N|$ near the critical point is shown in terms of $\ln(1-T_r)$ for the LBH in fig. \ref{LBH} and the SBH in fig. \ref{SBH}. The numerical data points are marked by black dots, and the lines obtained from the fitting formulas are shown as a solid blue line for the SBH phase and a solid red line for the LBH phase. } \end{figure} Finally, we can find the critical exponent corresponding to the divergence of $R_N$ along the coexistence line for the SBH and LBH branches. This can be obtained numerically assuming that $R_N$ has the form, \begin{equation} R_N\sim\left(1-T_r\right)^p. \end{equation} Taking the logarithm on both sides, it reduces to, \begin{equation} \ln|R_N|=-p\ln\left(1-T_r\right)+q. \end{equation} We have numerically generated data for $R_N$ as a function of the coexistence temperature $T_r$ in the range $0.9$ to $0.999$. Along the SBH branch, we can fit the data as, \begin{equation} \ln|R_N|=-1.82528\ln\left(1-T_r\right)-0.970181 .\label{RLtSBH} \end{equation} Similarly, for the LBH branch, we obtain, \begin{equation} \ln|R_N|=- 2.15789\ln\left(1-T_r\right)-3.07915 . \label{RLtLBH} \end{equation} We have plotted these equations separately for the SBH and LBH phases in figures \ref{SBH} and \ref{LBH}, along with the numerical data points. The plots show a good consistency between the fitted lines and the numerical data. Within numerical errors, the results show that the critical exponent $p$ is approximately equal to 2. From equations (\ref{RLtSBH}) and (\ref{RLtLBH}), taking $p=2$ and averaging the intercepts, we can write, \begin{equation} R_N\left(1-T_r\right)^2=-e^{-\left(0.970181+3.07915\right)/2}=-0.132038. \end{equation} This ratio is slightly larger in magnitude than the universal ratio $-1/8$ found for the vdW system and other AdS black holes. Taking the numerical errors into account, the result we obtained is very close to the universal ratio. \section{Summary and Conclusions} \label{summary} In this paper, we have concentrated mainly on studying the thermodynamics and microstructure of regular Bardeen AdS black holes. Information about the coexistence phases, missing in earlier studies in the literature, is provided. We have dedicated the initial sections to obtaining the coexistence $P_r-T_r$ equation from the Gibbs free energy plots. The Gibbs free energy in reduced coordinates is plotted as a function of the reduced temperature $T_r$ at fixed pressure $P_r$. The appearance of the swallow tail behaviour in these plots below the critical pressure is used to generate data for obtaining the coexistence equation. As it is difficult to obtain the coexistence equation analytically for Bardeen black holes, either from Maxwell's equal-area law or from the Gibbs free energy, we have used a fitting formula. Through the coexistence equation, different regions in the $P_r-V_r$ isotherms are analysed at a reduced temperature $T_r<1$. It is noticed that a first-order phase transition analogous to that of the vdW system takes place between the stable SBH and LBH phases.
Besides, there exist metastable superheated SBH and supercooled LBH phases. The stable and metastable phases are distinguished from each other by plotting a spinodal curve. The unstable regions are removed through Maxwell's construction. These distinct phases of the black hole are studied through the coexistence and spinodal curves in the $P_r-T_r$ and $T_r-V_r$ planes. The change in volume $\Delta V_r=V_{rl}-V_{rs}$ acts as an order parameter during the SBH-LBH phase transition. Near the critical point, the critical exponent is calculated, which matches the universal value of $1/2$. In the second half of the paper, we have studied the thermodynamic geometry. The novel method proposed by Wei \emph{et al.} is used to calculate the thermodynamic scalar curvature \cite{Wei2019a}. Using the reduced equation of state for the Bardeen black hole, the normalised scalar curvature is calculated and plotted against the reduced volume $V_r$ at different temperatures. The critical behaviour is well captured in the plots, with the appearance and disappearance of divergences below and above the critical point. Moreover, it is noticed that the scalar curvature attains both positive and negative values in the plots. The sign of $R_N$ encodes information about the microscopic interactions. This leads to the inference that both attractive and repulsive interactions exist in the black hole microstructure. To obtain more details on the microstructure, we have analysed the behaviour of the scalar curvature along the coexistence curve. In the absence of an analytical expression for the coexistence curve, we depend on numerical methods to obtain the $R_N$ vs $T_r$ plots for the SBH and LBH branches. Both branches diverge to negative infinity at the critical point $T_r=1$. Apart from this common divergence, the microstructures of the SBH and LBH are distinct. The LBH phase always has a larger $|R_N|$ than the SBH branch. Moreover, the SBH branch attains a positive $R_N$ in the low temperature range. This is in agreement with the $R_N-V_r$ plots: the SBH microstructure has repulsive as well as attractive interactions, while the LBH microstructure has only attractive interactions. This is in contrast to the vdW fluid system, where only dominant attractive interactions are present between the molecules. Our results imply that the phase transition leads to a change in the microstructure of the regular Bardeen AdS black holes. A similar type of behaviour is observed in the charged AdS black hole and the regular Hayward AdS black hole. But this feature is not universal: in the five-dimensional neutral Gauss-Bonnet black hole case, only attractive interactions are present in the entire parameter space, similar to the van der Waals fluid. In Born-Infeld AdS black holes and massive gravity theories, the nature of the interaction depends on the value of the coupling and the massive parameter, respectively. \acknowledgments Authors A.R.C.L., N.K.A. and K.H. would like to thank U.G.C., Govt. of India for financial assistance under the UGC-NET-SRF scheme.
\section{Introduction} \IEEEPARstart{O}{fdm} is widely employed in wireless communications, mainly due to its ability to convert a frequency-selective fading channel into a group of flat-fading sub-channels \cite{lu2000space}. Compared to conventional single-carrier systems, \ac{ofdm} offers increased robustness against multipath fading distortions, since channel equalization can easily be performed in the frequency domain through a bank of one-tap multipliers \cite{dai2010positioning}. Moreover, \ac{ofdm} can be efficiently implemented using the \ac{fft} \cite{murphy2002low}, which makes it more appealing than other multi-carrier modulation techniques such as Filter Bank Multi Carrier and Generalised Frequency Division Multiplexing. Owing to these advantages, \ac{ofdm} is exploited in many IEEE standards, such as IEEE 802.15.3a, IEEE 802.16d/e, and IEEE 802.15.4g \cite{green, jimenez2004design, ofdm2ieee802.15}, which are used for different applications. For instance, \ac{ofdm} combined with the massive \ac{mimo} technique achieves a high data rate, making it suitable for multimedia broadcasting \cite{kim2008apparatus}. Moreover, many \ac{iot} applications, such as smart buildings and \ac{v2x}, leverage \ac{ofdm} as their main communication scheme \cite{ofdmieee802.15, ofdm2ieee802.15}. \ac{ofdm}, however, undergoes severe \ac{isi} caused by the high selectivity of the fading channel \cite{wang2005robust}. In order to mitigate this issue, a guard interval of fixed length is usually inserted between every two consecutive \ac{ofdm} symbols. When the guard interval is a partial repetition of the transmitted data samples, the scheme is called \ac{cp}-\ac{ofdm} \cite{channelestimationCP}. The primary benefit of \ac{cp}-\ac{ofdm} is the ease of \ac{to} estimation or, equivalently, of estimating the starting point of the \ac{fft}. This is referred to as time synchronization \cite{tufvesson1999time}, and is easily carried out by using the \ac{cp} and its correlation with the data sequence. Despite the ease of time synchronization in \ac{cp}-\ac{ofdm}, it possesses major disadvantages, such as excess transmission power due to the transmission of the \ac{cp}. \ac{zp}-\ac{ofdm} \cite{muquet2000reduced}, where the guard interval is filled with zeros, overcomes this issue. However, time synchronization, or equivalently \ac{to} estimation, in \ac{zp}-\ac{ofdm} becomes a very difficult and complicated task. There are two approaches to estimating the \ac{to} in \ac{zp}-\ac{ofdm}. In the first approach, called \ac{da} time synchronization, a series of training sequences (pilots) is used to estimate the \ac{to}. The second approach, referred to as \ac{nda} time synchronization, instead relies on the statistical properties of the transmitted data sequence. \subsection{Related work} \ac{da} time synchronization for \ac{zp}-\ac{ofdm} has been studied in the literature \cite{nasir2016timing}. On the other hand, \ac{nda} time synchronization lacks a reliable mathematical analysis. \ac{nda} time synchronization algorithms for \ac{zp}-\ac{ofdm} have been proposed in \cite{bolcskei2001blind, LeNir2010}; these are mainly heuristic. More specifically, these algorithms trace the energy ratio of partially cropped data sequences, which does not always yield reliable performance in terms of lock-in probability for highly selective channels.
Also, a mathematical approach towards \ac{nda} \ac{to} estimation for \ac{zp}-\ac{ofdm} systems has been proposed in \cite{koosha2020}. The authors in \cite{koosha2020} proposed an \ac{ml} \ac{to} estimator for \ac{zp}-\ac{ofdm} under a frequency-selective channel. However, the algorithm in \cite{koosha2020} is highly complex, which hinders its implementation for \ac{mimo} systems. Moreover, the algorithm proposed in \cite{koosha2020} is designed for an \ac{awgn} channel, which implies that its performance degrades when the channel experiences impulsive noise. \subsection{Motivation} \ac{zp}-\ac{ofdm} requires less transmission power than \ac{cp}-\ac{ofdm}, due to the lack of a \ac{cp}, which makes it a suitable candidate for future \ac{iot} devices. However, time synchronization is challenging in \ac{zp}-\ac{ofdm}, where existing time synchronization algorithms either fail to achieve a high lock-in probability or are too complex for practical implementation. Moreover, the \ac{to} estimators proposed for \ac{zp}-\ac{ofdm} systems so far have been developed for Gaussian noise models. However, many real-world channels, e.g. underwater, urban, and indoor channels, are known to experience impulsive noise rather than simple Gaussian noise \cite{blackard1993measurements, middleton1973man, middleton1993elements, middleton1987channel}. Such noise originates from numerous natural sources as well as from electronic equipment. It is well known that designing communication systems under a simple Gaussian noise model can significantly degrade the performance of such systems when they experience impulsive noise in practice \cite{wang1999robust}. Hence, an accurate yet low-complexity time synchronization algorithm for \ac{zp}-\ac{ofdm} still needs to be developed. In this paper, we propose a low-complexity mathematical approach towards \ac{nda} time synchronization for \ac{zp}-\ac{ofdm} systems in an impulsive noise channel. Simulation results demonstrate that the proposed estimator has a negligible performance gap, in terms of lock-in probability, with respect to the estimator in \cite{koosha2020}, while possessing a significantly lower complexity. \subsection{Contributions} In this paper, we \begin{itemize} \item propose a low-complexity approximate \ac{ml} \ac{to} estimator for \ac{mimo} \ac{zp}-\ac{ofdm} systems in highly selective channels with impulsive noise. This algorithm (i) achieves a high lock-in probability, and (ii) has a significantly lower complexity compared to \cite{koosha2020}; \item investigate higher-order statistics of the proposed approximate \ac{pdf} of the received samples; \item propose a lower-complexity algorithm, i.e. an \ac{ed}, for \ac{mimo} \ac{zp}-\ac{ofdm} systems experiencing Gaussian noise. This algorithm achieves a high lock-in probability at high \ac{snr}s while possessing a low complexity; \item study the complexity of the proposed algorithms and of the one in \cite{koosha2020}. \end{itemize} The paper is organized as follows. The system model is discussed in Section \ref{sec: sys mod}. The main ideas and the proposed \ac{ml} estimators are presented in Section \ref{sec: siso}. The complexity of the proposed algorithm is compared with that of \cite{koosha2020} in Section \ref{sec: complexity}. Simulation results and conclusions are given in Sections \ref{sec: simul} and \ref{sec: conclu}, respectively. \textit{Notations}: Column vectors are denoted by bold lowercase letters. Random variables are indicated by uppercase letters.
Matrices are denoted by bold uppercase letters. The conjugate, absolute value, transpose, and expected value are indicated by $(\cdot)^*$, $|\cdot|$, $(\cdot)^{\rm{T}}$, and $\mathbb{E}\{\cdot\}$, respectively. Brackets, e.g. ${\bf a}[k]$, are used for discrete indexing of a vector ${\bf a}$. \section{System model} \label{sec: sys mod} We consider a \ac{mimo}-\ac{ofdm} wireless system with $m_t$ transmit and $m_r$ receive antennas. This system uses the \ac{zp}-\ac{ofdm} technique to communicate over a frequency-selective Rayleigh fading channel. We assume perfect synchronization at the transmit antennas. Let $\{x^{(n)}_k\}_{k=0}^{n_{\rm{x}}-1}$, with $\mathbb{E}\{|x_k|^2\}= \sigma^2_{\rm{x}}$, be the $n_{\rm{x}}$ complex data samples of the $n$-th \ac{ofdm} block to be transmitted from the $i$-th transmit antenna. Their corresponding \ac{ofdm} signal can be expressed as \begin{align}\label{eq: ofdm symbol} x^{(n)}_i(t)=\sum_{k=0}^{n_{\rm{x}}-1} x^{(n)}_k e^{\frac{j2\pi k t}{T_{\rm{x}}}}, \,\,\,\,\,\,\,\,\,\,\ 0 \le t \le T_{\rm{x}}, \end{align} \noindent where $T_{\rm{x}}$ denotes the duration of the data signal. To deal with the delay spread of the wireless channel, a zero-padding guard interval of length $T_{{\rm{z}}}$ is appended to \eqref{eq: ofdm symbol} in order to form the transmitted \ac{ofdm} signal. Hence, the $n$-th transmitted \ac{ofdm} signal from the $i$-th transmit antenna is given as \begin{align} \label{eq: s continues} s^{(n)}_i(t)= \begin{cases} x^{(n)}_i(t), & 0 \le t \le T_{\rm{x}} \\ 0, & T_{\rm{x}} < t \le T_{\rm{s}}, \end{cases} \end{align} \noindent where $T_{\rm{s}}$ denotes the signal duration, and $T_{\rm{s}}= T_{\rm{x}}+T_{\rm{z}}$. Since the receiver demodulates the received signal and performs sampling, it is more convenient, from both a practical and a mathematical point of view, to develop the algorithms and perform the analysis in the discrete baseband domain. To this end, we continue our analysis in the discrete baseband format from now on. Let us denote the sampling rate at the receiver by $f_{\rm{s}}=1/T_{\rm{sa}}$. We assume the transmitted signal $s^{(n)}_i(t)$ passes through a frequency-selective channel with $n_{\rm{h}}$ taps. Let $h_{ji}[k]$, $k=0,1,\dots,n_{\rm{h}}-1$, denote the $k$-th channel tap between transmit antenna $i$ and receive antenna $j$. The channel taps are assumed to be statistically independent complex Gaussian random variables with zero mean, i.e. Rayleigh fading. The delay profile of the taps is given as \begin{align}\label{7u8i0000} \mathbb{E}\{h[k]h^*[k-m]\}=\sigma_{{\rm{h}}_k}^2\delta[m], \end{align} $k=0,1,\dots, n_{\rm{h}}-1$. We assume the channel delay profile is known to the receiver. An illustrative sketch of this signal and channel model is given below.
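To make the discrete-time model concrete, the following minimal Python sketch (with purely illustrative parameter values) generates one zero-padded \ac{ofdm} block, a Rayleigh channel realization with an exponential delay profile, and the resulting noise-free received block:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
n_x, n_z = 512, 20               # data and zero-padding lengths (illustrative)
n_s, n_h = n_x + n_z, 10         # block length; channel taps, n_z >= n_h

# Frequency-domain symbols mapped to the time domain via IFFT; the text
# models the resulting time-domain samples as zero-mean complex Gaussian.
data = (rng.choice([-1.0, 1.0], n_x)
        + 1j * rng.choice([-1.0, 1.0], n_x)) / np.sqrt(2)
x_time = np.fft.ifft(data) * np.sqrt(n_x)    # unit average sample power
s = np.concatenate([x_time, np.zeros(n_z)])  # zero-padded OFDM block

# Rayleigh taps with an exponential power delay profile of unit total energy
beta = 0.5
pdp = np.exp(-beta * np.arange(n_h))
pdp /= pdp.sum()
h = np.sqrt(pdp / 2) * (rng.standard_normal(n_h)
                        + 1j * rng.standard_normal(n_h))

# Noise-free received block: convolution truncated to n_s samples, i.e. the
# action of the lower-triangular Toeplitz channel matrix on s.
v = np.convolve(h, s)[:n_s]
\end{verbatim}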
In the absence of synchronization error and \ac{isi}, the discrete received baseband vector is expressed as \begin{align}\label{Sys Model: matrix form conv 2} {\bf Y}^{(n)}= \begin{cases} {\bf H} {\bf S}^{(n)} + {\bf W}^{(n)}, & n \ge 0 \\ {\bf W}^{(n)}, & n<0, \end{cases} \end{align} where ${\bf H}$ denotes the discrete channel filter, defined as \begin{equation} \label{matrix H} {\bf H}=\begin{pmatrix} {\bf {\rm{H}}}_{11} & {\bf {\rm{H}}}_{12} & \cdots & {\bf {\rm{H}}}_{1{\rm{m_t}}} \\ {\bf {\rm{H}}}_{21} & {\bf {\rm{H}}}_{22} & \cdots & {\bf {\rm{H}}}_{2{\rm{m_t}}} \\ \vdots & \cdots & \ddots & \vdots \\ {\bf {\rm{H}}}_{{\rm{m_r}} 1} & {\bf {\rm{H}}}_{{\rm{m_r}} 2} & \cdots & {\bf {\rm{H}}}_{ {\rm{m_r}} {\rm{m_t}}} \end{pmatrix}, \end{equation} \noindent where ${\bf {\rm{H}}}_{ji}$ is the lower triangular Toeplitz channel matrix between transmit antenna $i$ and receive antenna $j$, with first column $[h_{ji}[0] \ h_{ji}[1] \ \cdots \ h_{ji}[n_{\rm{h}}-1] \ 0 \ \cdots \ 0]^\text{T}$. We set this matrix to be $n_{\rm{s}} \times n_{\rm{s}}$, where \begin{equation} \begin{split} n_{\rm{s}} = n_{\rm{x}}+n_{\rm{z}},~~~ n_{\rm{s}}\triangleq T_{\rm{s}}/T_{\rm{sa}},\\ n_{\rm{x}}\triangleq T_{\rm{x}}/T_{\rm{sa}},~~~ n_{\rm{z}}\triangleq T_{\rm{z}}/T_{\rm{sa}}, \end{split} \end{equation} and $n_{\rm{s}}$, $n_{\rm{x}}$, and $n_{\rm{z}}$ denote the number of \ac{ofdm} signal samples, the number of data samples, and the number of zero samples, respectively. Moreover, ${\bf S}^{(n)}$, ${\bf Y}^{(n)}$, and ${\bf W}^{(n)}$ are defined as \begin{equation} \label{sys mod: s y w} {\bf S}^{(n)} \triangleq \begin{pmatrix} {\bf s}^{(n)}_1 \\ {\bf s}^{(n)}_2 \\ \vdots \\ {\bf s}^{(n)}_{{\rm{m_t}}} \end{pmatrix},~ {\bf Y}^{(n)} \triangleq \begin{pmatrix} {\bf y}^{(n)}_1 \\ {\bf y}^{(n)}_2 \\ \vdots \\ {\bf y}^{(n)}_{{\rm{m_r}}} \end{pmatrix},~ {\bf W}^{(n)} \triangleq \begin{pmatrix} {\bf w}^{(n)}_1 \\ {\bf w}^{(n)}_2 \\ \vdots \\ {\bf w}^{(n)}_{{\rm{m_r}}} \end{pmatrix}, \end{equation} \noindent where ${\bf y}^{(n)}_j$, ${\bf w}^{(n)}_j$, and ${\bf s}^{(n)}_i$ denote the received vector at the $j$-th receive antenna, the noise vector at the $j$-th receive antenna, and the transmitted vector from the $i$-th transmit antenna, respectively, and are given as \begin{align} \label{eq: sys mod y w s anttena i} {\bf y}^{(n)}_j &\triangleq [ \, y^{(n)}_j[0] ~~ y^{(n)}_j[1] ~~ \cdots ~~ y^{(n)}_j [{n_{\rm{s}}-1}] \, ]^{\rm{T}}, \\ {\bf w}^{(n)}_j &\triangleq [ \, w^{(n)}_j[0] ~~ w^{(n)}_j[1] ~~ \cdots ~~ w^{(n)}_j [{n_{\rm{s}}-1}] \, ]^{\rm{T}}, \\ {\bf s}^{(n)}_i &\triangleq [ \, s^{(n)}_i[0] ~~ s^{(n)}_i[1] ~~ \cdots ~~ s^{(n)}_i [{n_{\rm{s}}-1}] \, ]^{\rm{T}} \nonumber \\ &= [ \, x^{(n)}_i(0) ~~ x^{(n)}_i(T_{\rm{sa}}) ~~ \cdots ~~ x^{(n)}_i((n_{\rm{x}}-1) T_{\rm{sa}}) ~~ \underbrace{ 0 ~~ \cdots ~~ 0 \, }_{n_{\rm{z}}} ]^{\rm{T}}. \end{align}
Note that if the receiver starts receiving samples before any data is transmitted, it only receives noise samples; thus, we have ${\bf Y}^{(n)}= {\bf W}^{(n)}$ for $n<0$ in \eqref{Sys Model: matrix form conv 2}. In order to avoid \ac{isi}, the length of the zero padding should be greater than or equal to the number of channel taps, i.e. $n_{\rm{z}} \ge n_{\rm{h}}$. This assumption holds throughout the analysis in this paper. We also consider the Class A impulsive noise model, i.e. a Gaussian mixture, defined as \cite{shongwe2015study} \begin{equation} \label{eq: noise pdf} f_{W^{(n)}_j[k]}(w) = \sum^{L-1}_{l=0} p_l \mathcal{CN}(w:0, \sigma_{{\rm{w}}_l}^2), \end{equation} \noindent for $k \in \{0,1,\cdots,{n_{\rm{s}}-1} \}$. It is well known that the Gaussian mixture is a more accurate noise model than the conventional simple Gaussian model \cite{wang1999robust}. That is, the Gaussian mixture distribution models the noise in many real-world channels, such as urban, underwater, and indoor channels, more accurately than the conventional simple Gaussian model \cite{blackard1993measurements, middleton1973man, middleton1993elements,middleton1987channel}. According to the \ac{clt}, the transmitted \ac{ofdm} samples, i.e. $s^{(n)}_i[k]= x^{(n)}_i(kT_{{\rm{sa}}}),~ \forall k \in \{0, 1, \cdots , {n_{\rm{x}}}-1 \}$, can be modeled as \ac{iid} zero-mean Gaussian random variables. Hence, \begin{align} \label{eq: ofdm samples gauss} s^{(n)}_i[k] ~\text{or}~ x^{(n)}_i(kT_{{\rm{sa}}}) \sim \mathcal{CN}(0,\sigma^2_{\rm{x}}),~~~ \forall k \in \{0, 1, \cdots , {n_{\rm{x}}}-1 \}, \end{align} where \begin{align} \label{eq: ofdm samples gauss power} \mathbb{E}\Big{\{}s^{(n)}_i[k] s^{(n)}_p[k']^*\Big{\}}&=\mathbb{E}\Big{\{}x^{(n)}_i(kT_{{\rm{sa}}}) x^{(n)}_p(k'T_{{\rm{sa}}})^*\Big{\}} \nonumber \\ &= \sigma^2_{\rm{x}} \delta[k-k'] \delta[i-p],\\ \nonumber &\forall i,p \in \{1,2,\cdots,{m_{\rm{t}}} \}, \\ \nonumber &\forall k,k' \in \{0, 1, \cdots , {n_{\rm{x}}}-1 \}. \end{align} Now, assume that there is a \ac{to} $\tau \triangleq dT_{\rm{sa}}+\epsilon$ between the transmitter and the receiver, where $d$ and $\epsilon$ represent the integer and fractional parts of the \ac{to}, respectively. Since the fractional part of the \ac{to}, $\epsilon$, can be corrected through channel equalization, it suffices to estimate the beginning of the received \ac{ofdm} vector within one sampling period. In fact, it is common in practice to model the \ac{to} as a multiple of the sampling period and to consider the remaining fractional error as part of the channel impulse response. To this end, we focus on estimating the integer part of the \ac{to}, $d$, which is essential in order to perform the \ac{fft} operation at the receiver and decode the data in subsequent steps. The next sections propose an approximate \ac{ml} estimator for estimating $d$. Before proceeding, the following sketch illustrates sampling from the noise model \eqref{eq: noise pdf}.
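As a quick illustration of \eqref{eq: noise pdf}, the sketch below draws complex samples from a two-component Gaussian mixture; the weights and variances are illustrative assumptions in the style of the values used later in the simulation section:
\begin{verbatim}
import numpy as np

def class_a_noise(n, p, var, rng):
    # Draw n complex Class A (Gaussian-mixture) noise samples; p[l] is the
    # probability of component l and var[l] its total complex variance.
    comp = rng.choice(len(p), size=n, p=p)       # mixture component per sample
    std = np.sqrt(np.asarray(var)[comp] / 2.0)   # per-dimension (I/Q) std
    return std * (rng.standard_normal(n) + 1j * rng.standard_normal(n))

# Example: a rare but strong impulsive component (assumed values)
rng = np.random.default_rng(1)
w = class_a_noise(10000, p=[0.99, 0.01], var=[1.0, 100.0], rng=rng)
\end{verbatim}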
\section{Maximum Likelihood Estimation For Single-Input Single-Output Systems} \label{sec: siso} In order to better convey the main ideas, and for the sake of notational simplicity, we first derive the approximate \ac{ml} \ac{to} estimator for a \ac{siso}-\ac{ofdm} system, where ${\rm{m_t}}=1$ and ${\rm{m_r}}=1$. We then extend the results obtained for \ac{siso} systems in order to obtain the \ac{ml} \ac{to} estimator for \ac{mimo}-\ac{ofdm} systems. For notational simplicity, we drop the subscripts $i$ and $j$, denoting the variables associated with the $i$-th transmit and $j$-th receive antennas, from ${\bf y}^{(n)}_j$, $y^{(n)}_j[k]$, ${\bf w}^{(n)}_j$, $w^{(n)}_j[k]$, ${\bf s}^{(n)}_i$, $s^{(n)}_i[k]$, ${\bf {\rm{H}}}_{ji}$, $h_{ji}[k]$, and $x^{(n)}_i(k)$. Hence, Equation \eqref{Sys Model: matrix form conv 2} can be rewritten as \begin{align}\label{siso: matrix form conv 2} {\bf y}^{(n)}= \begin{cases} {\rm{H}} {\bf s}^{(n)} + {\bf w}^{(n)} \triangleq {\bf v}^{(n)} +{\bf w}^{(n)}, & n \ge 0 \\ {\bf w}^{(n)}, & n<0, \end{cases} \end{align} \noindent where \begin{align} {\bf y}^{(n)} &\triangleq [ \, y^{(n)}[0] ~~ y^{(n)}[1] ~~ \cdots ~~ y^{(n)} [{n_{\rm{s}}}-1] \, ]^{\rm{T}}, \\ {\bf v}^{(n)} &\triangleq [ \, v^{(n)}[0] ~~ v^{(n)}[1] ~~ \cdots ~~ v^{(n)} [{n_{\rm{s}}}-1] \, ]^{\rm{T}}, \\ {\bf w}^{(n)} &\triangleq [ \, w^{(n)}[0] ~~ w^{(n)}[1] ~~ \cdots ~~ w^{(n)} [{n_{\rm{s}}}-1] \, ]^{\rm{T}}, \\ {\bf s}^{(n)} &\triangleq [ \, s^{(n)}[0] ~~ s^{(n)}[1] ~~ \cdots ~~ s^{(n)} [{n_{\rm{s}}-1}] \, ]^{\rm{T}} \nonumber \\ &= [ \, x^{(n)}(0) ~~ x^{(n)}(T_{\rm{sa}}) ~~ \cdots ~~ x^{(n)}((n_{\rm{x}}-1) T_{\rm{sa}}) ~~ \underbrace{ 0 ~~ \cdots ~~ 0 \, }_{n_{\rm{z}}} ]^{\rm{T}}, \end{align} \noindent and ${\bf {\rm{H}}}$ is the lower triangular $n_{\rm{s}} \times n_{\rm{s}}$ Toeplitz channel matrix with first column $[h[0] \ h[1] \ \cdots \ h[n_{\rm{h}}-1] \ 0 \ \cdots \ 0]^\text{T}$. Now, we assume that the integer part of the \ac{to}, $d$, can take values from the set $\mathcal{D}= \{-n_{\rm{s}}+1,\cdots,-1,0,1,\cdots,n_{\rm{s}}-1\}$, i.e. $d \in \mathcal{D}$. Note that negative values of the delay $d$ denote the case where the receiver starts receiving samples early. That is, for $d<0$, the receiver first receives $\lvert d\rvert$ noise samples from the environment, and then receives the transmitted \ac{ofdm} samples starting from the $(\lvert d \rvert+1)$-th received sample. Similarly, when $d \ge 0$, the receiver starts receiving samples late. In other words, the receiver misses the first $d$ of the transmitted \ac{ofdm} samples. Allowing $d$ to take both negative and positive values enables the final estimator to perform both frame and symbol synchronization, which is a significant advantage. The problem of estimating the \ac{to} can thus be formulated as a multiple hypothesis testing problem with hypotheses ${\rm{H}}_d,~ \forall d \in \mathcal{D}$. We first assume that the receiver uses $N$ observation vectors, ${\bf y}^{(0)}$, ${\bf y}^{(1)}$, $\cdots$, ${\bf y}^{(N-1)}$, each of length $n_{{\rm{s}}}$, in order to estimate the timing offset $d$. Later, we allow the receiver to use any arbitrary number of received samples, not necessarily a multiple of $n_{{\rm{s}}}$, for estimation. The sketch below illustrates how the offset $d$ acts on the received sample stream.
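The following minimal sketch (illustrative only) shows the window mechanics described above: an early start ($d<0$) prepends $|d|$ noise-only samples, while a late start ($d \ge 0$) drops the first $d$ transmitted samples:
\begin{verbatim}
import numpy as np

def observe(tx, d, m, noise_fn):
    # Receive m samples under integer timing offset d; tx is the noiseless
    # channel output aligned to the true block start (assumed long enough),
    # and noise_fn(n) draws n complex noise samples.
    if d < 0:
        sig = np.concatenate([np.zeros(-d, dtype=complex), tx])[:m]
    else:
        sig = tx[d:d + m]
    return sig + noise_fn(len(sig))
\end{verbatim}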
In order to derive the \ac{ml} \ac{to} estimator, we need to obtain the joint \ac{pdf} of the $N$ observation vectors under the different hypotheses ${\rm{H}}_d$. Under ${\rm{H}}_{d}$, this \ac{pdf} is denoted by $f( {\bf y}^{(0)}, {\bf y}^{(1)}, \cdots, {\bf y}^{(N-1)}| {\rm{H}}_{d})$. The following lemma from \cite{koosha2020} gives some insight into the samples of the received vectors. We state this lemma here without proof. \begin{lemma} \label{lemma: obser vec} For any arbitrary $n$ and $k,~ 0 \le k \le n_{\rm{s}}-1$, let us define the actual index of an observation sample $y^{(n)}[k]$ as $k^{\rm{ac}}= n_{\rm{s}}n+k$. Then, for any arbitrary $d \in \mathcal{D}$, the elements of the observation vectors, i.e. $y^{(n)}[k]$, are uncorrelated random variables. Moreover, the elements with an actual index difference greater than $n_{\rm{h}}$ are independent. \end{lemma} \begin{corollary} \label{corol: independ} Samples from different observation vectors are independent. \end{corollary} According to Lemma \ref{lemma: obser vec}, although the elements of the observation vectors are uncorrelated, and those with an actual index difference greater than $n_{\rm{h}}$ are independent, they are not independent in general. Hence, deriving a closed-form expression for the joint \ac{pdf} $f( {\bf y}^{(0)}, {\bf y}^{(1)}, \cdots, {\bf y}^{(N-1)}| {\rm{H}}_{d})$ is not mathematically tractable. However, the authors in \cite{koosha2020} showed that the final estimator is less sensitive to the independence assumption on the elements of the observation vectors than to their marginal \ac{pdf}s. Thus, using Corollary \ref{corol: independ} for the second equality and treating the elements within each vector as independent in the last step, we can write \begin{equation} \label{eq: indepency 1} \begin{split} f( {\bf y}^{(0)}, {\bf y}^{(1)}, \cdots, &{\bf y}^{(N-1)}| {\rm{H}}_{d}) \\ &= f_{{\bf Y}^{(0)}}( {\bf y}^{(0)}|{\rm{H}}_{d}) ~ f_{{\bf Y}^{(1)}}({\bf y}^{(1)}|{\bf y}^{(0)},{\rm{H}}_{d}) \cdots \\ &~~~~~~~~~ f_{{\bf Y}^{(N-1)}}({\bf y}^{(N-1)} | {\bf y}^{(0)}, {\bf y}^{(1)}, \cdots , {\rm{H}}_{d})\\ &= \prod_{n=0}^{N-1} f_{{\bf Y}^{(n)}}( {\bf y}^{(n)}| {\rm{H}}_{d}) \\ &\simeq \prod_{n=0}^{N-1} \prod_{k=0}^{n_{\rm{s}}-1} f_{Y^{(n)}[k]}(y^{(n)}[k]| {\rm{H}}_{d}). \end{split} \end{equation} Let us denote the in-phase and quadrature components of a received sample $y^{(n)}[k]$ by $y^{(n)}_{{\rm{I}}}[k]$ and $y^{(n)}_{{\rm{Q}}}[k]$, respectively. Since the in-phase and quadrature components are independent of each other, one can rewrite Equation \eqref{eq: indepency 1} as \begin{equation} \label{eq: indepency 2} \begin{split} f( {\bf y}^{(0)}, &{\bf y}^{(1)}, \cdots, {\bf y}^{(N-1)}| {\rm{H}}_{d}) \simeq \prod_{n=0}^{N-1} \prod_{k=0}^{n_{\rm{s}}-1} f_{Y^{(n)}[k]}(y^{(n)}[k]| {\rm{H}}_{d})\\ &= \prod_{n=0}^{N-1} \prod_{k=0}^{n_{{\rm{s}}}-1} f_{Y^{(n)}_{{\rm{I}}}[k]}\big(y^{(n)}_{{{\rm{I}}}}[k] | {\rm{H}}_{d} \big) ~ f_{Y^{(n)}_{{\rm{Q}}}[k]}\big(y^{(n)}_{{{\rm{Q}}}}[k] | {\rm{H}}_{d} \big). \end{split} \end{equation} Using Equation \eqref{siso: matrix form conv 2}, we have $Y^{(n)}_{{{\rm{I}}}}[k]=V^{(n)}_{{{\rm{I}}}}[k] + W^{(n)}_{{{\rm{I}}}}[k]$. The same relation holds for the quadrature components of the received samples.
Hence, the \ac{pdf} $f_{Y^{(n)}_{{\rm{I}}}[k]}\big(y^{(n)}_{{{\rm{I}}}}[k] | {\rm{H}}_{d} \big)$, and similarly $f_{Y^{(n)}_{{\rm{Q}}}[k]}\big(y^{(n)}_{{{\rm{Q}}}}[k] | {\rm{H}}_{d} \big)$, for $n\ge0$ can be obtained as \begin{equation} \label{eq: convol siso} \begin{split} f_{Y^{(n)}_{{\rm{I}}}[k]}\big(y^{(n)}_{{{\rm{I}}}}[k] | {\rm{H}}_{d} \big)=& f_{V^{(n)}_{{{\rm{I}}}}[k]} \big( v^{(n)}_{{{\rm{I}}}}[k] | {\rm{H}}_{d} \big) \\ &* f_{W^{(n)}_{{{\rm{I}}}}[k]}\big( w^{(n)}_{{{\rm{I}}}}[k] | {\rm{H}}_{d} \big), \end{split} \end{equation} \noindent where $*$ denotes the convolution operation. Deriving the \ac{pdf} $f_{V^{(n)}_{{{\rm{I}}}}[k]} \big( v^{(n)}_{{{\rm{I}}}}[k] | {\rm{H}}_{d} \big)$, or $f_{V^{(n)}_{{{\rm{Q}}}}[k]} \big( v^{(n)}_{{{\rm{Q}}}}[k] | {\rm{H}}_{d} \big)$, exactly is complex and results in a complicated expression \cite{koosha2020}. Here, in favor of reducing the complexity of the final estimator, we approximate this \ac{pdf} by a Gaussian \ac{pdf} with zero mean and a specific variance for each received sample. We justify this assumption in more detail later. The first step in deriving the \ac{pdf} of the in-phase component of a received sample $Y^{(n)}_{{\rm{I}}}[k]$ is to determine the corresponding mean and variance of $V^{(n)}_{{\rm{I}}}[k]$ under hypothesis ${\rm{H}}_d$. In the absence of \ac{to}, i.e. under ${\rm{H}}_0$, Equation \eqref{siso: matrix form conv 2} holds and can be expanded as Equation \eqref{eq: expa conv 2}, given at the top of the next page. Using Equations \eqref{eq: expa conv 2} and \eqref{eq: ofdm samples gauss} and the fact that $\mathbb{E}\{h_{{\rm{I}}}[k]\}=\mathbb{E}\{h_{{\rm{Q}}}[k]\}=0$, we conclude \begin{align}\label{eq: mean siso} \mathbb{E}\{V^{(n)}_{{\rm{I}}}[k]|{\rm{H}}_0\}=0. \end{align} By substituting \begin{align} \mathbb{E}\big{\{}s^{(n)}_{{{\rm{I}}}^2}[k]\big{\}}&= \mathbb{E}\big{\{}s^{(n)}_{{{\rm{Q}}}^2}[k]\big{\}} \nonumber \\ &=\begin{cases} \frac{\sigma_{\rm{x}}^2}{2}, & 0 \le k \le n_{\rm{x}}-1 \\ 0, & n_{\rm{x}} \le k \le n_{\rm{s}}-1 \end{cases} \end{align} \noindent and $\mathbb{E}\{h_{\rm{I}}[k] h_{\rm{I}}[r]\}=\mathbb{E}\{h_{\rm{Q}}[k] h_{\rm{Q}}[r]\}=\frac{\sigma_{{\rm{h}}_k}^2}{2}\delta [k-r]$ into Equation \eqref{eq: expa conv 2}, one can derive \begin{align}\label{eq: var siso} \sigma^2_k &\triangleq \mathbb{E}\big{\{} \big( v^{(n)}_{{\rm{I}}}[k] \big)^2\big|{\rm{H}}_0\big{\}} \nonumber \\ &=\begin{cases} \sum^{b}_{r=a} \frac{ \sigma^2_{{\rm{h}}_r} \sigma^2_{{\rm{x}}}}{2}, & 0 \le k \le n_{\rm{x}} + n_{\rm{h}}-2 \\ 0, & n_{\rm{x}} + n_{\rm{h}}-1 \le k \le n_{\rm{s}}-1 ~\,\text{or}~\, n<0, \end{cases} \end{align} where \begin{equation} \label{eq: expa conv 21} (a,b) = \begin{cases} (0,k), & 0 \le k \le n_{\rm{h}}-2\\ (0,n_{\rm{h}}-1), & n_{\rm{h}}-1 \le k \le n_{\rm{x}}-1\\ (k-n_{\rm{x}}+1,n_{\rm{h}}-1), & n_{\rm{x}} \le k \le n_{\rm{x}}+n_{\rm{h}}-2. \end{cases} \end{equation} Using Equations \eqref{eq: var siso} and \eqref{eq: expa conv 21}, one can define the vector of variances of the received samples under hypothesis ${\rm{H}}_0$ as \begin{figure*}[t] \begin{equation} \label{eq: expa conv 2} v^{(n)}_{{\rm{I}}}[k] = \begin{cases} \sum^{k}_{u=0} h_{\rm{I}}[u] s^{(n)}_{{\rm{I}}}[k-u] - h_{\rm{Q}}[u] s^{(n)}_{{\rm{Q}}}[k-u], & 0 \le k \le n_{\rm{h}}-2, \\ \sum^{n_{\rm{h}}-1}_{u=0} h_{\rm{I}}[u] s^{(n)}_{{\rm{I}}}[k-u] - h_{\rm{Q}}[u] s^{(n)}_{{\rm{Q}}}[k-u], & n_{\rm{h}}-1 \le k \le n_{\rm{x}}-1, \\ \sum^{n_{\rm{h}}-1}_{u=k-n_{\rm{x}}+1} h_{\rm{I}}[u] s^{(n)}_{{\rm{I}}}[k-u] - h_{\rm{Q}}[u] s^{(n)}_{{\rm{Q}}}[k-u], & n_{\rm{x}} \le k \le n_{\rm{x}}+n_{\rm{h}}-2,\\ 0, & n_{\rm{x}}+n_{\rm{h}}-1 \le k \le n_{\rm{s}}-1, \end{cases} \end{equation} \\ \noindent\rule{\textwidth}{1pt} \end{figure*}
$\boldsymbol{ \sigma}^2_{{\bf V}_{\rm{I}}|{\rm{H}}_0}\triangleq$ \begin{equation}\label{eq: var matrix H0 siso} \left[\begin{array}{@{}c} \vdots\\ \sigma^2_{{\bf V}_{\rm{I}}[-2]|{\rm{H}}_0} \\ \sigma^2_{{\bf V}_{\rm{I}}[-1]|{\rm{H}}_0} \\ \hdashline \sigma^2_{{\bf V}_{\rm{I}}[0]|{\rm{H}}_0} \\ \sigma^2_{{\bf V}_{\rm{I}}[1]|{\rm{H}}_0} \\ \vdots \\ \sigma^2_{{\bf V}_{\rm{I}}[n_{\rm{s}}-1]|{\rm{H}}_0} \\ \hdashline \sigma^2_{{\bf V}_{\rm{I}}[n_{\rm{s}}]|{\rm{H}}_0} \\ \sigma^2_{{\bf V}_{\rm{I}}[n_{\rm{s}}+1]|{\rm{H}}_0} \\ \vdots \\ \sigma^2_{{\bf V}_{\rm{I}}[2n_{\rm{s}}-1]|{\rm{H}}_0} \\ \hdashline \vdots \end{array}\right] \stackrel{(a)}{=} \left[\begin{array}{@{}c@{}} \vdots\\ 0 \\ 0 \\ \hdashline \sigma^2_{0} \\ \sigma^2_{1} \\ \vdots \\ \sigma^2_{n_{\rm{s}}-1} \\ \hdashline \sigma^2_{0} \\ \sigma^2_{1} \\ \vdots \\ \sigma^2_{n_{\rm{s}}-1} \\ \hdashline \vdots \end{array}\right], \end{equation} \noindent where $(a)$ follows from \eqref{eq: var siso} and the fact that the \ac{ofdm} blocks are \ac{iid}, so that the variance profile repeats with period $n_{\rm{s}}$ for $n \ge 0$, and where $\boldsymbol{ \sigma}^2_{{\bf V}_{\rm{I}}|{\rm{H}}_0}[k]= \sigma^2_{{\bf V}_{\rm{I}}[k]|{\rm{H}}_0}$ corresponds to the variance of the $k$-th received sample under ${\rm{H}}_0$, i.e. of $Y^{(n)}_{{\rm{I}}}[k]$. Now, note that a \ac{to} applied to a window of $m$ received samples results in only a shift of the vector of variances given in \eqref{eq: var matrix H0 siso}, i.e. \begin{equation} \label{eq: var Hd siso} \boldsymbol{ \sigma}^2_{{\bf V}_{\rm{I}}|{\rm{H}}_d} = \boldsymbol{ \sigma}^2_{{\bf V}_{\rm{I}}|{\rm{H}}_0} (d:d+m-1), \end{equation} where we denote a shortened version of an arbitrary unlimited-length vector ${\bf a}= [\cdots~a[-2]~a[-1]~a[0]~a[1]~a[2]~ \cdots ]^\textrm{T}$ by ${\bf a}(r:k),~ r\le k$, defined as \begin{equation} \label{eq: def trun i j} {\bf a}(r:k) \triangleq \big[a[r]~a[r+1]~ \cdots~ a[k]\big]^\textrm{T}. \end{equation} Finally, using Equations \eqref{eq: var Hd siso} and \eqref{eq: mean siso}, we have \begin{align}\label{eq: V siso} V^{(n)}_{{\rm{I}}}[k]\sim \mathcal{N}(0, \boldsymbol{ \sigma}^2_{{\bf V}_{\rm{I}}|{\rm{H}}_d}[k]), \end{align} \noindent where $V^{(n)}_{{\rm{I}}}[k]\sim \mathcal{N}(0, 0)$ means $V^{(n)}_{{\rm{I}}}[k]=0$. A similar equation holds for $V^{(n)}_{{\rm{Q}}}[k]$. A small numerical sketch of this variance profile and its shift is given below.
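The variance profile \eqref{eq: var siso} and its shift under ${\rm{H}}_d$ in \eqref{eq: var Hd siso} can be computed directly; a minimal sketch, with \texttt{pdp[r]} standing for $\sigma^2_{{\rm{h}}_r}$ and illustrative arguments:
\begin{verbatim}
import numpy as np

def variance_profile(pdp, n_x, n_z, sigma_x2=1.0):
    # Per-sample variance of v_I over one block under H_0; the profile is
    # zero on the tail where only the zero-padding arrives.
    pdp = np.asarray(pdp)
    n_s, n_h = n_x + n_z, len(pdp)
    var = np.zeros(n_s)
    for k in range(n_x + n_h - 1):             # nonzero support: 0..n_x+n_h-2
        a, b = max(0, k - n_x + 1), min(k, n_h - 1)
        var[k] = pdp[a:b + 1].sum() * sigma_x2 / 2.0
    return var

def window_under_Hd(var_block, d, m):
    # sigma^2_{V_I|H_d}: the H_0 profile, periodic for nonnegative actual
    # indices and shifted by d over an m-sample window; indices before the
    # first block (receiver started early) carry zero signal variance.
    var_block = np.asarray(var_block)
    idx = np.arange(d, d + m)
    return np.where(idx >= 0, var_block[idx % len(var_block)], 0.0)
\end{verbatim}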
Now that we have derived the variance of $V^{(n)}_{{\rm{I}}}[k]$ under hypothesis ${\rm{H}}_d$, i.e. Equation \eqref{eq: var Hd siso}, we are ready to derive the \ac{pdf} of the in-phase component of a received sample $Y^{(n)}_{{\rm{I}}}[k]$ under ${\rm{H}}_d$. Since $W^{(n)}[k]=W^{(n)}_{{\rm{I}}}[k]+j W^{(n)}_{{\rm{Q}}}[k]$, from \eqref{eq: noise pdf} we have \begin{equation}\label{eq: noise in-phase siso} f_{W^{(n)}_{\rm{I}}[k]}(w)= \sum^{L-1}_{l=0} p_l \mathcal{N}\Big{(}w:0, \frac{\sigma_{{\rm{w}}_l}^2}{2}\Big{)}. \end{equation} Finally, by substituting \eqref{eq: V siso} and \eqref{eq: noise in-phase siso} into \eqref{eq: convol siso}, we can write \begin{equation} \label{eq: pdf inphase siso} \begin{split} &f_{Y_{\rm{I}}[k]} (y|{\rm{H}}_d) = \sum^{L-1}_{l=0} \frac{p_l}{\sqrt{2\pi \frac{\sigma_{{\rm{w}}_l}^2}{2}}} \frac{1}{\sqrt{2\pi \boldsymbol{ \sigma}^2_{{\bf V}_{\rm{I}}|{\rm{H}}_d}[k]}} \times \\ &\int^{\infty}_{-\infty} \exp\bigg\{-\frac{1}{2\frac{\sigma_{{\rm{w}}_l}^2}{2}} \big(y - v\big)^2 \bigg\} \exp\Big{(}-\frac{v^2}{2\boldsymbol{ \sigma}^2_{{\bf V}_{\rm{I}}|{\rm{H}}_d}[k]}\Big{)} dv\\ &= \sum^{L-1}_{l=0} \frac{p_l \exp\big{(}-\frac{y^2}{\sigma_{{\rm{w}}_l}^2}\big{)}}{\sqrt{2\pi \frac{\sigma_{{\rm{w}}_l}^2}{2}}} \frac{1}{\sqrt{2\pi \boldsymbol{ \sigma}^2_{{\bf V}_{\rm{I}}|{\rm{H}}_d}[k]}} \\ &\int^{\infty}_{-\infty} \exp\bigg\{- \Big{(} \frac{1}{\sigma_{{\rm{w}}_l}^2} + \frac{1}{2\boldsymbol{ \sigma}^2_{{\bf V}_{\rm{I}}|{\rm{H}}_d}[k]} \Big{)} v^2 + \Big{(}\frac{2y}{\sigma_{{\rm{w}}_l}^2}\Big{)} v \bigg\} dv\\ &\stackrel{(a)}{=} \sum^{L-1}_{l=0} \frac{p_l}{\sqrt{2\pi (\boldsymbol{ \sigma}^2_{{\bf V}_{\rm{I}}|{\rm{H}}_d}[k] + \frac{\sigma_{{\rm{w}}_l}^2}{2})}} \exp\bigg{(}-\frac{y^2}{2\Big{(}\boldsymbol{ \sigma}^2_{{\bf V}_{\rm{I}}|{\rm{H}}_d}[k] + \frac{\sigma_{{\rm{w}}_l}^2}{2}\Big{)}}\bigg{)}, \end{split} \end{equation} \noindent where $(a)$ follows from evaluating the Gaussian integral. Similar equations can be derived for $f_{Y_{\rm{Q}}[k]} (y|{\rm{H}}_d)$. Before proceeding to the last step and deriving the \ac{pdf} of $Y^{(n)}[k]$, we use the next definitions to simplify the notation. \textit{Definition 1}: The mathematical function $B\big(z,t \big): \mathds{R} \times \mathds{R} \longmapsto \mathds{R}$ is defined as \begin{equation} B\big(z,t \big)= \sum^{L-1}_{l=0} \frac{p_l}{\sqrt{2\pi (t + \frac{\sigma^2_{{\rm{w}}_l}}{2})}} \exp\Big{(}-\frac{z^2}{2(t + \frac{\sigma^2_{{\rm{w}}_l}}{2})}\Big{)}, \end{equation} \noindent where $t \ge 0$. \textit{Definition 2}: The mathematical function $\mathcal{P}\big(c, \kappa \big) : \mathds{C} \times \mathds{R} \longmapsto \mathds{R}$ is called the $\mathcal{P}$-function, and is defined as \begin{equation} \label{eq: P-function def} \mathcal{P}\big(c, \kappa \big)= B\big(c_1,\kappa \big) B\big(c_2,\kappa \big), \end{equation} \noindent where $c_1= {\rm{Re}}\{c\}$, $c_2= {\rm{Im}}\{c\}$, and $\kappa \ge 0$ is called the shape parameter. Using \textit{Definitions 1} and \textit{2} and Equation \eqref{eq: pdf inphase siso}, one can show that the \ac{pdf}s of the received samples under ${\rm{H}}_d$, i.e. $f_{Y[k]} (y|{\rm{H}}_d)$, are $\mathcal{P}$-functions with different shape parameters, \begin{equation} \label{eq: repeat pat n>0} \begin{split} f_{Y[k]} (y|{\rm{H}}_d) &= f_{Y_{\rm{I}}[k]} (y_{\rm{I}}|{\rm{H}}_d) f_{Y_{\rm{Q}}[k]} (y_{\rm{Q}}|{\rm{H}}_d)\\ &= B\big(y_{\rm{I}}, \boldsymbol{ \sigma}^2_{{\bf V}_{\rm{I}}|{\rm{H}}_d}[k] \big) B\big(y_{\rm{Q}}, \boldsymbol{ \sigma}^2_{{\bf V}_{\rm{I}}|{\rm{H}}_d}[k] \big) \\ &= \mathcal{P}\big(y,\boldsymbol{ \sigma}^2_{{\bf V}_{\rm{I}}|{\rm{H}}_d}[k] \big), \end{split} \end{equation} \noindent where $y_{\rm{I}}= {\rm{Re}}\{y\}$ and $y_{\rm{Q}}= {\rm{Im}}\{y\}$. A small numerical sketch of these functions is given below.
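A minimal numerical sketch of \textit{Definitions 1} and \textit{2}, evaluated in the log domain to avoid underflow (an illustration, not an optimized implementation):
\begin{verbatim}
import numpy as np

def B(z, t, p, var_w):
    # B(z, t) of Definition 1: a Gaussian mixture whose l-th component has
    # variance t + var_w[l]/2, evaluated at the real scalar z.
    s2 = t + np.asarray(var_w) / 2.0
    return float(np.sum(np.asarray(p) / np.sqrt(2 * np.pi * s2)
                        * np.exp(-z ** 2 / (2 * s2))))

def log_P(c, kappa, p, var_w):
    # Logarithm of the P-function of Definition 2 at the complex sample c.
    return np.log(B(c.real, kappa, p, var_w)) + np.log(B(c.imag, kappa, p, var_w))
\end{verbatim}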
Finally, for any arbitrary number of received samples $m \ge 1$, i.e. ${\bf y}= \big[~ y[0]~y[1]~\cdots~y[m-1]~\big]^{\rm{T}}$, using \eqref{eq: repeat pat n>0}, we have \begin{equation} \label{eq: ml siso} \begin{split} \hat{\boldsymbol{d}}^{\text{opt}} &= \operatorname*{argmax}_{d \in \mathcal{D} }~ \prod^{m-1}_{k=0} f_{Y[k]}({\bf y}[k]| {\rm{H}}_{d})\\ &= \operatorname*{argmax}_{d \in \mathcal{D} }~ \prod^{m-1}_{k=0} \mathcal{P}\big({\bf y}[k], \boldsymbol{ \sigma}^2_{{\bf V}_{\rm{I}}|{\rm{H}}_d}[k] \big). \end{split} \end{equation} The next theorem summarizes the discussion in this section. \begin{theorem} In a doubly selective channel and under the impulsive noise defined in \eqref{eq: noise pdf}, the approximate \ac{nda} \ac{ml} \ac{to} estimator for \ac{zp}-\ac{ofdm} systems is given as \begin{equation} \label{eq: theo ml siso} \hat{\boldsymbol{d}}^{\text{opt}} = \operatorname*{argmax}_{d \in \mathcal{D} }~ \prod^{m-1}_{k=0} \mathcal{P}\big({\bf y}[k], \boldsymbol{ \sigma}^2_{{\bf V}_{\rm{I}}|{\rm{H}}_d}[k] \big), \end{equation} where $\mathcal{P}\big(c, \kappa \big)$ is defined in \eqref{eq: P-function def} and $\boldsymbol{ \sigma}^2_{{\bf V}_{\rm{I}}|{\rm{H}}_d}[k]$ is given in \eqref{eq: var Hd siso}. \end{theorem} \subsection{Higher Order Statistical Analysis} In order to verify that the Gaussian approximation of $V^{(n)}_{{{\rm{I}}}}[k]$, or $V^{(n)}_{{{\rm{Q}}}}[k]$, is largely valid, we have plotted the true \ac{pdf} of $Y^{(n)}_{{{\rm{I}}}}[k]$ against the approximate \ac{pdf} given in \eqref{eq: pdf inphase siso} in Fig. \ref{fig: Empirical vs Analytical}, for $k=1$, chosen from the first range given in \eqref{eq: expa conv 2}, and for $k=150$, chosen from the second range in \eqref{eq: expa conv 2}. Moreover, the corresponding probabilistic measures, namely the mean, variance, skewness, and kurtosis of the true and approximate \ac{pdf}s, are given in Table \ref{table: pdfs metrics}. Skewness measures the asymmetry of a probability distribution around its mean, and is defined as \begin{align} sk \triangleq \frac{\mathbb{E}\{(Y-\mu)^3\}} {\big(\mathbb{E}\{(Y-\mu)^2\}\big)^{3/2}}. \end{align} Kurtosis is a measure of the sharpness of a \ac{pdf}, and is defined as \begin{align} ku \triangleq \frac{\mathbb{E}\{(Y-\mu)^4\}} {\big(\mathbb{E}\{(Y-\mu)^2\} \big)^2}. \end{align} These moments can be estimated directly from Monte Carlo draws, as sketched below.
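A brief sketch of the sample-moment estimates behind a comparison like Table \ref{table: pdfs metrics}, assuming the array \texttt{y} holds Monte Carlo realizations of $Y^{(n)}_{\rm{I}}[k]$:
\begin{verbatim}
import numpy as np

def sample_moments(y):
    # Mean, variance, skewness, and kurtosis of a 1-D sample array;
    # a Gaussian gives skewness 0 and kurtosis 3.
    mu = y.mean()
    m2 = ((y - mu) ** 2).mean()
    m3 = ((y - mu) ** 3).mean()
    m4 = ((y - mu) ** 4).mean()
    return mu, m2, m3 / m2 ** 1.5, m4 / m2 ** 2
\end{verbatim}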
The true \ac{pdf} is obtained through $10^6$ Monte Carlo simulations with $n_{\rm{x}}=512$, $n_{\rm{z}}=20$, $n_{\rm{h}}=10$, $f_{\rm{s}}=10^6$, and a modulation order of $M=128$. Also, an exponential power delay profile is considered, given as \begin{align} \mathbb{E}\{|h[k]|^2\}= \sigma^2_{{\rm{h}}_k}=\alpha \exp\big{(}-\beta k\big{)}, \, k=0,1,\cdots, 9, \end{align} \noindent where $\alpha=0.396$ is a normalizing factor that guarantees that the energies of the channel taps sum to one, and $\beta=0.5$. For the sake of simplicity in the illustrations, we set $p_0=1$ and $p_l=0,~ \forall l\neq 0$, which corresponds to Gaussian noise. As can be seen in Fig. \ref{fig: Empirical vs Analytical}(a), for $k=1$ the approximate \ac{pdf} deviates from the true \ac{pdf} in terms of kurtosis, i.e. the sharpness of the \ac{pdf}, while almost matching the other metrics, such as the skewness and variance. This deviation holds for the samples in the first and third ranges given in \eqref{eq: expa conv 2} for the given setup, i.e. $0 \le k \le n_{\rm{h}}-2$ and $n_{\rm{x}} \le k \le n_{\rm{x}}+n_{\rm{h}}-2$. However, as seen in Fig. \ref{fig: Empirical vs Analytical}(b), for $k=150$ the approximate \ac{pdf} reasonably matches the true \ac{pdf} in terms of all the probabilistic measures given in Table \ref{table: pdfs metrics}. This approximate matching holds for the second and fourth ranges given in \eqref{eq: expa conv 2} for the given setup, i.e. $n_{\rm{h}}-1 \le k \le n_{\rm{x}}-1$ and $n_{\rm{x}}+n_{\rm{h}}-1 \le k \le n_{\rm{s}}-1$. These ranges contain a fraction $\frac{n_{\rm{x}} +n_{\rm{z}}-2n_{\rm{h}}+2 }{n_{\rm{x}}+n_{\rm{z}} }$ of the samples in a received \ac{ofdm} vector, which here amounts to more than $96\%$ of the samples. That is, the approximate \ac{pdf} reasonably matches the true \ac{pdf} for more than $96\%$ of the samples in a received \ac{ofdm} vector, while it slightly differs for less than $4\%$ of the received samples. This implies that the performance of the final estimator should not degrade considerably, while the final estimator has a significantly lower complexity than that of \cite{koosha2020}, owing to the simplicity of the approximate \ac{pdf}. \begin{figure} \centering \subfloat[]{\includegraphics[width=3.1in]{pdf_indx_1_snr_15.eps}} \newline \subfloat[]{\includegraphics[width=3.1in]{pdf_indx_150_snr_15.eps}} \caption{Comparison between the empirical and analytical \ac{pdf}s of the received samples for $k=1$ and $k=150$ when \ac{snr}$=15$ dB.} \label{fig: Empirical vs Analytical} \end{figure} \begin{table}[t!] \centering \caption{Various probability measures for the empirical \ac{pdf} versus the analytical \ac{pdf}.} \label{table: pdfs metrics} \resizebox{0.5\textwidth}{!}{ \begin{tabularx}{0.42\textwidth}{@{ }l*{5}{C}@{ }} \toprule & \multicolumn{2}{c}{$k = 1$ } & \phantom{abc}& \multicolumn{2}{c}{$k = 150$} \\\cmidrule{2-3} \cmidrule{5-6} & Empirical & Analytical && Empirical & Analytical \\ \midrule Mean & -0.00022 & 0 & & 0.00041 & 0 \\ Variance & 0.1231 & 0.1232 & & 0.5021 & 0.5023 \\ Skewness & -0.0048 & 0 & & 0.0090 & 0 \\ Kurtosis & 4.4519 & 3 & & 3.3000 & 3 \\ \bottomrule \end{tabularx} } \end{table} \subsection{Case Study: Gaussian Noise when $d \ge 0$} Assume the receiver starts receiving samples late, i.e. $d \ge 0$. Also, assume the receiver uses $N$ vectors of $n_{\rm{s}}$ samples in order to estimate the \ac{to} in an environment with Gaussian noise, i.e. $L=1$ with noise variance $\sigma_{\rm{w}}^2$. Taking the logarithm of \eqref{eq: ml siso}, and after some mathematical manipulation, we have \begin{equation} \label{eq: ml siso wed} \hat{\boldsymbol{d}}^{\text{opt}} = \operatorname*{argmin}_{d \in \mathcal{D} }~ \sum^{Nn_{\rm{s}}-1}_{k=0} \frac{|{\bf y}[k]|^2}{\boldsymbol{ \sigma}^2_{{\bf V}_{\rm{I}}|{\rm{H}}_d}[k] + \frac{\sigma_{{\rm{w}}}^2}{2}}. \end{equation} Equation \eqref{eq: ml siso wed} shows that the final estimator in this setting turns into a \ac{wed}. Note that the \ac{wed} assigns larger weights to the samples carrying no information, i.e. the noise-only samples, while dedicating smaller weights to those carrying information. \subsection{Energy Detector} Inspired by the \ac{wed} derived in \eqref{eq: ml siso wed}, we introduce a sub-optimal, lower-complexity estimator, the \ac{ed}, which corresponds to an extreme case of the \ac{wed}: we assign a (relatively very large) weight of one to the noise-only samples, and a weight of zero to the samples carrying information.
Hence, \begin{equation} \label{eq: ml siso ed} \hat{\boldsymbol{d}}^{\text{opt}} = \operatorname*{argmin}_{d \in \mathcal{D} }~ \sum^{N-1}_{r=0} \sum^{\phi_2(d)}_{k=\phi_1(d) } |{\bf y}[k]|^2, \end{equation} where \begin{align} \phi_1(d) &\triangleq n_{\rm{x}}+ n_{\rm{h}}-1 + r n_{\rm{s}}-d, \\ \phi_2(d) &\triangleq (r+1)n_{\rm{s}}-1-d. \end{align} In the next subsection, we extend the \ac{ml} \ac{to} estimator given in Equation \eqref{eq: ml siso} for \ac{siso} systems to an \ac{ml} \ac{to} estimator for \ac{mimo} systems. \subsection{Maximum Likelihood Estimation for Multiple-Input Multiple-Output Systems} \label{sec: mimo} In this subsection, we derive the \ac{ml} \ac{to} estimator for a \ac{mimo}-\ac{ofdm} system under a frequency-selective channel. We use the results obtained for the \ac{siso} scenario and extend the notation accordingly. The next theorem gives the \ac{ml} \ac{to} estimator for \ac{mimo} systems. \begin{theorem} \label{theo: pdf y mimo} For a \ac{zp} \ac{mimo}-\ac{ofdm} wireless communication system under the Class A impulsive noise defined in \eqref{eq: noise pdf}, the approximate \ac{ml} \ac{to} estimator is given as \begin{equation} \label{eq: ml mimo} \hat{\boldsymbol{d}}^{\text{opt}} = \operatorname*{argmax}_{d \in \mathcal{D} }~ \prod^{m_{\rm{r}}}_{j=1} \prod^{m-1}_{k=0} \mathcal{P}\big({\bf y}_j[k], \boldsymbol{\sigma}^2_{{\bf V}_{\rm{I}}|{\rm{H}}_d}[k] \big), \end{equation} where $\mathcal{P}\big(c, \kappa \big)$ is defined in \eqref{eq: P-function def} and $\boldsymbol{ \sigma}^2_{{\bf V}_{\rm{I}}|{\rm{H}}_d}[k]$ is given in \eqref{eq: var Hd siso}. \end{theorem} \begin{proof} Using Equations \eqref{Sys Model: matrix form conv 2}, \eqref{matrix H} and \eqref{sys mod: s y w} for a \ac{mimo}-\ac{ofdm} system, we have \begin{equation} \label{eq: proof mimo} {\bf y}_{j}^{(n)}= \sum_{i=1}^{{\rm{m_t}}} {\bf {\rm{H}}}^{(n)}_{ji} {\bf s}^{(n)}_i + {\bf w}^{(n)}_j,~ \forall j \in \{1,2,\cdots,{\rm{m_r}}\}. \end{equation} \noindent Also, we have \begin{align} \mathbb{E}\Big{\{}s^{(n)}_i[k] s^{(n)}_p[k']^*\Big{\}}&=\mathbb{E}\Big{\{}x^{(n)}_i(kT_{{\rm{sa}}}) x^{(n)}_p(k'T_{{\rm{sa}}})^*\Big{\}} \nonumber \\ &= \frac{\sigma^2_{\rm{x}}}{m_{\rm{t}}} \delta[k-k'] \delta[i-p],\\ \nonumber &\forall i,p \in \{1,2,\cdots,{\rm{m_t}} \}, \\ \nonumber &\forall k,k' \in \{0, 1, \cdots , {n_{\rm{x}}}-1 \}, \end{align} \noindent since the total transmit power should remain the same for the \ac{siso} and \ac{mimo} systems. Using \eqref{eq: proof mimo} and following the same steps as in the \ac{siso} case, one arrives at \eqref{eq: ml mimo}. \end{proof} A compact sketch of the resulting estimator is given below; the next section then compares the complexity of the proposed approximate \ac{ml} algorithm with that of \cite{koosha2020}.
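The following sketch assembles the pieces above into the search of \eqref{eq: ml mimo}, reusing the illustrative helpers \texttt{window\_under\_Hd} and \texttt{log\_P} from the earlier sketches; it illustrates the computation and is not an optimized implementation:
\begin{verbatim}
import numpy as np

def a_ml_estimate(ys, var_block, p, var_w, D):
    # Approximate ML timing-offset estimate: ys is a list of m-sample
    # streams (one per receive antenna), var_block the H_0 variance profile
    # of one block, and D the candidate integer offsets.
    m = len(ys[0])
    best_d, best_ll = None, -np.inf
    for d in D:
        kappa = window_under_Hd(var_block, d, m)  # sigma^2_{V_I|H_d}[k]
        ll = sum(log_P(y[k], kappa[k], p, var_w)  # products of P-functions
                 for y in ys for k in range(m))   # become sums of logs
        if ll > best_ll:
            best_d, best_ll = d, ll
    return best_d
\end{verbatim}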
\section{Complexity} \label{sec: complexity} The complexity of an algorithm plays a crucial role in its adoption in wireless communication systems. That is, an algorithm should be rather simple yet accurate enough in order to be considered for practical implementation. To this end, the complexity of the proposed approximate \ac{ml} algorithm, referred to as A-\ac{ml}, the complexity of the original \ac{ml} estimator in \cite{koosha2020}, denoted as O-\ac{ml}, and that of the Transition Metric \cite{LeNir2010} are given in Table \ref{table:complexity}. As can be seen, A-\ac{ml} has a significantly lower computational complexity than O-\ac{ml} while exhibiting a negligible performance loss in terms of lock-in probability, as shown in the next section. Also, A-\ac{ml} has the same computational complexity as the Transition Metric while offering a large performance gain in terms of lock-in probability. As seen, the main advantage of A-\ac{ml} is its simplicity, which enables the designer to easily extend it to \ac{mimo} systems. Note that A-\ac{ml} can be implemented in a fully vectorized format, which makes it considerably faster than O-\ac{ml}. One can further reduce the cost of the exhaustive search in \eqref{eq: ml mimo} by using reduced-complexity search algorithms such as the Golden Section Search, which has a significantly lower complexity of $\mathcal{O}(\log(|\mathcal{D}|))$ compared to that of the exhaustive search, $\mathcal{O}(|\mathcal{D}|)$. \begin{table}[t!] \centering \caption{Complexity of the proposed algorithms.} \resizebox{0.39\textwidth}{!}{ \begin{tabularx}{0.3\textwidth}{cc} \toprule {\bf Estimator} & {\bf Computational Complexity } \\ \hline A-ML & $\mathcal{O}(Nn_{\rm{s}})$ \\ O-ML & $\mathcal{O}(Nn_{\rm{s}}^3)$ \\ Transition Metric & $\mathcal{O}(Nn_{\rm{s}})$ \\ \bottomrule \end{tabularx} } \label{table:complexity} \end{table} \section{Simulations} \label{sec: simul} In this section, we compare the proposed algorithm A-\ac{ml} with O-\ac{ml} given in \cite{koosha2020} and with the Transition Metric \cite{LeNir2010}. \subsection{Simulation Setup} Unless otherwise mentioned, the following setup is considered for the simulations. A \ac{zp}-\ac{ofdm} system with 128-QAM modulation in a frequency-selective Rayleigh fading channel is considered, with a data length of $n_{\rm{x}}=512$ samples and a \ac{zp} guard interval of length $n_{\rm{z}}=20$. The number of received \ac{ofdm} vectors used for estimation is set to $N=10$. The sampling rate is $f_{\rm{s}}=10^6$. The exponential channel delay profile parameters are $\alpha=1$ and $\beta=0.05$, where $n_{\rm{h}}=10$. A Jakes model for the Doppler spectrum with a maximum Doppler shift of $f_{\rm{D}}=5$ Hz is considered. A two-component impulsive noise with parameters $p_0=0.99$, $p_1=0.01$, $\sigma_{{\rm{w}}_0}^2=1$, and $\sigma_{{\rm{w}}_1}^2=100$ is set. The \ac{snr} in dB is defined as $\gamma\triangleq 10 \log(\frac{\sigma_{ \rm{x}}^2}{\sigma_{\rm{w}}^2})$. Simulation results are obtained through $10^4$ Monte Carlo realizations, and the delay is uniformly chosen from the range $d \in [-30 , 30]$. \subsection{Simulation Results} The lock-in probabilities of A-\ac{ml}, O-\ac{ml} and the Transition Metric for different values of \ac{snr} with $m_{\rm{t}}=m_{\rm{r}}=1$ are depicted in Fig. \ref{fig: snr}. As shown, there is a negligible performance gap between A-\ac{ml} and O-\ac{ml}, while A-\ac{ml} possesses a much lower computational complexity. Also, A-\ac{ml} significantly outperforms the Transition Metric. \begin{figure} \centering \includegraphics[height=2.835in]{comparison.eps} \caption{Lock-in probability of A-ML, O-ML and Transition Metric for different values of \ac{snr}.} \label{fig: snr} \end{figure} The performance of A-\ac{ml} versus the number of observation vectors used for estimation is shown in Fig. \ref{fig: obser}. As seen, the performance of A-\ac{ml} improves as the number of observation vectors increases. This figure shows that, with a reasonable buffer capacity or an increase in the number of antennas, a receiver using A-\ac{ml} is able to achieve a high lock-in probability, e.g. more than 0.9. \begin{figure} \centering \includegraphics[height=2.835in]{fig4_obserLen.eps} \caption{Lock-in probability of A-\ac{ml} for different numbers of observation vectors used for estimation.} \label{fig: obser} \end{figure}
The performance of the \ac{wed} and the \ac{ed} for positive delays, i.e. when the receiver starts receiving samples late, and for various \ac{snr} values is depicted in Fig. \ref{fig: wed}. This figure shows that the \ac{ed} is able to achieve a high lock-in probability at higher \ac{snr}s, while a considerable performance gap is evident at lower \ac{snr}s. This performance gap originates from the simplifying zero-and-one weight assignment in the \ac{ed}, which is also what gives the \ac{ed} its significantly lower computational complexity compared to the \ac{wed}. \begin{figure} \centering \includegraphics[height=2.835in]{wed_ed.eps} \caption{Lock-in probability of the \ac{wed} and \ac{ed} for different values of \ac{snr}.} \label{fig: wed} \end{figure} Fig. \ref{fig: mimo} shows the performance of A-\ac{ml} for \ac{mimo} systems with different values of $m_{\rm{t}}$ and $m_{\rm{r}}$. As seen, the performance of A-\ac{ml} improves significantly, owing to the fact that the number of received samples used for estimation grows with the number of transmit or receive antennas. Increasing the number of observations improves the accuracy of \ac{ml} estimators; hence, the performance of A-\ac{ml} improves. \begin{figure} \centering \includegraphics[height=2.835in]{fig_mimo_comp.eps} \caption{Lock-in probability of A-\ac{ml} for various numbers of antennas.} \label{fig: mimo} \end{figure} The effect of impulsive noise on the performance of A-\ac{ml} is shown in Fig. \ref{fig: noise}. Here, we consider a two-component Gaussian mixture with $\sigma_{{\rm{w}}_0}^2=1$ and $\sigma_{{\rm{w}}_1}^2=100$, and we vary the ratio $p_0/p_1$. As seen, when the low-variance component, i.e. the one with $\sigma_{{\rm{w}}_0}^2=1$, becomes dominant, i.e. as $p_0$ increases, the performance improves. This is because the uncertainty of A-\ac{ml} arising from Eq. \eqref{eq: pdf inphase siso} decreases as $p_0$ increases. \begin{figure} \centering \includegraphics[height=2.835in]{noiseM.eps} \caption{Lock-in probability of A-\ac{ml} for different values of $p_0/p_1$.} \label{fig: noise} \end{figure} Since A-\ac{ml} employs the power delay profile for synchronization, we study the sensitivity of A-\ac{ml} to power delay profile estimation errors in Fig. \ref{fig: sens}. In order to generate power delay profile errors, we use \begin{equation} \sigma^2_{{\rm{h}}^{\rm{new}}_k} = (1+ A_k \alpha ) \sigma^2_{{\rm{h}}_k}, \, k=0,1,\cdots, n_{\rm{h}}-1, \end{equation} where $A_k$ is uniformly (randomly) chosen for each tap from the set $\{ -1, 1\}$. Then $\sigma^2_{{\rm{h}}^{\rm{new}}_k}$, instead of $\sigma^2_{{\rm{h}}_k}$, is fed to A-\ac{ml} in order to estimate the \ac{to}. The lock-in probability of A-\ac{ml} for different values of $\alpha$ is shown in Fig. \ref{fig: sens}. Although the performance of A-\ac{ml} degrades as the power delay profile error increases, a large error is required to cause a $50\%$ loss in lock-in probability. Moreover, note that an error of $\alpha=0.7$ results in less than a four percent performance loss, which implies that A-\ac{ml} is fairly insensitive to power delay profile estimation errors. \begin{figure} \centering \includegraphics[height=2.835in]{sens.eps} \caption{Lock-in probability of A-\ac{ml} for different values of the power delay profile estimation error.} \label{fig: sens} \end{figure} \section{Conclusion} \label{sec: conclu} \ac{zp}-\ac{ofdm} systems possess many advantages compared to \ac{cp}-\ac{ofdm} systems.
However, time synchronization in \ac{zp}-\ac{ofdm} systems is significantly more challenging due to the lack of a \ac{cp}. In this paper, we proposed an approximate yet accurate low-complexity \ac{nda} \ac{ml} \ac{to} estimator, A-\ac{ml}, for \ac{zp} \ac{mimo}-\ac{ofdm} systems in highly selective channels. We showed that A-\ac{ml} has a significantly lower complexity than the estimator proposed in \cite{koosha2020}, i.e. O-\ac{ml}, while exhibiting a negligible performance gap in terms of lock-in probability. This makes A-\ac{ml}, unlike O-\ac{ml}, suitable for practical implementations. Moreover, it was shown that A-\ac{ml} dramatically outperforms the current state-of-the-art \ac{nda} \ac{to} estimator for \ac{zp}-\ac{ofdm}, referred to as the Transition Metric. \IEEEpeerreviewmaketitle \bibliographystyle{IEEEtran}